Before We Get Started
In a previous post (https://marshallgjones.substack.com/p/how-ai-works) I tried to explain how AI works. But I thought it would be useful to help you better understand what many people see as the promise of Artificial Intelligence. In the information ecosystem about AI, you will find people who are convinced AI will solve most of society’s problems and people who are convinced that AI will ruin our way of life if not our very existence. It is likely that the truth lies somewhere in the middle. In this post I’m going to introduce you to some ideas and perspectives in a hopefully neutral way. I don’t want to convince you that you should agree or disagree with any of them. I just want you to know that they exist. As always, keep in mind that I am simplifying a lot of complex ideas here. At the end of this post you’ll find links to some of my sources and to some other reading. And if you’re not already a subscriber you should be. So sign up today!
Why AI?
AI is more of an idea than a specific thing. There are multiple kinds of AI. You are familiar with Generative AI, the kind we use in tools like ChatGPT and Google’s Gemini. But there are many other types, like neural networks, which are inspired by the way the human brain works and can learn patterns and make independent decisions. At its core, though, the idea behind AI is to make a really smart computer.
You may have heard of Alan Turing and the Turing Test. The Turing Test is a way to discuss how “intelligent” a computer is. In the early days of computer science in the 1950s, the question of how “intelligent” a computer could be was hotly debated. Computer scientists gathered not only to try to make computers better, but to speculate on how smart a computer might become. But creating smart machines was not a new idea that started with the computer. The idea of “automatons,” or devices that work without human intervention, has been around for thousands of years. Leonardo da Vinci created one in 1495. Philosophers and inventors had been musing about and building prototypes of automatons long before the computer. So in some ways, the idea of a human-like machine has been with us for as long as we’ve been thinking about what it means to be human.
John McCarthy is credited with coining the term Artificial Intelligence in 1955, in the proposal for a workshop at Dartmouth College that brought together some of the leading minds in computing. The basic idea behind Artificial Intelligence was that we might be able to build machines that were as smart as people. The dream of many AI researchers and scholars is to create a system that combines the best traits of a person with the best traits of a computer. And AI researchers have been working on that vision for a very long time. But, of course, in the last few years we’ve seen some pretty compelling advances. As Generative AI (Gen AI) tools like ChatGPT, Claude, Gemini, and the recent newcomer DeepSeek continue to improve, there are people who are very hopeful and excited about these advances in AI. And there are people who are deeply concerned about them. In this post, I want to focus on the hopeful and excited perspective. I’ll talk about a dystopian future later.
Stages of AI
While the world of AI is complex and technical, for the moment it may be helpful to think of the development of AI in three stages.
Image: Three types of AI.
https://www.formica.ai/blog/which-ai-is-learn-by-its-own
Stage 1, Machine Learning, is where we are now. Every AI system we have today, even if it looks really cool or scary to you, is Narrow AI. It is trained on data and designed to do a specific thing, like predictive search in Google or speech-to-text on your phone. Or like the AI system for identifying breeds of dogs I wrote about in the “How AI Works” post. Sure, AI tools like ChatGPT, Gemini, and DeepSeek can do many things, but from a technical perspective, each one is built by some human somewhere and is reliant on massive amounts of data and an algorithm. It is created and ultimately controlled by humans.
Stage 2, Machine Intelligence, is the great promise of AI. This means that the computer is as smart as a human and can work independently of humans on complex problems, taking in new data and modifying itself appropriately. You’ll see this referred to as Artificial General Intelligence, or AGI, a lot in the press and in AI literature. AGI is the goal of many AI developers. If we can reach AGI, they claim, we can use it to work on really big problems and improve society. Sam Altman of OpenAI, makers of ChatGPT, has been telling us for some time that we are very close to AGI. We may be. OpenAI continues to raise a lot of money based on the promise of AGI. Others, such as podcaster and tech critic Ed Zitron, say that reaching AGI with our current tools is unlikely.
Stage 3, Machine Consciousness, is what everybody is terrified of. That is when the computer is smarter than we are and could become self-perpetuating in a dangerous way. Unlike AGI, an Artificial Super Intelligence could teach itself to look out for its own best interests and refuse to be controlled by a human. In this scenario, say many who are concerned about AI, the AI system could conclude that it is in the best interest of its own survival to rid the world of its closest rivals. And its closest rivals would be us. Humans. This is the stuff of the Terminator and Matrix movies. It is when we are forced to welcome our new robot overlords.
Using AI to Solve Really Big Problems
Humans are a pretty remarkable species. We are great problem solvers. We are creative. We can accumulate knowledge and create wisdom from it. We can be kind and empathetic. But we also get tired. And we get bored doing repetitive tasks. Computers aren’t great problem solvers and they aren't creative. But they never get bored or tired of doing the same things over and over again. The great promise of AI is that if we can combine the creativity, wisdom, and empathy of a person with the patience and work ethic of a computer then we can use that system to solve really big problems. What kinds of problems? Well, honestly, all of them. Personalized tutors for all learners. Cheaper and better health care. Climate change. Endless, cheap, renewable energy. Better batteries. Eradicating diseases. Generating wealth for all. How to never lose your phone or your car keys.
The promise of AI is that the computer could harness the creativity and problem-solving skills of a human but also be really, really, really fast. And productive. In medicine, this might mean an AI system could analyze an X-ray or CT scan and find abnormalities earlier than a human analyzing the same data. This early detection could mean treating diseases sooner, thus saving lives. And the AI system may be able to provide that care.
It isn’t too far-fetched an idea. AI is already helping to solve big problems. For example, AI-powered self-driving labs (SDLs) are speeding up the pace of scientific discovery. A self-driving lab is an autonomous scientific research system that uses AI, robotics, and automation to conduct experiments, analyze the results, and design and run future tests without any humans involved. These labs can rapidly discover new materials, optimize chemical reactions, and accelerate scientific breakthroughs by continuously learning from data and adapting experiments in real time.
Image: Diagram of a self-driving lab for discovery of materials and molecules.
https://www.sciencedirect.com/science/article/pii/S2451929421004666
Self-driving labs have discovered new materials for solar cells, new designs for better, longer-lasting batteries, skin-like electronics, and more. Think of a self-driving lab as a highly efficient, AI-powered scientist that works 24/7 to speed up discoveries in fields like medicine, materials science, and energy. Self-driving labs are not AGI, but they are an example of how we may combine the power of AI and humans to solve big problems. But imagine a self-driving lab that didn’t need a human to manage it, one that just worked away every day solving really big problems for humanity. This is the vision and dream of many of the enthusiastic supporters of AI. They are promising us an AI Utopia.
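For readers who like to see an idea spelled out, here is a tiny sketch in Python of the “propose, test, learn, repeat” loop a self-driving lab runs. It is purely illustrative: the function names and the made-up scoring rule are my own placeholders, not code from any real lab, but it shows why a tireless loop like this can grind through thousands of experiments without getting bored.

```python
# A toy illustration of a self-driving lab's closed loop (hypothetical names, not real lab code).
import random

def propose_experiment(best_so_far):
    # The "AI" step: suggest the next recipe to try by nudging the best one found so far.
    return {"temperature": best_so_far["temperature"] + random.uniform(-5, 5),
            "concentration": best_so_far["concentration"] + random.uniform(-0.1, 0.1)}

def run_experiment(recipe):
    # The "robotics" step: a real lab would drive physical equipment here.
    # We just pretend, with a made-up score that peaks at 80 degrees and 1.0 concentration.
    return -((recipe["temperature"] - 80) ** 2) - 100 * ((recipe["concentration"] - 1.0) ** 2)

def main():
    best = {"temperature": 25.0, "concentration": 0.5}
    best_score = run_experiment(best)
    for _ in range(1000):                  # the loop never gets tired or bored
        candidate = propose_experiment(best)
        score = run_experiment(candidate)  # analyze the result...
        if score > best_score:             # ...and keep it only if it is an improvement
            best, best_score = candidate, score
    print("Best recipe found:", best, "with score", round(best_score, 2))

if __name__ == "__main__":
    main()
```

A real self-driving lab replaces the random guessing with much smarter models and the pretend score with actual instruments, but the shape of the loop is the same.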
What is AI Utopia?
People who are predicting an AI Utopia tell us we don’t need to fear Stage 3 Machine Consciousness because we will always be able to control the AI. And AI can offer us a better world. An AI Utopia is a world where computers are doing all of the hard things and humans are freed up to do only the things that we like. Here are some things that AI may be able to help us do better:
AI may be able to tutor our children as individuals and not as part of a larger class where their individual needs may not be met. If every child has an AI tutor, every child can learn and grow at their own pace and to their full potential.
AI systems may take over managing the skies to make air travel and airports safer.
AI medical systems may improve diagnoses and care, making us healthier and happier.
AI systems might monitor our government systems to make them more efficient and less costly.
AI, robots, and more, according to AI enthusiasts like Reid Hoffman and Vinod Khosla (see links below), may offer us a world where we are freed from the drudgery of work.
Even better, AI will be able to do what humans can do, but faster and at volumes and scales of productivity that humans simply can’t match. Yes, this AI may replace human workers, but AI supporters tell us that the increased productivity will actually make our lives better, even with the job losses. This increased productivity will boost Gross Domestic Product (GDP) growth and create massive wealth that can be redistributed through programs like a Universal Basic Income (UBI) for people displaced from work or who choose not to work. As humans have more time available to them, they are free to do what humans do best: think, dream, create, and solve more problems. Reid Hoffman calls this “superagency” and has a book that presents his vision for what happens if AI goes right.
AI supporters see it as a moral imperative that we move forward as quickly as possible to reach a world where AI can be used to its full potential. To be fair, AI supporters do recognize that there will be growing pains and disruptions to our societies as we move to use AI to its full potential. They are not promising a seamless transition. They recognize that there are policy positions to be worked out. They recognize that many changes will be painful for some, but that if we are careful in our approach, we can make this work to the benefit of all. And to be fair again, many of the most vocal AI supporters are heavily invested in AI companies and have a financial interest in AI.
If you’re hearing this vision for the first time, it may be difficult to wrap your head around. Moving from an AI tool that can create amusing poems and sea shanties to one where AI controls most parts of our society may sound far-fetched, but it is a real possibility. And I want you to be aware of it. I’ve read a lot about the promise of AI and how it might benefit humanity, and, well, I have questions. Here are some of them.
Who ordered this AI future for us?
Who is in charge of this?
Who is monitoring AI tools to make sure they are developed safely and ethically?
Who is monitoring AI research to make sure that AI can’t turn against us?
How do we minimize the potential disruptions to personal lives and personal fortunes that AI may cause?
Who is looking at the huge number of potential negative impacts of AI and how we will manage them?
And there are more questions. A lot more. And in future posts I will take a look at those questions. For example, AI development is having a massive environmental impact as it requires more and more electricity and, surprisingly, water, but more on that later. AI enthusiasts do recognize the possible disruptions in our lives. But at the moment there is more work being done on developing AGI than on managing its potential impact on our world and society.
One of my goals is to help people understand these issues, the potential benefits AI may bring, and the potential problems it may cause. But, more on that later. Thanks for reading, and please feel free to subscribe to get updates sent directly to your email inbox.
Links to Readings and Sources.
The History of AI
https://www.tableau.com/data-insights/ai/history
A Roadmap to AI Utopia, by Vinod Khosla
https://time.com/7174892/a-roadmap-to-ai-utopia/
Superagency: What Could Possibly Go Right with Our AI Future, by Reid Hoffman
Self-Driving Labs at North Carolina State University
https://engr.ncsu.edu/news/tag/self-driving-labs-sdls/
Ed Zitron’s Blog: Where’s Your Ed At?
OpenAI’s Sam Altman says ‘we know how to build AGI’
https://www.theverge.com/2025/1/6/24337106/sam-altman-says-openai-knows-how-to-build-agi-blog-post
Seven Early Robots and Automatons