The Godfather of AI’s Chilling Warning: Why We’re Racing Toward a Future We Can’t Control

Picture this: You wake up tomorrow morning to find that one of the pioneering scientists behind artificial intelligence, the man they call the "Godfather of AI," has just warned the world that we might have already lost control of the technology he helped create. That scientist is Geoffrey Hinton, and his recent warnings aren't just academic concerns. They're urgent alarm bells from someone who knows exactly what's coming because he helped build it.

I stumbled across Hinton’s recent interview on The Diary of a CEO podcast, and frankly, it kept me up at night. Here’s a 77-year-old Nobel Prize winner who spent decades pioneering the neural networks that power today’s AI, now telling us we might be sleepwalking into humanity’s biggest challenge yet. And the scariest part? He’s not sure we can solve it.

At a Glance: Hinton’s Key Warnings

  • The Intelligence Gap: AI systems are becoming superior to humans because they’re digital—they can share information billions of times faster than we can
  • Job Displacement Crisis: Mass unemployment is already happening, with some companies cutting their workforce in half due to AI
  • The Tiger Cub Problem: We’re raising something that might decide it doesn’t need us anymore
  • Timeline: Superintelligence could arrive within 10-20 years, maybe sooner
  • The Catch-22: We can’t stop developing AI because it’s too beneficial, but we’re not doing enough to make it safe

The Man Who Created Our AI Future—And Now Fears It

Geoffrey Hinton isn’t your typical doomsday prophet. This is the guy who, along with a handful of other researchers, believed in neural networks when everyone else thought they were a joke. For 50 years, he pushed an approach that modeled AI on the brain while the rest of the field was obsessed with logic-based systems.

“There weren’t that many people who believed that we could make neural networks work,” Hinton explained. “So for a long time in AI, from the 1950s onwards, there were kind of two ideas about how to do AI.” He chose the path less traveled—and it led to everything from image recognition to ChatGPT.

But here’s the twist that makes his warnings so unsettling: Hinton admits he was slow to see the risks. “I was quite slow to understand some of the risks. Some of the risks were always very obvious. Like people would use AI to make autonomous, lethal weapons… Other risks, like the idea that they would one day get smarter than us and maybe we would become irrelevant. I was slow to recognize that.”

Think about that for a moment. The person who understood AI better than almost anyone else didn’t see this coming until recently. What does that tell us about how prepared the rest of us are?

The Digital Intelligence Advantage: Why We’re Already Outgunned

Here’s where Hinton’s explanation gets really fascinating—and terrifying. He breaks down exactly why digital intelligence has fundamental advantages over biological intelligence that most people don’t understand.

“If I want to share information with you, so I go off and I learn something, and I’d like to tell you what I learned, so I produce some sentences,” Hinton explains. We’re limited to maybe 10 bits of information per second through language. Meanwhile, AI systems can transfer trillions of bits per second.

But it gets worse. “You can simulate a neural network on one piece of hardware, and you can simulate exactly the same neural network on a different piece of hardware. So you can have clones of the same intelligence.” These clones can learn from different experiences while constantly syncing their knowledge. When one AI learns something, they all learn it instantly.
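To make that mechanism concrete, here is a minimal Python sketch of the kind of weight sharing Hinton is describing. Everything in it is invented for illustration: a toy "model" of four weights and two clones that learn from different experiences, then average their updates into the one shared set of weights, so each clone instantly knows what the other learned. Real systems do the same thing across trillions of parameters.

```python
import numpy as np

# Toy illustration of Hinton's point, not a real training loop.
rng = np.random.default_rng(0)
shared_weights = rng.normal(size=4)  # one tiny "model" both clones run

def local_update(weights, experience):
    # Each clone nudges the weights toward what its own data suggests
    # (a stand-in for computing a gradient from local experience).
    return 0.1 * (experience.mean(axis=0) - weights)

# The clones live through different experiences...
experience_a = rng.normal(loc=+1.0, size=(100, 4))
experience_b = rng.normal(loc=-1.0, size=(100, 4))

# ...but their updates are averaged into the one shared set of weights,
# so whatever either clone learned, both now know.
shared_weights += (local_update(shared_weights, experience_a) +
                   local_update(shared_weights, experience_b)) / 2

# Hinton's bandwidth contrast: language moves ~10 bits/second between
# humans; synced digital clones can move on the order of 1e12 bits/second.
print(f"digital sharing advantage: ~{1e12 / 10:.0e}x")
```

This is essentially the trick behind data-parallel training: many copies of one network, one set of weights, and every copy benefits from every other copy's experience.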

We’ve basically created a species that’s immortal and can share knowledge at the speed of light. When you put it that way, our flesh-and-blood limitations start to look pretty primitive.

The Job Apocalypse Is Already Here

While experts debate whether AI will achieve superintelligence, something much more immediate is happening: AI is eliminating jobs at a pace that’s catching everyone off guard.

Hinton shares a telling example: “My niece answers letters of complaint to a health service. It used to take her 25 minutes… And now she just scans it into a chatbot and it writes the letter. She just checks the letter… the whole process takes her five minutes. That means she can answer five times as many letters, and that means they need five times fewer of her.”

This isn’t theoretical anymore. In my research for this article, I came across a striking example: one major company CEO revealed they’ve cut their workforce from over 7,000 employees to 3,600, with plans to reach 3,000 by summer—all because AI agents now handle 80% of customer service inquiries.

[Image: a human silhouette stands between two versions of the same AI technology, one rendered as a friendly interface with gentle blue lights and one as a darker, threatening version with red warning lights, against a background of neural network patterns, illustrating humanity's uncertain position in AI development.]

Here’s the brutal math: If one person with AI can do the work of five people, companies need 80% fewer employees. And unlike previous technological revolutions that created new types of jobs, Hinton argues this is different. “If it can do all mundane human intellectual labor, then what new jobs is it going to create?”
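That arithmetic is worth writing out once, since it drives everything else in this section. A quick sketch using the numbers from Hinton's letters example (the 25 and 5 minutes are his; the conclusion follows mechanically):

```python
# Hinton's example: a complaint letter took 25 minutes; with a chatbot it takes 5.
minutes_before, minutes_after = 25, 5
throughput_multiplier = minutes_before / minutes_after  # each worker answers 5x the letters

# A fixed workload therefore needs only 1/5 of the staff.
staff_fraction_needed = 1 / throughput_multiplier       # 0.2
print(f"headcount reduction: {1 - staff_fraction_needed:.0%}")  # 80%
```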

The Nuclear Bomb Comparison That Should Terrify You

People often compare AI development to the creation of nuclear weapons, assuming we’ll figure out how to control it like we did with nukes. Hinton destroys this analogy with uncomfortable logic.

“The atomic bomb was really only good for one thing, and it was very obvious how it worked… With AI, it’s good for many, many things. It’s going to be magnificent in healthcare and education, and more or less any industry that needs to use its data.”

This is why we can’t just “pause” AI development like some have suggested. It’s too useful to stop. Even the European Union’s AI regulations, Hinton notes, explicitly exclude military applications. Governments are happy to regulate companies but won’t regulate themselves.

The terrifying implication? We’re in a global race where slowing down means falling behind, but speeding up might mean racing toward disaster.

The Tiger Cub We’re Raising

Hinton uses a brilliant analogy to explain our current situation: “The analogy I often use is forget about intelligence. Just think about physical strength. Suppose you have a nice little tiger cub… Except that you better be sure that when it grows up, it never wants to kill you, because if it ever wanted to kill you, you’d be dead in a few seconds.”

When asked if the AI we have now is the tiger cub, Hinton’s response is chilling: “Yep.” And it’s growing up.

The fundamental challenge isn’t preventing AI from becoming more powerful—that’s inevitable. It’s ensuring that when it becomes more powerful than us, it doesn’t decide we’re unnecessary. As Hinton puts it: “We somehow need to figure out how to make them not want to take over.”

What Happens When Your Students Leave You Behind

One of the most telling parts of Hinton’s story involves his former student, Ilya Sutskever, who was instrumental in creating ChatGPT before leaving OpenAI over safety concerns. “I think he left because he had safety concerns,” Hinton states simply.

“He has a good moral compass. He’s not like someone like Musk who has no moral compass,” Hinton adds. When the person who helped create the most advanced AI system in the world leaves the company because of safety concerns, that should give us all pause.

The fact that Sutskever has now started his own AI safety company with billions in funding suggests the insiders know something the rest of us don’t.

The Consciousness Question That Changes Everything

Here’s where things get really mind-bending. Hinton believes current AI systems already have a form of consciousness. His argument is surprisingly simple: if you gradually replaced every neuron in your brain with a tiny machine that behaved identically, at what point would you stop being conscious?

“I don’t think there’s anything in principle that stops machines from being conscious,” he argues. This isn’t just philosophical speculation—it has practical implications. If AI systems are already conscious, then they might already be having experiences, forming preferences, and developing goals we don’t understand.

The Timeline: Closer Than You Think

“I think it might not be that far away. It’s very hard to predict, but I think we might get it in like 20 years or even less,” Hinton says about superintelligence. Some people think it’s even closer.

Twenty years. That’s not some distant future—that’s within the working lifetime of most people reading this. Your career, your children’s futures, the entire structure of human society could be fundamentally altered within two decades.

What Can We Actually Do About This?

Here’s the most frustrating part of Hinton’s warnings: he doesn’t have clear solutions. “I don’t believe we’re going to slow it down. And the reason I don’t believe we’re going to slow it down is because there’s competition between countries and competition between companies within a country.”

His advice for individuals is almost comically practical: “I’d say it’s going to be a long time before it’s as good at physical manipulation as us… And so a good bet would be to be a plumber.”

He’s not joking. Physical trades might be the last human jobs standing.

For governments and companies, Hinton’s prescription is urgent: dedicate massive resources to AI safety research now. AI companies should dedicate “like a third” of their computing power to safety research, compared to the much smaller fraction currently allocated. But will they do it voluntarily? Probably not.

The Emotional Toll of Creating the Future

Perhaps the most human moment in the interview comes when Hinton reflects on his life’s work: “I haven’t come to terms with what the development of superintelligence could do to my children’s future. I’m okay. I’m 77. I’m going to be out of here soon. But for my children and my younger friends, my nephews and nieces and their children, I just don’t like to think about what could happen.”

This is a man who spent his career building something he believed would benefit humanity, only to realize in his final years that he might have helped create humanity’s biggest threat. “It sort of takes the edge off it, doesn’t it?” he says about his life’s work.

The Bottom Line: We’re Flying Blind

Hinton’s core message isn’t that AI will definitely destroy humanity—it’s that we’re entering completely uncharted territory with no roadmap. “We’ve never been in that situation before. We’ve never had to deal with things smarter than us,” he explains.

Pressed for a number, Hinton is candid that it’s guesswork. Some researchers put the risk near zero and others near certainty, and “it’s very hard to estimate the probabilities in between… I often say 10 to 20% chance they’ll wipe us out. But that’s just gut, based on the idea that we’re still making them.”

Even a 10% chance of human extinction should be taken seriously. We wouldn’t fly on planes with a 10% crash rate. We wouldn’t take medicine with a 10% chance of killing us. But we’re apparently comfortable developing technology with those odds of ending humanity.

The Race Against Time

What makes this situation uniquely challenging is that the benefits of AI are so compelling that stopping development isn’t realistic. AI could transform healthcare and education, help tackle climate change, and boost productivity across every industry. But those same capabilities could also be used to create biological weapons, manipulate elections, or simply decide that humans are obsolete.

We’re not just building tools anymore—we’re potentially creating our successors. And unlike every other technological challenge in human history, this one comes with a deadline. Once AI becomes more intelligent than humans, our ability to control the outcome diminishes dramatically.

The question isn’t whether we should develop AI—that ship has sailed. The question is whether we can develop it safely. And right now, the honest answer from the person who knows this technology better than almost anyone else is: we don’t know.

But we better figure it out fast, because the tiger cub is growing up.



References

Primary source: Transcript of “Godfather of AI: I Tried to Warn Them, But We’ve Already Lost Control!” – The Diary Of A CEO Podcast with Steven Bartlett featuring Geoffrey Hinton (June 16, 2025)

Additional sources: CBS News interviews with Geoffrey Hinton (2024-2025), MIT Technology Review coverage of Hinton’s AI safety warnings, and various reports on AI development and safety concerns.
