geeky NEWS: Navigating the New Age of Cutting-Edge Technology in AI, Robotics, Space, and the latest tech Gadgets
As a passionate tech blogger and vlogger, I specialize in four exciting areas: AI, robotics, space, and the latest gadgets. Drawing on my extensive experience working at tech giants like Google and Qualcomm, I bring a unique perspective to my coverage. My portfolio combines critical analysis and infectious enthusiasm to keep tech enthusiasts informed and excited about the future of technology innovation.
What the AI Godfather Geoffrey Hinton Fears Most About Its Rapid Advance in AGI
Updated: May 05 2025 11:22
AI Summary: Geoffrey Hinton, often called the "godfather of AI," left Google in 2023 to freely discuss his evolving views on AI, highlighting both its immense promise and significant perils. He notes AI is developing faster than anticipated, potentially leading to AGI sooner than previously thought, while offering revolutionary benefits in fields like healthcare and education. However, he expresses serious concerns about job displacement and the existential risk posed by advanced AI, criticizing the lack of regulation and the competitive rush among companies, emphasizing the urgent need for increased safety research and public awareness.
Often referred to as the "godfather of artificial intelligence," Hinton recently sat down for an illuminating conversation about the current state and future of AI, nearly two years after his previous interview on the subject. This Nobel laureate's insights reveal both the promise and peril of our increasingly AI-driven world.
Hinton's Departure from Google: A Watershed Moment
In May 2023, Geoffrey Hinton made headlines when he left Google after a decade of employment to speak freely about AI risks. "I console myself with the normal excuse: If I hadn't done it, somebody else would have," Hinton told The New York Times, expressing regret about certain aspects of his life's work.
This departure marked a pivotal moment for the tech industry. As one of the foundational figures in modern AI development—his 2012 breakthrough with students Ilya Sutskever and Alex Krishevsky laid the groundwork for today's neural network technologies—Hinton's defection from Google to the ranks of AI critics carried special significance.
His decision wasn't taken lightly. A lifelong academic whose career was driven by personal convictions, Hinton had previously left Carnegie Mellon University in the 1980s because he was reluctant to take Pentagon funding, demonstrating his long-standing ethical concerns about technology applications.
AI Development: Faster Than Expected
One of Hinton's most striking observations is how quickly AI has evolved in just the past couple of years. "AI has developed even faster than I thought," he notes, particularly highlighting the emergence of AI agents that can actively perform tasks in the real world rather than merely answering questions—a development he considers "scarier than before."
When pressed about his timeline for the arrival of artificial general intelligence (AGI) or superintelligence, Hinton estimates there's "a good chance it comes between four and 19 years from now," revising his previous estimate downward. More specifically, he believes there's "a good chance it'll be here in 10 years or less now."
This accelerated timeline reflects a dramatic shift in Hinton's thinking. In his New York Times interview, he revealed: "The idea that this stuff could actually get smarter than people—a few people believed that. But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that."
The Promising Future of AI
Despite his concerns, Hinton acknowledges several fields where AI presents tremendous opportunities:
In healthcare, AI is poised to transform patient care in multiple ways:
Medical imaging analysis will soon surpass human experts
AI-powered family doctors could draw on experience from "millions of patients"
Integration of genomic information with patient history and test results
Superior diagnostic capabilities in difficult cases
More effective drug design
While Hinton believes universities will face challenges, he thinks graduate-level research at good institutions in education will likely survive as "the best source of truly original research."
AI tutors could help students learn 2-4 times faster than traditional methods
Personalized learning experiences tailored to individual misunderstandings
Custom examples to clarify concepts for each student
AI could help address climate change through:
Development of better materials, particularly advanced batteries
Potential breakthroughs in room temperature superconductivity
Possible carbon capture innovations (though Hinton expresses skepticism about energy considerations)
More broadly, AI will enhance productivity across industries with economic efficiency:
Better prediction capabilities from data
Enhanced customer service experiences
Streamlined operations across sectors
The Job Displacement Concern
While Hinton once downplayed concerns about AI-driven job displacement, his position has evolved. "AI's got so much better in the last few years," he explains, expressing worry about jobs in call centers, law, journalism, accounting, and other routine knowledge work.
The core issue, as Hinton sees it, isn't about technology itself but about how its benefits are distributed: "It ought to be that if you can increase productivity, everybody benefits... But we know it's not going to be like that. We know what's going to happen is the extremely rich are going to get even more extremely rich and the not very well-off are going to have to work three jobs."
In his New York Times interview, he expanded on this concern: "Today, chatbots like ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks. 'It takes away the drudge work,' he said. 'It might take away more than that.'"
The Existential Risk
Perhaps most sobering is Hinton's assessment of the existential risk posed by advanced AI systems. When asked about the probability of AI eventually "taking over," Hinton places it at roughly "10 to 20%" while acknowledging this is "just a wild guess." He suggests most experts would agree the probability lies somewhere between 1% and 99%.
He compares our current situation to having "this really cute tiger cub" that we need to worry about as it grows up, unless we can be absolutely certain it won't want to harm us. The fundamental challenge is that we have no experience controlling entities more intelligent than ourselves.
"How many examples do you know of less intelligent things controlling much more intelligent things?" Hinton asks rhetorically. His kindergarten analogy is particularly vivid: just as adults could easily manipulate young children to gain control, superintelligent AI could manipulate humans with relative ease.
This concern is not merely theoretical. Hinton notes that current AI systems are "already capable of deliberate deception. They're capable of pretending to be stupider than they are, of lying to you so that they can kind of confuse you into not understanding what they're up to."
The Regulatory Challenge
Given these concerns, what about regulation? Hinton expresses disappointment with the current regulatory landscape, noting that major AI companies are "lobbying to get less AI regulation" despite the minimal oversight that currently exists.
He particularly criticizes the release of model weights by companies like Meta and OpenAI, comparing it to allowing anyone to purchase fissile material for nuclear weapons on Amazon. When weights are released, what once required hundreds of millions of dollars to develop can be fine-tuned for malicious purposes at a fraction of the cost.
Hinton is skeptical about the current U.S. government's appetite for meaningful regulation, observing that "all of the big AI companies have got into bed with Trump." He expresses particular disappointment with Google's reversal on its promise not to support military applications of AI.
Speaking to The New York Times, Hinton explained: "The idea that this stuff could actually get smarter than people—a few people believed that. But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that."
The Corporate Landscape
Hinton's assessment of the major AI companies is similarly sobering. He notes how OpenAI, originally established "explicitly to develop superintelligence safely," has gradually de-emphasized safety concerns and is now "trying to go public" and become "a for-profit company."
When asked which AI company he would feel comfortable working for today, Hinton initially said none, though he later amended this to possibly Google or Anthropic. He singles out Anthropic as "the most concerned with safety," noting that many safety researchers who left OpenAI went there, creating "much more of a culture concerned with safety."
His concern extends to the competitive dynamics driving AI development. In his Times interview, he noted that until recently, Google had acted as a "proper steward" for AI technology. However, with Microsoft challenging Google's core business by integrating ChatGPT technology into Bing, Google has been forced to accelerate its own AI deployment. "The tech giants are locked in a competition that might be impossible to stop," Hinton observed.
Hinton's Epiphany About AI Capabilities
A pivotal moment in Hinton's evolving perspective came during his research at Google. Initially, he believed that while neural networks were powerful for language processing, they remained inferior to human intelligence in fundamental ways.
However, as companies like Google and OpenAI built systems using vastly larger amounts of data, his view shifted dramatically. He began to recognize that these systems might be "actually a lot better than what is going on in the brain" in certain respects.
One particular insight that alarmed him involved the communication advantage AI systems have over humans. While humans must communicate ideas slowly through language—"a few hundred bits of information at most"—digital AI systems can share "trillions of bits a second" between instances of themselves. This realization about AI's collaborative potential was a key factor in heightening his concerns.
Conclusion: Finding Hope Among Concerns
Despite his many concerns, Hinton hasn't given up hope. He believes that public awareness and pressure could force governments and companies to take AI safety more seriously. His recommendation is that AI companies should dedicate "a significant fraction" of their computing resources—something like one-third—to safety research.
In his New York Times interview, Hinton suggested that "the best hope is for the world's leading scientists to collaborate on ways of controlling the technology," adding, "I don't think they should scale this up more until they have understood whether they can control it."
As AI continues its rapid evolution, Hinton's insights offer a crucial perspective from someone who has been at the forefront of the field for decades. His combination of technical expertise and ethical concern provides a valuable framework for understanding both the opportunities and challenges that lie ahead.