Inside Google DeepMind: How 15 People Accomplished What Took Humanity 50 Years


AI Summary

At the London premiere of "The Thinking Game," Google DeepMind's Demis Hassabis revealed that a small team of 15-20 people accomplished in a few years what took tens of thousands of scientists five decades: determining protein structures. While the global scientific community had painstakingly identified 150,000 protein structures over 50 years, DeepMind's AlphaFold system predicted 200 million in just a few years, compressing what would be a billion years of PhD-level work into a single year and making these structures accessible to over 2.5 million researchers globally, with potential to revolutionize drug discovery and other scientific fields.


May 27 2025 15:19

When Derek Muller, creator of the science YouTube channel Veritasium, sat down with Google DeepMind CEO Demis Hassabis at the Science Museum in London for the premiere of "The Thinking Game," the conversation revealed something extraordinary: a small team of 15-20 people had just accomplished what took tens of thousands of scientists five decades to achieve.

The numbers are staggering. For 50 years, the global scientific community painstakingly determined the structures of 150,000 proteins using expensive, complicated equipment. Then DeepMind's AlphaFold system predicted the structures of 200 million proteins in just a few years.


The Moment That Changed Everything

"It was very satisfying to see that this sort of idea, that maybe if we crack this really important problem, potentially millions of researchers around the world will make use of it," Hassabis recalled, describing the moment they released AlphaFold's protein structure database to the world. "To see that sort of lighting up all across the globe is really a kind of humbling and amazing experience."


The visual Hassabis describes, a map of the globe lighting up in real time as researchers accessed the data, captures something profound about how AI can democratize scientific discovery. Within moments of the release, scientists from nearly every country began incorporating these protein structures into their research.

But the scale becomes truly mind-bending when you consider the human effort equivalent. As Hassabis explained, it typically takes a PhD student their entire doctoral program, about five years, to determine the structure of a single protein. Multiply that by 200 million structures, and you get a billion years of PhD-level work compressed into a single year.
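The figure checks out with simple arithmetic, assuming (as Hassabis does) roughly five years per structure:

```python
# Back-of-envelope check of the "billion years of PhD-level work" figure.
structures_predicted = 200_000_000   # AlphaFold's predicted structures
years_per_structure = 5              # rough length of one PhD project

phd_years = structures_predicted * years_per_structure
print(f"{phd_years:,} PhD-years")    # prints "1,000,000,000 PhD-years"
```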

Building on Giants' Shoulders

Hassabis was quick to acknowledge that this breakthrough wasn't created in a vacuum. "We couldn't have done it without the first 150,000," he emphasized, paying tribute to the structural biology community whose decades of painstaking work provided the foundation for AlphaFold's learning.

This illustrates a crucial principle in AI development: these systems don't replace human expertise, they amplify it exponentially. AlphaFold learned from those initial 150,000 protein structures, then used that knowledge to predict millions more, even learning from its own best predictions in a self-improving cycle.
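The "self-improving cycle" is a form of self-training: fit a model on the labeled data, predict labels for unlabeled examples, and fold only the high-confidence predictions back into the training set. The sketch below is a deliberately tiny toy (a 1-D threshold classifier with made-up numbers), not AlphaFold's actual pipeline, but it shows the loop's shape:

```python
# Toy self-training loop: learn from a small labeled set, then fold
# high-confidence predictions back into training. Illustrative only.

def train(points):
    """Fit a 1-D threshold: the midpoint between the two class means."""
    neg = [x for x, y in points if y == 0]
    pos = [x for x, y in points if y == 1]
    return (sum(neg) / len(neg) + sum(pos) / len(pos)) / 2

def predict(threshold, x):
    """Return (label, confidence); farther from the threshold = more confident."""
    return (1 if x > threshold else 0), abs(x - threshold)

labeled = [(0.0, 0), (1.0, 0), (9.0, 1), (10.0, 1)]   # the small "ground truth" set
unlabeled = [0.5, 4.9, 5.1, 9.5]                      # everything else

for _ in range(3):  # a few self-distillation rounds
    threshold = train(labeled)
    confident = [(x, predict(threshold, x)[0]) for x in unlabeled
                 if predict(threshold, x)[1] > 2.0]   # keep only confident calls
    labeled = labeled + confident                     # fold predictions back in

print(round(train(labeled), 2))   # the threshold settles between the clusters
```

The key design point mirrors the article's: the model never outgrows the need for the original labeled data; it extends that foundation rather than replacing it.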

The impact extends far beyond the immediate scientific community. Over 2.5 million researchers from virtually every country now use AlphaFold's predictions, working on biology and medical problems that were previously intractable.


From Proteins to Pills: The Drug Discovery Revolution

Understanding protein structures is just the beginning. Hassabis revealed that DeepMind has spun out a sister company, Isomorphic Labs, specifically focused on drug discovery. The company is working on more than a dozen drug programs, targeting cancers, cardiovascular diseases, and other conditions.

The potential timeline compression is remarkable. Traditional drug discovery takes about 10 years on average to go from understanding a target to having a drug in clinical trials. Hassabis envisions reducing this to "maybe a matter of months, perhaps even weeks, just like we did with the protein structures." Current applications already show promise:

  • Designing enzymes to break down ocean plastics
  • Creating proteins for carbon capture
  • Developing new approaches to environmental cleanup
  • Accelerating the development of treatments for previously intractable diseases

The Unlikely AI Pioneer

Hassabis's path to AI leadership began during what many considered an "AI winter", a period when AI research had fallen out of favor. When he graduated from Cambridge, most people had written off AI as overhyped.

But Hassabis saw something others missed. Growing up fascinated by chess and intelligence itself, he believed AI would be "not only the most powerful tool to help us do science but the most interesting thing to develop in itself."

His insight was rooted in biology. Neural networks and reinforcement learning, the core techniques behind modern AI, aren't just computational abstractions. They're inspired by how biological brains actually work. "The human brain is a form of those," Hassabis explained. "Neural networks—that's what inspired artificial neural networks in the first place."

This biological grounding gave him confidence that these approaches could scale, even when others were skeptical. The dopamine system in human brains implements reinforcement learning, the same technique that powers many of DeepMind's breakthroughs.
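The dopamine claim refers to temporal-difference (TD) learning: dopamine neurons are thought to signal a reward-prediction error, the same quantity that drives value updates in RL. Here is a minimal TD(0) sketch; the two-state "cue then reward" task and all parameter values are illustrative, not from the interview:

```python
# Minimal TD(0) value learning. `delta`, the reward-prediction error,
# is the signal dopamine neurons are believed to encode.
alpha, gamma = 0.1, 0.9            # learning rate, discount factor
V = {"cue": 0.0, "reward_state": 0.0}

for _ in range(100):
    # The agent repeatedly sees "cue", then reaches "reward_state" (reward 1).
    delta = 0 + gamma * V["reward_state"] - V["cue"]   # prediction error at the cue
    V["cue"] += alpha * delta
    delta = 1 + gamma * 0.0 - V["reward_state"]        # terminal reward arrives
    V["reward_state"] += alpha * delta

# Values converge toward 1.0 (reward state) and gamma * 1.0 = 0.9 (cue),
# i.e. the cue comes to predict the discounted upcoming reward.
print(round(V["reward_state"], 2), round(V["cue"], 2))
```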


The London Advantage

Despite pressure from investors to move to Silicon Valley, Hassabis chose to build DeepMind in London. His reasoning reveals important insights about the future of AI development.

First, there was untapped talent. The UK's universities (Cambridge, Oxford, Imperial College, UCL) were producing brilliant graduates who wanted intellectually challenging work but had limited options beyond finance. "There weren't that many companies doing that kind of stuff in the UK or actually in Europe really," Hassabis noted.

But the deeper reason was philosophical. "AI is so important. It's going to affect the whole world," he said. "I felt that it needs the international sort of approach and cooperation around what we want to do with this technology."

Building AI in a single geographic concentration—even one as innovative as Silicon Valley—seemed insufficient for a technology that would transform every aspect of human society. The development process needed "social scientists, economists, psychologists, governments, academia, all to be involved in defining how this enormously transformative technology should go."

The Race Dynamics Problem

Despite AI's promise, Hassabis acknowledges serious risks ahead. The most pressing concern isn't rogue AI; it's rogue humans in positions of power over AI development.

"Even if all the actors are good in that environment, let alone if you have some bad actors, that can drag everyone to rush too quickly, to cut corners," he explained. This creates what economists call a "tragedy of the commons" situation—individual rational decisions that collectively lead to bad outcomes.

Initially, the massive resources required for AI development seemed like a natural brake on bad actors. Like nuclear weapons, AI appeared to require state-level sponsorship or huge corporate backing. But recent developments have changed this calculation.

Companies like DeepSeek and Alibaba have shown that powerful AI systems can be built more efficiently than previously thought. This democratization has benefits—more researchers can access and build upon AI tools—but also risks, as it lowers the barrier for potential misuse.

Preparing the Next Generation

When asked about preparing children for an AI-transformed world, Hassabis offered surprisingly traditional advice: "Send them to school." He gives his own kids the same advice.

His reasoning draws from his own generation's experience with computers. Initially feared and misunderstood, computers became second nature to those who grew up with them. "They're often the ones that can extend it into new ways we couldn't even dream of today," he noted.

The key subjects remain mathematics and computer science, as these provide the foundation for understanding and extending AI systems. But equally important is developing the ability to "learn to learn"—adapting quickly to new technologies that seem to emerge almost weekly.

A New Renaissance

Looking ahead five to ten years, Hassabis envisions "a new renaissance, almost a new golden age" in scientific discovery. AlphaFold represents just the beginning—he expects AI to drive breakthroughs across multiple scientific fields, from curing diseases to developing new energy sources and addressing climate change.

This optimistic vision comes with caveats. The same technologies that could solve humanity's greatest challenges could also create unprecedented risks if developed irresponsibly or controlled by bad actors.

The conversation revealed a leader deeply aware of both the promise and peril of the technology he's helping create. Hassabis has planned for success from the beginning, but that planning includes grappling with "enormous risks of misuse" that come with AI's transformative power.

DeepMind is working toward AGI: AI systems that match or exceed human cognitive abilities across all domains. Whether such systems will develop consciousness remains an open question, one that Hassabis sees as part of the journey itself. "I always felt actually answering that question was one of the things that will come about being on this journey with AI," he explained. The process of building artificial minds and comparing them to human brains may finally help us understand mysteries like dreaming, emotions, creativity, and consciousness itself.

Filmed over five years inside DeepMind, "The Thinking Game" captures not just technological development but a fundamental shift in humanity's relationship with intelligence itself. The small team that compressed 50 years of protein research into a few years of computation has opened a door that can't be closed.


Watch "The Thinking Game": https://thinkinggamefilm.com/stream
