Nobel Prize Winners: When AI Solves Problems We Do Not Understand?

AI Summary

At a Tokyo Nobel Prize gathering, experts debated the profound implications of AI surpassing human intelligence, extending beyond technological capabilities to core questions of human purpose, work, and identity. Discussions highlighted that AI is no longer futuristic, as evidenced by breakthroughs like AlphaFold 2 solving protein folding without human understanding of its method. Concerns arose regarding job displacement, the "black box" nature of AI decisions, and accountability, alongside philosophical questions about AI consciousness.


May 24, 2025 09:25

In a packed auditorium in Tokyo, some of the world's most brilliant minds gathered to tackle a question that keeps many of us awake at night: What happens when AI becomes smarter than we are? The answers from Chemistry Nobel laureate Ada Yonath and leading AI researchers were both more nuanced and more unsettling than you might expect.

"The biggest challenge is the challenge to what is a human being," said Arisa, cutting straight to the heart of the matter. It's not about whether AI can process data faster or solve complex equations—we already know it can. The real question is what happens to our sense of purpose, our work, and our identity when machines can do what we do, often better than we can.

This shift represents a fundamental change from previous AI discussions. Five years ago, these conversations were mostly confined to tech labs and academic conferences. Today, millions of people use ChatGPT, Gemini, or other AI tools daily, and they're all grappling with the same unsettling realization: the technology is no longer a distant future—it's here, and it's changing everything.

When AI Solves Problems We Don't Understand

Perhaps no example illustrates this paradox better than the protein folding breakthrough that Ada Yonath discussed. For decades, scientists struggled to predict how proteins fold into their complex three-dimensional structures—a problem crucial to understanding life itself and developing new medicines.

The success rate hovered around 45% for years, despite the best efforts of researchers worldwide. Then came AlphaFold 2, Google DeepMind's AI system, which solved the problem with stunning accuracy. But here's the unsettling part: we don't actually understand how it works.

The protein folding problem was solved by a process we do not understand. We don't know how AlphaFold 2 does this.


Yonath emphasized that this success wasn't just about AI—it built on decades of human effort. Hundreds of thousands of researchers had painstakingly collected and cataloged protein structures, creating the foundation that made AlphaFold's breakthrough possible. It's a perfect example of human-AI collaboration, but it also raises profound questions about the nature of scientific understanding.

Does it matter if we don't understand how we solved the problem, as long as we get the right answer? Yonath offered a philosophical perspective: "Nature came to incredible designs without studying together with us. So we have to give nature a lot of respect."

The Employment Paradox Nobody Wants to Talk About

The conversation took a sharp turn when discussing AI's impact on work. The panelists acknowledged an uncomfortable truth: while we've long fantasized about machines doing our drudgery so we can pursue higher callings, the reality is more complicated. An audience member raised the fundamental question underlying all employment anxiety:

We will have automated systems that reduce our work and do all the laborious tasks so we can enjoy our lives. But this implicitly rests on the idea that people still have earnings to support daily life regardless of reduced labor. In reality, can we accept the idea that people earn their daily living even though they rely on automated machines?


The question touched on universal basic income and broader societal restructuring, but also revealed a deeper tension. Arisa pointed out that in Japan, labor shortages mean AI automation is actually welcome in many sectors. The challenge isn't just technological—it's about redesigning social and economic systems that have been built around human labor.

  • Current reality: AI agents can now make hotel reservations, purchase items online, and perform sequences of actions, not just answer questions
  • Physical AI: Robots with generalized behaviors can handle laundry and housekeeping tasks
  • The limitation: These systems still can't expand beyond narrow domains effectively

The Consciousness Question We're Not Ready For

The discussion reached its most philosophical depths when addressing whether AI will become conscious. The panel played clips from AI pioneer Geoffrey Hinton, who argued that consciousness naturally emerges in sufficiently complex systems capable of modeling themselves.

I think that's just gibberish... I'm a materialist. Consciousness emerges in sufficiently complicated systems capable of modeling themselves. And there's no reason why these things won't be conscious.


Matsuo agreed with this assessment, but raised an even more challenging question: what happens when these conscious AIs become more intelligent than humans? Hinton's answer was characteristically blunt: "There's very few cases of more intelligent things being controlled by less intelligent things."

Yet Matsuo offered a more optimistic perspective, noting that in human society, less intelligent agents often control more intelligent ones through careful design of objective functions and power structures. The key insight: control isn't just about raw intelligence—it's about purpose, incentives, and system design.

The Black Box Problem and Who Takes Responsibility

One of the most practical concerns raised was AI's "black box" nature—we often can't explain how these systems reach their conclusions. This isn't just an academic curiosity; it has real implications for accountability and trust.

Arisa reframed the problem in social terms: "The problem is not that technology is a black box, but we actually don't know who takes responsibility for risks. When something goes wrong and the machine tells us why it made a decision, that's not the explanation we want. We want to know who takes responsibility."

This perspective shifts the focus from technical explainability to social accountability—a much more tractable problem that requires collaboration between engineers, social scientists, and policymakers.

Cultural Differences in AI Safety

A fascinating subplot emerged around cultural differences in approaching AI safety. Arisa noted that even the word "safety" carries different connotations across cultures:

When we talk with colleagues from overseas, we need to discuss conceptually what safety means. In Japanese, we call safety 'anzen,' but Japanese safety might be slightly different from European safety because it depends on circumstances. What safety criteria people actually want is totally different.


This insight reveals a crucial challenge for international AI governance: how do you create global standards when the fundamental concepts mean different things to different cultures?


The Path Forward: Collaboration, Not Control

Despite the weighty concerns, the discussion wasn't entirely pessimistic. The panelists emphasized that successful AI implementation requires more than just technical advancement—it demands changes in human awareness, infrastructure, and social systems.

Arisa observed that AI implementation often fails not because of technical limitations, but because of human and institutional unreadiness:

Even if the technology is good, if people's awareness, infrastructure, networking, security, or datasets aren't ready—and I can say that many places are not ready yet—then implementation doesn't go well.


The solution isn't to slow down technological development, but to accelerate our social and institutional adaptation. This includes:

  • International cooperation: More collaborative research on AI safety and governance
  • Public engagement: Broader societal discussions about AI's role, not just among experts
  • Systemic thinking: Recognizing that AI isn't a silver bullet but a tool requiring careful integration
  • Cultural sensitivity: Acknowledging that AI safety means different things in different contexts

Learning to Live with Uncertain Intelligence

An audience member concluded the session with a provocative suggestion: if AI becomes smarter than humans, perhaps we should let it manage our organizations and governments. After all, as Hinton noted, AI systems are already proving to be effective mediators, helping people with opposing views understand each other.

Yonath's response was characteristically thoughtful: she noted that AI already manages aspects of her research institute in practical ways, even if not formally. Matsuo emphasized that AI can manage organizations well "if given the proper objective"—but setting those objectives remains fundamentally human work.

The dialogue ended not with answers, but with a framework for thinking about these challenges. The experts agreed that we need systemic changes to accommodate AI's growing capabilities, international cooperation on safety research, and continued public discourse about the future we want to create.

The conversation in Tokyo made one thing clear: we're all participants in this transformation, whether we realize it or not. The question isn't whether AI will change everything—it's whether we'll be thoughtful about how we let it change us.
