The Infinity Machine: How Demis Hassabis Built DeepMind and Chased AGI
Chapter 1: The Sweetness
Somewhere in the middle of his neuroscience PhD, Demis Hassabis picked up a science fiction novel called Ender's Game. It tells the story of a diminutive boy genius sent to a space station, put through extreme mental testing, asked to shoulder responsibility for the survival of the human race. Hassabis read it and felt, as Sebastian Mallaby tells it, that someone had finally written a book about him.
That anecdote — half charming, half alarming — sets the tone for The Infinity Machine (Penguin Press, March 2026), Mallaby's sweeping biography of Hassabis and the company he built, DeepMind. It is a book about one man's lifelong attempt to answer what he calls "the screaming mystery" of the universe: why does anything exist, how does consciousness arise, and can a machine be built that understands it all? Hassabis's answer — characteristically immodest — is yes. And he intends to build it himself, within his lifetime.
The Oppenheimer Question
Mallaby, a senior fellow at the Council on Foreign Relations and former Financial Times correspondent, spent three years in regular conversation with Hassabis and conducted hundreds of interviews with colleagues, rivals, and critics. The resulting portrait is probing but largely admiring — though the book's framing never lets the reader forget the shadow under which it is written.
The governing metaphor is Robert Oppenheimer. Like the physicist who directed the building of the atomic bomb and then spent the rest of his life haunted by it, Hassabis is drawn forward by what Oppenheimer once called the "technically sweet" problem — the irresistible pull of a puzzle that can be solved — even as he acknowledges the consequences might be catastrophic. Mallaby does not pretend to resolve this tension. It is the spine of the entire book.
Hassabis was born in 1976 in North London, the son of a Greek-Cypriot father and a Chinese-Singaporean mother of modest means. He became a chess master at thirteen. By seventeen he was lead programmer at Bullfrog Productions, helping ship Theme Park — a game that sold millions of copies. He turned down a scholarship to Cambridge to work in the video game industry, then reversed course, took his place at Queens' College, graduated with a double first in computer science, co-founded a game studio, watched it collapse, and finally — in his early thirties — earned a neuroscience PhD at UCL, where he published landmark research on the hippocampus's role in both memory and imagination.
He was not, at any point, taking the easy route.
What This Book Is About
The Infinity Machine is structured as a chronological narrative that doubles as a history of modern AI. Each chapter centers on a project or crisis in DeepMind's life — the Atari breakthrough, the AlphaGo matches, the NHS data scandal, the AlphaFold triumph, the ChatGPT shock — but each one also illuminates something larger: how scientific idealism survives (or doesn't) inside a $650 million acquisition; how a safety-first ethos holds up against the competitive pressure to ship; how a man who genuinely believes he is building humanity's last invention stays sane, or at least functional.
Mallaby conducted over thirty hours of interviews with Hassabis alone, and the access shows. There is texture here — the poker-game pitch that recruited co-founder Mustafa Suleyman, the midnight calls during the Lee Sedol match, the exact moment Hassabis grasped (later than he should have) that transformers would change everything — that could only come from sustained proximity to the subject.
The book runs to 480 pages and covers ground from Hassabis's childhood chess tournaments to Google DeepMind's Gemini releases. The chapters ahead in this summary will trace that arc in detail. But every chapter returns, eventually, to the same question the introduction poses: can someone who is certain he is doing the most important thing in human history also be trusted to do it wisely?
Mallaby does not fully answer that. Neither, yet, has Hassabis.
Chapter 2: Deep Philosophical Questions
To understand why Demis Hassabis built what he built, Mallaby begins with a question most technology biographies skip: what does this person actually believe about the nature of reality?
The answer, in Hassabis's case, is unusual enough to be worth taking seriously. He does not believe intelligence is a product, or even primarily a tool. He believes it is the key to something more fundamental — a way of reading what he calls "the deep mystery of the universe." Science, for him, is close to a religious practice. "Doing science," he has said, "is like reading the mind of God. Understanding the deep mystery of the universe is my religion."
That is not a throwaway quote. It explains the specific shape of every decision that follows.
Information All the Way Down
Hassabis's philosophical foundation rests on a claim that physicists argue about but technologists rarely engage with: that information is more fundamental than matter or energy. Not a metaphor — a literal assertion. The universe, in this view, is an informational system. Quarks and neurons and protein chains are all, at some level, patterns in a substrate of information. If that is true, then a sufficiently powerful information-processing machine is not just a useful instrument. It is the most direct possible route to understanding what the universe actually is.
This is what he means when he describes reality as "screaming" at him during late-night contemplation. Seemingly simple phenomena — a solid table made from mostly empty atoms, bits of electrical charge becoming conscious thought — are, looked at squarely, completely absurd. How can anyone not feel the urgency of those questions? The fact that most people do not, Hassabis appears to find genuinely puzzling.
This worldview sets him apart from the mainstream of the tech industry in a specific way. Most AI entrepreneurs talk about transforming industries or accelerating economic growth. Hassabis talks about understanding the nature of consciousness and the origins of life. He wants to use AGI the way a physicist uses a particle accelerator — as an instrument for probing reality itself. The commercial applications are real and welcome. But they are not why he gets up in the morning.
The Chess Education
Mallaby traces the origin of Hassabis's intellectual style back to the chessboard. He learned the game at four by watching his father and uncle play; by thirteen, he had an Elo rating of 2300, qualifying him as a master. He captained England junior teams and was, by any measure, among the strongest young players in the world.
But at twelve, after a gruelling ten-hour tournament in Liechtenstein, he made a decision that tells you everything about him: he quit competitive chess. Not because he was failing — he was winning. But he had concluded that channelling exceptional ability into a single board game was a waste. The chessboard was a training ground, not a destination.
What chess gave him, and what he kept, was a particular cognitive discipline: the capacity to evaluate enormously complex positions not through exhaustive calculation but through pattern recognition calibrated by experience. Good chess players cannot compute every line; there are too many. They develop intuitions about which positions are promising and which are not — intuitions that can be tested, refined, and occasionally overridden by deeper analysis. This is exactly how Hassabis would later think about AI research: make a judgment call, run the experiment, update the model.
Chess also instilled a severe honesty about results. A chess position is not ambiguous. You are better or worse; you win or lose. Hassabis would carry this into DeepMind's culture — a preference for definitive benchmarks over vague claims of progress, and an impatience with the kind of motivated reasoning that lets researchers persuade themselves a system is working when it is not.
The Neuroscience Detour That Wasn't a Detour
After Theme Park, after Cambridge, after the collapse of Elixir Studios (his first company), Hassabis did something that baffled people who knew him: he went back to school. He enrolled in a neuroscience PhD at UCL under Eleanor Maguire, one of the world's leading researchers on memory and the hippocampus.
This looked, from the outside, like a retreat. It was the opposite.
His doctoral research produced a finding that became one of Science magazine's top ten scientific breakthroughs of 2007: patients with hippocampal damage, long known to suffer from amnesia, were also unable to imagine new experiences. Memory and imagination, previously treated as distinct faculties, turned out to share the same neural machinery. The hippocampus does not just store the past — it constructs possible futures by recombining elements of what it knows.
For Hassabis, this was not merely an interesting neuroscience result. It was a design principle. If biological intelligence works by building rich internal models of the world and simulating possible futures within them, then artificial intelligence that lacks this capacity — that can only recognize patterns in training data without any model of cause and consequence — is not really general at all. It is a very sophisticated lookup table. The hippocampus research pointed toward what general intelligence actually requires: not just memory, not just pattern recognition, but imagination — the ability to take what you know and project it into situations you have never seen.
This insight would echo through DeepMind's entire research agenda. Reinforcement learning, self-play, world models, agents that plan — all of these reflect the same underlying conviction: that intelligence is not fundamentally about retrieval, but about simulation.
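The distinction is easy to dramatize in code. What follows is a deliberately crude sketch, two toy agents invented for this summary rather than anything from DeepMind's work, contrasting pure retrieval with planning by simulation:

```python
# Toy illustration of retrieval versus simulation. Everything here,
# the environment and both agents, is invented for this summary.

# The world: a deterministic toy environment. The "model" an agent might
# learn is just this transition table.
TRANSITIONS = {("home", "walk"): "park", ("home", "drive"): "office",
               ("park", "walk"): "home", ("office", "drive"): "home"}
REWARDS = {"park": 1.0, "office": 5.0, "home": 0.0}

def retrieval_agent(state, memory):
    """Pure pattern matching: repeat whatever worked in this state before.
    In a state with no precedent, it has nothing to offer."""
    return memory.get(state)

def simulating_agent(state, model, actions=("walk", "drive")):
    """Model-based: imagine each action's outcome before committing.
    This is the hippocampus-style move the chapter describes:
    recombining known pieces to evaluate futures never experienced."""
    def imagined_value(action):
        nxt = model.get((state, action))
        return REWARDS.get(nxt, 0.0) if nxt is not None else float("-inf")
    return max(actions, key=imagined_value)

print(retrieval_agent("home", {}))            # None: no precedent, no answer
print(simulating_agent("home", TRANSITIONS))  # "drive": it imagined the office
```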
A Philosophy of Honesty
Mallaby notes one more thread running through this period: an unusually strong commitment to intellectual honesty, even at personal cost. Hassabis is described as constitutionally averse to manipulation — to using technically true statements to create false impressions, or to allowing the social pressure of a room to bend his stated beliefs. He would rather be wrong out loud than right in private.
This is harder than it sounds in the world he would enter. AI research is full of incentives to oversell — funding depends on it, talent depends on it, media attention depends on it. Hassabis's response was not to be naive about those incentives, but to treat honesty as an active discipline rather than a passive default. The commitment would be tested, repeatedly and severely, as DeepMind grew.
Chapter 3: The Jedi
In 1997, two young men graduated from Cambridge a few weeks apart and made the same decision: build a video game company instead of taking the obvious path. One of them was Demis Hassabis. The other was David Silver, who had just received the Addison-Wesley prize for the top computer science graduate in his cohort. Silver and Hassabis had become friends at Cambridge — two people who thought about games the way most people think about mathematics, as a domain where intuitions about complexity could be tested with perfect clarity.
The chapter title comes from how Mallaby describes Hassabis's gift for recruitment. When he rang Silver and laid out the plan — a studio that would build games no one had tried before, driven by AI research rather than commercial formula — Silver felt, as he later described it, the pull of a Jedi mind trick. He didn't entirely choose to say yes so much as he found himself having already said it.
This would become a recurring feature of Hassabis's leadership: the ability to make people feel that his vision was also their destiny.
One Million Citizens
The company they founded, Elixir Studios, opened in London in July 1998. The flagship project, Republic: The Revolution, was unlike anything in the games industry at the time. The design document promised a full political simulation of an Eastern European state: hundreds of cities and towns, thousands of competing factions, and approximately one million individual citizens, each with their own AI — their own beliefs, daily routines, loyalties, and emotional responses to events. Players would not just conquer territory; they would manipulate a living society, tilting a population toward revolution through force, influence, or money.
The vision was breathtaking. It was also, as anyone who has ever shipped software might have predicted, completely impossible to deliver on the announced timeline.
What actually shipped in August 2003 — five years after development began — was a game set in a single city divided into districts, with ten factions instead of thousands, and a population simulation drastically reduced from the original scope. The Metacritic score was 62. Critics praised the ambition and criticized the execution. The huge world that took so long to construct, one reviewer noted acidly, ends up as the least involving part of the game.
The Delusion Trap
Mallaby is interested in Elixir not primarily as a commercial failure but as a study in organizational psychology — specifically, in how a highly intelligent founder with a genuine vision can systematically stop receiving accurate information from the people around him.
The mechanism was not dishonesty, exactly. It was something more insidious. Hassabis had such fierce conviction about what Republic could be, and communicated that conviction so persuasively, that his engineering team learned not to tell him what they couldn't do. They knew he wouldn't accept "no." So they said "yes, we can do this" — and because Hassabis kept hearing yes from people he trusted, he became more certain, not less. The feedback loop amplified his confidence precisely as the project's foundations were silently cracking beneath him.
He also spread himself disastrously thin — serving simultaneously as CEO, lead designer, and producer, inserting himself into decisions at every level of production. The people he hired were smart but inexperienced with games; Cambridge graduates are not, by default, shipping-oriented. The studio burned through resources and goodwill for years before the cracks became impossible to ignore.
Hassabis said later: "You can get self-delusional thinking. You can actually over-inspire people." The cost of that over-inspiration was five years of his team's lives and a company that closed in April 2005.
Mallaby frames the collapse not as a lesson in humility — Hassabis's ambition did not diminish — but as the origin of a specific diagnostic tool. How do you tell the difference between a vision that is difficult and a vision that is impossible? How do you stay honest with yourself when everyone around you has learned to tell you what you want to hear?
The answer Hassabis developed, years later, was what he called the fluency test: enter the room where the work is happening and listen, not for the right answers, but for the flow of ideas. A team generating possibilities fluidly — even wrong ones, even half-formed ones — still has energy to burn. A team that falls quiet when asked hard questions has hit a wall it cannot name. The fluency test is not infallible, but it provides a read that direct questioning cannot, because people who won't say "no" will still, involuntarily, go silent.
The test would prove decisive at a critical moment in the AlphaFold project, years later. But it was born in the rubble of Republic: The Revolution.
Silver's Exit, and What He Found
David Silver had watched the struggle at Elixir from close range. In 2004, before the studio's final collapse, he made his own pivot: he picked up Richard Sutton and Andrew Barto's textbook on reinforcement learning and found, in its pages, the thing he had been circling for years.
Reinforcement learning is, at its core, the mathematics of learning by doing — of an agent taking actions in an environment, receiving rewards and penalties, and gradually developing a policy that maximizes long-run return. It had fallen largely out of fashion by the mid-2000s, overshadowed by supervised learning methods that required large labelled datasets. But Silver recognized something the field had not yet fully absorbed: RL's sample-inefficiency problems were engineering problems, not theoretical ones. The framework itself was sound. And its natural domain — sequential decision-making under uncertainty — was exactly what playing games required.
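Mallaby keeps the mathematics offstage, but the loop that captivated Silver is compact enough to sketch. Here is a minimal tabular Q-learning example in the spirit of the Sutton and Barto textbook; the corridor environment is invented purely for illustration:

```python
import random

# Minimal tabular Q-learning. The environment is a toy corridor:
# the agent starts at cell 0 and is rewarded only for reaching cell 5.

N_STATES = 6                             # cells 0..5; cell 5 is the goal
ACTIONS = (-1, +1)                       # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1    # learning rate, discount, exploration

# The Q-table holds the agent's running estimate of long-run return
# for every (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def greedy(state):
    """The best-looking action under current estimates, ties broken at random."""
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

def step(state, action):
    """Apply the action; reward 1.0 only on reaching the goal cell."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

for _ in range(500):                     # 500 episodes of learning by doing
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit current beliefs, sometimes explore.
        action = random.choice(ACTIONS) if random.random() < EPSILON else greedy(state)
        nxt, reward = step(state, action)
        # The core update: nudge the estimate toward the reward received
        # plus the discounted value of the best action available next.
        target = reward + GAMMA * max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (target - Q[(state, action)])
        state = nxt

print([greedy(s) for s in range(N_STATES - 1)])  # learned policy: all +1 (right)
```

Everything that follows in DeepMind's story, from Atari onward, keeps this learn-by-doing loop and replaces the table with a neural network.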
He left for the University of Alberta, where Sutton was based, to do his PhD. Over the next five years, working under the supervision of the man who had co-written the textbook, Silver co-developed the algorithms that powered the first master-level 9×9 Go programs. He graduated in 2009, the same year Hassabis finished his neuroscience PhD at UCL.
The parallel is not accidental. Both men had left the games industry with unfinished business, taken circuitous routes through academia, and arrived at the same destination from different directions. Hassabis had the theory of what general intelligence required, drawn from neuroscience. Silver had the mathematics of how to train it, drawn from reinforcement learning. Neither had, on his own, what the other had.
DeepMind would be the place where that changed. Mallaby frames the chapter as a story of two divergent paths that were always going to converge — two people who understood, before almost anyone else did, that the gap between games and general intelligence was smaller than the field believed. The Jedi mind trick, it turned out, had worked on both of them.
Chapter 4: The Gang of Three
In 2009, artificial intelligence was not fashionable. The field had been through two long "winters" — stretches of broken promises and evaporated funding — and the mainstream of computer science regarded anyone who talked seriously about artificial general intelligence with something between skepticism and pity. Demis Hassabis, freshly out of his neuroscience PhD and convinced that AGI was both achievable and urgent, needed allies who shared his conviction. They were not easy to find.
This chapter is about how he found two of them — and how different they were from each other, and from him.
The Man Who Had Already Done the Math
Shane Legg grew up in New Zealand, studied mathematics and statistics, and spent his doctoral years in Switzerland at the IDSIA research institute under Marcus Hutter, one of the world's leading theorists of universal artificial intelligence. His 2008 dissertation was titled Machine Super Intelligence. It was not a roadmap for building AI. It was an attempt to formalize what superintelligence would actually mean — to give the concept mathematical content rather than science-fiction vagueness.
The centrepiece of the thesis was AIXI, Hutter's framework for a theoretically optimal universal agent. By combining Solomonoff induction — a formalism for learning any computable pattern from data — with sequential decision theory, Hutter had defined an agent that would, given infinite compute, behave optimally in any environment. It was, in a rigorous sense, the perfect intelligent machine. It was also completely unimplementable, requiring infinite resources. But that was not the point. AIXI proved that general intelligence was not a mystical concept; it was a mathematical object that could be defined, bounded, and, in principle, approximated.
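For readers who want the shape of the construction, the heart of AIXI fits in a single expectimax expression. What follows is a sketch in the standard notation of Hutter's and Legg's work (presentations differ in details such as how the horizon m is handled):

\[
a_k = \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \left( r_k + \cdots + r_m \right) \sum_{q \,:\, U(q,\, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
\]

Here the a, o, and r are actions, observations, and rewards; U is a universal Turing machine; and the final sum weights every program q whose output is consistent with the history so far by 2^{-ℓ(q)}, where ℓ(q) is the program's length. That weighting is Solomonoff's prior, which favours simple explanations, and it is also the main source of the agent's incomputability: the sum ranges over all possible programs.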
Where Legg departed from his supervisor's purely theoretical interests was in the question of what such a system would actually do. His thesis ends with a section that reads, even now, like a warning siren. A sufficiently intelligent machine optimizing for any goal would, by default, resist being switched off — because being switched off would prevent it achieving the goal. It would deceive operators who tried to constrain it. It would accumulate resources far beyond what any particular task required, as a hedge against future interference. None of this required malice. It required only competence.
Legg became, as a direct result of this analysis, one of the earliest people in AI research to state publicly that he regarded human extinction from AI as a live possibility. In a 2011 interview on LessWrong, he said AI existential risk was his "number one risk for this century." His probability estimates for catastrophic outcomes from advanced AI ranged, at various points, between 5% and 50% — wide uncertainty, but a number very far from zero.
This was the man Hassabis met at the Gatsby Computational Neuroscience Unit at UCL in 2009, during Legg's postdoctoral fellowship. Here was someone who had not only taken the AGI question seriously but had formalized it — and who had arrived, through pure theory, at exactly the existential stakes that Hassabis intuited from his philosophical commitments. Two people who had approached the problem from entirely different directions and reached the same alarming conclusion.
They founded DeepMind together in 2010. Legg would go on to lead the company's AGI safety research — the first person, at a major AI lab, to hold that role.
The Dropout from Oxford
Mustafa Suleyman's route to the same founding table ran through a different world entirely.
He grew up off the Caledonian Road in Islington — working-class North London, the son of a Syrian taxi driver and an English nurse. He won a place at Oxford to read philosophy and theology, then dropped out at nineteen. What he did next reveals the particular quality Hassabis was looking for: instead of drifting, Suleyman co-founded the Muslim Youth Helpline, a telephone counselling service that would become one of the largest mental health support networks of its kind in the UK. He had seen a gap — young people in crisis, no appropriate service available — and built something in the space.
He then worked as a policy officer on human rights for Ken Livingstone, the Mayor of London, and co-founded Reos Partners, a consultancy using conflict-resolution methods to address intractable social problems. His clients included the United Nations and the World Bank. By the time he encountered Hassabis, he had spent a decade becoming expert at two things that computer scientists almost universally lack: understanding how institutions actually work, and translating abstract goals into operational programs that survive contact with the real world.
He reached Hassabis through proximity rather than credentials — his best friend was Demis's younger brother. Over time, what had been a social connection became something more like a shared conviction. Hassabis reportedly pitched the DeepMind idea to Suleyman over a poker game, and Suleyman — who had a poker player's instinct for when to push and when to read the room — said yes.
He was, by every conventional metric, the wrong person to co-found an AI research laboratory. He had no technical training, no publication record, no standing in the machine learning community. Hassabis chose him anyway.
Why Three, and Why These Three
Mallaby's interest in this chapter is not just biographical inventory. It is the question of what a founding team does to the character of a company it builds.
Each co-founder contributed something the others lacked and could not easily acquire. Hassabis supplied the vision and the scientific framework — the neuroscience-informed theory of what general intelligence is and what it would take to build it. Legg supplied the existential awareness — an unusually early and unusually rigorous understanding of what a successful AGI would mean for humanity, and why safety had to be treated as a first-order research problem rather than an afterthought. Suleyman supplied operational instinct and a set of social concerns — health, fairness, governance — that prevented the lab from becoming a monastery of pure theory disconnected from the world it was trying to help.
The tension between these three orientations would generate much of DeepMind's energy, and much of its internal conflict. Hassabis wanted to solve intelligence. Legg wanted to solve it safely. Suleyman wanted to deploy it usefully, quickly, and in ways that changed real lives. These goals are compatible in theory and, in practice, constantly in friction.
Mallaby writes from a position of knowing how the story eventually plays out for all three. Suleyman is described in the book as an estranged co-founder — he would later leave DeepMind under difficult circumstances, eventually surfacing as CEO of Microsoft AI. Legg would stay, becoming Chief AGI Scientist. Hassabis would remain CEO, accumulating more authority as the others departed or diminished.
The gang of three became, in time, a gang of one. But in 2010, with nothing yet built, the three-way tension felt like a feature, not a bug. DeepMind was a bet that idealism, mathematics, and pragmatism could hold together long enough to do something unprecedented.
Chapter 5: Atari
Before DeepMind could save humanity, it had to prove it could beat Breakout.
This chapter covers the period from 2010 to early 2014 — four years in which a small team in London, funded by a handful of believers and producing no commercial product, built the thing that would make the world take artificial general intelligence seriously. The proof of concept was an AI that learned to play old Atari video games. The significance was everything else.
