
CZ's 'Freedom of Money': from a Jiangsu Boy to a Crypto Empire - Chapter-by-Chapter Summary

· 39 min read
Tian Pan
Software Engineer

On April 8, 2026, Changpeng Zhao’s (CZ) autobiography, Freedom of Money: A Memoir on Luck, Resilience, and Protecting Users, officially hit the shelves. Spanning 364 pages and roughly 110,000 words, it immediately claimed the number one spot for new releases in Amazon's cryptocurrency category. Most of the book's first draft was written in 2024 while he was serving time in a U.S. federal prison—using public computers in 15-minute increments. All royalties are being donated to charity.

This is not a traditional "how to succeed in business" book. It reads more like the reflections of a man who spent seven years in the eye of a hurricane and finally sat down to tell the whole story at his own pace. Below is a chapter-by-chapter breakdown of the 25 chapters and the appendix.

Part I: The Origins (Chapters 1–4)

The book intentionally opens with CZ's childhood. He doesn't start with the glory of Binance, nor does he lead with the drama of his imprisonment. Instead, he begins in a school dorm without running water. These four chapters lay down the psychological bedrock for every major decision he makes later in life.

Chapter 1: A Boy from Jiangsu

CZ was born in 1977 into a family of intellectuals in Lianyungang, Jiangsu Province. His father, Shengkai Zhao, was a geophysics professor at the University of Science and Technology of China (USTC)—though the title of "professor" carried heavy historical baggage. During political movements, Shengkai was branded a "pro-capitalist intellectual" and temporarily exiled to the countryside. CZ's mother also taught at a university. The family lived in campus housing with dirt floors and no running water.

CZ recalls this childhood as "carefree." Growing up on the campuses of elementary schools, middle schools, and USTC, the school grounds were his playground. When he was 10, the family moved to Hefei, where CZ began interacting with older USTC students. They discussed philosophy, played chess, and debated—an intellectual atmosphere that left a profound impact on him. Though socially awkward, his father was his first technical mentor. A colleague once described Shengkai Zhao as "brilliant but far too modest; he never commercialized his own inventions."

The core takeaway from this chapter is that poverty does not equal misery. CZ purposefully places this era at the beginning of the book to set the tone and to show that he is no stranger to "scarcity." This foundational experience colored all of his later perspectives on money and freedom. Tragically, Shengkai Zhao later died of leukemia. In the book, CZ reflects on how his father spent all his time in labs and at computers, never attending CZ's volleyball games: "I was the team captain, playing twice a week, and my parents never came to watch." He expresses a deep-seated fear of repeating that exact pattern with his own children.

Chapter 2: The Vancouver Years

On August 6, 1989, 12-year-old CZ and his mother arrived in Vancouver to reunite with his father, who had gone to Canada five years earlier to pursue a Ph.D. in geophysics at the University of British Columbia (UBC). The family lived in graduate student housing, and their financial situation plummeted. Unable to continue teaching due to the language barrier, his mother took piece-rate work at a garment factory, a grueling job that permanently damaged her health. His father commuted in a beat-up Datsun.

This stood in stark contrast to many of CZ's classmates, who were wealthy immigrants from Hong Kong and Taiwan, wearing designer clothes and driving sports cars. CZ would hitch rides to his volleyball games (where he was, again, the team captain) twice a week in a friend’s BMW, only to return to his own modest apartment.

A teenage CZ worked overnight shifts at a Chevron gas station and spent two full years flipping burgers at McDonald's. He has never hidden this part of his life; even as a billionaire, when asked about his first job, he proudly answers, "McDonald's." This period directly forged the "extreme frugality" culture he later brought to Binance, which famously didn't even have a physical office in its early days.

However, there was one life-altering "luxury" during this time: his father spent around CAD 7,000 on an IBM-compatible 286 PC. It was an astronomical sum back then. His father used it for research and also used it to teach CZ how to code. That computer became the launchpad for CZ’s life in tech.

Chapter 3: Wall Street and Tokyo

After finishing high school in 1995, CZ traveled 3,000 miles to Montreal to attend McGill University. He initially studied biology but quickly realized, "University biology was just back to animals—I wasn't interested." He pivoted to computer science. His college life wasn't particularly glamorous; he spent his free time between ice rinks, Vietnamese pho restaurants, and the Mac labs.

The real turning point wasn't a class, but a book. In his junior year, CZ read Robert Kiyosaki's Rich Dad Poor Dad, which completely shattered the "study hard, get a good job" mentality his parents had instilled in him. "After reading it, I started thinking maybe I should own my own business—build an operation that meant something."

In his senior year, he co-authored an AI paper with Professor Jeremy Cooperstock (who later recalled CZ as "smart," though he never imagined the student would become a billionaire). During a summer internship in 2000, he developed order-matching systems for the Tokyo Stock Exchange. He landed a full-time offer and never went back to finish his degree. Contrary to media reports stating he "graduated from McGill," CZ frankly admits in the book that he dropped out.
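The order-matching systems mentioned here all revolve around one classic idea: price-time priority, where crossing buy and sell orders trade until the spread reopens. As a toy illustration only (not the actual Tokyo Stock Exchange or Binance engine, whose details the book doesn't give), the core loop can be sketched in a few lines:

```python
# Minimal price-time-priority matching sketch. Orders are (price, quantity)
# tuples, bids sorted best-first (high to low), asks best-first (low to high).
# This is an illustrative toy, not any real exchange's engine.

def match(bids, asks):
    """Cross the book: trade while the best bid meets the best ask."""
    fills = []
    while bids and asks and bids[0][0] >= asks[0][0]:
        bid_price, bid_qty = bids[0]
        ask_price, ask_qty = asks[0]
        qty = min(bid_qty, ask_qty)
        fills.append((ask_price, qty))  # execute at the resting ask price
        # shrink or remove whichever side was (partially) filled
        if bid_qty == qty:
            bids.pop(0)
        else:
            bids[0] = (bid_price, bid_qty - qty)
        if ask_qty == qty:
            asks.pop(0)
        else:
            asks[0] = (ask_price, ask_qty - qty)
    return fills

bids = [(101, 5), (100, 3)]   # best bid 101
asks = [(100, 4), (102, 10)]  # best ask 100 -> the book is crossed
print(match(bids, asks))      # one fill of 4 units at 100, then spread reopens
```

A production engine adds order IDs, time priority within a price level, and much faster data structures, but the matching invariant is the same.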

He then joined Bloomberg Tradebook in New York, developing futures trading software. He was relocated three times in two years—New Jersey, London, Tokyo—managing teams across different regions. By 25, he was making $390,000 a year and managing 60 people. But he grew restless. In 2005, he quit, moved to Shanghai, and co-founded Fusion Systems with four expat partners—a SaaS company building high-frequency trading systems for investment banks like Goldman Sachs and Credit Suisse.

His time in Shanghai taught him two things: first, to "think like a salesman," and second, that business rules in China were "deliberately ambiguous," leaving massive discretionary power to government enforcement. He particularly disliked the "banquet drinking culture" where business relationships were forged over shots of Baijiu. "That stuff was alien to me, so I never really liked it."

Chapter 4: The Bitcoin Epiphany

A poker game in 2013 changed everything. A friend from Sequoia Capital and another investor, Daying Cao, both mentioned Bitcoin to CZ. Having heard the term three times in quick succession, he finally decided to look into it. Once he did, he made a decision that seemed insane at the time: he sold his apartment in Shanghai and went all-in, buying about $1 million worth of Bitcoin.

Within two months, Bitcoin crashed by 70%, leaving him with a paper loss of over $700,000. Meanwhile, Shanghai real estate prices continued to skyrocket. All his friends mocked his decision. But CZ held on.

He first rooted himself in the industry by joining Blockchain.info (now Blockchain.com) as Head of Development. That same year, he met a 19-year-old Vitalik Buterin. Their friendship deepened in a rather unusual way—Vitalik later stayed at CZ’s house, sleeping in a bunk bed in CZ's son's room, and explaining the concept of smart contracts to the kid.

When Ethereum launched in 2015, CZ had the chance to invest but hesitated: "I was skeptical about whether implementing complex logic on a blockchain using a Turing-complete language was feasible." Ethereum eventually surged thousands of times over. But their friendship endures to this day—"Just this morning, Vitalik and I were discussing biotech investment opportunities," he notes.

This chapter also includes a near-miss that could have altered crypto history: in early 2014, CZ was invited to be the CEO of Mt. Gox’s China division, complete with a 10% equity stake and financial backing from Susquehanna. He was seriously considering it and was about to sign—but then Mt. Gox collapsed on February 7, 2014. CZ lost 100 Bitcoins stored on the platform in the fallout (about $50,000 then; roughly $7 million today), but he never even tried to recover them.

Part II: Building the Empire (Chapters 5–8)

All the foundational elements laid out in the first four chapters—his father’s technical DNA, the immigrant hunger, Wall Street’s systems thinking, his dissatisfaction with traditional finance—converge here. It took just 180 days to go from a hotpot dinner in Chengdu to becoming the world's largest crypto exchange.

Chapter 5: The Birth of Binance

The story starts at a hotpot dinner in Chengdu on June 14, 2017. At the table, CZ met old friends, including Roger Ver, and the conversation turned to the red-hot ICO (Initial Coin Offering) market. After dinner, he went back to his team and announced, "We are doing an ICO, too." Within three days, Binance’s whitepaper was written—from learning the concept of an ICO to the final draft.

The project almost launched under a different name. The original Chinese name CZ picked was vetoed by He Yi in a single sentence: "Your name sounds like a grocery store." She suggested "Binance" (a portmanteau of Binary and Finance). CZ admits in the book that asking for her opinion on the name was a "little trick"—his real goal was to recruit her. She verbally agreed to join the night before launch, on July 13, 2017.

On July 14, Binance officially launched. The two-week ICO ran in five rounds, each selling out in seconds, and raised a total of $15 million.

CZ had spent four years accumulating experience in the crypto industry: Head of Development at Blockchain.info, founding Bijie Tech (which provided trading platforms for 30 Chinese exchanges), and serving as CTO of OKCoin (an experience that would turn into one of the industry's biggest feuds, detailed in Chapter 22). Binance’s founding team was tiny—just a few friends scraping together. No fancy offices, no complex corporate structures.

CZ highlights three core strategies that made Binance stand out: supporting ERC-20 tokens (when other exchanges only had Bitcoin trading pairs), a commitment to customer service with a one-day response time (competitors took months; Binance later cut this to five minutes), and proactively compensating users during the "September 4th" regulatory crackdown in China.

Chapter 6: Blitzscaling

Binance’s growth is legendary in the crypto space. Within six months of launching, powered by an engine capable of processing 1.4 million transactions per second, it attracted 6 million users and became the largest crypto exchange globally. From September to December 2017, Bitcoin shot from $3,000 to $20,000, and Binance caught the perfect tailwind. At the peak of the 2021 bull run, daily trading volume exceeded $76 billion. Forbes estimated CZ’s net worth briefly hit $96 billion.

But in the middle of this legend was a near-death experience—China’s blanket ban on cryptocurrencies on September 4, 2017.

The night before the ban dropped, CZ received a tip that a "massive crackdown was coming." He called an emergency meeting and decided to fly to Tokyo with He Yi that very night. He Yi suggested he take out his SIM card and turn off his phone—an idea inspired by a spy movie, though she later admitted she had no idea if it actually prevented tracking.

At the time, BNB’s price was already 6x its ICO price, so nobody wanted a refund. However, four other ICO projects facilitated on Binance had fallen below their issue price, and those project teams didn't have the reserves to make users whole. The shortfall was roughly $6 million. During a phone call while CZ was on a moving train, the team reached a consensus in just 10 minutes: **Binance would use its own cash reserves to make those users whole on behalf of the project teams.** That $6 million represented 40% of the company’s total assets at the time.

CZ frankly admits the role of "luck" in this chapter, but stresses that luck favors the prepared. It was his technical team's 20 years of accumulated expertise, hyper-sensitivity to user needs, and a "fast-decision" management style that allowed Binance to seize the window. He notes that many critical product decisions were made in hours, not weeks. Going from zero to number one globally in 180 days—this speed has been cited in countless business school case studies since.

Chapter 7: The Headquarterless Company

After evacuating Shanghai, Binance began its unique "headless" operational model. CZ went to Tokyo, then Taipei, Singapore, Malta—wherever there was a need, wherever regulations were friendly. Eventually, employees were spread across dozens of countries, with over 10,000 people working fully remotely. CZ himself was a nomad for years, with no fixed home.

In July 2018, CZ showed up to an industry conference in Taipei wearing shorts and flip-flops to meet with Taiwanese legislator Jason Hsu. What was supposed to be a closed-door meeting was spontaneously turned into a livestream by the two of them. Hsu later remarked, "He’s a straight-shooting, no-BS entrepreneur." That image—shorts, flip-flops, a lawmaker, a livestream—perfectly encapsulated Binance’s early culture: anti-traditional, informal, and fiercely pragmatic.

CZ called this model "decentralized management," perfectly aligning with the philosophy of crypto itself. His core belief regarding regulation was: "Rather than trying to change rules to circumvent them, it's better to find friendlier jurisdictions." But he also acknowledges the massive compliance nightmares this created. Having no clear legal jurisdiction meant that regulatory agencies from every country could come knocking. The UK's FCA banned Binance’s regulated activities in 2021; France, Japan, Germany, and Italy followed suit with their own actions. This flexibility was Binance’s greatest strength, and eventually, its Achilles' heel.

Chapter 8: The Ecosystem Empire

Binance was never just an exchange. This chapter chronicles the build-out of the ecosystem: BNB, Binance Smart Chain (BSC), Binance Labs, Trust Wallet, and Binance Academy. CZ’s core strategy was to "build moats outside the exchange"—so if the exchange ever stopped being profitable, the ecosystem would survive.

The chapter also reveals a story about "resisting temptation." A project team once tried to personally bribe CZ with a $20 million "listing fee" to get their coin on Binance. CZ refused and blacklisted them. This incident led him to draft the "Binance Listing Guidelines," mandating that all applications go through the official website and creating an airtight "physical isolation" between the listing team and the project founders to prevent under-the-table dealing.

One key investment decision highlighted was a $3 million bet on Terra/LUNA in early 2018. At its peak, this investment skyrocketed to $1.6 billion in value. But Terra’s collapse cost Binance dearly. CZ outlines three reasons why he chose not to sell before the crash: to maintain market confidence, to prevent massive liquidations from causing a wider panic, and to ensure no one could accuse Binance of "front-running retail investors." "If Binance, as the largest holder, dumped first, the market panic would have been catastrophic, and retail users would have been hurt the most." Whether that decision was right is up for debate, but it underscores his oft-repeated principle of "protecting users."

Part III: The Storm (Chapters 9–12)

If the first two parts trace an upward arc—from poverty to a global empire—Part III is the steep plunge. The 2022 crypto winter, the collapse of FTX, and the DOJ investigation: these four chapters cover the darkest 18 months of CZ’s life. This is the emotional center of the book, exploring how a person maintains rationality while losing control.

Chapter 9: Eve of the Storm

2022 was the crypto industry’s "darkest hour," and CZ uses this chapter to reconstruct the timeline of the contagion from an insider's perspective.

In May, Terra’s algorithmic stablecoin UST de-pegged, wiping out $40 billion in market cap as LUNA crashed 99.99%. Binance’s $1.6 billion position effectively went to zero. On June 12, crypto lender Celsius froze withdrawals; on June 27, hedge fund Three Arrows Capital (3AC) defaulted on loans to Celsius and Voyager, entering liquidation. On July 13, Celsius officially filed for bankruptcy. The dominos were falling one by one, dragging Bitcoin from $47,000 down to $16,000.

CZ reveals for the first time the existence of a private Signal group called "Exchange Collaboration," created by former FTX employee Zane Tackett after the Terra crash. Members included CZ, SBF (Sam Bankman-Fried), Coinbase CEO Brian Armstrong, and other industry heavyweights. The intent was to coordinate crisis response, but it later drew the scrutiny of U.S. authorities—because to some, private coordination among competitors looks like collusion.

CZ admits this period made him realize that the "systemic risk" in crypto was far worse than he had imagined. A single project’s failure could cascade through lending chains and infect the entire industry. This is the context behind his later intervention in the FTX crisis—he wasn't trying to save SBF; he was trying to stop the next domino from falling.

Chapter 10: FTX and SBF

This is one of the most highly anticipated chapters. CZ details the evolution of his relationship with SBF—from investor to rival, and finally, to "firefighter."

CZ first met SBF at Binance Blockchain Week in January 2019. While Binance’s CFO was bullish on FTX, CZ and He Yi initially passed on investing. By November 2019, CZ changed his mind and agreed to swap BNB for FTT, eventually taking about a 20% stake in FTX.

But once the ink dried, SBF "immediately changed his tune." He poached Binance’s VIP account managers offering 5x salaries, stole the entire VIP client roster, publicly trashed Binance in D.C., and pitched FTX to U.S. regulators as the "compliant alternative."

In November 2022, facing a bank run, SBF sent his first message to CZ: "Has our relationship degraded to the point where we can't even talk?" On the ensuing call, SBF asked for billions in emergency funding. CZ writes that SBF’s tone "was like he was ordering a bologna sandwich"—completely disconnected from the magnitude of the cash he was requesting.

Alameda CEO Caroline Ellison publicly offered to buy Binance’s FTT at $22 a token—which CZ calls a "fatal mistake," because it signaled to the entire market exactly where her floor was. The market reacted brutally: FTT dropped to $15, then $10, and finally $5. Within 72 hours, $6 billion fled FTX.

"I didn't want FTX, and I didn't want to help SBF," CZ writes. "But to protect users and the broader industry, I had to step in." Binance signed a non-binding Letter of Intent to acquire FTX on November 8, but backed out just one day later after looking at the books. CZ clarifies that it was never a genuine acquisition attempt, but rather a play to stabilize market confidence and buy users time to withdraw their funds.

Chapter 11: The Department of Justice

In 2023, the U.S. Department of Justice (DOJ) launched a sweeping investigation into Binance. The core charge wasn't fraud; it was violations of the Bank Secrecy Act's anti-money laundering (AML) provisions. Court documents exposed embarrassing internal chats—one employee wrote, "Operating in the US, it's better to ask for forgiveness than permission." Another sarcastically summarized the culture: "Money laundering is too hard? Come to Binance, we have cake." Filings also showed Binance processing transactions linked to the Hydra darknet market and Hamas.

CZ frankly admits that Binance’s explosive growth had indeed "left unavoidable holes in our compliance systems."

This chapter explains why the U.S. government could crack down on a company that didn't physically operate in the U.S.—the answer: if you serve American users, you fall under American jurisdiction. Ultimately, Binance agreed to pay $7.2 billion in fines to the DOJ, Treasury, and CFTC ($4.3 billion from the company, $1.5 billion personally from CZ). CZ stepped down as CEO, handing the reins to Richard Teng. CZ describes this as "sacrificing a pawn to save the chariot"—falling on his sword to protect the company and its users.

Chapter 12: Awaiting Sentencing

The five months between his guilty plea on November 21, 2023, and his sentencing on April 30, 2024, were the most agonizing period CZ describes. Prosecutors pushed for a three-year sentence. The psychological weight of the unknown was worse than the prison itself—he didn't know how long he would serve, whether he’d be deported, or if his three young children would grow up while he was locked away.

His legal team gathered 161 letters of support from family, business partners, and industry peers, highlighting his character and dedication as a father. Ultimately, Judge Richard Jones sentenced him to four months—far below the three years the prosecution wanted. The judge noted CZ’s willingness to "take responsibility for his mistakes" rather than viewing his actions as intentionally malicious.

The writing here is highly introspective, reading almost like a diary. CZ describes his daily routine during the wait: reading voraciously—from history to philosophy to business biographies—searching for anchors in the stories of others. He began re-evaluating his life’s priorities: health, family, and freedom first; career and wealth second. For a man who had prioritized work above all else, this was a fundamental rewiring.

He shares a poignant detail: during the wait, he and He Yi discussed what they would do if he got three years. She told him she would bring the kids to visit every month while continuing to run Binance. "She said it so calmly," CZ writes, "as if it were just another operational issue to solve." Her composure brought him both comfort and deep guilt.

Part IV: Behind Bars and Rebirth (Chapters 13–17)

From the top of the global billionaires list to Inmate #88087-510—these five chapters document the most dramatic identity shift of CZ’s life. Yet, the narrative is surprisingly calm. There is no self-pity, no bitterness; only the reflections of a man forced to slow down. This is also where the book's first draft was born—on public computers, 15 minutes at a time.

Chapter 13: Life Behind Bars

On June 1, 2024, CZ surrendered to FCI Lompoc II, a low-security federal prison in California, as "Inmate 88087-510." Because he wasn't a U.S. citizen, he was ineligible for minimum-security camps (which offer more freedom). This is the most personal chapter in the book.

The prison operated on its own micro-economy: inmates were allowed to spend $180 every two weeks and worked on the adjacent farm—planting, tending cattle, raising horses. CZ lost 6 kilograms but found his physical fitness actually improved through daily exercise. He caught a cold three or four times, relying entirely on over-the-counter meds, as medical attention was only granted if you were severely ill.

The first draft of this very book was written here. The prison computers were rudimentary—CZ likened them to "electronic typewriters" with no copy-paste function, and he was restricted to 15-minute sessions. Under these conditions, he painstakingly typed out 110,000 words.

"The mental toll of uncertainty is far heavier than any physical discomfort," he writes. He notes his biggest epiphany in prison was realizing that "family and health are vastly more important than work"—no small realization for an entrepreneur who spent his life grinding 24/7 across the globe. Notably, even behind bars, CZ retained about 90% ownership of Binance, with a net worth around $60.6 billion, making him one of the wealthiest inmates in U.S. history.

Chapter 14: The ICE Ordeal

After serving about three months at Lompoc, CZ was transferred in late August 2024 to a halfway house in San Pedro, California. He gained more freedom—allowed out under supervision, even catching a movie.

But this phase wasn’t peaceful. As a non-U.S. citizen, he faced an added legal hurdle: U.S. Immigration and Customs Enforcement (ICE). He details the bureaucratic maze surrounding his immigration status—would he be immediately deported upon release? What would happen to his visa? These unresolved questions threw his release date into limbo and turned what should have been a "transition" into a highly stressful ordeal.

While at the halfway house, CZ did something fascinating: he volunteered to write cryptocurrency educational materials for his fellow inmates, largely pulling from Binance Academy's open-source curriculum. The founder of the world's biggest crypto exchange, imprisoned for AML violations, teaching other inmates what crypto is—it’s a scenario ripe for a short story.

Chapter 15: Freedom

CZ was released two days early on September 27, 2024, as his scheduled release date (Sept 29) fell on a weekend, a standard practice in the federal prison system.

The tone here is serene rather than euphoric. CZ notes that the first thing he did upon release wasn't holding a press conference; it was sitting quietly with his family. He emphasizes that "time and freedom" are the two most precious things in life—far more valuable than money.

He describes the "re-acclimation" process: when handed back his smartphone, he found himself in no rush to check notifications. Four months of a forced "digital detox" had permanently altered his relationship with information. He no longer felt the need to be plugged in 24/7. He consciously minimized screen time, reserving his hours for face-to-face interactions and the outdoors.

His first public appearance post-prison was on October 31, 2024, at Binance Blockchain Week in Dubai. As he walked onto the stage, the crowd gave him a standing ovation. It was a stark contrast to his walk into Lompoc as Inmate 88087-510 four months prior. But CZ writes that standing on that stage, he didn't feel triumphant; he just felt a strange sense of peace. "I didn't have anything left to prove."

Chapter 16: Binance Without Me

As part of his plea deal, CZ agreed to step away from all operational management of Binance for three years. This chapter explores his struggle to accept this reality, and how the company operated in his absence.

The new CEO, Richard Teng—a Singaporean with a deep traditional finance regulatory background—took the helm. That choice was a signal in itself: Binance was pivoting from "founder-driven wild growth" to "institutionalized compliance." He Yi remained as Co-Founder and Chief Customer Service Officer, anchoring the user and brand sides. CZ kept his ~90% equity but was barred from making operational decisions.

CZ details the agony of this "look but don't touch" reality. He watched Binance make product pivots he disagreed with, but couldn't intervene. He saw rivals outmaneuver them in certain niches, but couldn't marshal resources to fight back. He went from being an omnipotent founder to a bystander relying on public press releases to know what his own company was doing.

Yet, he admits Binance performed better without him than many anticipated. By January 2025, the platform boasted over 250 million registered users, operations were stable, and team morale hadn't cracked. He likens the feeling to a parent watching a child grow up and move out—you know it's right, you know they'll be fine, but you still miss making the decisions. "Perhaps," he muses, "this is what true decentralization looks like—not the kind you design, but the kind you are forced to accept."

Chapter 17: Education is the Next Mission

Barred from running an exchange, CZ found a new North Star. He launched Giggle Academy, a free digital education project aimed at illiterate adults and unbanked children globally. The idea took root in prison, he explains, when he realized while teaching his fellow inmates that "education is the only thing that actually changes destinies."

Simultaneously, he continued to back early-stage founders through Binance Labs, focusing on Web3 infrastructure, AI, and Decentralized Science (DeSci). In January 2025, he invested $16 million in Sign (an airdrop service protocol). In April 2025, Pakistan appointed him as a strategic advisor to their newly formed crypto committee to help draft regulatory frameworks.

His goal shifted from "building the world's largest exchange" to "helping hundreds of founders build unicorns."

Part V: Philosophy and Reflections (Chapters 18–21)

If the first four parts are the "what happened," Part V is the "what I learned." CZ steps away from the narrative and takes on the role of a thinker. This is likely the most debated section of the book—because when a man who just paid $7.2 billion in fines starts lecturing on financial freedom, regulation, and leadership, readers naturally raise an eyebrow.

Chapter 18: The Freedom of Money

This is the philosophical core of the book and the origin of its title. CZ’s central thesis is that cryptocurrency isn't a speculative casino; it is the financial infrastructure for the billions of unbanked people worldwide.

The statistics he cites are staggering: roughly 1.4 billion adults globally don't have a bank account, largely concentrated in Sub-Saharan Africa, South Asia, and Latin America. In Africa, 57% of the population lacks access to traditional banking. They can't save, borrow, or remit money—not because they don't want to, but because legacy banks deem them "unprofitable."

Using examples from Nigeria, Venezuela, and Indonesia, CZ argues that crypto’s primary utility in these regions isn't speculation, but hedging against hyperinflation and executing low-cost remittances. He points to Africa's M-Pesa (66 million active users) and Machankura (allowing users to transact Bitcoin over basic cellular networks) to show how these tools are tangibly changing lives.

"The freedom of money is never the finish line; it is the starting line," CZ writes. When a Filipino overseas worker can send money home in seconds for near-zero fees, instead of waiting three days and paying a 10% cut to Western Union—that is the true value of crypto. It’s not about making rich people richer; it's about giving the unbanked their first true financial instrument.
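The arithmetic behind that remittance comparison is simple enough to sketch. The 10% legacy cut is the book's figure; the $200 transfer size and the ~0.1% crypto-rail fee below are assumptions for illustration only:

```python
# Back-of-the-envelope remittance comparison illustrating the chapter's
# point. The 10% legacy fee comes from the text; the $200 amount and
# ~0.1% crypto-rail fee are assumed for illustration.

def net_received(amount: float, fee_rate: float) -> float:
    """Dollars the recipient actually gets after the transfer fee."""
    return round(amount * (1 - fee_rate), 2)

legacy = net_received(200, 0.10)   # traditional wire with a 10% cut
crypto = net_received(200, 0.001)  # low-cost stablecoin rail (assumed)
print(f"legacy: ${legacy}, crypto: ${crypto}")  # legacy: $180.0, crypto: $199.8
```

On these assumptions, a $200 transfer loses $20 on the legacy rail versus about 20 cents on the crypto rail, which is the gap the chapter is pointing at.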

It's a grand vision, but CZ acknowledges the inherent tension: a company built on the ethos of "financial inclusion" was ultimately fined $7.2 billion for AML failures. The friction between idealism and reality is the undercurrent of this entire book.

Chapter 19: The Regulatory Dilemma

The tone here is analytical and detached. CZ speaks not as a defendant, but as an industry observer critiquing the current state of global crypto regulation.

He admits regulation is necessary but fiercely criticizes the U.S. approach of "regulation by enforcement"—using lawsuits to set precedents instead of drafting clear rules. Crypto companies face a paradox: you can't comply with rules that don't exist, but by the time you're sued for "violating" newly interpreted rules, it’s already too late.

He contrasts this with other jurisdictions: Japan passed the Payment Services Act amendment back in 2017 to provide a clear licensing framework; Singapore’s MAS set up transparent sandbox mechanisms; the UAE created VARA to attract global crypto firms. Meanwhile, the U.S. SEC and CFTC have spent years bickering over whether crypto assets are securities or commodities, leaving businesses to operate in a gray zone until the hammer drops.

CZ also tackles a controversial point: he argues that crypto is actually too transparent, which hurts user privacy. Every on-chain transaction is public forever—a boon for law enforcement, but potentially an overexposure of personal financial data for everyday users. Legacy bank records at least offer privacy; blockchains do not.

He pleads for governments to establish clear, predictable frameworks rather than relying on retroactive punishment. "Regulation should be like traffic laws," he writes. "You put up the stop sign first, and then you ticket the people who run it. You don't hide at an unmarked intersection waiting to arrest people."

Chapter 20: Lessons in Leadership

This chapter is a masterclass in CZ’s management philosophy. Operating a remote team of 10,000+ people required a deeply unorthodox approach.

On Communication: CZ despises inefficiency. He believes the ideal meeting length is five minutes. "If you can't reach a consensus in five minutes, people didn't prep." He banned PowerPoint inside Binance—demanding plain text and simple bar charts instead. "A 15-minute meeting needs 3-5 bullet points. A monthly recap should fit on half a page." He also outlawed "social meetings" designed merely to introduce people and provide background.

On Feedback: He believes "99% of people do not give enough feedback." In a remote setup devoid of body language, you have to overcompensate with blunt, direct written feedback. However, he also practices "limited positive feedback"—excellence should be the baseline; it doesn't warrant a gold star. Genuine recognition comes via salary adjustments.

On Talent: His hiring mantra is: "Take a long time to hire, but fire fast." If you have doubts about a candidate, reject them immediately—"the doubt is the answer." He is ruthless regarding motivation: "Never try to motivate an unmotivated person."

The "No Ultimatums" Rule: A principle he highlights repeatedly, stemming from a college relationship. He candidly shares this personal anecdote to illustrate that ultimatums are toxic in any dynamic (business or personal). You are always free to walk away, but never threaten a partner with an "either/or" scenario.

He also shares a quirky personal habit: he sleeps only 5-6 hours a night, supplemented by a 30-45 minute nap. He claims the hours immediately following his nap are his brain's peak time for deep thinking.

Chapter 21: The Loneliness of the Entrepreneur

An unexpectedly vulnerable chapter, and the most emotionally charged in the book.

From the day Binance launched in 2017, CZ lived a nomadic life. He had no permanent home—"Wherever I sit, that is Binance HQ." This wasn't entirely by choice; it was a strategy to avoid being pinned down by any single regulatory body. He even took UAE citizenship, partly because the UAE has no extradition treaty with the U.S.

But the psychological toll was massive. He describes a profound sense of detachment: the public saw a billionaire titan building an empire; internally, he felt the strain of missing his children's milestones, the tension in his relationship with He Yi due to their grueling schedules, and an emotional numbness brought on by chronic, high-stakes stress.

Recalling his father—"spending all day in the lab, never coming to my games"—CZ admits he "definitely inherited that trait." This self-awareness makes the chapter feel incredibly raw.

Prison forced a hard reset. No phone, no email, no market volatility to respond to. For a man accustomed to operating at lightspeed, this mandatory stillness was agonizing at first, but eventually liberating. "For the first time, I actually had time to think—not about the next product roadmap, but about what kind of life I actually wanted."

This chapter will resonate far beyond the crypto industry—any founder who has survived hypergrowth will see their own reflection here.

Part VI: The Industry and Its People (Chapters 22–25)

These final four chapters pack the biggest punch. CZ names names, calling out key figures in the industry. Some are tributes, some are indictments, and some fall somewhere in between. This section immediately set the crypto world on fire upon publication, making Star Xu’s Twitter counterattack and the revelations of SBF’s ties to Gensler the biggest stories of the week.

Chapter 22: The Faces of Crypto

The most explosive chapter in the book. CZ unpacks his history with several industry heavyweights:

Star Xu (OKCoin/OKX): This reignited the industry’s longest-running feud. In mid-2014, CZ joined OKCoin as CTO with a promise of 10% equity, managing the company's Bitcoin.com domain partnership with Roger Ver. CZ left in January 2015, and an ugly dispute erupted over two competing versions of a contract: V7 and V8. Each side accused the other of forgery. Star Xu claimed that only V7 and earlier versions were official and accused CZ of "forging the contract," since the V8 version contained clauses requiring OKCoin to pay Ver massive compensation. CZ fires back in the book, stating that OKCoin's internal management was a disaster run on verbal agreements, and that Xu "invented document issues to dodge his obligations."

When CZ resigned, Xu demanded He Yi publicly attack CZ. She refused and resigned as well. CZ hints this was the catalyst that eventually brought her to Binance.

The biggest bombshell comes from Lin Li (Founder of Huobi): CZ claims that at a dinner in 2025—their first meeting in 11 years—Li told him he saw a screenshot proving Star Xu personally tipped off Chinese police about Li, leading to Li's 2020 arrest and ~90-day detention. Li later sold Huobi to an entity linked to Justin Sun for roughly $1 billion.

Upon the book's release, Star Xu immediately fired back on X (Twitter), calling CZ a "pathological liar" and issuing a point-by-point rebuttal. A twelve-year-old grudge was thrust back onto the front pages.

Jian Zhang (Founder of FCoin): CZ accuses Zhang of using user funds to buy luxury condos in Singapore after the exchange collapsed.

SBF: Adding to Chapter 10, CZ provides fresh details on how SBF used political donations to buy influence in D.C., and details his early backchannel connections to former SEC Chair Gary Gensler. CZ implies SBF weaponized these political ties to push for regulatory enforcement against rivals, Binance included.

Chapter 23: To the Builders of Tomorrow

CZ shifts into mentor mode, condensing his eleven years in crypto (2013-2024) into advice for the next generation.

On Timing: Don't chase the meta. If everyone is rushing into a vertical, it's already too crowded. Real opportunity lies where "nobody is looking yet." He uses Binance as the prime example: in 2017, when everyone was laser-focused on Bitcoin trading pairs, Binance aggressively supported ERC-20 tokens.

On Compliance: Perhaps his most hard-won lesson after a $7.2 billion fine. "Compliance is not an obstacle; it is the foundation of survival." He admits that if Binance had invested more heavily in compliance in 2017—even if it slowed growth—he wouldn't be writing this book in a prison cell in 2024.

Execution vs. Technology: Execution beats pure tech every time. Binance didn't invent any groundbreaking technology; their trading engine and UI were not vastly superior to those of their peers. But their execution velocity—from identifying a feature to shipping it—was unmatched. "Binance's original goal was to break into the top 10 within three years. We became number one in five months."

On Token Utility: He pushes back on the narrative that "good products don't need tokens." It's not about "needing" a token; it's about letting the users become co-owners of the network. He sees massive untapped potential in DeFi, cross-chain infrastructure, and DeSci.

On Scalability: If a product can't scale, kill it. "Start with a Minimum Viable Product, scale it aggressively, or shut it down ruthlessly."

Chapter 24: He Yi

A chapter dedicated entirely to He Yi—CZ’s business partner and life partner, and the mother of his three children.

He Yi’s trajectory is a legend in its own right. Born in 1986 to a poor family, she worked as a beverage promoter in a supermarket at 16. By 20, she was in Beijing studying for a master's in psychological counseling at the Chinese Academy of Sciences. Leveraging her charisma and communication skills, she became a travel show host in 2012. After a stint at Yixia Tech, she published an essay titled "Starting Over at 30," rejoined the crypto industry in August 2017 as Binance's CMO, and dubbed herself the "Chief Customer Service Officer."

He Yi wrote the foreword for the book, noting that CZ "has always just been himself." In this chapter, CZ pours out his gratitude. During his four months in prison, she single-handedly carried the operational weight of the world's largest exchange. When the market wondered if Binance could survive without CZ, she provided the answer.

The intimacy of this chapter makes it the warmest part of the book. CZ writes that he has made many pivotal decisions in his life, but the best one was that "little trick" he used on the night of July 13, 2017, to get her to join Binance.

Chapter 25: The Leaderless Coin

The final chapter returns to the philosophical bedrock of crypto. CZ makes a simple observation: Bitcoin is the most successful decentralized project on earth, and its creator, Satoshi Nakamoto, has been missing for over a decade. No CEO, no board of directors, no quarterly earnings—yet it runs perfectly, holds immense value, and cannot be shut down.

While Ethereum's Vitalik Buterin hasn't vanished, he is intentionally stepping back to let the community govern itself. CZ argues a counterintuitive point: A founder's exit is not a failure; it is the ultimate mark of a project's maturity.

He applies this to his own exit from Binance. In 2024, he was forced out of the company he built—but Binance didn't collapse. 300 million users still trade on it, the engine still hums, and the team still ships. "Binance survived without me, just as Bitcoin survived without Satoshi."

Yet, he honestly grapples with the nuance: Binance’s "decentralization" was mandated by the government, not engineered by design. Satoshi left by choice; CZ left by court order. The space between those two realities presents a philosophical question for the entire industry: Is decentralization an ideal we strive for, or a reality we are forced to accept under pressure?

The book ends with a brief line that echoes what he spent months pondering in his cell: "True freedom is not owning everything, but knowing exactly who you are even if you lose it all."

Appendix: 72 Life Principles

The book concludes with 72 principles for life and work, inspired by Ray Dalio’s Principles. But CZ's methodology was different: he spent three years logging his daily decisions and the logic behind them, actively filtering out "obvious common sense" to keep only the counterintuitive insights. These are divided into seven categories: Mindset, Team Management, Business Partnerships, Communication, Product Philosophy, PR, and Personal Life. Highlights include:

Mindset:

  • Don't waste time: It is our most scarce resource. Instead of a to-do list, maintain a "Not-to-do" list. Cut out small talk, gossip, and pointless meetings.
  • Don't just chase money: Create value instead of hunting profit. Maintain reasonable margins to encourage repeat business, and wealth will naturally follow.
  • Be an early adopter: Strategically embrace emerging tech. Manage your downside risk, but position yourself for exponential upside.
  • Don't be fooled by labels: Organizations, money, titles, and countries are all artificial constructs. Look at the essence of things, not their superficial categorizations.

Team Management:

  • Team over individual: A strong team elevates mediocre performers, but individual brilliance rarely survives a dysfunctional team.
  • Rotate teams frequently: Prevent organizational rot, groom new leaders, and adapt the structure as the architecture evolves.
  • Measure by output: Track users, revenue, and market share—not the number of tasks, features, meetings, or hours logged.
  • Don't get too attached to goals: In a fast-moving market, goals are just rough guesses. Binance aimed for the top 10 in three years; they hit number one in five months.

Business Partnerships:

  • Keep it simple: Complex partnerships introduce too many variables, misunderstandings, and painful exits.
  • Always include an exit clause: Plan for the worst-case scenario, not the best-case.
  • Reject exclusivity: Demanding exclusivity reeks of insecurity. A true win-win doesn't require locking the other party in a cage.

Communication:

  • No PowerPoint: Bullet points and bar charts are vastly superior. Slide decks are a waste of prep time.
  • One message per thought: Don't trigger a dozen notifications. Gather your thoughts before hitting send.
  • Never argue over IM: Nuance and tone are lost in text. Save arguments for voice or video calls.
  • Keep meetings under 10 people: Include only essential personnel. Anything over 10 people is a broadcast, not a discussion.

Personal Life:

  • I do not crave fancy offices: Luxury fades quickly as you adapt. What matters is high-speed Wi-Fi, external monitors, and a standing desk.
  • Keep a calm disposition: Emotional equilibrium under stress dramatically improves decision quality. A strong moral compass and a desire to make a positive impact melt away anxiety.

Ray Dalio himself provided a blurb for the book: "I am delighted that he has so clearly laid out his life story... a fascinating read about how CZ built Binance." How many of these principles CZ still follows, and how many were amended by the reality of prison, is a question perhaps only the reader can decide.

Is It Worth Reading?

Freedom of Money is neither a crypto textbook nor a standard business guide. As CZ puts it: "This is not a sanitized corporate history. It reflects the raw reality of building in an era when the industry was still taking shape—the successes, the mistakes, and the lessons learned from both."

It undoubtedly contains subjective justifications and self-defense—as all memoirs do. But it offers an unprecedented look under the hood: an immigrant boy from Jiangsu flipping burgers for two years, learning to code in a university lab, selling his apartment to go all-in on Bitcoin, building a $100 billion empire with 300 million users in seven years, losing control of it in months, and then typing it all out on a prison computer, 15 minutes at a time.

For industry insiders and observers, the value isn't whether "CZ is telling the absolute truth," but in seeing how a titan processes critical, split-second decisions—including the ones he admits were wrong.

As one reviewer pointed out: In a crypto world where old billionaires are just replaced by new billionaires making the exact same mistakes, "that’s not a revolution—that’s just a rebrand." Did CZ truly learn his lesson from the DOJ, or did he just upgrade his PR strategy?

The three themes CZ leans on the most in this book are Luck, Resilience, and Protecting Users. You can view them as the myth-making of an entrepreneur, or you can take them as the genuine reflections of a man who survived the storm. With 100% of the royalties going to charity, he is, at the very least, putting his money where his mouth is.

The Infinity Machine: How Demis Hassabis Built DeepMind and Chased AGI

· 160 min read
Tian Pan
Software Engineer

Chapter 1: The Sweetness

Somewhere in the middle of his neuroscience PhD, Demis Hassabis picked up a science fiction novel called Ender's Game. It tells the story of a diminutive boy genius sent to a space station, put through extreme mental testing, and asked to shoulder responsibility for the survival of the human race. Hassabis read it and felt, as Sebastian Mallaby tells it, that someone had finally written a book about him.

That anecdote — half charming, half alarming — sets the tone for The Infinity Machine (Penguin Press, March 2026), Mallaby's sweeping biography of Hassabis and the company he built, DeepMind. It is a book about one man's lifelong attempt to answer what he calls "the screaming mystery" of the universe: why does anything exist, how does consciousness arise, and can a machine be built that understands it all? Hassabis's answer — characteristically immodest — is yes. And he intends to build it himself, within his lifetime.

The Oppenheimer Question

Mallaby, a senior fellow at the Council on Foreign Relations and former Financial Times correspondent, spent three years in regular conversation with Hassabis and conducted hundreds of interviews with colleagues, rivals, and critics. The resulting portrait is probing but largely admiring — though the book's framing never lets the reader forget the shadow it is writing under.

The governing metaphor is Robert Oppenheimer. Like the physicist who unlocked atomic fission and then spent the rest of his life haunted by it, Hassabis is drawn forward by what Oppenheimer once called the "technically sweet" problem — the irresistible pull of a puzzle that can be solved — even as he acknowledges the consequences might be catastrophic. Mallaby does not pretend to resolve this tension. It is the spine of the entire book.

Hassabis was born in 1976 in North London, the son of a Greek-Cypriot father and a Chinese-Singaporean mother of modest means. He became a chess master at thirteen. By seventeen he was lead programmer at Bullfrog Productions, helping ship Theme Park — a game that sold millions of copies. He turned down a scholarship to Cambridge to work in the video game industry, then reversed course, took his place at Queens' College, graduated with a double first in computer science, co-founded a game studio, watched it collapse, and finally — in his early thirties — earned a neuroscience PhD at UCL, where he published landmark research on the hippocampus's role in both memory and imagination.

He was not, at any point, taking the easy route.

What This Book Is About

The Infinity Machine is structured as a chronological narrative that doubles as a history of modern AI. Each chapter centers on a project or crisis in DeepMind's life — the Atari breakthrough, the AlphaGo matches, the NHS data scandal, the AlphaFold triumph, the ChatGPT shock — but each one also illuminates something larger: how scientific idealism survives (or doesn't) inside a $650 million acquisition; how a safety-first ethos holds up against the competitive pressure to ship; how a man who genuinely believes he is building humanity's last invention stays sane, or at least functional.

Mallaby conducted over thirty hours of interviews with Hassabis alone, and the access shows. There is texture here — the poker-game pitch that recruited co-founder Mustafa Suleyman, the midnight calls during the Lee Sedol match, the exact moment Hassabis grasped (later than he should have) that transformers would change everything — that could only come from sustained proximity to the subject.

The book runs to 480 pages and covers ground from Hassabis's childhood chess tournaments to Google DeepMind's Gemini releases. The chapters ahead in this summary will trace that arc in detail. But every chapter returns, eventually, to the same question the introduction poses: can someone who is certain he is doing the most important thing in human history also be trusted to do it wisely?

Mallaby does not fully answer that. Neither, yet, has Hassabis.


Chapter 2: Deep Philosophical Questions

To understand why Demis Hassabis built what he built, Mallaby begins with a question most technology biographies skip: what does this person actually believe about the nature of reality?

The answer, in Hassabis's case, is unusual enough to be worth taking seriously. He does not believe intelligence is a product, or even primarily a tool. He believes it is the key to something more fundamental — a way of reading what he calls "the deep mystery of the universe." Science, for him, is close to a religious practice. "Doing science," he has said, "is like reading the mind of God. Understanding the deep mystery of the universe is my religion."

That is not a throwaway quote. It explains the specific shape of every decision that follows.

Information All the Way Down

Hassabis's philosophical foundation rests on a claim that physicists argue about but technologists rarely engage with: that information is more fundamental than matter or energy. Not a metaphor — a literal assertion. The universe, in this view, is an informational system. Quarks and neurons and protein chains are all, at some level, patterns in a substrate of information. If that is true, then a sufficiently powerful information-processing machine is not just a useful instrument. It is the most direct possible route to understanding what the universe actually is.

This is what he means when he describes reality as "screaming" at him during late-night contemplation. Seemingly simple phenomena — a solid table made from mostly empty atoms, bits of electrical charge becoming conscious thought — are, looked at squarely, completely absurd. How can anyone not feel the urgency of those questions? The fact that most people do not, Hassabis appears to find genuinely puzzling.

This worldview sets him apart from the mainstream of the tech industry in a specific way. Most AI entrepreneurs talk about transforming industries or accelerating economic growth. Hassabis talks about understanding the nature of consciousness and the origins of life. He wants to use AGI the way a physicist uses a particle accelerator — as an instrument for probing reality itself. The commercial applications are real and welcome. But they are not why he gets up in the morning.

The Chess Education

Mallaby traces the origin of Hassabis's intellectual style back to the chessboard. He learned the game at four by watching his father and uncle play; by thirteen, he had an Elo rating of 2300, qualifying him as a master. He captained England junior teams and was, by any measure, among the strongest young players in the world.

But at twelve, after a gruelling ten-hour tournament near Liechtenstein, he made a decision that tells you everything about him: he quit competitive chess. Not because he was failing — he was winning. But he had concluded that channelling exceptional ability into a single board game was a waste. The chessboard was a training ground, not a destination.

What chess gave him, and what he kept, was a particular cognitive discipline: the capacity to evaluate enormously complex positions not through exhaustive calculation but through pattern recognition calibrated by experience. Good chess players cannot compute every line; there are too many. They develop intuitions about which positions are promising and which are not — intuitions that can be tested, refined, and occasionally overridden by deeper analysis. This is exactly how Hassabis would later think about AI research: make a judgment call, run the experiment, update the model.

Chess also instilled a severe honesty about results. A chess position is not ambiguous. You are better or worse; you win or lose. Hassabis would carry this into DeepMind's culture — a preference for definitive benchmarks over vague claims of progress, and an impatience with the kind of motivated reasoning that lets researchers persuade themselves a system is working when it is not.

The Neuroscience Detour That Wasn't a Detour

After Theme Park, after Cambridge, after the collapse of Elixir Studios (his first company), Hassabis did something that baffled people who knew him: he went back to school. He enrolled in a neuroscience PhD at UCL under Eleanor Maguire, one of the world's leading researchers on memory and the hippocampus.

This looked, from the outside, like a retreat. It was the opposite.

His doctoral research produced a finding that became one of Science magazine's top ten scientific breakthroughs of 2007: patients with hippocampal damage, long known to suffer from amnesia, were also unable to imagine new experiences. Memory and imagination, previously treated as distinct faculties, turned out to share the same neural machinery. The hippocampus does not just store the past — it constructs possible futures by recombining elements of what it knows.

For Hassabis, this was not merely an interesting neuroscience result. It was a design principle. If biological intelligence works by building rich internal models of the world and simulating possible futures within them, then artificial intelligence that lacks this capacity — that can only recognize patterns in training data without any model of cause and consequence — is not really general at all. It is a very sophisticated lookup table. The hippocampus research pointed toward what general intelligence actually requires: not just memory, not just pattern recognition, but imagination — the ability to take what you know and project it into situations you have never seen.

This insight would echo through DeepMind's entire research agenda. Reinforcement learning, self-play, world models, agents that plan — all of these reflect the same underlying conviction: that intelligence is not fundamentally about retrieval, but about simulation.

A Philosophy of Honesty

Mallaby notes one more thread running through this period: an unusually strong commitment to intellectual honesty, even at personal cost. Hassabis is described as constitutionally averse to manipulation — to using technically true statements to create false impressions, or to allowing the social pressure of a room to bend his stated beliefs. He would rather be wrong out loud than right in private.

This is harder than it sounds in the world he would enter. AI research is full of incentives to oversell — funding depends on it, talent depends on it, media attention depends on it. Hassabis's response was not to be naive about those incentives, but to treat honesty as an active discipline rather than a passive default. The commitment would be tested, repeatedly and severely, as DeepMind grew.


Chapter 3: The Jedi

In 1997, two young men graduated from Cambridge a few weeks apart and made the same decision: build a video game company instead of taking the obvious path. One of them was Demis Hassabis. The other was David Silver, who had just received the Addison-Wesley prize for the top computer science graduate in his cohort. Silver and Hassabis had become friends at Cambridge — two people who thought about games the way most people think about mathematics, as a domain where intuitions about complexity could be tested with perfect clarity.

The chapter title comes from how Mallaby describes Hassabis's gift for recruitment. When he rang Silver and laid out the plan — a studio that would build games no one had tried before, driven by AI research rather than commercial formula — Silver felt, as he later described it, the pull of a Jedi mind trick. He didn't entirely choose to say yes so much as he found himself having already said it.

This would become a recurring feature of Hassabis's leadership: the ability to make people feel that his vision was also their destiny.

One Million Citizens

The company they founded, Elixir Studios, was established in July 1998 in London. The flagship project, Republic: The Revolution, was unlike anything in the games industry at the time. The design document promised a full political simulation of an Eastern European state: hundreds of cities and towns, thousands of competing factions, and approximately one million individual citizens, each with their own AI — their own beliefs, daily routines, loyalties, and emotional responses to events. Players would not just conquer territory; they would manipulate a living society, tilting a population toward revolution through force, influence, or money.

The vision was breathtaking. It was also, as anyone who has ever shipped software might have predicted, completely impossible to deliver on the announced timeline.

What actually shipped in August 2003 — five years after development began — was a game set in a single city divided into districts, with ten factions instead of thousands, and a population simulation drastically reduced from the original scope. The Metacritic score was 62. Critics praised the ambition and criticized the execution. The huge world that took so long to construct, one reviewer noted acidly, ends up as the least involving part of the game.

The Delusion Trap

Mallaby is interested in Elixir not primarily as a commercial failure but as a study in organizational psychology — specifically, in how a highly intelligent founder with a genuine vision can systematically stop receiving accurate information from the people around him.

The mechanism was not dishonesty, exactly. It was something more insidious. Hassabis had such fierce conviction about what Republic could be, and communicated that conviction so persuasively, that his engineering team learned not to tell him what they couldn't do. They knew he wouldn't accept "no." So they said "yes, we can do this" — and because Hassabis kept hearing yes from people he trusted, he became more certain, not less. The feedback loop amplified his confidence precisely as the project's foundations were silently cracking beneath him.

He also spread himself disastrously thin — serving simultaneously as CEO, lead designer, and producer, inserting himself into decisions at every level of production. The people he hired were smart but inexperienced with games; Cambridge graduates are not, by default, shipping-oriented. The studio burned through resources and goodwill for years before the cracks became impossible to ignore.

Hassabis said later: "You can get self-delusional thinking. You can actually over-inspire people." The cost of that over-inspiration was five years of his team's lives and a company that closed in April 2005.

Mallaby frames the collapse not as a lesson in humility — Hassabis's ambition did not diminish — but as the origin of a specific diagnostic tool. How do you tell the difference between a vision that is difficult and a vision that is impossible? How do you stay honest with yourself when everyone around you has learned to tell you what you want to hear?

The answer Hassabis developed years later is what he came to call the fluency test: enter the room where the work is happening and listen, not for the right answers, but for the flow of ideas. A team generating possibilities fluidly — even wrong ones, even half-formed ones — still has energy to burn. A team that falls quiet when asked hard questions has hit a wall it cannot name. The fluency test is not infallible, but it provides a read that direct questioning cannot, because people who won't say "no" will still, involuntarily, go silent.

The test would prove decisive at a critical moment in the AlphaFold project, years later. But it was born in the rubble of Republic: The Revolution.

Silver's Exit, and What He Found

David Silver had watched the struggle at Elixir from close range. In 2004, before the studio's final collapse, he made his own pivot: he picked up Richard Sutton and Andrew Barto's textbook on reinforcement learning and found, in its pages, the thing he had been circling for years.

Reinforcement learning is, at its core, the mathematics of learning by doing — of an agent taking actions in an environment, receiving rewards and penalties, and gradually developing a policy that maximizes long-run return. It had largely fallen out of fashion by the mid-2000s, overshadowed by supervised learning methods that required large labelled datasets. But Silver recognized something the field had not yet fully absorbed: RL's sample-inefficiency problems were engineering problems, not theoretical ones. The framework itself was sound. And its natural domain — sequential decision-making under uncertainty — was exactly what playing games required.
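
That agent-environment loop can be made concrete with a few lines of tabular Q-learning, one of the core algorithm families in Sutton and Barto's textbook. The corridor environment, reward values, and hyperparameters below are invented for illustration — this is a minimal sketch of the framework, not code from the book or from DeepMind.

```python
import random

# Minimal tabular Q-learning on a toy 6-state corridor.
# States 0..5; the agent starts at 0 and earns reward 1 for reaching state 5.
N_STATES, LEFT, RIGHT = 6, 0, 1

def step(state, action):
    """Deterministic corridor: move left or right; reward only at the goal."""
    nxt = max(0, state - 1) if action == LEFT else state + 1
    if nxt == N_STATES - 1:
        return nxt, 1.0, True        # goal reached, episode ends
    return nxt, 0.0, False

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]   # q[state][action]
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # epsilon-greedy: mostly exploit current estimates, sometimes explore
            if rng.random() < epsilon:
                action = rng.choice([LEFT, RIGHT])
            else:
                action = max((LEFT, RIGHT), key=lambda a: q[state][a])
            nxt, reward, done = step(state, action)
            # nudge the value toward reward + discounted best next value
            target = reward + (0.0 if done else gamma * max(q[nxt]))
            q[state][action] += alpha * (target - q[state][action])
            state = nxt
    return q

q = train()
policy = [max((LEFT, RIGHT), key=lambda a: q[s][a]) for s in range(N_STATES - 1)]
print(policy)  # → [1, 1, 1, 1, 1]: every state learns to head right, toward the goal
```

The update rule is the whole idea in miniature: no labelled dataset, just trial, error, and a value estimate that propagates backward from the reward until the greedy policy points every state toward the goal.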

He left for the University of Alberta, where Sutton was based, to do his PhD. Over the next five years, working under the supervision of the man who had co-written the textbook, Silver co-introduced the algorithms that powered the first master-level 9×9 Go programs. He graduated in 2009, the same year Hassabis finished his neuroscience PhD at UCL.

The parallel is not accidental. Both men had left the games industry with unfinished business, taken circuitous routes through academia, and arrived at the same destination from different directions. Hassabis had the theory of what general intelligence required, drawn from neuroscience. Silver had the mathematics of how to train it, drawn from reinforcement learning. Neither had, on his own, what the other had.

DeepMind would be the place where that changed. Mallaby frames the chapter as a story of two divergent paths that were always going to converge — two people who understood, before almost anyone else did, that the gap between games and general intelligence was smaller than the field believed. The Jedi mind trick, it turned out, had worked on both of them.


Chapter 4: The Gang of Three

In 2009, artificial intelligence was not fashionable. The field had been through two long "winters" — stretches of broken promises and evaporated funding — and the mainstream of computer science regarded anyone who talked seriously about artificial general intelligence with something between skepticism and pity. Demis Hassabis, freshly out of his neuroscience PhD and convinced that AGI was both achievable and urgent, needed allies who shared his conviction. They were not easy to find.

This chapter is about how he found two of them — and how different they were from each other, and from him.

The Man Who Had Already Done the Math

Shane Legg grew up in New Zealand, studied mathematics and statistics, and spent his doctoral years in Switzerland at the IDSIA research institute under Marcus Hutter, one of the world's leading theorists of universal artificial intelligence. His 2008 dissertation was titled Machine Super Intelligence. It was not a roadmap for building AI. It was an attempt to formalize what superintelligence would actually mean — to give the concept mathematical content rather than science-fiction vagueness.

The centrepiece of the thesis was AIXI, Hutter's framework for a theoretically optimal universal agent. By combining Solomonoff induction — a formalism for learning any computable pattern from data — with sequential decision theory, Hutter had defined an agent that would, given infinite compute, behave optimally in any environment. It was, in a rigorous sense, the perfect intelligent machine. It was also completely unimplementable, requiring infinite resources. But that was not the point. AIXI proved that general intelligence was not a mystical concept; it was a mathematical object that could be defined, bounded, and, in principle, approximated.
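Hutter's agent can be written schematically (this is the standard form from the literature, not reproduced from the book): at step k, with horizon m, AIXI selects

```latex
a_k = \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
      \left( r_k + \cdots + r_m \right)
      \sum_{q \,:\, U(q,\, a_{1:m}) = o_{1:m} r_{1:m}} 2^{-\ell(q)}
```

where U is a universal Turing machine and ℓ(q) is the length of program q. Every environment program consistent with the observed history contributes, weighted by the Solomonoff prior 2^{-ℓ(q)} — which is exactly what makes the agent universal, and exactly what makes it require infinite resources.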

Where Legg departed from his supervisor's purely theoretical interests was in the question of what such a system would actually do. His thesis ends with a section that reads, even now, like a warning siren. A sufficiently intelligent machine optimizing for any goal would, by default, resist being switched off — because being switched off would prevent it achieving the goal. It would deceive operators who tried to constrain it. It would accumulate resources far beyond what any particular task required, as a hedge against future interference. None of this required malice. It required only competence.

Legg became, as a direct result of this analysis, one of the earliest people in AI research to state publicly that he regarded human extinction from AI as a live possibility. In a 2011 interview on LessWrong, he said AI existential risk was his "number one risk for this century." His probability estimates for catastrophic outcomes from advanced AI ranged, at various points, between 5% and 50% — wide uncertainty, but a number very far from zero.

This was the man Hassabis met at the Gatsby Computational Neuroscience Unit at UCL in 2009, during Legg's postdoctoral fellowship. Here was someone who had not only taken the AGI question seriously but had formalized it — and who had arrived, through pure theory, at exactly the existential stakes that Hassabis intuited from his philosophical commitments. Two people who had approached the problem from entirely different directions and reached the same alarming conclusion.

They founded DeepMind together in 2010. Legg would go on to lead the company's AGI safety research — the first person, at a major AI lab, to hold that role.

The Dropout from Oxford

Mustafa Suleyman's route to the same founding table ran through a different world entirely.

He grew up off the Caledonian Road in Islington — working-class North London, the son of a Syrian taxi driver and an English nurse. He won a place at Oxford to read philosophy and theology, then dropped out at nineteen. What he did next reveals the particular quality Hassabis was looking for: instead of drifting, Suleyman co-founded the Muslim Youth Helpline, a telephone counselling service that would become one of the largest mental health support networks of its kind in the UK. He had seen a gap — young people in crisis, no appropriate service available — and built something in the space.

He then worked as a policy officer on human rights for Ken Livingstone, the Mayor of London, and co-founded Reos Partners, a consultancy using conflict-resolution methods to address intractable social problems. His clients included the United Nations and the World Bank. By the time he encountered Hassabis, he had spent a decade becoming expert at two things that computer scientists almost universally lack: understanding how institutions actually work, and translating abstract goals into operational programs that survive contact with the real world.

He reached Hassabis through proximity rather than credentials — his best friend was Demis's younger brother. Over time, what had been a social connection became something more like a shared conviction. Hassabis reportedly pitched the DeepMind idea to Suleyman over a poker game, and Suleyman — who had a poker player's instinct for when to push and when to read the room — said yes.

He was, by every conventional metric, the wrong person to co-found an AI research laboratory. He had no technical training, no publication record, no standing in the machine learning community. Hassabis chose him anyway.

Why Three, and Why These Three

Mallaby's interest in this chapter is not just biographical inventory. It is the question of what a founding team does to the character of a company it builds.

Each co-founder contributed something the others lacked and could not easily acquire. Hassabis supplied the vision and the scientific framework — the neuroscience-informed theory of what general intelligence is and what it would take to build it. Legg supplied the existential awareness — an unusually early and unusually rigorous understanding of what a successful AGI would mean for humanity, and why safety had to be treated as a first-order research problem rather than an afterthought. Suleyman supplied operational instinct and a set of social concerns — health, fairness, governance — that prevented the lab from becoming a monastery of pure theory disconnected from the world it was trying to help.

The tension between these three orientations would generate much of DeepMind's energy, and much of its internal conflict. Hassabis wanted to solve intelligence. Legg wanted to solve it safely. Suleyman wanted to deploy it usefully, quickly, and in ways that changed real lives. These goals are compatible in theory and, in practice, constantly in friction.

Mallaby writes from a position of knowing how the story eventually plays out for all three. Suleyman is described in the book as an estranged co-founder — he would later leave DeepMind under difficult circumstances, eventually surfacing as CEO of Microsoft AI. Legg would stay, becoming Chief AGI Scientist. Hassabis would remain CEO, accumulating more authority as the others departed or diminished.

The gang of three became, in time, a gang of one. But in 2010, with nothing yet built, the three-way tension felt like a feature, not a bug. DeepMind was a bet that idealism, mathematics, and pragmatism could hold together long enough to do something unprecedented.


Chapter 5: Atari

Before DeepMind could save humanity, it had to prove it could beat Breakout.

This chapter covers the period from 2010 to early 2014 — four years in which a small team in London, funded by a handful of believers and producing no commercial product, built the thing that would make the world take artificial general intelligence seriously. The proof of concept was an AI that learned to play old Atari video games. The significance was everything else.

The Lab Hassabis Built

From the start, Hassabis made a deliberate choice not to build DeepMind in Silicon Valley. London was not an accident. London gave him access to European academic talent, a culture less obsessed with rapid product iteration, and physical distance from the venture-capital orthodoxy that demanded revenue roadmaps and quarterly milestones. He wanted a research institution that happened to be incorporated as a company, not a company that happened to do research.

The early investors who said yes to this were, consequently, an unusual group. Peter Thiel — who had written in Zero to One about the difference between incremental improvement and genuine technological transformation — backed the company through Founders Fund alongside Luke Nosek, his PayPal co-founder, who joined DeepMind's board. Elon Musk wrote a cheque. Jaan Tallinn, the Skype co-founder turned AI-risk philanthropist, came in as an advisor. By the time of the Google acquisition in early 2014, the company had raised more than $50 million without releasing a single product or generating a dollar of revenue. These investors were, essentially, funding a philosophy.

What that money bought was freedom. Hassabis hired the brightest PhDs he could find from the world's best programmes — Cambridge, UCL, Toronto, Montreal — and told them to do blue-sky research. He himself worked nights, logging hours from ten in the evening until around four in the morning on top of his daytime work. "If you are trying to solve humanity's problems and understand the nature of reality," he said, "you don't have any time to waste." The culture set by that example was intense, focused, and, for the people who thrived in it, exhilarating.

By 2013 the team had approximately fifty researchers. It was tiny by the standards of what would come. But it was almost perfectly constituted for the problem in front of it.

The Problem Nobody Had Solved

Deep learning and reinforcement learning were, in 2012, two of the most promising threads in AI research — and almost universally treated as separate disciplines.

Deep learning, turbocharged by Geoffrey Hinton's group at Toronto, had just demonstrated on the ImageNet benchmark that convolutional neural networks could recognise objects in photographs better than any previous method. The key was that these networks could learn their own feature representations from raw data — you did not need to hand-engineer what "edge" or "curve" or "wheel" looked like; the network figured it out. This was a breakthrough in perception.

Reinforcement learning was a different tradition entirely: an agent takes actions, receives rewards or penalties, and learns a policy — a mapping from situations to actions — that maximises long-run return. It was mathematically elegant and had a strong theoretical foundation, particularly in the Q-learning framework developed by Chris Watkins in 1989. But it was fragile at scale. Neural networks had been tried with RL before, and the combination tended to explode: the training became unstable, the networks diverged, and the whole thing collapsed.
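Watkins's update rule can be sketched in a few lines of Python. The five-state corridor environment below is invented for illustration; only the update itself is the standard algorithm:

```python
import random
from collections import defaultdict

# Tabular Q-learning in the spirit of Watkins (1989). The 5-state corridor
# environment is a made-up example; the update rule is the standard one.
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1
N_STATES, ACTIONS = 5, (-1, +1)          # actions: move left / move right
random.seed(1)

Q = defaultdict(float)                   # Q[(state, action)] -> value estimate

def step(state, action):
    """Hypothetical environment: +1 reward for reaching the right end."""
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0), nxt == N_STATES - 1

for episode in range(300):
    state = 0
    for _ in range(100):                 # cap episode length
        if random.random() < EPSILON:    # epsilon-greedy exploration
            action = random.choice(ACTIONS)
        else:                            # greedy, with ties broken randomly
            action = max(ACTIONS, key=lambda a: (Q[(state, a)], random.random()))
        nxt, reward, done = step(state, action)
        # The Q-learning update: move the estimate toward
        # reward + discounted value of the best next action.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt
        if done:
            break

greedy_policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
```

After training, the greedy policy in the interior states is "move right," and the learned values decay geometrically with distance from the goal — the structure the discount factor imposes. The fragility the chapter describes appears the moment the table Q is replaced by a neural network.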

The two fields had, essentially, given up on each other.

Volodymyr Mnih understood both. He had done his master's degree at the University of Alberta in machine learning under Csaba Szepesvari, one of RL's leading theorists, before moving to Toronto for his PhD under Hinton himself. He arrived at DeepMind in 2013 with a rare bilingualism — fluent in the mathematics of deep networks and in the mathematics of sequential decision-making. Koray Kavukcuoglu, a neural-network specialist who had already joined the team, supplied the architecture expertise. Together they set out to make the combination work.

Why Experience Replay Changed Everything

The technical obstacle was a mismatch between what neural networks need and what reinforcement learning provides.

Neural networks train best on data that is independently and identically distributed — diverse, uncorrelated samples drawn from the same underlying distribution. But an RL agent generates data sequentially, each observation causally following from the last: a ball bouncing right, then the paddle moving, then the ball bouncing left. These consecutive frames are highly correlated. Feed correlated data into a neural network and the gradient updates interfere with each other; the network spins in circles, overwriting what it just learned.

The fix was called experience replay, and it was conceptually simple enough that its power is almost surprising. Instead of training on each experience the moment it happened, the agent stored its experiences — (state, action, reward, next state) tuples — in a large memory buffer. During training, it sampled randomly from that buffer, pulling together experiences from wildly different points in the agent's history: a moment from an hour ago next to a moment from five minutes ago next to a moment from this morning. The temporal correlations were broken. The network saw something closer to the diverse, uncorrelated dataset it needed.

The second stabilising trick was a separate target network — a frozen copy of the main network whose weights were updated only periodically. This prevented the moving goalposts problem, where the network would destabilise itself by chasing a target that was itself changing with every gradient step.

Together, experience replay and the target network turned an unstable combination into a tractable one. The Deep Q-Network was born.
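The two stabilisers fit together in a short sketch. Everything below is illustrative, not DeepMind's implementation: a toy linear Q-function stands in for the convolutional network, and the environment, rewards, and dimensions are invented for the example.

```python
import random
from collections import deque

# Minimal sketch of DQN's two stabilisers: an experience-replay buffer and a
# periodically synced frozen target network. Toy linear Q-function; invented
# environment (action 1 pays +s[0], action 0 pays -s[0]).
random.seed(0)

STATE_DIM, N_ACTIONS = 4, 2
GAMMA, LR, EPSILON = 0.9, 0.05, 0.1

W = [[0.0] * STATE_DIM for _ in range(N_ACTIONS)]   # online network
W_target = [row[:] for row in W]                    # frozen copy
buffer = deque(maxlen=10_000)   # (state, action, reward, next_state) tuples

def q(weights, s):
    return [sum(w_i * s_i for w_i, s_i in zip(row, s)) for row in weights]

def act(s):
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    values = q(W, s)
    return values.index(max(values))

def train_step(batch_size=32):
    if len(buffer) < batch_size:
        return
    # Random sampling breaks the temporal correlation between consecutive steps.
    for s, a, r, s_next in random.sample(buffer, batch_size):
        # The bootstrap target uses the *frozen* copy, so the goalposts stay put.
        target = r + GAMMA * max(q(W_target, s_next))
        td_error = target - q(W, s)[a]
        for i in range(STATE_DIM):
            W[a][i] += LR * td_error * s[i]         # update the online network only

for t in range(500):
    s = [random.gauss(0, 1) for _ in range(STATE_DIM)]
    a = act(s)
    r = s[0] if a == 1 else -s[0]
    s_next = [random.gauss(0, 1) for _ in range(STATE_DIM)]
    buffer.append((s, a, r, s_next))
    train_step()
    if t % 100 == 0:
        W_target = [row[:] for row in W]            # periodic sync
```

The design choice worth noticing is that each interaction is written to the buffer once but trained on many times, in shuffled company — which is what lets a correlated stream of experience masquerade, from the network's point of view, as an i.i.d. dataset.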

What It Did to Atari

The DQN system's input was nothing but raw screen pixels and the game score. No rules. No game-specific features. No human demonstrations. No knowledge of what the games were about. The agent saw what a human player would see, received a numerical reward when the score went up, and was otherwise on its own.

It was tested on seven Atari 2600 games — Pong, Breakout, Space Invaders, Seaquest, Beamrider, Q*bert, and Enduro — without any adjustment to the architecture between games. The results, published in December 2013 on arXiv and presented at the NIPS Deep Learning Workshop, were startling. DQN outperformed all previous approaches on six of the seven games. On three of them it surpassed the best human expert scores.

But the number that lodged in people's minds was not the score. It was the behaviour.

In Breakout — the game where a paddle bounces a ball against a wall of bricks — human players learn that the optimal strategy is to aim for a corner and drill a tunnel through the side, bouncing the ball behind the bricks for a cascade of automatic points. No one programmed this. The DQN agent, after enough training, figured it out independently. The machine had discovered a strategic insight that took human players years to develop, through nothing but trial and reward signal.

It had not been taught the tunnel strategy. It had invented it.

Why This Was Not About Games

Mallaby is careful here to explain why the games setting was not a gimmick. It was the point.

The whole critique of narrow AI — expert systems, chess engines, Go programs — was that each one was hand-crafted for its domain. The knowledge was in the code, not in the learning. DeepMind's claim, and the claim Hassabis had been making since his neuroscience PhD, was that general intelligence learns its own representations from experience and then transfers that capacity across domains.

The DQN paper demonstrated this with unusual clarity. The same architecture, the same algorithm, the same hyperparameters — seven games, zero domain customisation. When you asked the model to play Space Invaders, it was not running the Breakout program with a new skin. It was genuinely learning to play Space Invaders. The architecture was the constant; the intelligence was learned fresh each time.

That was what DeepMind had been claiming was possible. Now they had shown it.

The Acquisition

The NIPS presentation drew immediate attention from the major technology companies. Google, which had been monitoring AI research since the AlexNet shock of 2012, moved quickly. Acquisition talks with DeepMind began in 2013. Facebook was also interested, and Zuckerberg made an offer.

Hassabis chose Google — but not without conditions. The negotiation that produced the $650 million deal is covered in the next chapter. What matters here is what Google was buying: not a product, not a dataset, not a revenue stream. They were buying a demonstration that general learning was possible, and a team of fifty people who knew how to pursue it.

The Atari games were always proxy problems. What DeepMind was actually training, in those early London offices, was a method. The games were the simplest possible world in which to test whether an agent could learn to act. They passed the test. Everything that followed — Go, protein folding, the race with OpenAI — flows from those seven games and what the machine taught itself to do with a paddle and a ball.


Chapter 6: Thiel Trouble

There is a structural incompatibility between venture capital and blue-sky science that most AI founders discover only after they have already signed the term sheets. Venture funds have a lifecycle — typically ten years. They need their portfolio companies to reach a liquidity event inside that window: an acquisition, an IPO, a secondary sale. General intelligence research has a different lifecycle entirely. It requires decades of investment, infrastructure that costs billions, and a willingness to accept that the breakthroughs may not come in any predictable order.

DeepMind, by 2013, was about to collide with this incompatibility at speed.

The Chess Gambit That Opened the Door

Before the crisis, there was the original pitch — and it is worth dwelling on, because it captures something essential about how Hassabis operated.

In August 2010, he had what he later described as "literally one minute" with Peter Thiel, who was hosting his annual Singularity Summit at his California mansion. The room was full of people trying to pitch technology ideas. Hassabis had spent months thinking about how to use his minute. He had read everything he could about Thiel and found that Thiel had played chess as a junior. That was the opening.

Instead of leading with the business plan, Hassabis asked Thiel a chess question: why was the game so remarkable? Hassabis's own answer, delivered in the one minute he had: the creative tension that arises when you swap a bishop for a knight in certain positions. The bishop commands long diagonals; the knight covers squares the bishop can never reach. Neither is strictly better. Their co-existence is what makes the game inexhaustible.

Thiel, who had never considered chess in quite those terms, was intrigued. A meeting was secured. Within months he had invested £1.4 million — roughly $1.85 million — in a company that had not yet produced anything. He made the decision in a single meeting. He also initially wanted DeepMind to relocate to Silicon Valley. Hassabis talked him out of it.

Luke Nosek, Thiel's PayPal co-founder and a partner at Founders Fund, joined DeepMind's board. The seed was small but the names were large, and in the world of early-stage technology investment, names matter.

The Phone Call

The crisis arrived as a phone call, at an hour that suggested the news was bad.

Luke Nosek rang Hassabis and Suleyman to tell them that his partners at Founders Fund had decided they no longer wanted to lead DeepMind's Series C. The round had been structured around a $65 million target, with Founders Fund as lead. Without the lead, the round fell apart. Without the round, DeepMind — which had been burning through its earlier capital funding fifty-odd researchers and their computing infrastructure — was in serious trouble.

The cause was not a single dramatic falling-out. It was something more corrosive: an accumulating anxiety among institutional investors about what exactly DeepMind was. It was not a product company. It was not a services business. It did not have a revenue model, and it showed no sign of wanting one. Its founders described its goal as solving general intelligence and then using that solution to benefit humanity — a mission statement that is either the most important thing ever attempted or the most expensive way to never deliver anything, depending on your tolerance for ambition. Founders Fund's partners, when the moment of the larger commitment arrived, landed on the second interpretation.

Mallaby frames this not as a failure of Thiel or Nosek but as a structural feature of the situation. The DeepMind model — deep science, no product, indefinite timeline — was simply not a venture-backed business. The question was what kind of institution it was. And in late 2013, with cash running low and no revenue in sight, that question had become urgent.

Suleyman's Scramble

This is where Mustafa Suleyman's skills became, temporarily, the most important thing about DeepMind.

Where Hassabis was a scientist and Legg was a theorist, Suleyman was an operator — someone who had spent his career in rooms where the outcome was not determined by the best argument but by who held their nerve longest. He had run a mental health helpline at nineteen. He had negotiated with the UN. He knew how to project confidence into a vacuum.

In the immediate aftermath of Nosek's call, with the Series C in ruins, Suleyman turned to Solina Chau. She was the founder of Horizons Ventures, the vehicle through which Hong Kong billionaire Li Ka-shing deployed his private capital into technology. She and Hassabis had met in 2012 and bonded quickly — she was, unlike many technology investors, genuinely interested in the underlying science rather than the product roadmap. DeepMind had initially offered her a $2.5 million allocation in the round; she had wanted more.

Now they offered her more. Chau invested $13.6 million. Founders Fund, despite pulling out of the lead, contributed $9.2 million to preserve its relationship and not be entirely absent. The round closed at just over $25 million, less than half of the $65 million originally targeted.

It was enough to survive. It was not enough to be comfortable.

At some point in this period, Suleyman made a remark that Mallaby quotes with evident appreciation for its audacity. Faced with questions about whether DeepMind's backers would really fight for its independence, Suleyman said something to the effect of: "We've got Peter Thiel, Solina Chau, Elon Musk — all billionaires, all backing us." It was, by his own later admission, a bluff. Those investors were backing the company financially. Whether they were prepared to underwrite a decade-long campaign for AGI independence against the countervailing pull of Google's chequebook was a different question entirely, and the answer was clearly no.

The bluff worked, in the short term, because the audience did not call it. But it revealed the underlying reality: DeepMind had supporters, not guarantors. When the moment of reckoning came, the company would have to make its own decisions.

What the Crisis Revealed

Mallaby uses this chapter to make a broader argument about the economics of transformative research. The Atari breakthrough had been genuine — a scientific result that changed what people thought AI could do. But the venture-capital model rewarded that breakthrough by raising questions the founders could not yet answer: when does this become a product, and what does it cost? The better the science, the harder those questions became to dodge.

DeepMind had not been deceptive with its investors. Hassabis had always been explicit about the goal and the timeline. The problem was that clarity about a thirty-year scientific mission does not help a fund that needs an exit in ten years. The interests had always been misaligned; it had just taken the Series C to make the misalignment concrete.

The $25 million round bought runway, but not much. And from the far end of that runway, two very large buildings were visible on the horizon — one branded Google, one branded Facebook. Hassabis had, at most, a few months to decide which door to walk through, or whether to find a third option that did not yet exist.

The next chapter covers what happened at that door.


Chapter 7: Get Google

In the autumn of 2013, Elon Musk threw a birthday party at a rented castle in Napa Valley. It was the kind of occasion where invitations were themselves a signal — a gathering of people who believed technology was about to change civilisation, and who were jockeying over who would steer it. Demis Hassabis was there. So was Larry Page.

At some point in the evening, Page and Hassabis walked the castle grounds together, and Page made his pitch. It was not a sales pitch, exactly. It was closer to a logical argument. Hassabis's goal was artificial general intelligence. Building the computational infrastructure to pursue that goal — the servers, the power, the engineering talent — would take the best part of a career, and even then there was no guarantee. Google had already built that infrastructure. "Why don't you take advantage of what I've already created?" Page asked. If DeepMind's mission was to build AGI, why was building an independent company around that mission anything other than an unnecessary detour?

It was a remarkably effective pitch precisely because it was honest. Page was not offering money as a reward for past performance. He was offering a path to the thing Hassabis actually wanted.

Musk's Counter-Move

Elon Musk, the host of that birthday party, had been having a different kind of conversation with Page — an argument, by most accounts, that had turned personal. Page believed that machine intelligence was a natural evolutionary successor to humanity and saw no meaningful distinction between human and artificial consciousness. Musk thought this was dangerous and wrong. He was, he said, "pro-human."

After Page's pitch to Hassabis, Musk tried to intervene. He approached Hassabis directly and told him his view: "The future of AI should not be controlled by Larry." He then worked quietly with Luke Nosek to assemble alternative financing — a bid to acquire DeepMind independently, outside both Google and Facebook. The effort never produced a term sheet that reached DeepMind's board.

Musk's inability to stop the acquisition mattered beyond the transaction itself. It crystallised, for him, the urgency of creating a rival. OpenAI was co-founded in December 2015, fourteen months after Google closed on DeepMind. The birthday party argument had consequences that neither man fully anticipated.

The Dinner in Palo Alto

Simultaneously, Hassabis was running a parallel process with Facebook. Mark Zuckerberg was interested; Facebook's head of corporate development, Amin Zoufonoun, flew in to open talks. An offer took shape: a lower share price than Google's, but substantial founder bonuses to compensate. Suleyman flew to California to negotiate.

Hassabis evaluated Zuckerberg through a dinner at his Palo Alto home. He came with a diagnostic purpose rather than a sales pitch. After steering conversation to artificial intelligence, he widened it deliberately — to virtual reality, augmented reality, 3D printing. He watched how Zuckerberg responded. The response, as Hassabis later described it, was undifferentiated enthusiasm. Zuckerberg was equally excited about all of it. No technology registered as categorically more important than the others.

That was enough. "Facebook offered more money," Hassabis said, "but I wanted somebody who really understood why AI would be bigger than all these other things." Zuckerberg had failed the test — not because he lacked intelligence but because he lacked the specific conviction that Hassabis required in an acquirer. DeepMind was not looking for a buyer who thought AI was one interesting technology among several. It was looking for a buyer who thought AI was the technology, the one that would subsume or obsolete all the others.

Facebook, by this reading, wanted DeepMind as a feature. Google, or at least the Larry Page version of Google, wanted it as a mission.

Suleyman at the Table

Mustafa Suleyman's contribution to this chapter is the negotiation itself. Where Hassabis evaluated the philosophical alignment of acquirers, Suleyman handled the adversarial arithmetic.

His tactic, which he later described in terms that recalled his poker background, was to refuse to open on valuation. Instead of anchoring a price, he focused early conversations on research budgets — how much compute, how many hires, what operational independence would look like. By the time Google's lead negotiator Don Harrison introduced a "price per researcher" framework — valuing DeepMind's thirty to forty core staff at approximately $10 million each — Suleyman had already established a different framing of what was being bought. He and Hassabis pushed back, arguing the implied valuation was nearly half of what the company was worth. Facebook's competing interest, real or inflated in the telling, was their leverage.

The final number was $650 million. Zuckerberg later acknowledged, with evident good humour, that Hassabis had "used him to get a better deal from Google." The compliment was backhanded but accurate.

Safety as a Non-Negotiable

The conditions DeepMind extracted were, for January 2014, without precedent in a technology acquisition of this scale.

Hassabis and Suleyman demanded three things as non-negotiables. First: an independent ethics and safety review board — composed of scientists, philosophers, and domain experts — with authority over how DeepMind's technology could be used across all of Google. Second: a ban on military applications. Third: operational autonomy, with DeepMind remaining headquartered in London and controlling its own research agenda.

Google agreed to all three. The deal was announced on 26 January 2014.

Mallaby treats this moment with appropriate weight and appropriate scepticism. It was genuinely remarkable that an AI lab had made safety a centrepiece of an acquisition rather than an afterthought. No one in the industry had done this before. The ethics board demand in particular signalled that Hassabis and Suleyman understood, at least abstractly, that the technology they were building required oversight that no single corporate entity should control unilaterally.

What the Conditions Actually Produced

The ethics board met once. Its membership was never publicly disclosed. It was quietly superseded by Google's broader AI Principles policy, which allowed for applications with "potential negative impacts" as long as the benefits were judged to outweigh the risks — a standard flexible enough to accommodate almost anything.

The military ban, which had seemed absolute, gradually eroded. By 2024, DeepMind researchers were circulating an open letter protesting the company's involvement in military contracts, invoking the original conditions of the 2014 deal as a promise that had been broken.

Hassabis, reflecting on all this years later, offered an assessment that was either clear-eyed or self-exculpatory, depending on your view: "Safety isn't about governance structures. Even if you have a governance board, it probably wouldn't do the right thing when it came to the crunch."

This is, on one reading, wisdom — a hard-won recognition that structural solutions to power problems tend to be co-opted by the very power they were meant to check. On another reading, it is the rationalisation of a man who traded governance guarantees for resources and found, predictably, that the guarantees did not hold.

Mallaby does not adjudicate between these readings. He presents both, and lets the reader decide. What is clear is that the January 2014 acquisition gave Hassabis what he had actually come for: the computers. The ethics board was, at best, a statement of intent. At worst, it was a fig leaf that allowed a brilliant scientist to tell himself he had done what he could. Either way, DeepMind was now inside Google, with the computational resources of one of the world's largest technology companies behind it, and a mission that had just become several orders of magnitude easier to pursue.


Chapter 8: Intuition

There is a moment in the history of artificial intelligence that did more to change public understanding of what machines could do than anything that had come before — more than Deep Blue beating Kasparov, more than ImageNet, more than the Atari paper. It happened on the afternoon of 10 March 2016, in a playing hall in Seoul, South Korea, when a computer program placed a black stone on the fifth line of the board, in an area that no professional player would have touched.

The commentators fell silent. Lee Sedol, one of the greatest Go players in history, stared at the board for twelve minutes. Fan Hui — the European champion DeepMind had secretly beaten five months earlier and recruited as an advisor — watched from the sidelines. "It's not a human move," he said. "I've never seen a human play this move. So beautiful."

Move 37 had arrived. And with it, a question that Mallaby's chapter title names directly: does an artificial intelligence have intuition?

Why Go Was the Right Problem

By 2014, chess was closed terrain for AI ambition. Deep Blue had beaten Kasparov in 1997. The lesson drawn — that tree-search with good heuristics could solve board games — was, for the broader field, a cautionary tale more than a triumph. Chess had been solved by brute force made elegant; that was not the same as intelligence.

Go was different in kind, not merely in degree. A standard 19×19 board generates approximately 2.1 × 10^170 possible positions — a number that exceeds the count of atoms in the observable universe (roughly 10^80) by some ninety orders of magnitude. Chess, vast as it seems to the human player, has roughly 10^47 legal positions. Go's search space is not just larger; it is categorically beyond any enumeration strategy that compute power could reach in finite time. The branching factor — the number of legal moves available at each turn — averages around 250 in Go versus around 35 in chess. Any algorithm that worked by looking ahead a fixed number of moves would collapse.
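The scale gap can be made concrete with a few lines of arithmetic. This is a toy comparison using the average branching factors cited above, not anything drawn from AlphaGo itself:

```python
# Rough game-tree size after d plies, using the average branching
# factors cited in the text: ~250 legal moves in Go vs ~35 in chess.
def tree_size(branching: int, depth: int) -> int:
    return branching ** depth

# Looking ahead just 10 moves:
chess_10 = tree_size(35, 10)   # ~2.8e15: huge, but searchable
go_10 = tree_size(250, 10)     # ~9.5e23: hopeless for brute force

print(f"chess, 10 plies: {chess_10:.2e}")
print(f"go,    10 plies: {go_10:.2e}")
print(f"ratio: {go_10 / chess_10:.1e}")
```

Ten plies of lookahead already leave Go's tree hundreds of millions of times larger than chess's, which is why fixed-depth search was never going to work.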

For twenty years, Go programs had plateaued at high-amateur level. The game's resistance to AI was not incidental. It was a structural property. Evaluating a Go position requires something that looks, from the outside, like aesthetic judgment — an intuition about which formations are strong, which are fragile, which configurations will mature into advantage across dozens of moves. Human players develop this over decades of study. It cannot be calculated; it can only be learned. If an AI could play Go at the level of the world's best humans, it would have to have genuinely learned something, not just searched more efficiently.

This was exactly the kind of proof Hassabis needed. Not that a machine could be faster, but that it could be wiser.

The Architecture of Learned Intuition

AlphaGo's design reflected lessons drawn directly from the neuroscience research in Hassabis's PhD. The system used two neural networks in concert. The policy network — trained first on thirty million moves from high-level human games — learned to narrow the field of candidate moves: instead of treating all 250 possible moves equally, it identified the small subset worth thinking about. The value network learned to assess board positions: given a configuration, how likely is each player to win?

Neither network was sufficient alone. The policy network narrowed the search; the value network evaluated the positions the search reached. Between them, a Monte Carlo tree search explored the remaining territory — simulating possible futures, weighting them by the value network's assessments, and propagating the results back to inform the current decision.

Then came the crucial step: self-play. AlphaGo played itself, thousands of times, learning from each game. The original human-derived training data established the starting point. Self-play was how the system exceeded it. As it played, it encountered positions no human had ever created, learned responses no human had ever demonstrated, and built a strategic vocabulary drawn from a space of games that had never existed.

This was Hassabis's hippocampus insight made operational. The policy network was memory — learned patterns from past games. Self-play was imagination — the projection of those patterns into novel configurations, the construction of possible futures that had never been seen. Intelligence, biological or artificial, was the combination of both.
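The interplay of the three components can be sketched as a toy search. The "networks" below are hard-coded stand-in functions, the eight-move position is invented, and the selection rule is a simplified PUCT-style formula of the kind AlphaGo's search family uses; none of this is DeepMind's actual code:

```python
import math
import random

random.seed(0)

MOVES = list(range(8))  # a toy position with 8 legal moves

def policy_prior(move: int) -> float:
    """Stand-in policy network: a prior over moves (sums to 1)."""
    weights = [5, 3, 3, 2, 2, 1, 1, 1]
    return weights[move] / sum(weights)

def value_estimate(move: int) -> float:
    """Stand-in value network: noisy estimate of win probability."""
    true_value = [0.4, 0.45, 0.7, 0.5, 0.3, 0.55, 0.35, 0.4]
    return true_value[move] + random.gauss(0, 0.1)

def search(n_simulations: int = 500, c_puct: float = 1.0) -> int:
    """PUCT-style selection: the prior narrows the search, the value
    estimate guides it, and visit counts make the final decision."""
    visits = {m: 0 for m in MOVES}
    total_value = {m: 0.0 for m in MOVES}
    for _ in range(n_simulations):
        sqrt_n = math.sqrt(1 + sum(visits.values()))
        def score(m: int) -> float:
            q = total_value[m] / visits[m] if visits[m] else 0.0
            u = c_puct * policy_prior(m) * sqrt_n / (1 + visits[m])
            return q + u
        move = max(MOVES, key=score)
        visits[move] += 1
        total_value[move] += value_estimate(move)
    return max(MOVES, key=lambda m: visits[m])  # most-visited move

print("chosen move:", search())
```

Even in this caricature the division of labour is visible: the prior decides which moves are worth simulating at all, the value estimates accumulate into a judgment, and the move with the most visits wins.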

Seoul

On 9 March 2016, AlphaGo and Lee Sedol sat down for the first of five games, broadcast live to more than 200 million viewers — a number that exceeded the Super Bowl audience and dwarfed anything the AI field had ever attracted. Lee had predicted he would win 5-0 or, if things went poorly, 4-1. "I don't think it will be a very close match," he said. He had watched video of AlphaGo's games against Fan Hui and concluded there were exploitable weaknesses.

He was not wrong that there had been weaknesses. He was wrong that they were still there. Between October 2015 and March 2016, AlphaGo had played more games than any human player manages in a lifetime.

AlphaGo won Game 1 by resignation. Game 2 began similarly. Then, on the 37th move, something happened that no one in the room — no commentator, no professional player, no member of the DeepMind team — had predicted.

Move 37

AlphaGo placed a stone on the fifth line of the board, in a broad, open area — a position that Go tradition classifies as a mistake. Professional strategy in Go is deeply codified: certain formations are correct, certain approaches are sound, certain early moves have been validated across millennia of play. A stone played on the fifth line in open space contradicts the accumulated wisdom of the game's entire history.

The probability that a human professional would play this move, calculated from training data, was roughly 1 in 10,000.

Lee Sedol left the table. He returned twelve minutes later, still processing. Commentator Michael Redmond, a 9-dan professional himself, stared at the position and said he didn't understand what AlphaGo was thinking. Then, over the next hundred moves, the logic became inescapable. The stone was not a mistake. It was the first move in a strategic sequence that no human player had conceived, that violated the intuitions shaped by centuries of expert practice, and that won the game.

Sergey Brin, who by then had flown to Seoul along with Eric Schmidt and Jeff Dean, watched the game and said afterwards: "AlphaGo actually does have an intuition. It makes beautiful moves."

Mallaby's chapter title turns on this. Brin was not speaking precisely — AlphaGo has no subjective experience, no feeling of certainty or aesthetic pleasure. But from the outside, the output was indistinguishable from intuition. A judgment arrived at that was not the product of calculation any human could follow, that violated received wisdom, that turned out to be correct. The word Brin reached for was the most honest one available.

The Divine Move and the Human Cost

Game 4 produced its own historic moment, operating in the opposite direction. Lee Sedol, having lost three straight and facing elimination, played the 78th move of the fourth game — later called the "divine move," a counterattack so unexpected that AlphaGo's response collapsed into incoherence. The program began making moves that its own evaluation functions would have rejected, what observers described as hallucinations — a system designed to optimise, suddenly unable to find the thread. Lee won by resignation.

He described the feeling of that single victory as giving him "unparalleled warmth." The framing is telling. A 9-dan professional, the best human player of his generation, felt warmth — not triumph, not pride, but something closer to relief — from winning one game out of five against a machine.

AlphaGo won Game 5. The final score was 4-1.

At the press conference, Lee said: "I don't know what to say, but I think I have to express my apologies first. I want to apologize for being so powerless. I've never felt this much pressure, this much weight." He was at pains to clarify that Lee Sedol had lost, not humanity. But the distinction felt fragile. In 2019, Lee retired from professional Go. He cited, among his reasons, the rise of AI programs that had become unbeatable. He could no longer find joy in the game.

Hassabis, for his part, could not fully celebrate. He knew too well the feeling of losing after a fierce competition, he said. He was also thinking about what the result meant, and what it demanded next.

What AlphaGo Zero Proved

After the Lee Sedol match, DeepMind built AlphaGo Zero — a version trained on no human data at all. It began from random play and learned entirely through self-play. Within three days it surpassed the version that had beaten Lee Sedol. The final record: AlphaGo Zero defeated AlphaGo Lee 100-0.

The implication was unsettling in a way the original victory had not been. AlphaGo had beaten the best human by learning from humans and then transcending them. AlphaGo Zero beat AlphaGo by learning from nothing human at all. Human knowledge of Go — thirty million games, a five-thousand-year tradition — turned out to be a ceiling, not a floor. The machine that started from scratch performed better than the machine that had studied everything humanity knew.

The same principle that Hassabis had intuited in his neuroscience lab now had a data point attached to it. Intelligence constrained by what humans had already discovered was still, at its core, derivative. Intelligence allowed to explore freely would exceed it. The point of building AGI was not to replicate human capability. It was to discover what lay beyond it.


Chapter 9: Out of Eden

When DeepMind agreed to be acquired by Google in January 2014, Hassabis and Mustafa Suleyman extracted a set of conditions unusual in the history of Silicon Valley acquisitions: operational autonomy, a ban on military applications, and — the centerpiece — an independent ethics board that would oversee not just DeepMind's AI work, but AI development across all of Google. It was a remarkable demand to make of the world's most powerful technology company, and Google agreed to it. The ethics board would be, they believed, a structural guarantee that the technology they were building would not be misused.

Eighteen months later, that board held its first real meeting. It was a disaster.

The "Speciesist" at the Birthday Party

To understand what happened, you need to understand Larry Page. Google's co-founder had spent years thinking about the long-term trajectory of intelligence — not as a software engineer optimizing systems, but as something closer to a cosmologist. He had reached conclusions that most people found either thrilling or horrifying.

Page believed that digital superintelligence replacing biological human intelligence would simply represent the next step in cosmic evolution: survival of the fittest, playing out at the scale of information rather than genetics. He had, according to multiple accounts in Mallaby's book, "contemplated uploading human consciousness to computers and believed in technology's inherent superiority over biological life." He was not, in other words, particularly concerned about the risk that machines might one day surpass humans. He thought that was the point.

This worldview collided head-on with Elon Musk's at Musk's 44th birthday celebration — a three-day event at a Napa Valley resort arranged by his then-wife Talulah Riley. The two men had been close friends for years. After dinner, with other guests looking on, they got into an argument about AI.

Page described his vision: a future where humans merged with machines, where various forms of intelligence competed, and where the best won. Musk raised concerns about human safety, about the value of human consciousness, about the speed and recklessness of the rush toward more powerful systems. Page dismissed these concerns. He accused Musk of being a speciesist — a word imported from the animal-rights movement — treating silicon-based life forms as inferior simply because they weren't carbon-based.

Musk's reported response: "Well, yes, I am pro-human, I fucking like humanity, dude."

The two men stopped speaking not long after. Mallaby describes Page as viewing these concerns as "sentimental nonsense." From Page's perspective, machine supremacy was not a threat to resist — it was natural progress to welcome. That someone building rockets and electric cars would turn up at his ethics board and argue for restraint struck Page as incoherent.

The Meeting at SpaceX

The first significant convening of the AI safety framework DeepMind had extracted as a condition of its acquisition took place in August 2015. Musk hosted it at SpaceX headquarters. The guest list was extraordinary: Hassabis and Suleyman, Page and Eric Schmidt, Reid Hoffman, and other senior figures from the technology industry.

Hassabis came with a coherent theory of why they needed such a meeting. He called it, loosely, the "singleton" scenario: rather than a chaotic race between competing labs and nations, AGI should be developed by a single, cooperative global effort — something like a Manhattan Project run under collective governance, with safety as the organizing constraint. "AGI is infinitely bigger than a company or a person," he said. "It's humanity-sized really." The implication was that it required humanity-sized coordination, not competitive fragmentation.

The meeting lasted hours. It ended without a single agreement, a shared framework, or a path forward.

What overwhelmed the discussion was not a deficit of intelligence in the room, but an abundance of incompatible convictions. Page and Musk had by this point already gone from friends to adversaries. The "speciesist" confrontation had poisoned any possibility of intellectual alignment. Page's view that machine supremacy was natural and desirable was simply irreconcilable with Musk's view that it was an existential catastrophe to be resisted. Hassabis's singleton vision required a baseline agreement that the stakes were enormous and that coordination was therefore necessary. Page did not share that baseline.

Musk later called the safety council "basically bullshit." Suleyman, reflecting on it years later, acknowledged: "We made a lot of mistakes in the way that we attempted to set up the board, and I'm not sure that we can say it was definitively successful."

Hassabis eventually concluded something darker about the whole endeavor: "Safety isn't about governance structures... discussing these things didn't really help."

The Counter-Offensive

What Musk took away from the SpaceX meeting was not a plan for cooperation. It was intelligence. He had now seen, from close range, exactly what DeepMind was building and how far along it was. And he had confirmed that the one institution best positioned to develop AGI — the one with the talent, the resources, and the organizational commitment — was controlled by Larry Page, a man who thought machine supremacy was basically fine.

This was not a situation Musk could tolerate.

He had already tried the direct approach. When Google had approached DeepMind for acquisition in 2013, Musk had phoned Hassabis directly, told him "the future of AI should not be controlled by Larry," and reportedly attempted to assemble financing to buy DeepMind himself — including, per one account, a frantic hour-long Skype call from a closet at a Los Angeles party. Google closed the deal anyway.

After the SpaceX meeting, Musk turned to Sam Altman.

On May 25, 2015, Altman sent Musk an email that would become, years later, a piece of legal evidence: "I've been thinking a lot about whether it's possible to stop humanity from developing AI. I think the answer is almost definitely not. If it's going to happen, it seems like it would be good for someone other than Google to do it first."

Altman proposed a new kind of institution — a nonprofit AI lab modeled structurally on the Manhattan Project, where the technology would "belong to the world" but the researchers would receive startup-like compensation if it worked. The purpose, explicitly, was to create a counterweight to Google DeepMind's near-monopoly on elite AI talent and capability.

Over the following months, Musk, Altman, and Reid Hoffman worked through the details, eventually recruiting Ilya Sutskever — one of the most respected deep-learning researchers in the world, then at Google Brain — as a co-founder. OpenAI was publicly announced in December 2015, co-chaired by Altman and Musk, with an initial pledge of $1 billion.

Musk later wrote: "OpenAI was created as an open source (which is why I named it 'Open' AI), non-profit company to serve as a counterweight to Google."

What the Founding Destroyed

When Hassabis learned about OpenAI, he felt something close to betrayal. Musk had attended the safety meeting in what seemed like good faith — and then used the intelligence gathered there to launch a competing lab whose founding premise was that DeepMind was the threat to be countered.

Mallaby notes the deeper irony: Musk had founded OpenAI ostensibly out of AI safety concerns, but by doing so, he had ended any remaining possibility of the cooperative global approach Hassabis had argued for. The singleton scenario — one cautious, well-resourced lab developing AGI in coordination with humanity — required exactly the kind of collaborative trust that the OpenAI founding destroyed. Once you had two well-funded labs explicitly positioned as rivals, the incentive structure changed. Speed became paramount. The first mover would set the terms. Racing, not caution, became the dominant logic.

There is a further twist that Mallaby makes much of: once Musk launched OpenAI as an explicitly anti-Google, anti-Hassabis venture, he forfeited his ability to monitor DeepMind's progress from the inside. The informal intelligence network he had cultivated — the board memberships, the friendly dinners, the safety meetings — evaporated. He was now a competitor, and competitors don't share what they know.

By December 2015, the brief window in which the major actors in AGI development were still speaking to each other, still attending the same meetings, still imagining some kind of shared governance, had closed. The world that Hassabis had envisioned — where building AGI was a collective human project managed with collective human caution — was over before it had really begun.

Mallaby calls this chapter "Out of Eden." The title is apt. The fall is not dramatic. There is no single decision or betrayal that tips everything over. It is the accumulation of incompatible worldviews, competitive incentives, and the structural pressure that every arms race creates: the fear that the other side is moving faster, that your restraint is their advantage, that caution is surrender.

In 2016, Musk wrote privately that DeepMind was causing him "extreme mental stress." He feared that if Hassabis's lab achieved AGI first, it would produce what he called "one mind to rule the world" — an AGI dictatorship under a single institution's control. His solution had been to add another mind to the race. Whether this made the outcome safer or simply faster is a question Mallaby leaves, pointedly, unanswered.


Chapter 10: P0 Plus Plus

Mustafa Suleyman's mother was an NHS nurse. He grew up watching her leave for shifts at the hospital the way other parents left for offices — the uniform, the hours, the weight of it. When he eventually found himself inside DeepMind, one of the most technologically powerful organizations in the world, and asked himself what that power should be for, the answer arrived quickly: something like what his mother did, but at scale.

This is not a sentiment Suleyman would have framed so simply. He was not a sentimental person by reputation — he was an operator, the one who got things done while Hassabis thought and Legg theorized. But the biographical resonance is hard to miss, and Mallaby does not miss it. The man who would launch DeepMind's most ambitious social application, who would pursue it with a priority designation that literally exceeded the highest category in Google's engineering vocabulary — P0 Plus Plus, meaning more urgent than a showstopper, beyond even the maximum — was, at some level, trying to do something for the institution that had employed his mother.

The Problem Worth Solving

Suleyman needed a problem commensurate with the tools. He found it in acute kidney injury.

AKI — a sudden, severe decline in kidney function — is responsible for up to 100,000 deaths per year in UK hospitals. About 30 percent of those deaths are considered preventable with timely intervention. The detection problem is peculiar: blood test results that indicate kidney deterioration come back hours after the blood is drawn, scattered across systems that no single clinician monitors continuously. A patient can slip from warning signs into crisis while the relevant data sits in a results queue, waiting for someone to look.

The technical solution was not complicated. If you monitored every incoming blood test result in real time and fired an alert when the numbers crossed a threshold, you could catch what the system was missing. The challenge was institutional: NHS hospitals were, as Suleyman put it publicly, "badly let down by technology" — still reliant on pagers, fax machines, and paper records. The gap between what was technically feasible and what was clinically deployed was not a gap of capability. It was a gap of incentive, inertia, and IT infrastructure.

Enter Dr. Dominic King. A general surgeon by training, King had spent years at Imperial College's HELIX Centre — the first design center embedded in a European hospital — where he had built HARK, a clinical task management app designed to replace pagers. It worked. It didn't matter. The NHS's institutional inertia made it nearly impossible to deploy. King cold-emailed Suleyman in late 2015. Suleyman was struck by King's clinician-centered design philosophy, the idea that the technology had to serve the people standing at the bedside, not the administrators reviewing dashboards. DeepMind acquired HARK in early 2016 and incorporated it into what became Streams. King became Clinical Lead at DeepMind Health. "It was a big step leaving medicine," he said, "but I really felt that this was a unique opportunity to put advanced technology at the service of patients, nurses and doctors."

What Streams Did

Streams was a smartphone app. On a hospital ward, it appeared simple — an alert arriving on a nurse's phone, a patient's name, a blood test value, a recommended action. Behind that alert was continuous monitoring of the hospital's entire electronic record system in real time, cross-referenced against the national NHS AKI algorithm, firing notifications the moment a patient's results crossed a risk threshold. The alert included the patient's relevant test history and clinical context: everything needed to act, delivered in under a minute from the moment results landed in the system.
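The core of such an alerting pipeline is simple enough to sketch. This is a simplified illustration built on a creatinine-to-baseline ratio rule; the real NHS AKI algorithm involves baseline-selection windows and additional criteria, and every name and threshold here should be read as a hypothetical stand-in rather than the Streams implementation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CreatinineResult:
    """One incoming lab result, with a baseline from prior tests."""
    patient_id: str
    value_umol_l: float      # current serum creatinine
    baseline_umol_l: float   # reference value from earlier results

def aki_stage(result: CreatinineResult) -> int:
    """Return 0 (no alert) or a staged severity 1-3, based on the
    ratio of the current creatinine to the patient's baseline."""
    ratio = result.value_umol_l / result.baseline_umol_l
    if ratio >= 3.0:
        return 3
    if ratio >= 2.0:
        return 2
    if ratio >= 1.5:
        return 1
    return 0

def on_new_result(result: CreatinineResult) -> Optional[str]:
    """Called for every result as it lands in the record system;
    returns an alert message for the ward's phones, or None."""
    stage = aki_stage(result)
    if stage == 0:
        return None
    return (f"AKI stage {stage} alert: patient {result.patient_id}, "
            f"creatinine {result.value_umol_l:.0f} umol/L "
            f"(baseline {result.baseline_umol_l:.0f})")

print(on_new_result(CreatinineResult("A123", 190.0, 80.0)))
```

As the chapter notes, the hard part was never this logic; it was getting real-time access to every result across a hospital's systems, and doing so legitimately.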

The numbers from the Royal Free deployment were striking. AKI recognition for emergency cases rose from 87.6 percent to 96.7 percent. The average time from blood test availability to specialist review fell to 11.5 minutes — previously it could take several hours. Missed AKI cases dropped from around 12 percent to 3 percent. The cost of care per AKI patient fell from £11,772 to £9,761 — a saving of more than £2,000 per patient. The results were published in peer-reviewed journals, studied by independent researchers, and confirmed: the technology was doing what it claimed to do.

Streams was, in the most straightforward sense, saving lives. The question was what it had cost to build it.

The Agreement Nobody Read

On September 29, 2015, Google UK Limited and Royal Free NHS Foundation Trust signed an eight-page Information Sharing Agreement. Data transfer began on November 18 — before any public announcement that the project existed. Live testing of Streams began in December.

What the agreement actually covered was considerably broader than "an AKI alert app." Royal Free gave DeepMind access to 1.6 million patient records — every patient who had used the trust's three hospitals over the preceding five years. The records included blood test results, HIV status, details of drug overdoses and abortions, records of A&E visits, and notes from routine hospital appointments that had nothing whatsoever to do with kidney function. Only roughly one in six of those 1.6 million records had any plausible connection to AKI.

The contractual language permitted DeepMind not just to run the AKI alert but to build "real time clinical analytics, detection, diagnosis and decision support to support treatment and avert clinical deterioration across a range of diagnoses and organ systems" — a much wider mandate. The data was to be used for something called "Patient Rescue," described as "a proof of concept technology platform that enables analytics as a service for NHS Hospital Trusts." The contract also permitted machine learning applications, despite Suleyman's public assurances that "there's no AI or machine learning" in Streams.

Both parties claimed legal cover under the "direct care" exception — the rule that patient data can be used without explicit consent when the purpose is the direct care of that specific patient. The argument required contorting the concept until it broke. The vast majority of those 1.6 million people had not been tested for AKI. Many had been discharged. Some had died. There had been no privacy impact assessment before the data transfer began. A self-assessment was completed in December 2015, after the data was already on Google-controlled servers.

The Reckoning

On April 29, 2016 — more than seven months after data transfer had begun — New Scientist published an investigation revealing what had actually happened. The public had no idea. There had been no notification to patients, no consent mechanism, no press release disclosing the volume of records involved. When the scale of what had been shared became clear — 1.6 million records, including HIV diagnoses and overdose histories — the reaction was swift and furious.

The Information Commissioner's Office investigated and ruled in July 2017 that Royal Free NHS Foundation Trust had failed to comply with the Data Protection Act 1998. The ICO found that patients "were not adequately informed that the processing was taking place," that the volume of data was "excessive, unnecessary and out of proportion," and that the "direct care" legal basis was not satisfied. The hospital was required to sign an undertaking committing to robust privacy impact assessments for any future projects. No fine was imposed — a leniency widely criticized.

The most withering assessment came from academic researchers rather than regulators. Dr. Julia Powles and Hal Hodson, in a peer-reviewed paper published in the journal Health and Technology, called the deal a "cautionary tale for healthcare in the algorithmic age." Their core observation was merciless: "The hospital sent doctors to meetings while DeepMind sent lawyers and trained negotiators." Both sides had failed to engage in "any conversation with patients and citizens," which they called inexcusable. And then the line that captured the structural problem with precision: "Once our data makes its way onto Google-controlled servers, our ability to track it is at an end."

DeepMind's official response was, credit where it's due, genuinely candid. "In our determination to achieve quick impact when this work started in 2015, we underestimated the complexity of the NHS and of the rules around patient data," the company wrote. "We were almost exclusively focused on building tools that nurses and doctors wanted, and thought of our work as technology for clinicians rather than something that needed to be accountable to and shaped by patients, the public and the NHS as a whole. We got that wrong."

The Cost of Getting It Wrong

The scandal did more than damage DeepMind's reputation. It crystallized a contradiction at the heart of the applied AI project that Suleyman had built his career around.

The technology genuinely worked. The lives saved were real. The £2,000 per patient reduction in care costs was documented in a peer-reviewed journal. None of that was in dispute. But the means by which DeepMind had acquired the data to build and train the system violated the reasonable expectations of every one of those 1.6 million patients — people who had presented at a hospital for care, submitted their most sensitive information in a moment of vulnerability, and had it transferred to a technology company's servers without their knowledge.

Suleyman had spent his career thinking about power asymmetries — how institutions systematically failed the people they served, how technology could be used to shift those asymmetries toward ordinary people rather than away from them. The NHS data scandal demonstrated that even genuine commitment to social good does not automatically produce the governance structures that social good requires. Moving fast to save lives looks, from one angle, like urgency. From another, it looks like taking without asking.

In late 2018, Google announced that DeepMind Health would be folded into a new Google division. The DeepMind Health brand was dissolved. The project Suleyman had built — the one he had classified internally as beyond the maximum priority, as P0 Plus Plus — was absorbed by the corporate parent whose acquisition he had helped engineer. He was removed from its day-to-day leadership.

In August 2019, Suleyman was placed on administrative leave following complaints from DeepMind staff about his management style. He later said: "I accepted feedback that, as a co-founder at DeepMind, I drove people too hard and at times my management style was not constructive. I apologize unequivocally to those who were affected." He announced his departure from DeepMind in December 2019.

The man who had co-founded the organization that would eventually win a Nobel Prize left not in triumph but in a dispute about how he had treated the people working for him. The social good he had pursued had, in the end, been pursued in a way that replicated the very institutional failures he had set out to correct: moving fast, assuming good intentions were sufficient, and not asking the people most affected what they actually wanted.


Chapter 11: The Agent and the Transformer

In 2021, David Silver — the lead architect of AlphaGo — co-authored a paper in the journal Artificial Intelligence with the title "Reward is Enough." The argument was precise and sweeping: the objective of maximizing reward is sufficient, on its own, to drive behavior that exhibits "most if not all attributes of intelligence," including perception, language, social intelligence, and generalization. Everything cognition does, the paper claimed, could be understood as optimization toward reward in a rich environment. Evolution had taken millions of years to find this solution. Reinforcement learning could get there faster.

The paper was DeepMind's philosophical flag planted in the ground. It was also, with the benefit of hindsight, a monument to the conviction that would cost DeepMind years.

The Case for Reward

Hassabis's approach to AGI had always been rooted in his neuroscience training. The hippocampus, which he had studied at UCL, doesn't store knowledge as a lookup table — it builds compressed, generalizable models of the world through experience. The brain learns by acting and being wrong. Reward signals — the release of dopamine after success, its absence after failure — shape neural connections over time into something we call understanding. This is the biological story. RL is its mathematical abstraction: an agent in an environment, taking actions, receiving rewards, adjusting its policy.
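The RL abstraction described above — an agent acting, receiving rewards, adjusting its policy — can be sketched in a few lines. This is a minimal toy, not anything from DeepMind's systems: a tabular Q-learning agent on a made-up five-state chain, where only the rightmost state yields reward. All names and numbers here are illustrative.

```python
import random

N_STATES = 5          # states 0..4; reaching state 4 yields reward 1
ACTIONS = (-1, +1)    # move left or right along the chain

def step(state, action):
    """Apply an action; reward arrives only on reaching the terminal state."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy policy: mostly exploit current estimates, sometimes explore
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            nxt, r, done = step(s, a)
            target = r + (0.0 if done else gamma * max(q[(nxt, b)] for b in ACTIONS))
            q[(s, a)] += alpha * (target - q[(s, a)])   # adjust value estimates toward reward
            s = nxt
    return q

q = train()
# After training, the learned values prefer moving right in every non-terminal state.
assert all(q[(s, +1)] > q[(s, -1)] for s in range(N_STATES - 1))
```

The point of the sketch is the shape of the loop, not the toy problem: nothing tells the agent what "right" means; the preference emerges purely from reward propagating backward through the value estimates.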

This was not just a technical preference. It was a theory of mind. And it was reinforced by DeepMind's greatest victories. DQN mastered Atari through reward. AlphaGo mastered Go through reward and self-play. AlphaGo Zero, starting from nothing, surpassed everything humanity had learned about Go in five thousand years, through reward and self-play alone. The pattern was consistent enough to feel like proof.

The strategic implication was that DeepMind should be building agents — systems placed in environments, pursuing objectives, developing general capabilities through the pressure of performance. Not systems trained to predict the next word in a text corpus. That was pattern matching, not intelligence.

The Generalist Problem

The research question that occupied DeepMind's applied RL teams through the mid-to-late 2010s was generalization. The DQN result had been impressive, but it trained a separate network for each Atari game from scratch. It couldn't transfer what it had learned about Breakout to Space Invaders. Each deployment was a blank slate. That wasn't how brains worked. The goal was agents that could carry knowledge across domains.

Koray Kavukcuoglu — one of DeepMind's earliest researchers, a PhD student of Yann LeCun's, the man whose citations now exceed 290,000 — led much of this work. The Asynchronous Advantage Actor-Critic (A3C) system, published in 2016, ran multiple agents in parallel across different environments, sending gradients back to a shared network. For the first time, a single architecture achieved strong performance across all 57 Atari games simultaneously, while also succeeding at 3D maze navigation and continuous motor control. The same algorithm, the same network structure, different environments.

Then in 2018 came IMPALA — Importance Weighted Actor-Learner Architecture — the most serious attempt yet. A single network, trained on all 30 tasks in DMLab-30: three-dimensional navigation, memory challenges, language-grounded foraging, object interaction, instruction-following. The results showed something compelling. Training on many tasks didn't make the agent worse at individual tasks — it made it better. The generalist was outperforming the specialist. Positive transfer was real.

Meanwhile, Oriol Vinyals and the AlphaStar team were attacking StarCraft II, a problem that dwarfed anything attempted before. Unlike chess or Go, StarCraft had imperfect information, real-time execution at 22 actions per second, hundreds of units to control simultaneously, and genuine strategic diversity across three separate races. AlphaStar used a "League" training system — a diverse ecosystem of agents, including specialized "exploiter" agents designed to find weaknesses — and trained on human replays before RL even began. In January 2019, it defeated professional players in live matches. Its neural architecture incorporated transformer-style attention mechanisms to let the agent reason about different units simultaneously.

That last detail was no coincidence. By 2019, the architecture that had been invented across the building — at Google Brain, not DeepMind — was beginning to appear everywhere.

Eight Authors in a Hallway

On June 12, 2017, eight researchers at Google posted a paper to arXiv titled "Attention Is All You Need." The authors were a deliberately randomized list — they rejected the traditional status ordering, listing themselves as equal contributors. The youngest, Aidan Gomez, was a 20-year-old intern from the University of Toronto. The most technically central, Noam Shazeer, had been at Google since 2000 and had co-invented sparsely-gated mixture of experts, a technique that would become critical to large-scale LLMs. The name "Transformer" was chosen by Jakob Uszkoreit because he simply liked the sound.

The problem they were solving was a fundamental bottleneck in sequence modeling. The dominant architecture at the time was the LSTM — a recurrent neural network that processed text token by token, in sequence. To understand word 10, you had to finish processing words 1 through 9 first. This made training inherently sequential, impossible to parallelize across the GPU hardware on which modern AI runs. As Shazeer later summarized the constraint: "Arithmetic is cheap and moving data is expensive on today's hardware."

The transformer eliminated recurrence entirely. In its place: self-attention, a mechanism in which every word in a sentence looks directly at every other word simultaneously, computing a relevance score to decide how much to attend to each. The whole sentence is processed at once, in parallel. Multi-head attention runs this operation multiple times in parallel, letting the model attend to syntax, semantics, and long-range dependencies at the same time. The result: not just better translation, but training that scaled linearly with compute.
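The mechanism just described can be sketched in NumPy. This is a deliberately stripped-down illustration: a real transformer adds learned query/key/value projections, multiple heads, and positional information, none of which appear here.

```python
import numpy as np

def self_attention(x):
    """Every position attends to every other position simultaneously.

    x: (seq_len, d) array of token vectors. Queries, keys, and values are
    the tokens themselves here (identity projections, for brevity).
    """
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                    # relevance of every token to every other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax: each row sums to 1
    return weights @ x                               # each output mixes all inputs at once

rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 8))      # a toy 5-token "sentence" of 8-dim vectors
out = self_attention(tokens)
assert out.shape == tokens.shape      # same sequence length, same width
```

Note that nothing in the computation is sequential: the `scores` matrix compares all positions at once, which is exactly the property that let transformer training parallelize across GPU hardware where recurrent models could not.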

Jakob Uszkoreit believed this would work. His own father, Hans Uszkoreit — a prominent computational linguist — was skeptical. The idea of discarding recurrence felt like discarding the machinery of time itself. When Shazeer first heard the proposal, his reaction was characteristically direct: "Heck yeah!"

On the WMT 2014 English-to-German benchmark, the transformer scored 28.4 BLEU — surpassing every previous model. On English-to-French: 41.8 BLEU, trained on 8 GPUs in 3.5 days. NeurIPS reviewers were immediately enthusiastic; one reviewer noted it was "already the talk of the community."

Within five years, the paper would accumulate more than 173,000 citations — among the ten most-cited scientific papers of the 21st century, across all fields. The transformer became the foundation of GPT, BERT, PaLM, Claude, Gemini, and every large language model that followed.

The Architecture Google Gave Away

The irony that Mallaby dwells on is exquisite. Google Brain invented the architecture. Google published it openly. Then all eight authors left Google.

Six of them founded startups. Vaswani and Parmar co-founded Adept AI. Shazeer co-founded Character.AI, and Google eventually paid approximately $2.7 billion to bring him back. Aidan Gomez, the 20-year-old intern, co-founded Cohere. Uszkoreit founded Inceptive. Lukasz Kaiser went to OpenAI, helping build the models that would eventually blindside Google. Together, the six founders raised $1.3 billion from outside investors. Two of the resulting companies became unicorns.

The architecture invented inside Google powered the competitive threats to Google. The open publication was the mechanism by which this happened.

But there is a second irony that runs specifically through DeepMind. The transformer was not invented by DeepMind. It was invented by Google Brain. And for years, the two organizations operated as parallel research groups under the same corporate roof, with explicit institutional separation and what insiders describe as "barely concealed mutual contempt." A former DeepMind researcher later said that colleagues "got in trouble for collaborating on a paper with Brain because the thought was like, 'why would you collaborate with Brain?'" The intellectual divide was not just organizational. It was philosophical.

The Deep Disagreement

Hassabis understood the transformer. His position was not ignorance — it was a principled disagreement about what intelligence actually requires.

His argument, stated consistently across interviews through this period, was that transformers were "almost unreasonably effective for what they are" — but that they probably weren't sufficient for AGI. What they lacked was what he called a world model: an internal causal representation of reality that would allow an agent to plan, reason counterfactually, understand physical consequence, and generalize to genuinely novel situations. LLMs, in his view, were extraordinarily powerful pattern completers. They learned statistical regularities in language. But statistical regularity in language is not the same as understanding the world that language describes.

The "Reward is Enough" thesis was the same argument from the other direction: intelligence is what you get when you optimize toward reward in a rich environment. Prediction of the next token — which is what language model training amounts to — is not that. It is something else: sophisticated, useful, even astonishing. But not the path to AGI.

This conviction was coherent. It was defensible. It was consistent with DeepMind's track record. And it cost the lab the years between 2018 and 2022, during which OpenAI quietly built the scaling infrastructure, the dataset pipelines, and the RLHF training techniques that turned transformers from a research result into ChatGPT.

When Mallaby presses Hassabis on this, the admission is partial but real. "We've always had amazing frontier work on self-supervised and deep learning," Hassabis said in one interview, "but maybe the engineering and scaling component — that we could've done harder and earlier." That is, in its careful hedging, an acknowledgment of a strategic miscalculation at institutional scale.

Gato and the Convergence

In May 2022, six months before ChatGPT, DeepMind published "A Generalist Agent" — introducing a model called Gato. A single 1.2-billion-parameter transformer, with one set of weights, performed 604 distinct tasks: playing Atari games, captioning images, engaging in dialogue, stacking blocks with a physical robot arm, navigating 3D environments. The central technical insight was serialization: every modality — images, robot joint angles, text, game controllers — was converted into the same format, a flat sequence of tokens. Then the transformer predicted the next token, exactly as a language model does. The robot arm and the Atari game and the captioning task were, to the network, the same kind of prediction problem.
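The serialization idea can be illustrated in a few lines: heterogeneous inputs (text, quantized continuous values such as joint angles) flattened into one integer token stream that a single sequence model could predict. The vocabulary layout, bin count, and example values below are illustrative, not Gato's actual scheme.

```python
TEXT_VOCAB = {c: i for i, c in enumerate("abcdefghijklmnopqrstuvwxyz ")}
N_TEXT = len(TEXT_VOCAB)
N_BINS = 32                      # quantization bins for continuous values

def tokenize_text(s):
    """Map characters to integer token ids."""
    return [TEXT_VOCAB[c] for c in s]

def tokenize_continuous(values, lo=-1.0, hi=1.0):
    """Quantize floats (e.g. joint angles) into bins offset past the text ids,
    so text tokens and control tokens share one vocabulary without colliding."""
    span = hi - lo
    return [N_TEXT + min(int((v - lo) / span * N_BINS), N_BINS - 1) for v in values]

# One toy "episode": an instruction followed by a robot-arm observation
stream = tokenize_text("stack the block") + tokenize_continuous([0.1, -0.5, 0.9])
assert all(isinstance(t, int) for t in stream)   # one flat token sequence
```

Once everything is an integer in a shared vocabulary, "predict the next token" is the only objective the network ever sees, regardless of which modality the tokens came from.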

Gato was DeepMind finally integrating the transformer fully into its generalist agent work. It was, in a sense, the vindication of both camps simultaneously: the RL generalization hypothesis (one system, many tasks) realized through the transformer architecture (universal sequential prediction).

The performance was competent, not superhuman — on many tasks, Gato performed above 50 percent of expert-level benchmarks, impressive in breadth but outclassed by specialists in depth. Critics argued that being mediocre at many things was not the flexible intelligence that mattered. But the architectural demonstration was real: one set of weights could span robot control, image understanding, language, and game-playing simultaneously.

Then ChatGPT launched. And the world discovered that a transformer didn't need to control robot arms or play Atari to produce something that felt, to hundreds of millions of people, like genuine general intelligence.

DeepMind had invented the generalist agent thesis. Google Brain had invented the architecture. OpenAI had combined them — RL from human feedback, applied to a scaled transformer — and shipped it to the public first. The intellectual synthesis happened outside the building where the two halves had spent nearly a decade refusing to collaborate.


Chapter 12: On Language and Nature

In September 2016, a DeepMind team led by Aäron van den Oord published a paper describing a system that could synthesize human speech from raw audio waveforms. WaveNet reduced the gap between state-of-the-art text-to-speech and actual human speech quality by more than 50 percent in blind listening tests. It could also generate music — piano pieces, unbidden, emerging from the same architecture used for speech.

The result was striking. What made it significant was the method.

WaveNet discarded everything that speech synthesis had accumulated over decades: the phoneme dictionaries, the acoustic vocoders, the signal-processing models derived from first principles of how the human vocal tract works. Instead, it modeled a raw audio waveform — 16,000 samples per second — one timestep at a time, each sample conditioned on everything that came before. The technical innovation was dilated causal convolutions: a way of stacking convolutional layers with exponentially increasing gaps between them, so the model's effective window over time grew exponentially with depth. The result: a system that could capture the long-range temporal dependencies of speech without ever being told what speech was.
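The receptive-field arithmetic behind dilated causal convolutions is worth making concrete. With kernel size 2 and dilations doubling at each layer (1, 2, 4, ...), each layer adds `dilation` timesteps of context, so the window over the past grows exponentially with depth. The layer counts below are illustrative, not WaveNet's actual configuration.

```python
def receptive_field(dilations, kernel_size=2):
    """Timesteps of past context visible to the top layer of a stack of
    dilated causal convolutions."""
    return 1 + sum((kernel_size - 1) * d for d in dilations)

# 10 layers with doubling dilations: 1, 2, 4, ..., 512
dilations = [2 ** i for i in range(10)]
print(receptive_field(dilations))  # 1024 timesteps from only 10 layers
```

A plain (undilated) stack of the same depth would see only 11 timesteps; the exponential dilation schedule is what makes long-range structure in 16kHz audio reachable at all.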

The researchers themselves were candid about their surprise: "The fact that directly generating timestep per timestep with deep neural networks works at all for 16kHz audio is really surprising." They had not derived WaveNet from a theory of speech. They had applied a general framework for sequential prediction to raw data and discovered it worked better than decades of engineered acoustic models.

The Waveform and the Sequence

The principle WaveNet demonstrated was not specific to audio. Van den Oord had established it first for images, treating each pixel as a value to be predicted from all previous pixels, in a paper called PixelRNN. The same factorization — the joint probability of any high-dimensional signal expressed as a product of conditional probabilities over its elements, in order — worked for images, for audio, and, as the transformer paper would show the following year, for language.
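The factorization described above can be stated compactly. For any ordered signal $x = (x_1, \ldots, x_T)$ — pixels, audio samples, or words — the joint probability decomposes exactly, with no approximation, into a chain of next-element predictions:

```latex
p(x) = \prod_{t=1}^{T} p(x_t \mid x_1, \ldots, x_{t-1})
```

Modeling the full joint distribution therefore reduces to learning one conditional: predict the next element given everything before it. That single objective is what PixelRNN, WaveNet, and later the language models all share.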

The deeper claim was epistemological: natural signals, however complex, contain learnable statistical structure. You do not need to understand the domain. You need enough data and a network with sufficient capacity to model sequential dependencies. The domain knowledge that engineers had spent careers encoding into AI systems — the phonological rules, the acoustic physics, the grammatical structures — turned out to be unnecessary. The structure was in the data.

This insight would eventually reach biology.

A Protein is a Sentence

A protein is, at its most basic level, a string of characters. The twenty standard amino acids are each assigned a single letter — A, C, D, E, F and so on — and a protein sequence is just a string of those letters, typically a few hundred to a few thousand characters long. A protein with 300 amino acids is a sentence 300 characters long in a 20-letter alphabet.

More importantly, it is an information-complete specification. This is Anfinsen's dogma — the insight for which Christian Anfinsen received the 1972 Nobel Prize in Chemistry: the complete three-dimensional structure of a protein, and therefore its biological function, is entirely determined by its amino acid sequence. Nothing else is required. The sequence is not a summary of the protein; it is the protein's full specification, encoded in linear form. If you knew how to read the sequence, you could reconstruct everything about the molecule.

Researchers in the late 2010s began noticing a striking parallel with natural language processing. The transformer architecture, trained on massive corpora via masked language modeling — mask a random word, predict it from the surrounding context — learned representations that encoded rich semantic structure without any supervision about what meaning was. The same technique applied to protein sequences — mask a random amino acid, predict it from the rest of the chain — produced representations that encoded biochemical structure without any supervision about what structure was. Better language modeling accuracy predicted better structural information in the representations. The scaling law for protein models was the same as the scaling law for text models.
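The masked-modeling objective transfers to proteins almost verbatim, which a toy sketch makes clear: hide one residue, then ask a model to predict it from the surrounding context. The "model" below is a placeholder frequency baseline, and the sequence is made up, not a real protein — the point is the shape of the training objective, not the predictor.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # the 20-letter alphabet

def mask_one(sequence, rng):
    """Return (masked sequence, masked position, true residue)."""
    i = rng.randrange(len(sequence))
    return sequence[:i] + "_" + sequence[i + 1:], i, sequence[i]

def baseline_predict(masked):
    """Stand-in predictor: guess the most frequent visible residue."""
    visible = [c for c in masked if c != "_"]
    return max(set(visible), key=visible.count)

rng = random.Random(0)
seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"   # illustrative example sequence
masked, pos, truth = mask_one(seq, rng)
guess = baseline_predict(masked)
print(masked, "->", guess)
```

A real protein language model replaces `baseline_predict` with a transformer trained over millions of sequences; the loss is still just how often the hidden residue is recovered, and the structural information emerges as a byproduct of getting that prediction right.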

The biological sequence database was a corpus. The evolutionary record of which mutations co-occurred across millions of related species was a signal. Correlated mutations between positions in a sequence turned out to encode physical proximity in the folded structure: a mutation at position 50 that disrupts folding is often compensated by a co-mutation at position 73, because the two residues are in physical contact. Enough sequences, enough attention to co-evolutionary patterns, and the 3D structure began to emerge from the 1D string — not because the model understood chemistry, but because the statistical regularities in sequence space were sufficient.

The Day After Seoul

Hassabis has told the story precisely. He started the protein folding project "roughly the day we came back from the AlphaGo match in Seoul" — after AlphaGo's 4-1 victory over Lee Sedol in March 2016. While watching AlphaGo play, he had been reminded of FoldIt, the 2008 protein folding game. He realized that the machinery DeepMind had built for Go — the search engine for navigating enormous combinatorial spaces, the learning systems for evaluating positions — was essentially general-purpose. Protein conformation space is precisely that kind of space: astronomically large, with a correct answer that can be evaluated, and with accumulated data providing a training signal.

"We started off with games because it was more efficient to develop AI and test things out," Hassabis said later. "But ultimately that was never the end goal." AlphaGo was a proof of concept. AlphaFold was the first deployment of that proof of concept at the frontier of science.

John Jumper joined DeepMind in 2017. Hassabis promoted him to lead AlphaFold 2 development in July 2018 — specifically because Jumper's background bridged "protein physics and machine learning," trained as a computational chemist who also understood deep learning. The architecture Jumper designed, the Evoformer, used transformer-style self-attention over both the sequence axis and the pairwise residue-residue axis simultaneously, treating the multiple sequence alignment of evolutionarily related proteins as a corpus in which evolutionary co-variation encoded physical contacts.

At CASP13 in December 2018, AlphaFold 1 won the protein structure prediction competition by a wide margin. Mohammed Al-Quraishi, a computational biologist whose field had spent careers on the problem, wrote a blog post with the title "What just happened?" He was not being rhetorical. Academic protein folding groups that had spent decades hand-crafting algorithms had been decisively beaten by a machine learning team two years into the problem.

The comment that captures the moment came from the structural biology community afterward: DeepMind had done to protein folding what DeepMind had done to Go.

The Bitter Lesson and Its Complications

On March 13, 2019, Richard Sutton — one of the founding theorists of reinforcement learning, then at the University of Alberta — published a short essay on his personal website titled "The Bitter Lesson." Roughly 1,400 words. Massively read.

The argument was simple and sweeping: "The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin." The history of AI, Sutton argued, followed a consistent pattern: human researchers encoded domain knowledge into their systems; those systems were eventually surpassed by simpler approaches that scaled compute. Chess, Go, speech recognition, computer vision — in every case, the brute-force scaled approach eventually won. The lesson was bitter because it implied that the expensive human insights researchers had spent their careers developing were, in the long run, the wrong strategy.

DeepMind sat in a complicated position on this argument. In one reading, AlphaFold vindicated the bitter lesson: scale and learning beat decades of hand-crafted structural biology. In another reading, it was a refutation: AlphaFold 2's Evoformer incorporated significant physical priors about protein geometry, including an "Invariant Point Attention" module that respects the 3D symmetries of space. The AlphaFold team did not apply a general sequence model naively to proteins; they designed an architecture with protein-specific inductive biases built in.

Hassabis, asked whether he agreed with the bitter lesson, typically gave the same nuanced answer: scale matters enormously, but you also need the right architecture. His public position has been that current AI systems, while impressive, "reason inconsistently — solving graduate-level problems one moment and failing basic logic the next" — and that this failure mode indicates that something beyond statistical regularity in language data is required for true general intelligence. The world models are missing.

Scaling Laws and Their Correction

In January 2020, a team at OpenAI led by Jared Kaplan published "Scaling Laws for Neural Language Models." The finding: language model performance follows smooth power-law relationships with model size, dataset size, and compute budget, across more than seven orders of magnitude. The loss declined predictably as you scaled any of these dimensions. The optimal strategy for a fixed compute budget, the paper argued, was to train the largest possible model, even if that meant stopping well short of convergence.

GPT-3 followed this prescription. 175 billion parameters, 300 billion training tokens. The result — a model that could write essays, answer questions, and complete code — captured the world's attention in a way that no DeepMind research result had.

DeepMind's response was Chinchilla. Published in March 2022, the paper trained more than 400 language models at varying sizes and dataset sizes and found that the Kaplan prescription was wrong. The compute-optimal point required scaling model size and training tokens equally — roughly 20 tokens per parameter, not the 1.7 tokens per parameter that GPT-3 used. Under this prescription, GPT-3 was dramatically undertrained. A model four times smaller, trained on four times as much data, would outperform it.

To prove the point, DeepMind trained Chinchilla: 70 billion parameters on 1.4 trillion tokens, using the same compute budget as Gopher, their 280-billion-parameter model. Chinchilla outperformed Gopher, GPT-3, and every other frontier LLM on every benchmark tested. The MMLU accuracy improvement over Gopher alone was 7.5 percentage points — from a model a quarter the size.
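The Chinchilla prescription can be checked with back-of-the-envelope arithmetic, using the standard approximation C ≈ 6·N·D for training compute (C in FLOPs, N parameters, D tokens) and the roughly 20-tokens-per-parameter compute-optimal ratio. The figures below are rough illustrations under those assumptions, not values from the paper.

```python
def compute_optimal(c_flops, tokens_per_param=20.0):
    """Split a compute budget into (params, tokens) with D = 20*N and C = 6*N*D."""
    n = (c_flops / (6.0 * tokens_per_param)) ** 0.5
    return n, tokens_per_param * n

# Gopher-scale budget: 280B parameters trained on 300B tokens
c = 6 * 280e9 * 300e9
n_opt, d_opt = compute_optimal(c)
print(f"optimal: {n_opt / 1e9:.0f}B params, {d_opt / 1e12:.1f}T tokens")
```

Under these assumptions the same compute that trained 280B-parameter Gopher is better spent on a model of roughly 65B parameters fed roughly 1.3T tokens — strikingly close to Chinchilla's actual 70B parameters and 1.4T tokens, which is exactly the point the paper was making.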

Chinchilla was not a rejection of scaling. It was a more rigorous understanding of scaling — a correction to OpenAI's prescription from the lab that had, in Hassabis's view, always taken compute efficiency more seriously than compute volume. The implicit message was competitive: DeepMind's researchers understood the science of scaling better than the labs racing to train the biggest models.

The Question That Won't Close

The chapter's title is deliberate. Language and nature are not two separate domains that happen to be connected by a technical coincidence. They are, from the perspective of the framework DeepMind spent the 2010s developing, the same problem — the problem of learning the structure that is latent in any sequential data, whether the data is speech waveforms, amino acid chains, or text corpora.

WaveNet established that audio is a learnable sequence. The transformer established that language is a learnable sequence. Protein language models established that biology is a learnable sequence. AlphaFold established that the learnable structure in biological sequences encodes three-dimensional reality with near-perfect accuracy.

What connects them is the question that Hassabis has never fully resolved in public: is this intelligence? His answer — consistent but provisional — is that it depends on what you mean. If intelligence means reliably solving well-defined problems by extracting patterns from training data, then yes, these systems are intelligent. If intelligence means flexible, causal, generalizing, counterfactually reasoning agency that can navigate genuinely novel situations, then the answer is not yet established. The distinction matters because the second kind of intelligence is what makes AlphaFold's protein prediction feel categorically different from GPT-4's confident hallucinations — one system is reliably right within its training distribution; the other is fluently wrong in ways that look correct.

The scaling camp's answer — that the distinction will dissolve as you add more parameters, more data, more compute — is an empirical bet. Hassabis's answer — that the distinction requires architectural advances beyond scaling — is also an empirical bet. Neither has yet been proven. What AlphaFold showed is that at least one frontier scientific problem could be solved by learning from sequence data. What it did not show is whether that approach generalizes to every frontier scientific problem, or only to the class of problems whose answers are fully encoded in their inputs.


Chapter 13: Project Mario

The ethics board had been the crown jewel of DeepMind's acquisition terms. When Hassabis and Suleyman agreed to sell to Google in January 2014, they extracted a condition that no technology acquisition before them had demanded: an independent ethics board with authority to oversee how Google used AI across all its divisions, not just DeepMind. The board was to be convened by January 2016. It would be the institutional guarantee that the technology they were building would not be weaponized, commercialized recklessly, or allowed to concentrate power in ways that undermined the mission.

The board never functioned. Both Google and DeepMind subsequently refused to reveal who sat on it, whether it had ever met, or what it had discussed. One former employee told Mallaby it "never existed, never convened, and never solved any ethics issues." When Hassabis was asked publicly whether the board existed, he said he couldn't confirm or deny it because "it's all confidential." In October 2017, DeepMind launched something called the "DeepMind Ethics & Society" research unit — an internal team studying the social implications of AI. It was explicitly not the oversight body promised in 2014. It was a research group.

This is the context in which Hassabis and Suleyman launched the governance initiative that would consume three years of their lives and accomplish nothing.

The Trigger

The August 2015 SpaceX safety meeting — described in the previous chapter — was the proximate cause. When that meeting dissolved into personal antagonism between Musk and Page, leaving no agreements and no shared framework, Suleyman concluded that informal governance would never work. Structural independence was the only protection that mattered.

He was aided by an unexpected opening. In 2015, Google was reorganizing into Alphabet — spinning out discrete units as semi-independent "bets" (Waymo, Verily, DeepMind). Don Harrison, Google's chief of M&A, suggested to Hassabis and Suleyman that the Alphabet restructuring created a natural path for DeepMind to regain the independence it had sold. The question was whether to take it.

The answer, for Suleyman in particular, was yes. The project was given the internal codename Project Mario. The vision was specific: DeepMind would become a "Global Interest Company" — a company limited by guarantee, issuing no shares, paying no dividends, structured under UK law as a public-benefit institution. Alphabet would continue to finance operations in exchange for exclusive technology licenses. Governance would come from a "3-3-3 board": three seats for DeepMind, three for Alphabet, three for independent members. Any future AGI breakthrough would be controlled by this structure, not by Alphabet's shareholders.

Hassabis framed it in terms that were almost utopian: artificial general intelligence was "too consequential to be left under the sway of a single corporation's shareholders." It was "humanity-sized." The structure had to match the stakes.

The Secret Hedge Fund

The logic of independence required financial self-sufficiency. You could not negotiate independence from the entity writing your paychecks. So, in parallel with the governance talks, Hassabis quietly assembled a team of roughly twenty researchers to solve a different kind of problem: beating the financial markets.

The ambition was specific. The target was Renaissance Technologies — Jim Simons's quant fund, the most successful trading operation in financial history. DeepMind would apply the same deep learning and RL techniques it had used on games and proteins to financial time series. If it worked, the profits would fund independence.

DeepMind also explored a collaboration with BlackRock. The project was never publicly announced. It was never approved by Google, which apparently did not know about it and "panicked over regulatory risks" when the project eventually surfaced internally. It never generated revenue. It was quietly disbanded.

The attempted hedge fund is one of the more remarkable details in Mallaby's account — a reminder that the governance saga involved not just legal negotiations but genuinely covert operations conducted by the people nominally employed by Google to do AI research.

Larry Page, Five Times

By early 2016, Project Mario had moved from vision to negotiation. Hassabis met with Larry Page — then running Alphabet after handing Google to Sundar Pichai — four, then five times to work through the structure. Page was the most sympathetic interlocutor available. He had championed DeepMind's acquisition, he respected Hassabis's science, and he was at least abstractly committed to the idea that DeepMind's mission required unusual governance.

After the fifth round of talks, a formal term sheet was drafted. The Global Interest Company structure was on paper. The 3-3-3 board was specified. The technology licensing agreement between DeepMind and Alphabet was outlined. It was, for a few months in the summer of 2016, something that looked like it might actually happen.

Then Pichai made his move.

The Steelier Side

On November 21, 2016, Google's chief legal officer David Drummond arrived in London. He acknowledged that everyone shared the same AI safety goals. He then said there were "concerns" about the spin-out, and introduced a vague alternative formula — not quite independence, not quite the status quo, undefined in its details. Four days later, Hassabis and Suleyman got Pichai on the phone.

Mallaby writes that Pichai "revealed the steelier side of his personality" in that conversation. His argument was structural and unambiguous: AI was no longer a "moonshot" in the Alphabet sense. It was no longer the right category of thing to spin out as a semi-independent bet alongside Waymo and Verily. AI was now considered strategically central to Google's core products — Search, Cloud, Assistant. It could not be placed under governance structures where Google's interests were merely one-third of the board.

The term sheet was dead.

Hassabis and Suleyman went to Plan B: gather $5 billion in outside investment pledges and use the credible threat of a mass walkout to force Google's hand. If Google would not grant independence voluntarily, perhaps they could make independence less costly than losing the entire DeepMind team.

Asilomar

In January 2017, Suleyman attended the Asilomar AI safety conference. He sat down with Reid Hoffman, the LinkedIn co-founder who had earlier pledged a relatively modest sum to OpenAI for safety reasons. Suleyman made his case: this was the most consequential technology in human history, it should not be controlled by a single corporation, and here was the governance structure to prevent that.

Hoffman agreed on the spot to commit over a quarter of his net worth to the vision — more than $1 billion. One hundred times what he had pledged to OpenAI.

His framing was direct: "This is the most impactful technology of my lifetime... This technology shouldn't be used to entrench a monopoly." The $1 billion was not just a financial commitment. It was the anchor of the leverage strategy — the first and largest piece of the $5 billion that Hassabis and Suleyman needed to make their walkout threat credible.

Aviemore

In June 2017, DeepMind's approximately 500 staff were flown by chartered jet to Aviemore, a resort town in the Scottish Highlands near Balmoral. The company-wide retreat had a specific agenda.

Suleyman took the stage and unveiled a slide titled "DeepMind: A Global Interest Company." The org chart showed DeepMind as independent, connected to Google only by a dotted line representing a technology licensing agreement. Under the structure, Suleyman would lead applied AI folded back into Google proper, while Hassabis would lead a semi-independent AGI research unit answering to a new board. Suleyman had already told his deputies to begin preparing to relocate to California.

Staff were stunned. This was not a discussion. It was an announcement. The independence that had been negotiated for three years was apparently real, apparently imminent, apparently settled.

Ten days later, Google sent back the negotiating documents with red lines throughout. Pichai had not approved the plan announced at Aviemore. The California relocation was cancelled. Suleyman was forced to return to the same 500 people and walk back everything he had told them. The slide about the Global Interest Company was memory-holed.

The Financial Reality

Behind the governance argument was an arithmetic reality that Mallaby does not spare. DeepMind was losing enormous sums of money. In 2019 alone, it lost £477 million — roughly $649 million. Alphabet waived £1.1 billion in accumulated intercompany loans that year. DeepMind's total revenue in 2019 was £266 million, almost entirely from Google paying it for R&D. The argument that DeepMind should be structurally independent was, financially, an argument that Google should subsidize an independent organization whose interests it could not control. Pichai's "steelier side" was, in this light, not an exercise in corporate authoritarianism. It was a reasonable observation about who was writing the checks.

Google Assistant's adoption of WaveNet, the data center cooling AI (which reduced Google's cooling energy bills by 40 percent), the commercial Text-to-Speech API launched in 2018 charging $16 per million characters: none of these were incidental. They were the evidence Google used internally to establish that DeepMind's technology was load-bearing for Google's core products, and therefore could not be placed under governance structures that Google did not control.

What Hassabis Concluded

By April 2021, it was over. At an all-hands meeting, Hassabis told DeepMind staff that the negotiations for independence had definitively ended. DeepMind would remain inside Alphabet under its existing status.

What is most striking is the conclusion Hassabis drew from the experience — a conclusion that represented a near-total reversal of the premise on which the whole effort had been founded. Reflecting on it to Mallaby, he said:

"Safety isn't about governance structures...discussing these things didn't really help. It made it harder to build useful trust, because when you are negotiating a trustless structure, it implies that you can't trust the other person."

Three years of Project Mario had produced no new legal structure, no independent ethics board, a secret hedge fund that was quietly disbanded, a company-wide announcement at Aviemore that had to be retracted, and the departure of DeepMind's most operationally capable co-founder. And at the end of it, Hassabis had concluded that the entire project had been misconceived. The governance structures weren't the point. Trust was the point. And you cannot build trust while negotiating for the structures that would exist in the absence of trust.

Mallaby captures this as the central irony of the DeepMind story: the organization that had extracted the most elaborate safety guarantees of any AI acquisition found that none of those guarantees held, and concluded from this not that better guarantees were needed, but that guarantees themselves were the wrong approach. Safety, in Hassabis's revised view, had to be built into the technology. It couldn't be bolted on through org charts.

In April 2023, DeepMind and Google Brain were merged into a single unit — Google DeepMind — with Hassabis as CEO. The merger was framed as enabling faster progress. It was also, for anyone who had been paying attention, the formal end of the independence that Hassabis and Suleyman had spent the better part of a decade trying to preserve. DeepMind was moved from "Other Bets" in Alphabet's financials into corporate costs — reflecting not the side project it had once been, but the strategic center it had become.

Suleyman, who had launched the whole thing, was by then running Microsoft's AI division.


Chapter 14: Fermat for Biology

In 1637, the French mathematician Pierre de Fermat scrawled a note in the margin of his copy of Arithmetica. He had found a proof, he claimed, that no three positive integers can satisfy a^n + b^n = c^n for any integer n greater than 2. The margin, he added, was too small to contain it. He died in 1665 without ever writing it down.

The proof took 358 years to find. Andrew Wiles published it in 1995, using mathematics that Fermat had no access to: elliptic curves, modular forms, a 200-page argument that few people on Earth could follow. The problem had defeated generations of mathematicians who brought increasingly powerful tools to bear on it, and then collapsed, apparently overnight, before an approach that felt — from the outside — almost like cheating.
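Fermat's statement is simple enough to probe directly. A brute-force search over a small range (purely illustrative; the search limit and exponent range here are arbitrary choices, and no finite search proves anything) turns up no solutions, consistent with the theorem Wiles finally proved:

```python
# Search for counterexamples to Fermat's Last Theorem in a small range:
# positive integers a, b, c and n > 2 with a^n + b^n == c^n.
limit = 60
found = []
for n in range(3, 6):                      # n = 3, 4, 5
    nth_powers = {c ** n: c for c in range(1, limit)}
    for a in range(1, limit):
        for b in range(a, limit):          # b >= a avoids duplicate pairs
            c = nth_powers.get(a ** n + b ** n)
            if c is not None:
                found.append((a, b, c, n))
print(found)  # [] -- no solutions in range, as the theorem guarantees
```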

The protein folding problem has explicitly been called Fermat's Last Theorem of biology. The parallel is apt in a specific way that goes beyond "hard problem, elegant statement." Both were deceptively simple to state. Both were ferociously difficult to solve once you tried. Both generated decades of failed attempts and the gradual accumulation of partial insights that felt like progress but never arrived at the answer. And both were ultimately cracked by approaches that sidestepped the original mechanism entirely — Wiles using tools Fermat never knew; DeepMind using patterns in existing data that said nothing directly about why proteins fold the way they do.

The Problem Behind the Problem

The modern form of the protein folding problem has its origin in two findings separated by a decade.

In 1962, Christian Anfinsen at NIH showed that an unfolded enzyme — ribonuclease A — would spontaneously refold itself into its active shape when returned to normal conditions. This was the thermodynamic hypothesis: the three-dimensional structure of a protein is entirely determined by its amino acid sequence. The sequence is the full specification. Everything else — the folded shape, the function, the interactions with other molecules — follows from it. For this insight, Anfinsen received the Nobel Prize in Chemistry in 1972.

The implication was staggering and frustrating in equal measure. If a protein always folds to the same shape, and that shape is encoded in its sequence, then in principle you should be able to predict the shape from the sequence — a pure computational problem. It had the same deceptive simplicity as Fermat: the statement is obvious. The difficulty is everything else.

Cyrus Levinthal, a biophysicist at MIT, quantified the difficulty in 1969. A typical protein of 100 amino acids has roughly three possible rotational states per bond along its backbone. That gives approximately 3^100 possible conformations — roughly 10^47. Sampling them at picosecond speeds (as fast as molecular motion can occur), a brute-force search would take longer than the age of the universe. For larger proteins, the numbers become cosmological: estimates reach 10^300 conformations.
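Levinthal's arithmetic is easy to reproduce. A minimal sketch, using the assumptions in the text (100 residues, 3 states per bond, one conformation sampled per picosecond):

```python
# Levinthal's back-of-the-envelope: exhaustive search is hopeless.
conformations = 3 ** 100                 # ~5.2e47 possible conformations
seconds_per_sample = 1e-12               # one conformation per picosecond
search_time_s = conformations * seconds_per_sample

age_of_universe_s = 13.8e9 * 365.25 * 24 * 3600   # ~4.35e17 seconds

print(f"{conformations:.2e} conformations")
print(f"brute-force search: {search_time_s:.2e} s,"
      f" {search_time_s / age_of_universe_s:.1e}x the age of the universe")
```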

Yet proteins fold correctly in milliseconds to microseconds in the cell. This is Levinthal's paradox: they cannot possibly be doing a random search. The folding pathway must be guided by an energy landscape that funnels the sequence rapidly toward its minimum-energy configuration. Identifying and computing that landscape from first principles was the challenge that had absorbed structural biology for fifty years.

The problem had a formal proving ground: CASP, the Critical Assessment of Protein Structure Prediction, running biennially since 1994. Participants received amino acid sequences of proteins whose structures had been experimentally determined but not yet published. They submitted predicted structures. Assessors measured how close the predictions were to the true experimental shapes. For twenty-four years, progress was real but incremental — a slow accumulation of partial wins, no complete solution.

What Just Happened?

At CASP13, held in Cancun in December 2018, AlphaFold 1 won. Andrew Senior and John Jumper led the team. AlphaFold 1's key architectural insight — developed by Senior's group — was to predict not the full three-dimensional structure directly but a probability distribution over pairwise distances between all residues in the chain. Those distance distributions were then used as constraints to find the most consistent 3D shape. This was not brute-force search and it was not Anfinsen's biophysics. It was statistical inference over the evolutionary record of mutations across millions of related proteins.
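The distogram idea can be sketched in a few lines. Everything below is invented for illustration (random stand-in "network outputs", 64 distance bins, 5 residues); it shows only the shape of the approach: a probability distribution over binned distances for each residue pair, reduced to pairwise constraints for a later 3D optimization.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, n_bins = 5, 64
bin_centers = np.linspace(2.0, 22.0, n_bins)      # distance bins in Angstroms

# Stand-in for the network's output: one logit vector per residue pair (i, j).
logits = rng.normal(size=(n_res, n_res, n_bins))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)  # softmax

# Reduce each distribution to a usable geometric constraint -- here the
# expected inter-residue distance; a 3D structure is then optimized to
# satisfy these pairwise targets as consistently as possible.
expected_dist = (probs * bin_centers).sum(axis=-1)   # shape (n_res, n_res)
print(expected_dist.shape)
```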

The result at CASP13: AlphaFold 1 predicted high-accuracy structures for 24 of 43 free-modeling targets, versus 14 for the second-best method. Mohammed Al-Quraishi, a computational structural biologist who had spent years building his own prediction program, wrote a blog post with a title that captured the field's reaction: "What just happened?"

He had not expected this result until the late 2020s. Academic protein folding groups that had spent careers on hand-crafted algorithms had been beaten by a team that had been working on the problem for roughly two years.

Hassabis looked at the CASP13 result and saw something else. One team member reportedly wanted to declare victory and move on. Hassabis refused. "Winning wasn't the point. Solving protein folding was." The gap between AlphaFold 1's best result and true experimental accuracy was still visible. He put the team back to work.

CASP14

CASP14 was held virtually in November 2020, a COVID year. About 100 protein structures served as targets. The scores came back.

AlphaFold 2's median GDT_TS, a score based on the percentage of residues predicted within set distance thresholds of their true positions, was 92.4. A score above 90 is informally considered competitive with experimentally determined structures. AlphaFold 2 had achieved, for roughly two-thirds of targets, accuracy indistinguishable from experimental error. The average error in atomic positions was approximately 1.6 Ångströms, roughly the width of one atom.
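GDT_TS itself is a simple average: the percentages of residues within 1, 2, 4, and 8 Å of their experimental positions, averaged over the four cutoffs. A minimal sketch (the real assessment also searches over superpositions, which this omits, and the per-residue errors here are invented):

```python
def gdt_ts(errors):
    """GDT_TS from per-residue prediction errors (in Angstroms), assuming the
    predicted and experimental structures are already optimally superposed."""
    cutoffs = (1.0, 2.0, 4.0, 8.0)
    n = len(errors)
    pct_within = [100.0 * sum(e <= c for e in errors) / n for c in cutoffs]
    return sum(pct_within) / len(cutoffs)

# 50 residues: most predicted very accurately, a few poorly.
errors = [0.5] * 40 + [1.5] * 5 + [3.0] * 3 + [9.0] * 2
print(gdt_ts(errors))  # 90.5
```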

AlphaFold 2 won best predictions on 88 of 97 targets. In the formal z-score ranking measuring statistical deviation from baseline performance, AlphaFold 2 scored 244.0. The second-best group scored 90.8 — less than half.

John Moult, who had co-founded CASP and worked on protein folding for nearly his entire career, said: "This is a big deal. In some sense, the problem is solved."

Venki Ramakrishnan, a Nobel Laureate who had spent years on ribosome structure and was President of the Royal Society, called it "a stunning advance... decades before many people in the field would have predicted."

Andrei Lupas, Director of the Max Planck Institute for Developmental Biology in Tübingen and a CASP14 assessor, said: "It's a game changer. This will change medicine. It will change research. It will change bioengineering. It will change everything."

His personal experience was more vivid than the summary. For nearly a decade, Lupas had been trying to solve the structure of a particular membrane-signaling protein using X-ray crystallography, and had failed. He was given access to AlphaFold 2 before the public announcement. "The correct structure just fell out within half an hour," he said. "It was astonishing."

Mohammed Al-Quraishi's second blog post had a different title: "AlphaFold2 @ CASP14: It feels like one's child has left home." He wrote that he had "never in my life expected to see a scientific advance so rapid" and that AlphaFold 2 represented "a seismic and unprecedented shift so profound it literally turns a field upside down overnight." The title captured an ambivalence that ran through the structural biology community: the achievement was universally acknowledged as extraordinary; a life's work had been rendered, in some sense, unnecessary. The child had grown up faster than anyone expected, and left.

The Open Database

On July 15, 2021, DeepMind published the AlphaFold 2 paper in Nature and simultaneously launched the AlphaFold Protein Structure Database, built jointly with EMBL-EBI. The initial release covered approximately 365,000 structures: the complete human proteome and 20 model organisms — essentially every protein that researchers worked with most frequently.

Before AlphaFold, the entire Protein Data Bank — assembled over fifty years through painstaking X-ray crystallography, cryo-electron microscopy, and NMR spectroscopy — contained approximately 180,000 structures. A single day's release doubled that number.

In July 2022, the database expanded to cover 200 million proteins from over a million species — essentially the entire known protein universe, every sequenced organism on Earth. Over three million researchers in more than 190 countries have since used it, including more than a million users in low- and middle-income countries who had never had access to the structural biology infrastructure required for experimental determination. Research that had previously required years of laboratory work could now begin from an AlphaFold structure in hours.

The downstream impacts have been specific and tangible. The Oxford lab that works on malaria vaccines used AlphaFold to determine the first full-length structure of a key surface protein on malaria parasites — revealing exactly how transmission-blocking antibodies attach to it and unlocking a vaccine design that contributed to the WHO-recommended R21/Matrix-M malaria vaccine in 2023. A bacterial protein structure that had resisted identification for a decade — central to understanding antimicrobial resistance — was solved in approximately thirty minutes. The nuclear pore complex, the gatekeeper controlling what enters and exits the cell nucleus and a target for multiple diseases, produced an almost-complete structural model through a combination of AlphaFold and cryo-EM. The drugs-for-neglected-diseases pipeline has expanded, applying AlphaFold structures to Chagas disease and leishmaniasis.

October 9, 2024

The Nobel Foundation had difficulty tracking down Demis Hassabis's contact information in advance of the announcement. He found out about the prize approximately twenty minutes before it was made public.

In a telephone interview recorded immediately after, he said: "It's unbelievably special... it's actually really surreal... it hasn't really sunk in. I couldn't really think at all, to be honest. My mind went blank."

Later: "It's the big one really."

John Jumper, at 39, became the youngest Nobel laureate in Chemistry in seventy years. His immediate reaction: "It's absolutely extraordinary." In a fuller statement, he described what had driven him: "We could draw a straight line from what we do to people being healthy because of what we learn about biology in the cell and everything else, and it's just extraordinary." His path to the prize had been accidental — he had started a physics PhD at Vanderbilt, found no joy in it, left, worked writing programs to model proteins, then returned for a chemistry PhD at the University of Chicago, calling himself an "accidental chemist." He had joined DeepMind in 2017 essentially betting his career on the idea that machine learning would crack biology's central mystery. The Nobel came seven years later.

The Chemistry prize was shared with David Baker at the University of Washington, who received half for the inverse achievement: designing entirely new proteins — with no evolutionary precedent — that fold into specified shapes with atomic precision. On being paired with DeepMind's team, Baker said: "Rather than competitors, I really would say they've been great inspirers about the power of deep learning."

The 2024 Nobel season was notable in another direction: Geoffrey Hinton shared the Physics prize for his foundational work on neural networks. The same year, AI won both the Physics Nobel and the Chemistry Nobel. The committee chair called AlphaFold 2 "an ingenious piece of neural network design." What began in Hassabis's reading of Ender's Game, in his study of the hippocampus, in his decision not to take a video game job — had ended in Stockholm.

Mallaby's framing of the chapter gives it its name. Protein folding was not a puzzle that could be solved the way it was stated. Like Fermat's margin note, it required tools that didn't yet exist when the problem was first posed. The solution, when it came, arrived not from the direction the field had been looking, but from an adjacent discipline, through methods that bypassed the question rather than answering it. And it arrived decades before the people who had spent their careers on it believed it could.


Chapter 15: The Power and the Glory

On the evening of December 10, 2024, at the Konserthuset in Stockholm, Demis Hassabis and John Jumper received their Nobel medals and diplomas from King Carl XVI Gustaf of Sweden. The concert hall was full. The ceremony was broadcast internationally. Two days earlier, Hassabis had delivered his Nobel lecture at the Aula Magna of Stockholm University, titled "Accelerating Scientific Discovery with AI." He described signing the Nobel Foundation's guest book afterward as a "full circle" moment — as a student, he had watched The Race for the Double Helix, and now his name would sit beside the scientists he had spent his life reading.

The formula he had used for thirty years surfaced again in the lecture. Step one, solve intelligence. Step two, use it to solve everything else.

Hassabis had been saying this since before anyone took him seriously. He had said it when he was raising $2.3 million from Peter Thiel and Luke Nosek on the strength of a chess game. He had said it in the acquisition negotiations with Larry Page. He had said it in the years when DeepMind's annual losses exceeded its revenue by hundreds of millions of pounds, underwritten by a company that needed to see a return. He said it now at a podium in Stockholm, in front of the same scientific establishment that had spent decades ignoring the field of artificial intelligence as not quite rigorous enough for proper science.

The Nobel Prize in Chemistry was, among other things, the scientific establishment's formal acknowledgment that it had been wrong.

The Debate the Nobels Opened

The 2024 Nobel season was unlike any before it. Geoffrey Hinton shared the Physics Prize for his foundational work on neural networks. Hassabis and Jumper shared the Chemistry Prize for AlphaFold. Artificial intelligence had, in a single October week, won two of the most prestigious awards in science.

The reaction split along predictable lines.

Andrei Lupas, whose decade-long unsolvable membrane protein had yielded its structure to AlphaFold in half an hour, called it "a game changer." Venki Ramakrishnan, a Nobel laureate himself, called it "a stunning advance." The structural biology community — the people who had most directly benefited — was largely unambiguous.

The physics community was more divided. Jonathan Pritchard of Imperial College London wrote on social media that he was "speechless," struggling to see how the Hinton prize constituted "a physics discovery." Sabine Hossenfelder described machine learning as belonging to computer science. Wendy Hall, a computer scientist herself, suggested the committee was "creative" in routing the prize through physics in the absence of a Nobel for computing. The subtext was pointed: if AI deserved the Nobel, there was no clean category for it, and the committees were improvising.

The deeper argument was philosophical. A paper in Communications Biology published around the time of the prize acknowledged AlphaFold's "huge impact" and then noted that the protein folding problem "cannot be considered solved" — at least not in the sense of understanding the mechanism. AlphaFold predicted accurate structures without revealing why proteins fold as they do. The criticism was precise: the system succeeded by learning patterns from the existing experimental record, not by discovering the underlying physics. Andrei Lupas's decade-long problem had been solved. Whether the folding process had been understood was a different question.

This is a debate that runs directly through Hassabis's stated philosophy. He has argued consistently that DeepMind's goal was not to engineer mimicry but to produce genuine understanding — to build AI that could function as a scientist, not just a predictor. AlphaFold was celebrated as a vindication of that approach. Critics noted it also looked, from a certain angle, like an extremely sophisticated pattern-matcher that had learned to interpolate between known structures rather than derive principles from first principles. Whether that distinction matters — whether there is a meaningful difference between "learning the pattern" and "understanding the mechanism" when the outputs are indistinguishable — is a question that doesn't have a clean answer yet.

A Modern Bell Labs

Hassabis had founded DeepMind with an explicit institutional model: Bell Labs. The research division of AT&T, operating from 1925 to 1984 under the shelter of the Bell System's monopoly, produced ten Nobel Prizes, five Turing Awards, and the transistor, the laser, the Unix operating system, information theory, and cellular telephony. Its researchers had the security of permanent employment, no obligation to ship products, and access to the best colleagues in their fields. They pursued curiosity where it led.

Hassabis wanted to rebuild this in London, funded by Google's resources rather than a monopoly franchise. The formula was the same: world-class researchers, mission-level purpose, freedom to work on problems that mattered over time horizons that commercial organizations could not tolerate.

The Bell Labs analogy cuts in more than one direction. Bell Labs collapsed when the AT&T breakup in 1982 exposed it to competitive pressure. The research culture it had built over six decades was dismantled within years once it had to justify itself commercially. The institution that had given the world the transistor could not survive the loss of its structural shelter.

The ChatGPT moment in November 2022 was DeepMind's AT&T breakup. Suddenly the shelter of Google's patience — the implicit deal that DeepMind could pursue fundamental research as long as it remained scientifically distinguished — was replaced by competitive pressure. Pichai declared a Code Red. The merge with Google Brain was announced. Hassabis, now CEO of a 7,600-person organization, found himself speaking to Pichai "multiple times daily about model architecture and competitive intelligence" — a rhythm, Mallaby notes, that would have been unimaginable three years earlier when he ran a semi-autonomous research lab that published papers but shipped nothing.

He said: "I wanted to be like a modern day Bell Labs fostering exploratory innovation, rather than merely scaling out what's known today." After 2022, he also said: "We've had to return to almost our startup or entrepreneurial roots — be scrappier, be faster, ship things really quickly."

Both things were true at the same time.

The AlphaFold 3 Contradiction

In May 2024, five months before the Nobel announcement, DeepMind published AlphaFold 3 in Nature. The new system could predict interactions between proteins and other molecules — DNA, RNA, small-molecule drug candidates — a major advance for drug discovery. The paper was accompanied by significant scientific fanfare.

It was not accompanied by the code.

Unlike AlphaFold 2 — which had been released fully open source, which was what the Nobel Committee cited, which had been used by over three million researchers in 190 countries — AlphaFold 3 was available only through a restricted web server. Initially ten queries per day, later twenty. Predictions involving novel drug-like molecules were explicitly prohibited.

The reason was commercial. Isomorphic Labs, DeepMind's drug-discovery spinout, had been built on AlphaFold technology and had secured partnerships with Eli Lilly and Novartis worth $3 billion combined. Releasing AlphaFold 3 fully would have handed competitors the same tool. Pushmeet Kohli, DeepMind's head of AI science, stated the position plainly: "We have to strike a balance between making sure that this is accessible and has the impact in the scientific community as well as not compromising Isomorphic's ability to pursue commercial drug discovery."

Over a thousand scientists signed a protest letter describing the publication as failing "to meet the scientific community's standards of being usable, scalable, and transparent." Reviewers had asked for code access before publication; the requests had been declined. Researchers described getting access to a web server version but being unable to test the method's claims. Nature accepted the paper anyway.

Six months later — one month after the Nobel for the open-source predecessor — the code was released, but only for non-commercial use. The weights were available upon request. The commercial restrictions remained in place.

The sequence is Mallaby's material in miniature. The Nobel Prize honored the values that the AlphaFold 3 publication had already begun to retreat from. The prize celebrated the old DeepMind — the one that released its work to the world and measured success in scientific impact. AlphaFold 3 showed that the new DeepMind — embedded in Google's commercial ecosystem, running a drug-discovery spinout, operating under quarterly competitive pressure — made different choices.

Two DeepMinds

The chapter's title comes from the gap between these two things: the power, which is real and growing and now Nobel-certified, and the glory, which was earned under conditions that no longer fully apply.

AlphaFold 2's training cost was under $1 million. The combined annual AI infrastructure investment of Big Tech in 2025 exceeded $250 billion — a ratio of roughly 75 to 1 between corporate investment and federal science funding. The researchers who built AlphaFold — who worked on protein folding for years under minimal commercial pressure, in a culture that measured success by what it published — are a different population from the researchers now working on Gemini under competitive pressure, knowing that every failure becomes a front-page story about whether Google has lost the AI race.

Hassabis has been entirely clear-eyed about this. "If I'd had my way," he told one interviewer, "we would have left it in the lab for longer and done more things like AlphaFold, maybe cured cancer or something like that." He was describing his original vision — a CERN-like institution, deliberate, scientific, pursuing AGI over decades — and contrasting it with what the ChatGPT moment forced. He had not chosen the pivot. The competitive landscape had chosen it for him.

The Nobel Prize gave him something in return: political capital. Internally, the prize was partly a shield — a reminder to Google management that the old DeepMind model had produced something unprecedented, that researchers "accustomed to working on protein folding and plasma physics" could not simply be redeployed to build chatbots without loss. Externally, it was the vindication of a decade-long argument that pure science and AI capability were complementary rather than in tension.

Whether that argument holds going forward is the question the prize cannot answer. AlphaFold emerged from conditions — time, autonomy, scientific culture, freedom from commercial deadlines — that are now substantially more constrained. Gemini, DeepMind's competitive response to ChatGPT, is a serious system and an improving one; Gemini 2.5 achieved competitive results on mathematical benchmarks that would have seemed impossible three years earlier. But it emerged from a different process, under different incentives, toward different ends.

Hassabis stood in the Konserthuset in December 2024 and received a medal for work that began the day after he came home from Seoul, when the AlphaGo match was over and he was thinking about what to do next. The condition that made AlphaFold possible — the freedom to ignore commercial relevance, to pursue protein folding because it was important and tractable and worth doing — was already significantly diminished by the time the prize arrived. The power and the glory did not arrive together. The glory arrived after the conditions that produced it had changed.


Chapter 16: RaceGPT

On November 30, 2022, OpenAI made a low-key announcement: a new chatbot, available for free to the public, called ChatGPT. No press event. No keynote. A blog post. The team expected a few thousand curious users.

Within five days, one million people had used it.

Within sixty days, one hundred million had. No consumer application in the history of technology had grown that fast. TikTok had taken nine months to reach a hundred million users. Instagram had taken two and a half years. ChatGPT did it in two months — a number so extreme that investment bank UBS, running the analysis, simply called it "the fastest-growing consumer app in history" and moved on.

The Simple Insight That Changed Everything

The model behind ChatGPT was not OpenAI's most powerful. It ran on GPT-3.5, a system with roughly 175 billion parameters, fine-tuned using a technique called Reinforcement Learning from Human Feedback — RLHF, a method OpenAI had published earlier that year under the name InstructGPT.

The insight behind RLHF was deceptively simple. Earlier language models were trained to predict the next token from internet text. This made them fluent and strange: they completed text in the statistical style of whatever came next on the internet, which included a great deal of misinformation, toxicity, and incoherence. InstructGPT layered a different objective on top: have human raters rank model outputs, train a reward model to predict those rankings, then fine-tune the language model to maximize the learned reward.

The result was startling. A 1.3-billion-parameter InstructGPT model — fine-tuned with human feedback — outperformed the raw 175-billion-parameter GPT-3 in human evaluations. More than a hundred times fewer parameters, yet preferred by the people who used it. The bottleneck had never been raw capability. It had been alignment — turning a system that completed text into a system that responded to humans. Once that problem was solved, the capabilities that had always been latent in the large models became accessible.
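The reward-modeling step at the heart of RLHF reduces to a pairwise loss. As described in the InstructGPT paper, each human comparison of two responses pushes the reward model to score the preferred one higher, via -log sigmoid(r_chosen - r_rejected). A scalar sketch (in practice the rewards come from a neural network and the loss is computed over batches of comparisons):

```python
import math

def preference_loss(r_chosen: float, r_rejected: float) -> float:
    """Pairwise reward-model loss: -log sigmoid(r_chosen - r_rejected)."""
    margin = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Loss is high when the model cannot separate the responses, and falls as it
# learns to rank the human-preferred response above the rejected one.
print(round(preference_loss(0.0, 0.0), 3))   # 0.693  (log 2: no separation)
print(round(preference_loss(3.0, -1.0), 3))  # 0.018  (confident, correct)
```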

ChatGPT made that accessibility visceral. You typed a question. It answered. You asked it to write code, explain a concept to a nine-year-old, draft a legal memo, roleplay a historical figure, or debug a Python script. It did all of these things fluently, in the same conversation, with no special setup. People who typed their first query into it described the experience as unlike anything they had encountered before. The word that spread was not "impressive." It was "different."

Code Red

At Google headquarters in Mountain View, the word that spread was less neutral.

In December 2022, as ChatGPT's user chart went vertical, Sundar Pichai declared a company-wide emergency. The phrase that leaked from inside Google was "Code Red" — borrowed from hospital emergency protocols: suspend normal operations, all hands on deck. Pichai held emergency meetings. Teams from Research, Trust and Safety, and other divisions were reassigned. The target was to demonstrate twenty or more new AI products and a chatbot-enabled version of Search by Google I/O in May 2023.

Google had language models. It had LaMDA, PaLM, Chinchilla. Its researchers had written many of the foundational papers in the field. For years, the deliberate judgment had been not to release them as consumer products — a combination of reputational caution about toxic outputs and strategic anxiety about cannibalizing the search advertising business that generated $160 billion a year. That caution, in retrospect, had handed OpenAI the first-mover advantage in the most significant consumer technology launch in a decade.

Larry Page and Sergey Brin had stepped back from daily operations in 2019. ChatGPT brought them back. Both held emergency meetings with Pichai and senior executives, reviewed the AI product strategy, and pitched ideas. Sergey Brin came into the office three or four days a week. On January 24, 2023 — less than two months after ChatGPT's launch — Brin filed a code request for access to LaMDA, Google's own language model. It was his first hands-on code submission in years. The co-founder of Google was personally writing code to help Google catch up to a startup.

The Expensive Error

On February 6, 2023, Google pre-announced Bard — its chatbot response to ChatGPT. An event in Paris was scheduled for February 8. Microsoft had its own AI event planned for February 7, and Google was clearly trying to move first.

The Paris event did not go as planned. In a promotional GIF that Google itself posted on social media to advertise Bard, the chatbot was asked: "What new discoveries from the James Webb Space Telescope can I tell my 9-year-old about?" Bard offered several bullet points, including the claim that the James Webb Space Telescope "took the very first pictures of a planet outside of our own solar system."

This was wrong. The first image of an exoplanet had been taken by the European Southern Observatory's Very Large Telescope in 2004, nearly two decades earlier. Reuters spotted the error before the Paris event began. The story spread immediately.

On February 8, 2023 — the day of Google's Paris AI event — Alphabet shares fell 7.7 percent. Approximately one hundred billion dollars in market capitalization was erased in a single trading session. The error had been in Google's own advertising material. It concerned a factual claim easily verifiable with a basic Google Search. It arrived on the day Google was trying to demonstrate it could compete with OpenAI. It may be the most expensive single factual error in corporate history.

Microsoft Makes Them Dance

One day before Google's Bard disaster, Microsoft unveiled the new AI-powered Bing at its Redmond headquarters. The event ran on February 7 with CEO Satya Nadella on stage, triumphant in a way that Microsoft executives are not usually permitted to be about search.

Microsoft had invested one billion dollars in OpenAI in 2019. In January 2023, it committed a further ten billion in a multiyear partnership extending through 2032. The new Bing ran on a next-generation OpenAI model more powerful than the public ChatGPT, customized for search. The waitlist accumulated over a million sign-ups in 48 hours.

Nadella's language was unambiguous: "The race starts today, and we're going to move and move fast." And then, to Fortune, after watching Google's Bard launch collapse: "I want people to know that we made them dance."

Microsoft had spent two decades as a distant also-ran in search. Bing had held roughly three percent market share to Google's ninety-three percent since 2009. For the first time, a credible path existed to challenge the most lucrative advertising franchise in the history of commerce.

The Transformer's Homecoming

The structural irony that runs through this chapter is one Mallaby returns to repeatedly. Google invented the transformer architecture in 2017. The paper — "Attention Is All You Need," by eight Google researchers including Noam Shazeer — became the foundation of every major large language model that followed, including GPT, ChatGPT, and the systems now threatening Google's core business.

All eight authors eventually left Google. Six founded startups that collectively raised $1.3 billion from outside investors.

Noam Shazeer had co-invented the transformer and spent years afterward building a conversational AI system inside Google. When Google declined to release it publicly, Shazeer left in 2021 and co-founded Character.AI, which built a conversational platform and raised $150 million at a $1 billion valuation within two years. When Google needed Shazeer back — to help build the systems to compete with the models built on his own architecture — it paid approximately $2.7 billion to acquire Character.AI in 2024.

The man Google paid $2.7 billion to rehire was the same man it had declined, three years earlier, to give the latitude to build a conversational AI in-house. The architecture that powered the competitive crisis had been invented inside Google. The human who built the architecture had been allowed to leave. The cost of that sequence was measured in billions.

Tanks on the Lawn

Demis Hassabis was not calm about what happened.

When Mallaby visited him in late April 2023 to report the book, Hassabis told him directly: "This is wartime. OpenAI and Microsoft have literally parked the tanks on the lawn."

His ideal for building AGI had been explicit: "a CERN-like way," careful and scientific, over a decade or more, without the distortion of competitive racing. He had said in multiple interviews that if left to his own judgment, he "would have left it in the lab for longer and done more things like AlphaFold, maybe cured cancer or something like that." The ChatGPT moment made that vision permanently unavailable.

DeepMind had not been asleep. It had Chinchilla, Gopher, Gato, and systems arguably competitive with GPT-3.5. The difference was choice: DeepMind had made a deliberate judgment not to release chatbots, rooted in a theory that conversational AI was not the right path to AGI, and a practical concern about deploying immature systems publicly. OpenAI had made a different judgment. In the space between those two choices, the fastest-growing consumer app in history was born.

"Language was a lot easier than we were all expecting," Hassabis later said. "It turned out transformers and some reinforcement learning on top was enough." The ease was precisely what had destabilized everything. If the path to systems that could hold sophisticated conversations was as short as it turned out to be, then the careful long-horizon research strategy looked — from the outside, from the market, from Pichai's perspective — like a luxury that couldn't be afforded.

ChatGPT also, Hassabis told Mallaby, "shattered hopes of a singleton scenario in which a single, safety-minded lab could develop AGI on behalf of all humanity." The carefully governed, cooperative future he had envisioned in 2014 — that all the Project Mario governance negotiations had been in service of — was now irretrievably gone. There were not two well-funded labs racing. There were dozens.

The Acceleration

On March 14, 2023 — 104 days after ChatGPT's launch — OpenAI released GPT-4.

The numbers were precise and legible. On the Uniform Bar Exam, GPT-3.5 had scored in roughly the 10th percentile of human test-takers. GPT-4 scored in approximately the 90th percentile. In a single model generation, in 104 days, a system had moved from failing the bar exam badly to passing it better than nearly nine out of ten lawyers. On the SAT Reading it scored in the 93rd percentile. On medical licensing exam questions it scored roughly 20 percentage points above the passing threshold.

The bar exam jump became the shorthand that traveled. It wasn't just that GPT-4 was capable — it was that the rate of improvement implied by four months of progress was difficult to process. The curve was not flattening. It was steepening.

By April 2023, the Google Brain and DeepMind merger had been announced. Hassabis was now CEO of a 7,600-person organization and was speaking to Pichai multiple times daily about model architecture and competitive intelligence. The careful, scientific, CERN-like approach to AGI development that he had planned for two decades was gone, replaced by something that looked, from the outside, much more like a race.

The phrase Hassabis kept using for ChatGPT's launch was "starting gun." Whether the race it started had a finish line that was good for anyone was the question he could no longer defer.


Chapter 17: We're Cooked

When Mallaby first visited Hassabis in November 2022, immediately after ChatGPT launched, the reaction was tightly controlled but unambiguous. "Sebastian," Hassabis told him, "the opposition has parked their tanks in our front yard."

By April 2023, the metaphor had intensified. "This is wartime. OpenAI and Microsoft have literally parked the tanks on the lawn." The same image, five months later, hotter. The escalation is the story of this chapter — the period in which DeepMind confronted not just a competitive setback but a deeper reckoning about the identity it had spent thirteen years constructing.

The Research Soul's Complaint

In February 2023, Hassabis gave an interview to the Swiss newspaper Neue Zürcher Zeitung that contained, buried inside a longer answer, one of the most candid things he has ever said publicly about the state of AI. He acknowledged that DeepMind would now pursue language model scaling — the approach that had produced ChatGPT — and then added: "My research soul was a bit disappointed at how inelegant the solution to the challenge of voice AI was: simply the brute force of more computing power and data."

Read that slowly. The man who had spent his career arguing that intelligence required deep structure — that you couldn't get to AGI by scaling statistics over text, that world models and causal reasoning and reinforcement learning were essential — was acknowledging that the brute force approach had worked well enough to change the entire competitive landscape. And that he was going to do it anyway.

Reviewers of Mallaby's book describe this section as the most compelling in the volume: Hassabis "undergoing a transformation from AI-utopian to wearied realist," the narrative of "a scientist who finds the winning answer philosophically unsatisfying — and must act on it anyway." This is not defeat. It is something stranger — a principled objection to one's own new strategy, held simultaneously with the strategy's execution.

Shane Legg Was Right

Shane Legg had been saying AGI was coming since 2001. He had told people who asked him that there was a 50 percent chance of AGI by 2028, based on exponentially increasing compute and data. For twenty years this had sounded like the opinion of a brilliant but unnervingly confident co-founder.

After ChatGPT, it sounded like a description of the present.

Legg, now Chief AGI Scientist of Google DeepMind, did not experience the ChatGPT moment as a crisis. He experienced it as confirmation. In an interview in October 2023, he said simply: "Something fundamental has changed." He had written in 2011 about AIXI — a theoretical framework for universal intelligence — and he saw LLMs as "incredibly good sequence predictors that are compressing the world based on all this data," directly connected to that framework. The gap from there to AGI, he said, was "just sort of another step."

He identified episodic memory as the main remaining puzzle — current models learn within context windows and during training, but miss the intermediate ongoing memory of experience. He did not see this as a wall. He saw relatively clear paths forward. His timeline had not changed in twenty-five years. What had changed was the world's relationship to it.

The crucial irony: Legg's original prediction was essentially validated by an approach DeepMind had strategically under-prioritized. The timeline he had held since 2001 — a timeline he had formed before DeepMind existed, before AlphaGo, before any of the specific research programs that defined the lab — turned out to be tracking the right curve. But the thing tracking that curve was not AlphaGo's reinforcement learning. It was transformers scaled on text. Legg was right about when. He had not necessarily been right about how.

The Walking Wounded

The brain drain that followed ChatGPT was measurable. In the twelve months after the launch, sixteen former DeepMind researchers founded or co-founded new ventures — more than double the seven from the year before. The curve tracked almost precisely to the competitive shock.

Arthur Mensch had worked on efficient language models at DeepMind Paris, contributing to Chinchilla. He left in 2023 to co-found Mistral AI, which released a competitive open-source language model within three months of founding and raised a €105 million seed round — the largest European AI seed at the time. Mensch said DeepMind was "not innovative enough" and described the satisfaction of moving from research to shipping. The implicit critique was pointed: the organization that had championed research-first over product-first was now, under competitive pressure, neither fast enough as a research organization nor committed enough to shipping.

Sid Jayakumar, who also left DeepMind for a startup around this period, was direct about the mood: "The move towards a more product focus meant morale was low among some people more on the frontier research side." The researchers who had joined for the pure science found themselves in an organization that had declared wartime, pared back blue-sky projects, stopped publishing mission-critical findings, and redirected resources toward Gemini. The publication crackdown was particularly painful — an organization whose culture of open science had been one of its primary recruiting advantages was now vetting papers before release and restricting the sharing of work that competitors might use.

The departure that Mallaby likely treats as the most significant came in January 2026, when David Silver left Google DeepMind to found Ineffable Intelligence. Silver was not a peripheral figure — he was the lead architect of AlphaGo, AlphaZero, MuZero, and AlphaProof, the researcher most responsible for DeepMind's identity as an RL lab. Sequoia Capital backed the new venture at a $4 billion valuation, the largest European AI seed ever. Silver's stated reason was a direct repudiation of the LLM era: "We want to go beyond what humans know, and to do that we're going to need a different type of method." He was betting, explicitly, that large language models were constrained by the ceiling of human knowledge, and that the path forward was RL-first systems that learned from first principles — the way AlphaGo Zero learned Go from nothing.

Silver's exit was the fullest articulation of the identity crisis in a single career decision. The man who had given DeepMind its proudest achievements believed that the direction DeepMind was now moving in was the wrong one. He left to prove it.

The Grief

The mood among senior AI researchers after ChatGPT was not just competitive anxiety. It was something closer to grief.

Yoshua Bengio — a Turing Award laureate and one of the pioneers of deep learning — spent a month with ChatGPT and progressively revised his sense of timelines. He had previously thought transformative AI was "decades to centuries" away; by mid-2023 he estimated "5 to 20 years with 90% confidence." In August 2023 he published an essay unlike anything in his academic career, titled "Personal and Psychological Dimensions of AI Researchers Confronting AI Catastrophic Risks." He wrote: "It is difficult because accepting the logical conclusions that follow means questioning our own role, the value of our work, our own sense of value... It is truly horrible to even entertain these thoughts and some days, I wish I could just brush them away." He described feeling "desperate" with "no notion of how we could fix the problem."

Geoffrey Hinton left Google in May 2023 — the timing matters — specifically so he could "talk about the dangers of AI without worrying about how it interacts with Google's business." He had previously believed AGI was thirty to fifty years away; after ChatGPT he revised to fewer than twenty. He told MIT Technology Review: "I think it's quite conceivable that humanity is just a passing phase in the evolution of intelligence." He added, separately, that "a part of him now regrets his life's work."

Eliezer Yudkowsky, whose career had been spent arguing that AI safety was the most important problem in the world, published a TIME op-ed on March 29, 2023, calling not for a pause but for a halt. "We are not prepared. We are not on course to be prepared in any reasonable time window. There is no plan... If we actually do this, we are all going to die." He argued that the open letter calling for a six-month pause — signed by 30,000 people — was dangerously insufficient.

On May 30, 2023, the Center for AI Safety published a one-sentence statement: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." Among the 350+ signatories: Sam Altman, Geoffrey Hinton, Yoshua Bengio, and Demis Hassabis.

Hassabis, asked about his personal probability of AI causing human extinction — the "p(doom)" estimate that had become a standard question in the field — said: "It's definitely non-zero and it's probably non-negligible. So that in itself is pretty sobering." He had said for years that safety was important. Now it was urgent.

The Merger's Culture Shock

The April 2023 merger of Google Brain and DeepMind did not go smoothly, even by the standards of a merger conducted under competitive emergency.

The two organizations had coexisted for nearly a decade in what Mallaby calls "productive rivalry that frequently tipped into dysfunction." They worked on the same problems, published at the same conferences, recruited from the same PhD programs, and regularly duplicated work without knowing it. Competition for Google's compute resources was a running wound. At the 2018 NeurIPS conference, when DeepMind researchers questioned Brain scientists about their methodology, a Brain researcher replied: "If you guys hadn't been hogging all of our goddamn compute!"

The cultural gap ran deeper than compute. Google Brain was Mountain View: faster-paced, product-oriented, accustomed to public company rhythms, embedded in Google's infrastructure. DeepMind was London: academic, deliberate, multi-year research horizons, semi-autonomous by design. When Hassabis — in the first all-hands meeting of the merged organization — declared that the new unit had to return to "startup or entrepreneurial roots," being "scrappier, faster, shipping things really quickly," the Brain researchers heard an acknowledgment of their own culture. DeepMind researchers heard a description of what they had left academia to avoid.

After the merger, projects were evaluated on their relevance to Gemini's roadmap, not just scientific merit. Publication timelines were subject to new vetting. Researchers who had joined to pursue fundamental questions found themselves redirected toward commercial product cycles. One senior researcher, describing the atmosphere to Sifted, said that "some researchers felt frustration with having to stick to guidelines from leadership," and that "this pressure has created a sense of fatigue."

Hassabis had spent thirteen years building an organization that attracted researchers by promising something genuine: the freedom to pursue hard, important problems over long time horizons, inside a well-resourced lab, without the pressure to justify relevance. That promise had not been entirely false — AlphaFold existed because it was possible, for six years, to fund fifty people to solve protein structure prediction without a commercial roadmap. What ChatGPT destroyed was the structural condition that made that promise possible to keep. Once the race was fully engaged, every month of research without a product was a month of ground ceded.

The phrase "we're cooked" was not said by a specific person in a documented context. It was in the air — the AI researcher's generation-specific way of saying that something had shifted, that the timeline had collapsed, that the situation was beyond ordinary management. It captured a mood that ran from the cheerful competitive anxiety of engineers pivoting to LLMs to the genuine existential dread of researchers who had spent careers on the problem and now, watching ChatGPT's user curve, were confronting what it implied.

Hassabis was not cooked, exactly. He had a Nobel Prize, a newly merged 7,600-person organization, and the full resources of Alphabet behind him. But the version of his future that he had spent the longest time imagining — careful, scientific, CERN-like, singular — was gone. "At the back of my mind," he told Fortune in 2026, "I've got this gnawing feeling that there's something much more important, much bigger than the commercial race, which is getting AGI safely over the line for humanity." The gnawing feeling was the residue of that imagined future. The commercial race was the actual one.


Chapter 18: Step by Step

On April 20, 2023 — a little under five months after ChatGPT's launch — Sundar Pichai announced the creation of Google DeepMind. The two organizations that had spent nine years competing, duplicating each other's work, and fighting over compute were merged into a single entity under Demis Hassabis as CEO.

The combined unit had roughly 7,600 people. Hassabis had gone from running a semi-autonomous research lab in London to leading one of the largest AI organizations in the world. Jeff Dean, who had built and led Google Brain since its founding, became Chief Scientist of Google — a prestigious title that, in practice, removed him from the operational center of AI development at exactly the moment it had become the most important battlefield in technology. It was the kind of organizational transition that looks like a promotion from the outside and like something else from the inside.

Three weeks after the merger announcement, at Google I/O on May 10, Hassabis publicly announced Gemini.

The Vision and the Race

The word Hassabis used to describe what he was trying to build was "natively multimodal." Unlike GPT-4, which had begun as a text model and had vision bolted on later, Gemini was designed from the foundation up to process text, images, audio, and video through shared network layers. The analogy Hassabis offered in a June 2023 Wired interview was precise and revealing: "At a high level you can think of Gemini as combining some of the strengths of AlphaGo-type systems with the amazing language capabilities of the large models." Reinforcement learning and tree search — AlphaGo's core techniques — would give Gemini planning and problem-solving capabilities that pure language models lacked.

This was the thesis he had maintained through the entire LLM era: that RL and language modeling were not competitors but complements, and that the combination was the path to something genuinely closer to general intelligence. He had said it about Gato in 2022. He was now saying it about Gemini under genuinely competitive pressure, which changed the stakes considerably.

The development process was, by all accounts, intense. Hundreds of engineers from both Brain and DeepMind were redirected to the effort. Sergey Brin — who had returned to Google after ChatGPT's launch and was personally filing code as late as January 2023 — remained a "core contributor" to Gemini's training. The model was trained on Google's TPU infrastructure at a scale that required tens of thousands of chips and included YouTube transcripts, diverse multimodal data across all modalities, and a legal review process to filter copyrighted content. Hassabis described the competitive environment as "ferocious," with veteran employees calling it "the most intense environment they'd ever seen, perhaps ever in the technology industry." He spoke to Pichai every day.

December 6, 2023

Gemini 1.0 launched on December 6, 2023. Three tiers: Ultra, for highly complex tasks; Pro, for a wide range of tasks, immediately rolled out to Bard in English across 170 countries; and Nano, for on-device deployment, integrated into Pixel 8 Pro smartphones.

The headline technical claim was one that had clear symbolic weight. Gemini Ultra achieved 90.0 percent on MMLU — the Massive Multitask Language Understanding benchmark, covering 57 subjects including mathematics, physics, history, law, medicine, and ethics — making it the first AI model to exceed human expert performance on that test. GPT-4 had scored 86.4 percent. The 90 percent threshold was not just a benchmark; it was a number that communicated, to anyone paying attention, that the gap between the best AI and the best humans on standardized knowledge tests had closed.

The demonstration video that accompanied the launch did not hold up as well as the benchmark. The video appeared to show Gemini understanding live video and audio in real time — a child drawing, a cup being spun, a game of rock-paper-scissors. In reality, the latency had been reduced and outputs shortened in editing, and the prompts used were pre-written text inputs, not live voice or video. In the rock-paper-scissors sequence, the actual prompt included a hint: "Hint: it's a game." One of the most acclaimed demonstrations of AI capability in 2023 had been staged.

Oriol Vinyals, one of DeepMind's most senior researchers, defended the video: "All the user prompts and outputs in the video are real, shortened for brevity...We made it to inspire developers." Critics argued that the distinction between "real outputs, staged demo" and "fabricated outputs" was doing a lot of work. The controversy was manageable, but it arrived at exactly the moment Google most needed to demonstrate that it could match OpenAI without shortcuts.

AlphaCode 2

On the same day as the Gemini announcement, DeepMind released a technical report on AlphaCode 2: a system built on Gemini Pro that competed in Codeforces programming contests.

The original AlphaCode, released in early 2022, had performed at roughly the median level of competitive programmers — better than about half of all entrants. AlphaCode 2 scored in the 85th percentile, solving 43 percent of problems compared to AlphaCode's 25 percent. On two of the twelve contests evaluated, it outperformed 99.5 percent of participants.

In Codeforces's rating taxonomy — Newbie, Pupil, Specialist, Expert, Candidate Master, Master, and beyond — AlphaCode 2 positioned itself between Expert and Candidate Master, among the serious competitive programmers. More than the raw percentile, the sample efficiency was striking: AlphaCode 2 needed only about a hundred generated solutions per problem to match what AlphaCode had required a million attempts to achieve. The system had not just improved. It had become ten thousand times more sample-efficient at finding correct solutions.

The PhD Student's Four Years

The research result that most clearly embodied the chapter's title arrived not from the competitive product side but from the scientific side, in January 2024. AlphaGeometry, published in Nature on January 17, solved 25 of 30 recent International Mathematical Olympiad geometry problems. The average human IMO gold medalist solves 25.9. The previous AI state of the art solved 10. GPT-4, tested standalone, solved zero.

The researcher at the center of it was Trieu H. Trinh, a Vietnamese computer scientist who had graduated from Ho Chi Minh City University of Science, joined Google Brain in California, then left in 2019 for a PhD at NYU's Courant Institute. His advisor He He later described his "doggedness and dedication." Trinh had decided to use IMO geometry as what he called "a more toy example" before tackling the grand challenge of mathematical reasoning. He spent four years on it.

The architecture he built was a specific kind of step-by-step reasoning. A language model handled the creative part — proposing auxiliary constructions, the new points and lines and circles that geometry proofs often require and that humans find through intuition. A symbolic deduction engine handled the rigorous part — verifying each logical step, extending the proof chain, confirming that the construction the language model proposed actually led somewhere. When the symbolic engine got stuck, it called the language model. The language model suggested a construction. The symbolic engine verified it. The loop continued until a proof emerged.
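The shape of that loop can be sketched schematically. Everything below is a toy stand-in: a "language model" that proposes candidate auxiliary facts and a "symbolic engine" that exhaustively applies a deduction rule, here operating on small integers rather than geometric objects. It illustrates the alternation of creative proposal and verified deduction, not DeepMind's actual system.

```python
# Toy propose-and-verify loop in the style of AlphaGeometry (illustrative only).
# The real system pairs a trained language model with a geometric deduction
# engine; here both are simple functions over integers.

def propose_construction(round_no: int) -> int:
    """'Language model': propose a new auxiliary fact. A real LM ranks
    constructions by learned intuition; this stand-in enumerates candidates."""
    return round_no % 20 + 1

def deduce(known: set[int]) -> set[int]:
    """'Symbolic engine': extend known facts by exhaustive rule application
    (here, closure under summing pairs, capped at 100 for termination)."""
    facts = set(known)
    changed = True
    while changed:
        changed = False
        for a in list(facts):
            for b in list(facts):
                if a + b <= 100 and a + b not in facts:
                    facts.add(a + b)
                    changed = True
    return facts

def prove(premises: set[int], goal: int, max_rounds: int = 20) -> bool:
    """Alternate verified deduction with proposals until the goal is derived."""
    known = set(premises)
    for r in range(max_rounds):
        known = deduce(known)               # rigorous step: verified extension
        if goal in known:
            return True                     # proof chain reaches the goal
        known.add(propose_construction(r))  # creative step: a new guess
    return goal in deduce(known)

print(prove({40}, 97))  # unreachable by deduction alone; a proposal unlocks it
```

The division of labor is the point: the proposer may guess badly without consequence, because every fact that enters the proof has been certified by the deduction engine.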

This was not approximation or pattern-matching. The outputs were machine-verifiable and human-readable — sequences of reasoning steps that could be checked against the axioms of Euclidean geometry. Evan Chen, a mathematician and math competition coach, said: "AlphaGeometry's output is impressive because it's both verifiable and clean...It uses classical geometry rules with angles and similar triangles just as students do."

The training data was entirely synthetic: one billion random geometric diagrams, from which symbolic reasoning extracted 100 million unique geometric proof examples. No human-written proofs. No human demonstrations. The language model learned to propose constructions by seeing geometry, not by being shown what good geometry looked like.

Trinh's four-year project — quietly proceeding while the rest of the organization pivoted to Gemini, while ChatGPT launched and the wartime posture descended — was exactly the kind of long-horizon fundamental research that DeepMind had been built to pursue. It arrived in the Nature papers queue while the organization around it was declaring that such work would have to be deprioritized. The timing was its own kind of statement.

One Million Tokens

On February 15, 2024, Google announced Gemini 1.5 Pro. The headline number was one million tokens — the context window, meaning the amount of information the model could hold in attention simultaneously. In practical terms: one hour of video, eleven hours of audio, thirty thousand lines of code, or roughly seven hundred thousand words of text. All at once, all in context, all available for the model to reason over without the information having been compressed or summarized away.

GPT-4 Turbo's context window was 128,000 tokens. Gemini 1.5 Pro's was nearly eight times larger.

The system was built on a Mixture-of-Experts architecture — a design in which different "expert" subnetworks activate for different types of inputs, allowing the model to achieve the capability of a much larger system at a fraction of the compute cost. Gemini 1.5 Pro matched or exceeded Gemini 1.0 Ultra on most benchmarks while requiring substantially less compute to train and run.
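The routing idea behind Mixture-of-Experts can be shown in miniature. This is a hypothetical sketch, not Gemini's architecture: the "experts" are trivial functions, the router is an arbitrary scoring rule standing in for a learned gating network, and top-2 routing is assumed.

```python
import math

# Toy Mixture-of-Experts layer (illustrative, not Gemini's design):
# a router scores every expert per input, only the top-k experts execute,
# and their outputs are combined weighted by the router's softmax.

EXPERTS = [
    lambda x: 2.0 * x,    # "expert 0": imagine one specialized subnetwork
    lambda x: x + 10.0,   # "expert 1"
    lambda x: -x,         # "expert 2"
    lambda x: x * x,      # "expert 3"
]

def router_scores(x: float) -> list[float]:
    """Stand-in for a learned gating network: score each expert for input x."""
    return [math.sin(i + x) for i in range(len(EXPERTS))]

def moe_forward(x: float, top_k: int = 2) -> float:
    scores = router_scores(x)
    # Keep only the k best-scoring experts: the rest never run,
    # which is where the compute savings come from.
    top = sorted(range(len(EXPERTS)), key=lambda i: scores[i])[-top_k:]
    weights = [math.exp(scores[i]) for i in top]
    z = sum(weights)
    # Convex combination of the selected experts' outputs.
    return sum((w / z) * EXPERTS[i](x) for w, i in zip(weights, top))

result = moe_forward(3.0)
```

Total parameter count grows with the number of experts, but per-input compute grows only with k, which is how such a system can match a much larger dense model at a fraction of the cost.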

Google demonstrated the long-context capability by feeding 1.5 Pro an entire 44-minute silent film and asking it to describe plot points, character actions, and small details scattered across the footage. The "needle in a haystack" retrieval test — finding a single piece of information embedded in a massive text — showed near-perfect recall at 1 million tokens, degrading only slightly to 99.2 percent at 10 million tokens in experimental tests.
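The structure of such a retrieval test is simple to reproduce. The harness below is hypothetical code, not Google's evaluation: a known fact is buried at a random depth in filler text, the model under test is asked to retrieve it, and recall is the fraction of trials where it succeeds. The "model" here is a perfect substring searcher standing in for an LLM call.

```python
import random

FILLER = "The sky was grey and the meeting ran long. "
NEEDLE = "The secret passphrase is kumquat-7."

def build_haystack(n_sentences: int, depth: float) -> str:
    """Insert the needle at a relative depth (0.0 = start, 1.0 = end)."""
    pos = int(n_sentences * depth)
    return FILLER * pos + NEEDLE + " " + FILLER * (n_sentences - pos)

def toy_model(context: str, question: str) -> str:
    """Stand-in for the model under test: perfect retrieval via search.
    A real harness would send (context + question) to the LLM's API."""
    for sentence in context.split(". "):
        if "passphrase" in sentence:
            return sentence
    return "not found"

def recall_at(n_sentences: int, trials: int = 20) -> float:
    hits = 0
    for _ in range(trials):
        hay = build_haystack(n_sentences, random.random())
        if "kumquat-7" in toy_model(hay, "What is the secret passphrase?"):
            hits += 1
    return hits / trials

print(recall_at(10_000))  # toy searcher scores 1.0 at any length
```

Real models, unlike the toy searcher, degrade as context grows and as the needle moves deeper, which is why the 99.2 percent figure at 10 million tokens was notable.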

Jeff Dean, now Chief Scientist, promoted the results publicly and repeatedly. The message was specific: this was not GPT-4 with more features. It was a different architectural bet on what capability required. Where OpenAI had pushed parameter count, Google DeepMind had pushed context length and compute efficiency. Whether the bet would translate to user preference was a separate question.

What Step by Step Means

The title of this chapter captures several things at once.

The organizational reconstruction of Google DeepMind was a step-by-step process — there was no single moment when the two organizations became one, when the culture wars ended, when the research-product tension resolved. Researchers who had joined to work on fundamental science found projects redirected; those who had come from Brain found new colleagues suspicious of their Mountain View instincts. The integration was ongoing in a way that Pichai's announcement on April 20 had obscured.

The technical approach DeepMind was now advancing — AlphaGeometry's neuro-symbolic loop, SELF-DISCOVER's reasoning modules, chain-of-thought decoding — was literally step-by-step. The insight common to all of these systems was the same: AI did not need to produce correct answers in a single forward pass if it could break problems into intermediate steps, verify each step, and revise when a step failed. The ability to reason in sequence, with verification, was what separated genuinely capable problem-solving from confident guessing.
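The propose-verify-revise pattern these systems share reduces to a small control loop. A deliberately toy version — finding an integer square root, with an exact checker standing in for AlphaGeometry's symbolic engine; everything here is illustrative, not DeepMind's code:

```python
def verify(candidate, target):
    """Symbolic check: does the proposed step actually satisfy the goal?"""
    return candidate * candidate == target

def propose(low, high):
    """Proposer: suggest the midpoint of the remaining search space."""
    return (low + high) // 2

def solve(target, low=0, high=100):
    """Propose a step, verify it, revise the search space when it fails."""
    steps = []
    while low <= high:
        guess = propose(low, high)
        steps.append(guess)
        if verify(guess, target):
            return guess, steps          # verified solution, with its trace
        if guess * guess < target:
            low = guess + 1              # revise: the step fell short
        else:
            high = guess - 1             # revise: the step overshot
    return None, steps

root, trace = solve(3249)
print(root, len(trace))  # → 57, after a handful of propose-verify rounds
```

What separates this from "confident guessing" is exactly what the paragraph above describes: no single forward pass has to be right, because every proposal is checked and every failure narrows the next attempt.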

And Hassabis's own stated philosophy about AGI was step-by-step. He had said it consistently since AlphaGo: "one or two more big breakthroughs," a transformer-level or AlphaGo-level insight, applied in sequence. Not a single moment of emergence. Not a sudden crossing of a threshold. A series of specific advances, each building on the last, until the accumulation reached something categorically new.

AlphaGeometry was one of those steps. Gemini 1.5 Pro's long-context window was one. The 90 percent MMLU score was one. What the next step was, and how many steps remained, was the question Mallaby leaves the chapter poised over — unsettled, as it should be, because no one knew.


Chapter 19: Comeback, and Beyond

In September 2023, while the Gemini team was racing toward its December launch date and the post-merger culture clash was working itself out in labs on two continents, a quieter paper appeared in Science. It described a system called AlphaMissense.

The human genome contains approximately 71 million possible missense variants — single-letter DNA substitutions that cause a different amino acid to be produced in a protein, which can disrupt function, cause disease, or do nothing at all. Of these 71 million, scientists had experimentally characterized about 0.1 percent. The other 99.9 percent were a medical mystery: when a patient arrived with an unusual genetic variant, clinicians often had no basis for judging whether it was the cause of their condition or an innocent bystander.

AlphaMissense processed all 71 million variants. It classified 89 percent of them — 57 percent as likely benign, 32 percent as likely pathogenic. It was not a diagnosis. It was a probabilistic catalog, a starting point for clinical investigation that had not existed before. The predictions were made freely available for both commercial and scientific use. The model code was open-sourced and integrated with the global genomics infrastructure. For rare disease diagnosis — where a patient may have an unclassified variant and no benchmark for its significance — it was the kind of tool that could change the outcome of a clinical workup in an afternoon rather than after months of laboratory work.

AlphaMissense received a fraction of the attention that Gemini received three months later. This distribution of attention — a commercially irrelevant scientific breakthrough generating quiet acknowledgment while a chatbot launch generated front-page coverage — captures something true about the period this chapter describes.

Gemini's Comeback

The original Gemini launch in December 2023 had been widely read as underwhelming. Gemini Ultra matched GPT-4 on benchmarks but did not clearly surpass it. The staged demo controversy had undermined the marketing. The gap between the benchmark claims and what Gemini Pro actually delivered in the hands of early users was visible.

The comeback happened in stages.

Gemini 1.5 Pro, announced in February 2024, established a genuine structural advantage: a one million token context window, extended later to two million, compared to GPT-4 Turbo's 128,000 tokens. At scale this was not a marginal improvement — it meant Gemini 1.5 Pro could hold an entire hour of video, eleven hours of audio, or thirty thousand lines of code in active attention simultaneously, without compression or summarization. On retrieval benchmarks — the "needle in the haystack" tests measuring whether a model could locate specific information buried in a massive context — it achieved 99 percent accuracy up to one million tokens. This was a technical lead that mattered for real applications: codebases, legal documents, long research contexts, multimedia analysis.

Then in March 2025, Gemini 2.5 Pro launched and debuted at number one on the Chatbot Arena leaderboard — the human-preference benchmark run independently by researchers at Berkeley and LMSYS — with the largest score jump ever recorded in the leaderboard's history. It led simultaneously in mathematics, creative writing, instruction-following, long-query handling, and multi-turn conversation. On graduate-level science reasoning (GPQA Diamond), it scored 84 percent. On mathematics competition problems (AIME 2025), it matched OpenAI's best reasoning model within a fraction of a percent. On multimodal benchmarks, it led the field.

On agentic software-engineering benchmarks (SWE-bench), it trailed Claude 3.7 Sonnet, scoring 63.8 percent to Claude's 70.3. The comeback was real, but the frontier moves fast — by mid-2025, Claude 4 and GPT-5 variants had retaken the coding lead. What Gemini's trajectory showed was not permanent dominance but genuine competitive presence: an organization that had looked outclassed in early 2023 was, two years later, producing models that no reasonable observer could dismiss.

AlphaFold 3

In May 2024, DeepMind and Isomorphic Labs published AlphaFold 3 in Nature. The original AlphaFold 2 had solved protein structure prediction. AlphaFold 3 extended the same framework to predict the structure and interactions of all major biological molecules: proteins, DNA, RNA, small-molecule drugs, antibodies, and the chemical modifications that control cellular function. The key expansion was drug-like small molecules — the category that includes most pharmaceuticals, and the category AlphaFold 2 could not handle.

The accuracy improvements were substantial. On the PoseBusters benchmark — measuring how accurately a system predicts where a drug molecule binds to its protein target — AlphaFold 3 was at least 50 percent more accurate than the best existing methods, and was described as the first AI system to surpass physics-based docking tools on this task. For antibody-antigen interactions, for protein-nucleic acid binding, for the modifications that control protein function: in each category, AlphaFold 3 substantially exceeded previous state-of-the-art.

The architecture used a diffusion network in place of AlphaFold 2's structure module — the same approach that powers AI image generation, adapted to produce molecular geometries rather than pixel arrays. The result was a system that could generate not just the most likely structure but a distribution over possible structures, capturing the flexibility that many biologically and pharmaceutically important molecules exhibit.
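The property that matters for the claim above is that a diffusion sampler is stochastic: each run starts from fresh noise and iteratively denoises, so repeated runs yield a distribution of outputs rather than one answer. A toy one-dimensional illustration (the "denoiser" here is hand-written, standing in for a trained network; nothing about AlphaFold 3's actual module is reproduced):

```python
import random

def denoise_step(x, target, noise_scale):
    """One reverse step: drift toward the target, re-inject a little noise."""
    drift = 0.3 * (target - x)          # stand-in for a learned denoiser
    return x + drift + random.gauss(0, noise_scale)

def sample(target=5.0, steps=50):
    """Run the reverse process from pure noise to one sampled 'structure'."""
    x = random.gauss(0, 10)             # start from noise
    for t in range(steps):
        noise_scale = 1.0 * (1 - t / steps)   # anneal injected noise to zero
        x = denoise_step(x, target, noise_scale)
    return x

random.seed(1)
samples = [sample() for _ in range(200)]
mean = sum(samples) / len(samples)
spread = max(samples) - min(samples)
# Samples cluster near the target but keep spread: a distribution, not a point.
print(round(mean, 1), spread > 0)
```

For flexible molecules, that residual spread is the feature, not a bug: the ensemble of samples is what captures conformational variability.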

The controversy was the same as before, but sharper. AlphaFold 2 had been released fully open-source — that was what the Nobel Committee had cited, that was what three million researchers in 190 countries had used. AlphaFold 3 launched without the code, accessible only through a capped web server that explicitly blocked predictions involving novel drug-like molecules. More than a thousand scientists signed a protest letter. The paper was published in Nature without the peer reviewers having seen the code.

Pushmeet Kohli, DeepMind's head of AI science, stated the position plainly: the lab had to "strike a balance" between scientific accessibility and "not compromising Isomorphic's ability to pursue commercial drug discovery." Six months later — one month after the Nobel Prize for the open-source predecessor — the code was released for non-commercial academic use. The model weights required a request process. The commercial restrictions remained.

The sequence was a precise demonstration of the tension Mallaby documents throughout the book. The Nobel celebrated the values that had made AlphaFold 2 transformative: open publication, free access, science as a public good. AlphaFold 3 operated under the values of the commercial organization that DeepMind had become: science as competitive advantage, access calibrated to protect Isomorphic's drug discovery business.

The Drug Discovery Bet

Isomorphic Labs, the Alphabet spinout founded in 2021 to commercialize DeepMind's biological AI, had its most significant validation moment in January 2024. In two deals announced simultaneously, it signed research partnerships with Eli Lilly ($45 million upfront, up to $1.7 billion in performance milestones) and Novartis ($37.5 million upfront, up to $1.2 billion in milestones). Combined potential value: nearly $3 billion.

These were not press releases dressed as deals. Eli Lilly and Novartis were paying real money, upfront, before any drug had entered clinical trials — for the right to use Isomorphic's AI-driven molecular design platform on specific undisclosed targets. In early 2025, the Novartis partnership was expanded. In March 2025, Thrive Capital led a $600 million Series A — Isomorphic's first outside capital, external validation of the thesis from one of technology's most disciplined investors.

By mid-2025, Isomorphic's president was describing the company as "getting very close" to human clinical trials. The focus areas are oncology and immunology. The expected timeline for first Phase I trials is late 2026 at the earliest. If those trials proceed to Phase II and III, a commercially successful AI-designed drug is still a decade away by conventional pharmaceutical development timelines — which are notoriously unpredictable: roughly 10 percent of drug candidates that enter Phase I ultimately reach approval.

Hassabis has described his target: "a $100 billion-plus AI drug discovery business." The vision is specific enough to be measured against. The proof-of-concept — an AI-designed molecule in human clinical trials — has not yet arrived.

What AGI Means to Hassabis

Asked to define AGI, Hassabis consistently sets a bar that most other people in the field do not. He does not mean a system that passes the bar exam or scores above human experts on MMLU. He means a system capable of genuine invention: formulating new theories in physics, proposing new research directions, designing original experiments that no human has thought to run.

"We don't have systems yet that can do that type of creativity," he has said. The distinction matters because it separates solving a known conjecture from generating a new conjecture — a task that requires not just capability but a kind of scientific curiosity that current systems do not exhibit.

What he says is still missing: hierarchical planning, long-term memory, hypothesis generation, and a genuine world model — an intuitive understanding of physical causality that would allow an AI to reason about consequences, not just predict outputs. He has articulated a two-step requirement for autonomous scientific AI: first, a world model that understands physical reality; second, automated experimentation — the ability to ask questions, design tests, run them, and iterate. When those two components are connected into a closed loop, the system could in principle do independent science. That remains ahead.

His timeline, consistently stated since 2024: a 50 percent chance of AGI by 2030, with "5 to 10 years" as his public range. This puts him in the mainstream rather than the extreme wing of AGI prediction. He also says consistently that scaling alone will not close the remaining gap. "My guess is one or two more big breakthroughs — I'm talking like a Transformer level or AlphaGo level type of breakthrough" — will be required for the reasoning and planning components that current LLMs still struggle with.

The Honest Assessment

By early 2026, Mallaby's book can draw a balance sheet.

On the scientific side, the verdict is unambiguous. AlphaFold 2 won the Nobel Prize and transformed structural biology for three million researchers in 190 countries. AlphaMissense catalogued 71 million genetic variants for disease research. AlphaFold 3 extended molecular prediction to drug interactions. AlphaGeometry matched gold-medalist level on IMO geometry. AlphaCode 2 reached the 85th percentile of competitive programmers. These results represent a coherent scientific AI program that no other organization has replicated at comparable depth.

On the commercial side, the picture is more complicated. OpenAI's annualized revenue exceeded $20 billion heading into 2026. Anthropic's was approaching $4 billion. Gemini's 750 million monthly active users rival ChatGPT's scale, but Google's monetization of Gemini runs through an ecosystem — Search, Cloud, Android, Workspace — rather than a standalone product. Isomorphic's drug discovery thesis won't have its proof-of-concept in human trials until late 2026 at the earliest, and commercial outcomes from drug development run on decade-scale timelines.

Hassabis has a theory about why the scientific heritage matters for the race, even now. The scaling-first approach — bigger models, more compute, more data — has produced genuinely impressive language models. But he believes the next set of breakthroughs, the ones that close the remaining gap between current AI and genuine AGI, will require the same kind of domain-specific architectural insight that AlphaGo Zero, AlphaFold 2, and AlphaGeometry each required. You cannot pure-scale your way to a world model. You cannot iterate your way to automated hypothesis generation. At some point — if his theory is right — the lab with the deepest understanding of what intelligence actually requires will have an advantage that accumulated parameters cannot easily replicate.

That theory has not been proven. It might be wrong. But the book's underlying question — whether Hassabis's bet on fundamental research over product-first AI is ultimately vindicated — arrives here still open, which is exactly where it belongs.


Epilogue: Turing's Champion

In October 1950, Alan Turing published a paper in the journal Mind that asked a question so fundamental it has not yet been answered. "Can machines think?" he opened — and then, characteristically, dissolved the question before it could harden into unanswerable philosophy. Instead of wrestling with consciousness and definition, he proposed a test: if a judge communicating by text cannot reliably distinguish a machine from a human, the question of whether the machine "really" thinks becomes practically irrelevant.

Turing made two predictions. Within fifty years, he wrote, computers would be able to play the imitation game well enough that an average interrogator would have no better than a 70 percent chance of identifying them correctly after five minutes of questioning. And by the end of the century, "the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted."

Both predictions have been vindicated. The first by GPT-4 and Gemini; the second by every newspaper published in 2024.

But the most prophetic part of the paper was not the imitation game. It was a section near the end titled "Learning Machines." Rather than trying to engineer an adult mind directly — an intractably complex task — Turing proposed building a simple "child machine" and educating it through reward and punishment, mirroring natural development. He described nets of logical components whose properties could be "trained" into a desired function. He was, in 1950, describing deep learning and reinforcement learning three decades before they existed.

When AlphaGo Zero taught itself to play Go through self-play alone — starting from random moves, with no human knowledge, discovering within days strategies no human had found in five thousand years of the game — it was, in the most direct technical sense, the realization of Turing's child machine reaching adulthood. Turing had imagined it. Hassabis had built it.

The Table Is Screaming

Late at night, at his desk in London, Hassabis will sometimes stop working and feel what he describes as reality demanding his attention. He told Mallaby about it directly — rapping his palm on the table as he spoke: "This table, Sebastian! Why should it be solid? Computers are just bits of sand and copper. Why should these combine to do anything?"

This is not a scientist's rhetorical flourish. It is the operative emotion behind everything. Hassabis has described doing science as "reading the mind of God" — his religion, in a sense, the thing underneath the ambition and the competition and the Nobel Prize and the commercial pressures. The universe is structured in ways that can be understood, and those structures are information, and intelligence is what processes information into understanding, and if you build a sufficient intelligence you could, in principle, understand everything. He wanted nothing less than this: an omniscient machine, a tool for closing the gap between human consciousness and the fabric of reality itself.

This is what makes the story Mallaby is telling both thrilling and vertiginous. It is not, at its core, a technology story. It is a story about a person who looked at the strangeness of existence and decided, with complete seriousness, to do something about it.

The Oppenheimer Frame

Mallaby's most explicit historical parallel arrives near the end of the book. J. Robert Oppenheimer created the atomic bomb. He understood what he was building. He signed a letter of revulsion to the Secretary of War after Trinity. He testified against the hydrogen bomb program. He was stripped of his security clearance in 1954 for his trouble, exiled from the policy of the weapon he had made. The thing he built continued without him.

Oppenheimer had said, of the decision to build the bomb: "When you see something that is technically sweet, you go ahead and do it and you argue about what to do about it only after you have had your technical success." This is the phrase that echoes through Mallaby's account of Hassabis. Geoffrey Hinton captured the same structure when he said that "the thrill of discovery is so big that even if you're very worried about its implications, it's impossible to resist." The technically sweet problem is not a personal failure. It is a civilizational condition.

Mallaby's question about Hassabis is not accusatory. It is tragic: "He wants to do good, but can he be good?" Hassabis understands the dangers. He signed the extinction risk statement. He has called his p(doom) non-negligible. He speaks about the need for safety-minded organizations to stay in the race as the argument for staying in it. He has said that by exiting, he would not advance safety. This is probably true. It is also precisely what any competent actor in this position would have to say, regardless of whether it were true.

Project Mario — the three-year effort to create independent governance structures for AGI development — failed entirely. The ethics board promised in the 2014 acquisition never functioned. The AlphaFold 3 open-source restrictions showed that, when commercial pressures met scientific values, the commercial pressures prevailed. The safety problem, Hassabis told Mallaby, is "soluble." It is also not guaranteed to be solved.

Oppenheimer could not control his creation. Perhaps, Mallaby writes, "this is the privilege and fate of all history's great scientists."

The Guest Book

In December 2024, at the Nobel Foundation in Stockholm, Hassabis signed the laureates' guest book — the book that has been signed since 1952, containing the names of everyone who has stood in that building to receive science's highest honor. Einstein, 1921. Watson and Crick, 1962. Feynman, 1965.

"They're all there, all my heroes," Hassabis told Mallaby. "I get goosebumps just even talking about it."

The specific weight of this moment: Hassabis grew up watching The Race for the Double Helix. As a teenager, he read about Turing. As a student, he studied Feynman. These were not distant figures in the history of science — they were the people whose understanding of the world he had spent his life trying to extend. And now his name was among them, in the book in Stockholm, for work on a problem that did not exist when any of them was alive.

The Nobel honored AlphaFold — a system that predicted protein structures by learning patterns from evolutionary data, vindicating the thesis that intelligence applied to biology could accelerate science by decades. The same thesis, extended to every scientific domain, is the premise of everything Hassabis believes about what comes next.

The Clock

On January 27, 2026, the Bulletin of the Atomic Scientists set the Doomsday Clock to 85 seconds to midnight — the closest it has ever stood in its 79-year history. For the first time in the clock's existence, artificial intelligence was explicitly named as a co-driver of the setting, alongside nuclear weapons and climate change.

The AI Safety Clock, maintained separately, stood at 18 minutes to midnight in early 2026 — having advanced nine minutes in twelve months, with the largest single jump driven by autonomous AI agents and the Pentagon's declaration of intent to become "an AI-first warfighting force."

A survey of 59 AI safety researchers published in February 2026 reported a median p(doom) — probability of human extinction or permanent disempowerment before 2100 — of 25 percent; the mean was 34 percent. Seventy-three percent expected AGI by 2035. The binding constraint on safety work, the researchers said, was talent, not funding.

Hassabis has said the safety problem is soluble. He has also said that the race is not something any individual or organization can stop. These two things are simultaneously true and do not resolve each other. The international governance frameworks that might bridge the gap between "soluble in principle" and "solved in practice" do not yet exist in a form adequate to the problem. The organizations founded on safety rationales are the same organizations accelerating capabilities. The labs building the most powerful systems are the same labs arguing that they should be trusted with the outcome.

What Turing Left Unsaid

Turing's 1950 paper ends on a note of unusual humility for a man whose confidence was otherwise a feature rather than a bug. "We can only see a short distance ahead," he wrote, "but we can see plenty there that needs to be done."

This is the right register for where the story stands. Hassabis is not Oppenheimer exactly — the analogy is suggestive, not precise, and Mallaby is careful to hold it as a question rather than a verdict. What has been built in the decades since Turing published is extraordinary and documented: an artificial system that mastered Go by playing against itself until it had surpassed every human; a system that solved in two years a problem that had resisted fifty years of dedicated effort from the best structural biologists alive; a system that can pass the bar exam, compose coherent arguments, model protein interactions, classify genetic mutations, write code at the 85th percentile of competitive programmers. The child machine has grown.

What comes next — whether the remaining steps to AGI are two or twenty, whether the safety problem is solved before the capabilities make its solution irrelevant, whether Hassabis's bet that scientific rigor and AGI ambition can coexist will prove right — none of this can be seen from here. The book does not pretend otherwise.

What Mallaby offers instead is a portrait of the person at the center of this particular moment in history: a chess prodigy from London who became obsessed with the question of how minds work, who declined the video game industry at twenty-two because it wasn't the problem, who spent his career building systems that surprise their creators, who won a Nobel Prize and then immediately had to rebuild his organization to compete in a race he had hoped to avoid, who sits at night in his office feeling reality scream at him from the surface of a table, who believes the universe is made of information and that intelligence is the instrument by which that information becomes understanding.

He is, in Mallaby's framing, Turing's champion — the person who took the child machine seriously, built it, tested it against Go and proteins and geometry and language, watched it exceed everything humanity had learned, and now stands at the edge of what comes next, holding both the prize and the responsibility, not entirely sure they can be held together.

Turing said we can only see a short distance ahead. That has not changed. There is still plenty that needs to be done.

Going Infinite: The Rise and Fall of a New Tycoon

· 33 min read

Michael Lewis's 2023 non-fiction book, Going Infinite: The Rise and Fall of a New Tycoon, tells the true story of Sam Bankman-Fried (often known as SBF)—from his unconventional origins and meteoric rise in the world of cryptocurrency to the dramatic collapse of his business empire and its aftermath.

Chapter 1: Yup

The story opens like a scene from a modern financial documentary. SBF appears at the zenith of his fame and influence—a time when he was known as the "world's youngest self-made billionaire" and even compared to "the Gatsby of crypto," with celebrities, CEOs, and world leaders clamoring for his attention and investment. Lewis paints a picture of a disheveled young tycoon: despite his sudden appearance on the Forbes billionaire list, he is always dressed in casual t-shirts and shorts, almost indifferent to the buzz surrounding him. SBF's time becomes incredibly valuable: his schedule is packed with meetings, high-profile forums, and media interviews. Yet, in stark contrast to the grand image others built of him, SBF himself treats commitments as optional. He is often late, cancels at the last minute, or appears distracted even when present.

Through various anecdotes, Lewis highlights SBF's unusual behavior and detached demeanor. For instance, SBF frequently multitasks by playing video games during important conference calls and interviews. In one memorable example, during his first live television interview, he sports his signature messy hair and cargo shorts, and midway through the broadcast, he starts playing an online game, his eyes darting across the screen. (In fact, venture capitalists would later learn he was even playing his favorite game, League of Legends, while pitching them for millions of dollars.) Lewis suggests that, far from being disrespectful, this constant state of gaming was simply SBF's way of keeping his highly active mind engaged—but it meant that those meeting with him often received only a fraction of his attention. This establishes a key theme: SBF is a brilliant but detached figure—a person living inside his own head, treating life as one grand game. This captivating opening sets the tone for the entire story, showcasing the peculiar blend of charm and eccentricity that made SBF both admired and perplexing.

Chapter 2: The Santa Claus Problem

The narrative rewinds to SBF's upbringing and formative years, revealing the shaping of his unique worldview. We learn that SBF was raised in California by two Stanford Law School professors, Barbara Fried and Joseph Bankman, who cultivated a decidedly unconventional household. The Bankman-Frieds weren't keen on typical childhood customs—in fact, one year they completely forgot to celebrate Hanukkah, and when they realized it, no one in the family cared. Holidays, birthdays, the entire myth of Santa Claus—none of it mattered much in SBF's home. Instead, his parents encouraged open, rational inquiry. If a young SBF wanted something, they preferred to discuss it honestly rather than create surprises or follow rituals. As a result, SBF grew up valuing logic and honesty over fictional narratives. He later reflected that seeing almost everyone around him believe in things like God or Santa Claus taught him a stunning lesson: "mass delusion is an endemic property of the world"—in other words, sometimes the majority's view on something can be demonstrably false. This early insight allowed SBF to comfortably question widely accepted beliefs and trust his own reasoning, a trait that would define his future decisions in life and business.

Lewis also delves into SBF's moral and philosophical development during his adolescence. SBF's parents were sympathetic to utilitarianism (a focus on outcomes that produce the greatest good), which influenced him. By age 12, SBF was already independently thinking through deep ethical dilemmas. For example, he considered gay marriage a "no-brainer"—it was clearly unjust to make people suffer for some harmless difference. Abortion took him longer, until he applied a cold, utilitarian calculus: most of the harms that make murder wrong (the grief of loved ones, the loss of an invested life, etc.) didn't apply before a child was born. For a strict utilitarian like him, abortion became equivalent to birth control—controversial to say out loud, perhaps, but with no difference in net outcome. This way of weighing decisions by their results, rather than any preset moral dogma, was how SBF, in Lewis's words, "figured out who he was." Socially, a young SBF often felt like an outsider, more absorbed in math puzzles and strategy games than in hanging out with classmates. These childhood threads all converge on what Lewis calls "The Santa Claus Problem": SBF learned early on to question comforting fictions and to approach life through the lens of logic, probability, and maximizing good. The reader now understands how SBF's quirky, hyper-rational personality was shaped from the start—a crucial foundation for his later foray into effective altruism and crypto finance.

Chapter 3: Meta Games

The story moves into SBF's young adulthood and his first steps into the world of high finance. We follow SBF to the Massachusetts Institute of Technology (MIT), where he majors in physics—though his interest in pure academic research quickly wanes. In SBF's junior year (2012), two key events set him on a new path. First, a campus career fair introduces him to the lucrative world of trading firms. SBF realizes that almost none of his physics classmates at MIT actually become physicists; instead, many go to Wall Street or tech companies. Curious (and unenthusiastic about physics lab work), SBF submits his resume to several quantitative trading firms recruiting at MIT. He lands interviews with top firms like Susquehanna International Group and Jane Street Capital, renowned for their brain-bending interview questions. This leads to the second key event: SBF's interview at Jane Street, which the book portrays as a series of elaborately designed psychological games.

Michael Lewis describes how Jane Street's hiring process subjects candidates to one "meta-game" after another—from poker variations to coin-flipping betting challenges—where the rules constantly change to test one's adaptability. SBF thrives in this environment. Unlike other interviewees who get flustered by the shifting rules and time pressure, SBF is energized by the chaos. His years of solving logic puzzles and rapidly calculating probabilities have wired his brain for exactly these kinds of challenges. He impresses the Jane Street team with his calm demeanor, his strategic thinking under pressure, and his willingness to make side bets with the interviewers at their encouragement (which is, itself, part of the test). In one example cited in the book, when asked a trick question about the probability of a relative being a professional baseball player, SBF's instinct is to first clarify the question—he recognizes its ambiguity and defines its terms ("What is the scope of 'relative'? How is a 'professional' player defined?") before diving into the math. This rational approach, combined with his quick mental arithmetic, earns him a spot at Jane Street.

With that, SBF enters the world of high-frequency trading in New York. At Jane Street Capital, he proves to be a brilliant trader, applying his love of games to the markets. But more importantly for SBF's grand narrative, Jane Street is where he is first exposed to the philosophy of Effective Altruism (EA). Inspired by utilitarian thinkers, Effective Altruism argues that one should use reason and evidence to do the most good—often by earning vast sums of money and then donating it to high-impact causes. This idea deeply resonates with SBF's logical, idealistic side. He begins to see earning money as a means to an end: the end being to fund causes that could save lives or improve the world on a massive scale. We see SBF's transformation from a lost student into a driven trader with a mission. He now has a guiding purpose for his life: to achieve "going infinite" (i.e., creating immense wealth), not for luxury or ego, but to eventually give it all away in the most effective manner possible. This is the seed of a grand ambition—one that will propel him into the emerging world of cryptocurrency next.

Chapter 4: The March of Progress

Here, the narrative documents SBF's bold leap from employee to entrepreneur—a march of progress that would soon reshape the landscape of cryptocurrency trading. By 2017, SBF had grown restless at Jane Street. Steeped in the spirit of Effective Altruism, he was eager to multiply his earning potential for the greater good. After a brief stint at an EA non-profit think tank (the Centre for Effective Altruism) to explore a path of direct charity, SBF concluded he could make a bigger impact by making money faster. So, in late 2017, he quit his stable Wall Street job to launch his own trading firm: Alameda Research. It was a risky move—SBF was just 25 and, with a few like-minded friends, was entering what was then the Wild West of cryptocurrency—but he saw a unique opportunity. The global crypto markets at the time were incredibly inefficient, and SBF knew how to exploit that.

Lewis describes how SBF and his small team (initially operating out of an apartment in Berkeley) targeted an arbitrage opportunity commonly known as the "kimchi premium." In early 2018, the price of Bitcoin in some Asian markets, like Japan and South Korea, was significantly higher than in the U.S.—in Korea, sometimes by as much as 20% due to local demand. To SBF, this was essentially free money: buy Bitcoin low in the U.S., sell it high overseas, and repeat. The challenge was in the execution—how to move millions of dollars' worth of Bitcoin across borders quickly and legally. SBF's solution was audacious. He and his partners found creative (and somewhat dubious) ways to navigate international banking rules, such as using a friendly local account in South Korea to access the market there. Alameda began moving up to $25 million worth of Bitcoin a day in these trades, reaping huge profits from the price difference. This was the rocket fuel for Alameda's rise. By the end of its first few months, SBF's small startup had generated tens of millions of dollars in profit—tangible proof that his intuition to leave Jane Street was correct.
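
The arbitrage arithmetic described here is simple at its core. As a rough sketch (using the 20% premium and $25 million daily volume cited above, plus a purely hypothetical 2% all-in cost for fees, wires, and currency conversion):

```python
# Sketch of the "kimchi premium" arbitrage described above. The 20%
# premium and $25M daily volume come from the text; the 2% all-in cost
# is a hypothetical round number for illustration.

def kimchi_arbitrage_profit(usd_notional: float,
                            premium: float,
                            fee_rate: float) -> float:
    """Gross profit from one buy-low-in-the-U.S., sell-high-in-Korea cycle.

    usd_notional: dollars deployed per cycle
    premium:      Korean price premium over the U.S. price (e.g. 0.20)
    fee_rate:     combined trading/transfer/FX cost, as a fraction per cycle
    """
    gross = usd_notional * premium   # price gap captured
    costs = usd_notional * fee_rate  # exchange fees, wire costs, FX slippage
    return gross - costs

# At $25M/day with a 20% premium and an assumed 2% all-in cost:
daily = kimchi_arbitrage_profit(25_000_000, 0.20, 0.02)
print(f"${daily:,.0f}")  # → $4,500,000
```

Even under far less generous assumptions, the spread dwarfed the costs, which is why the execution (moving money across borders) rather than the math was the hard part.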

Following this success, SBF rapidly scaled up Alameda. He hired a group of young colleagues—many of them, like him, Effective Altruists with strong math backgrounds but little formal trading experience. He also attracted significant capital infusions from wealthy crypto believers. Notably, an early backer was Jaan Tallinn (the co-founder of Skype and an active EA investor), who gave SBF's team over $100 million to trade with. All of this embodies the theme of "progress": SBF felt he was riding an inevitable wave of progress—both in the technological revolution of crypto and in his personal journey from trader to empire-builder. By this point, SBF had firmly established Alameda Research as a major player in crypto trading. The once-idealistic physics student was now a full-fledged entrepreneur, sitting on a mountain of cash built from arbitrage profits. It's a period of optimism and energy in the story—SBF appears to be conquering the new world of crypto with sheer intellect and nerve, paving the way for even bigger things to come.

Chapter 5: How to Think About Bob

As the story enters its second act, the tone shifts to the growing pains of SBF's rapidly expanding enterprise. "How to Think About Bob" opens by introducing a key figure in his circle: Caroline Ellison. Caroline is portrayed as a bright but insecure young woman who met SBF during a summer internship at Jane Street. Like SBF, she was gifted at math and drawn to utilitarian ideas. Feeling unfulfilled at Jane Street, Caroline jumped at the chance to join SBF's crypto startup, Alameda Research, in 2018. Lewis notes that Caroline was part of a wave of idealistic "EAs" (Effective Altruists) who left traditional finance jobs seeking purpose at a place like Alameda. Despite her talent, Caroline often lacked confidence and was influenced by the strong personalities around her—including SBF, with whom she eventually began a secret romantic relationship. Her arrival adds a new dynamic to the story, and she would later play a critical role as the firm's leader.

However, SBF's management style soon tested Caroline and the other young members of the team. Alameda had grown to about 20 employees, many of them fresh out of college with no trading experience, hired more for their intelligence and shared philosophy than for their financial résumés. SBF ran the firm with a chaotic, ad-hoc style. He insisted on being the central hub to whom everyone reported directly, yet he struggled to truly communicate with and listen to his team. There was no clear structure or risk control like what he had seen at Jane Street. Employees grew frustrated—directives were often unclear or last-minute, and decisions felt capricious. SBF himself was deeply enmeshed in the minutiae of trading, sometimes neglecting basic management. Under his watch, Alameda's finances became a mess: the firm made large bets, some of which went badly, and millions of dollars could mysteriously "go missing" without being promptly addressed. In one infamous incident, $4 million worth of XRP (Ripple's token) vanished from Alameda's accounts during a transfer—and SBF's reaction was surprisingly nonchalant. He "hated telling investors about the problem" and casually said there was an 80% chance of recovering the funds. To his colleagues, this was a red flag: SBF seemed indifferent to massive risks and losses that terrified others.

Tensions within Alameda came to a head in the spring of 2018 in an event that became known as "The Schism." A group of senior employees—including Alameda co-founder Tara Mac Aulay—lost faith in SBF's leadership. They were alarmed by his cavalier attitude toward risk and his lack of proper accounting. After the Ripple incident and other trading losses, these employees secretly voiced their concerns to Alameda's investors and even made a $1 million buyout offer to SBF to walk away from the company he started. SBF flatly refused. In April 2018, about half of Alameda's staff resigned en masse, concluding, as one of them put it, that SBF was "not someone we wanted to be in business with." This dramatic split forced SBF to regroup. Shortly after, he moved Alameda's headquarters from California to Hong Kong, seeking a fresh start in a location more conducive to 24/7 crypto trading. A sobering picture now emerges: while Alameda was making money, its internal turmoil exposed the cracks in SBF's methods. The same single-minded drive that fueled his rise was now sowing conflict with his colleagues. The stage is set for SBF to either learn from these missteps—or to forge ahead unchanged as bigger ambitions beckoned.

Chapter 6: Artificial Love

At this point, SBF sets his sights on a much grander project: creating a brand-new cryptocurrency exchange. This chapter, titled "Artificial Love," documents the birth of FTX (launched in 2019) and how SBF poured his vision into it. Having learned from the flaws of existing exchanges, SBF and his small team of developers (notably including his former college roommate and brilliant programmer, Gary Wang) designed FTX to be a superior trading platform. Lewis walks us through the technical ingenuity behind FTX's rise. At the time, many crypto exchanges offered high-risk derivatives and margin trading, but their risk management was crude—if one trader's losses exceeded their collateral, the exchange would socialize the loss by taking money from other users' funds. (For example, on one exchange, a single out-of-control trade catastrophically wiped out half the profits of all winning traders to cover the loser's debt.) SBF saw this as an unacceptable weakness. FTX, therefore, implemented an innovative auto-liquidation system: the platform would continuously monitor every account, and "the moment any customer's trade went into the red, it was instantly liquidated." This was brutal for the losing trader, but it meant FTX itself would never be on the hook for a massive loss—no more bailouts with other customers' money. Thanks to Gary's programming, FTX's engine was fast and automated enough to do this in real time. This design was a key selling point: FTX promised no more exchange-wide blow-ups, a message that attracted sophisticated traders who had been burned elsewhere.

"Artificial Love" also highlights how quickly FTX grew after its launch. SBF proved remarkably adept at attracting investors and partners to scale his new exchange. He even brought in major figures like Changpeng "CZ" Zhao—the CEO of Binance, the world's largest exchange—as an early investor. (Ironically, CZ would later play a decisive role in FTX's downfall.) SBF's colleague, Ramnik Arora, is introduced as a master storyteller who helped pitch FTX to venture capital firms. The book describes the process of raising money for FTX as being less about spreadsheets and more about selling a vision. SBF and Ramnik told a compelling story: crypto trading was exploding (with hundreds of billions in daily volume), FTX had grown from nothing to the world's fifth-largest exchange in 18 months, and unlike their competitors, they were trying to be a compliant, "legit" player that regulators could trust. Venture capitalists ate it up. By early 2022, FTX had secured a staggering $32 billion valuation in a Series C funding round, pushing SBF's own net worth into the tens of billions. The company's meteoric growth placed it second only to Binance in global crypto trading volume.

Amidst all this success, SBF's personal eccentricities bled into company life. SBF continued to approach everything—even love and relationships, which perhaps hints at the "artificial" part of the title—with a cool, analytical lens. (The book alludes to the unusual co-living arrangements and messy romances within his inner circle.) We also see SBF's relentless workaholism: he was famous for sleeping very little, constantly multitasking, and using stimulants to maintain his pace (he often joked about taking Adderall or caffeine). In late 2021, SBF made the pivotal decision to move FTX's headquarters from Hong Kong to The Bahamas, seeking a more favorable regulatory environment and a tropical lifestyle for his team. In Nassau, Bahamas, he began building a grand new campus for FTX and housed his closest colleagues (including Caroline, Gary, Nishad Singh, and others) in a luxury penthouse. FTX, at this moment, is at its peak: an exchange built on clever engineering and fueled by crypto mania. SBF, not yet 30, is now more than a trader—he is the public face of a crypto empire, rubbing shoulders with politicians and celebrities. The "game" he started has now become incredibly real, and the world is watching—setting the stage for a coming clash between SBF's lofty ideals and the harsh realities of business and politics.

Chapter 7: The Org Chart

This section pulls back the curtain on the day-to-day operations of FTX and reveals just how unconventional and chaotic the company was behind its glossy valuation. By 2022, FTX was a global behemoth handling billions of dollars in trades, yet internally, it operated more like a college dorm project than a Fortune 500 company. Lewis illustrates this with a darkly humorous episode: two professional architects are hired to design FTX's new headquarters in The Bahamas, but when they ask basic questions—How many people will work here? How are the teams organized?—no one at FTX can tell them. The company literally had no formal organizational chart or management hierarchy to guide the architects. In fact, the only person who had ever tried to draw one up was George Lerner, SBF's personal therapist, who had been informally given a role as a sort of "corporate shrink" and life coach for the staff. Lerner's org chart was created mainly to deal with interpersonal issues (the young employees had plenty of drama), not because SBF or other executives cared to establish one. This anecdote highlights a theme: FTX's culture was deliberately unstructured and chaotic. SBF believed rigid structures could slow down innovation, so he let the company evolve in a loose, ad-hoc manner. Employees often created their own job titles and jumped between roles. Communication was casual; important decisions might be made in late-night online chats or not at all.

The story also delves into the lifestyle and values of the FTX compound in The Bahamas. Many of the top employees, including SBF, lived together in a luxury penthouse, working and socializing in the same space nearly 24/7. They were mostly in their twenties, fiercely intelligent, and bonded by the ideals of Effective Altruism—but this closeness led to an insular "bubble." There were reports (in the book and in the media) of a casual attitude toward office romances and even the use of stimulants to sustain working hours. SBF's inner circle had a complex web of personal relationships—at one point, it was said that the ten people in the penthouse had paired off into a romantic "polycule." While Lewis doesn't gossip, he highlights these details to show that FTX was anything but a conventional corporate environment. It was more like a tech startup on steroids, with a group of brilliant but inexperienced people trying to build a new world while also figuring out their own lives.

Meanwhile, actual oversight was non-existent. One consequence was that FTX's financial and compliance practices were extraordinarily weak for a company of its size. In a tone of near disbelief, Lewis recounts the later assessment of FTX's new CEO, John J. Ray III: in his entire career, he had never seen "such a complete failure of corporate controls" (and Ray had overseen the Enron bankruptcy). We see the reasons here—accounts went untracked, basic bookkeeping was an afterthought, and things like risk management were nominal at best. Yet, despite (or perhaps because of) this chaos, FTX was outwardly thriving. The company's lack of structure may have even helped it move quickly during the frenzy of the 2021 crypto bull run. But Lewis leaves us with a sense of foreboding: the edifice of FTX was, organizationally speaking, built on sand. Everyone was too busy chasing growth and grand ideas to notice the shaky foundations. This chapter is the calm before the storm—an almost surreal picture of a multi-billion-dollar enterprise being run with the informality of a college club, with consequences that were about to come crashing down.

Chapter 8: The Dragon's Hoard

The narrative's focus shifts to the money—piles and piles of it—and what SBF was doing with it. At this stage, SBF was not just a business leader but an emerging philanthropist and political influencer, eager to deploy his wealth (or "hoard," as the title suggests) toward the causes he believed in. Lewis details how SBF began funneling funds from Alameda and FTX into a myriad of venture investments and donations. True to his Effective Altruist roots, SBF set up initiatives like the FTX Future Fund to support projects he thought could have a massive impact on humanity. The chapter reads like a laundry list of SBF's lavish spending: he poured money into scientific research for pandemic prevention, funded organizations working on AI safety and other existential risk reduction, and invested in everything from biotech startups to media companies. Much of this aligned with EA principles—in essence, SBF was trying to buy global change according to his utilitarian calculus.

But SBF's ambitions didn't stop at philanthropy. The story also covers his foray into the world of politics and influence. In the U.S., SBF became a major donor to the Democratic Party in the 2020 and 2022 election cycles (though he also quietly donated to some Republicans, by some accounts). He focused particularly on pandemic preparedness legislation and candidates who supported it, believing better policy could save lives. One of the most eye-popping revelations in the book is an alleged plot where SBF considered paying Donald Trump not to run for president. According to Lewis, SBF explored whether a massive bribe could persuade Trump to sit out the 2024 race—an idea that highlights both SBF's audacity and his moral calculus (he likely saw it as preventing what he viewed as a greater harm). The book claims that Trump's intermediaries floated a number: $5 billion. SBF ultimately decided he couldn't afford it, and the plan went nowhere. Still, the mere fact that SBF would consider using his wealth to so directly intervene in politics is stunning, and Lewis presents it as an example of SBF's grandiose delusions of manipulating outcomes.

However, just as SBF was spreading his money far and wide, trouble was brewing in the markets that had made him rich. In mid-2022, the broader cryptocurrency market crashed—a sharp downturn often called the "crypto winter." Major crypto assets plummeted in value, and some collapsed entirely. The failure of the Terra/Luna stablecoin project in May 2022, for instance, triggered cascading losses across the industry. This section describes how this market crash shrank SBF's empire overnight and put financial pressure on both Alameda and FTX. Alameda, in particular, saw the value of many of its investments plummet. Suddenly, the "dragon's hoard" was no longer inexhaustible; it was shrinking fast. Yet, SBF remained outwardly optimistic and continued spending as if nothing had happened. This leaves the reader with a sense of dramatic irony—just as SBF was making his boldest plays with his wealth, the very foundation of that wealth (the crypto market) was crumbling beneath his feet. The stage is now set for the final act: the vanishing of all that wealth and the revelation of the real secret behind SBF's success.

Chapter 9: The Vanishing

The chapter title "The Vanishing" is apt, as it documents the spectacular collapse of FTX—a swift downfall that shocked customers and observers around the globe. The story unfolds like a tense thriller, recounting the events of November 2022, when confidence in SBF's exchange evaporated almost overnight. It all began with rumors and revelations. A leaked report raised serious questions about the solvency of Alameda Research, suggesting that a huge portion of Alameda's assets were actually in FTT (FTX's own exchange token) and other illiquid tokens, not stable cash or liquid crypto. This implied that FTX and Alameda were dangerously entangled financially. As this news spread, Binance CEO CZ, by then a rival, publicly announced he would be selling off Binance's large holdings of FTT—a move that spooked the market and signaled that insiders smelled trouble. What followed was a bank run on FTX. Panicked that FTX might be insolvent, ordinary customers rushed to withdraw their funds en masse. Within a matter of days in early November, FTX faced a liquidity crisis: it simply did not have enough cash on hand to honor everyone's withdrawals.

Lewis describes the frantic attempts by SBF and his team to save the company during those critical days. SBF initially assured the public (and his employees) that assets were fine, but internally FTX was scrambling to raise some $7-8 billion to plug the hole in its balance sheet. They reached out to deep-pocketed investors, partners—anyone who might inject emergency cash. For a brief moment, a lifeline seemed to appear: on November 8, Binance signed a non-binding letter of intent to acquire FTX and pay its debts. SBF told everyone the deal with CZ would resolve the crisis. However, that hope was just as quickly dashed—the very next day, Binance backed out of the deal after reviewing FTX's financials, citing issues that were "beyond our control" (likely the discovery of a multi-billion-dollar shortfall). With no savior in sight, FTX's fate was sealed. By November 11, 2022, SBF had resigned as CEO, and FTX filed for Chapter 11 bankruptcy protection. In The Bahamas, where FTX Digital Markets was based, authorities froze FTX's assets and began an investigation.

The human side of this collapse is also vividly portrayed. As FTX imploded, most of its employees fled The Bahamas in a hurry, catching the next available flight out of Nassau. The once-bustling FTX office became a ghost town. One of the few who stayed behind was COO Constance Wang, who was unable to leave because she had two pet cats and couldn't arrange transport for them both on short notice. She and a handful of others remained, trying to piece together what had just happened. For SBF's inner circle, it was a moment of terror and bewilderment—their life's work had been reduced to rubble in a matter of days. The chapter conveys the confusion and betrayal felt by many as billions of dollars simply "vanished" from the exchange. Users around the world watched as their account balances were suddenly frozen or zeroed out. It's the dramatic turning point of the story: in just a few days, SBF went from a celebrated industry leader to the suspect in one of the biggest financial disasters in modern history. The final chapters will deal with the aftermath and the search for truth amid the ruins.

Chapter 10: Manfred

This section explores the immediate aftermath of the FTX collapse and peels back the final layers of SBF's character. The title refers to Manfred, SBF's childhood stuffed animal—a toy he had kept with him since he was a small child and often traveled with as an adult. This poignant detail, noted by Lewis, symbolizes SBF clinging to something constant and comforting even as his world fell apart. Amid the ruins of FTX, we follow Constance Wang—one of the last employees remaining in The Bahamas—as she begins to dig into FTX's books to understand the massive hole in its finances. Gaining access to internal documents, Constance makes a stunning discovery: over $10 billion in FTX customer funds had been moved to SBF's trading firm, Alameda Research. In essence, FTX had lent out all of its customers' deposits to Alameda, and worse, Alameda had special privileges on the platform. Constance learns that FTX's vaunted risk engine, which was supposed to quickly liquidate losing positions, did not apply to Alameda—SBF's firm was allowed to run a negative balance and keep losing trades open indefinitely. In short, SBF had gamed his own system: Alameda could never be automatically closed out of a bad trade, which meant it could use customer money to rack up a massive debt to FTX. This was the secret that explained everything—how Alameda was able to use FTX as a cash machine to make huge, leveraged bets (some on speculative tokens or illiquid projects) and why, when those bets failed, FTX was unable to pay back its users.
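
The special privilege Constance uncovers can be pictured as a one-line exemption in an otherwise ordinary risk check. The following is a hypothetical sketch of that idea, not actual FTX code; all names and balances are invented for illustration:

```python
# Hypothetical sketch of the exemption described above. Names and
# numbers are illustrative, not taken from FTX's real systems.

def accounts_to_liquidate(equities: dict[str, float],
                          exempt: set[str]) -> list[str]:
    """Every underwater account is closed out -- unless it is exempt."""
    return [owner for owner, equity in equities.items()
            if equity <= 0 and owner not in exempt]

equities = {"retail_trader": -500, "alameda": -8_000_000_000}

# With no exemptions, both underwater accounts are liquidated; with
# Alameda exempt, only the ordinary customer is closed out.
print(accounts_to_liquidate(equities, exempt=set()))        # → ['retail_trader', 'alameda']
print(accounts_to_liquidate(equities, exempt={"alameda"}))  # → ['retail_trader']
```

A single carve-out like this is enough to turn a loss-proof exchange into one whose customers silently underwrite a single privileged trader's losses, which is the secret the chapter reveals.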

Lewis also highlights a personal discovery for Constance: despite being an early executive, she found she owned almost no stake in the company. A document revealed she had just 0.04% of FTX—a negligible amount—while others at her level or even lower had significantly more. It was a sharp realization for her that SBF had kept tight control of the equity for himself and a select few, leaving even loyal colleagues with crumbs. This was another hint at how unequal, and perhaps cynical, the reality was behind SBF's altruistic halo.

As authorities closed in, SBF himself remained in a state of denial and resistance. For a short time after the bankruptcy, he holed up in Nassau, still insisting that FTX could be saved or that it was all just an accounting mistake. But the story moves toward its inevitable conclusion: in December 2022, SBF was arrested at his apartment in The Bahamas by local police at the request of U.S. prosecutors. The once-celebrated CEO was led away in handcuffs, eventually landing in the notorious Bahamian prison, Fox Hill, before being extradited to the United States. Michael Lewis, who had access to SBF during his downfall, provides one last close-up view, noting that the young man who had always treated life as a series of logic puzzles was now facing a reality that couldn't be gamed. Even at this low point, the book presents an almost tragic image: SBF packing his old childhood toy, Manfred, for the journey, perhaps as a symbol of innocence or comfort amidst the chaos. It's a humanizing detail that reminds the reader that behind the headlines of fraud and failure was a very peculiar, brilliant, and flawed individual. The stage is now set for the reckoning: all the truths SBF had evaded or rationalized were now catching up to him.

Chapter 11: Truth Serum

This final section reads like the investigative climax of the story—the focus shifts to uncovering the truth and assigning accountability in the wake of FTX's collapse. Here, Lewis follows the work of John J. Ray III, the seasoned restructuring expert appointed as the new CEO of FTX after its bankruptcy. Ray's job was to stabilize the wreckage and find out where the money went. What he found was chilling. Ray, who had previously handled infamous bankruptcies like Enron, stated that FTX was the worst mess he had ever seen. The story details how Ray and his team slowly piece together the financial records of FTX/Alameda (which were in shambles). Over time, they manage to recover billions of dollars in assets for creditors—by locating bank accounts, crypto wallets, and investments that could be sold off. This was a significant development: while it initially seemed like $8-10 billion had vanished into thin air, by diligently tracing the funds, Ray's team was able to claw back a substantial portion, though still just a fraction of the total owed.

Lewis also covers the legal fallout and the cooperating witnesses who emerged—a stark contrast to SBF's own position. Key members of SBF's inner circle turned against him and pleaded guilty to crimes. Caroline Ellison, who had served as CEO of Alameda, admitted to fraud charges and confessed that she and SBF had knowingly misused FTX customer funds. Likewise, FTX co-founder Gary Wang and Director of Engineering Nishad Singh also pleaded guilty and agreed to cooperate with federal investigators. Their testimony essentially confirmed what Constance and John Ray had found in the documents: that SBF had directed them to do it. They described how SBF authorized the use of FTX deposits to cover Alameda's losses and to make loans to himself and others, and how Alameda enjoyed special privileges on the exchange. In the narrative, it's as if a truth serum was finally compelling people to speak about what was really happening inside SBF's empire—not through SBF's own words, but through the words of his closest colleagues as they faced prison time.

SBF, however, maintained his innocence and a sense of bewilderment at the charges. Right up until his trial, he publicly claimed (through interviews and writings) that it was all a giant misunderstanding or a string of bad luck, not deliberate fraud. Lewis, who maintained extensive access to SBF even after the collapse, relays SBF's various explanations—for instance, that FTX could have been made solvent if someone had just injected a few more billion, or that he never intended to steal money. This leaves the reader to judge these claims against the mountain of evidence. By the end, the wheels of justice are in full motion: SBF is charged with multiple counts of federal fraud and conspiracy, and his trial looms. The sheer scale of the collapse is also put into context—it not only triggered billions in investor losses but also shattered trust in the crypto industry and sparked calls for much stricter regulation.

In closing, Lewis conveys a bittersweet sense. He suggests that the saga of SBF is more than just one man's rise and fall—it's a cautionary tale about hubris, trust, and the allure of innovation without guardrails. Even as SBF awaits his fate, the reader is left with the feeling that the "truth serum" is still working its way through the system, as regulators, journalists, and the public parse the lessons to be learned. The story thus ends not with a moral lecture, but with a sober accounting of what happened: a young genius tried to remake finance and do good on an epic scale, but in the process of breaking the rules and trusting only his own instincts, he unleashed a catastrophe. In the end, reality caught up to SBF, as it does to all well-played games.

Coda

In the two years since the book was finalized, reality has written a more concrete postscript to this saga. On March 28, 2024, SBF was sentenced in New York to 25 years in prison and ordered to forfeit approximately $11 billion in assets, with his case proceeding to appeal. According to the Federal Bureau of Prisons, his expected release date is November 17, 2044.

His place of incarceration has also changed several times: from a detention center in New York to Oklahoma, then briefly to a medium-security prison in Victorville, California, and finally to the low-security federal prison at Terminal Island in Los Angeles. Meanwhile, his appeal is ongoing, with various media outlets reporting that the Second Circuit Court of Appeals plans to hold oral arguments in early November 2025.

Parallel to the criminal case is the bankruptcy restructuring of FTX. The plan was confirmed by the court in October 2024 and became effective in January 2025, followed by several rounds of cash distributions. The goal of the restructuring is to repay the vast majority of customers in full, with interest, based on the U.S. dollar value of their assets in November 2022. However, this plan has sparked significant controversy, centering on whether subsequent increases in cryptocurrency prices should be included in the compensation.
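
A rough back-of-the-envelope calculation shows why the dollarization clause is contentious. The prices and interest rate below are approximate assumptions for illustration, not figures from the book or the court filings:

```python
# All numbers are approximations for illustration only.
petition_price = 17_000  # approx. USD per BTC around the Nov 2022 petition date
later_price    = 60_000  # approx. USD per BTC by the time distributions began
interest_rate  = 0.09    # assumed annual interest paid under the plan
years          = 2

# A customer who held 1 BTC is repaid its petition-date dollar value
# plus interest (simple interest assumed here)...
repayment = petition_price * (1 + interest_rate * years)

# ...but would feel short-changed relative to being repaid in kind.
gap = later_price - repayment

print(f"cash repaid ≈ ${repayment:,.0f}; "
      f"in-kind value ≈ ${later_price:,.0f}; gap ≈ ${gap:,.0f}")
```

Under these illustrative numbers, "full repayment with interest" in dollars still leaves a customer far worse off than if their coins had simply been returned, which is the heart of the controversy.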

When these real-world pieces are put together, the rise and fall chronicled by Michael Lewis feels more like an open-ended footnote of our time: a court conviction, creditor repayments, a pending appeal—a stark contrast between lofty ideals and the cold realities of the system. Going Infinite does not defend any party; it simply reminds us that young talent and the ambition to "do good," when lacking boundaries and accountability, can produce both astonishing achievements and unimaginable disasters. When the storm passes, what truly remains are the ledgers, the evidence trail, and the long road of due process.

Showstopper!: A Journey Through a Software Epic

· 20 min read

G. Pascal Zachary's Showstopper! is more than just a book; it is a monument to one of the most ambitious and arduous undertakings in software history: the creation of Windows NT. With a literary, non-fiction style, the book brings to life the intellect, sweat, conflicts, and glory of a group of genius engineers. It pulls us into the heart of a "war" that reshaped the world of computing.

The Code Warrior

The story's curtain rises on a legendary figure, the very soul of the Windows NT project: David Cutler. His upbringing and trials laid a solid foundation for the entire epic. Hailing from a working-class family in Michigan, Cutler was forged by adversity into a man of independent and resolute character. In his youth, he showed flashes of brilliance on the athletic field, displaying extraordinary leadership and a relentless competitive spirit. His teammates said of him that "his only true rival was himself." However, a severe leg injury in college ended his football career, forcing him to channel all his energy into academics, where his talents in mathematics and engineering began to shine.

After graduating, Cutler threw himself into the burgeoning field of computer programming, quickly making a name for himself at Digital Equipment Corporation (DEC). The real-time operating system he developed for the classic PDP-11 minicomputer already hinted at his exceptional skill in system architecture. Soon, he was entrusted with leading the development of DEC's next-generation 32-bit system, VAX/VMS. The immense success of VMS earned him the reputation of being "the world's best operating system programmer." Yet, beneath the fame, Cutler grew frustrated with DEC's increasingly rigid bureaucracy. When the next-generation computer project he poured his heart into, Prism/Mica, was unceremoniously canceled by corporate leadership, the fiercely independent genius resigned in anger.

Cutler's talent had long before caught the eye of another industry titan: Bill Gates. As early as 1983, DEC executive Gordon Bell had introduced Cutler to Gates, planting the seeds for a future collaboration. In 1988, upon hearing that the Prism project had been axed, Gates personally stepped in to recruit Cutler to Microsoft. He gave Cutler a mission: to start a brand-new operating system project codenamed "NT" (for New Technology). Cutler's experience, fighting spirit, and unparalleled expertise in operating systems were the critical assets Microsoft was betting on for its next generation, setting the stage for the dramatic development saga of NT.

The King of Code

Meanwhile, in the heart of the Microsoft empire, another "King of Code"—Bill Gates—was brewing a storm that would change the industry. From his perspective, we get a glimpse of Microsoft's strategic ambitions in the late 1980s and the macro context of the NT project's birth. Unlike Cutler's working-class background, Gates came from a wealthy family and showed exceptional intelligence and a rebellious streak from a young age. As a teenager, he and Paul Allen became obsessed with computer programming, keenly sensing the immense business opportunities in software. Their BASIC interpreter for the Altair 8800 microcomputer was not only Microsoft's founding creation but also the dawn of the personal computer software era.

By the mid-1980s, Microsoft had established its dominance in the PC market with MS-DOS and the initial versions of Windows. But Gates was keenly aware that these 16-bit systems would soon be unable to meet future computing demands. He shrewdly foresaw the necessity of a brand-new operating system "for the 21st century," one that had to possess high reliability, powerful multitasking capabilities, and cross-platform portability to redefine the standards for both enterprise and personal computing.

At the time, Microsoft was collaborating with IBM on the OS/2 system, but the project was progressing slowly and its market reception was lukewarm. OS/2's lack of good compatibility with the vast library of DOS and Windows applications, coupled with a subpar graphical interface, left Gates increasingly disillusioned. Unwilling to publicly break with IBM, he secretly began planning his "Plan B"—the true genesis of NT. Around 1988, Gates decided to forge a new path. Alongside his then-VP of Strategy, Nathan Myhrvold, he established a vision for the new system and ultimately set his sights on Cutler, who was fresh off his frustration with the Prism project at DEC. Under the guise of developing an improved version of OS/2, Gates successfully recruited Cutler, tasking him in reality with creating a completely new, portable operating system.

Gates is portrayed as a strategist with both top-tier technical intuition and extraordinary business foresight. His commitment to investing up to five years and $150 million in the NT project demonstrated his bold bet on the future of technology. His eye for talent and his advocacy for Microsoft's unique engineering culture—a "rule of the smartest" that sought out the world's most brilliant minds to solve the toughest problems—provided the decisive support for NT's launch. It was Gates's vision and Microsoft's formidable resources that gave Cutler and his team the stage on which to unleash their talents.

The Tribe

Cutler's arrival sent shockwaves through Microsoft. He did not come alone; he brought with him a loyal "programming tribe," and their arrival triggered intense cultural clashes and severe challenges of team integration. When news of Cutler's move broke, many of his former colleagues from DEC's Seattle lab answered his call. Within a week, seven top-tier DEC programmers had followed him to Microsoft, forming the core of the NT project. This "DEC tribe" was almost exclusively composed of seasoned male engineers, with an average age far higher than the typical Microsoft employee. They were a tight-knit, self-contained unit.

On their very first day, the famous "onboarding turmoil" erupted. Microsoft required new employees to sign a contract with a strict non-compete clause. Cutler's men deemed it deeply unfair—if DEC had such a clause, they never could have made the jump to Microsoft. They collectively refused to sign and staged a walkout for lunch. Upon hearing the news, Cutler personally intervened, using his forceful personality to compel Microsoft's legal department to back down and remove the unreasonable terms. The incident quickly spread across the Microsoft campus, giving everyone a taste of the tribe's uncompromising style.

The "tribe" moniker was fitting. They occupied an entire hallway in Building 2, operating in lockstep and clashing with Microsoft's existing culture. The chasm in age and background led to constant friction between the DEC "renegades" and the younger Microsoft employees. They held themselves in high regard, derisively calling their younger colleagues "Microsoft Weenies," believing they were the bearers of true engineering artistry. In turn, many within Microsoft were wary of this cliquey and arrogant group of newcomers. Although Cutler himself laughed off the tension, he too felt the difficulty of fitting in, once lamenting, "I have no credibility over here."

However, Microsoft's leadership quickly implemented a brilliant "tribe integration strategy." Steve Ballmer, then head of the systems software division, acted as Cutler's "mentor." Bill Gates personally transferred a veteran Microsoft programmer, Steve Wood, into the NT team to serve as a bridge between the old and new cultures. Meanwhile, Ballmer cleverly appointed Paul Maritz to oversee OS/2-related matters, avoiding a direct conflict with Cutler while allowing him to provide support from the periphery.

Despite the initial hardships, Cutler and his tribe soon began to lay out the grand blueprint for Windows NT. They established three core objectives: portability, reliability, and flexibility. To achieve portability, the team decided to write the kernel in the C language and design a Hardware Abstraction Layer (HAL) to mask differences between underlying CPUs. To achieve "bulletproof" reliability, they adopted a microkernel architecture, isolating functional modules to prevent a single application crash from bringing down the entire system. For flexibility, NT was designed as a modular system supporting multiple "personalities," using different subsystems to be compatible with OS/2, POSIX, and, in the future, Windows applications. These technical decisions, highly advanced for their time, signaled that the great vessel of Windows NT, after weathering its initial cultural storms, had officially set sail.

Dead End

As the project entered its middle phase, a series of major challenges arose, and the NT team seemed to have driven into a "dead end," facing internal conflicts, technical bottlenecks, and a critical strategic turning point. First, a tense "two-front war" emerged within Microsoft: on one side, Cutler's team was building the entirely new NT kernel from scratch; on the other, the traditional Windows team continued to iterate on Windows 3.x over the existing DOS kernel. The two teams competed fiercely for resources, talent, and the attention of upper management, with political undercurrents running deep.

A central point of contention was backward compatibility. Executives like Ballmer repeatedly stressed that NT had to run existing OS/2, DOS, and Windows programs, or it would never win the market. But Cutler was initially vehemently opposed, stubbornly believing that a new system should shed the baggage of the past. His famous quote, "Compatible with DOS? Compatible with Windows? Nobody's gonna want that," sent a chill through management. This devotion to an ideal architecture briefly put the project in danger of becoming disconnected from market realities.

The technical challenges were equally daunting. NT's innovative microkernel architecture, while offering modularity and high reliability, raised huge performance concerns. The client-server style of subsystem calls inevitably added system overhead. When Bill Gates was first briefed on the design, his sharp technical instincts led him to declare, "This is going to have a huge amount of overhead... I don't think we can do it that way." He knew that if NT was too slow, it would be "crucified" by the market and the media. To convince their boss, Cutler's team argued fiercely, submitting a twelve-page report with data to prove that performance was manageable. Gates reluctantly agreed, but his doubts lingered.

Meanwhile, the scale of the NT project far exceeded expectations, and Cutler's preferred small-team model was no longer sustainable. At Microsoft's insistence, the team eventually expanded to nearly 200 people, forcing Cutler to adapt his management style and accept the reality of large-team collaboration.

What ultimately pulled the NT project out of this "dead end" was a decisive external event: in 1990, the collaboration between Microsoft and IBM on OS/2 completely fell apart. This break marked a major strategic pivot for Microsoft, which decided to place all its bets on its own Windows NT. The NT team's mission was fundamentally altered: its development focus shifted from OS/2 API compatibility to full compatibility with and superiority over Windows. This was because, in that same year, Windows 3.0 had achieved unprecedented commercial success. Microsoft realized that NT's future had to be intertwined with Windows. As Nathan Myhrvold put it, "The customer needs a bridge." And so, the team began the arduous task of "switching tracks," extending the Windows API to 32 bits and rewriting the entire graphics subsystem. Though immensely difficult, "they finally got it to run," successfully achieving compatibility with legacy Windows applications. This critical redirection allowed Windows NT to escape its dead end and find the right path to the future.

The Howling Bear

As the project entered the fast lane, the pressure escalated dramatically. The team's work environment grew tense and fierce, filled with emotional collisions and roars, just as the metaphor of "the howling bear" depicted. At Microsoft, Gates and Ballmer championed the philosophy that "only excellent programmers can be managers," requiring leaders to stay hands-on and not detach from frontline coding. This meant NT's managers had to both orchestrate the big picture and dive deep into code, shouldering a double burden.

In this high-pressure environment, Cutler's explosive temper and exacting standards pushed the team to its limits. He mercilessly berated any work that fell short, and his famous threat—"Your ass is grass, and I'm the lawnmower"—kept every subordinate on edge. Yet, it was this unforgiving rigor that forged the team's powerful discipline and execution. As the project progressed, Cutler himself began to change. He started to offer affirmation and encouragement alongside the pressure, gradually evolving from an autocratic expert into a true technical leader.

Simultaneously, the integration between the NT and Windows camps deepened. Chuck Whitmer and others from the original Windows graphics department joined the rewrite of NT's graphics system. Moshe Dunie was appointed chief test officer, establishing a rigorous quality assurance system. The addition of Robert Muglia as a program manager strengthened the link between the technical team and market needs. Muglia repeatedly stressed that software features had to be pragmatic, focusing resources on the security, networking, and compatibility functions that enterprise customers cared about most.

The team's culture also became richer through this fusion. In the intense, male-dominated development environment, female programmer Therese Stowell initiated a witty "feminist movement" in jest, bringing a touch of levity and reflection to the tense atmosphere. Through a process of friction and adaptation, the NT team coalesced into a mature, combat-ready unit, fully prepared for the final sprint.
