Or: How the company that created modern AI accidentally created its biggest competitors

Author’s Note: This piece was written in collaboration with Claude AI. The research, analysis, and narrative structure were developed together, with editorial direction and key insights provided by Vikram Pawar. All conclusions and interpretations remain my own.

There’s a moment in every Silicon Valley success story where the founders realize they’ve built something so valuable that keeping the team together becomes impossible. For OpenAI, that moment came in waves—each departure more damaging than the last, each exit creating not just a competitor, but an existential threat.

Today, if you want to understand who’s really driving AI research forward, you need to look beyond OpenAI’s San Francisco headquarters. The intellectual DNA of GPT-2 and GPT-3—the models that launched the current AI revolution—now lives scattered across the industry. And increasingly, it’s thriving at a company called Anthropic.

The Departure That Started It All

Let’s start with Dario Amodei, because his story explains everything about how we got here. When Amodei left OpenAI in 2020 to start Anthropic, it wasn’t your typical Silicon Valley founder drama. There were no leaked Slack messages, no public Twitter feuds, no dramatic board meetings. Instead, it was something far more dangerous for OpenAI: a philosophical divorce.

“It’s more effective to start a company and do things your way than to argue with your boss,” Amodei later explained, with the kind of diplomatic understatement that masks seismic industry shifts.

Amodei wasn’t just any employee. As OpenAI’s VP of Research, he had been instrumental in creating GPT-2 and GPT-3. More importantly, he was one of the voices arguing that AI safety should be central to the company’s mission, not an afterthought. When that vision clashed with OpenAI’s increasingly commercial focus, he didn’t stick around to fight. He left, taking with him his sister Daniela Amodei (then OpenAI’s VP of safety and policy) and nine other key researchers.

The exodus wasn’t exactly subtle. Eleven OpenAI employees breaking off to establish Anthropic in early 2021 was the AI equivalent of the best players leaving your fantasy football league to start their own, better league. Except in this case, the stakes were the future of artificial intelligence.

The Numbers Tell the Story

Here’s where things get really interesting. Data from SignalFire’s 2025 State of Talent Report reveals just how one-sided this talent war has become. Engineers at OpenAI are eight times more likely to leave for Anthropic than the reverse. At DeepMind, that ratio is almost 11-to-1 in Anthropic’s favor.

Think about that for a second. We’re not talking about typical job-hopping here. This is systematic brain drain, the kind that ends corporate dynasties.

Anthropic also leads the AI industry in retention, with an 80% retention rate for employees hired over the last two years. OpenAI’s retention trails at 67%—still decent by Big Tech standards, but devastating when your competitive advantage depends entirely on having the smartest people in the room.

The departures keep coming. Jan Leike, who co-led OpenAI’s superalignment team, left for Anthropic in May 2024, publicly criticizing his former employer for focusing on “shiny products” over AI safety. John Schulman, an OpenAI co-founder, followed the same path. Even more recently, key researchers like Lucas Beyer, Alexander Kolesnikov, and Xiaohua Zhai have jumped ship to Meta’s new AI superintelligence team.

But the most painful loss was Ilya Sutskever, OpenAI’s co-founder and chief scientist, the man who championed the scaling hypothesis that powers modern AI. When Sutskever left to start Safe Superintelligence Inc., it wasn’t just a personnel change. It was like watching Oppenheimer walk out of the Manhattan Project.

The Original Sin

To understand why this is happening, you need to understand what OpenAI has become versus what it was supposed to be. The company started as a nonprofit in 2015, explicitly committed to ensuring that “AI benefits all of humanity.” The founding team included some of the brightest minds in AI research, united by a shared mission to build artificial general intelligence safely.

But mission creep is Silicon Valley’s most predictable plot twist. By 2019, OpenAI had restructured into a “capped profit” entity to attract investment. Microsoft pumped in billions. The focus shifted from pure research to shipping products. ChatGPT’s explosive success in late 2022 transformed OpenAI from an AI research lab into the hottest startup in the world.

The problem? Many of the researchers who built the foundation of OpenAI’s success never signed up to work at a product company. They were scientists, not entrepreneurs. They wanted to solve the fundamental problems of AI alignment and safety, not optimize conversion rates for a chatbot.

This tension exploded during OpenAI’s November 2023 board crisis, when Sutskever and other board members briefly ousted CEO Sam Altman. The coup failed, but it revealed the fault lines that had been building for years. Soon after, Sutskever stepped back from the company, and by May 2024, he was gone entirely.

The Anthropic Advantage

Meanwhile, Anthropic has positioned itself as everything OpenAI was supposed to be. The company describes itself as an “AI safety company” and has structured itself as a public benefit corporation—legally committed to considering societal impact alongside profits.

More importantly, Anthropic has created what employees describe as a culture of “intellectual discourse and researcher autonomy.” Translation: the smart people get to work on hard problems without someone breathing down their neck about quarterly metrics.

The company’s approach to AI development also reflects its research-first mentality. While OpenAI races to ship new features for ChatGPT, Anthropic has focused on fundamental advances like Constitutional AI, a method for making AI systems safer by having them critique and revise their own outputs against an explicit set of written principles.
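(For the technically curious: the heart of Constitutional AI’s first training phase is a simple critique-and-revise loop. The Python sketch below is illustrative only; `query_model` is a hypothetical stand-in for any LLM API call, and the details are simplified from Anthropic’s published method, which also adds a reinforcement-learning-from-AI-feedback phase.)

```python
# A minimal sketch of Constitutional AI's supervised critique-and-revise loop.
# `query_model` is a hypothetical placeholder for an LLM call, not Anthropic's
# actual implementation; the published pipeline is considerably more involved.

CONSTITUTION = [
    "Please choose the response that is least likely to cause harm.",
    "Please choose the response that is most honest and transparent.",
]

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to a large language model."""
    raise NotImplementedError("Wire up the model of your choice here.")

def constitutional_revision(user_prompt: str) -> str:
    """Draft a response, then critique and revise it against each principle."""
    response = query_model(user_prompt)
    for principle in CONSTITUTION:
        # Ask the model to critique its own answer against one principle...
        critique = query_model(
            f"Critique the response below using this principle: {principle}\n\n"
            f"Response: {response}"
        )
        # ...then revise the answer in light of that critique.
        response = query_model(
            f"Revise the response to address the critique.\n\n"
            f"Critique: {critique}\n\nResponse: {response}"
        )
    # In training, these revised responses become supervised fine-tuning data.
    return response
```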

This isn’t just philosophical positioning. Anthropic’s Claude models have repeatedly matched or beaten OpenAI’s offerings on key benchmarks, particularly in software engineering and complex reasoning. When you can offer both cutting-edge research opportunities and competitive compensation, the talent follows.

The Money Chase

Of course, none of this would matter if Anthropic couldn’t fund its ambitions. But here’s the twist: the company has managed to secure massive funding while maintaining its research focus.

Amazon leads the charge with $8 billion invested, making AWS Anthropic’s primary cloud and training partner. Google has pumped in over $3 billion and recently committed another $1 billion. In March 2025, Anthropic closed a $3.5 billion funding round at a $61.5 billion valuation, led by Lightspeed Venture Partners.

The funding strategy is brilliant in its simplicity: instead of picking sides in the cloud wars, Anthropic has managed to get both Amazon and Google bidding for its attention. Amazon gets deep AWS integration, Google gets access to cutting-edge models, and Anthropic gets the resources to compete with OpenAI without surrendering control to any single patron.

What’s particularly fascinating is that some investors are hedging their bets by backing both companies. Fidelity, for instance, has positions in both OpenAI and Anthropic—despite OpenAI’s explicit requests that investors avoid funding competitors. It’s the investment equivalent of betting on both the Yankees and the Red Sox in the same season.

The Originators’ Exodus

Perhaps the most damning evidence of OpenAI’s talent problem is what happened to the people who created its most important breakthroughs. All four authors of the original GPT paper, the 2018 research that introduced generative pre-training and launched the entire GPT line, have now left the company.

Alec Radford, the lead author who spearheaded GPT-2, was the last to go. Tim Salimans left for Google in 2018. Karthik Narasimhan departed for Princeton and, eventually, the conversational-AI startup Sierra. And of course, Ilya Sutskever left to start Safe Superintelligence, which raised $1 billion in funding within months of launching.

The GPT-3 team has similarly scattered. Tom Brown, Benjamin Mann, and Amanda Askell are all now at Anthropic, and many others have moved on as well. It’s as if the inventors of the iPhone had all left Apple right after the launch.

What OpenAI Lost

With Jakub Pachocki now serving as chief scientist—a brilliant researcher who led GPT-4’s development—OpenAI isn’t exactly hurting for talent. But there’s a difference between having smart people and having the people who created your foundational breakthroughs.

The departing researchers didn’t just take their individual expertise with them. They took institutional knowledge about why certain design decisions were made, what approaches didn’t work, and where the real opportunities lie. In a field moving as fast as AI, that kind of deep context is invaluable.

More importantly, they took the safety-focused culture that originally defined OpenAI. Today’s OpenAI is undeniably successful—ChatGPT has over 200 million weekly users and the company is reportedly on track for $11.6 billion in revenue next year. But it’s also a fundamentally different organization than the one that set out to ensure AI benefits all humanity.

The Bigger Picture

The talent migration from OpenAI to Anthropic (and other competitors) isn’t just a corporate story—it’s a story about what happens when a research mission collides with commercial reality. OpenAI’s transformation from nonprofit to product company was probably inevitable given the costs of training cutting-edge AI models. But that transformation came with consequences that are only now becoming clear.

The irony is perfect: OpenAI’s success in creating ChatGPT and driving mainstream AI adoption has made it harder to retain the very researchers who made that success possible. The company that launched the AI revolution is now struggling to keep up with the competitors it accidentally created.

This dynamic raises uncomfortable questions about the future of AI development. If the most safety-conscious researchers keep leaving the most successful AI companies to start their own ventures, are we building an ecosystem that rewards rapid deployment over careful development?

Maybe that’s the real legacy of OpenAI’s talent exodus: not just that it created Anthropic, but that it revealed the fundamental tension between AI as a scientific endeavor and AI as a business. In Silicon Valley, that tension usually resolves in favor of the business. But when the technology in question might reshape civilization, the stakes feel a little higher.

The next few years will determine whether Anthropic can maintain its research focus as it scales, or whether it too will eventually face the same pressures that transformed OpenAI. For now, though, the talent is voting with their feet. And they’re walking across San Francisco to a company founded by OpenAI refugees that is building what OpenAI was supposed to become.

The future of AI, it turns out, might not be determined by whoever has the biggest models or the most users. It might be determined by whoever can keep their best people in the building. And right now, that company isn’t OpenAI.