

Dipak Kurmi
(The writer can be reached at dipakkurmiglpltd@gmail.com.)
The optics at the recent India AI Impact Summit in New Delhi captured, in a fleeting but telling instant, the deeper fault lines shaping the global artificial intelligence race. As Prime Minister Narendra Modi stood alongside industry leaders to endorse a set of commitments aimed at democratising AI, the stage momentarily froze around two figures: Sam Altman of OpenAI and Dario Amodei of Anthropic. While others linked hands in a choreographed show of unity, the two executives hesitated, briefly weighing whether to follow suit. They ultimately raised their fists instead, a subtle but unmistakable gesture that underscored the intensifying strategic divergence between their organisations. The moment, captured by Reuters and widely circulated, was more than a social media curiosity; it symbolised an increasingly consequential philosophical split over how artificial intelligence should be built, governed and monetised in the coming decade.
Behind the awkward choreography lies a well-documented history of ideological friction. Before founding Anthropic in 2021, Amodei spent formative years inside OpenAI, serving as Vice President of Research and helping steer the development of influential large language models such as GPT-2 and GPT-3. As those systems demonstrated how dramatically capabilities improved with scale, Amodei grew concerned that the pace of advancement was outstripping the field’s safety guardrails. In 2020, citing differences over the organisation’s approach to responsible development, he departed alongside his sister Daniela Amodei and several colleagues. Their new venture was explicitly framed around building “Constitutional AI” systems designed with safety and interpretability at the core. In a 2024 podcast, Amodei characterised the break more bluntly, remarking that it is “incredibly unproductive to try and argue with someone else’s vision.” That philosophical divergence has since hardened into one of Silicon Valley’s most closely watched rivalries, often simplified into a contest between rapid commercial scaling and cautious, safety-first engineering.
The tension spilt into public view earlier this year when Anthropic aired a pointed advertising campaign during the Super Bowl. One widely discussed spot showed a young man seeking fitness advice, only to be met with a chatbot-style monologue that abruptly pivoted into a product pitch for “StepBoost Max” insoles. Another, titled “How Can I Communicate Better With My Mother?”, depicted an AI therapist dispensing advice before similarly veering into commercial promotion. The closing line, “Ads are coming to AI. But not to Claude”, was widely interpreted as a critique of OpenAI’s exploration of advertising-supported models. Altman responded on X with measured irritation, acknowledging the ads were amusing but accusing Anthropic of mischaracterising OpenAI’s plans. He stressed that the company’s principles would prevent intrusive advertising practices and argued that broad free access to AI tools remained central to its mission. The exchange revealed how business models, not just safety doctrines, are becoming defining battlegrounds in the AI era, with each firm attempting to frame its approach as more aligned with user trust and long-term societal benefit.
Yet the rivalry between OpenAI and Anthropic is also a story about the remarkably tight social network from which much of today’s AI leadership has emerged. As observers have noted, the field’s most influential researchers often trained under the same academic mentors, circulated through the same laboratories and repeatedly recruited from one another’s teams. The pattern echoes the early-2000s “PayPal Mafia” phenomenon that followed eBay’s acquisition of PayPal, when alumni including Peter Thiel, Elon Musk and Reid Hoffman went on to seed a generation of Silicon Valley heavyweights. Their extended network helped launch or finance firms such as YouTube, LinkedIn, Tesla, Yelp, SpaceX and Palantir. Artificial intelligence now appears to be undergoing a similar founder-factory moment, with a small cluster of researchers repeatedly spinning out new ventures that quickly become industry contenders.
OpenAI itself was born from such a dense web of relationships. Alongside Musk and Altman, the founding team included leading researchers who sought to ensure that advanced AI would benefit humanity broadly rather than concentrate power. Thiel served as a major early backer, contributing significantly to Altman’s initial venture fund and later pledging support when OpenAI launched in 2015 with an ambitious $1 billion commitment from donors. Ironically, the organisation’s success has helped generate its own competitive ecosystem. Anthropic, now widely regarded as OpenAI’s most formidable challenger, is the most prominent example of alumni-driven competition, but it is far from the only one. The industry’s talent flows increasingly resemble a high-stakes academic diaspora, where departures routinely give birth to well-funded new labs.
Internal tensions within OpenAI have mirrored the broader philosophical divide playing out across the sector. In May 2024, co-founder and chief scientist Ilya Sutskever left the company after reportedly participating in the failed 2023 leadership challenge against Altman that was partly rooted in concerns about AI safety governance. Though employees ultimately rallied behind Altman, restoring him to the helm, the episode exposed unresolved questions about how rapidly frontier systems should be deployed. Soon after his departure, Sutskever co-founded Safe Superintelligence, a venture dedicated to building highly capable AI under tightly controlled safety frameworks. The pattern continued when former OpenAI CTO Mira Murati, who departed in late 2024, launched Thinking Machines Lab in early 2025, while researcher Aravind Srinivas had earlier left to co-found the AI search company Perplexity. Each departure reinforced the sense that the frontier AI landscape is being shaped by a relatively small but highly mobile cadre of researchers.
Another powerful node in this ecosystem is Google DeepMind, often described as one of the industry’s most prolific “founder factories”. More than 200 of its former employees have gone on to establish their own startups, helping to diffuse advanced AI expertise across the technology sector. The lab’s co-founder Mustafa Suleyman now serves as CEO of Microsoft AI, further illustrating how leadership talent continues to circulate among a handful of dominant institutions. This dense interconnection complicates the notion of cleanly separated rival camps; despite public competition, many of the field’s key figures share intellectual lineages, professional histories and even overlapping investors.
What the New Delhi moment ultimately revealed is that the contest between OpenAI and Anthropic is not merely a corporate rivalry but a proxy for deeper questions about the future architecture of artificial intelligence. Should progress prioritise rapid capability gains paired with broad deployment, or should it move more deliberately under stringent safety frameworks? Is advertising-supported access a pragmatic path toward democratisation, or does it risk distorting user trust? And perhaps most importantly, can a field driven by such a small, tightly knit community maintain genuine diversity of thought as systems approach transformative levels of capability? The raised fists on the summit stage may have lasted only seconds, but they captured a strategic debate that will likely define the next phase of the AI revolution.