“blackhole” by adam.lofting is licensed under CC BY 2.0
In the golden age of platforms, when Google and Facebook rose to dominance by capturing and monetizing user data, Silicon Valley developed a rhetoric of benevolence to mystify the new business models. Capitalists have long boasted of being job creators and producers of affordable goods, but these surveillance capitalists, as Shoshana Zuboff styles them, had a different pitch. They weren’t really capitalists. As humble public servants, they simply provided online tools and spaces in which users could connect with one another, share experiences, and participate in a more open world. Meanwhile, beneath the sleek surface of user-friendly apps, algorithms helped convert clicks into capital.
Sam Altman, CEO of OpenAI, is the apotheosis of Silicon Valley benevolence. Platforms walked so that OpenAI could run. While they curated online spaces to snare data, OpenAI used even more massive datasets scraped from the platforms and other parts of the internet to train its large language models and image generators. If Google once justified its surveillance with the merely negative assurance that it would not be evil, Altman presents his company as an unmitigated force for good—a steward of the most important innovation in history, artificial general intelligence (AGI). “In the next couple of decades, we will be able to do things that would have seemed like magic to our grandparents,” Altman has prophesied. “Everyone’s lives can be better than anyone’s life is now.” With aims as generous as these, OpenAI’s multibillion-dollar collaborations with Microsoft and its astronomical valuation seem beside the point. “Our primary fiduciary duty,” states OpenAI’s earnest company charter, “is to humanity.”
Nobody can top Altman’s do-goodism, and no company’s goals contradict the reality of its practices more dramatically than OpenAI’s. Together, the CEO and the company that he has molded in his own image embody the hollowness of technological progress under capitalism. This is my main takeaway from Karen Hao’s Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI. Even if tech companies start out with genuinely benevolent intentions—Hao concedes that OpenAI “may have begun as a sincere stroke of idealism”—they will eventually sacrifice the common good to capital accumulation, reckless growth, and the labor exploitation and environmental destruction that trail them like a ball and chain.
***
When Altman and others started OpenAI in 2015, the company wasn’t supposed to follow in the platforms’ footsteps. Elon Musk, whose literalist misinterpretations of science fiction are well documented, feared that Google was on course to develop an AGI that could annihilate humanity. Altman admired Musk for his entrepreneurial zeal and shared his techno-determinist conviction that AGI was inevitable. In an email to Musk, Altman started with the sci-fi premise that AGI was fated and then proposed that “it would be good for someone other than Google to do it first.” OpenAI would become the Rebel Alliance to Google’s Galactic Empire. In contrast to Musk’s nightmare image of an AGI so powerful that it could even hunt down his Mars colonists, Altman painted OpenAI as a benign nonprofit that would prioritize building safe AGI, keep it out of the hands of tech monopolies, and ensure that humanity at large benefitted.
As Hao explains, the tale that OpenAI “had to build good AGI before someone else built a bad one” was “a pillar of OpenAI’s founding mythology.” Altman was never much of a doomsayer. First as a startup founder, then as president of the Silicon Valley accelerator Y Combinator, Altman honed his ability to tell people what they needed to hear. Musk would later complain that Altman had bamboozled him into investing in OpenAI, and in 2023, Altman’s penchant for half-truths would contribute to his brief firing by OpenAI’s board. But in the company’s heady early days, Altman’s story about winning the AGI race for humanity did exactly what business science fictions are meant to do: attract belief, capital, and labor. Investors pledged $1 billion, and top researchers forwent potentially larger salaries elsewhere to join Altman’s upstart team.
But what is “good” AGI? And how, exactly, will OpenAI ensure that AGI benefits humanity? These questions reveal the black hole swirling at the center of OpenAI’s mission. When Hao put them to Greg Brockman and Ilya Sutskever, OpenAI’s president and chief scientist, respectively, they vaguely answered that AGI could “solve” health care and climate change. Did they mean using machine learning to scan medical images or help design smart electrical grids? These already existing techniques are too pedestrian to capture OpenAI’s ambitions—as are most other concrete examples. While Brockman and Sutskever did briefly mention an AGI doctor, their responses to Hao kept the conversation at the level of nebulous platitudes and outright guesses. They didn’t elaborate on how AGI would mitigate climate change, and when Hao pointed out that the example of the AGI doctor implied that the goal was to replace humans, Brockman could only evasively wax philosophical about “economic freedom” and muse on the possibility that OpenAI could distribute a universal basic income to the masses who had been liberated from their livelihoods.
Altman has his own conjectures. He has speculated about personal armies of artificial assistants, for example. And the OpenAI charter does include a working definition of AGI: “highly autonomous systems that outperform humans at most economically valuable work.” But these ideas are hazy. They need to be because “AGI” doesn’t really mean anything in particular. The term is more like the odd city slogan of Genova, Italy: “More Than This.” Whether you’re admiring the fountain at the Piazza De Ferrari or strolling along the docks at Porto Antico, “More Than This” crops up on public art installations to negate your immediate surroundings in the name of some ambiguous “more,” an excess that can never arrive because it exists only as a surplus beyond the given world. Similarly, AGI is just something or other beyond the equivocal term “AI” and the human intelligence upon which it’s modeled. Without clear definitions of “intelligence” and its more “general” characteristics, or “human,” or what it would mean to “outperform” humans, or what constitutes economic “value,” there can be no way to verify if OpenAI has achieved AGI. Without a robust account of the good, nobody can explain good AGI, its benefits, how to spread them, or why a corporation should be in charge of doing so. Ultimately, “AGI” is the empty “more” of a vacuous notion of progress.
Yet as advertising has demonstrated, a term doesn’t need to mean anything to move product. Likewise, AGI’s vagueness is strategically useful because it lets OpenAI move its own goalposts and sell the next version of itself. Sharing research was initially a cornerstone of building good AGI; then it wasn’t. Just a few months after Altman appealed to Musk’s fear of Google, Sutskever explained to Musk that OpenAI could become “less open” and “not share the science” once it had recruited idealistic researchers. OpenAI has released some models openly, but not the models that power its flagship product, ChatGPT. When Musk and his money left in 2018, Altman rejiggered OpenAI’s legal structure to create a for-profit arm within it, paving the way for the Microsoft deals and the development of ChatGPT to give Microsoft a return on investment. Today, after Altman has turned OpenAI into a for-profit public benefit corporation, little remains of the original nonprofit and its pretensions to openness. The anti-Google is now just another Google.
***
Justifying his oscillations as necessary steps toward AGI, Altman has repeatedly proven Hao’s point that “in a vacuum of agreed-upon meaning, ‘artificial intelligence’ or ‘artificial general intelligence’ can be whatever OpenAI wants.” And what OpenAI has wanted is a technology that scales. Nobody could adequately explain AGI to Hao, but Altman and OpenAI’s researchers thought they knew how to get it: by building ever larger language models. Tech prophets love to discover technical “laws” because they think they can divine the structure and trajectory of reality itself; at OpenAI, this tendency produced the so-called scaling laws, which describe how models improve by leveling up data, computer chips, and parameters. If the latter are analogous to neurons in the human brain, and if brain size correlates with intelligence, so the dubious reasoning goes, then increasing the model’s parameters in proportion with data volume and processing power should make it smarter.
So much for the ancient maxim that says nothing can come from nothing. For if “AGI” is a nothingburger, an empty signifier, it has nonetheless spawned a very large something: a tech empire that pillages both the virtual and material world. It was the scaling laws that provided the alibi for OpenAI’s colonization of the internet, turning Common Crawl, Wikipedia, YouTube, GitHub, copyrighted books and art, and other undisclosed sources into massive training datasets for its proprietary models. More data meant fewer scruples about its quality, which in turn led OpenAI to contract Kenyan workers to filter out the worst material. For all its devotion to humanity, OpenAI didn’t deem it necessary to pay these particular humans well, even as their work required them to sift through myriad descriptions and images of child sexual abuse, rape fantasies, bestiality, and self-harm. If ChatGPT has mostly avoided becoming another Tay, the 2016 chatbot whose offensive rants forced Microsoft to shut it down within hours of its release, this serviceability was purchased with the trauma of Global South workers. There is no AI output that is not at the same time a document of barbarism.
In addition to the energy that powers human ethical judgment, large language models have needed thousands of megawatt-hours of electricity, much of it generated with fossil fuels, during training. Though OpenAI is not open about its training data, some researchers have been able to estimate that building GPT-4 used enough electricity “to power San Francisco for three days.” Once trained, large language models devour ever greater amounts of electricity to answer daily queries, each of which costs about ten times more energy than a standard Google search. Queries are routed through datacenters whose size can be reckoned in terms of football fields or the largest university campuses, and whose water consumption rivals that of entire cities. Once again, “More Than This” captures OpenAI’s logic of excess. For its pursuit of scale necessitates that the already colossal datacenters metastasize in the coming years, housing more servers and chips to crunch more data, appropriating more land and water, and generating more tons of carbon dioxide. Shortly before the publication of Empire of AI, OpenAI and President Trump announced Stargate, an AI infrastructure venture that plans to build ten new datacenters in the US. The project, whose name is yet another nod to science fiction, has since expanded to Norway and the United Arab Emirates.
***
Altman stood proudly beside Trump during the Stargate announcement. Is OpenAI an empire because Altman is imperious? Ultimately, is Altman’s desire to benefit himself the reason for OpenAI’s failure to create technology that benefits humanity? One of the key insights of Hao’s book is that OpenAI is much bigger than Altman, and that’s why understanding the company and its CEO requires a conceptual enlargement that connects to the history of empire. It might not literally invade and plunder in the style of empires past, but OpenAI acts like an empire when it captures data and resources, exploits global labor, and justifies its power through a rhetoric of progress and rivalry with other tech empires. The analogy is clearest in places like Kenya, Chile, Venezuela, and Uruguay, where Hao documents how big tech continues colonial patterns of extraction. On the other hand, Altman is the main character in Hao’s story of how a handful of Silicon Valley elites are controlling the AI industry and blocking alternative approaches. Empire of AI is bookended by a detailed account of Altman’s firing and rehiring, a narrative that illustrates OpenAI’s abandonment of its nonprofit ideals through Altman’s eventual victory in the struggle over the company’s governance. Hao doesn’t reduce the company to the man, but she does present OpenAI as “a reflection and extension of the man who runs it.”
Hao’s spotlight on Altman is a necessary corrective to his technological determinism and the broader discourse of inevitability surrounding AI. No, AGI is not destined. No, OpenAI is not locked in a race to develop AGI first and ensure that it is “good.” These are science fictions, and what makes them relevant is not that they reflect our fate but that people like Altman have amassed ludicrous amounts of money and power in an attempt to make them our fate. In other words, Hao’s focus on Altman is a crucial form of demystification, a pulling back of the curtain that reveals the wannabe prophet, humanity’s supposed benefactor, to be just a rich tech guy with weird and dangerous ideas. As Edward Zitron puts it with refreshing candor, it’s high time to stop venerating Altman and start ridiculing him.
But another demystifying maneuver is possible. What if we pulled back the curtain and nobody was there for us to pin responsibility on? Of course, Hao is right that people make technology—or more precisely, a select few do—and it’s true that the technologies they make embody their values. Yet if we again zoom out from individuals to structures, other forces besides empire come into view and make even the elites’ motivations look more like the dutiful gestures of servants rather than the fiats of masters. Perhaps technological determinism is right about heteronomy but wrong about its source; perhaps the imperial expansion of AI is driven neither by greedy tech bros nor technical laws, but by an economy that demands perpetual expansion. For what is OpenAI’s reckless pursuit of scaling if not another modality of capitalism’s insatiable appetite for growth?
The late anthropologist David Graeber once remarked on a 2007 letter written by CEOs of several major energy companies in which they asked the government to force them to lower carbon emissions. For Graeber, the letter demonstrated that even the most powerful elites understood that the necessity of maximizing profit compelled them to keep overheating the planet. Only a conscious interruption of capitalism’s mechanisms would allow them to act ethically. In contrast, Altman is so drunk on our era’s obscene wealth inequality, which allows OpenAI to hoard and burn through billions annually without turning a profit, that he recognizes few limits (if any). His delusions about AGI’s ability to fix climate change—even as OpenAI’s pursuit of AGI exacerbates this very problem—are as deep as his shopworn faith in growth. Instead of asking the government to stop him, he wants taxpayers to subsidize further climate change by funding new datacenters. Yet while he lacks the CEOs’ self-interested sobriety, Altman is just as beholden to the market as they are. Competition with Google locked OpenAI into scaling, and the need to turn a profit and provide returns to investors is driving OpenAI’s mad scramble for energy. The alleged inevitability of AGI is simply the intractability of capitalism in disguise. While the chains that bind Altman to this monster may be gilded, he is still unfree.
***
When Altman announced OpenAI’s newest model, GPT-5, on X, he used a screenshot of the Death Star from Rogue One: A Star Wars Story. Finally, Altman has admitted that the rebels didn’t topple the Empire but created one of their own.
But like the Death Star in the original Star Wars, OpenAI has a fatal flaw at its core. Scaling, like growth, has limits. Sutskever has acknowledged that training data cannot keep expanding because “we have but one internet,” and OpenAI has already thoroughly plundered the data commons. With the emergence of DeepSeek, the Chinese large language model that was trained for a fraction of the cost of GPT-4, OpenAI’s narrative that it needs to accumulate vast amounts of money and computing resources has passed from science fiction to pure fantasy. Perhaps most damning of all, not only has OpenAI failed to advance toward AGI—GPT-5 was a disappointment—but the company can’t even deliver on its crasser economic promises. A recent study shows that “95% of organizations are getting zero return” from tens of billions of dollars invested in AI, and there is widespread recognition of an AI bubble.
No wonder that Altman has continued to pivot. Now the goal isn’t AGI but the equally vapid notion of “superintelligence,” while the threat of China and “authoritarian” AI has replaced the previous pretext of stopping computers from becoming self-aware and evil. As OpenAI seeks funding and political favors to inject life into its moribund project, Altman has joined forces with another dying empire, the United States under the Trump administration.
Instead of simply waiting for the AI empire to collapse under its own weight, could we do something to hasten its end? Hao concludes Empire of AI with a sketch of how to redistribute knowledge, resources, and influence. Knowledge can be redistributed by funding independent organizations and experts who can critically evaluate corporate models and research alternative forms of AI that are smaller, task-specific, community-based, and less environmentally destructive. To redistribute one of the most basic AI resources, human labor, unions can protect data workers from exploitation, while policy that mandates transparent training data can help identify models that infringe on creative workers’ intellectual property. Finally, influence should be redistributed from phony altruists and mystics to a public educated in AI’s “strengths and shortcomings,” “the systems that shape its development,” and “the worldviews and fallibility of the people and companies developing these technologies.”
Empire of AI is a vital contribution to this educational project, and the alternatives that Hao outlines are reasonable and desirable. However, one form of redistribution is conspicuously absent. Robustly achieving the three forms of redistribution for which Hao advocates would require a disruption far more profound than anything dreamed up by Silicon Valley science-fictioneers, namely, the abolition of Silicon Valley, and with it, redistribution of the market’s control to a much broader array of people and democratic structures. Restraints on the class power of the rich, democratic oversight over private corporations, hard limits on corporate appropriation and exploitation, strong unions, not-for-profit science, environmentally sustainable technologies, vigorous educational institutions—all the things that Hao proposes run counter to the logic of profitability and growth. They are best described with a word that doesn’t appear once in Empire of AI: socialism.
Socialism is the true alternative to Altman’s faux benevolence, empire, and the alienated hope for a computer intelligence so advanced that it can either save or slaughter us. Against the empty slogans of capitalist progress, socialism names a technological future that is not simply “more” but different: egalitarian, cooperative, human, sustainable.