OpenAI’s AI Empire: A Quest for Power and the Unintended Consequences of Generalized Intelligence
At the heart of every empire lies an ideology, driving its advancement and justifying expansion, even if such progress contradicts the ideology’s stated purpose. In the case of today’s artificial intelligence (AI) dominion, it is the pursuit of Artificial General Intelligence (AGI), touted as a means to “benefit all humanity.” OpenAI stands at the forefront of this movement, influencing the industry in a manner that has significantly altered AI development.
In an interview with Equity, Karen Hao, author of the bestselling book “Empire of AI,” expressed her conviction about OpenAI’s imperial nature:
“To truly grasp the magnitude and influence of OpenAI’s actions, one must acknowledge their power that surpasses most nation-states globally. They wield not just economic might but also political power. Their impact extends beyond geopolitics, permeating our daily lives, making them akin to an empire.”
OpenAI characterizes AGI as “a highly autonomous system that outperforms humans at most economically valuable work,” with the potential to “elevate humanity by increasing abundance, turbocharging the economy, and facilitating new scientific discoveries that expand human potential.” However, these aspirations have fueled an industry marked by immense resource consumption, vast data scraping, overburdened energy grids, and a readiness to deploy untested systems in the name of progress.
Hao contends that this trajectory was not predestined and asserts that scaling isn’t the sole path to AI advancement:
“Alternatively, one can develop novel algorithms or enhance existing ones to reduce data and computational requirements.”
However, such an approach would necessitate a compromise on speed.
“When the pursuit of beneficial AGI is framed as a competition where the winner takes all, as OpenAI has done, then speed becomes paramount above all else – speed over efficiency, safety, or exploratory research,” Hao explained.
To guarantee speed, Hao suggests that OpenAI has opted to refine existing techniques by pouring more data and computational power into them rather than investing in innovative strategies.
The financial investment required for this pursuit is staggering. Last week, OpenAI announced plans to burn through $115 billion in cash by 2029. Meta revealed a planned expenditure of up to $72 billion on AI infrastructure this year, while Google expects capital expenses of up to $85 billion in 2025, most of which will be allocated to expanding AI and cloud infrastructure.
Despite the colossal investment, the promised benefits for humanity have yet to materialize, while harms such as job loss, wealth concentration, and the exacerbation of mental health issues persist. Hao’s book also documents instances of workers in developing countries like Kenya and Venezuela, who were exposed to disturbing content including child sexual abuse material, and were paid meager wages – around $1 to $2 per hour – for roles such as content moderation and data labeling.
Hao rejects the framing that AI progress requires tolerating immediate harms, pointing to systems that deliver benefits without them:
“Google DeepMind’s AlphaFold, which is trained on amino acid sequence data and complex protein folding structures, demonstrates the potential of beneficial AI systems. It accurately predicts the 3D structure of proteins from their amino acids, significantly aiding drug discovery and disease understanding without causing mental health crises or environmental harm due to its minimal infrastructure demands.”
A recurring narrative surrounding AGI has been the race to surpass China in AI dominance, with the hope that Silicon Valley’s influence would promote liberalization worldwide. However, Hao contends that the reverse has occurred:
“Contrary to expectations, the gap between the U.S. and China has persisted, and rather than fostering liberalization, Silicon Valley has had an illiberalizing effect on global societies. The only entity seemingly immune to this impact is Silicon Valley itself.”
Some argue that OpenAI and other AI companies have positively impacted humanity through the release of large language models like ChatGPT, which promise substantial productivity gains by automating tasks such as coding, writing, research, customer support, and knowledge work. However, OpenAI’s dual structure – part non-profit, part for-profit – complicates how its impact on humanity is defined and measured. This complexity is further compounded by the recent news that OpenAI has reached an agreement with Microsoft, moving it closer to an eventual public offering.
Two former OpenAI safety researchers shared their concerns with TechCrunch about the potential blurring of OpenAI’s for-profit and non-profit missions. Hao concurs with these apprehensions, warning against becoming so consumed by the mission that reality is disregarded:
“As evidence mounts that what they are building is causing significant harm to people, the mission continues to mask this reality.”