Move fast and forget your mission: What the OpenAI coup tells us
The conflict at OpenAI is between quickly seizing a huge economic opportunity and taking the time to develop AI responsibly. Society needs public institutions to step up.
The shakeup at OpenAI might have been easy to overlook unless you closely follow tech news. On Friday, OpenAI's board dismissed CEO Sam Altman, triggering the departure of other key team members.
What led the board to dismiss the CEO of one of the world's most influential and successful startups? It turns out that it was the board of OpenAI's non-profit arm (OpenAI Inc.), rather than the commercial company (OpenAI Global LLC), that made this decision. While details are still surfacing, it appears that Sam Altman's aggressive strategy in establishing OpenAI as a leading technology firm, capturing unprecedented business opportunities, clashed with the non-profit board's mission. Led by AI luminary Ilya Sutskever, the board favored a slower, more responsible approach to AI development.
Move fast and forget your mission
From a business standpoint, Altman's approach was not only understandable but executed brilliantly. ChatGPT, launched just a year ago, quickly became the fastest-growing application in history, with over 100 million active users, millions of whom pay $20 a month. The unveiling of new features at the recent OpenAI DevDay was staggering. One minute into the presentation, Altman remarked, "there’s a lot, you don’t have to clap each time," before going on to make groundbreaking announcements that rendered thousands of startups obsolete. Some have even called the event AI's Red Wedding. ChatGPT, arguably the most important product launch of this century, is a constant topic in board meetings worldwide.
Despite this, Altman was ousted as CEO.
To fully understand the situation, it's important to note that OpenAI began with a commitment to responsible AI development. Elon Musk, a co-founder at the time, highlighted concerns about AI's power, especially if monopolized by a single company, likely alluding to Google. Citing the British historian John Dalberg-Acton's principle that "Freedom consists in the distribution of power, absolutism in its concentration," the small non-profit aimed to develop AI with humanistic values at its core, ensuring its power was widely distributed rather than centralized in a few hands.
However, a few years into its journey, the attraction of potentially enormous profits and the need for extensive computing power led to the formation of a for-profit subsidiary that remained under the control of the non-profit entity. This structure meant that the non-profit's six-member board, consisting of Sam Altman, board chair Greg Brockman, Chief Scientist Ilya Sutskever, and the non-employees Adam D’Angelo, Tasha McCauley, and Helen Toner, would govern the fate of the for-profit company.
At the time of writing, the remaining four-member board is facing pressure to reverse its decision and reinstate Sam Altman and Greg Brockman, which would likely mean the resignation of the current board. Alternatively, Altman might start a new company, possibly taking with him key employees who are loyal to him.
The tension between business and social good
As far as we know, the conflict here is between the goal of the non-profit, which is to develop artificial general intelligence (AGI) for the benefit of humanity, and the enormous business opportunities that open up for an entity at the cutting edge of this development.
In principle, OpenAI has been very clear that the non-profit mission takes precedence. They enshrined this principle in a pink box with a bold black outline, clearly stating the following (emphasis mine): “The Company's duty to this mission and the principles advanced in the OpenAI, Inc. Charter take precedence over any obligation to generate a profit. The Company may never make a profit, and the Company is under no obligation to do so.”
In light of this, the recent developments make more sense. They highlight the tension between doing what is socially responsible and pursuing enormous business opportunities.
The two are not necessarily in conflict, but you cannot optimize for both goals at the same time. This is why we have a private and a public sector. The public sector now needs to step up its game.
Public sector: Your turn
I highly recommend reading the OpenAI page from which the above quote was taken. It starts as follows:
We founded the OpenAI Nonprofit in late 2015 with the goal of building safe and beneficial artificial general intelligence for the benefit of humanity. A project like this might previously have been the provenance of one or multiple governments—a humanity-scale endeavor pursuing broad benefit for humankind.
Seeing no clear path in the public sector, and given the success of other ambitious projects in private industry (e.g., SpaceX, Cruise, and others), we decided to pursue this project through private means bound by strong commitments to the public good.
The OpenAI founders didn't see a viable path in the public sector because none was offered. The public sector has largely fallen behind in driving technological advancements in the digital era, a stark contrast to its role in the last century. Developing AGI is resource-intensive, similar to creating nuclear bombs or large particle accelerators. In these cases, governments recognized the importance and took charge, understanding these were too critical to be left to the private sector.
Consider a hypothetical scenario in which the atomic bomb had been developed by a company under a non-profit aimed at societal welfare. This would be alarming, and yet it mirrors our current situation with AGI.
In my opinion, public universities should now become central in AGI development and align it with humanistic values. They have the two essential elements: access to top talent and public ownership. The former is crucial for building AGI, and the latter ensures the technology isn't controlled by a few.
Whether universities should tackle this independently or form a CERN-like network for AGI is a secondary matter. The primary concern is that governments must be prepared to significantly fund open and safe AI. Yes, it's costly: OpenAI has raised over $10 billion, CERN's annual budget is about $1.4 billion, and the Manhattan Project cost roughly $25 billion (adjusted for inflation). But what's the alternative?
It's critically important that we invest publicly in AI development. We’re talking about a technology that will forever change humanity. The current approach, developing regulation to keep the private sector aligned with society’s interests, is important, but it's not enough. The public sector must become an active and leading player in this development.
CODA
This is a newsletter with two subscription types. I highly recommend switching to the paid version. While all content will remain free, I will donate all financial support, starting in 2024, to the EPFL AI Center.
To stay in touch, here are other ways to find me:
Writing: I write another Substack on digital developments in health, called Digital Epidemiology.
Let's not lose hope, but even Oppenheimer had the weakness to acquiesce in dropping the bombs on Hiroshima and Nagasaki rather than staging a public demonstration, which was an alternative scenario. But the desire for revenge was too strong, like the one driving Nadella of Micro$oft, all too happy to take on Google, whose search engine has dominated Bing for many years.
Source: https://nsarchive2.gwu.edu/nukevault/ebb525-The-Atomic-Bomb-and-the-End-of-World-War-II/documents/025.pdf