Look, if you don’t know what Hacker News is, I’ll bet I know what you’re probably thinking about OpenAI’s weekend leadership troubles: Who cares? If you’re familiar with the name Sam Altman at all, he probably seems like just another rich guy in the pile of tech moguls who dominate headlines these days.
But Altman’s Friday firing as OpenAI’s CEO and his removal from its board in a boardroom coup—even with the dramatic resolution announced by Microsoft CEO Satya Nadella on Sunday just before midnight on the west coast—is more than just Succession: AI Edition. It reflects a disruptive organization failing to understand that leadership matters just as much as technology.
Altman’s messy weekend began when he found out, with no advance notice, that he was getting canned. His top deputy, now-departed OpenAI President Greg Brockman, found out just before the news went public. (So did Microsoft, which is heavily invested in OpenAI. Other investors didn’t even get a courtesy call.)
As details slowly, painfully dripped out from well-connected tech news sources like The Information, The Verge, podcaster Kara Swisher, and the understandably bewildered Altman and Brockman, it quickly became clear how messy things actually were. (“We too are still trying to figure out exactly what happened,” Brockman wrote on X, formerly Twitter, on Friday evening.)
The problem boils down to OpenAI’s roots as a nonprofit with an altruistic mission, albeit one with a founder list that included Elon Musk. A few years ago, OpenAI created a for-profit subsidiary that pushed its growing momentum into overdrive.
Altman, who previously led the tech accelerator Y Combinator, essentially ran OpenAI like a high-flying startup, with frequent new feature releases, including one that dropped just two weeks ago. Per The Information, Russian-born OpenAI chief scientist Ilya Sutskever worried about the ethical impacts of Altman’s commercial-minded leadership approach, especially after the most recent product launch. Quickly, Sutskever got other board members on his side.
The board was small, with just six members (including Altman, Brockman, and Sutskever), none of whom had ties to investors like Microsoft. Further, Altman had no direct equity stake in OpenAI, minimizing his leverage.
Sutskever didn’t have to convince very many people that his view of Altman was correct—after all, two board members were tied to the Effective Altruism philosophical movement, which worries about exactly these kinds of risks. Hence, the brutal, sudden firing.
Now Sutskever and other OpenAI staffers have come to regret what happened and on Monday morning called for the board’s removal—and Altman’s return to the helm.
I’ve seen lots of historical comparisons in recent days. Altman’s firing at the height of OpenAI’s success reminded me of Jack Tramiel’s unexpected exit from the eight-bit computing giant Commodore in 1984. (Tramiel landed on his feet almost immediately, taking over the consumer division of Atari later that year. With Microsoft hiring Altman and his team to lead an artificial intelligence research arm, the parallel fits.)
Others have suggested it puts Altman in the category of ousted Apple cofounder Steve Jobs—though Jobs was the one trying to pull off a boardroom coup in 1985, not the other way around.
While Microsoft has pledged to publicly stand behind OpenAI, which is taking on new leadership in the form of former Twitch CEO Emmett Shear, the truth is that OpenAI may find itself under threat of investor lawsuits—and the arrangement gives Microsoft the ability to cut bait at any time. (The Windows-maker, after all, already has exclusive licenses to OpenAI’s most valuable assets.)
There’s an unmistakable irony to all of this: The company that has been promising us a future with large language models (LLMs) and machine learning at the center has been stunted by the most human of actions—infighting.
Observers have compared the rush around LLMs to other recent tech waves, especially the blockchain-enabled Web3. But Web3 was quickly pushed aside, while ChatGPT and DALL-E have already found wide use among tech companies and even mainstream businesses.
Lots of companies smaller than Microsoft are deeply invested in OpenAI’s success. Meanwhile, this OpenAI-driven wave has proven more disruptive than past waves of innovation, with creative workers, for example, suddenly in danger of being displaced by a language model.
But don’t take my word for it. An academic working paper from this past March, written with the help of a trio of OpenAI researchers, projected that roughly 80 percent of the workforce could find portions of their work augmented or replaced by generative pretrained transformers (GPTs) or other kinds of LLMs, with some more affected than others.
“We conclude that LLMs such as GPTs exhibit traits of general-purpose technologies, indicating that they could have considerable economic, social, and policy implications,” the company’s Tyna Eloundou, Sam Manning, and Pamela Mishkin, joined by the University of Pennsylvania’s Daniel Rock, wrote.
If this technology could cause this much societal change—and signs are that it might—we’re going to want a stable organization at the forefront, one making the right decisions and developing the technology ethically, with its social impact in mind.
OpenAI is not that. But the scary thing is, Microsoft might be.