How OpenAI so royally screwed up the Sam Altman firing

Analysis by David Goldman, CNN

New York (CNN) — OpenAI’s overseers worried that the company was making the technological equivalent of a nuclear bomb, and its caretaker, Sam Altman, was moving so fast that he risked a global catastrophe.

So the board fired him. That may ultimately have been the logical solution.

But the manner in which Altman was fired – abruptly, opaquely and without warning to some of OpenAI’s largest stakeholders and partners – defied logic. And it risked inflicting more damage than if the board had taken no action at all.

A company’s board of directors has an obligation, first and foremost, to its shareholders. OpenAI’s most important shareholder is Microsoft, the company that gave Altman & Co. $13 billion to help Bing, Office, Windows and Azure leapfrog Google and stay ahead of Amazon, IBM and other AI wannabes.

Yet Microsoft was not informed of Altman’s firing until “just before” the public announcement, according to CNN contributor Kara Swisher, who spoke to sources knowledgeable about the board’s ousting of its CEO. Microsoft’s stock sank after Altman was let go.

Employees weren’t told the news ahead of time, either. Neither was Greg Brockman, the company’s co-founder and former president, who said in a post on X that he found out about Altman’s firing moments before it happened. Brockman, a key supporter of Altman and his strategic leadership of the company, resigned Friday. Other Altman loyalists also headed for the exits.

Suddenly, OpenAI was in crisis. Reports that Altman and ex-OpenAI loyalists were about to start their own venture risked undoing everything that the company had worked so hard to achieve over the past several years.

So a day later, the board reportedly asked for a mulligan and tried to woo Altman back. It was a shocking turn of events and an embarrassing self-own by a company that is widely regarded as the most promising producer of the most exciting new technology.

Strange board structure

The bizarre structure of OpenAI’s board complicated matters.

The company is a nonprofit. But Altman, Brockman and Chief Scientist Ilya Sutskever in 2019 formed OpenAI LP, a for-profit entity that exists within the larger company’s structure. That for-profit company took OpenAI from worthless to a valuation of $90 billion in just a few years – and Altman is largely credited as the mastermind of that plan and the key to the company’s success.

Yet a company with big backers like Microsoft and venture capital firm Thrive Capital has an obligation to grow its business and make money. Investors want to ensure they’re getting bang for their buck, and they’re not known to be a patient bunch.

That probably led Altman to push the for-profit company to innovate faster and go to market with products. In the great “move fast and break things” tradition of Silicon Valley, those products don’t always work so well at first.

That’s fine, perhaps, when it’s a dating app or a social media platform. It’s something entirely different when it’s a technology so good at mimicking human speech and behavior that it can fool people into believing its fake conversations and images are real.

And that’s what reportedly scared the company’s board, which remained majority controlled by the nonprofit wing of the company. Swisher reported that OpenAI’s recent developer conference served as an inflection point: Altman announced that OpenAI would make tools available so anyone could create their own version of ChatGPT.

For Sutskever and the board, that was a step too far.

A warning not without merit

By Altman’s own account, the company was playing with fire.

When Altman set up OpenAI LP four years ago, the new company noted in its charter that it remained “concerned” about AI’s potential to “cause rapid change” for humanity. That could happen unintentionally, with the technology performing malicious tasks because of bad code – or intentionally by people subverting AI systems for evil purposes. So the company pledged to prioritize safety – even if that meant reducing profit for its stakeholders.

Altman also urged regulators to set limits on AI to prevent people like him from inflicting serious damage on society.

Proponents of AI believe the technology has the potential to revolutionize every industry and better humanity in the process. It has the potential to improve education, finance, agriculture and health care.

But it also has the potential to take jobs away from people – 14 million positions could disappear in the next five years, the World Economic Forum warned in April. AI is particularly adept at spreading harmful disinformation. And some, including former OpenAI board member Elon Musk, fear the technology will surpass humanity in intelligence and could wipe out life on the planet.

Not how to handle a crisis

With those threats – real or perceived – it’s no wonder the board was concerned that Altman was moving at too rapid a pace. It may have felt obligated to get rid of him and replace him with someone who, in its view, would be more careful with the potentially dangerous technology.

But OpenAI isn’t operating in a vacuum. It has stakeholders, some of them with billions poured into the company. And the so-called adults in the room were acting, as Swisher put it, like a “clown car that crashed into a gold mine,” quoting a famous line from Meta CEO Mark Zuckerberg about Twitter.

Involving Microsoft in the decision, informing employees, working with Altman on a dignified exit plan…all of those would have been solutions more typically employed by a board of a company OpenAI’s size – and all with potentially better outcomes.

Microsoft, despite its massive stake, does not hold an OpenAI board seat because of the company’s strange structure. That could now change, according to multiple news reports, including from the Wall Street Journal and the New York Times. Among Microsoft’s demands, along with Altman’s return, is a seat at the table.

With OpenAI’s ChatGPT-like capabilities embedded in Bing and other core products, Microsoft believed it had invested wisely in the promising new tech of the future. So it must have come as a shock to CEO Satya Nadella and his crew when they learned about Altman’s firing along with the rest of the world on Friday evening.

The board angered a powerful ally and could be forever changed by the way it handled Altman’s ouster. It could end up with Altman back at the helm, a for-profit company holding a seat on its nonprofit board, and a massive culture shift at OpenAI.

Alternatively, it could become a competitor to Altman, who may ultimately decide to start a new company and drain talent from OpenAI.

Either way, OpenAI is probably left in a worse position now than it was on Friday before it fired Altman. And that’s a problem it could have avoided, ironically, by slowing down.

The-CNN-Wire™ & © 2023 Cable News Network, Inc., a Warner Bros. Discovery Company. All rights reserved.

This story has been updated to clarify the first sentence.
