Silicon Valley showdown: Elon Musk’s lawsuit unveils OpenAI’s struggle between altruism and ambition

In the turbulent world of technology, where hyperbole often masks reality, the clash between image and truth at OpenAI has been laid bare. Elon Musk, once a benefactor of the nonprofit founded to prevent AI harm, is now suing it, unveiling a complex narrative of broken promises and conflicting visions. Accusations fly about OpenAI’s for-profit shift and alleged breaches, raising questions about Musk’s own motives. As the Silicon Valley drama unfolds, the battle between altruism and ambition takes centre stage, exposing the fine line between visionary goals and opportunistic gains.

By Max Chafkin

Fast-growing technology companies exaggerate constantly. This isn’t to say they lie; it’s just that when you’re raising large sums of money, it helps to present an impressionist picture of your current business rather than a photorealistic one. Google’s founders talked about organizing “the world’s information” back when they weren’t even the most successful search engine from Stanford University; Airbnb’s founders argued that vacation rentals would, by some unknown alchemy, end hatred; and WeWork spun a fairly conventional real estate startup into a pseudo-spiritual lifestyle brand.

This form of BS is mostly benign, but it can create vulnerabilities when it’s overused or when a company that has previously stretched the truth finds itself targeted by an equally adept BSer. That’s the case with OpenAI, Silicon Valley’s hottest startup and the developer of ChatGPT. On Feb. 29 the company was sued by Elon Musk, the trollish billionaire with a simmering grudge against it and a lot of experience in delivering an overambitious sales pitch.

Musk’s complaint centers on the chasm between OpenAI’s image as a nonprofit research lab dedicated to preventing harms caused by artificial intelligence and the reality that it’s also the dominant maker of commercial AI software via a for-profit subsidiary that’s backed by Microsoft Corp. and valued at $86 billion. Musk provided seed funding for the nonprofit in 2016 but broke with the organization around the time it went for-profit, and he’s long professed outrage over this discrepancy. “This would be like, let’s say you funded an organization to save the Amazon rainforest,” Musk said in a television interview last year. “And instead they became a lumber company.” In a blog post OpenAI suggested that Musk’s account was incomplete and that he’d only turned against OpenAI when the nonprofit resisted his effort to take it over. In an internal memo the company also suggested that Musk had picked a fight out of jealousy.

According to the suit, Musk and Sam Altman, OpenAI’s chief executive officer, have a history that began almost a decade ago when both men were loudly warning about the dangers posed by so-called artificial general intelligence, or AGI. They feared, in essence, a Matrix-like scenario in which computers develop superhuman intelligence and then threaten to exterminate humankind. Many experts think this distracts from less cinematic threats posed by AI, but Musk and Altman persisted, founding OpenAI in 2015 as a nonprofit that promised “to advance digital intelligence in the way that is most likely to benefit humanity,” as its founding announcement put it.

The idea was that OpenAI would race to develop an altruistic AGI (before a less scrupulous software company got there first) and head off the risk of a hostile takeover by out-of-control code. It’s never been entirely clear how this would work, but Musk has said he hoped to weaken the market leader, Google, whose co-founder Larry Page had suggested to him in a private conversation that machine dominance of humans would “be the next stage of evolution,” according to the suit. (Musk has made versions of this claim before; Page hasn’t commented on it.) In response, Musk claims, he donated $44 million to OpenAI, recruited scientists and even came up with the name.

In 2019 the nonprofit spun off its capped-profit arm and struck a deal with Microsoft that would ultimately put the company’s chatbots into the Bing search engine and Microsoft’s wildly successful Office suite. The deal also helped finance ChatGPT, which became a successful product in its own right as Altman made grandiose claims about plans to capture and redistribute most of the world’s wealth, flirting with the prospect of raising billions (or even trillions) to speed things further.

In Musk’s telling, this adds up to a betrayal: A philanthropic venture was “transformed into a closed-source de facto subsidiary of the largest technology company in the world,” according to the lawsuit. “Where some like Mr. Musk see an existential threat in AGI, others see AGI as a source of profit and power,” the suit says, apparently referring to Altman.

There are many reasons to question Musk’s rationale and motives. The lawsuit claims OpenAI breached its agreement with Musk, but that agreement seems to exist mostly in Musk’s mind. His lawyers have produced no contract—only an email from Altman that mentions OpenAI would operate “for the good of the world” and a copy of the articles of incorporation. Musk also has a history of ill-conceived stunts and questionable lawsuits. The same day he filed his OpenAI complaint, a federal judge referred to arguments in a separate Musk suit aimed at a different nonprofit, the Center for Countering Digital Hate, as “one of the most vapid extensions of law I’ve ever heard.”

On the other hand, Musk’s complaint rightly points out that Altman took the art of Silicon Valley spin to new heights. Rather than simply embellish his company’s capabilities, Altman tried a new tack: His stuff was so good, he said, it might accidentally destroy the world. This was OK, though, according to Altman, because AGI was so promising. A race was on, and Altman—the self-styled “Oppenheimer of our age”—argued that the world would be better off if OpenAI prevailed, since it was governed by a nonprofit. Borrowing an old saw from the gun rights movement, Altman argued that the only way to stop a bad guy with a killer robot was a good guy with a killer robot.

It was always unclear whether OpenAI could rein itself in, but those questions became more pronounced last year when Altman was fired by his board for unspecified dishonesty. Days later, a Microsoft-backed countercoup brought him back and led to a purge of the board. An internal investigation is underway—along with reports of a US Securities and Exchange Commission inquiry—but it’s hard to believe any future board will feel empowered to hold Altman in check.

Apart from questions of oversight, Altman’s logic seemed a bit self-serving. If you truly believed you were making a weapon of mass destruction, a rational response might be to shut it down, instead of sticking it into hundreds of millions of copies of Microsoft Excel. By issuing warnings about worst-case scenarios, Altman diverted attention from OpenAI’s more mundane failings, especially its penchant for making stuff up, even as ethicists warned that ChatGPT would be misused by scammers and propagandists.

Musk’s lawsuit portrays Altman as a self-promoting hypocrite, a description that might also apply to Musk, who’s the master of cloaking entrepreneurial ambitions in exaggerated claims of do-goodery and who has a strong incentive to disrupt OpenAI’s business. Two Musk companies compete with OpenAI: Tesla Inc., which recruits engineers from the same talent pool as OpenAI, and xAI Corp., which offers a chatbot that Musk promotes as a less woke version of ChatGPT.

Moreover, the suggestion that Altman’s control of OpenAI represents a threat to the nonprofit’s mission is rich coming from Musk, who himself tried to take over OpenAI in 2018, according to a report from Semafor, turning critic only after the board rejected his offer. Tesla, unlike the rest of Musk’s empire, is publicly traded and subject to board oversight—though Musk seems to have little problem subverting that whenever he feels the need. In January, a Delaware judge ruled that the $56 billion pay package Tesla had awarded to Musk was unlawful, because Musk had exercised undue influence over the board. Musk is expected to appeal the ruling, and in the meantime he’s asked for a second stock award that could be worth another $50 billion or more. If he doesn’t get it, he’s said that he’ll develop AI products outside Tesla—that is, inside a company he controls.

In making this demand for greater pay, Musk adopted the same high-minded rhetoric about safety that Altman has employed, saying he would feel “uncomfortable growing Tesla to be a leader in AI & robotics” without an equity stake worth at least 25%. But behind this is the same basic motivation that Musk attributes to Altman. Both men say they’re focused on the risks posed by AI. Their actions suggest that in the meantime they wouldn’t mind taking advantage of those risks to make themselves very, very rich.

© 2024 Bloomberg L.P.