Lawyers delivered closing arguments in the Musk vs. Altman lawsuit Thursday in a final attempt to convince a judge and jury that their respective clients, Elon Musk and Sam Altman, are the more well-meaning and honest stewards of OpenAI’s nonprofit founding mission. A ruling could be handed down as soon as next week, ending a decade-long battle between two of the tech sector’s most influential entrepreneurs.
But whatever the outcome, there are a lot of losers in this matter. Based on ample evidence, the people worst off appear to be the employees, policymakers, and members of the public who believed in the mission of a nonprofit research lab and supported OpenAI for that reason. What seemed to take precedence for Musk and OpenAI’s other co-founders, at almost every moment, was building the world’s leading AI lab, even if it meant creating a multi-billion-dollar for-profit company.
“It’s difficult to see how the public interest is protected by any of these parties, and that’s really what’s ultimately at stake in a nonprofit case,” says Jill Horwitz, a law professor at Northwestern University with expertise in nonprofits and innovation, who listened to the closing arguments. “The public interest in the nonprofit organization is at risk, regardless of who wins.”
OpenAI’s stated mission is to ensure that artificial general intelligence (AGI) benefits humanity, but humanity is not a party to this lawsuit. In practice, OpenAI has spent the last decade racing multi-billion-dollar companies like Google to build AGI first. Meanwhile, Musk and Altman have fought tooth and nail for control of OpenAI.
“Musk and Altman are basically in a race to be the first to build superintelligence, and they both rightly fear what the other will do if he wins. The rest of us should fear them both,” says Daniel Kokotajlo, a former OpenAI researcher who joined the company in 2022 and has raised concerns about its safety culture. He was part of a group of former OpenAI researchers who filed an amicus brief in this case opposing OpenAI’s conversion to a for-profit, arguing that the nonprofit structure was essential to their decision to join the company.
During the trial, OpenAI’s nonprofit was discussed as if it were just another corporate investor. OpenAI’s lawyers argued that giving the nonprofit a $200 billion stake in the for-profit company is proof that OpenAI is fulfilling its mission. Public advocacy groups disagree that funding alone is enough.
“I’m one of many people who are happy to see how many philanthropic resources the OpenAI Foundation has to do good work,” says Nathan Calvin, vice president of state affairs for the AI safety nonprofit Encode, which filed an amicus brief opposing OpenAI’s restructuring earlier in this case. “But it’s worth remembering that the nonprofit also has a governance role, and the mission of the nonprofit is not that of a typical foundation. It is specifically about ensuring that AGI benefits all humanity. Money is important to that goal and is useful, all else equal, but it is not the goal in itself.”
Origin story
Evidence revealed in this case suggests that Altman and Musk agreed to launch OpenAI as a nonprofit while operating it much like a typical startup. They shared the goal of beating Google DeepMind in the AGI race. But structuring OpenAI as a nonprofit turned out to be a terribly awkward way to win that race.
Musk accused Altman, OpenAI’s CEO, and Greg Brockman, its co-founder and president, of moving away from the nonprofit’s founding mission. He claims the founders used his $38 million investment to turn OpenAI into an $850 billion company and make several of its co-founders billionaires.
To win this case, Musk must convince the jury and judge that he attached conditions to his investment, including that OpenAI could only use the money for charitable purposes, and that he filed the case in a timely manner. In response, OpenAI argued that Musk failed to prove any of these claims and is simply nursing sour grapes over losing control of the AI lab.
In one of the first emails Altman sent Musk, in May 2015, he proposed the creation of a “kind of nonprofit organization” that eventually became OpenAI, writing that people working there would receive “startup-like compensation.” Musk said it was “worth discussing.”
Virtually nothing presented at trial explained what the business partners planned to do if the nonprofit ended up with more money than it needed. There was some discussion of open-sourcing the technology, but OpenAI’s lawyers argued that there was never an agreement to do so. In practice, the focus appears to have been on purchasing expensive servers to build more powerful AI models, alongside extensive research into developing safeguards around them.
In her closing argument, OpenAI lawyer Sarah Eddy said it was essentially “undisputed” among the co-founders that they would eventually need more money than they could hope to raise from donations alone. She cited Ilya Sutskever’s testimony that “OpenAI’s mission is larger than a structure.” Eddy went on to say that if OpenAI hadn’t gotten the funds it needed, the mission would have failed.
OpenAI’s co-founders repeatedly stated, in emails and testimony, that they benefited from the nonprofit structure and mission. They argued that it gave OpenAI the “moral high ground,” which would prove strategically valuable in its quest to overtake Google DeepMind. The nonprofit mission was used to attract talented researchers, as well as to win the goodwill of policymakers and the public.
But throughout OpenAI’s history, the nonprofit structure was apparently seen as an obstacle to OpenAI’s transformation into a massive company. In December 2016, Musk wrote an email to OpenAI’s co-founders saying that creating OpenAI “as a non-profit might, in hindsight, have been a bad decision,” adding that “the sense of urgency is not as high.” The following year, Musk and the co-founders attempted to create a for-profit arm and even considered dissolving the nonprofit altogether. However, negotiations broke down after Musk demanded control of the company and Brockman and Sutskever requested significant stakes. Around that time, Brockman wrote in his journal about how OpenAI could make him a billionaire.
Shortly after these discussions, in February 2018, Musk suggested folding OpenAI into Tesla, his for-profit automaker, and even tried to recruit Altman to lead the AI unit, offering him a seat on Tesla’s board as a lure. Shivon Zilis, Musk’s deputy and the mother of four of his children, wrote in text messages at the time that Altman and Brockman had not “internalized the benefits of burying this in Tesla for a stealth advantage.” In an FAQ Zilis wrote for the proposed Tesla AI group, she said its strategy hadn’t been determined but “could be deeply proprietary.”
Kevin Scott, Microsoft’s chief technology officer, wondered at the time whether OpenAI’s early backers, such as tech investor Reid Hoffman, were OK with OpenAI becoming essentially a for-profit company. “I can’t imagine they funded an open effort to focus [machine learning] talent so they can then build a closed, for-profit thing on its back,” he wrote in an email to his boss. Hoffman indicated he didn’t mind, and Microsoft later agreed to deepen its financial and technical support for OpenAI after it launched a for-profit arm.
During OpenAI’s brief ouster of Altman in November 2023, which was harped on ad nauseam in this trial, text messages show that Altman and Microsoft CEO Satya Nadella selected new nonprofit board members. Altman presented them to the former board members who had fired him as conditions for his return to the company. “I was ready to go back into a burning building,” Altman said.
William Savitt, OpenAI’s lawyer, emphasized Thursday that no other AI company in the world relies on a nonprofit organization. “OpenAI remains a charity…stronger and more powerful than ever,” he said.
Despite OpenAI’s unique structure, it faces the pitfalls of any tech giant. In several lawsuits filed by ChatGPT users and their families, OpenAI has been accused of negligence and wrongful death for allegedly contributing to a suicide, a drug overdose, a mass shooting, and other fatal incidents. Last month, OpenAI supported an Illinois bill that would help AI labs avoid liability if their models contribute to societal disasters (a rival, Anthropic, objected). Media companies have sued OpenAI for copyright infringement. Current and former employees say OpenAI’s economic research unit has transformed into a defense arm of the company.
OpenAI has defended its work, launching new initiatives to address the societal impacts of AI and introducing safeguards to mitigate the dangers of its models. Google DeepMind, Meta, and other competitors face many of the same allegations. Indeed, OpenAI is increasingly difficult to distinguish from profit-driven publicly traded companies as it continues to chase ever-higher valuations. The nonprofit once burnished OpenAI’s public image, but Musk vs. Altman seems to have stripped away all but the last of that shine.
This is an edition of Maxwell Zeff’s Model Behavior newsletter. Read previous newsletters here.
