Elon Musk Leads the AI Culture Wars

It was only a matter of time before the culture wars came to AI.

Since the release of ChatGPT in late 2022, Elon Musk has railed on Twitter against what he calls “Woke AI.” He specifically criticized ChatGPT’s developer, OpenAI, for features designed to prevent the chatbot from parroting racism and sexism.

Now, the billionaire is courting AI researchers with a proposal to start a new AI company to rival the developer of ChatGPT, tech news site The Information reported on Wednesday.

“The danger of training AI to be woke — in other words, lie — is deadly,” Musk tweeted in December.

It is true that large language models (LLMs)—the technology on which ChatGPT is based—have difficulty telling the truth, often confidently conveying false information. But in his recent statements, Musk appears to conflate the problem of AI honesty with the largely separate efforts within AI companies to make their LLMs less racist and sexist.

The racism and sexism of unfiltered LLMs comes from the vast amounts of internet data the AIs are trained on. But a narrative appears to be developing in rightwing areas of the internet—now amplified by Musk—that racism and sexism are desirable features of AI, and that efforts to purge AIs of such prejudice are another form of “censorship” by powerful liberal forces. Rightwing influencers have drawn parallels between these efforts and the steps taken by social media companies to reduce hate speech and toxicity on their platforms.

If Musk follows through on his rumored plans to start an AI company, it won’t be his first rodeo. He was a member of the founding team of OpenAI, which was established in 2015 as a counter to what Musk and his co-founders saw as a dangerous concentration of AI expertise in the hands of for-profit tech companies. OpenAI was founded as a nonprofit that aimed to make its research open and accessible to all, but when it began developing large language models it changed that approach, arguing that the technology was too dangerous to release to the public. Musk left OpenAI in 2018 amid what he later said were disagreements over its direction. OpenAI has since transitioned from a nonprofit to a for-profit company, arguing that selling its services is the only way to reach the scale needed to cover the costs of cutting-edge AI development.

Musk also expressed alarm at the rapid rise in AI’s power. “I’m a little worried about the AI stuff,” Musk said at a Tesla investor event on Wednesday. “I think this is something we should be concerned about. […] It’s a dangerous technology and I’m afraid I might have done some things to speed it up.”

But if Musk believes that his role in building OpenAI accelerated the development of dangerous technology, he does not appear to believe that starting another AI company will have the same effect. On Tuesday, before the investor meeting, Musk tweeted a meme suggesting that “BasedAI” — the rumored name for his new venture — will do away with “Woke AI” and “Closed AI.” (The latter appears to refer to the practice by tech companies of hiding the most racist and sexist versions of AI chatbots from the public eye.) The term “based” comes from hip hop slang, where it is a term of respect that can mean you believe someone is being true to their authentic self. But it has since been co-opted by rightwing online communities, where it is used to praise people who are not afraid to express controversial opinions.

Igor Babuschkin, a researcher who was reportedly approached by Musk about his plans to start a new AI company, told The Information that Musk’s goal is not to create a chatbot with fewer safeguards than ChatGPT: “The purpose is to improve the reasoning abilities and the truthfulness of these language models.”

However, some AI researchers who spoke to TIME for this article said they were concerned that by talking about AI in the language of social media culture wars, Musk could disrupt the dynamics of a field where cooperation is very important — especially as the technology becomes more powerful. “By calling out the measures put in place to protect users as [instead] part of a ‘larger liberal conspiracy,’ Musk undermines the work of actually making these products better and more useful,” said Rumman Chowdhury, Twitter’s former AI ethics lead. “What I find funny about this tactic is, it serves no one but himself and his cronies who believe in these very conservative talking points. There is very little or no tangible value for the public at large, and there is no good business reason to do what he does. His intention is purely ideological and political.”

Other AI safety experts have also questioned Musk’s apparent opposition to “Closed AI.” “If someone could build a nuclear weapon in their basement for $10,000 — I’ll just say I’m glad we don’t live in that world,” said Michael Cohen, an AI safety researcher at the Future of Humanity Institute at the University of Oxford. “The idea that you can fix the dangers of uncontrolled AI by giving people AI they can’t control is ridiculous.”

As with all things Elon Musk, take his supposed plans with a pinch of salt. “Elon has a lot of bluster and posturing,” said Chowdhury, who briefly worked under the billionaire at Twitter before he was fired. “The last thing we need to do is think what he said is going to happen.”

Write to Billy Perrigo at billy.perrigo@time.com.