The dispute between Scarlett Johansson and OpenAI is adding fresh heat to long-simmering questions about CEO Sam Altman’s credibility.
Why it matters: Altman and OpenAI, the company avowedly dedicated to making sure that AI “benefits humanity,” have to persuade a skeptical world that AI can be trusted — and that will be a lot harder if they lose trust themselves.
Flashback: Altman won an epic boardroom fight last fall against directors who fired him because they said he was “not consistently candid” with them.
At the time, it was a puzzling explanation for an abrupt dismissal — but a growing chorus of critics is now saying, “Oh, this is what you were talking about.”
Driving the news: OpenAI and Altman continue to insist that ChatGPT's female voice named "Sky" wasn't modeled on Johansson's, who famously voiced an AI assistant in the celebrated 2013 movie "Her."
But Johansson said Monday that Altman had twice approached her about providing the voice herself — a relevant bit of evidence that the company never shared.
OpenAI is now putting a “pause” on Sky’s use.
The Sky mess followed a week that saw the disbanding of a “superalignment” team at OpenAI dedicated to researching long-term risks of advanced AI.
At the same time, the team’s two leaders — OpenAI co-founder Ilya Sutskever and Jan Leike — left the firm.
Sutskever was among the board members who voted to fire Altman before an about-face in which he signed the nearly unanimous open letter from OpenAI employees demanding Altman’s return.
Leike blasted the company on his way out, saying “safety culture and processes have taken a backseat to shiny products.”
When OpenAI announced the superalignment team in July, it said it would dedicate 20% of its computing resources to the work.
But Leike complained that his team had been “struggling for compute” and “sailing against the wind.”
The intrigue: Vox reported that OpenAI's "extremely restrictive off-boarding agreement" bars departing employees from criticizing the firm or even mentioning the agreement itself. If they don't sign, they risk losing their vested stock options.
After Vox published its story, Altman posted on X, formerly known as Twitter, that "we have never clawed back anyone's vested equity," that he didn't know about the harsh terms and was "genuinely embarrassed," and that the exit agreements would be changed.
Yes, but: While some critical voices on X and elsewhere have begun to call for Altman’s resignation, that’s unlikely to happen unless his troubles dramatically deepen.
Members of the company's new board, reconstituted after the fight last year, are also much less likely than their predecessors were to challenge him.
The big picture: Altman and OpenAI have had to do a lot of explaining and backtracking of late. But the most important matter the company continues to equivocate about is what data it has used to train its AI models.
In a widely criticized Wall Street Journal interview in March, OpenAI CTO Mira Murati said she didn’t know whether the company had used YouTube videos to train its Sora video-making tool.
“We used publicly available data and licensed data,” Murati said.
As Axios has written, “publicly available data” can mean almost anything.
What we’re watching: OpenAI and the wider AI industry could defuse a lot of this distrust by sharing more about the training data they use.
But transparency could backfire if it revealed that they really did use huge amounts of copyrighted material.
One reason these companies prefer to settle conflicts like Johansson’s rather than fight them in court is that trials can pry loose information the firms would rather keep close.