(2024-05-22) ZviM Do Not Mess With Scarlett Johansson

Zvi Mowshowitz: Do Not Mess With Scarlett Johansson. Andrej Karpathy (co-founder of OpenAI, departed earlier), May 14: “The killer app of LLMs is Scarlett Johansson. You all thought it was math or something.”

People noticed the new voice sounded suspiciously like Scarlett Johansson, who voiced the AI in the movie Her, which Sam Altman says is his favorite movie of all time.

I mean, surely that couldn’t have been intentional.

Again: Do not mess with Scarlett Johansson. She is Black Widow. She sued Disney.

This seems like a very clear example of OpenAI, shall we say, lying its ass off?

It’s increasingly clear that the board fired him (Sam Altman) for the reasons they gave at the time: he is not honest or trustworthy, and that is not acceptable in a CEO!

And it seems they’re doubling down on this, with carefully worded statements that don’t really get to the heart of the matter.

Sure Seems Like OpenAI Violated Their Own Position

On March 29, 2024, OpenAI put out a post entitled Navigating the Challenges and Opportunities of Synthetic Voices.

Sam Altman’s Original Idea Was Good, Actually

Ultimately, I think that AI voices absolutely should, when desired by the user, mimic specific real people’s voices, with of course that person’s informed consent, participation, and financial compensation.

This Seems Like a Really Bad Set of Facts for OpenAI?

In this case, he made one crucial mistake: The first rule of asking forgiveness rather than permission is not to ask for permission. The second rule is to ask for forgiveness. Whoops, on both counts.

Does Scarlett Johansson Have a Case?

I am not a lawyer, but my read is: Oh yes. She has a case.

It all seems like far more than enough for a civil case, especially given related public attitudes.

Companies have tried to impersonate famous voices before when they can’t get the real thing, and it generally doesn’t go well for the company. In Midler v. Ford Motor Co. (9th Cir. 1988), Ford hired a Bette Midler sound-alike after she declined to sing for a commercial, and Midler won.

The Big Rule Adjustment

As I have said before: Many of our laws and norms will need to adjust to the AI era, even if the world mostly ‘looks normal’ and AIs do not pose or enable direct existential or catastrophic risks.

In many places, fully enforcing the existing laws via AI and AI-enabled evidence would grind everything to a halt or land everyone involved in prison. In most cases that is a bad result. Fully enforcing the strict versions of verbally endorsed norms would often have a similar effect. In those places, we are going to have to adjust.

Often we are counting on human discretion to know when to enforce the rules, including knowing when a violation indicates someone who has repeatedly broken similar rules in damaging ways, versus someone who did it this once for pro-social reasons or who can learn from their mistake.
