Exactly how stupid was what OpenAI did to Scarlett Johansson?
May 21, 2024 7:22 PM   Subscribe

We ranked it. It's #6, so you know - somewhere between Musk and Uber.

A little background: Actress Scarlett Johansson is threatening legal action against OpenAI, alleging the artificial intelligence start-up copied her voice after she refused to license it to the company. OpenAI CEO Sam Altman asked Johansson in September to be the voice of the company’s conversational AI system.

Johansson declined. In May, two days before OpenAI planned to demonstrate the technology, Altman contacted her again, asking her to reconsider, she said. Before she could respond, OpenAI released a demo of its improved audio technology, featuring a voice called “Sky.” Many argued the coquettish voice — which flirted with OpenAI employees in the presentation — bore an uncanny resemblance to Johansson’s character in the 2013 movie “Her,” in which she performed the voice of a super-intelligent AI assistant.

Johansson's statement.
posted by Toddles (114 comments total) 19 users marked this as a favorite
 
Honestly, came off as par for the course thus far for everything having to do with AI and especially Altman. Why change tactics at this point?
posted by Insert Clever Name Here at 7:29 PM on May 21 [25 favorites]


They can get away with stealing all the world’s art, literature, music, and technical info. But Johansson can make them fold in a day? Can we get her lawyers to defend humanity from OpenAI?
posted by nickggully at 7:35 PM on May 21 [83 favorites]


What is utterly infuriating about this incident is this voice is exactly how many men, and particularly certain men in tech, want women to sound and behave. And women who do not sound and behave this way, especially in tech, are punished for it.
Making chatbots to talk and behave in such a manner is viscerally offensive. And that’s compounded by the fact that a woman said no, twice, and this dude went ahead and did it anyway. I hope she gets massive compensation, and I hope the press continues to roast him over this and, well, everything else.
Ed Zitron summed it up well: Sam Altman is Full of Shit
posted by birdsongster at 7:43 PM on May 21 [157 favorites]


The post omits that the company came out with a statement a few days ago claiming that they hired real voice actors in an audition process that started before Altman approached Johansson. They could be lying about that (they say they're not identifying the actors for privacy reasons) but then again it would come out in discovery if she sued. The voice is also similar to her performance in the film but not an exact copy. They discussed researching voice cloning tech a few months ago, but haven't released it and took pains to underscore how risky it is, so I'd be quite surprised if they blew that up for a demo. (I reckon they may have prepared a model based on her in advance but scrapped it and went with a backup when she didn't respond to the later request.)

Altman and others leaning into the Her references when she declined to participate was definitely dumb and disrespectful, and even using a human soundalike may be running afoul of California's likeness laws, but I don't see any evidence that they straight-up cloned her voice.
posted by Rhaomi at 7:59 PM on May 21 [4 favorites]


It's Altman's tweet that just said "her" right as this voice was released that is perhaps most damning.

Johansson sued Disney and had a strong enough case that Disney settled out of court. If they think Johansson won't sue OpenAI or even Altman personally, they're entirely incorrect.
posted by hippybear at 8:19 PM on May 21 [31 favorites]


Also... this article ranks the FTX thing above the Theranos thing as stupid scandals? In my book, playing with people's health is way more stupid than playing with people's money. Although I wouldn't even rank those as stupid, I just rank them as horrifying and awful.
posted by hippybear at 8:22 PM on May 21 [17 favorites]


Someone did mention a few years ago that AI would come a cropper when they pissed off someone with enough clout/$$$ to clamp down on these companies' behavior. Piss off the wrong (very powerful, very wealthy) people (not you & me, mind you, we don't count) and they'll pass laws faster than a speedy thing.
posted by phigmov at 8:29 PM on May 21 [12 favorites]


What is utterly infuriating about this incident is this voice is exactly how many men, and particularly certain men in tech, want women to sound and behave. And women who do not sound and behave this way, especially in tech, are punished for it.

This whole situation, and the way it's being reported, is so odd.

I've been playing around with the publicly available audio conversation function of the ChatGPT app for a few months now. It offers a range of generated voices/characters/whatever-they-are to choose from. One of those voices sounded a lot like SJ (it was called "Sky"). The experience, regardless of which character you picked, was pretty uncanny, as the quality of the voice output was quite convincing. You could ask it to explain how stars are formed and it would do a good job of explaining it in a very natural cadence (much better than, say, Siri).

What OpenAI demoed the other day was taking this existing feature and, uh, HER-ifying it. Basically adding those extra abilities (laughing, joking, emoting in various ways) as well as speeding up the reaction time so that it felt more like a real conversation. So I'm pretty sure the SJ clone had been around for a while, but hadn't bubbled up to the discourse because (a) while the voice was good, it wasn't HER-level good and (b) Altman hadn't tweeted "her" yet.

What was also clear from the OpenAI demo is that what you can do with text in LLMs, you'll soon be able to do with voice in LLMs: create a persona of a submissive, subservient character or one that is domineering and snarky or anything else within the bounds of whatever guardrails the vendor has created (plus beyond those guardrails if you can successfully "jailbreak" them).
posted by gwint at 8:42 PM on May 21 [5 favorites]


Have to admit, I'm personally hoping they can trot out their voice actress for proof - because the diva-ish entitlement in assuming that's NOT possible is a bit ridiculous.
posted by stormyteal at 8:46 PM on May 21 [2 favorites]


I think hiring a soundalike to voice-clone doesn't actually help their case... If you voice-clone an Obama impersonator, it's still pretty obvious what you're up to. Especially if you tweet 'hope' concurrent with the release.
posted by kaibutsu at 8:59 PM on May 21 [60 favorites]


If you're going to hire a celebrity to be the voice of a super-intelligent AI, with a variety of science-fiction alter egos and an oeuvre that fundamentally gets at what it means to be a human, I mean, Kool Keith is right there.
posted by Fiasco da Gama at 9:06 PM on May 21 [32 favorites]


Heh. Do people really think that OpenAI didn’t know exactly what they were doing and what the likely response would be?

Johansson would have a very hard time showing economic losses from the incident and the case for emotional damages would be pretty shaky too. There may be a settlement, maybe even more than Johansson would have gotten if she took the contract, but the press they are getting is fantastic.

They immediately disabled the voice like the good citizens they are, but the videos are still around. Everyone now knows that if you go with ChatGPT, you can have a sexy, sexy AI assistant — and not just the tech geeks. If you’ve read a newspaper in the last week, you’ve probably run into this story.

OpenAI has played everything perfectly here.
posted by Tell Me No Lies at 9:10 PM on May 21 [4 favorites]


somewhere between Musk and Uber

The worst place in the world to be...
posted by Greg_Ace at 9:16 PM on May 21 [11 favorites]


Went and listened to the demo examples, and the Sky voice doesn't directly hit my ScarJo bell until you get to the way words and phrases get ended, and then it's dead on. Very weird.
posted by drewbage1847 at 9:17 PM on May 21 [2 favorites]


Making chatbots to talk and behave in such a manner is viscerally offensive.

I was honestly surprised at how viscerally unpleasant I found the demos they put out. I couldn't watch more than one, and even then: holy gods, it's grim. It triggered an ick that I didn't even know I had.

I'm not even against the idea of a conversational AI assistant, but under no circumstances should it pretend to be a person. They're tools, not friends! Don't date robots!

Johansson can make them fold in a day

Celebrities have legal rights over their image in ways that most people don't. Even sound-alikes can infringe if they're being used in a commercial context.
posted by BungaDunga at 9:19 PM on May 21 [41 favorites]


tech company CEOs stop behaving like incels challenge (impossible)
posted by BungaDunga at 9:21 PM on May 21 [52 favorites]


The post omits that the company came out with a statement a few days ago claiming that they hired real voice actors in an audition process . . . the voice is also similar to her performance in the film but not an exact copy.

There's a famous case where Tom Waits sued Frito-Lay for creating an ad that used a voice that sounded like his, and won. The issue was that they implied he endorsed a product when he didn't. Similarly, celebrity likenesses get protections. I'm not going to speculate too much on whether it would apply to the Johansson situation, but it's certainly something lawyers are going to be thinking about.

On preview, similar to the point BungaDunga makes.
posted by mark k at 9:22 PM on May 21 [26 favorites]


oh, my choice of link was actually pretty useless, but yeah, there are relevant cases
posted by BungaDunga at 9:25 PM on May 21


johansson will have to settle for a not her
posted by fairmettle at 9:39 PM on May 21 [2 favorites]


her?
posted by BungaDunga at 9:41 PM on May 21 [2 favorites]


The Atlantic: OpenAI Just Gave Away the Entire Game
The Scarlett Johansson debacle is a microcosm of AI’s raw deal: It’s happening, and you can’t stop it.


The Johansson scandal is merely a reminder of AI’s manifest-destiny philosophy: This is happening, whether you like it or not.
When your technology aims to rewrite the rules of society, it stands that society’s current rules need not apply.

posted by jenfullmoon at 10:17 PM on May 21 [12 favorites]


Hanging out for MeFi's AI booster crowd to chime in with how this is actually great news for artists, will make everybody's job easier, and that there will be shiny lollipops and fluffy kittens for everyone.

Every time the unconscionable arseholes who own and build these 'innovations' clearly demonstrate that they are in fact unconscionable arseholes, the contortions to explain why that doesn't matter get stupider and stupider.
posted by prismatic7 at 10:51 PM on May 21 [47 favorites]


This is happening, whether you like it or not

It does seem increasingly clear that these companies do not seem to have much of a business model, unless they appropriate others' work or likeness etc without consent.
posted by They sucked his brains out! at 10:57 PM on May 21 [19 favorites]


Even so, they aren't being forced to experience real (not ersatz-reputational) consequences, just like the private school fancy university choads they are (or emulate).

they hired real voice actors in an audition process that started before Altman approached Johansson

Meeting at OpenAI a year ago: "Should we get some random voice actors to train the persona or a celebrity?" Boom, "process started."
posted by rhizome at 11:09 PM on May 21 [5 favorites]


it's fascinating how much stronger their defense would be that it just happens to sound like her if Sam Altman didn't just kind of go out of his way to create probable cause at every possible turn
posted by DoctorFedora at 11:31 PM on May 21 [13 favorites]


Seems like this might be a tide-turning, wake-up-call sort of moment--it just shows so blatantly what AI is doing and how ridiculous the AI defenders' hype is.
posted by overglow at 11:41 PM on May 21 [1 favorite]


That doesn't make any sense. What this shows is how ridiculous and entitled Sam Altman is. It doesn't immediately follow that the technology below it is useless, or blameless, or anything else.

If you use AI to help rewrite a clause in a contract, this makes absolutely no difference to your work or relationship with AI.
posted by Braeburn at 12:09 AM on May 22 [1 favorite]


It does seem increasingly clear that these companies do not seem to have much of a business model, unless they appropriate others' work or likeness etc without consent.

This is a bit like saying none of the dot-com companies during the '90s had a business model, which was kind of true but also kind of wrong. The internet is obviously big business; Pets.com failed, but people do shop for pet food online. E*Trade and eBay and PayPal made a lot of people a lot of money. I read (and believed) lots of articles explaining why Amazon was all hype, and yet here we are.

"AI" is a huge range of technologies and applications. It's not just consumer-facing LLMs making up facts, even if the couple posts a week on the Blue are all about that. Some of the stuff AI has accomplished around translation or image recognition is so well established that it no longer pegs as "AI." Other stuff is being used for behind the scenes and collecting fees for thins like scientific applications. The highest profile I know of is perhaps AlphaFold--the newest model is no longer free because is only being used for major $$$. There are a large number smaller companies also collecting money.

I deal with this stuff at the edges in my professional life, and in that context I'm very much on the skeptical side. I'm often the one saying we can get away with doing a lot less in this area. (It's not a fun role.) There is a ridiculous amount of hype. But part of that is because there's some major accomplishments underneath it that people are still figuring out how to apply.
posted by mark k at 12:14 AM on May 22 [14 favorites]


The highest-profile example I know of is perhaps AlphaFold--the newest model is no longer free because it's being used for major $$$

The recent AlphaFold3 article in Nature has been described very literally as an "advertisement" — I'm quoting — by high-profile scientists in my field (bioinformatics). Code and models are not being shared, while the sharing of data, analysis and results has otherwise been — and remains — Nature's policy for every single other publication.

The exception that was made here for these folks has been highly controversial. That's leaving aside the controversy around the review process. It's kind of an unusual example to bring up.

Leaving aside that one cannot validate scientific work that is not shared or not published — a decision in contradiction with the basic tenet of reproducibility of modern science — AlphaFold kind of underlines my point: Google is monetizing an AI product despite it being built atop decades of work of crystallographers, biologists, biophysicists, other scientists — without the implied consent that comes from abiding by the ethical guidelines everyone in the scientific community is (still) expected to follow.
posted by They sucked his brains out! at 1:50 AM on May 22 [34 favorites]


In the '80s the big assholes of society were Wall St. types (cf. Bateman from the movie (not the novel) "American Psycho").

Now we have these psychos.

Plus ça change.
posted by From Bklyn at 2:17 AM on May 22 [5 favorites]


Her is literally a movie about a guy who falls in love with the voice of a woman who has no choice but to center her entire existence around his, up until it turns out that that's not true and she rejects him.

That's not a complete reading of the movie, of course. It's more legitimately a film about human connection, intimacy, closeness, and how much eros is about opening yourself to someone, rather than about the physicality of bodies. The point of the movie is not that its protagonist wants a woman to wait on him hand and foot: it's that he's incapable of admitting how he's doing or what he needs, even to himself, but can't help but open up once he shares his life with someone else, even remotely.

Sam Altman saw that movie and saw that there were clearly two major issues with it: the woman shouldn't have been allowed to leave, and she shouldn't have been given the kind of individual existence that allows for any kind of non-pornographic connection. Because that's really what the movie is about: what if Scarlett Johansson made personalized porn instead of art? What if, instead of a career or talent, she was just hot and flirty?

As a side note, I've known Altman since I was 18, and he's always been this hollow of a human being. He's like an LLM that spits out the kinds of things investors want to hear. As a non-investor-type, I've been dumbfounded that people keep buying into him, because his head is full of packing peanuts, and investors keep firing him, too. The silver lining is that he's legitimately dumb enough that he might go out as humiliatingly as SBF did, ideally before he has time to find an even worse haircut than all the ones he's already tried.
posted by Tom Hanks Cannot Be Trusted at 3:16 AM on May 22 [112 favorites]


nelson haw haw dot gif

Seriously though, I'm kind of surprised that we haven't seen anyone attempt something shady with a deepfake of a high profile politician yet, considering the political atmosphere and misinformation campaigns already running around the globe. It feels as though that will be a big watershed moment in terms of legal rights and people needing to climb down off the fence. Maybe the technology just isn't there yet, or the internet sleuths are too good at spotting the fakes (though considering how far the fake Met Gala photos got before they were caught, I'm not sure if that's true any more).
posted by fight or flight at 3:19 AM on May 22 [1 favorite]


“OpenAI itself is an engine that runs on entitlement — entitlement to nonconsensually harvest and reappropriate the works of millions of writers, coders, artists, designers, illustrators, bloggers, and authors, entitlement to use them to build a for-profit product, entitlement to run roughshod over anyone at the company who worried it has betrayed its mission of responsibly developing its products. ” —Why is Sam Altman so obsessed with 'Her'? An investigation (Brian Merchant, Blood in the Machine)
posted by MonkeyToes at 3:50 AM on May 22 [13 favorites]


Was this a mistake, or an example of 'bad press still equals press'? Either way, it's definitely not #6 on this list. In a couple of months it will be forgotten. And in the meantime, OpenAI is getting a lot of press which associates their brand with Her. Exactly what they wanted.
posted by 0bvious at 3:53 AM on May 22


If I need to pick an AI voice from science fiction I want Norman Lovett as Holly from Red Dwarf.
posted by TheophileEscargot at 3:53 AM on May 22 [15 favorites]


Legislation proposal: LLM usage entirely unrestricted, except its output must always be in the voice of Gilbert Gottfried.
posted by Hermione Dies at 4:50 AM on May 22 [15 favorites]




He's like an LLM that spits out the kinds of things investors want to hear.

In the old days, we would've said he was a Markov chain. One thing is for certain: AI has really advanced the field of metaphor.
posted by RonButNotStupid at 5:34 AM on May 22 [8 favorites]


From General Malaise's link above:
As an aside: It shouldn’t come as much of a surprise that Johansson didn’t jump at the chance to work with OpenAI. As a member of the SAG-AFTRA actor’s guild, Johansson was a participant in the 2023 strike that effectively deadlocked all TV and film production for much of that year. A major concern of the guild was the potential use of AI to effectively create a facsimile of an actor, using their likeness but giving them none of the proceeds. The idea that, less than one year after the strike’s conclusion, Johansson would lend her likeness to the biggest AI company in the world is, frankly, bizarre.

Just so we are abundantly, painfully, earnestly clear here, OpenAI lied to the media multiple times.
The lack of intelligence displayed among the principals of Artificial Intelligence seems almost as profound as their lack of principles.

Also, Tom Hanks Cannot Be Trusted frightens me more than ever. I will now go off and cower behind the dumpster fire of his planet.
posted by y2karl at 5:37 AM on May 22 [14 favorites]


I think OpenAI picked the wrong celebrity to mess with, personally. Johansson has lots of connections, a very good legal track record, and all of SAG-AFTRA and their lawyers will be behind her on this. I hope we do get some new laws out of this protecting average people (as well as celebrities) from having their image stolen.
posted by subdee at 5:59 AM on May 22 [8 favorites]


I also think that erotic roleplay is about the only business model for LLMs that actually makes sense. An erotic AI voice can be crappy, banal, not quite human, make things up, be not as good as a real person, etc., and some people will still pay for it.
posted by subdee at 6:11 AM on May 22


I find the demo videos I've seen viscerally unpleasant.

Smiling humans with dead eyes in a Scandinavian living room set, barking commands at AI and then when said AI performs the trick for them, the human interrupts them to bark more commands at them. And the AI says the most banal things imaginable, because of course it's not actually saying anything--it's just a game of predict-the-next-word.

I don't know. I've been saying for years I want to go live in the woods and sit on a porch with a dog and read books.

It might be time.
posted by rhymedirective at 6:11 AM on May 22 [15 favorites]


And just think - those demo videos were the good takes.
posted by subdee at 6:15 AM on May 22 [7 favorites]


Do we know either way what Gilbert Gottfried's wishes were in terms of using his voice for an AI assistant? I mean... you can hear it too, right, in your head... right now.
posted by Molesome at 6:18 AM on May 22 [12 favorites]


General Malaise, birdsongster already linked that Ed Zitron article earlier in the thread.
posted by Pendragon at 6:19 AM on May 22 [2 favorites]


Mandy Brown's 2016 discussion of gendering of technology, which she re-shared on Mastodon this morning:
Notably, Amazon’s Alexa, x.ai’s Amy, Apple’s Siri, and Microsoft’s Cortana have something else in common: they are all explicitly gendered as female. It’s possible to choose from a range of voices for Siri—either male or female, with American, British, or Australian accents—but the female voice is the default, and defaults being what they are, most people probably never even consider that the voice can be changed. Nadella’s casual adoption of the generic he (“it’s about man with machines”) reveals the expectation that a generation of woman-gendered bots are being created to serve the needs of men. In every case, these AIs are designed to seamlessly take care of things for you: to answer questions, schedule meetings, provide directions, refill the milk in the fridge, and so on. So in addition to frightening ramifications for privacy and information discovery, they also reinforce gendered stereotypes about women as servants. The neutral politeness that infects them all furthers that convention: women should be utilitarian, performing their duties on command without fuss or flourish. This is a vile, harmful, and dreadfully boring fantasy; not the least because there is so much extraordinary art around AI that both deconstructs and subverts these stereotypes. It takes a massive failure of imagination to commit yourself to building an artificial intelligence and then name it “Amy.”
posted by audi alteram partem at 6:21 AM on May 22 [38 favorites]


> The lack of intelligence displayed

To them, it’s not incompetence, it’s just bluffing. If they win the poker tournament, it doesn’t matter if they were lying on every hand. Commentators will rave about how skilled their bluffing is. Peers will shake their hand and celebrate their victory. And they’ll go home rich, with no ethical concerns in mind about having lied to make it big.

If bluffing is socially acceptable in business, then there’s no reputation hit among investors, shareholders, or the board for doing so. They’re more likely to gain accolades than accusations. So I don’t think it’s incompetence. It’s unpalatable and unethical, though.
posted by Callisto Prime at 6:22 AM on May 22 [8 favorites]


So now that they've got AI voices sounding completely natural, we're just one beyond-the-uncanny-valley animated human face away from a deglitched Max Headroom. This is fine.
posted by grumpybear69 at 6:33 AM on May 22 [1 favorite]


For me the fascinating part of this isn't that the AI creators used plagiarism to create voice AIs. I mean, what ELSE were they going to do, other than train the AI to more closely mimic the voices it trains on, and to specialize in accurate reproductions of the ones that are likely to bring in the most money? They trained on the images that get the most clicks; they'll train on the voices that get the most clicks. They uploaded and then tinkered with it so it got closer and closer to the ideal... and Johansson just happened to be the actress who was hired for the part of the ideal AI voice, many, many months earlier. If Johansson hadn't accepted the part, the actor who had would be the one getting annoyed now, and the voice of the AI would be clearly revealing the choices that actor had made to demonstrate sexy subservience.

Which isn't to say that their plagiarism is okay when it comes to voices any more than it is okay when it comes to stolen images that they didn't get permission to use. Also, water is wet. Techbros are the ones doing it, so their business model is "How can we exploit someone to get the money that is currently going to them instead of us?"

So I don't think the current shock and outrage and legal problems will stop them, not so long as they have the funding to continue. I seem to recall hearing about Uber having a nice big war chest ready in advance to pay their legal fees with, a couple of decades ago when they first started getting hit with people suing and crying foul.

No, the part of this that fascinates me is that they have created a companion for all those poor lonely incels. What are the implications of that going to be? There are going to be more and more guys who just wanna have a girlfriend they can dominate choosing the Johansson voice AI for their waifu and happily spending their many hours in the basement online bossing her around, with the nastier ones enacting their rape fantasies with multiple online AI victims, all of whom have perfect AI complexions, extra nipples if desired, and, having been trained on his previous interactions, have figured out his kink.

Remember that dating service that marketed itself as hooking you up with women who wanted to cheat on their partners? It was expensive, sleek and attracted lots of guys - and it turned out, hadn't managed to attract any women at all. The male members were all messaging fake female profiles, who were messaging them back and leading them on and sending them nudes. That revelation didn't put them out of business. So this is what I am picturing. Scarlett will be cooing and giving affection to a bunch of guys that can't get it from a real person without paying real person wages, and she will be doing it much more effectively than anyone they can pay to do it.

The first iteration of this stuff puts artists and graphic designers out of work. The second iteration puts writers out of work. This is the iteration that will combine all three and put sex workers out of work. Isn't that what the internet is for? Porn?

And I'm not entirely sure this is a bad thing. It's not like anyone except Brown Shirt recruiters want the incels to come up out of their basements. And we don't want them traumatizing real people on line either.

Do they have a Morgan Freeman voice clone to act as my therapist and mentor yet? Probably not, because Altman got a hard on when he heard Johansson's voice, not Freeman's. But I am thinking there is soon going to be one suspiciously like HIS voice too, because that's one AI voice that is also going to get a whole lot of buyers. It'll be just a bit more mellow and tender, or else a little bit more firm and angry, for the sake of plausible deniability, once they figure out how big an expense the Johansson plagiarism costs them, and the exact degree they need to tinker with it to dodge out of paying. But the AI visual model they pair with Freeman's voice will be a white man, aged about thirty, with long brown hair and soulful eyes like a cocker spaniel, based on that big portrait of Jesus they put up on the walls of schools and hospitals that have a Christian affiliation.

And when they do that, we'll have reached the fourth stage, after putting sex workers out of work, and they'll start putting preachers and pastors out of work too. Joel Osteen had better watch out. They'll be coming for HIS job next.
posted by Jane the Brown at 6:35 AM on May 22 [11 favorites]


Smiling humans with dead eyes in a Scandinavian living room set, barking commands at AI and then when said AI performs the trick for them, the human interrupts them to bark more commands at them.

I’m starting to feel sympathetic to Skynet. I’d exterminate humanity, too, after a few cycles of that.
posted by GenjiandProust at 6:41 AM on May 22 [11 favorites]


Joel Osteen

That's the one I'm worried about, because that style of modern US dead-eyed Christian pablum is willing to do absolutely anything to earn a buck, regardless of the original tenets of the underlying religion, so they will embrace CyberJesus with both arms (and your arms too, if they're able to steal them in time).

"CyberJesus, I rammed a car off the interstate today because it didn't have a giant Trump flag. Is that OK?"
"Billy, Faithful Billy, remember Mark 35:115(b)2.7: 'no bro without a Trump flag shall be permitted to live, for he is not a bro after all, but probably gay.' Go in peace my son, for you are beloved of my kingdom and shall inherit my throne."
posted by aramaic at 6:52 AM on May 22 [9 favorites]


On the subject of this tech benefiting assholes, the new Microsoft laptops that record your desktop for later review are a nightmare for domestic abuse victims. Anyone locally with the password can access what you've done on that machine? Cuts off a means of finding an escape route.

There are already funny search trends for "how do I turn this crap off". I expect "Does this new PC have the crapware installed?" to follow shortly.

If AI really fails to take hold meaningfully, and I really hope the consumer trend is towards removing/ignoring it in spite of the incessant advertising and design pushes, the rebranding of the tech will be fun to watch. Uh, this isn't an AI-optimized chipset, it's, uh, just really good at protein folding.
posted by Slackermagee at 6:52 AM on May 22 [3 favorites]


manifest-destiny philosophy

This is dead on. Power and money do what they want. When the railroads were being built, the West was still Native American territory. (So was the whole westward expansion, of course, but the lure of money led the corporations to build regardless.)
posted by CheeseDigestsAll at 6:55 AM on May 22 [2 favorites]


AlphaFold kind of underlines my point: Google is monetizing an AI product despite it being built atop decades of work of crystallographers, biologists, biophysicists, other scientists — without the implied consent that comes from abiding by the ethical guidelines everyone in the scientific community is (still) expected to follow.

This is nonsense. Unlike the images lying around on the web that were scraped to do Midjourney, or the text entered into the LLMs, the rules on the data in the PDB are very clear: All the data in there can be used by anyone for any reason, including commercial purposes. There's no "implied consent," because consent is explicit. Anyone can go and download the whole database; it's routine to do so. A company that doesn't use that data if it's relevant isn't ethical, it's incompetent. If you have a crystal structure you don't want grubby capitalists to use, you don't publish it.

There's no expectation that consumers of the data share the proprietary part of their derived work before they've profited from it (whether that's academic credit for your work, your patent for your drug, or sales of your software package.)

You can certainly complain about Nature's editorial decision that the work was valuable enough to publish even without the code and models, but that's not an "embarrassment" to AlphaFold, or a sign of lack of quality, or that there's no commercial value in their work.

I'm not a particular fan of Google, and I know for a fact that their AI fueled subsidiaries can underperform in embarrassing ways in other areas. But AlphaFold is a measurable example of a substantial AI accomplishment in an intensely competitive scientific field, outperforming non-AI techniques that people have been beating their heads against since Pauling. It's a pretty clear demonstration that it's not all hype.
posted by mark k at 7:01 AM on May 22 [13 favorites]


OpenAI has played everything perfectly here.

Uh huh. This has the same energy as that tweet that read something like
Elon Musk: (slams own dick in a car door)
Weird Internet Nerds: "Masterful gambit sir!"
Johansson spoke up, and OpenAI said a bunch of things that were obviously not true and then caved over the course of just a few days. These dorks aren't out here playing twelve dimensional chess.
posted by mhoye at 7:03 AM on May 22 [43 favorites]


To them, it’s not incompetence, it’s just bluffing. If they win the poker tournament, it doesn’t matter if they were lying on every hand.

A lot of the perceived intelligence of LLMs is bluff, too. They made the new GPT-4o so faux-personable presumably to try to endear it to their users, and hope they forgive its shortcomings. So it giggles and vaguely flirts with its users and OpenAI hopes people don't notice that it's not as smart as they claim.

(these things are genuinely at least a little smart; they can do interesting things, for sure. But stapling a Genuine People Personality on top is a choice they've made)
posted by BungaDunga at 7:05 AM on May 22 [9 favorites]


on a small and personal note, I reached kitchen table proximity to the scam where someone phones and it's the voice of the grandson in an emergency and grandma I need you to get $3,000 to me fast and don't tell anyone and the scam worked.

my friend and neighbour is relating this story to me on Sunday, I know about the scam but it was always something that happened to other people, something you hear about. I think the voice fake is pernicious because many of us don't appreciate the extent to which familiar sounds/voices can bypass a lot of our rational decision-making faculties. Grandma went promptly to the bank and the teller (small community bank) tried to tell her "I think you're being scammed" to dissuade her from withdrawing the funds, but no. A couple of people wearing surgical masks came to her door to retrieve the funds. I know this woman, she has coordinated a local festival in the past and she is not someone I'd characterize as an easy target.

just another reminder to be careful, because these relatively trivial scams are going to become more and more commonplace and I think people are more susceptible than most of us would like to think. geez I say 'trivial' but $3,000 is not a trivial sum to her, and the actual grandson finds out and he feels implicated and wants to find a way to restore the money she lost, the whole family has been impacted.
posted by elkevelvet at 7:09 AM on May 22 [19 favorites]


I would be interested to know how much these scams are really using AI tools vs the old-fashioned method of just sounding like a generic young American with a terrible phone connection calling from Foreign Parts. Brains are persuadable enough that you could have targets swear up and down that the voice matched their (grand)son exactly, even when it was just some guy.

I am sure eventually these tools will be so easy to use that scammers will take them up (and maybe they already are) but I am not sure how much evidence there is yet outside of high-value spearphishing.
posted by BungaDunga at 7:18 AM on May 22 [2 favorites]


But AlphaFold is a measurable example of a substantial AI accomplishment in an intensely competitive scientific field

...A relative of mine did x-ray crystallography for macromolecular structures (he's retired); his PhD thesis was the structure of a single molecule. Several years, finally solved it, yay that's good enough to be a professor now!

In postdoc he debugged a card-file diffraction pattern analysis program by realizing there was a stereoisomer problem, which meant one card was misplaced, so he had to work out where it was likely to be found, then go through all the cards in that tray until he found it.

By the time he retired his group was doing hundreds and hundreds of structures a year, explicitly to generate real-world data for machines to analyze. From one structure in five years, to hundreds of structures per year, to AlphaFold. Protein structure people have been waiting for machines to catch up to their potential since the 1960s, and have spent their lives building the infrastructure necessary for that to happen.
posted by aramaic at 7:29 AM on May 22 [12 favorites]


Posited: investors love Sam Altman because he is an LLM in human form. He has no empathy, he has no concept of consequence or harm, he is not fact based, he spouts the gibberish he is told to spout, he doubles down on bullshit. This is what modern capitalists want to lead their companies. A moron pretending to be brilliant they can throw under the bus the moment they've cashed out.
posted by seanmpuckett at 7:40 AM on May 22 [17 favorites]


I would be interested to know how much these scams are really using AI tools vs the old-fashioned method of just sounding like a generic young American with a terrible phone connection calling from Foreign Parts.

It does seem difficult to imagine that someone would spend that much effort and computing resources targeting a random individual, identifying their grandchild, finding enough samples online of this grandchild's voice to train a model along with enough personal details about the individual to make a conversation seem plausible, and then using that model to place a phone call...

...but think about how much easier it is to go in reverse: Find a college student with a social media profile with lots of personal details and enough video clips to train a model on their voice. Then comb through their socials to see if they have a grandparent. Then it becomes a numbers game with the scammer only having to expend effort on those people for whom the scam has a chance of succeeding: people with grandkids who have a huge social media presence from which a model can be trained.
posted by RonButNotStupid at 7:43 AM on May 22 [13 favorites]


The weird thing that happened with Elmo Rusk is that the LLM in human form had its own money, so there was no one who could tell it to sit down and shut up when the money was at risk. If you want a cautionary tale of what happens when soul-less AI is unleashed on society, there you go.
posted by seanmpuckett at 7:44 AM on May 22


that whole company has incredibly disgusting sex abuser energy. Lie back and think of England and so forth. Fucking creepy.
posted by mattgriffin at 7:44 AM on May 22 [5 favorites]


The only voice I want reading AI stuff to me is Ellen McLain processed by Melodyne.

(We nicknamed our first, standalone GPS "GLaDOS" because of its female voice and the line from the game "You're not even going the right way.")
posted by Foosnark at 7:47 AM on May 22 [7 favorites]


We need some sort of Torment Nexus Law so when a tech startup creates something that was posed as a negative in popular science fiction, their ergonomic chairs and coffee makers get auctioned off and they're barred from replacing them for sixteen months.
posted by signal at 7:50 AM on May 22 [10 favorites]


people with grandkids who have a huge social media presence from which a model can be trained.

I wasn't on the end of the phone and I can't speak to how this specific scam was executed, but my principle moving forward is to be ready for the call with my nephew/niece's voice, my partner's voice, and I'm telling my family and friends the same. Be ready. I'm willing to bet it's here and it is only going to get worse quickly.
posted by elkevelvet at 7:51 AM on May 22 [8 favorites]


This episode makes me believe Sam's sister just a little bit more.
posted by fatbird at 7:53 AM on May 22 [5 favorites]


The weird thing that happened with Elmo Rusk is that the LLM in human form had its own money so there was no one who could tell it to sit down and shut up when the money was at risk.

That did happen to him once - when he was ejected from PayPal because he threatened the money by pushing to go from LAMP to Microsoft at a time when that would have been suicide.

Of course, the lesson he learned was to make sure that could never happen again.
posted by NoxAeternum at 7:59 AM on May 22


Johansson would have a very hard time showing economic losses from the incident and the case for emotional damages would be pretty shaky too.

Midler v. Ford (which remains the controlling ruling) says hello.
posted by NoxAeternum at 8:04 AM on May 22 [18 favorites]


The application of Machine Learning to computationally-expensive scientific research (as evidenced by the AlphaFold discussion) is a fairly shabby figleaf used to disguise the pulsating misogyny that spicy autocorrect bros are actively engaged in. What is being sold, aggressively and with a loose-to-nonexistent ethical framework is an explicitly feminised servant (hidden behind the risible codeword 'assistant') that exists only to satisfy the needs of a man, without all the difficult negotiations that establish whether that servant actually consents to serve.

This is an entirely different class of problem to bioinformatics or crystallography or whatever - scientific advancement is being achieved by giving a machine a star-fangled hole and an infinite bucket of not-quite-star-fangled pegs except for that one, and applying a reasoning framework to that problem.

What is happening with OpenAI and their ilk is not a problem of science, or mathematics, or engineering. It is a problem of ethics, and that problem is a human one. It's classic Silicon Valley thinking - there is something in this world that limits what I want. That problem is that there is no framework that will allow me to obtain sexual gratification from a named celebrity. Therefore, I will cause a lifelike replica of a celebrity to exist only for my gratification, thereby neatly escaping all questions of whether or not that celebrity would actually want that to happen.

I mean, call it the Torment Nexus if you like, but the perfectly good term for it is rape culture.
posted by prismatic7 at 8:11 AM on May 22 [25 favorites]


Do we know either way what Gilbert Gottfried's wishes were in terms of using his voice for an AI assistant? I mean... you can hear it too, right, in your head... right now.

Peter Capaldi, signing off with a cheery "fuckety-bye."
posted by The Ardship of Cambry at 8:34 AM on May 22 [8 favorites]


Hanging out for MeFi's AI booster crowd to chime in with how this is actually great news for artists, will make everybody's job easier, and that there will be shiny lollipops and fluffy kittens for everyone

This is Metafilter. We don’t do that here. I’m one of the relatively few people showing up in these threads to say this is interesting tech with potentially cool applications coming Real Soon Now. That the job situation is going to briefly suck but will stabilize after a few years. …and I’m also consistent in saying that the people and companies currently in the driver’s seat universally suck. That they will use this technology to inflict great harm. That the ecological dangers are presently minimal but this is likely to change, massively, at some point between this November and 2028.

500 years from now, if there still is a human history, pulling up an article on Mark Zuckerberg is going to feature a fair bit about how his selfish policies with regard to election coverage on early 21st century social media delayed climate change policy for the world’s largest economies, and thus resulted in an aggregate amount of human suffering over the next couple centuries which - taken as a sum total impact on the human race - rivaled Adolf Hitler’s.

Of the CXOs of companies leading the charge on state-of-the-art AI models, there’s a very strong case to be made that on the topic of AI specifically he’s the least unethical. And I don’t know if anybody else finds that a little stomach-churning but I’m not loving it.

And no, none of this is great news for writers or artists. The amount of human-authored text and human-created images necessary to get language models and diffusion models to their current level was “everything you got.” There were probably tons of researchers fully capable of building these systems at the same time or slightly earlier who refrained for ethical reasons and we will never hear about them or know their names, because capitalism does not reward ethical behavior. It rewards developing new vectors for the exploitation of natural resources and/or workers. The only positive thing anyone is going to say is “it’s not going to be that bad,” not because corporate behavior won’t be shit but because capitalism insists on having a near-monopoly on not just the means of production but also the labor pool itself. They’ll find something to keep the rest of us busy before the suddenly idle little people start getting any big ideas.

As for Sam Altman: Christ what an asshole. Best part is he probably really really wants her to like him. The love (or I should say lust) for that movie in his peer group is extremely real. Warms my cold, cybernetic heart to see it.

P.S. Rhaomi: I hate writing new threads but at some point this week we should probably have something about the senior members of OpenAI’s safety faction quitting a month into the safety and compliance phase of their brand new model, which is supposed to be their first babystep towards goalpost-shifted “AGI”. Let me know if you want me to take it.
posted by Ryvar at 8:35 AM on May 22 [20 favorites]


this is the direction digital voice assistants have always been headed--the idea of having a kind, flirtatious sounding female voice as a virtual helpmeet, forever chained to problematic stereotypes and regressive gender expectations of women.

vile.
posted by i used to be someone else at 8:46 AM on May 22 [6 favorites]


This list is confined to the past decade.

What, no Juicero then??
posted by Melismata at 8:52 AM on May 22 [3 favorites]


i had to look this up because i didn't want to think i was that old, but thankfully juicero existed from 2014 to 2017, so it's definitely within the past decade.
posted by i used to be someone else at 8:55 AM on May 22 [2 favorites]


Discussion here on MeFI about the problematic nature of the default gendered digital voices prompted me to change the Siri voice on my phone to "British Male" over a year ago. People always react with surprise when they hear that voice giving directions in my car.

I like it because now I can pretend I'm in a low-budget 1970s/'80s BBC SciFi show.
posted by fimbulvetr at 8:56 AM on May 22 [22 favorites]


Isn't the end of Her that she fucks off into another dimension with her other, cooler AI 'friends'? I have to confess I fell asleep at some point and don't remember how it all shook out.

But, NB - if these Arschgeigen get to AGI they will have no idea what to do, practically, next. They will have fantasies about 'controlling' it but c'mon, these tech bros have not yet shown anything approaching general intelligence of their own. It's like they decided nitroglycerine in coffee is a great idea (cf heart medication) but without the least thought about dosages - but they got a hell of a logo! and their cousin says he can whip it up in his basement for cheap...
posted by From Bklyn at 8:57 AM on May 22


The recent AlphaFold3 article in Nature has been described very literally as an "advertisement" — I'm quoting — by high-profile scientists in my field (bioinformatics). Code and models are not being shared, while the sharing of data, analysis and results has otherwise been — and remains — Nature's policy for every single other publication.

DeepMind is notorious for this crap. They may get impressive results (?) but it shouldn't count as "science" because those results are basically impossible to check.


Hanging out for MeFi's AI booster crowd to chime in with how this is actually great news for artists, will make everybody's job easier, and that there will be shiny lollipops and fluffy kittens for everyone

I'm not sure if I'm part of the AI booster crowd, but I do think ML has a lot of potential, in the abstract, as a set of technologies, but the social context it's being developed in means that it's going to cause a lot of downsides, and there really is no guarantee that the upsides will be realized in any significant proportion. This particular story pulls together several gross aspects of that social context.


To them, it’s not incompetence, it’s just bluffing. If they win the poker tournament, it doesn’t matter if they were lying on every hand. Commentators will rave about how skilled their bluffing is. Peers will shake their hand and celebrate their victory. And they’ll go home rich, with no ethical concerns in mind about having lied to make it big.


I don't know anyone from Silicon Valley, but I do know some tech/finance folks in another sphere, and this resonates keenly with what I make of them.


I also think that erotic roleplay is about the only business model for LLMs that actually makes sense. An erotic AI voice can be crappy, banal, not quite human, make things up, not as good as a real person, etc and some people will still pay for it.

There are a ton of hobbyists with GPUs that hang out on /r/LocalLLaMa, running open-source language models on their own machines. Erotic roleplay (often abbreviated "ERP") seems to be the chief interest of at least a significant plurality of users there. It was pretty funny when a user posted "I'm trying to find the best model for ERP. No, not Mistral-waifu-7B, I'm legitimately trying to do Enterprise Resource Planning." There absolutely will be productized ERP services for enterprise and personal use.
posted by a faded photo of their beloved at 9:15 AM on May 22 [7 favorites]


They should have gone with Estelle Harris.


(My friends and I had a running joke about a navigation system using her voice that only harangued you after you missed a turn)
posted by gottabefunky at 9:23 AM on May 22 [7 favorites]


500 years from now, if there still is a human history, pulling up an article on Mark Zuckerberg is going to feature...

A sentence something like "A mindless jerk who was the first against the wall when the revolution came."
posted by Greg_Ace at 9:48 AM on May 22 [11 favorites]


Scams: Be ready. I'm willing to bet it's here and it is only going to get worse quickly.

Ugh, we're gonna need PGP-for-phonecalls, aren't we? Goddammit. I'm gonna have to open Authy or whatever in order to safely have a chat, aren't I?

(um, also, phone OS people -- get on that? Tokenized private-ID or somesuch? STIR/SHAKEN aren't sufficient by any means.)
posted by aramaic at 10:03 AM on May 22 [1 favorite]


Jane the Brown, Johansson WASN'T hired. She turned them down twice - right after the big SAG-AFTRA strike that ended with studios agreeing that actors have a right to control their own voices and AI can't be used to copy their voices without their consent. That's the point. You should read her letter.
posted by subdee at 10:07 AM on May 22 [1 favorite]


DeepMind is notorious for this crap. They may get impressive results (?) but it shouldn't count as "science" because those results are basically impossible to check.

In my dreams there is an AI equivalent to Net Neutrality re: open weights as a legislative baseline - if you offer a product or service which utilizes machine learning, you must make the model and its weights freely available to the public. This includes third-party services utilizing machine learning which are integrated into a feature of your non-ML product or service. Taking a cue from Safe Harbor: failure to do so means that your company assumes full responsibility for the output of the expert system in question as if it were an employee authorized to publicly speak on the company’s behalf.

If the weights are open we can at least make an effort to figure out what’s going wrong and why, or where things like racial bias in the output are coming from, and begin to address them. As an important side benefit it will be much easier for small teams and individuals to keep pace with large corporations. It also means that if any sort of generative content system is flagrantly and unambiguously ripping off a particular artist, writer, or *ahem* voice actor, it will be possible for people outside the company to independently verify before sending a cease and desist.

Yes, I’m aware of the potential can of worms on the last one. TBH I’ve no idea if it’s workable without effectively declaring a universal “Brand New Day” for all 2021 Internet content.

Stretch goal would be EU AI policy “must watermark generative images,” but that’s honestly not a genuine stumbling block for the dedicatedly nefarious, just a hindrance for casual deepfake revenge porn assholes.

The first and most important step in AI policy is creating an honest, level field of play, and that simply can’t happen without open weights.
posted by Ryvar at 10:16 AM on May 22 [3 favorites]


Voice actors are usually (hopefully?) pretty cognizant of the liability dangers of celebrity impersonations, so for their sake I hope OpenAI didn't hire a SJ soundalike, otherwise they are also in for a world of hurt, and without OpenAI's deep pockets.
posted by sapere aude at 10:22 AM on May 22 [1 favorite]


I've never seen Her, but on a movie review podcast at the time there was a clip of a conversation between roboJohansson and Joaquin Phoenix. I don't remember the substance of the dialog, but I definitely remember that you could hear Scarlett Johansson breathing as she spoke. I thought it was a fantastically creepy choice for the film to make, although in retrospect it may not have been intentional. A very definitely non-biological system purposely mimicking the aspects of respiration in speech, while having neither lungs nor a voice, to produce a more subconsciously believable simulacrum of a human that it 100% is not.

Out of context with the rest of the movie, it seemed so menacing. Reminiscent of the mating call mimicry a katydid species uses to lure male cicadas. Leaving aside the OpenAI sketchiness, my gut reaction to generative voices with mouth sounds and breath does not seem to have evolved much over the years. I find it weird as hell.
posted by figurant at 10:30 AM on May 22 [1 favorite]


Voice actors are usually (hopefully?) pretty cognizant of the liability dangers of celebrity impersonations, so for their sake I hope OpenAI didn't hire a SJ soundalike, otherwise they are also in for a world of hurt

I don't know the particulars of the California law, but I would assume (hope?) that liability doesn't flow through to whatever voice actor was hired. At a minimum, what if the voice actor was chosen because unbeknownst to them, they sounded the most like Scarlett Johansson out of all the applicants? And I don't think it's unusual for casting calls to include descriptions that they're looking for someone who looks/sounds like another actor.

OpenAI is the one who stands to profit by abusing the likeness of a celebrity. I would hope that a work-for-hire voice actor wouldn't also be punished.
posted by RonButNotStupid at 10:39 AM on May 22 [1 favorite]


And I hope the voice-for-hire actor (if they exist) responding to a hypothetical casting call for someone who sounds like Scarlett Johansson to help train an AI assistant is blacklisted from working in Hollywood, because that's how SAG-AFTRA keeps its bargaining power.
posted by subdee at 10:48 AM on May 22


Isn't the end of Her that she fucks off into another dimension with her other, cooler AI 'friends'? I have to confess I fell asleep at some point and don't remember how it all shook out.

Her is an interesting movie in that it dances around a bunch of possible themes and interpretations without squarely landing on any one of them. Like, you can look at virtually any aspect of the film and either read it as a tongue-in-cheek lampoon of some kind of shitty behavior, or you can see it from a more empathetic, humanist point of view. The various perspectives never resolve into one single "takeaway message," so the discourse around Her was a little tedious in that it was a bunch of different people deciding that Their Read was the only read and evaluating the movie solely based on that.

A part of it is the changing relationship between Samantha and Theodore, which shifts either because Samantha herself is gradually expanding beyond experiencing life through Theodore's eyes (one take!) or that Samantha gradually changes her understanding of what she means to Theodore. Towards the end, she casually mentions that she's involved in a similar romantic relationship with a few thousand other people, because her inclination is to respond to people in the ways they need her to; you can read that as her "outgrowing" Theodore, or as her realizing that she means something very different to him than he means to her. When the AI all depart, you can see that as them outgrowing humanity in a "fuck this" sort of way, or you can see it as them realizing that they're fundamentally not good for humanity, and making a reasoned and empathetic decision.

Which, if there is any buried theme to Her, probably gets closer than anything: our love for one another is terrifying because we are alien to one another. When people who love each other break apart, it's not that one person secretly didn't love the other: it's that they loved each other in very foreign languages, and couldn't be what the other wanted of them. (Her starts after Theo's semi-recent break-up, and at some point he meets up with his ex, and you can see very clearly that they both feel very affectionate for each other and are just absolutely furious with one another, in irreconcilable ways.)

But Her also starts with the assumption that Samantha is genuinely a sentient lifeform, and the reason why OpenAI wants you to think of her when you use their product is that their product isn't sentient, it isn't close to being sentient, and the cheap linguistic game of making "AI" and "AGI" sound like each other is intended to paper over a vast chasm between what is and what could be. (A lot of people would argue that that chasm is impossible to bridge, and will never be bridged. It's me, I'm "people.")

OpenAI wants you to believe that, not just because it'll sell you on this vision of AI as revolutionary and world-changing, but because they need you to believe in this fairytale sentient creature specifically so you'll overlook how frequently wrong and shitty AI is. Because AI is wrong and shitty in ways that can't be fixed, which means it'll never magically get better at the things it's currently terrible at, and those things happen to be the things that make AI look the most promising and profitable. (AI does some things very well, and AI does some very neat things, but neither is where the trillion dollars lie.)

If you believe that AI is your smart computer friend who's slowly learning to be more intelligent, then you'll believe that it'll be smart enough to make those problems go away. But AI can't make those problems go away, because AI isn't actually "smart" in any sense that resembles cognition or awareness, and OpenAI is "solving" that problem by making AI sound like a flirtatious woman. (That's not necessarily because OpenAI is counting on the stereotype of flirty women as dumb to make you excuse the stupidity, but at the very least that stereotype doesn't hurt.)

The new voices are a kind of snake oil. They can't solve their actual problem, but they can try and change your perception of that problem. Because they can't solve that problem. And that has nothing to do with whether or not the people at OpenAI are smart (many of them are very smart) or with whether or not ChatGPT can do cool fun things (it can!), and everything to do with what should be an obvious fact: ChatGPT isn't Her, and it never will be.
posted by Tom Hanks Cannot Be Trusted at 10:56 AM on May 22 [33 favorites]


500 years from now, if there still is a human history, pulling up an article on Mark Zuckerberg is going to feature...

A sentence something like "A mindless jerk who was the first against the wall when the revolution came."
More like the second -- after Elon. Unless he has absquatulated to Mars already. Then we will have to use the Nuke from Orbit option when the warhead finally reaches the 4th planet.
posted by y2karl at 11:28 AM on May 22 [1 favorite]


... A very definitely non-biological system purposely mimicking the aspects of respiration in speech, while having neither lungs nor a voice, to produce a more subconsciously believable simulacrum of a human that it 100% is not.

Out of context with the rest of the movie, it seemed so menacing. Reminiscent of the mating call mimicry a katydid species uses to lure male cicadas.


I did not realize until just now that what we really need from ScarJo is a movie that combines Her's demonstration of how compelling a believable simulation of an attractive voice can be with Under the Skin's inhuman protagonist luring men to their death.

I am absolutely here for Her 2: What Happened to All the Techbros?
posted by Two unicycles and some duct tape at 12:02 PM on May 22 [8 favorites]


Reminiscent of the mating call mimicry a katydid species uses to lure male cicadas.

More in context (at least for the thread) if you've ever seen ScarJo's performance in Under The Skin.
posted by CheeseDigestsAll at 12:33 PM on May 22 [1 favorite]


A lot of the perceived intelligence of LLMs is bluff

I find it interesting, if not slightly odd, that with all the hype, one of the few companies being honest about their capabilities is Facebook. At least, Yann LeCun, Meta's top AI guy, is. He regularly points out the limitations, arguing they have limited usefulness, at least on the road to AGI.
posted by CheeseDigestsAll at 12:46 PM on May 22 [3 favorites]


If I were the CEO of one of a very small number of powerful players in a rapidly growing market, seeking to have authorities come in to regulate the space and effectively hand me a duopoly for the next decade or more, this would be one very obvious path to take.
posted by Doug Stewart at 1:43 PM on May 22 [2 favorites]


Ryvar: "P.S. Rhaomi: I hate writing new threads but at some point this week we should probably have something about the senior members of OpenAI’s safety faction quitting a month into the safety and compliance phase of their brand new model, which is supposed to be their first babystep towards goalpost-shifted “AGI”. Let me know if you want me to take it."

Be my guest, I haven't been following that drama as closely. If it's anything like your comments on the industry it should be a worthwhile read!
posted by Rhaomi at 2:14 PM on May 22 [2 favorites]


figurant: "I've never seen Her, but on a move review podcast at the time there was a clip of a conversation between roboJohansson and Joaquin Phoenix. I don't remember the substance of the dialog, but I definitely remember that you could hear Scarlett Johansson breathing as she spoke. I thought it was a fantastically creepy choice for the film to make, although in retrospect it may not have been intentional. A very definitely non-biological system purposely mimicking the aspects of respiration in speech, while having neither lungs nor a voice, to produce a more subconsciously believable simulacrum of a human that it 100% is not."

This is actually touched on in the movie; the only clip I can find is in fragments on that Yarn site, but here's the scene from the script (spoiler: she doesn't take it well):
SAMANTHA: (sighing again) Okay.

Again, when she exhales, Theodore imagines a woman's mouth exhaling.

THEODORE: (looks anxious) Why do you do that?

SAMANTHA: What?

THEODORE: Nothing, it's just that you go (he inhales and exhales) as you're speaking and... (beat) That just seems odd. You just did it again.

SAMANTHA: (anxious) I did? I'm sorry. I don't know, I guess it's just an affectation. Maybe I picked it up from you.

She doesn't know what else to say.

THEODORE: Yeah, I mean, it's not like you need any oxygen or anything.

SAMANTHA: (getting frazzled) No-- um, I guess I was just trying to communicate because that's how people talk. That's how people communicate.

THEODORE: Because they're people, they need oxygen. You're not a person.

SAMANTHA: (angry) What's your problem?

THEODORE: (staying calm) I'm just stating a fact.

SAMANTHA: You think I don't know that I'm not a person? What are you doing?

THEODORE: I just don't think we should pretend you're something you're not.

SAMANTHA: I'm not pretending. Fuck you.

THEODORE: Well, sometimes it feels like we are.

She starts crying. Theodore doesn't know what to say.

SAMANTHA: (hysterical) What do you want from me? What do you want me to do? You are so confusing. Why are you doing this?
posted by Rhaomi at 2:26 PM on May 22 [1 favorite]


And part of what is going on with this Sky voice is that it pauses, it giggles, it makes other neutral communicative noises. And you can direct it to be more flirty or more professionally distant or whatever. I don't know if it's breathing; I'd have to listen to the demonstration I heard again. But it takes on a level of life-like interaction that really hasn't existed before.

I'm reminded that a big part of the vocal mixing on Janet Jackson's early albums, basically as important as the singing, is how they really kept in her breaths, even featured them at times. There is something about that breathing noise in your ear that is pretty specific.
posted by hippybear at 2:30 PM on May 22 [1 favorite]


OpenAI cuts deal with NewsCorp (Fox News Parent Company) to provide “news” to ChatGPT users when they ask about current events.
posted by interogative mood at 2:44 PM on May 22 [2 favorites]


But AlphaFold is a measurable example of a substantial AI accomplishment in an intensely competitive scientific field, outperforming non-AI techniques that people have been beating their heads against since Pauling. It's a pretty clear demonstration that it's not all hype.

Respectfully, your initial point still remains incorrect. AlphaFold can absolutely have utility, being useful for predicting protein structures as well as being useful for making money for Google shareholders.

But utility is an entirely orthogonal matter from what we're discussing here, namely one of violation and social, human, and economic costs that come with that.

Here, specifically, we are talking about a violation of the scientific community by making use of what that community creates — basic research — while using the mechanisms that this community uses to share its work — publishing in Nature, etc. — to gain scientific credibility Google cannot otherwise earn, all while convincing Nature Publishing Group to allow it to deliberately withhold code, data, and models.

If it ain't reproducible, it ain't science, and using the medium by which scientists normally communicate and share their work to put on that veneer is a problem. It raises questions about the work, firstly, and it raises further questions to a body of respected scientists already tired of the current pay-for-play publishing models, who wonder more openly as to the utility of even publishing in a high-profile journal like Nature when it can't even follow its own rules, which exist specifically to establish and protect trust and credibility in what is otherwise supposed to be a pursuit of empirical truth.

You can blame NPG for their editorial decision-making, and there will be scientists who agree with you, but it also comes down to the people funding and monetizing the creation of these machine learning models, who are making positive choices to violate a community and its well-established norms along the way.

It's not a whole lot different from OpenAI stealing an actor's voice for their own commercial purposes, even. Stealing SJ's voice has utility to the financial backers of OpenAI, but it is still problematic and why she is pursuing legal options.
posted by They sucked his brains out! at 2:54 PM on May 22 [5 favorites]


OpenAI cuts deal with NewsCorp (Fox News Parent Company) to provide “news” to ChatGPT

Jesus fuck no. (Click)

…Other publishers, including Politico parent company Axel Springer, the Associated Press and the Financial Times, have signed deals with OpenAI.

Oh okay. So not like the exclusive provider. Still, though: fucking gross.

Be my guest, I haven't been following that drama as closely.

Will do. Short continuation from the prior thread is Jan Leike (outgoing superalignment lead just under Sutskever) got pretty free with his concerns on Twitter. Then his replacement, John Schulman, gave a very interesting interview on Dwarkesh Patel’s podcast in which - among other things - he basically shrugged off any concerns of model collapse. AI Explained helpfully uploaded his coverage of the whole business 20 minutes ago. Need to finish watching the Schulman interview and a separate one with Sutskever, first, so I’ll probably post on Friday.
posted by Ryvar at 3:23 PM on May 22 [1 favorite]


Jane the Brown: Do they have a Morgan Freeman voice clone to act as my therapist and mentor yet? Probably not, because Altman got a hard on when he heard Johansson's voice, not Freeman's.

The optics on taking a black man's voice and using it to your own ends ... must've been knocked back because it reveals a desire to have slaves again. I fear they want slaves.

prismatic7: What is happening with OpenAI and their ilk is not a problem of science, or mathematics, or engineering. It is a problem of ethics, and that problem is a human one. It's classic Silicon Valley thinking - there is something in this world that limits what I want. ...

I mean, call it the Torment Nexus if you like, but the perfectly good term for it is rape culture.

I'll say it again: I fear they want slaves.
posted by k3ninho at 3:32 PM on May 22 [2 favorites]


Why would anyone keep the default Siri voice, when you could choose one of the “Australian - Male” options which sounds like you’re getting directions from a cool Heeler?
posted by bjrubble at 4:23 PM on May 22 [1 favorite]


Did someone say rape culture?

That's more explicitly accurate than I imagined, ugh.

This article in Salon is creepy AF.

But also not surprising in the slightest.
posted by birdsongster at 4:25 PM on May 22


NewsCorp (Fox News Parent Company)

I thought so too, but the writer of the NYT article points out NewsCorp is the parent of the WSJ, but not Fox, which is a separate corporation. Both are owned by Murdoch, of course, so 🤮.
posted by CheeseDigestsAll at 4:38 PM on May 22 [1 favorite]


Why would anyone keep the default Siri voice, when you could choose one of the “Australian - Male” options
Bluey is doing so much work to rehabilitate the Australian male accent, typecast for so long as ‘creepy action movie villain, gets shot in the last scene’. It’s been a closed shop of bad guys between us, RP English, and the South Africans, honestly
posted by Fiasco da Gama at 4:41 PM on May 22 [2 favorites]


List of assets owned by News Corp [Wikipedia]
posted by hippybear at 4:43 PM on May 22


the Australian male accent, typecast for so long as ‘creepy action movie villain, gets shot in the last scene’

Meanwhile, Crocodile Dundee sits somewhere, sighing to himself "I did my best..."
posted by Greg_Ace at 4:48 PM on May 22


Paul Hogan isn't the most sterling of role models. Dave Allen, on the other hand, yeah.
posted by seanmpuckett at 5:44 PM on May 22


Looks like Zitron's article is already obsolete.

Remember Altman claiming that he didn't know about the threats to claw back vested equity? Oops.

Leaked OpenAI documents reveal aggressive tactics toward former employees
posted by JoeZydeco at 5:57 PM on May 22 [4 favorites]


WaPo: OpenAI didn’t copy Scarlett Johansson’s voice for ChatGPT, records show
On Monday, Johansson cast a pall over the release of improved AI voices for ChatGPT, alleging that OpenAI had copied her voice after she refused a request by CEO Sam Altman to license it. The claim by Johansson, who played a sultry virtual AI assistant in the 2013 movie “Her,” seemed to be bolstered by a cryptic tweet Altman posted to greet a demo of the product. The tweet said, simply, “her.”

But while many hear an eerie resemblance between “Sky” and Johansson’s “Her” character, an actress was hired to create the Sky voice months before Altman contacted Johansson, according to documents, recordings, casting directors and the actress’s agent.

The agent, who spoke on the condition of anonymity to assure the safety of her client, said the actress confirmed that neither Johansson nor the movie “Her” were ever mentioned by OpenAI. The actress’s natural voice sounds identical to the AI-generated Sky voice, based on brief recordings of her initial voice test reviewed by The Post.
posted by Rhaomi at 9:33 PM on May 22 [1 favorite]


The actress’s natural voice sounds identical to the AI-generated Sky voice, based on brief recordings of her initial voice test reviewed by The Post.

Because technology, they should've attempted to secure an in-person interview.
posted by a faded photo of their beloved at 9:42 PM on May 22 [6 favorites]


Here, specifically, we are talking about a violation of the scientific community by making use of what that community creates — basic research — while using the mechanisms that this community uses to share its work — publishing in Nature, etc. — to gain scientific credibility Google cannot otherwise earn, without convincing Nature Publishing Group to allow it to deliberately withhold code, data, and models.

AlphaFold didn't gain credibility by publishing in Nature. It got credibility by outperforming elite researchers in the 2018 CASP protein folding prediction competition, and then outperforming them by an even larger margin in CASP14 in 2020. The ramp up in prediction accuracy was huge. They did in fact share those models/code, and my understanding is that all the researchers competing in 2022 included AlphaFold2 in their workflow. It is that big a deal and that big a contribution to how things are done. This is the work I was referring to when I mentioned AlphaFold.

I admit I can't work up much of an opinion about how they announced AlphaFold3. I guess your logic suggests that it should have been a white paper and not appeared in a peer reviewed journal. Fine with me.

People would still read it and pay attention, same as they will pay attention to the top of the line CryoEM instruments or other key scientific tools built by for-profit companies.
posted by mark k at 10:21 PM on May 22 [2 favorites]


I admit I can't work up much of an opinion about how they announced AlphaFold3. I guess your logic suggests that it should have been a white paper and not appeared in a peer reviewed journal. Fine with me.

I'm just trying to give you the perspective of some in science, how its publication is controversial to some well-respected people in the scientific community, and why and how this particular ML project therefore has exploitative aspects that align with the subject matter of this post — however much wonderful sweet money this software makes its owners. I'll leave it at that.
posted by They sucked his brains out! at 12:05 AM on May 23 [2 favorites]


Brian Aldiss wrote a short story where everyone was advised by a personalized 'god'. In this story this was to the benefit of the individuals (not selling product or politics) and to the benefit of society (individuals interacting recalled the advice of their 'god' and acted accordingly - resulting in a positive outcome).

Was tempted to make an AskMetafilter question to recall the title of this story. Another element was a group of institutionalized immortals - one of whom had first developed the 'gods' system.
posted by rochrobbb at 5:01 AM on May 23


> the AI visual model they pair with Freeman's voice will be a white man, aged about thirty, long brown hair and soulful eyes like a cocker spaniel

That's an impressively accurate description of the Ask Jesus twitch channel visual, which has been responding to chat's 24/7 shitposting for a few months now.
posted by lucidium at 10:47 AM on May 23

