Starliners, CrowdStrike and AI slop
January 8, 2025 8:47 AM

The 8 worst technology failures of 2024, according to MIT Technology Review.
posted by gottabefunky (32 comments total) 7 users marked this as a favorite
 
The exploding pagers were not a "technology failure." They were an example of technology working spectacularly well, in that the pagers were designed to injure, maim, and frighten their users, and spread terror through the community.

The output of a system is the purpose of the system.
posted by JohnFromGR at 8:55 AM on January 8 [27 favorites]


Starliner may be terrible, but at least it isn't fascist.
posted by RonButNotStupid at 9:01 AM on January 8 [4 favorites]


Bowery may have failed at "vertical farming," but there are at least two organizations in Ottawa that have quietly been doing just that for a decade now. To be fair, I know of a few failures as well. Doing pesticide- and herbicide-free greenhouse farming is technically hard. But it would be wrong to conclude that it's a failure and impossible to do. What I buy from the grocery store says otherwise.

Things like a supermarket growing its own crops on its roof may even become more common.
posted by bonehead at 9:06 AM on January 8 [10 favorites]


Google’s Gemini AI image feature, launched last February, had been tuned to zealously showcase diversity, damn the history books.

Et tu, MIT Technology Review?
posted by Lemkin at 9:08 AM on January 8 [13 favorites]


Yeah, if the thing you find objectionable about AI is "wokeness," I'm going to go ahead and assume you're an asshole.
posted by Sing Or Swim at 9:16 AM on January 8 [59 favorites]


IMO things like Shrimp Jesus are one of the few things AI is actually good at - making truly bizarre imagery that is photorealistic. And with NVIDIA's new personal supercomputer that they just released, people can do this at home with far less energy wasted in the process.
posted by grumpybear69 at 9:20 AM on January 8 [8 favorites]


Boeing's problems with Starliner are simply part of their larger issues that relate to the financialization of their company. Jack Welch did it to GE first, then his disciples took over Boeing a couple of decades ago. They have hollowed out the company in the intervening time, prioritizing profit over quality products. It's a credit to the original Boeing that they've been able to coast on their engineering equity for this length of time, but the gravy train started to derail about 5-8 years ago. The 737MAX issues come from the same place.
posted by bonehead at 9:37 AM on January 8 [14 favorites]


For me using the word "woke" at all now in any political context is pretty much an indication that the author is on the ass to dark triad spectrum.
posted by bonehead at 9:41 AM on January 8 [38 favorites]


The actual problem with coding diversity into an AI isn’t the wokeness itself, but the fact that the AI is racist/sexist/etc by default because the training data is that way. Doing some crappy hack to make the results more diverse was never going to be viable, but it does showcase the underlying problem and show that it is either unsolvable or very difficult to solve. It was a blunder, but the article never really gets to the explanation of why it is significant.
posted by snofoam at 9:48 AM on January 8 [15 favorites]


MetaFilter: I'm going to go ahead and assume you're an asshole.
posted by Lemkin at 9:50 AM on January 8 [20 favorites]


The Gemini fiasco was legit a problem: I’m not sure what the correct number of white women is when a user asks for “8 images of Swedish women”, and I am very glad I don’t have to even try to answer that because I really shouldn’t be the person answering it, but I am fairly comfortable saying the answer is probably neither 0 nor 8.

Using the word “Woke” here is just wildly irresponsible journalism. You don’t have to play into the chud narrative at all to recognize that a user asking for several images of Swedish women is not looking for exclusively non-white depictions, and that 0 is still not the answer even if your approach is giving the users what they should want if they were better people than they actually are.
posted by Ryvar at 9:51 AM on January 8 [6 favorites]


but the fact that the AI is racist/sexist/etc by default because the training data is that way

It is, because society is wildly racist: not just consciously in the usual cross-burning sense, or unconsciously in the quiet systemic bank-loan-approval sense, but even a layer below that. If you spider the entire Internet and select from the set of all non-CP/CSA images greater than 5KB in size with more than 5 words of descriptive text in an associated alt tag (this was the process for the initial 400-million-image training set behind both OpenAI’s CLIP and, via LAION, Stable Diffusion), then the results for the string “beautiful woman” or “beautiful women” are going to vastly overrepresent white women. Pure selection bias of an inherently racist society. And researchers do correct for this to a significant extent, and there are metrics for that correction (the Bias Benchmark for QA, literally the BBQ score), but it immediately gets into the sticky territory of how much AI should be correcting for society’s racism: what is the ideal number of non-white women in the response to a prompt asking for “Swedish” women? Or, even worse, for “beautiful” women? I know it definitely should be less racist than the training set, but… how much less racist, exactly, before it just palpably breaks with reality for the majority of users and causes them to abandon the model for one that more closely matches their expectations? What’s the cutoff where the average user gets what a better version of themselves would want or expect, and the person they actually are will still accept?

Also, diversity amongst AI researchers in general is not great, never mind sporadic bad-faith management like Elon actively striving to make the problem worse. Also also, we’re talking neural structures here, so everything you do to solve one problem has some amount of impact on a thousand others, and you’re basically trapped in the Australian-wildlife-management hellscape: trying to fix a completely broken ecological diversity and restore a careful balance when half of what made the original balance possible is now gone.

It’s honestly one of the things that makes me glad I went into game development instead, where we have no issues with representation at all. {/sarcasm}
posted by Ryvar at 10:13 AM on January 8 [17 favorites]
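
(A minimal sketch of the kind of size and alt-text filter described in the comment above. Purely illustrative: the field names are hypothetical, and the real LAION pipeline also applied CLIP image-text similarity filtering, which this sketch omits.)

```python
# Illustrative only: a crude stand-in for the web-scale image/alt-text
# filtering described above (size and alt-text-length cutoffs). Field
# names are hypothetical; the actual LAION pipeline also relied on CLIP
# image-text similarity scores, which this sketch leaves out.

MIN_IMAGE_BYTES = 5 * 1024   # "greater than 5KB in size"
MIN_ALT_WORDS = 5            # "greater than 5 words of descriptive text"

def keep_candidate(record: dict) -> bool:
    """Return True if a crawled (image, alt text) pair passes the crude filters."""
    alt_text = (record.get("alt") or "").strip()
    image_bytes = record.get("image_bytes", 0)
    return image_bytes > MIN_IMAGE_BYTES and len(alt_text.split()) > MIN_ALT_WORDS

# The selection bias lives in what survives the filter: whatever captions
# the web happens to pair with "beautiful woman" dominate the training
# distribution, however skewed that pairing is.
crawled = [
    {"alt": "beautiful woman smiling at the beach on holiday", "image_bytes": 80_000},
    {"alt": "photo", "image_bytes": 120_000},                                    # dropped: alt too short
    {"alt": "a beautiful woman in a red dress outdoors", "image_bytes": 3_000},  # dropped: image too small
]
kept = [r for r in crawled if keep_candidate(r)]
print(len(kept))  # -> 1
```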


There is no correct number. Language is just too ambiguous, so every prompt is always going to have some degree of wishing upon a monkey's paw as determined by what was in the training set. That's the problem with AI.
posted by RonButNotStupid at 10:14 AM on January 8 [1 favorite]


That's the problem with AI.

What I’m saying is that I don’t think it’s an AI problem. Like, if you had a morally ideal image generation system that wasn’t based on work taken without consent, where proceeds from its use were distributed to fund the arts and compensate the (consenting) providers of the training material… what would the morally ideal output look like?

And the fact that there may not be a correct answer is a much more fundamental issue than the subject of AI can encompass.
posted by Ryvar at 10:19 AM on January 8 [6 favorites]


And the fact that there may not be a correct answer is a much more fundamental issue than the subject of AI can encompass.

It's still a problem with AI if it gives authoritative answers to questions which have no correct answer.

"So I gave the guy directions, even though I didn't know the way. Because that's the kind of guy I am this week."

--Homer Simpson
posted by RonButNotStupid at 10:29 AM on January 8 [5 favorites]


I guess this list is about flashier technology fails overall, but there were also a fair number of horribly thought-out and horribly launched “AI” tools that did nothing and never will. The Humane AI Pin was one I read some fun articles about earlier in the year.

And yeah, the pagers didn’t fail. They were assassination and terrorism devices and quite successful.
posted by OnTheLastCastle at 10:47 AM on January 8


I think the Humane Pin launched in late 2023 and failed instantly, so presumably on last year’s list.
posted by snofoam at 10:58 AM on January 8 [1 favorite]


As noted above, the moral failure of the Israeli intelligence service doesn’t qualify as a tech failure to me. It is a very odd inclusion.
posted by snofoam at 11:03 AM on January 8 [7 favorites]


AI is racist/sexist/etc by default because the training data is that way.

Yes, and that's in part because the data is unrepresentative. See, e.g., What AI thinks a beautiful woman looks like. But even if the data were perfect, AI would be trained on data that shows that fighter pilots are overwhelmingly male and that nursery school teachers are overwhelmingly female, because they are. If I ask for a photo of a nursery school teacher, should AI 1) always show me a woman, because that's most representative; 2) show me a woman 99 percent of the time, because that's the share of the group that's female, or 3) show a mix of women and men, because that's what we should aspire to?
posted by Mr.Know-it-some at 11:14 AM on January 8 [9 favorites]


I could probably count the good uses of AI on one hideously misshapen, 8-fingered hand! But seriously, it is kind of amazing that AI can have a totally stereotyped view of what a beautiful woman looks like while not managing to pick up our stereotypes about how many fingers a hand has.
posted by snofoam at 11:14 AM on January 8 [8 favorites]


Ungated.
posted by doctornemo at 11:14 AM on January 8


“The problem is the voluntary carbon market is voluntary,”
posted by doctornemo at 11:16 AM on January 8 [2 favorites]


Google’s Gemini AI image feature, launched last February, had been tuned to zealously showcase diversity, damn the history books.

Last Sunday I searched to see how far Premier League footballers run in a game. It said 100km. Fine, you're a busted-ass AI that thinks footballers are faster than marathon runners while also playing a ball game. But then I scrolled down and it gave authoritative-sounding numbers for other pro leagues in Europe, all around 100km as well. Then it listed women's pro soccer at 10km (which is a realistic number). So the AI was both wrong and hugely sexist, in that its error wildly inflated male performance only.

I'll take woke over broke and sexist any day of the week.
posted by srboisvert at 11:23 AM on January 8 [3 favorites]


what would the morally ideal output look like?

Makes me think of that RBG quote: "People ask me sometimes... 'When will there be enough women on the court?' And my answer is, 'When there are nine.' People are shocked, but there'd been nine men, and nobody's ever raised a question about that."
posted by solotoro at 11:25 AM on January 8 [9 favorites]


Well that was 5 minutes of my life I'm not going to get back.

Is Tech Review going the way of Nat Geo and Sci Am?
posted by Aardvark Cheeselog at 11:33 AM on January 8 [9 favorites]


A coup for Israel’s spies. But was it a war crime?
posted by maggiemaggie at 1:42 PM on January 8


coup for Israel’s spies. But was it a war crime?

What I know is that it shattered my faith. I saw it burn that day. Wrote a song to try to process. That should not be in this mediocre review. I really hope that’s not the lighthearted takeaway for most people from what happened.
posted by Flight Hardware, do not touch at 2:24 PM on January 8 [2 favorites]


I think the world changed that day, as surely as the world changed on Sept. 11th 2001
posted by ginger.beef at 2:26 PM on January 8 [2 favorites]


I don't mind moral failure being considered a kind of tech failure. There's lots of tech that does what it is supposed to, but really ought to not be used in that way.
posted by surlyben at 4:46 PM on January 8


Just a note that the RISKS Forum and Digest still exist.
posted by snuffleupagus at 5:52 PM on January 8 [1 favorite]


I participated in the voluntary carbon market this year!
I had to create a Christmas worship service out of whole cloth for about 40 children at an interreligious congregation that is both very traditional and occasionally quite silly.

I wrote a pageant featuring the members of the Council of Yule, each of them appearing in turn to offer a story, a gift, and a blessing. I ended up having to go with characters based on the actors and costumes I had at hand (church folk, parents, and heaps of nonsense I retrieved from the century-old basement.)

We ended up having the Council of Yule feature La Befana (chairperson), Belsnickle, The Snow Queen (Hans Christian Andersen version), and Saint Nicholas of Myra. I forgot to cast myself so at the last minute I dressed up as Mr. Jingle Ringalingading, the Christmas Accountant and appointed myself secretary.

I also forgot to plan on a gift for the kiddos so at the last minute I threw a few hundred bucks around on this website and distributed carbon offset certificates to the children in celebration of the end of the fiscal year.
I like to think a few of them ended up on the refrigerator door.
posted by Baby_Balrog at 7:32 PM on January 8


My personal experience, based on the past 20 (!) years of internet use, has been that current AI tech is merely a reflector of the dominant Western culture, which has been handily and effectively distributed through the internet (a technology that simply would not exist without the intervention of the motherfucking U.S. Military)…which in turn is merely an amplified repetition of this very simple and age-old axiom: “fuck you, I got mine”.

Honestly, what else could it be? Modern “AI” (we laugh, because it’s clearly not artificial** or intelligent for Christ’s sake).

**Jesus Fucking Christ, it cannot be artificial because it is a wholesale plagiarism machine, stop denying this

All these think pieces are gleefully skipping over the most fundamental point: a bunch of Harvard Business Assholes bankrolled a plagiarism laundering system and now they want you to think it’s the goddamn Singularity when it very obviously isn’t even close. It’s just theft. Stop pretending it’s not theft.
posted by Doleful Creature at 9:38 PM on January 8 [2 favorites]



