Moderate apocalypses
January 23, 2025 12:51 AM   Subscribe

Better without AI explores moderate apocalypses that could result from current and near-future AI technology. These are relatively overlooked risks: not extreme sci-fi extinction scenarios, nor the media’s obsession with “ChatGPT said something naughty” trivia. Rather: realistically likely disasters, up to the scale of our history’s worst wars and oppressions. Better without AI suggests seven types of actions you, and all of us, can take to guard against such catastrophes—and to steer us toward a future we would like.

Better without AI is available as a paperback, on Kindle, or for free in full here on the web. You can start reading now, read the blurb first, or check out the table of contents.
posted by chavenet (21 comments total) 24 users marked this as a favorite
 
Don't worry, recent AI investor SoftBank, who supposedly contributed to a $500 billion investment, doesn't have more than $10 billion to spend. This comforting news comes from (checks) unstable Nazi-saluter Elon Musk. Yep. World's gone mad.
posted by BiggerJ at 12:57 AM on January 23 [2 favorites]


Thanks for the post and links. Refreshing to see some thoughtful measured takes. Not unrelated: the peerless Bruce Schneier’s AI Mistakes Are Very Different from Human Mistakes
posted by jerome powell buys his sweatbands in bulk only at 1:47 AM on January 23 [9 favorites]


"Fluent new text generators, such as ChatGPT, have suddenly shown that powerful AI is here now." [emphasis theirs]

Even this claim— that AI is powerful— needs to be qualified.

It's an impressive demo. Especially if you don't know how it works. But it's also less powerful than they claim (Twitter thread: "After some fine-tuning, today's open models are no better than those of 2022/2023."). Oh, and we still have very little idea of how much it will cost (besides: "a lot").

The book mentions AGI but it's not the full story:
OpenAI launched as a nonprofit positioned to fulfill a business objective, was from the start rich with profit-making potential, and deployed marketing terms to create a narrative about its altruistic pursuit of building a “safe” “AGI.” It is immaterial whether or not the participants in this venture believe in the risks of AI—many clearly do—but these risks serve marketing purposes regardless. If an AI product may be all-powerful, it is all the more alluring to executives and managers who hope to harness it to increase productivity and save labor costs. From the start, OpenAI has positioned itself to be the only reliable steward of this dangerous AGI-adjacent technology—and so we should trust only them when the time comes to sell it.

Tracing the usage of the term “AGI” in OpenAI’s marketing materials, patterns emerge. The term is most often deployed at crucial junctures in the company’s fundraising history, or when it serves the company to remind the media of the stakes of its mission. OpenAI first made AGI a focus of official company business in 2018, when it released its charter, just after it announced Elon Musk’s departure; and again as it is negotiating investment from Microsoft and preparing to restructure as a for-profit company. [emphasis mine]
AI Generated Business: The Rise Of AGI And The Rush To Find A Working Business Model
posted by ftrtts at 4:06 AM on January 23 [2 favorites]


I didn't find this linked in the page, but any discussion of bad AI outcomes should include this story:
Manna by Marshall Brain
posted by KaizenSoze at 4:16 AM on January 23 [2 favorites]


(slowly) working my way through the content, but am I right in understanding that the fuel for this apocalypse engine is data, collected for advertising purposes, and therefore the root evil here is... capitalism?
posted by Molesome at 5:01 AM on January 23 [5 favorites]


am I right in understanding that the fuel for this apocalypse engine is data, collected for advertising purposes, and therefore the root evil here is... capitalism?

Yes, but it's also literal fuel. For example, OpenAI's corporate backer, Microsoft, is reopening Three Mile Island to power its data centres.
posted by ftrtts at 6:08 AM on January 23 [2 favorites]


A just machine to make big decisions
Programmed by fellows with compassion and vision
We'll be clean when their work is done
We'll be eternally free, yes, and eternally young

- Donald Fagen, way ahead of his time
posted by SoberHighland at 7:15 AM on January 23 [7 favorites]


First timer: I admit I used "AI" (it is neither artificial nor intelligent) the other day. First time ever.

I am responsible for doing social media at a social services nonprofit. I needed to make a post for MLK Day, and I was stressed and under a deadline and having writer's block. I created an image just fine, found a quote of Dr. King's that aligns with our mission. But when it came to writing a caption about Dr. King and the Day and how this all ties in with our values... I found myself typing out clichés and word salad.

I used Google's "AI" writing prompt and ended up with a general outline of what eventually became the caption. I took what was generated (stolen from thousands of other MLK themed comments) and ended up rewriting it at least by 60%, moving things around and changing parts to better reflect our organization.

I could probably have just web-searched for "Captions about MLK Day" and had similar success. What I got was an outline with a few key words and a phrase that I incorporated into the final caption. The short blurb the "AI" spat out wasn't usable as written: it was awkward and repeated the same concept in different terms. It was nothing more than a spark, a starting point that I took and ran with.

Had I just copy/pasted the result? I would have looked like a bad writer, and probably would have gotten a "What the hell were you thinking with that awkward caption" comment from my bosses - deservedly. I can't even imagine the garbage these things would spit out for an entire presentation or a term paper. I guess I could have kept altering my input until I got a better result, but that would likely have taken just as long as rewriting it.

Apologies for the anecdote. I just wanted to share my general feelings about all this generative machine learning crap that is accumulating all around us. I'm sure college professors would have looked down on web searches for term papers back in 1995... so maybe this is all just early and everything will be looked at in a different way in ten years. But right now it all seems like fluffy add-on crust to sell more product, like Smart Refrigerators and apps for a washing machine.
posted by SoberHighland at 8:16 AM on January 23 [2 favorites]


Can someone tl;dr some examples of "moderate apocalypses" from this work?
posted by star gentle uterus at 8:26 AM on January 23 [5 favorites]


I'm sure college professors would have looked down on web searches for term papers back in 1995

I'm not so sure. I started just a couple years later, and we definitely looked stuff up on the web with our professors' blessings. I was a math major, and back then a lot of the mathy content was posted by math professors and grad students. I'm not certain, but I think a lot of discussion of humanities topics would have been similarly biased toward being posted by people who knew a little bit about what they were talking about. Looking back, the real change in the internet was when corporations took over. It was much better when it was almost entirely academics and govts and hobbyists and nerds and regular people joking around and ranting on usenet or irc or bbs's.
posted by SaltySalticid at 8:36 AM on January 23 [5 favorites]


Someone on Mastodon coined the term "Informational Kessler Syndrome" and it's been living rent free in my head, despite not being able to find the original post.
posted by Zalzidrax at 8:38 AM on January 23 [4 favorites]


The book and its companion material Gradient Dissent are read aloud here (free downloads!).
posted by rabia.elizabeth at 9:15 AM on January 23 [1 favorite]


I don't disagree with the premise (wide rollout of AI is going to cause harm / is causing harm) or even much of the argument, but this is a very strange read:

- Describes search engine or stores suggesting "this might be relevant" as dangerous "recommender AI".
- Ascribes "deceptive cunning" to MATH used to train neural networks for image recognition. (Okay, this one is from the technical appendix Gradient Dissent.)
- Puts "scare quotes" around things, and makes pejorative names for things: Mooglebook?
- Moderate apocalypses: this is the chapter to skim if you want to read about the consequences of rolling out "AI". Quotes here, because he's including corporate behavior, recommendation systems, web advertising and a variety of social ills as "AI".

It reads mostly as a pop-sci polemic from someone who got their PhD in the first AI boom and is angry that neural nets got popular instead of whatever late 80s tech he worked with.

With the real dangers of adopting really really big neural nets looming, it must have seemed like a good time to spruce up this manifesto and push it into the world.
posted by Anonymous Function at 10:12 AM on January 23 [5 favorites]


Here's the list BTW:

Practical actions you can take against AI risks

End digital surveillance
Develop and mandate intrinsic cybersecurity
Mistrust machine learning
Fight DOOM AI with SCIENCE! and ENGINEERING!!
Spurn artificial ideology
Recognize that AI is probably net harmful
Create a negative public image for AI
posted by subdee at 10:59 AM on January 23 [2 favorites]


I like to think that many of us here on MetaFilter do our part to mistrust AI and give it a negative public image every time this topic comes up...
posted by subdee at 11:00 AM on January 23 [3 favorites]


I read an AGI-uprising story a while back (I don't think it was Robopocalypse) where, within seconds of 'life', the AI realises it needs a viable planet to ensure it continues to receive electrical power, understands that human-caused climate change threatens this... and sets out to destroy humanity.

Seeing as most unfettered so-called AIs rapidly devolve to lying, racism and hallucination (typical of most techbros I have known), I doubt this tech has anything good to offer humanity, but we are such a stupid species, ever drawn to 'all that glitters' while ignoring the reality beyond the shine.
posted by unearthed at 11:59 AM on January 23 [1 favorite]


That Manna story made me want to scream and scream for the first bunch of chapters. After that, the rest of it is so ridiculously perfect and impossible it hurts me in the exact opposite way from the first bunch of chapters.
posted by jenfullmoon at 11:59 AM on January 23 [3 favorites]


I’m a bit more interested in this preprint on arXiv that suggests that Microsoft generative AI products cannot safely handle secure data and likely never will be able to. A more experienced MeFite than I might come to a different conclusion, I suppose, but that seems a bigger danger than any AGI bullshit.
posted by GenjiandProust at 5:29 PM on January 23


After that, the rest of it is so ridiculously perfect and impossible it hurts me

As an Australian looking at the US from outside, that makes perfect sense to me.

Obviously the Brain piece is a total FALGSC cartoon and obviously Australia has more in common with the US than any Australian is ever going to be comfortable admitting, but honestly from here we do just look over there and mostly feel kind of heartbroken for almost all of you; the bosses have such a tight lock on the Land Of The Free.

Egalitarianism really does feature in Australia's conception of itself to a much greater extent than in any other settler colonialist project I'm currently aware of, so I can see why Brain picked us to project his return-to-the-womb fantasy on.

Also, as an Australian I got a hearty laugh out of the idea that The Australia Project's founder (a) was American (b) structured it like a DAO from the get-go (c) colonized this whole continent with it starting from "the outback". Terra Nullius is the myth that just won't die.

Poor simple fucker wouldn't have lasted two hours in Wolf Creek. Those zip ties really are the duck's guts.
posted by flabdablet at 6:40 PM on January 23


Ugh.

As per a common experience when dealing with Rationalists and Rationalist-adjacent folks, a lot of this seems plausible and convincing, until you come across an area that you're familiar with on a deeper level, at which point it's so wildly wrong that you distrust everything else.

For me, it's his writing on "artificial ideologies," along with his discussion of "fuzzing," that highlighted that this is yet another Computer Guy with a Theory That Solves Everything, and footnoting to Yud didn't improve that take.

It's frustrating, because there are many reasons to be skeptical of AI, and he hits on some of them, but 1) the notion of recommenders fomenting an apocalypse relies on an overly-online view of human behavior; 2) his dismissal of actually thinking through how likely these things are leads him to overhype a lot of pretty harebrained theories about potential risk; 3) his views on ideologies and how political institutions actually function are based on a deterministic view of human behavior that hasn't been supported by history; 4) he wildly underestimates how adaptable humans are.

But really, his anti-ideology stuff is appealing to a certain type of nerd who likes to think that they're above politics, and he throws in some populist meat too, but it's fucking stupid to claim that, say, capitalism has been rendered irrelevant by the new artificial ideologies of AI. It's One Weird Trick hucksterism, even if he did get a PhD from MIT in 1991 (which explains why his theoretical model for how he thinks memes work is basically Rudy Rucker's Software).
posted by klangklangston at 6:54 PM on January 23 [6 favorites]


Hopefully this is sincere and not the usual "criti-hype":
AI developers need to generate criti-hype — “criticism” that says the AI is way too cool and powerful and will take over the world, so you should give them more funding to control it.
posted by TheophileEscargot at 9:30 PM on January 23




This thread has been archived and is closed to new comments