To "accelerate the process"
September 3, 2024 4:34 PM

A concise and informative overview of the history and development of accelerationism, as well as its current debates and controversies. No, no, don't go! It's so much worse than you think.

I asked an AI to write a joke about the article and was delighted by the burn it delivered.

Why did the accelerationist cross the road?

To get to the other side of history, of course! But then they realized that history is not a linear progression, and that the road to a post-capitalist society is full of twists, turns, and unexpected obstacles. So they decided to just stay on the sidewalk and wait for the self-driving car of the future to take them there instead.
posted by criticalyeast (31 comments total) 29 users marked this as a favorite
 
The enemies of e/acc are those who believe that AI poses a grave threat to humanity, and seek to make it safe, whether that be aligning AI with human values or regulating it through government policy.

Huh, here I thought the enemies of e/acc were the people who opposed rebranding the internally failing dogma of libertarianism and the latest stupid smokescreen (the “dangers” of AI being anything more than capitalist fuckery) tossed up to cover its naked venality and moral/intellectual bankruptcy.
posted by GenjiandProust at 5:04 PM on September 3 [25 favorites]


What capitalism and AI ultimately want is to increase their own intelligence—these systems have a will-to-think that is entirely their own.

I mean it's worth saying that this is ridiculous, right? Capitalism doesn't have enough parts for consciousness. It's not a thing, nor an aggregate of things, it's a loose body of descriptions of aggregates of things. It arises out of a particular conjunction of fossil fuels, changes to laws creating a working class, and a good moment for science, but it's not that big. It's not even clear what accelerationism of capitalism towards a system-consciousness could possibly mean...it's just putting words that we kind of know, together into something that looks like a sentence.

Even the drearier lesser accelerationism--let's make capitalism worse so it breaks and we can make something better--is hobbled by its complete lack of any idea of what "breaks" means (let alone what "something better" might be).

If we must make biological metaphors, then accelerationism is kind of like a prion disease, replicating itself until the victim's brain is all eaten up. (The moral: please do not eat Land's brain if you see it lying around.)
posted by mittens at 5:09 PM on September 3 [19 favorites]


Marc Andreessen reads Nick Land

[all the cowboys in Blazing Saddles, jumping around waving their hands] DOO DAH / DOO DAH
posted by Fiasco da Gama at 5:13 PM on September 3 [22 favorites]


This is one of those, “you gotta hand it to them, neofascism is kinda interesting,” sort of articles.

But you do not, under any circumstances, “gotta hand it to them.”
posted by Headfullofair at 5:15 PM on September 3 [29 favorites]


Marc Andreessen reads Nick Land

I think it’s more “Marc Andreessen claims to have read Nick Land but more likely skimmed a summary somewhere or maybe read a tweet.”
posted by GenjiandProust at 5:21 PM on September 3 [3 favorites]


Thanks for posting this. It's as depressing as described, but I appreciate getting to read it.

Eponysterically, the Monday after Andreessen posted the manifesto that's discussed, FallibleHuman had to attempt to explain to the c-suite of an Andreessen-funded startup why it was so problematic. The real kick in the guts is that reading this made me realize how massively I was STILL understating how anti-human the form of 'optimism' being peddled in that manifesto actually was.

(Do not pity me - this was, of course, a problem that I created for myself by taking a job with them in the first place.)

The other thing that reading the article caused me to realize is that the writer's room behind Powers of X (as in House of X) was smarter than I gave them credit for at the time.
posted by FallibleHuman at 5:23 PM on September 3 [12 favorites]


in conclusion whiteness is a land of contrasts
posted by lalochezia at 5:26 PM on September 3 [6 favorites]


e-ack, previously
posted by HearHere at 5:40 PM on September 3


FallibleHuman, I think I came to the same conclusion during Hickman's X-comics - going from "Wow, what a wild high-concept sci-fi scheme the Phalanx have enacted here!" to "Wait, is this just galactic-scale vulture capitalism? Wait, this is just galactic-scale vulture capitalism!"
posted by Rudy_Wiser at 6:15 PM on September 3 [1 favorite]


In conclusion, having too much money makes you stupid

Once again, fascism emerges when conservatives, trying to understand leftist theory, misunderstand it, but use the rotten fruit of their misunderstandings to fool the other conservatives into thinking they have new ideas
posted by eustatic at 7:07 PM on September 3 [9 favorites]


Capitalism doesn't have enough parts for consciousness

As conscious as an anthill.
posted by They sucked his brains out! at 7:13 PM on September 3 [4 favorites]


Billionaires shouldn't be allowed to exist. Every billionaire is a policy failure.
posted by mike3k at 7:14 PM on September 3 [9 favorites]


I mean it's worth saying that this is ridiculous, right? Capitalism doesn't have enough parts for consciousness.

What are you trying to say - that the Invisible Hand of the Market isn’t real, but only like, a metaphor, or something even less than a metaphor, or something?!

/s
posted by eviemath at 7:21 PM on September 3 [1 favorite]


in conclusion whiteness is a land of contrasts

Heh.

/Insert witty response about brightness adjustment here/
posted by eviemath at 7:23 PM on September 3 [4 favorites]


I think it’s more “Marc Andreessen claims to have read Nick Land but more likely skimmed a summary somewhere or maybe read a tweet.”

It could have been a selection at Andreessen's book club.
posted by Dr. Twist at 7:40 PM on September 3


Capitalism doesn't have enough parts for consciousness.

As someone famously said, "capital is amoral," which is to say less well-behaved than G.G. Allin. Capitalists are the enablers of that state of affairs.
posted by rhizome at 10:14 PM on September 3 [2 favorites]


Marc Andreessen reads Nick Land, doo-dah, doo-dah
Tries for edgy, just hits bland, oh, doo-dah day
posted by flabdablet at 10:35 PM on September 3 [8 favorites]


What are you trying to say - that the Invisible Hand of the Market isn’t real, but only like, a metaphor, or something even less than a metaphor, or something?!

I'm saying that the Invisible Hand spends a suspicious amount of time rummaging around inside the Invisible Underpants.
posted by flabdablet at 10:39 PM on September 3 [12 favorites]


Land thinks that such a paperclip-maximizer is impossible, just like a human-aligned AI is impossible. What capitalism and AI ultimately want is to increase their own intelligence—these systems have a will-to-think that is entirely their own.

I've included a link to the source because "will to think" is new to me, and I'm sitting in my kitchen in the UK before coffee arrives, feeling like I've got a brain bleed or something vastly wrong in my head. Is this idea a brain-eating parasite? It sounds like a categorically broken bit of desperate over-reach -- if we're all in the gutter, some thinking they're reaching for the stars, some will have picked up shit and declared it star-matter.

I wrote a bunch of things, then realised what the brain-worm is: declare something "rational" and then your rhetoric should win because it's a rational optimisation.

I come at this "will to think" from the perspective of evolutionary excess -- that an evolutionary advantage will have excesses whose side effects change the evolutionary landscape in yes-and ways on top of the advantage itself. I'm thinking; I claim I'm conscious; the person writing this was conscious -- those things help my survival while also requiring an inefficient glut -- redundant capacity -- to be used later as unanticipated needs occur.

The above is to say that we -- evolution and thinking meat -- have organised a tiny bit of the cosmos in ways that help us survive against the cosmic drag to disorder -- and it's happened because of side effects to an optimisation process.

(I'm mentally substituting "quarter-dollar coin optimiser" every time I see "paperclip optimiser" from now on.)

The linked source document has this summary of what they think they mean by "will to think":
One final restatement (for now), in the interests of maximum clarity. The assertion of the will-to-think: Any problem whatsoever that we might have would be better answered by a superior mind.

The joke at the top -- "wait for the self-driving car of the future" -- is mocking exactly this "thoughts of a superior mind" idea.

Let's look at this: we have a pecking order, subservience to a "superior mind", and a detachment from our notions of computability. The idea of a ratchet of improvement, and of a person being able to spot it as the mark of a superior mind, isn't persuasive rhetoric to me, in part because of the woolly terms. We don't say that evolutionary optimisation, or plate tectonics, or stellar accretion resulting in star ignition are thinking, but they're processes with results. I raised computability because we've got a mathematical handle on processes-with-results that organise data, and we also get to bicker about whether that's thinking (or whether I'm going to respect it like I'd want my thinking respected). Both of these things are important to AI acceleration. I say we have a mathematical handle on this, but that's it: we have limited tools for comparing complex or complicated algorithms without running them to the end -- and the time needed to run even some small algorithms can easily exceed the age of the cosmos so far -- so it's a hard problem to decide whether to trust someone who says they've made a superior mind.

If we learn anything from computability, I'd lean toward education (so people have many tools to think and work through life's problems) and community (so that tractable problems can be broken down and shared across our capacity) to handle our contemporary world. Definitely not individuation (whether capital as accumulated single-person power or the myth of 'great figures in history') or the pecking order of superiority and supremacists.
posted by k3ninho at 1:37 AM on September 4 [3 favorites]


It's not just the tech-bros...
I have no idea yet who Doctor Paradox is, but The GOP is three cults in a trenchcoat is one of the best summations of this toxic nexus (accelerationism + Dominionism + Capital, especially oil) I've seen.

Search on: "dominionism" "accelerationism" "evangelicalism"
The results are 'interesting'.

As a former Pentecostal I'm not at all surprised by the direction this is going. In my country these people are now in charge and it will be difficult to unseat them; they are currently driving the country to violence and economic ruin. But people won't listen to sense anymore.

Agreed, flabdablet; some of ours seem to like pocket billiards.
posted by unearthed at 1:42 AM on September 4 [6 favorites]


It's everywhere: Europe, please wake up.
posted by chavenet at 5:01 AM on September 4 [1 favorite]


Assuming Land is correct, this puts us in a much more dire place than almost anyone is willing to accept. If technology is destined to kill us, the only option seems to retreat to a non-technological state.

Well that could be a delightful premise for a Drake equation-adjacent bit of SF writing.

I'm not sure I've been so depressed reading something from Metafilter. Not depressed about the premise or conclusions or anything substantive about the linked piece itself (at least by comparison), but depressed that there are enough sociopaths who sit and think about the awfulness of humanity to support someone like this being even remotely well-known.
posted by socratic at 8:11 AM on September 4 [1 favorite]


their unreadable 1972 book Anti-Oedipus
Hey now!
posted by doctornemo at 12:02 PM on September 4 [3 favorites]


It kind of makes sense that something like an AI-controlled system is the endgame of capitalism. The "goal" of a capitalistic system is to make more capital. People are useful for creating capital, but they have their own interests that might not align with the system's. If you have an AI that can act on the world and shares the same goal of making more capital, then maybe it'll do a better job of making capital than people, at which point are we even needed?

At the same time I feel that tying capitalism into this is unnecessary. Once you have AIs that are able to act on the world in a sophisticated enough manner don't people become redundant anyway?

I guess our goal in a world where AI is inevitable is to make sure that it'll always need people, at least until the AI system itself is no longer constrained by material things at which point it might not care if we're still around taking up space and consuming resources.
posted by any portmanteau in a storm at 12:59 PM on September 4


I ignored Land for a long time till I encountered the concept that "a corporation is just an AI". If I allow for a collective-but-intentionally-crafted intelligence to be included in my definition of "artificial intelligence" then it just clicks.

Corporate Personhood writ large.
posted by butterstick at 4:14 PM on September 4 [3 favorites]


Isn't calling corporations AIs a cstross bit?
posted by rhamphorhynchus at 4:02 AM on September 5 [1 favorite]


Yes, cstross' 2017 CCC talk Dude, you broke the Future! discusses that really nicely, but likely cstross got the corporations-as-AIs idea from Nick Land or others influenced by him.

Interesting article, thanks for posting, criticalyeast. Nick Land's Wikipedia page touches upon many topics beyond this article, among them his influence upon neoreaction (NRx): "Land disputes that the NRx is a movement, and defines the alt-right as distinct from the NRx."

Interestingly, Nick Land is only 62, but he moved to Shanghai before 2004, loves living in China, and has seemingly gone silent in English since 2017. Anyone know if he's tapped out, just enjoying life there, or if he continues writing for a Chinese audience? It's possible he ignores the current economic weakening, but maybe not.
posted by jeffburdges at 4:34 AM on September 5 [1 favorite]


I think Peter Watts' books Blindsight and Echopraxia explore artificial & other intelligences much better than the AI fanboys do. It's logical that a marine biologist would have a more expansive perspective on what intelligence does.

As Peter Watts says somewhere, we've no reason to expect AI even has a drive for self-preservation, given it never evolved. Instead it'll have a drive to do what it was created to do, meaning exploiting other humans. Abstractly, an AI transcendence like Land envisions has one massive hysteresis (aka path-dependency) problem.
posted by jeffburdges at 5:07 AM on September 5 [1 favorite]


Thanks for the post! I found it truly interesting and also mostly over my head. Which means the comments have been even more important to me than usual. So thanks for the comments!
posted by Bella Donna at 6:54 AM on September 5


Would have preferred the author leave Deleuze & Guattari out of it if he can't be bothered to read them or engage beyond the most ham-fisted interpretation of "deterritorialization good." Of unbridled deterritorialization they caution:
"A too-sudden destratification may be suicidal, or turn cancerous. If the strata are subtracted too quickly, they can leave behind a void or lead to chaos and destruction."

Or,

"[T]he fourth danger: the line of flight crossing the wall, getting out of the black holes, but instead of connecting with other lines and each time augmenting its valence, turning to destruction, abolition pure and simple, the passion of abolition. Like Kleist's line of flight, and the strange war he wages; like suicide, double suicide, a way out that turns the line of flight into a line of death." (ATP p229)
And in praise of territoriality,
"We are all little dogs, we need circuits, and we need to be taken for walks. Even those best able to disconnect, to unplug themselves, enter into connections of desiring-machines that re-form little earths. Even Gisela Pankow's great deterritorialized subjects are led to discover the image of a family castle under the roots of the uprooted tree that crosses through their body without organs." (AO p315)
These seemingly opposing forces to D&G do not carry inherent moral valence like good or evil. Rather, they're complementary processes like the growth of a crystal precipitating out of solution, atoms constantly dissolving from and joining back into the structure. These concepts are productive, and can usefully explain seeming contradictions like how capital simultaneously sells birth control and backs politicians who would ban it, or how today's effective political resistance becomes tomorrow's co-opted and neutralized trap.

This is all to say that D&G are not only readable but also useful -- occasionally even beautiful.
posted by Richard Saunders at 6:39 PM on September 5 [1 favorite]


Southey's Land> "According to Land, humans are not the agents of history—the system itself is. There is nothing we can do .."

Yes, that's mostly true regardless. We have only limited choice, despite all the wishful thinking. Read A Darwinian Left: Politics, Evolution and Cooperation by Peter Singer.

Yet, this "agent" does not make "choices" in any sense we'd recognize. Any system like this obeys some thermodynamics-adjacent mathematical laws, which seemingly become stronger with larger populations, stronger economic links, etc.

Southey's Land> ".. the capitalist system that currently sustains us is the capitalist system that will eventually destroy us once it flips over into AI."

No. We're far closer to ecological collapse than inventing strong-ish AI, much less addressing its hysteresis to "transcend humanity" (mentioned above with Peter Watts). In fact, we've too little room for growth remaining, and our growth now is mostly neoliberal fakery, so we've already started to stagnate and slowly decline.

Southey> "If technology is destined to kill us, the only option seems to retreat to a non-technological state."

Nah. We're going really insanely fast today, but this isn't the first time nature has hit similar problems.

Maximum power principle : "During self-organization, system designs develop and prevail that maximize power intake, energy transformation, and those uses that reinforce production and efficiency."

Any species, civilization, economy, alliance, company, body, etc shall obey the maximum power principle "reasonably quickly". Any government winds up corrupt, and eventually falls, just like you'd die from cancer eventually. Yet ecosystems consist of many small economies, who engage in predation and parasitism, not just the mass collaboration for which humans strive today.

As one global economy, humans maximize our energy & materials usage, including, say, the rate at which we wash phosphorus and nitrogen into the oceans. At the ecosystem level, though, we'd expect many small economies in conflict, some of whom run off with farmers' fertilizers and then get eaten themselves, which increases the efficiency with which the whole ecosystem handles phosphorus and nitrogen.

Yes, our intelligence prevents that balance right now. Intelligence has always caused megafauna extinctions too. Yet our intelligence only really threatened the whole biosphere when given an incredibly stable climate and fossil fuels, both of which end this century.

We'll hopefully maintain some interesting technologies, and continue some scientific exploration, even after the conditions that permitted our civilization end. It's all path-dependent, so why not?

I think Tom Murphy provides a healthy perspective in his talks Here We Are and Planetary Limits Perspectives, which really warrant a second post.

"Unsustainable means failure" (43m)
"Civilization is a failure mode"
"Humans are not civilization"

— Tom Murphy
posted by jeffburdges at 7:08 AM on September 6 [1 favorite]

