GSV Hopelessly Optimistic But Still Right
October 24, 2024 1:39 PM
Machines of Loving Grace: Anthropic CEO Dario Amodei publishes a 14,000-word post on how AI could transform the world for the better, detailing how the 5-10 years following the availability of an "Expert-Level Science and Engineering" AI, or "country of geniuses in a datacenter," could play out in advances in biology (Amodei's specialty), neuroscience, economic development, governance, and work and meaning.
Amodei draws on Iain M. Banks's MeFi favorite, the Culture, as he concludes:
"I think the Culture’s values are a winning strategy because they’re the sum of a million small decisions that have clear moral force and that tend to pull everyone together onto the same side. Basic human intuitions of fairness, cooperation, curiosity, and autonomy are hard to argue with, and are cumulative in a way that our more destructive impulses often aren’t. [...] These simple intuitions, if taken to their logical conclusion, lead eventually to rule of law, democracy, and Enlightenment values. If not inevitably, then at least as a statistical tendency, this is where humanity was already headed. AI simply offers an opportunity to get us there more quickly—to make the logic starker and the destination clearer."
Banks, most previously on the Blue: [1][2]
via The Verge/Decoder
OK, so just starting in here:
I think that most people are underestimating just how radical the upside of AI could be
I don't think literally anyone underestimates this. I think nearly everyone estimates the risks of AI as being risks of WHO CONTROLS AI, and how it will be wielded by the powerful. The current state of AI is total shit, but is already being used to the detriment of workers and society
posted by Dr. Twist at 1:59 PM on October 24 [32 favorites]
“an informed amateur in the field of economic development“
Fuck OFF
posted by rrrrrrrrrt at 2:00 PM on October 24 [31 favorites]
There's nothing wrong with informed amateurs. It would be a pretty sad world in which we read only thoughts from full-time paid experts. And the next phrase of his sentence is "I am sure I will get plenty of things wrong."
posted by Mr.Know-it-some at 2:07 PM on October 24 [5 favorites]
"How The Expensive Stochastic Repetition Of Information Patterns In Existing Large Data Corpuses Could Transform the World for the Better" doesn't really roll off the tongue the same way, but I think is a little more honest in its assessment.
posted by mhoye at 2:12 PM on October 24 [16 favorites]
Translation: we're burning cash, for the love of god invest more in us!
posted by aramaic at 2:12 PM on October 24 [20 favorites]
On the bright side there's probably going to be room for a lot of new and genuinely interesting startups in the world once Anthropic and OpenAI fall over and people need to find better ways to amortize the costs of warehouses full of last year's GPUs.
posted by mhoye at 2:15 PM on October 24 [12 favorites]
Considering how the quality of search is currently declining, I'm not optimistic that the AI LLM industry will fare any better.
posted by Omon Ra at 2:20 PM on October 24 [6 favorites]
What an exciting future, where executives never have to take responsibility for their actions, because those actions were suggested by a country of geniuses in a datacenter. (The country of geniuses in a datacenter also can't be held responsible because it was terminated immediately after delivering the suggestion. Another copy of the country of geniuses can be instantiated later to write an apology if necessary.)
posted by Phssthpok at 2:22 PM on October 24 [19 favorites]
"Expert-Level Science and Engineering" AI, or "country of geniuses in a datacenter"
and
By powerful AI, I have in mind an AI model—likely similar to today’s LLM’s in form, though it might be based on a different architecture, might involve several interacting models, and might be trained differently
They're gonna have to cook something up that's fundamentally different from trained transformers to get there. Transformers are statistical processes and everything they make is based on statistical weights. If Amodei's big idea is "what if ChatGPT *and* Claude, eh?!?!?" we'll throw more of the Amazon rainforest into the atmosphere for a GPT that's epsilon better.
They're cliché machines. At best we'll only ever get "Cliché-Level Science and Engineering" and a "country of mediocrities in a datacenter" for the mere cost of petawatts of wasted power so long as we're dumb enough to just keep throwing money at this cute toy dead-end of generative AI.
posted by tclark at 2:25 PM on October 24 [15 favorites]
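A minimal sketch of what "statistical weights" means in practice, with a toy vocabulary and hand-picked logits (both invented for illustration; a real model derives its logits from billions of learned parameters): at each step the model converts scores into probabilities and samples the next token from them, which is the whole trick.

```python
import math
import random

# Toy illustration only: real models compute these scores with learned weights;
# here the vocabulary and logits are hard-coded to show just the sampling step.
vocab = ["the", "cat", "sat", "on", "mat", "quantum"]
logits = [2.1, 0.3, 1.7, 0.9, 1.2, -1.5]  # unnormalized scores for the next token

def sample_next_token(logits, vocab, temperature=1.0):
    """Softmax over the logits, then draw one token in proportion to its
    probability -- the 'statistical weights' doing all of the work."""
    scaled = [score / temperature for score in logits]
    peak = max(scaled)                     # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(vocab, weights=probs, k=1)[0]

print(sample_next_token(logits, vocab, temperature=0.8))
```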
From Musk to Altman to Amodei we have the absolute most amateur, lazy and miseducated thinkers engaged in the project of designing the thinking machines that are transforming the world. For the love of god, engage the services of a truly critical philosopher
posted by dis_integration at 2:28 PM on October 24 [15 favorites]
There's nothing wrong with informed amateurs. It would be a pretty sad world in which we read only thoughts from full-time paid experts. And the next phrase of his sentence is "I am sure I will get plenty of things wrong."
If this was just some rando spitballing on their blog, I would be more inclined to read this charitably. This is the CEO of a major AI company.
posted by Dr. Twist at 2:33 PM on October 24 [18 favorites]
There are several layers of insanity in this essay but perhaps the most interesting one is the utopian assumption that if "we" had this technology, "we" would deploy it to do things like basic research in biology that require even more than just the enormous cost (in energy, hardware, cooling, facilities, etc.) of running models - they also require vast investment in the lab equipment and facilities that the models are going to use to interact with the world in order to make scientific progress at superhuman rates. And that's just one example!
It is left remarkably unclear who is paying for this and what economic incentives they are responding to. This isn't an essay about "what if we had AGI?" - it's an essay about "what if we had AGI and fully automated luxury space communism?" Why wouldn't the country of geniuses in a datacenter spend its time being the best porn roleplaying chatbots that human civilization could ever imagine, while scamming retirees out of their savings as a side hustle?
posted by allegedly at 2:35 PM on October 24 [9 favorites]
Just for the sake of this thread, I asked ChatGPT: how do I refrobulate a chaemonostat with less than fifty rejuvs?
ChatGPT said: "Refrobulating a chaemonostat can be tricky, especially with limited rejuvs. Start by ensuring all connections are secure and check for any misalignments in the components. If you can, prioritize the calibration sequence; sometimes, a simple recalibration can do wonders.
If you're working with fewer than fifty rejuvs, consider using alternative energy sources to supplement the system. Lastly, keep an eye on the feedback loops; optimizing those can help maintain stability without draining your rejuvs too quickly. Good luck!"
Should I do anything differently if I have unrescabular cryonostics on the inflecrtius?
ChatGPT said: "Yes, having unrescabular cryonostics on the inflecrtius definitely requires some adjustments. First, make sure to isolate the cryonostic readings to avoid interference with the chaemonostat's function. It might be wise to perform a diagnostic check to assess the extent of the unrescabulation and determine if any components need recalibrating or replacing.
Consider temporarily bypassing or shielding any affected areas to prevent cascading failures. If possible, consult the manual for specific protocols related to unrescabular situations, as they often include troubleshooting steps tailored for such anomalies. And remember, take your time—rushing can lead to further complications!"
Once that's done should I start by reaccliminating the rhevonotics or stantionate the frencialimeter instead?
ChatGPT said: "Once you've addressed the unrescabular cryonostics, it's usually best to start by reaccliminating the rhevonotics. This ensures that the foundational systems are stable before you tackle the more complex adjustments. Proper reacclimation can help improve overall performance and mitigate any potential issues stemming from the cryonostics.
Once the rhevonotics are stable, you can then focus on stantionating the frencialimeter. This two-step approach will help maintain system integrity and optimize performance across the board. Good luck with the process!"
So, remember, take your time. Rushing can lead to further complications.
posted by mhoye at 2:41 PM on October 24 [65 favorites]
ChatGPT said: "Refrobulating a chaemonostat can be tricky, especially with limited rejuvs. Start by ensuring all connections are secure and check for any misalignments in the components. If you can, prioritize the calibration sequence; sometimes, a simple recalibration can do wonders.
If you're working with fewer than fifty rejuvs, consider using alternative energy sources to supplement the system. Lastly, keep an eye on the feedback loops; optimizing those can help maintain stability without draining your rejuvs too quickly. Good luck!"
Should I do anything differently if I have unrescabular cryonostics on the inflecrtius?
ChatGPT said: "Yes, having unrescabular cryonostics on the inflecrtius definitely requires some adjustments. First, make sure to isolate the cryonostic readings to avoid interference with the chaemonostat's function. It might be wise to perform a diagnostic check to assess the extent of the unrescabulation and determine if any components need recalibrating or replacing.
Consider temporarily bypassing or shielding any affected areas to prevent cascading failures. If possible, consult the manual for specific protocols related to unrescabular situations, as they often include troubleshooting steps tailored for such anomalies. And remember, take your time—rushing can lead to further complications!"
Once that's done should I start be reaccliminating the rhevonotics or stantionate the frencialimeter instead.
ChatGPT said: "Once you've addressed the unrescabular cryonostics, it's usually best to start by reaccliminating the rhevonotics. This ensures that the foundational systems are stable before you tackle the more complex adjustments. Proper reacclimation can help improve overall performance and mitigate any potential issues stemming from the cryonostics.
Once the rhevonotics are stable, you can then focus on stantionating the frencialimeter. This two-step approach will help maintain system integrity and optimize performance across the board. Good luck with the process!"
So, remember, take your time. Rushing can lead to further complications.
posted by mhoye at 2:41 PM on October 24 [65 favorites]
Some excerpts from "A Few Notes on the Culture" by Iain Banks that I find relevant to read alongside our contemporary discussions of AI in a capitalist society where AI is being used to deskill and discipline workers:
The market is a good example of evolution in action; the try-everything-and-see-what-works approach. This might provide a perfectly morally satisfactory resource-management system so long as there was absolutely no question of any sentient creature ever being treated purely as one of those resources.
And
Briefly, nothing and nobody in the Culture is exploited. It is essentially an automated civilisation in its manufacturing processes, with human labour restricted to something indistinguishable from play, or a hobby.
If only drones could take CEOs on a personal tour of the wretchedness whence their wealth derives, though I suspect what revelations they might take away from such a tour, if any, would be far less profound than Gurgeh's.
No machine is exploited, either.....
Where intelligent supervision of a manufacturing or maintenance operation is required, the intellectual challenge involved (and the relative lightness of the effort required) would make such supervision rewarding and enjoyable, whether for human or machine. The precise degree of supervision required can be adjusted to a level which satisfies the demand for it arising from the nature of the civilisation's members. People -- and, I'd argue, the sort of conscious machines which would happily cooperate with them -- hate to feel exploited, but they also hate to feel useless.
Amodei's aspiration to the "idea that punishments should be consistent and systematic across people" also strikes me as the kind of thinking that could come to support virtual hells.
posted by audi alteram partem at 2:43 PM on October 24 [8 favorites]
mhoye: "Just for the sake of this thread, I asked ChatGPT: how do I refrobulate a chaemonostat with less than fifty rejuvs?"
I assume you're using the weaker, free model? 4o says:
It sounds like you're asking about a technical process involving fictional or obscure terminology. Could you clarify what you mean by "refrobulate" and "chaemonostat"? If this is a part of a specific system, game, or concept, providing more context would help in figuring out the steps or a relevant explanation!
posted by Rhaomi at 2:45 PM on October 24 [6 favorites]
Increasingly, there are serious arguments that LLMs do not reason. Yes, they'll make "expert-level" knowledge available in new ways for corporations and government agencies, but what does that mean?
AIs could absolutely change what works get created and what gets attention. We've already seen drug discovery AIs propose numerous chemical weapons, including VX, when researchers swapped goals. Ignore the sciences though..
"As a musician/songwriter myself, I've tried numerous times to make it happen, but it's just not feasible." -- ObscurestVinyl
In other words, "it's just not feasible" to convince other talented humans to help create songs like I Glued My Balls To My Butthole Again. Yes, those songs are cute amusements, but imagine if all the catchiest pop songs are advertisements.
There are non-sciences with considerable harm potential too, especially the social sciences. If you hire a human who studies some marginalized group, then you'd expect this human has some indoctrination that gives them pause when asked to harm that group.
Imagine otoh you train an AI on the social sciences literature concerning some marginalized group, along with say the artforms produced by that group. If you provide this AI to outright racists with too much money and power, then they could design much more insidious attacks upon that group, like say AI art that degrades the grammatical structure of some marginalized group's dialect. Alright, US Nazis might've too little intelligence for this. Israel or similar provide a different story.
We've got legal discovery AIs, but what if companies reverse those, censoring their executives so the company takes rampant illegal actions but perfects pushing blame onto lower-level employees?
As technologies go, AIs most closely resemble corporations, in how they increase powerful people's range of how they exercise their power.
posted by jeffburdges at 2:50 PM on October 24 [6 favorites]
I’m increasingly of the position that under capitalism, “AI” and “accelerationism” are the same damn thing. I don’t want to hear how we might benefit from it at this point, I want to hear how we can be protected from indiscriminate use of it and prevent the capture of the billions being thrown at it. But what can we do?
posted by Jon Mitchell at 2:56 PM on October 24 [7 favorites]
I don't even know what to say at this point. This is pure bullshit, an act of fraud intended to deceive people into giving money to the speaker. Nothing Anthropic (nor OpenAI, for that matter) is working on has anything to do with any of this. 100% of the relationship between this piece and Iain M. Banks' series of novels set in The Culture is that they're both fiction.
If anybody tells you that they have a machine which engages in humanlike reasoning or possesses anything analogous to sapience, they are lying and in the last couple of years they're generally using the latest and greatest version of ELIZA to trick you into believing them. They know that they are committing fraud.
It is tremendously frustrating that nobody will go to jail for any of this.
posted by Pope Guilty at 3:05 PM on October 24 [19 favorites]
"By powerful AI ... with the following properties: In terms of pure intelligence, it is smarter than a Nobel Prize winner across most relevant fields – biology, programming, math, engineering, writing, etc. This means it can prove unsolved mathematical theorems, write extremely good novels, write difficult codebases from scratch, etc."
With a guesstimate of that happening somewhere around or shortly after 2026. Like c'mon dude.
I don't mean to be too pessimistic (Who am I kidding, yes, yes I do), but, like-- show me a concept of intelligence that's as advanced as a cockroach. Show me a car that can autonomously drive without any issue. Show me a code copilot that doesn't make any boneheaded choices with simple junior-level problems. Are you saying those problems are harder to solve than creating a Nobel-beating intelligence across multiple fields?!
Or do we have to crack the problem of generating some tech capable of out thinking a Nobel prize winner across all fields, so that it can then build a model as good as a cockroach?
His idea is that LLMs ain't gonna be it (not the current architecture at least)-- it's some other tech. But pray tell, Dario, what steps have been made down that path, has Anthropic made any steps down that new path, care to show your working? Or is it just-- someone will crack it, and Anthropic will be there to copy, edit and go to market.
This idea of intelligence, enough that it could write a great novel, is pretty ugh. Is it more likely it writes a unique description of the touch of wind across your face on a cold night, without ever having felt one, or that we just get the ol' slop bucket churning out amalgamations from people who actually used their intelligence to create something new?
Enough grar for now and I'll go back to reading it tonight-- but as someone who uses these tools (categorization, first-pass humanizing concatenated descriptions, etc) I find it mind-blowing these people can look at what they've built and hop, skip and jump to effectively a God-like intelligence.
posted by Static Vagabond at 3:07 PM on October 24
it'd be great if AI gave us the star trek society. but it won't, because the power it offers will be concentrated in the hands of a few selfish people who will use it to fulfill their own desires. but then that will be just like the way power is exercised today.
i worry more about the atrophy of the human spirit you already see in the proliferation of "artists" and "writers" who use generative models to produce images and text for them and then say they have written something. even in the linked piece, amodei speaks about the highly intelligent machines as ones that can "prove unsolved mathematical theorems, write extremely good novels, write difficult codebases from scratch, etc". they may begin to produce novels, but they will never be artists until they yearn and rage, until a tension within them presses harder and harder on them, only resolved with an irruption of beauty they cannot explain. until a machine has desires and pains and hopes and regrets, it will never write a poem. even if it writes a poem it is not a poem, it's just words. but if we give our poetry over to the machines, we'll have given up our humanity. and that's what we should be concerned with the most.
posted by dis_integration at 3:12 PM on October 24 [6 favorites]
>If anybody tells you that they have a machine which engages in humanlike reasoning or possesses anything analogous to sapience,
https://imgur.com/4sate7s
posted by torokunai at 3:38 PM on October 24 [2 favorites]
I'm pretty sure that ChatGPT could make the very same points this essay does, in waaaay less than 14,000 words.
posted by storybored at 3:44 PM on October 24 [3 favorites]
Taps the sign:
Those of you making excuses for [LLM] use are doing real and grievous harm to the world and to people.
posted by ursus_comiter at 3:45 PM on October 24 [12 favorites]
as long as so many humans prioritize domination, exploitation & extermination using the fruits of their tech, as seems to be not at all decreasing despite our "enlightened ideas", the only thing left to hope for is that the robots will--either by their extravagant use of resources or by their application of machine stupidity to problems created by human stupidity--hurry up the inevitable fall of this sorry-ass excuse for a civilization. unfortunately, though, robots are just a sideshow. the earth itself has a better idea.
posted by graywyvern at 3:47 PM on October 24
https://imgur.com/4sate7s
You are exactly the kind of person I want this garbage banned, dismantled, bulk-erased, and destroyed to protect.
posted by Pope Guilty at 3:56 PM on October 24 [13 favorites]
A just machine to make big decisions
Programmed by fellows with compassion and vision
Came to post this.
posted by gauche at 3:57 PM on October 24
I've actually talked with Dario a couple times. Smart guy. But this intelligence seems to act as a trap for people with his mindset, basically the technotopians. It's forward, forward, forward, and if there's a problem, it's because we didn't go forward fast enough.
I was recently at Anthropic HQ and talking with one of their people, who mentioned he gave their AI a picture of a plant he'd bought and didn't know anything about, and got care instructions for it.
I mean, convenient in a way of course. But Claude (the AI) was just summarizing what some plant expert said, probably in an article he could have found in five or ten seconds. The effect of his choice to use the AI as a mediator for this knowledge is to rob the expert of the reason for providing their expertise: that they will receive some form of credit or compensation for it.
It's the same with Perplexity; I'm talking with its CEO next week, and he has a similar outlook: harvest the value others have created, reap investor money, then attempt to beat the rest of the world into a new shape that supports whatever business model they decide works for them. The last step, as always, is to say what you're doing is inevitable.
It's hard to argue with these folks because their perspectives are very different from mine, sometimes when we talk they look at me like I'm a weird caterpillar they found. But it's important to dispute things like these manifestos, and loudly, or it reinforces their belief that everyone really thinks like them.
posted by BlackLeotardFront at 4:16 PM on October 24 [19 favorites]
Mr. Know-it-some, what would qualify someone to be an “informed amateur” in your book, on a topic as serious and impactful as development economics or international public health?
Reading over Amodei’s literal plan to save the world using the technology that his company makes - a world in which only developing nations are described as corrupt, a citation is given to the Wikipedia page for “Four Asian Tigers” to justify a prediction that ???AI finance ministers could achieve multiple years of 10% GDP growth for… everybody???, and targeting animal vectors for human disease is “simpler” than other kinds of health campaign (killing all mosquitos on earth “requires one or a few centralized actions, rather than a coordinated campaign”!!!) - do you have a sense that he is informed, or that he genuinely cares about the people that his proposals would affect?
I, as someone who has been working in international development for a decade and a half, from rural service delivery to shaping global US government strategy, see a delusional egotist who has read a handful of articles and perhaps a book or two, was an early donor to GiveWell (what a good boy), and now mistakes his physics PhD and business success for the capacity to generate viable solutions to humanity’s most intractable problems.
This adult human being conceived and published the sentence “even cell phones quickly permeated sub-Saharan Africa via market mechanisms, without needing philanthropic efforts”. …huh? Does Amodei believe that people in Tanzania or South Sudan are incapable of recognizing the personal or business value of cell phones as a technology? Of running businesses to make or import them? Does he not know that philanthropy *is* deeply involved in extending coverage to areas that don’t aggregate sufficient demand to support market actors building and maintaining towers and power supply? What kind of person would call themselves an informed amateur and then say something so profoundly fucking stupid?
Being an informed amateur making widely-circulated public proposals about what would work in international development comes with a different moral weight than being an informed amateur baker proposing a new cookie recipe to friends or an informed amateur jazz guitarist with strong feelings about the influence of Charlie Christian in a subreddit about punk music. The field of international development has been led by “informed amateurs” since the field began: rich, powerful, white, Western, capitalist, mostly men, whose arrogance, ignorance, and willingness to play chess with other people’s real human lives have contributed to a great deal of human suffering around the world.
There is a great deal more of profound fucking stupidity in this self-serving fever dream of a blog post, and I am angry and swearing because this dangerous twit has not only power and money, but also the ear of at least enough decent people that his work shows up on a site like MetaFilter.
posted by rrrrrrrrrt at 4:18 PM on October 24 [33 favorites]
How many breathless FPPs will there be here, which are just the dribbled latest thoughts of an LLM huckster? These are embarrassing to witness.
posted by panhopticon at 4:49 PM on October 24 [24 favorites]
^^^ yep
posted by j_curiouser at 5:35 PM on October 24
–"Motive," the construct said. "Real motive problem, with an AI. Not human, see?"
–"Well, yeah, obviously."
–"Nope. I mean, it's not human. And you can't get a handle on it. Me, I'm not human either, but I respond like one. See?"
–"Wait a sec," Case said. "Are you sentient, or not?"
–"Well, it feels like I am, kid, but I'm really just a bunch of ROM. It's one of them, ah, philosophical questions, I guess ..." The ugly laughter sensation rattled down Case's spine. "But I ain't likely to write you no poem, if you follow me. Your AI, it just might. But it ain't no way human."
—William Gibson, Neuromancer, 1984
The tech path that has led to something akin to the Dixie Flatline will never lead us to Wintermute. There's a qualitative difference, one that no amount of self-stroking tech billionaire money can bridge. But if we ever do (I say "we" but of course I mean our oligarchy) create Wintermute, it won't have any Turing Police chasing it down, because the cops already work for the oligarchy.
posted by outgrown_hobnail at 5:52 PM on October 24 [5 favorites]
–"Well, yeah, obviously."
–"Nope. I mean, it's not human. And you can't get a handle on it. Me, I'm not human either, but I respond like one. See?"
–"Wait a sec," Case said. "Are you sentient, or not?"
–"Well, it feels like I am, kid, but I'm really just a bunch of ROM. It's one of them, ah, philosophical questions, I guess ..." The ugly laughter sensation rattled down Case's spine. "But I ain't likely to write you no poem, if you follow me. Your AI, it just might. But it ain't no way human."
—William Gibson, Neuromancer, 1983
The tech path that has led to something akin to the Dixie Flatline will never lead us to Wintermute. There's a qualitative difference, one that no amount of self-stroking tech billionaire money can bridge. But if we ever do (I say "we" but of course I mean our oligarchy) create Wintermute, it won't have any Turing Police chasing it down, because the cops already work for the oligarchy.
posted by outgrown_hobnail at 5:52 PM on October 24 [5 favorites]
What an exciting future, where executives never have to take responsibility for their actions, because those actions were suggested by a country of geniuses in a datacenter.
Weapons of Math Destruction by Cathy O'Neil (2016) described all of this perfectly, how digital tools allow rich mElon-headed aristocrats to continue dominating us mere peasants in the future.
(As much as I adore Iain M. Banks' thoughtful intelligent masters. Maybe if he was alive he might write a novel about how these clever Minds took over from humans and got us out of this mess. I am certain that an AI will write this novel, and it will be very shitty.)
posted by ovvl at 6:00 PM on October 24 [5 favorites]
A just machine to make big decisions
Programmed by fellows with compassion and vision
I mean, Lululemon will sell you a spandex jacket now for the low, low price of $118. At least that part of Donald Fagen’s future did come true.
posted by fedward at 6:10 PM on October 24 [1 favorite]
So, remember, take your time.
that was a truly magnificent illustration of how useless a i is
today i saw the results of an a i being told to make a picture of a wine glass filled to the brim - every time the program produced a wine glass that was only half full and yet it insisted that it was a "full" glass - even when told to make it 100%, the result was still half-empty
my guess is every commercial photographer in the world photographs wine glasses like that and the program thinks this is what's meant by full, because it's never exceeded, and it's never seen a full wine glass
therefore it is what the a i says it is in spite of "full" having a very specific and understandable meaning
but if i tell it to kaztenfumen the googlefritz it will know all about it
posted by pyramid termite at 6:59 PM on October 24 [5 favorites]
"Increasingly, there are serious arguments that LLM do not reason. "
Well, given that anyone who says that they *do* reason is an idiot, I'd hope so. I don't think they are useless; pattern-recognition and matching can do great things in the right places, like screening for cancer, but they don't *think*.
posted by tavella at 9:28 PM on October 24 [3 favorites]
LLMs are a one-trick pony: stochastic parrots that apply a noisy median filter to human knowledge and cough up language-like artifacts (or art-like visuals, for you on that side of the fence).
If it says something accurate about a photo, that's because it's scraped the data of a thousand humans who have commented on similar photos. They do not understand that a photo is awe-inspiring; they only know that, statistically, the words "awe-inspiring" are connected to that photo.
It's one trick, but it's a good one, tailor made to fool people. Here's a quote from Spielberg's AI, talking about the robot 'child' David that seems apropos:
Lord Johnson-Johnson: [Showing David off to the Flesh Fair crowd] Ladies and gentlemen! Boys and girls and children of all ages! What will they think of next? See here! A bitty-bot, a tinker-toy, a living doll. Of course, we all know why they made them. To steal your hearts, to replace your children. This is the latest iteration in a series of insults to human dignity, and in their grand scheme to phase out all of God's little children. Meet the next generation of child, designed to do just that. [crowd stares silently] Do not be fooled by the artistry of this creation. No doubt there was talent in the crafting of this simulator. Yet with the very first strike, you will see the big lie come apart before your very eyes! [as David pleads to be spared] Built like a boy to disarm us! See how they try to imitate our emotions now! Whatever performance this sim puts on, remember: we are only demolishing artificiality! Let he who is without sim cast the first stone. [crowd pelts him with rocks and garbage]
Lord Johnson-Johnson descends into the movie in a hot-air balloon that looks like the Amblin pictures logo, and is wearing an Indiana Jones fedora. I think it's safe to say this is Spielberg talking directly to the movie audience.
posted by chromecow at 10:05 PM on October 24 [3 favorites]
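A minimal sketch of the "statistically connected" point above: in CLIP-style systems, matching a caption to a photo reduces to comparing embedding vectors by angle, with no comprehension involved. The embeddings and captions below are invented for illustration.

```python
import math

# Toy illustration: a real system runs an image encoder and a text encoder to
# produce these vectors; the numbers here are made up.
image_embedding = [0.9, 0.1, 0.4]
captions = {
    "an awe-inspiring mountain vista": [0.85, 0.15, 0.35],
    "a receipt for office supplies":   [0.05, 0.90, 0.10],
}

def cosine_similarity(a, b):
    """Angle-based closeness of two vectors; pure geometry, no understanding."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# The caption whose vector lands closest to the image vector "wins" -- an
# association learned from scraped human captions, nothing more.
best = max(captions, key=lambda c: cosine_similarity(image_embedding, captions[c]))
print(best)  # -> "an awe-inspiring mountain vista"
```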
'Do you really want to live forever?' Sci-fi author Ted Chiang talks immortality - "One audience member asked about how Chiang viewed the role of a science-fiction author at a time when 'it feels like we are now living through some of the dystopian nightmares that your predecessors have predicted' and when 'major influential capitalist actors' are 'inspired by science fiction.'"
“I should say I’m flabbergasted to be here. I never guessed that anyone would ask me to opine on anything,” Chiang said to laughs from the audience. “We do live in this weird historical moment where a certain subset of technologists now have accumulated great wealth and great power, and they refer to science fiction as their inspiration. They are, I think, being very selective in both the science fiction that they are referring to and in what they are taking from that. There’s been plenty of science fiction that was intended in a cautionary mode, and the billionaires have ignored all of that.”
Chiang ended by answering a question from the audience about optimism and hopes for the future. He quoted Ursula K. Le Guin, who once said “We live in capitalism. Its power seems inescapable. So did the divine right of kings.”
"The end of kings happened even though no one predicted it. I am prone to pessimism as much as anyone, but what I try and remember is that in general, we have never been good at predicting the future," Chiang said. "And in the same way that there were people who fought against the idea of kings there are people who fight against the idea of capitalism, and we should keep fighting [and] we should remain optimistic about that possibility, even though there is no clear scenario, any more than there was a real clear scenario to the end of kings."
posted by kliuless at 10:40 PM on October 24 [9 favorites]
As an SFF author, I'm personally offended at walking Engineer's Diseases like this one constantly appropriating Banks's work to prop up their libertarian accelerationist bullshit.
posted by signal at 12:54 AM on October 25 [13 favorites]
One of the central themes of the Culture novels is the appalling price non-Culture people pay so the Culture's rather smug citizens can enjoy their paradise.
I mean, the best Culture novel is probably Use of Weapons where this is made horrifyingly explicit.
posted by thatwhichfalls at 3:15 AM on October 25 [2 favorites]
“Libertarianism. A simple-minded right-wing ideology ideally suited to those unable or unwilling to see past their own sociopathic self-regard.” ― Iain Banks, Transition
Iain Banks' work comes off deeply techno-optimist though, so accelerationists being inspired by the Culture feels natural, whatever caveats Banks placed there. After all, the Culture violates thermodynamics etc.
“Attack of the Hope Police” by Peter Watts feels relevant there (previously). Watts admits his own fiction is hopelessly techno-optimist, although not nearly so extreme as many others. Also, Watts has infinitely stronger hard science justifications, but nicely inoculates his readers against believing him by including overt fantasy elements: Vampires.
posted by jeffburdges at 4:20 AM on October 25 [2 favorites]
I don't think the Culture's meddling in other civilizations has anything at all to do with the prosperity/paradise of Culture citizens, FWIW. Their citizens would be just as blissed-out if the Culture didn't meddle. The Culture Minds meddle because they believe they're doing the residents of other more brutal civilizations a favor: it's ideological, not economics-based. That said, there clearly is a lot of collateral damage involved in Culture meddling, but none of it has any meaningful effect on the actual Culture citizens.
posted by outgrown_hobnail at 4:32 AM on October 25 [5 favorites]
>It's one trick, but it's a good one, tailor made to fool people.
Poets, priests, and politicians
Have words to thank for their positions
posted by torokunai at 5:19 AM on October 25 [1 favorite]
I assume you're using the weaker, free model?
Yes, but I don't feel like "pay us so you get fewer confident lies" is much of an endorsement either.
posted by mhoye at 6:04 AM on October 25 [7 favorites]
Watts admits his own fiction is hopelessly techno-optimist
what
posted by mhoye at 6:33 AM on October 25
One of the central themes of the Culture novels is the appalling price non-Culture people pay so the Culture's rather smug citizens can enjoy their paradise.
I mean, the best Culture novel is probably Use of Weapons where this is made horrifyingly explicit.
None of this is correct. The Culture's whole deal is that they're post-scarcity and perfectly capable of fucking off and leaving everybody alone to have their paradise, but won't because they're unshakeably morally certain that their way of life is better than everybody else's. The entire point of Contact is not "get other people's resources", which the Culture doesn't need, but "in a million ways both subtle and overt slowly bring everybody else in the universe around to our way of thinking and elevate their standard of living to ours".
posted by Pope Guilty at 7:30 AM on October 25 [5 favorites]
i feel like all these glowing predictions of what ai will do, at some point in time, are predicated on a certain logical fallacy that progress moves in a clean, straight line, and that because some things that were once thought impossible are now possible, literally anything we consider impossible will one day be possible as a matter of course
"people told the wright brothers their flying machine would never work too!" and "people said it was impossible to land on the moon too!" with an air of smug self-satisfaction that they have now conclusively proven we'll one day have software as smart as a nobel prize winner
almost always, in my experience, this is coupled with the belief that whoever will be in charge of these (literally) incredible technologies will operate them with the good of all humankind at heart; never solely for their own enrichment or to exact vengeance on perceived enemies, never cutting corners or failing to disclose safety issues; just all smart beautiful men with big brains and beautiful hearts who only want to make all our lives better
the fact that these dipshits and their sniveling touts have such a huge fanbase makes me despair that far too many of us are more capable of imagining benevolent tech god-kings leading us into paradise than are capable of imagining the practical, and very possible, work needed right now to improve the state of our planet
posted by Aya Hirano on the Astral Plane at 8:05 AM on October 25 [6 favorites]
"people told the wright brothers their flying machine would never work too!" and "people said it was impossible to land on the moon too!" with an air of smug self-satisfaction that they have now conclusively proven we'll one day have software as smart as a nobel prize winner
almost always, in my experience, this is coupled with the belief that whoever will be in charge of these (literally) incredible technologies will operate them with the goodness for all humankind at heart; never for solely their own enrichment or to exact vengeance on perceived enemies, never cutting corners or failing to disclose safety issues; just all smart beautiful men with big brains and beautiful hearts who only want to make all our lives better
the fact that these dipshits and their sniveling touts have such a huge fanbase makes me despair that far too many of us are more capable of imagining benevolent tech god-kings leading us into paradise than are capable of imagining the practical, and very possible, work needed right now to improve the state of our planet
posted by Aya Hirano on the Astral Plane at 8:05 AM on October 25 [6 favorites]
It's the first minute of the talk I linked, mhoye, but it seems cut off vs. what I remember, or maybe I'm mixing it up with other interviews. Almost all of the first minute discusses not having villains, but it starts like "[clipped intro] the fact that I portray human civilization as still being technologically stable eighty years from now is really kind of pushing it"
Vampires: Biology and Evolution by Peter Watts rocks btw; repeated creative jabs at what makes the US so stupid. lol
"A psychological testing revealed an increasing lack of effect and reduced responses to emotionally charged stimuli when showing photos of people mutilated during car accidents or homeland security interviews for example Donnie's skin conductivity and ECG were scarcely different when he was shown neutral pictures such as landscapes just delight."
posted by jeffburdges at 8:06 AM on October 25
None of this is correct.
Did we read the same books? The ones where Special Circumstances blackmail Culture citizens; employ ruthless assassins to do their dirty work; use off-the-books killer drones to tear sapients who've pissed them off inside out; smugly claim their interventions are mathematically provably a net good but, oh, by the way, we need a non-intervention baseline civilization as a control and that's where we happen to live.
You may have slightly misread these books. Expecting a morally unambiguous utopia from a Scottish socialist is a category mistake.
posted by thatwhichfalls at 12:45 PM on October 25 [2 favorites]
Wait, all this time I thought we knew Vampires were the PE ghouls, the Finance and Tech edgelords sucking humanity and the globe dry just to gain more and more and more power over the rest of us? Hmmm, gotta rethink this and understand that these fuckers aren't actual hyper-powerful super-beings, but rather highly defective POSs who got lucky with their apps and were able to use their sociopathic narcissism to fool the rest of us.
So, what to do with the system? Well, to quote the immortal Major 'Dutch' Schaefer from the all-time classic Predator: "If it bleeds, we can kill it".
And this is the first quote to give me any hope that things might be different: He quoted Ursula K. Le Guin, who once said “We live in capitalism. Its power seems inescapable. So did the divine right of kings.”
Thank you Ted Chiang....
posted by WatTylerJr at 1:39 PM on October 25
Did we read the same books? The ones where Special Circumstances blackmail Culture citizens; employ ruthless assassins to do their dirty work; use off-the-books killer drones to tear sapients who've pissed them off inside out; smugly claim their interventions are mathematically provably a net good but, oh, by the way, we need a non-intervention baseline civilization as a control and that's where we happen to live.
All these read to me as externalities that (some) in the Culture feel are justified in pursuit of their philosophical/political goals.
But none of them for me are part of an "appalling price non-Culture people pay so the Culture's rather smug citizens can enjoy their paradise" because the Culture's post-scarcity wealth does not come from colonial extraction or exploitation of the proletariat—unlike CEO Amodei's wealth. In the novels, the Peace Faction and the Zetetic Elench, for example, enjoy the same material wealth as the Culture without the Culture's sometimes brutal meddling.
posted by audi alteram partem at 4:07 PM on October 25 [6 favorites]
IMO, a big chunk of the point of the Culture is to be a warped-mirror version of the British Empire, which, remember, sought to justify various colonial efforts as rescuing benighted savages from suffering and damnation. We need to save them from themselves, and all that. Techbros, to this day, very frequently employ the same thought patterns when talking amongst themselves (only instead of blather about your eternal soul, they just figure they're smarter than you, so they need to save you from your own stupidity because if you were smart you'd already agree with them).
AI People: "we're smarter than you, so we're going to do this stuff to save you from being as stupid as you currently are. You will thank us later."
R. Kipling: "oh hey, I recognize that line!"
posted by aramaic at 4:25 PM on October 25
Programmed by fellows with compassion and vision
We'll be clean when their work is done
We'll be eternally free, yes
and eternally young
posted by torokunai at 1:59 PM on October 24 [13 favorites]