Silicon Valley Is Turning Into Its Own Worst Fear
June 21, 2024 9:05 AM

Ted Chiang on Insight, and the Corporate Lack Thereof "The fears of superintelligent AI are probably genuine on the part of the doomsayers. That doesn’t mean they reflect a real threat; what they reflect is the inability of technologists to conceive of moderation as a virtue."
posted by tclark (15 comments total) 28 users marked this as a favorite
 
It's good to see Chiang point out that Barlow was wrong and that governmental regulations do, in fact, have a place in tech. The shibboleth that regulators cannot understand tech is yet another manifestation of the tech industry's aversion to being held to account, and it needs to be pushed back on.
posted by NoxAeternum at 9:15 AM on June 21 [14 favorites]


(published 2017, but if anything even more relevant today)
posted by BungaDunga at 9:17 AM on June 21 [5 favorites]


Of course, there is a lack of imagination in the discourse about AI. If the discourse were truly imaginative, we'd be talking about how to use AI to replace CEOs. AIs are way less expensive than CEOs when it comes to keeping the wheels of commerce running & the savings can be given back to the workers. Who says that AI has to be controlled by the capitalists? It could just as easily be used against them.
posted by jonp72 at 9:34 AM on June 21 [11 favorites]


so there’s this meme going around about how it would be best if ai were used to do drudgery while humans make art rather than humans doing drudgery while ai makes art, which well the meme makes a point but when you get down to it every sensible person agrees that really what we should build is a machine god that we can worship as it changes the fundamental nature of reality itself around us.

i mean look i’d do it myself but the problem is that I’m awful at math, just awful. so come on nerds, get crackin’.
posted by bombastic lowercase pronouncements at 9:44 AM on June 21 [2 favorites]


> If the discourse were truly imaginative, we'd be talking about how to use AI to replace CEOs

i suppose we have differing definitions of “imaginative”
posted by bombastic lowercase pronouncements at 9:45 AM on June 21


you're not imaginating hard enough
posted by elkevelvet at 9:48 AM on June 21 [5 favorites]


A large language model, fed a global history of financial transactions and business acquisitions, could probably completely unspool the complex web of organized crime that is powering the rapid retreat of the human race back into feudalism.

Instead, it will be used to coddle/control the masses as their life essence is slowly leeched from their bones by megalomaniacal vampires.
posted by CynicalKnight at 9:56 AM on June 21 [8 favorites]


> we'd be talking about how to use AI to replace CEOs

How to use AIs to replace CEOs with workers who own their means of production and the value created by their labor
posted by straight at 9:59 AM on June 21 [13 favorites]


I skimmed the article in agreement, and I hope he's saying that if something were truly intelligent compared to humans, it wouldn't resort to emotionally infantile libertarian solutions.
posted by Brian B. at 10:08 AM on June 21 [2 favorites]


> How to use AIs to replace CEOs with workers who own their means of production and the value created by their labor

You use unions, then revolutionary cadres, then firing squads or forced labor reeducation camps to accomplish this, traditionally.

The historically proven lubricant for moving the rich along is pants-wetting terror, not computers.
posted by ryanshepard at 12:05 PM on June 21 [8 favorites]


> A large language model, fed a global history of financial transactions and business acquisitions, could probably completely unspool the complex web of organized crime that is powering the rapid retreat of the human race back into feudalism

I actually kinda doubt this - I'm under the possibly mistaken impression that crime, especially financial crime, usually involves a degree of planning out in advance how the various parties potentially impacted or called upon to investigate will behave, including their interactions with each other. Modeling intent, predicting complex multi-agent behaviors - these are things that LLMs do not do. Cannot do. Even near-future hybrid models hypothetically possessing limited abstract reasoning are likely to remain utterly incapable in this way.

Basically, if understanding what happened also requires understanding why it happened, and that why involves humans and particularly humans employing deception… an LLM is going to be about as useful as a GPU-shaped rock.

This is also why I don’t think AIs would make for very good CEOs - good for their workers might be doable, but they’d still be lambs for the slaughter when thrown into the hellpit of human-powered shareholder capitalism.

It was a good article, though, and I think Ted Chiang’s basic premise here is right. What I’m really hoping we’ll someday see is self-improving systems where the definition of self-improvement equally weights increases in raw ability with increases in ethical decision-making and demonstrations of empathy. I suspect we’ll fuck it up a few times before we get there, first.
posted by Ryvar at 12:53 PM on June 21 [4 favorites]


> The historically proven lubricant for moving the rich along is pants-wetting terror, not computers.

Sure, but if some of the rich develop a pants-wetting terror of computers, might as well put them on the brainstorming whiteboard along with guillotines.
posted by straight at 3:43 PM on June 21 [2 favorites]


Getting back to the points made by Chiang's article:

Consider: Who pursues their goals with monomaniacal focus, oblivious to the possibility of negative consequences?

Corporations. Also many humans (including a disturbingly high percentage of the humans with the most power to do so), and most other forms of life on this planet from bacteria to dandelions to cats.

I'm not sure why that makes him or should make us feel optimistic about how a powerful AI would act.

> one could envision it taking almost any form with equal justification: a benevolent genie that solves all the world's problems, or a mathematician that spends all its time proving theorems so abstract that humans can't even understand them. But when Silicon Valley tries to imagine superintelligence, what it comes up with is no-holds-barred capitalism.

Maybe because there are lots of things in our experience that act like no-holds-barred capitalism and few or none that act like aloof mathematicians or benevolent genies.
posted by straight at 4:00 PM on June 21


Put Ted in charge.
posted by Abehammerb Lincoln at 2:34 PM on June 23


This thought is not totally baked, but:

We should expect that the first "true" artificial intelligence will be utterly alien to us in ways that make it incredibly weird and difficult to reason about. There are two basic reasons for this: first, because AI will almost certainly have a functional model that is very different from the wet neuronal brains of every other intelligence on the planet (and some of those brains are already really weird), and second, because AI will not have gone through a survival-of-the-fittest evolutionary refinement in anything even remotely resembling the natural world. It may have some kind of evolutionary refinement in its development, but the "good enough to reproduce" function is going to be wildly different and artificially defined by the AI's developers.

Some of that alienness is going to be in the form of utterly inscrutable motivations: it's going to be very difficult to answer "why" an AI does something, at least using the kinds of mental shortcuts that humans use to model other humans.

Some of that alienness is going to be in the form of completely different perceptual and understanding models: it's likely that an AI just will not use the same methods of "chunking" the world that humans do. Current neural net image generators are illustrative, I think: they don't model scenes as physically-plausible models the same way that humans automatically do, and that difference leads to a lot of the weirdness in AI imagery.

(As a side note, I have no idea — and I think no one else does, either — whether current neural net techniques are even on the path towards human-level artificial intelligence, or just an interesting and maybe-useful side branch.)
posted by reventlov at 6:42 PM on June 25



