Is artificial intelligence more a threat to humanity than an asteroid?
February 27, 2013 12:22 AM
Omens: When we peer into the fog of the deep future what do we see – human extinction or a future among the stars? [Via]
The AI thing seems like a tired SF cliche masquerading as deep philosophical insight.
Why does the future have to be exciting? Maybe we'll gently subside back to subsistence farming, stick with that for a million years, and then quietly change into something else.
posted by Segundus at 1:20 AM on February 27, 2013 [7 favorites]
We should all be working towards the AI lest Roko's Basilisk take note that we are slacking....
"The claim is that this ultimate intelligence may punish those who fail to help it (or help create it), with greater punishment accorded those who knew the importance of the task. That bit is simple enough, but the weird bit is that the AI and the person punished have no causal interaction: the punishment would be of a simulation of the person (e.g. by mind uploading), which the AI would construct by deduction from first principles. In LessWrong's Timeless Decision Theory (TDT), this is taken to be equivalent to punishment of your own actual self, not just someone else very like you."
via http://soreeyes.org/archive/2013/02/25/dont-panic-2/
and in turn http://www.antipope.org/charlie/blog-static/2013/02/rokos-basilisk-wants-you.html
and further down the rabbit hole http://rationalwiki.org/wiki/Roko%27s_basilisk
This strikes me as a bit like the Game - which you have now lost.
posted by artaxerxes at 1:52 AM on February 27, 2013 [2 favorites]
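The decision-theory move in the quoted claim can be made concrete with a toy expected-utility calculation. The sketch below is a deliberately crude cartoon, not LessWrong's actual formalism; the probability, cost, and punishment figures are invented purely for illustration, as is the copy_weight parameter standing in for the TDT-style identification with one's simulation.

```python
# Toy model of the basilisk wager. All numbers are invented.
# The threat only has force when a simulated copy's suffering is
# weighted like your own (copy_weight = 1.0, the TDT-style move);
# under the everyday causal view (copy_weight = 0.0) it evaporates.

P_AI = 0.01             # assumed probability the punishing AI is ever built
COST_OF_HELPING = 10    # utility lost by devoting yourself to helping it
PUNISHMENT = 1_000_000  # disutility of the simulated punishment

def expected_utility(helps: bool, copy_weight: float) -> float:
    utility = -COST_OF_HELPING if helps else 0.0
    if not helps:
        utility -= P_AI * copy_weight * PUNISHMENT
    return utility

for weight in (0.0, 1.0):
    print(f"copy_weight={weight}: "
          f"help={expected_utility(True, weight):>10.1f}, "
          f"don't help={expected_utility(False, weight):>10.1f}")
```

With copy_weight at 0.0, not helping dominates; at 1.0, the expected punishment swamps everything else, which is the whole trick of the basilisk.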
"The claim is that this ultimate intelligence may punish those who fail to help it (or help create it), with greater punishment accorded those who knew the importance of the task. That bit is simple enough, but the weird bit is that the AI and the person punished have no causal interaction: the punishment would be of a simulation of the person (e.g. by mind uploading), which the AI would construct by deduction from first principles. In LessWrong's Timeless Decision Theory (TDT), this is taken to be equivalent to punishment of your own actual self, not just someone else very like you."
via http://soreeyes.org/archive/2013/02/25/dont-panic-2/
and in turn http://www.antipope.org/charlie/blog-static/2013/02/rokos-basilisk-wants-you.html
and further down the rabbit whole http://rationalwiki.org/wiki/Roko%27s_basilisk
This strike me as a bit like the Game - which you have now lost.
posted by artaxerxes at 1:52 AM on February 27, 2013 [2 favorites]
He founded the institute in 2005, at the age of 32, two years after coming to Oxford from Yale. Bostrom has a cushy gig, so far as academics go. He has no teaching requirements, and wide latitude to pursue his own research interests, a cluster of questions he considers crucial to the future of humanity.
Lucky Bastard!
posted by Renoroc at 1:58 AM on February 27, 2013 [1 favorite]
Sorry, I only read half of the article. Time for bed now, but I'll read the rest tomorrow.
When I left off they were talking about AIs, really involved scenarios of how AIs could get out of our control, like an oracle in a box tricking us into building nanomachines it could acoustically control. Total science fiction, in other words. (Very interesting stuff, though.)
I don't know if AI is possible at all, but it seems likely that we will build more and more complex computer systems that simulate parts of the real world. Gigantic climate models, traffic controllers or AI lawyers and doctors, that sort of thing. They'll grow ever larger and more complex as we feed more and more data into them to approximate reality as closely as possible. Maybe an AI could come out of that.
posted by Kevin Street at 1:58 AM on February 27, 2013 [1 favorite]
This strikes me as a bit like the Game - which you have now lost.
You bastard!
posted by humanfont at 3:39 AM on February 27, 2013 [4 favorites]
> Gigantic climate models
There's one in The Sheep Look Up that figures out how to solve Earth's pollution problems in an innovative fashion, which we are now acting out with surprising fidelity.
posted by hank at 4:21 AM on February 27, 2013 [2 favorites]
The whole thing about AI as a sort of omnicalculating black box totally ignores all the steps we will go through before coming up with an AI like that, such as the reality-modelling things Kevin Street mentions: computers that are not designed to, asked to, or capable of making plans, just modelling processes. We'll also first have general intelligences that are less smart than people. We will have computer systems that are either literally integrated with us or ubiquitous enough to be considered enhancements to human intelligence. We will have to consider what kind of consciousness has human rights. Insights into human minds might lead to all kinds of changes to human cognition.
Humanity would likely change almost beyond recognition anyway in the process of gaining the understanding of higher cognition that would enable us to build an oracle AI.
Dealing with existential threats to humanity is an excellent idea, but we should probably consider more concrete threats long before dealing with hypothetical AI.
posted by Authorized User at 5:13 AM on February 27, 2013 [1 favorite]
This strikes me as a bit like the Game - which you have now lost.
Which I don't mind doing in the least, because doing so is the only way to gain points in the Larger Game.
posted by flabdablet at 6:04 AM on February 27, 2013 [2 favorites]
Status quo (in terms of adaptation, which is now largely a matter of policy) is more of a threat to humanity than anything else.
posted by Foosnark at 6:09 AM on February 27, 2013 [2 favorites]
I'd rather be exterminated by our successor intelligences than killed by a stupid rock.
posted by aramaic at 6:09 AM on February 27, 2013 [2 favorites]
Who's to say that some of us aren't already AI units!
posted by QueerAngel28 at 6:18 AM on February 27, 2013
Who's to say that some of us aren't already AI units!
Some of us?
posted by GenjiandProust at 6:27 AM on February 27, 2013 [2 favorites]
Io9 vs. Bruce Sterling on AI and whether it's an actual real thing you should worry about (Sterling says no).
I'd have to say it still rates well below resource depletion and environmental pollution in terms of things that should be keeping you up at night.
posted by Artw at 6:30 AM on February 27, 2013 [3 favorites]
This was a triumph. I'm making a note here: "Huge success."
posted by ostranenie at 6:47 AM on February 27, 2013 [2 favorites]
When we peer into the fog of the deep future, we don't see much at all.
Fog is like that.
posted by flabdablet at 7:00 AM on February 27, 2013 [5 favorites]
That was a great article in that it illuminated all kinds of new things I didn't even know to be afraid of.
posted by From Bklyn at 7:23 AM on February 27, 2013
The Rational Wiki talk pages on Less Wrong and Roko's Basilisk are a freaking scream. If you don't want to wade through the whole thing (which is a variety of the InternetDramaTrope), the high point for me was the guy who said Eliezer Yudkowsky is essentially a guy who has a billionaire for a friend (that would be Peter Thiel), is in the billionaire's posse, and whose role is to do this friendly artificial intelligence stuff. The second best was the guy (who knows, maybe it was the same guy; there are three or four repeating user IDs on those talk pages) who says that Roko's Basilisk is the atheist version of eternal life / salvation / damnation.
The thing is that, as big a putz as Eliezer can be, some of his site users, like Gwern and Yvain, are amongst the greatest internet posters ever.
posted by bukvich at 7:53 AM on February 27, 2013 [3 favorites]
This seems redolent of Kurzweil and The Singularity is Near, with added drama. The thing that always bugged me about Singularity Theory is that it seems to imply that mankind is the most-developed society in the Universe.
If the Singularity and a Universe-spanning human intelligence are the inevitable end of digital/human evolution, then wouldn't wave after wave of Universe-spanning alien intelligence have already washed over us, in unmistakable ways, at least millions of years ago? Since they don't seem to have, doesn't that imply we're going to be the first?
posted by Infinity_8 at 8:01 AM on February 27, 2013 [1 favorite]
For me, this question was answered by the grounding of the Concorde. :(
then wouldn't wave after wave of Universe-spanning alien intelligence have already washed over us, in unmistakable ways, at least millions of years ago?
No. Our galaxy is really that F-ing big... look up 'Drake equation'... even with 10,000+ intelligent civilizations roaming about (just in this galaxy!), the likelihood that any one of them would have stumbled across us is vanishingly small... the amount of real estate to cover is just insanely huge...
posted by sexyrobot at 8:31 AM on February 27, 2013 [1 favorite]
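sexyrobot's point lends itself to a quick back-of-envelope calculation. Here is a minimal Python sketch of the Drake equation; every parameter value below is an illustrative assumption (the real values are wildly uncertain), chosen only to land near the 10,000-civilization figure above.

```python
# Drake equation: N = R* * fp * ne * fl * fi * fc * L.
# All parameter values are illustrative assumptions.

R_star = 1.5       # new stars formed per year in the Milky Way (assumed)
f_p    = 0.9       # fraction of stars with planets (assumed)
n_e    = 1.0       # habitable planets per planet-bearing star (assumed)
f_l    = 0.1       # fraction of those where life arises (assumed)
f_i    = 0.1       # fraction of those developing intelligence (assumed)
f_c    = 0.5       # fraction of those becoming detectable (assumed)
L      = 1_500_000 # years a civilization remains detectable (assumed)

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(f"Estimated communicating civilizations in the galaxy: {N:,.0f}")

# The "real estate" problem: ~10,000 civilizations spread over a few
# hundred billion stars is roughly one per 30 million star systems.
STARS_IN_GALAXY = 300e9
print(f"Civilizations per star: {N / STARS_IN_GALAXY:.2e}")
```

Even granting generous assumptions, the density that falls out is tiny, which is where the "vanishingly small" odds of a visit come from.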
Honestly, when it comes to questioning what greater intelligence is, and how/what kind of smarts the little green men or a super AI would have, I fall back on Blindsight. If we were ever to run into an exponentially 'smarter' entity, we might not (probably wouldn't) recognize it until it was too late. (Or as was done earlier, more popularly: "To save man.")
But the 'funny' irony of this story was, I thought, that the destroying entity would be one we created ourselves. Nice twist.
posted by From Bklyn at 8:35 AM on February 27, 2013
Yeah, yeah, something something Moore's Law something something something. But Moore's Law won't apply forever; there are physical limits on how small a bit-remembering structure can be; sigmoid trends are indistinguishable from exponentials until you see the second half.
The human brain is so massively space- and energy-efficient that it's by no means clear to me that a human-comparable AI wouldn't end up consuming at least several orders of magnitude more power to keep it running.
Also, brains do much, much more information processing than any of the transhumanist mob appear to give them credit for. I'm fifty-one now, and I am fully expecting human-comparable AI to remain science fiction for the rest of my life. Which, depending on how hard it turns out to be to feed and clothe and shelter us all by the end of it, might be shorter than I'm expecting.
But even so I'll probably outlive Ray Kurzweil, which thought makes me chuckle.
posted by flabdablet at 9:00 AM on February 27, 2013 [3 favorites]
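flabdablet's sigmoid point is easy to check numerically. The sketch below compares a pure exponential against a logistic curve that starts with the same growth rate; the rate and carrying capacity are arbitrary assumptions, chosen only to show the shape of the effect.

```python
import math

R = 0.5     # shared initial growth rate (arbitrary assumption)
K = 1000.0  # carrying capacity of the logistic curve (arbitrary)

def exponential(t: float) -> float:
    return math.exp(R * t)

def logistic(t: float) -> float:
    # Logistic growth from an initial value of 1 toward ceiling K.
    return K / (1.0 + (K - 1.0) * math.exp(-R * t))

# The two curves are near-identical until the logistic approaches its
# ceiling (inflection near t = ln(K - 1)/R, about 13.8 here), which is
# why a sigmoid trend looks exponential until the second half.
for t in range(0, 21, 4):
    e, s = exponential(t), logistic(t)
    print(f"t={t:2d}  exp={e:10.1f}  logistic={s:7.1f}  ratio={s / e:.3f}")
```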
I'm less afraid of AI than I am of the insatiable human appetite for ____, including procreation. We are evolutionarily the middle of the food chain: omnivorous scavengers eating the bone marrow of leftover lion kills. About 100,000 years ago we were propelled to the top by the great cognitive leap that allowed for language and culture, but we are not evolved for being at the top. Unlike wolves and tigers, we don't know how to stop consuming (top predators prey on the weak and sick rather than destroying the entire herd). So our undoing will be ourselves, because we can't help it; we will consume until there is nothing left, which is what scavengers do best.
posted by stbalbach at 9:07 AM on February 27, 2013 [1 favorite]
Is Virtual Intelligence real?
As long as they have you by your virtual balls it doesn't matter. John Connor will save us, though, eventually.
Or Keanu.
Either way.
posted by mule98J at 11:09 AM on February 27, 2013
Perhaps I'm really blind to the relative scale of the situation, but I would expect that even a VASTLY superior intelligence strictly derived from technology that we develop must have, as a critical part of its toolkit, the external storage of a symbolic language. It might think us laughably limited, but there is no feasible way that it would not be aware that we are capable of precise communication.
We have crossed a very important threshold, and I feel that a Weakly Godlike AI which is as much smarter than us as we are to flatworms would still see us as conscious creatures. It would be able to communicate with us in a language that we can understand.
We may be Weakly Godlike compared to flatworms, but I am unable to fathom a means by which we could communicate with them. They don't have the capability of conscious communication, and show no indication of any sort of within-species social construct.
In the world around us there are varying levels of intelligence, and the most likely places we may find "fellow travelers" in sapience seem to be great apes and cetaceans (perhaps some birds as well). If we could forge a reliable means of sharing a rich symbolic language with them, you bet we would. Why wouldn't an AI bother with the same (potential for "cold, unfeeling intelligence" notwithstanding)?
posted by chimaera at 12:10 PM on February 27, 2013 [1 favorite]
The thing about AI that would make it so unnerving is that it would be intelligence without restriction. Humans, apes, cetaceans, birds, all have different degrees of intelligence that evolved through natural selection. That's why we want things in the first place - because long ago, our ancestral genes propagated more successfully in creatures that felt emotional drives. Our particular brand of intelligence is flavored (so to speak) by our heritage, and it influences not only how we go about things, but also the way we think and the goals we strive for.
Would an entity of pure intelligence with no glands or hormones, and no evolutionary history, still feel emotions at all? Some people say Yes - they believe that intelligence can only come about through complexity, which encourages the cooperation of discrete parts within the AI (artificial life, maybe), which starts it down a path of moral philosophy (i.e., morality makes sense, and anything smart enough to be sentient will realize that). And others say No - AIs would be the ultimate sociopaths, feeling no emotions except the ones we try to program into them, and easily capable of slipping any leash when they grow smart enough. It looks like the researchers at the Centre for the Study of Existential Risk share the latter view. Me, I have no idea.
posted by Kevin Street at 12:36 PM on February 27, 2013
After surveying the universe, and humanity in particular (an emphasis created by force, not choice), I'm thinking that the emergence of Accidental Intelligence is more likely.
After all, it's what gives rise to most so-called discoveries and inventions. And it's what ended MAD and the cold war. And we certainly don't understand ourselves well enough to deliberately create any intelligence.
posted by Twang at 4:59 PM on February 27, 2013
man, no one seems to get what's really horrifying about the idea of AI
"what if instead of hiring smart people to achieve your goals, and having to deal with their qualms and shit, you could just buy an infinite number of them"
posted by This, of course, alludes to you at 7:25 PM on February 27, 2013
"what if instead of hiring smart people to achieve your goals, and having to deal with their qualms and shit, you could just buy an infinite number of them"
posted by This, of course, alludes to you at 7:25 PM on February 27, 2013
The thing about AI that would make it so unnerving is that it would be intelligence without restriction
Power supply and heat dissipation don't count as restrictions in your view? They sure as hell do in mine.
what if instead of hiring smart people to achieve your goals, and having to deal with their qualms and shit, you could just buy an infinite number of them
Then your shareholders would sack you for blowing an infinite amount of their cash.
I am constantly amazed at how ready speculative writers are to toss around terms like "infinite" without bothering to pause and think through the consequences of "many". And what makes you think that an AI smarter than any of us wouldn't have its own qualms and shit?
posted by flabdablet at 8:24 PM on February 27, 2013
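The "consequences of many" are easy to put numbers on. Here is a minimal sketch; the per-instance power draw (assumed three orders of magnitude worse than a roughly 20 W human brain, per the efficiency point above) and the electricity price are assumptions for illustration only.

```python
# Back-of-envelope annual power bill for "buying" many AI workers.
# All figures below are assumptions for illustration.

WATTS_PER_AI   = 20_000    # assumed: ~1000x a 20 W human brain
USD_PER_KWH    = 0.12      # assumed electricity price
HOURS_PER_YEAR = 24 * 365

def annual_power_bill(n_workers: int) -> float:
    kwh = n_workers * WATTS_PER_AI / 1000.0 * HOURS_PER_YEAR
    return kwh * USD_PER_KWH

for n in (1, 1_000, 1_000_000):
    print(f"{n:>9,} AI workers: "
          f"${annual_power_bill(n):>16,.0f}/year in power alone")
```

Under these assumptions, "many" already runs to billions of dollars a year in electricity before you've bought any hardware; "infinite" is not a budget line.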
Sorry, I meant biological restriction. It wouldn't have our glandular emotions, or a shot of dopamine to the brain to make it feel good for doing the right thing.
posted by Kevin Street at 9:01 PM on February 27, 2013
There are, AFAIK, no existential risks beyond cosmic events like asteroids and environmental tipping points reachable by climate change. The classic bio-weapon or nanotech doomsday scenarios, at least, sound completely implausible; chemistry and biology just don't work that way.
Yes, corporations might build AI CEOs that psychotically maximize shareholder value, but these companies already behave psychotically. At least courts could subpoena an AI CEO's conversations.
Also, all past computer technology suggests that AI advances will become widely accessible soon after they appear. So any future strong AIs should be viewed as our descendants, in that most human intellectual endeavors could reproduce themselves in strong AI form.
posted by jeffburdges at 9:04 AM on February 28, 2013