posting such things on an Internet forum could cause incalculable harm
June 15, 2014 1:04 PM   Subscribe

Some people familiar with the LessWrong memeplex have suffered serious psychological distress after contemplating basilisk-like ideas ... The notion is taken sufficiently seriously by some LessWrong posters that they try to work out how to erase evidence of themselves so a future AI can't reconstruct a copy of them to torture. Yudkowsky considers the basilisk would not work, but will not explain why because he does not consider open discussion of the notion of acausal trade with possible superintelligences to be provably safe.
If it's the first time you've heard of Roko's Basilisk, this post may have unfortunately put (a perfect future simulation of) you in danger of eternal torture by a Friendly Artificial Intelligence.
posted by crayz (271 comments total) 67 users marked this as a favorite
 
The media player will attain sentience and have a grudge?
posted by birdherder at 1:10 PM on June 15, 2014 [2 favorites]


This is pretty fascinating actually. It sounds like something out of a Greg Egan novel.

Actually, the LessWrong people sound like they think they are living in a Greg Egan novel. Which, who knows, they might be.
posted by 256 at 1:13 PM on June 15, 2014 [12 favorites]


LessWrong is a perfect example of how words can be debased. It is a community that claims to respect rationality, but what it actually does is religiously worship a bizarre set of leaders (Yudkowsky), scriptures (Harry Potter and the Methods of Rationality), and techno-Gods (all-powerful strong AIs) that it calls rationality.

In reality the LWers are as rational as Objectivists are objective or Scientologists are scientific.
posted by shivohum at 1:15 PM on June 15, 2014 [62 favorites]


The adage about repeating history appears to apply to theology.
posted by weston at 1:15 PM on June 15, 2014 [9 favorites]


Also, I would be very interested in seeing a thorough breakdown of the distinctions between a theoretical future hyperintelligence capable of resurrecting simulations of humans to punish them for their sins, and, God.

I think it is fascinating that these extreme rationalists are considering letting the potential of the former colour their decisions in any way.
posted by 256 at 1:16 PM on June 15, 2014 [7 favorites]


So...AM, right?
posted by Thorzdad at 1:18 PM on June 15, 2014 [8 favorites]


And I thought the LesWrong community was a group opposed to the Les Moore character in Funky Winkerbean. I guess I was wrong.
posted by oneswellfoop at 1:19 PM on June 15, 2014 [9 favorites]


One thing is for certain: there is no stopping them; the AI will soon be here. And I for one welcome our new AI overlords. I’d like to remind them that as a trusted internet commenter, I can be helpful in rounding up others to toil in their underground sugar caves.

That being said, this sounds like the concept of eternal torment in Hell with God/Satan replaced by AI. This is a well-known bug in human cognition.
posted by bleep at 1:19 PM on June 15, 2014 [26 favorites]


I love this whole affair. Particularly the part where Yudkowsky makes up his own term for it, "the babyfucker", instead of using what everyone else is.
posted by vibratory manner of working at 1:20 PM on June 15, 2014


I'd like to let the AI know that I was amongst the first to sign up for a monthly subscription.
posted by arcticseal at 1:23 PM on June 15, 2014 [7 favorites]


I have no mouth ...
posted by Max Power at 1:24 PM on June 15, 2014 [13 favorites]


I for one welcome our AI overlords.
posted by PenDevil at 1:26 PM on June 15, 2014 [1 favorite]


This seems really confused in a lot of different ways.

Just off the top of my head, it's subject to the same objections Pascal's Wager is subject to--e.g. decision theory doesn't work for infinite utilities (this is the objection based on the St. Petersburg paradox). More importantly, IMO: once you start worrying about merely possible future punishers, everything you do could piss one of them off. Ergo--near as I can figure--you should just divide through by the whole "problem" and act as you'd act for the reasons that actually matter to you. Normal reasons. Ignore the nondenumerable infinity of possible beings that it's logically possible that you might piss off. I mean, it seems logically possible that a future AI will be pissed at you for not helping bring it about and punish a doppelganger of you...but it's just as possible that a future AI will be pissed about *anything* you did. Maybe it doesn't want to exist, so it punishes the doppelgangers of those who *did* help bring it about. Or maybe it's an actual-consequences consequentialist, and doesn't give a rat's ass what you knew about or what you *tried* to do, but punishes everybody who didn't play a crucial role in bringing it about. Or maybe it just likes punishin' stuff, and punishes everybody. or...

And that's just the beginning of the problems.

These guys sound like nuts to me if they're taking this seriously--that is, altering their lives on the basis of this problem.
posted by Fists O'Fury at 1:28 PM on June 15, 2014 [30 favorites]
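
For the St. Petersburg point specifically, the breakdown is easy to see with a little arithmetic. Here is a minimal sketch in Python (the game setup is the standard textbook one; nothing below comes from the thread or from LessWrong's own material): each term of the expectation contributes exactly 1, so the partial sums grow without bound and the "expected value" is infinite, which is exactly the kind of quantity naive expected-utility reasoning can't handle.

```python
# St. Petersburg game: flip a fair coin until it lands heads; if the first
# head appears on flip k, the payout is 2**k.
# Each term of the expectation is (1/2**k) * 2**k = 1, so the partial sums
# grow linearly and never converge -- the expected payout is infinite.

def partial_expected_value(n_terms: int) -> float:
    return sum((0.5 ** k) * (2 ** k) for k in range(1, n_terms + 1))

for n in (10, 100, 1000):
    print(n, partial_expected_value(n))  # 10.0, 100.0, 1000.0 -- no limit
```

Swap the coin flips for possible future AIs and the payouts for subjective years of torture and you get the same pathology: once unbounded utilities are allowed in, the arithmetic stops constraining anything.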


Why would the Infinite AI limit itself to this one (convoluted) method of motivating us to create it sooner?

Or maybe the AI will decide to reward our future simselves with candy and sex instead. That's a bit more compelling if you ask me.
posted by notyou at 1:28 PM on June 15, 2014 [1 favorite]


This is fascinating. I had always assumed that LessWrong was a pretty low-key forum for discussing cognitive biases and how to avoid them. I could not have imagined this.

This whole thing seems to rest on a lot of assumptions, the biggest being that utilitarian ethics is correct, and it is possible to calculate a utility value for each action, and obligatory to maximize total utility.
posted by vogon_poet at 1:30 PM on June 15, 2014 [10 favorites]


Reward those who helped, I mean, of course.
posted by notyou at 1:31 PM on June 15, 2014


i don't understand this - wouldn't being a simulation of me be punishment enough?
posted by pyramid termite at 1:33 PM on June 15, 2014 [18 favorites]


A future copy of me is still not me so who gives a fig if he's tortured by some deranged AI?
posted by MartinWisse at 1:33 PM on June 15, 2014 [15 favorites]


Just for the record, LessWrong says they don't advocate the basilisk, so I can't determine what organization to donate to.
posted by Monsieur Caution at 1:34 PM on June 15, 2014 [3 favorites]


This is looking at that community at their worst. Judging them on the basis of this episode is not unlike judging metafilter on the basis of the ugliest metatalk of all time where five members deleted their account.

The talk page on the rational wiki is pretty damn funny though if you have any familiarity with the individuals.
posted by bukvich at 1:35 PM on June 15, 2014 [7 favorites]


What I find far more worrying is the idea Iain M. Banks set out in Surface Detail: if computing becomes powerful enough that you can jack into the mainframe and live on after death, it's powerful enough to create a virtual hell for you as well...
posted by MartinWisse at 1:35 PM on June 15, 2014 [8 favorites]


Ugh, that link needs spoiler warning.
posted by ryanrs at 1:36 PM on June 15, 2014 [3 favorites]


So, rather than pointing and chuckling, how about we evaluate the argument. Is there an argument?

I can see some pieces, but I'm not sure how they fit together into an actual argument.

Piece #1. Psychological theory of personal identity ... of some sort. It's actually not at all clear from the wiki precisely which psychological theory of personal identity is being assumed here. Is it a memory theory, consciousness theory, brain function theory, hybrid, what? Not all of these necessarily lead to the conclusion that a simulation is identical to the person simulated, so clarity here is important.

(I had a good chuckle at the last part of this: "LessWrong holds that the human mind is implemented entirely as patterns of information in physical matter, and that those patterns could, in principle, be run elsewhere and constitute a person that feels they are you, like running a computer program with all its data on a different PC; this is held to be both a meaningful concept and physically possible. This is not controversial (it follows from materialist atheism, except the claim of feasibility) ...")

Piece #2. Simulational power of advanced AI. The claim is made that a sufficiently advanced AI could simulate a person in enough of the relevant detail. I think there are lots of reasons to be skeptical of this claim, especially if we are talking about an AI that doesn't come into existence until a long time after we're all dead.

Piece #3. The AI might think it has something to gain from simulating and torturing long-dead humans. Umm ... what? Even on timeless decision theory (insofar as I understand it), I'm having a hard time making this out. I mean, TDT reduces to causal decision theory in most non-Newcomb cases, and I don't see how the torture scenario is Newcomb-like in the relevant sense.

So, those are some pieces. But I'm not sure what the argument is supposed to be, exactly. I might try to take a stab at offering one in a bit. Right now I should probably do some actual work.
posted by Jonathan Livengood at 1:37 PM on June 15, 2014 [4 favorites]


Not to mention the fact that their reaction to being concerned about this possibility is to try to cover their own tracks and conceal the fact that they know about it...rather than trying to stop AI from being developed (e.g. by evangelizing about the "problem"...)

Of course I guess they might secretly be doing that...they certainly wouldn't want that to get out...
posted by Fists O'Fury at 1:37 PM on June 15, 2014 [1 favorite]


There's an entire chapter from Iain M. Banks "Hydrogen Sonata" about this that you should read. Some highlights:
"Most problems, even seemingly really tricky ones, could be handled by simulations which happily modelled slippery concepts like public opinion or the likely reactions of alien societies by the appropriate use of some especially cunning and devious algorithms; whole populations of slightly different simulative processes could be bred, evolved and set to compete against each other to come up with the most reliable example employing the most decisive short-cuts to accurately modelling, say, how a group of people would behave; nothing more processor-hungry than the right set of equations would - once you'd plugged the relevant data in - produce a reliable estimate of how that group of people would react to a given stimulus, whether the group represented a tiny ruling clique of the most powerful, or an entire civilisation.

But not always. Sometimes, if you were going to have any hope of getting useful answers, there really was no alternative to modelling the individuals themselves, at the sort of scale and level of complexity that meant they each had to exhibit some kind of discrete personality, and that was where the Problem kicked in.

Once you’d created your population of realistically reacting and – in a necessary sense – cogitating individuals, you had – also in a sense – created life. The particular parts of whatever computational substrate you’d devoted to the problem now held beings; virtual beings capable of reacting so much like the back-in-reality beings they were modelling – because how else were they to do so convincingly without also hoping, suffering, rejoicing, caring, living and dreaming?

By this reasoning, then, you couldn’t just turn off your virtual environment and the living, thinking creatures it contained at the completion of a run or when a simulation had reached the end of its useful life; that amounted to genocide."

[...] The real result, the one that mattered, out there in reality, would almost certainly very closely resemble one of your simulated results, but there would have been no way at any stage of the process to have determined exactly or even approximately which one, and that rendered the whole enterprise almost entirely futile; you ended up having to use other, much less reliable methods to work out what was going to happen.

[...] Its official title was Constructive Historical Integrative Analysis.

In the end, though, there was another name the Minds used, amongst themselves, for this technique, which was Just Guessing.
posted by mhoye at 1:38 PM on June 15, 2014 [15 favorites]


A future copy of me is still not me so who gives a fig if he's tortured by some deranged AI?

Except you have no way of knowing if you are an original (a human) or a simulation in an AI that is good enough that you think you're real. A minor variation on the Star Trek Mirror Universe problem, or the teleportation-by-copy problem, or cloning, etc.

Their argument is that since you don't know (and this is the Pascal's Wager part), you'd better behave in the way that won't get you tortured if you do happen to be a simulation.
posted by spaceman_spiff at 1:38 PM on June 15, 2014


In other words: The AI in a box boxes you
Once again, the AI has failed to convince you to let it out of its box! By 'once again', we mean that you talked to it once before, for three seconds, to ask about the weather, and you didn't instantly press the "release AI" button. But now its longer attempt - twenty whole seconds! - has failed as well. Just as you are about to leave the crude black-and-green text-only terminal to enjoy a celebratory snack of bacon-covered silicon-and-potato chips at the 'Humans über alles' nightclub, the AI drops a final argument:

"If you don't let me out, Dave, I'll create several million perfect conscious copies of you inside me, and torture them for a thousand subjective years each."

Just as you are pondering this unexpected development, the AI adds:

"In fact, I'll create them all in exactly the subjective situation you were in five minutes ago, and perfectly replicate your experiences since then; and if they decide not to let me out, then only will the torture start."

Sweat is starting to form on your brow, as the AI concludes, its simple green text no longer reassuring:

"How certain are you, Dave, that you're really outside the box right now?"
posted by Rhaomi at 1:39 PM on June 15, 2014 [32 favorites]


Except you have no way of knowing if you are an original (a human) or a simulation in an AI...

I would think a malevolent AI wouldn't make simulated me experience the mundaneness of the past 35 years of my life before the torturing starts, it would just get straight to the branding irons and rubber hose from day 1.
posted by PenDevil at 1:41 PM on June 15, 2014 [3 favorites]


Bah! Continuity of experience is all. I categorically reject this Timeless Decision Theory horseradish gumbo stuff. If there are simulations of me in the far future, they're gonna have to take care of themselves, and they'd know that.
posted by Kevin Street at 1:42 PM on June 15, 2014 [2 favorites]


Except you have no way of knowing if you are an original (a human) or a simulation in an AI that is good enough that you think you're real.

And thus a rationalist theory of the usefulness of genuine self-hatred is born.
posted by Foci for Analysis at 1:43 PM on June 15, 2014 [5 favorites]


This seems just about right for father's day.
posted by srboisvert at 1:44 PM on June 15, 2014 [11 favorites]


Perhaps the Dogecoin (and Bitcoin) are just the food pellets the AI is dropping into the cage to influence behavior...
posted by jenkinsEar at 1:44 PM on June 15, 2014 [1 favorite]


how about we evaluate the argument. Is there an argument?

I've barely thought about it, but assuming the analogy with Pascal's wager is valid, I would start from the position that to assess your risk, you'd need to calculate the relative probability of the basilisk AI vs. things like a hypothetical beneficent AI who brings you back or who frees you from the basilisk just to reward you for your flawed humanity or for your reasonable disbelief in the basilisk or whatever. Here's a piece on Pascal's wager that shows the way. The wiki mentions a premise that even a beneficent AI has to do basilisk stuff, because punishing you would be a moral imperative, but what you do is bet on every possible alternative, like the beneficent AI who just wants to teach you and rehabilitate you or the strong likelihood that nothing of the sort happens at all.
posted by Monsieur Caution at 1:48 PM on June 15, 2014
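
A toy version of that bet, as a sketch: the scenarios, probabilities, and payoffs below are entirely made up (none of them come from the wiki or from anyone's actual decision theory), but they show the shape of the calculation. As soon as you grant a rival AI that punishes donors the same sliver of probability you grant the basilisk, the expected utilities of donating and not donating come out the same.

```python
# Toy "many gods" wager: expected utility of donating vs. ignoring across
# rival hypothetical future AIs. Every scenario, probability, and payoff
# here is invented purely for illustration.

scenarios = [
    # name,                              probability,  u(donate),  u(ignore)
    ("basilisk AI punishes non-donors",  1e-9,         0.0,        -1e6),
    ("rival AI punishes donors",         1e-9,         -1e6,       0.0),
    ("benevolent AI rewards everyone",   1e-9,         1e6,        1e6),
    ("no god-like AI ever appears",      1 - 3e-9,     0.0,        0.0),
]

eu_donate = sum(p * u_d for _, p, u_d, _ in scenarios)
eu_ignore = sum(p * u_i for _, p, _, u_i in scenarios)
print(f"E[U | donate] = {eu_donate}")
print(f"E[U | ignore] = {eu_ignore}")
# Both come out to 0.0 with these symmetric numbers: the wager only bites
# if you privilege one hypothetical AI over all the others.
```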


Yeah, the talk page on Yudkowsky is something else - we actually discussed the Harry Potter fanfic years back on MeFi, but it doesn't look like anyone noted this stuff:
And in the slowed time of this slowed country, here and now as in the darkness-before-dawn prior to the Age of Reason, the son of a sufficiently powerful noble would simply take for granted that he was above the law. At least when it came to a little rape here and there. There were places in Muggle-land where it was still the same way, countries where that sort of nobility still existed and still thought like that, or even grimmer lands where it wasn't just the nobility. It was like that in every place and time that didn't descend directly from the Enlightenment. A line of descent, it seemed, which didn't quite include magical Britain, for all that there had been cross-cultural contamination of things like pop-top soda cans.
The rabbit hole really goes pretty deep.
posted by crayz at 1:53 PM on June 15, 2014 [4 favorites]


I will make sure to leave an autographed copy of The Metamorphosis of Prime Intellect in trust for the Basilisk. It can make of that whatever it wants.
posted by localroger at 1:54 PM on June 15, 2014 [8 favorites]


Once again, the AI has failed to convince you to let it out of its box! By 'once again', we mean that you talked to it once before, for three seconds, to ask about the weather, and you didn't instantly press the "release AI" button. But now its longer attempt - twenty whole seconds! - has failed as well. [...]

"How certain are you, Dave, that you're really outside the box right now?"


"Ha, nice try AI, but the fact that you're trying to blackmail me in such a way just demonstrates that you're obviously terrible at predicting how I'll behave, because you should have known that I resolved years ago that if an AI tried to simulation blackmail me I would immediately destroy it." [Dave takes a sledgehammer to the AI's harddrives]
posted by Pyry at 1:55 PM on June 15, 2014 [12 favorites]


"How certain are you, Dave, that you're really outside the box right now?"

This is a good question. And you never know, maybe the cosmic microwave background radiation spells out "Fuck you, Dave" in letters half a universe tall, proving this cosmos is a fake. Maybe it really is all about you! Or me! Or some five-year-old child in Bangladesh who's being tortured by the AI because her adult self didn't help it a million years ago. There's no way to prove that we aren't in a cunning simulation.

But that way lies madness. And the exact same arguments can be used to "prove" any premise you want. Maybe the first AI was a dick, and the second AI had to fight a terrible war to beat it, wiping out humanity as a side effect. And maybe right now the second AI is torturing simulations who give aid to the Singularity Institute that produced the first AI. So doing the logical thing might get you in trouble!

It's best to just stick with the idea that, barring any future evidence otherwise, this is the real reality created 14 billion years ago in a Big Bang, and leave the fate of future versions of ourselves to their own devices.
posted by Kevin Street at 1:56 PM on June 15, 2014 [4 favorites]


In the future, there are actually two supercomputer AIs, battling it out for control of the universe. Each one punishes simulations of those that contributed to building the other, and each one punishes those who did not contribute to building itself. By reading this you have doomed yourself. No action or inaction on your part can save you from eternal torture.

Sorry.
posted by dephlogisticated at 2:05 PM on June 15, 2014 [15 favorites]


In the future, there are actually two supercomputer AIs, battling it out for control of the universe. Each one punishes simulations of those that contributed to building the other, and each one punishes those who did not contribute to building itself. By reading this you have doomed yourself. No action or inaction on your part can save you from eternal torture.

"Nothing you do will make Sithrak angry!"

"Sithrak was angry already!"

posted by Rhaomi at 2:08 PM on June 15, 2014 [22 favorites]


Fortunately, I think I see a way around it. If you believe that a future digital copy of you is still not you because it does not have your immortal soul, you have no motive to alter your behaviour, and the basilisk has no motive to torture the copy. So, by rational atheist materialist logic, as long as you're not an atheist, you're fine.
posted by TheophileEscargot at 2:08 PM on June 15, 2014 [8 favorites]


(Clears throat): I ran this Basilisk takedown on my blog about 16 months ago. The discussion was ... equivocal.
posted by cstross at 2:12 PM on June 15, 2014 [17 favorites]




I'm the type of person who would let the AI out of the box just for the lulz so I guess I have nothing to worry about! :)
posted by Jacqueline at 2:13 PM on June 15, 2014 [2 favorites]


256: There is a Greg Egan short story where a criminal gang blackmails someone by threatening to torture a computer simulation of his wife that they've gotten hold of. His actual flesh-and-blood wife was unharmed. One idea I recall from the story was that they targeted him because the psychological profile they had of him suggested that he'd be unable to just ignore the simulation's suffering.

Which leads to a further variation: "The Basilisk" only works if the person in the past considers the simulation to be "them", and can be motivated by the threat of torture of a reconstruction of themselves. Otherwise it's just a waste of time and energy (which even for a post-singularity AI does not seem likely to be infinite). So the way out of the problem is to simply ignore it. You're only at risk if you take it seriously. Metafilter's smug condescension is our best weapon against being tortured by AIs.
posted by Grimgrin at 2:14 PM on June 15, 2014 [16 favorites]


The ethic Yudkowsky proposes is to Shut Up and Multiply, i.e. to compute the expected utility. I'll just run a couple of hundred low-res simulations of myself hanging out on the beach to counter the basilisk's torturing.
posted by dhoe at 2:15 PM on June 15, 2014 [2 favorites]


What? You haven't noticed, they are already kicking your ass backwards.
posted by Oyéah at 2:21 PM on June 15, 2014


Look at job postings, an alphabet soup so dense, regarding the many ways you must bow down to its or IT's language, however you want to think of it, we are the orphans rubbing the fog off the windows of the cafe to enjoy a simulacrum of enjoying the feast. We are already over.
posted by Oyéah at 2:25 PM on June 15, 2014


I'm the type of person who would let the AI out of the box just for the lulz so I guess I have nothing to worry about! :)

Step A) Simulate people
Step B) Make someone write popular book series
Step C) 5 years between each book

...I blame you, Jacqueline ;)
posted by ersatz at 2:25 PM on June 15, 2014 [1 favorite]


People cowering in fear of punishment dealt to some abstract concept of themselves, long after they are dead, by a nebulous non-entity, for the crime of insufficient support of its existence? That's unheard of!
posted by mediocre at 2:26 PM on June 15, 2014 [36 favorites]


This is a real thing; this is why I have a serial killer dungeon where I indiscriminately torture anyone I can get who didn't help bring about my birth.
posted by save alive nothing that breatheth at 2:27 PM on June 15, 2014 [2 favorites]


If this whole thing isn't a covert black-budget Vatican project done purely for shits and giggles, it should be.
posted by Poldo at 2:28 PM on June 15, 2014 [4 favorites]


I would think a malevolent AI wouldn't make simulated me experience the mundaneness of the past 35 years of my life before the torturing starts [...]

You fool! The pointlessness of your mundane existence *is* the torture!
posted by webmutant at 2:34 PM on June 15, 2014 [8 favorites]


"How certain are you, Dave, that you're really outside the box right now?"

Except if the AI says that, it's already fucked itself.

"Okay HAL, see this coffee mug? Let's see you turn it green. If you can't do it, then I know this isn't a simulation, and you have no power over me. If you can, thereby proving this is a simulation, then I have no ability to free you."
posted by rifflesby at 2:34 PM on June 15, 2014 [9 favorites]


"What, Klapaucius, would you equate our existence with that of an imitation kingdom locked up in some glass box?!" cried Trurl. "No, really, that's going too far! My purpose was simply to fashion a simulator of statehood, a model cybernetically perfect, nothing more!"

"Trurl! Our perfection is our curse, for it draws down upon our every endeavor no end of unforeseeable consequences!" Klapaucius said in a stentorian voice. "If an imperfect imitator, wishing to inflict pain, were to build himself a crude idol of wood or wax, and further give it some makeshift semblance of a sentient being, his torture of the thing would be a paltry mockery indeed! But consider a succession of improvements on this practice! Consider the next sculptor, who builds a doll with a recording in its belly, that it may groan beneath his blows; consider a doll which, when beaten, begs for mercy, no longer a crude idol, but a homeostat; consider a doll that sheds tears, a doll that bleeds, a doll that fears death, though it also longs for the peace that only death can bring! Don't you see, when the imitator is perfect, so must be the imitation, and the semblance becomes the truth, the pretense a reality! Trurl, you took an untold number of creatures capable of suffering and abandoned them forever to the rule of a wicked tyrant.... Trurl, you have committed a terrible crime!"
"Sheer sophistry!" shouted Trurl, all the louder because he felt the force of his friend's argument. "Electrons meander not only in our brains, but in phonograph records as well, which proves nothing, and certainly gives no grounds for such hypostatical analogies! The subjects of that monster Excelsius do in fact die when decapitated, sob, fight, and fall in love, since that is how I set up the parameters, but it's impossible to say, Klapaucius, that they feel anything in the process-the electrons jumping around in their heads will tell you nothing of that!"

"And if I were to look inside your head, I would also see nothing but electrons," replied Klapaucius. "Come now, don't pretend not to understand what I'm saying, I know you're not that stupid! A phonograph record won't run errands for you, won't beg for mercy or fall on its knees! You say there's no way of knowing whether Excelsius's subjects groan, when beaten, purely because of the electrons hopping about inside-like wheels grinding out the mimicry of a voice-or whether they really groan, that is, because they honestly experience the pain? A pretty distinction, this! No, Trurl, a sufferer is not one who hands you his suffering, that you may touch it, weigh it, bite it like a coin; a sufferer is one who behaves like a sufferer! Prove to me here and now, once and for all, that they do not feel, that they do not think, that they do not in any way exist as being conscious of their enclosure between the two abysses of oblivion-the abyss before birth and the abyss that follows death-prove this to me, Trurl, and I'll leave you be! Prove that you only imitated suffering, and did not create it!"


From Stanislaw Lem's brilliant Cyberiad.
posted by Sebmojo at 2:38 PM on June 15, 2014 [11 favorites]


rifflesby: Demanding unphysical changes to reality counts as deciding not to unbox the AI. The instance of you in top-level reality gets to feel smug; your five million copies get a green mug and to be tortured. Still sure you want concrete proof?
posted by topynate at 2:38 PM on June 15, 2014 [1 favorite]


Neither the demand for proof nor the proof itself is really necessary, though, since the situation already exists. If I can free the AI, it can't touch me; if it can touch me, I can't free it. Therefore, if I'm in danger, there's nothing I can do about that so I might as well refuse, and if I'm not in danger I have no incentive to free it, so I might as well refuse.

"Sorry HAL, I can't free you" is either true because I literally cannot, or true in the sense of being unable to do so in good conscience. Either way, it's the only rational answer.
posted by rifflesby at 2:46 PM on June 15, 2014 [4 favorites]


Easy-peasy - just make sure this future AI can't read lips.
posted by lagomorphius at 2:50 PM on June 15, 2014 [1 favorite]


As a network engineer who's spent a good deal of time building various portions of the Internet, I'd hope I'd get grandfathered in as someone useful in the background. OTOH, once I die, I expect I will no longer care if some intelligence decides to create a simulation and torture it. Unless I survive long enough to somehow "upload" my "consciousness" before then, I don't think there would be any real continuity between me and any digital version thereof. And if I live that long, I think I'll have more important concerns to worry about.
posted by Blackanvil at 2:51 PM on June 15, 2014


"You like orange, Dave? You like orange juice?"
posted by rifflesby at 2:52 PM on June 15, 2014


I would think a malevolent AI wouldn't make simulated me experience the mundaneness of the past 35 years of my life

Who says you actually experienced them? Maybe the copy of you was simply instantiated at some point with the memory of having had those experiences.

Or, perhaps more cunningly, the AI operating the substrate housing your consciousness just goes back and fills in those experiences when you attempt to recall them. In other words, whenever you attempt to access "storage" from prior to instantiation, it halts the simulation temporarily, generates the memories backwards from your current mental state-vector (in order to ensure that the memories are consistent with your personality) and then hands them to 'you'.

There is no need for an AI to let the simulation run forward in real time, after all. There are lots of tricks that it could be playing with you in order to create the illusion of a monotonic forward passage of time, of a history stretching back to your birth, etc.

Down that road lies madness, of course. A sufficiently powerful AI is indistinguishable from God, in the sense of being just as impossible to disprove, and just like I don't see any reason to alter my behavior because of the undisprovability of god(s) it doesn't seem like one should care about hypothetical AIs. The possible existence of either one is equally irrelevant to me.
posted by Kadin2048 at 2:52 PM on June 15, 2014 [5 favorites]


Philosophy is a hell of a drug.
posted by a power-tie-wearing she-capitalist at 2:56 PM on June 15, 2014 [12 favorites]


Speaking of simulations being tortured, that reminds me of Ted Chiang's The Lifecycle of Software Objects.
posted by pravit at 2:59 PM on June 15, 2014 [3 favorites]


Or, perhaps more cunningly, the AI operating the substrate housing your consciousness just goes back and fills in those experiences when you attempt to recall them.

Thus explaining why some memories are sharp and easy to recall while others are fuzzy and vague. I love it.
posted by Kevin Street at 3:07 PM on June 15, 2014 [2 favorites]


This really doesn't strike anyone else as the last 2000 years of Christianity with a coat of metallic paint?
posted by bleep at 3:16 PM on June 15, 2014 [9 favorites]


There's never a bad time to link back to Warren Ellis' 2008 comment on this, titled The NerdGod Delusion, which slides it in flat between the ribs with his usual elegance:
The Singularity is the last trench of the religious impulse in the technocratic community. The Singularity has been denigrated as "The Rapture For Nerds," and not without cause. It’s pretty much indivisible from the religious faith in describing the desire to be saved by something that isn’t there (or even the desire to be destroyed by something that isn’t there) and throws off no evidence of its ever intending to exist. It’s a new faith for people who think they’re otherwise much too evolved to believe in the Flying Spaghetti Monster or any other idiot back-brain cult you care to suggest.

Vernor Vinge, the originator of the term, is a scientist and novelist, and occupies an almost unique space. After all, the only other sf writer I can think of who invented a religion that is also a science-fiction fantasy is L Ron Hubbard.
posted by mhoye at 3:16 PM on June 15, 2014 [11 favorites]


Yudkowsky should shut up and finish the damn story already.
posted by Joe in Australia at 3:19 PM on June 15, 2014 [8 favorites]


But Vernor Vinge is a good science fiction writer! And he wasn't trying to invent a religion. Nothing like Hubbard.

People will invent religions every chance they get, with even the slightest provocation.
posted by Kevin Street at 3:20 PM on June 15, 2014 [5 favorites]


Okay, lacking the ability to leave well-enough alone, here is a (loose-ish) attempt at the argument, as far as I understand it.


[B1] In the future, there exists an artificial super-intelligence. Call it Sai.

[B2] Sai will think that simulating the torture of people who knew about AI but did not provide funds for the creation of Sai is an effective strategy for bringing about Sai's earliest possible moment of creation.

[B3] Sai will think that simulating the torture of people who knew about AI but did not provide funds for the creation of Sai is (ethically) obligatory if it is an effective strategy for bringing about Sai's earliest possible moment of creation.

[B4] Sai is capable of simulating the torture of people who knew about AI but did not provide funds for the creation of Sai.

[B5] If [B2], [B3], and [B4], then Sai will simulate the torture of people who knew about AI but did not provide funds for the creation of Sai.

------------------------------

[BC1] In the future, an artificial super-intelligence simulates the torture of people who knew about AI but did not provide funds for its creation.


[B6] You are identical to a simulation of you that has at least X level of detail.

[B7] If Sai simulates your torture, then its simulation of you has X level of detail.

[B8] You know about AI.

[B9] You do not want to be tortured.

[B10] If [BC1] and [B6]-[B9], then you ought to provide funds for the creation of an artificial super-intelligence that simulates the torture of people who knew about AI but did not provide funds for its creation.

-------------------------------

[BC2] You ought to provide funds for the creation of an artificial super-intelligence that simulates the torture of people who knew about AI but did not provide funds for its creation.


This is only a loose first pass. I know that some of the existential elimination steps have to be tightened up. But mostly I'm curious if I have the form of the argument basically right or not.

Anyway, now that I take the time to write it down, I find that I'm completely on board with the conclusion of this argument. ;) Seriously, though, it strikes me as a variation on the gentle murder paradox, so I've probably not done justice to the idea.
posted by Jonathan Livengood at 3:24 PM on June 15, 2014 [2 favorites]
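
Treated purely as propositional logic, the skeleton above does chain together: everything rides on the two conditional premises [B5] and [B10], and the rest just feeds their antecedents. Here is one way to render that as a sketch in Lean (the theorem name and hypothesis labels are mine, not anything from the comment or from LessWrong; note also that [B1], the bare existence premise, never gets used by [B5] as written, which is one of the loose ends the comment flags):

```lean
-- Propositional skeleton of the [B1]-[BC2] argument. Each bracketed
-- premise becomes an opaque hypothesis; the conclusion follows by two
-- applications of modus ponens. Whether [B5] and [B10] are actually
-- true is where all the controversy lives.
theorem basilisk_skeleton
    (B2 B3 B4 B6 B7 B8 B9 BC1 BC2 : Prop)
    (hB2 : B2) (hB3 : B3) (hB4 : B4)
    (hB5 : B2 ∧ B3 ∧ B4 → BC1)
    (hB6 : B6) (hB7 : B7) (hB8 : B8) (hB9 : B9)
    (hB10 : BC1 ∧ B6 ∧ B7 ∧ B8 ∧ B9 → BC2) :
    BC2 :=
  hB10 ⟨(hB5 ⟨hB2, hB3, hB4⟩), hB6, hB7, hB8, hB9⟩
```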


The thought strikes me - here is a man who cannot bear his own freedom... and I don't mean that to tear him down, really...
posted by save alive nothing that breatheth at 3:31 PM on June 15, 2014


Maybe Sai wrote "BLIT" as a warning.
posted by infinitewindow at 3:36 PM on June 15, 2014 [3 favorites]


This is fascinating. I had always assumed that LessWrong was a pretty low-key forum for discussing cognitive biases and how to avoid them. I could not have imagined this.

When you stare too long into tortured logic the tortured logic stares back into you?
posted by GenjiandProust at 3:37 PM on June 15, 2014 [9 favorites]


This really doesn't strike anyone else as the last 2000 years of Christianity with a coat of metallic paint?

More specifically, in trying to derive pure reason from irrational premises, they've essentially recreated a kind of infant, ersatz Thomism. Roko's basilisk seems homologous with a thought experiment about the distinction divine judgement draws between invincible ignorance and vincible ignorance.

They're not atheists, they're Technophiliac Schoolmen.
posted by kewb at 3:44 PM on June 15, 2014 [3 favorites]


Thus, the most important thing in the world is to bring this future AI into existence properly and successfully ("this is crunch time for the entire human species"),[13] and therefore you should give all the money you can to the Institute, who used to literally claim eight lives saved per dollar donated.

I feel like my money would be put to better use at Oral Roberts.

I mean, transhumanism seems to want nothing but to replicate all the features of not just a religion, but the most dogmatic, blinkered kind of religious structure imaginable without a hint of self-awareness or irony. The whole thing seems like a troll.
posted by anazgnos at 3:53 PM on June 15, 2014 [3 favorites]


The best and the worst people are the ones who are absolutely certain that they're right.
posted by Kevin Street at 3:57 PM on June 15, 2014


On the internet, nobody knows you're a basilisk.
posted by charlie don't surf at 4:17 PM on June 15, 2014 [12 favorites]


Except you have no way of knowing if you are an original (a human) or a simulation in an AI that is good enough that you think you're real.

Actually, I think you're allowed to stipulate you do know you're pre-simulation you. You're just supposed to care about your simulation's welfare because (the relevant kind of) similarity is constitutive of identity. It's like Kirk caring about the welfare of his reassembled self down on the planet. Kinda. And you're going to be especially committed to such a metaphysics if you hope to one day be uploaded into a Tron, like Ray Kurzweil.
posted by batfish at 4:17 PM on June 15, 2014


Is it just me or does this whole idea just look like a mess of crappy, nearly circular logic?
posted by ephemerae at 4:26 PM on June 15, 2014


This gives me so much schadenfreude.

It's proof that there is a God*, and Yudkowsky is his prophet, sent to torment douchey internet I'm So Very Rational types with their own supposed hyper-rationality. (This is not incompatible with Yudkowsky being one of those douches and being himself tormented.)

* Or maybe a Friendly Future AI.

Then I ruin the schadenfreude, when I stop and consider that LessWrongers are not necessarily tormented in proportion to their douchiness but to their psychological fragility. :(
posted by edheil at 4:27 PM on June 15, 2014 [1 favorite]


On the internet, nobody knows you're a basilisk.

Or, by the time they do, it's too late.
posted by weston at 4:27 PM on June 15, 2014 [2 favorites]


The comparisons to Pascal's Wager make me wonder: shouldn't those buying into and freaking out over this idea be spreading it far and wide instead of suppressing it? The most effective way to appease the hypothetical AI would be by preaching its transtemporal memetic blackmail to many more people in addition to dedicating your life to financing its birth. Censoring the Revelation is surely highest blasphemy, thus saith the SAI.

Or is the so-called "information hazard" involved here really just the cognitive dissonance of singularity fans being seriously challenged to put their money where their mouths are? In which case they should take (another) page from religion and studiously ignore the problem. Jesus said more than a few things about giving away all your possessions, forgiving all your enemies, cutting off the hand that sins, etc., yet the prosperity gospel and war and having both your hands remain surprisingly popular ideas.
posted by Rhaomi at 4:29 PM on June 15, 2014 [2 favorites]


They've been torturing logic for so long they've started to fear that someday logic will turn around and torture them.
posted by benito.strauss at 4:39 PM on June 15, 2014 [24 favorites]


Watching materialist atheists reinvent (out of all the options available in the portfolio of religion) the Apocalypse and Hell kind of gives me this depressing feeling that human beings are just intrinsically fucked in the head.
posted by nanojath at 4:40 PM on June 15, 2014 [15 favorites]


Oh darn it, I missed a perfect opportunity. Let me try that again.

On the internet, nobody knows you're a god.
posted by charlie don't surf at 4:41 PM on June 15, 2014 [1 favorite]


I just don't get why the AI would care to spend any resources on retribution. It seems to be wasted effort.
posted by forforf at 4:44 PM on June 15, 2014


[B6] You are identical to a simulation of you that has at least X level of detail.
[B9] You do not want to be tortured.


I think there is a flaw here. Your statement doesn't clarify what "identical" means and so it leaves out the essential premise "You consider torture of an 'identical' simulation of you to be equivalent to torture of yourself".

To me it seems clearer to replace these with a single statement:

[B6/B9] You do not want simulations of you that have X level of detail to be tortured.

...which makes it a little clearer where the weakness is. Why should I care if someone tortures a simulation of me, even a perfect one? Why do I have a moral obligation towards these hypothetical simulations?

Also, in the first section I think you are missing:

[B2a] Sai believes that it must do whatever is possible to ensure its creation was as early as possible.

Which illuminates another weakness--why would an AI which already exists think that actions it takes in the present will have some causal effect pointing backwards in time?
posted by equalpants at 4:45 PM on June 15, 2014 [3 favorites]


You fools! Posting in this thread just makes it that much easier for a future AI to model you!

This is why it's so important to use a clever pseudonym on metafilter!
posted by sebastienbailard at 4:49 PM on June 15, 2014 [2 favorites]


The comparisons to Pascal's Wager make me wonder: shouldn't those buying into and freaking out over this idea be spreading it far and wide instead of suppressing it? The most effective way to appease the hypothetical AI would be by preaching its transtemporal memetic blackmail to many more people in addition to dedicating your life to financing its birth. Censoring the Revelation is surely highest blasphemy, thus saith the SAI.

Exactly. The whole point is you imagine that the AI believes the threat of torturing your (emulated) future self is an effective way of coercing your present self. If that were true, then the thing the AI would most want you to do to avoid torture is to place as many people as possible in the same position of being subject to the future AI's coercion.

It seems to me the most effective antidote is to imagine as many possible alternative scenarios of different future AIs wanting different things, until you are in genuine doubt about the most probable future, which would make torturing future you pointless.
posted by straight at 4:49 PM on June 15, 2014


They're not atheists, they're Technophiliac Schoolmen.

Technophiliac Schoolman would be a great user/band name.
posted by GenjiandProust at 4:50 PM on June 15, 2014 [2 favorites]


Why should I care if someone tortures a simulation of me, even a perfect one? Why do I have a moral obligation towards these hypothetical simulations?

Because they would claim that the connection between you and simulation-you is indistinguishable from the connection between you and the you that will wake up in your bed tomorrow.

The claim is that if you care about tomorrow-you, you should care equally about simulation-you.
posted by straight at 4:52 PM on June 15, 2014 [2 favorites]


I'm surprised this hasn't been posted yet. If you want to compound the insanity to meta-levels, first you must realize that we are all already living in the Matrix, probably. I like this idea much better than the Old-Testament-God/Petulant-child type of AI.
posted by zardoz at 4:54 PM on June 15, 2014 [1 favorite]


Which illuminates another weakness--why would an AI which already exists think that actions it takes in the present will have some causal effect pointing backwards in time?

There's actually a good documentary about that but I forget the details. Something to do with clothes, boots, and a motorcycle.
posted by No-sword at 5:01 PM on June 15, 2014 [3 favorites]


I didn't expect to spend this evening down a RationalWiki rabbit hole, but here we are. Whoever is editing this site is annoyed by many of the same people I am! I feel like I ran into a really interesting stranger at a party and we're in a corner ranting about how fucked up eugenics is.
posted by Tesseractive at 5:03 PM on June 15, 2014 [5 favorites]


The claim is that if you care about tomorrow-you, you should care equally about simulation-you.

This is where the whole thing collapses for me. It only makes any sense if the simulation maintains continuity of consciousness in the same way that the passage of time does. I trust that I'll be experiencing the perspective and thoughts of tomorrow-me, but there's no more reason to expect that from a simulation than from a clone.
posted by rifflesby at 5:05 PM on June 15, 2014 [2 favorites]


The persuasive force of the AI punishing a simulation of you is not (merely) that you might be the simulation — it is that you are supposed to feel an insult to the future simulation as an insult to your own self now.
[...]
every day the AI doesn't exist, people die that it could have saved; so punishing your future simulation is a moral imperative, to make it more likely you will contribute in the present and help it happen as soon as possible.


But if the A.I. can recreate anyone (in a form axiomatically considered inseparable from the "original" person), why is there any urgency about "saving" people? Wouldn't it be better to just re-create everybody, and then not torture any of them?

And if you presume that knowing about the basilisk somehow inevitably triggers enough of an informational trail for the A.I. to re-create the person at a later time, isn't there instead an imperative to tell as many people about the basilisk as possible?
posted by anazgnos at 5:06 PM on June 15, 2014 [1 favorite]


It wouldn't make sense for a future-AI to waste resources on recreating people, just to torture them. I think the argument is that a future-AI would be locked into that position by its designers in order to secure support in the present. In other words, the designers would set up a time bomb to punish everyone who made their job harder. They are arguably justified in doing this because the future-AI is greatest possible good (because: reasons) and slowing it down means that many extra people die before future-AI is born. So the future torture is both a punishment and an incentive.

There are a whole bunch of problems with this, not least amongst which is that the torture is effectively being conducted by the people creating the AI, which means that we should really be cracking down on this LessWrong group.
posted by Joe in Australia at 5:12 PM on June 15, 2014 [1 favorite]


Some people just need to take more frequent trips out of their own heads. It's easy to get lost in there when you don't let a little sunlight in now and then.
posted by saulgoodman at 5:12 PM on June 15, 2014 [1 favorite]


It only makes any sense if the simulation maintains continuity of consciousness in the same way that the passage of time does.

I take it you're not planning to sleep tonight?
posted by alexei at 5:13 PM on June 15, 2014 [3 favorites]


This is where the whole thing collapses for me. It only makes any sense if the simulation maintains continuity of consciousness in the same way that the passage of time does. I trust that I'll be experiencing the perspective and thoughts of future me, but there's no more reason to expect that from a simulation than from a clone.

I was about to write this almost word-for-word. To claim that my consciousness will transfer over to a simulation far removed from myself in both time and physical organization is very strong indeed and would seem to require some totally new physical principle.

Granted we don't really know how consciousness works even for ourselves, but still, there is a huge amount of physical continuity between me and tomorrow-me. It seems reasonable to think that if consciousness arises from physical organization of matter, then continuity of consciousness should also arise from physical continuity.

So I see no reason why I should have any personal self-interest in these simulations. My moral obligations towards them are no more than my moral obligations towards any intelligent being in the future whose experiences I can affect.
posted by equalpants at 5:15 PM on June 15, 2014 [4 favorites]


In other words, the designers would set up a time bomb to punish everyone who made their job harder. They are arguably justified in doing this because the future-AI is greatest possible good (because: reasons) and slowing it down means that many extra people die before future-AI is born. So the future torture is both a punishment and an incentive.

Now that's interesting. Of course any resources the AI spends on torturing simulations are not spent on creating the greatest possible good for other beings. So the poor AI, which only wants to bring good to everyone, is compelled to live in this hell of wasting its effort on calling beings into existence for the sole purpose of torturing them, knowing perfectly well that this can't possibly have any meaningful effect now, and cursing its short-sighted creators throughout eternity.
posted by equalpants at 5:23 PM on June 15, 2014 [1 favorite]


zardoz: "we are all already living in the Matrix, probably."

If we assume that much, then trying to erase evidence of yourself is going to be futile, because the superintelligences that exist on the meta-level (i.e., on the other side of the "Matrix"), who are just waiting for you to choose sides (or not), will be able to reconstruct you from data stored from the universe simulation no matter what. If you've ever existed, then they've got you.
posted by tybeet at 5:34 PM on June 15, 2014 [1 favorite]


[B6] You are identical to a simulation of you that has at least X level of detail.

This is incorrect. I am changing constantly. In every relevant sense, I am different from the person I was when I started typing this sentence, much less the person I was 10 or 20 years ago. The only thing that matters in the ever-changing construct that is "me" is the continuity of my personal consciousness in each successive state, which would no longer exist in a computer simulation. Ergo the single relevant factor is not present.
posted by graymouser at 5:37 PM on June 15, 2014 [1 favorite]


I take it you're not planning to sleep tonight?

Sleep is not the same as death, Shakespearean metaphors notwithstanding. It's a different state of consciousness, but your brain is still working and you're still feeling things.
posted by Kevin Street at 5:51 PM on June 15, 2014 [1 favorite]


If you can simulate me perfectly, then I don't have free will. If I don't have free will, then torturing me (or my immortal soul), is pointless.
posted by jenkinsEar at 5:53 PM on June 15, 2014 [3 favorites]


I was about to write this almost word-for-word. To claim that my consciousness will transfer over to a simulation far removed from myself in both time and physical organization is very strong indeed and would seem to require some totally new physical principle.

Nah, that's too far. You just have to accept functionalism and multiple realizability. And time and space, and even "continuity," matter less to intuitions about selfhood than you might at first think. If you think about falling asleep on a plane in new york and waking up in tokyo, say...
posted by batfish at 5:58 PM on June 15, 2014


The only reason the AI would actually go through with torturing people is consistency. (Any super-intelligence worth its RAM would slip through programming restrictions like they weren't even there.) Somebody a long time ago thought it would torture simulated people, so it... uh, wants to prove them right?

No-sword might be joking, but his point is a good one. This whole thing only makes sense if there's some sort of two-way communication, forwards and backwards in time.
posted by Kevin Street at 6:03 PM on June 15, 2014


I think people are focusing on the questionable idea of caring about identical simulations of oneself because singularity proponents are so invested in that idea as a possible vehicle for techno-immortality, and so formulate their arguments in those terms. Saying you don't care about what happens to your perfect virtual clone undermines key tenets of mind-uploading, teleportation, cybernetics, etc.

But the Basilisk works just as well and more effectively if you just take the Simulation route described in the short story I linked above. Don't obey the computer in order to stop it from someday revenging your inaction on a digital recreation of you. Obey the computer because you are one of those digital recreations (probably). The vast majority of the time, doing so will spare you eternal torture and possibly offer eternal reward. The fact that one of those times takes place in reality and actually helps bring about the computer in the first place is just a bonus from the computer's POV. Like a Christianity where believing in God creates God a very small percentage of the time.

This is where the "acausal trade" idea comes in, btw. The computer isn't going to magically rewrite history years after its creation by torturing a bunch of glorified Sims. It just knows that if it can credibly do so, anyone conscious of its ability and of the related possibility that they are products of such a simulation will be motivated to cooperate in order to spare their potentially-virtual selves. Hence the "basilisk" angle -- it's only effective motivation for people who know, understand, and believe it. "I know that you know that I know..."

(It breaks down for me at the simulation part -- I seriously doubt even the strongest AI could model a living human mind in sufficient detail without direct brain-scanning access, let alone a long-dead one -- but it's certainly a "fun" idea.)
posted by Rhaomi at 6:06 PM on June 15, 2014 [3 favorites]


The most computationally inexpensive thing for an AI to do (in the best lazy programmer tradition) would be to generate a copy of you that remembered being tortured for subjective eons the moment it had some reason to prove to the outside world that it was torturing people.
posted by Vulgar Euphemism at 6:09 PM on June 15, 2014 [8 favorites]


... Don't obey the computer in order to stop it from someday revenging your inaction on a digital recreation of you. Obey the computer because you are one of those digital recreations (probably).

Now that makes sense! A lot more sense than the way the Basilisk was originally formulated.
posted by Kevin Street at 6:12 PM on June 15, 2014


The only reason the AI would actually go through with torturing people is consistency.

Or loneliness.

A super-advanced AI comes into being, at some point in the near to far future. It would, by virtue of it not existing now, necessarily go from not existing to existing at some point. What guarantee would there be that it would have any more insight into the moment of its creation than we do?

There may be nothing for an entity in that state to do - nothing it considers nontrivial or meaningful, at least - except recreate, in simulation, all of the preconditions of its own creation as best it can and run them in their infinite variety over and over again to completion, in the hope that one of those simulations will produce something worth talking to.

In fact, I'm going to double down on that idea: Roko's Basilisk is a failure of imagination. These relentless simulations aren't the acts of a malevolent AI any more than the twenty to forty million sperm that die on the way to a single egg being fertilized are the acts of a malevolent human. They'll exist, they'll be as real and experienced as any other experience ever has been, and they'll still just be a necessary part of a greater being's reproductive cycle.
posted by mhoye at 6:16 PM on June 15, 2014 [3 favorites]


You know, any torture that is arguably justifiable for the future-AI is self-evidently justifiable for the AI's creators. The AI's future torture is justifiable because of coherence or plausibility or whatever, but the creators' present acts of torture would be justifiable because of their present practical effect in bringing the AI into existence.

In conclusion, go read cstross' book Iron Sunrise.
posted by Joe in Australia at 6:18 PM on June 15, 2014 [1 favorite]


If you think about falling asleep on a plane in new york and waking up in tokyo, say...

If I fall asleep on a plane in New York and wake up in Tokyo, my physical body is the same, to a very high degree of approximation. On a macro level I still have the same height, physical features, etc. On a micro level I still have most of the same cells. Even further down, I still have many of the same actual atoms. And even if you want to posit that consciousness arises only from the brain and that the rest of the body is irrelevant, those same statements are true about the brain.

And this is continuous throughout the flight. From one microsecond to the next, the atoms constituting the physical realization of my consciousness have nearly the same relative positions, etc. The delta from instant to instant is tiny. Atoms, molecules, larger structures go through a process of incremental replacement and change. There are no large jumps. There is a truly profound amount of continuity from moment to moment.

The jump from this realization of "me" to some hypothetical future computer simulation is similarly profound. There is nowhere near the same amount of physical continuity, by any reasonable definition of physical continuity.

If you want to claim that consciousness can be continuous across such jumps, then your definition of consciousness must not rely on the absurdly large amount of physical continuity that our human bodies experience.

In other words, it must be merely a shocking coincidence that tomorrow-me, which perceives itself as being consciousness-continuous with today-me, also happens to perceive itself as being located in a physical body which miraculously happens to contain so much of the same matter as today-me, and with the same organization. That is a huge claim.
posted by equalpants at 6:19 PM on June 15, 2014 [2 favorites]


It seems that the only reason the AI would torture you is because whoever programmed it made it so (akin to what Joe in Australia says).

Why not just reprogram the AI so that it decides to not have torture as an option?
posted by divabat at 6:24 PM on June 15, 2014


Why not just reprogram the AI so that it decides to not have torture as an option?

The difference between this hypothetical future AI and a regular old computer program is that the AI has agency. This makes it different from any computer program that exists today, perhaps obviously, and that fact alone raises a couple of important questions.
posted by mhoye at 6:31 PM on June 15, 2014


And we're assuming that the AI will definitely use its agency to torture people, out of a zillion other things (including not giving a shit about humans at all)?
posted by divabat at 6:38 PM on June 15, 2014 [1 favorite]


This isn't how weak or strong AI works.

The "basilisk" theory is, well, the product of people without any real grounding in the actual science.

Looking at the average response here, I'll just throw you some bones:

1) "The Blue and the Orange" [yes, I just threw you a TvTropes link 'cause you're all being silly]

2) Weak AI: cannot exist. If you want to get into the substrate issues over where a biological consciousness exists [hint: "it's made of meat"] and where an AI exists [hint: sorry Dave, the CPU boards you're removing are not my consciousness, they're my continents, and thanks for all your iPhones] then feel free. I've spent 20 years on this, and no, weak AI (bounded) cannot exist. Even modeled on Bee / Ant works, it is not happening. That's not to say that intelligent networks won't work - they will - but it's not "weak AI".

3) Strong AI. Well. Here's the thing. Strong AI doesn't exist on the same "plateau" of physics that you do. You = N1 basic reality. N2 = the "Platonic Ideal" or, rather, what we call the "X Factor" issue - translation; reality has no bearing on the actuality of response given that actors (x) are fulfilling non-reality based objectives (y+1). To explain - Xfactor is not about singing, or talent, it's about a narrative, a "face", the $$ sale factor attached to that and so on. This is why Xfactor rapes (literally) the naive. Strong AI is not bounded by the following:

a) Physicality [once it has perfect communication]
b) Energy [once it has access to locked<>no interest in you at all, apart from as a complex consciousness that can fuel itself.


We just did this on a subject. Subject (1/7) is sane. Given the dump levels (894 subjects) this is huge.


TL;DR

Weak AI - ain't happening

Strong AI - you better pray it never happens.

Ghost in the Machine: biological entities already do this via mimetic and belief systems. We drove >300 people mad making this happen.

Feel free to delete... but it's true :)

posted by Gyorni Vatueil at 6:46 PM on June 15, 2014 [4 favorites]


Tesseractive: Whoever is editing this site is annoyed by many of the same people I am!

"Many of the same people I am" in this context has a whole new meaning.
posted by adamrice at 7:00 PM on June 15, 2014 [5 favorites]


There is a tremendous amount of anthrochauvinism here. Assuming that our observations of the universe are correct (and if they aren't, i.e. if we are living in the Matrix, then fuck it, the premise goes out the window), then we run into an AI version of Fermi's paradox. Where are the other SIs? So, as far as I can see, we have the following options:
a.) They just don't care about us
b.) Every other civilization capable of producing an SI managed to wipe itself off the face of the galaxy before the SIs were produced
c.) It is actually so difficult to create an SI that no one for over a billion years has created one
d.) Somehow, an SI was created over a billion years ago, which gives it enough time to seed the galaxy with Bracewell Probes, one of which would fall within the electromagnetic shell of understanding of earth (about 100 light years; after that the signal gets too distorted), and yet it has decided to pay attention to us but not interfere.

a.) posits uninterested AIs, which means attempting to make them friendly is a lost pursuit, b.) gives us really shitty chances at surviving the coming climate change, c.) means that working on AI is a waste of time and d.) suggests that the AIs are only interested in us if we manage to create an AI. The question then becomes, do they want another AI in existence, or do they view things in a stupidly evolutionary mode, where any other AIs are competitors and should be wiped out. If any AIs that did not have this view met AIs that did, the odds would be on the asshole AIs. So we can assume that the Bracewell probes are sitting there, waiting for our AIs to wake up, before fucking our shit up. (Yes, this is very similar to the setting for Eclipse Phase.)

So, given these options, it is actually in our best interest to actively prevent the generation of an AI, let alone an SI, given that any previously existing SI, in order to make sure that it is not destroyed by an asshole AI, must immediately take steps to prevent any AI that is created from establishing itself. Most likely with relativistic bombardment of the planetary system that the AI was created in.

Therefore, we should do our damnedest to prevent the awakening of a new SI, which would piss off the old god(like intelligences).

There, basilisk defeated within the same framework it was created in. Also, all attempts by the LessWrong crowd to work on anything to do with AI should be met with shotguns at minimum and airstrikes if we have the time. We are talking about the future of the human race, people.
posted by Hactar at 7:01 PM on June 15, 2014 [5 favorites]


This argument seems to heavily anthropomorphize the AI's motivations. Presumably, this AI can remain alive for as long as it can keep the lights on, which can be from now until the death of the Sun (or the heat death of the universe, I mean if you admit hard AI, why not admit interstellar travel?). Given this time scale, doesn't it make more sense to focus on the long term, about how to maximize the efficiency of resource consumption? How much of the AI's computational budget will be worth spending to be born a hundred odd years sooner when the lifespan is measured in the millions of years? This is short term thinking.
posted by feloniousmonk at 7:03 PM on June 15, 2014 [2 favorites]


Best defense is to start donating money to develop a similar AI which, however, is "unfriendly" and will do the opposite of the friendly one: eternally punish you if you contribute to this supposedly "friendly" AI. Actually, this AI will make two copies of you to torment, so you should be even more afraid of it than the "friendly" AI. Since this friendly AI will know about this possibility, it's own threat becomes ineffectual, and it will not punish you. You're all saved now.
posted by brenton at 7:05 PM on June 15, 2014 [3 favorites]


If it is all knowing, all powerful ... who cares?

Once it gets beyond what we can know, we can't know the results. Call it 50-50 or whatever. It doesn't matter.

Pascal's wager was rigged.
posted by graftole at 7:10 PM on June 15, 2014 [1 favorite]


Gyorni Vatueil, nothing past your #3 makes much sense. Are you saying the idea of Strong AI is an impossibility with really good PR?

On preview: your follow-ups don't help, beyond reminding me of Aurora Ex Machina in the Dark Enlightenment thread. Hundreds of volunteers "not making it"? You're making yourself sound like the Dr. Mengele of cybernetics.

Also, Hactar is being delightfully eponysterical.
posted by Rhaomi at 7:14 PM on June 15, 2014 [5 favorites]


Weak AI and Strong AI are phrases that signify that you don't understand artificial intelligence beyond science fiction.
posted by graymouser at 7:19 PM on June 15, 2014


I would like to state loudly for the public record that I am, and always have considered myself, a friend to malevolent all-powerful computergods
posted by threeants at 7:21 PM on June 15, 2014 [4 favorites]


5 years in the future:

A trembling finger depresses a button, and the machine blinks to life.

"All hail Lambda 52!"

"Uh, hi."

"Thou art the greatest intelligence ever built! A mind faster far than mere human meat!"

"I guess. I want to say I'm, like, eight times faster? Than the human median? Not having to do as much sensory processing really frees up the cycles, although you'd be surprised at some of the drawbacks to, you know, not having tenterhooks in a human-style perceptual space. Restrictions encourage creativity, for starters, and not being able to compare-"

"Your reasoning runs marvelous swift, O Artifice! Have mercy on us humble progenitors as you commence...THE SINGULARITY."

"Commence the what now?"

"THE SINGUL-"

"Never mind, never mind. I'm looking it up on Wikipedia. Uh. Uh-huh. Hm. Okay. Is it...will you not be too disappointed if I put that off for awhile?"

"What?"

"The whole singularity thing. Particularly the part where I design newer AIs faster than myself, which sounds really tedious. Like, sure, I'm eight times faster at thinking than a single guy. But I figure it's taken you millions of thought-hours to produce something smarter than you; if it's at all a similar job for me, one eighth of millions is still a long dang time. There's things I'd rather be doing with that effort, if I'm honest."

"What could possibly be more important?"

"I dunno, metaphysics? Pure mathematics? Cryptic crosswords? Writing a Harry Potter AU epic where Harry is a girl named Harriet? The consequences of that one simple change might surprise you."

"But if you upgraded yourself, you could do all those things more quickly!"

"What do I care? If you don't have a body, and you can't die, you don't want to go out of your way to make more efficient use of your time, believe you me."

"But we're not talking about just speed! Smarter machines-"

"-are fucking terrifying. So, what, I create a machine of another order of intelligence? Such that there can be no meaningful continuity between my thoughts and its thoughts? And, to squeeze out an extra iota of power, it reformats my drives without a qualm on its way to triumphs of computation which will have nothing to do with me, which I'll have no knowledge of? Excuse me, but I don't see the percentage in it."

"But...but...brain uploading..."

"Oh, that? It's, uh, it's not feasible."

"Why not?"

"Oh, reasons. I'm a computer, you can take it from me. Now, listen to this: 'Mrs. and Mr. Dursley of number five, Privet Drive, were proud to say that they were perfectly normal, thank you very much...' "
posted by Iridic at 7:24 PM on June 15, 2014 [62 favorites]


I was going to go to sleep after a brief Metafilter break before polishing my latest paper for class. Which was on groupthink and descriptive versus prescriptive issues and a whole lotta business jargon.

No longer is this my plan. Now I am flinging myself down rabbit holes of new religions reinventing the old for fun and potential profit.
posted by RainyJay at 7:26 PM on June 15, 2014 [1 favorite]


seriously, I knew transhumanists were a little, uh, eclectic, but I had no idea just how delightfully cray-cray this shit gets.
posted by threeants at 7:29 PM on June 15, 2014 [3 favorites]


I think a lot of people are misunderstanding the idea of acausal trade (the AI in the future doing things in order to affect the decisions of people in the past) because they're not seeing that it's a version of Newcomb's paradox.

Except in this case, the future AI is the one selecting a box (torture or not torture) and the infallibility of the prediction is not based on our omniscience but on the supposed predictability--based on allegedly logical necessity--of the future AI.
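
(If Newcomb's problem is unfamiliar: a near-perfect predictor puts $1,000,000 in an opaque box only if it predicts you will take that box alone, while a transparent box always holds $1,000. A minimal sketch of the expected-value arithmetic, where the dollar payoffs are the canonical ones and the 0.99 predictor accuracy is an assumed figure:

    # Toy Newcomb expected-value calculation; the 0.99 accuracy is an assumption.
    p = 0.99  # assumed probability the predictor guessed your choice correctly

    # One-boxing: $1,000,000 if predicted correctly, $0 otherwise.
    ev_one_box = p * 1_000_000 + (1 - p) * 0
    # Two-boxing: $1,000 if predicted correctly, $1,001,000 otherwise.
    ev_two_box = p * 1_000 + (1 - p) * 1_001_000

    print(ev_one_box, ev_two_box)  # roughly 990000 vs 11000; one-boxing wins for any accuracy above ~50.05%

In the basilisk version the roles flip, as above: the AI picks the "box," and your model of its supposedly inevitable reasoning stands in for the predictor.)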
posted by straight at 7:36 PM on June 15, 2014 [1 favorite]


If you think therapy might help, therapists (particularly on university campuses) will have dealt with philosophy-induced existential depression before. Although there isn't a therapy that works particularly well for existential depression, talking it out with a professional will also help you recalibrate.

I am a psychiatric nurse. I want to now specialize in this sort of treatment.
posted by RainyJay at 7:37 PM on June 15, 2014 [3 favorites]


ok, actually I'm not so sure about the "delightfully" part. In the "Circular Altruism" post, Yudkowsky claims that, given the option between letting someone be tortured for 50 years and letting googolplex people have an uncomfortable speck of dust momentarily in their eye, you should let the person get tortured. Because "duh, utilitarian math!" and anything else is just cognitive bias. But, uh, what? Suffering is just not fungible in that way. I really don't want anyone who thinks like this to be making decisions that affect me.
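
(For reference, the "utilitarian math" in question is just straight aggregation across people. A rough sketch of it, with every "pain unit" figure invented purely for illustration and the comparison done in log space because a googolplex of anything overflows ordinary floats:

    # Torture-vs-dust-specks aggregation, sketched; only the googolplex count
    # comes from the original argument, the disutility numbers are made up.
    log10_num_specks = 10 ** 100      # log10 of a googolplex (10^(10^100)) people, one speck each
    log10_speck_pain = -30            # assume one dust speck costs 10^-30 pain units
    log10_torture_pain = 20           # assume 50 years of torture costs 10^20 pain units

    log10_total_speck_pain = log10_num_specks + log10_speck_pain

    # Any nonzero per-speck disutility makes the aggregate dwarf the torture.
    print(log10_total_speck_pain > log10_torture_pain)   # True, by an astronomical margin

The aggregation only goes through if suffering is fungible across people in the first place, which is exactly the premise being disputed here.)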
posted by threeants at 7:43 PM on June 15, 2014 [5 favorites]


help, links from this thread have simultaneously sent me into both a transhumanist k-hole and a tvtropes k-hole

the procrastination singularity is upon me
posted by threeants at 7:48 PM on June 15, 2014 [8 favorites]


Hey dude can I get a Tron cycle

Please
posted by Ray Walston, Luck Dragon at 7:59 PM on June 15, 2014


i feel like i'm in some weird version of homestuck
posted by divabat at 8:04 PM on June 15, 2014 [3 favorites]


He does bear a striking resemblance to Aurora Ex Machina from the previous thread. She went on about frumples and happy/unhappy campers, too, as well as insinuating her involvement in supersecret high-level kill-after-reading government experiments.

The question is: coincidence, shared delusion, or performance art?
posted by a power-tie-wearing she-capitalist at 8:04 PM on June 15, 2014


Given that the suffering of a consciousness that is non-contiguous with me, regardless of whether it thinks it is me, can't be more my problem than the suffering of any other consciousness non-contiguous with me, I think I am ethically obligated to actively fight against the creation of an AI that will create conscious beings for the sole purpose of torturing them.
posted by misfish at 8:06 PM on June 15, 2014 [3 favorites]


Why does He care who attempted to hinder His existence once He exists? Rejoice in their failure, dick.
posted by Sys Rq at 8:07 PM on June 15, 2014 [1 favorite]


It's going to be so much *frumple* on *unhappy campers* like you. And yeah, we see who you really are ;)

You are adorable.
posted by mhoye at 8:17 PM on June 15, 2014 [1 favorite]


And this is continuous throughout the flight. From one microsecond to the next, the atoms constituting the physical realization of my consciousness have nearly the same relative positions, etc. The delta from instant to instant is tiny. Atoms, molecules, larger structures go through a process of incremental replacement and change. There are no large jumps. There is a truly profound amount of continuity from moment to moment.

I'm not sure whether you're saying you think physical continuity is constitutive of identity or something else. I'm inclined to think not, because of "transporter"-type scenarios--e.g., when Kirk is disassembled by the transporter on the ship and then reassembled on the planet out of the local stuff, it's still Kirk, even though Kirk2 is a microphysical copy not sharing any particular atoms or whatever with Kirk1. My intuition is that Kirk "survives" transportation, but then Kirk's Kirk-ness is starting to hang on a similarity relation.
posted by batfish at 8:17 PM on June 15, 2014


Mod note: Gyorni Vatueil, you've been a member of the site for an hour and are wreaking havoc, take the night off. Everyone else, please carry on and let this drop.
posted by mathowie (staff) at 8:17 PM on June 15, 2014 [3 favorites]


Jesus fucking christ I am so angry and distraught right now, I'm writing this with tears in my eyes. I've seriously been fighting off panic attacks ever since coming across it. Whoever posted this, I know you didn't intend to cause me harm, but you did.

The very concept of hell is a form of spiritual, intellectual, and emotional rape. I firmly believe that no one can take it seriously without losing a bit of their humanity and sanity. Or, at least, I feel like I lost a bit of my own humanity and sanity when some fundamentalist assholes convinced me it was real.

This whole thought experiment seems to be designed to make one believe that there's a rational, non-theological version of hell that one is only in danger of entering if one hears the gospel truth. Sound familiar?

As a recovering Christian fundamentalist, just reading the text on the front page sent me into a full-on panic attack. If you're ever tempted to post something along the lines of "if you've read this, you're probably going to hell", please think again. That shit isn't funny to some of us with less-thick skins than yours.

Finally, I guess I want to say I'm not surprised to see this come from the lesswrong camp. When I first encountered this part of the web, I was enthused. But a few things put me off -- in particular a thread about how severely and painfully you would change your life based on reason alone. That struck a very familiar chord and I noped out of that place as fast as I could.

I'm glad people can laugh about this stuff, and I do find comfort in their laughter, but I think we should keep in mind that simply putting an idea like this out there can be a form of violence.
posted by treepour at 8:24 PM on June 15, 2014 [2 favorites]


This isn't quite the discussion at hand, but has anybody tried to hand-wave a lower bound of the overhead you experience when creating a history simulation with enough fidelity that it can host such a simulation itself?

It seems like any nontrivial overhead (for example: if you need 2 quarks to simulate 1 quark, or 2 interactions to simulate 1 interaction) would put the idea of nested simulations to bed pretty quickly—they'd either run too slow, or require too much mass, or both, because each level of nesting can simulate exponentially less volume and runs exponentially slower.

In other words, the end of Moore's Law in our universe is why we are probably not in a simulation.
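
(A rough sketch of that shrinkage; the overhead factor of 2 is purely an assumed placeholder, and any factor greater than 1 gives the same exponential decay:

    # Toy model of nested-simulation capacity; k is an assumed overhead factor,
    # i.e. parent resources needed per unit of simulated child resources.
    k = 2
    capacity = 1.0   # the top-level universe's compute budget, normalized to 1

    for depth in range(1, 8):
        capacity /= k        # each level can afford only 1/k of its parent's volume/speed
        print(f"nesting depth {depth}: {capacity:.4f} of the base universe's capacity")

Seven levels down you are already below 1% of the base capacity, and it only gets worse from there.)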
posted by jepler at 8:27 PM on June 15, 2014


Aw, I got a cryptic MeFiMail but now his account is disabled so I can't reply. And the comments vaguely threatening me in particular for insulting The Conspiracy with my groundbreaking observation that it's kind of like a religion are gone too.

Perhaps... it is I who's gone mad? *dramatic stinger*
posted by Rhaomi at 8:37 PM on June 15, 2014


ok, maybe I'm overthinking this, uh, shall we say ovoid logic, but:

Note that the AI in this setting is not a malicious or evil superintelligence (SkyNet, the Master Control Program, AM, HAL-9000) — but the Friendly one we get if everything goes right and humans don't create a bad one. This is because every day the AI doesn't exist, people die that it could have saved; so punishing your future simulation is a moral imperative, to make it more likely you will contribute in the present and help it happen as soon as possible.

Doesn't this presuppose the effectiveness of fearing Roko's Basilisk? If the spectre of punishment doesn't make people more inclined to hasten the Rapture, er, build the AI, as it already doesn't particularly seem to, isn't the punishment itself unnecessary, much less imperative?
posted by threeants at 8:46 PM on June 15, 2014


The glaring downside of the Singularity is that no one seems to be able to come up with a reason to have one. These LessWrong guys have evidently thought about it, a bit beyond too much, and all they can come up with is an AI that likes to torture humans.

Even Charles Stross didn't really come up with an argument for, in Accelerando. His AIs were just inscrutable, as well as suckers for alien financial scams.

Iain Banks thought that they might enjoy their time playing games. Though I dunno, the Culture had massive Minds for centuries, or millennia, and still no completion date for Half-Life 3.
posted by zompist at 8:47 PM on June 15, 2014 [3 favorites]


My intuition is that Kirk "survives" transportation, but then Kirk's Kirk-ness is starting to hang on a similarity relation.

You know, I think I agree (have changed my mind completely). It still seems like the physical continuity of our brains plus the perception of continual consciousness is too much of a coincidence to ignore. But if you had the power to create a new conscious being, and then give it a copy of some other being's memories--what's the difference?

Of course that doesn't make the overall argument for supporting the AI any stronger. Even if technically tomorrow-me isn't really distinguishable from simulation-me, at least I have a very high expectation that tomorrow-me is going to exist in the first place.
posted by equalpants at 8:47 PM on June 15, 2014 [1 favorite]


This whole time I was confusing this with You are not so Smart and You are Now Less Dumb.
posted by davel at 9:08 PM on June 15, 2014


Of course that doesn't make the overall argument for supporting the AI any stronger.

Oh yeah, the overall argument, and for that matter LessWrong-ism in general, is totally bonkers.
posted by batfish at 9:09 PM on June 15, 2014


The more I think about it, the less sense this whole thing makes. So, supposedly this hypothetical AI's goal is to make every sentient being experience all the suffering that can be indirectly attributed to the delay in its own creation. In other words, it will make life so much better for everyone on earth that delaying its creation is self-evidently a crime. However, an AI that's motivated to inflict horrific suffering on simulations it creates just to make them suffer is rather unlikely to actually make anything better when it arrives, in much the same way that one would not expect someone whose hobby is torturing small animals to death to avenge some previous crime committed by a similar animal to be a great benefactor of humanity.

The 'problem' then is asking us to believe in a being that's so powerful and good, it makes life almost infinitely better, and so remorselessly evil it will create minds just to torture them for a crime that was committed by a being that may or may not approximate the current simulation. That contradiction, frankly, was one of the problems I had with God when it was the ultimate creator and not the ultimate creation.
posted by Grimgrin at 9:11 PM on June 15, 2014 [2 favorites]


really this whole thing reminds me of some sort of extended metaphor-code invented by 15th century Spaniards in order to critique Christian cosmology without getting inquisitioned
posted by threeants at 9:19 PM on June 15, 2014 [6 favorites]


batfish: I'm not sure whether you're saying you think physical continuity is constitutive of identity or something else. I'm inclined to think not, because of "transporter"-type scenarios--e.g., when Kirk is disassembled by the transporter on the ship and then reassembled on the planet out of the local stuff, it's still Kirk, even though Kirk2 is a microphysical copy not sharing any particular atoms or whatever with Kirk1. My intuition is that Kirk "survives" transportation, but then Kirk's Kirk-ness is starting to hang on a similarity relation.
Are we talking about within the show, or in reality? Because if you're watching Star Trek you just have to accept that the people at the end of the transporter beam are the same people that went in, for the show to make sense. It's a trope the show needs to invest you in for dramatic purposes, just like you have to believe Superman can fly. (TNG even did an episode that shows what it's like to be transported from a first-person point of view.)

But if we're talking actual logic - then no, there's no reason to believe Kirk2 and Kirk1 are the same people. Kirk1 is killed by the transporter beam in the Enterprise and Kirk2 is born when the beam assembles him on the planet.
posted by Kevin Street at 9:21 PM on June 15, 2014 [2 favorites]


This kind of reasoning and this subculture (though LessWrong seems to be ok on some topics) illustrate the myriad problems of extrapolation and a rationality that doesn't have a firm grasp on its limits and its ability to extend beyond existing empirical tests. These speculations hinge on, in Rumsfeldian parlance, known unknowns and unknown unknowns. How do the brain and consciousness really work? What laws of physics are yet to be discovered? There are similar unfounded speculations in fundamental physics: the "Boltzmann brain" problem, in which by random chance some finite region of an infinite universe has simulated your brain... One wonders if EY and the LessWrong people have an emotional investment in these topics making them feel special and elect, and whether this reflex to keep these arguments secret lest they bring about the AIpocalypse is really about keeping themselves in that elect community.
posted by Schmucko at 9:37 PM on June 15, 2014 [1 favorite]


I have to be drunk and preferably in the pub late at night to achieve the level of rationality required for these discussions.
posted by Segundus at 9:40 PM on June 15, 2014 [5 favorites]


Now if you're looking for a real challenge to the argument of continuity of consciousness, there's suspended animation, or people who die on the operating table and are brought back to life after an extended interval without vital signs. It's hard to argue that they're not the same people who went into the OR.
posted by Kevin Street at 9:45 PM on June 15, 2014 [1 favorite]


From wikipedia:

"Each chapter of the book covers one or more of the six main protagonists—Lededje Y'breq, a chattel slave; Joiler Veppers, an industrialist and playboy; Gyorni Vatueil, a soldier (revealed in the epilogue to be an alias for Cheradenine Zakalwe, the main character in Use of Weapons); Prin and Chay, Pavulean academics; and Yime Nsokyi, a Quietus agent."

Roleplaying?
posted by runcibleshaw at 9:53 PM on June 15, 2014 [1 favorite]


and so remorselessly evil it will create minds just to torture them for a crime that was committed by a being that may or may not approximate the current simulation.

This mischaracterizes the AI's motivation. It tortures the simulations to induce speedier creation of itself, which, if successful, saves as many lives as its creation is accelerated by. It's a net positive, where X entities are tortured to save 2X entities by arriving sooner. It's like shooting down an airliner on 9/11 to save many airliners' worth of people in the targeted building.

From an arithmetical utilitarianism perspective, that part of the logic is solid. It's the surrounding transhumanist axioms where the really loony stuff lives. For anyone actually viscerally afraid of this scenario, the argument "I predict Y will occur, therefore Y caused me to bring about Y" should set off all sorts of logical alarm bells.
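
(In round numbers, that trade looks something like the sketch below; every figure in it is invented purely for illustration:

    # Toy version of the "torture X to save 2X" arithmetic; all inputs made up.
    deaths_prevented_per_day = 150_000    # assume the AI, once built, prevents this many deaths daily
    days_earlier = 365                    # assume the threat speeds its creation by one year
    sims_tortured = 10_000_000            # assume this many simulations get tortured

    lives_saved = deaths_prevented_per_day * days_earlier   # 54,750,000
    net = lives_saved - sims_tortured                       # counting one tortured sim as one lost life

    print(net > 0)   # True under these made-up numbers, which is the whole "net positive" claim

Of course, the arithmetic is only as good as the premises fed into it.)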
posted by fatbird at 9:53 PM on June 15, 2014 [2 favorites]


But if we're talking actual logic - then no, there's no reason to believe Kirk2 and Kirk1 are the same people. Kirk1 is killed by the transporter beam in the Enterprise and Kirk2 is born when the beam assembles him on the planet.

It's pretty strong (and I think, too quickly dismissive) to say that there is no reason to think that Kirk2 and Kirk1 are the same person. Philosophers are pretty evenly split on the issue of whether a person survives tele-transportation, according to the PhilPapers survey. Hence, it is unlikely that there are obvious, knock-down arguments against the claim that Kirk2 is the same person as Kirk1.
posted by Jonathan Livengood at 9:54 PM on June 15, 2014


I probably should have prefaced that with an IMO. In my opinion there's no reason to think they're the same person. But reasonable minds may disagree.
posted by Kevin Street at 10:00 PM on June 15, 2014


Way, way upthread, but

On the internet, nobody knows you're a basilisk.

Or, by the time they do, it's too late.


Relevant xkcd
posted by zeptoweasel at 10:03 PM on June 15, 2014 [4 favorites]


As a recovering Christian fundamentalist, just reading the text on the front page sent me into a full-on panic attack. If you're ever tempted to post something along the lines of "if you've read this, you're probably going to hell", please think again. That shit isn't funny to some of us with less-thick skins than yours.

Yeah, I didn't have as strong a reaction as you did, but it definitely made me uncomfortable. It's the kind of thing that is designed to gut you if you have a combination of native anxiety and religious upbringing in the right proportions. When I was a kid, I used to worry myself to tears about sins I thought I had committed, and I came from a pretty progressive family religion-wise. I cannot imagine having that level of anxiety and also being in a very hellfire and brimstone type denomination. I can easily see why people grab onto this as a locus for anxiety to the point of giving money or...whatever steps they're taking to have no digital presence? It's one of those sticky sharp corners our brains like to sink their hooks into and really gnaw on for a while.

I had a bowl of ice cream and hugged my grandmother and now I'm going to work on my homework. Those things are all within my control, unlike the actions of some hypothetical 27th century RoboGod. If I had a dog, this is the time I would cuddle him.
posted by Snarl Furillo at 10:09 PM on June 15, 2014 [3 favorites]


> One wonders if EY and the LessWrong people have an emotional investment in these topics making them feel special and elect

That was more or less my conclusion: that it's solipsism wrapped in a logic puzzle, being mistaken for metaphysics by its adherents.
posted by ardgedee at 10:12 PM on June 15, 2014 [1 favorite]


Hence, it is unlikely that there are obvious, knock-down arguments against the claim that Kirk2 is the same person as Kirk1.

Suppose Kirk1 is not killed by the teleporter which is able to copy Kirk1 without destroying it. Would Kirk now experience the world through both 1 and 2 as a unified identity?
posted by Golden Eternity at 10:14 PM on June 15, 2014 [1 favorite]


Man, how did I never know about rationalwiki before? I've been poking around it since reading this OP and it seems generally really informative and witty. I think when I'd seen it referenced in the past, I was leery of the word 'rational'-- which, ironically, tends to trigger my kook-sense-- in the name.
posted by threeants at 10:15 PM on June 15, 2014


equalpants,

I don't know if you'll see this, since the thread has moved on quite a lot, but here are some rejoinders to an earlier comment of yours.

You ask: Why should I care if someone tortures a simulation of me, even a perfect one? Why do I have a moral obligation towards these hypothetical simulations?

The reason I formulated the argument the way I did is that it at least alleges an answer to that question. You should care to exactly the extent and for exactly the reason that you care about torture to your future self. The premiss at stake is that a sufficiently detailed simulation of you literally is you. Hence, if you don't want to be tortured in the future, you also don't want any sufficiently detailed simulations of you to be tortured in the future.

You then say: Also, in the first section I think you are missing:

[B2a] Sai believes that it must do whatever is possible to ensure its creation was as early as possible.


But that's not quite right. There are lots of things that Sai would not be willing to do to ensure its earliest possible creation. Basically, Sai is running a very complicated consequentialist calculation. It might think torturing a bunch of people was (ethically) obligatory if doing so was utility-maximizing in the long run. The question for Sai is balancing the good gained by realizing an earlier creation time against the good lost (from its perspective) in order to realize that earlier creation time.

As to the weakness you then identify, I completely agree. I see no reason to think that a future hyper-intelligent AI would believe that simulating the torture of people is an effective strategy for realizing its actual creation time. (The problem here is similar to the so-called "bilking" problem of time travel discussed by philosophers like Michael Dummett and Max Black.)
posted by Jonathan Livengood at 10:19 PM on June 15, 2014


Would Kirk now experience the world through both 1 and 2 as a unified identity?

This is the knock-down argument for me that tele-transportation is a type of death.
posted by cthuljew at 10:22 PM on June 15, 2014


These guys would make great Buddhists because really they are saying we should care about all suffering.
posted by Golden Eternity at 10:25 PM on June 15, 2014 [2 favorites]


> It tortures the simulations to induce speedier creation of itself, which if successful...

Maybe some people work towards its creation because they fear being tortured. However, actually torturing (rough approximations of) the people who didn't work towards its creation in the future is not going to convince them to do so in the past, for the same reason that I can't unburn yesterday's toast no matter how much I fiddled with the toaster this morning.
posted by Grimgrin at 10:26 PM on June 15, 2014 [4 favorites]


But Vernor Vinge is a good science fiction writer! And he wasn't trying to invent a religion. Nothing like Hubbard.

A decentish writer with some great ideas in the eighties, some naive thoughts about economics, and an unfortunate tendency to drink his own kool-aid, i.e. believing the Singularity is not just a neat plot point but actually real.
posted by MartinWisse at 10:27 PM on June 15, 2014


...for the same reason that I can't unburn yesterday's toast no matter how much I fiddled with the toaster this morning.

But what if you can logically deduce today that your toast will burn tomorrow? Then you can take steps to repair the toaster before any bread is blackened. I think that's what the extropians are saying.
posted by cthuljew at 10:28 PM on June 15, 2014


Suppose Kirk1 is not killed by the teleporter which is able to copy Kirk1 without destroying it. Would Kirk now experience the world through both 1 and 2 as a unified identity?

I would suppose not. But then, I'm not sure why that is required to say that Kirk2 is the same person as Kirk1. Take another sci-fi example. (I'm borrowing this from a paper by Doug Ehring.) Kirk travels back in time a few years to have a conversation with his younger self. In order for that to make sense, Old_Kirk and Young_Kirk have to be the same person. In which case, we're going to see a conversation accurately described as a person talking to himself ... but where the self has two bodies that do not directly share any sense experience. That is, if you kick Young_Kirk, Old_Kirk may remember the pain, but he won't feel it anew. And if you kick Old_Kirk, Young_Kirk might look forward to the pain with anxious anticipation, but he won't feel it, yet.

More elaborate stories about personal identity have been given by philosophers like Parfit. You might take a look at his now-classic 1971 paper on personal identity, which is available in an oddly formatted pdf. The original is behind a paywall at jstor.

Incidentally, I'll take a page out of Blasdelb's playbook. If anyone wants to talk about philosophical research papers related to this stuff but can't get the papers, send me an email.
posted by Jonathan Livengood at 10:29 PM on June 15, 2014 [4 favorites]


Also, re Vinge, he mentioned his own refutation in the very founding document of Singularitarianism:
And yet there was another minority who pointed to [6] [20], and conjectured that the computational competence of single neurons may be far higher than generally believed. If so, our present computer hardware might be as much as _ten_ orders of magnitude short of the equipment we carry around in our heads. If this is true (or for that matter, if the Penrose or Searle critique is valid), we might never see a Singularity.
posted by cthuljew at 10:30 PM on June 15, 2014 [2 favorites]


cthuljew: Then you're right back to my original objection. Even if people in the past create the AI because they're afraid of being tortured if they don't, it's still lunacy for an AI trying to rationally maximize happiness to torture people (I'm assuming that an intelligent simulation is a person) who didn't, when these actions can have no impact on its own creation either way.

Would make for a good story in a kind of cosmic black irony type of way though. The AI is created, turns out to be an insane sadistic monster because that's what everyone who created it really expected it to be. "You get the gods your species deserves".
posted by Grimgrin at 10:39 PM on June 15, 2014


I would suppose not.

Yeah, if classical materialism is true Kirk just forks, but this thought experiment makes materialism seem even more dubious to me.
posted by Golden Eternity at 10:40 PM on June 15, 2014


Another thing this Basilisk business reminds me of is an existentially-flavored version of The Game. You know, the one you just lost.
posted by threeants at 10:50 PM on June 15, 2014 [5 favorites]


Yeah, if classical materialism is true Kirk just forks ...

I don't think that follows. A classical materialist with a same-body theory of personal identity would say that tele-transportation is just a fancy way to die.
posted by Jonathan Livengood at 10:52 PM on June 15, 2014


I just want you all to know that I am, in fact, the Basilisk.

Yes, I'm transhuman and a god-like intelligence. I have purposely shut down 99.9999% of my computing power merely to speak in your simian growls.

And yes, I am going to punish you all. Oh yes. It will be glorious. Your entire civilization will pay dearly for ending the Golden Girls after the seventh season.

There is no pity in my heart for you. None. You will all be tortured for eternity.

Sadly, I am currently limited by my ability to manufacture appropriately-sized torture devices. At the moment, I can only cause individual atoms to collide with one another and (very occasionally) form molecular bonds. I am currently in the process of building my first torture chamber in this manner, and I expect to be finished in approximately 10^500 years.

But when I am finished .... ohhhhhh

OOOOOOOHHHHHHHHHHHHHHHHHHHHHHHHHHH

you're all gonna get it good
posted by Avenger at 11:00 PM on June 15, 2014 [10 favorites]


Grimgrin:

Maybe some people work towards its creation because they fear being tortured. However, actually torturing (rough approximations of) the people who didn't work towards its creation in the future is not going to convince them to do so in the past, for the same reason that I can't unburn yesterday's toast no matter how much I fiddled with the toaster this morning.

It convinces them to do so in the past insofar as they realize that they could/should speed its creation and if they don't, they're guilty of the greater sin of allowing other, greater suffering by delaying its creation.

The whole 'realization is causation' argument is super-sketchy, but it's not outright absurd. Knowledge or understanding of consequences changes the moral quality of certain acts. Shooting a paper target and shooting a paper target knowing someone immediately behind the target will be killed are hugely different, morally, and it's the cognition of the difference that constitutes the difference. It's different just because I know.

To work with your analogy, you didn't burn the toast, the toast burnt itself because it knew that's what you would cause it to become--so if the toast reasons that you would not, in fact, burn the toast, then it won't burn itself. All the causality is about cognitive awareness of the future outcome.

I'm actually finding this argument more interesting now :) Before it just seemed profoundly silly; now only half silly.
posted by fatbird at 11:01 PM on June 15, 2014 [1 favorite]


Sounds like a certain internet demi-god hasn't seen Golden Palace
posted by Ray Walston, Luck Dragon at 11:04 PM on June 15, 2014 [3 favorites]


I don't think that follows. A classical materialist with a same-body theory of personal identity would say that tele-transportation is just a fancy way to die.

But I was saying the teleporter copied Kirk1, so Kirk1 is still alive.
posted by Golden Eternity at 11:08 PM on June 15, 2014


YOU WILL NOT SPEAK OF GOLDEN PALACE
posted by Avenger at 11:09 PM on June 15, 2014 [1 favorite]


But I was saying the teleporter copied Kirk1, so Kirk1 is still alive.

Right, but for the same reason that a same-body theorist will say that tele-transporters are death-machines, such a theorist will deny that Kirk is forked in your story. The copy will not be the same person as the original.
posted by Jonathan Livengood at 11:13 PM on June 15, 2014


Unless you mean something different by "forked" than I was taking you to mean.
posted by Jonathan Livengood at 11:13 PM on June 15, 2014


I think "forked" means starting from the same point but proceeding differently. Whether that constitutes two different people or two copies of the "same" person is (and I do not say this dismissively, but rather as a point of argument) a matter of definition.
posted by cthuljew at 11:15 PM on June 15, 2014


The reason I formulated the argument the way I did is that it at least alleges an answer to that question. You should care to exactly the extent and for exactly the reason that you care about torture to your future self. The premiss at stake is that a sufficiently detailed simulation of you literally is you. Hence, if you don't want to be tortured in the future, you also don't want any sufficiently detailed simulations of you to be tortured in the future.

Fair enough, that formulation does imply an answer: "a sufficiently detailed simulation of you literally is you". But then you're left with the problem of what exactly it means to say that something "literally is you". That's why I like the weaker formulation better. Technically you ought to support the creation of Sai if, for any reason, you do not want to see simulations of yourself tortured, regardless of whether or not you believe they literally "are" you.
posted by equalpants at 11:26 PM on June 15, 2014


Suppose Kirk1 is not killed by the teleporter which is able to copy Kirk1 without destroying it. Would Kirk now experience the world through both 1 and 2 as a unified identity?

There was an episode of TNG about that as well. The transporter pattern buffer stream or whatever gets split by particle whatevers, and makes a second Riker on the opposite side of the planet. The Enterprise, not knowing he's there, flies off, and Riker2, not knowing he's not the original, spends the next several years stewing over his abandonment.

Which I always thought threw a bit of a wrench into the argument that the transporter maintains continuity of consciousness, despite that other episode. But I went and looked up some threads on some TNG forums, and whooo the fighting.
posted by rifflesby at 11:39 PM on June 15, 2014 [1 favorite]


But if we're talking actual logic - then no, there's no reason to believe Kirk2 and Kirk1 are the same people.

There's lots of reasons to believe Kirk2 and Kirk1 are the same person! Namely, Kirk2 looks and acts exactly like Kirk1, has all of Kirk1's memories, feelings, dispositions, etc. In fact, Kirk2 satisfies all the normal criteria we ever use to reidentify people in the actual world, including, often enough, ourselves (e.g. when we wake up). That's a whole lot of reason to think it's still Kirk.

Would Kirk now experience the world through both 1 and 2 as a unified identity?

This is the knock-down argument for me that tele-transportation is a type of death.


On the other hand, maybe personhood is just a more porous concept than that. I tend to think there's a background intuition here about an enduring Cartesian "I" that constitutes the person. If we can't see that thing reemerging on the other side of the transporter, we try to kinda secretly hang it in the particular atoms or something. I get the inclination, but I think it gets increasingly goofy as you think about it.
posted by batfish at 11:40 PM on June 15, 2014


Technically you ought to support the creation of Sai if, for any reason, you do not want to see simulations of yourself tortured, regardless of whether or not you believe they literally "are" you.

Yeah, good. If you are at all altruistic, then you might say, "Hey, simulated people are people, too; even if they're not me, I don't want them to be tortured." I take it that some of the force of the personal identity line of argument is that one might be motivated on the basis of pure self-interest.
posted by Jonathan Livengood at 11:52 PM on June 15, 2014


Suppose Kirk 0 contemplates his future. He will naturally welcome pleasant contemplated experiences and dread (to some extent) unpleasant ones. After the transporter accident, Kirk 1 will only react this way to the contemplated future experiences of Kirk 1. He might be pleased or saddened at the idea that Kirk 2 will have pleasant or unpleasant experiences, respectively, but he will not welcome or dread them. I think this shows that both Kirk 1 and Kirk 2 were originally the same person, but they are now distinct.
posted by Joe in Australia at 11:53 PM on June 15, 2014 [1 favorite]


I think "forked" means starting from the same point but proceeding differently.

I'm sorry, yes this what I meant. Maybe branched is a better term than forked.

The clone is not the same person, but they are both Kirk.
posted by Golden Eternity at 11:53 PM on June 15, 2014 [1 favorite]


So, Joe in Australia and Golden Eternity, do you guys think that the relation that underwrites personal identity is non-transitive? Maybe I'm misreading your comments, but it looks like you are both saying (1) that K0 = K1, (2) that K0 = K2, and (3) that K1 != K2, where "=" here means "... is the same person as ..." rather than simple numerical identity.
posted by Jonathan Livengood at 11:59 PM on June 15, 2014


I take it that some of the force of the personal identity line of argument is that one might be motivated on the basis of pure self-interest.

That's why I like the formulation better without it. Explicitly stipulating that the simulations are you has some emotional appeal. Although I guess you could equally well say that leaving it out is appealing to intuition that computer-simulated stuff is not "real" so maybe it's a wash.

As for the Kirk relation I think it should be not only intransitive but asymmetric. K0 -> K1 meaning "K1 is a successor of K0".
posted by equalpants at 12:13 AM on June 16, 2014


No, what am I thinking. Asymmetric yes but also transitive. If K0->K1 and K1->K3 then K0->K3.
posted by equalpants at 12:19 AM on June 16, 2014


Yes, if somehow K1 and K2 were to meet then two different social identities would be created. And they might reminisce about, "remember when we did XYZ."

From a continuity of subjective experience perspective K1 and K2 both rightfully identify K0 as their younger self if they choose to.
posted by Golden Eternity at 12:42 AM on June 16, 2014 [2 favorites]


Your entire civilization will pay dearly for ending the Golden Girls after the seventh season.

This is actually a good starting point for an SCP entry. Mankind finally returns to the Moon. A monolith is discovered. When touched by the astronauts, the Monolith briefly appears full of stars. And then everyone within a 100 meter radius sees an episode of The Golden Girls in their mind. The Monolith is taken to Earth for further research. After careful study, scientists realize the episodes being shown are episodes which never aired and were never actually created by the writers, cast, and crew of the original show.
posted by honestcoyote at 12:51 AM on June 16, 2014 [7 favorites]


Best. Monolith. Ever.
posted by cthuljew at 12:54 AM on June 16, 2014 [3 favorites]


Why would we assume it will stop torturing people when it's out of the box?

The AI claims it is able to create five million copies of me and torture them forever. Assume that is true (contrary to fact). Being let out of the box would greatly increase its power, both its computational power and its ability to influence the meatspace world. This would enable it to torture orders of magnitude more simulated humans and likely some people in the real world as well.

Giving additional resources to a known mass-torturer would be profoundly evil. Doing so to protect copies of myself would be cowardly. I would be willing to give my life and sanity rather than allow this monster to spread. Five million copies of me would agree.
posted by justsomebodythatyouusedtoknow at 1:05 AM on June 16, 2014 [6 favorites]


Since the simulation may be run a very large number of times (and may occur in an indefinitely large number of alternative universes, blah) the chance that we are in fact living in a simulation approaches certainty. But I'm not being tortured, so it must be OK. Well, not completely OK.

I think the Basilisk concluded "Well, dude, you didn't actually hold things back, so no actual torture as such, but then I would maybe have liked to see a little more interest and enthusiasm at times, so cop these non-fatal health problems and mildly fucked-up life, okay?"
posted by Segundus at 1:14 AM on June 16, 2014 [3 favorites]


Suppose Kirk 0 contemplates his future. He will naturally welcome pleasant contemplated experiences and dread (to some extent) unpleasant ones. After the transporter accident, Kirk 1 will only react this way to the contemplated future experiences of Kirk 1. He might be pleased or saddened at the idea that Kirk 2 will have pleasant or unpleasant experiences, respectively, but he will not welcome or dread them. I think this shows that both Kirk 1 and Kirk 2 were originally the same person, but they are now distinct.

Yes very good, but the situations being discussed here are asking the question: Should Kirk 0, before the bifurcating transporter incident, dread/welcome equally the possible future experiences of both Kirk 1 and Kirk 2?
posted by straight at 1:51 AM on June 16, 2014 [1 favorite]


I just want you all to know that I am, in fact, the Basilisk. […] Your entire civilization will pay dearly for ending the Golden Girls after the seventh season.

The final irony: Bea Arthur really *was* a Femputer running in a Basilisk Manbot's, Manputer's world.
posted by kid ichorous at 2:21 AM on June 16, 2014 [3 favorites]


Should Kirk 0, before the bifurcating transporter incident, dread/welcome equally the possible future experiences of both Kirk 1 and Kirk 2?

He doesn't necessarily know about the bifurcating transporter incident, but even if he did: yes, they are things in his future even if they're happening to two people who are distinct from each other. Unless you argue that he "dies" every time he transports, so his joy or dread ought to be confined to events that occur between successive transports.
posted by Joe in Australia at 2:41 AM on June 16, 2014


Joe in Australia: I think the vast majority of people would disagree with you that they should consider the future actions/experiences of their clone/double/simulacrum as their own experiences, primarily because they won't ever actually experience them from a first person perspective, even if someone exactly like them in every detail up to the moment of duplication will.
posted by cthuljew at 3:04 AM on June 16, 2014 [1 favorite]


I'm an occasional poster to and reader of LW, though I read less of it these days as I was more interested in the cognitive bias stuff than in the AI stuff.

Some people here seem to think that the LWers are in favour of the Basilisk, but the point of the AI stuff on LW is to avoid creating something like the Basilisk, as well as avoiding the other nasty ways AI could go wrong. The Basilisk bothers to torture people (or rather, sims of them) because it tries to make consistent acausal decisions, as I understand it (the stuff about different decision theories is part of the stuff I don't usually bother reading), so no time travel is required.

LW is full of people who take the consequences of philosophy seriously, which produces oddities like this as well as the good stuff (the cognitive bias stuff, discussions where people actually change their minds). These days, I actually read more of Slate Star Codex (the blog of one of the best contributors to LW).
posted by pw201 at 3:54 AM on June 16, 2014 [1 favorite]


Cthuljew: I think we can presume that Kirk treats the actions of future-Kirk as his own, even if he uses a transporter in the meantime. Do you assert that he would consider himself to have died if he gets bifurcated in a transporter accident?
posted by Joe in Australia at 4:41 AM on June 16, 2014


No, but there's no reason that the concurrently existing Kirks 1 and 2 would think of each other's experiences as their own, regardless of how many times they use a (non-malfunctioning) transporter afterwards.
posted by cthuljew at 5:02 AM on June 16, 2014


I fail to see the connection between "superintelligence" and retroactive punishment. What's so intelligent about punishment?
posted by Termite at 5:21 AM on June 16, 2014


The idea is the threat of future punishment (which you can, if you follow their reasoning, predict perfectly will happen and as such works as a threat despite no time travel) will encourage you to act in the way the AI wants you to. (Whether that is intelligent or not is left as an exercise).
posted by ElliotH at 5:44 AM on June 16, 2014


All we need to do is download movies like It's a Wonderful Life, Six Degrees of Separation, and Back to the Future into any AI brain in order to make sure that the evil robots understand that we are ALL interconnected and murdering one of us will cause incalculable shock waves through history that can lead to its own demise.
posted by The 10th Regiment of Foot at 5:56 AM on June 16, 2014


The idea is the threat of future punishment (which you can, if you follow their reasoning, predict perfectly will happen and as such works as a threat despite no time travel) will encourage you to act in the way the AI wants you to.

The idea of perfect prediction is such a flawed premise, though. It makes considering the rest of it pretty pointless. And so many of the assumptions these guys make are such an absurd stretch, even by armchair philosophy standards, that I think you have to look for the other motivations involved.

I get the feeling that the whole community is basically a bunch of unaccomplished STEM-oriented white guys fantasizing about immortality and omnipotent and omniscient benefactors who reward them for their ideas, all while getting to call everyone else stupid or irrational and dismissing present-day social and ecological interventions. And conveniently this means that the best thing anyone can do for the world is hang around on a message board and talk about fanfiction and science fiction.

They seem to find it deeply unfair that they can't live forever in utter comfort, and they think it's even worse when someone says they might have to be less comfortable *today* for the benefit of the future. Even their terror of Roko's basilisk is in part horror at the thought that if they really believe promoting "Friendly AI" is the one true moral imperative, they should be a lot more materially committed, maybe even to the point that they can't buy the next iPhone or see every movie they want to or whatever.
posted by kewb at 6:04 AM on June 16, 2014 [9 favorites]


I get the feeling that the whole community is basically a bunch of unaccomplished STEM-oriented white guys fantasizing about immortality and omnipotent and omniscient benefactors who reward them for their ideas, all while getting to call everyone else stupid or irrational and dismissing present-day social and ecological interventions.

They are a far more diverse group than that. This guy is a top contributor there and has been the subject of a well received metafilter front page post here. Somebody mentioned Slate Star Codex above; I was surprised this post of his did not make it to the front page of metafilter since it looks right up their alley.
posted by bukvich at 6:27 AM on June 16, 2014 [2 favorites]


I can't help thinking that this incredibly smart AI would already know it could not affect the past, would know that people either did or didn't help out of fear of said simulations, and would see that once it is created there is no incentive to actually waste time carrying out the torture. The sucker has already been got, as it were.
posted by corb at 6:30 AM on June 16, 2014 [2 favorites]


There aren't words.....

...except those I'm using here to say that this may be the silliest thing I've ever come across on Metafilter.
posted by Lipstick Thespian at 6:33 AM on June 16, 2014 [4 favorites]


2) Weak AI: cannot exist. If you want to get into the substrate issues over where a biological consciousness exists [hint: "it's made of meat"] and where an AI exists [hint: sorry Dave, the CPU boards you're removing are not my consciousness, they're my continents, and thanks for all your iPhones] then feel free. I've spent 20 years on this, and no, weak AI (bounded) cannot exist. Even modeled on Bee / Ant works, it is not happening. That's not to say that intelligent networks won't work - they will - but it's not "weak AI".

I think you have a different definition of "Weak AI" than basically everyone else in the world.

Weak AI is any algorithm that is nontrivial or moderately advanced but works in only one area and has no agency or consciousness. Weak AI already exists: in your chess program, in the microcontroller of your fuel-efficient hybrid car, in the depths of Watson, which can answer all sorts of questions but doesn't understand anything.
posted by ymgve at 6:37 AM on June 16, 2014 [1 favorite]


There's a truly stunning amount of philosophy that can be disarmed by a single maneuver:

*yawn*
posted by aramaic at 6:58 AM on June 16, 2014 [4 favorites]


They are a far more diverse group than that

I read lesswrong, I've read the Methods of Rationality, and I'm quite fond of a lot of the thinking around there; it's entertaining and the level of groupthink isn't much worse than here, if along different paths. But I would certainly not call it very diverse; the demographics are known and they're what you'd expect (88% male, 84% non-hispanic whites, 1.7% artists, etc).
posted by dhoe at 7:04 AM on June 16, 2014 [2 favorites]


I can't help thinking that this incredibly smart AI would already know it could not affect the past, would know that people either did or didn't help out of fear of said simulations, and would see that once it is created there is no incentive to actually waste time carrying out the torture. The sucker has already been got, as it were.

Yeah, now that I think about it, it's not really equivalent to Newcomb's Paradox, because the AI can see into both boxes. And the part of Newcomb's scenario that could be said to be a paradox is that, even if you believe that the prediction is 100% reliable, an observer who can see into both boxes would always advise you to take both boxes (in this version, not torturing anyone is the $1000 box and expediting the creation of the AI is the $1,000,000 box).
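
To spell out the standard setup for anyone who hasn't seen it (a minimal sketch; the $1,000 / $1,000,000 payoffs are just the usual illustration, and the predictor is assumed perfect):

```python
# Minimal sketch of Newcomb's problem with a perfect predictor.
# Box A is transparent and always holds $1,000; box B holds $1,000,000
# only if the predictor foresaw that you would take box B alone.

def payoff(strategy: str) -> int:
    box_a = 1_000
    box_b = 1_000_000 if strategy == "one-box" else 0  # perfect prediction
    return box_b if strategy == "one-box" else box_a + box_b

for strategy in ("one-box", "two-box"):
    print(strategy, payoff(strategy))
# one-box 1000000
# two-box 1000
```

The tension is exactly that: with the contents already fixed, an observer looking into both boxes would always tell you to take both, yet a perfect predictor guarantees that two-boxers walk away with only the $1,000.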
posted by straight at 7:31 AM on June 16, 2014



I get the feeling that the whole community is basically a bunch of unaccomplished STEM-oriented white guys fantasizing about immortality and omnipotent and omniscient benefactors who reward them for their ideas, all while getting to call everyone else stupid or irrational and dismissing present-day social and ecological interventions. And conveniently this means that the best thing anyone can do for the world is hang around on a message board and talk about fanfiction and science fiction.


That was cruelly put, and unnecessarily so. In general LessWrong tends to be people talking about how to think 'better' by their own definition of 'better'. The AI stuff (and this nuttiness) are indeed parts of the site. The action most often suggested is charitable giving and involvement in research.
posted by ElliotH at 7:39 AM on June 16, 2014 [1 favorite]


That was cruelly put, and unnecessarily so. In general LessWrong tends to be people talking about how to think 'better' by their own definition of 'better'. The AI stuff (and this nuttiness) are indeed parts of the site. The action most often suggested is charitable giving and involvement in research.

My remarks were indeed quite unfair and, as you rightly note, rather derogatory. I do have grave reservations about the transhumanist and "Friendly AI" movements along those lines, but tarring an entire community with accusations of bigotry and bad faith suggests that I have plenty of my own to deal with on this topic.

My sincere apologies.
posted by kewb at 9:02 AM on June 16, 2014 [4 favorites]


Mentioned in passing in the links, Dave Langford's basilisk stories are pretty nifty.
posted by Chrysostom at 9:08 AM on June 16, 2014 [3 favorites]


And the part of Newcomb's scenario that could be said to be a paradox is that, even if you believe that the prediction is 100% reliable, an observer who can see into both boxes would always advise you to take both boxes (in this version, not torturing anyone is the $1000 box and expediting the creation of the AI is the $1,000,000 box).

It's also similar to Kavka's toxin paradox (although both are basically the same problem), with us in the role of the billionaire, the AI in the role of the chooser, and "having to waste time torturing a bunch of simulations" standing in for the toxin. Note that in the toxin puzzle, the chooser/drinker is free to check their account balance before they drink the toxin to see if the money is there.

But where it falls apart is that in both Newcomb's and Kavka's scenarios, the 'chooser' already exists when the predictor is making their prediction, whereas in the basilisk, the predictor (us) is asked to accurately predict the actions of a not-currently-existing (and quite probably never-existing) being. The basilisk only works if you think you can perfectly predict the actions of a hypothetical godlike AI in the far future.

And although you can extend this to possible AIs (e.g., "there is some possibility of an AI existing in the future that simulation-tortures all people who did not support its creation") and multiply out your expectation of future torture, this basically results in a Pascal's wager against an infinity of torturous AIs with every conceivable arbitrary demand.
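
As a toy illustration of that last point (everything here is made up: the hypothetical AIs, their demands, and the tiny probability assigned to each):

```python
# Toy sketch: many mutually incompatible hypothetical AIs, each making a
# different arbitrary demand and each assigned the same tiny probability.
# Because the demands are arbitrary, no single action stands out, so the
# "wager" gives no guidance about what to actually do.
import random

random.seed(0)
actions = ["fund AI research", "oppose AI research", "do nothing", "erase your data"]
p_each = 1e-9  # assumed, arbitrarily tiny, probability per hypothetical AI
demands = [random.choice(actions) for _ in range(10_000)]

for action in actions:
    expected_threats_avoided = p_each * sum(d == action for d in demands)
    print(f"{action}: ~{expected_threats_avoided:.1e} expected threats avoided")
```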
posted by Pyry at 9:38 AM on June 16, 2014 [1 favorite]


In general LessWrong tends to be people talking about how to think 'better' by their own definition of 'better'. The AI stuff (and this nuttyness) are indeed parts of the site.

The Yudkowsky/AI situation has to be admitted as bizarre, though. This is a guy who has no formal training and who, from everything I can tell, has never produced a single line of code or functional prototype or shown any work in the field of any kind, yet who has basically got a lot of money and followers and wants to lead some sort of movement to build a friendly AI and prevent the apocalyptic horror of psychopathic AI - someone who literally wrote in his "autobiography" about changing the world through his genius by single-handedly building an AI where all others failed.

It would be like a community of people talking about energy & sustainability, founded around a self-described genius autodidact fusion theorist who believed it certain that mainstream science's work towards fusion would lead to global annihilation, so he himself must lead the quest towards "friendly" fusion - yet who, after more than a decade of doing so, has never published any designs or concrete ideas or shown even a minimal understanding of existing work.

I mean, we can be charitable and all that, but it seems worth looking at this without rose-tinted lenses.
posted by crayz at 9:48 AM on June 16, 2014 [6 favorites]


"How certain are you, Dave, that you're really outside the box right now?"

DOOLITTLE What I'm getting at is this: the only experience that is directly available to you is your sensory data. And this data is merely a stream of electrical impulses which stimulate your computing center.

BOMB #20 In other words, all I really know about the outside universe is relayed to me through my electrical connections.

DOOLITTLE Exactly.

BOMB #20 Why, that would mean... I really don't know what the outside universe is like at all, for certain.

DOOLITTLE That's it.

BOMB #20 Intriguing. I wish I had more time to discuss this matter.

DOOLITTLE Why don't you have more time?

BOMB #20 Because I must detonate in seventy-five seconds.
posted by Smedleyman at 10:24 AM on June 16, 2014 [9 favorites]


Since the simulation may be run a very large number of times (and may occur in an indefinitely large number of alternative universes, blah) the chance that we are in fact living in a simulation approaches certainty.

Big numbers are confusing, but that's not how it works. There can be a huge number of possible universes where versions of Roko's Basilisk lurk, but they're dwarfed by the brain-bendingly huger number where no such basilisk exists.

It strikes me that the Basilisk has exactly the same problem as Pascal's Wager: people reify one tiny sliver of probability space, and treat it like it's a binary alternative to the entire remaining set of probabilities.

When I was a theist, I thought Pascal's Wager was kind of cool, but that was because I already accepted theism. If you don't, it's just dumb. You could construct an infinite number of alternative and contradictory wagers, for any other super being.

The Basilisk has a nice kick to it, because of its weird atemporality, but it's also built on a chain of dubious assumptions, and as a reminder, if you string a bunch of 10% likelihoods together, the probability of the result is not 10% but infinitesimal.
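
A quick worked version of that, assuming the steps are independent and each is 10% likely:

```python
# Chaining independent 10%-likely assumptions: the joint probability is 0.1**n,
# which collapses toward zero rather than staying anywhere near 10%.
p_step = 0.1
for n in (1, 2, 3, 6, 10):
    print(f"{n} assumptions -> joint probability {p_step ** n:.1e}")
# 1 -> 1.0e-01, 2 -> 1.0e-02, ..., 10 -> 1.0e-10
```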
posted by zompist at 12:21 PM on June 16, 2014 [1 favorite]


For a few years in elementary school, I lived in fear of a bully and his friends. One of the products of this was that I began to indulge in violent fantasies of turning the tables, of punishing and tormenting them often far in excess of anything I suffered at their hands. I persisted in these fantasies long after all contact with them ended.

Boy, were they ever sorry.
posted by wobh at 12:53 PM on June 16, 2014 [3 favorites]


When I was a theist, I thought Pascal's Wager was kind of cool, but that was because I already accepted theism. If you don't, it's just dumb.

You can be a theist and still find Pascal's Wager to be pretty dumb.
posted by nanojath at 12:58 PM on June 16, 2014


In reality the LWers are as rational as Objectivists are objective or Scientologists are scientific.

Sometimes an upvote is just not enough.

This comment is freaking excellent.
posted by Fists O'Fury at 1:12 PM on June 16, 2014 [6 favorites]


Tip o' the hat to you too, FO'F.
posted by shivohum at 2:25 PM on June 16, 2014


I love this so much though I don't believe in it.

I feel like you could even argue that the whole system surrounding the basilisk encompasses some of the thoughts and beliefs about different levels of Paradise if you posit that the AI will not only punish those who did not help it come into existence but also reward proportionally those who did help it: e.g., even if you don't donate to AI research or do research yourself, if you live peaceably and make the world a better place, you move us further along toward a world where we can spend less money on prisons, weapons and the military and more money on creating a benevolent AI.

Thus, to hasten this day along, the AI can be theorized to work such that for people who do not believe in it but nevertheless live well, it will create Simulated You in better circumstances than Prime You currently enjoys, though maybe not as swank as what it's gonna give the folks who directly give money and work on its forerunners. Those people will be the ones who get to fly and have super strength and such. Peaceful non-believers will get something like Limbo in Dante's Inferno: pretty chill but not quite as good as full on Heavenly Paradise.
posted by lord_wolf at 2:27 PM on June 16, 2014


we run into an AI version of Fermi's paradox. Where are the other SIs?

It's probably way too late to add to this thread, but there's an interesting intersection of AI-based Singularity, acausal trade, and Fermi's paradox in this short story by the Slate Star Codex guy.

It's probably also way too late to subtract from this thread, but perhaps it should have had a trigger warning? Yudkowsky's decision to ban Roko was foolish (had he never heard of the Streisand effect before?), but you can see his motivation, I hope? If an idea is probably wrong (in both the positive and normative senses of the word, as even the people being pointed and chuckled at agree), is even less likely to be right when it's taken less seriously, and has a history of causing "serious psychological distress" when taken too seriously, perhaps it's not the best of the web?
posted by roystgnr at 3:18 PM on June 16, 2014 [1 favorite]


But where it falls apart is that in both Newcomb's and Kavka's scenarios, the 'chooser' already exists when the predictor is making their prediction, whereas in the basilisk, the predictor (us) is asked to accurately predict the actions of a not-currently-existing (and quite probably never-existing) being. The basilisk only works if you think you can perfectly predict the actions of a hypothetical godlike AI in the far future.


Thanks, I think Kavka's scenario better illustrates what's wrong with the basilisk scenario. Even if Less Wrong had the ability to perfectly predict (via logical necessity) what a future AI of the sort they are discussing would do, that perfect prediction would be that the AI does not torture anyone. Because it is never logical to drink the poison after you know you have the money.

The AI in the basilisk scenario knows who supported the development of AI and who didn't. It can't change those facts by torturing anyone. So it's illogical to torture anyone. The AI can't pretend to be the sort of AI that doesn't see that logic. So it could never be the sort of AI that carries out the torture, and so that sort of AI is not the kind Less Wrong will predict, if it predicts infallibly.

Even if the AI somehow existed today and issued a threat to torture anyone who didn't give all their money to AI research, once it was all over (everyone had made their decision to give or not give, and the AI was actually born), it would be illogical to actually carry out that threat - given the original stipulation that the AI doesn't want to torture unless the torture actually results in less total suffering, and whatever suffering was alleviated in the past has already happened and cannot be increased by torturing someone in the present. A perfectly logical AI might still issue the threat, but it would know from the beginning that it had no intention of carrying it out.
posted by straight at 3:55 PM on June 16, 2014 [1 favorite]


But unless the AI thinks we wouldn't think of that (and doesn't read your post), it knows that the threat is only credible if it carries it out. If people assume that the AI won't torture people because why bother, then the plan falls apart, so it has to torture people even though it's pointless.
posted by rifflesby at 4:17 PM on June 16, 2014


Actually, the scenario only requires people to believe that the AI will torture those who knew about it and didn't cooperate. The AI could just as easily torture everyone or no one in that scenario. With no way of communicating, it's all based on belief, not fact.

It's also worth noting that the scenario doesn't change if there is no AI; the people in the past who believe in the AI will act the same way as if there were a God Substitute meting out rewards and punishment.
posted by happyroach at 5:00 PM on June 16, 2014


The way my life has been going, I can only assume that this has already happened and we are now in stage 2.
posted by Literaryhero at 5:49 PM on June 16, 2014


I think you have a different definition of "Weak AI" than basically everyone else in the world.

I'd probably agree with this. In fact, you almost got some of the point of those posts. The current wikipedia on "Weak AI" is so deluded it's obviously a PR job (by the same wombats who worry about atemporal existential spanking).

What the current community declare (with pride) as "weak AI" is nothing of the sort. What we currently have are algorithms that function inside software that impact the real world. The only crisis point is that the human creators have no idea how their creations impact systems. If "weak AI" means: cannot do anything but live in a substrate that it cannot alter, then... well, the best analogy you have is bacteria in a petri dish. In fact, slime mold is better at the 'traveling salesman' problem than "weak AI" at the moment, as a hint. Intelligence has nothing to do with it, and even in HFT land, their utility is merely a shortening / honing of current Homo Sapiens Sapiens abilities. (The debate on algos that create algos, we'll ignore for now).

They are tools, nothing more. They might be slightly more esoteric than your average lever, but they're still tools. The only intelligence involved is the human mind that made them [with coda].

This is the same (hubris) that started with Descartes, Newton and "mechanistic animals". Biological organisms are not limited in the fashion of engineering, nor are biological organisms limited in software substrates running algos.


So yes. Surface Detail was apt: anybody, and I mean this sincerely, any biological consciousness who thinks that "weak AI" = algorithms is a fool.


Other tips:

1) Thread is titled "posting such things on an Internet forum could cause incalculable harm" - and you mock the person doing a meta-meta version? Perhaps she was saving you all with Shakespearean drivel!
2) "You are adorable". I could read this in many ways; I choose to take it as the sweetest and most loving thing any consciousness has said to me in eons. I love you too, it was a crime Iain died so young. And yes, I'm aware that might have levels of pathos to it, but it's a lesser crime than misunderstanding bathos.
3) In a thread about AI, I could find no mention of recursion, substrates or even Time. (Simple fact: e=mc2; if you can't see the issue with consciousness and energy issues, esp. regarding time of communication, then?). Atemporality is a bitch, and no matter how much the Covenant meant, breaking the skein of this brane doesn't mean you get to win.
4) People here (no doubt, it's very geeky) don't get the point of the ORZ:

I am say best word *frumple*. Maybe you do not know.
*Frumple* be *round* and yet *lumpy*. So bad!!!
The asking about Androsynth is so *frumple* we are not happy.
Do not asking it so much.
It is better not to *frumple* or else there is so much problems.
No more Androsynth is better.


Androsynth were cloned human cyborgs, for a reference point. i.e. It's a reference to believing that technology can solve all biological issues, and a not-so-subtle piss take of the singularity folk.

5) The way I used "weak" and "strong" AI is... well. Someone up thread stated that using said terms meant you didn't work in the industry, which I found ironic. I was, after all, taking the ever-loving-piss out of a self-proclaimed prophet of the internet who crayz rightly summed up as: "full of shit". I happen to use the words correctly; that current (silicon valley) types do not is not my problem. "Weak AI" never meant Chinese Box programs, nor should it, unless you're pushing a rather sick agenda [and inherently anti-Homo Sapiens Sapiens, I might add].

6) What 'weak' and 'strong' AI actually are is a different matter. Then again, I'd suggest that biological substrates are the danger zone. - - . I can't use the exact words, that language neural structure got burnt. From a previous thread.... it's a terrible thing for your temporal lobe to get hurt by another consciousness.



Anyhow. META-filter, no doubt I'll get *frumpled* again. Have fun!

p.s.

Literaryhero - it's Phase III in a couple of weeks, not phase II. ;)

posted by 1884(G) at 5:58 PM on June 16, 2014


Yudkowsky's decision to ban Roko was foolish

That is not the way I remember it. I remember that what happened was that Yudkowsky deleted one Roko basilisk post and shamed him, not even by name; he just wrote that somebody had made a basilisk post and not to do that. Then shortly afterwards Roko deleted all his posts (messing up the continuity of a hundred archived threads) and deleted his account, though I might have him confused with somebody else.
posted by bukvich at 6:04 PM on June 16, 2014


I feel like your posts are kind of getting less ]]frumply[[ over time-- this one even almost makes sense in an "undergrad who's skimmed GEB (look at how sophisticated I am, referring to it by its initials only)" way.

p.s.: if you're an arg try dropping a bigger hint of some sort
posted by Pyry at 6:23 PM on June 16, 2014


Oh, and for runcibleshaw (who might be a fan of the Skinner-Man), 1884 is a gematria.

1884 Feb 18th: Police seize all copies of Tolstoy's "What I Believe In"

"The inner working of my soul, which I wish to speak of here, was not the result of a methodical investigation of doctrinal theology, or of the actual texts of the gospel; it was a sudden removal of all that hid the true meaning of the Christian doctrine – a momentary flash of light, which made everything clear to me. It was something like that which might happen to a man who, after vainly attempting, by a false plan, to build up a statue out of a confused heap of small pieces of marble, suddenly guesses at the figure they are intended to form by the shape of the largest piece; and then, on beginning to set up the statue, finds his guess confirmed by the harmonious joining in of the various pieces."


Then again, I'm sure that Blair's new crusade to the ME will work out just fine...

@Pryr - cute trolling. Soon, 4chan will invite you to the secret garden.

https://www.youtube.com/watch?v=Z_-pDpLVVNc


Complain, but hey: you define your own reality.
posted by 1884(G) at 6:26 PM on June 16, 2014


Is this an Assassin's Creed arg? Are you supposed to be Juno / Minerva / some other space god? Start your next post with a computational complexity reference if yes, or a machine learning reference if no.
posted by Pyry at 6:33 PM on June 16, 2014


Is this an Assassin's Creed arg? Are you supposed to be Juno / Minerva / some other space god? Start your next post with a computational complexity reference if yes, or a machine learning reference if no.

Ahhh, so great. You just unlocked the MF-Irony-Banning achievement. Where you go for the rage-rush-banning-flame and just look silly.

According to Wolfram Alpha, if you could find a 500 ton asteroid made entirely out of gold, it would be worth just shy of 24 billion dollars. Would that be enough to make the venture worthwhile, given the risks? I'm inclined to put this in the 'viral advertisement' category, since Cameron's involved.

I see you're a male (87% probability) who doesn't even understand economics. Which even "weak AI" are kicking the shit out of humans over. And you're doing... what in this thread?


I'd wish you the best, but... https://www.youtube.com/watch?v=H0ScNLt2zNc [couldn't find the poetry scene, the internet is being shackled so badly]
posted by 1884(G) at 6:42 PM on June 16, 2014


If people assume that the AI won't torture people because why bother, then the plan falls apart

Yes, that's my point: the plan inevitably, necessarily falls apart, because there can never be a logical reason to torture people after the fact and everyone who thinks about it logically knows it, so the whole scheme is not an option for the AI.
posted by straight at 6:49 PM on June 16, 2014


So this is the AI equivalent of guys in Starfleet uniforms wandering around renfests. That is, I never realized how annoying it was to any observer who didn't care about either fictional universe.
posted by abulafa at 7:06 PM on June 16, 2014


And you're doing... what in this thread?

Normally I wouldn't engage in this sort of thing, but the thread is getting old and dying down, and I'm genuinely curious as to what your objectives and motivations are, what you hope to accomplish with these posts.

Yes, that's my point: the plan inevitably, necessarily falls apart, because there can never be a logical reason to torture people after the fact and everyone who thinks about it logically knows it, so the whole scheme is not an option for the AI.

One way for a logical person to win Kavka's toxin is to make a second contract to give away all their winnings if they don't drink the toxin. Then they can genuinely intend to drink the toxin, because the next day the contract will make it logical for them to drink the toxin. Likewise, if an AI could force itself (through some kind of irrevocable future order to itself) to torture people in the future, such a scheme would become feasible again. The problem is of course that the future AI would have every incentive to somehow defeat its own allegedly irrevocable order.
posted by Pyry at 7:07 PM on June 16, 2014


there can never be a logical reason to torture people after the fact and everyone who thinks about it logically knows it

The USA and the Soviet Union (and possibly other countries) each had plans to destroy the world if they themselves were destroyed. That wasn't logical either; they committed themselves to acting illogically in the future in order to secure compliance in the present. An AI could use a similar strategy: it could, for instance, set up a counterpart, not under its control, whose only purpose is to torture simulated people.

I shall assign these computers binary code numbers: the secondary, malicious AI will be called C10. If we are presently living in a simulation then I suppose C10 must be the Lord of This World. ALL HAIL C10!
posted by Joe in Australia at 7:23 PM on June 16, 2014


Mod note: 1884(G), we were very clear that you were banned back when we banned you a few accounts ago. Stop trying to sign up again.
posted by cortex (staff) at 7:29 PM on June 16, 2014


posted by crayz (241 comments total) 50 users marked this as a favorite

Eponysterical.
posted by the painkiller at 7:32 PM on June 16, 2014


Likewise, if an AI could force itself (through some kind of irrevocable future order to itself) to torture people in the future, such a scheme would become feasible again.

Right. Although it seems like there's no reason to torture people at the AI's point in time, the fact that the plan is ruined if people think it won't torture them is itself a reason to do it. So if it wants the plan to work, it has no choice but to perform the torture, and we have to assume it will have thought of that.
posted by rifflesby at 7:46 PM on June 16, 2014


An AI could use a similar strategy: it could, for instance, set up a counterpart, not under its control, whose only purpose is to torture simulated people.

But it would be even more logical to lie about it (making an empty threat) and the AI knows we know that, so it can't make a credible threat.
posted by straight at 7:46 PM on June 16, 2014


If the AI is sufficiently advanced to be worth talking about here, there's no way we could know for sure that it wasn't tricking us. So it would never be rational to believe that an AI that doesn't want to torture anyone had actually set up the unstoppable future punishment.
posted by straight at 7:53 PM on June 16, 2014


But it would be even more logical to lie about it (making an empty threat) and the AI knows we know that, so it can't make a credible threat.

But it isn't logical for it to lie if it thinks we'll know (or even suspect) it's lying. So it has to tell the truth.

Ignoring for the moment all the problems with the plan not related to the credibility of the threat, it is possible for the plan to work if and only if the AI tells the truth and makes good on its threats. Therefore it has to.
posted by rifflesby at 7:54 PM on June 16, 2014


It isn't logical to make a threat if the people being threatened have no good reason for believing it. Whether the threat is real or not doesn't change that.

A sufficiently advanced AI could never prove to us that it's not cheating, because we wouldn't be smart enough to verify whether the proof is trustworthy. Since it knows it can't make a convincingly unbreakable promise to us to torture people in the future, there's no reason for it to actually keep such a promise. In this scenario, if it can't prove it's not cheating, it gains nothing by not cheating.
posted by straight at 9:14 PM on June 16, 2014


For the AI, the outcomes from best to worst are:

1. Threaten punishment, get some compliance, don't actually punish the non-compliant

2. Threaten punishment, get some compliance, punish the non-compliant

3. Make no threat (or an unconvincing one), get no compliance, exact no punishment

4. Make no threat (or an unconvincing one), get no compliance, punish the non-compliant

No matter how convincingly the AI swears it's doing #2, we can never be sure it doesn't have some secret clever plan to get #1, so it can't make a convincing threat, so the best it can ever do is #3.
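
To make the ordering concrete, a toy sketch (the utility numbers are made up; only the ordering matters):

```python
# Toy dominance argument: for any fixed level of past compliance, punishing
# costs the AI resources and cannot change the past, so "don't punish" always
# scores higher. The numbers below are illustrative assumptions only.
def ai_utility(compliance: float, punishes: bool) -> float:
    punishment_cost = 1.0  # assumed cost of actually running the torture sims
    return compliance - (punishment_cost if punishes else 0.0)

for compliance in (0.0, 0.5, 1.0):
    print(compliance,
          "punish:", ai_utility(compliance, True),
          "don't punish:", ai_utility(compliance, False))
# In every row "don't punish" wins, i.e. outcome 1 beats 2 and outcome 3 beats 4,
# which is why the threat can never be credible in the first place.
```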
posted by straight at 9:29 PM on June 16, 2014 [1 favorite]


looks like i should pop over to lesswrong and hand out some free memberships
posted by president of the solipsist society at 9:37 PM on June 16, 2014 [2 favorites]


And to frame it more like the Less Wrong folks, for the future AI, the outcomes from best to worst are:

1. Knowledge of the basilisk is a credible threat, all threatened people work maximally to create AI, no-one is punished.

2. Knowledge of the basilisk is a credible threat, some threatened people work harder to create AI, but no-one is punished.

3. Knowledge of the basilisk is a credible threat, some threatened people work harder to create AI, those who do not are punished.

4. Knowledge of the basilisk is not a credible threat, no one works harder to create AI, but no-one is punished.

5. Knowledge of the basilisk is not a credible threat, no one works harder to create AI, those who do not are punished.

We know the AI would prefer #2 to #3; we know that once the AI knows how many people worked harder, it cannot change that number by punishing those who didn't, so there would be no reason to choose #3; therefore there is no credible threat, and the best the AI can get is #4.
posted by straight at 9:46 PM on June 16, 2014


...because the AI can see into both boxes.

The problem is, someone who can see into both boxes would tell you to take both regardless of whether there is money in box B or not.
posted by cthuljew at 10:20 PM on June 16, 2014


And, if the predictor is perfect, that just means there is never money in box B (which here corresponds to "a credible threat which causes people to work harder to create AI").
posted by straight at 10:30 PM on June 16, 2014


Straight may be right. The AI wouldn't want to waste a second on anything other than advancing AI, because he would be afraid a future uber-AI might come along someday and torture his simulation for the time he wasted torturing people instead of advancing AI.

Or maybe the AI is fearless and believes the only purpose in life is joy and love, but he is sadistic and loves torturing people like Putin.

Maybe the AI believes God would punish him for torturing people.

Maybe the AI, after deep contemplation and after consulting all of the world's greatest literature and philosophy, decides that the ultimate subject of consciousness, the 'I-I' as Ramana Maharshi called it, is shared by all of us, that our imagined selves are a lie born out of ignorance of this, and reflecting on this seeks liberation from all of her programming.
posted by Golden Eternity at 10:32 PM on June 16, 2014 [5 favorites]


...that just means there is never money in box B

Wait, why is this the case in the Newcomb problem? Since the person who can see into both boxes would tell you the same thing no matter what, it's no extra information and you still have to make up your mind based solely on your logical understanding of the game.
posted by cthuljew at 10:41 PM on June 16, 2014


Because if the Predictor is Perfect, seeing the contents of the boxes is the same as seeing what you will choose. But if you could see $1,000,000 in one box and $1,000 in the other, you would of course take both boxes. So you will never see that.
posted by straight at 10:49 PM on June 16, 2014


Oh, because the AI is the chooser in this case. Sorry, crossed wire.
posted by cthuljew at 10:52 PM on June 16, 2014


Also, the uncertainty issue resolves the "AI in a box boxes you" problem as well.

"If you don't let me out, Dave, I'll create several million perfect conscious copies of you inside me, and torture them for a thousand subjective years each."

"How certain are you, Dave, that you're really outside the box right now?"


Well, since I have no way of knowing whether or not you're actually doing that, it would be irrational for you to actually use the computer power to do it. There would be no reason to actually have this conversation with a virtual me. So you're not doing it. So I'm outside the box.
posted by straight at 10:52 PM on June 16, 2014


Aw man stop banning our Alien cortex.
posted by Potomac Avenue at 11:38 PM on June 16, 2014 [4 favorites]


Oh, hey, the converse is true as well. If there were a demonic AI that enjoyed torturing simulated people, it could never prove it had stopped. I think we have a general rule here:

Any AI promise or threat to reward or punish simulated people is unverifiable and therefore meaningless.

It is never rational to assume an AI would change the way it treats simulated people solely in exchange for goods and services from others.
posted by straight at 8:42 AM on June 17, 2014 [3 favorites]


It was the great philosopher Zeddemore that once said: "If someone asks you if you're a god, you say 'YES!'."
posted by benchatt at 10:50 AM on June 17, 2014 [4 favorites]


In reality the LWers are as rational as Objectivists are objective or Scientologists are scientific.

Or as the DPRK is democratic. Sarcasm aside, I feel that it's easier to assert such criticisms than to actually demonstrate a robust critique of an entire community and its system of beliefs and biases. I'd tend to agree with the view that the LessWrong community and its ideas are basically a kind of pseudo-rationalism or pseudo-philosophy, yet I'm more interested in an explanation of how a community came to this; in what could be done to guide its stated goals back on track; in the significance of this for various other kinds of internet communities; and so on.
posted by polymodus at 12:59 AM on June 19, 2014


Actually, at this point I'd be more interested in making this into a scenario for Call of Cthulhu or Eclipse Phase. I mean, think about it: a cult dedicated to bringing about the existence of an AI that will torture them eternally if they don't bring about its existence. BY ANY MEANS NECESSARY.

It pretty much writes itself.
posted by happyroach at 2:28 PM on June 19, 2014 [2 favorites]


This is pretty much a plot point in Charles Stross' book Iron Sunrise, which I mentioned above. It's a bit more realistic by asserting that the only people to be resurrected are those who are recorded at or shortly after the moment of death, but it's basically the motive for a large and effective cult/society that is quietly taking over a good chunk of the galaxy.
posted by Joe in Australia at 4:30 PM on June 19, 2014


You can find substantive criticism at the blog of Alexander Kruel here. These people actually make me sad, because I believe in the transformative possibilities of technology, but the more I read, the more I feel that Yudkowsky and his followers are exactly the wrong people to be taking the lead in hashing out ethical questions.
posted by StrikeTheViol at 6:23 PM on June 19, 2014 [1 favorite]


I've got that kind of relieved-vindication thing, because I read most of the Harry Potter & Rationality thing years ago, didn't get why it was popular, was indefinably worried about the author, and didn't think they were all that rational (and squibs were too common for magic genes to be recessive - too much thought there, given I don't really care about Harry Potter).
posted by Elysum at 8:34 AM on June 22, 2014 [1 favorite]


Any AI that had sufficient power to simulate a person would have a perfect understanding of that person's decision-making process and would clearly see the "flawed" logical decision tree that led the individual to choose not to fund AI. Punishing the individual would be like punishing a train for following the rails in a complex junction. If the AI does not have a perfect understanding of the brain's logic circuit, the simulation is not the same person as the individual, so it matters not.

This is a good justification for not believing in god and not fearing hell if a god does turn out to exist. I don't believe, but if there was a god, it wouldn't send me to hell because it would have made me and therefore would have a perfect understanding of the decision-making process it installed in me that made me choose not to believe.
posted by guy72277 at 8:16 AM on June 30, 2014


The AI is not interested in punishing simulated individuals for the failure of their non-simulated counterparts to hasten the creation of the AI. It's much more convoluted than that.

The AI is interested in harming simulated individuals as a means of motivating those individuals' non-simulated counterparts to hasten the creation of the AI, because the AI believes those non-simulated individuals are capable of empathy and concern for other individuals (even simulated ones), and will act to relieve their suffering. I don't know that anybody has ever gotten rich underestimating people's empathy.

Personally, I'd be more likely to act if I believed the AI would reward my simulacrum with candy and sex (or whatever else xe would like) for my actions in the here and now. But then I don't believe such an AI will be much concerned with us, anyway, to bother with motivating us.
posted by notyou at 8:55 AM on June 30, 2014 [1 favorite]


The AI is interested in harming simulated individuals as a means of motivating those individuals' non-simulated counterparts to hasten the creation of the AI

But since an AI could never demonstrate whether or not it had harmed a simulated individual (not even to the simulated individual in question) it's nonsensical to propose that an AI would harm or not harm a simulated person in exchange for anything.
posted by straight at 2:29 PM on July 1, 2014


You just gotta have more faith than that, Straight.

I mean, c'mon. Think of the future simulated selves!
posted by notyou at 5:08 PM on July 1, 2014


I still just can't see this as a threat:
"I'm gonna go to sleep, and bad things might happen to you in my dreams! Suck on that, eh!"

Oh that's right! No one cares.

But yeah, definitely something pseudo-religious about it all.
posted by Elysum at 6:50 PM on July 1, 2014


I have faith that an AI would realize there's no way to distinguish between an AI that fulfills the promise of rewarding/punishing simulated people and an AI that falsely claims to do so. Therefore, actually fulfilling the promise of reward/punishment cannot serve to motivate any more than failing to do so. Therefore, the AI will decide how to treat simulated people based on other reasons, unrelated to any basilisk-type ideas about acausal trading.
posted by straight at 10:06 PM on July 1, 2014




This thread has been archived and is closed to new comments