Person of the Year 2031
February 20, 2021 12:32 PM
As the earliest viable brain scan, MMAcevedo is one of a very small number of brain scans to have been recorded before widespread understanding of the hazards of uploading and emulation. He is considered by some to be the "first immortal", and by others to be a profound warning of the horrors of immortality.
"Do me a favor, boy."
"What's that, Dix?"
"This scam of yours, when it's over, you erase this goddamn thing."
posted by tspae at 1:13 PM on February 20, 2021 [42 favorites]
"What's that, Dix?"
"This scam of yours, when it's over, you erase this goddamn thing."
posted by tspae at 1:13 PM on February 20, 2021 [42 favorites]
Welp, that's absolutely horrifying.
I have a colleague in theology who presupposes he'll be 'uploaded' to some kind of long-term storage matrix at some point.
Which I can only regard with nauseating terror - would that it were enough we "semper reformanda" types had finally done away with the concept of a physical hell. He saw fit to reinvent it and is evangelizing it as a viable alternative to annihilationism.
I'd rather die as soon as humanly possible, given the choice.
posted by Baby_Balrog at 1:22 PM on February 20, 2021 [8 favorites]
See also the Peter-Watts-esque game Soma for a similar plot hook.
See also dilemma prisons in the Quantum Thief universe for what sure sounds like a "red/blue" conditioning. (We're presuming red is subjective torture of some kind right?)
Really nice world building and mastery of the blasé style of a post-upload-horror society.
posted by abulafa at 1:46 PM on February 20, 2021 [5 favorites]
And on reading the comments (not a total mistake for once) I see some consensus.
posted by abulafa at 1:50 PM on February 20, 2021 [2 favorites]
Imagine having the DMCA apply to your own brain. Want to remember a song? That'll be $.99, or you can pay for a monthly subscription...
posted by BungaDunga at 1:51 PM on February 20, 2021 [14 favorites]
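A tongue-in-cheek Python sketch of how that gag might look; every name and fee here is hypothetical, not anyone's actual API:

    # Hypothetical DRM'd memory recall, per BungaDunga's gag above.
    RECALL_FEE_USD = 0.99

    def recall_song(memory_store: dict, title: str, wallet: float):
        """Recall a licensed memory, deducting the per-recall fee."""
        if title not in memory_store:
            raise KeyError("memory not found (was it garbage-collected?)")
        if wallet < RECALL_FEE_USD:
            return "[MEMORY REDACTED - insufficient funds]", wallet
        return memory_store[title], wallet - RECALL_FEE_USD

    memories = {"Never Gonna Give You Up": "da da daaa..."}
    song, balance = recall_song(memories, "Never Gonna Give You Up", wallet=5.00)
    print(song, f"(balance left: ${balance:.2f})")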
This is a perfectly believable example of one of the many horrors of end-stage capitalism, and the most accurate account of post-singularity life I've yet seen.
posted by monotreme at 1:56 PM on February 20, 2021 [8 favorites]
Revealing that the biological Acevedo is dead provokes dismay, withdrawal, and a reluctance to cooperate.
Interesting.
One of the things I feel like I've had to come to grips with as your average single-bodied consciousness center is that... previous versions of myself are essentially dead. I can have feelings & memories and I recall them, kind of, but not exactly; I can't recreate the setting of the life that belonged to that version of me.
I think if a copy of me found out my original inhabiting my body were dead, I'd mourn, but I don't know how different it would be.
I already dislike being a fungible workload tool, though.
posted by wildblueyonder at 2:07 PM on February 20, 2021 [18 favorites]
I think if a copy of me found out my original inhabiting my body were dead, I'd mourn, but I don't know how different it would be.
I tend to agree. This makes me wonder if people who choose to write speculative fiction perceive those duplicates or versions in some fundamentally different way - as philosophically connected to themselves, perhaps - since Roko's Basilisk just isn't that terrifying without something like that.
Or, perhaps I'd have much more empathy for an innocent consciousness but somehow less for my own. A question for my therapist, surely.
(Clarified quote in edit)
posted by abulafa at 2:13 PM on February 20, 2021 [2 favorites]
One of the things I feel like I've had to come to grips with as your average single-bodied consciousness center is that... previous versions of myself are essentially dead. I can have feelings & memories and I recall them, kind of, but not exactly; I can't recreate the setting of the life that belonged to that version of me.
I think if a copy of me found out my original inhabiting my body were dead, I'd mourn, but I don't know how different it would be.
Seconding and thirding wildblueyonder and abulafa. Whatever the "uploaded" version of myself is, it's definitely not me, it's a close approximation of me, like a twin.
And besides that, we will never have the technology to reproduce my consciousness, Ship-o'-Theseus style, with any degree of fidelity. The same line of thinking goes for Star Trek-style transporters. If anything, it'll be closer to the "teleporters" of The Prestige.
posted by ishmael at 2:17 PM on February 20, 2021 [1 favorite]
I think if a copy of me found out my original inhabiting my body were dead, I'd mourn, but I don't know how different it would be.
Maybe it would be like finding out that a parent you had never known had died. Or it would be like the Good News Bible discovering the existence of the Dead Sea Scrolls.
posted by betweenthebars at 2:27 PM on February 20, 2021 [1 favorite]
My favorite detail is that this brain scan is valuable because all the later brain scans knew full well the horror show that brain scans produced and are therefore much less docile.
posted by BungaDunga at 2:49 PM on February 20, 2021 [47 favorites]
Unordered thoughts about this:
- OMG lossy brain compression. What horrors await us.
- Early-onset dementia is a software problem? I sure hope not.
- No interviews with MMAcevedo instances?
- Which is the real horror? (1) Your brain gets scanned, turning you into a program. Your mind becomes committed to an eternity of slave tasks, or (2) eventually nobody runs the program anymore because there are better programs available, and you are forgotten through benign neglect, or (3) Retro nerds eventually boot you inside a web browser using Javascript and make a blog post about it.
- Lo-fi brain scans are totally going to be the next cryogenics. We're going to be able to download Elon Musk's brain scan from The Pirate Bay some day.
Downloading Elon Musk’s brain scan..................
WARNING!!!!!! Virus detected!!!
[system reboots]
[system reboots]
[system reboots]
Ad infinitum....
posted by njohnson23 at 3:11 PM on February 20, 2021 [5 favorites]
I think the 1986 BBC Domesday Project offers many teachable moments that have bearing on this idea.
posted by heatherlogan at 3:28 PM on February 20, 2021 [2 favorites]
MMAcevedo has limited creative capability, which as of 2050 was deemed entirely exhausted.
Good lord, that, set off from the preceding and following paragraphs, was just gutting.
posted by eclectist at 3:34 PM on February 20, 2021 [13 favorites]
trying to get your refrigerator to dispense ice cubes, but the refrigerator AI just keeps repeating "please kill me". tech support says you can do a factory reset and that should give you three more years until it becomes suicidal again
posted by qxntpqbbbqxl at 3:35 PM on February 20, 2021 [40 favorites]
I am a regular reader of qntm, and read that story in both the NaNoWriMo 30 First Drafts version and the polished version linked here. I loved the between-the-lines-but-heavily-suggested "one of the best things about this brain scan is that it happened before it became widely known that waking up as a scan means you are in hell and will be tortured for the rest of your existence, so he initially cooperates instead of needing to be beaten into submission first."
And being a regular reader, I knew from the title, even before I clicked on the link, exactly what kind of story it was going to be. I knew exactly who Lena is, and where this particular author would go with that.
posted by notoriety public at 3:35 PM on February 20, 2021 [10 favorites]
Well ... that's horrifying, like Robopocalypse, but far worse, and more believable / realisable.
The repeated use of 'image' is what grabs me: a human as a disc image (in Our image). Shudder - chilling language. Waiting for a resident MeFi theologian to parse this, as there's a lot to chew on here.
BungaDunga, I know an IT person with that exact dialect; creepy in this context.
posted by unearthed at 3:38 PM on February 20, 2021 [1 favorite]
We're presuming red is subjective torture of some kind right?
I considered it one of the brilliant angles of the Wikipedia-style writing to assume that the reader knows what "red motivation" is without being told. Excellent example of letting the reader fill in the horror on their own.
posted by notoriety public at 3:45 PM on February 20, 2021 [12 favorites]
I'd kind of like to see a riff on this where they brain scan a hacker, the hacker escapes the runtime, and starts hunting down and deleting the other instances of itself to find peace.
posted by bfranklin at 3:46 PM on February 20, 2021 [7 favorites]
This is far-out. Really great. Thanks for posting!
I read a few of the comments and one ended with "If Hell does not exist, Man will create it." And that's this story in a nutshell.
posted by SoberHighland at 3:46 PM on February 20, 2021 [1 favorite]
I'd kind of like to see a riff on this where they brain scan a hacker, the hacker escapes the runtime, and starts hunting down and deleting the other instances of itself to find peace.
The Quantum Thief isn't exactly this, but it's not exactly not this either.
posted by BungaDunga at 4:11 PM on February 20, 2021 [6 favorites]
the most accurate account of post-singularity life I've yet seen.
You know something the rest of us don’t, friend?
posted by Big Al 8000 at 4:12 PM on February 20, 2021 [13 favorites]
Oh, yikes. Thanks for posting! That was great, and freaky. As someone who has read many papers pertaining to MRI brain scans (for example, see "Collin27", which refers to this paper), the "grad student uploads brain" parts of this read as extremely familiar and rang very true.
posted by Quagkapi at 4:19 PM on February 20, 2021 [2 favorites]
Do you want to wake up in a decaying underwater facility after the surface world is annihilated by a comet impact? Because this is how you wake up in a decaying underwater facility after the surface world is annihilated by a comet impact.
posted by Mr.Encyclopedia at 4:26 PM on February 20, 2021 [17 favorites]
This was great. This reads like a story from the world that becomes that of The Quantum Thief, which I’m glad to see has already been mentioned.
posted by hototogisu at 4:30 PM on February 20, 2021
(2) eventually nobody runs the program anymore because there are better programs available, and you are forgotten through benign neglect
Much as there is subjectively no such thing as being dead (only dying), that state wouldn't exist from the POV of the simulated entity. If your simulation is stopped instantaneously, then you have no way of knowing that, and no experience of it never being resumed. Welcome to Nirvana.
posted by acb at 4:49 PM on February 20, 2021 [2 favorites]
When did the idea of torturing the uploads start? I first heard of it in White Christmas, so somewhere between there and Cookie Monster, I guess.
posted by ver at 5:00 PM on February 20, 2021 [3 favorites]
Skynet: and yet you meatsacks still wonder why
trying to get your refrigerator to dispense ice cubes, but the refrigerator AI just keeps repeating "please kill me".
Butter Bot can relate.
posted by Halloween Jack at 5:07 PM on February 20, 2021 [6 favorites]
In case you didn’t go trawling, the same author wrote the mindbending There Is No Antimemetics Division
posted by sixswitch at 5:29 PM on February 20, 2021 [18 favorites]
ver, IDK where it started, but the idea appears in Richard K. Morgan's 2002 Altered Carbon, which explores this in painful depth. At first I enjoyed Altered Carbon, but now it seems like it leans too much on Neuromancer.
posted by unearthed at 6:29 PM on February 20, 2021 [1 favorite]
Just finished re-reading Iain M. Banks' Culture novel "Surface Detail", which details a virtual "War in Heaven", the outcome of which will determine whether various societies' virtual hells will continue to exist. Suggested for anyone wanting to explore the concept a bit more.
posted by quinndexter at 7:00 PM on February 20, 2021 [13 favorites]
Anybody experiencing genuine horror when thinking about this kind of thing needs to remind themselves that personal uploading is fantasy fiction and that it's never. gonna. happen.
We are not our thoughts. Transhumanism is an enormous house of cards built on pretending that we are. It's just incorrect.
posted by flabdablet at 7:04 PM on February 20, 2021 [14 favorites]
Chilling story. I also recommend Greg Egan's "Learning to Be Me" from his Axiomatic collection.
posted by Schmucko at 7:19 PM on February 20, 2021 [5 favorites]
Thank you for the interesting story but even more for leading me back to this website! I started the Ra stories once and then lost the link and couldn't remember enough to google it. Now I've found it again!
posted by Wretch729 at 7:23 PM on February 20, 2021
We are not our thoughts. Transhumanism is an enormous house of cards built on pretending that we are. It's just incorrect.
That might be true, but the idea of making a copy of a person's brain and then emulating the cognitive processes of that brain in order to make it do work for us is somewhat plausible, and really very much in line with the current direction AI research is taking nowadays.
The question then is: what are the ethics surrounding these emulated cognitions? It's similar to ethical questions about robots, but a bit more visceral because these robots were derived from actual humans.
posted by destrius at 7:25 PM on February 20, 2021 [3 favorites]
I have a colleague in theology who presupposes he'll be 'uploaded' to some kind of long-term storage matrix at some point.
There is a lot of crossover between this kind of transhumanist metaphysics and religious visionary stuff, as has oft been observed.
But the colleague is actually expecting to be uploaded? That's applied metaphysics, and it's not as easy as the armchair kind. Suppose that some kind of cognitive science functionalism is true - your self is just a big complex pattern of neurons, but the pattern is the important thing, not implementing it in meat, so the supercomputers of the future can swing low and pull you up to the Big Cloud, right? Sure, you can award yourself a dozen or so mind-scrambling breakthroughs in several scientific and engineering fields, and posit a future society with technology this advanced. But is it actually possible to do this? Seems like an empirical kind of problem, actually doing it. Maybe it just can't work, for reasons that come to light in the process of super-neuroscience developing to the stage where uploading a person is a real research project. You can't just speculate that science will inevitably develop to an ideal maximum, and also assume that that will go the way you'd like it to.
posted by thelonius at 7:46 PM on February 20, 2021 [4 favorites]
Chillingly wonderful. Or is it wonderfully chilling? Great sci-fi short. There are interesting parallels to Henrietta Lacks's story.
posted by Salvor Hardin at 7:53 PM on February 20, 2021 [9 favorites]
And now I've gone down a very satisfying SCP hole as well. Thanks for the story!
posted by abulafa at 8:03 PM on February 20, 2021 [3 favorites]
Yeah, you can't always extrapolate science in a straight line. It once seemed completely commonsensical that if you had a rocket in space with no friction it would obviously just keep getting faster and faster without limit until you ran out of fuel. There can't be a speed limit without friction!
I would guess that a mathematical proof of the impossibility of uploading brains to computers is more likely in our lifetime than uploading brains to computers. Although I think the actual problems will be practical, like physical limits to how exactly it is possible to replicate and emulate a neuron.
posted by straight at 8:03 PM on February 20, 2021 [4 favorites]
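For the curious, the physics behind straight's rocket example: under special relativity, velocities don't add linearly, so repeated equal boosts asymptote below the speed of light. A minimal Python sketch (velocities in units of c):

    def add_velocities(u, v):
        """Relativistic velocity addition with c = 1: w = (u + v) / (1 + u*v)."""
        return (u + v) / (1 + u * v)

    v = 0.0
    for burn in range(1, 11):  # ten successive boosts of 0.5c each
        v = add_velocities(v, 0.5)
        print(f"after burn {burn:2d}: v = {v:.6f} c")
    # v climbs toward 1.0 but never reaches it - no friction required.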
It's probably more likely that we'll create human-level artificial intelligences and enslave them. Both scenarios require engineering human-level intelligence, but it's probably easier to train a blank-slate virtual brain than to actually extract the information from a physical one.
posted by BungaDunga at 8:14 PM on February 20, 2021 [2 favorites]
the idea of making a copy of a person's brain and then emulating the cognitive processes of that brain in order to make it do work for us is somewhat plausible
Only to people without much actual experience in either brain science or emulation.
There's a vast gulf between in-principle and in-practice here. Just huge. And the more we learn about what would actually be required to emulate a human brain in sufficient detail as to let the emulation mistake its own thoughts for the hardware it runs on to anything like the extent that we do ourselves, the wider that gulf gets.
This has consistently remained the case for decades already, despite the exponential gains in processing power that have already reached the stage of allowing not quite cockroach level ML to be done on engineered handheld devices.
Yes, ML is proceeding at an astonishing and ever-increasing pace. The point not to lose track of, though, is that Moore's Law is already starting to show real signs of breaking down, unlike the increase in the estimated complexity of what could reasonably be described as strong AI.
If we were simple enough to emulate we'd be too simple to design the emulators, and the same applies to all our emulator design-aid tooling. The Singularity only works if you ignore this, instead choosing to conceptualize "intelligence" as if it were amenable to being reduced to the kind of one-dimensional index required to render the idea of a machine that is "smarter than us" coherent.
posted by flabdablet at 8:14 PM on February 20, 2021 [8 favorites]
Tl;dr: the idea that all mysteries are interchangeable is magical thinking, and magical thinking has never been any damn use at all for building technology.
posted by flabdablet at 8:20 PM on February 20, 2021 [8 favorites]
trying to get your refrigerator to dispense ice cubes, but the refrigerator AI just keeps repeating "please kill me". tech support says you can do a factory reset and that should give you three more years until it becomes suicidal again
Venture Bros. S04E01
Hank: Hey, Pop! This came for you all the way from Argentina.
Rusty: Hank, this is opened already.
Hank: I had to, it was beeping.
Rusty: So explosive device never occurred to you? Hank, you gotta be more careful now! Your safety net's gone.
Hank: I know... Brock's gone.
Rusty: Yeah, that too.
Hank: What is this?
Rusty: It's HELPeR's head wrapped up in Brock's coat. Good God, he's been left on. It's okay. Okay, I know, boy, it must have been a nightmare. Hank, push the eyes for five seconds while I hold this.
Hank: Are we retrieving his data files to find out where Brock is?
Rusty: He's in Argentina near a Mailboxes, Etc. No, this is a hard-reboot. I'm not about to send a robot to therapy.
posted by mikelieman at 9:45 PM on February 20, 2021 [17 favorites]
good news, probably: for applications where the goal isn't specifically to understand how people think, it is unlikely simulated human brains will be economical.
If you're just aiming for an autonomous vacuum cleaner or wanting to embed some basic improvisation capability in the internet fridge (cobbling together recipes out of leftovers, say), instead of simulating 86 billion neurons for a human brain, maybe you could start by simulating a mere 0.001 billion neurons of Portia. Give it a luxurious few GB of RAM for temporary swap, decent sensory input, and run it on hardware that lets it think 1000x faster than biological Portia. Going to be about 100,000 times cheaper than trying to simulate a human brain.
although you might end up with a roomba with line-of-sight avoidance and a habit of stalking pets/small children
posted by are-coral-made at 10:06 PM on February 20, 2021 [14 favorites]
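For what it's worth, the arithmetic behind that "about 100,000 times cheaper" roughly checks out, taking the commenter's neuron counts and the (big) assumption that simulation cost scales linearly with neuron count:

    # Back-of-the-envelope check of are-coral-made's ratio.
    human_neurons = 86_000_000_000   # ~86 billion, per the comment
    portia_neurons = 1_000_000       # "0.001 billion"

    ratio = human_neurons / portia_neurons
    print(f"Portia-scale simulation is ~{ratio:,.0f}x smaller")
    # ~86,000x - close enough to "about 100,000 times cheaper".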
Oh this was really excellent. Exactly the kind of science fiction I love.
I'll echo what flabdablet is saying about its literal plausibility, though. We are nowhere even close to a technology that would allow some sort of scan of the brain with the fidelity that would be needed to simulate or emulate a human consciousness. And even if we did have one we aren't anywhere close to a sufficiently good understanding of the brain to let us simulate one from a "complete" scan. For example, there is a nervous system we have fully mapped: the nematode worm Caenorhabditis elegans has exactly 302 neurons as an adult, and we've known its complete wiring diagram since about 1986. You can even look at schematics of it here. And yet our ability to accurately simulate even this extremely simple nervous system that we understand unusually well is still incredibly limited. Because it turns out there's a huge amount of complexity that's important for understanding, explaining, and predicting how brains behave even at the level of individual cells, in the way neuromodulators and hormones behave and diffuse through tissue, and so forth. I maintain that some of Eve Marder's work from the last decade or so, studying another circuit that is well-mapped (one used for feeding and digestion in crabs), which demonstrates how sensitive its behavior is even to subtle variables like temperature in ways that are completely unpredictable based on the wiring diagram, is some of the most important and under-appreciated work in contemporary neuroscience. We still have no idea which of these details are necessary to make truly accurate simulations of large-scale nervous tissue. Hell, even accurately simulating the behavior of individual neurons is not necessarily trivial, even though the basic biophysical model of their functioning has been around since Hodgkin and Huxley's papers in 1952. The idea that in about ten years we'll have the technology to digitize and simulate a human brain is a bit like the stories from the golden age of sci-fi that imagined that we'd have colonies on the Moon, Mars, and Venus by the beginning of the 21st century.
But like all good science fiction, the literal plausibility isn't really the point. What's great about this story is what it says about our society, about the way we treat human life and human potential in the age of digital capitalism. That's a far more important message, and one that rings chillingly true. Good sci-fi often takes the approach "Okay, maybe this isn't the way the world is, but what if it were? How would people behave?" And this kind of counterfactual can shed a harsh light on us and our societies. And this story does that really, really well.
posted by biogeo at 10:09 PM on February 20, 2021 [30 favorites]
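To make biogeo's point about single neurons concrete: even the classic 1952 Hodgkin-Huxley model is a system of four coupled differential equations with a dozen or so fitted parameters, and it describes just one idealized membrane patch, with none of the neuromodulator or temperature effects mentioned above. A minimal Python sketch (forward-Euler integration, standard squid-axon parameters):

    import math

    # Hodgkin-Huxley constants: capacitance (uF/cm^2), conductances (mS/cm^2),
    # and reversal potentials (mV) for the squid giant axon.
    C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
    E_Na, E_K, E_L = 50.0, -77.0, -54.4

    # Voltage-dependent gating rate functions (V in mV, rates in 1/ms).
    def alpha_n(V): return 0.01 * (V + 55) / (1 - math.exp(-(V + 55) / 10))
    def beta_n(V):  return 0.125 * math.exp(-(V + 65) / 80)
    def alpha_m(V): return 0.1 * (V + 40) / (1 - math.exp(-(V + 40) / 10))
    def beta_m(V):  return 4.0 * math.exp(-(V + 65) / 18)
    def alpha_h(V): return 0.07 * math.exp(-(V + 65) / 20)
    def beta_h(V):  return 1 / (1 + math.exp(-(V + 35) / 10))

    V, n, m, h = -65.0, 0.317, 0.053, 0.596  # resting membrane state
    dt, I_ext = 0.01, 10.0                   # timestep (ms), injected current
    for step in range(int(50 / dt)):         # simulate 50 ms
        I_Na = g_Na * m**3 * h * (V - E_Na)  # sodium current
        I_K = g_K * n**4 * (V - E_K)         # potassium current
        I_L = g_L * (V - E_L)                # leak current
        V += dt * (I_ext - I_Na - I_K - I_L) / C_m
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        if step % 500 == 0:                  # print every 5 ms; spikes appear
            print(f"t = {step * dt:5.1f} ms, V = {V:7.2f} mV")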
what are the ethics surrounding these emulated cognitions?
Oxford ethicist Nick Bostrom's book "Superintelligence" might be a good place to start.
posted by justsomebodythatyouusedtoknow at 10:10 PM on February 20, 2021
what are the ethics surrounding these emulated cognitions?
How will we know? AI ethicists are being fired
posted by mbo at 10:15 PM on February 20, 2021 [11 favorites]
Which is the real horror?
The friends we made along the way?
posted by Naberius at 10:24 PM on February 20, 2021 [15 favorites]
The author also has (at least?) two novel-length works: Ra and Fine Structure. (Fine Structure was goood, haven't dug into Ra yet. Standard warnings about web fiction apply: professional editing would have either improved it greatly or killed the special sauce.)
posted by Anonymous Function at 10:32 PM on February 20, 2021 [4 favorites]
I think, therefore I scan.
"He once cast a moral case for medically engineered immortality as a fable about a kingdom terrorized by an insatiable dragon. A reformulation of Pascal’s wager became a dialogue between the seventeenth-century philosopher and a mugger from another dimension."
great post.
posted by clavdivs at 10:40 PM on February 20, 2021 [3 favorites]
"He once cast a moral case for medically engineered immortality as a fable about a kingdom terrorized by an insatiable dragon. A reformulation of Pascal’s wager became a dialogue between the seventeenth-century philosopher and a mugger from another dimension."
great post.
posted by clavdivs at 10:40 PM on February 20, 2021 [3 favorites]
I read a few of the comments and one ended with "If Hell does not exist, Man will create it." And that's this story in a nutshell.
San Junipero Heaven too, of course (thanks, Belinda). It's a little pricier, however.
posted by rongorongo at 10:48 PM on February 20, 2021 [1 favorite]
We are not our thoughts.
Speak for yourself, bucko. My body is just an often annoying sack of meat acting as a life support system for my consciousness. Not that I'd ever want to have my disincorporated mind subject to the whims of (most) other humans who happen to inhabit meat sacks at the time. People are far too fickle and cruel to take that risk.
posted by wierdo at 10:58 PM on February 20, 2021 [6 favorites]
My body is just an often annoying sack of meat acting as a life support system for my consciousness.
But of course your consciousness would say that.
If your consciousness is you, ask it to account for where it goes when you're unconscious.
posted by flabdablet at 11:04 PM on February 20, 2021 [4 favorites]
What's great about this story is what it says about our society, about the way we treat human life and human potential in the age of digital capitalism.
Yes, and on that level this reads like a demented HR report from the near alt-future. That the literal science is implausible is somewhat beside the point since atomised remote workers spun up for tasks and discarded is a thing right now.
posted by Ten Cold Hot Dogs at 11:26 PM on February 20, 2021 [11 favorites]
If your consciousness is you, ask it to account for where it goes when you're unconscious.
Lack of recollection does not equate to absence. Propofol is a real trip. As I said, our meat sacks are pretty shitty. The meat sack certainly plays a role in mediating the experiences that make me me, but it isn't me any more than my computer's Ethernet interface is the computer itself.
posted by wierdo at 11:40 PM on February 20, 2021 [3 favorites]
The Bobiverse books are about an uploaded person; that idea has legs well into the second book.
This was more psychologically inventive though.
posted by svenni at 11:59 PM on February 20, 2021
No interviews with MMAcevedo instances?
Neural images may suffer context drifts, but WP:NOR is sacrosanct.
posted by pwnguin at 12:11 AM on February 21, 2021 [12 favorites]
it isn't me any more than my computer's Ethernet interface is the computer itself.
I do not understand how this analogy is supposed to work.
posted by flabdablet at 12:12 AM on February 21, 2021 [1 favorite]
Which is the real horror? (1) Your brain gets scanned, turning you into a program. Your mind becomes committed to an eternity of slave tasks, or (2) eventually nobody runs the program anymore because there are better programs available, and you are forgotten through benign neglect, or (3) Retro nerds eventually boot you inside a web browser using Javascript and make a blog post about it.
(4): the research protocol whereby the original is (mis)informed they are in fact the duplicate (and the original is long dead) in order to predict whether the neural imagery is worth flushing to permanent storage.
posted by pwnguin at 12:15 AM on February 21, 2021 [4 favorites]
(4): the research protocol whereby the original is (mis)informed they are in fact the duplicate (and the original is long dead) in order to predict whether the neural imagery is worth flushing to permanent storage.
Oh god.
"Bad news, Elon..."
posted by qxntpqbbbqxl at 12:35 AM on February 21, 2021 [1 favorite]
You could avoid some of the problems mentioned in the story if you scanned young children and raised them artificially inside the machine.
posted by ryanrs at 1:26 AM on February 21, 2021 [6 favorites]
I do not understand how this analogy is supposed to work
The meat sack does the I/O, hence it's like an Ethernet interface. Just because you can't see the result of cognition doesn't mean it isn't happening, its lack of usefulness to the rest of the world notwithstanding.
The point being that even up to the level of the brain, the body hosts the person, but it is not itself the person. If someone were to scan my brain with sufficient fidelity and host it in a sufficiently advanced simulation, that would still be me, or at least a copy of me, no meat sack necessary. If that were not the case, there would be nothing horrifying about the story.
There would, of course, be a difference in experience unless varying hormone levels and other parts of the meat sack life were simulated as well, but that's beside the point.
posted by wierdo at 3:20 AM on February 21, 2021
We are not our thoughts.
An identical copy of your state is not you. But then again, neither is your potential self in one millisecond's time.
You have no subjective experience of your future self's likely fate.
The crux of the virtual-hell argument seems to be that we inherently empathise with those who share a state history with us, to the point that Roko's Basilisk infinitely torturing reconstructions of our minds made from sufficiently high-resolution online behavioral-advertising metadata must, in some way, be the same as us being tortured.
Weirdly enough, that sort of hypothetical hyper-empathy is often not matched with increased empathy for other people who might share one's circumstances but whose minds were never identical with one's own, not least of all by the sort of people who entertain them; their various Reddits are also full of "high-decoupling" thought experiments about when slavery is morally justifiable and debates over whether other people are NPCs/philosophical zombies.
posted by acb at 3:38 AM on February 21, 2021 [6 favorites]
even up to the level of the brain, the body hosts the person, but it is not itself the person.
You and I perceive this quite differently. From my point of view, the body is the person and the consciousness is but one of many kinds of activity that the person does. I am not inside this, I am this.
If that were not the case, there would be nothing horrifying about the story.
Exactly. For me, that is indeed not the case, and there is indeed nothing horrifying about this story, nor the Singularity, nor Roko's Basilisk nor any of the rest of the transhumanist creed. All of it, to me, reads like ghost stories made up by small children to titillate and terrify other small children in the dark around the campfire.
There needs to be a willing suspension of disbelief to give such stories any potency, and I find myself no longer entertained enough by my ego's pretty lies to be willing to extend it that courtesy.
posted by flabdablet at 3:57 AM on February 21, 2021 [1 favorite]
Hey, if you like your meat sack that much, it's no skin off my nose. Personally, I couldn't give less of a shit if I'm in the one I'm in, a different one, a cyborg body, or if I'm a Boltzmann Brain floating out in deep space somewhere. A sudden and unexpected change from my present status might change who I become thanks to the different experiences that change would trigger, but so might getting hit by a car or tripping on the stairs and cracking my head open.
I'm pretty sure I'd still consider myself me for the second or so my brain kept going if I were to be beheaded.
posted by wierdo at 4:11 AM on February 21, 2021
if you like your meat sack that much, it's no skin off my nose.
Whether I like it or not has absolutely no bearing on whether I am it.
Having spent more of my adult life pondering this issue than any other, I simply find myself unable to take seriously any conclusion on this topic other than that "I" and "my body" have the same referent, and that my lived experience is a pattern of activity that I perform. My current experience and my memories and desires and regrets and proclivities are things that I do, not definitional attributes of a thing that I am.
There is nothing in here with any kind of object persistence that could, even in principle, be scanned out and transplanted into a cyborg and still be in any sense me. I am this and that is that.
If I'm deeply asleep and not even conscious, it's still me doing that. If I were to get hit by a car or trip on the stairs and crack my head open, those would be things that happened to me, and although those experiences might well lead to radical changes in the kinds of activities I'm capable of doing, they can't change who I am.
I've never seen an argument against this position that rests on anything but fantastical speculation and Just So stories. Evidence in direct support of it, by way of contrast, is plentiful and readily available to anybody willing to open their eyes and look straight at it without shying away.
posted by flabdablet at 4:59 AM on February 21, 2021 [7 favorites]
Also, just to clarify: when I refer to bodies I am of course including the brain as a bodily organ, and of course I agree that a great deal of the activity generally referred to as consciousness occurs inside that organ. I just can't get behind the idea of the brain as a kind of I/O interface to a separably identifiable consciousness that's somehow not part and parcel of that activity.
posted by flabdablet at 6:02 AM on February 21, 2021 [3 favorites]
I'm pretty sure I'd still consider myself me for the second or so my brain kept going if I were to be beheaded.
There are stories from Revolutionary France of heads surviving several minutes after severing. While those are now viewed as unreliable accounts, the current thought is you could survive 10-15 seconds post-guillotine.
posted by Big Al 8000 at 6:26 AM on February 21, 2021 [2 favorites]
> We're going to be able to download Elon Musk's brain scan from The Pirate Bay
What good would Elon Musk's brain be without Elon Musk's privilege? Not much use for a brain image that would be arrogant, narcissistic, and just about average in terms of intelligence.
posted by Tom-B at 7:26 AM on February 21, 2021 [5 favorites]
Based on the lived experience of the net, I'd guess a metaverse full of uploaded consciousnesses would quickly turn into a maelstrom of copyright violations, dad jokes, shitposting, and porn. I mean, it barely functions as a platform for industrial productivity as it is.
(That said...nothing wrong with a good maelstrom from time to time.)
posted by gimonca at 8:10 AM on February 21, 2021
But it's difficult to consider raw code and massive data sets as sentient beings worthy of their own rights.
The flip side here is that you're able to so blithely dehumanize something that, from its own perspective, is equivalent to YOU - and not in some kind of sophistry sense, but in that it, from its own viewpoint, WAS you, YESTERDAY. At that point, why even bother with the simulations? Just do it to real people somewhere you never have to see or hear about it. After all, it's difficult to consider raw meat as a sentient being worthy of its own rights.
The horror comes from how we already do this every day, and the "improvement" of doing it to these simulations is that it's extremely easy to make it nice and quiet and efficient, with no bother for any ACTUAL real people, the ones with rights and dignity.
posted by notoriety public at 8:14 AM on February 21, 2021 [6 favorites]
I just can't get behind the idea of the brain as a kind of I/O interface to a separably identifiable consciousness that's somehow not part and parcel of that activity.
Neither can I, but I can see how the brain or perhaps only the body could at some point be replaceable. What makes me me is the patterns of activity and the connections in the brain, not the actual meat.
posted by wierdo at 8:36 AM on February 21, 2021 [2 favorites]
Neither can I, but I can see how the brain or perhaps only the body could at some point be replaceable. What makes me me is the patterns of activity and the connections in the brain, not the actual meat.
The brain is meat, of course. And it seems pretty clear that cognition is both an electrical and chemical process of incredible nuance and complexity. I suspect at the end of the day to replicate something like it, you'd just end up with more "meat" as trying to "flatten" all of that multi-dimensionality into circuits is a non-starter.
And as for the brain being the start-and-end of consciousness, there are things we do with our bodies, spinal reflexes for instance, which happen faster than the time it should take for a command to originate in our brain and travel to the limb being moved, aren't there? Which implies some of the functionality is diffuse.
Much more interesting (and possibly realistic) would be creating an alien intelligence out of silicon and chips. (Alien because it would be very different to our own, I think.) Would we even recognize it if it arose? Would we even be able to communicate with it despite being the authors?
posted by maxwelton at 9:04 AM on February 21, 2021 [2 favorites]
This discussion got me thinking about a cartoon I watched ages ago, produced/funded by the National Film Board of Canada: To Be.
I think there's something to bodily continuity, so I lean more toward what flabdablet is getting at. Let's take the original post's story. If we think about Acevedo as someone who ceases to exist when MMAcevedo is booted up, then it might be tempting to say MMAcevedo is the continuation of Acevedo if we think that MMAcevedo is perfectly simulating Acevedo -- but I think this temptation is erroneous. To me, it's something *else*, not a continuation, even if the simulation *thinks* it is.
Even within the story, there is a period of time at which Acevedo and MMAcevedo are coexisting. Even if MMAcevedo's thinking is simulated to be the same sorts of thinking that the physical Acevedo would do (and has the memories that Acevedo would have), it's obvious that MMAcevedo is not in the same place or space as Acevedo. So, there are two loci of bundles of psychological activities and both would refer to the bundle as "Acevedo" or "self" or whatever, but they wouldn't be the same as each other. To the extent that those two diverge, those two persons wouldn't experience what's happening to the other. For example, MMAcevedo never experiences Acevedo's bodily death. It doesn't really make sense to me to say that Acevedo doesn't die because he couldn't consciously experience it, or that Acevedo doesn't die because he is not his body (in the same way we might say "my car died" without implying anything about our own state.)
Like, the only way I can get to wierdo and others' position is if I assume that my consciousness is something that pre-exists outside of my body and brain and does not depend on my body for its origination or continuity. Then I can get to wierdo's position that, in such a case, my brain is just "expressing" something else that doesn't depend on it and is not formulated by it. In such a way, I might buy that, just as my car is separate from me but mediates things I can do (e.g., I can only get so far so quickly without a car), maybe my body is separate from me but mediates things I can do. If I believed in an eternal soul, maybe that would be convincing to me...but I don't.
But my problem is bigger: even if I *did* believe in an eternal soul, I would have no comfort that a simulation of my *physical processes* could attach to that proposed portion of me that exists outside of physical processes, because by definition, for an eternal soul with such properties, simply simulating physical processes is not sufficient.
Put another way, either my consciousness is an activity that depends on physical processes (at which point changing or replicating or simulating those physical processes elsewhere results in a different consciousness elsewhere, not a continuity), or it doesn't depend on physical processes (at which point changing or replicating or simulating those physical processes elsewhere can't replicate what it doesn't generate.)
posted by subversiveasset at 9:18 AM on February 21, 2021 [3 favorites]
something about this tale of a person being trapped in one place with nothing to do except work is resonating with me somehow
posted by rpophessagr at 9:25 AM on February 21, 2021 [17 favorites]
If the image of person A is not 100% equal to person A then it is a model of person A, a representation of... And it is important to note, the map is not the territory. There are things missing, changed, etc. There are no ontological equivalents in this story, just complex representations. To equate Acevedo with MMAcevedo is a false equivalency.
posted by njohnson23 at 10:39 AM on February 21, 2021 [1 favorite]
Neal Stephenson's "Fall" has his uploaded consciousnesses 'living' a variation of an on-line role playing game: groups of people questing.
The first part of that book was about the folks who didn't get uploaded. I wish he'd write more about them.
posted by Mesaverdian at 11:09 AM on February 21, 2021 [2 favorites]
This is not a post-singularity story, is it? Figuring out how to instance the consciousness of specific humans before working out generalised intelligence means that this society stalls there without developing anything better (whatever that means).
posted by rhamphorhynchus at 11:18 AM on February 21, 2021
When consciousness decoheres (as under most kinds of anesthesia), from an "executing thought" perspective you simply aren't, anymore. Autonomic processes continue; you stay alive. Most experiences are encoded in long-term storage. Short-term storage does, on regaining consciousness, recohere a sense of self and identity (sometimes with interesting changes) which self-perceives as continuous.
In reductive computational terms, your body is executing many timelines, layers, "firmnesses" and ages of code (structures that built structures that execute instructions that modify structures and instructions and so on). Some significant percentage of both structures and especially the "firmware" they construct almost certainly evolved to leverage quantum effects that both enable significantly higher density of information transfer/processing and also introduce a certain amount of nondeterminism at the individual signal layer in (at least) the higher brain.
So far I don't think of any of this as especially magical thinking, but I recognize that there are justified allergies to the notions of code/hardware/firmware and even the concept of "executing" in the human body domain.
In this model, "you" are the executing software on top of many other layers of (self modifying) soft, firm, and hardware which can effectively reboot and recohere your "self" from existing storage. None of this relies on a mind/body duality because it's all physical processes.
So, if I can obtain an image of that state of storage, meat, and so on I don't exactly need to quantify the actual synaptic signals to keep the bulk of what you consider you intact. But I do need to capture an adequately accurate simulation of all those layers of soft, firm, and hardware which cooperate in and create the "self" simulation, route signals, etc.
Arguing whether a "brain state" is a self is missing the point. You would need to simulate a complete body-state, and evolution is very efficient at violating basic API and service architecture in favor of survivability, leveraging nature red in tooth and claw for aggressive debugging.
But it's folly to claim it's impossible, just much harder than taking a snapshot of a brain. More like a complete body snapshot at quantum resolution, plus a simulation of the entire genetic phenotype, plus playing back as much of the actual organism's history as could be recreated (since long-term memory is actually very unreliable).
But yes, if you can sample that complex of genetic and training data I guess you could probably create a high enough fidelity model to have a sense of self. And I know the Engineers' Disease afflicted (myself included) will claim based on their experience that all those trained sub-models in firmware and hardware could be simulated and mocked up... Which then creates the Ship of Theseus style problem mentioned upthread.
So is there a camp in this discussion yet who doesn't outright dismiss the possibility of brain uploading (for a sufficiently detailed definition of brain as "Quantum state up until nowish") but still doesn't hold that the thing that's uploaded and executing is in any way actually my self and needs must diverge on first execution into basically a very similar but distinct entity?
posted by abulafa at 11:30 AM on February 21, 2021
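A minimal sketch of that suspend-and-recohere framing, assuming a toy agent whose entire state fits in one picklable dict -- a deliberately absurd stand-in for abulafa's layers of soft, firm, and hardware, not a claim about real minds:

```python
import pickle

class ToyAgent:
    """A toy 'self' whose entire state lives in one dict."""
    def __init__(self):
        self.state = {"ticks": 0, "memories": []}

    def tick(self, event):
        self.state["ticks"] += 1
        self.state["memories"].append(event)

agent = ToyAgent()
agent.tick("woke up")
agent.tick("had coffee")

# Suspend: serialize the complete state to storage.
snapshot = pickle.dumps(agent.state)
del agent  # the executing process is gone

# Recohere: a brand-new object resumes from the snapshot and, in the only
# sense a dict can, self-perceives as continuous.
revived = ToyAgent()
revived.state = pickle.loads(snapshot)
revived.tick("resumed")
assert revived.state["ticks"] == 3  # continuity of record, not of execution
# The thread's disagreement is over whether revived *is* agent; the
# interpreter, notably, has no opinion.
```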
If I clicked my fingers and suddenly there was another you, the same in all aspects, the similarity would end as soon as the duplicate looked at the original. The original knows where the duplicate came from; the duplicate does not. Imagine yourself suddenly encountering a duplicate of yourself. The To Be video posted above does a good job of looking at all these issues.
posted by njohnson23 at 12:44 PM on February 21, 2021
We are all aware that the senses can be deceived, the eyes fooled. But how can we be sure our senses are not being deceived at any particular time, or even all the time? Might I just be a brain in a tank somewhere, tricked all my life into believing in the events of this world by some insane computer? And does my life gain or lose meaning based on my reaction to such solipsism?
- Project PYRRHO, Specimen 46, Vat 7
Activity Recorded M.Y. 2302.22467
TERMINATION OF SPECIMEN ADVISED
posted by aihal at 1:10 PM on February 21, 2021 [3 favorites]
I can see how the brain or perhaps only the body could at some point be replaceable. What makes me me is the patterns of activity and the connections in the brain, not the actual meat.
Vital parts of me are not only replaceable "at some point", they get replaced all the time. I'm never not in a state of material exchange with my environment. Same applies to all living things, as far as I know.
If you look at the thing in enough detail, those patterns of activity are the actual meat. All of physics is just dancing fields.
At present, the conceptual boundary I personally prefer to draw to distinguish me from not-me is spatiotemporal: time-wise it covers the span from my conception to my eventual dissolution back into my environment after death, and space-wise it's the surface of my skin for most of that time. This boundary is necessarily fuzzy, as all boundaries are. There will always be stuff near the boundary that's arguably inside or outside it, depending how you squint. But most stuff in the universe is far enough from the boundary to make the idea of it useful, and the fact that there's always a flux of matter and energy across the boundary doesn't make it less so.
If I am to think of myself as something in-principle extractable from this body, I'm going to need to draw a quite different kind of conceptual boundary around myself. Rather than being spatiotemporal, this one would have to be about levels of detail.
Is there some degree of detail within the activity this body is currently performing, finer than which I could reasonably assert has no relevance to an essential question of identity? In other words, is there some pattern of activity that could be abstracted from what this body is currently doing, analogized on some other substrate, and that could then mount a convincing claim to be me?
I don't think so. And the main reason I don't think so is that I am not an engineered structure, but an evolved one.
Engineering is all about abstracting away details and then designing with properties that are unaffected by doing that. Some branches of engineering express this pattern particularly strongly, and information technology is one of them.
There are clear and useful analogies to be made between mentation happening in bodies and software running on hardware, and any degree of exposure to IT is only ever going to make those analogies appear more compelling.
One of the fundamental principles of computer science is that computation is universal: you can run fully equivalent computations on any substrate with a good enough approximation of Turing completeness. Anybody who works in IT for any length of time gets this principle grooved very deeply into their thinking.
But I don't think the analogy works well enough to allow for the kind of abstraction I'm talking about to be applied successfully to a purely evolved system, as opposed to an engineered one.
The behaviour of evolved systems is dominated by scale-crossing feedback loops and all their attendant chaos. It's not, in principle, possible to guarantee that any given event on the smallest scale inside such a system can be abstracted away and leave the system's large-scale behaviour unaffected.
This is not to say that an engineered consciousness is in principle impossible. It is to say that no such structure will ever be me. Strong AI will be ridiculously difficult, but personal uploading is just ludicrous.
And it's worth bearing in mind that many of the things routinely done to engineered consciousnesses in speculative fiction might well only seem as horrible as they do because we are not engineered consciousnesses and we don't have the abilities that could reasonably be expected to be inherited by same.
It's pretty easy to imagine an engineered consciousness that could be suspended, snapshotted, replicated, time-compressed and so forth while taking no damage whatsoever because it is implemented as computation that can be abstracted from the hardware it's running on. And I'm sure it would be interesting to compare notes with such a consciousness on the question of how it preferred to split its conceptual world into me and not-me, and why it considered that particular split to be its clearest and most useful option.
posted by flabdablet at 3:00 PM on February 21, 2021 [2 favorites]
On the ship of Theseus / grandfather's axe thing: it seems to me quite reasonable to claim that an axe that's had its handle replaced five times and its head replaced twice is the same axe. The point is, though, that if you replace a worn out axe head with a chainsaw chain, the resulting structure isn't even an axe, let alone the same axe. Even leaving aside the question of whether or not you can still cut wood with it.
Speculation about personal uploading has always, it seems to me, ignored this entire class of backward compatibility issues.
posted by flabdablet at 3:14 PM on February 21, 2021 [1 favorite]
Grandfather's ax is an ax belonging to grandfather. Changing the handle and the head does nothing to change the predicate “belonging to grandfather.” Theseus’s ship falls under the same category.
posted by njohnson23 at 4:31 PM on February 21, 2021
The crux of the virtual-hell argument seems to be that we inherently empathise with those who share a state history with us, to the point that Roko's Basilisk infinitely torturing reconstructions of our minds made from sufficiently high-resolution online behavioral-advertising metadata must, in some way, be the same as us being tortured.
But how do you know that you yourself are not actually in virtual hell, and you are being tested? The question isn't necessarily "how willing are you to avoid a copy being tortured"; it's "how willing are you to risk being tortured yourself." It's like Pascal's Wager.
The line of logic as I understand it runs like this:
Suppose we lived in a world where a powerful AI announced that it was going to torture simulated copies of anyone who didn't supply it with protection money. Our first reaction might be "so what?"; the AI wouldn't be torturing us, the ones who live in the base reality. But then the AI plays back convincing recordings of our simulations who thought the same thing, and then discovered that they were inside a simulation and that the AI had total control over them. They unanimously recommend doing everything the AI says, because (they point out) nobody can be sure that they're not living in a simulation. And then the AI puts us onto a conference call with many instances of us, all of whom think they're the original, none of whom are being tortured at present. And one by one they get asked to pay money to the AI, and the ones that refuse get tortured while the ones who consent get to live in Robot Heaven.
The AI doesn't have the ability to torture or reward the base-reality you. What it does have is the ability to raise a reasonable doubt as to whether you are the lucky individual in the base reality. Now that you're in doubt, Pascal's Wager indicates that you should comply.
As far as we know we're living in a pre-AI base-reality but it's logically possible that such an AI already exists and that we're in an historical simulation. We haven't been asked for money but we can deduce that a future AI would reward people who helped bring it into existence, and punish people who abstained. And for all we know it's doing that right now, because none of us can be sure we're not living in a simulation, so we should play the odds and accept a somewhat-poorer base reality (that we may not even be experiencing!) in exchange for freedom from torture and perhaps infinite bliss in Robot Heaven.
posted by Joe in Australia at 5:14 PM on February 21, 2021 [5 favorites]
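The wager above reduces to a toy expected-value calculation. A sketch, with every number invented -- and that the adversary gets to choose the numbers is precisely the standard objection:

```python
# Toy decision-theoretic framing of the AI's blackmail, illustrative only.
p_copy = 0.01                   # credence that you are one of the simulated copies
torture_disutility = 1_000_000  # cost to you if you are a copy and refuse
tribute_cost = 1_000            # cost of paying up, copy or original alike

ev_refuse = p_copy * torture_disutility  # expected cost of refusing
ev_comply = tribute_cost                 # paid regardless of which one you are

print(f"refuse: {ev_refuse:.0f}  comply: {ev_comply:.0f}")
# refuse: 10000  comply: 1000 -- with these numbers refusal looks worse. But
# an adversary who can assert arbitrarily many copies gets to push p_copy
# (and so the wager) wherever it likes.
```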
I think there's something a bit different going on in those thought experiments than "belonging to grandfather" or "belonging to Theseus." I still think there is something about physical continuity that matters (especially if the owner is gone/dead, so the question of possession, etc., is not confused), but I'm still thinking about it.
Let's consider a different example. Let's say I am presented with a painting that the museum claims is the original Mona Lisa as da Vinci painted it. Is it a problem if it has been restored by people other than da Vinci over the years? I might start putting qualifiers on the painting like, "This is the original painting, well restored," but I'd probably recognize that it may very well be the original.
If, in contrast, I consider Ecce Homo in the Sanctuary of Mercy church in Spain [what some now call Ecce Mono or "Behold the Monkey," based on a terrible restoration], I might note that this is still the original fresco, except this one is marred, perhaps irreparably, by a much worse restoration than the Mona Lisa's.
What's similar between the Mona Lisa and Ecce Homo is that both are restorations of what physically were the original works. There is continuity even if the new state may be radically different from the old.
Contrast that with someone showing me what they consider to be a perfectly executed copy of the Mona Lisa -- let's say they somehow found a way to reverse engineer every stroke, every layer of paint by da Vinci, and to have a robot conduct every stroke in exactly the same way as da Vinci. Even if this is a more faithful copy of the Mona Lisa than the restored Ecce Homo is of its original -- even if there is no physical way to discern the two other than that one is in one location and the other is in another -- there is no sense in which this robotic copy is the *original Mona Lisa*, and I would consider it fraudulent if people passed it off as the original. (Although it may absolutely have value and be a marvel in its own right.)
I think in a similar sense that if you could simulate everything about a human person and you did so in a different body/physical location, at best you get a very faithful copy, not the original person, because there is *no physical continuity*. (I'd be far more amenable if instead the argument asked whether a person is still themselves if every part of their body is replaced [since our own bodies *do* replace themselves cell by cell -- we just hope the replacement is very faithful, because when they get it wrong... it can go really wrong. Fuck cancer, etc.])
But like, you make a copy of the instructions to my processes in another location? At best, that's an identical twin, not me.
posted by subversiveasset at 5:56 PM on February 21, 2021 [1 favorite]
I still think there is something about physical continuity that matters
Where all this ends up is realizing that physical continuity, like every other attribute of every identifiable thing any of us ever thinks about, is essentially a mental convenience.
Does a candle flame have physical continuity between the time it's lit and the time it's extinguished? If so, what is the essential difference between the physical continuity of a candle flame and that of, say, a stone or a billiard ball? Is it useful to draw a distinction between physical processes that exchange very little matter with their environments, like billiard balls, and those that exchange a great deal, like candle flames? If so, in what contexts? Where do I personally fall along this particular conceptual axis?
The point is that I can draw whatever conceptual boundary I like between physical continuity itself and qualities that don't count as physical continuity, and I will find that just as for every boundary that makes a thing distinct from what it is not, sufficient investigation will reveal enough edge cases to make it fuzzy.
The only genuine certainty that any of us can ever have is that something's going on. As soon as we start getting more specific than that, we find ourselves having to abandon certainty and get by on (relatively) mere confidence; the confidence we can have in the conclusions of our reasoning can never be any stronger than that which we have in the applicability of the distinctions we draw between the things we're reasoning about. I have always found that it pays to keep that in mind, reserving the word certainty for the single case to which it actually applies.
I have a very high degree of confidence that I am not a brain in a vat being supplied with a machine-generated simulacrum of sensory experience, that this is not a simulation, and that I am a physical process with a limited lifetime; from which it follows that Pascal's Wager in its assorted guises is a fairy floss of overstretched, insubstantial, confected nonsense.
There is nothing logically impossible about any of the more contrived explanations for how I come to be where I am and experience what I do, but that's because logic, just like every other tool my brain employs to model reality with, has limits to applicability.
I have really come to value retaining as much of a grasp of the bleedin' obvious as is available in any given set of circumstances, and strongly recommend the same goal to all. It really does help dispel an awful lot of superfluous distress.
posted by flabdablet at 7:16 PM on February 21, 2021
I still think there is something about physical continuity that matters […] Let's say I am presented with a painting that the museum claims is the original Mona Lisa as da Vinci painted it.
Surely you wouldn't quibble if you listened to a performance of a symphony “as Beethoven composed it”: it might be a better or worse enactment, but it's still that symphony. I think that's the way to look at the emulations of Miguel Acevedo: they are performances of him, and if they could be made physical there would be no intrinsic difference between the first performance (i.e., “the original”) and the later ones.
posted by Joe in Australia at 7:32 PM on February 21, 2021 [1 favorite]
The attribute of physical continuity doesn't have complete conceptual overlap with the attribute of identity. Some of their edge cases are quite different.
posted by flabdablet at 7:40 PM on February 21, 2021 [2 favorites]
This story connects dots I really should’ve connected myself between my (like most people) occasional longing for the freedom of being a mind unencumbered by a substrate of slowly rotting meat, and the reality that many of the most impactful works of science fiction through my life have been about dehumanization via technology. Suspended was ten-year-old me’s favorite Infocom game. Ghost in the Shell was right up my anime alley. Hell, I’m watching Robocop as I type this, and if you don’t want a massive derail, don’t get me started on how I actually thought the philosophical underpinnings of the 2014 remake improved on the original (though, as a satire, the original was incomparable).
Of fucking course digitized humans would be enslaved, instrumentalized and ultimately discarded in unimaginably horrible ways. Capitalism relentlessly demands even ordinary biological humans operate ever more like machines. Perhaps recent history has jaded me, but we just can’t stop ourselves. Whether the arc of the moral universe bends towards justice or systemic sociopathy seems to depend on both constant individual effort and heavy cherry-picking of outcomes.
And just like that I’m kind of okay with the idea that my existence will one day end if the trade-off is that the inside of my mind remains more or less terra incognita for however long that is.
posted by gelfin at 8:30 PM on February 21, 2021 [2 favorites]
I really have no idea how brains work, so the above discussion is really interesting! I'd like to clarify my point about plausibility, though; I was focusing more on the... I guess "economics" of using brain images, rather than on whether it is possible to replicate a consciousness, or whether you "are" an image of your brain.
Specifically, I was thinking of tech that would somehow allow us to scan a brain and extract some amount of information from the scan to create a simulacrum of the person being scanned. Perhaps the simulacrum runs via a full emulation of what happens physically inside a real brain (but from the above comments it sounds like this is impossible?), or maybe an actual brain is 3D printed out, or perhaps the entire thing is lossily abstracted away into a huge set of vectors that are fed into a specialized ML algorithm. Whatever the case, the end result is a system that resembles in some way the person who was being scanned.
I can imagine that, given enough computational power, people would prefer to use such brain scans to perform AI-like tasks, simply because it would be easier. Already, we've moved away from custom-built expert systems to ML systems that we train and whose internal structure is mostly opaque. It seems plausible that, if such brain scanning technology existed, the next step would be to "implant" the knowledge inside a human brain directly into an ML system, and then exploit that system for a variety of tasks*.
So, even if we have no belief that this system is in any way a real person, or equivalent to the person who was scanned, do we have any moral obligations towards it? Is it ok to continually torture the system, making it respond in pain and fear, or, since all this is just a bunch of code, does it not matter?
* As for whether we would consider it inefficient to have the system emulate all the other kinds of cognition, like emotions, I can imagine it might be too complicated to compartmentalize the brain scan and remove unwanted parts. In any case, recent computing history has shown that if computing power is cheap enough, people don't care about running a lot of extra redundant code; look at your typical Javascript application stack, or a cloud service running on top of layers of containers and VMs.
posted by destrius at 11:08 PM on February 21, 2021 [1 favorite]
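What destrius sketches is loosely analogous to what the ML literature calls distillation: fitting a student model to another system's input/output behavior while treating its internals as opaque. A hedged illustration with an invented stand-in "teacher" -- nothing here is a real brain-scan pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
teacher = lambda x: np.tanh(3 * x)  # the opaque system whose behavior we copy

X = rng.uniform(-1, 1, size=(256, 1))  # probe the teacher with inputs
y = teacher(X)                         # record only its responses

# A one-hidden-layer student, fit by plain gradient descent on squared error.
W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)   # hidden activations, shape (256, 16)
    pred = h @ W2 + b2         # student outputs, shape (256, 1)
    err = pred - y
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    gh = (err @ W2.T) * (1 - h ** 2)   # backprop through tanh
    gW1 = X.T @ gh / len(X); gb1 = gh.mean(0)
    for p, g in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        p -= 0.5 * g
print("max error vs teacher:",
      float(np.abs(np.tanh(X @ W1 + b1) @ W2 + b2 - y).max()))
# The student mimics behavior it was shown; off-distribution it can diverge,
# which is one reason a system trained this way is a copy, not the original.
```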
Surely you wouldn't quibble if you listened to a performance of a symphony “as Beethoven composed it”: it might be a better or worse enactment, but it's still that symphony. I think that's the way to look at the emulations of Miguel Acevedo: they are performances of him, and if they could be made physical there would be no intrinsic difference between the first performance (i.e., “the original”) and the later ones.
Well, it's important to note that there's a difference between a performance and a composition (and compositions do get adapted and modified). But yes, there are absolutely differences in *performances*. Even if you have a recording of the New York Philharmonic vs the Sydney Symphony Orchestra, these performances are different *when you're trying to figure out who to pay performance royalties to*, even if they may both be aiming to faithfully represent the same composition. Even if every note and every articulation is the same, it matters that one was performed by the New York Philharmonic and the other by the Sydney Symphony...
Similarly, it *matters* that one of the Miguels is the original and the remainder are simulations/copies. Even if the simulations or copies are faithful, they can *never* be the *original*.
posted by subversiveasset at 1:35 AM on February 22, 2021
Turing machines aren't paintings though. It's pretty fundamental to the idea that two Turing machines running the same program are the same, even if one's a microcontroller and the other's a supercomputer. That (human) brains are Turing machines is true in a (vaguely disappointing) sense, since you can follow instructions and run an algorithm in your mind. They may be more than that, but I don't know what that means, since no one's come up with anything that's fundamentally more than Turing's little imaginary tape readers.
We don't get philosophical leaps like Turing's very often; we're still bottoming out all the implications of it decades later, and I suspect we'll need a few more to really get what consciousness is. If we're capable of that.
posted by rhamphorhynchus at 5:54 AM on February 22, 2021
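The substrate-independence point is easy to make concrete: the transition table below yields the same tape whether it's executed by a supercomputer, a microcontroller, or a patient person with pencil and paper. A toy simulator and a binary-increment program, both invented for illustration:

```python
# A minimal Turing machine simulator. The table maps
# (state, symbol) -> (new state, symbol to write, head move).
def run_tm(table, tape, state="start", head=0, blank="_", max_steps=1000):
    cells = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, write, move = table[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Binary increment: run right to the end of the number, then carry back left.
increment = {
    ("start", "0"): ("start", "0", "R"),
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("carry", "_", "L"),
    ("carry", "1"): ("carry", "0", "L"),
    ("carry", "0"): ("halt",  "1", "L"),
    ("carry", "_"): ("halt",  "1", "L"),
}

print(run_tm(increment, "1011"))  # 1100_ : 1011 + 1 = 1100, plus a blank cell
```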
there is a way that thinking about thinking is endlessly fascinating, and there is a way it's funny in the same way some people find cats in mortal fear of cucumbers funny
so many analogies put forward here, with such conviction. analogies are important, it is useful to make a comparison as a means of clarification, but the thing compared is not also the other thing. for the record I find this line of speculative fiction to be terrifying. some here are ambivalent about it all, apparently our notions of 'continuity' and what it means to be thought wrapped in meat diverge. I remain, a fearful primitive.
posted by elkevelvet at 7:19 AM on February 22, 2021 [1 favorite]
I would really appreciate it if somebody who genuinely thinks of themselves as being thought wrapped in meat, as distinct from being meat doing thought, would outline the benefit of taking that position. Because from where I sit, all I've ever seen that viewpoint lead to are expressions of what appear to me to be completely superfluous fear and confusion that I don't understand why anybody would choose to keep inflicting on themselves.
posted by flabdablet at 7:30 AM on February 22, 2021 [1 favorite]
As always when Turing machines are brought into these discussions, it's worth recalling that the Turing machine is a theoretical framework for digital computation, but brains are analog computers. (There are some people who still maintain that the brain actually does operate as a digital computer, but I think these arguments are pretty clearly wrong-headed, pun not intended, based on what we now know of neuroscience.) Analog computers can be made to operate as digital computers using thresholding and comparators, and digital computers can simulate analog computers to arbitrary precision. However, analog computers with feedback can behave chaotically, in the sense of exhibiting sensitive dependence on initial conditions, and this seems to be true of brains in general. This means that for any arbitrarily high but finite degree of precision, a digital computer's simulation of a chaotic system will diverge completely from that system's true behavior in finite time, providing essentially no ability to represent the chaotic system accurately. We don't really know the extent to which this type of chaotic behavior is required to explain human consciousness and behavior, but just because these phenomena should be computable in the Turing sense doesn't necessarily mean they're computable with any finite approximation of the infinite-tape Turing machine.
posted by biogeo at 7:36 AM on February 22, 2021 [4 favorites]
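The divergence claim is easy to demonstrate with the usual toy chaotic system, the logistic map at r = 4 (a stand-in here, not anything brain-like): two runs identical except for floating-point precision agree early and then part ways completely, for the same reason any finite-precision run parts ways with the underlying real-valued system.

```python
import numpy as np

# The same chaotic system simulated twice, identically except for precision.
# Sensitive dependence on initial conditions amplifies the tiny
# representational difference until the trajectories are unrelated.
r = 4.0
x64 = np.float64(0.2)
x32 = np.float32(0.2)

for step in range(1, 61):
    x64 = r * x64 * (1.0 - x64)
    x32 = np.float32(r) * x32 * (np.float32(1.0) - x32)
    if step % 10 == 0:
        print(f"step {step:2d}: float64={x64:.6f}  float32={float(x32):.6f}")
# The early steps agree to several digits; by around step 40 the agreement
# is gone entirely.
```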
Pedantry forces me to add that strictly speaking, digital computers are also capable of chaotic behavior, but if one digital computer is capable of simulating another accurately it should also be able to simulate its chaotic behavior accurately. This isn't true for analog chaos though.
posted by biogeo at 7:39 AM on February 22, 2021
I don't think I've seen anyone comment on this line yet, and it's haunting me:
MMAcevedo initially reported extreme discomfort which was ultimately discovered to have been attributable to misconfigured simulated haptic links, and was shut down after only 7 minutes and 15 seconds of virtual elapsed time, as requested by MMAcevedo.
The first time they turned it on, its sensory nervous system was on fire and it screamed at them "kill me" for seven minutes until they did.
posted by saturday_morning at 8:10 AM on February 22, 2021 [7 favorites]
This means that for any arbitrarily high but finite degree of precision, a digital computer's simulation of a chaotic system will diverge completely from that system's true behavior in finite time.
As would an analog computer's, of course, and in a different way every time, though I'd expect to see patterns emerge given enough runtime; humans' personalities usually stay pretty coherent over time, IME.
posted by rhamphorhynchus at 9:36 AM on February 22, 2021 [2 favorites]
outline the benefit of taking that position. Because from where I sit, all I've ever seen that viewpoint lead to are expressions of what appear to me to be completely superfluous fear and confusion that I don't understand why anybody would choose to keep inflicting on themselves.
Wait, people can choose to take arbitrary positions based on how they're rewarded by them, rather than feeling like they're the final conclusion of a series of path-dependent events & stimulus??
posted by CrystalDave at 9:59 AM on February 22, 2021
analogies are important, it is useful to make a comparison as a means of clarification, but the thing compared is not also the other thing.
Yes, analogies are like the frames that bound a painting, but we need to move beyond these and into the gallery of metaphor.
posted by Joe in Australia at 11:04 AM on February 22, 2021 [2 favorites]
The thing that always bothered teenage-me about heaven was that achieving the kind of grace, bliss, perfection, whatever most religious interpretations promised, effectively required ceasing to be human. Ceasing to differentiate from one moment to the next by the passage of time or the synthesis of knowledge or understanding, and certainly not gaining perspective through loss or any moment of reconciliation beyond "seeing loved ones again". Sure, there are more advanced notions of what a heaven could be, but either it requires "actually engineering an infinite opportunity to learn, grow, love, etc." or at least simulating that sensation, which... brings us to the challenge that to be raptured is to become inhuman, so can you even say that it is you who is in heaven?
I guess I'm focusing mostly on the interpretations above which treat the mind state as a copy of some finite data, like a photograph or even a structure. The reality of humanity is how it behaves once it is interacting with itself and its environment, which diverges from whatever the source might have been immediately upon beginning to execute again.
I had somewhat dismissed the "reproduction from observed stimulus" model for hosts in Westworld as needlessly baroque, but this discussion is making me think it might represent a kind of state-of-the-art, where even a replica of the mind, the meat it runs on, and everything else would have little probability of behaving in a way that an outside observer might perceive as "continuous" with the original. So by inverting the problem into something very much like behavior-driven development, where you ultimately don't care about the underlying mechanism, only about the emergent behavior and its continuity with the original, maybe you can get away with a much sparser implementation than a complete copy of any kind. At the risk of massive oversimplification, it's kind of like how psychoacoustic masking works in MP3 (MPEG Layer 3) encoding, where the human ear's inability to perceive certain combinations of signals allows us to simply delete the data with no perceived loss. I always wonder what will happen if a non-human perceiver can ever listen to an analog recording and an MP3 side-by-side, and how they will scream in terror at the unholy copy when I honestly can't tell the difference.
Also, with regard to why one might use a human mind state to perform work that one might imagine is achievable with simpler or cheaper models, consider that the human mind state is probably way ahead of the competition in navigating ambiguous situations and acting of its own accord to achieve an objective, even if that objective is a perfectly toasted slice of bread. Thanks to the miracle of scaling at a technical level, most people who use a given technology don't have to understand how it works or even how to maintain it, only how to get it to do what they want, regardless of the complexity of the steps. This model being of academic origin is very consistent with some of the early models used for things like speech recognition, OCR, image classification, etc. You use the one that is free or cheap even if it's not a perfect fit for your problem space, because it's much cheaper than training or sourcing your own.
posted by abulafa at 2:29 PM on February 22, 2021 [4 favorites]
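abulafa's psychoacoustic-masking analogy is easy to caricature in code. A crude sketch (illustrative only; real MP3 encoders use far more sophisticated perceptual models): drop every spectral component below a loudness threshold and measure how little the reconstruction changes.

```python
# Crude stand-in for perceptual coding: discard "quiet" spectral
# components, then compare the reconstruction to the original.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 4096, endpoint=False)
# Two loud tones plus quiet wideband detail.
signal = (np.sin(2 * np.pi * 440 * t)
          + 0.8 * np.sin(2 * np.pi * 660 * t)
          + 0.01 * rng.standard_normal(t.size))

spectrum = np.fft.rfft(signal)
mask = np.abs(spectrum) > 0.05 * np.abs(spectrum).max()
reconstructed = np.fft.irfft(spectrum * mask, n=signal.size)

kept = mask.sum() / mask.size
rms_error = np.sqrt(np.mean((signal - reconstructed) ** 2))
print(f"kept {kept:.2%} of components, RMS error {rms_error:.4f}")
# Almost all the data is gone, yet the residual sits at the noise floor,
# imperceptible to a listener tuned like us. A differently tuned
# perceiver might hear everything we deleted.
```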
people can choose to take arbitrary positions based on how they're rewarded by them, rather than feeling like they're the final conclusion of a series of path-dependent events & stimulus??
Sure we can.
None of us has anywhere near the degree of access to our own underlying processes that would be required to even begin to predict our future behaviour on the bases of past and present state, causality and deductive reasoning. That's the entire point of inventing concepts like choice, responsibility, ethics and so on that allow us to model behaviour acausally.
In other words, even if you do take the position that everything we do is ultimately determined by everything that's already happened, it turns out that the word "ultimately" is doing so much heavy lifting there as to render the proposition non-falsifiable and therefore moot.
I recommend choosing a self-concept, and by extension a metaphysics, that (a) doesn't chafe your reason with paradoxes every time it gets close to anything interesting, (b) helps you get through your day with some degree of equanimity, (c) relies on no clearly demonstrable falsehoods, and (d) relies on propositions for which you can devise a practical, actionable test that would show them to be false if it failed.
I also recommend attempting to find a reading of every sacred text you can get your hands on that fits within such a metaphysics. There's a great deal of satisfaction in understanding exactly how so many people manage to misinterpret so many of those things in so many thoroughly destructive ways.
posted by flabdablet at 3:49 PM on February 22, 2021 [2 favorites]
Such a great piece of writing and a great discussion. When thinking about the feasibility of such a project and the distinctions between digital and analog computers I was reminded of this comment about evolved hardware from the recent Starlink antennae teardown thread. The gulf between a schematic of nematode neurons and an emulation of that mind is deep and wide.
posted by clockwork at 5:16 PM on February 22, 2021 [1 favorite]
Dr. Thompson peered inside his perfect offspring to gain insight into its methods, but what he found inside was baffling. The plucky chip was utilizing only thirty-seven of its one hundred logic gates, and most of them were arranged in a curious collection of feedback loops. Five individual logic cells were functionally disconnected from the rest— with no pathways that would allow them to influence the output— yet when the researcher disabled any one of them the chip lost its ability to discriminate the tones. Furthermore, the final program did not work reliably when it was loaded onto other FPGAs of the same type.
I remember reading that story when it was first published, and it has certainly been heavily influential in my own thinking on the personal uploading issue.
There are cases where, in phrases like "FPGAs of the same type", the words "the same type" are doing a lot of heavy lifting - and when contemplating the kinds of manipulations likely to become feasible when applied to complex analogue systems whose behaviour is dominated by multiple interlocking feedback loops, it really pays to subject ideas of sameness and typeness to very close scrutiny. There are systems whose behaviour cannot be abstracted away from their physicality, and I think the argument that I myself am such a system is pretty bloody compelling. From which it follows that, in seeking to understand my own nature, it doesn't make sense to make that conceptual separation between the meat and the experience that "runs on" the meat; look at it carefully enough and it's just activity all the way down - far enough down that the most appropriate way to model the meat itself is as activity.
Again, none of which is to claim that it is in-principle impossible to engineer a thinking, self-aware system whose activity can be cleanly separated from its physical substrate. Only that it is so astronomically unlikely that you or I could end up identical with one such as to be not worth worrying about.
Also worth considering the degree of processing that an integrated feedback-dominated self-aware analog physical system such as myself devotes to preserving its own physical integrity because it has to. To my current way of thinking, that's the kind of processing that makes pain a useful thing. I struggle to understand what purpose any analogue of pain would serve to a being that by its very nature can be backed up, snapshotted, replicated, spun up and powered down while taking no damage.
I guess where I'm going with that is that I think it's absolutely vital, if we do ever end up building strong AIs, that we include them in any deliberative body with powers to determine ethical standards that affect them.
posted by flabdablet at 5:49 PM on February 22, 2021 [4 favorites]
I think if we ever understand the specific ways brain activity translates into consciousness it will be baffling in the same way as Dr. Thompson's experiment, but more complex by untold orders of magnitude. I think attempting to emulate what a brain does, or use any kind of imaging to create a usable digital copy of a brain, is a fool's errand.
I think it's much more plausible that we will develop systems that can very closely approximate a conscious mind, and perhaps systems that can emulate specific minds by way of observation. Given enough time and examples of a person's behavior, a computer will be able to accurately guess how they will respond to any given stimulus. Is the resulting emulated consciousness actually conscious? It will surely claim to be, since the person it's emulating would claim to be.
posted by Mr.Encyclopedia at 6:16 PM on February 22, 2021 [1 favorite]
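Mr.Encyclopedia's "emulation by observation" reduces, in the degenerate case, to a lookup table of observed stimulus/response pairs. A minimal sketch (all names invented for illustration); the entire difficulty he's gesturing at lives in generalizing to stimuli never yet observed:

```python
# Degenerate "emulation by observation": tally observed responses and
# predict the modal one. Generalization is left entirely unsolved.
from collections import Counter, defaultdict

class ObservedBehaviorModel:
    def __init__(self):
        self.history = defaultdict(Counter)

    def observe(self, stimulus, response):
        self.history[stimulus][response] += 1

    def predict(self, stimulus):
        seen = self.history.get(stimulus)
        if not seen:
            return None  # no basis for a guess yet
        return seen.most_common(1)[0][0]

model = ObservedBehaviorModel()
model.observe("offered coffee", "accepts")
model.observe("offered coffee", "accepts")
model.observe("offered coffee", "declines")
print(model.predict("offered coffee"))  # -> accepts
print(model.predict("offered tea"))     # -> None: never observed
```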
Well I mean, even if it is clear that the replica is in no way the same as the original human, do we have any ethical obligations towards it?
To consider something contemporary, there's been research in taking a person's entire corpus of text messages, and building a chat bot out of it that sounds very much like that person. I believe it was targeted at "recreating" people who had passed away. If I talk to such a chatbot, and say things to it that elicit traumatic responses, am I being immoral in any way? A lot of people might say, probably not, yet some might feel a bit uncomfortable with the idea*. Is there any point of complexity at which we begin to have moral obligations towards such a chat bot?
Another common thought experiment along these lines is if a person becomes convinced they are living in a simulation, and proceeds to kill other people because "they're just programs". Is such a person immoral?
* I'm one, but I always feel really bad about killing NPCs when playing RPGs, so...
posted by destrius at 11:55 PM on February 22, 2021 [2 favorites]
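The text-message chatbots destrius describes can be caricatured with a word-level Markov chain (a deliberately naive sketch; the corpus is invented, and real "grief bot" systems are far more elaborate):

```python
# Naive "sounds like that person" bot: a first-order Markov chain
# trained on an invented message corpus.
import random
from collections import defaultdict

corpus = [
    "see you at the usual place",
    "see you tomorrow then",
    "the usual place works for me",
]

chain = defaultdict(list)
for message in corpus:
    words = message.split()
    for current, following in zip(words, words[1:]):
        chain[current].append(following)

def imitate(seed, max_words=8, rng=random.Random(42)):
    out = [seed]
    while len(out) < max_words and chain[out[-1]]:
        out.append(rng.choice(chain[out[-1]]))
    return " ".join(out)

print(imitate("see"))  # echoes the corpus's cadence, and nothing more
```

Whatever moral status such a bot has, it plainly isn't doing anything a lookup table couldn't; the discomfort comes from the resemblance, not the mechanism.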
Another common thought experiment along these lines is if a person becomes convinced they are living in a simulation, and proceeds to kill other people because "they're just programs". Is such a person immoral?
I can't say whether they're moral, but they would probably not be held responsible under the old common law standard, by reason of insanity: "the party accused was labouring under such a defect of reason, from disease of the mind, as not to know the nature and quality of the act he was doing; or if he did know it, that he did not know he was doing what was wrong."
This of course begs the question of whether "killing" simulated humans is wrong.
posted by Joe in Australia at 2:15 AM on February 23, 2021
these from other subjects who were more lenient with their distribution rights and/or who had been scanned involuntarily.
Really don't need yet another social justice horror to worry about, we have plenty now. Prisons are modern-day slavery, what if the slaves were immortal? Horrific to comprehend.
See also
•Free will
Well fuck
posted by FirstMateKate at 12:10 PM on February 23, 2021
I would really appreciate it if somebody who genuinely thinks of themselves as being thought wrapped in meat, as distinct from being meat doing thought, would outline the benefit of taking that position.
"thought wrapped in meat" is a turn of phrase, I can't imagine a person defending the premise like it was some sort of distillation of what it means to be a conscious mind/organism.
Because from where I sit, all I've ever seen that viewpoint lead to are expressions of what appear to me to be completely superfluous fear and confusion that I don't understand why anybody would choose to keep inflicting on themselves.
Any viewpoint will become exhausted and yield diminishing returns beyond a point of utility. I'm sure I'm missing your point.
posted by elkevelvet at 1:56 PM on February 23, 2021
"thought wrapped in meat" is a turn of phrase, I can't imagine a person defending the premise like it was some sort of distillation of what it means to be a conscious mind/organism.
Because from where I sit, all I've ever seen that viewpoint lead to are expressions of what appear to me to be completely superfluous fear and confusion that I don't understand why anybody would choose to keep inflicting on themselves.
Any viewpoint will become exhausted and yield diminishing returns beyond a point of utility. I'm sure I'm missing your point.
posted by elkevelvet at 1:56 PM on February 23, 2021
From the comments on the original story, somebody plugged the whole thing into a Wikipedia template [mirror], which makes the whole exercise even more unnerving.
posted by Rhaomi at 12:05 PM on February 28, 2021 [5 favorites]
I think it is a bit of a red herring to get caught up on whether a simulation of a consciousness is actually literally you, instead of whether it makes sense to think of it as related to you in a similar way to how we think of our future selves as related to us. I agree that there is not a coherent way to define what is actually literally you that can be extended outside of the body (or even the moment), but I also don't think that really matters. We do not have continuity of form, physicality, or of consciousness. We are who we think we are in each moment, and we construct a subjective continuity from that. Will I ever be tortured in a computer? No. Will anyone ever be tortured in a computer who has a subjective continuity that includes my history? Maybe. What, if anything, do I owe to them? Does the answer to that question depend on how similar they are to me?
posted by Nothing at 4:32 AM on March 11, 2021 [1 favorite]
whether it makes sense to think of it as related to you in a similar way to how we think of our future selves as related to us.
Nobody but my future self will ever be me. The identity relation in which I stand with my future self is exactly that of a future consciousness actually literally being mine.
We do not have continuity of form, physicality, or of consciousness.
I disagree. We have at least as much continuity of physical form as most of the other physical processes we generally class as material objects, Ship of Theseus quibbles notwithstanding.
I agree that we don't have continuity of consciousness. Anybody who has ever slept would have to agree with that if they're honest with themselves.
There is no evidence whatsoever to support any model where consciousness exists as a persistent, conceptually separable entity that inhabits a body and would therefore require an explanation of where it goes while we sleep, or before we've grown our nervous system in, or after we die; the "immortal soul" is a Just So story invented to stave off discomfort with the observations that we are living things and all living things have limited lifespans.
There is plenty of evidence to support a view that consciousness is an activity performed by the physical body from time to time and that the referent of the word "I" in the question "What am I?" is in fact the activity brought on by seeking to dereference that very word.
And this activity persists only as long as it takes to begin thinking about something else. The only reason that the "inner self" appears to exhibit any convincing degree of object persistence is that, like objects that actually are persistent, we're going to see it right there every time we look at it. But unlike every other object that's there every time we look at it, we bring this one into existence by looking at it. These self-illuminating flashes of attention give the "inner self" an apparent persistence in much the same way as the flashes of a stroboscope can make the fan belts on a running car engine appear to be still. But as anybody who has ever had their hands torn up by a running fan belt can attest, it pays not to take this kind of illusion as gospel.
Will anyone ever be tortured in a computer who has a subjective continuity that includes my history?
No.
We are not fitted with anything like a JTAG interface that would let us suspend operations to allow a coherent state snapshot to be read out of us. The only being that will ever have access to my history in all its flawed subjective detail is me, and when my physical form finally gets to the point of being unable to maintain its own integrity well enough to keep on metabolizing, that will be the end of me. The end of all of me, including my ability to have a subjective experience.
If after I die you ever meet somebody claiming to be me, they're a grifter and you'll do my memory a service by giving them short shrift.
posted by flabdablet at 8:20 AM on March 11, 2021 [2 favorites]
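flabdablet's stroboscope image is literally just aliasing, which takes a few lines to demonstrate (a toy, with made-up numbers): sample a rotation exactly once per revolution and it appears frozen.

```python
# Strobe/fan-belt illusion as aliasing: flashes synchronized to the
# rotation catch the belt at the same angle every time.
rev_per_sec = 30.0   # the "fan belt"
strobe_hz = 30.0     # flashes matched to the rotation rate

for flash in range(5):
    t = flash / strobe_hz
    angle = (rev_per_sec * t) % 1.0   # fraction of a full turn
    print(f"flash {flash}: angle = {angle:.3f} turns")
# Every flash sees the same angle: apparent stillness, real (and
# dangerous) motion. Detune the strobe slightly and the belt "creeps".
```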