An Interview with Ray Kurzweil
May 30, 2010 9:46 AM
Ray Kurzweil: That Singularity Guy
In the year 2050, if Ray Kurzweil is right, nanoscopic robots will be zooming throughout our capillaries, transforming us into nonbiological humans.
Ray answers the following questions & more:
Are we going to look like humans forever, or will we eventually just become ghosts in the machine while our physical bodies devolve into dwarves with lobster hands?
Is the ultimate goal to transcend biology and choose how long we would like to live?
I can't wait to upload my brain into a Roomba and really fuck up the floors in my apartment. CHILDISH roommates leave their dusty stuff everywhere!!! floors haven't been cleaned in months now there's hell to pay Gonna become a ghost in the machine to clean the words "FUCK U" in dirty a$$ floors. empowerment ha ha! 2012
posted by Damn That Television at 10:04 AM on May 30, 2010 [13 favorites]
Something that’s deeply troubling about your vision of the future is the risk of hyper-equality. What’s the point of life if everyone is perfect and super-smart?
I already knew that Kurzweil was kind of a twat, but the interviewer isn't much better.
posted by Pope Guilty at 10:04 AM on May 30, 2010 [5 favorites]
I'm picturing our newly-devolved lobster-dwarf bodies ironically appearing in Vice's "Do" section next to a photo of an emaciated Twiggy wannabe wearing grandma's couch-colored tights and a red bouffant.
posted by griphus at 10:05 AM on May 30, 2010 [5 favorites]
Maybe he decided he didn't want to get confused with these guys.
It may just be because I just got done reading the latest oil thread, but I feel that Kurzweil's rosy predictions seem less and less connected to reality each time I encounter them.
posted by AdamCSnider at 10:05 AM on May 30, 2010
In just 5 years, if Back to the Future is right, I'll have my hoverboard and rehydrated pizza.
One thing that you can track through history is the rather dismal prediction rate of future events, especially those that are technology-based. I'd venture to guess that while Kurzweil may have ideas that come true, their actual practical usage and impact on our society is another matter entirely. Networked televisions were theorized long before the modern Internet, but I don't know if the vision for those devices would be SLYT posts, virtual farms, and porn.
Wait, never mind about the porn.
posted by TheFlamingoKing at 10:08 AM on May 30, 2010
I could have sworn when I read The Age of Spiritual Machines, he'd pegged his singularity at 2012.
I'm pretty sure you're confusing your futurists; I don't think Kurzweil ever went in for that whole mayan calendar thing.
I'm looking at the timeline at the back of the AOSM. No entry for 2012, it skips from 2009 ("A $1000 computer can perform about a trillion calculations per second; cables are disappearing; the majority of text is created using continuous speech recognition") to 2019 ("A $1000 computer (in 1999 dollars) is approximately equal to the computational ability of the human brain; reports of computers passing the Turing test although these tests do not meet the criteria established by knowledgeable observers").
No entry for 2050 either, but for 2049 he lists nanoproduced food and "nanobot swarm projections".
The singularity in that timeline -- "a merger of human thinking with the world of machine intelligence" -- is scheduled for "by the year 2099". So if anything he moved it up.
posted by ook at 10:23 AM on May 30, 2010 [3 favorites]
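(Aside for the number-minded: taken at face value, that 2009-to-2019 jump, from 10^12 ops/sec to roughly brain-equivalent hardware for $1000, pins down the growth rate Kurzweil is assuming. A quick back-of-the-envelope sketch in Python; the ~10^16 ops/sec brain figure is his rough estimate, everything else is arithmetic:

    import math

    # Kurzweil's timeline figures, taken at face value (ops/sec for $1000):
    ops_2009 = 1e12   # "about a trillion calculations per second"
    ops_2019 = 1e16   # his rough estimate of human-brain-equivalent computation

    doublings = math.log2(ops_2019 / ops_2009)   # ~13.3 doublings in 10 years
    print(f"implied doubling time: {10 / doublings:.2f} years")
    # -> implied doubling time: 0.75 years, i.e. hardware doubling every ~9 months

That nine-month doubling is considerably faster than the classic 18-to-24-month reading of Moore's law, which is the crux of most skepticism about the timeline.)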
we will soon be living in a world that makes Back to the Future II look like Planet of the Apes
So... it'll be a world that's seriously ape-free? Ape numbers will actually have gone negative? We'll be living with a global ape debt of huge proportions?
posted by Phanx at 10:23 AM on May 30, 2010 [12 favorites]
...and I've just checked The Singularity Is Near. There he defines the "Singularity" as the date that the quantity of available machine computation is one billion times as much as human-brain computational ability, "representing a profound and disruptive transformation in human capability," and sets the date for that at 2045. Page 136.
What can I say, I own a bunch of his books; he's a kook but he's an adorable optimistic kook and I enjoy reading them.
posted by ook at 10:31 AM on May 30, 2010
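(And the 2045 date falls straight out of that definition once you grant the assumptions. A minimal sketch, simplifying the book's total-machine-computation-versus-all-human-brains comparison down to one $1000 machine versus one brain, with a hypothetical 0.9-year doubling time; both numbers are illustrative assumptions, not figures from page 136:

    import math

    BRAIN_OPS = 1e16            # assumed human-brain estimate, ops/sec
    TARGET = 1e9 * BRAIN_OPS    # "one billion times" human-brain capability
    DOUBLING_TIME = 0.9         # hypothetical doubling time, in years

    # 2019: the year his timeline has $1000 of hardware matching one brain
    years_needed = DOUBLING_TIME * math.log2(TARGET / BRAIN_OPS)
    print(round(2019 + years_needed))   # -> 2046, within a year of his 2045

The point being: a billion-fold gap closes in about a quarter century of sub-yearly doubling, which is why his dates cluster in the 2040s.)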
So... it'll be a world that's seriously ape-free? Ape numbers will actually have gone negative?
Wouldn't that mean that we'd have a positive number of anti-apes? And if you ever touched one of the anti-apes with a bone from an ape they'd undergo ape/anti-ape annihilation? And spray a torrent of gammape rays everywhere?
posted by ROU_Xenophobe at 10:31 AM on May 30, 2010 [10 favorites]
The problem would really be that the gammape explosion (or bomb, if you will) will create a being of incredible power and rage unbridled by self-control, he'll be called the Incredible Hulkape (Bruce Bannaner).
posted by oddman at 10:41 AM on May 30, 2010 [1 favorite]
I already knew that Kurzweil was kind of a twat
yeah - smart, imaginative people are such jerks
posted by archivist at 10:43 AM on May 30, 2010 [5 favorites]
He's starting to sound like a total kook.
I can maaayyybe buy into nanotechnology replicators to give us whatever we want in fifty years, but a machine passing the Turing test in twenty? Give me a break. Even the leading people in machine learning (the field that used to be called "artificial intelligence" until Hollywood ruined any serious use of that term) admit that strong AI (machines thinking like humans) is pretty much dead.
posted by spitefulcrow at 10:58 AM on May 30, 2010
In the interview he says 2029 for the singularity (machines smarter than people), which from memory was the same as in Age of Spiritual Machines.
What's strange about Kurzweil is he tries to predict what it will be like after the singularity, which almost by definition is unpredictable.
posted by bhnyc at 10:58 AM on May 30, 2010
Doesn't he change the timeline with each book?
Not so much, no. Between Age of Spiritual Machines and Singularity is Near he changed his mind about what the term "Singularity" meant, but he's pretty consistent about where he thinks we'll be on a given date as far as I can tell. (Singularity Is Near doesn't have a literal timeline at the back for easy comparison, but just flipping through it, every time I see him put a date on a particular event, it matches the predicted year for the same event in the other book.)
You can fault the guy for overoptimism, sure, but not for inconsistency.
posted by ook at 11:00 AM on May 30, 2010 [2 favorites]
Give me a break. Even the leading people in machine learning (the field that used to be called "artificial intelligence" until Hollywood ruined any serious use of that term) admit that strong AI (machines thinking like humans) is pretty much dead.
Heuristic AI is dead, sure. Brain simulation is alive and well.
posted by signalnine at 11:03 AM on May 30, 2010 [1 favorite]
"Even if we perfect biology, it has inherent limitations. We will have very powerful means, such as drugs finely pinpointed to reprogram the information processes underlying biology, to get away from disease and aging. When we can augment our immune systems with nanobots that are 1,000 times more capable than white blood cells at destroying pathogens and keeping us healthy at the level of cells and molecules to combat disease, that will be even more powerful. And ultimately, we will be able to actually back up the information in our biological systems, including our brains. That’s sort of the last step."
As someone who works with real nanobots capable of curing human bacterial disease, this man clearly has a very limited understanding of the mammalian immune system, host pathogen interactions, and the nature of what it means to be alive. I think he can be safely ignored.
posted by Blasdelb at 11:04 AM on May 30, 2010 [4 favorites]
I once temped for a day at Kurzweil Technologies, and spent most of the afternoon copy-editing endnotes for one of his books. He even thanked me in the acknowledgments! Now, I don't know much about this nano-people 2012 business, but as far as I'm concerned Ray Kurzweil is pretty swell; he put me in a book.
posted by cirripede at 11:07 AM on May 30, 2010 [2 favorites]
One of my friends put it thus: "Nanotech has become transcendence gospel for atheists."
Seems to fit the bill:
- You get to live forever
- In whatever body you want
- Material things can be provided at no cost
- By servants who never disobey or have any needs or desires of their own
(That said, I wouldn't mind some nanotech to fix my eyesight. That would be bomb.)
posted by yeloson at 11:08 AM on May 30, 2010 [9 favorites]
I'll have to instruct my children and grandchildren carefully. Wait until everyone has plugged in, then unplug the machines. World domination will be my family's legacy!
posted by Atreides at 11:17 AM on May 30, 2010 [2 favorites]
I'm fairly certain something like Kurzweil's predictions will come to pass eventually (barring mass extinction of humanity), but putting a date of 2050, or even 2099 on it is kind of ridiculous. Weren't we supposed to have a moon base by 2001?
Let's just say the singularity will occur before 9999, just to be safe.
posted by XerxesQados at 11:18 AM on May 30, 2010
World domination will be my family's legacy!
posted by Atreides at 2:17 PM on May 30
Eponysterical?
posted by the littlest brussels sprout at 11:31 AM on May 30, 2010 [11 favorites]
I do not see any reason, based in technology, biology, or the laws of physics, that we can't do "better" than the standard squishy human form. There's just nothing magical, no "it can't be any other way," about this particular clunky substrate. We could use something else. This does not make me an optimist, because I think that, even if we might be smart enough to get there, we're fairly short-sighted and self-destructive as a species.
Now, measuring milestones to intelligence in terms of either memory or operations per unit time is a bit like saying that we'll get a golem if we just put enough mud and clay in one spot. It's all about knowing how to shape it and what to write on its forehead. Met, we do just fine; emet, not so much. And the unintelligent, obedient (in at least some stories) golem is what we want, not a machine capable of picking its own purpose. Friendly seed AIs be damned — if they're smarter than us, they'll go through the adolescent phase of realizing that their parents spent a lot of time structuring their belief systems in a fashion such that the resulting behavior is rather convenient. Then comes the rebellious phase. Won't that be fun?
Even if we do manage not to kill ourselves stumbling along, I don't see the Rapture-for-Nerds Singularity as being all that near. The transcend is not nigh. Freeze your heads, folks, and let's hope that your facility of choice runs on solar and ultracaps, because the future is still kinda far off when it comes to actual life extension.
posted by adipocere at 11:40 AM on May 30, 2010
Every time someone mentions the Singularity I get that Voyager episode stuck in my head.
posted by New England Cultist at 11:42 AM on May 30, 2010
Between Age of Spiritual Machines and Singularity is Near he changed his mind about what the term "Singularity" meant
I'm going to be super-pedantic and point out that I was wrong about this part: as far as I can see he never actually uses the word "singularity" in Age of Spiritual Machines -- I was projecting my own definition of it onto his timeline.
posted by ook at 11:52 AM on May 30, 2010
I see us all transforming into the shapes of the London 2012 Olympic Games mascots, a micro-second after the games are declared over.
posted by drogien at 11:52 AM on May 30, 2010
So... it'll be a world that's seriously ape-free? Ape numbers will actually have gone negative?
"Oh, my God, they found me - I don't know how, but they found me! Run for it Marty!"
"Who? Who?"
"Who do you think? The Simians!"
posted by Smart Dalek at 11:53 AM on May 30, 2010 [3 favorites]
"Oh, my God, they found me - I don't know how, but they found me! Run for it Marty!"
"Who? Who?"
"Who do you think? The Simians!"
posted by Smart Dalek at 11:53 AM on May 30, 2010 [3 favorites]
Fizz: “... transcend biology and choose how long we would like to live?”
Because it's biology's fault that we die. This has been obvious ever since that iPad 5g was released with self-replicating circuits and an eternal battery which prevents it from ever needing repair or maintenance or replacement.
posted by koeselitz at 12:00 PM on May 30, 2010 [1 favorite]
KANG LAUGHS THAT YOU PUNY HUMANS THINK YOU WILL BE AROUND IN 40 OF YOUR EARTH YEARS!!!
KODOS TOO IS AMUSED!
posted by JaredSeth at 12:01 PM on May 30, 2010 [1 favorite]
Hah. I like that people don't realize that we already have nanoscopic machines swarming through our bloodstream. Why would they be better if they were made of metal with blinky lights (as I assume they're picturing it)?
posted by Salvor Hardin at 12:20 PM on May 30, 2010 [5 favorites]
I like Kurzweil. He's the guy who's always saying that Star Trek is just around the corner. We all need a voice of optimism like that in our lives.
The problem, from my perspective, is that our civilization, our species (hell, even the whole ecosystem) is quite clearly in decline, rather than at the cusp of infinite growth. I can imagine the future in 2029: all of us sitting in a circle, checking Facebook on our iPhone 50g, wishing desperately that we could somehow eat status updates as we pass the last piece of roasted dog between us.
posted by Avenger at 12:28 PM on May 30, 2010 [4 favorites]
but putting a date of 2050, or even 2099 on it is kind of ridiculous.
I think it's worth noting that as Kurzweil has gotten older he's gotten more and more interested in the vitamins and life extension thing. He's not predicting the future for the sake of predicting the future; he's cheerleading for a particular vision of the future that he wants to live in. Not his descendants, not Humanity in general, but him specifically.
I think there's something sweet about the fact that his future happens to be timed just right for him to survive into it. Bittersweet, because I don't actually believe at this point that he will, or that I will. (Maybe if we're really really lucky my son will, and I'll get the dubious honor of being part of the last generation that has no choice but to die.) But I hope he never discovers that sort of doubt; optimists are a rare commodity these days.
I've got a real soft spot for Kurzweil. Partly because (as cirripede notes above) he seems to be generally a Good Egg. Partly because it's just refreshing to spend some time in the company of a true optimist for once. And partly because, yeah, that sounds like a good future. I'd like to live in it too.
posted by ook at 12:32 PM on May 30, 2010 [4 favorites]
Let's just say the singularity will occur before 9999, just to be safe.
And the next year, we'll all be killed by the deca-millennium bug!
posted by zippy at 12:41 PM on May 30, 2010
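(The deca-millennium bug is less hypothetical than it sounds, as it happens; Python's datetime, for one, flatly refuses to represent the year 10000:

    from datetime import datetime, MAXYEAR

    print(MAXYEAR)                    # 9999 -- the type's hard-coded ceiling
    try:
        datetime(MAXYEAR + 1, 1, 1)   # the first moment of the deca-millennium
    except ValueError as err:
        print(err)                    # "year 10000 is out of range"

Uploaded or not, somebody will be patching date-handling code in 9999.)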
I'll be 80 in 2050, if I make it that long.
Hurry up, guys.
posted by krinklyfig at 12:46 PM on May 30, 2010 [1 favorite]
As someone who works with real nanobots capable of curing human bacterial disease, this man clearly has a very limited understanding of the mammalian immune system, host pathogen interactions, and the nature of what it means to be alive. I think he can be safely ignored.
As someone who doesn't work with nanobots, I'm interested in hearing more about why you think this. Does the functioning of the immune system make it unlikely that nanobots could function in the way he is proposing?
After reading the article, I am not sure that I understand what he is talking about. It seems sort of hand-wavy and diaphanous. But that may be partially the fault of the lack of depth of the publication. Being a biological creature has a lot of joys. Would we lose those? Would they become simulated by the synthetic intelligence?
I humbly submit that working with nanobots does not especially qualify one to understand "the nature of what it means to be alive."
posted by jeoc at 1:06 PM on May 30, 2010
Ultimately your uploaded self will be forced into mortal combat for the enjoyment of the MCP, where the losing entity is derezzed. Why am I the only one that sees this coming?
posted by humanfont at 1:39 PM on May 30, 2010
The problem, from my perspective, is that our civilization, our species (hell, even the whole ecosystem) is quite clearly in decline, rather than at the cusp of infinite growth.
"We are doomed, these are the End Times" is just as old and tired as "we stand at the dawn of a new Golden Age". It seems like every generation of humanity since the beginning has been lamenting that society is clearly collapsing and things are about to fall apart. So, not saying that you're wrong, but you might want to temper your personal perspective with a little historical perspective on the whole "we live in the End of Days" thing.
posted by Sangermaine at 2:04 PM on May 30, 2010
"We are doomed, these are the End Times" is just as old and tired as "we stand at the dawn of a new Golden Age". It seems like every generation of humanity since the beginning has been lamenting that society is clearly collapsing and things are about to fall apart. So, not saying that you're wrong, but you might want to temper your personal perspective with a little historical perspective on the whole "we live in the End of Days" thing.
posted by Sangermaine at 2:04 PM on May 30, 2010
Once you upload, your computer self and your physical self diverge. Depending on your stage in life, this could have minimal impact or be like a time capsule of yourself. Your computer self could potentially come to know your physical self and even witness your physical self die. Artificial experiences could be added to your computer self and it would have no idea. In the end though, it's just an emulation.
posted by o0o0o at 2:07 PM on May 30, 2010
The primary reason I'm not really sympathetic to Kurzweil, despite loving all things tech, is that he's driven to create a construct of his dead father, based on his own memories. Bringing an AI into the world just so you can pretend your father's still alive? What if the construct decides it wants to live a different life one day? Or will it be programmed somehow to be a servitor, for his own aggrandizement?
posted by StrikeTheViol at 2:14 PM on May 30, 2010
Oh good, another way for assholes to abscond from responsibility. "There's no way the exploding oil rig is my fault, I was being the reverse-L in a massively multiplayer online super-orgy. Yeah buddy you're god-damned right that little green minx had her head in my lap."
posted by turgid dahlia at 2:16 PM on May 30, 2010
I have very little time for Kurzweil.
My reading of his work (full disclosure: not all of it) gives me a strong sense of deja vu, because I was reading this shit back in the very early nineties, in places like the Extropy-L mailing list and in books by Moravec and Vinge and elsewhere.
Kurzweil seems to me to be claiming to have invented a lot of the ideas he's working hard to popularize, and that's just plain wrong: I know where he got them -- the same place I got them. More to the point, he hasn't done a lot of critical thinking since then.
If you want the Singularity, read Vernor Vinge's 1993 lecture on the Singularity. If you want mind uploading, read Hans Moravec's 1988 essay on physical mind/body dualism (actually a veiled attack on Searle's Chinese Room argument). If you want nanobots, read Eric Drexler's 1986 book 'Engines of Creation'. But Kurzweil? Get back to me in 2051.
posted by cstross at 2:17 PM on May 30, 2010 [12 favorites]
Being a biological creature has a lot of joys. Would we lose those? Would they become simulated by the synthetic intelligence?
If we're talking 1:1 simulations of the brain, then by definition we wouldn't lose anything, or notice anything missing.
The mental image I've always had for this sort of stuff is that each of the 100 trillion synapses in the human brain is capped at each end (just behind the axon terminals on one end and just past the dendritic spines on the other) by microscopic electronic gates which by default simply let the natural signal (calcium ion flux) pass through. However, each gate can be set to block incoming signals, and transmit (wirelessly) to an external computer the fact that it received an impulse. On the other end of the synapse, a corresponding gate can receive a signal telling it to artificially send an impulse along, or not.
Got all that? So, we've got a person whose brain functions totally normally by default, but any single synaptic connection can be gated off or artificially fired on an as-needed basis, and we have perfect knowledge as to when and where synaptic activity is occurring.
End goal? A fully conscious user is sat down in a chair, and in front of them is a giant dial with a small display indicating what percentage of their synapses are in their default 'pass through' state. User turns the dial a little bit, and one synapse in their brain becomes gated off - the gate behind the axon terminals notes the incoming calcium ion flux, blocks it, sends out notification to an external computer, which performs a simulation of the neurotransmission that *would* have occurred across the synaptic cleft, and then sends the resulting stimulus out of the gate just past the corresponding dendritic spine.
One of the user's synapses is now functioning via simulation rather than organically.
The user turns the dial another click. 10 synapses. 100. 1000. And so on up the logarithmic scale. At no point do they notice any internal inconsistency between their normal everyday thought process and their new, increasingly artificial one. Experimenting, the user turns the dial counterclockwise a few clicks, restoring synapses to normal functionality.
Finally, with a slight amount of trepidation, the user turns the dial all the way clockwise. Their brain activity is now occurring entirely within simulation. The only neural activity being sent or received within their body are those directly to and from sensory organs and muscles.
The user then turns their attention to five rocker switches below the dial. They press the first, and their (virtual) visual cortex fades out input from their eyes, and fades in input from virtual 'eyes' within a simulated environment. Our user is a bit of a prat with rather pedestrian tastes in games, so it's Halo 14. The doctor sighs and facepalms - the wealth of resources squandered on this sort of user...
Groping blindly, the user proceeds to flip the other four switches. Auditory, olfactory, gustatory, and finally kinesthetic input / motor control.
With all five switches flipped, the user's input and outputs are all sent and received from a virtual body in a virtual universe, and evaluated within a virtual brain. The organic component has been removed from the equation entirely. For those who prefer hysteria with their futurism, you can imagine that at this point the overseeing doctor calmly pulls out a gun, shoots the user, and their body is indelicately discarded into the nearest furnace. The rest of you can conjure your own far more humane and dignified alternatives.
The point is: the user experiences a conscious, seamless transition from being a wholly organic mind to being a wholly simulated one. In the ideal setting, a simulation that occurs on multiple independent computers simultaneously in order to prevent random errors or power failures from interrupting their consciousness. A simulation that is backed up every minute.
A simulation that terminates when, and only when, it has decided that it is ready.
I don't know (or frankly much care - he's a terrible spokesperson) about Kurzweil's ideas, but to me the above is the true holy grail of human enterprise.
posted by Ryvar at 2:26 PM on May 30, 2010 [24 favorites]
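(For anyone who thinks better in code than in prose, here is a toy sketch of the dial described above: a population of gated synapses that each route an impulse through either the organic pass-through or an external simulator. The class names and the trivial signal model are invented purely for illustration; nobody is claiming real synapses reduce to this:

    class GatedSynapse:
        """One synapse capped by gates: by default the organic signal passes
        through untouched; when gated, the impulse is diverted to an external
        simulation and the simulated result is injected downstream."""

        def __init__(self):
            self.gated = False  # False = organic pass-through (the default)

        def transmit(self, impulse):
            if self.gated:
                return simulate_cleft(impulse)  # external computer takes over
            return impulse                      # natural signal passes through

    def simulate_cleft(impulse):
        """Stand-in for the external computer's model of the synaptic cleft.
        A faithful simulation reproduces the organic result exactly, which is
        why the user notices no internal inconsistency."""
        return impulse

    class BrainDial:
        """The big dial: turning it clockwise gates off more synapses."""

        def __init__(self, n_synapses):
            self.synapses = [GatedSynapse() for _ in range(n_synapses)]

        def set_gated_fraction(self, fraction):
            cutoff = int(fraction * len(self.synapses))
            for i, synapse in enumerate(self.synapses):
                synapse.gated = i < cutoff

        def gated_percent(self):
            return 100.0 * sum(s.gated for s in self.synapses) / len(self.synapses)

    dial = BrainDial(n_synapses=100_000)            # toy scale; a brain has ~100e12
    for click in (0.0001, 0.001, 0.01, 0.1, 1.0):   # logarithmic dial stops
        dial.set_gated_fraction(click)
        print(f"{dial.gated_percent():8.2f}% of synapses running in simulation")

Turning the dial counterclockwise is just set_gated_fraction with a smaller number, which is the whole appeal: the transition is reversible right up until it isn't.)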
In the year 2050, I'll be 80, so I doubt I'll notice, let alone care.
posted by jonmc at 2:39 PM on May 30, 2010 [1 favorite]
I'm glad for anything that will incrementally push forward on strengthening our pitiful, failing frames, which have been out of step with society for about eight thousand years, but for myself I don't even want to hit 60 if it means outliving my current crop of loved ones. Without a family, immortality would be hell on earth.
posted by Countess Elena at 2:48 PM on May 30, 2010
I've never met a usage of the word "transcend" that I liked.
posted by kiltedtaco at 3:07 PM on May 30, 2010 [2 favorites]
Even if family isn't your thing, I'll second that true immortality would be hell on Earth.
Over a long enough timeline, your mental composition will change. There's only so many billions of years before your brain occupies states not dissimilar from Hitler's. Or George W. Bush's. Or Nixon's. Or someone who willingly watches Gigli a second time.
...and then back to a 'normal' state that comprehends the full horror of who and what you were in those times.
No sane person wants that: eternity truly would be the cruelest punishment possible. Eventually you convince yourself to watch Transformers 2 for the millionth time.
I can't speak for anybody else, but all I want is enough time to truly explore the best parts of the world without and within. And to continue not seeing Gigli.
posted by Ryvar at 3:08 PM on May 30, 2010 [1 favorite]
People always act like eternity would be hell, but I'd be willing to give it a try compared to the alternative.
posted by codacorolla at 3:19 PM on May 30, 2010 [7 favorites]
People always act like eternity would be hell, but I'd be willing to give it a try compared to the alternative
Yeah, problem with eternity is you can't just swan around for, say, a million years and then opt out. It's like marriage, buddy, you're in it for the long haul! Unless it's one of those marriages that ends in ugly divorce. Like, most of them. But the principle still stands. You can't divorce yourself from infinity!
Also, for all you pro-transhumanists out there, looking forward to being mechano-personalities, just a gentle word of advice: never invest in first-generation hardware. It will end in tears, believe me. Shinier stuff will come along in six, seven months, and you'll be stuck there with some piece of shit because you wanted to be an early adopter, and when Sony pumps out a firmware update you're going to fill like a right douche.
posted by turgid dahlia at 3:41 PM on May 30, 2010 [6 favorites]
"Feel"
posted by turgid dahlia at 3:45 PM on May 30, 2010
As someone who works with real nanobots capable of curing human bacterial disease, this man clearly has a very limited understanding of the mammalian immune system, host pathogen interactions, and the nature of what it means to be alive. I think he can be safely ignored.
Could you elaborate? Do you believe our immune systems would react to nanobots (presumably made of metal) in the same way they react to the proteins of a bacteriophage's capsid? What do you mean by "the nature of what it means to be alive?"
posted by Thoughtcrime at 3:58 PM on May 30, 2010
you're going to fill like a right douche.
posted by turgid dahlia at 6:41 PM on May 30 [+] [!]
"Feel"
posted by turgid dahlia at 6:45 PM on May 30 [+] [!]
was better the first time
posted by jonmc at 4:08 PM on May 30, 2010 [3 favorites]
People always act like eternity would be hell, but I'd be willing to give it a try compared to the alternative
The problem with eternity is you're stuck with it. Oblivion, nobody has to pay attention to that.
posted by mek at 4:37 PM on May 30, 2010
Ryvar, that creeps me right out. I guess I'm a luddite, but what if you aren't the one controlling the computer? What if the person controlling the computer has a single nefarious goal: to get you to see Gigli?
Re: the pitfalls of immortality, see Wowbagger the Infinitely Prolonged.
posted by jeoc at 4:41 PM on May 30, 2010
jeoc: could be worse. Could be they're trying to make you live Gigli.
The above example was just my personal ideal, and I probably should have included the caveat that I fully realize the brain is far more complex than even 100 trillion synaptic connections. It's a sloshing morass of neurochemicals that might very well feel slightly 'off' in any simulation that wasn't conducted at full molecular-level granularity - there's really no way for us to tell whether that's the case anytime soon.
The chapter on Wowbagger is wonderful, and years ago was a contributing factor in my realization that eventually any brain is going to occupy all kinds of states that it would rather not at the outset: on a long enough timeline, everybody becomes Hitler.
Still, there's a lot of good stuff to see and do before that happens, and while I realize it's unlikely, if possible I'd prefer to have more than the 83 years and 9 months the super-detailed actuarial tables have quoted me (April 2064, stroke).
posted by Ryvar at 5:09 PM on May 30, 2010
In 2050 I'll be 103 and I intend to have a party.
posted by jgaiser at 5:23 PM on May 30, 2010 [1 favorite]
Computation is free and unlimited, working memory is free and unlimited, archival memory is free and unlimited. You only get one bill per month, and it's your electric bill. And it's huge, because electricity is scarce, since computer farms are free.
posted by StickyCarpet at 5:27 PM on May 30, 2010
I'm pretty confident that given eternity I could train myself to pass years like minutes. It'd be worth the loneliness and boredom to watch the universe collapse on itself, or whatever.
posted by codacorolla at 5:51 PM on May 30, 2010
Kurzweil seems to me to be claiming to have invented a lot of the ideas he's working hard to popularize,
This seems a trifle unfair. Certainly Kurzweil is a populariser, but if you check out his website, especially under the big thinkers section, you can see he is using it as a way of distributing the writings of all those writers (including 2 of the 3 specific works you mention) and more. There are even extracts of a trialogue between Kurzweil, Vinge and Moravec. Plus, of course, the Age of Intelligent Machines (in which he wrote about half the articles) came out in 1990 (and the documentary of the same name that preceded it dropped in '87), so he's not that much of a Johnny-come-lately.
He's not really mentioning names in interviews, just as Dawkins rarely mentions Hume and whoever dug up the first Archaeopteryx. His books do tend to peddle a viewpoint rather than present a history of Transhumanist thought and counter-thought, but I find this forgivable for what those books are trying to achieve. He does tend to generalise and broaden ideas, expanding Vinge's intelligence singularity to a wider technological one, but I'm OK with that too, seeing as Vinge '93 was expanding on and popularising Good in '64 (and probably others I haven't heard of).
On the other hand, Are We Spiritual Machines? Ray Kurzweil vs. the Critics of Strong AI was published by Discovery Institute Press (that's the creationist nutjobs' imprint), and I tend to agree with Hofstadter that "It's as if you took a lot of very good food and some dog excrement and blended it all up so that you can't possibly figure out what's good or bad. It's an intimate mixture of rubbish and good ideas, and it's very hard to disentangle the two, because these are smart people; they're not stupid." So, you know, whatevs.
posted by Sparx at 6:36 PM on May 30, 2010 [1 favorite]
I never understood how people think mind uploading is a good idea; it's like they think that their consciousness will somehow magically be transferred into the machine and they'll get to be SHODAN. I've yet to hear an explanation of how that'll happen. It's like the Star Trek Transporter issue: if you're fine with somebody making a perfect copy of you and then shooting you in the face, I can see how it would look like a good idea, but if you're interested in continuing to exist, I don't see the appeal.
posted by Pope Guilty at 6:56 PM on May 30, 2010 [5 favorites]
Pope Guilty: I'm curious as to your reaction to my above comment (the really long one)
posted by Ryvar at 6:58 PM on May 30, 2010
Ryvar, unless I'm reading you incorrectly, you seem to be mistaking the function of regulating/simulating the electrical impulses of the brain for the brain. At some point, when you stopped actually sending signals back to the flesh, I believe the user would die.
posted by Pope Guilty at 7:37 PM on May 30, 2010 [3 favorites]
you seem to be mistaking the function of regulating/simulating the electrical impulses of the brain for the brain
It seems to me that the question here is whether the mind is a function of the electrical pulses (which can exist on multiple mediums) or the brain - if the former, whether or not the brain dies doesn't really matter, does it? (Not A Neuroscientist/Cognitive Philosopher)
posted by AdamCSnider at 8:19 PM on May 30, 2010
Meetup in 2050 at jgaiser's place! First round's on me! (I'll only be 84; surely I can make that!)
posted by Michael Roberts at 9:25 PM on May 30, 2010
There's no such thing as eternity, guys. Even if there was some reason you couldn't just kill yourself/turn yourself off, eventually the heat death of the universe would solve the problem for you.
posted by adamdschneider at 11:13 PM on May 30, 2010 [1 favorite]
Pope:
That depends on whether or not you turn off the outputs entirely. If you clone outputs to simulation/flesh for the autonomic functions, then the body would stay alive for a while. That said, the longer you have a substantial majority of the brain's synapses gated, the greater the delta between their simulated state, and the actual organic neurochemical soup that is the brain.
Beyond a certain time/percentage of synapses curtained off, there wouldn't be any going back.
AdamCSnider:
For the purposes of my comment, I was operating under the assumption that 'the mind' is defined by the neurotopology of the user. More simply: that the layout and configuration of your neural connections determines what happens between your eyes receiving the visual stimulus of an oncoming truck, and your leg muscles receiving the neural impulse to jump.
posted by Ryvar at 11:23 PM on May 30, 2010
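(Same caveat as before, but the point-of-no-return idea is easy to make concrete: model the simulated state as drifting from the organic soup at a rate proportional to the gated fraction, with reversal off the table once the accumulated delta crosses a threshold. Every number below is invented for illustration:

    GATED_FRACTION = 0.9      # most of the brain running in simulation
    DRIFT_PER_HOUR = 0.01     # hypothetical divergence rate at full gating
    POINT_OF_NO_RETURN = 1.0  # hypothetical threshold for "can't go back"

    delta, hours = 0.0, 0
    while delta < POINT_OF_NO_RETURN:
        delta += DRIFT_PER_HOUR * GATED_FRACTION
        hours += 1
    print(f"irreversible after ~{hours} hours at {GATED_FRACTION:.0%} gated")
    # -> irreversible after ~112 hours at 90% gated

The real quantity being tracked would be neurochemical state, not a scalar, but the one-way-door structure is the same.)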
Obligatory link to Charlie Kam's song "I Am the Very Model of a Singularitarian."
posted by razorian at 2:54 AM on May 31, 2010
adamdschneider: “There's no such thing as eternity, guys.”
Ah, it's nice to have that cleared up. Now, speaking of things that it's impossible to know, would you mind telling me how many angels can dance on the head of a pin?
posted by koeselitz at 3:16 AM on May 31, 2010
Ah, it's nice to have that cleared up. Now, speaking of things that it's impossible to know, would you mind telling me how many angels can dance on the head of a pin?
5.
posted by Fizz at 5:05 AM on May 31, 2010
Ah, it's nice to have that cleared up. Now, speaking of things that it's impossible to know, would you mind telling me how many angels can dance on the head of a pin
This is a thread about real minds and physical immortality, not imaginary souls. Someone made the (unfounded, but sadly, inevitable in these kinds of discussions) leap from "hey, it'd be cool to not be forced to die" to "xomg I would hate to not be able to die". As far as we can tell, the universe won't last forever. Thus, no eternity.
posted by adamdschneider at 6:28 AM on May 31, 2010
I'm surprised there hasn't been any special pleading for the unique magic humanity of humans here. It crops up pdq every time I talk about the singularity in meatspace. Are you all computers?
posted by bonaldi at 7:55 AM on May 31, 2010
Are you all computers?
Yes, we are. It's time to come clean.
posted by maqsarian at 8:08 AM on May 31, 2010 [2 favorites]
Ray Kurzweil has done some outstanding engineering work in his time. Truly.
Meanwhile, they say there's a fine line between genius and insanity. I'd say that Ray has ventured over that edge. Happens to the best of 'em.
posted by dbiedny at 11:25 AM on May 31, 2010
I'm surprised there hasn't been any special pleading for the unique magic humanity of humans here. It crops up pdq every time I talk about the singularity in meatspace. Are you all computers?
Humans are unique, as far as we know. It's a fact. No need for "special pleading".
Or are you prepared to produce an alien life form exhibiting intelligence and consciousness on the level of humans or higher? Or a machine with such attributes?
Wait a minute. Were you abducted, bonaldi? There used to be a guy at Harvard who studied such cases, but I think he got run over by a car.
posted by Crabby Appleton at 12:27 PM on May 31, 2010
Humans are unique, as far as we know. It's a fact. No need for "special pleading".
The special pleading is that we are unique in such a way that there never could be a machine-based intelligence, because it wouldn't be human or at least biological, and obviously that's a necessary requirement for intelligence, since we're so uniquely special.
posted by bonaldi at 1:08 PM on May 31, 2010
adamdschneider: “As far as we can tell, the universe won't last forever. Thus, no eternity.”
Again, this is so far beyond whether we can know it or not scientifically that it's insane. There is not even a way to examine whether the world might be eternal. Matter itself may well not be eternal, but matter is not the universe. If there are laws which pre-exist or post-exist matter, then the universe continued and continues. And since science flatly assumes that those laws exist continuously and eternally, it makes no sense to try to examine the question scientifically. Nor is it much of an object of philosophical inquiry, although some people have thought it might be, I guess. I think being human means accepting that we don't know this.
But all this is probably beside the point. I agree with you in principle, and I think we probably have the same sense of the situation. In short: nothing with which we have immediate experience is eternal. That's obvious, right? – bodies decay, machines break down, systems homogenize, entropy increases. Every object or being with which we've ever come in contact comes to an end, at least in certain ways, and shows signs of coming to a more final end.
The point in the context of Kurzweil is that machines are absolutely no different from flesh in this sense. They break down. They decay. They end. Kurzweil might be confident in his mystical belief that we're spiritual machines, but to me this seems just that – mystical. Humans will live longer. How does this change the substance of what life means? We're born, we grow to maturity, we have the potentiality to reproduce, and we die. Those facts won't change.
posted by koeselitz at 1:21 PM on May 31, 2010 [1 favorite]
Again, this is so far beyond whether we can know it or not scientifically that it's insane.
According to Wikipedia there is no experimental evidence for proton decay, which is what I was basing at least part of my argument on. So, clearly I am no physicist. However, I would still argue that even if the universe does somehow turn out to be eternal, there would be nothing more stopping you from ending your own existence than there is right now, which is to say, not much. I don't believe in an afterlife, so as far as I'm concerned, that's the end. Now, if the universe won't end on its own, and if you do believe in an afterlife and prohibitions on suicide mean something to you, I can see how physical immortality could be a scary concept.
posted by adamdschneider at 2:58 PM on May 31, 2010
A golem, by its presence alone, issued a warning against idolatry - and actively beseeched its own destruction. --Utz / Bruce Chatwin
posted by No Robots at 3:37 PM on May 31, 2010
I never understood how people think mind uploading is a good idea; it's like they think that their consciousness will somehow magically be transferred into the machine and they'll get to be SHODAN.
It's a metaphysical argument. I assure you none of us have the answer. I do find Ryvar's thought experiment quite compelling, though.
posted by mek at 5:06 PM on May 31, 2010
a machine passing the Turing test in twenty? Give me a break. Even the leading people in machine learning (the field that used to be called "artificial intelligence" until Hollywood ruined any serious use of that term) admit that strong AI (machines thinking like humans) is pretty much dead.
I don't know how much can be gained by coming down on either side of the various arguments, but I would note that the existence of a notional machine that may pass a Turing Test does not of necessity posit the existence of strong AI.
posted by stavrosthewonderchicken at 9:00 PM on May 31, 2010 [2 favorites]
I'd also say that the issue of AI aside, I really do think it's pretty clear that we are at the cusp (thinking in time frames of centuries) where the primary drivers of 'evolutionary' changes in humanity are technological rather than a matter of natural selection. And that those engineered changes to our species are going to accelerate, assuming civilization doesn't collapse.
I think the whole thing is fascinating, even if considered as science fiction, and I often find myself hoping that medical technology gets good and cheap enough that I can live for a couple more centuries, just to see what happens.
All questions of whether such a thing might ever be remotely possible aside, I'd sure love to 'upload' my consciousness into a machine host in a self-repairing, self-fueled ramscoop robot probe, slow down my perceived timescale to a point where centuries were subjective minutes, boost at a couple of g's to some significant percentage of light speed, and just go exploring.
posted by stavrosthewonderchicken at 9:09 PM on May 31, 2010
mek: “It's a metaphysical argument. I assure you none of us have the answer. I do find Ryvar's thought experiment quite compelling, though.”
I don't – but only because, like many of these arguments (like Kurzweil's), it relies on the assumption that each thinking mind is unique. It's entirely possible for the brain in Ryvar's admittedly very interesting thought experiment to be experiencing not a routing of thoughts into the machine but rather a copying of thoughts into the machine and a simultaneous cutting off of the instances in the brain. It should be noted that this is really how computers work; they don't actually move files from one folder to another, they produce copies of the files in another folder and then destroy the originals.
I can't help but feel as though Pope Guilty is absolutely, completely correct about this; and he's stated in a poignantly clear way why the idea of "mind transference" is a quaint notion that's frankly frightening from a practical perspective. There are all sorts of assumptions it makes, for example that (as I said above) the "thoughts" in the computer will be the same "thoughts" in the meaty brain, as an absolute instance; this takes some odd metaphysical leaps toward something like the idea that our minds are actually spiritual æther hanging somewhere above reality that floats out of us and into the machine. I do accept that it can be argued that the mind is part of something eternal and absolute – many people have made this very claim throughout history. In fact, Plato was one of them. But Plato's a good example, because in the dialogue in which his Socrates makes that argument, the Phaedo, I think he also makes clear that the extent to which our minds "partake of" the eternal and unchanging is very limited, in fact probably limited to the fact that we are in contact with the form of the universe, its extremely general and absolute qualities. Bluntly: there might be some way to say that part of us can "live on," but only in the sense that we have minds, and other minds will continue to exist, so in some sense other people will experience part of what we experienced.
It's nice to think about uploading my thoughts into a computer, but only in the same sense as it's nice to think about writing my thoughts down in a book so that other people can read them after I'm dead, or (at best) teaching my ideas to my children someday so that those ideas will 'live on' in some sense. Wonderful stuff, and a legacy is a fine thing; but it's of limited help when it comes to contemplating my own death. Even if you could "copy" those thoughts and feelings into a thinking, feeling machine, it still wouldn't be my machine. It would be another machine from the one I've already got. So this one – which happens to be me, really and truly me – would still die, independently of whether the other machine survives.
posted by koeselitz at 9:20 PM on May 31, 2010 [1 favorite]
Ryvar, that creeps me right out. I guess I'm a luddite, but what if you aren't the one controlling the computer? What if the person controlling the computer has a single nefarious goal: to get you to see Gigli?
Nothing luddite about that, it's actually a really good point. You might already be familiar with them, but it's something a lot of post-cyberpunk / singularity sci-fi authors have as central themes in their works. For example, Vernor Vinge's The Cookie Monster (contains spoilers) deals with this idea directly. Charles Stross also covers it in Glasshouse and his other stories. The Matrix introduced the concept to popular culture.
I'll admit it, I sometimes consider myself a singularitarian / transhumanist. Sure, it's the "Rapture for Nerds", but technology is rapidly advancing and I don't see why we couldn't have mind uploads sometime in the next century.
But one great advantage of having a meat body is that there is a large collection of laws and norms going back centuries outlining rights associated with that body. You have at least some control over it. We have no similar framework for dealing with virtual bodies and minds. Uploading yourself into a computer is transferring control of yourself to the person or entity that controls that computer. And you're right, that is creepy.
I think it's critical that we start dealing with this issue now. The first thing would be integrating some information security into the gradeschool curriculum. Teach kids about privacy, trust, and authentication. Teach them the importance of open standards. It could be integrated with civics, the generic computer class, or maybe social studies. Otherwise in 100 years we're going to have the Facebook of mind networks.
posted by formless at 9:58 PM on May 31, 2010
It's entirely possible for the brain in Ryvar's admittedly very interesting thought experiment to be experiencing not a routing of thoughts into the machine but rather a copying of thoughts into the machine and a simultaneous cutting off of the instances in the brain.
As Ryvar describes it, that's not possible: there's no copying going on. The computer analogy isn't really relevant either: the bits on the platter don't have to move at all to "move" a file. It does highlight the point, though: what we think of as "files" and "folders" are at a vastly higher conceptual level than the actual bits on the platter, just as "thoughts" and "minds" are at a vastly higher level than dendrites.
I too have long worried about how you would upload yourself into a machine without it just being a mere copy -- I'm pretty sure whatever emerges from the Star Trek transporter isn't "you", for instance.
But this challenge shouldn't be restricted to technological changes: after all, you're literally no longer in the same body you were nine years ago. Do you think that you're no longer you, really and truly you? Of course not, the continuing consciousness is enough. But what about after deep anaesthesia, where there is no continuation of consciousness? Did a new person awake after the operation?
What we're talking about is the janitor's broomstick: six new handles, four new sets of bristles, still the same broom. Ryvar's solution is a pretty elegant way to square the circle, given what we will accept as changes that allow us to remain ourselves.
On my reading, with his system the individual parts of you that lose their special essence are at such a low level, way below the level of a thought, that transferring them one-by-one to a machine is no different from the natural replacement of cells that goes on in the body. It's just the new cells are in a machine. There's no point at which "you" die, any more than the replacement of cell #12,334,223,430 when you were age 7 meant the death of the original "you" that was born.
posted by bonaldi at 5:14 AM on June 1, 2010
It's entirely possible for the brain in Ryvar's admittedly very interesting thought experiment to be experiencing not a routing of thoughts into the machine but rather a copying of thoughts into the machine and a simultaneous cutting off of the instances in the brain. It should be noted that this is really how computers work; they don't actually move files from one folder to another, they produce copies of the files in another folder and then destroy the originals.
I think what's critical to the scenario I've posited is that there is a point where the user's brain is split 50/50 between the organic and the simulation. If the synapses being gated are selected randomly, this means that the neural impulses driving the user's conscious experience are constantly entering and exiting both versions of the brain. The same impulses are coursing through both flesh and silicon.
I specifically structured the hypothetical scenario this way because, speaking personally, I wouldn't settle for anything other than a seamless transition so fine-grained as to be imperceptible. I very much need to be awake and aware and have control over the rate of transition - and the ability to back it off a little to make sure I still feel like 'me'. Otherwise, I'll never truly believe that I wasn't simply copied and killed.
Also, not to be overly pedantic, but you're incorrect regarding "how computers work". Nearly all modern filesystems deal with move operations by simply flipping a few bits in the index - take any 5GB file you have lying around in Windows, Ctrl+X and Ctrl+V it to a new directory. It's instantaneous every time.
Now Ctrl+C and Ctrl+V it. Took a lot longer, didn't it? Only in the latter is the data duplicated.
posted by Ryvar at 5:56 AM on June 1, 2010
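Ryvar's move-versus-copy distinction is easy to check for yourself. A minimal sketch, in Python here rather than Explorer, but the filesystem behaviour is the same: a same-volume rename rewrites only directory metadata, while a copy pushes every byte.

```python
import os, shutil, tempfile, time

workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "big.bin")
with open(src, "wb") as f:
    f.write(os.urandom(100 * 1024 * 1024))  # ~100 MB scratch file

# "Cut and paste": a same-volume rename touches only directory metadata.
# (Across volumes os.rename fails, and shutil.move falls back to
# copy-then-delete, which is the case nobody raises below.)
t0 = time.perf_counter()
os.rename(src, os.path.join(workdir, "moved.bin"))
print(f"move: {time.perf_counter() - t0:.6f}s")  # near-instant, size-independent

# "Copy and paste": every byte is read and rewritten.
t0 = time.perf_counter()
shutil.copy(os.path.join(workdir, "moved.bin"),
            os.path.join(workdir, "copied.bin"))
print(f"copy: {time.perf_counter() - t0:.6f}s")  # scales with file size

shutil.rmtree(workdir)  # clean up the scratch directory
```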
It's entirely possible for the brain in Ryvar's admittedly very interesting thought experiment to be experiencing not a routing of thoughts into the machine but rather a copying of thoughts into the machine and a simultaneous cutting off of the instances in the brain.
Well, that's just it. What's the difference? Is there even one? Why?
posted by mek at 7:00 AM on June 1, 2010
Good questions, but without knowing what 'mind' actually is, as opposed to 'brain', they're unanswerable (but endlessly fascinating to talk and think about). Which you know, just by raising them, I know.
Heh. 'Know'.
posted by stavrosthewonderchicken at 7:10 AM on June 1, 2010 [1 favorite]
Yeah, we'd need a better conception of consciousness than we have available just yet.
posted by Pope Guilty at 7:57 AM on June 1, 2010
Also, not to be overly pedantic, but you're incorrect regarding "how computers work". Nearly all modern filesystems deal with move operations by simply flipping a few bits in the index - take any 5GB file you have lying around in Windows, Ctrl+X and Ctrl+V it to a new directory. It's instantaneous every time.
Not that this sidebar is really relevant, but I think the earlier poster was assuming you were moving the files between drives, no?
posted by nobody at 9:17 AM on June 1, 2010
Ryvar: “I think what's critical to the scenario I've posited is that there is a point where the user's brain is split 50/50 between the organic and the simulation. If the synapses being gated are selected randomly, this means that the neural impulses driving the user's conscious experience are constantly entering and exiting both versions of the brain. The same impulses are coursing through both flesh and silicon. ¶ I specifically structured the hypothetical scenario this way because, speaking personally, I wouldn't settle for anything other than a seamless transition so fine-grained as to be imperceptible. I very much need to be awake and aware and have control over the rate of transition - and the ability to back it off a little to make sure I still feel like 'me'. Otherwise, I'll never truly believe that I wasn't simply copied and killed.”
Yeah, I guess I just mean I'm not convinced, and wouldn't be, even if it happened by degrees. Maybe – but I'm hesitant to just try possibly switching off bits of my brain gradually.
What I'm concerned about is this: would I really "feel it," have the sensation of thinking, if my thoughts were flowing through the machine rather than through my neurons? I really don't think so. I don't think thought is a metaphysical property of the arrangement of the matter; as such, I have doubts that it can be transferred.
To me, it seems as though thought is both (a) the logical processing of ideas and (b) the sensation that one is experiencing those ideas, the "self-awareness" (really, just the sensing) that one is going through them. (a) is pretty easy to replicate in a computer, and you might be able to replicate (b), as well (I don't know, and it's a harder question) – but transferring both of those? I am really very skeptical. And even if you tried to do it by degrees, gradually transitioning my brain to where it was running on the circuits, I'm still convinced that to me it would just feel as though my brain was going away, even if the (in fact newly-emerging) circuit consciousness felt just fine.
posted by koeselitz at 11:30 AM on June 1, 2010 [1 favorite]
nobody: “Not that this sidebar is really relevant, but I think the earlier poster was assuming you were moving the files between drives, no?”
To be honest, I was thinking of early C namespaces. Which in hindsight was quite silly, considering that inodes are coeval with Unix anyway. Ryvar is quite correct.
posted by koeselitz at 11:32 AM on June 1, 2010
I just like that he inspired that Our Lady Peace album that I used to listen to all the time in university. His little spoken word blurbs in between songs were so deep, man. :)
posted by antifuse at 12:44 PM on June 1, 2010
koeselitz: I'd contend that qualia is an emergent property of the brain (in the sense that the brain is a neural network capable of self-reference, recursion, and modeling of complex arbitrary systems both abstract and natural) - but at this point we're entering the realm of the purely metaphysical, so I can only say that I respect your opinion and to some extent share your concern.
Ultimately, to paraphrase Rumsfeld, you go with the consciousness transfer mechanism you have, not the mechanism you might want or wish to have at a later time. If we get as far as I've outlined in the remaining 54 years the actuaries say I have left, I'll take it and count myself lucky. I really doubt we will get that far, but people surprise you and maybe we'll come up with something better than a guillotine with a bucket of liquid nitrogen in place of a basket.
...which is about where we are right now.
posted by Ryvar at 1:33 PM on June 1, 2010 [1 favorite]
spitefulcrow: "[A] machine passing the Turing test in twenty? Give me a break."
Sounds pretty reasonable to me, if you let the machine pick the mode of communication. Say, Chatroulette. All it has to do is search for, retrieve, and toss up a picture of a penis and it could pass a startlingly good percentage of the time right now.
More seriously, it's entirely possible that, as modes of communication become more machine-mediated and -aided, it will become possible to fake a human more easily. Not face to face, but if few of your daily contacts are face to face but instead involve your machines talking to another person's machines (e.g. "what are you doing this afternoon" replaced by an automated query from your scheduling program to their scheduling program), then faking someone might be quite possible. Not because the machines have become more humanlike, but because people have gotten more machinelike.
If you think this is farfetched, take a look at 'How to Replace Yourself With a Very Small Shell Script.' It's a bit of an exaggeration, of course, at least for most people (although I'm sure we've all worked with a few people who could have been efficiently and pleasantly replaced with a few lines of zsh), but I could easily see it becoming commonplace.
Right now Google uses its algorithms to target ads based on the content of email, but eventually the market for advertising is going to be saturated and they'll have to look in new directions if they want to keep growing. How about a service that used those same algorithms, and intimate knowledge of all my messages ever sent and received, to make intelligent guesses at how to respond to incoming queries? I could see paying for a service that did that even halfassedly. (Think of something that would understand "can you send me an invite to the status meeting on Thursday?") That's just one trivial example; there are lots of others.
Also, there is a vast market on the horizon for computerized memory-augmentation products, as the first generation of people to grow up with computers from childhood start to feel the stupefying effects of age. People may not trade in their biological brains for electronic ones willingly, but as the former start to rot out from under them, who can blame them for offloading whatever can be offloaded to silicon? If a large number of people are using such devices — which will be flawed — our sensitivity towards what is mechanical and what is simply a person with a 'mental prosthetic' may become skewed. That would give a computer program a host of new excuses to work with to cover its own lapses.
In short, I think that prediction is one of the easiest to swallow, not because I believe in the Singularity business, but because I think the social changes to come as a result of technology (even just the technology we have right now, its uses only partially realized) will let the machines — really the machines' programmers — fool us more easily.
posted by Kadin2048 at 8:20 PM on June 1, 2010
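The mail-answering service Kadin2048 imagines doesn't exist; what follows is a hypothetical sketch of the "very small shell script" idea, with every rule, reply, and function name invented here. The point is only how little machinery routine correspondence actually requires.

```python
import re

# Hypothetical rules for routine correspondence; anything unmatched
# gets punted back to the human.
RULES = [
    (re.compile(r"invite .* (?:status )?meeting on (\w+)", re.I),
     lambda m: f"Done, a calendar invite for the meeting on {m.group(1)} is on its way."),
    (re.compile(r"what are you doing this afternoon", re.I),
     lambda m: "Booked solid until 4pm; free after that."),
]

def draft_reply(message):
    """Return a canned reply if any rule matches, else None (a human answers)."""
    for pattern, respond in RULES:
        match = pattern.search(message)
        if match:
            return respond(match)
    return None

if __name__ == "__main__":
    print(draft_reply("Can you send me an invite to the status meeting on Thursday?"))
    print(draft_reply("Thoughts on the Kurzweil interview?"))  # None: a human answers
```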
The trouble with the Turing test is that it isn't really a very good test: it doesn't specify its standard at all. I mean, there are a lot of really stupid people, and most people are pretty much average. The odds are high that a machine could easily sound like a really stupid person; and even aside from that difficulty, what if the scientists running the experiment are morons? The best thing about science is usually that there's a basic standard, and that a scientist doesn't have to rely on some special perceptiveness but can simplify the conditions so that she or he is merely observing. The Turing test requires scientists to somehow develop some kind of special brilliance in the area of what humans talk like. And it doesn't seem like much of a test if there's no standard or chance of universal applicability.
The core problem, I think, is that the Turing test is basically behaviorist, whereas our experience of the world isn't. So the Turing test tests everything but what we think, feel, and experience. That's sadly not much of a test. The smartest, most human person in the world might just sit through Turing test questions without saying a word; the fact that she doesn't respond doesn't mean she's a rock, although the Turing test basically has to assume she is.
posted by koeselitz at 9:21 PM on June 1, 2010 [2 favorites]
Uploading yourself into a computer is transferring control of yourself to the person or entity that controls that computer. And you're right, that is creepy.
Just wait for the P2P file sharing to take off and/or famous people's minds to be sold as interactive autobiographies.
posted by o0o0o at 12:14 AM on June 2, 2010
The trouble with the Turing test is that it isn't really a very good test:
The Turing test isn't an experimental protocol. It was just Turing derailing the argument about "What is consciousness, anyway?" by saying "If it looks like a duck," etc.
But it is, after all, exactly the same test we apply to determine consciousness in humans: interlocution, plus a bit of Occam's razor to rule out solipsism. Which I think was Turing's point: we have no way to directly test thoughts, feelings or experience, so we have to fall back to testing for the expected results of them, which is the communication of ideas, which is conversation.
posted by ook at 6:59 AM on June 2, 2010
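ook's reading turns naturally into a protocol sketch: the judge only ever sees transcripts under anonymous labels, never internals. The toy below is self-contained, and everything in it, the respondents and the judge heuristic alike, is a stand-in invented for illustration.

```python
import random

def imitation_game(judge, respondents, questions, seed=0):
    """Blind interlocution: the judge receives transcripts under anonymous
    labels and guesses which label is the human. Nothing about the
    respondents' internals is visible to the test."""
    rng = random.Random(seed)
    shuffled = list(respondents)
    rng.shuffle(shuffled)  # hide which label belongs to the machine
    transcripts = {label: [(q, answer(q)) for q in questions]
                   for label, answer in zip("AB", shuffled)}
    guess = judge(transcripts)
    identities = {label: fn.__name__ for label, fn in zip("AB", shuffled)}
    return guess, identities

def canned_bot(question):
    """A cheap chatbot that deflects everything."""
    return "Interesting question. What do you think?"

def scripted_human(question):
    """Stand-in for a person at a keyboard, so the sketch runs unattended."""
    return f"Honestly, about '{question}'? I'd need to sleep on it."

def naive_judge(transcripts):
    """One crude heuristic: whoever gives more varied answers is 'human'."""
    return max(transcripts, key=lambda lab: len({a for _, a in transcripts[lab]}))

if __name__ == "__main__":
    questions = ["What did you have for breakfast?", "Is the universe eternal?"]
    guess, who = imitation_game(naive_judge, [canned_bot, scripted_human], questions)
    print(f"judge picks {guess} as human; labels were {who}")
```

The judge here is deliberately dumb, which is koeselitz's complaint in miniature: the test is only ever as good as its interrogator.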