Kurzweil vs. Myers
August 18, 2010 1:11 AM Subscribe
Ray Kurzweil: Reverse-Engineering of Human Brain Likely by 2030. PZ Myers: Ray Kurzweil does not understand the brain.
6 reasons why you'll never upload your mind into a computer
posted by homunculus at 1:19 AM on August 18, 2010 [5 favorites]
*reverse engineers mechanism of desire for beer and popcorn, grabs chair*
posted by Dr Dracator at 1:19 AM on August 18, 2010 [1 favorite]
It's like one of those old point-counterpoint pieces from The Onion
posted by Jon_Evil at 1:20 AM on August 18, 2010 [5 favorites]
"Expert." Ha.
Ray Kurzweil is a common idiot.
posted by koeselitz at 1:23 AM on August 18, 2010 [4 favorites]
... and Myers is right. The human brain is in the genome? Huh?
posted by koeselitz at 1:24 AM on August 18, 2010 [1 favorite]
The design of the brain is in the genome. The human genome has three billion base pairs or six billion bits, which is about 800 million bytes before compression, he says. [...] About half of that is the brain, which comes down to 25 million bytes, or a million lines of code.
The bytecode may be a million lines but the VM to run it is a real bitch. Suggest continued use of extant biological/mechanical brain reproduction measures.
posted by fleacircus at 1:33 AM on August 18, 2010 [27 favorites]
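Kurzweil's figure can be reproduced as back-of-the-envelope arithmetic. A rough Python sketch follows; the compression ratio and bytes-per-line constants are assumptions chosen so the output lands near his quoted numbers, not figures from the article.

```python
# Back-of-envelope reproduction of Kurzweil's arithmetic. The compression
# ratio and bytes-per-line figures below are assumptions, chosen so the
# numbers land near the ones quoted above.

base_pairs = 3e9                 # human genome, approximate
bits = base_pairs * 2            # 2 bits per base (A/C/G/T)
raw_bytes = bits / 8             # ~750 MB, "about 800 million bytes"

compressed_bytes = raw_bytes / 15   # assumed ~15:1 compression of repetitive DNA
brain_bytes = compressed_bytes / 2  # "about half of that is the brain"
lines_of_code = brain_bytes / 25    # assumed ~25 bytes per line of code

print(f"raw genome:      {raw_bytes / 1e6:.0f} MB")
print(f"compressed:      {compressed_bytes / 1e6:.0f} MB")
print(f"brain share:     {brain_bytes / 1e6:.0f} MB")
print(f"'lines of code': {lines_of_code / 1e6:.1f} million")
```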
I can't read anything about Ray Kurzweil without thinking of this.
posted by the duck by the oboe at 1:37 AM on August 18, 2010 [6 favorites]
I love Kurzweil's argument. Take the entire human genome and, I don't know, pipe it through "gzip" - voila! Out come a million lines of code. I'm sure the resulting code will be nicely commented with a modular architecture and good use of design patterns. Evolution has millions of years of industry experience, and isn't going to spit out shitty spaghetti code like that eight-year-old Perl script you've been stuck maintaining.
Now all we need is a compiler to translate compressed base pair sequences into x86 machine instructions. That should be easy though - all we need is to emulate all the gene regulatory networks of the human cell. There's probably a nice open-source library that does this already - let me google it. If I run into any problems I'll just post a question on stackoverflow.
I'll get back to you with a ballpark estimate by end-of-day - then you can go over it in your morning conference call with the client.
posted by problemspace at 1:40 AM on August 18, 2010 [43 favorites]
From the second link: "Ray Kurzweil must be able to spin out a good line of bafflegab..."
PZ Myers writes as if he's a character in a Philip K Dick novel.
posted by zippy at 2:07 AM on August 18, 2010 [9 favorites]
7. You don't get to upload your penis.
posted by Meatbomb at 2:08 AM on August 18, 2010 [11 favorites]
That's what your mom said.
posted by zippy at 2:09 AM on August 18, 2010 [4 favorites]
You know, I went for the quick snappy comment and pressed preview, and then realized that: a) the comment was pretty dumb and also b) I had pressed post comment. I blame my slow and buggy VM.
posted by zippy at 2:16 AM on August 18, 2010 [3 favorites]
Not our problem, upload yourself to some faster hardware before commenting again.
posted by Dr Dracator at 2:21 AM on August 18, 2010 [8 favorites]
When you get so fed up with reality that you decide to just give up and implement a perfect copy of it, I think it's probably time to go back a few steps and try to rediscover exactly what you were trying to achieve in the first place.
posted by public at 2:21 AM on August 18, 2010 [7 favorites]
The bytecode may be a million lines but the VM to run it is a real bitch.
Yeah, this would appear to be Kurzweil's fundamental error. Programs are made up of operations that are defined in the processor, and the more complex the processor, the simpler your code can be. It's sort of like SSE in Intel processors; by using that special hardware support, programs can compactly describe complex matrix math operations they want done, rather than having to do the work themselves with simpler instructions. Without SSE, most game code, for example, would be a lot bigger and a lot slower.
Cells are probably the most complex processors we know. The best human efforts are many orders of magnitude simpler. The available functions that can be accessed by DNA are, quite literally, mind-boggling.
DNA is a program, but it's running on God's computer.
posted by Malor at 2:48 AM on August 18, 2010 [15 favorites]
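Malor's point, that a richer underlying machine makes the program shorter, can be illustrated with a vectorized library call versus an explicit loop. The sketch below uses NumPy's matrix product as a stand-in for SIMD support such as SSE; it is illustrative only.

```python
# The richer the underlying machine, the shorter the program. NumPy's
# vectorized matrix multiply (which dispatches to SIMD-capable BLAS code)
# stands in for "hardware support"; the nested loops spell out the same
# computation in terms of simpler operations.
import numpy as np

a = np.random.rand(64, 64)
b = np.random.rand(64, 64)

# "Complex processor": one line, because the operation exists below us.
c_fast = a @ b

# "Simple processor": the same computation written out element by element.
c_slow = np.zeros((64, 64))
for i in range(64):
    for j in range(64):
        for k in range(64):
            c_slow[i, j] += a[i, k] * b[k, j]

assert np.allclose(c_fast, c_slow)
```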
(and no, I'm not a creationist, I'm just trying to express how complex that problem is.)
posted by Malor at 2:49 AM on August 18, 2010
Mad professor: Behold my creation! My brain uploaded to a gigantic computer!
Mad Electronic professor: Come here. (grabs professor using ill-considered robotic arm. Detaches head.) Hmm... now how do I run Quake on this?
posted by vanar sena at 2:56 AM on August 18, 2010
I guess I'm curious as to why we need to simulate proteins at all; what's wrong with idealized neurons firing off idealized action potentials? The hard part would be building the axonal and dendritic connections between them, but that's just a math problem. I mean, so far (as far as I know) it seems that cognition is based on the actions of neurons, and not on glial support cells or anything else, as such. So, in that sense, simulating the human brain seems like it ought to be possible within a reasonable amount of time. Kurzweil's gobblety-gook about DNA and lines of code notwithstanding, I don't see why a sufficiently powerful computer couldn't build a model of an idealized human brain in the next twenty years.
Please feel free to correct my (possibly glaring) misconceptions; I'm only about a quarter of the way through the 1000-page neuroscience textbook I bought a while ago.
posted by cthuljew at 2:58 AM on August 18, 2010
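For reference, cthuljew's "idealized neurons firing off idealized action potentials" is roughly what a leaky integrate-and-fire model does. A minimal sketch, with arbitrary illustrative constants:

```python
# A minimal leaky integrate-and-fire neuron: about as "idealized" as a
# spiking unit gets. All constants are arbitrary illustrative values.
def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0, resistance=10.0):
    """Return spike times (ms) for a constant-parameter LIF neuron."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Membrane potential decays toward rest and is pushed up by input.
        dv = (-(v - v_rest) + resistance * i_in) / tau
        v += dv * dt
        if v >= v_thresh:          # threshold crossed: emit a spike, reset
            spikes.append(step * dt)
            v = v_reset
    return spikes

# 500 ms of constant input current (arbitrary units).
print(simulate_lif([2.0] * 500))
```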
Kurzweil is making what seems to me a reasonable back-of-the-envelope estimate of the informational complexity of the brain vs a program, saying that if the brain can be encoded in n bits (and here he's using DNA as the upper bound of the number of bits, if I've skimmed correctly, and I may not have), then a program that similarly requires n bits would be of sufficient complexity to contain the same information.
The compressed size of an object is one accepted way of measuring its complexity (see: Kolmogorov complexity, Minimum description length.)
If my hands wave any faster, they're going to break the sound barrier.
posted by zippy at 3:27 AM on August 18, 2010 [2 favorites]
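The compression-as-complexity idea zippy cites is easy to demonstrate in miniature: a highly regular string compresses far more than a random one of the same length. zlib is only a crude practical proxy here; true Kolmogorov complexity is uncomputable.

```python
# Compressed size as a (crude) stand-in for Kolmogorov complexity: a highly
# regular string squeezes down much further than a random one of equal length.
import os
import zlib

regular = b"ACGT" * 25_000            # 100 kB of pure repetition
random_ = os.urandom(100_000)         # 100 kB of incompressible noise

for name, data in [("regular", regular), ("random", random_)]:
    compressed = zlib.compress(data, level=9)
    print(f"{name:8s}: {len(data)} -> {len(compressed)} bytes")
```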
Kurzweil: good at inventing things, terrible at everything else. Especially terrible at understanding which things it is that he doesn't understand.
posted by Pope Guilty at 3:31 AM on August 18, 2010 [2 favorites]
Cthuljew: In my rather limited experience, it's becoming more and more clear that the spiking action potential network aspect of the brain, though complicated in itself, is also under significant active modulation by various other factors. Glial cells have an active role in controlling synaptic properties, and there are other interesting microcircuits known where neurons can use chemical cues to control the type of input they receive. I would be unsurprised if this sort of arrangement is somewhat common and difficult to discover. Even without getting into synaptic plasticity, the network structure of the brain is far from passive.
One of the several big things that arguments like Kurzweil's fail to take into account is that DNA is a set of instructions for self-organization (i.e. development) with many expectations about the developmental environment. Various aspects of neuronal connectivity, especially, are well known to be strongly dependent on firing activity, which is itself a response to stimulus. Without taking into account the myriad complex stimuli that DNA "expects," the rules alone aren't enough to give you a sensible picture, nor a proper view of the information content of the developed neural organization.
posted by Schismatic at 3:51 AM on August 18, 2010 [11 favorites]
The problem with Kurzweil/Myers is the same one we've had for 100000000 years (give or take a zero). You're trying to define consciousness so you can make an .iso out of somebody's but... who knew? People cannot agree on what that is.
I would consider people like Kurzweil at a severe disadvantage because he seems like the type of brain that aims its crosshairs at something and concentrates on it. Most brains are like, "I'm gonna invent something today to benefit mankind," and somewhere between the bedroom and the notepad the directive becomes, "Is there a cereal somewhere that combines chocolate, vanilla, and strawberry flavors?"
The human brain is far too complex, there are too many nuances, and its processing is fluid and unpredictable.
Oh, and
8. You can't right-click save boobies.
posted by Bathtub Bobsled at 3:54 AM on August 18, 2010
Why do we even need to achieve immortality like this anyway? Is it meant to be some sort of cognitive efficiency thing where we get to skip the first 20-30 unproductive years of life or is it just to satisfy our enormous egos? I really don't get it.
If it's just for ego stroking then I think it would probably be a lot easier to just deploy a series of carefully trained clones who all believe they are the same person. If it's to improve human productivity somehow I hate to break it to you but we really aren't all that great at stuff individually. You'd do much better fixing all the other shit first.
posted by public at 4:03 AM on August 18, 2010
zippy: saying that if the brain can be encoded in n bits (and here he's using DNA as the upper bound of the number of bits, if I've skimmed correctly, and I may not have), then a program that similarly requires n bits would be of sufficient complexity to contain the same information.
I guess I didn't express myself very well up above. DNA is only part of the answer. It represents the additional complexity to make humans out of cells, but the other part of that answer is the insane complexity of the cells themselves.
Being able to write a program for a given computer in N bits does not mean you can represent the same algorithm in the same number of bits on a simpler machine. The larger the complexity difference, and the more advantage the program took of the advanced functions on the complex computer, the more bits will be required to duplicate the same program on the simple one.
Our computers are very, very tiny fractions of the complexity of a cell, so it seems reasonable to presume that it will take a truly vast bit multiplier to translate DNA to x86-circa-2030.
posted by Malor at 4:21 AM on August 18, 2010 [1 favorite]
Ever notice that futurists always think that the breakthrough advance will happen in their lifetime?
20 years ago, when he was 40, Kurzweil was predicting that it would take 30 years to make the breakthrough. Now, 10 years later, he's predicting just 10 more years, despite the growth curve of processing power slowing down dramatically.
I'm not saying that uploaded trans-human pseudo-consciousness isn't possible - it may be. I am saying that we're a hell of a long way from being there, though.
posted by chrisamiller at 4:23 AM on August 18, 2010 [1 favorite]
I don't see why a sufficiently powerful computer couldn't build a model of an idealized human brain in the next twenty years.
This guy doesn't either. And previously.
posted by fleacircus at 4:27 AM on August 18, 2010 [1 favorite]
You know, I went for the quick snappy comment and pressed preview, and then realized that: a) the comment was pretty dumb and also b) I had pressed post comment. I blame my slow and buggy VM.
posted by zippy at 2:16 AM on August 18 [1 favorite +] [!]
1 user marked this as a favorite:
Your Time Machine Sucks August 18, 2010 2:53 AM
Eponyfavoristerical.
posted by Anything at 4:29 AM on August 18, 2010 [5 favorites]
Kurzweil is off by at least seven orders of magnitude in his estimate of the information content of the human brain:
10^7 transcribed bases in the genome vs. 10^15 synapses in a human brain.
posted by dongolier at 4:39 AM on August 18, 2010 [4 favorites]
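Spelled out as arithmetic (with the caveat that treating each synapse as a single unit of information is itself an assumption):

```python
# dongolier's comparison, spelled out. Whether a synapse is "worth" one bit,
# one byte, or more is an assumption; even a generous reading of the
# genome-side number leaves a gap of many orders of magnitude.
import math

transcribed_bases = 1e7    # order-of-magnitude figure from the comment
synapses = 1e15            # order-of-magnitude figure from the comment

gap = math.log10(synapses / transcribed_bases)
print(f"gap: about {gap:.0f} orders of magnitude")   # -> about 8
```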
chrisamiller: despite the growth curve of processing power slowing down dramatically.
Er what? We are still running on an ~18 month doubling rate on transistor count just like Moore's Law suggested over 40 years ago.
If you count it in MIPS per core then we've got a bit slower sure but that doesn't seem hugely significant to me yet. The top super-computers are actually doing more than ever and IBM are already in the planning stages of an >1 ExaFLOP machine.
posted by public at 5:00 AM on August 18, 2010 [1 favorite]
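public's "~18 month doubling rate" turned into a quick projection; the circa-2010 starting point of roughly a billion transistors for a high-end CPU is an illustrative assumption.

```python
# An ~18-month doubling rate projected forward. The 2010 starting count is
# an assumed round number for illustration.
def projected_transistors(start_count, years, doubling_months=18):
    doublings = years * 12 / doubling_months
    return start_count * 2 ** doublings

for year in (2010, 2020, 2030):
    count = projected_transistors(1e9, year - 2010)
    print(f"{year}: ~{count:.1e} transistors")
```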
The available functions that can be accessed by DNA are, quite literally, mind-boggling
Plus, the context is a bitch. Function x may do exactly what you want in this context; in any other context, it may generate problems that lead to systemic instability.
Trivial example: Both autism and schizophrenia can be conceptualized as dopamine disorders, yet they're so radically different that treating one doesn't tell you much (beyond common sense) about how to treat the other.
posted by lodurr at 5:18 AM on August 18, 2010 [1 favorite]
Kurzweil is off by at least seven orders of magnitude in his estimate of the information content of the human brain
Explain to me in the brain where the processing is happening and where and how the information is being stored. If you can't, then estimating the "information content" is just absurd overreach
Kurzweil, Myers and many of the people in this thread are all speaking with far too much confidence about how many lines of code or beefy a computer would be needed to "emulate" a human brain. Might memristors or new probability processors be an AI game-changer? Might a couple well-architected intermediary programming languages make the "brain" source code quite compact and readable? Are the "algorithms" the brain uses for processing and information storage anything like anything we already know about? We have no fucking clue
Look, I agree Kurzweil has become a bit of a crank with his endless wild-eyed, self-promoting predictions. But we actually are, in directions from lithography to DNA synthesis, rapidly approaching the day when we're literally engineering reality down at its most fundamental levels. It's very difficult to believe that evolution rolled enough genetic dice to sculpt an organ so complex and beautiful we can never hope to replicate its functions in a different physical medium. That is after all what we're talking about - a few pounds of meat, a few pounds of molecules, a few pounds of bouncy atoms - inside our heads. Unless you still believe there's magic in there
posted by crayz at 5:19 AM on August 18, 2010 [4 favorites]
Our computers are very, very tiny fractions of the complexity of a cell, so it seems reasonable to presume that it will take a truly vast bit multiplier to translate DNA to x86-circa-2030.
It's extremely unlikely that all the complexity within the brain is actually useful for the purpose of...whatever it is we do
Or, we didn't need to grow feathers to make airplanes
posted by crayz at 5:24 AM on August 18, 2010 [1 favorite]
If only we could lock up Ray Kurzweil, Stephen Wolfram and Kevin Warwick together in some far-away place, and have them collaborate on, say, a way to massively decrease the net amount of entropy in the universe, we could a) effectively make them all go away, and b) power half the Western world on the flow of bullshit & hot air coming from their meeting!
Win!
posted by kcds at 5:26 AM on August 18, 2010 [5 favorites]
Schismatic: One of the several big things that arguments like Kurzweil's fail to take into account is that DNA is a set of instructions for self-organization (i.e. development) with many expectations about the developmental environment.
Oh, man, I have been saying this for about 25 years whenever this kind of discussion happens. I even wrote a long seminar paper on exactly this point.* People just acted like I was either spoiling their fun or not using the right philosophical jargon.
--
*Got an A, but I think that was mostly just respect for the fact that I'd gone through the motions with thoroughness.
posted by lodurr at 5:26 AM on August 18, 2010
The system I work in is a small (~30 cell) system that rhythmically controls the stomach muscles of decapods (e.g. crabs and lobsters). We've been studying it for decades and still can't simulate it. We can't even really simulate one cell. To give just a tiny idea why, if you put tetrodotoxin on the neurons, that shuts down all the sodium channels, the cells stop spiking, and the system goes quiet. Now if you add the appropriate neuromodulator to the prep (e.g. oxotremorine) the rhythm will start up again, only without any spikes. Spikes aren't the only way information travels in a neural network ---good luck simulating that kind of response with simple neurons.
Individual neurons and individual synapses are very complicated, and the details absolutely matter for the behavior of the network. This is because the neuromodulators the cells are bathed in change based on the animal's state, completely altering the activity of the network. This is another thing futurists miss about neural networks: you can't just simulate the brain, you have to simulate the whole damned animal. The brain and the animal are part of one huge interacting system.
I have little doubt that one day people will make brain-inspired AI, but it won't be a simulation of the brain. That's just too complicated and inefficient to run on CPU type hardware.
posted by Humanzee at 5:29 AM on August 18, 2010 [34 favorites]
Man, not much love for Kurzweil here, is there?
I hate to join the pile-on, but I just wanted to point out that we don't even understand how general anesthetics make the brain stop working (some theories can be found here), much less understand how the brain works in the first place. We know a lot, but are ignorant of even more.
posted by TedW at 5:30 AM on August 18, 2010
I do have truck with this comment:
"The genome is not the program; it's the data."
One consistent cycle in computer science is that there is no difference between program and data, and the easier it is to treat data as code and code as data the easier it is to produce adaptive systems.
posted by plinth at 5:33 AM on August 18, 2010 [2 favorites]
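plinth's point that program and data blur can be shown in a few lines: here a "program" is an ordinary data structure until something interprets it. The operation names are invented for illustration.

```python
# "Program" and "data" blur: a tiny program is just a data structure
# (a list of named steps) until something interprets it.
OPERATIONS = {
    "double": lambda x: x * 2,
    "add_one": lambda x: x + 1,
    "square": lambda x: x * x,
}

# This list is plain data -- it could be loaded from a file, mutated,
# or generated by another program -- yet it fully describes a computation.
program = ["double", "add_one", "square"]

def run(program, value):
    for op_name in program:
        value = OPERATIONS[op_name](value)
    return value

print(run(program, 3))   # ((3*2)+1)**2 = 49
```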
Ray is one of the best examples of the reality of the old saying, "it's a fine line between genius and insanity". Between this silliness, his obsessive supplement mega-dosing and Ramona, you just gotta wonder...
posted by dbiedny at 5:33 AM on August 18, 2010
crayz: Unless you still believe there's magic in there
As the man* once said: Quantity [complexity] has a quality all its own.
You seem to me to be doing essentially the same thing as Kurzweil, just with less definite numbers: You're assuming both that there's a technical solution to this problem (how to create a functional isomorph of the human brain), and that it's something we even ought to bother trying to do.
The latter has always been the big question for me: Why are we trying to do this? There are two kinds of answers I usually hear, and they boil down to: a) because learning how to recreate minds will teach us about what minds are, and b) because we want to cheat death. The second answer has always seemed to me to be the more honest one.
When you consider the impact of small imbalances in one neurotransmitter (e.g., dopamine) on the whole brain, it becomes amazing to me that the damn thing continues to work at all.
I have no issues with the concept of machine intelligence. But AFAIAC, trying to create functional isomorphs of minds, human or otherwise, with the assumption that it will be a genuine 1:1 functional isomorph, is just a ludicrous and misguided waste of time.
--
*I hear this attrib'd to Lenin & Stalin with equal frequency.
posted by lodurr at 5:36 AM on August 18, 2010
Ray obviously has been talking to some of those clever Wall Street traders so I can understand his enthusiasm. I mean, hell, we've already got magical facial recognition software that can tell you when you're a liar. It's really just a hop, step and a jump to cloning a brain. I don't understand why people are so mouth-frothing about this.
posted by Civil_Disobedient at 5:44 AM on August 18, 2010
I even wrote a long seminar paper on exactly this point.*
Link for those interested in reading this?
posted by AdamCSnider at 5:53 AM on August 18, 2010
But we actually are, in directions from lithography to DNA synthesis, rapidly approaching the day when we're literally engineering reality down at its most fundamental levels.
When I first heard the term "virtual reality", from Lanier himself, my response was, "you're grossly underestimating the resolution and bandwidth of the physical, real world". I suspect we're pretty far from getting down to understanding, much less emulating, the fundamental levels of reality. The brain is the single most complex system known to our species, and we've only really been able to gain any handle on it in the last century or so. We've come far in emulating man-made circuits - the modeled MiniMoog plugins out there are pretty darned cool - but we're not even far enough along to figure out the weather more than a few days in advance. The sum total of reality? You need to know it intimately before you model it, and based on what I've seen of human behavior, we're not even close, IMO.
posted by dbiedny at 6:03 AM on August 18, 2010
Why do we even need to achieve immortality like this anyway?
The most commonly-given reason is that it is a necessary prerequisite to visiting other solar systems (given that the whole speed-of-light thing makes interstellar travel impractical for those with merely human lifespans).
posted by Ritchie at 6:05 AM on August 18, 2010
I have no issues with the concept of machine intelligence. But AFAIAC, trying to create functional isomorphs of minds, human or otherwise, with the assumption that it will be a genuine 1:1 functional isomorph, is just a ludicrous and misguided waste of time.
Machine intelligence the way we do it does not appear to be anything like the sentience we see in humans and other animals. I've seen nothing to indicate we've simulated/created anything beyond an insect level of cognition, if that. I think it's pretty likely if we do start to understand consciousness and build conscious machines, the ability to "port" animal minds into artificial brains will be possible
If you consider neuroplasticity or simply the way we etch a very homogenous culture into a great variety of brains, it seems clear that reality as we perceive it exists at a level quite a lot higher than neurons and spike trains
posted by crayz at 6:09 AM on August 18, 2010
PZ Myers: Ray Kurzweil does not understand the brain.
Because a blogger who teaches undergrad only and doesn't do any research would definitely know more about this than Ray Kurzweil! Obviously Kurzweil is kind of out there, but he's been researching this stuff for decades. He's not claiming to 'understand the brain', since the whole point is that no one does. His claim is that by 2030 people will have it figured out. I think it depends on how well Moore's law holds out.
PZ Myers might be good at blogging, but is he actually much of a scientist? How much of an impact do his actual scientific papers have, or does he (as an undergrad professor at a university who doesn't even work with grad students) even have any?
posted by delmoi at 6:11 AM on August 18, 2010
PZ Myers might be good at blogging, but is he actually much of a scientist?
Yes.
posted by grubi at 6:18 AM on August 18, 2010 [1 favorite]
Rejoice! Soon we will have the technology to allow baby boomers to live for ever!
posted by Artw at 6:18 AM on August 18, 2010
Ritchie: The most commonly-given reason is that it is a necessary prerequisite to visiting other solar systems (given that the whole speed-of-light thing makes interstellar travel impractical for those with merely human lifespans).
Ah so it's really a big ego-tripping thing then. Clearly the solution to that problem (assuming some sort of cryogenic system is not workable) is to just make the ships self sustaining mobile colonies in their own right.
posted by public at 6:25 AM on August 18, 2010 [1 favorite]
A lot of this hyper-optimistic futurism seems to be predicated on Moore's Law, but doesn't really stop to consider trends in the use of this constantly-doubling processing power. The reality of the situation seems to be that, outside a few specific areas (computer graphics, chemistry, cryptography), the only singularity we're really headed towards is one where a Google search for cheesecake recipes takes zero time.
posted by le morte de bea arthur at 6:27 AM on August 18, 2010 [2 favorites]
he's actually just another Deepak Chopra for the computer science cognoscenti.
Such a great diss.
posted by Artw at 6:27 AM on August 18, 2010 [6 favorites]
I guess I'm curious as to why we need to simulate proteins at all; what's wrong with idealized neurons firing off idealized action potentials?
Because the brain is not composed of idealized neurons. Real neurons may change their connections, over time, based on what's going on inside them. They divide and create new brain cells. Memory is stored in some manner (we have no idea how) and so on.
I would consider people like Kurzweil at a severe disadvantage because he seems like the type of brain that aims its crosshairs at something and concentrates on it.
They'll be at a disadvantage until they actually succeed, at which point it will be obvious to most people they were correct.
The Turing test, I think, would be pretty convincing. When you can talk to a computer the same way you talk to a person, and they have the same responses -- not just in text but also face to face with generated faces, emotional responses, and everything I think it will convince most people. Obviously not everyone, but I think most people.
The problem now is that while people right now can imagine such a thing the fact that it doesn't exist means it's easy to say it's impossible, or that the computers aren't "really" conscious, or whatever. It's kind of an annoying debate to have, because it's about something entirely hypothetical.
On the other hand, once those things become real, the discussion will be much more grounded.
I think the problem is that people imbue "consciousness" with some kind of magical, almost religious quality. Obviously, if you're religious and you believe that only god is capable of creating souls, the idea of "computer consciousness" is absurd.
But if you're not religious, then what's the reason for assuming that evolution is capable of creating conscious beings, but we are not? There may be some technological barrier that evolution crossed but that we can't. But the existence of that barrier, I think, needs some evidence, not the other way around.
But anyway, the exclusivity of biological consciousness is really a religious debate, at this point. Trying to argue about it is a waste of time. But people who don't believe that computers can appear 'conscious' by any reasonable measure don't really have any impact on whether or not it's ever developed.
---
Also, PZ Myers is, like I said, good at blogging but not particularly distinguished for his actual scientific work.
posted by delmoi at 6:29 AM on August 18, 2010 [3 favorites]
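delmoi's claim that simple, short programs can produce extremely complex, emergent results is the classic cellular-automaton observation. A minimal sketch of elementary Rule 110, whose one-line update rule is nonetheless Turing-complete (a toy illustration, not a claim about brains):

```python
# A short program with complex, hard-to-predict behaviour: elementary
# cellular automaton Rule 110. Each cell's next state is read off one bit
# of the rule number, indexed by its three-cell neighborhood.
def step(cells, rule=110):
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 79 + [1]          # a single live cell on the right edge
for _ in range(40):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```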
Why are we trying to do this? [...] a) because learning how to recreate minds will teach us about what minds are, and b) because we want to cheat death.
I'd argue that understanding the mind is the more rational motive for brain simulation lodurr; one of the best reasons to model a system is to get a firm grip on the questions that need to be answered to understand that system. We build a model based on our current understanding of the system, find how that model's predictions differ from our observations, come up with a hypothesis to explain the difference, and test that hypothesis observationally and via simulation. This process seems as valid in neuroscience as in any other science.
Personally, singulatarian focus on uploading irritates me; even supposing that is possible, it seems misguided to use computational resources as a virtual machine for running people. By (crude) analogy, I'm running Windows software on a Macbook via Parallels and it just doesn't seem to use the resources and interface as well as software intended for that platform. Biological plausibility -- the idea of designing algorithms that mimic what might be going on in natural organisms -- does give a (coarse) roadmap for designing intelligent software, but with my AI hat on I prefer to focus on designing software tailored to the capabilities of available computers rather than trying to match biological behavior to solve any particular problem.
posted by agent at 6:32 AM on August 18, 2010 [1 favorite]
The brain is a series of microtubules.
posted by symbioid at 6:35 AM on August 18, 2010 [3 favorites]
Why do we even need to achieve immortality like this anyway?
We could also achieve immortality by preserving and protecting our existing meat systems with future medical technology. Or do you just object to immortality altogether because of some quasi-religious belief in the rightness of death?
posted by crayz at 6:36 AM on August 18, 2010 [1 favorite]
Or, we didn't need to grow feathers to make airplanes
Yes, but airplanes and feathers result in radically different forms of flight. You can almost always distinguish a bird from an airplane.
posted by KirkJobSluder at 6:37 AM on August 18, 2010 [1 favorite]
Clearly the solution to that problem (assuming some sort of cryogenic system is not workable) is to just make the ships self sustaining mobile colonies in their own right.
It would probably be easier to develop human-level AI than to make an indefinitely self-sustainable manned ship capable of interstellar travel. One way to look at it is in terms of the energy needed to accelerate the ship. A probe loaded with an AI compared to the (at least) few hundred people needed for long term genetic diversity and all of the food and industrial production facilities necessary to sustain them and the ship for hundreds or thousands of years? You could send hundreds of probes for the same energy budget.
Remember too that sending people assumes that there's a hospitable place to stop at the end of the trip (otherwise you're signing your descendants up for a suicide mission, which is all kinds of unethical). There are a lot of systems we might like to visit that have no hospitable planets, at least that we know of. Alpha Centauri, for example, would be a good first choice.
posted by jedicus at 6:39 AM on August 18, 2010
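jedicus's energy-budget argument, as rough classical kinetic-energy arithmetic; the masses and the 5%-of-c cruise speed are assumptions, and propellant, deceleration, and relativistic effects are ignored.

```python
# Classical kinetic energy needed to accelerate a payload to cruise speed.
# Masses and cruise speed are assumed illustrative values only.
C = 299_792_458.0              # speed of light, m/s
cruise_speed = 0.05 * C

probe_mass = 10_000.0          # 10-tonne AI probe (assumed)
colony_mass = 10_000_000.0     # 10,000-tonne generation ship (assumed)

def kinetic_energy(mass_kg, v):
    return 0.5 * mass_kg * v ** 2

e_probe = kinetic_energy(probe_mass, cruise_speed)
e_colony = kinetic_energy(colony_mass, cruise_speed)
print(f"probe:  {e_probe:.2e} J")
print(f"colony: {e_colony:.2e} J  ({e_colony / e_probe:.0f}x the probe)")
```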
I want to download my consciousness to a robot because I can't bear the idea of kids going on my lawn after I'm dead.
posted by digsrus at 6:39 AM on August 18, 2010 [4 favorites]
PZ Myers is, like I said, good at blogging but not particularly distinguished for his actual scientific work.
I won't even get into who's more 'distinguished' than the other, but if you compare Kurzweil's vague handwaving and wild predictions with Myers' analytical look at the actual numbers and some pertinent examples of what the problem involves, it shouldn't be too hard to pick a winner.
posted by echo target at 6:40 AM on August 18, 2010 [4 favorites]
Personally, singulatarian focus on uploading irritates me; even supposing that is possible, it seems misguided to use computational resources as a virtual machine for running people.
So if grandma could die or take over your circa 2038 iPhone, you'd tell her to kick off? Or you think evolution designed the most efficient possible platform on which to run human consciousness?
posted by crayz at 6:41 AM on August 18, 2010
delmoi: I don't doubt that we'll have very complex and possibly conscious mechanical intelligence by the end of the next century. But biological cognition is embodied. You don't just think with your brain, you think with your liver, stomach, muscles, and skin as well.
Just because it's conceivable to create a machine intelligence to model all this doesn't mean it's an ideal way for engineering to move forward towards AI, similar to the way that feathers was not the way to engineer effective flying machines at the start of the last century.
posted by KirkJobSluder at 6:42 AM on August 18, 2010 [1 favorite]
We should ask Wolfram Alpha which of them is the more distinguished scientist!
posted by Artw at 6:45 AM on August 18, 2010 [1 favorite]
My theory is that Kurzweil doesn't necessarily believe his predictions are accurate, but has reasoned that announcing such things is the best way he can help to make them happen.
posted by malevolent at 6:47 AM on August 18, 2010
PZ Myers might be good at blogging, but is he actually much of a scientist?
Yes.
How so? I mean, looking at Google Scholar I only see about 10 papers from the late 1980s and early 90s, with one paper from 1998 with him on the byline.
Anyway, reading his article, it doesn't sound like he understands how software works. Simple, short programs can produce extremely complex results, and the fact that there are 'cell/cell' interactions wouldn't prevent software from setting things in motion. You can write software that 'creates' things that then interact with each other to create emergent phenomena. PZ seems to think this is impossible.
We cannot derive the brain from the protein sequences underlying it; the sequences are insufficient, as well, because the nature of their expression is dependent on the environment and the history of a few hundred billion cells, each plugging along interdependently.
The problem with this statement is that there is no reason the "environment and history of a few hundred billion cells each plugging along interdependently" is something that can't be done in software. It certainly can't be done in software today. Will it be possible in 20 years? Maybe.
Got that? You can't understand RHEB until you understand how it interacts with three other proteins, and how it fits into a complex regulatory pathway. Is that trivially deducible from the structure of the protein? No.
Right. Well, the argument is that we will get all of those interactions figured out in the next 20 years. Maybe that's overly optimistic. But it's not impossible that we'll get it figured out eventually.
To simplify it so a computer science guy can get it, Kurzweil has everything completely wrong. The genome is not the program; it's the data. The program is the ontogeny of the organism, which is an emergent property of interactions between the regulatory components of the genome and the environment, which uses that data to build species-specific properties of the organism.
The problem with this statement is that programs are data. I think he's confusing "program" with "processor" here.
I've noticed an odd thing. Criticizing Ray Kurzweil brings out swarms of defenders, very few of whom demonstrate much ability to engage in critical thinking.
PZ thinks everyone who disagrees with him is being irrational, and that everything he thinks is correct because obviously he's hyper-rational and knows everything about every field. Guy's an idiot.
posted by delmoi at 6:50 AM on August 18, 2010 [1 favorite]
Myers' analytical look at the actual numbers and some pertinent examples of what the problem involves, it shouldn't be too hard to pick a winner.
There's nothing really analytical about Myers blog post. He throws out some examples and says "See how hard it would be!" He has a point about protein folding. He doesn't do any analysis on how powerful the tools are to break it.
It's like doing an "analytical" look at the possibility of travel to the moon just by noting how far away it is, without bothering to look at how big we can make our rockets.
posted by delmoi at 6:54 AM on August 18, 2010
PZ Myers may not be a particularly distinguished scientist, but he is nevertheless a scientist. On the other side, Terry Sejnowski is an extraordinarily distinguished scientist, if that matters. That said, PZ's expertise is in development, which is why he understands at a gut level that there is a huge gap between our current understanding of the brain, and what would be required to build one in a computer. The brain is an intricate part of a machine that builds itself. If you want to replicate that process in silico, you are stuck simulating all the protein-protein interactions, the chemical and physical environment, all the sensory information (otherwise you get kid-raised-in-the-basement syndrome), etc. If you want to skip the development process and build an adult brain, you have to throw out the "simple" description and characterize all the relevant details of an adult brain. We don't know what all those are yet, but we know that there are a huge number of them, and it goes far beyond just numbers and locations of neurons and synapses. It isn't just a matter of Moore's Law.
posted by Humanzee at 6:54 AM on August 18, 2010 [4 favorites]
"Asking whether a computer can think is like asking whether a submarine can swim."
(I forget the source for this, but it is apropos)
posted by bashos_frog at 6:58 AM on August 18, 2010 [1 favorite]
delmoi, the protein-protein thing is amazingly complicated, which is why Myers doesn't go into details. Proteins form massive multi-protein complexes with different elements binding and unbinding to change the function of the whole. And they're all affected by things like temperature, ionic concentrations, chaperones, etc. Even figuring out how a small handful of proteins interact is an extraordinary challenge; trying to get them all is a combinatoric nightmare. I suppose it isn't physically impossible, but there's no reason to believe it will happen in 25 years. There's no way of estimating how long it will take; there are too many breakthroughs needed to get there.
posted by Humanzee at 7:00 AM on August 18, 2010 [1 favorite]
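To put a rough number on that combinatoric nightmare, here is a back-of-envelope sketch in Python. The ~20,000 protein count and the size-5 cutoff are illustrative assumptions, not figures from the thread:

    from math import comb

    proteins = 20_000  # ballpark count of human protein-coding genes (assumed, illustrative)

    pairs = comb(proteins, 2)  # candidate pairwise interactions: ~2.0e8
    # Multi-protein complexes are far worse; even capping complex size at 5:
    complexes = sum(comb(proteins, k) for k in range(2, 6))  # ~2.7e19

    print(f"{pairs:.2e} possible pairs, {complexes:.2e} complexes of size 2-5")
    # And this ignores conformations, modifications, concentrations, and context,
    # which is Humanzee's actual point.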
Aaanyway, there is one major problem with Kurzweil's argument. He's making a point about the Kolmogorov Complexity of the brain. And he's probably right, that the program could be written.
The problem is we don't know how long that program will take to run. It doesn't matter whether or not a program can be written; the question is whether or not the program will run in a reasonable amount of time. If you write a brain simulator that takes 10 years to simulate 1 nanosecond, that's not really that useful. At all.
But PZ seems to be arguing that the program can't be written at all. And it's certainly possible that it won't be written in the next 20 years. I would be pretty surprised personally, mainly because I think Moore's law is going to run into brick walls before then, unfortunately. If I thought Moore's law was going to continue, then I would be more likely to think it was possible.
But PZ doesn't seem to understand much about theoretical computer science. And on top of that he's ranting about how everyone who disagrees with him is an idiot, and bla bla bla. He doesn't know what he's talking about.
posted by delmoi at 7:03 AM on August 18, 2010
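For concreteness, the "10 years per nanosecond" hypothetical works out like this (the numbers are just the hypothetical above, not a benchmark of any real simulator):

    SECONDS_PER_YEAR = 3.154e7

    wall_clock_s = 10 * SECONDS_PER_YEAR  # 10 years of computing...
    simulated_s = 1e-9                    # ...to cover 1 nanosecond of brain time

    slowdown = wall_clock_s / simulated_s                   # ~3.2e17 times slower than real time
    years_per_brain_second = slowdown / SECONDS_PER_YEAR    # ~1e10 years per simulated second

    print(f"slowdown: {slowdown:.1e}x, about {years_per_brain_second:.1e} years per simulated second")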
Humanzee: The system I work in is a small (~30 cell) system that rhythmically controls the stomach muscles of decapods (e.g. crabs and lobsters). We've been studying it for decades and still can't simulate it. We can't even really simulate one cell ...
Can you give us some idea of how the pace of change in your understanding of that system has evolved? From my naive perspective, it seems like tools like DNA sequencing and engineering, protein folding simulation, microscopes that can look inside living cells, etc, would mean that progress over the last three decades isn't very representative of progress over the next three decades. Just like astronomy moved a little faster in the decades after the invention of the telescope. No?
posted by jhc at 7:03 AM on August 18, 2010 [1 favorite]
Fortunately for Kurzweil, it'll be pretty easy to write a quick Markov chain program to simulate him by periodically expelling random bullshit through a TCP/IP gateway.
posted by Eideteker at 7:07 AM on August 18, 2010 [2 favorites]
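For anyone who hasn't met the gag before, a word-level Markov chain generator really is this small; the toy corpus below is an arbitrary stand-in:

    import random
    from collections import defaultdict

    def build_chain(text):
        """Map each word to the words observed to follow it."""
        words = text.split()
        chain = defaultdict(list)
        for a, b in zip(words, words[1:]):
            chain[a].append(b)
        return chain

    def babble(chain, start, length=15):
        """Random-walk the chain to produce superficially plausible text."""
        out = [start]
        for _ in range(length - 1):
            followers = chain.get(out[-1])
            if not followers:
                break
            out.append(random.choice(followers))
        return " ".join(out)

    corpus = "the singularity is near and the future is exponential and the mind is software"
    print(babble(build_chain(corpus), "the"))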
"Asking whether a computer can think is like asking whether a submarine can swim."
What's this thing doing? (And does it count as a 'submarine'?)
posted by delmoi at 7:08 AM on August 18, 2010
Simple, short programs can produce extremely complex results
This is very true, and it's a fascinating field, but it doesn't really get us closer to emulating a human brain. If you need to produce a particular complex result, writing a simple program to do it is actually much harder than writing a complex program to do it. If we ever are able to make a computer brain, I suspect it'll be done with a lot more than a million lines of code.
posted by echo target at 7:11 AM on August 18, 2010
Anyway, reading his article, it doesn't sound like he understands how software works. Simple, short programs can produce extremely complex results
it's a simple matter to write a fractal generator that will create something that looks like a tree, something else entirely to create a *specific* tree. Or a dog. Or a person.
posted by Artw at 7:12 AM on August 18, 2010 [1 favorite]
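A minimal version of the generic fractal tree Artw mentions; every constant here is an arbitrary choice, which is exactly the point that a generic tree shape is cheap while a specific tree is not:

    from math import cos, sin, radians

    def branch(x, y, angle_deg, length, depth, segments):
        """Recursively collect (x1, y1, x2, y2) line segments for a fractal tree."""
        if depth == 0:
            return
        x2 = x + length * cos(radians(angle_deg))
        y2 = y + length * sin(radians(angle_deg))
        segments.append((x, y, x2, y2))
        # Two shorter, splayed child branches; tweak the angles or ratio and you
        # still get something tree-ish, just never one particular real tree.
        branch(x2, y2, angle_deg - 25, length * 0.7, depth - 1, segments)
        branch(x2, y2, angle_deg + 25, length * 0.7, depth - 1, segments)

    segments = []
    branch(0.0, 0.0, 90, 100, depth=8, segments=segments)
    print(len(segments), "segments from a dozen lines of rules")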
Even assuming away the very difficult and potentially intractable problem of actually constructing a computer capable of running the human brain's software, isn't there a philosophical problem with describing this as "uploading our consciousness" to the computer? I mean, aren't we at best copying the software? I don't see any mechanism by which actual consciousness is transferred. When you die, YOU are still dead, whether you've created a simulacra of your personality running on a computer or not, yes?
posted by monju_bosatsu at 7:14 AM on August 18, 2010 [1 favorite]
For all of the talk about Moore's Law and what kind of hardware this could run on, my fear is that the greatest stumbling block may be that humanity is simply not bright enough to figure it out.
posted by adipocere at 7:18 AM on August 18, 2010
When you die, YOU are still dead, whether you've created a simulacra of your personality running on a computer or not, yes?
Well, that would depend on what counts as 'you'. Which is of course an unsolved philosophical issue. Watch out for dualism, though!
posted by echo target at 7:24 AM on August 18, 2010
delmoi: I really think that Myers is right on the money here. We barely have enough understanding of what's going on in biological organisms to reverse engineer a single cell, much less a multicellular organism.
Note that Myers didn't say that complex machine cognition was impossible. His position is that you can't simulate human development by throwing an asston of computing power at the human genome or proteome. What you need are experimental studies that explore exactly what those proteins do in context. It's quite possible that brute-forcing the problem in the way advocated by Kurzweil is realistically impossible in the same way that breaking certain types of encryption is realistically impossible, demanding a perfect Turing machine the size of a small planet or a time span beyond the heat death of the universe to go through all permutations.
Right. Well, the argument is that we will get all of those interactions figured out. in the next 20 years. Maybe that's overly optimistic. But it's not impossible that we'll get it figured out eventually.
Sure, but not by brute-force computation as advocated by Kurzweil. And since developmental biologists and neuroscientists have been using computers since the 80s to work on these problems, they're in a reasonably informed position to assess the current state of bioinformatics.
The problem with this statement is that programs are data. I think he's confusing "program" with "processor" here.
I don't think that really matters. The point is that Kurzweil can't build such a program without a theoretical understanding of what that program does, and developmental biology isn't even close to being able to model complex organisms from the genome. On top of that, you run into the issue that in order to model your brain at age 30, you'd need not just a genome, but 30 years of extremely comprehensive environmental data.
PZ thinks everyone who disagrees with him is being irrational...
Which is the sort of statement that proves PZ right, because you are being irrational and failing to demonstrate an ability to engage in critical thinking.
posted by KirkJobSluder at 7:24 AM on August 18, 2010 [4 favorites]
Can you give us some idea of how the pace of change in your understanding of that system has evolved?
Things are picking up speed, but the truth of the matter is there's a long way to go. There is no experimental system where all the tools of neuroscience can be used, and most of them are destructive (i.e. destroy the organism being studied) and thus preclude multiple measurements on the same preparation. In our system (the stomatogastric ganglion or STG) we can't use many genetic techniques because we can't raise crabs in captivity (and their life cycle is too long anyway). But we can do some great intracellular recordings, and we can attribute a function to each cell because they're nearly all motor neurons, and we know which muscles they control. We're still really working at learning what all the molecular parts are, where they are, and how they work together. In particular, how ion channels are swapped in and out to allow neurons and the system as a whole to adapt to changing environments. If we can understand that process (homeostasis) we will take a large step towards understanding how neural networks work. But this is critical ---they don't just compute behavior, they also modify themselves at a sub-cellular level to adapt to their circumstances. And note, this is in a system that doesn't learn. Learning is in some sense the opposite of homeostasis, but homeostasis is present in all biological systems.
My own work over the past year and a half has been developing a technique to model individual isolated neurons in an unchanging neuromodulatory state. Everyone I talk to tells me I'm being too ambitious (they're probably right).
delmoi, you don't know what you're talking about. We literally can't simulate protein-protein interactions in bulk now. We can't write the equations. Unless you're talking about simulating every atom in the brain as it grows. But that's just stupid. We'll never do that.
posted by Humanzee at 7:27 AM on August 18, 2010 [14 favorites]
it's a simple matter to write a fractal generator that will create something that looks like a tree, something else entirely to create a *specific* tree. Or a dog. Or a person.
It is indeed a different thing, but what makes you think it's impossible? Doing exactly that is the basis for Fractal Image Compression, which mainly isn't used because of patent encumbrance.
But Kurzweil's point is that we already have the software: it's the genome. We just need to write an 'emulator' for the hardware. The problem is whether or not we can figure out a way to make the emulator run fast enough.
If you need to produce a particular complex result, writing a simple program to do it is actually much harder than writing a complex program to do it.
A brain simulation obviously won't be written with normal source code; it would be a simulation that operates on an enormous amount of data, most of it computer generated. The actual human-written source code wouldn't be very long. Just enough to manage loading data and implementing whatever rules the simulation runs on.
But I think the discussion of fractals or whatever kind of misses the point. The idea isn't to come up with a 'simple program' from scratch that creates a brain. Instead, it's trying to take an existing program that starts operating in a single cell, and emulate it until a brain is created.
The question is A) whether or not we'll ever have hardware that can do that and B) whether or not we can speed up the simulation by taking shortcuts. Those questions can't really be answered, and like I said I think 2030 is kind of optimistic. But PZ's response isn't well thought out and doesn't seem to engage with the computer side of the equation at all. Compounding that, he basically calls everyone who disagrees with his (incorrect) view an idiot.
posted by delmoi at 7:27 AM on August 18, 2010
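A sketch of the "short driver over enormous data" shape delmoi describes above. Everything here is a placeholder: the hard part, the update rule, is precisely what nobody currently knows how to write.

    import numpy as np

    def load_initial_state():
        # Stand-in only: in the scenario being discussed, this would be an enormous,
        # mostly machine-generated description of cellular/molecular state.
        return np.random.rand(1_000)

    def step(state, dt):
        # Placeholder dynamics. The entire scientific problem lives in this function:
        # what rules turn "state now" into "state a moment later"?
        return state

    state = load_initial_state()
    t, dt = 0.0, 1e-3
    while t < 1.0:
        state = step(state, dt)
        t += dt
    print("driver finished; the rules it ran were, of course, vacuous")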
Which is the sort of statement that proves PZ right, because you are being irrational and failing to demonstrate an ability to engage in critical thinking.
NUH UH, YOUR A POOPY HEAD!
posted by delmoi at 7:28 AM on August 18, 2010 [1 favorite]
Sure, but not by brute-force computation as advocated by Kurzweil. And since developmental biologists and neuroscientists have been using computers since the 80s to work on these problems, they're in a reasonably informed position to assess the current state of bioinformatics.
Yes, and capabilities have increased massively in that time. Compare the effort required to sequence a single gene in the 1980s to the fact that we can sequence an entire genome of an individual automatically for a few thousand dollars today. Protein folding and other problems are being worked on pretty intensively right now, and who knows what advances will be made in the next few decades. I'm not saying Kurzweil is right, and I don't think he is.
But PZ Myers response is just stupid, ill informed and illogical. Which isn't surprising, because that's how he rolls.
posted by delmoi at 7:33 AM on August 18, 2010
In terms of "consciousness preservation," which is an interesting branch of AI, I think we'll see stepwise replacement, in which portions of your meatbrain are replaced by some form of processor until no meatbrain is left, much earlier than we would see consciousness copying to a new platform.
Non-destructive simultaneous (let's say under 150 milliseconds) reads of every single neuron and synapse and glial cell and who knows what else in the brain and putting that onto a brand new platform sounds pretty tough to me.
posted by adipocere at 7:34 AM on August 18, 2010
Humanzee: That said, PZ's expertise is in development, which is why he understands at a gut level that there is a huge gap between our current understanding of the brain, and what would be required to build one in a computer.
It's not just a gut level. Biologists have been working with computer databases and models since the 80s. Most biologists would be happy to have automated tools that could spare some of the time, expense, and mess of physical experiments.
And let's not get into the problem that making the jump from understanding animal models to human models is likely to involve some really sketchy ethical issues. For that matter, even the animal models are ethically problematic and many people are squeamish about them unless there's a clear medical need involved.
posted by KirkJobSluder at 7:35 AM on August 18, 2010
Ray Kurzweil understands the brain well enough to know that "experts," making specific predictions, based on "science," promising a better life for everyone in the near future, sell more books than credentialed specialists in the field admitting that we don't know nothin' about nothin'.
posted by The Winsome Parker Lewis at 7:43 AM on August 18, 2010 [1 favorite]
Good god, why are we even treating this as a debate?
I may not agree with Myers' all-American branded atheism, which forgets that this same complex brain invents stuff like religion (and that's OK), but he recognizes scientism when he sees it.
I don't know who this Kurzweil is, but he seems to have a tenuous grasp on both the nature of code and the nature of this thing we call intelligence.
posted by clvrmnky at 7:48 AM on August 18, 2010 [1 favorite]
cthuljew: The problem with your scenario, though, is that at some point you are still shutting off your brain and turning on the computer simulation. This is just like the problem in The Prestige, which is part of what really gives that film its psychological edge. Namely, the "new" you, i.e., the you in the computer, will think that the transition has really occurred, because the "new" you has all the memories of the "old" you. But the "old" you is really still sitting in that chair. Both copies think they are authentic, and given the option, of course the "new" you will exercise the option to switch the "old" you off, because the "new" you thinks it is the "authentic" you.
posted by monju_bosatsu at 7:48 AM on August 18, 2010 [3 favorites]
So yeah, no big news that Ray is a charlatan. I got him yelling at me in a talk he gave in 2000 once just for asking him if he was being honest with his readers when he failed to mention the assumptions and caveats to his thinking.
So my meh here with Ray and all the singularity people is twofold:
1.) C'mon, folks -- you're all pretty smart and engaged people. Shouldn't you be spending your energy doing something more productive than this?
and
2.) You are underestimating the real. You are assuming that EVERYTHING that is held as conventional knowledge about EVERYTHING that there is is right. That the world is as we think it is. No one in the history of anything that has made that kind of assumption has been proven anything but wrong. Ever. And you won't be the first because you're just not doing the work. You assume that the brain's mechanics are explicable by current or next-20-years physics, you assume that building a brain will result in cognition, you assume that machine cognition would resemble human cognition in any way, you assume that intelligence itself... smartness... is a mechanical thing and not a developmental thing. You assume that machine cognition would be compatible enough with human models of culture and learning so that we could teach a thinking machine our language... etc. It is LUDICROUS. You're looking at a scenario that is about as probable as Dr. Who being real.
posted by n9 at 7:49 AM on August 18, 2010 [6 favorites]
It doesn't matter whether or not a program can be written the question is whether or not the program will run in a reasonable amount of time.
There's something else, too. Malor and others mentioned it above, but I think it's worth reiterating:
Kurzweil thinks he is making a point about the Kolmogorov complexity of the brain. What he is in fact talking about is the complexity of a process for generating an infant brain in vitro. The genome just describes a bunch of proteins, which are manufactured in the cells and then have to interact with each other in all sorts of complex ways to actually make a brain. The genome is so remarkably small because a lot of the complexity is coming from real-life physical and chemical interactions between the proteins which the genome describes.
We can't run all that beautifully simple code in a computer until we are capable of perfectly simulating the behavior of all of these proteins, and how they interact with each other. As it stands, we're having a damned hard time even figuring out what a single protein looks like based on its corresponding genetic code. And that's just the first step! Putting them all together and watching them make a brain is going to be many, many times harder.
In short: In order to simulate a brain, using the human genome, we first have to simulate the entire universe.
(Or, rephrased: The K-complexity of a string is dependent on the description language relative to which you are calculating it. Imagine, for instance, a fancy futuristic machine that makes brains. It's incredibly simple to operate; all you have to do is give it a string containing a decimal number specifying how many brains you want and it churns 'em out. So I give it the string "9." And out come nine brand new brains!
So it looks like the Kolmogorov complexity of the brain is 1/9th of a byte! Who would have guessed, eh?)
posted by magnificent frigatebird at 7:49 AM on August 18, 2010 [5 favorites]
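The "description language" point can be made concrete with ordinary compression: how short a description is depends on what the decompressor already knows. A small sketch with an arbitrary sample string; the preset dictionary plays the role of the hypothetical brain-making machine.

    import zlib

    message = b"a brain is built by proteins interacting with proteins in a cell"
    shared = message  # pretend the decompressor already contains this knowledge

    plain = zlib.compress(message, 9)  # compressor that knows nothing in advance

    co = zlib.compressobj(9, zlib.DEFLATED, zlib.MAX_WBITS,
                          zlib.DEF_MEM_LEVEL, zlib.Z_DEFAULT_STRATEGY, shared)
    with_dict = co.compress(message) + co.flush()  # compressor with a preset dictionary

    # The dictionary-aware stream is tiny, but decoding it requires the same
    # dictionary: the "machine" carries the complexity, not the message.
    print(len(message), len(plain), len(with_dict))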
ack, the string "9". not the string "9." you know what i mean anyway, right?
posted by magnificent frigatebird at 7:52 AM on August 18, 2010
I don't know who this Kurzweil is
He did a lot of pioneering work in the field of OCR in the 70s/80s. Then he moved on to doing this kind of blather full time.
posted by Artw at 7:54 AM on August 18, 2010 [1 favorite]
The actual human written source code wouldn't be very long. Just enough to manage loading data and implementing whatever rules the simulation runs on.
The point here is that the rules that the simulation runs on would have to be fantastically complex. They would have to contain many many orders of magnitude more information than the genome.
Here's an example: if I walk into a bar and select A12 from a jukebox, it plays "Let it Be." Does the sequence "A12" contain all the information expressed by the song? Very few people would say it does, because it only works in a very specific context - I have to be in a particular bar with a particular jukebox, the jukebox needs to be in good working order, etc.
Similarly, the genome triggers lots of extremely complex phenomena in a living organism, but does it really contain all that information in itself? We've got the genome, but that was the easy part.
posted by echo target at 7:57 AM on August 18, 2010 [9 favorites]
delmoi: But Kurzwile's point is that we already have the software, it's the genome. We just need to write an 'emulator' for the hardware. The problem is whether or not we can figure out a way to make the emulator run fast enough.
Yes, but Kurzweil is wrong. The genome isn't a complete version of the software. And it's highly unlikely that you're going to be able to fill in the missing pieces without a very long and tedious process of experimental studies with living organisms rather than, as Kurzweil assumes, just running grep over the entire thing.
But I think the discussion of fractals or whatever kind of misses the point. The idea isn't to come up with a 'simple program' from scratch that creates a brain. Instead, it's trying to take an existing program that starts operating in a single cell, and emulate it until a brain is created.
Hint, the genome is not a complete program that explains everything in developmental biology. Until you can understand that, you have no business talking about the problem.
delmoi: NUH UH, YOUR A POOPY HEAD!
Certainly, and your participation here is illogical, ignorant, and utterly lacking in critical thinking. Which makes PZ entirely correct here.
Yes, and capabilities have increased massively in that time. Compare the effort required to sequence a single gene in the 1980s to the fact that we can sequence an entire genome of an individual automatically for a few thousand dollars today. Protein folding and other problems are being worked on pretty intensively right now, and who knows what advances will be made in the next few decades. I'm not saying Kurzweil is right, and I don't think he is.
Yes, and so have advances in bioinformatics. Myers, as a reasonably informed person about the current state of bioinformatics, is perfectly competent to say that the ability to fully simulate organism development isn't on the horizon.
But here, you're being illogical because:
1) You do say that Kurzweil is right by supporting his ignorant, and quasi-religious belief that the genome is a program that just needs an emulator.
2) You're arguing that a developmental biologist isn't qualified to talk about the current state of developmental biology, or the kinds of methods that are needed to advance developmental biology.
3) Most of your argument is little more than an ad hominem against Myers, and relies considerably on obviously false statements about Myers' claims.
posted by KirkJobSluder at 7:58 AM on August 18, 2010 [1 favorite]
He also designed the VAST synthesis model used in the K1000 and K2000 synthesizers, which was awesome. Now he wastes nerds' time and money professionally.
I've talked to smart Singularity-people over drinks. And I liked them. But it was weird. ALL of their talking points were about the end of some long process that will never start. They hadn't thought through the fundamentals of anything. It's like a new kind of trekkie. They think that the fictional future that they are going on about is real. They've gone past the bit about SF fans where they know it is made up. They've gone into true believer mode, which is a little spooky and messianic.
The way that they dismiss any question about how this will all come about and just say that it will is a kind of religious faith, and I think that it is caused by the silly fast-and-looseness that scientists have been promoting lately... there is all hubris and no humility in the sciences anymore. Everything that we think is true is not actually true, and yet anyone who thinks differently is stupid. This kind of dogmatic garbage is going to make the cults of the future, and this is one of them.
posted by n9 at 8:00 AM on August 18, 2010 [4 favorites]
And to everyone that is talking simulation. Please understand something.
To run that simulation would require us to have a comprehensive understanding of the physics in play. To make the 'reality emulator' we'd have to understand EVERYTHING. And we don't. It's not a closed system, the real world.
So even if that was "all it took" we couldn't do it because we don't know how to build an accurate and comprehensive model of the atomic world.
Thinking beyond that point is silly. There is no there there.
posted by n9 at 8:04 AM on August 18, 2010
Probably the beauty of computational models comes not when they perfectly match a given theory, but when they show that the theories behind the model are flawed to some degree. Dark energy and dark matter come immediately to mind as an example of, "hey perhaps we don't understand as much as we think we do."
posted by KirkJobSluder at 8:05 AM on August 18, 2010
monju_bosatsu - I'd say that the idea of a persistent "self" existing within our ever-changing brains is an illusion of the same sort anyway
posted by crayz at 8:05 AM on August 18, 2010
I don't know who this Kurzweil is, but he seems to have a tenuous grasp on both the nature of code and the nature of this thing we call intelligence.
Well, you could read the Wikipedia article, but I guess that would just be too much work. You can say that he doesn't understand intelligence if you want, but to argue that he doesn't understand code is pretty idiotic, for sure.
Kurzweil thinks he is making a point about the Kolmogorov complexity of the brain. What he is in fact talking about is the complexity of a process for generating an infant brain in vitro.
Right. Which is why I was talking about the 'emulator' for the 'processor'. Writing an emulator won't be easy. And we don't exactly know how much external stimulus is needed to start with. That would all need to be included in the 'program'.
In short: In order to simulate a brain, using the human genome, we first have to simulate the entire universe.
Only if by 'universe' you mean 'single cell' (at least to start). Do you really think something on Mars is going to affect an embryo? Really? Now who's getting into 'woo' territory?
Anyway, what PZ Myers was saying was that Kurzweil's argument was illogical. But Myers doesn't understand enough about computer science to make that determination. And in fact there's nothing illogical about what he's saying. The question is whether or not it's practical. And, like I said, I don't think it will be in 20 years, but it's not something I think would be impossible if Moore's law held (which I don't think it will).
posted by delmoi at 8:06 AM on August 18, 2010
It seems transparently obvious that you'd need a complete universe simulator to make the DNA code produce a brain. If we had a complete universe simulator, I'd imagine that simulating intelligence would be trivial.
posted by empath at 8:16 AM on August 18, 2010
delmoi: Right. Which is why I was talking about the 'emulator' for the 'processor'. Writing an emulator won't be easy. And we don't exactly know how much external stimulus is needed to start with. That would all need to be included in the 'program'.
And that's something that can't be determined from the genome, which is central to Kurzweil's argument.
delmoi: Anyway, what PZ myers was saying was that Kurzweil's argument was Illogical. But Myers doesn't understand enough about computer science to make that determination.
He doesn't need to be a computer scientist though, because Kurzweil's argument is about developmental biology. From the linked post: "The design of the brain is in the genome. The human genome has three billion base pairs or six billion bits..." (emphasis added).
It doesn't matter if Kurzweil's numbers crunch properly if he's building his analysis on a high-school misconception about developmental biology. Since Kurzweil's starting premise is wrong, the rest of the argument is illogical.
posted by KirkJobSluder at 8:18 AM on August 18, 2010 [1 favorite]
I can't wait till we're all brains in computer-simulated jars. All the fun of a non-stop metafilter argument, none of the drudgery of going outside to play.
posted by Eideteker at 8:23 AM on August 18, 2010 [2 favorites]
crayz: Or do you just object to immortality altogether because of some quasi-religious belief in the rightness of death?
You mean like severe overpopulation leading to new heights of human suffering ultimately culminating in massive failures of infrastructure to provide hospitable environments for society and life and thus resulting in even more death? Because other than that, "nobody ever has to die" sure does sound great.
posted by atbash at 8:23 AM on August 18, 2010
Kurzweil's argument is illogical because it's based on a misconception about a basic theory of developmental biology.
It's impractical because it's based on a misconception about a basic theory of developmental biology.
And it's likely impossible because it's based on a misconception about a basic theory of developmental biology.
It's like saying you can model the solar system if you had enough computing power to add all the epicycles.
posted by KirkJobSluder at 8:27 AM on August 18, 2010
My wife and I used the human genome to engineer a human brain plus the case to go with it.
posted by incessant at 8:27 AM on August 18, 2010 [14 favorites]
But here, you're being illogical because:
No, I said I thought Kurzweil was probably wrong, several times in this thread (first in this comment). Apparently you can't read, which doesn't give me a lot of faith in your logic.
1) You do say that Kurzweil is right by supporting his ignorant, and quasi-religious belief that the genome is a program that just needs an emulator.
2) You're arguing that a developmental biologist isn't qualified to talk about the current state of developmental biology, or the kinds of methods that are needed to advance developmental biology.
No, I said he wasn't qualified to talk about the state of developmental biology in 20 years. Which is obvious. I do think it's something that might happen in the next 100-200 years but we'll have to see.
It's also not clear to me why a guy who hasn't published papers since the early 90s (with one in 98) and spends his time teaching undergrads is actually qualified to discuss the current state of developmental biology. Maybe he is, and maybe he's not. But having a popular blog doesn't actually make you an expert. And anyway, even if he is familiar with cutting edge research today, he's not going to be able to predict what things will be like in 20 years.
Finally, you claim I'm being 'illogical' but you apparently can't be bothered to construct an actual logical argument (with propositions and induction and all that). You just say the belief that the genome is a program that needs an emulator is 'ignorant and quasi-religious', but you don't actually bother to explain why that is. (In other words, your statement begs the question of whether or not the belief is ignorant and quasi-religious.)
Now you may or may not be right, but 'illogical' is not a synonym for 'something I disagree with'. Which seems to be how you're using it.
(And by the way, there are lots of perfectly logical arguments that are non-congruent to the real world because of incorrect premises. PZ Myers' argument is that Kurzweil's statements were illogical, but there is nothing illogical about what Kurzweil was saying, given the correct premises. Myers didn't understand that. If he disagreed with the premises, he could have said that instead. Logic is a fairly complex field and it's not really that helpful to argue about what is and isn't logical, and instead look at whether or not the premises are supported by evidence.)
posted by delmoi at 8:27 AM on August 18, 2010
Are proponents of simulating the brain in silicon suggesting we can do so without simulating the complex activity of every individual cell at the atomic level? It's not like each brain cell is a simple OR/AND/XOR logic gate emitting a steady stream of binary signals like a computer. There is every indication that each cell is a program in and of itself, of sufficient complexity that the larger issue of their interaction isn't even approachable until that is solved. At which point, I ask why bother? The biological brain is pretty well suited for running this program, and we have an easy way of generating new instances (albeit with an annoying knack for being unpredictable and predisposed to being on my lawn) with a handy function called "reproduction".
The whole thing seems akin to running Windows in Boot Camp on OS X in VMware running in Wine on a Linux box, except exponentially more complicated and pointless.
posted by cj_ at 8:29 AM on August 18, 2010 [1 favorite]
Fractal Image Compression, which mainly isn't used because of patent encumbrance.
Not to derail too much, but the main reason fractal compression isn't used is that JPEG is superior for the vast majority of use cases, both in terms of quality at a given compression ratio and encoding speed. The encoding speed issue was especially important early on and remains relevant in the video compression context. Fractal compression is marginally better at extremely high compression ratios, but at that point both methods look terrible--fractal compression just looks slightly less terrible.
The patents themselves aren't really the issue. JPEG (the format), for example, is covered by lots of patents, but JPEG (the standards group) arranged for licensing from the patent holders. If anything it was the licensing scheme that Iterated Systems pursued that was the problem.
You mean like severe overpopulation leading to new heights of human suffering
That's not necessarily a given. First off, if immortality is achieved by moving consciousness to machines then overpopulation isn't really a problem anymore, especially because it then becomes practical to live somewhere other than Earth. In the more likely case that immortality (or at least extreme longevity) is achieved through medicine, then people still wouldn't live forever. I seem to recall a rough estimate that non-disease causes of death (e.g., accidents, murder, etc) would lead to a life expectancy of 200-300. That's a long time but very different than forever. Finally, so far increased life expectancy has been strongly correlated with a reduction in family size. It seems reasonably likely therefore that if people started living to 200 that they would have even fewer children than they do already.
posted by jedicus at 8:31 AM on August 18, 2010
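For what it's worth, a constant-hazard back-of-envelope shows where an estimate in that 200-300 year range can come from. The annual risk figures below are made-up round numbers, not cited statistics.

    # With aging-related disease removed, suppose the remaining annual risk of death
    # (accidents, violence, etc.) is a constant p. Under a constant hazard, the
    # expected remaining lifespan is 1/p.
    for p in (0.005, 0.004, 0.003):  # 0.5%, 0.4%, 0.3% per year, purely illustrative
        print(f"annual risk {p:.1%}  ->  expected lifespan ~{1 / p:.0f} years")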
Unless there is some magic going on down in the brain, there is no reason why we couldn't simulate one some day.
As someone very interested in singularity topics I feel like I gotta say the following: Kurzweil is batshit. Can we please start ignoring him now?
posted by cirrostratus at 8:34 AM on August 18, 2010
It's also not clear to me why a guy who hasn't published papers since the early 90s (with one in 98) and spends his time teaching undergrads is actually qualified to discuss the current state of developmental biology. Maybe he is, and maybe he's not. But having a popular blog doesn't actually make you an expert. And anyway, even if he is familiar with cutting edge research today, he's not going to be able to predict what things will be like in 20 years.
By the same logic Kurzweil isn't qualified to make predictions about what computer science will be like in 10-20 years, even if he is familiar with cutting edge research today, which is dubious, since his published research (what little there is) dates further back than Myers'.
posted by jedicus at 8:37 AM on August 18, 2010
Kurzweil's argument is illogical because it's based on a misconception about a basic theory of developmental biology.Whether or not an argument it's based on a misconception has no baring on whether or not the argument is logical. Whether or not an argument is logical and whether or not it's correct in the real world are two separate things.
And anyway, you seem to be raging about a pretty minor point, frankly. If I look at a computer program and say, "All the information about what this program does is in the source code," most people would take that as being a correct statement. In reality, you need to understand the compiler, and the CPU it's going to run on and all that. But it's a fairly accurate statement that most people would understand.
If you expand Kurzweil's argument from "The brain is encoded in the genome" to "The brain is encoded in the genome, plus the chemical structure of human cells, starting with the egg" the statement would be more precise. But no one can make every statement perfectly precise, and it's reasonable to assume that that was what he meant.
You can agree or disagree about whether or not people will be able to emulate the chemical structure of a cell in 20 years. Or 50 years or ever. But to argue that it's somehow a 'logical' impossibility is absurd.
It's like saying you can model the solar system if you had enough computing power to add all the epicycles.
Uh, it's pretty easy to model the solar system. You could probably model it computationally as a system of geocentric epicycles pretty easily, if you wanted to. In fact, all you would need would be eight epicycles centered around the sun.
posted by delmoi at 8:38 AM on August 18, 2010
By the same logic Kurzweil isn't qualified to make predictions about what computer science will be like in 10-20 years, even if he is familiar with cutting edge research today, which is dubious, since his published research (what little there is) dates further back than Myers'.
Which is a fair point. I'm not trying to say that Kurzweil is right here, especially about the timing. But I think if you extend the time-frame by an order of magnitude, it's a more reasonable prediction, and that PZ Myers criticism is kind of ridiculous.
posted by delmoi at 8:40 AM on August 18, 2010
No, I said I thought Kurzweil was probably wrong, several times in this thread (first in this comment). Apparently you can't read, which doesn't give me a lot of faith in your logic.
You're saying that Kurzweil is wrong about the timeline but right about the theory. Meyers is pointing out that the theory behind the timeline is false. And since he's a developmental biologist, he's in a better position to examine that theory than Kurzweil.
No, I said he wasn't qualified to talk about the state of developmental biology in 20 years. Which is obvious. I do think it's something that might happen in the next 100-200 years but we'll have to see.
How is a developmental biologist unqualified to talk about the state of developmental biology in 20 years? I'd say that he's likely in a damn good position to extrapolate from current developments to what is likely to happen in the next decade or so.
Finally, you claim I'm being 'illogical' but you apparently can't be bothered to construct an actual logical argument (with propositions and induction and all that).
I pointed out the three obvious fallacies in your argument, which are 1) constructing an argument based on an empirically false premise, 2) attacking the expertise of a person in his own field while insisting on the expertise of a person in an unrelated field, and 3) engaging in an ad hominem attack.
PZ Myers argument is that Kurzweil's statements were illogical, but there is nothing illogical about what Kurzweil was saying, given the correct premises.
Yes, the problem is that Kurzweil's premise is ludicrous, making his conclusions suspect. And Myers did an effective job of pointing out that Kurzweil can't make any conclusion about the feasibility of simulating a brain from the size of the genome.
posted by KirkJobSluder at 8:42 AM on August 18, 2010 [2 favorites]
I'll believe this when I get my flying car.
posted by thsmchnekllsfascists at 8:49 AM on August 18, 2010
delmoi, I have no idea why you're digging in on this.
The reason people explicitly discuss the genome not containing all information is because we in fact do have the genome now, but have come to understand that it only provided a moderate advance in our understanding of life processes. The situation with computer programs is not analogous, since there the CPU and OS exist and are understood before the program is written. They are also designed to be largely transparent to the operation of the program, whereas the cell state and laws of chemistry are intrinsic to the development of an organism. It's not reasonable to assume Kurzweil understood what he was talking about, because he used genes to estimate the difficulty of the problem (incorrectly), and literally no one is trying to simulate neural networks by starting with genes. It's a non-starter as a technique.
PZ is right that Kurzweil was being illogical. Only very basic, publicly available knowledge is necessary to understand why the genome is not the way to approach simulating neural networks. Those premises were available to Kurzweil and he ignored them. PZ didn't.
posted by Humanzee at 8:55 AM on August 18, 2010 [1 favorite]
You're saying that Kurzweil is wrong about the timeline but right about the theory. Meyers is pointing out that the theory behind the timeline is false. And since he's a developmental biologist, he's in a better position to examine that theory than Kurzweil.
He says it's false, but he doesn't really give a solid reason why he thinks it's false.
How is a developmental biologist unqualified to talk about the state of developmental biology in 20 years? I'd say that he's likely in a damn good position to extrapolate from current developments to what is likely to happen in the next decade or so.
Because people can't predict the future? Seems obvious enough. Meyers isn't saying Kurzweil's prediction is unlikely, he's saying it's logically impossible, and (trying) to make an argument based on fundamental principles without bothering to do the math. He just points to a few proteins and notes how they all interact with each other. But he's not actually sitting down and computing how many interactions there are, how much computing power they'll actually take, and more importantly, whether or not the interaction matrix can be simplified or optimized.
I pointed out the three obvious fallacies in your argument, which are 1) constructing an argument based on an empirically false premise
Yes, and I have explained how that is not a logical fallacy. Whether or not the logic is sound is independent of whether or not the premises are true.
The problem is that no statements about the real world can ever be fully true (there are always going to be exceptions) and thus no argument about the real world is ever 'logically' true. But that doesn't mean they are illogical given their premises.
This is a fairly complicated topic and kind of irrelevant to who is a bigger idiot: Kurzweil or Myers. But it's kind of frustrating to debate whether or not something is logical with someone who doesn't understand how logic works. And as a result you're getting hung up on what's actually a fairly minor point: whether the brain is "encoded in DNA" or "encoded in DNA + the molecular machinery to transcribe it into proteins, and so on." Those are two very similar statements, and saying one instead of the other isn't a very big deal.
posted by delmoi at 9:07 AM on August 18, 2010 [1 favorite]
While I see the point that the informational complexity of the DNA alone may not be sufficient to encode a working human brain, I think that saying that you'd need a universe simulator in order to make an intelligence equal to humans suffers from the assumption that, since we don't understand how a brain is made, it must be dependent on everything in the universe!
Imagine if computers were nearly black box technology, and we received the ROM for the Apple ][. Hey, it encodes everything, but since we don't know how the Apple ][ is made or works, we're going to have to simulate it at the atomic level, and to do that, we'll need to understand ... the entire universe.
I think that on the ST * continuum of how much brain development depends on environment, where the dial goes from "DNA alone is sufficient" to "we must understand the entire universe and model it faithfully," it is unlikely that the actual setting is either zero or eleven.
* The Spın̈al Tap Model of Human Cognition and Computational Complexity. Tufnel, Nigel, et al.
posted by zippy at 9:12 AM on August 18, 2010 [4 favorites]
delmoi: I just read carefully over PZ Meyers article again, and I have to add a strawman to your list of sins here. Because Meyers doesn't call Kurzweil's claim "illogical" or "impossible." So let's try arguing with what Meyers actually DOES say:
If you expand Kurzweil's argument from "The brain is encoded in the genome" to "The brain is encoded in the genome, plus the chemical structure of human cells, starting with the egg" the statement would be more precise. But no one can make every statement perfectly precise, and it's reasonable to assume that that was what he meant.
The problem here is that he keeps repeating the same mistake over, and over, and over again. His whole chain of argument is that we can fit the human genome into a computer, crunch it, run it as a program, and simulate the results into a working brain. And that's not even wrong. It wasn't even wrong when I left biology in '92 when it became obvious that having a complete E. coli or HIV genome was insufficient for understanding how those organisms function in vivo. It's not Meyers who's stuck in the 80s, it appears to be Kurzweil.
Uh, it's pretty easy to model the solar system. You could probably model it computationally as a system of geocentric epicycles pretty easily, if you wanted to. In fact, all you would need would be eight epicycles centered around the sun.
Actually, it's not, because the N-body problem is computationally intractable over large time frames except for certain specific exceptions which our solar system approximates. Epicycles also don't work well with things on highly elliptical orbits like dwarf planets and comets, nor can they easily explain resonant orbits, tidal locking, and trojan asteroids, all of which you can do with gravity.
posted by KirkJobSluder at 9:16 AM on August 18, 2010 [8 favorites]
If you are complaining that I've claimed it will be impossible to build a computer with all the capabilities of the human brain, or that I'm arguing for dualism, look again. The brain is a computer of sorts, and I'm in the camp that says there is no problem in principle with replicating it artificially.
So, not illogical, not impossible, but much more difficult than Kurzweil's ludicrous claims based on a basic misunderstanding of developmental biology.
What I am saying is this:
Reverse engineering the human brain has complexities that are hugely underestimated by Kurzweil, because he demonstrates little understanding of how the brain works.
I do understand computer science. And Myers is quite right.
Consider "getting to the moon". We accomplished that pretty fast, right?
The trouble is that humans had a pretty good idea of how to get to the moon for hundreds of years before it actually happened. I believe some ancient Chinese was the first person to suggest using a rocket to get there, long before Verne, but serious scientists in the late 19th century were discussing how it'd be accomplished.
But we have no idea of how to create a mind - none at all.
Let's suppose I magically gave you a computer that was 100 times as powerful as any existing installation in the world - say, a computer with 100 times the FLOPS and RAM of Google's largest data center. Or, heck, 10,000 times...
How would you use this to make an artificial intelligence? No one has the faintest idea.
Perhaps, just perhaps, if we built a lot of virtual neurons, and then somehow fed them information from the world, something would happen. But what? And how do we feed them useful information? And how do we feedback within the system to make sure it goes somewhere useful?
No one has the slightest idea of how to deal with these questions. There isn't even a useful, testable theory on how any of these things might work.
We have things like Eliza and Deep Blue - but the more work we do on these toys, the more we see that these programs cannot possibly be extended to make actual general purpose intelligences.
Is it impossible? I have no idea. Could there be some breakthrough in 20 years? Absolutely possible. I'd be surprised if there weren't SOME breakthrough in AI in the next 20 years (but what was the breakthrough in the last 20 years?) But will we have a machine that passes the Turing test? Unlikely. Uploading consciousness? Very unlikely.
Craig Silverstein was and probably still is fond of saying that true artificial intelligence is 150 years away. I argued a bit with him at the time - I might be closer to believing 50 years - but he's probably close to the truth.
I have to say that K lost a lot of respect from me because of his "automata" obsession. He acts as if he invented the idea, when in fact people have been studying them since the 50s. There are tons of negative results in the field that show that doing specific, useful, computations with this class of machines must inherently be very slow. The thought that particle physics could be modelled by cellular automata is quite a good one - but is not original to K. (And I frankly don't buy it myself - because there's no way to model the Uncertainty Principle that way as far as I can see!)
Frankly, I'm more likely to believe that people 20 years from now will be scrabbling for food in a post-industrial wasteland than we'll be pristine uploaded consciousnesses in a massive computer somewhere.
posted by lupus_yonderboy at 9:17 AM on August 18, 2010 [11 favorites]
Consider "getting to the moon". We accomplished that pretty fast, right?
The trouble is that humans had a pretty good idea of how to get to the moon for hundreds of years before it actually happened. I believe some ancient Chinese was the first person to suggest using a rocket to get there, long before Verne, but serious scientists in the late 19th century were discussing how it'd be accomplished.
But we have no idea of how to create a mind - none at all.
Let's suppose I magically gave you a computer that was 100 times as powerful as any existing installation in the world - say, a computer with 100 times the FLOPS and RAM of Google's largest data center. Or, heck, 10,000 times...
How would you use this to make an artificial intelligence? No one has the faintest idea.
Perhaps, just perhaps, if we built a lot of virtual neurons, and then somehow fed them information from the world, something would happen. But what? And how do we feed them useful information? And how do we feedback within the system to make sure it goes somewhere useful?
No one has the slightest idea of how to deal with these questions. There isn't even a useful, testable theory on how any of these things might work.
We have things like Eliza and Deep Blue - but the more work we do on these toys, the more we see that these programs cannot possibly be extended to make actual general purpose intelligences.
Is it impossible? I have no idea. Could there be some breakthrough in 20 years? Absolutely possible. I'd be surprised if there weren't SOME breakthrough in AI in the next 20 years (but what was the breakthrough in the last 20 years?) But will we have a machine that passes the Turing test? Unlikely. Uploading consciousness? Very unlikely.
Craig Silverstein was and probably still is fond of saying that true artificial intelligence is 150 years away. I argued a bit with him at the time - I might be closer to believing 50 years - but he's probably close to the truth.
I have to say that K lost a lot of respect from me because of his "automata" obsession. He acts as if he invented the idea, when in fact people have been studying them since the 50s. There are tons of negative results in the field that show that doing specific, useful, computations with this class of machines must inherently be very slow. The thought that particle physics could be modelled by cellular automata is quite a good one - but is not original to K. (And I frankly don't buy it myself - because there's no way to model the Uncertainty Principle that way as far as I can see!)
Frankly, I'm more likely to believe that people 20 years from now will be scrabbling for food in a post-industrial wasteland than we'll be pristine uploaded consciousnesses in a massive computer somewhere.
posted by lupus_yonderboy at 9:17 AM on August 18, 2010 [11 favorites]
Conservapædia has a stiffy for Myers.
posted by homunculus at 9:22 AM on August 18, 2010 [1 favorite]
"Hey Kid, I'm a computer... Stop all the downloading!"
posted by Capricorn13 at 9:26 AM on August 18, 2010
I've already uploaded my brain -- it's called Facebook. Granted, it's a little rudimentary, but once it gets enough status updates that shit's going conscious.
posted by iamck at 9:42 AM on August 18, 2010
He says it's false, but he doesn't really give a solid reason why he thinks it's false.
Actually, Meyers does give a number of solid reasons. To highlight them, since it's clear that you didn't actually read Meyers before spouting off:
1: The model has to simulate all of neural development.
2: We've not found a general solution to the sequence-to-folding problem.
3: The actual metabolic and regulatory functions of proteins in vivo need to be reverse-engineered from living cells.
4: The naive model of genetic determinism doesn't work.
On top of that, Meyers actually cites current literature that highlights the complexity of understanding what proteins do in cells.
Because people can't predict the future? Seems obvious enough.
One can certainly make reasonable predictions based on trends and probability. We do it with weather and economics after all.
Meyers isn't saying Kurzweil's prediction is unlikely, he's saying it's logically impossible, and (trying) to make an argument based on fundamental principles without bothering to do the math.
Well, he's not saying it's logically impossible at all.
And making an argument from fundamental principles, without doing the math, is entirely kosher when the fundamental principles you're knocking down are themselves fundamentally wrong. Kurzweil's argument is like that old engineering joke with the punchline, "assume a circular cow." He can get away with it here because while everyone knows the hype regarding the human genome project, few people really understand how developmental biology is looking at complex relationships between genetics, structure, and environment.
posted by KirkJobSluder at 9:44 AM on August 18, 2010
lupus_yonderboy: Let's suppose I magically gave you a computer that was 100 times as powerful as any existing installation in the world - say, a computer with 100 times the FLOPS and RAM of Google's largest data center. Or, heck, 10,000 times...
A 10,000x improvement in processing speed is only about 20 years away. That's how fucking awesome we are. So what you are saying is that if getting to the Moon is even 1/100th as complicated as building an electronic brain, we are something like 1000 years away from practically being able to do it?
posted by public at 9:57 AM on August 18, 2010
I think we'll have an electronic brain by the end of the century. For that matter, I think we have super-human machine intelligence right now. It's just not proven to be very interesting to talk to super-human intelligences who just want to deliver you a map of your neighborhood.
posted by KirkJobSluder at 10:03 AM on August 18, 2010 [1 favorite]
delmoi: I just read carefully over PZ Meyers article again, and I have to add a strawman to your list of sins here. Because Meyers doesn't call Kurzweil's claim "illogical" or "impossible." So let's try arguing with what Meyers actually DOES say: -- KirkJobSluder
Really? Then what's this:
He's not just speculating optimistically, though: he's building his case on such awfully bad logic that I'm surprised anyone still pays attention to that kook. -- PZ Myers
Are you claiming that "awfully bad logic" is somehow different from 'illogical'? Because I think most people interpret them the same way.
Let's suppose I magically gave you a computer that was 100 times as powerful as any existing installation in the world - say, a computer with 100 times the FLOPS and RAM of Google's largest data center. Or, heck, 10,000 times... -- lupus_yonderboy
Except, if Moore's law held, it wouldn't be 100 times or 10,000 times; it would be 2^((2/3)*n), where n is the number of years. So about a million in 30 years, or roughly 10 billion in 50 years (but only around 10,000 in 20 years). But anyway, I don't think Moore's law will hold that long.
Is it impossible? I have no idea. Could there be some breakthrough in 20 years? Absolutely possible. I'd be surprised if there weren't SOME breakthrough in AI in the next 20 years (but what was the breakthrough in the last 20 years?) But will we have a machine that passes the Turing test? Unlikely. Uploading consciousness? Very unlikely.
But it's important to keep in mind that simulating the brain on a computer and "AI" are two totally separate things. Right now there is research being done on brain simulation and the purpose is to be able to do experiments without using live test animals. It has nothing to do with AI. Having AI won't allow us to 'upload our brains' at all.
Craig Silverstein was and probably still is fond of saying that true artificial intelligence is 150 years away. I argued a bit with him at the time - I might be closer to believing 50 years - but he's probably close to the truth. -- lupus_yonderboy
posted by delmoi at 10:06 AM on August 18, 2010
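The 2^((2/3)*n) figure above is easy to sanity-check. Here is a minimal sketch in Python, purely illustrative and not from the thread, assuming the classic one doubling every 18 months:

    # Rough Moore's-law growth: one doubling every 18 months,
    # i.e. a factor of 2**(n / 1.5) == 2**((2/3) * n) after n years.
    def moores_law_factor(years, doubling_period_years=1.5):
        return 2 ** (years / doubling_period_years)

    for years in (20, 30, 50, 100):
        print(years, f"{moores_law_factor(years):.3g}")
    # 20 -> ~1e4, 30 -> ~1e6, 50 -> ~1e10, 100 -> ~1e20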
Myers' line,
Got that? You can't understand RHEB until you understand how it interacts with three other proteins, and how it fits into a complex regulatory pathway. Is that trivially deducible from the structure of the protein? No. It had to be worked out operationally, by doing experiments to modulate one protein and measure what happened to others. If you read deeper into the description, you discover that the overall effect of RHEB is to modulate cell proliferation in a tightly controlled quantitative way. You aren't going to be able to simulate a whole brain until you know precisely and in complete detail exactly how this one protein works.
actually makes me think that brain simulation is even more easily quantifiable and replicable than I'd supposed. If the problems are just 1) interactions and 2) learning what each protein does, it just seems like an enormously-large-and-labor-heavy-yet-structurally-simple brute-force task.
(And yes, I'm ignorant in terms of both biology and computer science. That part, I know.)
posted by darth_tedious at 10:17 AM on August 18, 2010
delmoi: Are you claiming that "awfully bad logic" is somehow different then 'illogical'? Because I think most people interpret them the same way.
In which case, I stand corrected. I'll instead point out that it's in bad form to give Kurzweil license for spouting nonsense on the grounds that he's just being sloppy with his words, while denying Meyers the same benefit of the doubt when he might simply have been hasty in his choice of words.
posted by KirkJobSluder at 10:20 AM on August 18, 2010
it just seems like an enormously-large-and-labor-heavy-yet-structurally-simple brute-force task.
Really? Well it shouldn't.
posted by Artw at 10:23 AM on August 18, 2010
monju_bosatsu: Maybe the way I wrote it didn't make it entirely clear, but there's never a distinction between the "new" you and the "old" you. You just start experiencing different parts of your mind in different physical locations. It's like when you poke something with a stick; for all intents and purposes, the stick becomes a part of your body. You don't think, "what does the pressure on my fingers tell me the stick is doing?" You think, "What is the tip of the stick doing?" In the same way, when your visual cortex is shut off, you still have full physical control of your body. There is no "new" you controlling your eyes, and "old" you controlling your body. You can use the computer's camera to look at a glass, and then use your brain's motor functions to reach for and pick it up. There's never a split between two you's. In The Prestige, what happened wasn't comparable at all, as it was effectively cloning. BOTH of the resultant people were conscious and alive. With my idea, you just transfer your consciousness piece-wise to the computer, the transfer itself being disorienting but not consciousness-stopping (or doubling).
posted by cthuljew at 10:29 AM on August 18, 2010
Ray Kurzweil does not understand the brain.
Unlike all those other scientists who do understand the brain?
posted by straight at 10:31 AM on August 18, 2010
He understands their lack of understanding less than them.
posted by Artw at 10:35 AM on August 18, 2010 [3 favorites]
I was tending a bonfire at a party this summer where two post-doc neurobiologists really got into talking shop. They were both researching signal transport mechanisms in neural cells, very technical and complex stuff. When they reached a pause, I asked them how long before we'd be able to simulate a single neuron accurately enough to pass for the real thing. Neither was willing to give any sort of timeframe other than "decades," as from their point of view we just don't know nearly enough about how they work.
I think we'll have to have working nanotechnology, something that can actually sit in a cell and watch it function in situ, before we have a chance at understanding even the most basic building block of the brain.
Maybe in a few decades we can have a supercomputer somewhere that can effectively simulate a neural cell. Then we can make two of them, and see if we can get them to interact. Then we can start talking about full blown brain simulation, not before.
posted by Blackanvil at 10:36 AM on August 18, 2010
The amendment to Myers' blog post tacitly admits that he misinterpreted Sejnowski's comment; Myers took it to describe their approach to the problem, when it was really merely an information-theoretic estimate of the complexity of the task at hand. A more valid objection would be that this ball-park estimate is useful as neither an upper nor a lower bound on the resources needed to model the human central nervous system.
Kurzweil vs. Myers is really an instance of engineer vs. scientist. The former wants to build; the latter seeks to understand.
FTA: But even a perfect simulation of the human brain or cortex won’t do anything unless it is infused with knowledge and trained, says Kurzweil. So Kurzweil has some inkling about the issues at hand.
What is the point of getting all ranty about a piece of speculation? Nothing, other than venting frustration. Worse, your emotional state puts you at risk of saying incorrect things.
posted by polymodus at 10:36 AM on August 18, 2010
straight: Unlike all those other scientists who do understand the brain?
I'd say those other scientists understand enough about the brain to identify a few hundred open questions, methodological problems, and ethical pitfalls with the research that need to be addressed before we can claim to be able to develop an accurate simulation.
posted by KirkJobSluder at 10:39 AM on August 18, 2010 [1 favorite]
He understands their lack of understanding less than them.
I think Kurzweil's perspective is that there is a lot of low-level functioning (e.g. much of the chemical interactions) that can be ignored while still leaving a usable model of the human brain. In computers, this is akin to how you don't need to know the theory of transistors to use and write software programs.
It's an interesting avenue to pursue, because scientists like PZ Myers seem to insist on the need to understand the system down to the smallest cellular, hormonal, and environmental details.
Whether Kurzweil's perspective is right or wrong, I haven't seen a clear argument favoring either.
posted by polymodus at 10:47 AM on August 18, 2010 [1 favorite]
I'm really not sure that using the human genome is a tenable position if you're going to abstract everything else away.
posted by Artw at 10:57 AM on August 18, 2010
>Let's suppose I magically gave you a computer that was 100 times as powerful as any existing installation in the world - say, a computer with 100 times the FLOPS and RAM of Google's largest data center. Or, heck, 10,000 times...
How would you use this to make an artificial intelligence? No one has the faintest idea.
>Really? Well it shouldn't. [seem like an enormously-large-and-labor-heavy-yet-structurally-simple brute-force task]
Okay, if not, I'd like to understand why.
Am I correct in assuming the factors are these?
1) We don't know how the brain itself functions;
2) We don't know which parts of the brain interact with which other parts, to produce a given result;
3) Even assuming infinite processing-power, we don't know how to configure this processing power in such a way that it creates what we would recognize as sentience.
Fundamentally, there is a finite number of parts to the brain... and a finite number of classes of interaction the developing brain can have with the outside world.
So the number of working materials is an enormously large but limited set.
If the number of elements, then, is limited and knowable; and we have access to all of them; then either we can construct sentience using these elements (or their digital analogues)... or we cannot, because sentience is literally a magical process.
Either sentience is magic, or it is replicable.
It may be that Kurzweil is wrong to argue that we can reverse engineer the brain in a decade; this does sound absurd. But a longer timeline-- 30 years or 300 years-- makes it seem less so; with a sufficiently long timeline, and an adequate diet of 50s sci-fi movies, we should, at a bare minimum, be able to grow working brains in a vat (with a proper artificial sensorium attached, of course).
That we don't know how to configure artificial processing power in order to create sentience-- that we don't know the right way to nest goals within goals and feedback loops on top of feedback loops-- seems less like some absolute categorical block than simply the task at hand: figuring out the right structure for all that processing power.
Sure, Eliza and Deep Blue are toys. But they're also examples of what-not-to-do, and what-doesn't-get-us-all-the-way-there... which is to say, they are stepping stones.
posted by darth_tedious at 10:58 AM on August 18, 2010 [1 favorite]
I'm really not sure that using the human genome is a tenable position if you're going to abstract everything else away.
Kurzweil didn't say this, neither did Sejnowski. At least, not in the Wired article. As I pointed out earlier, Myers misread their comments about DNA.
posted by polymodus at 11:01 AM on August 18, 2010
The design of the brain is in the genome. The human genome has three billion base pairs or six billion bits, which is about 800 million bytes before compression, he says. Eliminating redundancies and applying loss-less compression, that information can be compressed into about 50 million bytes, according to Kurzweil.
No, still stupid.
posted by Artw at 11:05 AM on August 18, 2010
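For what it's worth, the arithmetic in the quoted paragraph is just a unit conversion. A minimal sketch in Python, illustrative only and using the article's own figures (the 50-million-byte compressed size is Kurzweil's claim, not something derivable here):

    # Kurzweil's back-of-envelope figures from the quoted paragraph:
    base_pairs = 3_000_000_000          # ~3 billion base pairs
    bits = base_pairs * 2               # 2 bits per base pair (A/C/G/T)
    bytes_uncompressed = bits // 8      # ~750 million bytes, i.e. "about 800 million"
    bytes_compressed = 50_000_000       # his claimed figure after lossless compression

    print(bytes_uncompressed, bytes_compressed)
    # None of this says anything about the machinery needed to *interpret*
    # those bytes, which is the substance of Myers' objection.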
No, still stupid.
Kurzweil did not assert that the path to the model of the brain is through analyzing our chromosomes. Myers tacitly admits this in his amendment.
Currently everyone who is objecting to Kurzweil based on the DNA argument completely misread the purpose of that Wired paragraph.
posted by polymodus at 11:10 AM on August 18, 2010
delmoi: "Kurzweil is kind of out there, but he's been researching this stuff for decades."
This is not "research".
posted by meehawl at 11:13 AM on August 18, 2010 [2 favorites]
That paragraph was intended to argue that there exists a feasible point for the problem of brain modeling. It makes no assertions about a feasible path to that point.
But I can see how naive thinking or reading in a rush would put words in mouths where they were not intended. So this is also an example of miscommunication; it didn't help that Wired did not give readers such as PZ Myers enough context for Sejnowski's calculation.
posted by polymodus at 11:14 AM on August 18, 2010
feasible path
One last clarification: the touted objection is that deriving the design of the brain from the plain genome is an infeasible path. Again, only a casual reading of Kurzweil's statements would lead to this objection; most people would agree on the infeasibility of this approach, but it is irrelevant to what Kurzweil was trying to get across in the article.
posted by polymodus at 11:19 AM on August 18, 2010
Currently everyone who is objecting to Kurzweil based on the DNA argument completely misread the purpose of that Wired paragraph.
It appears to be an estimation of the complexity of the problem based on how many "bits" the human genome has.
posted by Artw at 11:27 AM on August 18, 2010
darth_tedious: Either sentience is magic, or it is replicable.
There's a huge difference between 'theoretically possible' and 'actually achievable'.
Consider encryption; it's obviously the case that every encrypted message using a public-key method can be cracked. It's a finite problem. But it's not usefully finite, at least from a 2010 perspective. It would take all the computers on Earth roughly the expected lifetime of the universe to crack just one heavily-encrypted message.
And then, hey, we can start on #2.
Trying to brute-force simulate a working brain may indeed be a finite problem, but that alone doesn't mean a damn thing.
posted by Malor at 11:29 AM on August 18, 2010
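As a concrete illustration of Malor's "not usefully finite" point, here is a minimal back-of-envelope sketch in Python. The 128-bit keyspace and the aggregate guess rate are illustrative assumptions, not figures from the thread:

    # How long would a brute-force search of a 128-bit keyspace take?
    keyspace = 2 ** 128                  # possible keys
    guesses_per_second = 1e18            # assumed aggregate rate, deliberately generous
    seconds = keyspace / guesses_per_second
    years = seconds / (60 * 60 * 24 * 365)
    print(f"{years:.2e} years")          # ~1e13 years, roughly a thousand times the age of the universe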
I remember some neuroscience research from the 1990s where Larry Abbott's lab at Brandeis was able to replace a real synapse in the stomatogastric ganglion with a simulated one. In that case, they apparently could simulate a single neuron with sufficient accuracy that they observed expected behavior from the system.
My recollection of a talk Abbott gave to a general science audience was that the computational power required to simulate that synapse was equivalent to a Mac Plus.
Here's his lab. I think his "dynamic clamp" paper from the early nineties describes or cites his work.
posted by zippy at 11:29 AM on August 18, 2010
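For readers wondering what "simulating a neuron" even means computationally, here is a minimal sketch of about the crudest model there is, a leaky integrate-and-fire neuron. It is vastly simpler than the conductance-based models used in dynamic-clamp work like Abbott's, and all constants are illustrative rather than fitted to any real cell; it mainly shows why the per-neuron arithmetic can be tiny compared to the cost of knowing which model is right:

    # Leaky integrate-and-fire neuron: dV/dt = (-(V - V_rest) + R*I) / tau
    # Euler integration; all constants are illustrative, not fitted to any real cell.
    def simulate(I=2.0, dt=0.1, steps=1000,
                 V_rest=-65.0, R=10.0, tau=10.0,
                 V_threshold=-50.0, V_reset=-70.0):
        V = V_rest
        spikes = []
        for step in range(steps):
            dV = (-(V - V_rest) + R * I) * (dt / tau)
            V += dV
            if V >= V_threshold:        # fire and reset
                spikes.append(step * dt)
                V = V_reset
        return spikes

    print(len(simulate()), "spikes in 100 ms of simulated time")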
This kind of dogmatic garbage is going to make the cults of the future and this is one of them
Isn't it interesting how pretty much all the crazy religious cults are based on the morbid fear of death? The whole Singularity thing really just reeks of that (whether it's possible or not).
"As long as it happens before I get too old, I can have everlasting life!" gosh that sure sounds familiar...
posted by zoogleplex at 11:35 AM on August 18, 2010
> But it's not usefully finite, at least from a 2010 perspective.
But I think it's the 2010 perspective that's the point.
We simply don't know how much processing power we'll have available in the future-- we have Moore's Law as a theoretical guide, but that's all it is-- and more to the point, we don't know what we'll discover, either about our tools or our approaches-- as our discoveries converge and impact on one another.
One thing we do know about interactive systems is that, pretty much by definition, there are lots of threshold effects: no trace of Y until X > n... but, hey, once X > n, there's suddenly more Y than you can shake a stick at.
I think it's premature to rule out such a pattern for constructing sentience... in which case, we won't know we're actually, truly close to doing it, until we've done it.
posted by darth_tedious at 11:48 AM on August 18, 2010
polymodus: I think Kurzweil's perspective is that there is a lot of low-level functioning (e.g. much of the chemical interactions) that can be ignored while still leaving a usable model of the human brain. In computers, this is akin to how you don't need to know the theory of transistors to use and write software programs.
I find that to be a questionable assumption given that something as simple as blood sugar affects cognition in statistically predictable but individually unpredictable ways.
darth_tedious: Either sentience is magic, or it is replicable.
Of course it's replicable. Millions of parents do it every year.
It's quite possibly something that can be reinvented in machine intelligences.
That's a different question from proposing "The design of the brain is in the genome," because this isn't even true for understanding C. elegans.
And that doesn't even get into a central problem of simulation of strong dependence on initial conditions. Even something like the solar system with a single operating mechanism becomes a mathematically complex problem once you start adding in the hundreds of known objects.
polymodus: There's very few ways to reasonably interpret, "The design of the brain is in the genome."
posted by KirkJobSluder at 11:53 AM on August 18, 2010
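The point about strong dependence on initial conditions is easy to demonstrate even in a toy setting. Here is a minimal sketch in Python of two planar three-body runs whose starting positions differ by one part in a million; the masses, positions, softening term, and crude Euler integrator are all illustrative assumptions, not a serious solar-system integrator:

    import math

    # Toy planar N-body integrator (naive Euler; fine for illustration, not for science).
    G = 1.0

    def step(bodies, dt):
        # bodies: list of [mass, x, y, vx, vy]
        forces = []
        for i, (mi, xi, yi, _, _) in enumerate(bodies):
            fx = fy = 0.0
            for j, (mj, xj, yj, _, _) in enumerate(bodies):
                if i == j:
                    continue
                dx, dy = xj - xi, yj - yi
                r3 = (dx * dx + dy * dy + 1e-4) ** 1.5   # softened to avoid blow-ups in this toy
                fx += G * mi * mj * dx / r3
                fy += G * mi * mj * dy / r3
            forces.append((fx, fy))
        for (fx, fy), b in zip(forces, bodies):
            b[3] += fx / b[0] * dt
            b[4] += fy / b[0] * dt
            b[1] += b[3] * dt
            b[2] += b[4] * dt

    def run(perturb, steps=20000, dt=0.001):
        # three equal masses in an asymmetric configuration
        bodies = [[1.0, 0.0, 0.0, 0.0, -0.5],
                  [1.0, 1.0, 0.0, 0.0, 0.5],
                  [1.0, 0.5, 0.8 + perturb, 0.1, 0.0]]
        for _ in range(steps):
            step(bodies, dt)
        return bodies

    a, b = run(0.0), run(1e-6)
    gap = math.hypot(a[2][1] - b[2][1], a[2][2] - b[2][2])
    print(f"position difference for body 3 after the run: {gap:.6f}")
    # A one-part-in-a-million change to one starting coordinate can grow into
    # a visible difference in the final state.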
Either sentience is magic, or it is replicable.
On preview: what Malor said above about encryption is spot-on. If a problem is intractable from a computational standpoint, it's not going to be solved by throwing more computational power at it.
Sentience, I would argue, is even worse off. Artificial intelligence is a field that deals with relatively minor tasks. Playing chess is the ideal problem for a computer. There are only 64 squares on a chess board, there are exactly 32 pieces, each piece moves discretely and between discrete spaces, in totally predictable patterns. Turns are sequential and also discrete. There are no simultaneous moves, no placing a piece on two squares at once, no moving outside of the board, and you can predict exactly what the board will look like after any given move. The computer can also process difficult situations with extra time, without any adverse consequences.
Driving a taxi is a very poor problem for a computer. The field of motion is potentially unlimited, and there are potentially unlimited numbers of objects, which move totally unpredictably. Everything happens at once, things can overlap, the world is 3-dimensional. You can't predict what the world will look like after any given move. AND the computer doesn't have time to process difficult situations, they are all time-critical. Even if you could come up with a set of rules that let a computer safely drive a taxi, getting it to run at the speed it would require would probably take more investment and power than simply paying a human taxi driver.
Sentience is much more like driving a taxi, except that the problem isn't even well defined. We don't have a good model of what sentience is, other than that we have it. The field of AI stays away from it, because it would basically mean pouring dollars into an unproductive pipe dream. It's not even a matter of having better computers (which is slowing down anyway - Moore's Law will probably not apply by 2018), it's a question of a huge undefined problem that nobody is seriously looking at. Modelling the physical brain is a huge, tremendously stupid waste of time; the brain works on large-scale parallelism while few computers have more than a couple of processors working at once. The idea of an algorithm for "thinking humanly" is a sci-fi dream, they tell you that on your first day in an actual AI class.
If sentience is replicable, to be blunt: I do not think computers are the kind of device that could efficiently replicate it. It just doesn't fall into the "computers are good at this" box, and adding more computational power doesn't change anything about that.
posted by graymouser at 12:02 PM on August 18, 2010 [1 favorite]
darth_tedious: if we wanted to do this in 300 years, yeah, sure. I don't think we'll ever want to do this, because the protein-protein interactions are necessary for a brain, but not for intelligence per se, which is presumably what we'd be interested in. You can understand the brain without simulating it on a computer.
That said, you're supremely underestimating the complexity here. Protein-protein interactions in a cell generally don't follow some orderly structure. The best analogy I can think of is this story of genetic circuit design. Pieces fit together in complicated patterns that happen to work given the prevailing conditions in a cell. The main difference being that circuit components have an intended function even if they are arranged in a strange manner, whereas proteins do not. If you alter the concentration of a given protein in a cell, you will alter the nature of its interactions with other proteins. You may figure out a protein's "job", then do a genetic knock out to remove it, only to find that some other protein has taken over that function. For this reason, protein interaction studies have to be carried out in living cells under the conditions the organism would experience in real life. You can screen things in advance, but eventually you have to start mucking around in real living cells. This is a tall order, and makes the going slow.
There are people working on this problem, but it's crazy complicated. Here's an example of the kind of diagram that they produce. It's almost certainly incomplete and highly simplified.
on preview: long argument with my advisor means I'm behind. darth_tedious, I'm referring to your earlier comment.
zippy: Larry Abbott is a theorist who collaborated with Eve Marder. Her lab would have been the ones to do that. I'm in that lab now, and I've worked with those dynamic clamp experiments (I'm also a theorist, I don't do the experiments myself). Those model cells are outrageously simple. Their goal isn't to reproduce the actual behavior of an STG cell, it's to provide a simple network with biologically relevant behavior and known parameters and dynamics.
posted by Humanzee at 12:05 PM on August 18, 2010 [1 favorite]
Seems to me that several other essential communication/thinking systems are being ignored. Our hormonal systems, for instance: they are intimately tied to our brain's functioning, but everyone is all "neuron neuron neuron."
posted by five fresh fish at 12:21 PM on August 18, 2010
Humanzee, whether it's their goal or not, they did get real behavior when dropping a simulated element into the stomatogastric ganglion (STG). My point is that, while simulating the entire range of neuronal activity or a complete network, even one as simple and well understood as the STG, is a long way off, we're on the path to simulating the meaningful parts of neurons such that other neurons that interact with the simulated ones continue to behave normally. It's a stepping stone that I wanted to point out as a partial refutation to the claim that we cannot simulate neurons.
The correct claim, I think, is that we can simulate key elements of some (large, well studied) neurons quite well, and this gives me hope that we could simulate all of the useful parts of a neuron without having to encode and simulate all of the laws of physics.
posted by zippy at 12:24 PM on August 18, 2010
> There are people working on this problem, but it's crazy complicated. Here's an example
Okay, that's quite interesting. Still, this is immensely complicated, and not presently understood... yet not beyond understanding. Doesn't that reduce to a brute-force problem?
> Consider encryption... Trying to brute-force simulate a working brain may indeed be a finite problem, but that alone doesn't mean a damn thing.
But doesn't that reduce to again just concluding that the problem... is a problem? That either a) much more brute-force is needed, or b) more usefully, the proper way to attack the problem hasn't yet been found?
> The best analogy I can think of is this story of genetic circuit design.
Thanks for the article-- damned interesting. Of course, what it mainly brings to mind for me is how slow our standard creative regimes are: form a hypothesis, add a line of code/dollop of protein what have you, and retest. I'm curious how things will change when we start using increasingly independent systems to test many more things in parallel.
> because the protein-protein interactions are necessary for a brain, but not for intelligence per se
Right. Actually, attempts to simulate intelligence by reverse-engineering the physical brain, or building a circuit-by-circuit analogue of the brain, seem to fall in the category of Doing Something When You Don't Know What Else to Do.
I look at it this way: How long did it take for the human brain to evolve? And how many waves of unpredictable visual, auditory, and tactile stimuli must come crashing onto an infant's already evolutionarily engineered brain before that infant develops what we recognize as sentience?
Really, if there's one ingredient we should assume is necessary for the creation of artificial intelligence-- of whatever degree or kind, precisely because the different degrees would vary enormously in terms of recognizability as intelligence (and to some extent, are a matter of arbitrary definition)-- it's probably patience.
posted by darth_tedious at 12:39 PM on August 18, 2010
zippy, I guess I was overly broad in my objections. I think we mostly agree.
No one tries to simulate all the laws of physics, I was just trying to explain why. In practice, neurons are simulated by assuming they're composed of connected cylindrical capacitors, with ion channels injecting current into them. The ion channels are collectively simulated with approximate equations ---typically Hodgkin-Huxley style, although there are much better alternatives (I use Hodgkin-Huxley). The trick is knowing which ion channels, what their properties are, and how they respond to neuromodulation and past activity (i.e. how the channels are regulated to provide homeostasis). Very few models even attempt to address homeostasis, in part because little is known about it, and in part because it's difficult to observe and therefore constrain. You have to keep a prep alive for a long time, and make repeated measurements.
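For what it's worth, here is roughly what that looks like in code: a minimal single-compartment Hodgkin-Huxley sketch using the textbook squid-axon parameters (not STG values), with forward-Euler integration and none of the morphology, extra channel types, neuromodulation, or homeostatic regulation described above. Treat it as an illustration only.

import math

# Single-compartment Hodgkin-Huxley: one isopotential membrane patch,
# three currents (Na, K, leak), standard squid-axon parameters.
C_m = 1.0                              # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3      # max conductances, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.4    # reversal potentials, mV

def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * math.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * math.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * math.exp(-(V + 65.0) / 80.0)

V, m, h, n = -65.0, 0.05, 0.6, 0.32    # approximate resting state
dt, t_end, I_ext = 0.01, 50.0, 10.0    # ms, ms, injected current in uA/cm^2

steps = int(t_end / dt)
for i in range(1, steps + 1):
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K = g_K * n**4 * (V - E_K)
    I_L = g_L * (V - E_L)
    dVdt = (I_ext - I_Na - I_K - I_L) / C_m
    # Update the gating variables with the current voltage, then step V.
    m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
    n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
    V += dt * dVdt
    if i % 100 == 0:                   # print once per simulated millisecond
        print(f"t={i * dt:5.1f} ms  V={V:7.2f} mV")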
Looking over the papers, they used dynamic clamp to artificially inject current that corresponds to the behavior expected from a single type of channel (sensitive to proctolin, a neuromodulator) and a synapse. I agree that many of the most important components of neurons are well-known (although also many are constrained by guesswork, or assuming cross-cell or cross-species equivalence).
I certainly hope it's possible to piece together these well-known parts into a reasonable model of neural behavior ---that's my whole research project! Still, at the moment, it really hasn't been done. Most STG models are single-compartment, resulting in very inaccurate activity patterns. They are also often heavily under-constrained, or tuned by hand, or matched to only a few key features. I'm quite confident we'll have very good models of STG neurons in less than 25 years, even if my own project is a complete bust. But it's a long way to the brain.
posted by Humanzee at 12:59 PM on August 18, 2010
The amendment to Myers' blog post tacitly admits that he misinterpreted Sejnowski's comment; Myers took it to describe their approach to the problem, when it was really merely an information-theoretic estimate of the complexity of the task at hand. A more valid objection would be that this ball-park estimate is useful as neither an upper nor a lower bound on the resources needed to model the human central nervous system.
Yeah, I think Myers and some of the people here are being a little unfair to Kurzweil. There's no way that he intended to suggest that the route to brain simulation is a first-principles simulation of the physics and chemistry of molecular biology. That would be insane; people can do that kind of complete physicochemical simulation now at the scale of a million atoms over a microsecond or so. Even a 10,000-fold improvement won't get you to a single complete signaling pathway, let alone brain function.
Kurzweil's error is more subtle, and Myers gets it, but he seems unable to resist lampooning Kurzweil's argument as well (gotta solve the protein folding problem first, har, har!). The main issue is that the informational content required to understand cell function, let alone brain function, goes beyond the information in the genome. Basically, you also need to understand gene regulation and you need to understand how gene products interact. This is information in addition to the information in the genome, and while you could in principle learn it by running a billion-year-long molecular simulation, I don't think anyone is seriously suggesting that. We learn it by doing biology. And people are building simulations of cellular signaling based on what we're learning, and similar simulations could eventually be a route to understanding brain function.
But the informational content needed to build these simulations isn't necessarily limited by the length of the genome. The length of the genome only gives you some idea of the number of interacting components (variables, or maybe objects, in computer science terms). You also need the interactions (functions), and we don't know how many of those there are.
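A rough way to see the gap between the parts list and the wiring: with n components, the pairwise interactions alone scale as n(n-1)/2, before any higher-order or context-dependent effects. The component counts below are placeholders, not biological measurements.

from math import comb

# Knowing how many components exist says little about how many interactions
# remain to be characterized: pairwise possibilities alone grow quadratically.
for n_components in (1_000, 20_000, 100_000):
    pairs = comb(n_components, 2)
    print(f"{n_components:>7,} components -> {pairs:>14,} possible pairwise interactions")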
posted by mr_roboto at 1:32 PM on August 18, 2010 [2 favorites]
Humanzee writes: zippy, I guess I was overly broad in my objections. I think we mostly agree.
Me too. I was thinking mostly of the proposed problem upthread that, since we don't understand brains / neurons completely, we have to model everything, and therefore it's a (seemingly) impossible task.
mr_roboto writes: But the informational content needed to build these simulations isn't necessarily limited by the length of the genome.
Yes. I have problems with this approach too, but it's fun to think about it. I would love to say that the informational content of DNA is an upper bound to the complexity of the brain in the information theory sense, but I don't know whether that's true. But let me give its plausibility a shot. Warning, major handwaving ahead.
I think Kurzweil is saying that the rules for building a brain, above the atomic level, and subject to the fetal environment, are encoded in DNA. He's taking the frameworks of chemistry and physics (whose rules we could say are spelled out externally) for granted, and so protein-protein interactions presumably occur at this level, and structures arise from the specification of early building blocks, which themselves operate independently but as expected consequences of the initial instructions. It may be sufficient then to specify, broadly, the pieces and their initial arrangement, and then let them do their thing.
This seems similar to how one can write down instructions for building a stone bridge, but these rules do not need to specify that there is gravity, or that the stones that make up an arch may be formed by volcanic processes, or that two stones when pressed together can remain in place by friction. All of these, and more, are necessary for the bridge to exist and function, but modeling these or encoding this knowledge explicitly in the instructions is not necessary to specify the form and function of a bridge, or even its method of construction. One can encode them implicitly, for example by specifying the shape of the bridge such that it holds up under gravity, without explicitly saying anything about gravity and how it works.
So, handwaving acknowledged, it seems possible that DNA sets an upper bound on the complexity of the brain's design, in the same sense that a description of a bridge can set an upper bound on the complexity of the resulting structure.
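One toy way to see the "compact specification, larger structure" intuition (an analogy only, nothing biological): a few bytes of rewrite rules, applied blindly by an external process, unfold into something much bigger than the rules themselves.

# A tiny rule set plus a fixed external "environment" (the loop) unfolds into
# a structure far larger than the rules, which is the spirit of "the spec can
# bound the design without spelling out everything the finished object contains".
rules = {"A": "AB", "B": "A"}
state = "A"
for _ in range(20):
    state = "".join(rules.get(ch, ch) for ch in state)
print(f"{len(str(rules))} characters of rules -> {len(state)} characters of structure")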
IANABiologist, IANYBiologist
posted by zippy at 2:50 PM on August 18, 2010
Does this mean we might have to take our computers to the psychiatrist for antidepressants?
posted by anniecat at 2:55 PM on August 18, 2010 [1 favorite]
Isn't the description of a bridge setting a lower bound on the complexity of the resulting structure? Something that's simpler than the description wouldn't be a bridge, but something could be limitlessly complex and still incidentally fulfill the description of the bridge.
...or be limitlessly complex and not fulfill the description of the bridge so maybe there's just no relationship at all between the description and the complexity.
But even thinking in terms of instructions, you might have a bridge-building robot and then the instructions would be “Press the button labeled ‘make a bridge.’” But the bridge would certainly be more complex than those instructions.
(And anyways I don't agree with the notion that DNA is necessarily like a series of instructions.)
posted by XMLicious at 3:17 PM on August 18, 2010
Someone needs to sit down with PZ Myers and teach him LISP. I would argue that once he's implemented his own interpreter and written a reasonably trivial genetic algorithm with it, he will have changed his mind about this particular issue. I'd be willing to put money on it.
He would learn several very important things:
1) the program is the data and vice versa.
2) the concept of self-hosting
3) That a very powerful Lisp macro, compiled and compressed, could easily be 8 bytes inside of a larger compiled and compressed program. That macro could also display significantly more polymorphous functionality than the specific genome segment he's talking about. (A loose illustration follows below.)
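Not Lisp, but a loose Python stand-in for that last point: a tiny code template, treated as data, gets expanded into several distinct behaviors at runtime. Real macros operate on structured code rather than strings, so this only gestures at the idea.

# A small code template, far smaller than the behavior it unfolds into,
# is expanded and executed at runtime (code as data, roughly speaking).
template = "lambda x, y: x {} y"
ops = {sym: eval(template.format(sym)) for sym in ("+", "-", "*", "/")}
print(ops["+"](2, 3), ops["*"](2, 3), ops["-"](2, 3))   # 5 6 -1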
posted by Freen at 3:34 PM on August 18, 2010
I see the LISP people are still fantasizing about discovering that the brain is written in it.
posted by Pope Guilty at 3:39 PM on August 18, 2010 [7 favorites]
XMLicious, I think it would be the upper bound of the complexity. Consider the following. You could add instructions that resulted in no change, or which undid each other. So the complexity could be less than the instructions themselves. However, any element not resulting from the instructions would not be present (subject to much handwaving about how, say, if you specified that the bridge be made out of limestone, and it was in a wet environment, the stalactites that might result over time would not be part of the bridge itself).
In the bridge-building robot example, it would be interesting to decide what constitutes 'the thing that encodes the bridge.' Certainly, there's a mechanism and instructions behind the "build a bridge" button. And what's hard about these discussions is deciding 'what can we assume' and 'what do we need to specify.'
I didn't mean to say that DNA is a sequence of instructions. Perhaps parts of it are, but I assume much of it is 'express protein X' which happens to get triggered by condition Y.
posted by zippy at 3:42 PM on August 18, 2010
I see the LISP people are still fantasizing about discovering that the brain is written in it.
Thinking Machines, which came out of MIT's AI Lab and used many Symbolics Lisp machines, had the motto:
"We're building a machine that will be proud of us."
posted by zippy at 4:26 PM on August 18, 2010
Freen: Someone needs to sit down with PZ Myers and teach him LISP. I would argue once he's implemented his own interpreter, and written a reasonably trivial genetic algorithm with it he will have changed his mind about this particular issue. I'd be willing to put money on it.
Good luck with that. The Nobel prize for transposons was awarded in '83, and it's a sophomore-level, if not freshman-level, concept. Yes, everyone already knows about polymorphism and complex meta-regulation of DNA transcription. The immune system is used as a case example of this. Myers even mentions polymorphic functionality in his fucking post.
Do you have any other brilliant ideas that those of us with a biology background may have missed, like maybe that mammals have fur and give milk, or Mendelian genetics doesn't apply to all phenotypic traits?
posted by KirkJobSluder at 4:46 PM on August 18, 2010
zippy, I think I'm not entirely understanding what it is you're talking about the complexity of.
If the build-a-bridge instructions say "Lay a slab-like object of sufficient strength across the gap" then you might build it by laying a solid, rectangular block of undifferentiated stone across the gap, or you might take a slab-shaped supercomputer and lay that across the gap.
Does the bridge built out of a supercomputer fit under the upper bound of complexity specified by that instruction?
posted by XMLicious at 4:51 PM on August 18, 2010
XMLicious: I could be wrong, but I think he meant the lower bound for that specific bridge. An abstract bridge, like an abstract brain, can be as complex as you want, but if you have certain limitations (quarry stone, mortar, x depth, y height, z length, w width, arches (neurons, glial cells, CNS fluid, x cranial capacity, axons)), then a simple set of building instructions could be the most you need to get pretty much the same bridge every time you try to build a bridge like that.
posted by cthuljew at 5:36 PM on August 18, 2010
cthuljew: "a simple set of building instructions could be the most you need to get pretty much the same bridge every time you try to build a bridge like that."
So who or what is executing these "simple" instructions to build a bridge? I'd say a pretty complex organism or set of organisms/objects with their own fantastically intricate assembly mechanisms supported by a vast socioeconomic network of inputs that have accreted over long time and are necessary to create the building environment. Unless you have found some self-assembling bridges? As earlier messages have noted, it's an outside context problem.
posted by meehawl at 5:43 PM on August 18, 2010
The basic problem is that DNA isn't a blueprint for a body. Nor is it a program for building a body. DNA is the template for making functional molecules that have their own regulatory logic and feedback mechanisms. For a very simple case, bacteria can engage in chemotaxis without needing to engage in the slow process of DNA transcription. The logic of what tastes good/bad is embedded in the structure and relationship of the molecules involved.
While DNA is the template for those functional units, it doesn't shed much light on the functionality of those units in vivo. That sequence that shares statistical similarity with RNA transcriptase may be involved in RNA transcription. It may be non-functional junk. It may be a latent paleolithic virus. It's hard to tell without going through a bunch of molecular genetics. If we splice it into a plasmid, does it actually express? Can we splice a marker like luciferase into it? What happens if we knock it out? Is its expression dependent on development?
Then you have to deal with the fact, as Myers points out, that evolution is stingy. A specific protein may be involved in a half-dozen radically different functions at different stages of development. And again, the DNA and mRNA sequences don't help much here. The other components of the system may be located on different chromosomes. Proteins with sequences that should fit together may never be expressed in the same cell.
Again, the point of contention isn't that we won't be able to create computational models for mammalian cognition. The point of contention is that DNA sequences tell you very little about what's going on in the proteome or in organisms at the multicellular level in vivo. Kurzweil is in a better position when he spins his prophecies about the singularity based on the number of neurons.
You can't use DNA to set boundaries on the functional logic of an organism because a large chunk of that functional logic isn't implemented as DNA. We've known this since the 80s.
posted by KirkJobSluder at 5:49 PM on August 18, 2010 [6 favorites]
meehawl: I agree. I was taking a stab at clarifying zippy's point. But, for what it's worth, the world we live in is the only one where we can create such instructions, so it's pretty safe to assume that the context is going to be pretty much the same everywhere. However, that doesn't actually map onto a free-rein computer simulation, so I'm not sure what my point is. I guess just that the bridge-instructions analogy makes sense, even if it's not a great analogy for the entire problem.
posted by cthuljew at 6:13 PM on August 18, 2010
Playing chess is the ideal problem for a computer....
*sigh* Dude. Cars that can drive themselves on roads with real traffic already exist! (another example) The technology is expensive and experimental, so governments probably wouldn't allow you to put self-driving taxis on the road without a lot of testing. But the technology already exists.
Driving a taxi is a very poor problem for a computer. The field of motion is potentially unlimited, and there are potentially unlimited numbers of objects, which move totally unpredictably. Everything happens at once, things can overlap, the world is 3-dimensional. You can't predict what the world will look like after any given move. AND the computer doesn't have time to process difficult situations, they are all time-critical. Even if you could come up with a set of rules that let a computer safely drive a taxi, getting it to run at the speed it would require would probably take more investment and power than simply paying a human taxi driver.
Sentience is much more like driving a taxi, except that the problem isn't even well defined.
And you're right that 'sentience' is not well defined, but that's not even what the argument is about. It's about whether or not a computer can simulate a brain at a cellular level. Which is actually a much harder problem, in terms of the amount of computation.
If sentience is replicable, to be blunt: I do not think computers are the kind of device that could efficiently replicate it. It just doesn't fall into the "computers are good at this" box, and adding more computational power doesn't change anything about that.
Well, you also don't seem to think that computers can drive cars, despite the fact that computer-driven cars actually exist.
posted by delmoi at 6:33 PM on August 18, 2010 [1 favorite]
Let's say that you have a gene that you sequence. You find two alleles for the same gene that have a base pair difference. How does that base pair difference affect development?
Does it cause cancer?
Does it have no effect?
Does it result in a statistically significant difference in the length of your big toe?
Is the resulting protein non-functional?
Does it react with a different metabolic pathway?
The DNA alone can't answer these questions.
posted by KirkJobSluder at 6:48 PM on August 18, 2010
Mr. Kurzweil has certainly upped the ante on the word 'optimist'.
posted by Twang at 7:27 PM on August 18, 2010 [2 favorites]
delmoi: It is not idiotic to criticize this dude for not knowing code, because he has demonstrated that he does not, at least where the limitations of the computational model he is envisioning are concerned.
Every fucking midnight hacker who read early singularity-flavoured sci-fi has dreamed of the ways code can be made flesh; it's a rite of passage in some ways.
Those of us with paying jobs who can't rely on an early claim to fame have long realized that the various problems we have to solve to realize even a tiny part of those dreams are hard. Harder than anything we have done, and definitely harder than any advance a few decades will give us.
One cannot code a solution to a problem space that is not yet fully explained. Bootstrapping a god emulator or turing bot that reads/writes physical space is a neat idea, but it can only be fully realized (right now) as fiction. We are not just some few technical hurdles from such a solution! We aren't even sure we could recognize the solution if we found it.
Add to this the problem that we are trying to solve human intelligence: the conceit that we can encode genome emulators in order to grow human intelligence is so flawed I wouldn't know where to start. Others have done a pretty good job else-thread.
So, I submit that someone suggesting this conceit is possible in my life-time, and will result in near-human intelligence in scope and form, doesn't really understand code. They seem to understand the possibilities of code, perhaps, but this is far from actually being able to demonstrate anything substantial in any computational tool of their choice.
Sorry, but this whole thing is a fool's errand and reminds me of the AI net.kooks who used to hang out on some of the comp.lang.* USENET newsgroups: lots of clever rhetoric and dreamy pronouncements. But when you actually see what any of them have done it is inevitably some pattern matching tree system that can learn that cows are green.
It stinks of wishful thinking, desperation and pure scientism by folks that are willfully disregarding the hardest problems they need to solve before they can even write a single line of code.
To me, this is not understanding code.
posted by clvrmnky at 8:27 PM on August 18, 2010 [1 favorite]
clvrmnky, on a related note: one of the things i find most discouraging about extropian singularitarianism (by which I mean the whole uploaded-mind complex) is that it wastes effort on planning vessels for human minds that might be better expended building creches for machine minds.
when arthur clarke used to expostulate on the idea that we might build our successors back in the 60s/70s, he wasn't thinking about uploading brains and living forever -- he was thinking about turning the world over to our children and hoping they'd take care of us, but mostly hoping they'd make something of themselves that we'd be proud of -- like any good parent.
That old Clarkean/Shannonite singularity* concept of building a beautiful world that would live on and prosper (and in ways we fundamentally couldn't imagine) after we were gone has been displaced by a sort of post-Vingean vision of eternal consumer paradise a la Cory Doctorow's "I Row-Boat": a world where once we're uploaded (instantaneously) to the computronium satellite and are able to run 30 parallel instances of ourselves everything will be just like it always has been, multiplied by 30. That our qualitative experiences of the world will be no different from what they would be if we were flesh. (I could riff for hours about how this isn't quite as bad as it seems because it's all clearly intended to be taken as fantasy, but after I do that for a while someone usually points out to me that yes, a lot of the fans and even some of the writers do take it very seriously indeed. Viz. Mr. Kurzweil.....)
Someone upthread asked for a link to that 25-year-old paper of mine. It would absolutely not be worth the time invested to read it. Yes, I was talking about the same stuff folks have been talking about here, but in a much less sophisticated way, in a time when analytic philosophy still trumped science in the philosophy of mind, all while locked in to utilitarian terminology. As I read it now, I only understand it because I know what it's supposed to be saying.
--
*Apparently Claude Shannon invented the modern idea of the singularity -- at least according to Vernor Vinge & Bruce Sterling. It's probably fair to say he was the first modern to carry it out of the shadows of morality tale (see Jack Williamson's "With Folded Hands") and see it as something potentially real.
posted by lodurr at 7:12 PM on August 19, 2010 [1 favorite]
Some quotes from Kurzweil's response:
I mentioned the genome in a completely different context. I presented a number of arguments as to why the design of the brain is not as complex as some theorists have advocated.
It is true that the brain gains a great deal of information by interacting with its environment – it is an adaptive learning system.
As for where is the design of the brain located: The original source of that design is the genome (plus a small amount of information from the epigenetic machinery)…
He also spends a few paragraphs explaining the motivation and hints at a route for accomplishing his goal.
For a kook, his writing is clear and restrained, compared to Myers' blog entry.
posted by polymodus at 9:08 AM on August 20, 2010
from Kurzweil's response: “The amount of information in the genome (after lossless compression, which is feasible because of the massive redundancy in the genome) is about 50 million bytes (down from 800 million bytes in the uncompressed genome). It is true that the information in the genome goes through a complex route to create a brain, but the information in the genome constrains the amount of information in the brain prior to the brain’s interaction with its environment.”
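For reference, the back-of-envelope arithmetic behind the quoted figures; the raw-size part is just counting bits, and whether lossless compression really reaches ~50 MB (and what that bound would even mean) is the disputed part.

# Raw and claimed-compressed sizes of the genome, per the quote above.
base_pairs = 3_000_000_000
bits = base_pairs * 2                  # four possible bases -> 2 bits each
raw_bytes = bits // 8                  # 750,000,000 bytes, "about 800 million"
claimed_compressed = 50_000_000        # Kurzweil's post-compression figure
print(f"raw: {raw_bytes / 1e6:.0f} MB, claimed compressed: "
      f"{claimed_compressed / 1e6:.0f} MB (~{raw_bytes / claimed_compressed:.0f}x)")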
Well, considering the fact that PZ Myers' whole point was that there is nothing "prior to the brain's interaction with its environment" – this doesn't seem like much of a response. PZ Myers is a developmental biologist; you can't respond to his concerns just by hand-waving away the whole concept of development.
posted by koeselitz at 9:18 AM on August 20, 2010
Ray responds to PZ.
Yeah, it looks like his actual talk was considerably more even handed than was reported. I still think he's overly optimistic about the time frame, though, as simulating a single rat neocortical column required a 14TFLOPS supercomputer. For a sense of scale, that's something like 1/6th the complexity of a human neocortical column and the human brain has something like two million of them. Extrapolating up, simulating a human brain would require something like 168,000,000 TFLOPS. The fastest supercomputer in the world right now manages about 1,759 TFLOPS.
So we need a gain in computational power of a factor of about 100,000. Notionally that should take on the order of twenty years according to Moore's Law and past trends in supercomputer power, which is where Kurzweil's optimism comes from, but I think the hold-up will be the biology and biochemistry necessary for actually writing the simulation. It won't help that it would take a decent-sized power plant's worth of electricity to run the supercomputer.
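Redoing that extrapolation with the figures quoted above; it is a very rough scaling argument, not a serious estimate.

import math

# All inputs are the numbers cited in the comment above.
rat_column_tflops = 14                 # rat neocortical column simulation
human_vs_rat_column = 6                # human column taken as ~6x the rat column
human_columns = 2_000_000              # rough count for a human neocortex
needed_tflops = rat_column_tflops * human_vs_rat_column * human_columns
current_top_tflops = 1_759             # fastest supercomputer circa 2010
gap = needed_tflops / current_top_tflops
print(f"needed: ~{needed_tflops:,} TFLOPS, gap: ~{gap:,.0f}x, "
      f"~{math.log2(gap):.0f} doublings to close")
# -> needed: ~168,000,000 TFLOPS, gap: ~95,509x, ~17 doublings
#    (roughly twenty-plus years at a doubling every 18 months or so)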
And really it's even worse than that because the neocortical column simulation wasn't even at the molecular level, but rather only at the level of neurons and their connections. It didn't even touch gene expression.
posted by jedicus at 9:20 AM on August 20, 2010 [1 favorite]
considering the fact that PZ Myers' whole point was that there is nothing "prior to the brain's interaction with its environment" – this doesn't seem like much of a response
That can't be the counterargument… it's not that there is nothing prior to development, because of the a priori genome. If there is a genetics-based counterargument to Kurzweil's views, I haven't yet come across a clear explanation of it.
posted by polymodus at 9:34 AM on August 20, 2010
polymodus: You can't model brain development without understanding the functional logic of the proteome. And that's exactly the kind of problem that's unlikely to benefit from Moore's law because it involves a lot of labor-intensive research with living cells in vivo and in vitro.
posted by KirkJobSluder at 10:40 AM on August 20, 2010 [1 favorite]
polymodus: "For a kook, his writing is clear and restrained, compared to Myers' blog entry."
That's why his relentless spewage of punditrocious drivel is so dangerous. It's an appealingly scented idea virus swarm calculated to appeal to simplistic notions of social relationship, development, and intelligence that make complete and comforting sense to a certain demographic. It's the modern-day equivalent of de Chardin's or Rand's fantasies. To quote Douglas Hofstadter:
If you read Ray Kurzweil's books and Hans Moravec's, what I find is that it's a very bizarre mixture of ideas that are solid and good with ideas that are crazy. It's as if you took a lot of very good food and some dog excrement and blended it all up so that you can't possibly figure out what's good or bad. It's an intimate mixture of rubbish and good ideas, and it's very hard to disentangle the two, because these are smart people; they're not stupid.
posted by meehawl at 10:42 AM on August 20, 2010 [5 favorites]
In part, they're talking past each other. Kurzweil certainly has a strong case that we'll be able to have computational systems that are at least as complex as the human brain.
That's a very different question from being able to say those computational systems will be able to actually simulate the human brain with a reasonable degree of accuracy.
posted by KirkJobSluder at 11:03 AM on August 20, 2010
Myers: Kurzweil still doesn't understand the brain
posted by homunculus at 4:06 PM on August 21, 2010
"in the future there will be a really complicated computer" is really not that amazing a prediction.
posted by Artw at 4:35 PM on August 21, 2010
Singularity slapfight: yet more Kurzweil vs. Myers
posted by homunculus at 11:20 AM on August 23, 2010
Oh shit, Myers is going in for heart surgery: That's not a heart! It's a flailing Engine of Destruction!
posted by homunculus at 9:54 AM on August 24, 2010
This thread has been archived and is closed to new comments
posted by ShawnStruck at 1:18 AM on August 18, 2010 [2 favorites]