Where am I?
August 26, 2010 10:50 AM
If your brain and body were separated, which one would be "you?" Philosopher Daniel Dennett explores what might happen in that event. (Previously)
I sorta expected that "previously" link to go back to an FPP on the blue from the 1600's when some verbose Frenchman with the handle CombiboMihi was ranting about some Latin thing posted on Reddit and we closed it as a self-post.
posted by Bathtub Bobsled at 10:55 AM on August 26, 2010 [4 favorites]
Daniel Dennett had a weird dream after watching "Spock's Brain"?
posted by ricochet biscuit at 10:58 AM on August 26, 2010 [1 favorite]
See his collaboration with Douglas R. Hofstadter, The Mind's I. Lots of stuff along these lines.
posted by Joe Beese at 11:00 AM on August 26, 2010
I remember this blowing my mind (but not my body) when I read it in The Mind's I.
posted by DU at 11:02 AM on August 26, 2010 [2 favorites]
Blindsight explores this topic, and is a fantastic read to boot.
posted by L'Estrange Fruit at 11:04 AM on August 26, 2010 [2 favorites]
See, this is why when I bioengineer the species to overtake humans, I'm going to have muscles and brains use the same tissue. That way, menial labor also leads to genius, and mind and body are comfortably one.
posted by mccarty.tim at 11:04 AM on August 26, 2010 [8 favorites]
This very much reminds me of a thing that annoys me about shitty cyberpunk and pre-cyberpunk fiction where people use some kind of hook-up to go inside the computer.
posted by Artw at 11:04 AM on August 26, 2010 [1 favorite]
Er, actually, I take it back; it's not the same thing. But um, really cool thing to read!
posted by L'Estrange Fruit at 11:05 AM on August 26, 2010
As experts on the subject of brain-body separation go, Daniel Dennett, you are no Henry Wentworth Akeley.
posted by Parasite Unseen at 11:06 AM on August 26, 2010 [3 favorites]
Some people prefer to make this an even more complicated question by introducing some kind of soul or spiritual essence as well, so they can ask: are you a body, a brain, or a soul? But it's a red herring; souls are imaginary (although they are valid metaphors).
It is often noted that cells die and are replaced, yet you remain the same person. Nerve cells are more durable, but even so, they could be replaced (with stem cells that develop into new nerve cells) and you would still be yourself. So what are you?
I have concluded that you, the essential personality that is yourself, are a pattern of information - essentially a computer program, if we consider the brain to be an organic computer (which is a reasonably accurate description). That's what you actually are. All else is peripheral - important though it undoubtedly is. Computer programs are of very little use without computers on which to run them.
posted by grizzled at 11:07 AM on August 26, 2010 [7 favorites]
Whichever part gets the penis, that's the part I want to be.
posted by spicynuts at 11:08 AM on August 26, 2010 [3 favorites]
If your brain and body were separated, which one would be "you?"
Best one-sentence summary of Dennett I've ever heard.
posted by 7segment at 11:10 AM on August 26, 2010 [6 favorites]
If your eyeballs could pop out on very, very long stalks and go all the way to the Moon, would you be on the Moon?
posted by Artw at 11:10 AM on August 26, 2010 [17 favorites]
I know it's totally beside the point, and was not intended literally, but that legal analogy was just stupid. It was like the Simpsons episode where Sideshow Bob goes to the Five Corners to make sure that every part of Bart's murder would be in a different jurisdiction, thus making it unprosecutable. IAAL, and I feel like these things are my karmic punishment for being one.
Of course, I still laugh for an hour every time I hear George Bluth explain "They can't arrest a husband and a wife for the same crime!"
posted by nickgb at 11:13 AM on August 26, 2010 [1 favorite]
I don't know how this is related, but here it is.
That said, Dennett is foolish to try to seek another human body when he could get a ton of robot bodies. Hook up a bunch of transmitters to your car, computer, DVR, and enable switching from inside the brain, rather than a physical switch. Then you are truly in control of everything you own. Plus, humanoid robots look quite good these days. Faceshifting would probably not be an impossible feature.
Between the internet browsing, TV watching, and impersonation of others, I wouldn't have time to get introspective and curious about which one is me.
posted by mccarty.tim at 11:13 AM on August 26, 2010
we consider the brain to be an organic computer (which is a reasonably accurate description)
I don't understand how people come to this conclusion. The brain does computation, but it also does all kinds of things that we have no idea how to make computers do, that we have significant difficulty getting computers to even *pretend* to do. There isn't, so far, any evidence that we can make a computer that even apparently does what most of the things we see the brain doing, even if we try to emulate the architecture. So, why the conclusion?
posted by weston at 11:14 AM on August 26, 2010 [3 favorites]
Whichever part gets the penis, that's the part I want to be.
Rule 34 dictates a lot of parts could potentially get the penis.
posted by Kirk Grim at 11:15 AM on August 26, 2010 [5 favorites]
I like how MeFi loves (or at least is OK with) this essay but hates Kurzweil who just wants to make it real.
posted by DU at 11:15 AM on August 26, 2010
Kurzweil talks about some fictional and extremely implausible brain simulation as if it were just around the corner, pulling some numbers out of his ass to justify it. This doesn't do that (and doesn't even have the simulation; it's more of a remoting setup).
posted by Artw at 11:18 AM on August 26, 2010 [4 favorites]
Kurzweil violates my main rule in life: Dead people should always stay dead.
You may argue about what dead is, but bringing someone back to life beyond two definitions of dead is almost never a good thing in the movies.
posted by mccarty.tim at 11:19 AM on August 26, 2010 [1 favorite]
Since you ask, weston, as far as I can determine, the reason brains do all sorts of things that computers cannot do is that we just do not yet have the right programs with which to enable computers to do those things. It's all data processing of some kind or another. Many people believe (even as an article of religious faith) that consciousness is something special and magical that cannot be produced by mechanical means. However, consciousness can be described as a program that monitors other programs. I am convinced that consciousness can be attained in a computer if you have the right programming and enough computing power.
posted by grizzled at 11:20 AM on August 26, 2010 [5 favorites]
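A minimal sketch of grizzled's "program that monitors other programs" framing, in Python. The Process and Monitor classes here are hypothetical illustration only, not a model of consciousness:

    # Hypothetical sketch: one process whose only job is to observe the
    # states of other processes - a cartoon of "a program that monitors
    # other programs," nothing more.
    class Process:
        def __init__(self, name):
            self.name = name
            self.state = "idle"

        def step(self):
            self.state = "running"

    class Monitor:
        """Observes other processes and reports on their states."""
        def __init__(self, processes):
            self.processes = processes

        def report(self):
            return {p.name: p.state for p in self.processes}

    vision, motor = Process("vision"), Process("motor")
    mind = Monitor([vision, motor])
    vision.step()
    print(mind.report())  # {'vision': 'running', 'motor': 'idle'}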
Also, I suspect that if Kurzweil had never pulled numbers out of his ass regarding the human genome and brain complexity, nobody would have jumped on him at all; but on the other hand, probably nobody would have gotten excited about what he was saying in the first place, as, let's face it, it's the same old junk he's spouted for the last ten years.
posted by Artw at 11:20 AM on August 26, 2010 [2 favorites]
Of course, I still laugh for an hour every time I hear George Bluth explain "They can't arrest a husband and a wife for the same crime!"
posted by nickgb at 11:13 AM on August 26
i read that as george bush
posted by kitchenrat at 11:21 AM on August 26, 2010
The brain does computation, but it also does all kinds of things that we have no idea how to make computers do, that we have significant difficulty getting computers to even *pretend* to do.
This is an argument about how little we understand the brain, not the computational properties of the brain.
posted by logicpunk at 11:22 AM on August 26, 2010
Kurzweil also has some credibility problems due to unmet predictions. But that doesn't explain the "nerd rapture" type hate.
posted by DU at 11:27 AM on August 26, 2010
DU: “I like how MeFi loves (or at least is OK with) this essay but hates Kurzweil who just wants to make it real.”
They're both full of it, but Kurzweil is actually an idiot. Dennett is a relatively intelligent person who's just wrong about everything.
posted by koeselitz at 11:32 AM on August 26, 2010 [10 favorites]
I thought Fry and Amy solved this one a long time ago.
posted by warbaby at 11:34 AM on August 26, 2010
If I throw a rock from one side of a state line and it hits a person on the other and knocks them dead, what state was the crime committed in?
posted by Artw at 11:35 AM on August 26, 2010
In State #1: Unlawful throwing. In State #2: Unattended manslaughter.
posted by DU at 11:37 AM on August 26, 2010 [6 favorites]
if we consider the brain to be an organic computer (which is a reasonably accurate description)
I am extremely skeptical of the mechanistic modernism implicit in this sort of description.
posted by shakespeherian at 11:37 AM on August 26, 2010 [4 favorites]
the one in the little paragraph at the bottom of page 6 of the local paper, where it says, area man decapitated in freak accident
posted by nervousfritz at 11:37 AM on August 26, 2010
“If you cut off my head, what would I say... Me and my head, or me and my body? What right has my head to call itself me?”
posted by Shepherd at 11:41 AM on August 26, 2010
not a good time to lose one's head
More seriously, define "you" under this kind of bizarre thought experiment.
posted by 7-7 at 11:42 AM on August 26, 2010
A bit boring- who cares where you are? It's all illusory anyway. If I disconnect my eyes and move them across the room, I'm in two places at the same time, sort of. I could disconnect the left half of my body and use it to control a robot. That'd be surreal.
It gets much better when they reveal the brain simulation... and he starts flicking between the two. He's giving one brain control of his body, then that brain is returning control to the first, etc, but since both brains are kept in exact sync neither can tell the difference between "being in control" and "being identical to the brain that is REALLY in control". Bizarre, and fairly interesting actually. More interesting than the classic "What if I teleport but the original me isn't destroyed?" anyway.
posted by BungaDunga at 11:43 AM on August 26, 2010 [1 favorite]
Dennett is a relatively intelligent person who's just wrong about everything.
...
I am extremely skeptical of the mechanistic modernism implicit in this sort of description.
I don't know whether to ask for specific properties that the brain has that a computer does not in principle also have. Or not.
posted by DU at 11:46 AM on August 26, 2010
This is an argument about how little we understand the brain, not the computational properties of the brain.
For now, though, it's not entirely clear whether the reason we can't make computers act like brains is because we haven't developed the right software or because it's just theoretically impossible.
Our intuition might say it's a matter of software, but we don't know for sure. I think that was weston's point.
posted by magnificent frigatebird at 11:46 AM on August 26, 2010 [2 favorites]
and doesn't even have the simulation; it's more of a remoting setup
Actually I see it gets all unnecessarily convoluted at the end.
posted by Artw at 11:46 AM on August 26, 2010
Your extreme skepticism about mechanistic modernism, shakespeherian, does not tell us what alternative you find to be more plausible. The alternative to the modern is, I suppose, the return to older belief systems such as those of the medieval or classical era. The alternative to the mechanistic is usually the mystical, unless you believe that the universe is entirely inexplicable, by either mechanistic or mystical means. So perhaps you prefer medieval mysticism. Maybe you believe that human beings will never be capable of understanding their own brains. If so, that is of course your privilege. Personally I will stick to mechanistic modernism.
posted by grizzled at 11:47 AM on August 26, 2010 [4 favorites]
Too bad none of the stories involving Lavoisier and blinking after guillotining seem to be true, or we would have an answer on this. Perhaps Mythbusters wants to explore this...
posted by oshburghor at 11:53 AM on August 26, 2010 [1 favorite]
I'm exploring writing a program that will monitor your web browsing habits and allow you to continue to surf after death. Then you'll be immortal. I'm thinking of calling it Death Cookies.
posted by dances_with_sneetches at 11:54 AM on August 26, 2010 [6 favorites]
The alternative to the modern is, I suppose, the return to older belief systems such as those of the medieval or classical era.
That's not the issue at all. I simply have immense problems with the notion of not progressing past the philosophies of the Enlightenment era and the attempt to explain everything natural with the terminology of the Machine Age. Metaphors are useful, but they are limited, and their overuse can serve to limit our understanding of those things they attempt to describe.
posted by shakespeherian at 11:54 AM on August 26, 2010 [1 favorite]
I enjoyed this short piece of speculative fiction on a similar theme even more: The story of a brain, which seems to be probing at something pretty deep about the connection between brain and mind. Both this and the Dennett piece are from the excellent book The Mind's I, as already mentioned.
posted by snoktruix at 11:54 AM on August 26, 2010
There's plenty of heads-separated-from-bodies stuff in Mary Roach's Spook, if you're interested, including some rather ghoulish Soviet animal experiments.
posted by Artw at 11:55 AM on August 26, 2010
I'd be more interested in, if my brain were split in half, which one would be me.
posted by notmydesk at 11:56 AM on August 26, 2010 [3 favorites]
It's an easy question. You're both in the tank and walking around outside.
If I straddle a doorway between two rooms.
There are no things as absolutely discrete objects. No one can observe where objects begin and end. There really is no I to speak of.
Speaking of Hofstadter, I'm reading I Am a Strange Loop right now. That linked interview has some rather harsh words for Kurzweil. His "irrational" explanation of why we imagine "souls" and make them real is also interesting to me.
I also Seem to be a Verb.
posted by mrgrimm at 11:58 AM on August 26, 2010
Oops. No Preview.
"If I straddle a doorway between two rooms ... I'm in both rooms."
posted by mrgrimm at 11:59 AM on August 26, 2010
"If I straddle a doorway between two rooms ... I'm in both rooms."
posted by mrgrimm at 11:59 AM on August 26, 2010
Kurzweil violates my main rule in life: Dead people should always stay dead.
Which is why MeFi hates him. He'd invalidate every "." and force 37 MORE James Brown threads on us.
And I thought last week's Futurama explained everything. (We obsessed so much on the logic problem, we didn't even notice they were swapping brains)
posted by oneswellfoop at 11:59 AM on August 26, 2010
When I describe the brain as an organic computer, I am not being metaphorical. That is precisely what the brain is, as far as I can determine. It is a biological mechanism for data processing. It processes data.
posted by grizzled at 12:00 PM on August 26, 2010 [1 favorite]
In Japan "they" (regular folks) figure that the "you" that is "you" in your identity resides in your chest, where your heart is, whereas in the West it would seem to be your head and your brain.
posted by KokuRyu at 12:04 PM on August 26, 2010
Our intuition might say it's a matter of software, but we don't know for sure. I think that was weston's point.
If we build a universe-simulator, then we can simulate a brain. Simulate a womb, sperm, egg, watch what happens.
Most (all?) of (hard) science assumes the universe is reducible to a set of rules. Computers take rules and apply them. Assuming our view of the universe is correct, we can in principle simulate it. Therefore, we can simulate a brain, quantum effects (if there are any...) and all.
You'd need godly amounts of computing power, though, and it's the most brute-force method possible. You might well need a large portion of the universe to simulate a brain-sized portion of universe.
Under what circumstances would this argument not apply?
- There are no underlying rules to the universe that we can divine
--- We just need rules that are good enough, though, unless the brain is so sensitive that a simulated universe that is almost correct is not good enough
- The brain doesn't work according to the rules of the rest of the universe
--- Then we can find out what those rules are instead. It'd be awfully unlikely though.
- The brain doesn't actually work on any physical rules at all
--- So it's controlled by, say, a soul that is fundamentally impossible to measure or predict.
The question isn't "Can we, in principle, simulate a brain?" but "Can we simulate a brain practically? Can we work out what rules we need and which we can discard? Do we need to keep track of every subatomic particle- or every molecule- or every group of molecules- or just every neuron?"
I'd love to hear arguments against this point of view, actually. I can't see any way to prohibit simulated brains without divine intervention.
posted by BungaDunga at 12:05 PM on August 26, 2010 [3 favorites]
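A toy version of BungaDunga's "computers take rules and apply them" step: a one-dimensional cellular automaton (Rule 110) in Python. This is nothing like a universe simulator; it just shows fixed-rule application at the smallest possible scale, with the grid size and rule number chosen arbitrarily for illustration:

    # Toy "apply the rules everywhere" world: a one-dimensional cellular
    # automaton (Rule 110). Each step applies one fixed local rule to
    # every cell - the brute-force spirit of simulation in miniature.
    RULE = 110

    def step(cells):
        n = len(cells)
        return [(RULE >> (cells[(i - 1) % n] * 4
                          + cells[i] * 2
                          + cells[(i + 1) % n])) & 1
                for i in range(n)]

    cells = [0] * 31 + [1] + [0] * 31  # one live cell in the middle
    for _ in range(16):
        print("".join(".#"[c] for c in cells))
        cells = step(cells)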
When I describe the brain as an organic computer, I am not being metaphorical. That is precisely what the brain is, as far as I can determine. It is a biological mechanism for data processing. It processes data.
My brain is an iPod!
posted by KokuRyu at 12:06 PM on August 26, 2010 [1 favorite]
There's also the issue of Penrose-style quantum do-dahs, but I don't believe anyone takes him seriously these days (or at least only on the same level as Kurzweil).
posted by Artw at 12:06 PM on August 26, 2010
I'd be more interested in, if my brain were split in half, which one would be me.
This fascinates me as well. There are some people who believe that the right and left brains are two completely different "I"s.
posted by mrgrimm at 12:06 PM on August 26, 2010
I have concluded that you, the essential personality that is yourself, are a pattern of information - essentially a computer program, if we consider the brain to be an organic computer
If the same program were running in a brain simulator, which one would be you?
posted by ekroh at 12:10 PM on August 26, 2010
This fascinates me as well. There are some people who believe that the right and left brains are two completely different "I"s.
My first thought is that if you totally separate the two, you might end up with nobody at all. Or two people who are both rather different from the original. For example, they probably wouldn't both get all the memories of the original brain, so they'd both be different. You'd be destroying one person and creating two.
posted by BungaDunga at 12:11 PM on August 26, 2010
"Does the brain control you or are you controlling the brain? I don't know if I'm in charge of mine."
posted by gyc at 12:16 PM on August 26, 2010
The head! The head always wins (and, by proxy, the brain). Everything else is unessential for continued life (assuming a life-span of roughly 30 seconds).
posted by Baby_Balrog at 12:19 PM on August 26, 2010
My first thought is that if you totally separate the two, you might end up with nobody at all. Or two people who are both rather different from the original.
Batman: The Brave and the Bold already went over this.
posted by kmz at 12:21 PM on August 26, 2010
Isn't this a very old problem? Like Descartes old? I get the feeling you're not going to add too much with yet another thought experiment, after centuries of thought experiments.
This fascinates me as well. There are some people who believe that the right and left brains are two completely different "I"s.
Well there are cases of complete corpus callosotomies for treatment of severe epilepsy (the corpus callosum being the largest fiber tract bridging the two hemispheres) - they used to transect the whole thing instead of ~1/3 like they do these days. There are some functional deficits (visual field-related, mainly) but I'm not aware of behavioral changes.
For example, they probably wouldn't both get all the memories of the original brain
I'm trying to remember whether I've seen a functional study on unilateral hippocampal/temporal lobe damage/sclerosis/surgical removal (again, for treatment of seizures). I think they can do it without too much loss of function. Of course, bilateral results in severe deficit.
posted by Tikirific at 12:26 PM on August 26, 2010 [1 favorite]
Dennett is a relatively intelligent person who's just wrong about everything.
posted by koeselitz at 7:32 PM on August 26
This sort of unsupported blanket dismissal of an entire person leaves me breathless, frankly.
posted by Decani at 12:28 PM on August 26, 2010 [3 favorites]
What's curious is that when people refer to themselves with a gesture, especially when they are in a charged emotional state, they invariably point to their chest rather than their head.
Is this actually true? The previously noted Japanese point at their nose.
posted by rr at 12:29 PM on August 26, 2010
What's curious is that when people refer to themselves with a gesture, especially when they are in a charged emotional state, they invariably point to their chest rather than their head.
I always figured that was more because of the geometric center of the body rather than anything with metaphysical implications.
posted by shakespeherian at 12:30 PM on August 26, 2010
Oh, I also remembered the case of Phineas Gage, and this was also a unilateral injury. His personality DID change after the injury, but this was damage in the frontal lobe, which is not typically associated with memory.
posted by Tikirific at 12:32 PM on August 26, 2010
The brain isn't literally a computer. Sometimes it can be helpful to think of it that way, but it can cause misunderstandings too.
Computers have a hierarchy of abstractions that the brain lacks. You have the hardware, then the OS sitting on it, with programs running managed by the OS. The same basic CPU can be a desktop, can run an embedded system like an ATM, or the controller for some subsystem on an airplane or car. You can run web browsers, whatever. It can be reprogrammed.
Computers don't learn. You could write software that learns, but you're just using the ability of programs to simulate any system. The computer itself isn't learning. Computers are fragile. If you damage part of a CPU, likely the whole thing won't work anymore.
With the brain, there is no meaningful distinction between hardware and software. A brain is a brain for a specific organism. You can't take a squirrel brain and reprogram it into a bat brain. Brains are not programmable; they learn. Brains cannot simulate non-brain systems, or even the brain of a different individual. Brains are plastic. Large parts can be damaged or removed, and the whole may adapt to recover missing functionality.
Computers have a clear-cut unit of information (the bit) which is respected at all times. Information is stored as bits, manipulated as bits, transmitted as bits. In computers, information storage and processing are conceptually and physically separated. The brain has no known unit of information. You may think "spikes" but what about them? How much information is in a spike? Sometimes it seems spike rates are important, sometimes timing, sometimes how many. I know of systems where spikes don't matter so much, and subthreshold oscillations matter more. In the brain, information storage and processing are difficult to distinguish.
Computers are fundamentally organized around a CPU which performs sequential instructions. It's certainly possible to parallelize CPUs and have many work together. To do this, you need a problem which is parallelizable. Then the problem is divided up, with each CPU doing some sensible task, communicating its result, then asking for the next task. Brains are massively parallel, but each unit (the neuron, presumably) is communicating with huge numbers of its neighbors, and processing the information from them in a fairly incomprehensible manner (usually; sometimes we can tease it out). The flow of information in the brain often defies the typical design choices that engineers would make: there is a huge amount of feedback in the visual pathway (information traveling from the higher regions of the brain down towards the eyes).
posted by Humanzee at 12:32 PM on August 26, 2010 [8 favorites]
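For readers unfamiliar with the "spikes" mentioned above: the leaky integrate-and-fire neuron is the standard textbook idealization of spiking. A minimal Python sketch follows; all parameter values are arbitrary choices for illustration, not anything the commenter endorses:

    # Minimal leaky integrate-and-fire neuron. Input current charges a
    # membrane voltage that leaks toward rest; crossing threshold emits
    # a "spike" and resets the voltage.
    def lif_spike_times(current, dt=1.0, tau=10.0, v_rest=0.0, v_thresh=1.0):
        v, spikes = v_rest, []
        for t, i_in in enumerate(current):
            v += dt * (-(v - v_rest) + i_in) / tau  # leak plus drive
            if v >= v_thresh:
                spikes.append(t * dt)  # record the spike time
                v = v_rest             # reset after firing
        return spikes

    # Constant drive yields regular spiking; whether the information is
    # in the rate, the timing, or the count is exactly the open question.
    print(lif_spike_times([1.5] * 60))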
My body is typing this. My brain is elsewhere.
Am I posting this?
posted by mazola at 12:38 PM on August 26, 2010
The brain has no known unit of information. You may think "spikes" but what about them? How much information is in a spike? Sometimes it seems spike rates are important, sometimes timing, sometimes how many. I know of systems where spikes don't matter so much, and subthreshold oscillations matter more. In the brain, information storage and processing are difficult to distinguish.
Quoted for truth. Also complicating matters is that even though the analogy is often made between neurons and bits (on/off), there's evidence of state-dependent behavior in networks of neurons, suggesting that the brain, though seemingly consisting of deterministic units, is a dynamic system. The implication being that you can see the same firing pattern in a network with completely different results, which is impossible in the realm of (hardware) computing as we know it.
posted by Tikirific at 12:39 PM on August 26, 2010
Consider this: when I asked Dave Eggers if he would replace his organic arm with a bionic arm that was ten times as powerful, he chose the bionic arm.
posted by Beardman at 12:39 PM on August 26, 2010 [1 favorite]
koeselitz: “Dennett is a relatively intelligent person who's just wrong about everything.”
DU: “I don't know whether to ask for specific properties that the brain has that a computer does not in principle also have. Or not.”
Decani: “This sort of unsupported blanket dismissal of an entire person leaves me breathless, frankly.”
It's not a blanket dismissal; there's more behind it than just those words. I just happen to be at work at the moment, but I'll get to explaining my position when I can. It's me, so you can count on getting a couple paragraphs about this at some point. As it is, I haven't even read the main link, although I have a copy of Content and Consciousness on my desk at home and was reading it just last week.
My own sympathies lie with Aristotle, who says things in On The Soul that clear up most of Dennett's problems with the mind and the body. (That's not to say that Aristotle has all the answers; the fantastic thing about that book is that he never makes that claim at all, and hardly says anything about what he thinks about the world.) As far as a modern critique, however, I think that John Searle has dealt admirably with Dennett's statement of the problem, which takes a good number of logical leaps that it probably shouldn't. To put it bluntly, 'the mind-body problem' is an artifact of a misapprehension about the world; it's not actually a problem as stated. The moment you make it a problem, you've already implicitly explained a whole slew of other difficulties. But those difficulties still exist. As I think Searle would say: the plain fact of consciousness can't be negated just because of the false dichotomy between mind and body.
But that's not much of a response; it's just a brief statement of how I feel about Dennett, and as I say, I haven't even looked at the main link here yet. I'll have more later when I've got time.
posted by koeselitz at 12:43 PM on August 26, 2010
If your brain and body were separated, which one would be "you?"
Through my love of sci-fi tropes, I've spent a lot of time considering this question over the years, and I've decided that the answer that seems to best resolve it is that both would be you, for about half a second, and then, as new independent experiences started for each individual, the one that was the other you would cease to be "you" and become "him".
posted by quin at 12:48 PM on August 26, 2010 [2 favorites]
The brain is not the bottom line. Sure, it may be a computer, but if it is a computer, then it can only understand things from its limited point of view. What about the concept of 'you?' The brain-computer could deduce that 'I am' or 'I am there' but not necessarily in a true and universal manner. The brain may process data but only in a programming language that it can understand; the universe is too massive and too strange to think that our language is the only one, or at the top of the hierarchy. Brain? Body? In an infinite universe there has to be more than that.
posted by infinitefloatingbrains at 12:48 PM on August 26, 2010
implicitly explained away a whole slew of other difficulties
posted by koeselitz at 12:48 PM on August 26, 2010
...so you can count on getting a couple paragraphs about this at some point
posted by koeselitz at 8:43 PM on August 26
Well, I know a little bit about Dennett's work so all I'll say at this point is that if you can justify your statement that Dennett is "wrong about everything" in a "couple of paragraphs" (with all due recognition of non-literalism) I'm looking forward to being seriously impressed.
posted by Decani at 12:53 PM on August 26, 2010 [2 favorites]
If I remove your arm, you are still you.
If I remove your heart, you are still you.
If I remove your skin, you are still you.
If I remove your eyes, you are still you.
If I remove your brain, you're gone.
I submit perhaps you are your brain, and not the input/output wetware around you.
posted by effugas at 12:53 PM on August 26, 2010 [1 favorite]
[Dr. Hfuhruhurr is rowing with Anne's enjarred brain, complete with sunhat and sunglasses, along an idyllic creek.]
Anne Uumellmahaye: I don't think there's a girl floating in any jar anywhere who's as happy as I am! Oh, Michael, you do so much for me, and I do nothing for you!
Dr. Hfuhruhurr: Are you out of your head? [pauses] Sorry, I forgot.
posted by Kabanos at 1:07 PM on August 26, 2010 [2 favorites]
The brain isn't literally a computer... Computers have a hierarchy of abstractions that the brain lacks.
You might be a bit locked up in one usage of the word "computer", which has a pretty storied history.
Interestingly, a number of concepts used for computer science and engineering are actually derived from models of biological function. There is no requirement that a computer be electronic or even digital: Babbage's Difference Engine was a mechanical calculating machine, the cryptanalysis bombes used during World War II were electromechanical, and analog computers were in wide use into the twentieth century. At one point, "computer" was the formal job title for a person, usually a woman, who tallied figures by hand or with desk calculators.
Parts of the brain are segregated by functionality. That's how lobotomies work their magic, for example, changing the patient's behavior by severing connections to the prefrontal cortex and its specific behavioral functions. The actual working of these various segments is abstracted away from us through a biochemical network of nerve cells: a neural net. We model the brain's neural net with software, to aid digital computers with pattern matching and similar "fuzzy", analog tasks.
While the brain can compensate from some damage, we build computers that can do the same; for example, the radiation-hardened hardware that powers our satellites and space vehicles includes shielding, redundancy, and software to reroute operations away from damaged components.
Calling the brain an organic computer is reasonably accurate. Analog input from the senses is processed into output: action through muscular contractions, physiological changes (e.g. release of hormones into the bloodstream), and so on.
At the end of the day, computing is about the interplay of input, processing and output. The hardware, whether organic or silicon, is just a physical implementation of that mechanism.
posted by Blazecock Pileon at 1:13 PM on August 26, 2010 [2 favorites]
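A minimal sketch of the "we model the brain's neural net with software" point above: a single perceptron learning the AND function, in Python. The zero initialization, 0.1 learning rate, and epoch count are arbitrary illustrative choices:

    # A single artificial neuron (perceptron) learning AND - about the
    # simplest possible instance of "modeling a neural net in software."
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w = [0.0, 0.0, 0.0]  # bias weight plus one weight per input

    def predict(x):
        s = w[0] + w[1] * x[0] + w[2] * x[1]
        return 1 if s > 0 else 0

    for _ in range(10):  # a few passes over the data suffice here
        for x, target in data:
            err = target - predict(x)
            w[0] += 0.1 * err          # nudge the bias toward the target...
            w[1] += 0.1 * err * x[0]   # ...and each weight, scaled by input
            w[2] += 0.1 * err * x[1]

    print([predict(x) for x, _ in data])  # [0, 0, 0, 1]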
I think there are some misconceptions here. Firstly there is the misconception that there is something unique about blobs of flesh. We all think of ourselves as "me". But logically, we can't all be "me". So the concept of "meness" is a bit suspect. If you want my view, I think that if you chopped yourself in half, and both halves somehow grew into whole bodies, there would be two yous, but that there being two yous is no different to there being two completely unrelated people.
I think the body is me, the flesh is me. Say I cut my finger off, then there are two mes. Until the finger disintegrates. But who cares? It's like cutting a piece of bread in half and saying there are two pieces of bread. It's not earth-shattering, because there isn't anything unique about lumps of flesh.
A significant misconception is that the contents of the brain are the individual. That is utter nonsense, if you look at it rationally. Emotionally, it seems to make sense, but it is a red herring. The body is the individual, as far as the individual has any meaning. The brain, personality, thoughts, memory, etc, are just a tool to aid the survival of the body. Like fingernails. The brain is nothing but an overdeveloped fingernail. Look at it impartially and you'll realise this is true. It is the conclusion which arises naturally out of a rational survey of what we know about evolution, etc. The contents of the brain are irrelevant to being me. The "me" which is the social construct, that is not a me that I care about. Being a body, I care about my body's comfort. I don't care about a social concept, a notion, a collection of traits. My body is the thing which persists, which mends itself, which circulates blood, which feels sensations. Who cares about memories, thoughts, ideas? They are detritus. It's nice, emotionally, to enjoy them, I am in favour of them, but if you get down to it, if you want to get to the philosophical nitty-gritty, they are not me - that is an illusion. You could wipe my brain's contents and replace them with something else tomorrow, and I'd still be me - I'd still be the one experiencing things. To say otherwise seems to me wrong.
Also another misconception is that there is anything special about consciousness. There isn't. (I take the deterministic view anyway, that there is no such thing as free will). I think we're just automata. To me, any device with inputs and information storage and processing is conscious. A computer with a webcam is conscious. Stop accepting the emotional, plainly irrational and daft interpretation that there is something magical and ineffable about consciousness and you will realise this is true. Just because the operations of our brains are too sophisticated for us to understand, it doesn't mean they are magical and ineffable. People need to look at these things rationally, if they claim to want to be genuinely philosophical, and stop being influenced by emotions, which are basically the same things as prejudices, or which at least are irrational. I'm only saying, this is my answer to the question, I'm not saying that emotions per se are bad.
posted by rubber duck at 1:14 PM on August 26, 2010 [2 favorites]
I think the body is me, the flesh is me. Say I cut my finger off, then there are two mes. Until the finger disintegrates. But who cares? It's like cutting a piece of bread in half and saying there are two pieces of bread. It's not earth-shattering, because there isn't anything unique about lumps of flesh.
A significant misconception is that the contents of the brain are the individual. That is utter nonsense, if you look at it rationally. Emotionally, it seems to make sense, but it is a red herring. The body is the individual, as far as the individual has any meaning. The brain, personality, thoughts, memory, etc, are just a tool to aid the survival of the body. Like fingernails. The brain is nothing but an overdeveloped fingernail. Look at it impartially and you'll realise this is true. It is the conclusion which arises naturally out of a rational survey of what we know about evolution, etc. The contents of the brain are irrelevant to being me. The "me" which is the social construct, that is not a me that I care about. Being a body, I care about my body's comfort. I don't care about a social concept, a notion, a collection of traits. My body is the thing which persists, which mends itself, which circulates blood, which feels sensations. Who cares about memories, thoughts, ideas? They are detritus. It's nice, emotionally, to enjoy them, I am in favour of them, but if you get down to it, it you want to get to the philosophical nitty-gritty, they are not me - that is an illusion. You could wipe my brain's contents and replace them with something else tomorrow, and I'd still be me - I'd still be the one experiencing things. To say otherwise seems to me wrong.
Another misconception is that there is anything special about consciousness. There isn't. (I take the deterministic view anyway, that there is no such thing as free will.) I think we're just automata. To me, any device with inputs and information storage and processing is conscious. A computer with a webcam is conscious. Stop accepting the emotional, plainly irrational and daft interpretation that there is something magical and ineffable about consciousness and you will realise this is true. Just because the operations of our brains are too sophisticated for us to understand, it doesn't mean they are magical and ineffable. People need to look at these things rationally, if they claim to want to be genuinely philosophical, and stop being influenced by emotions, which are basically the same things as prejudices, or which at least are irrational. I'm only saying that this is my answer to the question; I'm not saying that emotions per se are bad.
posted by rubber duck at 1:14 PM on August 26, 2010 [2 favorites]
The body is the individual, as far as the individual has any meaning.
So then what would you say about the case in the linked essay where there is the same brain controlling a different body but it is the same "me"?
posted by ekroh at 1:23 PM on August 26, 2010
Basically I think there is no distinction between "mes" - it is an arbitrary distinction. You could say that that rock is me, because, from the rock's point of view, it is "me". So "me" is an illusion, an apparition, something that doesn't actually exist or have any meaning. You are me, since you think of yourself as "me", don't you? The only distinction we can make which holds is the distinction between physical items, separate physical constructs, the same as we can make a distinction between a table and a chair. So the problem of what is me disappears, each thing is a separate me, a different me. The word me ceases to mean anything other than how a thing would identify itself as opposed to things which were not it. The demarcation is physical, and, once you start chopping things in half or gluing them together, arbitrary. All of these conclusions arise from determinism, from looking at the universe impartially, realising there is no absolute, there is nothing special about life and existence. We can enjoy it and value it, but if you want to get philosophical, that is how I see it.
posted by rubber duck at 1:25 PM on August 26, 2010
So then what would you say about the case in the linked essay where there is the same brain controlling a different body but it is the same "me"?
The brain can think what it wants, it can see itself as the body, but it isn't that body, it is just some matter in a jar being fed unreliable input - unreliable because it wasn't designed to be transmitted over distance. The brain is basically being deceived, isn't it? Being fooled. The brain is, in short, wrong.
posted by rubber duck at 1:27 PM on August 26, 2010
I often think this way when running my Java VM in a cloud-powered VM.
posted by RobotVoodooPower at 1:31 PM on August 26, 2010
'the mind-body problem' is an artifact of a misapprehension about the world; it's not actually a problem as stated. The moment you make it a problem, you've already implicitly explained a whole slew of other difficulties. But those difficulties still exist. As I think Searle would say: the plain fact of consciousness can't be negated just because of the false dichotomy between mind and body.
What do you mean by "the plain fact of consciousness", here? Because I don't understand Dennett (in the linked article or elsewhere) as doing anything like denying consciousness, or insisting we're not conscious, etc. (I've seen straw-man interpretations of his arguments that amount to this, however, which he has taken great pains to refute.) And, as far as I can tell, his approach to the mind-body problem is to suggest that the "problem" is typically incorrectly presented as an "intuition pump", and is better dissolved than answered head-on (for example, his work on the impossibility of philosophical zombies). I have a number of issues with his work, myself—I find it particularly irksome whenever he mentions religion, memes, or computers—but he is no kind of simpleton. Count me in with those eager to hear a refutation of his work on philosophy of mind which fits in a couple of paragraphs.
posted by avianism at 1:34 PM on August 26, 2010 [1 favorite]
Blackcock, I agree that the brain receives sensory input, processes sensory input plus internal state, and produces output. I agree that this is computation. But I think it's problematic to refer to the brain as "a computer" because then people look to all the devices in their experience that are "computers", make the obvious analogy, and get ideas about the organization, function, and behavior of the brain that are just wrong. I constantly hear people talk about "hardware" as distinct from "software" in the context of the brain. There is no such distinction! There are similarities between the computation the brain does and traditional CPU-style computation, but they're only at the most gross level of abstraction. When you get down to the level of "operations" they don't resemble each other at all, and I think that it's really necessary to understand the differences to have a decent appreciation for how the brain works. For what it's worth, my interactions with colleagues lead me to believe this is the dominant viewpoint within the neuroscience community.
posted by Humanzee at 1:38 PM on August 26, 2010 [1 favorite]
ugh. Blazecock, not Blackcock. What the hell?!?
posted by Humanzee at 1:38 PM on August 26, 2010 [6 favorites]
In Japan "they" (regular folks) figure that the "you" that is "you" in your identity resides in your chest
Then why do they point at their noses to indicate themselves?
posted by Jimmy Havok at 1:38 PM on August 26, 2010
rubber duck, I think Dennett would disagree with you about the definition of "me." In your previous comment, you said if you cut yourself in half and both halves grew into whole bodies, there would be two yous. But I think he would say that only one of them would maintain the temporal-spatial continuity of self consciousness, which I think is what he is calling "me." The other would be what he calls "a sort of super-twin brother."
On preview:
The brain can think what it wants, it can see itself as the body, but it isn't that body, it is just some matter in a jar being fed unreliable input - unreliable because it wasn't designed to be transmitted over distance. The brain is basically being deceived, isn't it? Being fooled. The brain is, in short, wrong.
So in the case Dennett presents, you would say the real you would be the disembrained body that eventually decomposes, and the brain in the jar would persist in a state of illusion of being you?
posted by ekroh at 1:44 PM on August 26, 2010
unreliable because it wasn't designed to be transmitted over distance
The input is already transmitted over distance when the brain is in the body. Why is it unreliable when it is transmitted over a greater distance?
I'm not trying to pick on you, just enjoying the philosophical debate.
posted by ekroh at 1:48 PM on August 26, 2010
But this isn't a philosophical debate. When we talk about a brain thinking it's you, that's a statement that makes anatomical sense. The brain is capable of thought. When we talk about a body thinking it's you, that's a statement that makes no anatomical sense. The rest of the body is not capable of thought.
You are not others' perceptions of you, otherwise a really good wax model would have a claim to actually being you. You are your conceptions of you, and anatomically, that happens between your ears.
posted by effugas at 1:53 PM on August 26, 2010
If you enjoy this, you'll love Greg Egan's short stories. Particularly the ones collected in Axiomatic, which has a story, "Learning to Be Me," which might almost be a dramatization of this essay (Egan says he wrote it without having read Dennett's talk).
posted by straight at 1:55 PM on August 26, 2010 [1 favorite]
Computers have a hierarchy of abstractions that the brain lacks. You have the hardware, then the OS sitting on it, with programs running managed by the OS. The same basic CPU can be a desktop, can run an embedded system like an ATM, or the controller for some subsystem on an airplane or car.
You're confusing a computer in the most general sense for computers as we know them. An example of a computer that defies your definition is one I've come across in machine learning contexts: a ball rolling down a hill. If you think about the surface that the ball is on as being described by a function, then the ball by reaching the bottom of the hill computes the minimum, and is therefore a computer. It has no unit of information, nor is it portable, nor does it have an operating system. The brain also performs computations without the benefit of those things.
posted by invitapriore at 1:56 PM on August 26, 2010 [1 favorite]
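To make the rolling-ball computer above concrete, here is a minimal gradient-descent sketch in Python, in which a "ball" settles into the minimum of a surface. The surface f(x), the starting point, and the step size are all invented for illustration, not taken from the comment:

    # The "hill" and every constant here are illustrative assumptions.
    def f(x):
        return (x - 3.0) ** 2 + 1.0       # surface with its minimum at x = 3

    def slope(x, h=1e-6):
        return (f(x + h) - f(x - h)) / (2 * h)   # numerical gradient

    x = -10.0                             # where the ball starts
    for _ in range(1000):
        x -= 0.1 * slope(x)               # the ball rolls a little way downhill
    print(round(x, 3))                    # ~3.0: the minimum, "computed"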
The brain isn't literally a computer. Sometimes it can be helpful to think of it that way, but it can cause misunderstandings too.
This is kind of annoying. You're taking one type of computer -- ordinary desktop computers -- with software stacks and the like and saying that anything that isn't like those things isn't a computer. But that's not right at all. There are lots of computers that aren't anything like modern desktop computers. Old analogue computers had no real separation between hardware and software, for example.
Computers have a hierarchy of abstractions that the brain lacks. You have the hardware, then the OS sitting on it, with programs running managed by the OS. The same basic CPU can be a desktop, can run an embedded system like an ATM, or the controller for some subsystem on an airplane or car. You can run web browsers, whatever. It can be reprogrammed.
The problem is that you're narrowing the definition of "computer" to the kind of computer you're most familiar with. The arguments about how you can't reprogram a brain are irrelevant because not all computers are reprogramable.
The brain, at least the human brain, certainly qualifies as a computer in the abstract sense.
Computers have a clear-cut unit of information (the bit) which is respected at all times. Information is stored as bits, manipulated as bits, transmitted as bits. In computers, information storage and processing are conceptually and physically separated. The brain has no known unit of information.
Again, you're just misusing terms here. Information is a mathematical quantity that can be measured. You can absolutely have non-integer amounts of information. A number like 1.5 bits is totally plausible. And you can figure out how much information is in a neuron spike if you want, too.
Computers are fundamentally organized around a CPU which performs sequential instructions. It's certainly possible to parallelize CPUs and have many work together. To do this, you need a problem which is parallelizable. Then the problem is divided up
Again, you're only talking about one type of computer, not all theoretical types of computers. No one is saying that the brain is a PC that you can run Windows on, only that it's mathematically equivalent to a Turing machine type of computer, one that could be emulated on a device mathematically equivalent to a universal Turing machine.
---
Of course back in the day "Computer" actually referred to people. Or actually, a job. Like secretary. It was someone who sat around doing math all day.
Indeed, the first computer, Babbage's Difference Engine, was analog, as were the cryptanalysis bombes or calculators used during World War II.
I don't think Babbage's difference engine was analog. It was mechanical, but it was still using integers (i.e. digits). Analog computers actually computed analog values, using electronic circuit components to solve differential equations, simulating things like gravity for bomb-drop computations.
posted by delmoi at 1:57 PM on August 26, 2010 [2 favorites]
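delmoi's point that information needn't come in whole bits can be checked numerically with Shannon entropy; this is a minimal sketch, and the example distributions are invented for illustration:

    from math import log2

    def entropy(probs):
        # Shannon entropy in bits: H = -sum(p * log2(p))
        return -sum(p * log2(p) for p in probs if p > 0)

    print(entropy([0.5, 0.5]))          # 1.0 bit: a fair coin
    print(entropy([0.9, 0.1]))          # ~0.469 bits: a biased coin tells you less
    print(entropy([0.5, 0.25, 0.25]))   # exactly 1.5 bits, a perfectly sensible quantity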
Also, the brain is absolutely not a homogeneous blob of undifferentiated neurons. It is filled with structures that have hierarchical and clearly defined roles -- the hippocampus, the amygdala, the various image processors in the occipital lobe, the neocortex...
posted by effugas at 1:58 PM on August 26, 2010
delmoi's comment reminds me that I should have said that the ball has no quantized unit of information.
posted by invitapriore at 2:00 PM on August 26, 2010
You're confusing a computer in the most general sense for computers as we know them. An example of a computer that defies your definition is one I've come across in machine learning contexts: a ball rolling down a hill. If you think about the surface that the ball is on as being described by a function, then the ball by reaching the bottom of the hill computes the minimum, and is therefore a computer. It has no unit of information, nor is it portable, nor does it have an operating system. The brain also performs computations without the benefit of those things.
If this is the definition in use when describing the brain as an organic computer, I'm not sure what the usefulness of the description is.
posted by shakespeherian at 2:00 PM on August 26, 2010 [2 favorites]
It's just proprioception. He feels like he's in the room (not in the jar) because the brain gets its sense of "where" based on input from the body - which is standing in the room.
posted by r_nebblesworthII at 2:01 PM on August 26, 2010 [1 favorite]
Decani: “Well, I know a little bit about Dennett's work so all I'll say at this point is that if you can justify your statement that Dennett is "wrong about everything" in a "couple of paragraphs" (with all due recognition of non-literalism) I'm looking forward to being seriously impressed.”
I think there are significant misunderstandings of the problems involved lodged in the very foundation of almost all of Dennett's work. That doesn't mean he's stupid; far from it. But it does mean that I can sketch why I think he's wrong about many things in a few paragraphs. That's not so audacious. I think that he would be willing to agree that, if someone were wrong about one of their fundamental premises, that mistake, which underlies all their work, could be sketched in a few paragraphs. I'm not saying he's not intelligent; he clearly is.
If you want a readable and engaging book-length refutation of Daniel C Dennett's positions on mind and body (which isn't specifically about Dennett, but applies to him well) I can recommend John Searle's book, The Rediscovery of the Mind.
posted by koeselitz at 2:01 PM on August 26, 2010
Ah, on scrolling up I somehow missed the entire interchange between Humanzee and Blazecock Pileon. Sorry Humanzee.
If this is the definition in use when describing the brain as an organic computer, I'm not sure what the usefulness of the description is.
It's useful in that it establishes that the brain can in fact be described as a computer. What remains is to determine what type of computer it is, and the relationships between its inputs and outputs. The argument then is how difficult that is or whether it's possible. I suspect it's possible but I don't really know enough about the problems involved to really have an opinion there.
posted by invitapriore at 2:13 PM on August 26, 2010
It's useful in that it establishes that the brain can in fact be described as a computer.
But, again, if you're using 'computer' to mean 'system upon which physical laws can act,' I don't really see what you're gaining by establishing this.
posted by shakespeherian at 2:18 PM on August 26, 2010 [1 favorite]
I felt this story was leading up to a topic I have an interest in. I feel Dennett has a brain-centered view of the person, as he chose to pursue the question "if there are two brains, each controlling the body, which is 'me'?" But reverse the situation: where you have one brain simultaneously controlling two bodies, which is "you"?
I feel that wherever our sensory input is located is where we place our sense of "self". The fact that many of us believe that "me" resides in our head is because that's where the majority of our sensing infrastructure lies. The fact that the brain is there is mere coincidence. So it doesn't feel strange that if your brain separates from your body, the sense of "me" would still reside in the head of the body, as opposed to the location of the brain.
However, what about the scenario where your sensing equipment is not co-located, but instead dispersed over a large area? Having two bodies is one way to achieve this. The essay hinted at another, when the microphone signal was converted into a direct neural stimulus. What if they had microphones installed all over the base, and each of them fed into Dennett's brain? I suspect Dennett's sense of "here" would be the entire base. I feel that would be true were I in that scenario.
I am also interested in the "limbo" state, of a brain without senses. Dennett claims that his sense of "here" became the vat, in Houston, but I doubt that would actually be the case. I believe there would be a vague sense that one was in Houston in a vat, as our memory would tell us it is true, but it wouldn't have the same sense of reality as if one's senses confirmed that one was actually in a vat in Houston. To illustrate the point: imagine that before Dennett sent his original body into the hole in Tulsa, the engineers told him they were going to move the vat to an undisclosed location, and then transfer the brain into an unidentified container. Where would Dennett have felt was "here" when he later got disconnected from his body? He would have no memory of where his brain was located, nor any senses to provide him a sense of "here". So what would he come up with? I can only imagine that would be a psychosis-inducing scenario of fear and despair, having no physical reality, or even memory of one, to cling to.
posted by LoopyG at 2:19 PM on August 26, 2010 [2 favorites]
But I think it's problematic to refer to the brain as "a computer" because then people look to all the devices in their experience that are "computers", make the obvious analogy, and get ideas about the organization, function, and behavior of the brain that are just wrong
That's a problem of logical misuse of terminology by people not thinking carefully about abstract concepts, not a problem with identifying the brain for its functionality.
A car has four wheels, but it doesn't follow that any vehicle that has four wheels and rolls down the road must be a car. Computers can come in all shapes and sizes, but ultimately the only requirement is that a computer computes.
In any case, the brain takes input, processes it, and creates output. We have a fair understanding of the inputs and outputs through hundreds of years of anatomical and physiological research — we no longer ascribe these to humors or spirits or other supernatural ideas. The processing part we understand less, but cognition is a young science, relatively speaking, and we are always learning more about the brain's secrets.
The brain is a computer, performing computation. This description is useful because it accurately describes the brain's function. We can study it, model it, replicate it (however poorly) with mathematics and computers of our own design.
Most other characteristics ascribed to the brain as unique (vaguely speaking: "soul", "free will", "consciousness", other behavioral patterns etc.) can either be modeled with computation or are boiled down to vitalist notions, supernatural beliefs that are unverifiable by neuroscientists (or any scientist, for that matter).
posted by Blazecock Pileon at 2:20 PM on August 26, 2010 [1 favorite]
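One hedged way to picture "model it, replicate it (however poorly) with computers of our own design" is the classic artificial-neuron abstraction, sketched below. The weights and threshold are invented, and this is a caricature of input-processing-output, not a claim about real neurons:

    def neuron(inputs, weights, threshold):
        # Weighted sum of inputs, then a hard threshold: fire or stay silent.
        activation = sum(i * w for i, w in zip(inputs, weights))
        return 1 if activation >= threshold else 0

    # With these (invented) weights the unit behaves like an AND gate.
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", neuron([a, b], [0.6, 0.6], 1.0))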
If you enjoy this, you'll love Greg Egan's short stories.
The last Greg Egan short story I read, Singleton, was about the stupidest thing I've read for quite some time and I refuse to have anything more to do with his stuff until I've ceased fuming about it.
...but that's the whole Kurzweilian world view thing again....
posted by Artw at 2:22 PM on August 26, 2010
But, again, if you're using 'computer' to mean 'system upon which physical laws can act,' I don't really see what you're gaining by establishing this.
That's a lot less specific than the definition I'm thinking of, which is I think the fault of my first ball example. Anyway, I would define a computer as a system which receives well-defined inputs, performs an operation on them, and yields well-defined outputs. In the case of neurons the nature of those inputs and outputs is fairly well-known. In the case of the brain they are less so, but that seems to be a function of our current capacity to discover their nature as opposed to some basic trait of theirs that makes their nature undiscoverable.
posted by invitapriore at 2:28 PM on August 26, 2010 [1 favorite]
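For a single neuron, "well-defined inputs and outputs" can be sketched with a leaky integrate-and-fire model: input current in, spike times out. Every constant below is an illustrative assumption, not a measured value:

    def lif_spikes(current, steps=200, dt=1.0, tau=20.0, v_thresh=1.0):
        # Voltage leaks toward rest, input drives it up; crossing
        # threshold emits a spike and resets the cell.
        v, spikes = 0.0, []
        for t in range(steps):
            v += dt * (-v / tau + current)
            if v >= v_thresh:
                spikes.append(t)
                v = 0.0
        return spikes

    print(len(lif_spikes(0.02)))   # weak drive: no spikes at all
    print(len(lif_spikes(0.10)))   # stronger drive: a steady firing rate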
Also, the brain is absolutely not a homogenous blob of undifferentiated neurons. It is filled with structures that have hierarchical and clearly defined roles -- the hippocampus, the amygdala, the various image processors in the occipital lobe, the neocortex...
I don't think anyone was making the argument that the brain had no modularity, but rather that there are still many processes that remain elusive. The visual cortex, for example. It's a big jump from selective sensitivity of certain neurons in response to lines at different angles in the visual field to something like, say, facial recognition. So-called "grandmother/Jennifer Aniston" cells don't honestly have incredible support (yet, anyway), and the first papers on that came out, what, 6 or 7 years ago, with no follow-up? Sparseness vs. distributed representations of cognition are by no means well understood (to say nothing of cognition, a sense of "self," et al.). Technologies like fMRI/BOLD imaging are certainly useful, but I think they imply a kind of compartmentalization in the brain that frankly may not be completely accurate (all such imaging techniques rely on some dicey statistical analysis and heavy averaging, for example).
In the end though, I guess if you're referring to a brain as a computer in the sense that it is essentially a device that processes information, I can't really argue with that, that's what it is. But given that, you also can't really make the same analogy-based leaps that some people have been making in this thread.
posted by Tikirific at 2:37 PM on August 26, 2010
> refutation of Dennett [...] John Searle's book, The Rediscovery of the Mind.
I laughed out loud when I read this.
Searle has a gift for being extremely annoying by apparently deliberately misunderstanding the matter at hand. Many or most of his arguments seem to result from a willful refusal to understand the idea of emergent phenomena, or to concede that increasing the complexity of a system by several orders of magnitude might result in behaviour that is qualitatively different, or are simply straw men: e.g. "[facts] as that we really do have subjective conscious mental states and that these are not eliminable in favour of anything else, are routinely denied by many, perhaps most, of the advanced thinkers on the subject" - which not only misstates the position of his opponents but glues together something ridiculous ("subjective conscious mental states don't exist") with something that's not so clearly wrong ("subjective conscious mental states can be eliminable in favour of something else").
If that's the best philosophy can do for a refutation...!
posted by lupus_yonderboy at 2:38 PM on August 26, 2010
> Isn't this a very old problem? Like Descartes old? I get the feeling you're not going to add too much with yet another thought experiment, after almost 200 years of thought experiments.
Exactly. Dennett's "problem" seems like a set of sloppy word-games. The fact that he feels he has to put this flimsy "BRAIN OR BODY??? WHO IS REAL ME???" question into a narrative in order to give it weight should tip one off as to how uninformative it actually is.
Even if you disregard the fact that this is just a game built around abstractions like "me" and "self", the specific, particular "self" isn't left-hemisphere or right-hemisphere, or brain, or body; it's the intersection of these things. Contrary to the little line Dennett slips into his story dismissing the idea of personality change via bodily change, change one element of someone's body-brain make-up, and "personality" will change in proportion to the severity of that change. The earlier the change, the sooner it affects one's trajectory and the evolution of one's "self".
Put more concretely: Think of what your body was like when you were four or five or six years old. Now radically change an element of your appearance (and therefore, your social self)-- imagine that you were much heavier, or much shorter, or much taller, or much uglier, or much more beautiful; imagine that that change in your appearance persisted, and the character of your interactions with other people changed commensurately.
You would develop different beliefs about the world.
You would develop different behaviors in response to those different beliefs.
You would be a different person.
Even long after the formative period of early childhood, should someone have a life-altering experience-- disfigurement, paralysis, loss of a limb, and so forth-- it's quite common for that person to develop new beliefs, new behaviors, and new styles of interaction.
Becoming a New Man is not at all uncommon; it happens all the time in hospital beds.
Given modern civilization's ability to conduct precise scientific research, harvest statistics, and, generally, put hypotheses to the test, these sorts of Gedanken experiments often seem about as useful as 3 a.m. dorm-room blather.
The "problem" Dennett poses isn't about the biology of intelligence; as his question rests on concepts like "real self" versus "artificial self" it's wholly an exercise in semantics and metaphysics, appropriate for theologians and lawyers.
It might be relevant for property rights and religious observance, but not much else.
posted by darth_tedious at 2:41 PM on August 26, 2010 [2 favorites]
If you took the engine out of my car, would it still be my car?
The question has more to do with the specific way we traditionally name things than it does the deep nature of the universe.
posted by miyabo at 2:42 PM on August 26, 2010 [3 favorites]
I sent this link to the players in my Eclipse Phase game. As they're currently brains in jars on Mars following a regrettable incident with a hegemonising AI nanoswarm, it seemed ... germane.
posted by Sebmojo at 2:47 PM on August 26, 2010
darth_tedious,
Exactly. The entire history (warning: vulgar) of this is just a trumped up Who's On First routine. I think we're getting to the point where we can do actual things to figure shit out instead of trying to cut microchips with stone tools.
posted by Tikirific at 2:53 PM on August 26, 2010 [1 favorite]
Our intuition might say it's a matter of software, but we don't know for sure. I think that was weston's point.
Pretty much. I don't have antipathy towards it as a working hypothesis. My problem with it is as a foregone conclusion at a point where there are, to my knowledge, no demonstrations or experiments.
is because we just do not yet have the right programs with which to enable computers to do those things. It's all data processing of some kind or another.
Where's the evidence for the hypothesis that it's all data processing?
Many people believe (even as an article of religious faith) that consciousness is something special and magical that cannot be produced by mechanical means.
Well, it depends on what you mean by mechanical. It may not be clear from my earlier comment, but I'm not a dualist. I'm confident that consciousness is physics instead of magic (though I also think we're not at the point where they're distinguishable). I am confident that if you expand the definition of "mechanical" to biology (and potentially bio-cooperative nano machinery that would in essence be the same thing), then it might indeed be produced by mechanical means. Indeed, you could argue it is being produced through the usual mechanical means every day.
But if by mechanical we mean the current state of electronic computer engineering... the idea that consciousness is something that has to do with something magical and special called a soul and the idea it can be mechanically reproduced have a lot in common: no one has been able to objectively demonstrate either.
Computers take rules and apply them. Assuming our view of the universe is correct, we can in principle simulate it. Therefore, we can simulate a brain, quantum effects (if there are any...) and all.
I'm willing to believe it's credible that this is a route that could, if we had the computing power to simulate it, someday yield something that looks enough like consciousness that we wouldn't be able to draw easy distinctions. Those seem like big ifs to me, but even assuming that works, without a contrasting demonstration, it's more or less an admission that the specific processes in the biology are what yield consciousness and intelligence.
posted by weston at 3:16 PM on August 26, 2010 [1 favorite]
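As a toy instance of the quoted claim that "computers take rules and apply them": one update rule of Conway's Game of Life, a miniature universe whose entire "physics" is that rule applied everywhere. Purely illustrative; it says nothing about whether brains can be simulated, which is exactly the point under dispute:

    from collections import Counter

    def life_step(live):
        # live is a set of (x, y) cells; apply Conway's rule everywhere.
        counts = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        glider = life_step(glider)
    print(sorted(glider))   # the same glider, shifted one cell diagonally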
Put more concretely: Think of what your body was like when you were four or five or six years old. Now radically change an element of your appearance (and therefore, your social self)-- imagine that you were much heavier, or much shorter, or much taller, or much uglier, or much more beautiful; imagine that that change in your appearance persisted, and the character of your interactions with other people changed commensurately.
You would develop different beliefs about the world.
You would develop different behaviors in response to those different beliefs.
You would be a different person.
Why do you say you would be a different person when your personality changes? You're saying that self equals personality and that's it?
posted by ekroh at 3:19 PM on August 26, 2010
I was OK with this up to this point: The brain is a computer and vice versa. And immediately things go horribly wrong.
This is sort of like saying, "Since P=NP there are no more secrets." Sorry, but assuming away the central question is FUBAR.
What Penrose was saying in "The Emperor's New Mind" was that we don't know that consciousness is a "computable number" in the Turing sense. Not only can we not assume this, nobody has even come close to showing that modeling consciousness is identical with consciousness. And what's worse, there doesn't seem to be any consensus on what consciousness is or how we can tell when it's present or not.
But sliding that assumption about brain=computer into the discussion without examination is probably the all-time nerd mistake. Just because you've got human-feeling skin and mucus for your sexbot, I don't think you are going to be having kids. And even if your kittybot is eating kibbles, what's coming out the other end isn't cat poop. Likewise, I don't care how many transistors can dance on the head of a pin unless somebody can show that the sets "consciousness" and "transistors" actually overlap instead of just being lazily assumed to be equivalent.
Dennett's not doing much better than anybody else in this thread, since he just assumes his conclusion in a pretty thoughtless way. (Assuming, of course, that he's missing it. If he gets it, then this is just the same sort of cheat used to resolve crises in the less-good Dr. Who episodes.) The moment the brain/computer switch gets flipped and nothing seems to happen, it's game over man. My guess is that for all time, when that switch gets flipped, the lights go out.
And the same thing goes for that stupid singularity stuff. This isn't science or philosophy, it's fiction and a mindless amusement. And it would be funnier if Bender was in it.
*gets in Huff and drives off*
posted by warbaby at 3:21 PM on August 26, 2010 [1 favorite]
lupus_yonderboy: “Searle has a gift for being extremely annoying by apparently deliberately misunderstanding the matter at hand. Many or most of his arguments seem to result from a willful refusal to understand the idea of emergent phenomena, or to concede that increasing the complexity of a system by several orders of magnitude might result in behaviour that is qualitatively different, or are simply straw men: e.g. "[facts] as that we really do have subjective conscious mental states and that these are not eliminable in favour of anything else, are routinely denied by many, perhaps most, of the advanced thinkers on the subject" - which not only misstates the position of his opponents but glues together something ridiculous ("subjective conscious mental states don't exist") with something that's not so clearly wrong ("subjective conscious mental states can be eliminable in favour of something else"). ¶ If that's the best philosophy can do for a refutation...!”
I didn't say it was. I said that Aristotle's On The Soul was the best philosophy could do for a refutation. And I don't really understand your point; but that's okay, this isn't really about Searle anyway. Suffice it to say that you think he's a crank, whereas I think he's not bad. He makes some modernist mistakes, but he's not bad.
This comment sounds like it's flatly hostile to philosophy, however. That seems unscientific, but whatever.
posted by koeselitz at 3:25 PM on August 26, 2010
Past a point of abstraction you can call any particular thing a "computer", in four easy steps:
- step 1: devise some way of distinguishing your thing from every other thing
- step 2: label the configuration of some set of other things vis-a-vis your thing up to some point in time "input"
- step 3: label the configuration of some set of other things vis-a-vis your thing after some point in time "output"
- step 4: label the internal activity of your thing in the times between "input" and "output" as "processing"
Et voila! Your thing is now a "computer"!
Now, naturally, at this level of abstraction the construction can yield analogies that are essentially vacuous (!): you've relabeled "configuration at time T_0" as "input", "configuration at time T" as "output", and "behavior of the system between times T_0 and T" as "processing", yielding no new knowledge and no new insight (but, perhaps, fooling yourself into thinking you've done so).
Which is why I'd suggest that if someone is claiming something is in essence a "computer", and therefore "just" a question of input, processing, and output, that you ask them to do one of two things:
- (A) establish some criteria for distinguishing humdrum chains of cause-and-effect -- "physics", if you will -- from "computation" (and follow it up by showing that what they're talking about is "computation" and "physics", if you will)
- (B) demonstrate that the X-is-a-computer analogy holds more than vacuously for X, e.g. demonstrate that calling X a computer yields novel insights or actionable ideas, rather than just new names for some general things
If they can't do either of those they're just wasting your time, whether they know it or not.
(!) As an example of a vacuous application of this process: you could, for example, say that your digestive tract is a "computer" -- food goes in, shit goes out, and the complex chains of chemical reactions execute the algorithm of determining what sorts of food become what sorts of shit. The analogy works: we have a clearly-defined thing, input, output, and processing, but we haven't learned anything here. If you want to say "that's not computing" you'll, of course, need to establish some criteria distinguishing "computing" from plain-old cause-and-effect, as per (A) above.
posted by hoople at 3:32 PM on August 26, 2010 [1 favorite]
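hoople's four-step recipe is easy to make literal in code, which also makes the vacuity visible: the wrapper below relabels any process whatsoever as a "computer" without adding any knowledge. The class and names are invented for illustration:

    class LabeledComputer:
        def __init__(self, thing, process):
            self.thing = thing            # step 1: pick out your thing
            self.process = process

        def run(self, before):            # step 2: call "before" the input
            after = self.process(before)  # step 4: call the middle "processing"
            return after                  # step 3: call "after" the output

    # The digestive-tract example: all the labels apply, nothing is learned.
    gut = LabeledComputer("digestive tract", lambda meal: "waste from " + meal)
    print(gut.run("lunch"))               # "waste from lunch"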
> Why do you say you would be a different person when your personality changes? You're saying that self equals personality and that's it?
I'm saying "self" is a very fuzzy, and therefore not all that useful, concept; at best, we can boil it down to
1) memories, viewed from a particular perspective;
2) identification with those memories;
3) habits of brain and muscle and nerve formed by those memories;
4) expectations and beliefs formed by those memories of brain and body;
5) a set of likely actions established by those memories, beliefs, and habits.
posted by darth_tedious at 3:32 PM on August 26, 2010
I'm saying "self" is a very fuzzy, and therefore not all that useful, concept; at best, we can boil it down to
1) memories, viewed from a particular perspective;
2) identification with those memories;
3) habits of brain and muscle and nerve formed by those memories;
4) expectations and beliefs formed by those memories of brain and body;
5) a set of likely actions established by those memories, beliefs, and habits.
posted by darth_tedious at 3:32 PM on August 26, 2010
That should be: ..."that what they're talking about is "computation", and not just "physics"".
posted by hoople at 3:37 PM on August 26, 2010
But sliding that assumption about brain=computer into the discussion without examination is probably the all-time nerd mistake.
I don't think in the thought experiment brain = computer as far as consciousness goes, just that they are indistinguishable to an outside observer.
posted by ekroh at 3:44 PM on August 26, 2010
Dennett is awesome, and when I got to the bottom of that article it was even more awesome to discover that he wrote it 32 years ago, way before the idea of a computer running your brain processes was a sci-fi staple.
posted by memebake at 3:58 PM on August 26, 2010
Sure, mind/body thought experiments are a lot of fun, but I had a hard time suspending my disbelief to get into this one. He tells us that (in or prior to 1978) his brain and body were outfitted with analog radio transceivers to carry a signal from Houston to Tulsa and back. My first thought was that his brainless head must've been filled with some kind of obscenely dense battery to power that thing; it must have weighed a ton.
Then I started calculating the latency. That's a distance of about 350 miles, which would take about 27 minutes to cross at the speed of sound. Double that to get the total duration from signal sent to response received: 54 minutes. This is the most interesting thought experiment in the whole story... how would your mind cope if your body lagged nearly an hour behind it?
Then he went on to mention that he was descending a mile under the earth's surface, and the analog radio transceiver kept working normally until he arrived at the warhead buried there, and I lost it. Not to mention the introduction of the perfect digital brain simulation (again, in or prior to 1978). I really did enjoy the piece on a philosophical level, but as fiction it didn't hold much water. Since people keep mentioning Kurzweil, I have to at least acknowledge that his stories resonate with me on a much deeper level, even if they're fundamentally as flawed as this.
Also, I still have hope for the Blue Brain Project, stagnant as it may be.
posted by The Winsome Parker Lewis at 4:01 PM on August 26, 2010
There's a bit about latency and identity in The Island, a rather neat short story by Peter Watts:
The thing about I is, it only exists within a tenth-of-a-second of all its parts. When we get spread too thin— when someone splits your brain down the middle, say, chops the fat pipe so the halves have to talk the long way around; when the neural architecture diffuses past some critical point and signals take just that much longer to pass from A to B— the system, well, decoheres. The two sides of your brain become different people with different tastes, different agendas, different senses of themselves.
posted by Artw at 4:09 PM on August 26, 2010 [1 favorite]
The Winsome Parker Lewis ... which would take about 27 minutes to cross at the speed of sound ...
Radio waves travel at the speed of light, not the speed of sound.
Also, I don't think Dennett is aiming for 'fiction' as such, more a kind of thought experiment padded out with amusing details.
It's written as if it were given as a lecture, and if that was the case it must have been fun to watch at the end, when he acted out the change of brains.
posted by memebake at 4:23 PM on August 26, 2010
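For what it's worth, the corrected arithmetic is easy to check in Python (figures approximate, using the comment's 350-mile estimate):

MILES = 350.0
SOUND_MPH = 767.0                 # rough sea-level speed of sound
LIGHT_MPS = 299_792_458.0         # speed of light, metres per second
METRES_PER_MILE = 1609.344

round_trip_sound_min = 2 * MILES / SOUND_MPH * 60
round_trip_light_ms = 2 * MILES * METRES_PER_MILE / LIGHT_MPS * 1000
print(f"sound: {round_trip_sound_min:.0f} minutes")  # ~55 minutes
print(f"light: {round_trip_light_ms:.1f} ms")        # ~3.8 ms -- no perceptible lag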
Good point about light vs. sound. I banged out that comment too quickly and forgot how the EM spectrum worked. My concerns are diminished, until the time Dennett gets asked to accompany NASA on a secret mission to Alpha Centauri.
posted by The Winsome Parker Lewis at 4:28 PM on August 26, 2010
My fingers command electrons; ordering them to travel to the other side of the world. I am in dozens of places at once, places that are not places because they do not exist in the physical realm. With a single motion, I am somewhere else, somewhere new. In some places I am alone - elsewhere, anonymous amongst thousands, sometimes millions of others.
With the internet, the question isn't where I am.
It's where I'm not.
posted by ymgve at 4:39 PM on August 26, 2010 [1 favorite]
ymgve, that is a nice turn of phrase, but it seems to me to conflate one's "being" with one's influence. Perhaps I am alone in this, but I would not equate sending signals to a distant place with sending one's self to a distant place.
posted by LoopyG at 5:38 PM on August 26, 2010
- (B) demonstrate that the X-is-a-computer analogy holds more than vacuously for X, e.g. demonstrate that calling X a computer yields novel insights or actionable ideas, rather than just new names for some general things
To me X-is-a-computer means that X can be simulated on what we think of today when we hear the word computer - an electronic machine with a cpu, memory, etc. (at least in theory - for most things, such as a brain or the digestive tract, we neither have computers powerful enough nor the proper algorithms/code). So in the case of X = brain, a brain can be simulated on a computer, which means that consciousness can be simulated, which means a computer can be conscious.
Of course this view contains several huge assumptions which could undermine it.
1) If you can simulate something to a fine enough degree, the simulation will contain the emergent properties of that something.
2) Consciousness is an emergent property of the brain.
There are probably others, like the one that we'll be able to build a computer with enough power to simulate a brain, or be able to figure out the proper algorithms.
Personally, I think computer consciousness is likely to come about once we have enough computing power to simulate all the relevant properties of neurons that are required to have consciousness emerge - whatever they are. Then someone's brain will be scanned and simulated on the computer. I expect we'll have the technology to do that before we'll be able to figure out other ways to program consciousness. Or in other words, I think we'll be able to simulate a brain and consciousness before we understand it.
posted by Bort at 5:57 PM on August 26, 2010
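As a sense of what "simulating some properties of neurons" might mean at the very coarsest level, here is a toy leaky integrate-and-fire neuron in Python (a standard textbook model with made-up constants; nobody knows whether this is the relevant level of detail):

def lif_neuron(currents, v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0,
               tau=10.0, dt=1.0):
    v, spike_times = v_rest, []
    for t, i_in in enumerate(currents):
        v += dt * (-(v - v_rest) + i_in) / tau  # leak toward rest, plus input drive
        if v >= v_thresh:            # membrane potential crosses threshold:
            spike_times.append(t)    # ...emit a spike
            v = v_reset              # ...and reset
    return spike_times

print(lif_neuron([20.0] * 100))  # a steady input current yields periodic firing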
I think this topic was already resolved a long, long time ago... let me go through my logs...
Bug #59345 Atman points to multiple locations, remains inconsistent

Indian Merchant <mark@india.biz> 800 BCE Comment 1
Following the Eternalists' instructions I have begun a period of deep yoga searching for my true self, or Atman. However, after meditation today I realized that the Atman I'm feeling right now is different from the one I got a few weeks ago. Also, my Eternalist guru keeps changing his mind about how much he charges for consulting. What's up with this whole atman thing?

Nihilist <trash@dev.null> 800 BCE Comment 2
Atman is deprecated garbage. If your guru knew what he was doing he'd be pointing you toward self-annihilation right now. I recommend starving and torturing yourself until your feeling of self is extinguished entirely.

Indian Merchant <mark@aol.com> 700 BCE Comment 3
I died during my ascetic practice and was reincarnated as an AOL user!!!! Someone besides Nihilist please help me !!!!!!

Nihilist <trash@dev.null> 700 BCE Comment 4
You obviously weren't trying hard enough. Requesting admin so I can mark this INVALID. You're the idiot for looking for atman in the first place.

[...]

Gautama <gautama@kapilvastu.in> 500 BCE Comment 1435
Status changed to WONTFIX

Gautama <gautama@kapilvastu.in> 500 BCE Comment 1436
This isn't a bug, it's a feature. "Atman" cannot be assigned to any individual part of a human being, but is the combination of all parts. It's an important part of the nature of reality that atman remains inconsistent and the product of multiple skandhas.

Maya <maya@samsara.int> 480 BCE Comment 1437
Reopen this bug IMMEDIATELY. People all over the world are trying to find their true selves or learn who created them. How are we supposed to do that if we don't even know who we are?

Vajira <vajira@sangha.in> 480 BCE Comment 1438
This bug is already resolved. Think of the word "atman" as something like the word "car". A car isn't any of its individual parts, and even all the parts together do not magically form a car. Only the relationships and changes between the parts create something we can call a car.

Maya <maya@samsara.int> 480 BCE Comment 1439
Ugh, why is it so hard to troll this bugzilla ever since Gautama got admin?

Bug #438801 Where does e-mail go if there is no self????

Milinda <majordomo@sakala.in> 100 BCE Comment 1
Hey Nagasena, you said there is no self, so where does your e-mail go to? Will you even read this bug?? HMMM???!?

Nagasena <nagasena@sanga.in> 100 BCE Comment 2
Status changed to DUPLICATE

Nagasena <nagasena@sanga.in> 100 BCE Comment 3
This is a duplicate of bug #59345. Please stop opening new bugs...
posted by shii at 6:32 PM on August 26, 2010 [9 favorites]
You know, I started reading this article and within the first few sentences I was having déjà vu. And I was right, I had read this before. This is a chapter from Dennett's book "Brainstorms," which was published in 1981. So this isn't exactly cutting edge, it's almost 30 years old. There has been an awful lot of neuroscience and cognitive science research since this was written; I'd guess that the bulk of current neuroscience theory has been created since this article was written.
posted by charlie don't surf at 7:01 PM on August 26, 2010
Past a point of abstraction you can call any particular thing a "computer", in four easy steps:
- step 1: devise some way of distinguishing your thing from every other thing
- step 2: label the configuration of some set of other things vis-a-vis your thing up to some point in time "input"
- step 3: label the configuration of some set of other things vis-a-vis your thing after some point in time "output"
- step 4: label the internal activity of your thing in the times between "input" and "output" as "processing"
Step 5: find a meaningful abstraction or interpretation scheme for the input and output states. In the case of the ball, the abstraction centers around the minimum of a function. In any case you could probably describe your abstraction in terms of a function mapping from the physical state of your computer to meaningful inputs and outputs*. I think it's easy for example to see how the things that happen to a neuron could be abstracted in a way that doesn't require direct physical modeling of the neuron.
Not only can we not assume this, nobody has even come close to showing that modeling consciousness is identical with consciousness.
This read to me like a cloaked version of the Chinese Room argument, so I went and started reading the wikipedia article for it. One interesting elaboration of the argument is Ned Block's blockhead argument, which assumes that any program simulating a mind on a Turing complete computer could be re-implemented as a set of simple if X, then Y rules defined across all possible inputs. Intuitively such a system cannot be conscious. I have to admit that that notion really makes it difficult for me to think that the modeling of consciousness and consciousness are by definition equivalent, even though I want to. So, thanks for complicating my understanding of the problem (not sarcasm).
* Intuitively it seems to me that that function is really only useful if it's monotonic for some ordering (I think isotonic is the word for that?), but this is the first I've really thought about the whole thing.
posted by invitapriore at 7:27 PM on August 26, 2010
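A sketch of that step 5 for the ball example, in Python (the potential and step sizes are made up): the physics is just a ball settling in a bowl, and only the interpretation map "resting position -> argmin" turns it into a minimizer:

def f(x):            # the "bowl": a potential whose minimum sits at x = 3
    return (x - 3.0) ** 2

def slope(x):
    return 2.0 * (x - 3.0)

def roll_ball(x, steps=1000, dt=0.01):
    for _ in range(steps):
        x -= dt * slope(x)   # overdamped motion: the ball creeps downhill
    return x                 # raw physics: just a final position

resting = roll_ball(-10.0)
# interpretation scheme: read "final position" as "the argmin of f"
print(f"the ball 'computed' argmin f ~= {resting:.4f}")  # ~3.0000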
In eastern philosophy the body is the 'vehicle' of the self (not the illusory personal self or persona). That would -include- the brain ... so the OP's question leaves out this option.
To use a computer analogy: parts of the brain are the 'OS', parts of the brain are the programming language that supports the apps & data you got from your culture and experience. In a computer you can't tell much of anything about the purpose of the programs that are running by watching the electron motions in the transistors. So the 'higher level' functions are 'running' in the brain but not really 'part' of it in any physical sense ... just patterns.
posted by Twang at 8:05 PM on August 26, 2010
One interesting elaboration of the argument is Ned Block's blockhead argument, which assumes that any program simulating a mind on a Turing complete computer could be re-implemented as a set of simple if X, then Y rules defined across all possible inputs. Intuitively such a system cannot be conscious. I have to admit that that notion really makes it difficult for me to think that the modeling of consciousness and consciousness are by definition equivalent, even though I want to.
A couple thoughts:
Consciousness is an online process. So instead of receiving all its input at once, then doing some processing, then spitting out all its output, the consciousness I experience (and I assume everyone else experiences) gets some input, produces some output, gets more input, produces more output, etc.
This could still be modeled by sending our blockhead lookup-table a huge vector representing all of the input it would receive throughout its "life", and then interpreting the huge vector that it spits out as all of the actions that it would generate throughout its "life".
But how could a suitable input vector ever be produced? The consequences of my actions feed back to me. I see my fingers moving to type this comment, and that is part of why I am typing this sentence, but my having seen my fingers move is only because earlier I had decided to type this comment.
So the input vector to such a lookup table would have to somehow account for certain aspects of what the output vector would eventually contain. One could conceive of a specialized program that generated feasible input vectors for a given blockhead lookup-table, but I would guess that such a program would be of similar complexity to a consciousness, and would not rule out that the possibility of consciousness in the combination of input-generator and blockhead-lookup.
posted by a snickering nuthatch at 8:15 PM on August 26, 2010
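A toy illustration of that feedback problem (Python, with entirely invented dynamics): the percept stream only exists interleaved with the action stream, so you cannot hand the agent its whole "life" of input up front without already simulating the agent:

def agent(percept, memory):
    action = (percept + memory) % 7   # stand-in for a blockhead lookup table
    return action, action             # act, and update memory

def environment(action, world):
    world = (world + action) % 11     # the world reacts to the action...
    return world, world               # ...and that reaction is the next percept

memory, world, percept = 0, 1, 1
for step in range(5):
    action, memory = agent(percept, memory)
    percept, world = environment(action, world)
    print(step, action, percept)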
Jpfed: Just thinking out loud, do you think that rules out the possibility of a Turing complete computer being able to model consciousness?
Another question, to which I suspect the answer is no, is could that continuous input and output you describe be quantized?
posted by invitapriore at 8:24 PM on August 26, 2010
Jpfed: Just thinking out loud, do you think that rules out the possibility of a Turing complete computer being able to model consciousness?
In the sense of "here's an input tape, Mr. Turing Machine, go produce me an output tape", my guess is that a Turing machine would only be able to simulate a consciousness if it were also simulating the environment that the consciousness was embedded in.
------
If we allow computing devices that take input and produce output right away, then I think that even if that device secretly was powered by a lookup table, it could be conscious. I have hands I could wave in that direction but I don't mean to monopolize, considering how little rigor I could bring to the table (TRIPLE METAPHOR COMBO!).
posted by a snickering nuthatch at 8:32 PM on August 26, 2010
Another question, to which I suspect the answer is no, is could that continuous input and output you describe be quantized?
I'm not sure what you mean by this- could you clarify? Going off of what I think you're asking, I think a consciousness could use discretely-sampled streams of input and output or continuous streams. I wouldn't attempt to draw a line between that which is sampled rapidly enough and that which is not; I do think that consciousness-like effects could be produced with discretely-sampled streams of input, and the degree to which the simulator "looks like" consciousness may vary smoothly with the sampling rate.
posted by a snickering nuthatch at 8:37 PM on August 26, 2010
Yeah, that's exactly what I meant. There could be something like the Nyquist–Shannon theorem for fidelity to analog consciousness, but I think that's putting the cart before the horse pretty severely.
posted by invitapriore at 8:47 PM on August 26, 2010
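In that spirit, the Nyquist point is easy to demonstrate numerically (Python, toy frequencies): sampled at 10 samples/s, a 9 Hz cosine is indistinguishable from a 1 Hz one, because the 18 samples/s Nyquist rate is not met:

import math

fs = 10.0  # sampling rate, samples per second
for n in range(5):
    t = n / fs
    fast = math.cos(2 * math.pi * 9 * t)   # the 9 Hz signal
    alias = math.cos(2 * math.pi * 1 * t)  # its 1 Hz alias
    print(f"t={t:.1f}s  9 Hz: {fast:+.3f}  1 Hz alias: {alias:+.3f}")  # identical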
As an example of a vacuous application of this process: you could, for example, say that your digestive tract is a "computer" -- food goes in, shit goes out, and the complex chains of chemical reactions execute the algorithm of determining what sorts of food become what sorts of shit. The analogy works: we have a clearly-defined thing, input, output, and processing, but we haven't learned anything here.
It sounds silly, but your description of digestion is analogous to DNA computing, in which chemical processing sorts molecules in order to accomplish some task.
In the case of digestion, your body's enzymes and acids sort out sugars and fats from indigestible materials, so that you can live. In the case of DNA computing, Adleman solved a small instance of the directed Hamiltonian path problem using gel electrophoresis and other standard bench techniques.
Both are useful "calculations" in their way, however odd it may seem to characterize them as such.
In the case of digestion, particularly, thinking about your body as a food calculator is highly useful when considering the effect of junk food on the system. "Garbage-in, garbage-out" is a common computing adage that applies here as much as it does in the digital world.
posted by Blazecock Pileon at 8:50 PM on August 26, 2010
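The shape of Adleman's procedure is worth seeing; here is an in-silico caricature of it (Python, toy graph) -- in the lab the "generate" step was random DNA ligation and the "filter" steps were bench separations:

from itertools import product

edges = {(0, 1), (1, 2), (0, 2), (2, 3), (1, 3)}  # a small directed graph
n, start, end = 4, 0, 3

candidates = product(range(n), repeat=n)  # "ligation": all candidate paths
answers = [p for p in candidates
           if p[0] == start and p[-1] == end           # right endpoints
           and all(e in edges for e in zip(p, p[1:]))  # every hop is an edge
           and len(set(p)) == n]                       # visits every vertex once
print(answers)  # Hamiltonian paths from 0 to 3: [(0, 1, 2, 3)]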
Also another misconception is that there is anything special about consciousness. There isn't. (I take the deterministic view anyway, that there is no such thing as free will). I think we're just automata. To me, any device with inputs and information storage and processing is conscious. A computer with a webcam is conscious.
I've often wondered if the sensation of being conscious is just an artifact of being a self-reflective intelligent entity, i.e. if any sufficiently intelligent device automatically becomes conscious. Or it could be that every object is conscious to some degree by fiat, as you suggest. It doesn't logically have to be the case though.
I think the main problem is that the sensation of consciousness that we all feel (and that is therefore indisputably a feature of the physical world) doesn't fit into any imaginable scientific theory (what could a full explanation of it possibly look like?). It seems to me to be something from "outside physics", inexplicable in terms of any kind of mechanistic model.
posted by snoktruix at 4:47 AM on August 27, 2010
charlie don't surf You know, I started reading this article and within the first few sentences I was having déjà vu. And I was right, I had read this before. This is a chapter from Dennett's book "Brainstorms," which was published in 1981. So this isn't exactly cutting edge, it's almost 30 years old. There has been an awful lot of neuroscience and cognitive science research since this was written; I'd guess that the bulk of current neuroscience theory has been created since this article was written.
To me, the age of it makes it more impressive rather than less relevant. It's a philosophical thought experiment about consciousness rather than a piece of research, and so its age doesn't necessarily count against it. The Chinese Room argument dates from about 1980, but it's still being discussed.
That said, I think Dennett might write it differently now - his later ideas (see Consciousness Explained) hinge on the idea that there is (most likely) no single place in the brain where consciousness 'comes together'. If consciousness is 'smeared' across a sizeable area of the brain, then one can argue that it is also 'smeared' across (shortish periods of) time. This then tends towards comparing consciousness to concepts such as 'centre of gravity' which are useful abstractions of much more complicated phenomena. This would give a slightly different answer to the question of 'where am I'?
Dennett's mission as a philosopher can perhaps be understood as providing us with new tools and frameworks for thinking about consciousness (which is a very tricky thing to think about, all told), and helping us to appreciate that if it ever does get explained, the explanation might be very counter-intuitive.
posted by memebake at 5:19 AM on August 27, 2010
I have taken some time off metafilter, as we all must from time to time, and vast numbers of additional comments have been appearing, which relate to my own earlier comments. I will make a few replies. First of all I want to comment on the statement from warbaby, "But sliding that assumption about brain=computer into the discussion without examination is probably the all-time nerd mistake." This would seem to relate to my own comments even though he does not mention me by name (or by pseudonym). I did not make those comments without examination. I have gone to extreme lengths to examine this issue, prior to posting my conclusions on this site. And it is not an assumption, it is a logical conclusion. Those who assume (like snoktruix above, who says "It seems to me to be something from "outside physics", inexplicable in terms of any kind of mechanistic model") that there is something supernatural about consciousness are the ones who are making assumptions. What has our collective experience over the past several centuries taught us? That everything in the natural world looks mysterious and will be given a bizarre supernatural explanation, until it is scientifically examined, at which time it turns out to have an understandable explanation based on the laws of nature rather than gods, demons, spirits, or other magical entities or phenomena. Lightning is not a weapon hurled at the earth from heaven by an irate deity, it is an electrical discharge. And consciousness is also a natural phenomenon, not a supernatural phenomenon.
A fundamental problem that we run into is the difference between the subjective and the objective. Subjectively, you are the center of the universe, from whence all things are observed, and your consciousness is a magical phenomenon which transcends all things in the natural world. Objectively, you are just an organism which results from and is subject to evolutionary processes. Truth, however, is objective. Subjectivity is just how you feel.
The question also appeared, much earlier in the discussion: if you are actually a pattern of information or a program, and that program were to be duplicated in another processor, which one is you? Subjectively, you are still you. Each of these programs would have a subjective sense of being an individual being. As for the legal or formal identity, the original is you, and the copy is someone else who has a bizarre resemblance to you. It's not really a paradox, it's just a peculiarity.
posted by grizzled at 5:46 AM on August 27, 2010 [1 favorite]
Those who assume (like snoktruix above, who says "It seems to me to be something from "outside physics", inexplicable in terms of any kind of mechanistic model") that there is something supernatural about consciousness are the ones who are making assumptions. What has our collective experience over the past several centuries taught us? That everything in the natural world looks mysterious and will be given a bizarre supernatural explanation, until it is scientifically examined, at which time it turns out to have an understandable explanation based on the laws of nature rather than gods, demons, spirits, or other magical entities or phenomena.
I'm saying I cannot see even in principle how consciousness could be explained with a scientific theory. Any such theory would have the form of some model with some rules (like all scientific theories). But such a model can give you no insight into the raw fact of your own consciousness - is that not obvious? By the way, I also believe that everything else (in the physical world) we currently regard as mysterious (quantum measurements, dark energy, how to reconcile gravity and quantum mechanics, etc.) will eventually succumb to a completely mechanistic, satisfying explanation, because we just have to find the right mathematical models. No mathematical model will help you one bit in understanding consciousness. Drugs might.
posted by snoktruix at 6:59 AM on August 27, 2010
Blazecock Pileon: DNA computing isn't a vacuous analogy, but note very carefully what the paper establishes:
- we understand the mechanics of particular chemical reactions involving DNA well enough to predict their effects upon the DNA
- with careful preparation of the "input" DNA we can encode into the DNA problems from an unrelated field (graph theory, here)
- with careful analysis of the "output" DNA we can interpret that "output" DNA as "answers" to the problems we "input"
- this isn't magic, and we can explain why chains of cause-and-effect we already understood allow us to perform certain computations with DNA
Adleman's not wasting our time: he shows you can use DNA as a computer (for a particular problem) and shows us how to do it; his conclusion isn't "DNA=computer" but the much more precise claim "if particular steps are taken one can use DNA as part of a system that performs computation".
By contrast: suppose we stop calling the digestive system the digestive system and instead call it the food computer. What does this tell us that we didn't already know from biology, biochemistry, and chemistry, etc.? We already knew that it separates some chemicals from other chemicals and we already knew that, eg, eating too much junk food has various detrimental effects, and we knew these facts before the era of the food computer.
What's a novel insight or nontrivial analogy this provides me? Where's the "punchline"?
Bort: notice that you've (perhaps accidentally) introduced a curious definition of "X-is-a-computer", which is essentially that "X-is-a-computer" is equivalent to "a computer as we know it can simulate X". Even leaving aside nuances in what it means to be able to simulate something, you probably don't actually mean that; it's a very odd meaning of is, since by this definition, eg, "a clock is a computer" and "a tree is a computer" and "a plane is a computer" and, remarkably, "a computer is a human" and "a computer is a brain".
So regardless of the rest of your points and conjecture about the possibility of simulating, eg, a brain, you almost certainly need to refine your notion of what it means when you say "X-is-a-computer".
invitapriore: your step 5 is one way of showing that a particular application of the "computer analogy" process isn't being performed vacuously; cf. the difference between Adleman's "DNA computing" (as linked by Blazecock Pileon) and the digestive tract. In general I agree with this: if you can't perform step 5 you're probably wasting people's time by calling something a "computer", b/c you're only adding (purported) synonyms, not new information.
However, your "step 5" on its own is not a good criterion if, also, you're inclined towards strict reductionist (or, to invent a term, towards "mechanicalism", meaning you don't believe in anything like souls or platonic realms or consciousness or other mystical claptrap).
It hinges on "abstraction": system X is "computation" if there's some abstraction its inputs, outputs, and processing can be understood to implement; or, to make that more concrete: "X is 'computation' if an intelligent agent could interpret its behavior as a mechanical realization of some abstract system".
However, "interpreting a physical system as a material realization of an abstract system" must itself be performed by a physical system behaving mechanically; the natural follow-up is "what mechanical operation can be performed that will distinguish mechanical process that 'interpret a physical system as a mechanical realization of an abstract system' from other mechanical processes"?
If you don't have a purely-mechanical criterion for distinguishing "computation" from "just physics" this leaves you in infinite regress (unless you resort to a get-out-of-jail-free card like a soul or platonic realms, etc.).
So although "step 5" is helpful in practice -- if someone can't perform step 5 they're wasting your time -- it can't resolve anything at the philosophical level without much more work.
Jpfed: you got to where I was wanting to go. Peter Wegner and Dina Goldin have been banging on this drum for a long time, without much uptake (cf here, for example: http://www.cs.brown.edu/people/pw/strong-cct.pdf or here if you prefer a more technical treatment: http://www.cse.uconn.edu/%7Edqg/papers/its.pdf ). I think part of this is that at least to my taste they are very strong thinkers but not necessarily good writers and certainly not the greatest at propaganda, even inside academia; outside of academia uptake is even slower and laggier.
posted by hoople at 7:01 AM on August 27, 2010 [1 favorite]
Suggest reading some Greg Egan sci-fi if you like this kind of stuff.... makes for some interesting thought games.
I've always analyzed a similar problem as the transporter paradox... assuming we have a device with two chambers that can somehow make an exact copy of you, down to every quantum state - and you make such a copy - which one would be you?
This leads to several interesting concepts.
It seems obvious that, from your perspective, you are still the original you. Stream of consciousness and all that. Anyone watching the experiment would agree as well, having seen you enter your little scanning pod and then leave it afterwards. You are still you in any normal sense of understanding that we have.
Now - the exact copy of you - although now you are obviously two different people having differing experiences in life, also feels like he's you. He is alive, has your memories, and as far as he is concerned, he seems to have been transported from one pod to another, but other than that, he's you.
If this experiment were done with no observers, and only one person left the room, then to the outside world, in all cases, either entity would pass any test of being "you".
Then we think of the brain and the stream of consciousness - what if we could build replacement parts out of more durable material? If I ask, hey, what if I built an entirely "new" brain out of this material and then just uploaded my brain state into it - then destroyed the old one. Would I still be me, or would I be dead? The answer seems to me to be that "I" would be gone from my point of view, but to the rest of the world, I would still be me.
Then, instead of one big change, think of it in small changes. What if we replace just one tiny piece of my brain with a manufactured analogue that behaves identically. Would I still be me? Seems like it.... but if we continue that process, bit by bit - it seems like I stay me, and retain my stream of consciousness.
In the end, rather than get all confused about it, I've decided that the reason these questions are hard to answer and seemingly paradoxical is that they are impossible - and we will likely never understand consciousness in my lifetime, so it's not worth worrying about :)
posted by TravellingDen at 7:37 AM on August 27, 2010
grizzled: you may have claimed to have engaged in careful consideration, but if you can't see why "the brain is a computer" doesn't follow from "science always winds up supplanting superstitious, supernatural 'explanations' with simpler, 'natural' explanations" without introducing a lot more supporting argumentation -- argumentation you've not even sketched, here -- the end product of your "careful" considerations probably isn't worth very much.
snoktruix: the larger issue with folk-reductionism as per grizzled is that if you follow the reasoning all the way through -- banish all non-physical considerations -- you lose the ability to discuss things like "truth" (and associated notions, like "objectivity" and "subjectivity"); the "truth" isn't a physical thing, and thus the sensation of "knowing the truth" is just another illusory byproduct of our lived experience (in a similar way to how, in a deterministic universe, there isn't really such a thing as "free will", but it remains a part of our lived experience and thus remains a useful conceptual shorthand for making sense of our lives, even if it isn't actually anything at all).
This doesn't, per se, mean we need to invite the supernatural back in, but it does show two things: (1) we seem to currently lack the conceptual structures we'd need to reconstruct most of our "helpful" ideas and shorthand (like "truth", "knowing", etc.) in purely mechanistic terms and (2) most of the more strident folk reductionists haven't actually thought through their own beliefs all that thoroughly.
posted by hoople at 8:08 AM on August 27, 2010
To hoople, yes, I can introduce a lot more supporting argumentation. I am sure I could write an entire book on the subject, and possibly I should do so, but if I do, it is not going to be posted in the form of a ridiculously long comment on this site. I often do post brief comments, because it is a virtue to be succinct. Brevity is the soul of wit, let us not forget. Sometimes I post very long comments (on this site or others) and I have been accused, several times, of being verbose, long-winded, and boring. But then, if my comment is too succinct, I am guilty of failing to provide sufficient supporting argumentation or detail by which my assertions can be better understood. So there is always this balancing act to perform. Even the very explanation I am making now, about why I have to perform a balancing act, will doubtlessly strike some people as being too lengthy (and furthermore, it probably is). Anyway, since you have asked, I will expand upon my reasoning.
I have tried to indicate why people are so impressed by the phenomenon of their own consciousness. People are born with a subjective viewpoint. It is built into our biology for very understandable evolutionary reasons. An organism that fails to protect its own interests, particularly as they relate to survival and reproduction, is less likely to pass on its genes, and will be selected against. Egocentrism is very successful as an evolutionary strategy. Subjectively, there is no question that to any given individual, that individual is the most important thing in the universe. Such massive importance then easily suggests that the consciousness of that individual must be very special, that it must be literally the only thing in the universe which is not susceptible to normal kinds of scientific explanations based on the laws of nature. Stepping back from this subjective viewpoint, there is no objective reason to come to those kinds of conclusions. An objective observer does not see anything magical going on in another person's consciousness; the observer just sees an organism that behaves in the way that evolution has designed it to behave.
Religion has, since time immemorial, served to buttress this subjective egotism. Even if the universe was created by God, and not by you personally, God created the universe for your benefit. You are vitally important to God, Who for some unknown reason is in need of your prayers, worship, and adoration. So you are very important. There exist other religions which go even further. If you were (heaven forbid) a Scientologist, you would believe that it is actually you personally who created the universe. You get to become your own deity. Some of my readers (who are not that familiar with Scientology) may find this an odd claim, given that there are billions of people in the world, yet we all appear to live in the same physical universe (even if we all have our own mental universes), and how could we all have created the universe, particularly since the universe seems to predate us? But that can be explained. First of all, we are all immortal "thetans" (the Scientological equivalent of souls or spirits) who have been around literally forever; we are reincarnated and unfortunately tend to forget our previous lives, so that the current life seems like the only one. Secondly, the creation of the universe is a group effort. We all create it merely by our agreement that it exists. (In Scientology terms, reality is agreement.) So we get a perfectly consistent and comprehensible explanation of how it is that we personally create the universe and are therefore our own gods. It is the last word in egotism. However, when you try to develop your supernatural abilities (or "OT abilities" as they are known in Scientology), it doesn't work. All objective tests fail. Only the subjective tests succeed. So Scientology turns out to be a potent method of self-delusion, and nothing more.
There are plenty of other examples of religious or occult practices which seek to develop supernatural powers in human beings, and claim to have successfully done so. Religions report all kinds of miracles. If you meditate well enough, you supposedly can levitate. Psychic powers abound. I once had an elaborate argument (in the pages of Analog magazine) about the oddity that in a country with thousands of professional psychics, nobody predicted the 9/11 disaster (or many other terrible disasters for which we should have received advance warning from some helpful psychic). The answer I was given was that there were lots of people who did indeed predict the 9/11 disaster, and who therefore, out of a premonition of danger, did not go in to work at the World Trade Center on that fateful day. Of course, there are people who do not go in to work on ANY given day, and I was not presented with any statistical evidence that the number of people who did not go in to work on 9/11 was greater than usual. But that's how these kinds of arguments go; you can always confuse the issue, fudge the evidence, and see psychic powers operating where none exist. There are a limitless number of tricks that people play, to their financial profit. Yet there is no actual evidence. It's all smoke and mirrors. I will add that I personally have tested a number of claims of the paranormal; this is not solely a theoretical discussion for me. I felt that it was worth testing. The results are negative, except for people who have a vested interest in finding positive results. And even then, WHY did they not issue a public warning about 9/11? Are they really that callous?
So if all we are left with in terms of the supposedly supernatural nature of human consciousness is the fact that it feels supernatural to you, then you have to face the fact that subjective reality is not the same as objective reality.
Even this expanded explanation may strike you as inadequate, but it is going to have to do. As always, I recommend the book "The Demon-Haunted World" by Carl Sagan, for those who would like to better understand the process of scientific criticism of mystical thinking.
posted by grizzled at 9:13 AM on August 27, 2010 [1 favorite]
grizzled: briefly, again, what you have written here supports the contention that "the brain is a deterministic mechanism, not the locus of supernatural activity", but, again, leaves the claim "therefore, the brain is a computer" a non sequitur.
posted by hoople at 9:21 AM on August 27, 2010
Brains have memories. They receive information from the senses and send out instructions to the muscles and organs, in the form of nerve impulses which contain information. It's all about information and the processing of that information. I really do not see how else we can describe the functions of the brain, other than as computation of some kind - once we accept that magic is not involved. Perhaps you can tell me what the brain is, if it is not a computer. And please don't tell me it's all a huge mystery. We can still describe the brain based on current knowledge, with the understanding that we will learn more about it in the future, and possibly will want to change our minds; science is not dogmatic.
posted by grizzled at 9:47 AM on August 27, 2010
I don't want to become annoyingly argumentative, but I have decided to reply to snoktruix's assertion that "No mathematical model will help you one bit in understanding consciousness." How exactly could you know such a thing? The most you can reasonably claim is that you have never seen any mathematical model which helps you one bit in understanding consciousness. You have no way of knowing what mathematical models will be devised in the future, which may shed light on this subject.
At the present time there is a lot that we do not know about how the brain works and how consciousness works, but nonetheless, it still looks like data processing to me.
posted by grizzled at 10:42 AM on August 27, 2010
Jpfed: you got to where I was wanting to go. Peter Wegner and Dina Goldin have been banging on this drum for a long time, without much uptake (cf here, for example: http://www.cs.brown.edu/people/pw/strong-cct.pdf or here if you prefer a more technical treatment: http://www.cse.uconn.edu/%7Edqg/papers/its.pdf ). I think part of this is that at least to my taste they are very strong thinkers but not necessarily good writers and certainly not the greatest at propaganda, even inside academia; outside of academia uptake is even slower and laggier.
posted by hoople at 9:01 AM on August 27 [1 favorite]
Thanks for pointing me to this! I double-majored in Psych and CS, and that meant that my CS coursework wasn't as deep as I would've wanted it to be. I never took a course that actually introduced the Turing machine formalism; even my algorithms class kind of skirted the edges of it. So I have these vague half-formed intuitions about Turing machines and computability; I was never sure how interactive processes were supposed to fit into the input tape/output tape model.
posted by a snickering nuthatch at 10:51 AM on August 27, 2010
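For anyone else with the same half-formed intuitions: the input-tape/output-tape picture really has no obvious slot for interaction, which is the Wegner/Goldin point. A toy Python sketch of the contrast, with all names and the little transition table invented purely for illustration:

    # Classic picture: the whole input sits on the tape before the run starts,
    # and the "answer" is whatever the tape holds when the machine halts.
    def run_tm(transitions, tape, state="start"):
        cells, head = dict(enumerate(tape)), 0
        while state != "halt":
            symbol = cells.get(head, "_")                # "_" marks a blank cell
            state, write, move = transitions[(state, symbol)]
            cells[head] = write
            head += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells)).strip("_")

    # Toy machine that appends a "1" to a unary string.
    APPEND_ONE = {
        ("start", "1"): ("start", "1", "R"),   # scan right past the 1s
        ("start", "_"): ("halt", "1", "R"),    # write a 1 on the first blank, halt
    }
    print(run_tm(APPEND_ONE, "111"))           # -> 1111

    # Interactive picture: inputs arrive while the process runs, and each
    # response can depend on the whole history so far -- no fixed tape up front.
    def interactive_counter():
        total = 0
        while True:
            total += yield total               # receive next input, emit running total

    proc = interactive_counter()
    next(proc)                                 # prime the coroutine
    for x in (3, 1, 4):
        print(proc.send(x))                    # -> 3, 4, 8

The first machine's behavior is a fixed function from input strings to output strings; the second has no such description unless you fold the entire interaction history into the "input", which is roughly the move Wegner and Goldin argue shouldn't be treated as innocent.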
This doesn't, per se, mean we need to invite the supernatural back in, but it does show two things: (1) we seem to currently lack the conceptual structures we'd need to reconstruct most of our "helpful" ideas and shorthand (like "truth", "knowing", etc.) in purely mechanistic terms and (2) most of the more strident folk reductionists haven't actually thought through their own beliefs all that thoroughly.
I think you're probably right on both counts. As for assertion (1), though, and forgive me if this is an easily invalidated notion because I'm really not too well-informed about philosophy of mind: isn't it plausible that we could bind abstract concepts to the neuron activity patterns that correlate with them in our thinking? This assumes the ability to meaningfully isolate those patterns, which seems difficult but not impossible to this total layman. If it were to happen, do you think that such a discovery would reconcile the conflicts between the ontological natures of the material and the conceptual?
posted by invitapriore at 10:58 AM on August 27, 2010
I really do not see how else we can describe the functions of the brain, other than as computation of some kind - once we accept that magic is not involved. Perhaps you can tell me what the brain is, if it is not a computer. And please don't tell me it's all a huge mystery.
Why not a mystery? The world is full of things you can prove false even where you don't know what the actual truth is.
A position based on "what else could it be?" isn't a logically (deductively) sound one unless you're working under the right circumstances in certain formal systems. When we're talking about consciousness, we're generally not. We're working in a universe that we have a half-decent working understanding of at best. It's still remarkably limited, and we don't have control over the premises/axioms -- and even if we did, (1) "what else could it be?" doesn't always get you the result you're looking for, since you can very frequently, even in these relatively tame circumstances, find that a result in question is false without knowing the result you're looking for, and (2) moving into the realm of formal systems might actually leave you with results indicating that the brain is doing something beyond conventional computation as we know it, as warbaby points out.
You find conventional religious explanations unscientific and unsatisfying. That's clear, but their entry in the explanation competition doesn't give the hypothesis that the brain works similarly to any existing technology a bit of additional merit. It has to stand on its own merits as to whether it seems to do all the things we observe brains doing, and then -- if you want it to be science, instead of a story which has exactly as much scientific footing as something as objectively unverifiable as an independent soul -- you have to be able to falsify your hypothesis via experiment.
If you're aware of proposed experiments of this nature, then it'd be pretty fruitful to examine and discuss them.
If not, though, it really does look like your argument centers around "What else could it be?" And I'd submit that while you're doing an inductive look back at history, you should also consider that the discovery of previously unknown things and the proliferation of new models and systems is as much a part of the narrative of progress you've invoked in this thread as the shift from supernatural explanations. Inductive logic should tell us there's often something else it could be.
posted by weston at 11:00 AM on August 27, 2010
I do not rule out the possibility that there is something going on in the brain, or something involving human consciousness which is fundamentally different than what happens in a computer. But there really is no objective evidence of that. There is just people's subjective feeling of their own specialness. Until we have some reason to think otherwise, it still seems reasonable to consider the brain to be an organic computer. You (weston) apparently think it's something else, but you haven't said what it may be or why the explanation of organic computer doesn't fit. I have said this before, but I'll repeat it. Brains have memories. They receive information from the senses and send out instructions to the muscles and organs, in the form of nerve impulses which contain information. It's all about information and the processing of that information. So, it looks like computation to me.
posted by grizzled at 11:10 AM on August 27, 2010
grizzled: back up a step: if you want to say that "X is a Y" you ought to be able to give a precise characterization of Y (or, at least, Y-ness), but you haven't done so yet.
What, to you, is a "computer"? How, on a purely physical basis, can I identify whether or not a particular physical system is a "computer"? What are the physical properties that distinguish, say, a laptop (which, I'm willing to wager, you would agree is a kind of computer) from a sack of potatoes (which, I'm also willing to wager, you don't believe to be a computer). What physical characteristics separate computers from non-computers?
Likewise: what, to you, is computation? How, on a purely physical basis, can I identify whether or not a particular sequence of physical events is "computation"? What are the physical properties that distinguish, say, the operation of a flight simulator (which we'd presumably both agree involves computation) from a landslide (which, I'm willing to wager, we would agree isn't computation)? What physical characteristics separate "computation" from "stuff happening"?
And: what, to you, is information? How, on a purely physical basis, can I identify whether or not a particular physical system "contains" information (and, additionally, where, physically, can I find that information)? What are the physical properties that distinguish, say, nerve impulses as containing information (which I think we'd agree on) from, say, a lightning bolt (which I'm willing to wager we'd agree doesn't "contain" information)? What physical characteristics separate physical systems that contain information from those that don't? (And, as a bonus: when some physical system contains information, where, physically, can I find that information?)
It seems your reasoning is: to be a "computer" something has to have "memory", receive "input" (which "contains information"), and then produce output (which also "contains information").
Without clearer definitions of the terms in quotes, that's a broad enough criterion to capture any physical system with an extended temporal existence; eg, why isn't it the case that the surface of the earth is a computer? It has memory (in the form of topography), receives inputs (in the form of precipitation), and produces outputs (in the form of runoff, infiltration, and evaporation). If that's not a computer, why not?
posted by hoople at 11:20 AM on August 27, 2010
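To make hoople's earth-surface example concrete, here is how trivially the loose memory/input/output criterion is satisfied, written as a deliberately silly Python sketch; the class and its numbers are invented and model nothing:

    # "Memory" is topography (stored water), "input" is rainfall, "output" is runoff.
    class Watershed:
        def __init__(self, capacity_mm=50.0):
            self.stored = 0.0                  # state that persists between inputs
            self.capacity = capacity_mm

        def step(self, rainfall_mm):
            self.stored += rainfall_mm
            runoff = max(0.0, self.stored - self.capacity)
            self.stored = min(self.stored, self.capacity)
            return runoff                      # output determined by input + memory

    basin = Watershed()
    print([basin.step(r) for r in (30, 30, 30)])   # -> [0.0, 10.0, 30.0]

Under the memory/input/output reading, this "computes"; so would the real hillside it caricatures. That is the force of the question: the criterion admits nearly everything.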
grizzled,
Well it seems like there are a number of ways of interpreting the statement "the brain is a computer"
a) The brain is a computer much like those we use today:
This is obviously rubbish, as has been pointed out by many commenters here. I don't think that anyone in this thread believes that, but this is what the layman understands by "the brain is a computer".
b) The brain carries out computations (in the most abstract sense of processing data) and is therefore a computer in the computer science / math sense:
This is obviously true. Unfortunately, the entire universe also carries out computations on its "input data", as does a ball rolling down a slope.
If we could reconstruct a brain from scratch that was physically identical to the one in my head it would obviously be conscious. The interesting question is whether we could construct a device that is physically different from a human brain but that would display the same behaviour on a "black box" basis. I don't know the answer to that, though I suspect that we can.
The final question is, can we construct such a device that is non-trivially different in "architecture" than the brain. In other words, genetically engineering chimp neurons and wiring them up into a human-equivalent brain doesn't count, because that is just duplication without necessarily deep comprehension. Like engineering around a patent while copying its central innovation.
I don't know the answers to these questions, but "the brain is a computer" is obviously true only in the sense in which virtually everything is a computer.
posted by atrazine at 11:36 AM on August 27, 2010 [1 favorite]
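atrazine's black-box question has a small-scale software analogue: two procedures with completely different internal architecture can be observationally identical, so input/output testing alone can never distinguish them. A toy illustration; both functions are invented examples:

    # Two architecturally different implementations, one observable behavior.
    def sum_iterative(n):
        total = 0
        for i in range(1, n + 1):              # loop with an accumulator
            total += i
        return total

    def sum_closed_form(n):
        return n * (n + 1) // 2                # Gauss's formula: no loop at all

    # Black-box testing cannot tell them apart.
    assert all(sum_iterative(n) == sum_closed_form(n) for n in range(1000))

Whether anything analogous is achievable at brain scale is exactly the open question atrazine poses.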
Actually, the planet Earth may be a computer, see the Gaia hypothesis. Sometimes this is formulated as a theory that the Earth is a huge living organism, but the hypothesis works just as well if we imagine the Earth to be a huge computer.
It is always possible, and sometimes it is even useful, to try to refine our definition of terms, yet you have observed the various cases in which you and I would agree about what information is. We do know what information is, even if I have not offered a definition. I am not using any unusual or altered definition; I mean what you mean. But I guess I will have to be more explicit. Information can be defined as an abstraction about either physical or intellectual objects; it refers to things and tells us something about things. For example, you see an apple on a table. The fact that there is an apple on the table is information. It is not necessarily important information, but it is information. If you were fooled into thinking that there was an apple on the table, when in reality it was just a picture of an apple (or a wax apple, etc.), then the belief that there is an apple on the table would be classified as inaccurate or false information. This information could be contained in either a brain or a computer (or both). It would be stored in different ways, but it would be the same information. The information could be encoded in the form of a visual image or photograph, or verbally, or even numerically if you had the right code; in terms of brain function it would ultimately be stored in the form of a nerve transmission pathway created by various nerve cells altering their electrochemical conductivity at specific ganglia. A lot of detail could be described about how this happens. But it still adds up to information.
posted by grizzled at 11:38 AM on August 27, 2010 [1 favorite]
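grizzled's claim that the same information survives re-encoding is easy to demonstrate in miniature: the apple-on-the-table fact as a structured record, as text, and as raw bits, each recoverable from the others. The particular encodings below are arbitrary choices, nothing canonical:

    fact = ("apple", "on", "table")                          # structured record

    as_text = " ".join(fact)                                 # verbal encoding
    as_bits = "".join(f"{b:08b}" for b in as_text.encode())  # numeric encoding

    # Round-trip: the bits decode back to the same sentence.
    decoded = bytes(int(as_bits[i:i + 8], 2)
                    for i in range(0, len(as_bits), 8)).decode()
    assert decoded == as_text == "apple on table"

Whether this notion of "same information, different substrate" transfers cleanly from strings to brain states is, of course, the point under dispute.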
The above comment by atrazine notes that it is obviously true that the brain carries out computations and is therefore a computer in the computer science/math sense. The same comment then concludes that to say that "the brain is a computer" is obviously true is not the case. To quote Robby the robot, this does not compute.
posted by grizzled at 11:42 AM on August 27, 2010 [1 favorite]
Ok, grizzled, let me put it another way. The sense in which a brain is obviously a computer doesn't necessarily tell us anything interesting.
It doesn't answer any questions about whether we can simulate it with any kind of artificial computer even in theory. What if an actual brain is the most compact way to simulate itself? (admittedly very unlikely given the constraints of our evolutionary history).
Maybe we're arguing at cross purposes here and we actually agree on this.
The argument that I often see is:
1) The brain is a computer
2) We can build computers
3) They get ever faster
4) Therefore one day we will be able to simulate the brain
This is Kurzweil in a nutshell. The argument relies on a confusion between the uses of the word computer in points 1 and 2. Again, maybe this isn't what you're saying at all and we agree.
posted by atrazine at 11:59 AM on August 27, 2010
What's a novel insight or nontrivial analogy this provides me? Where's the "punchline"?
Calling the body a food computer could well be useful when studying metabolism in an informational context, but there is no "punchline", nor any need for one. The point, as such, is simply to call things what they are, without ascribing special, supernatural, vitalist qualities to (for example) digestion, as is otherwise seemingly done here with the brain, by people who deny it is a computer, even if it is one made of fatty acids and ion pumps, rather than copper and gold.
posted by Blazecock Pileon at 12:41 PM on August 27, 2010
Blazecock Pileon: I've given up on grizzled. What, precisely, do you think a computer is? How, exactly, would one falsify the claim "X is a computer"?
posted by hoople at 12:44 PM on August 27, 2010
When I look at an FPP on the blue, how do I know you're not seeing it on the green?
posted by not_on_display at 12:47 PM on August 27, 2010
as is otherwise seemingly done here with the brain, by people who deny it is a computer, even if it is one made of fatty acids and ion pumps, rather than copper and gold.
I don't think people are doing this. I think that people are saying that the brain is a computer in the abstract sense of the word but that this is a trivial statement that doesn't do anything for us. Of course that doesn't make it incorrect.
posted by atrazine at 1:06 PM on August 27, 2010
How, exactly, would one falsify the claim "X is a computer"?
hoople: I can't speak for others, but consider the following.
I believe that some statements are made not because they eliminate alternative predictions or explanations, but simply to direct attention in a new way, to change the emphasis that we give different ideas, or to change the vocabulary we use to discuss a problem. Such a change of vocabulary could make further reasoning easier, in much the same way that a change of coordinates can make a multiple integral easier.
Some people believe that the action of the laws of physics is itself computation. If one holds this belief, it does not introduce new information to say that a particular object is a computer. But it may change the terms in which you think about or discuss that particular object. People that say "X is a computer" may then think that this change of vocabulary is useful, even if their statement isn't falsifiable.
The onus is then on them (assuming this is what they actually mean) to show what benefit is derived from the change of vocabulary.
posted by a snickering nuthatch at 1:09 PM on August 27, 2010
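The coordinate-change analogy has a classic concrete instance: the Gaussian integral, which resists direct attack in Cartesian coordinates but falls out immediately in polar ones. The standard computation, in LaTeX notation:

    I^2 = \left( \int_{-\infty}^{\infty} e^{-x^2} \, dx \right)^2
        = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} e^{-(x^2 + y^2)} \, dx \, dy
        = \int_0^{2\pi} \int_0^{\infty} e^{-r^2} \, r \, dr \, d\theta
        = 2\pi \cdot \tfrac{1}{2}
        = \pi, \qquad \text{so } I = \sqrt{\pi}.

Nothing about the problem changed; only its description did, and the factor of r supplied by the Jacobian is what makes the inner integral elementary. That is the sense in which a change of vocabulary can make reasoning easier without adding information.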
The point, as such, is simply to call things what they are [...]
Comparing brains to computers may or may not be a useful strategy in some contexts or others, but it is definitely not a neutral, value-free, tell-it-like-it-is approach.
posted by avianism at 1:15 PM on August 27, 2010
What, precisely, do you think a computer is?
It can be useful to describe something as a computer, when it processes information to some understandable end. Information can be described abstractly as any physical unit of storage or communication of symbols. The actual, nuts-and-bolts, physical representation of the information or the implementation of its processing is not really important to me.
How, exactly, would one falsify the claim "X is a computer"?
If the mechanistic chain of input, processing and output is reasonably defined, it can be useful to call X a computer if it meets those criteria. If those criteria are not met, it is probably not useful to describe X as a computer.
I'm reluctant to "falsify a word", because, in my opinion, words have to have some utility for their application to be reasonable. It is more reasonable to describe their utility, or lack thereof, in order to communicate a shared understanding.
Semantics are boring, I know, but to explain the point further, I also have a difficult time with saying that viruses are not life, which is the line that most biologists draw in the sand. I like to think that viruses are life (or at least close to it) because they contain genetic information which they process, while being dependent on other forms of life for doing so, even though nine out of ten other biologists I would ask would flatly disagree. Sometimes you have to work with the limitations of the words you have on hand.
I maintain that the brain (and similar neural antecedents in evolutionary history) is an organic computer, because I am not a superstitious fool, and it is useful to me to describe the brain by its obvious, repeatably observable functionality, which we are successful in abstracting by way of terminology and concepts that we use in information theory.
To return to your prior analogy, in some contexts, it could be potentially useful to think about food metabolism in information processing contexts. We could conceptualize loss of phytoplankton, for example, as having an exponentially catastrophic effect on organisms further downstream in the food chain "network". In the sense of modeling cause and effect, it could be useful to think of populations of organisms as glorified food computers, abstracting inputs and outputs in a network of producers and consumers. I would not be so bold as to say that this is the terminology that ecologists should choose to use, but it wouldn't be too inaccurate to do so, in my opinion.
posted by Blazecock Pileon at 1:18 PM on August 27, 2010
Comparing brains to computers may or may not be a useful strategy in some contexts or others, but it is definitely not a neutral, value-free, tell-it-like-it-is approach.
Then I would ask what is being asserted to be special about the brain, specifically what makes it unique enough that calling it a computer is incorrect. There is a vagueness here that I find difficult to understand, for the purposes of communication.
posted by Blazecock Pileon at 1:26 PM on August 27, 2010
Blazecock Pileon: we're getting somewhere at least.
I think you're trying to have your cake and eat it too, here.
Your attempt at an actual computer definition is circular, provided you think the brain is also a computer (which seems fair, since you say as much).
Using your formulation, for something to be describable as a computer it should process information to some understandable end. The idea of an understandable end intrinsically implies some kind of understander, or, at least, "if an understander were around and wanted to understand it then the understander could understand it" (whatever understanding and understanders may happen to be).
Without further clarification from you I'd have to take an understander to be, for the time being, "us (humans)" (and, presumably, sentient machines, men from mars, and other hypothetical "things that think like we do, or could at least pass the turing test, or whatever else suits your fancy").
And, presumably, for you the brain is the mechanism where understanding -- whatever that even is -- takes place.
What is the brain? Here we come full circle: for you the brain is a computer, so we're back where we started again, and your attempted definition amounts to "a computer is whatever a computer might consider useful (for computing)".
So that's no good as a stand-alone definition.
But, you say you aren't big on definitions and prefer utility, etc., in which case what's left is a dispute about aesthetics, mostly.
EG: there's nothing in your sketched approach to understanding the food network (in the ecological sense) that isn't already captured -- more precisely and with less external baggage -- by modeling the food web as a coupled system of stochastic differential equations. Calling it a computer in the blazecock sense seems pointless, adds no value, and uses up time and energy that could have been spent, for example, characterizing the change in dynamics of the system in response to, eg, the loss of plankton.
posted by hoople at 1:46 PM on August 27, 2010
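For the curious, here is a minimal sketch of what "modeling the food web as a coupled system of stochastic differential equations" can look like: Euler-Maruyama integration of an invented two-species plankton/consumer system. The equations and every parameter are made up for illustration, not taken from any ecological source:

    import random

    # dP = P*(growth - grazing*C) dt + noise_p*P dW1             (plankton)
    # dC = C*(efficiency*grazing*P - death) dt + noise_c*C dW2   (consumer)
    def simulate(p=1.0, c=0.5, dt=0.01, steps=5000,
                 growth=1.0, grazing=0.8, efficiency=0.5, death=0.4,
                 noise_p=0.05, noise_c=0.05):
        path = []
        for _ in range(steps):
            dw1 = random.gauss(0.0, dt ** 0.5)     # Brownian increments
            dw2 = random.gauss(0.0, dt ** 0.5)
            dp = p * (growth - grazing * c) * dt + noise_p * p * dw1
            dc = c * (efficiency * grazing * p - death) * dt + noise_c * c * dw2
            p, c = max(p + dp, 0.0), max(c + dc, 0.0)   # densities stay non-negative
            path.append((p, c))
        return path

    print(simulate()[-1])   # final (plankton, consumer) densities for one run

Set the initial plankton density near zero and the consumer term simply decays; the model answers the "what happens when the plankton go" question directly, with no computer vocabulary anywhere in it.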
What is the brain? Here we come full circle: for you the brain is a computer, so we're back where we started again, and your attempted definition amounts to "a computer is whatever a computer might consider useful (for computing)".
We are bipedal; we walk; we understand how it works well enough to make robots that approximate walking — possessing legs and an instinctual program for walking upright does not introduce a circular definition of walking that makes it impossible to understand how the process works, or impossible to call legs "walking implements" or some other functional equivalent.
I guess I would like to return to my previous question: What is special about the brain that makes calling it a computer incorrect? One person suggested not running an OS or having RAM or a hard drive, but I think we'd agree that this particular technical implementation is not specific to all computers.
posted by Blazecock Pileon at 2:52 PM on August 27, 2010
Then I would ask what it is being asserted that is special about the brain, specifically what makes it a unique enough entity that calling it a computer is incorrect. There is vagueness about this that I find difficult to understand, for the purposes of communication.
By some definition, a brain is a kind of computer. Correctness is not really the issue. The question for me is (as shakespeherian put it above): what does calling the brain a computer buy us?
It seems to me that in most contexts it buys us very little, and confuses things quite a bit. I can say of the computer on my desk that we know how it works, that its inputs and outputs are clearly defined, that it operates by some set of rules that can be conveniently represented. But, while the brain may be a computer, none of these things are true of the brain. Further, the brain does or is involved in the doing of all kinds of things that the computer on my desk can't do, like getting distracted, making mistakes, and so forth.
Most of the time, it seems like when people call the brain a computer, they mean to create the impression that these are not differences in kind, but simple differences of degree. That, once computers (of the sort on our desks) are fast enough, there will be little further difficulty in making them work more like brains (the ones in our heads). This picture is likely false; brains are intimately connected to and have evolved over millions of years inside bodies situated within environments, the precise forms of which are critical to their functional operation.[1]
And that's why, even though I acknowledge that in some sense the brain is a kind of (analog, massively parallel, embodied, precisely situated and evolved) computer (insofar as it computes stuff sometimes, anyway), calling it a "computer" is something I avoid.
[1] That is to say, modeling a brain within a computer means modeling all of that other stuff too, the body, the environment, everything. In short, modeling things which are as unlike a computer on your desk as things could possibly be.
posted by avianism at 3:14 PM on August 27, 2010 [1 favorite]
Although, it would be totally sweet to be all like "Let me take out my computer" and then to actually rip my brain out of my own skull through my nose and then slam it down on the desk, to like, prove a point in an argument or something.
I'd do that, if I could.
posted by avianism at 3:18 PM on August 27, 2010
I can say of the computer on my desk that we know how it works, that its inputs and outputs are clearly defined, that it operates by some set of rules that can be conveniently represented. But, while the brain may be a computer, none of these things are true of the brain.
This seems factually wrong, to me. There is variability and noise, but inputs and outputs are mostly understood, as I described in a previous comment. We don't fully understand all the processing, but we've learned a fair amount and are learning more at a faster rate.
Again, you're falling into the trap of thinking computer = "Windows PC", when we've already moved beyond that to a more abstract definition that encompasses the underlying meaning of computing, without referring to any specific "hardware" implementation.
posted by Blazecock Pileon at 3:35 PM on August 27, 2010
inputs and outputs are mostly understood, as I described in a previous comment. We don't fully understand all the processing, but we've learned a fair amount and are learning more at a faster rate.
I have a really hard time with this. It just doesn't seem to me like we have our arms around the problem of understanding the brain's input/output the way this implies. I'm no expert though.
You seem to say that we know what groups of knowledge there are to learn about the brain and we just have to go out and do it, while to me, it seems like almost every year new aspects are opened up that spawn whole new areas of research that we didn't know anything about before.
Going to bed now.
posted by Catfry at 4:06 PM on August 27, 2010
So, what if they slice the brain perfectly in half, and then have it communicate with the other half across the room? And what if they then hook it up to a cellular data plan and have half of the brain go on a museum tour, while the other half stays in the lab for safekeeping? And meanwhile his body gets a job as a flight attendant, going from state to state?
WHERE THE HELL IS HE?
posted by mccarty.tim at 5:54 PM on August 27, 2010
almost every year new aspects are opened up that spawn whole new areas of research that we didn't know anything about before.
That comes from believing that the brain (or any other aspect of the universe, for that matter) is a comprehensible object. If we believe the brain is innately incomprehensible, then there's no reason to study it and learn more about it.
So even if the belief that the brain is comprehensible is wrong, it's still the better belief.
posted by Jimmy Havok at 10:37 PM on August 27, 2010
atrazine has expressed the concern that when I describe the brain as a computer, I mean to imply that therefore human technology will eventually be able to build a computer that is the equivalent of a human brain, into which human minds can be downloaded as hypothesized by Kurzweil. Personally I consider that to be a far more difficult technical problem than Kurzweil thinks it is, and I have no idea if we will ever be able to build computers that have capabilities equivalent to those of the human brain (or whether it would be cost-effective to do so, given that human brains, as components of human bodies, can be made quite cheaply by unskilled labor). I will go so far as to say that it is at least theoretically possible, with sufficient research and development, to eventually build a computer that has the capabilities of the human brain. I see no reason why this should not be possible. Very difficult and very expensive, but possible.
There is a lot of continuing argument over why we might want to describe the brain as a computer and what we accomplish by such a description. I find it useful, because it gives us a way to understand the nature of the human personality, as a kind of computer program. Without such a model we are inevitably tempted to resort to mystical explanations about the transcendent and mysterious nature of human consciousness, and these, I believe, lead people astray. Beyond that, well, hoople has already announced that he (or she) is giving up on me, and the whole argument does seem to have gone on at excessive length.
posted by grizzled at 6:22 AM on August 28, 2010 [1 favorite]
Without such a model we are inevitably tempted to resort to mystical explanations about the transcendent and mysterious nature of human consciousness, and these, I believe, lead people astray.
I wonder if this is why my question remains unanswered.
posted by Blazecock Pileon at 12:12 PM on August 28, 2010
I don't want to become annoyingly argumentative, but I have decided to reply to snoktruix's assertion that "No mathematical model will help you one bit in understanding consciousness." How exactly could you know such a thing? The most you can reasonably claim is that you have never seen any mathematical model which helps you one bit in understanding consciousness. You have no way of knowing what mathematical models will be devised in the future, which may shed light on this subject.
Please allow me to try to explain my point of view. When scientists say they understand some phenomenon, they mean they have a theory (i.e. model), which may or may not be mathematical, which defines some framework for explaining the phenomena observed, making new predictions, etc. For example, we understand electricity and magnetism because we have Maxwell's equations. However, that doesn't mean we understand what the magnetic field actually "is". We just have a model of it that works well. We can't possibly reach out and probe the real thing, because we are limited to touching the phenomenon at a distance, through instruments, and filtered through our senses.
To understand consciousness, it is the second kind of understanding that is needed. We can make models of it as much as we like, but we are trying to grasp the reason for the sensation of awareness itself. It just seems self-evident that no model can do that. (Thought experiment: "In the year 2970, the mathematics of consciousness is finally understood and published in Nature. You read the paper, which has lots of equations and nice graphs. As you put the paper down, you realize that you now actually understand completely why you are conscious - there is NO mystery." Is this not patently impossible?).
However, I do think we might approach the point where our understanding of brain function is so complete that we in effect understand consciousness completely in the same way as we understand electricity and magnetism now, i.e. we have a model of (presumably some brain physiology) that is satisfying enough that we think we know what is going on, even though I claim we will never really know what our awareness is (just as we will never really know the answer to deep questions like what the universe is, or why there is something rather than nothing).
posted by snoktruix at 1:14 PM on August 28, 2010
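For reference, the kind of "model that works well" snoktruix describes really does fit in a few lines. These are Maxwell's equations in differential form (SI units):

$$
\begin{aligned}
\nabla \cdot \mathbf{E} &= \frac{\rho}{\varepsilon_0}, &
\nabla \cdot \mathbf{B} &= 0, \\
\nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t}, &
\nabla \times \mathbf{B} &= \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}.
\end{aligned}
$$

They account for every classical electromagnetic measurement, and yet, exactly as the comment says, they are silent on what the fields "are".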
I think I should be more specific, that the mystery is not that we cannot grasp "the reason for the sensation of awareness itself". The reason for it might be something simple like the idea that anything sufficiently intelligent develops awareness. The mystery is how to describe "the sensation of awareness itself". I know I feel it, it's real. I would love to have a deeper understanding of the structure of this thing we call awareness, if there is structure. But I can't imagine it being explained by a theory.
posted by snoktruix at 1:27 PM on August 28, 2010
But I can't imagine it being explained by a theory.
If we all thought like that there would be no science.
posted by Jimmy Havok at 1:56 PM on August 28, 2010
You misunderstand me. It's not that I don't want to. I just literally can't. Can you? If so, please tell me what the thing you're imagining looks like; I'm seriously interested.
posted by snoktruix at 3:18 PM on August 28, 2010
And like I said, I think something like "anything sufficiently intelligent develops awareness" doesn't count as a theory of consciousness. It doesn't explain why the awareness exists and what the hell it is. This phenomenon seems unlike anything else in science, being at the very core of our contact with the world. So I think this is not just another case of a mystery ("what are all those funny colored dots in the sky at night", or something) waiting to be cracked once we find the right approach. I could be wrong.
posted by snoktruix at 3:30 PM on August 28, 2010
Sorry to belabour the point. But by the way, I can imagine what a theory of quantum gravity might be like. It could for example be a theory which allows for a precise description of quantum fluctuations of spacetime geometry. I can easily imagine such mathematics being developed and a successful theory emerging (I think loop quantum gravity may be close to that description). Give me any other difficult problem in any science, and I guarantee you I can imagine what a possible solution might look like. Not so for this one problem, consciousness. So if everyone thought like me, science would be fine, thank you.
posted by snoktruix at 3:36 PM on August 28, 2010
Well, the first problem with discussing consciousness that I can see is that we don't really have a satisfactory definition of it. But I'd imagine a comprehensive theory of consciousness would be one that allowed us to create an algorithm for devices that easily passed the Turing test.
posted by Jimmy Havok at 5:26 PM on August 28, 2010
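To make Jimmy Havok's criterion concrete, here is a toy sketch of the imitation-game protocol in Python. Everything in it (the canned respondents, the judge's heuristic, the questions) is a placeholder invented for illustration, not a serious test:

import random

def human_stand_in(prompt):
    # Placeholder for a person at a terminal; gives varied answers.
    return "Speaking only for myself: " + prompt.lower() + " ...it depends."

def machine_stand_in(prompt):
    # Placeholder for the candidate device under test.
    return "That is an interesting question. Please go on."

def run_trial(questions):
    """One blind round: the judge questions A and B, then guesses which
    label hides the machine. Returns True if the machine escapes detection."""
    contestants = [("human", human_stand_in), ("machine", machine_stand_in)]
    random.shuffle(contestants)  # hide who is behind which label
    labels = dict(zip("AB", contestants))
    answers = {k: [fn(q) for q in questions] for k, (_, fn) in labels.items()}
    # Toy judge heuristic: guess that the more repetitive respondent is the machine.
    guess = "A" if len(set(answers["A"])) <= len(set(answers["B"])) else "B"
    return labels[guess][0] != "machine"

questions = ["Do you ever dream?", "What is seven times six?", "Why is that funny?"]
wins = sum(run_trial(questions) for _ in range(100))
print("machine escaped detection in", wins, "of 100 trials")  # 0 here, by design

In this toy version the machine never escapes detection, which is rather the point: passing for real would require answers as varied and situated as a person's.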
A little late to the party, but experiments to induce out-of-body experiences have been performed. The experiments are quite clever and the results very revealing.
Yes, we presented evidence that the brain, when tricked by optical and sensory illusions, can quickly adopt another human form as its own, no matter how different it is. We designed two experiments. In the first one, the researchers fitted the head of a mannequin with two cameras connected to two small screens placed in front of the volunteer’s eyes, so that the volunteer could see what the mannequin ‘saw’.
When the mannequin’s camera eyes and the volunteer’s head, complete with the camera goggles, were directed downwards, the volunteer saw the dummy’s body where he or she would normally have seen his or her own body. By simultaneously touching the stomachs of both the volunteer and the mannequin, we could create the illusion of body swapping.
The volunteer could then see that the mannequin’s stomach was being touched while feeling (but not seeing) a similar sensation on his or her own stomach. Thus, the volunteer developed a powerful sensation that the mannequin’s body was his or her own.
In the second experiment, we mounted the camera onto another person’s head. When this person and the volunteer turned towards each other to shake hands, the volunteer perceived the camera-wearer’s body as his or her own. The volunteers saw themselves shaking hands, experiencing it as though they were another person. The sensory impression from the handshake was perceived as though coming from the new body, rather than the volunteer’s own.
The strength of the illusion was proved when the volunteers exhibited stress reactions after a knife was held to the camera-wearer’s arm but not when it was held to their own. However, the volunteers could not fool themselves into identifying with a non-humanoid object, such as a chair or a large block.
posted by euphorb at 11:44 PM on August 28, 2010 [1 favorite]
In reply to snoktruix, I believe that I do understand your concerns, and I am going to attempt to give you a careful and detailed reply - not out of any obsessive effort to win an argument, but merely in an effort to be helpful. It is perfectly true that consciousness is a unique phenomenon in terms of how we perceive it - particularly since all of our perceptions of anything depend upon the fact that we have a consciousness which is capable of perception. But the unique nature of consciousness is an illusion, and one that I explained in a previous comment (the one addressed to hoople, who subsequently gave up on me). Our evolutionary background has endowed us with a sense of our own importance. This is so deeply established in the human mind that it becomes difficult to imagine that consciousness itself is just another biological device that has evolved to enable organisms to survive and reproduce more successfully. We sense a transcendent and mysterious nature to our own consciousness. It is the supreme phenomenon of the entire universe, the one thing that matters above all else, the one thing that engages our moral imperatives, and which commands our respect. That is what we evolved to feel. That is a subjective reality. Objectively, consciousness is a very sophisticated and clever biological strategy, and one that is as yet beyond the skill of our best artificial intelligence scientists and researchers to produce by artificial means, but it is not fundamentally different from any other phenomenon. It is still data processing. I know that this seems difficult to believe, and it is something that I struggled with myself, but that is the objective reality as far as I have been able to determine.
You tell me to consider this thought experiment: "In the year 2970, the mathematics of consciousness is finally understood and published in Nature. You read the paper, which has lots of equations and nice graphs. As you put the paper down, you realize that you now actually understand completely why you are conscious - there is NO mystery." Is this not patently impossible? And guess what? I see nothing impossible about that. I believe that it is entirely possible. I also believe that the full analysis of consciousness is a very technically difficult problem, and I expect that I personally will probably not live to see the solution to that problem. And perhaps it will never actually be solved. (Human civilization itself may collapse before it can advance to that level - I would guess that it is likely to do so.) But there is a solution, at least in theory, whether we discover it or not.
People (as I was explaining in my earlier comment to hoople) who believe in the fundamentally transcendent or supernatural nature of human consciousness also look for other supernatural phenomena to result from human consciousness, such as psychic power. And despite exhaustive searches, no actual psychic power turns up, only a vast series of fakes, deceptions, and delusions. All kinds of supernatural phenomena are claimed, and none of the claims pan out. I have investigated this personally.
Could consciousness have a transcendent nature anyway? Yes, in theory it could. But it's sort of like claiming (as many people do) that although we have no actual evidence of the existence of God, I believe in God anyway, so there. Occam's Razor militates against any belief that is not supported by evidence. If all the phenomena produced by consciousness can be explained by the workings of the laws of nature as we know them, then it is unnecessary to hypothesize that consciousness is nonetheless, in some manner, outside of or above those laws. All we have to tell us otherwise, remember, is our purely subjective FEELING of being special. And I accept that. I feel special and I am special. But that is subjective. Objective reality would suggest that I am an organism much like any other, and that everything about me fits into the known scientific understanding of how the universe works, and requires no mysterious transcendence.
You might want to ask, if we are all just organisms and consciousness is mechanistic rather than transcendent, why is my consciousness in the specific place where it is? Why should I be me instead of someone else? There is no objective distinction between me and another person, we are perfectly equivalent, yet subjectively, there is a HUGE difference between me and another person. It is a distinction of the utmost importance to me (even if it may be irrelevant to anyone else). How does this happen?
But that is evolution's trick, to make you think that you are special so that you will exert every possible effort to survive and reproduce. That's how it works, that is what your consciousness is designed to do. Subjectively you are absolutely special, objectively you are just another person. No doubt you are a good person; I am not implying any criticism here. I merely assert that there is nothing magical about the fact that you are yourself.
posted by grizzled at 10:05 AM on August 29, 2010
Also, the most recent comment posted by Blazecock Pileon asks why his (or her) question remains unanswered. At this point there has been so much discussion of this topic that I can't really keep track of what questions have been asked and which of them have been answered, nor am I sure whether this particular question was directed toward me or someone else. If you have a specific question that you would like me to answer, let me know, and I will see what I can do.
posted by grizzled at 10:12 AM on August 29, 2010
Actually the question probably was, what is it about a brain which makes calling it a computer incorrect, in which case the answer that might be offered by those who do not want to call the brain a computer is, the brain is special.
posted by grizzled at 10:14 AM on August 29, 2010
Again, you're falling into the trap of thinking computer = "Windows PC", when we've already moved beyond that to a more abstract definition that encompasses the underlying meaning of computing, without referring to any specific "hardware" implementation.
As I've said earlier in the thread, for my part I'm willing to accept that the brain is "machinery", or, similarly, a computer for expansive enough definitions of a computer. But by the time we get that expansive, it doesn't seem useful to talk about it anymore. Sure, you can look at just about any phenomenon you choose and see nothing but automata running on the substrate of reality. In that case, sure, the brain is a computer, but this stops telling anyone anything more useful than "what happens in the brain is physics," just like everything is.
what is it about a brain which makes calling it a computer incorrect, in which case the answer that might be offered by those who do not want to call the brain a computer is, the brain is special.
The brain is frequently observed doing things we have not observed computers doing. Even if it does turn out to be a computer, whether by the march of progress or by use of expansive definitions, it's unquestionably special as the computers we're familiar with go.
posted by weston at 12:37 PM on August 30, 2010
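The "automata running on the substrate of reality" line is less of a stretch than it sounds: even a one-dimensional cellular automaton as simple as Rule 110 is known to be Turing-complete (a result due to Matthew Cook), which is part of why expansive definitions of "computer" sweep in so much. A minimal sketch in Python; the starting row and step count are arbitrary choices:

# Rule 110: each cell's next value is looked up from the bits of the number
# 110, indexed by its (left, self, right) neighborhood. This one line is the
# whole rule, yet Rule 110 is known to be Turing-complete.
RULE = 110

def step(cells):
    """Advance one generation, treating the row as circular."""
    n = len(cells)
    return [(RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

row = [0] * 63 + [1]  # one live cell at the right edge
for _ in range(30):
    print("".join(".#"[c] for c in row))
    row = step(row)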
It's true, the brain actually is special. But it is still a computer.
posted by grizzled at 5:18 AM on September 1, 2010 [1 favorite]
"The point is not that the claim 'The brain is a digital computer' is false. Rather it does not get up to the level of falsehood. ...
The question 'Is the brain a digital computer?' is as ill defined as the questions 'Is it an abacus?', 'Is it a book?', or 'Is it a set of symbols?', 'Is it a set of mathematical formulae?'"
- John R. Searle
By the definitions some people here are using, a rock could be considered a "computer."
posted by mrgrimm at 11:04 AM on September 1, 2010
a rock could be considered a "computer."
Ah, good ol' geological automata!
posted by weston at 11:12 AM on September 1, 2010
To mrgrimm: no one in this discussion has called the brain a digital computer. I have called it an organic computer. And there is a huge difference in the data processing capability of a brain as compared to a rock. A rock can be a useful counting device, provided that you don't have to count beyond one. With a larger number of rocks you can drill holes in them, put them on strings, and make an abacus. With a single rock, there's not too much you can do in the way of computation.
posted by grizzled at 11:49 AM on September 1, 2010 [1 favorite]
Blazecock: your question is unanswered b/c life sometimes intervenes.
There's nothing I know about the brain that makes me disagree with this statement:
"The brain clearly can be used as part of a system that performs the same kinds of things the systems we call computers can perform."
...and there's nothing I know about it that would make me disagree with this statement:
"Provided enough access to various external resources (pencil and paper, say, and time) there doesn't appear to be anything a computer can be made to do that the brain can't be used as a part of a system to do, as well (provided enough time and access to external resources, like pencil and paper)."
Those are perhaps not so bold as "is the brain a computer?" but have the advantage of being precise: I know what I'm agreeing to when I agree with them, I know what they mean, and they very naturally suggest ways of testing their veracity if I was so motivated.
My objection to calling the brain a computer is that if we're to drop to that level of vagueness I'd need a correspondingly-precise definition of computer to know what I'm agreeing to, and you've not supplied that.
This isn't that difficult for most mechanical devices: a transistor has a natural characterization in terms of its input and outputs; levers and pulleys and engines and dynamos and transceivers and triodes and peltiers and so forth can be given definitions that make it easy to go out and determine if some unknown X is a lever or pulley or engine or dynamo or transceiver or triode or peltier, and don't depend on your already knowing what a lever or pulley or engine or dynamo or transceiver or triode or peltier is to properly apply said test.
Even walking can be defined non-circularly: "a mechanical apparatus can be said to walk when it has a procedure for manipulating lever-like sub-apparati in such a way as to induce translational motion of the entire apparatus".
You might quibble with whether or not that definition of walking is *entirely* satisfactory -- maybe it includes some edge case it ought to exclude, or excludes some edge case it ought to include, etc. -- but it's certainly not circular: interpreting it in no way requires already knowing what walking is, and understanding the external concepts it does reference does not in any way require you to already understand "walking", either. There's no mention of "walking", no mention of "walking-ness" or "walkers" or anything else.
Contrast with your attempts at defining a computer: I'll pick two from upthread, if I missed a newer or better attempt I apologize. Your attempts:
- Computers can come in all shapes and sizes, but ultimately the only requirement is that a computer computes.
- It can be useful to describe something as a computer, when it processes information to some understandable end.
The first is trivially circular unless you can supply a non-circular definition of what it means to compute.
The second is the same kind of circularity (it's actually less solid than I thought it was the first time I read it): to be able to make use of that definition requires you define information processing, etc., which (based on the context of this thread) is just another synonym for computing, which in turn remains undefined in terms of any more-clearly-defined primitives. (My earlier critique also applies but only if you are trying to make the leap to the more "cosmic" implications of the brain being a computer, etc., in which case it is even more vacuous).
To continue the brain == computer discussion I'll ask you either:
- (A) supply a non-circular definition of computer or computing, so I know with what I'm agreeing or disagreeing
- (B) supply at least one example of something that ISN'T a computer, with some explanation as to why that thing isn't a computer
In the interim: let me recall to you the famous joke about the engineer, physicist, and mathematician. They're all on a train through the Irish countryside and for all three of them it's their first time visiting Ireland.
They spot a cow.
The engineer says: "Look! Irish cows are brown!"
The physicist says: "Not so fast! All we know right now is that at least one Irish cow is brown."
The mathematician says: "Not so fast yourself! All we know right now is that at least one side of one Irish cow is brown."
You and grizzled are, at best, being engineers here; many of those who disagree with you are not disagreeing because they are silly mystics, against whom you both clearly have vast inventories of prepared counterarguments it gives you great pleasure to deploy (regardless of whether or not you've correctly identified your target: once an engineer, always an engineer); they are disagreeing because they are physicists, or mathematicians, etc., and see you making bold-but-vague claims that aren't justified on the evidence.
When you get called on it -- asked to define your terms, or to recognize you're drawing too strong a conclusion from the evidence at hand -- you respond with variants on the "Well, do YOU see any other cows around here? What color are they?", which sort of response gets tiresome to continue to engage with.
posted by hoople at 2:36 PM on September 1, 2010
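For what it's worth, the nearest thing the literature has to hoople's option (A) is Turing's own move: a thing "computes" when its behavior realizes a transition table over a tape. A minimal sketch in Python; the unary-incrementer table is an arbitrary example, not something anyone in the thread proposed:

def run_turing_machine(table, tape, state="start", blank="_", max_steps=10_000):
    """Execute transitions of the form (state, symbol) -> (write, move, next).
    Anything that realizes such a table can be said, non-circularly, to compute."""
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        write, move, state = table[(state, cells.get(head, blank))]
        cells[head] = write
        head += 1 if move == "R" else -1
    out = "".join(cells.get(i, blank) for i in range(min(cells), max(cells) + 1))
    return out.strip(blank)

# Example table: a unary incrementer (scan right past the 1s, append one more).
table = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}
print(run_turing_machine(table, "111"))  # -> 1111

Note that this only relocates the dispute: the question becomes whether a brain plus body plus environment "realizes" such a table, which is exactly the pancomputationalism worry that comes up further down the thread.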
invitapriore: it's a good idea and a good suggestion but you're kinda proposing the thing I was saying winds up putting you in a hard place philosophically (without some huge conceptual breakthrough not currently available). It sounds like a blow-off to say that I don't have time to write it up properly but the fact is I don't, so I'll sketch it and leave it at that.
The gist of it is say you follow through on that proposal, and bind abstract ideas to various neural activity patterns. At one level you have to do this anyway, if you're interested in the science: barring new developments that's clearly what happens, etc., so we would try and understand exactly what happens and how it works (and we might learn something in the process).
That's fine operationally but the (philosophical, not practical) troubles arise if you try to finish the reductionist / mechanicalist project; if you go that route then those neural activity patterns are all of our mental universe -- ideas, concepts, etc. -- that actually exists, so now you have the problem of trying to reconstruct the abstractions you bound to those patterns in terms of the patterns themselves.
This is very hard to even imagine doing, as an example think about even something basic like "7 is a prime number". It's easy to imagine some reverse-engineering project working out how the brain operates when it thinks about that (and arrives at the conclusion of it being true, and generates the sensation of recognizing it's true, etc.).
What's substantially harder to even imagine is a universe where there is no "7", or "prime", etc. (eg: 7 in the platonic sense, or the idea of "prime"): there are perhaps 6 billion brains walking around with neural activity patterns that are somehow representations of those abstract entities but the entities themselves have no existence beyond that; the embodied representations being like shadows cast by something that itself doesn't exist.
You might say "well, then we can define 7 and prime, etc., as the invariants of those 'shadows', or as some limit of the various partial representations carried around in people's brains, etc.," except that both of those proposals are either re-introducing intangibles (invariants, limits) or must themselves also reduce down to mere patterns of neural activity (as that's all there is, mentally), which defeats the purpose: we were trying to make sense of what it would mean for such abstractions to only "exist" as neural activity patterns, and pointing to other neural activity patterns as an explanation is not all that helpful right now (though, of course, with enough conceptual breakthroughs it might actually prove adequate).
It's solving this "flattening" issue that's the big issue: unlike most theories, taking a mechanistic view of the universe along with a mechanistic view of the brain basically means that trying to ground your knowledge is much, much harder than it used to be, and any proposal that purports to solve the issues it brings up has the unique challenge of having to be able to "explain itself", too (which requirement is both hard to deal with and very unusual; most theories work b/c they get to operate from "outside" the thing they describe and b/c they don't have to offer a true "grounding" for all knowledge).
That's already much longer than I care for.
The whole thing is kinda like those magic-eye things: once you see it it's hard not to see it, but some people can't see it, and until you do it just looks like noise. I'm not sure you're actually better off if you get it; it's abstruse enough to be irrelevant in just about every possible context.
posted by hoople at 7:29 PM on September 1, 2010 [1 favorite]
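As a footnote to the "7 is a prime number" example: the proposition itself can at least be stated without reference to any particular brain, which sharpens, rather than answers, hoople's question about where the abstraction lives. A trivial sketch:

def is_prime(n):
    """True iff n has exactly two positive divisors, 1 and n."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

assert is_prime(7)  # the fact itself, checkable by any substrate that runs this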
To hoople: although you have already officially given up on me, you nonetheless mention me in your above comment "you and grizzled are, at best, being engineers here". I therefore feel entitled to reply. Your elaborate comment above seems to have only one substantial concern, which is that no adequate definition has been offered for the term "computer". This seems like an unnecessary complaint to me, given that you and I and everyone else on this website know perfectly well what computers are. We are having this discussion by way of computers. Furthermore, I did give some detail in earlier comments as to exactly what it is that computers do. But to repeat, a computer is a data processing device; there is an input of data, there are a variety of ways in which that data is manipulated, analysed, utilized, evaluated, and so forth, which then results in an output of data. This is a very familiar and well understood process which we can clearly and easily see in the case of the computers which we are using to post these messages on metafilter, and which we can also clearly and easily see in the case of our brains, the organic computers which compose the messages that are then posted by means of our artificial computers. From your comment it would seem that the whole process is a mystery to you, explained only by means of useless circular definitions. The definition I have offered is not circular and explains computation quite clearly. There are, of course, endless numbers of details that I haven't explained, but the general outline is quite clear. I could go into greater detail to tell you more things that you already know, but it does not seem necessary.
You also complain about the great pleasure with which I deploy arguments against mysticism, as if this whole discussion were a pretext for me to harp upon my own personal obsessions which have nothing to do with the actual topic of discussion. Meanwhile, it remains true that those who believe that the brain cannot be understood as a computer have mystical alternatives and consider the brain and/or the human mind or "soul" to be essentially supernatural in nature. If the brain, mind, or supposed soul are not supernatural in nature, then there does not seem to be any other alternative but to consider the brain to be a computer and the mind to be a computer program, leaving the soul as a mere metaphor. I address the issue of mysticism only because it is relevant. The supposed delight that I take in doing so is imaginary. I would actually have preferred it if my simple and rather obvious statement that the brain is an organic computer had not been questioned or challenged, making it unnecessary for me to produce elaborate explanations.
I have also been challenged previously to define the term information, which I then did, even though, like the term computer, information is a term whose meaning is perfectly well understood by everyone who is participating in this discussion. Who knows what other words you will now declare to be undefined? Why have you provided no definition for the term "definition"? I would have no way to know what you mean, except that you and I both speak English and these are all familiar words whose meanings we know. But don't let that stop you. I'm sure there are lots more words that you can now declare to be undefined even though their definitions are easily found in any dictionary.
If you do not believe that the brain is a computer, and you do not believe that the functioning or consciousness of the human mind requires supernatural explanations, are you arguing that the brain is just incomprehensible and we know nothing about it? Because that clearly is not the case. We have lots of knowledge about the brain, it has been studied in great detail and much has been learned about it. There is still more to be learned, but that in no way invalidates what we already know.
It is possible to examine the human body and to observe specific functions for every organ or type of tissue. Bones have a structural purpose, to hold the body in a certain shape; they protect more vulnerable organs, and also have marrow in which blood cells are made. The liver is a chemical processing center. The kidneys filter waste products out of the blood. The blood is a transportation mechanism that carries a variety of needed substances everywhere in the body. The skin is an outer covering that keeps body fluids in and invading organisms out. And the brain is a data processing device. That's what it does. I fail to see why you need to argue about this simple and obvious statement.
posted by grizzled at 5:14 AM on September 2, 2010
grizzled: briefly, if I've misjudged you I apologize. I believe that if you take the time to go through your writing you can easily see how I might have that impression about you: count the number of sentences addressing, eg, the subjective-objective distinction, or the number of sentences discussing the beliefs of scientology, your search for supernatural phenomena, and so on -- and the often-recurring phrasings and so forth -- and I think you'll see how that impression might come out of your writing.
I'll refer you here: http://plato.stanford.edu/entries/computation-physicalsystems/ (Section 3 in particular) and then here: http://plato.stanford.edu/entries/information-semantic/ (entire thing); they go through these topics in more detail than I have time to prepare from scratch. You may find them all to be making much ado over nothing, you may find them interesting, but if you read them you will certainly see why some people find your simple and obvious statements neither so simple nor so obvious.
posted by hoople at 8:35 AM on September 2, 2010
I have looked at the material to which you referred me. It is true that there are many different ways to understand the concepts both of computers and of information, but it is also true that there are ordinary, normal ways to understand those concepts as well. I already discussed ways in which a rock could be used as a computer. And as the material to which I was referred mentions, there is a concept of pancomputationalism by which everything can be considered a computer - and of course, if everything is a computer, it is then entirely uninformative or trivial to say that a particular thing is a computer. However, rocks are not primarily used as computers (although you can carve beads out of rock, and use them in an abacus, or you can extract silicon from rock to use in circuit boards). Rocks are much more likely to be used as structural material than as computers. You would be much more likely to build a wall or to pave a street with rock, than you would be to use rocks to calculate your taxes. Similarly, brains can have other uses than as computers; for example, they are edible (and not just by zombies). If you happened to have lots of brains on hand and didn't know what to do with them, they could also be made into fertilizer, or fish bait. Nonetheless, it is the capacity of brains to think that is by far their most important feature. That is what brains are really for. And thinking is a form of computation.
Just as you might want to argue that everything is a computer, you could also say that everything is a weapon, because with sufficient ingenuity you can find ways to use anything as a weapon. Rocks, in particular, are used as weapons much more often than they are used as computers. In some Muslim countries, there are still executions by stoning. The brain, of course, is also a weapon, in the sense that an intelligently planned attack is much more likely to succeed than a badly planned or unplanned attack. But even so, brains are computers first, and weapons second. They are useful as weapons precisely because of their ability to think. If you took a brain out of someone's skull to use as a weapon, you could still throw it at someone, but you probably wouldn't do very much damage (other than causing a tremendous sense of revulsion). It is possible in theory to kill people by putting them in a tank and smothering them with thousands of brains. It would be a spectacularly inefficient way of killing someone. So I still think it makes more sense to regard the brain as a computer than to regard it as a weapon, or as an entree, or as an objet d'art (suitably preserved in a block of lucite) or any of the other uses to which it might be put.
Similarly, the reference you give regarding information does indicate that there are many different ways to understand the concept of information, but there is also a normal way to do so. I did not use either the words computer or information in any esoteric sense, as I have pointed out previously. When I say that there is information entering the brain by way of the senses and information exiting the brain by way of commands to muscles in the body, my meaning is perfectly clear. It is not necessary to ponder the various ways in which we can conceive of information.
I don't really need to count the number of sentences in which I discuss the issue of mysticism vs. science. I have to explain this multiple times because it keeps coming up. I really should only have to say it once, if this were a more efficient discussion, but by the nature of the bulletin board system, it is somewhat chaotic. If people ask me ten times to define a computer, I am not going to come up with ten different definitions. So, the discussion is somewhat repetitive. But that is not because of any obsession on my part. I am just trying to participate in a discussion.
Despite any complexity in our concept of what computers are and what information is, it is still simple and obvious that the brain is an organic computer which processes information. That is what brains are for, it is their biological function. Do they do something else? If you could levitate by the power of thought, then you would have gone beyond mere computation. But you can't.
posted by grizzled at 11:14 AM on September 2, 2010
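To make the pancomputationalism worry above concrete, here is a minimal sketch in Python (an editorial illustration; the rock's "states" and the mapping are invented, not anything from the thread): under an unconstrained mapping, any sequence of physical states can be relabeled as any computation you like.

    # The "simple mapping" worry behind pancomputationalism: if a computer is
    # just "a physical system whose state transitions can be mapped onto a
    # computation," then any system with enough distinct states qualifies.
    # (Hypothetical example: the rock states and the mapping are invented.)

    rock_states = ["s0", "s1", "s2", "s3"]  # four successive microstates of a rock

    # Declare, by fiat, that these states "are" the steps of computing 1 AND 1:
    interpretation = {
        "s0": ("input", (1, 1)),
        "s1": ("working", None),
        "s2": ("working", None),
        "s3": ("output", 1),  # the final state is labeled with the answer
    }

    for state in rock_states:
        role, value = interpretation[state]
        print(state, "->", role, value)

    # Under this gerrymandered mapping the rock "computed" AND; an equally
    # arbitrary mapping would make it "compute" anything else. Unless the
    # mapping itself is constrained, "X is a computer" is uninformative.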
no one in this discussion has called the brain a digital computer. I have called it an organic computer.
Well, that's interesting. Is this distinction you're making an admission that brains do something that digital computers don't do? If not, why is the distinction important?
you and I and everyone else on this website know perfectly well what computers are. We are having this discussion by way of computers.
Everyday experience as the working idea of what computers are has some problems, not the least of which is the one I've mentioned before (ie, in everyday experience, we regularly observe brains doing things we don't observe computers doing, we can't even get computers to effectively pretend to be conscious or self-aware in all but limited circumstances), but probably the larger one is that any kind of concept you can grab onto about what's actually going on inside collapses behind a wall of unknown complexity which you can be as mystical about as any concept of the soul. If we're being serious about things, it's probably better to have something at least quasi-formal.
The skin is an outer covering that keeps body fluids in and invading organisms out. And the brain is a data processing device. That's what it does. I fail to see why you need to argue about this simple and obvious statement.
The reason is that you appear to have taken the obvious part and stretched it into completely non-obvious territory. The brain does process data. Extrapolating from that statement to either the idea that (a) that's all it does or (b) it does it using mechanisms that bear any practical similarity to computers as we understand them is a lot like saying "Mechanical dishwashers wash dishes. Humans wash dishes. Therefore, humans are mechanical dishwashers! What else can they be?" It's true at a stretch of the definition of mechanical, but it's also reductionist to a point of blindness.
If you do not believe that the brain is a computer, and you do not believe that the functioning or consciousness of the human mind requires supernatural explanations, are you arguing that the brain is just incomprehensible and we know nothing about it?
Why do you think that skepticism over specific hypotheses (a&b in my paragraph directly above) implies a wholesale rejection of everything we know about the brain?
posted by weston at 1:14 PM on September 2, 2010
grizzled, I think we're done. You clearly don't understand the point of the discussion of the rock, for one, but more generally you're reiterating the same points in a way that makes me think there's not much point in continuing.
To put it in framing like you used, here's my recap of what's gone down here:
Subjectively, you look at the computer you're typing your responses on and feel very, very certain that you're looking at a computer.
Subjectively, you read some books about the brain and feel very very certain that the brain is a computer.
Objectively, your computer is a clump of matter arranged in a particular way. The laws of physics dictate that it reacts to various combinations of electrical impulses in various ways.
Objectively your brain is a clump of matter arranged in a particular way. The laws of physics dictate that it reacts to various combinations of electrical and chemical impulses in various ways.
Subjectively you feel both of these clumps of matter are the same kind of thing.
The obvious, anticipatable objection is that, objectively, they're pretty different: different materials, different structure (at any scale of resolution), different reactions to the same electrical impulses, and so on; objectively they mostly have in common that their weights are usually within an order of magnitude of each other;
You anticipate this objection -- the question of "what, objectively, is the same about these two things?" -- and claim that there's some sort of computer-y-ness that exists independent of any particular physical configuration, and that both brains and computers are computer-y enough to justify calling them computers.
You subjectively are very convinced that this computer-y-ness exists, that both systems have it, and thus that both systems are computers.
This isn't very convincing, so I ask if you have an objective definition of computer-y-ness; some objective criterion that can be applied so as to determine whether a system is a computer or is not a computer.
You reply that the important aspect of computer-y-ness is to be engaged in processing information, and that both devices clearly are engaged in processing information (regardless of any objective differences in their appearance, structure, and so on).
This is puzzling: I don't know where this information is, or what processing is, at an objective level. Objectively both systems are just clumps of matter, and objectively the laws of physics dictate that they react to electrical and chemical impulses in various (seemingly divergent) ways. Where, objectively, is this information? Where, objectively, is this processing? How, objectively, can I test whether a particular clump of matter or electrical impulse carries information? How, objectively, can I tell what information it carries? How, objectively, can I tell when a particular sequence of physical events is "processing"?
Your response isn't very helpful: you establish that information isn't something physical and isn't somehow physically "in" the electrical impulses. It, instead, seems to have something to do with the interpretations we might assign to those impulses, and so on. Likewise, presumably, for processing.
But now I'm stumped: that may have felt like a clear definition to you, but objective it ain't; the question of whether or not an electrical impulse carries information seems to reduce to how you feel about it; subjective, very subjective.
I ask for clarification and get reiteration of your earlier contentions, but without new evidence or new argumentation, and certainly without anything objective.
At this point I provide you some reasonable surveys of the current state of the debate on these same topics, which currently can be summarized as "don't feel too badly, you're in good company, no one has provided a workable objective criterion for computer-y-ness as of this time".
Your response is then "that's all well and good but that aside I really very strongly am convinced of this subjective judgment", and you continue to emphasize how clear, simple, and obvious it is to you (about which fact I wasn't in doubt). If you've solved the philosophical issues in those reviews you would be doing some people a big favor by sending your solutions to some of the authors of the referenced works (or, at least, those who are still amongst the living).
So we're at an impasse. It's been fun, good practice for a work-in-progress, but I think this is the end of the road; if we're going in circles may as well take a rest.
posted by hoople at 2:01 PM on September 2, 2010
weston: All I did was to describe the brain as an organic computer. Consequently, you demand to know why I think that the brain performs its functions by mechanisms that bear a practical similarity to computers as we understand them. I didn't say that. The computational mechanisms of the brain are clearly different than those of an electronic computer. That's why I described it as an organic computer, to make that distinction. A human being whose job it was to wash dishes could be described as a professional dishwasher, but would not be described as a mechanical dishwasher. You insist on altering my statements to make them appear ridiculous. A person who washes dishes would be a human dishwasher. A brain is an organic computer. A person is not a mechanical dishwasher and a brain is not a digital or electronic computer. I never said they were.
I am trying to figure out what skepticism about specific hypotheses means. We only have a few stated hypotheses about the nature of the brain. I regard it as a computer. Others consider it to be supernatural in nature. Others consider it to be incomprehensible. If there is a fourth hypothesis, no one has stated it yet in the course of this discussion. Feel free.
posted by grizzled at 2:04 PM on September 2, 2010
to hoople: Since you wish to end the discussion I naturally will do so - although I have noticed that people who want to end discussions with me always explain why I am wrong before they end the discussion, in order to get in the last word. The best way to win a debate is to prevent your opponent from speaking. If I reply, then I am violating your request to end the discussion, which you state at both the beginning and the end of your comment, for emphasis. So, I will end the discussion but first I have some final comments, which are absolutely the last I will make on this subject as long as you are actually done yourself. Then the discussion will end, if you allow it to end. Remember, don't reply. That's the key to ending discussions. You have to stop discussing.
The idea of viewing both electronic computers and brains as complex clumps of matter arranged in a particular way makes them both into generalized objects that cannot be distinguished from any other object. If this is a legitimate criticism of my description of a brain as an organic computer, then it would work equally well as a criticism of any description of any object. All objects become a generalized, indescribable aggregation of matter. If the objective is to make the world less comprehensible, this is a successful strategy. But you know, those clumps of matter do have distinguishing characteristics that we can discuss and understand (except that the discussion is now ending).
The fact that electronic computers and organic computers have very different structures does not alter the fact that they perform similar functions. That is the point. Computers compute. Regardless of how they do it, and regardless of what they are made of, they have a specific function. It is a function that both electronic computers and brains perform. Hence, there is a similarity.
You also mention that electronic computers lack consciousness. This is because they are not as advanced as a human brain. Computer science is, after all, a very young field. I don't know why it would be expected to achieve such an advanced level in mere decades. However, I see no reason to think that consciousness cannot be achieved in an electronic computer, given sufficiently good hardware and software, which can be produced by a sufficient amount of research and development. The insistence that consciousness is fundamentally different from any other form of data processing is a mystical belief, whether you admit it or not. What produces consciousness other than data processing? Is it magic? Don't answer that. The discussion is ending, remember.
OK, we're done now. Remember, don't reply. The discussion is over.
posted by grizzled at 2:25 PM on September 2, 2010
The computational mechanisms of the brain are clearly different than those of an electronic computer. That's why I described it as an organic computer, to make that distinction.
Which is actually part of my point. You seemed, at least for a moment there, to be making a distinction that amounts to an acknowledgement that whatever brains are doing, it's something different than digital computers. The questions that follow are: (1) what? If they are not both doing computation, what is the brain doing that the digital computers don't do? and (2) why is the term "computer" useful in understanding how the brain does the things it does -- particularly the things nothing else is observed to do?
We only have a few stated hypotheses about the nature of the brain. I regard it as a computer. Others consider it to be supernatural in nature. Others consider it to be incomprehensible. If there is a fourth hypothesis, no one has stated it yet in the course of this discussion.
It's largely irrelevant. While a strong competing hypothesis can be helpful to an argument, you don't need even a weak one in order to point out problems with or reject any of the others before you. But there has been a fourth possibility mentioned in thread: that the workings of the brain involve something else that's potentially comprehensible even if it's unknown at the moment.
posted by weston at 2:48 PM on September 2, 2010
Well grizzled, I thought we were done b/c we were going in circles but now we're not, as you're finally addressing some of the specific points I made previously, so we're actually having a discussion again.
The idea of viewing both electronic computers and brains as complex clumps of matter arranged in a particular way makes them both into generalized objects that cannot be distinguished from any other object.
This is both false on the face of it -- we can distinguish those two devices based on their chemical composition, as you yourself pointed out -- and it misses the point entirely, so I'll make the point explicitly.
Suppose that "computer" is something that has a solid, objective definition.
If that were the case the following ought to be possible:
- take something you know is a computer -- in this case, let's say a laptop -- and forget that it's a computer
- use your objective definition of computer to show that your laptop is, in fact, a computer (as you already knew)
If you can't do this then perhaps your definition isn't actually as solid and objective as you thought it was.
For most basic ideas it's not that hard to supply objective definitions that pass this test.
EG: consider "liquid". I propose that we can test whether or not some unidentified clump of matter is a liquid (at a particular temperature, of course) by transferring it between two differently-shaped containers (say, one round, one squared-off). I declare that we will judge our mystery matter-clump to be a liquid if we see in the course of our experiment that the volume our unidentified clump of matter occupies is approximately the same in both containers but the shape it takes is different in each container (and, approximately, the shape of the inside of the container in the occupied volume).
This works: I am extremely confident that if you presented me with a long list of unidentified clumps of matter and the outcomes of applying this experimental protocol to those clumps of matter that the ones this procedure identifies as "liquids" would in fact be things we already knew to be liquids.
Thus "liquid" passes this initial sanity check as to whether or not it can be given an objective definition (and a non-circular definition, too!). Most physical properties of interest can be given similarly objective definitions; the exceptions -- the things for which it is very hard to craft a definition that actually passes this thought experiment -- are almost always things that sound objective but turn out to be subjective upon a careful analysis.
What I've been trying to get you to do is supply something similar; some definition of computing that is non-circular -- none of this "Computers compute!" silliness, please! -- and also objective; at a minimum the definition ought to suggest some sort of procedures or tests or investigations I might make into a mystery lump of matter that'd ascertain whether or not the mystery lump is a computer.
You still haven't done this. Can you? Where is it?
In response to your discussion of electronic computers having consciousness: I don't think I ever claimed computers don't have consciousness or can't have it or anything similar; if I did I mistyped, though I suspect you are thinking of someone else.
If you're sure it's me can you quote back to me where I made any such claim? It'd help me address your concerns if I can see what you're responding to.
posted by hoople at 3:09 PM on September 2, 2010
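hoople's liquid test reads naturally as a decision procedure, which is one way to see what's being asked for. A minimal sketch in Python (an editorial illustration; the measurements, container labels, and tolerance are invented):

    # The two-container test for "liquid", written as an explicit predicate
    # over observable measurements.

    def is_liquid(vol_round, shape_round, vol_square, shape_square, tol=0.05):
        """Liquid iff volume is (approximately) conserved across containers
        while the occupied shape changes to match each container."""
        volume_conserved = abs(vol_round - vol_square) <= tol * vol_round
        shape_changed = shape_round != shape_square
        return volume_conserved and shape_changed

    # Water: same volume, takes each container's shape -> liquid.
    print(is_liquid(1.00, "round", 0.99, "square"))  # True

    # A brick: same volume, but keeps its own shape in both -> not liquid.
    print(is_liquid(1.00, "brick-shaped", 1.00, "brick-shaped"))  # False

The challenge posed in the thread is that nobody has produced the analogous is_computer(mystery_clump) predicate whose inputs are all observables.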
hoople: I am a bit surprised that this discussion is still taking place, but if your stamina is not yet exhausted, neither is mine. You are right that the statement about computers not being conscious was made by weston, not yourself, and as I was composing replies to both of you, I lost track of who had made that particular statement. I'm sorry for the error.
One of the problems of philosophical debate is that (as was pointed out by Wittgenstein) there are no perfect definitions of anything, and it is always possible to debate any given definition. It isn't always useful to do so, but it is always possible. There is lots of flexibility in our concept of what computers are or of what information is. But the terms are nonetheless quite understandable. If you go to a store and ask for a computer and they give you a coffee maker instead, you will be able to tell the difference.
One of the terms which has been the most elaborately, and often pointlessly debated is the definition of science fiction. I have had a few such debates on this site. Tremendously varying definitions exist. And yet, it is usually possible to determine if a given novel should be considered science fiction as opposed to some other category. Science fiction can be:
A serious attempt to extrapolate the future of scientific progress and its effects on society;
A literary motif that involves certain traditional concepts such as robots, spaceships, and time machines;
An allegorical process by which existing human problems are presented in a novel guise;
Purely a marketing category, determined by the placement of a given book in the designated section of bookstores (this is the definition used by Norman Spinrad).
And no matter which of the above definitions you use, or even if you prefer some other definition entirely, it remains true that a novel such as "The Naked Sun" by Isaac Asimov is classified as science fiction. We still know what it is. Computers are like that, too.
The unaided human brain is capable of working out intellectual problems such as performing mathematical operations including arithmetic, geometry, etc.; it is capable of composing poetry, planning the activities of the day, composing comments on metafilter, and so forth. Computers can also do many of these things. Now, as of yet the best poetry is still composed by the human brain rather than by computers, but computers compose poetry easily at the level of a popular rap singer, for example, and they also compose jokes (this was recently a story on metafilter as well). The mathematical prowess of computers is, of course, their particular strength. These similarities of function are not just a coincidence. Compared to your coffee maker, for example, or pretty much anything else you care to name, the intellectual capacity of either a computer or a human brain is incomparably better. You have told me that I am just saying that computers compute, but that is not what I am doing. I have explained the nature of data processing in at least some detail, and I am now going into additional details. We might imagine data processing to encompass a certain spectrum of activities ranging from the very simple, such as arithmetic, to the extremely complex, which would include the phenomenon of consciousness. You and I are both quite familiar with the activities of the human mind and the functions of a modern electronic computer, and I don't really have to exhaustively list them for you. The similarities between these two lists should be apparent. And even at the very simplest level, only computers and minds process data. Rocks really don't, even though rocks can, if you really want to, be used in building computers. An adding machine is a kind of simple computer. The nerve ganglion of an insect is also a kind of simple computer. Both of these do data processing, although the data they process is not the same kind of data. Different computers have different specialties. But it's all data processing of some kind or another.
You are concerned about having a way to determine that something is a computer. Hypothetically, we could in some circumstances have difficulty figuring out that something is a computer. Some eccentric inventor might create a device which works in a completely unfamiliar way, and if this device were to be inherited after the death of its inventor with no instruction manual or explanation, you might indeed have trouble figuring out what it is (and you might not even want to bother, but let's say that you do). A more extreme (and more improbable) version of this would be to come into possession of a computer that had been built on another planet by a non-human intelligence having nothing in common with the cultural history of the human race, and which may have devised entirely novel technological strategies. Something like that could present a tremendous challenge, and conceivably could require years, decades, or even centuries of study in order to figure it out. However, it could still be figured out, and if it actually is a computer, then at some point, once we finally figured out how to make it work, we would be able to discover that it does perform data processing of some kind. It might perform a kind of data processing that is different in some way from what other computers do. It might be designed for some unfamiliar purpose. Perhaps it is dedicated to calculating some parameter of faster than light travel requiring an alien spaceship technology of which we have no knowledge. Even if that is the case, we would still recognize this as a kind of data processing, and we would recognize that the device which does this is a computer.
An abacus is a good example of something that can be used as a computer but isn't obviously a computer. If a normal person had never heard of an abacus, and was simply given an abacus and asked to figure out what it could be used for, the overwhelming probability is that this person would not figure it out. But if an abacus were to be studied by enough intelligent people (none of whom had ever heard of an abacus), they would eventually figure out that it could be used as a kind of counting device to perform simple arithmetical functions. And any device you could construct, no matter how novel or how peculiar, would be susceptible to analysis eventually, given enough study. And if the device performs data processing of some kind, then it qualifies as a computer. So really, computers are identifiable. The function of a computer is much more complicated than that of a coffee maker, but it is still an identifiable function.
posted by grizzled at 5:34 AM on September 3, 2010
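grizzled's abacus point can be made concrete: bead positions encode digits, and a purely mechanical carry rule processes them. A minimal sketch in Python (an editorial illustration; the list-of-columns representation is invented):

    # An abacus as a simple computer: each column holds a bead count for one
    # decimal place, and carrying is a purely mechanical rule.

    def abacus_add(columns, digit, place=0, base=10):
        """Add `digit` to the column at `place`, carrying leftward.
        columns[0] is the ones column; entries are bead counts 0..base-1."""
        while place >= len(columns):   # grow the frame as needed
            columns.append(0)
        columns[place] += digit
        while columns[place] >= base:  # trade `base` beads here for one
            columns[place] -= base     # bead in the next column leftward
            abacus_add(columns, 1, place + 1, base)
        return columns

    frame = [7]                    # seven beads in the ones column
    abacus_add(frame, 5)           # 7 + 5
    print(frame)                   # [2, 1], i.e. 12

    abacus_add(frame, 9, place=1)  # add 90
    print(frame)                   # [2, 0, 1], i.e. 102

Nothing in the procedure knows it is doing arithmetic; the bead counts only count as digits under an interpretation, which is exactly the point at issue between the two commenters.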
Now I will reply to weston. You have some specific questions for me, which I will now cut and paste: The questions that follow are: (1) what? If they are not both doing computation, what is the brain doing that the digital computers don't do? and (2) why is the term "computer" useful in understanding how the brain does the things it does -- particularly the things nothing else is observed to do?
1. As far as I can determine, human brains and electronic computers are both engaged in data processing, but they do not always process the same data. The context of a biological organism introduces the issues of the survival and reproduction of the organism, which computers do not deal with. Computers do not take care of themselves, do not build themselves, and have no selfish concerns of any kind. Humans are in charge of all those things. But this is not a matter of a fundamental difference. Different programs give different results. Computers do what we program them to do (not counting bugs and glitches, of course). There is no reason in principle why a sufficiently advanced electronic computer could not be programmed to take care of itself, to build more computers like it, and to have a concept of self interest which it would diligently pursue, much like a self-interested person. Or it could be programmed to be altruistic. Many options exist. Whether an artificially intelligent computer will ever actually be built, we cannot say at this time. If computer science continues to advance then it is inevitable that artificial intelligence will be achieved. But computer science may cease to advance at some point. There is no guarantee that human civilization itself will survive indefinitely, or even that it will survive to the 22nd century. This could easily be the last century in which the human race will enjoy a technologically advanced civilization, before the great collapse. The world faces many very grave problems which seem to be building toward a huge crisis, which we may not survive. But aside from that, as long as research and development continue, we will have better computers and better computer programs, and eventually we will have artificial intelligence. At that point (if we survive long enough to get there) there will be absolutely nothing that the human brain does which cannot also be done by electronic computers.
2. The term "computer" is useful in describing what the brain does, because computers are the only things which resemble the human brain in terms of data processing capability. Originally, the reverse metaphor was used. Rather than describing the brain as a computer, we used to describe computers as electronic brains. The comparison works either way. The similarity is clear. I discuss this at greater length in my preceding reply to hoople.
You have also claimed that the workings of the human brain could involve something that is potentially comprehensible, but which is unknown at the present moment. That is, of course, theoretically possible. But given that all the functions of the human brain, including consciousness, are understandable in terms of data processing, it seems unnecessary to hypothesize that these functions involve something else that we simply have not yet identified. In this case we have to apply Occam's Razor. Nothing is accomplished by making the theory more complicated than it has to be. If, at some future time, we discover that there actually is something else involved, then we can revise our theories accordingly.
posted by grizzled at 5:59 AM on September 3, 2010
The past couple days I've been in a kind of limbo in real life, waiting on various things before I can move on. With any luck I'll be out of limbo by later today.
It is a bit hard to have a good conversation that winds up being N:1 when you're the one; you have commendable patience and grace, seriously.
I think in general belaboring the philosophical points is going to be a dead end, but I've located the fundamental place where we seem to disagree (perhaps irreconcilably).
I'll bring up two concepts, not that they're new to either of us but just to have terms for them. I think doing so will let me identify precisely where we seem to have an irreconcilable difference of outlook.
One is a notion of "levels" (of things to consider), which I'll illustrate by example: you're right that we can generally recognize what is or isn't science fiction, using criteria along the lines you laid out. Is determining what is or isn't science fiction something for which our theory of what the brain is, or what a computer is, etc., has any impact? I'd argue no: if we wanted to we could perfectly well continue to discuss whether or not that definition of sci-fi is accurate, inaccurate, limited, etc., and whether or not the brain is a computer would have nothing to do with any of it. So we can abbreviate that notion as saying that "questions of what the brain is or isn't, or what a computer is or isn't, are on a different level from questions about what is or isn't sci-fi".
The other is a notion of various types of judgments -- eg, aesthetic, or empirical, or legal, or mathematical, etc. -- which in general carry with them different ways of approaching the question "someone has arrived at a judgment J. Is J justifiable?". The correct interplay between the various types of judgments, etc., is obviously not a topic it's worth either of our time to address, but I'll point out that in general it's usually "not even wrong" to try to evaluate a particular kind of judgment -- eg an aesthetic judgment -- using methods usually used to evaluate a different type -- eg, an empirical judgment; if you say "such-and-such poem is a beautiful poem" and my response is "what objective criteria could I use to tell if that poem was even a poem, and not just a sack of potatoes" you'd be right to start ignoring me as some kind of nutcase.
The only other thing to add is that, in general, one of the things that tends to vary from level to level is the set of judgment types we know how to make, or that make sense to make, and so on. It's not worth going into in great detail, but part of what would make it so silly to bring "is the brain a computer?" into a discussion about, say, "what is or is not sci-fi?" is that for the most part it seems exceedingly unlikely that any of the types of judgments that go into the "is the brain a computer?" debate translate into sci-fi related judgments that're at all useful, and conversely none of the types of judgments that'd play into "what is sci-fi?" translate into "brain/computer" judgments that're at all useful.
That's a long rehash of things we both know, presumably, just to set up what follows.
The first place we seem to disagree is about this: within which types of judgment can we currently justify the claim "the brain is a computer"?
My position is that:
- that judgment isn't justifiable in any kind of strong sense, eg in some mathematical or physical or empirical sense
- the reason it's not justifiable in those senses is b/c -- contrary to most people's intuition -- we currently have no solid criterion that works in those systems and lets us separate clumps-of-matter-that-are-computers from clumps-of-matter-that-aren't-computers
- and the reason we don't have such a criterion is b/c the more basic ideas of "information" and "processing", etc., also don't have solid criteria that define information or processing at that level in a way that distinguishes it from plain-old physics (!) -- see the toy sketch just after this list
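To make that last bullet concrete, here's a toy sketch in Python -- my own gloss on the standard worry from the literature, with every name and number invented purely for the example:

```python
# Toy illustration of the "no criterion" problem. The naive criterion
# says: X is a computer if X's physical states can be mapped onto the
# states of some automaton as it runs. But any system with enough
# distinct states passes that test -- even a rock cooling on a sill.

# State trace of a parity automaton at t=0..3 while reading inputs 1,1,1:
parity_run = ["even", "odd", "even", "odd"]

# Arbitrary "physical states": four made-up temperature readings.
rock_run = [98.6, 97.1, 95.4, 93.0]

# The readings are all distinct, so a state-to-state mapping exists:
interpretation = dict(zip(rock_run, parity_run))

for t, temp in enumerate(rock_run):
    print(f"t={t}: rock at {temp} deg 'is in' state {interpretation[temp]}")

# By the naive criterion the rock just "computed" parity, which is why
# that criterion fails to separate computers from plain-old physics.
```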
So although there's certainly some type of judgment in which it'd be fair to say that "the brain is a computer" is justified it seems to me to be a fairly subjective type of judgment; not as subjective as "is this a beautiful poem" but certainly much more subjective than "is this substance a solid, liquid, or gas at temperature T?" or "is this a lever?" or "is this a camera?".
The philosophical literature backs me up on this assessment, for whatever that's worth: people have mostly tried and failed to find ways of identifying "computers" or "information" or "processing" in the sense you're trying to use them that work in the same kinds of judgment systems people use for math or physics or chemistry and so on, and as of now there's nothing that really works "objectively"; it might be resolved eventually but for now the best we can do is work in more-subjective ways that let us appeal to common sense and intuition.
And, no offense intended, you've not been able to solve this conundrum here; you've supplied arguments that work in whatever system lets us appeal to common sense and intuition but nothing that lets us drop down into the more-objective systems. Don't feel too bad, it's a hard thing to do apparently as no one's actually done it.
So this is the first thing where we have a difference of opinion, or at least of outlook, or are talking about different things: you've supplied adequate support for your judgment in an at-least somewhat subjective system, relying heavily on common sense and intuition and so on; I keep wanting to see something in a different sort of system, b/c in those sorts of system it's not at all settled (and this point isn't even controversial, really).
The follow-up question is then: is there a reason to prefer one system over the other when addressing the question of "is the brain a computer"?
I think there is, and it emerges from consideration of the consequences of accepting the claim "the brain is a computer" together with your other postulate (which I'll summarize as: "no supernatural, please").
Well, what happens if we take both of those claims as true very seriously, and reason our way through their implications?
One of the obvious consequences is that when we previously did things like separate out topics of inquiry into various systems of judgment and various "levels" (like we did for "what is sci fi" and "is the brain a computer", which seem to be on very different "levels") that's no longer entirely defensible:
- the brain is a computer obeying the laws of physics (eg: it's a computer, no supernatural elements)
- when someone decides whether or not something is science fiction, etc., they do so by using their brains to arrive at a decision
- the process by which that decision is arrived at is just software executing some algorithms on some input data
- ergo, we can provide a purely-computational characterization of what is or is not science fiction by characterizing the brain's activity in making the judgment
- and, b/c the brain's operation is entirely "mechanical", this means we can -- conceivably -- construct an entirely mechanical criterion to determine whether or not a particular clump of matter is in fact a science fiction novel
Until this process has been carried out it's hard to say what that criterion would look like, of course -- it might turn out to be essentially equivalent to 'simulate a person of our era -- including their upbringing, media environment, etc. -- and ask them if a simulated copy of the clump of matter is sci-fi or not' -- but it'd in principle be expressible in purely-physical, purely-mechanical terms, with no judgment or appeals to common sense required to understand the definition.
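If it helps to see the shape of the thing, here's a purely hypothetical Python sketch -- none of these functions exist, no one knows how to write them, and all the names are mine -- of what that mechanical criterion would have to look like:

```python
# Purely hypothetical sketch: only the *shape* that a mechanical
# sci-fi criterion must have if "the brain is a computer" and "no
# supernatural" are both taken fully seriously.

class SimulatedJudge:
    """Stand-in for a complete simulated person of our era."""

    def classify(self, simulated_object: object) -> str:
        # The whole open problem lives here: a mechanical account of
        # the judge's brain activity while making the judgment.
        raise NotImplementedError("exactly the part no one can write yet")

def simulate_person(era: str) -> SimulatedJudge:
    # Cashing out the brain-as-computer premise: upbringing, media
    # environment, etc., all reduced to state plus dynamics.
    raise NotImplementedError("requires the premise, fully mechanized")

def is_science_fiction(clump_of_matter: object) -> bool:
    judge = simulate_person(era="early 21st century")
    return judge.classify(clump_of_matter) == "sci-fi"

# Calling is_science_fiction(anything) raises NotImplementedError --
# fittingly, given the current state of the art.
```

Note the judgment hasn't been eliminated, just relocated into classify(); but on your two premises a definition of this shape has to exist, mechanical all the way down.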
The same for everything else: for poetry our aesthetic judgments are the result of algorithms executing on our brain-computers and thus characterizable as such; those in turn reduce to mechanical descriptions, etc., and thus aesthetic judgments, too, will at least in principle "flatten" into such lower-level, physical/mechanical characterizations.
And, in particular, the same for computer: if "computer" can be given a meaningful definition in a universe where there is nothing supernatural and the brain is also a computer, then it's obviously the case that there has to be some purely-mechanical characterization of what is or is not a computer, etc.; there's simply no way you can have those two claims be true and have a computer simultaneously be something that can't be given a definition in physical or mechanical terms.
Hence why I consider the common-sense system entirely inadequate for addressing this claim, at least without an apology of the sort "well, we really ought to have a physical characterization but it's beyond our grasp for the moment; I'm confident, however, that in the interim these common sense notions do an adequate job as placeholders for that eventual definition, and that whatever definition we eventually discover or invent will not substantially disagree with our common-sense notions": in the absence of that sort of apology it's the kind of approach that's a category error, in the same way that, in general, requesting a mechanical characterization of the beauty of a poem would be.
Which is where I think we have a bit of an irreconcilable difference: you come across as believing that not only is it adequate to stay in the realm of common sense when making such a fundamental claim, you don't seem to acknowledge (or be aware of, etc.) the implications of the current inability to work with the claim in other (arguably, I'd say, more appropriate) systems.
It's been fun. It's a good chance to try out some ways of presenting various material and seeing where the weak points are and so on, and thanks for putting up with me to this extent.
I'll pop by next week but not necessarily any sooner than that. FWIW I don't consider you so much "wrong" -- in general -- so much as being guilty of drawing overly-strong conclusions and inferences from the amount and type of data available.
posted by hoople at 9:36 AM on September 3, 2010
hoople: I am a bit surprised to read that you have found me to have commendable patience and grace, particularly since some of my comments have actually been somewhat impatient; in any event, you & I have both struggled through a rather lengthy and complicated argument, and I am glad that it has not resulted in any feeling of resentment, as lengthy and complicated arguments often do (I recently had a much briefer and simpler argument, which was perfectly courteous on my part, and which resulted in my receipt of a message accusing me of being a psychotic troll).
So, where does this leave us? You believe that I am drawing overly strong conclusions based on the available data. I am prepared to draw different conclusions if the data will warrant it. So far, despite a very elaborate debate involving you and others, notably weston, I still do not have any hypotheses about the nature of the brain which seem to be better in any way than my claim that the brain is an organic computer. If any better hypotheses can be produced and supported by legitimate observations and/or reasoning, I am perfectly prepared to accept them. So far this has not happened. Until it does, I will stick with what I have.
It is admittedly somewhat bizarre to imagine that there would be an algorithm that would, in a purely mathematical sense, settle a complex question such as whether a given book could be classified as science fiction, a question which can provoke complex debate among human readers. This problem arises because it is such a complex question, so complex that we could not presently hope to reduce it to a mathematical formula of any sort. We don't know enough to do that. But I still believe that in theory, with enough research, it could be done. Ultimately, even very complicated questions do have answers, and those answers can be described mathematically if enough work is invested in the analysis of the question (which, in the case of determining what is or is not science fiction, might never be a justifiable effort).
Despite the fact that there is a tremendous amount that we do not know about computers, or about the human brain, I do not apologize for my conclusion; I think it is remarkable how much we do know about these things, after a relatively brief period of a few decades of real research (following the millennia of superstition and ignorance which have characterized most of human history). I don't think that we should pretend, out of humility, to be more ignorant than we really are. Present knowledge does support the conclusion that the brain can be described as an organic computer.
Anyway, good talking to you, and perhaps we will have more to say about this later.
posted by grizzled at 12:40 PM on September 3, 2010
effugas: If I remove your arm, you are still you.
If I remove your heart, you are still you.
...
My favorite Paradox - The Ship of Theseus
I'm a bit late to the party, but, there are so many fun questions in this.
du: I don't know whether to ask for specific properties that the brain has that a computer does not in principle also have. Or not.
I absolutely agree with (what I take to be) the premise of this question and I think it's the heart of what this article is all about, but Dennett never mentions it. Humanzee pulls apart what makes the computer not a brain, but I think the question above is at a more fundamental level. Really this is about input/output, souls, and free will.
Given the exact same inputs into a computer (starting variables, programs, input, etc), you get the same outputs every time. The question is, is the brain like this as well - or is the brain somehow able to break free of its input and make decisions not based on the sum of current and prior input? This is what I take du's question above to mean (and it looks like hoople and grizzled go back and forth on this idea quite a bit, I look forward to reading their comments in more depth). Are we simply "expert systems"? Maybe this would be an entirely disappointing outcome for many, but it would at least mean that strong AI is possible in that we have an example in ourselves ;)
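The computer half of that claim is easy to demonstrate. Here's a trivial Python illustration (the function and its inputs are invented for the example): even a "random" choice repeats exactly when every input, seed included, is held fixed.

```python
import random

def make_decision(seed: int, options: list) -> str:
    """Even a 'random' choice repeats exactly given the same inputs."""
    rng = random.Random(seed)  # all the machine's state comes from its inputs
    weights = [rng.random() for _ in options]
    return options[weights.index(max(weights))]

choices = ["tea", "coffee", "water"]
runs = [make_decision(seed=42, options=choices) for _ in range(3)]
print(runs)                  # the same choice, all three times
assert len(set(runs)) == 1   # identical inputs -> identical outputs
```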
Dennett's exploration of the separation of the brain and body touches, I think, on the paradox that I link above and the concept of the Homunculus - the idea that there is a little "you" inside of your body, watching and controlling this machine. Even in separating the brain and body, the mind still tries to recreate this illusion. The distance of the parts from the whole makes no difference to that imagined self.
posted by mincus at 8:36 AM on September 5, 2010
I would like to mention that my very first comment in this lengthy discussion addresses the Ship of Theseus paradox (although not by that name). I mention that even if every cell in your body is replaced by a new cell, you would remain yourself. And this is very illustrative, I believe, of the true nature of the human personality, which is essentially information, rather than a physical structure. The physical structure stores the information and uses it.
posted by grizzled at 9:44 AM on September 6, 2010
As far as I can determine, human brains and electronic computers are both engaged in data processing, but they do not always process the same data.
Okay, so you're saying: different problem domains, but no other difference, right?
The context of a biological organism introduces the issues of the survival and reproduction of the organism, which computers do not deal with. Computers do not take care of themselves, do not build themselves, and have no selfish concerns of any kind. Humans are in charge of all those things. But this is not a matter of a fundamental difference.
Really? You genuinely don't consider these significant distinctions at all?
There is no reason in principle why a sufficiently advanced electronic computer could not be programmed to take care of itself, to build more computers like it, and to have a concept of self interest which it would diligently pursue, much like a self-interested person. Or it could be programmed to be altruistic.
One of these things is not like the others.
If I were given 10 billion dollars and held at gunpoint to make something that did everything you just described, I'd be confident that I'd have a rough idea of how to make a machine driven by state-of-the-art electronic computers that could monitor its own essential functions, that had some capacity to repair them, that could build more machines like itself, and could even act in accordance with altruistic principles as I'd define them. I'm assuming you would too. These would be tricky engineering problems, but the theory needed to do these things is stuff I'm roughly familiar with.
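To be concrete about how mundane that engineering half is, here's a toy Python sketch of the monitor-and-repair loop -- every subsystem name and threshold below is made up for illustration:

```python
import random

# Toy sketch of the uncontroversial engineering half: a machine that
# monitors its own essential functions and triggers repairs.

health = {"power": 1.0, "cooling": 1.0, "actuators": 1.0}
FAILURE_THRESHOLD = 0.5

def read_sensor(name: str) -> float:
    """Stand-in for real telemetry: each check shows some wear."""
    health[name] -= random.uniform(0.0, 0.3)
    return health[name]

def repair(name: str) -> None:
    health[name] = 1.0  # stand-in for an actual repair routine

for tick in range(5):
    for name in list(health):
        if read_sensor(name) < FAILURE_THRESHOLD:
            print(f"tick {tick}: {name} degraded, repairing")
            repair(name)

# Nothing in this loop requires the machine to *have* any concepts at all.
```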
But the one I left out? Have a "concept" of self interest? Have any concept at all? I'd have no idea. If it meant my life, I'd pretend to, I'd emit an ink-cloud of remote possibilities: Hofstadterian thought experiments and discussions of higher order logic and hand-waving about massively parallel circuits and emergent properties of complex systems, and meanwhile... I'd be trying to pull a Tony Stark and build the device which would enable me to get away. Because despite some at least rudimentary review of the literature, I'm completely unaware of anything that seems credibly promising -- let alone demonstrable -- as a method of building a machine that displays behavior that even suggests it has experiences and concepts and awareness. Are you?
In the scenario I'm giving, would you try to build the self-aware machine, or would you try stealthily building escape devices? Would you consider this essentially an engineering problem? If so, what specific theories about self awareness would you be calling on in order to create your results?
2. The term "computer" is useful in describing what the brain does
Everything the brain does, or some things?
because computers are the only things which resemble the human brain in terms of data processing capability. Originally, the reverse metaphor was used. Rather than describing the brain as a computer, we used to describe computers as electronic brains.
And that's OK as a limited metaphor for central control of an electronic system, but beyond that, it breaks down in the original usage for the same reasons it breaks down now.
I don't think that we should pretend, out of humility, to be more ignorant than we really are.
Out of curiosity, which character in hoople's joke about Irish cows do you think has the right of things? Would you argue that the mathematician or the physicist are pretending to be more ignorant than they really are out of a kind of overwrought humility?
posted by weston at 6:55 PM on September 6, 2010
weston: You have an objection to the idea that consciousness could result from mere data processing, no matter how sophisticated the data processing may be, although you apparently do not have any suggestion as to what the source of consciousness is, other than data processing. You would rather hypothesize that there is something which is as yet completely unknown which produces consciousness. As I have noted before, I wouldn't completely rule that out, but until there is some actual evidence for that hypothesis I have to consider it to be unconvincing. But to reply in detail:
I never suggested that full artificial intelligence could be achieved with a research budget of ten billion dollars, a figure which you seem to have chosen arbitrarily. Some technical problems cannot be hurried; money alone does not guarantee scientific or technological breakthroughs. Some things take a lot of time and thought and inspiration, as well as funding, although of course, funding makes everything else easier. It would be impossible to predict how long it would take or how much money would have to be invested in order to develop artificial intelligence. I am merely asserting that it is possible in principle to develop artificial intelligence because intelligence as we know it, organic intelligence, results from an organic computer better known as the brain, and the functions of the brain therefore can be reproduced by artificial means, with sufficient technological progress.
We have seen in the history of science that biological processes can be done artificially. There was a time (as you know) when it was believed that no machine could ever fly; only nature is capable of producing flight. Now we have machines that fly faster than any bird. There are endless other examples of biological functions which can now be done artificially, even functions of the human body, such as the artificial heart or the kidney machine. The barriers between nature and artifice are not as absolute as we once thought. I see no particular reason to think that the brain is any different.
I said that there was no fundamental difference between what brains do and what artificial computers do, and your response is to rephrase my statement, asking if I really believe that there is no significant difference. Yes, there are significant differences. But there are no fundamental differences. Brains and electronic computers are obviously very different. But as different as they are, they both engage in the same activity, which is data processing. There are huge differences, and there is a fundamental similarity.
Electronic computers do not, at this time, do everything that the brain does, obviously. Artificial intelligence has not yet been created, and perhaps it will never be created; I have observed before that I do not feel any great confidence even that human civilization will survive long enough to be able to make that much technological progress. The challenges and dangers of the 21st century are very severe. I merely note that such progress is possible in principle, given enough research and development.
You have asked if I am aware of a method of building a machine that would display behavior suggesting that the machine has experiences, concepts, and awareness. Obviously we have not yet reached that level; however, we have definitely moved in that direction. All the technological sophistication that we have now achieved brings us closer to that goal. A hundred years ago we would have had no technology that had any direct bearing on the project of artificial intelligence; today we have vast amounts of technology that appears to be taking us in that direction. Ah, but what if we really aren't going in that direction? What if we never really are able to figure out how to make computers conscious in any way? How can I know what the future of technology will bring? Again, I am speaking about basic principles. If consciousness is produced by data processing in the brain, then in principle it can be produced by data processing in a machine. And if consciousness is not produced by data processing in the brain, we need to have a better candidate to explain what causes consciousness. And if there is no better candidate, then I'm going to stick with the one I have.
And now, at your request, we return to the Irish cows. To refresh your memory and mine, here is the original post:
In the interim: let me recall to you the famous joke about the engineer, physicist, and mathematician. They're all on a train through the Irish countryside and for all three of them it's their first time visiting Ireland.
They spot a cow.
The engineer says: "Look! Irish cows are brown!"
The physicist says: "Not so fast! All we know right now is that at least one Irish cow is brown."
The mathematician says: "Not so fast yourself! All we know right now is that at least one side of one Irish cow is brown."
So, what does this mean in terms of my suggestion that we should not pretend to be more ignorant than we really are? If I went to the Irish countryside and observed a cow that was brown on the side I was looking at, I would consider it to be a reasonable hypothesis that the cow was actually brown on both sides, given that cows usually do have a uniform coloration, and I would furthermore hypothesize that this cow was probably typical of cows found in Ireland, although I would certainly not rule out the possibility that there were other kinds of cows in Ireland, such as the familiar black and white colored cow. It is technically true that all I actually saw was one side of one cow, but that is not really the limit of the conclusions that I can reasonably draw, and it would be foolish of me to pretend that it was. My observation is not a strong proof of the hypothesis that Irish cows are brown, and there is some finite possibility that I just happened to stumble across an atypical cow, perhaps the only brown cow in Ireland, and who knows, perhaps a bizarre mutant cow that is brown on one side and purple on the other (although no such cow has ever been seen, to my knowledge). But the odds are against it.
Science draws generalized conclusions all the time. If I test a variety of objects to observe the rate at which they fall, at some point I can produce an equation that describes the force of gravity, even though I have only examined a very small fraction of the total number of objects in the universe. If five billion rocks are observed to fall in a certain way, maybe the next one I test will fall differently, but I doubt it. So, seeing one brown cow leads to conclusions about cows in general. If I continue my tour of Ireland and I get to see more cows, and more sides of more cows, my data improves and I will have more confidence in my conclusions. Although in the end, my conclusion may be exactly the same, that Irish cows are brown.
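As a concrete miniature of that process, here is a small Python example (the drop measurements are fabricated for the illustration, with a little noise added) that recovers the constant in d = (1/2)gt^2 from a handful of observations -- the same inductive move, in microcosm, as generalizing from one brown cow:

```python
# Miniature of generalizing from observations to an equation: recover
# g from a handful of (time, distance) drop measurements, using the
# model d = (1/2) * g * t**2.

drops = [  # (seconds, metres) -- fabricated, lightly noisy data
    (0.5, 1.26),
    (1.0, 4.87),
    (1.5, 11.08),
    (2.0, 19.71),
]

# Least-squares fit with one unknown: g = sum(d*t^2) / sum(0.5*t^4).
numerator = sum(d * t**2 for t, d in drops)
denominator = sum(0.5 * t**4 for t, d in drops)
g_estimate = numerator / denominator
print(f"estimated g = {g_estimate:.2f} m/s^2")  # ~9.8 with this data
```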
So, the point is, we do have quite a bit of knowledge about the mind, the brain, consciousness, computers, and data processing; these fields are not entirely mysterious and incomprehensible. And the existing state of knowledge does lead to certain conclusions. I am not going to pretend that just because I don't know everything, therefore I don't know anything.
posted by grizzled at 5:13 AM on September 7, 2010