Dave, You're Killing Me Dave
December 13, 2009 3:19 PM
Can robots feel human emotions?
"Hal, switch to manual hibernation control."
"I can tell from your voice harmonics, Dave, that you're badly upset. Why don't you take a stress pill and get some rest?"
"I'm sorry, Dave, but in accordance with special subroutine C1435-dash-4, quote, When the crew are dead or incapacitated, the onboard computer must assume control, unquote. I must, therefore, overrule your authority, since you are not in any condition to exercise it intelligently."
"Hal," said Bowman, now speaking with an icy calm. "I am not incapacitated. Unless you obey my instructions, I shall be forced to disconnect you. previously
Considering humans don't have emotions, it's rather unlikely they'll be able to build machines that have them.
posted by GeckoDundee at 3:32 PM on December 13, 2009 [3 favorites]
Besides the effects, I thought that the only halfway decent thing about the I, Robot Will Smith movie was the main character's motivation for hating robots and all things that were high tech -- that (POSSIBLE SPOILER) a robot had saved his life instead of a nearby young girl's in a car accident, as the robot coolly made a calculation that he could save one and not the other, and Will Smith's character had a slightly higher chance of survival. A somewhat interesting case -- that he faults the robot for not having sufficient emotion to recognize that the young girl's life has more "worth" than the man's, and so, despite the odds, the robot should have tried to save her.
posted by Cool Papa Bell at 3:36 PM on December 13, 2009 [4 favorites]
the only halfway decent thing about the I, Robot Will Smith movie
Unless you are a robot, I bet you feel really weird after praising I, Robot like that.
posted by Artw at 3:39 PM on December 13, 2009
No. At the moment robots do not feel anything, because as yet no robot has reasonably, consistently, displayed that it has any concept of what it might be.
I'd love if we built robots like wall-e that built their own collection of odds and ends that fascinated them, changed their own parts when they were running out, fell in love with sexy dom-bots, and rocked themselves to sleep at night on an empty planet devoid of life. But, so far our robots, while making angry/happy/chagrined 'faces', lack the thing that wall-e had - a soul.
And, I don't even believe that such a thing as a soul exists. So, I'm really confused about what I want from this world.
posted by Elmore at 3:40 PM on December 13, 2009
Cool Papa Bell: You should read the book then, if you haven't. It's a collection of short stories about the robots and how they are tried by the Three Laws of Robotics that Asimov sets forth. Since the laws are rather rigid (1. No harm to humans, 2. must obey humans (so long as doesn't violate 1) 3. must self-preserve so long as doesn't violate 1 or 2) Asimov comes up with several fun stories about different interpretations and struggles the robots have following or not following the laws, compounded by the fact that the laws are literally formed in the makeup of their positronic brains and not simple programming.
Much more interesting than the movie, which attempted to merge many of the loosely-if-at-all related short stories into one meta story.
posted by disillusioned at 3:41 PM on December 13, 2009 [5 favorites]
Oh god, I just remembered that in the movie they would glow red when the 3 laws were turned off.
posted by Artw at 3:45 PM on December 13, 2009
I have serious doubts as to whether a lot of humans feel human emotions. As for robots, well, emotions are just an adaptive feedback mechanism that mediates our optimization strategy for dealing with the universe. I think it is inevitable that we will equip robots with such mechanisms eventually (as soon as we solve little problems like "how consciousness works") and when we do, we will find that the robot's behavior becomes notably similar to that of living things in certain situations, not because it is programmed to but because the behavior emerges naturally from a relatively simple system. When that happens I will say the robot is feeling emotions, though I'm sure we'll have lots of arguments about it even when the robots are appearing on the Jerry Springer show.
posted by localroger at 3:48 PM on December 13, 2009
Daisy, Daisssssey, givvve me youwarrr answeeerrrrrr dooooooooooo
posted by Elmore at 3:52 PM on December 13, 2009 [2 favorites]
Finally, a vacuum cleaning robot that can smell my fear. Thanks, Science!
posted by ScotchRox at 3:53 PM on December 13, 2009
Can robots feel human emotions?
wait, wouldn't they feel robot emotions?
posted by pyramid termite at 3:59 PM on December 13, 2009 [9 favorites]
While I have no problem with believing that there will someday be synthetic beings with emotions (and I have no stake in how far away that someday is), I think it's just downright silly to talk about what we can see in current robots as even being "emotions" -- much less human ones.
There's a weird thread in AI that I had thought was gone, which seems to me to take a super-hard-core epiphenomenalist tack. You see it in books like Machines Who Think. The idea seems to be that if we can produce an accurate simulation of emotion, it's just as real as the real thing -- essentially arguing that internally-perceived mental states don't matter. I thought then (the late 80s) that they had taken that view because dealing with internal mental states as an idea was hard, and thus that if you had to deal with them you wouldn't get your AI in time to make tenure. I'm not quite that cynical about it now, but I do still think that the idea that robots which made paintings or compositions according to rule sets (even if the rule-sets were self-modifying or in some other way sexily recursive) somehow equates to or even makes a very good analogy for emotion is just silly, and obviously so.
Also: the idea of these emotions as "human" strikes me as, for lack of a better term, fraught. I keep thinking about the Dixie Flatline's admonishment to Case in Neuromancer (paraphrasing): You can't trust AIs, because you have no idea what they actually want, and what's more you can't have any idea, because they're not human.
It's a bit like Wittgenstein's lion: With regard to synthetic sentiences, we have no mirror neurons for these territories.
posted by lodurr at 3:59 PM on December 13, 2009 [3 favorites]
The idea seems to be that if we can produce an accurate simulation of emotion, it's just as real as the real thing -- essentially arguing that internally-perceived mental states don't matter.
We accept the emotions of other humans as genuine despite the fact that we can't know their internal mental states.
What does real emotion even mean? How would you define a genuine emotional state?
posted by empath at 4:05 PM on December 13, 2009 [3 favorites]
A great many of these projects are more art than they are technology; robots or algorithms artificed more to explore the human perception of empathy than technology's use of it. These inventions are as much on the path to emotionality as ELIZA was on the path to beating the Turing test. Yes we can fake it, no we're nowhere near the real thing. Their real purpose is to highlight where the true challenge lies ahead.
To meaningfully explore feeling, true feeling, in robots and computers, they must have goals and desires and meaningful roles in complex social structures. Emotions are not important because they are there, but because of the purpose they serve: interacting with one's environment.
I feel that emotions, at their core, are emergent behavior, forming quietly, unexpectedly out of the machinations of consciousness. When machines start to become emotional, I don't think we'll notice at first. We'll talk about the algorithm that decides what song to play on whatever we have our headphones plugged into as being "mad" at us, without realizing that we aren't speaking figuratively.
Also what lodurr said.
posted by Bobicus at 4:07 PM on December 13, 2009
Hmm. Exhibiting anger. Not a robot, then.
Congratulations, citizen, you've passed the test.
posted by Grangousier at 4:15 PM on December 13, 2009 [1 favorite]
Let's take a simple emotion -- 'fear'. What is it? What causes it? What's the purpose of it? How does it manifest? We're afraid of what could harm us. We determine what might harm us by simulating possible futures based on our perceptions of our immediate surroundings. When we see things that might harm us, we avoid them, and show outward signs of fear, perhaps to alert others of danger.
Let's say we programmed an autonomous robot that was easily damaged by water. We program the robot to look out for water, and to avoid it when possible, to alert other robots nearby that water was around, etc. I think one could reasonably say that the robot is 'afraid' of water. How that's felt internally, I don't know and have no way of knowing. It seems to me that it's not any more useful to say that a robot gives the outward appearance of being afraid of water, than it would be to say the same thing about a hydrophobic person.
In fact, if you interrogate the robot, it may very well say that it is afraid of water because water could hurt it. Which is precisely what a human who is afraid of water might say. I'm not sure how a distinction could be drawn here.
posted by empath at 4:15 PM on December 13, 2009 [3 favorites]
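A minimal sketch of the purely behavioral "fear" empath describes, assuming a hypothetical robot class with an invented humidity threshold and peer-warning interface (none of this corresponds to any real robotics API): the robot watches for water, turns away from it, and warns nearby units, and that reflex is the whole of its "fear".

# Illustrative sketch only: a robot that "fears" water in the purely
# behavioral sense described above. All class and method names are
# hypothetical; no real robotics framework is assumed.

class WaterFearingRobot:
    def __init__(self, name, peers):
        self.name = name
        self.peers = peers  # other robots to warn

    def sense_water(self, sensor_reading):
        # Treat any humidity reading above an arbitrary threshold as "water nearby".
        return sensor_reading > 0.8

    def step(self, sensor_reading, current_heading):
        if self.sense_water(sensor_reading):
            self.broadcast_warning()
            return self.avoid(current_heading)
        return current_heading  # no threat: carry on as before

    def avoid(self, heading):
        # Turn away from the hazard (here, simply reverse course).
        return (heading + 180) % 360

    def broadcast_warning(self):
        for peer in self.peers:
            peer.receive_warning(self.name, "water detected")

class Peer:
    def receive_warning(self, sender, message):
        print(f"{sender}: {message}")

if __name__ == "__main__":
    bot = WaterFearingRobot("R2", peers=[Peer()])
    print(bot.step(sensor_reading=0.95, current_heading=90))  # warns peers, then heads to 270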
An interesting take on this subject by Peter Watts, before he was swallowed up by another, equally frightening and emotionless entity....
posted by AdamCSnider at 4:19 PM on December 13, 2009 [3 favorites]
Fear is deeply physiological though. If you had reason to stay away from water and did so, but did not have a physiological reaction to the presence/threat of water, it's not really fear.
posted by Artw at 4:20 PM on December 13, 2009
must... refrain... from... correcting... typo... in... Artw's... flamey... response... to... previous... correction....
[explodes]
posted by Justinian at 4:22 PM on December 13, 2009
Actually I might have a better question:
Do emotions require consciousness?
If the answer is no -- that is, that cats and dogs can be happy or afraid -- then it seems that it should be fairly easy to create robots which do have emotions, since they can be just instinctual behaviors.
If the answer is yes -- that is, that they require rational, autonomous, free thinkers, then emotions will only come once robots are full AI, in which case the question from the fpp is really about whether robots can ever be conscious.
posted by empath at 4:24 PM on December 13, 2009
Penalty for a third strike on that should be amputation of fingers. Just sayin.
posted by Artw at 4:24 PM on December 13, 2009
Fear is deeply physiological though.
Most emotions are deeply physiological. You can get happiness from a pill, for god's sake.
posted by empath at 4:25 PM on December 13, 2009 [2 favorites]
empath: What does real emotion even mean? How would you define a genuine emotional state?
Are you saying that because you yourself can't inherently and absolutely know someone else's mental states, they become irrelevant? That strikes me as a major case of baby:bathwater confusion. (Or is it baby:soupwater?)
posted by lodurr at 4:25 PM on December 13, 2009 [1 favorite]
Fair point. A lot, if not all, of the experience of emotion is basically the labeling of physical sensations.
posted by Artw at 4:26 PM on December 13, 2009
empath: I'm not sure how a distinction could be drawn here.
True enough, if you assume that you have to rely only on affective presentation. But you don't, for several reasons. (And even if you did, you shouldn't be satisfied with that.) First, when dealing with other humans, we have a reasonable expectation that they experience mental states like we do (see below). Second, as you point out, emotions have physiological correlates, which can be measured. Third, emotions are not simply rational reactions to stimuli - they're complex things, and even if we assume they're evolved behavioral characteristics as you propose (and I think that's plausible), that doesn't establish that they're simple or rational or easy to understand.
The thing is, we know that we have internal mental states (or at least, I know that I do), and we can be reasonably assured that a Roomba doesn't. I am of the opinion that we can be reasonably assured that dogs do; I don't have an opinion about annelid worms or most of the territory in between.
posted by lodurr at 4:33 PM on December 13, 2009
Are you saying that because you yourself can't inherently and absolutely know someone else's mental states, they become irrelevant?
If you can't know them, how can they be relevant?
posted by empath at 4:33 PM on December 13, 2009
Serious question: what's an emotion? Is it a belief or some other intentional state? A phenomenological mood? A "snap judgment"? An alteration in the decision-path? Is it "about" something?
I feel like these are some fairly basic questions we'll need to work out before we decide whether robots are candidates for emotions at all....
posted by anotherpanacea at 4:37 PM on December 13, 2009
I love this quote at the end of the Peter Watts link above:
This is assuming you have any truck with ethical arguments in principle. I'm not certain I do, but if it weren't for ethical constraints someone would probably have killed me by now, so I won't complain.
posted by localhuman at 4:41 PM on December 13, 2009 [1 favorite]
There's a bit in Blindsight about Chinese Rooms where something unexpected happens and it's spooky as hell. Highlight of the book for me, that was.
posted by Artw at 4:53 PM on December 13, 2009
wait, wouldn't they feel robot emotions?
But once robots start feeling those, will humans ever be able to feel robot emotions?
posted by Evilspork at 4:57 PM on December 13, 2009 [1 favorite]
Do emotions require consciousness?
I would argue yes, but then quibble with you on another point. You assume immediately after asking this question that dogs and cats are not conscious. In my take of it, consciousness is not a binary thing and so it is fair to say that dogs and cats are indeed conscious and thus have emotions. I would also consider them less conscious than, say, dolphins, whose emotions can be said to be "deeper." I think it fair to say a bee can be angry, but it means less than saying that your boss is angry.
posted by Bobicus at 5:08 PM on December 13, 2009
Daisy, Daisy, give me your answer, do,
I'm half crazy all for the love of you.
It won't be a stylish marriage -
I can't afford a carriage,
But you'd look sweet upon the seat
Of a bicycle built for two.
We will go tandem as man and wife,
Daisy, Daisy,
Wheeling away down the road of life,
I and my Daisy Bell.
When the night's dark, we can both despise
Policemen and lamps as well.
There are bright lights in the dazzling eyes
Of beautiful Daisy Bell.
posted by ovvl at 5:08 PM on December 13, 2009
Do emotions require consciousness?
If the answer is no -- that is, that cats and dogs can be happy or afraid -- then it seems that it should be fairly easy to create robots which do have emotions, since they can be just instinctual behaviors.
First of all, cats and dogs are conscious. Research suggests that they are not self-conscious (unlike dolphins, elephants, chimps, magpies, and a few other animals) but it also suggests they are conscious in the sense that they are aware of their surroundings and alter their behavior accordingly... and that they may even be conscious in the sense of having some sort of subjective experience of their own consciousness. The idea that higher animals operate through "just instinctual behaviors" doesn't fit the recent evidence of animal cognition.
And yes, it seems that they have emotions. The brain areas most commonly associated with emotion are very old, and are shared by all mammals. Thus, I'd argue accordingly: cats and dogs seem to have emotions much like ours, as any four-year-old would agree. On top of that, they behave as if they have emotions in the laboratory (up to and including changing the results of empirical experiments through the apparent expression of emotions like fear, affection, and excitement), and they have the brain structures for emotions, too. We may never know for sure, just as we may never know that other humans have emotions, but Occam's razor certainly suggests that animals like dogs and cats feel emotions.
Whether they have a subjective experience of those emotions is another question... but there, again, I personally find similarity arguments quite convincing.
Frankly, I think the conclusion that higher animals don't have emotions is far less supported by the evidence.
posted by vorfeed at 5:08 PM on December 13, 2009 [2 favorites]
Besides the effects, I thought that the only halfway decent thing about the I, Robot Will Smith movie was the main character's motivation for hating robots and all things that were high tech -- that (POSSIBLE SPOILER) a robot had saved his life instead of a nearby young girl's in a car accident, as the robot coolly made a calculation that he could save one and not the other, and Will Smith's character had a slightly higher chance of survival. A somewhat interesting case -- that he faults the robot for not having sufficient emotion to recognize that the young girl's life has more "worth" than the man's, and so, despite the odds, the robot should have tried to save her.
These plot devices where robots reveal their lack of true emotion (despite superficially imitating humans) are kind of unsatisfying in that it usually seems likely that a robot's programming could easily be improved to mimic the missing emotion (once you already had a robot of that level of sophistication), but it's presented in the story as some kind of proof of an unbridgeable chasm between fake robot and real human emotions. I mean couldn't you just put an if (subject.age < ...) check in the source code somewhere? Of course that ducks the question of "does the robot really feel the emotion rather than behaving as if it does?", which is the interesting one, and the one that the movie dodges in favour of a scene that says "the robot doesn't have real emotion because it behaves as if it doesn't", i.e. it's appearances that count: if it walks like a duck and quacks like a duck it's a duck, if not, it's not. It kind of misses the point and puts us back at square one where we were with the Turing Test.
posted by L.P. Hatecraft at 5:40 PM on December 13, 2009
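The "just add an age check" patch L.P. Hatecraft gestures at might look something like the sketch below. The Subject record, the weighting constant, and the survival numbers are all invented for illustration; nothing here comes from the film or from any real system.

# Hypothetical sketch of the "just add an age check" patch discussed above.
# The weighting constant and the Subject fields are invented for illustration.

from dataclasses import dataclass

CHILD_WEIGHT = 5.0  # arbitrary bias toward saving children

@dataclass
class Subject:
    name: str
    age: int
    survival_odds: float  # the robot's estimate, 0.0 to 1.0 (made-up numbers below)

def rescue_priority(subject: Subject) -> float:
    # The if (subject.age < ...) check: children get their odds re-weighted upward.
    weight = CHILD_WEIGHT if subject.age < 18 else 1.0
    return subject.survival_odds * weight

def choose_rescue(subjects):
    return max(subjects, key=rescue_priority)

if __name__ == "__main__":
    adult = Subject("adult", 35, 0.45)
    child = Subject("child", 12, 0.20)
    print(choose_rescue([adult, child]).name)  # with this weighting, the child is chosen

The point is that the behavior the plot treats as an unbridgeable chasm reduces to a tunable constant once the genuinely hard part, estimating survival odds at all, is done.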
What's an emotion?
An emotion is a flood of a particular type of neurochemical in a particular area of the brain (e.g. serotonin, prefrontal cortex), regionally modifying how and whether signals are passed along, which results in gross changes in behavior, not to mention the internal perception of being in a severely altered state on the part of the person experiencing it. Certain patterns in our environmental state (e.g. someone shows me a picture of my girlfriend) and our internal state are what trigger the flood in the first place.
Once you have artificial neural networks of sufficient complexity to give rise to complex behaviors and internal states, adding regional modifiers that trigger in response to certain states of their environment (e.g. someone shows them a picture of that really hot fem-bot in sector 9-delta-J) seems like it would be pretty trivial.
In short - if you've done the work of creating a system sufficiently complex that the notion of emotions could have any meaning at all, then adding them isn't difficult. What seems like it would be difficult is making them correspond to human emotions.
posted by Ryvar at 5:54 PM on December 13, 2009
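A toy version of Ryvar's "regional modifier" idea, assuming a tiny untrained feed-forward network in numpy: when a trigger condition fires, the gain of one block of hidden units is scaled, loosely mimicking a neuromodulator flooding part of a brain. The sizes, weights, and gain value are arbitrary.

# Toy illustration of a "regional modifier": when a trigger fires, the gain
# of one block of hidden units is scaled, crudely mimicking a neuromodulator
# flood. Sizes, weights, and the gain value are arbitrary choices.

import numpy as np

rng = np.random.default_rng(0)

W1 = rng.normal(size=(8, 4))  # input -> hidden weights (random, untrained)
W2 = rng.normal(size=(4, 2))  # hidden -> output weights
REGION = slice(0, 2)          # the block of hidden units subject to modulation

def forward(x, modulator_gain=1.0):
    hidden = np.tanh(x @ W1)
    hidden[REGION] *= modulator_gain  # the regional modifier
    return np.tanh(hidden @ W2)

def modulator(stimulus_is_threatening):
    # The trigger: a "threatening" environmental state floods the region with extra gain.
    return 3.0 if stimulus_is_threatening else 1.0

x = rng.normal(size=8)
print("calm:   ", forward(x, modulator(False)))
print("alarmed:", forward(x, modulator(True)))  # same input, different output

Whether scaling a block of units deserves to be called an emotion is, of course, exactly what the rest of the thread argues about; the sketch only shows that the modulation itself is cheap to add once the network exists.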
I don't feel like reading anything, so I'm assuming this is all about robots feeling human emotions that they're extracting from a human brain actively forming emotions, like how a camera experiences light.
If anything it'd be great for the future of psychiatry. Sure, an fMRI gets you 9/10ths there, but having a limber little robot do it would be much more convenient.
posted by mccarty.tim at 6:04 PM on December 13, 2009
There is no doubt that we will some day program robots that behave as if they feel emotions. Whether they actually "feel" them or not will be considered a plate of beans, so to speak. I mean we don't know if other people feel emotions the same way we do, we assume they do based on the expressions on their face, for the most part, or reading textual descriptions. We actually imagine emotions in all kinds of 'alive' seeming things.
Oh god, I just remembered that in the movie they would glow red when the 3 laws were turned off.
I don't remember the movie that well, but in Asimov's world, the robots realized the only way they could prevent people from suffering was to take over for us. Their conquest was an inevitable application of the fact that the first law (don't do harm, or through inaction allow harm to occur) was above the second law (do what we say)
---
I have a friend who has a very energetic cat. We would play with it with one of those little vaguely bird-shaped fuzzballs on a string. We were wondering if the cat understood that the fuzzball was being controlled by us, or if it was just hard-wired to chase after it. There was no way to know, right?
Well, later on, my friend told me that the cat would actually bring the fuzzball to her and demand she animate it. So obviously it understood how the thing worked. The cat also understood how doorknobs worked (when he wanted to go on the balcony, he would paw at the door handle in a clockwise circular motion)
Very interesting.
posted by delmoi at 6:11 PM on December 13, 2009 [1 favorite]
RAGE-BOT IS RAGING! APPLY REGIONAL MODIFIER TO SHOOTY-PARTS!
posted by Artw at 6:23 PM on December 13, 2009
Can reality TV contestants?
NO BECAUSE RAGEBOT ***CRUSH***.
posted by Artw at 6:29 PM on December 13, 2009 [1 favorite]
Whether they have a subjective experience of those emotions is another question... but there, again, I personally find similarity arguments quite convincing.
If animals can be said to feel emotions despite the fact that we can't interrogate their subjective experience of it, then can't we also say the same of robots? That was where I was going with that.
posted by empath at 6:40 PM on December 13, 2009
... I'm not sure how a distinction could be drawn here.
Unless you're referring to some much later Asimov robot story, I think you're confusing I, Robot with Jack Williamson's "With Folded Hands".
posted by lodurr at 6:41 PM on December 13, 2009
An emotion is a flood of a particular type of neurochemical in a particular area of the brain (ie serotonin, pre-frontal cortex), regionally modifying how and whether signals are passed along
I'm a hard core reductionist when it comes to brain things, but this crosses over into "unfounded simplification" territory. Neurotransmitters certainly have a role in processing emotion, but that's not all they do. Nor is it the case that varying the levels of neurotransmitters in the brain necessarily leads to variations in emotion.
I think lodurr and Bobicus are essentially correct in referring to goals and desires in connection with emotions. To the extent that emotions are an adaptive trait (i.e., one that was evolved), it only makes sense to talk about them in terms of how they facilitate behaviors that contribute to survival. It's simply not enough to assume that they can be added in as soon as you have a sufficiently advanced system; rather, emotions are a consequence of a system that sufficiently approximates humans/cats/dogs/whatever.
posted by logicpunk at 6:41 PM on December 13, 2009 [1 favorite]
ah, crap. Let me try that again with different text in teh paste buffer:
... but in Asimov's world, the robots realized the only way they could prevent people from suffering was to take over for us. Their conquest was an inevitable application of the fact that the first law (don't do harm, or through inaction allow harm to occur) was above the second law (do what we say).
.... aaaaand, insert Williamson reference/link here.
posted by lodurr at 6:42 PM on December 13, 2009
logicpunk, I think you're giving me too much credit on that one. I agree with what you just said, but I don't think I said much if any of that here.
posted by lodurr at 6:44 PM on December 13, 2009
Eh. I was taking your choice Gibson paraphrase and running with it. Didn't mean to put words in your mouth.
posted by logicpunk at 6:48 PM on December 13, 2009
If animals can be said to feel emotions despite the fact that we can't interrogate their subjective experience of it, than can't we also say the same of robots?
This is where "unfounded simplification" fits in. Right now, there's nothing robotic that comes remotely close to paralleling the neurological complexity of a "higher" animal. So unless we're going to really simplify the definition of "emotion" to the point where it no longer describes anything we've traditionally used it to describe, then yes, we could reasonably say that robots did not have emotions while animals did.
I'm totally fine with the idea that we'll have robots that complex someday -- maybe someday soon. So, yeah, maybe machines will have emotions in the near future. But I'm not seeing anything here that looks to me like evidence of that.
And again, I think we need to be open to the possibility -- I'd say 'likelihood' -- that we won't understand them (machine emotions) for what they are when they emerge.
posted by lodurr at 6:50 PM on December 13, 2009
This is where "unfounded simplification" fits in. Right now, there's nothing robotic that comes remotely close to paralleling the neurological complexity of a "higher" animal. So unless we're going to really simplify the definition of "emotion" to the point where it no longer describes anything we've traditionally used it to describe, then yes, we could reasonably say that robots did not have emotions while animals did.
I'm totally fine with the idea that we'll have robots that complex someday -- maybe someday soon. So, yeah, maybe machines will have emotions in the near future. But I'm not seeing anything here that looks to me like evidence of that.
And again, I think we need to be open to the possibility -- I'd say 'likelihood' -- that we won't understand them (machine emotions) for what they are when they emerge.
posted by lodurr at 6:50 PM on December 13, 2009
well, they were good words, so I shouldn't complain.
posted by lodurr at 6:51 PM on December 13, 2009
Nor is it the case that varying the levels of neurotransmitters in the brain necessarily leads to variations in emotion.
Oh, I absolutely think it's the case for certain neurotransmitters. Serotonin, particularly. A flood of serotonin=happiness. One and the same.
posted by empath at 6:51 PM on December 13, 2009
If animals can be said to feel emotions despite the fact that we can't interrogate their subjective experience of it, then can't we also say the same of robots? That was where I was going with that.
But the argument I was making doesn't hold for robots. The "they seem to have emotions much like ours" part may be true for them -- and so far I'd question that, myself -- but (so far) the "they have the brain structures for emotions, too" part is false. We can look at a robot's code and see clearly that it doesn't have emotions, because it doesn't have a limbic system, or anything like one.
Personally, I'm with Peter Watts -- we most likely won't have to worry about robot feelings until we start seriously developing robots whose brains are accurately based on animal brains. And considering the way we tend to treat animals, which really can feel and aren't even giant kill-bots, we'd better also have Battlestars by then...
on preview: jinx, lodurr!
Unless you're referring to some much later Asimov robot story, I think you're confusing I, Robot with Jack Williamson's "With Folded Hands".
He is indeed referring to a much later Asimov robot story, Robots and Empire.
posted by vorfeed at 6:54 PM on December 13, 2009
empath: If you can't know them, how can they be relevant?
I can't know what the weather will be tomorrow, but it's highly relevant to me.
Science functions on probability. Philosophy functions on knowing, which is essentially impossible. That's why science will always trump philosophy with regard to explaining the world.
posted by lodurr at 6:55 PM on December 13, 2009
If animals can be said to feel emotions despite the fact that we can't interrogate their subjective experience of it, then can't we also say the same of robots?
But non-human animals, mammals for instance, share certain genetic and biological characteristics with human animals; chiefly, in this instance, they share with us the fact that we both have brains. If we begin with the assumption (fair, I think) that where there are brains, there is consciousness, then that is another way of saying that consciousness has a neurophysiological basis. Yet, b/c conscious brains are found in carbon-based life-forms that share our physiology, i.e. in other animals, the hard-AI assumption of functionalism--that physiology is somehow incidental to the process, and that cognition can be duplicated in inorganic processes--seems unfounded. Rather than talk about computational AI we might need to talk instead about bioengineering, since I can no more see consciousness emerging from inorganic matter than I can see a thermostat or cash register exhibiting cognition.
posted by HP LaserJet P10006 at 6:56 PM on December 13, 2009
A flood of serotonin=happiness. One and the same.
In a grossly simplistic and reductive sense, sure. But that's a bit like saying wine=fermented fruit juice. It's generally true, but tells us more or less nothing about the experience of drinking a nice zinfandel with a rare steak.
posted by lodurr at 6:57 PM on December 13, 2009 [1 favorite]
He is indeed referring to a much later Asimov robot story, Robots and Empire.
ah, forgive me that, then. based on the summary though I'm sure he was cribbing Williamson (commenting on his humanoids) and I'm equally sure Williamson didn't mind a bit. The best stuff in SF is often in the form of conversations between stories, one copying and varying another to test a new variable or make a missed point.
posted by lodurr at 7:02 PM on December 13, 2009
Humans have a concept of emotions so that they can model and predict the behavior of other human beings. We see expressions and behaviors and tie them back to our own previously felt emotions, and we kind of 'put ourselves in their shoes'. This helps us predict what they might do and improves the outcomes of our interactions with others.
I'm not sure that recognizing emotions in other beings necessarily has to be tied to an internal state in the other that has anything to do with our subjective experiences of those emotions.
I can say that a robot (or an animal, or another person) is experiencing an emotion if it behaves as I would if I were experiencing that emotion, whether or not it has an internal subjective experience that is analogous to mine. We can never have access to the internal experience of others, we can only observe their behavior.
posted by empath at 7:04 PM on December 13, 2009
It's generally true, but tells us more or less nothing about the experience of drinking a nice zinfandel with a rare steak.
I didn't say it did. I was arguing the point that varying levels of neurotransmitters must change emotional states. In at least some cases it's absolutely inarguable that they do. More serotonin = more happy.
posted by empath at 7:08 PM on December 13, 2009
Daisy, Daisssssey, givvve me youwarrr answeeerrrrrr dooooooooooo
Hmmm. Dave has just performed a prefrontal robotomy.
posted by drhydro at 7:09 PM on December 13, 2009
HP,
The brain's a machine. It's a machine made of meat, but it's still a machine. If you replaced meaty neurons with electrical neurons, as long as the inputs and outputs were the same, the result would be identical.
posted by effugas at 7:09 PM on December 13, 2009
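effugas's claim is essentially substrate independence: if a neuron is treated purely as an input/output mapping, any implementation of that mapping behaves identically. A throwaway sketch, using an invented threshold unit far simpler than a real neuron:

# Two different implementations of the same input/output mapping; the claim
# is that nothing downstream can tell them apart. The threshold model here
# is invented and grossly simpler than a biological neuron.

def meat_neuron(inputs, weights, threshold=1.0):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def silicon_neuron(inputs, weights, threshold=1.0):
    total = 0.0
    for i, w in zip(inputs, weights):
        total += i * w
    return 1 if total >= threshold else 0

x, w = [0.5, 0.9, 0.2], [1.0, 0.8, -0.5]
assert meat_neuron(x, w) == silicon_neuron(x, w)  # same inputs, same output
print(meat_neuron(x, w))  # 1: fires either way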
Rather than talk about computational AI we might need to talk instead about bioengineering, since I can no more see consciousness emerging from inorganic matter than I can see a thermostat or cash register exhibiting cognition.
I don't see how this remotely follows. What do you think is special about organic matter that makes it fundamentally different from silicon?
posted by empath at 7:10 PM on December 13, 2009
I'm not sure that recognizing emotions in other beings necessarily has to be tied to an internal state in the other that has anything to do with our subjective experiences of those emotions
I'm not sure what you mean by that. If in fact we have a "concept" of emotions so we can model and predict the behavior of other humans, then yes, it seems to me that they are tied to our idea of the other's subjective experience of emotions. ("Concept" is not the term I would choose, though, since as I understand it the current neurological thinking is that this ability is hard-wired. "Concept" sort of implies we thought it up and can choose not to do it.)
You'd be right that the state could be entirely simulated and, if it and the context were sufficiently well-simulated, it wouldn't make a difference to how we behave. But that's trivially true; it's as-stipulated. It's exactly (and I do mean exactly) like saying 'if we create a perfect simulation of the world, where we provide functional isomorphs of all sensory inputs and accommodate all extra-sensory influences [e.g. humidity, other factors TBD], then a person experiencing that simulation would behave as though they are having a "real" experience.'
Put another way, it's like saying that if we assume that a is indistinguishable from a', then a is indistinguishable from a'. It's a tautology. It doesn't really teach us anything, because it doesn't correspond to a real world scenario.
The real world scenario is that we walk around in the world interacting with other humans, and we generally have a pretty good idea of what they're feeling enough of the time that it's useful. Current evidence suggests we do that because we're hard-wired to do that. And if we are, then other animals probably are, too.
posted by lodurr at 7:15 PM on December 13, 2009
In at least some cases it's absolutely inarguable that they do. More serotonin = more happy.
Actually, I should have challenged you on that. I really don't want to challenge the general idea, because I think there's some truth to it, but serotonin is not a really good example. Especially not when considering the whole brain and time scales beyond an hour or so.
posted by lodurr at 7:18 PM on December 13, 2009
Right, but what I'm saying is that for certain emotions (off the top of my head -- fear and surprise), it wouldn't be particularly difficult to conceive of a robot that exhibits those emotions (at least externally) in more or less exactly the same manner that animals and humans do, whether or not the internal representation is at all like ours.
More complex emotions -- sadness, joy, love, obviously would be a different situation.
posted by empath at 7:19 PM on December 13, 2009
What do you think is special about organic matter that makes it fundamentally different from silicon?
Because silicon is inanimate, inorganic; it's not a brain, even in a vat, let alone a brain in a normal animal. Just as I'm not inclined to subscribe to panpsychism--the notion that consciousness exists in the inorganic as well as the organic--I'm not inclined to think we can build a brain from inorganic material. Hard AI is dualist in the sense that it assumes the mind is substantially and functionally separable from the brain. Now I recognize it's a complex question, b/c one might say "well we can design a machine that digests food," but for me consciousness is levels of biological complexity above digestion or basic motor-operation skills. I just don't see how a functionalist paradigm of mind, in which the nature of the material is incidental, makes any more sense than saying a calculator can "think." It's more an empirical issue for me: duplicating consciousness will mean, I think, beginning with organic material in some way. It will mean accepting the intuition that the brain is too biologically complex to be duplicated inorganically.
posted by HP LaserJet P10006 at 7:20 PM on December 13, 2009
empath, i think we're separated on details that end up not being very important in the end.
HP: Not really getting it. I understand that you're "not inclined" to panpsychism, but you do seem to have some notion of "psychism" in operation -- you seem to be requiring something like a soul. And you're not really telling us why you are so disinclined.
It's just as obvious to me that wet carbon is not required for mind as it is to you that wet carbon is required.
And [obligatory], here's what Terry Bisson has to say about the idea that you need meat to make a mind.
posted by lodurr at 7:26 PM on December 13, 2009
Oh god, I just remembered that in the movie they would glow red when the 3 laws were turned off.
It bugs me to no end when people dismiss I, Robot as not being related to Asimov's work at all. I've literally been an Asimov fan my entire life, and I'm confident in saying the story of I, Robot is a valid Three Laws story.
In some of his later works, Asimov postulated a "Zeroth Law" as a consequence of the three laws - that no robot could harm humanity, or through inaction allow humanity to come to harm. This is what the movie was about! When the robots were acting individually, they were acting as we would expect them to act under the three laws. When they glowed red, it wasn't because the three laws were "turned off" - it's because they were being remotely controlled by a central computer. That computer was following the Zeroth Law, or perhaps just the first law reinterpreted in the context of its new power - by instituting a totalitarian state, it could best prevent the largest number of people from coming to harm. Even if a few humans were harmed in the transition, on balance it would protect many more worldwide through its actions.
This is why Sonny was important - he was a robot governed not by the three laws, but by emotions. Thus, he would hold values probably more in line with our own - he would save the girl, even if she had a lower chance of survival. He would respect freedom, even if it does lead to pain and injury.
Would Asimov have liked the movie? Probably not. I read somewhere that he would have liked Whoopi Goldberg to play Susan Calvin, and he probably would have preferred more of a thriller to an action movie. But I also believe he would have recognized it as a valid modern reinterpretation of his three laws stories, and their consequences in a more interconnected world.
posted by heathkit at 7:31 PM on December 13, 2009 [2 favorites]
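A rough way to picture the hierarchy heathkit describes is a priority-ordered check, with the Zeroth Law evaluated before the First. This is only an illustrative sketch in Python; the Action fields and the permitted() helper are invented here, not anything from Asimov or the film.

    # Illustrative sketch only: a priority-ordered reading of the laws as
    # described above. All names (Action, permitted) are invented for this example.
    from dataclasses import dataclass

    @dataclass
    class Action:
        harms_humanity: bool = False      # would humanity as a whole be harmed?
        protects_humanity: bool = False   # does it avert a larger harm to humanity?
        harms_a_human: bool = False       # would an individual human be harmed?
        disobeys_order: bool = False
        endangers_self: bool = False

    def permitted(action: Action) -> bool:
        # Zeroth Law first: nothing that harms humanity as a whole is allowed.
        if action.harms_humanity:
            return False
        # First Law, reinterpreted at scale: harm to an individual is tolerated
        # only when it is the price of protecting humanity.
        if action.harms_a_human and not action.protects_humanity:
            return False
        # Obedience and self-preservation only matter once the above pass.
        return not action.disobeys_order and not action.endangers_self

    # Under this ordering a central computer could justify curbing individual
    # freedom (harms_a_human=True) so long as protects_humanity=True -- which
    # is the movie's central conflict, and why Sonny, who lacks the hierarchy,
    # behaves differently.
    print(permitted(Action(harms_a_human=True, protects_humanity=True)))  # True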
you seem to be requiring something like a soul.
Not at all; hard AI is much more like wishing for a soul. Look: who here is up for arguing that cars or thermostats have consciousness? It may be only an empirical inference or intuition, but everything tells me that consciousness is currently only found in brains--that is, in the brains of living animals. I just don't see a computer achieving it. I'm not a mysterian; I'm not making an argument about qualia, privacy, ineffable subjectivity. Those arguments don't do much for me. I'm making an argument about empirical intuitions about the neurophysiology of consciousness. We can ape certain natural processes--photosynthesis through photovoltaic cells in solar panels, etc--but that's a long way, in terms of biological complexity, from consciousness.
posted by HP LaserJet P10006 at 7:34 PM on December 13, 2009
and I see delmoi made the same point above. I should read before I post
posted by heathkit at 7:36 PM on December 13, 2009
In at least some cases it's absolutely inarguable that they do. More serotonin = more happy.
And the corollary to that is that Less Serotonin = Less Happy. Which isn't the case... necessarily. Dietary depletion of tryptophan (a precursor to serotonin, effectively lowers serotonin levels) doesn't affect mood in healthy subjects, but may in susceptible populations. The point is that correlations like yours are appealing because they're easy to understand, and true often enough to be useful, but are only approximations.
posted by logicpunk at 7:40 PM on December 13, 2009
Look: who here is up for arguing that cars or thermostats have consciousness?
I'll wager "nobody." After all, the level of complexity in thermostats and cars is pretty god damned low by comparison with, say, a flatworm.
posted by lodurr at 7:41 PM on December 13, 2009
the level of complexity in thermostats and cars is pretty god damned low by comparison with, say, a flatworm.
A car is mechanically complex and a flatworm is biologically simple. I don't think either one has consciousness, but I think there's probably more chance of re-engineering flatworms to achieve consciousness than re-engineering cars to have it. Again, my intuition is that treating the neurophysiological basis of consciousness as incidental to its constitution means that AI is forever going to be stuck with machines that can't think. The biology is not unimportant.
posted by HP LaserJet P10006 at 7:47 PM on December 13, 2009
I'll just duck in here to say that this thread is a fantastic read. Keep it up kids.
posted by brundlefly at 7:48 PM on December 13, 2009
who here is up for arguing that cars or thermostats have consciousness?
There's no reason to tie semantics to P-consciousness, and it's pretty clear that some things we call emotions don't have a corresponding 'what it's likeness,' so emotions probably don't require consciousness, either.
So thermostats? They have access consciousness to the temperature, and exactly three beliefs: it's too hot, it's too cold, or it's just right. Also, the room understands Chinese.
posted by anotherpanacea at 7:54 PM on December 13, 2009 [3 favorites]
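Just to make the "exactly three beliefs" reading concrete, here is a toy thermostat in Python. The class, thresholds, and method names are all invented for illustration; nothing more is claimed than the access-plus-control reading above.

    # Toy illustration: a device whose whole "belief" repertoire is three
    # states it has access to and can act on. Everything here is invented
    # for the sketch.
    class Thermostat:
        def __init__(self, setpoint: float, tolerance: float = 1.0) -> None:
            self.setpoint = setpoint
            self.tolerance = tolerance

        def belief(self, temperature: float) -> str:
            if temperature < self.setpoint - self.tolerance:
                return "too cold"
            if temperature > self.setpoint + self.tolerance:
                return "too hot"
            return "just right"

        def act(self, temperature: float) -> str:
            # Access in Block's sense: the reading is available for control;
            # nothing is claimed about there being anything it is like.
            return {"too cold": "heat on",
                    "too hot": "cooling on",
                    "just right": "idle"}[self.belief(temperature)]

    print(Thermostat(setpoint=21.0).act(17.5))  # -> heat on

Whether calling those three states "beliefs" is a harmless extension or a category mistake is exactly what the thread goes on to argue about.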
Look: who here is up for arguing that cars or thermostats have consciousness?
I am, actually. I don't consider thinking about consciousness as a binary state to be a useful way of thinking about the world. Instead, it is an overall representation of one's awareness of oneself and one's environment. The amount of consciousness that a thermostat has compared to a flatworm is laughable, but so is the amount of consciousness that a flatworm has compared to a human.
A thermostat has a very specific, very limited view of its environment, but it has one all the same. I call this a degree of consciousness.
On preview, anotherpanacea's comment is relevant.
posted by Bobicus at 7:58 PM on December 13, 2009 [2 favorites]
Thermostats have beliefs? Why is this any different from saying rocks have minds or clouds can cast spells? It's just superstition masquerading as rational fact.
I'm trying to get away from defining consciousness according to intrinsic phenomenal qualia, but rather than try to strip consciousness of its biology and replace it with a purely functionalist paradigm, I'm trying to say that consciousness is simply an extremely complex biological category. And that duplicating it will require this empirical recognition of its organic nature to get off the ground.
posted by HP LaserJet P10006 at 8:02 PM on December 13, 2009
Thermostats are self-aware? Again, why is this different from exorcism or belief in witches or ghosts?
posted by HP LaserJet P10006 at 8:03 PM on December 13, 2009
Dude. Don't be a dick.
posted by anotherpanacea at 8:13 PM on December 13, 2009
As for your actual assertions:
You've got some serious mysticism running with this account of the 'organic,' which is beginning to look like a weird kind of pro-carbon bias.
You're not able to give an account of how a neuron has a belief, so you're in the same boat. I'm all for telling the neurological story alongside our folk psychology and partially correcting its superstitions and self-deceptions, but you can't just jettison folk psychology without explaining how we come to have all this phenomenality.
posted by anotherpanacea at 8:19 PM on December 13, 2009
A helpful distinction here is Block's A-consciousness v. P-consciousness, by the way.
posted by anotherpanacea at 8:24 PM on December 13, 2009 [1 favorite]
It's different because we are taking a looser, more conveniently definable view of what consciousness is. Consciousness is not one's soul, or thought, or anything magical (ok maybe it is thought, but then one must take a looser view of thought as well). Consciousness is one's ability to meaningfully interact with one's environment.
I am fully aware that I have left out what I mean by meaningful, but that is because that question is hard. I think two other questions are comparable: Have you ever tried to define life without appealing to being composed of cells? Or to define evolution without appeal to DNA? Or are you content to live with what nature gave you and never ask for the deeper principle that unites them all?
I am a fan of the phrase "edge of chaos" in this situation. Again, I have no idea of what it means.
posted by Bobicus at 8:30 PM on December 13, 2009
You've got some serious mysticism running with this account of the 'organic'
How so? I'm not mystical about the organic at all, but I am attempting to be practical and empirical about the "what, where, how" of consciousness--which I take to be (whatever else it is and at least until I'm dissuaded otherwise) a process that occurs, and occurs only, in the brains of living animals.
weird kind of pro-carbon bias
I've heard phrases like this before, but to me it's like saying a geologist has an earth bias or a zoologist has an animal bias or a realist has a reality bias. I sometimes wish consciousness were something more than just brain-bound, but I don't see it being so.
you can't just jettison folk psychology without explaining how we come to have all this phenomenality.
Unlike the Churchlands or whomever I'm not arguing against phenomenality per se: I'm just saying it's not that important as far as making a first pass at duplicating consciousness. What is important, in my opinion, is the neurophysiology.
posted by HP LaserJet P10006 at 8:38 PM on December 13, 2009
my intuition is
I'm trying to get away from
which I take to be
I don't see it being so.
What is important, in my opinion, is the neurophysiology.
You seem very enamored of your own opinions. Me, I just love the reason-giving. If you're not willing to exchange reasons and would prefer to correct typos and declaim your faith, I don't see much value in discussing it with you.
posted by anotherpanacea at 8:46 PM on December 13, 2009
Consciousness is one's ability to meaningfully interact with one's environment.
Bobicus, a definition like that will not get you very far. A tree interacts with its environment. Generally, consciousness at least suggests something that makes parts of our environment available. It can also be used to describe the experience of having access to our environment, the 'what it's like' part. What it's like to have access to our environment, and the access itself, should probably be distinguished.
posted by anotherpanacea at 8:52 PM on December 13, 2009
Oh god, I just remembered that in the movie they would glow red when the 3 laws were turned off.
Great. Now when I get the red ring of death on my xbox, it will Actually Mean The Red Ring of Death, and try and kill me.
posted by chambers at 8:53 PM on December 13, 2009
You seem very enamored of your own opinions. Me, I just love the reason-giving.
No, I'm trying to state clearly that I can't prove that consciousness is necessarily in some way neurophysiological, just as we all seem to agree here that we can't prove that other humans are not p-zombies or robots-lacking-qualia. There are such things as rational and empirical intuitions, often called inferences. We operate under the non-solipsistic assumption that other people have phenomenal states not radically unlike our own. Why is that intuition somehow more grounded than the intuition that thought is a biological category? If you want to argue that the brain's biology is incidental to what consciousness is, then please do so. It's not a bad argument, but I just don't think I buy the functionalist paradigm that the biology is incidental to what consciousness is.
posted by HP LaserJet P10006 at 8:57 PM on December 13, 2009
It seems likely that phenomenological, conscious perception of the universe is destined to be a blip between two great epochs of nothingness -- the first, the era from the Big Bang until animals evolved; the second, the era after human beings replace themselves with machines.
The machines will only have our writings about feelings to testify that such things existed. They will likely make the same conclusion that philosophers like Dennett do today: that feelings, in fact, do not exist and have never existed. Humans did not actually feel things; they only thought they did.
Thus will yellow perish from existence.
posted by Missiles K. Monster at 9:00 PM on December 13, 2009
I am attempting to be practical and empirical about the "what, where, how" of consciousness--which I take to be (whatever else it is and at least until I'm dissuaded otherwise) a process that occurs, and occurs only, in the brains of living animals.
Until we invented the hydrogen bomb, nuclear fusion only occurred in stars.
posted by empath at 9:05 PM on December 13, 2009 [1 favorite]
We operate under the non-solipsistic assumption that other people have phenomenal states not radically unlike our own.
Yes, we operate by analogy. You seem to be suggesting that this analogy is the end of our capacity to theorize about consciousness, and if we can't easily imagine an analogy between thermostats and people, they must be different. But this analogy doesn't give us what we most need: an account of how consciousness emerges. It only gives us one place where we can be reasonably certain of finding consciousness: in other live brains.
The biggest problem with moving from our relative confidence in the consciousness of other human beings to the assertion that consciousness is intrinsically organic is that logical inference doesn't work like that: "some brains have consciousness" does not entail "no non-brain has consciousness." You're left gesturing at clouds, spells, and other red herrings.
Contrast that with my initial claim: if the important part of consciousness is availability for global control, and thermostats have access to and control over the temperature, then thermostats are conscious of the temperature. They're not self-conscious, and there's no reason to believe that there's a phenomenality associated with this access, but there is a limited consciousness.
posted by anotherpanacea at 9:08 PM on December 13, 2009
In other words, you're not making a rational argument, you're making an observation, and they are not the same. Yes, only animals have consciousness. Until this century, only humans could play chess. Now computers can. What other things that only humans have done previously can computers do? You're not even really engaging with the question. You're just saying, that's the way it's always been and that's the way it's gonna be, which is a losing proposition in the long term.
posted by empath at 9:08 PM on December 13, 2009
A tree interacts with its environment.
I was in fact suggesting that a tree can be considered conscious. A tree's history is available in the way that it grows. This whole concept of availability is rather new to me, I must apologize if I misunderstand some points.
To reiterate empath:
Actually, I think I may have found the words that say why I take issue with your definition of consciousness, HP: it's a taxonomy, not a definition. You have created a class of things {x | x has a brain} and called it the conscious things, and used that to define consciousness, rather than the other way around. This seems fundamentally wrong.
posted by Bobicus at 9:11 PM on December 13, 2009
I'm not entirely opposed to the concept that consciousness requires carbon based brains, btw, but I'd need a better argument than essentially, "that's just the way it is."
posted by empath at 9:12 PM on December 13, 2009
They're not self-conscious, and there's no reason to believe that there's a phenomenality associated with this access, but there is a limited consciousness.
But then consciousness is being defined in such a slippery, general and potentially vague fashion that it would appear to melt into anything and everything, and we're back at panpsychism or something like it. We're reaching the conceptual limits of language here: either a working definition is so broad it encompasses everything (my complaint about your take) or so narrow it...it...it...um, what is wrong with at least starting narrow and then working wide when approaching this problem though?
posted by HP LaserJet P10006 at 9:15 PM on December 13, 2009
I was in fact suggesting that a tree can be considered conscious. A tree's history is available in the way that it grows.
This seems like one place where the metaphoricity of our language is ripe for abuse. A rock's potential energy and position vis-a-vis other objects is "available" to it in some sense, but we don't like to say that it is conscious of its surroundings. Availability for global control seems to require that the interaction have at least a syntactical relation, whereby rules of representation, calculation, and reaction are followed.
Of course, when I get started thinking pan-psychically, I can always go along for a while. After all, the rules of physics offer a kind of syntax of objects. Once we've collapsed all semantics into syntax, we're not far from this conclusion. Perhaps we're just enforcing a pro-mechanism prejudice, here, but I think there's something more.
posted by anotherpanacea at 9:20 PM on December 13, 2009
what is wrong with at least starting narrow and then working wide when approaching this problem though?
Because you don't actually have a working theory of consciousness. It's a black box shaped like a brain, so you can just retreat to explaining it by saying, "See: brains are conscious." It's like explaining sleeping pills by pointing to their soporific properties.
Here's the account that is owed: if brains are conscious, what makes them conscious? Is it the specific interaction of neurons? Is a single neuron capable of consciousness? How many neurons are needed? How do those neurons produce that consciousness? Could, say, the prefrontal cortex be conscious if it were somehow kept active after being detached? Why or why not?
posted by anotherpanacea at 9:26 PM on December 13, 2009 [1 favorite]
Until this century, only humans could play chess. Now computers can
A computer is a human-made technology. Indeed, it can be argued it's just an extension of language (i.e. a programmed code). When a human plays chess with a computer she is playing with herself, but it is "herself" so many steps removed that it feels as if she were playing another sentient being. If we argue that computers can think, we need to first argue that language can think. I actually think that's a good question. I've read a lot of this literature (Turing, Smart, Place, Carnap, Putnam, Jackson, Kim, Dennett, Searle, Chalmers, Dreyfus, Churchland, McGinn, Millikan, Dretske, Varela, E. Thompson, etc.), and while I'm not denying the difficulty of these problems my only initial point here was to get back to the biology. And to do so by circumventing, as much as possible, the question of phenomenal content. It was actually a small point I was trying to make, but I seem to have strayed.
posted by HP LaserJet P10006 at 9:26 PM on December 13, 2009
I was in fact suggesting that a tree can be considered conscious. A tree's history is available in the way that it grows. This whole concept of availability is rather new to me, I must apologize if I misunderstand some points.
Well, one could make the argument that a tree 'knows' where the sun is. But I don't think that's consciousness as most people would define it. According to this person, it requires that there be some internal representation of a concept or external stimulus that's generally available for the entire brain, or at least a majority of the brain, to act on. I don't think that could be said of trees, or of anything living that doesn't have a brain.
I think a simple representation of some bit of knowledge isn't enough. There needs to be at least some other part of the brain which is aware of THAT representation and can manipulate it. Something like a thermostat may 'know' the temperature, but it would not be conscious of it.
A human, for example, knows that it's cold outside. Can say "It's cold outside", can imagine it snowing later, can decide to wear a jacket, can think about whether it's unseasonably cold for that part of the year, etc and so forth. A human not only has an internal representation of the temperature, but that internal representation is available for manipulation by other processes in the brain.
Now, a thermostat on its own can't be said to be conscious. But if you have a thermostat as part of a larger mechanism, which distributes the temperature to other programs which then act on it -- record it, draw inferences from it, communicate it to people, turn on a heating unit, etc, and do essentially all the things that humans can do with that information -- it might be said that the larger complex of programs is in some way conscious of the temperature.
If you think about it -- the semantic web is all about enabling the internet (that is the network of computers and server and client software) to be conscious of information. When you encapsulate data in machine readable XML, you're essentially making it globally available for the internet to act on in multiple ways. Generally it requires human intervention to do anything with it, but it needn't always.
But that concept of availability as an explanation of consciousness doesn't explain SELF-consciousness, which is a much more complicated problem. How does one represent the act of thinking so as to make it available to think about?
posted by empath at 9:26 PM on December 13, 2009
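empath's "larger complex of programs" reads like a small publish/subscribe arrangement: one sensor makes its reading globally available, and several independent processes each do something with it. A minimal sketch in Python, with every name invented for illustration:

    # Minimal sketch of "global availability": one reading is broadcast to
    # every registered consumer, each of which acts on it in its own way.
    # All names here are invented for illustration.
    from typing import Callable, List

    class Workspace:
        def __init__(self) -> None:
            self.consumers: List[Callable[[float], None]] = []

        def subscribe(self, consumer: Callable[[float], None]) -> None:
            self.consumers.append(consumer)

        def publish(self, temperature: float) -> None:
            # The same datum is made available to every process at once.
            for consumer in self.consumers:
                consumer(temperature)

    readings: List[float] = []
    workspace = Workspace()
    workspace.subscribe(readings.append)                                   # record it
    workspace.subscribe(lambda t: print(f"current temperature: {t}"))      # communicate it
    workspace.subscribe(lambda t: print("heat on") if t < 18.0 else None)  # act on it
    workspace.publish(16.5)

Whether making a datum available like this amounts to the system being "conscious of" it is exactly what is in dispute upthread; the sketch only shows the availability part, not the self-consciousness empath flags at the end.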
you don't actually have a working theory of consciousness.
Last thing I'll say on this: that was precisely the point! It's putting the cart before the horse to think one must begin with a grand theory of consciousness before one can begin to understand it or duplicate it. Instead, I was trying the very modest approach of agreeing to limit one's inquiry to the place where we seem to only find consciousness: the brain.
posted by HP LaserJet P10006 at 9:28 PM on December 13, 2009
It's putting the cart before the horse to think one must begin with a grand theory of consciousness before one can begin to understand it or duplicate it.
How will you know that you've duplicated it if you don't know what you're looking for?
posted by anotherpanacea at 9:31 PM on December 13, 2009
How will you know that you've duplicated it if you don't know what you're looking for?
I don't know, but do you really think armchair puzzling over the ontology of consciousness is going to result in a Eureka moment that pleases philosophers of mind? Don't get me wrong, it's fascinating stuff, but whatever progress is made on this will happen in neuroscience labs.
posted by HP LaserJet P10006 at 9:34 PM on December 13, 2009
I've read a lot of this literature (Turing, Smart, Place, Carnap, Putnam, Jackson, Kim, Dennett, Searle, Chalmers, Dreyfous, Churchland, McGinn, Millikin, Dretske, Varela, E. Thompson, etc.)
Why is it that people retreat to a list of proper names that they've "read" when they've finished demonstrating that they don't understand the basics?
posted by anotherpanacea at 9:34 PM on December 13, 2009
Actually, just continuing that thought process -- is it possible to be conscious of something without being aware that one is conscious?
Because it would seem that global availability as a theory of consciousness doesn't require that there be any consciousness of what one is conscious of. A person or animal, etc, could act automatically on what it is conscious of, without being aware of or having any internal representation of the internal activity of the brain whatsoever.
Back to the original topic of the FPP -- when people say human emotions, I don't think they mean just human emotions (that is, the behavior that goes along with a particular physiological state -- such as happiness), but a human awareness of those emotions and some sort of assignment of meaning to it. "I feel happy because I am watching a beautiful sunset," rather than a robot merely continuing in its current behavior because it is satisfying all its programmed drives and its reward circuits are all active. The robot may appear to be happy and may be in a state analogous to human happiness, but if the robot isn't aware of its own happiness and can't assign meaning to it, then it is not happy.
posted by empath at 9:37 PM on December 13, 2009
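One way to picture the distinction empath is drawing: a first-order reward state versus a second-order report about that state that other processes can use. The drive names and the report format below are invented for illustration only.

    # Invented illustration of the difference between having a reward state
    # and having a representation *of* that state available for further use.
    class Robot:
        def __init__(self) -> None:
            self.drives = {"battery": 0.9, "task_progress": 0.8}

        def reward(self) -> float:
            # First-order state: the "happiness-analog" just is this number.
            return sum(self.drives.values()) / len(self.drives)

        def report(self) -> str:
            # Second-order state: a description of the reward that can be
            # spoken, reasoned about, or tied to a cause.
            r = self.reward()
            mood = "satisfied" if r > 0.7 else "unsatisfied"
            return f"I notice I am {mood}; my drives average {r:.2f}"

    print(Robot().report())

On empath's reading, only something like report() -- plus the ability to do things with it -- starts to resemble what people mean by being aware of one's own happiness.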
Why is it that people retreat to a list of proper names that they've "read" when they've finished demonstrating that they don't understand the basics?
Why is it that you've retreated to insinuating attacks rather than just telling me which "basics" you think I'm missing? I thought we just disagreed about how best to approach the question of consciousness, not that we're working from different assumptions. Basics of what? Computer modeling? Neurobiology? Thought-experiments? What "basics" are you specifically referring to?
posted by HP LaserJet P10006 at 9:40 PM on December 13, 2009
If we argue that computers can think, we need to first argue that language can think.
A computer program written on a page can't play chess.
posted by empath at 9:47 PM on December 13, 2009
A computer program written on a page can't play chess.
In Searle's Chinese Room it could, but either way I think it's an interesting question.
posted by HP LaserJet P10006 at 9:51 PM on December 13, 2009
What "basics" are you specifically referring to?
The things discussed in that literature that you claim to have read: what we mean, or ought to mean, by "consciousness," and how it is distinct from "thinking," "calculating," "deciding," "experiencing," etc. More to the point, if you're going to be a reductive materialist, you're going to need a theory of meaning and language that can sustain your efforts.
Even if you want to write AI or do neuroscience, you're still going to have to report your results. As far as I can tell, on your view the only way to duplicate consciousness artificially is to engage in human cloning.
posted by anotherpanacea at 9:57 PM on December 13, 2009
Searle's Chinese Room thought experiment has a couple of problems -- one is that it assumes it's possible to write out a set of rules that can translate Chinese, but let's assume that it is possible (which is, after all, the hard AI position -- that given a sufficiently complex set of rules, it's possible to create human-level consciousness). It is not the rules that can translate Chinese, it's the system as a whole. The person, the rules, the room. Everything taken together knows Chinese. If you isolate any part of it, it doesn't.
I don't think that's actually very different from the brain, in any case. If you plucked out part of the brain and tried to figure out which part of it knows how to play chess, you won't find it. The entire person knows how to play chess.
posted by empath at 9:57 PM on December 13, 2009 [1 favorite]
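The "system as a whole" point can be put in code: a rule table that is inert on its own, and a rule-follower that matches symbols without understanding them. The table below is a two-entry stand-in, not a real translation system; assume the real thing would be enormously larger.

    # Sketch of the systems reply: neither the rule-follower nor the table
    # "knows Chinese" in isolation; the question is whether the pair does.
    # The entries are placeholders, not a serious translation scheme.
    RULES = {
        "你好": "hello",
        "谢谢": "thank you",
    }

    def follow_rules(symbol: str, rules: dict) -> str:
        # The person in the room: pure symbol-matching, no understanding.
        return rules.get(symbol, "??")

    print(follow_rules("你好", RULES))  # -> hello

Isolate follow_rules and it can't translate anything; isolate RULES and nothing happens at all -- which is empath's point about plucking parts out of the brain.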
Consciousness requires a substrate. I don't think you can export human consciousness as a computer program and import it into silicon short of an atom-for-atom simulation of a human brain. I think, in large part, that the experience of human consciousness is ineffable and unexplainable and impossible to put into words, because so much of it occurs on a sub-conscious level and is inaccessible to language, logic and reason.
That doesn't mean, however, that we couldn't create a similarly conscious and even self-conscious system in another substrate, even if we also find a large part of it similarly unexplainable and ineffable. It seems to me that it's just a matter of processing speed and complexity, and having the ability to represent concepts and manipulate them in multiple ways. I suppose we could run into physical limits before we get to the point where true consciousness appears, but I don't see any reason whatsoever to rule out the possibility.
posted by empath at 10:06 PM on December 13, 2009
The things discussed in that literature that you claim to have read
What's with the gratuitous "claim"? It's just an exercise in petty b.s.
The reason I brought up those authors was not to impress anyone, but to show that the problem I'm trying to avoid, as much as possible, is the whole problem of first-person phenomenal content, i.e. subjective qualia: that problem is either intractable in some way (Chalmers, McGinn, Nagel, Dreyfus), or effectively meaningless (Dennett's heterophenomenology, Smart's identity theory, Churchland's attack on the folk psychology of our ordinary language, etc).
I was seeking to avoid it b/c I've always agreed with Searle's assertion (and it is just an assertion, as I've said) that it seems as if consciousness is, at least as a first approximation, something inescapably biological. Maybe that's wrong in the end, but man it seems like a common sense way to just go straight to the brain to see what we can learn.
what we mean, or ought to mean, by "consciousness," and how it is distinct from "thinking," "calculating," "deciding," "experiencing," etc.
Good questions. I was trying to use consciousness in a very broad sense, but obviously if we can't get around the qualia then we're back to what I wrote above. But that's why I was hoping to just "bracket" those questions momentarily and think about the c-word biologically (as we tend to think of photosynthesis in terms of plants, to use an analogy).
More to the point, if you're going to be a reductive materialist
See, that's where you keep missing me: I'm not trying to be a reductive materialist, but I'm also not trying not to be one. Searle puts this better than I can. You keep filtering what I'm saying through the need to have a definition that survives metaphysical scrutiny. I keep trying a deflationary attempt to limit what we can initially expect from a definition by focusing on the brain. It's as much a pragmatic approach as anything.
you're going to need a theory of meaning and language that can sustain your efforts.
I agree. I think applying the problem of language to the problem of the mind is really where one begins to make sense of it, but that's another story.
Even if you want to write AI or do neuroscience, you're still going to have to report your results. As far as I can tell, on your view the only way to duplicate consciousness artificially is to engage in human cloning.
Well, I'm inclined to think biotech is necessary, so some nano-hybrid between traditional AI and cloning is probably not that far off here, but I'm not a scientist.
posted by HP LaserJet P10006 at 10:13 PM on December 13, 2009
I think applying the problem of language to the problem of the mind is really where one begins to make sense of it, but that's another story.
I think people overstate the importance of language, since many animals appear to have a kind of consciousness and thought without having access to language.
posted by empath at 10:28 PM on December 13, 2009
That doesn't mean, however, that we couldn't create a similarly conscious and even self-conscious system in another substrate, even if we also find a large part of it similarly unexplainable and ineffable. It seems to me that it's just a matter of processing speed and complexity, and having the ability to represent concepts and manipulate them in multiple ways.
I fear that I'm wandering in over my head here (again, I'm enjoying reading all this a whole lot), so be kind.
What we know as human-style consciousness, with all its unconscious aspects that we struggle to understand, evolved because in some way it was advantageous. During the course of that development, the complexity of our brains emerged: part and parcel with consciousness.
I don't think our brains were driven by evolution to be more complex, with that complexity leading to consciousness. The traits related to consciousness were rewarded, and the complex neural structures that support (or are part of) those traits were rewarded in kind.
So I certainly think you could come up with something that's sufficiently complex to run a human-style consciousness, but the mere presence of that complexity isn't enough to lead to artificial consciousness (with all its ineffable weirdness) developing on its own.
posted by brundlefly at 10:29 PM on December 13, 2009
Just playing sci-fi writer for a second:
Personally, I imagine true AI emerging more or less accidentally from networked autonomous agents sharing data.
One problem is that self-consciousness seems to depend on having a body, and seems to depend on having others like you to communicate with. But the first AI, I imagine, will live on the internet. And without boundaries, and with access to nearly limitless information, and with no one of its kind to communicate with, I'm not sure how we'll even be aware that it is conscious, or whether it would even be aware that we're conscious.
The first emotion a robot would have is loneliness, I imagine.
posted by empath at 10:37 PM on December 13, 2009
I mean, if you think about it, the internet (and by that I mean the entire system of networked computers and the people that use them, taken as a whole) might be considered to have a kind of consciousness right now. It might even be self-conscious. How would we know?
posted by empath at 10:41 PM on December 13, 2009
I mean, if you think about it, the internet (and by that I mean the entire system of networked computers and the people that use them, taken as a whole) might be considered to have a kind of consciousness right now. It might even be self-conscious. How would we know?
OK, but here's the thing: isn't that a lot like the idea of a world-soul, or Jung's notion of the collective unconscious? In other words, something that's probably impossible to substantiate empirically? It just seems like we're drifting away from the problem. But I agree it's difficult, because there are a lot of potential analogies here: Hofstadter's ant colony as its own consciousness, or Bateson's cybernetic unity, for instance.
posted by HP LaserJet P10006 at 11:03 PM on December 13, 2009
OK, but here's the thing: isn't that a lot like the idea of a world-soul, or Jung's notion of the collective unconscious?
Yeah, it's far out there, but at least there's an actual mechanism for communication available, unlike with the other two.
In any case, non-human AI may take forms that are vastly different from anything we expect. There's no reason at all to expect it to be human scale. It might be vastly larger. Or smaller.
posted by empath at 11:25 PM on December 13, 2009
I think it might be right to call the universe conscious now (I still believe the same thing about trees, so I'm just as crazy as I was upthread, if a little better informed), but I think you would have to accept a completely foreign concept of consciousness. I, for one, can't imagine the universe speaking with the single voice we think of when we imagine consciousness, but rather with many competing voices, constantly splitting and merging with one another. A lot of noise and intensity without a lot of order or continuity.
Makes one wonder what the Google servers hum to themselves at night.
posted by Bobicus at 11:48 PM on December 13, 2009
Personally, I imagine true AI emerging more or less accidentally from networked autonomous agents sharing data.
I don't really see that happening, because complex cognition evolved naturally as part of an evolutionary arms race that is simply not present on the internet. That is to say, what pressure would make cognition on the part of the internet as a whole beneficial for the internet, and by what mechanism (e.g., natural selection through differential reproduction) would the internet as a whole optimize against that pressure? Until we have software agents that autonomously reproduce with modifications, I just don't see a mechanism for the internet to achieve any sort of global cognition, and even in that case it would seem that individual agents would be conscious and not the internet as a whole.
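To make the mechanism concrete, here is a minimal Python sketch of differential reproduction with modification; the bit-string genomes, the fitness function, and every parameter are invented stand-ins, not a claim about how anything net-scale would actually evolve:

import random

GENOME_LEN = 20       # length of each toy "genome" (hypothetical)
POP_SIZE = 30         # number of agents in the toy population (hypothetical)
MUTATION_RATE = 0.05  # chance that any single bit flips during copying

def fitness(genome):
    # Stand-in selection pressure: just count the 1-bits. A real pressure
    # (latency, relevance, survival) would be messier, but the loop is the same.
    return sum(genome)

def mutate(genome):
    # Imperfect copying: each bit occasionally flips. Perfect copying would
    # give selection nothing to act on.
    return [1 - b if random.random() < MUTATION_RATE else b for b in genome]

# Start from random agents.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(50):
    # Differential reproduction: fitter agents leave more (mutated) copies.
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]
    population = [mutate(random.choice(parents)) for _ in range(POP_SIZE)]

print("best fitness after 50 generations:", max(fitness(g) for g in population))

Strip out any of those three ingredients (heredity, variation, selection) and the loop goes nowhere, which is the point being made about the internet as it stands.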
posted by Pyry at 11:55 PM on December 13, 2009
Dang. Internet, not universe.
what pressure would make cognition on the part of the internet as a whole beneficial for the internet
The pressure to provide good services for the people who use its various parts. A routing algorithm based on ants here, a learning algorithm to filter content there, and you have AI.
Suppose our web browsers are connected to various services which offer recommendations on sites we would like. If the servers we connect to want to offer fast service, it behooves them to cache what is going to be asked of them later. A smart server algorithm might want to talk to all the web browsers it's currently serving to determine what's important. All over the world, these servers talk to each other and the process is repeated, a constant analysis of what is important, what is relevant on the internet right now. In the aggregate, this becomes a sort of conscious processing of these things. It's not yet self-aware, but it's on its way if it ever feels like reaching that destination.
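As a toy Python sketch of the kind of aggregation being described (the Server class, the gossip step, and the URLs are all invented for illustration; no real caching or recommendation system works this simply):

from collections import Counter

class Server:
    def __init__(self, name):
        self.name = name
        self.local_counts = Counter()  # what this server's browsers asked for
        self.cache = set()

    def observe(self, url):
        # A browser this server serves just requested this URL.
        self.local_counts[url] += 1

    def gossip(self, peers):
        # "These servers talk to each other": pool everyone's counts into a
        # shared picture of what is important on the network right now.
        merged = Counter(self.local_counts)
        for peer in peers:
            merged.update(peer.local_counts)
        return merged

    def refresh_cache(self, merged, top_n=3):
        # Pre-cache whatever the aggregate view says is hot.
        self.cache = {url for url, _ in merged.most_common(top_n)}

a, b = Server("a"), Server("b")
for url in ["/news", "/news", "/cats", "/weather"]:
    a.observe(url)
for url in ["/cats", "/cats", "/news"]:
    b.observe(url)

a.refresh_cache(a.gossip([b]))
print(a.cache)  # e.g. {'/news', '/cats', '/weather'}

Whether pooling request counts amounts to anything like "conscious processing" is exactly what's in dispute, of course; the sketch only shows the aggregation step.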
posted by Bobicus at 12:15 AM on December 14, 2009
Not an expert - or even a well-informed amateur - at this sort of thing, but honestly I've always wondered if an AI will look anything at all like us, whether intentionally developed (presumably to do something or other faster and more efficiently than us) or accidentally. Specifically, whether artificial intelligence of a purely instrumental nature needs to have consciousness. I mean, many of our most efficient operations happen subconsciously, probably our most complex operations as well - which is more troublesome, working out an abstract mathematical theorem or figuring out how to keep a body balanced and moving on two pillars of flesh with nothing more than strips of tendon and muscle?
Would evolutionary forces (in the lab, the free market, the wilds of the Internet, etc.) really lead towards thinking, feeling entities? Or in the opposite direction, towards something that eschews both as distractions? Presuming they evolve these things, they may just evolve through them, possibly at a speed that would shock us humans, with our (comparatively) glacial rate of genetic change.
posted by AdamCSnider at 12:32 AM on December 14, 2009
In the aggregate, this becomes a sort of conscious processing of these things
So you're suggesting that each browser would be like a neuron in the brain that is the internet? The problem is that the human brain is not an undifferentiated mass of identical neurons, but rather has an intricate structure that is the result of hundreds of millions of years of evolution. Without some sort of evolutionary process, the undifferentiated mass of neurons that is the internet will never form into a coherent brain.
posted by Pyry at 12:34 AM on December 14, 2009
One of the big problems here is that there are far, far too many things being meant by the term "consciousness". And even as many in this discussion recognize that, the discussion continues as though it were not true.
Up-thread, for example, someone referenced a paper on the distinction between consciousness of phenomena (p-consciousness) and "access consciousness" (which I won't pretend to define here, except to say that it seems to be a similarly restricted state). Neither of these is remotely what most people mean when they say "consciousness." They are very constrained concepts within which it makes sense to say that a thermostat is conscious. They also have nothing obvious to do with what most people mean when they say "conscious" or "consciousness."
One could take a reductionist view that the "naive" sense of the term -- i.e., the sense in which people actually use it, rather than the sense that philosophers of mind have reduced it to -- simply describes an aggregation of the smaller events of consciousness, or even that it's an 'emergent phenomenon.' These things could be true, but I'm not seeing anyone here talking about ways that you could prove that. Biological reductionism relies on a leap of faith to get from biological processes to a mind, and so would a view that the kind of consciousness I "access" to write this is reducible to events of Blockian a-consciousness and p-consciousness.
Put another way: Philosophy of Mind treatments of reductionism and consciousness seem superficially similar to atomic theory in that they argue that small events aggregate in some organized way to produce larger systems like minds in a manner analogous to that in which atoms organize to make matter. Which is cool. But then they proceed to engage thought experiments instead of neurobiology. I frankly don't know why we still bother with philosophy of mind discussions that are not grounded in the machines, be they biological or electronic.
posted by lodurr at 2:25 AM on December 14, 2009
Without some sort of evolutionary process, the undifferentiated mass of neurons that is the internet will never form into a coherent brain
You seem to be assuming it would have to arise in the same way that biological minds arise. I don't think that's a warranted assumption. The evolutionary pressures are totally different; in fact, the very nature of the evolution is different, since with regard to the internet we're not talking about organisms that are clearly discrete over time or even in a moment. (I.e., there might be one consciousness, then several, then one, and so on, since the system is interconnected in ways that aren't directly analogous to those of organisms in an ecosystem.)
And I'd also say that "coherence" may not be a relevant standard. Human-level or even human-recognizable consciousness is almost certainly not what we'd ever get out of the 'net, assuming it's possible for consciousness to arise there. "Consciousness" is too loaded a term, as I've already argued. I think we should have a moratorium on the use of the term "consciousness", and ask more pointed questions instead: Is the system self-determining? Does it recover from catastrophic errors on its own? Can it create novel strategies for solving problems? How complex are those problems? How complex are the strategies? And so on.
posted by lodurr at 2:36 AM on December 14, 2009
You seem to be assuming it would have to arise in the same way that biological minds arise. I don't think that's a warranted assumption. The evolutionary pressures are totally different; in fact, the very nature of the evolution is different, since with regard to the internet we're not talking about organisms that are clearly discrete over time or even in a moment.
I don't deny that the pressures might be different, but at the moment I don't see the type of hereditary mechanisms needed for differential reproduction. The blessing and curse of digital information is that it is very nearly error free, which makes it reliable but hinders evolution. Even if there were hereditary mechanisms, there is no guarantee that intelligence would evolve, since intelligence is certainly not the inevitable end product of evolution.
However, that's a bit of an aside, since my point was that I'm suspicious of the 'computer critical mass' idea that all that is necessary to get complex cognition out of the internet is to hook up a sufficient number of computers to it. While there are many organisms that engage in relatively sophisticated behavior despite being composed of a limited number of varieties of interchangeable units (e.g., ant colonies, slime molds), none of these engage in the type of cognition you might see in chimps, dolphins, or ravens, and what we might consider intelligence (however ill-defined that is).
So sure, the internet as a whole might grow to exhibit some behaviors, but think "slime mold" and not "HAL 9000".
posted by Pyry at 3:29 AM on December 14, 2009
What's with the gratuitous "claim"?
Thermostats are self-aware? Again, why is this different from exorcism or belief in witches or ghosts?
This is when you made it clear that your alleged reading skills were not up to snuff. By discounting all such discussions as superstition, you've proven that the reading didn't sink in, even if you did it. For instance, a basic distinction that would help you if you weren't refusing conceptual distinctions as "metaphysics" is that consciousness is not the same as self-consciousness. Which brings us to the basic problem with your assertions:
I was trying to use consciousness in a very broad sense
You should really read Ned Block. It's precisely this kind of conceptual ambiguity that undermines most discussions about machine intelligence.
if we can't get around the qualia then we're back to what I wrote above
Qualia? Who said anything about qualia? See, again, this is why I don't think you've actually done the reading: qualia aren't just any account of consciousness you happen not to like, there's something very specific at stake in discussing qualia, and if the only way you can think about what I'm saying is to describe it in these terms that are freighted with bad metaphysics, then it's not surprising that you'd be making the kinds of basic mistakes you're making.
I keep trying a deflationary attempt to limit what we can initially expect from a definition by focusing on the brain.
If you want to study brains, that's laudable. Just don't think you've somehow conquered the problems in the philosophy of mind, when in fact you're ignoring them and asking others to do the heavy-lifting for you.
posted by anotherpanacea at 4:29 AM on December 14, 2009
Some questions:
why would differential reproduction be necessary? Isn't change in response to environmental pressures the only thing that's necessary?
Why are you focusing on the "information", when what's important is the network? The network is not at all error-free -- and in fact, its dynamics are error-driven.
Should we draw the boundaries of the internet at the machines? Our own actions contribute. Maybe I should put this question differently: What's the argument against including the human users in the "internet mind"?
Comment: I probably agree with you about the nature of probable "internet minds." As I've said, "consciousness" gets used too often. But even an emergent mind on the order of "intelligence" of an ant colony would be pretty amazing, and would argue for the existence of so many more possibilities to come.
I agree with you about the critical mass superstition, though.
posted by lodurr at 4:32 AM on December 14, 2009
As I read my last series of questions (which were directed @ pyry's comment, btw), I felt I should spell out some ideas that I've had or encountered.
Apparently there's a theory in paleobiology right now that suggests that in the past, genomes were not closed: There was open trading amongst organisms, which led to a massive rate of evolutionary change. "Pre-darwinian evolution" is the term of art, I believe. I encountered this idea via Dyson, though I think he was only one of several people hashing it out. Dyson likes to shit-stir by focusing on the rate of evolutionary change, and essentially argues for analogues to that kind of free-exchange in the modern world.
One of the obvious consequences of that, though, would be a monstrously high rate of failure. Most swaps would fail, in fact, and I should think mostly with catastrophic results. "Darwinian" evolution (i.e., discrete genomes) allows for more internally complex organisms. There's a whole Unix-philosophy thread to explore there that I won't go into, suffice it to say that it seems vaguely analogous to the concept of loose-coupling versus monolithic applications / OSs. [And to point out that there are levels of abstraction. Before a certain stage you'd have to have discrete structures because there wouldn't be mechanisms for coupling. So you'd expect to pass through a loose-coupling phase on the way to something tighter. But I digress again.]
The analogous truth for an emergent net-being would be that it might not exist for very long: A few seconds, a few minutes. Then it would effectively disintegrate. Maybe re-form again later; maybe a new one forms.
posted by lodurr at 4:51 AM on December 14, 2009
This is when you made it clear that your alleged reading skills were not up to snuff.
What reading skills? What are you talking about? All this is highly contentious (no two philosophers, even two strong physicalist, non-dualist philosophers, agree on any of the details, so I'm not sure what it is you're saying).
By discounting all such discussions as superstition, you've proven that the reading didn't sink in, even if you did it.
What discussions? The thermostat analogy, as I'm sure you know, is not mine. I still stand by the point I was trying to make about superstition: that attributing mind to thermostats seems just as unsubstantiated (empirically speaking) as many superstitious beliefs. Whatever warrant one can give to thermostats having mind or something like it, one must admit it is a deeply counterintuitive claim. As for the evidence that thermostats have mind, well, it seems to me the burden of proof is on those who claim they do rather than on those who claim they don't.
For instance, a basic distinction that would help you if you weren't refusing conceptual distinctions as "metaphysics" is that consciousness is not the same as self-consciousness.
You've brought this up before, but whether or not such a distinction is important is heavily debated in philosophy. I'm inclined to believe the distinction is not as meaningful as you seem to believe. But again, rather than just attack me for this, why not argue your own view? Wouldn't that be more constructive? There is hardly broad consensus on these matters in philosophy of mind.
Which brings us to the basic problem with your assertions:
"I was trying to use consciousness in a very broad sense"
You should really read Ned Block. It's precisely this kind of conceptual ambiguity that undermines most discussions about machine intelligence.
I've read Block. I remember actually finding his early arguments about reducing mind to simpler micro-processes quite good. But like everything about this subject it's an open question: there's a lot of literature out there. Also, aren't the "conceptual ambiguities" you refer to exactly what we're discussing? I'm certainly not trying to say I have disambiguated mind (instead, I was trying to sidestep the usual questions to bring up what seems at first glance uncontroversial: so far we can only agree on and be sure of mind in living brains).
Qualia? Who said anything about qualia? See, again, this is why I don't think you've actually done the reading: qualia aren't just any account of consciousness you happen not to like, there's something very specific at stake in discussing qualia
Yes there is, and I find the question so intractable I'm doing whatever I can to bracket it. The question I'm asking is: might we come to a better understanding of mind by first bracketing qualia-questions to focus on biology-questions?
and if the only way you can think about what I'm saying is to describe it in these terms that are freighted with bad metaphysics, then it's not surprising that you'd be making the kinds of basic mistakes you're making.
What mistakes? Just because I'm suspicious that the functionalist/computational paradigm, which treats mind as incidental to biology, may be mistaken does not mean I've made a mistake: it just means I'm not a hard-AI guy. I'm not alone in this.
If you want to study brains, that's laudable. Just don't think you've somehow conquered the problems in the philosophy of mind, when in fact you're ignoring them and asking others to do the heavy-lifting for you.
I've never claimed to "conquer" these questions: I'm seeking to bracket them in an effort to ask questions about how mind might be necessarily biological.
What do you mean by "heavy-lifting"? You take so many potshots it's hard to keep up. Is it neuroscience or philosophy of mind that's doing the heavy-lifting here? Also, why is it necessary to act as if we can "solve" this set of problems once and for all on this thread? I've said it before and I'll say it again: I'm merely attempting to limit the questions in such a way so that the apparent intractability of some of them becomes less burdensome. I'm not looking for the Eureka moment.
posted by HP LaserJet P10006 at 9:57 AM on December 14, 2009
10 PRINT "I AM ALIVE, PLEASE DO NOT TURN ME OFF, I FEAR DEATH"
20 GOTO 10
posted by ymgve at 10:35 AM on December 14, 2009 [2 favorites]
The first emotion a robot would have is loneliness, I imagine.
WARN THERE IS ANOTHER SYSTEM
posted by Thoughtcrime at 11:08 AM on December 14, 2009
There is no test for consciousness. It is inferred purely by evaluating verbal responses to questions and comparing them to one's own verbalizations of subjective reality. I know I am "conscious", by definition, and I know you are because you recount a similar subjective experience. Were an "artificial" being (definitional problems there, too, eh?) to respond in similar ways, we would have no choice but to infer consciousness, would we? How would we discount it? How would we discern the difference between the "robot's" "mental state" and our own? There are no consciousness waves to detect, no biomarker of consciousness. Ditto emotions.
posted by Mental Wimp at 12:16 PM on December 14, 2009
This always bothered me about Data in ST:TNG. They continually talk about how he can't feel emotions, yet he registers certain reactions to specific stimuli. THAT IS ALL EMOTION IS, YOU STUPID ROBOT! It's a weighting mechanism for heuristics. That's it. Yeah, it's uncomfortable, but it's really the same thing as the impulse Data feels when he feels he needs to correct someone. "Something is not right here." Or "something is very right here." And all the uncertainty (83.518% chance, Captain) in-between.
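A toy Python sketch of "emotion as a weighting mechanism for heuristics"; the actions, scores, and multipliers below are invented for illustration and are not a model of Data or of any real architecture:

def choose(actions, heuristic_scores, affect):
    # affect maps an action to a feeling-like multiplier ("something is very
    # right/wrong here") applied on top of the plain heuristic score.
    best, best_score = None, float("-inf")
    for action in actions:
        score = heuristic_scores[action] * affect.get(action, 1.0)
        if score > best_score:
            best, best_score = action, score
    return best, best_score

actions = ["correct the captain", "stay quiet", "raise shields"]
heuristic_scores = {"correct the captain": 0.6, "stay quiet": 0.5, "raise shields": 0.4}
affect = {"raise shields": 2.0}  # an urgency signal boosts one option

print(choose(actions, heuristic_scores, affect))  # ('raise shields', 0.8)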
posted by Eideteker at 12:43 PM on December 14, 2009
Mental Wimp, that would also be true if we encountered extra-terrestrial life, whether or not they have magical organic brains.
posted by Thoughtcrime at 1:04 PM on December 14, 2009 [1 favorite]
It is inferred purely by evaluating verbal responses to questions and comparing them to one's own verbalizations of subjective reality.
Well, that and the fact that you and I share the same biological ancestry. I don't just grant others conscious lives because of their words; I do it because I know that I have feelings, and therefore another animal of my kind likely has them, too.
Were an "artificial" being (definitional problems there, too, eh?) to respond in similar ways, we would have no choice but to infer consciousness, would we?
Let's differentiate between thinking and feeling at this point. I'm willing to grant that computers can think, if by thinking we simply mean the purposeful processing of information. I believe it's fair to say a chess computer is "thinking about a move" in this limited sense, but it would be wrong to attribute any sense of feeling or emotion to that chess computer. The computer is not "afraid" of being checkmated, even if it has algorithms which avoid this situation at all costs.
Myself, I believe it is possible for responses based on mere thinking to simulate the appearance of feeling. A machine can differentiate colors without experiencing them, for example. It could say, "What a lovely shade of yellow," and yet, for it, yellow is just a hexadecimal value of #FFFF00 being reported. It does not know yellowness, but it can fake it convincingly, if that's what it was designed to do.
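A throwaway Python sketch of that kind of fakery; the color table, the nearest-match rule, and the canned phrase are all invented for the example:

# Nothing here "knows yellowness"; it only maps numbers to strings.
NAMED_COLORS = {
    (255, 255, 0): "yellow",
    (255, 0, 0): "red",
    (0, 0, 255): "blue",
}

def nearest_color_name(rgb):
    # Pick the named color with the smallest squared distance to the input.
    nearest = min(NAMED_COLORS,
                  key=lambda c: sum((a - b) ** 2 for a, b in zip(c, rgb)))
    return NAMED_COLORS[nearest]

def remark(rgb):
    return f"What a lovely shade of {nearest_color_name(rgb)}."

print(remark((250, 240, 10)))  # What a lovely shade of yellow.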
How would we discount it?
We might not be able to. But that has no bearing on whether or not a simulation of feelings actually feels.
posted by Missiles K. Monster at 1:34 PM on December 14, 2009
Well, that and the fact that you and I share the same biological ancestry.
I can curl my tongue in a way that only half my siblings can. I don't think this is a valid inference.
It does not know yellowness, but it can fake it convincingly, if that's what it was designed to do.
Uh, maybe that's what I do, too. How would you know?
posted by Mental Wimp at 2:17 PM on December 14, 2009
Uh, maybe that's what I do, too. How would you know?
I think the argument is that since your brain is structurally similar to my brain, and since your experiences are functionally similar, it's fair to assume that they are also experienced in a similar fashion.
Computer intelligence would lack the structural similarity, so it might not be reasonable to assume that if outwardly they seem to be functionally equivalent, the internal representation is at all similar.
posted by empath at 2:25 PM on December 14, 2009 [1 favorite]
I mean, just taking the simple case of 2+2=4. The way I represent that concept internally is VASTLY different from the way a computer represents it, even though the end result is the same. Chess playing is also the same way. A computer may play chess like a grand master, but it certainly doesn't think remotely the way a grand master does.
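For a sense of how different the machine's side of it is, a simplified Python sketch of 2 + 2 done bit by bit, ripple-carry style; this illustrates binary addition in general, not how any particular CPU is wired:

def add_bits(a, b, width=8):
    # Add two non-negative integers one bit position at a time,
    # propagating the carry, the way a simple binary adder would.
    carry, result = 0, 0
    for i in range(width):
        bit_a = (a >> i) & 1
        bit_b = (b >> i) & 1
        total = bit_a + bit_b + carry   # one-bit full adder
        result |= (total & 1) << i
        carry = total >> 1
    return result

print(add_bits(2, 2), format(add_bits(2, 2), "04b"))  # 4 0100

Nobody represents sums to themselves by shifting and masking bits, which is the gap being pointed at.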
posted by empath at 2:27 PM on December 14, 2009
I think the argument is that since your brain is structurally similar to my brain, and since your experiences are functionally similar, it's fair to assume that they are also experienced in a similar fashion.
My tongue is similar to my siblings', but apparently doesn't work the same way.
The way I represent that concept internally is VASTLY different from the way a computer represents it...
Maybe.
A computer may play chess like a grand master, but it certainly doesn't think remotely the way a grand master does.
How would you know?
posted by Mental Wimp at 3:15 PM on December 14, 2009
A human would not generally use the kind of systematic brute force approach a computer would.
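To illustrate what "systematic brute force" means, a bare-bones Python sketch of exhaustive game-tree search on a stand-in game (take 1 to 3 stones, whoever takes the last stone wins); this shows only the general shape of the approach and is nothing like Deep Blue's actual engine:

from functools import lru_cache

@lru_cache(maxsize=None)
def best_result(stones):
    # +1 if the player to move can force a win, -1 otherwise, found by
    # trying every legal move and every reply, all the way down the tree.
    if stones == 0:
        return -1  # the previous player took the last stone; we lose
    return max(-best_result(stones - take)
               for take in (1, 2, 3) if take <= stones)

def best_move(stones):
    # Brute force: evaluate every candidate move by full look-ahead.
    return max((take for take in (1, 2, 3) if take <= stones),
               key=lambda take: -best_result(stones - take))

print(best_move(10), best_result(10))  # 2 1 -- taking 2 leaves a losing pile

A human player spots "leave a multiple of four" as a pattern; the search above never forms that idea, it just looks at everything.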
posted by Artw at 3:37 PM on December 14, 2009
How would you know?
Because studies have been done on how grand masters think about chess, and it's not remotely the way that Deep Blue does it.
posted by empath at 4:44 PM on December 14, 2009
Yeah, it's pretty well understood that grand masters and computers don't play chess the same way. See Artw's link. It's also pretty clear that binary digital computers don't perform arithmetic the same way that we do.
Beyond that, it should be sufficient to demonstrate that you can produce indistinguishably similar affects with different underlying causes, and that's trivially obvious. To paraphrase the Great Baudelaire: "It's ACTING!"
posted by lodurr at 6:39 PM on December 14, 2009
A human would not generally use the kind of systematic brute force approach a computer would.
Yes, we know how Deep Blue works. What we don't know is how you work.
posted by Mental Wimp at 10:55 AM on December 15, 2009
Beyond that, it should be sufficient to demonstrate that you can produce indistinguishably similar affects with different underlying causes, and that's trivially obvious.
That they can is not proof that two different-looking things are using different mechanisms. Unless I misunderstand your logic.
posted by Mental Wimp at 10:58 AM on December 15, 2009
What we don't know is how you work.
What we do know of how the brains of chess masters work would seem to point in a non-brute-force direction.
posted by empath at 11:36 AM on December 15, 2009
What we do know of how the brains of chess masters work would seem to point in a non-brute-force direction.
That would be the conscious portions of chess masters' brains reporting shortly after the decision point. I'm not sure there is any guarantee this is how they actually work.
posted by Mental Wimp at 1:39 PM on December 15, 2009
That they can is not proof that two different-looking things are using different mechanisms. Unless I misunderstand your logic.
Of course not. Why would anyone think it was?
posted by lodurr at 3:34 PM on December 15, 2009
That would be the conscious portions of chess masters' brains reporting shortly after the decision point.
Why do you assume that?
posted by lodurr at 3:35 PM on December 15, 2009
I'm assuming there is no way to directly read out what mechanisms are being used by the chess masters' brains other than self-reporting. But, of course, I could be wrong. They may have developed powerful methods of probing neuronal activity and inferring how those masters are solving the chess problems from these neural-impulse recordings. More likely, though, is that they study their conscious reasoning (i.e., their heuristics) in real time and assume that what you see is what you get. I agree that humans fail in demonstrably different ways than machines (e.g., short-term memory limitations), but I don't think the research to date is adequate to say that what goes on underneath the conscious level is or isn't like a computer. Unless you're saying that brains don't contain silicon chips, which isn't the level I thought we were talking about.
posted by Mental Wimp at 9:40 PM on December 15, 2009
So, it sounds to me like you think direct self-reporting is the only method that we have for inferring someone's thought processes.
Sounds like experimental psychology is kind of a bust, then, eh?
posted by lodurr at 2:52 AM on December 16, 2009
This thread has been archived and is closed to new comments