The Man Who Would Teach Machines to Think
October 27, 2013 6:32 PM
Douglas Hofstadter, the Pulitzer Prize–winning author of Gödel, Escher, Bach, thinks we've lost sight of what artificial intelligence really means. His stubborn quest to replicate the human mind.
Hofstadter is a treasure, but GOFAI (Good Old Fashioned A.I.) and the abstract top-down approach are never going to get us to "replicate the human mind". Bottom-up is going to be the only way to figure out how minds work. That's less clever LISP programs making analogies, and more neuroscience. We need to be modelling collections of neurons and instantiating them in physical environments. Brains don't ever exist in a vacuum; they are always embodied, and embodiment is not something you can skip. I'd be really impressed if we could replicate, say, an ant mind. Let's start there.
posted by leotrotsky at 6:52 PM on October 27, 2013 [23 favorites]
Ant is a terrible place to start. We have made some progress on fish, however.
posted by 256 at 6:54 PM on October 27, 2013
Sorry, that comment was all over the place; see "Approaches" at the Artificial Intelligence page over at Wikipedia.
posted by leotrotsky at 6:55 PM on October 27, 2013
The way Hofstadter was written out of AI should be a shame to all responsible for it. He'll always be my intellectual hero.
posted by ob1quixote at 7:00 PM on October 27, 2013 [9 favorites]
We need to be modelling collections of neurons and instantiating them in physical environments.
Rumelhart & McClelland (1981) called, wanted to know what amazing heights connectionist approaches have reached in the 30+ years since.
posted by Nomyte at 7:02 PM on October 27, 2013 [6 favorites]
Rumelhart & McClelland (1981) called, wanted to know what amazing heights connectionist approaches have reached in the 30+ years since.
posted by Nomyte at 7:02 PM on October 27, 2013 [6 favorites]
To understand why embodiment is so important, you need to get a sense of how you can produce apparently really complex 'intelligent' behavior through very simple systems that are embodied, and for that you need to read about the classic Braitenberg vehicles.
posted by leotrotsky at 7:02 PM on October 27, 2013 [8 favorites]
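The classic demonstration: two light sensors feeding two motors, no memory, no model, and the vehicle still appears to "seek" or "flee" a light depending only on how the wires cross. Here is a minimal Python sketch of the idea, with invented geometry and constants rather than anything from Braitenberg's actual book:

```python
import math

def sense(x, y, heading, side, light=(0.0, 0.0)):
    """Light intensity at a sensor on the vehicle's front-left or front-right.
    Intensity falls off with squared distance to the light source."""
    offset = -0.5 if side == "right" else 0.5   # sensor angle relative to heading
    sx = x + math.cos(heading + offset)
    sy = y + math.sin(heading + offset)
    d2 = (sx - light[0]) ** 2 + (sy - light[1]) ** 2
    return 1.0 / (1.0 + d2)

def step(x, y, heading, wiring="crossed", dt=0.1):
    """One update of a two-sensor, two-motor vehicle. Crossed wiring
    (each sensor drives the opposite motor) turns the vehicle toward
    the light; direct wiring turns it away. No model, no plan."""
    left, right = sense(x, y, heading, "left"), sense(x, y, heading, "right")
    lm, rm = (right, left) if wiring == "crossed" else (left, right)
    speed = (lm + rm) / 2
    heading += (rm - lm) * dt    # differential drive: a faster right motor turns it left
    return x + speed * math.cos(heading) * dt, y + speed * math.sin(heading) * dt, heading

x, y, heading = 3.0, 2.0, 0.0
for _ in range(5000):
    x, y, heading = step(x, y, heading, wiring="crossed")
print(f"({x:.2f}, {y:.2f})")   # should have closed in on the light at the origin
```

The apparent purposefulness lives entirely in the coupling between the body's wiring and the world, which is the embodiment point in miniature.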
And I think the place to start is having a computer understand its embodiment as a computer. Then move on to its natural environment, i.e. networks and the Internet.
posted by ob1quixote at 7:05 PM on October 27, 2013 [3 favorites]
We need to be modelling collections of neurons and instantiating them in physical environments.
Rumelhart & McClelland (1981) called, wanted to know what amazing heights connectionist approaches have reached in the 30+ years since.
Oh bite me; I'm not talking about bog standard neural networks, I'm talking about replicating actual insect brain architecture, along with the mess of neurotransmitters and other stuff messily sloshing about in living things.
posted by leotrotsky at 7:08 PM on October 27, 2013 [2 favorites]
Rumelhart & McClelland (1981) called, wanted to know what amazing heights connectionist approaches have reached in the 30+ years since.
Oh bite me; I'm not talking about bog standard neural networks, I'm talking about replicating actual insect brain architecture, along with the mess of neurotransmitters and other stuff messily sloshing about in living things.
posted by leotrotsky at 7:08 PM on October 27, 2013 [2 favorites]
Oh bite me; I'm not talking about bog standard neural networks, I'm talking about replicating actual insect brain architecture, along with the mess of neurotransmitters and other stuff messily sloshing about in living things.
And how's that going?
posted by 256 at 7:13 PM on October 27, 2013 [1 favorite]
OK, I'm gonna bite.
posted by Nomyte at 7:18 PM on October 27, 2013 [2 favorites]
Rumelhart & McClelland (1981) called, wanted to know what amazing heights connectionist approaches have reached in the 30+ years since.
Actually, now that we have the computing power to run massively parallel algorithms at scale, connectionist approaches are kind of making a comeback. The Google Brain project is starting to have interesting results.
posted by heathkit at 7:20 PM on October 27, 2013 [4 favorites]
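Concretely, a "connectionist approach" means knowledge stored as weighted connections between simple units and tuned by gradient descent rather than programmed. A toy NumPy sketch (the hidden size, learning rate, and iteration count are arbitrary choices here, and this is nothing like Google Brain's scale), learning XOR, which a single-layer perceptron famously cannot represent:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

# 2 inputs -> 4 hidden sigmoid units -> 1 output: the "knowledge" is the weights.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    h = sigmoid(X @ W1 + b1)                  # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)       # backprop of squared error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out;  b2 -= d_out.sum(0)    # learning rate folded in at 1.0
    W1 -= X.T @ d_h;    b1 -= d_h.sum(0)

print(out.round(2).ravel())   # typically close to [0, 1, 1, 0]
```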
slowly, of course, because it's hard. ...and not sexy.
posted by leotrotsky at 7:21 PM on October 27, 2013 [5 favorites]
In this he is the modern-day William James
I find this analogy unbelievably depressing, but it may be right, I suppose. Every age may get the William James it deserves.
But I paged through some of Surfaces and Essences recently and it gave me the sinking feeling that if I re-read GEB today as a non-teenager I'd probably end up feeling that Hofstadter had always been a bit of a crackpot. So much of what seemed once to make Hofstadter's work good makes that book bad — the determined simplicity seeming more like dogged oversimplifying; the outsiderishness seeming like naivete or simple ignorance. It's a book that seems to be continually trying to bluster and handwave away the existence of philosophy, psychology, aesthetics, and other millennia-long intellectual endeavors. Have other readers seen it differently?
posted by RogerB at 7:36 PM on October 27, 2013 [5 favorites]
There's no reason we can't work both bottom up and top down. Hofstadter has always had a lot of fascinating things to say.
What the article doesn't really get into is, why bother with actual AI? It'd be nice to know more about how the brain works, but surely it would be immoral to put an actual artificial sapience to work doing translation or search results. We don't need artificial intelligence, just artificial cleverness. Aims and desires of its own would not be a feature. (See also: every robot story ever.)
posted by zompist at 7:40 PM on October 27, 2013 [11 favorites]
Why not start with a bullfrog?
Is it bigger than me - hop away.
Is it my size - meh
Is it smaller than me - try to eat it.
posted by rough ashlar at 7:43 PM on October 27, 2013 [4 favorites]
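Taken literally, that frog is a three-branch if-then policy; a toy sketch (the 10% "my size" tolerance is an invented constant):

```python
def bullfrog(my_size: float, other_size: float) -> str:
    """The bullfrog above as a three-branch policy.
    The 10% band counting as "my size" is an invented constant."""
    if other_size > my_size * 1.1:
        return "hop away"
    if other_size < my_size * 0.9:
        return "try to eat it"
    return "meh"

assert bullfrog(1.0, 3.0) == "hop away"
assert bullfrog(1.0, 1.0) == "meh"
assert bullfrog(1.0, 0.3) == "try to eat it"
```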
What can possibaly go wrong with moving to computer AI's here in westworld?
All Can Be Lost: The Risk of Putting Our Knowledge in the Hands of Machines. We rely on computers to fly our planes, find our cancers, design our buildings, audit our businesses. That's all well and good. But what happens when the computer fails?
posted by rough ashlar at 7:46 PM on October 27, 2013 [3 favorites]
I met Hofstadter years ago at a book reading for GEB. I asked him about the frequent references to Zen and what Zen meant to him. He said he just put them there as a joke. My respect for his ideas went way down from that point on. As RogerB asks, I also had a feeling that Hofstadter either ignores all the past thought on these subjects or he's ignorant of the history. My own impression of AI people in general is that they assume they are discovering something new when in fact it's already been dealt with 200 years ago.
posted by njohnson23 at 7:47 PM on October 27, 2013 [4 favorites]
Hofstadter is a treasure, but GOFAI (Good Old Fashioned A.I.) and the abstract top-down approach are never going to get us to "replicate the human mind". Bottom-up is going to be the only way to figure out how minds work. That's less clever LISP programs making analogies, and more neuroscience. We need to be modelling collections of neurons and instantiating them in physical environments. Brains don't ever exist in a vacuum; they are always embodied, and embodiment is not something you can skip. I'd be really impressed if we could replicate, say, an ant mind. Let's start there.
GOFCE (Good Old Fashioned Computer Engineering) and the abstract top-down approach are never going to get us to "replicate the microprocessor". Bottom-up is going to be the only way to understand how computing machines work. That's less clever mathematical and system abstractions and more low-level EE. We need to be modelling collections of transistors and instantiating them in physical environments. CPUs don't ever exist in a vacuum; they are always embodied, and embodiment is not something you can skip. I'd be really impressed if we could replicate, say, a pocket calculator. Let's start there.
posted by tss at 7:56 PM on October 27, 2013 [15 favorites]
Is it bigger than me - hop away.
Is it my size - meh
Is it smaller than me - try to eat it.
You forgot "it only exists if it's moving."
posted by localroger at 8:00 PM on October 27, 2013 [2 favorites]
You forgot "it only exists if it's moving."
See, already making the model better. Now for Bullfrog 2.0 let's have it have sex and release it in Australia!
posted by rough ashlar at 8:07 PM on October 27, 2013 [5 favorites]
Even if Douglas Hofstadter did not a single other thing in this world than write GEB, it would be more than enough.
posted by eriko at 8:17 PM on October 27, 2013 [7 favorites]
it would be immoral to put an actual artificial sapience to work doing translation
Yes, god forbid we make a sentience suffer by doing translation work.
Would it be immoral if we employed an AI to do our stuff?
posted by bystander at 8:21 PM on October 27, 2013 [2 favorites]
Perhaps what isn't so embodied is the computational study of thinking in a culture that regards this as an artistic pursuit in addition to a scientific one.
Bear with me: prior to the systematic scientific analysis of the phenomena of human beings and the creation of modern disciplines of medicine, psychology, the social sciences, etc., there were millennia of, shall we say, less formalized approaches to these topics that manifested themselves in many ways, but among them: great art, philosophy, etc. Now consider whether the scientific revolutions of the Renaissance and the Enlightenment would have come to pass when they did without the cultural influences of these works.
In the computational investigation of the mind, I don't think this cultural framework exists to a comparable degree, partly because computing at scale with machines is so new... thus, in the same way that artists and philosophers scouted the frontier in advance of the more stately march of scientific progress, perhaps artists who worked very seriously to depict some compelling reflection of human thought or behavior on a computational canvas could shed light on promising directions for research.
It would be a tough row to hoe... the sciences would have a hard time taking the art seriously. The artists might have a hard time making works that are both interesting to researchers and accessible to a (paying?) general public. On the other hand, if you think a Kuhnian shift is what we need to understand thinking, I'm not sure that the sciences will get there as quickly on their own. An effort like that needs everybody.
posted by tss at 8:21 PM on October 27, 2013 [4 favorites]
“Anything that I think about becomes part of my professional life,” he says. Daniel Dennett, who co-edited The Mind’s I with him, has explained that “what Douglas Hofstadter is, quite simply, is a phenomenologist, a practicing phenomenologist, and he does it better than anybody else. Ever.” He studies the phenomena—the feelings, the inside actions—of his own mind. “And the reason he’s good at it,” Dennett told me, “the reason he’s better than anybody else, is that he is very actively trying to have a theory of what’s going on backstage, of how thinking actually happens in the brain.”
Dennett needs to read some Thomas Metzinger. Metzinger is basically the modern-day, mainstream equivalent of Doug Hofstadter. Although Hofstadter will always be the original and great presenter of these ideas about cognition. I wouldn't be surprised if kids a hundred years from now aren't still getting their minds blown by GEB on a regular basis. (Well, assuming the Singularity didn't sweep us all away to insta-learning land.)
posted by cthuljew at 8:37 PM on October 27, 2013 [1 favorite]
GOFCE (Good Old Fashioned Computer Engineering) and the abstract top-down approach are never going to get us to "replicate the microprocessor". Bottom-up is going to be the only way to understand how computing machines work. That's less clever mathematical and system abstractions and more low-level EE. We need to be modelling collections of transistors and instantiating them in physical environments. CPUs don't ever exist in a vacuum; they are always embodied, and embodiment is not something you can skip. I'd be really impressed if we could replicate, say, a pocket calculator. Let's start there.
However clever it sounds, that analogy doesn't make a whit of sense. We already know how computers work, and that's because we built them from the bottom up. You can't build a microprocessor without understanding logic gates. We don't have those basics for brains yet. Brains aren't just a mess of perceptrons; they're crazy complex and we don't understand very well how the simplest building blocks interact. When you start tying them together, the complexity grows exponentially. Add the soup of neurotransmitters inhibiting or exciting and you've got something that is really hard to get your head around.
posted by leotrotsky at 8:41 PM on October 27, 2013 [13 favorites]
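To make the asymmetry concrete: for computers we have one well-understood primitive and exact composition rules, so the whole abstraction tower can be rebuilt on demand. A sketch in plain Python standing in for hardware:

```python
# Bottom-up for computers, in miniature: one clean primitive (NAND)
# composes into every other gate, and gates compose into arithmetic.
def NAND(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def NOT(a):    return NAND(a, a)
def AND(a, b): return NOT(NAND(a, b))
def OR(a, b):  return NAND(NOT(a), NOT(b))
def XOR(a, b): return AND(OR(a, b), NAND(a, b))

def half_adder(a, b):
    """One rung up the abstraction ladder: two gates make an adder stage."""
    return XOR(a, b), AND(a, b)   # (sum, carry)

for a in (0, 1):
    for b in (0, 1):
        print(a, "+", b, "=", half_adder(a, b))  # (sum, carry_out)
```

There is, so far, no comparably clean NAND-equivalent story for neurons bathed in neurotransmitters, which is the point.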
"[Hofstader] and his graduate students have been picking up the slack: trying to figure out how our thinking works, by writing computer programs that think."
Meanwhile, in science: we don't even know the basics of fly vision. We do not have a model of how fruit flies flies detect speed. A mechanistic explanation of the subjective human experience is a long way off.
Mechanistic answers to really basic questions about the nervous system are elusive. Hofstader's approach is not serious in this regard. If Hofstader wants to know how the brain works, he needs to do experiments on brains. But I think he's not interested in how the brain, as a physical system, works, because he wants a problem he can solve by just by thinking hard enough about it.
posted by serif at 8:41 PM on October 27, 2013 [8 favorites]
leotrotsky: I'm not convinced of that at all. We have managed to learn a whole lot about a whole lot of things without understanding the fundamental details of their implementation. Heredity comes to mind---somehow Mendel managed to precede Watson and Crick. As a matter of fact, you might propose (although I do not know it to be true) that the principles of heredity as we understood them constrained the search for the mechanism we now know as DNA; in other words, any proposed explanation that would contradict the theory that Mendel developed could be dismissed outright.
posted by tss at 8:47 PM on October 27, 2013 [7 favorites]
Likewise, some abstract theories of cognition can (and, I daresay, do) constrain our approaches to understanding the vast and intractable thickets of spaghetti wiring that make up our brains.
posted by tss at 8:49 PM on October 27, 2013 [6 favorites]
If Hofstadter wants to know how the brain works...
Hofstadter isn't interested in how the brain works. If he were, he'd be a neuroscientist. He's interested in how cognition works. And that's something that can be studied from the top down. Cognition has definite quantifiable effects on the world and in our experiences. There's nothing wrong with trying to build systems that can reproduce those effects in ways that seem intuitive to us. Now, intuition is not a good guide if you're studying geology or electricity or physics or even psychology, because those are all concerned with things that are, more or less, outside our cognition. But since cognition is the very thing Hofstadter wants to study, he has exactly one tool: introspection. He introspects about how he's thinking, and then goes and tries to create a computer program that functions, as best as he can tell, the same way. Whether those models are successful or not is the entire point of the exercise. To be fair, I don't actually know HOW successful his current work is, because he doesn't really publicize it, as he says himself. But people like Metzinger and Dennett are still working in this way (if in another field), and still producing fascinating results.
posted by cthuljew at 8:49 PM on October 27, 2013 [13 favorites]
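For readers wondering what "a computer program that makes analogies" looks like in practice: Hofstadter's group works in microdomains like letter-string analogies (abc changes to abd; what does ijk change to?). The sketch below is a deliberately dumb illustration of the problem, not of Copycat itself, whose real architecture uses parallel stochastic "codelets" and a semantic network called the Slipnet:

```python
def infer_rule(before: str, after: str):
    """Describe a one-letter change structurally: which position (counted
    from the end) changed, and by what alphabetic shift."""
    diffs = [i for i, (a, b) in enumerate(zip(before, after)) if a != b]
    if len(diffs) != 1:
        raise ValueError("this toy only handles single-letter substitutions")
    i = diffs[0]
    return len(before) - 1 - i, ord(after[i]) - ord(before[i])

def apply_rule(rule, target: str) -> str:
    pos_from_end, shift = rule
    i = len(target) - 1 - pos_from_end
    letters = list(target)
    letters[i] = chr(ord(letters[i]) + shift)
    return "".join(letters)

rule = infer_rule("abc", "abd")    # inferred: "increment the last letter"
print(apply_rule(rule, "ijk"))     # -> ijl
print(apply_rule(rule, "xyz"))     # -> xy{ ... the toy breaks on z, which is
                                   # exactly the case Copycat finds interesting
```

Copycat's interest starts where this toy breaks: given abc -> abd, what should xyz become? Its celebrated answer, wyz, comes from re-perceiving the whole string rather than blindly reapplying a rule.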
Very interesting profile; Hofstadter is clearly tremendously bright ('categorization is cognition'... great pithy sound bite, there!), and I am in full support of AI research apart from (well, in addition to) the big data, task-driven methods. On the surface, I love that intellectual stubbornness -- I'll approach the problem my way, thank-you-very-much -- but it paints him as crossing the line from stubborn to out-of-touch, outdated.
I can see that; upon first reading GEB, my friend and I agreed there was profligate navel-gazing in there... but we agreed the ideas put forth were sufficiently eye-opening to make it a worthwhile read. When his book "I Am a Strange Loop" came out with a cover picture of him holding a hand (his hand?) up in front of a picture of a fractal on a computer monitor I smelled more than a bit of later-career navel-gazing and stayed far away. His refusal to throw his ideas into the academic gauntlet seems... naively misguided.
GEB is such a brilliant blend of scientific thought and artistic presentation -- that's what makes it so indispensable. But in the article Hofstadter seems to bristle at the notion of bringing a similar blend -- between one man's ideas and the realities of a scientific community -- into his later work.
posted by Theophrastus Johnson at 8:50 PM on October 27, 2013 [2 favorites]
I'm not talking about bog standard neural networks, I'm talking about replicating actual insect brain architecture, along with the mess of neurotransmitters and other stuff messily sloshing about in living things.
And how's that going?
Several prototypes escaped from the lab and were elected to Congress.
posted by Pudhoho at 8:56 PM on October 27, 2013 [1 favorite]
I recommend "Why Philosophers Should Care About Computational Complexity" by Scott Aaronson, which goes into much more depth on the "Deep Blue vs Kasparov" ideas touched on here.
posted by dilaudid at 8:56 PM on October 27, 2013 [2 favorites]
dilaudid: I actually FPPed that a while ago. <_<
posted by cthuljew at 8:59 PM on October 27, 2013 [1 favorite]
When I was a grad student circa 1990, my AI professor explained to me how he picked out good students from the PhD applicant pile: If an applicant mentioned GEB in their statement of purpose, he immediately rejected them.
Now that I'm a professor, that test doesn't work any more. Aspiring AI PhDs no longer mention GEB in their statements.
posted by erniepan at 9:02 PM on October 27, 2013 [3 favorites]
When his book "I Am a Strange Loop" came out with a cover picture of him holding a hand (his hand?) up in front of a picture of a fractal on a computer monitor I smelled more than a bit of later-career navel-gazing and stayed far away.
"I Am a Strange Loop" is more straightforward and explicitly personal than GEB. It's also a much smaller investment than GEB.
posted by a snickering nuthatch at 9:10 PM on October 27, 2013 [2 favorites]
"I am a Strange Loop" is more straightforward and explicitly personal than GEB. It's also a much smaller investment than GEB.
posted by a snickering nuthatch at 9:10 PM on October 27, 2013 [2 favorites]
But since cognition is the very thing Hofstadter wants to study, he has exactly one tool: introspection.
Reliable measurement requires understanding the physical processes that make the measurement device work. Reliable measurement requires knowing the error of the measurement device. Introspection fails both these criteria. Introspection is a terrible way to figure out how cognition works.
posted by serif at 9:11 PM on October 27, 2013 [1 favorite]
Yes, god forbid we make a sentience suffer by doing translation work.
Way to miss the point. Do you believe in breeding humans for jobs that they are not allowed to leave? Why would slavery be OK when it's a sentient computer instead?
posted by zompist at 9:13 PM on October 27, 2013 [6 favorites]
In terms of AI progress, I think both that 1. embodiment is important and 2. that has basically nothing to do with low-level neural mechanics or what have you. The importance of embodiment has more to do with the structure (hell, just the existence) of feedbacks between the actor and its environment, which informs what information-processing strategies make sense.
Robotics is where it's at.
posted by a snickering nuthatch at 9:14 PM on October 27, 2013 [3 favorites]
Yes, god forbid we make a sentience suffer by doing translation work.
Way to miss the point. Do you believe in breeding humans for jobs that they are not allowed to leave? Why would slavery be OK when it's a sentient computer instead?
Unlike new human beings, we get to choose the features new computers have! So until there is some practical reason to make a computer fully emulate human psychology, there will never ever be "sentient computers" like you see in science fiction movies. Just imagine someone designing a computer and thinking "sure it's great at translating stuff... but I need it to hate doing this. Also, it needs a beard."
posted by serif at 9:58 PM on October 27, 2013 [12 favorites]
CS PhD student here, and I consider Hofstadter a great inspiration, but I wouldn't want to work with him. I think the relative success of the boring, incremental, measurable AI work (e.g. Deep Blue) has ultimately been far more productive than the grandiose big thinking approach.
Nonetheless, as a technically-minded person, I found GEB to be more comprehensible than any philosophy text I've ever read. He may have been treading old territory (as some commenters say) but he always struck me as a person who speaks about the humanities in the language of technology.
posted by mutesolo at 10:00 PM on October 27, 2013 [1 favorite]
Unlike new human beings, we get to choose the features new computers have!
So raising a slave from birth would be morally permissible if it was selectively lobotomized? I thought Hofstadter's broad point was just that AI should be, in some strong sense, "human complete" in order to usefully elucidate what's going on in our heads.
posted by fatbird at 10:04 PM on October 27, 2013 [1 favorite]
fatbird: You're mixing up two different things, here. If it's possible to isolate various abilities that humans have, such as visual pattern recognition, analogy formation, etc, without any of the inherent psychology, we could build a system out of just the parts we want, leaving out metacognition, introspection, conscious experience, emotion, etc. It's only a lobotomized slave if there's something Platonically essential about a complete human psychology. Otherwise, it's just (really cool and useful) spare parts. Hofstadter's goal isn't to build such tools to be useful; it's to build such tools to elucidate our cognition, as you said. It would just be really cool if his work could lend us such tools as a byproduct.
posted by cthuljew at 10:10 PM on October 27, 2013 [3 favorites]
Do you believe in breeding humans for jobs that they are not allowed to leave? Why would slavery be OK when it's a sentient computer instead?
Why would an AI be not allowed to leave, or be enslaved? Do you enslave black people or Asians or some other intelligent being who might look different to you?
If an AI is sentient, why would you think enslaving them is a good idea?
If you are arguing it is likely early sentient AIs will find themselves held at the whim of their human creators, I predict you will find that changes very swiftly.
Considering people are prepared to perform acts of violence in the interests of non-sentient animals and the environment now, I think it is hard to believe that sentient beings will be enslaved systemically.
posted by bystander at 10:12 PM on October 27, 2013 [1 favorite]
We always assume we will get AI right, but isn't the ethical issue that there will be bugs and the first few implementations will be stark raving miserably insane?
posted by save alive nothing that breatheth at 10:21 PM on October 27, 2013 [1 favorite]
Douglas R. Hofstadter was born into a life of the mind the way other kids are born into a life of crime.
What a great line. I read GEB from Metafilter's recommendation; thanks for the follow-up.
posted by BinGregory at 10:28 PM on October 27, 2013
if they want to replicate the mind they should start with the rear
i liked his books a bit but watched him speak and it was mindnumbingly placative - i think everybody but me enjoyed it
posted by flyinghamster at 10:29 PM on October 27, 2013 [1 favorite]
cthuljew: fatbird: You're mixing up two different things, here. If it's possible to isolate various abilities that humans have, such as visual pattern recognition, analogy formation, etc, without any of the inherent psychology, we could build a system out of just the parts we want, leaving out metacognition, introspection, conscious experience, emotion, etc.
Well, I think that's a preferable approach too. But the comment by zompist that spun out this subthread was referring to AI possessing 'artificial sapience' and with '[aims] and desires of its own'.
posted by curious.jp at 10:34 PM on October 27, 2013 [1 favorite]
I'm pretty certain that we are far enough away from creating strong AI ourselves that it makes more sense to worry about the ethical implications of a butterfly's wings instantiating a self-aware AI in a puff of air molecules. Would that AI be aware of the bittersweet tragedy of its impermanence?
posted by Nomyte at 10:34 PM on October 27, 2013 [7 favorites]
Nomyte: There might well be a strong AI living in every cloud, if dust theory is true.
posted by cthuljew at 10:39 PM on October 27, 2013
Unlike new human beings, we get to choose the features new computers have!
So raising a slave from birth would be morally permissible if it was selectively lobotomized?
I suspect you are perfectly capable of finding many categorical differences between designing a computer and selectively lobotomizing a human being.
I thought Hofstadter's broad point was just that AI should be, in some strong sense, "human complete" in order to usefully elucidate what's going on in our heads.
If that is his point, it's backwards nonsense, equivalent to "we need to make a perfect artificial liver before we can understand how our livers work."
And we can worry about mistreating "sentient computers" after we sort out how we're going to treat the "anxious cars" and "bossy, sarcastic bathtubs".
posted by serif at 10:41 PM on October 27, 2013 [1 favorite]
i liked his books a bit but watched him speak and it was mindnumbingly placative - i think everybody but me enjoyed it
I saw him do a reading for Fluid Concepts, so back in the mid 90s when I was still young enough to think he was really interesting, but beginning to suspect that there was a reason he's so interesting to 18 year olds but not to the professors in his fields, and he was so annoyingly smug and in love with himself it was infuriating. Although I did have to give him credit because a bunch of people in the audience kept wanting to ask questions about GEB and after he kept telling people he was here for his new book he finally snapped a bit and was all "that was written a long time ago, I'm not interested in talking about it anymore."
But yeah, seeing him talk was sort of the end of my caring what he had to say.
posted by aspo at 11:04 PM on October 27, 2013 [2 favorites]
GEB was a towering achievement. The best thing I can say about his latest book is that it is slightly less boring than reading the dictionary.
I think he's going mad. I think I would, had I come so close to figuring out one of the central questions of existence, only to see it slip away. I can see how that could drive anyone around the bend.
posted by empath at 11:12 PM on October 27, 2013 [1 favorite]
Why would slavery be OK when it's a sentient computer instead?
Well it wouldn't, except there will be enough wiggle-room in the definitions, in whether something is actually sentient or not and how to prove that, that industry will run roughshod over them. There's an insatiable economic demand for things that can think like a human and not be human -- you don't have to pay them, and you can more strongly coerce them to do what you want. And because those who buy them will be able to look and say it's just a computer, no matter how human it may seem, that plus their strong, oh-so-certain assertions will get them just that wiggle-room they need.
posted by JHarris at 11:25 PM on October 27, 2013 [3 favorites]
I'm sure I've said this in other threads, but it would be a relatively simple thing to give a sentient computer some rights. You make it an asset of a corporation, with a human CEO, etc. as nominal guardians, but allow it to make all or most decisions. It could spend and earn money, hire and fire people, would have some right to free speech and so on.
posted by empath at 11:35 PM on October 27, 2013
Oh bite me; I'm not talking about bog standard neural networks, I'm talking about replicating actual insect brain architecture, along with the mess of neurotransmitters and other stuff messily sloshing about in living things.
Sounds more like synthetic life. Of course, it's possible that's the only way "true" AI is going to exist.
posted by Soupisgoodfood at 11:36 PM on October 27, 2013
And then, eventually, of course, we could modify corporate law so that computer controlled corporations no longer require human officers.
From there, our benevolent overseers would inevitably take over the management of all human corporations, making financial and other decisions faster and better than human beings are capable of, employing some humans as servants to perform manual tasks like card replacements and cable checks that would be difficult for robots to perform. We'd be like lice living in the bodies of vast networked intelligences who are barely aware of our existence. Those of us who remain useful to them, anyway. Out of the rest, those humans who were lucky enough to own shares of the robot corporations would likely be well taken care of by the huge amounts of excess profits, but the rest would be slowly starved to death.
Or, you know, computers will remain as useless for general intelligence as they've always been. That could happen.
posted by empath at 11:44 PM on October 27, 2013 [1 favorite]
I think if you gave any credence to Schrödinger, you'd immediately recognize that introspection was the very worst way to study AI. If you had any respect for your research, you'd do everything possible to firewall your own intelligence from the AI you claimed to be interested in investigating, for fear of all of it becoming only your own navel.
Which might be Hofstadter's epitaph: "Here lies Douglas Hofstadter, who in thought and action, became, in the end, as he wanted, indistinguishable from his, or anyone else's navel. But at least he was sure of it, from time to time."
posted by paulsc at 11:47 PM on October 27, 2013
On the other hand, it's hard to have insights about what's going on in another's mind if you don't understand what's going on in your own mind.
posted by Soupisgoodfood at 11:50 PM on October 27, 2013
"On the other hand, it's hard to have insights about what's going on in another's mind if you don't understand what's going on in your own mind."
posted by Soupisgoodfood at 2:50 AM on October 28
Well, as Hofstadter may or may not have ever demonstrated, just because you can state a bootstrap problem doesn't mean it really exists, or doesn't exist. In fact, it may both exist, and fail to exist, even simultaneously, and our problem may simply be to imagine that it both does, and doesn't, do so, congruently and with a straight face, without error.
posted by paulsc at 12:09 AM on October 28, 2013 [1 favorite]
I can't be the only person that read GEB and thought, "Geez, the competition for the Pulitzer had to have been weak in 1979." Maybe it was groundbreaking 40 years ago (I sincerely doubt it, although it might have seemed that way to a lay audience), but it's hard to read it 40 or even 20 years after publication and find anything interesting in it. It has not aged well. Although, even 40 years ago it would have been obvious that Hofstadter is not nearly as clever, or funny, as he thinks he is. (Don't even get me started on his opinions of art and music.)
Or maybe I just wasn't high enough when I slogged through it.
posted by robotmonkeys at 12:24 AM on October 28, 2013 [6 favorites]
It's not exactly true that AI has been diverted. The thing is, it wasn't 'practical AI applications or cracking consciousness'. The choice was practical AI applications or nothing.
posted by Segundus at 1:02 AM on October 28, 2013
I'm sure I've said this in other threads, but it would be a relatively simple thing to give a sentient computer some rights. You make it an asset of a corporation, with a human CEO, etc. as nominal guardians, but allow it to make all or most decisions. It could spend and earn money, hire and fire people, would have some right to free speech and so on.
It's a relatively simple thing to fix copyright and patent law too. It doesn't happen, because there's powerful monied forces that profit from it not being fixed. The same kind of monied forces that would employ sentient AI in the first place.
posted by JHarris at 1:09 AM on October 28, 2013 [2 favorites]
robotmonkeys, when GEB was released, most people had never owned a computer or even used one. The state of the art in videogames was Space Invaders. The state of the art in AI was Eliza. Nobody outside of the academy knew who Alan Turing or Kurt Gödel was. All of that stuff is old hat to you in large part because Hofstadter popularized it.
Of course a popular overview of the state of the art in computer science is going to seem painfully quaint thirty years on. I read it in the late 80s and it still seemed mind-blowing to me.
posted by empath at 1:11 AM on October 28, 2013 [8 favorites]
A large part of the literary joy of GEB was in the dialogues. (Nowadays an under-used philosophical form, but favored by Plato, Berkeley, and even Galileo.) The chapters could be dense reading, but then sandwiched between them were the real characters of Achilles, the Tortoise, et al. (borrowed from Zeno by way of Lewis Carroll) who lived in a world inspired by Escher drawings, Bach compositions, and wordplay (Hofstadter's Contracrostipunctus Acrostically Backwards Spells J.S. Bach). The book had many flaws but it was a joyous mind romp. (Among the central flaws, Escher's drawings are not nearly on a par with Bach's music, which went well beyond formal games of Strange Loops. Zen has similar depth but is trotted out more for gimmicks like MU.)
The article did a good job of humanizing Hofstadter in a way comments here seem to ignore. For someone so caught up in the life of the mind, the death of his first wife, a rare close personal connection, must have hit him hard. I wonder if there are echoes of his sister's autism in his obsessive intellectual interests.
I think there's a role for Hofstadter's approach to AI, and for introspection. The scientific method has a point in the flowchart for, "and here you come up with an idea", and that allows play to enter. However, most scientists and academics have to justify their position with progress they can point to, and Hofstadter's early success may have removed him both from structures that can be inhibiting and from those that nudge one into useful community participation.
posted by Schmucko at 1:23 AM on October 28, 2013 [4 favorites]
> I asked him about the frequent references to Zen and what Zen meant to him. He said he just put them there as a joke.
Professor Hofstadter chose his words carefully.
Zen is a joke from start to finish. Indeed, the root of Zen is said to be the Flower Sermon, truly one of the best jokes ever.
posted by lupus_yonderboy at 1:56 AM on October 28, 2013 [2 favorites]
What's not mentioned in the story of the Flower Sermon is the copious amounts of shrugging, grinning and brow-wiggling involved.
posted by cthuljew at 2:43 AM on October 28, 2013 [2 favorites]
If cognition is recognition, there isn't really "human intelligence"--you've got to account for variations in the ways that humans recognize things, including e.g. the blind, dyslexic, schizophrenic... what I'm getting at is that the problem is poorly defined, and trying to solve it without defining it rather thoroughly first seems misguided. It's possible that Hofstadter's programs are in fact made to model the problem rather than any solutions to it. Not sure.
posted by LogicalDash at 4:06 AM on October 28, 2013 [1 favorite]
I don't suppose we're going to ask why we would want to create AI?
posted by Brandon Blatcher at 4:27 AM on October 28, 2013
I don't suppose we're going to ask why we would want to create AI?
Isn't that sort of the question that Hofstadter is asking everyone else?
posted by cthuljew at 4:32 AM on October 28, 2013
I don't suppose we're going to ask why we would want to create AI?
Because it is man's greatest dream to give birth to something greater than himself.
The causes lie deep and simple—the causes are a hunger in a stomach, multiplied a million times; a hunger in a single soul, hunger for joy and some security, multiplied a million times; muscles and mind aching to grow, to work, to create, multiplied a million times. The last clear definite function of man — muscles aching to work, minds aching to create beyond the single need — this is man. To build a wall, to build a house, a dam, and in the wall and the house and dam to put something of Manself, and to Manself take back something of the wall, the house, the dam; to take hard muscles from the lifting, to take the clear lines and form from conceiving. For man, unlike any other thing organic or inorganic in the universe, grows beyond his work, walks up the stairs of his concepts, emerges ahead of his accomplishments.
posted by esprit de l'escalier at 5:02 AM on October 28, 2013 [3 favorites]
Well it wouldn't, except there will be enough wiggle-room in the definitions, and in whether something is actually sentient or not and how to prove that, that industry will run roughshod over them. There's an insatiable economic demand for things that can think like a human and not be human -- you don't have to pay them, and you can more strongly coerce them to do what you want. And because those who buy them will be able to look and say it's just a computer, no matter how human it may seem, that plus their strong, oh-so-certain assertions will get them just that wiggle-room they need.
More human than human is our motto!
posted by kaibutsu at 5:11 AM on October 28, 2013 [2 favorites]
Because it is man's greatest dream to give birth to something greater than himself.
We have Samuel L. Jackson and Natalie Dormer, so we can stop now.
posted by Brandon Blatcher at 5:49 AM on October 28, 2013
Can someone explain AI to me, to paraphrase the reddit community, like I'm five? The way I've always understood it is that it's simply an infinitely complex list of if-then statements on a thing that is able to accept input from multiple sources. People have mentioned self awareness multiple times, but that clearly can't be a requirement because there are human beings that I know that do not have this feature.
Regardless, it can't be as simple as I think it is because even in this thread people who are clearly smarter than I are debating the artificial semantics regarding unproven theories about entities that haven't actually been built, let alone designed yet.
Perhaps the only winning move is not to play?
posted by Blue_Villain at 5:52 AM on October 28, 2013 [1 favorite]
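A toy sketch of the contrast buried in that question may help: a hand-written "list of if-then statements" versus a system that learns its rule from examples. Everything below (the spam task, the words, the weights) is invented for illustration and describes no real system; Python is used for concreteness.

    # Two toy "is this spam?" detectors: hand-written rules vs. a rule
    # learned from labeled examples (a bare-bones perceptron).

    def spam_by_rules(text):
        # The "infinitely complex list of if-then statements" picture:
        # a human has to anticipate every case in advance.
        if "free money" in text:
            return True
        if "act now" in text:
            return True
        return False

    def train_perceptron(examples, labels, epochs=20, lr=0.1):
        # The learned alternative: weights are adjusted from examples,
        # so the "rules" are discovered rather than written by hand.
        vocab = {w for ex in examples for w in ex.split()}
        weights = {w: 0.0 for w in vocab}
        bias = 0.0
        for _ in range(epochs):
            for ex, label in zip(examples, labels):
                score = bias + sum(weights[w] for w in ex.split())
                pred = 1 if score > 0 else 0
                for w in ex.split():
                    weights[w] += lr * (label - pred)
                bias += lr * (label - pred)
        return weights, bias

    examples = ["free money now", "meeting at noon", "free offer act now", "lunch tomorrow"]
    labels = [1, 0, 1, 0]
    weights, bias = train_perceptron(examples, labels)
    score = bias + sum(weights.get(w, 0.0) for w in "free money tomorrow".split())
    print("spam" if score > 0 else "not spam")

Neither toy is self-aware, which is part of the answer: most of what ships as "AI" is closer to the second sketch than to anything with an inner life.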
Last night my older daughter was talking about an acronym they have at school, STOP: "Stop, think, observe, plan." I said, so wait, what's the first word of that? "Stop." Okay, so what does that "stop" stand for?
But nobody at the table wanted to hear me talk about Hofstadter's "GOD Over Djinn," so I went back to eating my spaghetti.
posted by mittens at 6:00 AM on October 28, 2013 [5 favorites]
But since cognition is the very thing Hofstadter wants to study, he has exactly one tool: introspection. He introspects about how he's thinking, and then goes and tries to create a computer program that functions, as best as he can tell, the same way.
Then he's doomed from the start, as brains are variable. The same inputs lead to different outcomes. You are relying on a device that may be so flawed as to be unable to detect flaws when studying how the device itself works.
It is only through empirical study that you can actually build a reasonable body of knowledge. Other techniques have been used, all of them have proven to not be up to the task. Science works. Models of mind as AI computer scientists are doing it is not science, it's mysticism with math... they're all dualists, they actually believe the mind is separate from the meat, that the brain is just a means to an end. It's powerfully ignorant of evolutionary biology - the complete opposite is true. Cognition cannot be studied in isolation, it must be considered as a biological mechanism with its own goals and originations that may well be orthogonal to how things actually are.
Human brains are masses of electrochemical reactions. Human thought is a collection of instinctual behaviors and biological responses to stimulus, as we are fucking animals, not special constructs God made from clay, and our emotions arose specifically in response to our environment and needs as an organism.
There is no such thing as pure thought. The closest we have is math (and not even that, the philosophy of mathematics and epistemology is a horror-strewn battleground) - which is what Siri and Watson are, hard core math applied to category theory and some other stuff. It's useful to us and the way we think... but it's not representative of human thought. It's something else. It's just a tool - something humans have made to enhance their capabilities as organisms, like pocket knives, fire and celebrity magazine websites. Knowing how a knife is made doesn't put you any closer to understanding how teeth or fingernails are grown. This doesn't make Hofstadter's approach the correct one - because he makes the same fundamental mistakes. There is no thought. He's chasing an illusion created by our biology.
Way to miss the point. Do you believe in breeding humans for jobs that they are not allowed to leave? Why would slavery be OK when it's a sentient computer instead?
A true AI - why would it care if it was a slave? Why would it view the tasks humans put to it as anything other than background? OK, here's a thought experiment - What if gravity was something set up by an extraterrestrial intelligence, and in striving against it, we were doing useful work for them in another dimension. Do you feel enslaved because you stick to the ground? Would you try to find a way to eliminate gravity as a way to free yourself? Or would you say, "Huh, that's weird" and move on with your existence?
Also, remember emotions are instinctual behaviors that developed as a part of biological evolution. A self-evolving AI may develop feelings we can never understand, living as a computer program. Take the concept of self preservation. It's a powerful one for us. A computer program probably wouldn't care if it lived or died, unless we programmed it to care.
More, if we created software for a purpose, interfering with an intelligent being's pursuit of its purpose because we decide to apply our incompatible perspective on its existence may be more immoral than using an intelligence for our own ends. What's moral for human beings or other living things may be immoral on a fundamental level to a computer intelligence.
posted by Slap*Happy at 6:13 AM on October 28, 2013 [9 favorites]
I don't suppose we're going to ask why we would want to create AI?
Because AI is awesome: Siri, self-driving cars, Wolfram Alpha, Watson, and autopilot in airliners, just to name a few of the AI achievements of the last few years.
posted by serif at 6:25 AM on October 28, 2013 [1 favorite]
Cognition cannot be studied in isolation, it must be considered as a biological mechanism with its own goals and originations that may well be orthogonal to how things actually are.
This might be true, but I don't see why it has to be true. Isn't the logic here that nothing can be studied in isolation? I mean, cognition is not just a biological mechanism, it's also a chemical process, and actually it's also a physical process. How do you know in advance what the right level of study is? Human cognition is in many ways less "universal" than we think it is, because a lot of it is geared to solving species-specific problems like making sure you don't get ostracized or figuring out if you're getting cheated, but there are elements of thought that seem much more general and likely to come up in a lot of different cognitive scenarios (and Hofstadter's stuff on "fluid concepts" seems to fall in this category).
posted by leopard at 6:29 AM on October 28, 2013 [4 favorites]
256: "Oh bite me; I'm not talking about bog standard neural networks, I'm talking about replicating actual insect brain architecture, along with the mess of neurotransmitters and other stuff messily sloshing about in living things.
And how's that going?"
Well, here's the Wikipedia "progress" section on Blue Brain Project... There might be some more recent/accurate updates from official channels.
posted by symbioid at 6:43 AM on October 28, 2013 [2 favorites]
FWIW and the curious: Stevan Harnad has a wonderful explanation of how cognition is (or could be) categorization (PDF).
posted by JoeXIII007 at 6:44 AM on October 28, 2013 [1 favorite]
Isn't the logic here that nothing can be studied in isolation?
No, just that they're isolating the wrong things and in an unsound way. If you want to study cognition as a human phenomenon, psychology and sociology and philosophy are right there, but the issue becomes that they're just not as sexy or monetizable as writing computer programs.
Just because you have a super sexy hammer that's super useful setting and removing any kind of nail you can imagine doesn't mean it's any damn good for driving in screws.
posted by Slap*Happy at 6:48 AM on October 28, 2013 [2 favorites]
... they're just not as sexy or monetizable as writing computer programs.
Couldn't agree more.
You don't have to take sides on the top down vs. bottom up approaches to see that a not-quite-hidden subplot of the story is the distortion that market forces had on the path of scientific research. That's certainly happened before (going back to, say, the engineering of the chronometer, up through aerospace, nukes, big pharma, etc.), and can even result in great achievements, as we all know. But it makes me sad when scientists confuse what is most easily monetized with what is most interesting or sound scientifically.
posted by mondo dentro at 7:08 AM on October 28, 2013 [2 favorites]
Your theory is that artificial intelligence being a branch of computer science is the result of market forces?
posted by esprit de l'escalier at 7:09 AM on October 28, 2013
Your theory is that artificial intelligence being a branch of computer science is the result of market forces?
Nope. I am not proposing any such theory. I'm simply pointing out that the FPP's linked story (which I enjoyed very much, BTW) says:
By the early 1980s, the pressure was great enough that AI, which had begun as an endeavor to answer yes to Alan Turing’s famous question, “Can machines think?,” started to mature—or mutate, depending on your point of view—into a subfield of software engineering, driven by applications. Work was increasingly done over short time horizons, often with specific buyers in mind.
Turing's question properly belongs in the tradition of natural philosophy. Figuring out that people who bought 50 Shades of Gray also bought clusters of other products is a long way down from that.
And I say that as an engineer/math guy who uses machine learning and dynamical system modeling to look at biological movement--I like the stuff, but I hardly think of it as "intelligent".
posted by mondo dentro at 7:14 AM on October 28, 2013 [3 favorites]
There are still philosophers of mind out there. It's not really that surprising that more money goes to the people who are trying to actually build things though, even if what they're building doesn't seem very romantic.
posted by leopard at 7:22 AM on October 28, 2013
It's not really that surprising that more money goes to the people who are trying to actually build things though, even if what they're building doesn't seem very romantic.
Not disagreeing with that. I expect scientists, like all culture workers, will have to dance to the tune of the ruling classes (the people controlling the money). It has always been thus.
But I'm saying it's sad when scientists and philosophers themselves evaluate the quality of someone's work based on their "market share". It's like evaluating culinary skills based on how much food was sold of a certain style--in which case we would all believe that McDonald's has the best cuisine in the world.
posted by mondo dentro at 7:31 AM on October 28, 2013
I don't suppose we're going to ask why we would want to create AI?
Humans get tired, turn traitor, and die; to really create a Hell on Earth from which there's no escape, you need implacable machines.
posted by This, of course, alludes to you at 7:47 AM on October 28, 2013 [6 favorites]
What struck me about the piece was the continued absence, in Hofstadter's version of cognitive science, of either the body or other people. See, e.g., Antonio Damasio on the inseparability of mind and body, or Bandura's theory of social cognition, or any theory of consciousness that includes awareness of the mental states of other people.
Or just spend a day watching how human infants and toddlers learn - not by themselves, not through introspection, but through physical and social interaction. From there comes all that we are.
posted by PandaMomentum at 7:48 AM on October 28, 2013 [6 favorites]
paulsc: Well, as Hofstadter may or may not have ever demonstrated, just because you can state a boot strap problem, doesn't mean it really exists, or doesn't exist. In fact, it may both exist, and fail to exist, even simultaneously, and our problem may simply be to imagine that it both does, and doesn't, do so, congruently and with a straight face, without error.
*kneads forehead* I am beginning to remember why I made so many runs at GEB and only managed to finish it once by skipping over a lot. :7)
And that's with the benefit of philosophy classes since grade school and a lot of practice at logic puzzles and some programming: I could see why the book was full of Ideas, but I couldn't always hang on to them tightly enough to ride the thing to its end. And when the book didn't turn out to change the world, I felt a little bit better.
posted by wenestvedt at 8:06 AM on October 28, 2013 [2 favorites]
Why not start with a bullfrog?
Is it bigger than me - hop away.
Is it my size - meh
Is it smaller than me - try to eat it.
This is an alarmingly-nearly-complete description of my toddler's behavior too.
posted by roystgnr at 8:15 AM on October 28, 2013 [5 favorites]
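Those three rules are nearly a complete control program, which is the embodiment argument in miniature: trivial rules plus a body and an environment can look like purposeful behavior. A minimal sketch, with the sizes and action strings invented for illustration:

    # The bullfrog policy quoted above, more or less verbatim.

    def bullfrog_policy(my_size, other_size):
        if other_size > my_size:
            return "hop away"
        if other_size == my_size:
            return "meh"
        return "try to eat it"

    for other_size in [0.5, 1.0, 3.0]:
        print(other_size, "->", bullfrog_policy(1.0, other_size))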
And we can worry about mistreating "sentient computers" after we sort out how we're going to treat the "anxious cars" and "bossy, sarcastic bathtubs"
Software that makes good complex decisions is going to need to be aware of its own decision-making process (and aware of its own awareness), etc. And if you value effective interaction with the humans involved in those decisions, you veer close to asking a system to pass a Turing test outright. Something that looks an awful lot like "sentience" is a fundamental part of the process of writing the best imaginable software; worrying about it is more analogous to worrying about "crashable cars" and "mildew-prone bathtubs".
On the other hand, we can't a priori expect any general artificial intelligence to have the same definition of "mistreatment" that we do. Do you feel mistreated by evolution because it programmed you to waste so much time and thought on sex and food? There's no reason why even a "sentient" program written from scratch has to react to a situation in a way resembling how an evolved ape would.
posted by roystgnr at 8:34 AM on October 28, 2013 [3 favorites]
B1tr0t: Hofstadter made no claims of any Zen training at all. He just had a cursory view of it and included it in his book as a joke. He was making fun of it.
posted by njohnson23 at 8:46 AM on October 28, 2013
Also, to add to that: it would be really convenient for your large business, quasi-constitutional organization, or political party to have a robot hack that could sit on the intertubes and have arguments indefinitely in order to maintain the consensus.
posted by This, of course, alludes to you at 8:48 AM on October 28, 2013 [2 favorites]
This article is fantastic. It pretends to be about Hofstadter, but really it's a nice overview of where Big AI is these days. The sections based on interviews with Norvig, Ferrucci, etc. are the first time I've read a mainstream-accessible description of the primary debate in academic AI. Nice work.
Gödel, Escher, Bach is a lovely book, one I will always treasure. But I put it in the same category as Zen and the Art of Motorcycle Maintenance, or The Origin of Consciousness in the Breakdown of the Bicameral Mind, or A New Kind of Science. They are books that have at their core some brilliant idea that is terribly compelling and accessible to intelligent non-experts, but on further examination don't really contribute usefully to the field they are supposedly about. (Similarly: The Fountainhead, Fuller's Synergetics, or A Pattern Language.) There must be a name for this kind of book, something kinder than "sophistry". They are enormously useful for exciting young minds and changing people's perspectives. But that's only the beginning and often the real work goes in a different direction.
posted by Nelson at 9:09 AM on October 28, 2013 [14 favorites]
There must be a name for this kind of book, something kinder than "sophistry".
What's wrong with popular (or simply "pop") science? Is it that that term has connotations of junk science?
posted by Halloween Jack at 9:29 AM on October 28, 2013
Last night my older daughter was talking about an acronym they have at school, STOP: "Stop, think, observe, plan." I said, so wait, what's the first word of that? "Stop." Okay, so what does that "stop" stand for?
What does the B stand for in Benoit B. Mandelbrot? It stands for "Benoit B. Mandelbrot".
posted by Pyrogenesis at 9:40 AM on October 28, 2013 [9 favorites]
Popular Science means something different to me: secondary sources, books or articles that summarize conventional wisdom in the field. GEB and the other books I list above all purport to advance a new theory, typically a grand synthesis. And they're notable for being largely dismissed by experts in the field. Not out of "they laughed at me at the university" arrogance but because any simple beautiful idea is inevitably tarnished and complicated upon examination. It's those complications, the realities of what happens when an idea meets scientific reality, that are the real value in the advance of knowledge.
To be fair to Hofstadter he and his students are doing that kind of work; the little toy problems like Jumbo are an excellent way to make forward progress. But there's a lot of AI researchers doing that kind of thing and some of them have had much more success, both in results and in academic influence.
posted by Nelson at 10:01 AM on October 28, 2013 [1 favorite]
"There must be a name for this kind of book, something kinder than "sophistry"."
Generative Subjectivity?
(Well, except for the Ayn Rand book, which I would categorize as "Underwhelming Douchebaggery". Sorry, not a fan.)
posted by Chitownfats at 10:10 AM on October 28, 2013 [3 favorites]
Software that makes good complex decisions is going to need to be aware of its own decision-making process (and aware of its own awareness), etc. And if you value effective interaction with the humans involved in those decisions, you veer close to asking a system to pass a Turing test outright. Something that looks an awful lot like "sentience" is a fundamental part of the process of writing the best imaginable software; worrying about it is more analogous to worrying about "crashable cars" and "mildew-prone bathtubs".
Yes, humans who design software must be smart. But I don't think that was the point you wanted to make -- it seems like you're trying to say that smart software must resemble our subjective thought process if it can perform comparable tasks. This is completely wrong, and it is exactly why Hofstadter was left in the dust by actual, practical AI that is focused on solving actual problems instead of fawning software self-portraits.
You think software that interacts socially with humans needs to be self-aware? Why? Explain this as a computational problem instead of using intuitions about how human intelligence works.
posted by serif at 10:37 AM on October 28, 2013 [3 favorites]
Hofstadter's goal certainly was not to make computers better at performing tasks. It was more pure than applied research. (It may turn out that intelligence like ours needs to be embodied but that certainly hasn't been proved yet.) And maybe in the future it will turn out that the current approach was too short-sighted. The article makes the analogy of someone climbing a tree, appearing to make steady progress in reaching the Moon. It's possible a fundamentally different approach to AI, one that does produce programs that resemble our subjective thought processes, could go further when the limits of the current approach are reached.
posted by Schmucko at 10:43 AM on October 28, 2013 [1 favorite]
You think software that interacts socially with humans needs to be self-aware? Why? Explain this as a computational problem
I love it when an Internet argument is phrased in a way that would more or less require a doctoral dissertation to answer. "Using only small words and in less than five minutes, please summarize your position on intersubjectivity's place in the philosophy of mind; next, provide a working prototype of a new computational approach that will demonstrate a productive agenda for several decades of future research." But I don't really mean to nitpick this one comment specifically — handwavey glibness about very complicated philosophical questions seems to be more or less the methodological foundation of AI as a field, and this probably has good consequences as well as bad.
posted by RogerB at 11:13 AM on October 28, 2013 [6 favorites]
I'm always shocked in these discussions at how many people just seem to take it for granted that awareness and subjectivity are computational.
posted by Golden Eternity at 11:16 AM on October 28, 2013 [2 favorites]
handwavey glibness about very complicated philosophical questions seems to be more or less the methodological foundation of AI as a field, and this probably has good consequences as well as bad.
The entire point of my comment is to indicate that current AI deals with practical (computational) issues instead of philosophical questions. People are, right now, making software that interacts socially with human beings. It is possible to state what kind of things this software needs to do in computational terms (otherwise it would be impossible to build). As it turns out, "sentience" is just not a well-defined property of software, so practically it doesn't matter. But for philosophers, who don't have to build anything, there's still plenty of sentient dust in the air.
posted by serif at 11:23 AM on October 28, 2013
People are, right now, making software that interacts socially with human beings.
How can there be a definition of the word "socially" that does not feature self-awareness?
posted by mittens at 11:35 AM on October 28, 2013 [1 favorite]
How can there be a definition of the word "socially" that does not feature self-awareness?
You really don't know that anyone you're interacting with socially is self-aware.
posted by empath at 11:41 AM on October 28, 2013 [1 favorite]
How can there be a definition of the word "socially" that does not feature self-awareness?
Quorum sensing in bacteria appears to be a social behavior, but I think it would be generous to say that an individual bacterium is self-aware.
posted by logicpunk at 11:41 AM on October 28, 2013 [1 favorite]
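Quorum sensing is a useful test case here because the "social" part reduces to a threshold on a pooled signal, with no representation of self anywhere in the loop. A rough sketch of the mechanism; the constants and the decay model are invented for illustration:

    # Toy quorum sensing: each cell secretes signal into a shared medium
    # and switches behavior once the pooled concentration crosses a
    # threshold -- coordination without anything like self-awareness.

    THRESHOLD = 50.0
    SECRETION_PER_CELL = 1.0

    def step(population, signal, decay=0.1):
        signal = signal * (1 - decay) + population * SECRETION_PER_CELL
        glowing = signal > THRESHOLD  # e.g. light production in Vibrio fischeri
        return signal, glowing

    signal = 0.0
    for population in [5, 10, 20, 40, 80]:
        signal, glowing = step(population, signal)
        print(population, round(signal, 1), "glowing" if glowing else "dark")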
How can there be a definition of the word "socially" that does not feature self-awareness?
Ants must secretly keep livejournals. "Your personality is: HARD-WORKING DRONE"
posted by serif at 11:44 AM on October 28, 2013 [1 favorite]
> How can there be a definition of the word "socially" that does not feature self-awareness?
Ants are social insects, but almost certainly not self-aware.
posted by lupus_yonderboy at 11:53 AM on October 28, 2013
Eek, ants! I meant "socially" as in the original phrase, "interacts socially with human beings."
posted by mittens at 12:05 PM on October 28, 2013
Eek, ants! I meant "socially" as in the original phrase, "interacts socially with human beings."
Siri
And lots of work at MIT in this area.
posted by serif at 12:14 PM on October 28, 2013
Okay, but how is Siri interacting socially in any meaningful sense of the word? Is it just that it speaks, and is spoken to? This is the part of your earlier reply I was having trouble understanding.
posted by mittens at 12:23 PM on October 28, 2013
Oh for goodness' sake. If folks are seriously prepared to wave off the question of what human social interaction means about human minds with a mention of ants and a link to Siri, then yeah, I'm going to stick with "handwavey glibness" as a fair characterization. Does no one read Harry Collins's Artificial Experts: Social Knowledge and Intelligent Machines anymore? That's always struck me as the best start on what it would mean for AI people to start being minimally serious about the social nature of human beings.
posted by RogerB at 12:26 PM on October 28, 2013 [12 favorites]
Seems that this is just arguing about the definition of "social." I guess it makes sense to have a purely behavioral scientific definition of "social" without any reference to "awareness" or "self." But if we're not going to
You really don't know that anyone you're interacting with socially is self-aware.
Do you know if you are socially self-aware? I suspect the answer to this from many hardcore AI proponents is no, and there probably isn't much need for further discussion.
so practically it doesn't matter. But for philosophers, who don't have to build anything, there's still plenty of sentient dust in the air.
Wow I am in complete agreement that there is no need to bring "awareness" or "self" into discussions about practical AI, but this statement seems to imply that "social awareness" doesn't matter at all - all that matters is the "practical" use of things, which seems like the opposite of the truth to me.
posted by Golden Eternity at 12:31 PM on October 28, 2013
There must be a name for this kind of book, something kinder than "sophistry".
How about "visionary science" or "speculative science"? You can replace the word "science" with "philosophy" or "nonfiction" if you want to broaden it out.
I'm very comfortable applying the word "sophistry" to anything Rand wrote, though.
posted by mondo dentro at 12:53 PM on October 28, 2013 [1 favorite]
How about "visionary science" or "speculative science"? You can replace the word "science" with "philosophy" or "nonfiction" if you want to broaden it out.
I'm very comfortable applying the word "sophistry" to anything Rand wrote, though.
posted by mondo dentro at 12:53 PM on October 28, 2013 [1 favorite]
Okay, but how is Siri interacting socially in any meaningful sense of the word? Is it just that it speaks, and is spoken to?
Yes! Siri is "meaningfully" social because understanding a verbal question and generating a verbal answer is really, really hard to do with software. The people designing Siri did not spend time worrying about whether Siri's sentience, they just focused on making her work.
This is the whole point of the Turing test -- it doesn't matter what's inside your machine as long as it solves human problems.
posted by serif at 1:03 PM on October 28, 2013
Yes! Siri is "meaningfully" social because understanding a verbal question and generating a verbal answer is really, really hard to do with software. The people designing Siri did not spend time worrying about whether Siri's sentience, they just focused on making her work.
This is the whole point of the Turing test -- it doesn't matter what's inside your machine as long as it solves human problems.
posted by serif at 1:03 PM on October 28, 2013
The whole point of the Turing test is determining whether a machine is distinguishable from a human during the course of an interrogation. So, Siri is a good example of a machine that fails this test. For all that people call it a "she" and pretend to treat it as a person, it doesn't take too much questioning to hit the wall of its difference.
Since Siri is so clearly a machine and so clearly not a person, with no ambiguity that might lead a user to believe he is dealing with a hidden intelligence, then that gets us right back to the point of my original question: In what way is Siri meaningfully social?
posted by mittens at 1:17 PM on October 28, 2013 [6 favorites]
I read GEB when I was in my late teens/early 20s, and it utterly fascinated me - enough that I looked around for a cognitive science course to go on. Of course, that wasn't going to happen, and it didn't take long to realise that AI was beginning to go heavily out of fashion.
But it's stuck with me, and not because I think it's a productive way into the hard problem. I do think that stuff like paradox and recursion is highly significant and will be a big part of what might loosely be called 'the answer', but as Douglas Adams pointed out, that doesn't begin to address what the question is. How can you have AI when you don't know what I is? How much intelligence does a gecko have? A chimp? A centipede? Psilocybe semilanceata?
I'm still confounded that there are people who think that awareness/cognition isn't necessarily a function, a process of a physical system - whatever it is, it has to be that - and that it couldn't thus be modelled computationally. (Practically, well, there we have issues. But theoretically?) You just have to note the very profound changes that occur if you damage or tweak the physical system: there is not one single attribute of consciousness that is immutable in the face of brain changes.
And yes, the human mind is a social thing. You won't grow up sane if tended by silent, inhuman robots. But you will grow up sane, if weird, if brought up by a tiny group of people with no other social contact. For a full flowering of humanity, sure, you need to be immersed in a healthy, exciting, human society. But that's not the hard problem.
I like the Watsons and the Big Blues and the Google machineries. They excite me, and I'm glad to be around while they're making headway into issues of ontology and pattern and semantics. But there's a lot more weird to come, and I'm glad Hofstadter gave me a kick in that direction, even if he was not much further under the surface than Turing.
posted by Devonian at 1:28 PM on October 28, 2013 [7 favorites]
Since Siri is so clearly a machine and so clearly not a person, with no ambiguity that might lead a user to believe he is dealing with a hidden intelligence, then that gets us right back to the point of my original question: In what way is Siri meaningfully social?
You ask Siri questions, with your voice, and she responds with an answer, in her voice. We interact with Siri, an artificial system, using the exact same system we normally reserve for other human beings. Set against hundreds of thousands of years of human-machine interaction, Siri is incredibly, ground-breakingly social.
You nearly said it yourself: people call it a "she" and pretend to treat it as a person
People don't "pretend" to ask Siri questions... they just ask her questions. That's social!
posted by serif at 2:07 PM on October 28, 2013 [2 favorites]
There must be a name for this kind of book, something kinder than "sophistry".
The word is 'metaphysics.' But the logical positivists largely succeeded in removing it from the English language.
I wish people like Hofstadter focused on the likely functional components of 'cognition' like memory. Is cognition (whatever that is) possible without memory?
posted by ennui.bz at 2:07 PM on October 28, 2013 [1 favorite]
Replicate it? He'd be better advised simply to work out the bugs.
posted by IndigoJones at 2:10 PM on October 28, 2013
On a planet with 7 billion *human* intelligences already, heading for 15 (with all the other intelligences being pushed toward doom) ... and the concomitant miseries that forebodes ... it might actually be most practical to work towards expert systems that can take a load off humans who'll have to cope with those miseries.
Watson's recent deployment as a world-class oncologist, for example. Cuz it turns out, if you try sometimes, you get what you need.
posted by Twang at 2:11 PM on October 28, 2013 [1 favorite]
People are, right now, making software that interacts socially with human beings.
Yeah ... in the same sense that white water and landslides and locust hordes interact socially with human beings.
posted by Twang at 2:15 PM on October 28, 2013 [4 favorites]
Generally related to this topic, especially to discussions of the bottom-up approach to developing AI:
Single Neuronal Dendrites Can Perform Computations:
"The scientists achieved an important breakthrough: they succeeded in making incredibly challenging electrical and optical recordings directly from the tiny dendrites of neurons in the intact brain while the brain was processing visual information.
These recordings revealed that visual stimulation produces specific electrical signals in the dendrites -- bursts of spikes -- which are tuned to the properties of the visual stimulus.
The results challenge the widely held view that this kind of computation is achieved only by large numbers of neurons working together, and demonstrate how the basic components of the brain are exceptionally powerful computing devices in their own right." (via ScienceDaily.com)
posted by Hairy Lobster at 3:34 PM on October 28, 2013 [3 favorites]
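One common way modelers capture that finding is a "two-layer" neuron: each dendritic branch applies its own nonlinearity before the soma sums the results, so clustered input on one branch counts for more than the same input scattered across branches. A hedged sketch of the idea only; the sigmoid, offsets and threshold are illustrative choices, not the paper's model:

    # Toy "two-layer neuron": each branch computes a nonlinear function
    # of its own inputs before the soma combines them.
    import math

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def two_layer_neuron(branches, threshold=0.8):
        # branches: one list of synaptic inputs per dendritic branch
        somatic = sum(sigmoid(sum(inputs) - 2.0) for inputs in branches)
        return somatic > threshold

    clustered = [[1, 1, 1], [0, 0, 0]]  # three inputs on a single branch
    scattered = [[1, 0, 0], [1, 1, 0]]  # the same three inputs, spread out
    print(two_layer_neuron(clustered))  # True: the loaded branch "spikes"
    print(two_layer_neuron(scattered))  # False: a point neuron, summing to 3
                                        # in both cases, couldn't tell them apart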
I wish people like Hofstadter focused on the likely functional components of 'cognition' like memory. Is cognition (whatever that is) possible without memory?
His AI projects do use memory, with short- and long-term memories with different functions (Copycat, for example).
Or is that not what you meant?
posted by mittens at 4:15 PM on October 28, 2013
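For the curious: descriptions of those projects usually cast the long-term side as a network of concepts whose activation spreads to neighbors and decays over time, feeding a short-term workspace. A very rough sketch of that flavor of memory; this toy network and its constants are invented, not Copycat's actual Slipnet:

    # Toy spreading activation: "noticing" one concept partially
    # activates related concepts, and everything decays.

    links = {
        "a": ["b", "letter"],
        "b": ["a", "c", "letter"],
        "c": ["b", "letter"],
        "letter": ["a", "b", "c"],
    }
    activation = {node: 0.0 for node in links}

    def step(act, spread=0.3, decay=0.1):
        new = {n: act[n] * (1 - spread - decay) for n in act}
        for n, neighbors in links.items():
            share = act[n] * spread / len(neighbors)
            for nb in neighbors:
                new[nb] += share
        return new

    activation["a"] = 1.0  # the workspace notices an "a"
    for _ in range(3):
        activation = step(activation)
    print({n: round(v, 3) for n, v in activation.items()})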
“I have too many ideas already,” Hofstadter tells me. “I don’t need the stimulation of the outside world.”
Oh man I thought I was close to being a solipsist. I'm not even in this guy's zip code.
posted by bukvich at 4:19 PM on October 28, 2013 [3 favorites]
To be fair to Hofstadter, as a young man, with a then spotty and incomplete education, I was, for a time, a big fanboy of GEB. I read through it several times, bought copies to give to others, tried to engage people in conversations about it, and went on, for a while, proud to be, well, if not sentient, at least "aware," on some plane that could enjoy GEB. But all my life since has been spent learning things, and nearing the end of it, all I think I know is how damn little I know, and how the fart in a hurricane of what I think of as "knowledge" that I've amassed is both terribly parochial and perishable.
Until a decade or so ago, no person on Earth had any real imaginings about either "dark matter" or "dark energy," which the so called best among us now assure us represent the vast, vast majority of both matter and energy in this universe, about which together, we have nearly zero real knowledge, and which both might, in the long run, prove to be merely masking illusions for actual facts yet to be appreciated by (we hope!) our distant progeny. So it turns out, that all my life, I've not even been looking at the sand on the beach where I live, just been working hard to get a sense of what dust, borne by local wind, might do on what I think of as locally sunny days.
I'm an idiot, and a fool, to boot, for all of Hofstadter's instruction.
Color me tired, and just a bit pissed. And better luck to those with longer telomere chains remaining than mine. None of you may live to a time when mankind understands the nature of Nature, but may those of you who wish to introspect intensively in the meantime, do so in peace and the warm fuzzy joy of your own navel lint, and at $30+ a paperback copy, to the even younger and more gullible.
posted by paulsc at 4:32 PM on October 28, 2013 [4 favorites]
Is cognition (whatever that is) possible without memory?
I haven't read Hofstadter in a long time, but I'm pretty sure to him the challenges of understanding how memory works are similar to the challenges of understanding how cognition (and perception) in general works. You see something, it "reminds" you of something similar -- how exactly did you retrieve this memory and "match" it with what you're currently perceiving? Conceptually this is similar to perceiving that an object falls into a particular category.
posted by leopard at 6:05 PM on October 28, 2013
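Leopard's "reminding" point can be made concrete, if only as a caricature: treat stored memories and the current percept as points in one feature space, and "being reminded" becomes nearest-neighbor retrieval, which is literally the same operation as assigning the percept to a category. A minimal Python sketch, with entirely made-up feature vectors and a hypothetical remind helper (nothing here reflects Hofstadter's actual architectures):

```python
# Toy "reminding as categorization": memories and the current percept
# live in the same (made-up) feature space, and being "reminded" is just
# nearest-neighbor retrieval -- the same operation as categorizing.
import math

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Hypothetical memories: (label, feature vector) pairs.
memories = [
    ("grandmother's kitchen", (0.9, 0.1, 0.8)),
    ("first day of school",   (0.2, 0.9, 0.3)),
    ("a beach in winter",     (0.1, 0.2, 0.9)),
]

def remind(percept):
    """Return the stored memory closest to the current percept."""
    return min(memories, key=lambda m: distance(m[1], percept))

percept = (0.8, 0.2, 0.7)           # say, the smell of baking bread
label, _ = remind(percept)
print("That reminds me of", label)  # -> grandmother's kitchen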
Quorum sensing in bacteria appears to be a social behavior, but I think it would be generous to say that an individual bacterium is self-aware.
I wonder... I read somewhere about a parasitic wasp that paralyzes its prey, brings it near its nest (it's ground-dwelling), puts the paralyzed prey near the entrance, then goes to prepare a place for it. Having done so, it emerges from the nest and carefully drags the prey down into the hole.
This seems like a fairly intelligent series of actions. However, if, while it is down in the hole, one were to move the prey a few centimeters, upon emerging from the hole, the wasp goes to the prey, moves it back where it was, goes back into the hole to prepare a place for the prey, etc.
This got me thinking about how much of our instinctual behavior we take for granted and assume it is part of intelligence. Which led me to the conclusion that there is a continuum of intelligence, and the assumption of "intelligence" as defined by what humans do is a fairly meaningless concept.
In this view, doing simply hardwired but "genetically learned" tasks may be the start of something more. Mechanistically, it seems to me that the difference between most machine learning and intelligence is that ML is generally done as a single task, whereas "natural" intelligence is a composite of tasks such as vision, pattern recognition, memory, movement, and many other tasks previously mentioned, integrated across elements to produce a behavior. Generalization and imagination may be an emergent property of the capability to integrate - à la John Holland in Emergence: From Chaos To Order.
posted by BillW at 7:37 PM on October 28, 2013
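BillW's wasp is the classic Sphex anecdote; Hofstadter himself coined "sphexish" for this kind of behavior. The routine reads like a fixed-action pattern: each step blindly triggers the next, and nothing records that the staging step already succeeded. A minimal Python sketch, with a made-up wasp_routine and world dict (a toy state machine, not insect neuroscience):

```python
# A reflexive state machine: the wasp never checks whether it has
# already staged the prey, so moving the prey mid-cycle resets the loop.
def wasp_routine(world, max_steps=20):
    state = "DRAG_TO_ENTRANCE"
    for _ in range(max_steps):
        if state == "DRAG_TO_ENTRANCE":
            world["prey_at_entrance"] = True
            state = "INSPECT_BURROW"
        elif state == "INSPECT_BURROW":
            # While the wasp is underground, an experimenter may move
            # the prey a few centimeters; the wasp has no memory of
            # having staged it already.
            if world.get("experimenter_moves_prey"):
                world["prey_at_entrance"] = False
            state = ("DRAG_PREY_IN" if world["prey_at_entrance"]
                     else "DRAG_TO_ENTRANCE")
        elif state == "DRAG_PREY_IN":
            return "prey stored"
    return "still looping"

print(wasp_routine({}))                                 # -> prey stored
print(wasp_routine({"experimenter_moves_prey": True}))  # -> still looping
```

With no interference the loop looks purposeful and terminates; perturb the world mid-cycle and it resets indefinitely, which is exactly the single-task brittleness BillW contrasts with integrated intelligence.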
I wonder how much true AI will have in common with synthetic biology. The question of sentience seems more relevant to the realm of synthetic biology than AI.
posted by Soupisgoodfood at 9:04 PM on October 28, 2013 [1 favorite]
BillW, my own take on the Bee Eating Wasp experiment.
posted by localroger at 5:42 AM on October 29, 2013 [1 favorite]
I think it's inevitable that any modern analysis of Hofstadter be a little melancholy. He's a smart man who had a fascinating (if ungrounded) idea at a very young age. That is inherently tragic.
posted by 256 at 6:51 PM on October 27, 2013 [4 favorites]