Simulated Language
May 28, 2011 10:57 AM Subscribe
In the recent MIT symposium "Brains, Minds and Machines," Chomsky criticized the use of purely statistical methods to understand linguistic behavior. Google's Director of Research, Peter Norvig responds. (via)
Relevant bit from Technology Review:
The two linguists on the panel, Noam Chomsky and Barbara Partee, both made seminal contributions to our understanding of language by considering it as a computational, rather than purely cultural, phenomenon. Both also felt that understanding human language was the key to creating genuinely thinking machines. "Really knowing semantics is a prerequisite for anything to be called intelligence," said Partee.
Chomsky derided researchers in machine learning who use purely statistical methods to produce behavior that mimics something in the world, but who don't try to understand the meaning of that behavior. Chomsky compared such researchers to scientists who might study the dance made by a bee returning to the hive, and who could produce a statistically based simulation of such a dance without attempting to understand why the bee behaved that way. "That's a notion of [scientific] success that's very novel. I don't know of anything like it in the history of science," said Chomsky.
I agree with the general drift of Chomsky and the other critics, but their problem is that they don't go far enough. For example...
Winston said he believes researchers should instead focus on those things that make humans distinct from other primates, or even what made them distinct from Neanderthals. Once researchers think they have identified the things that make humans unique, he said, they should develop computational models of these properties, implementing them in real systems so they can discover the gaps in their models, and refine them as needed.
See the hidden assumption? He thinks that humans are unique, and that that unique thing is what AI is all about. And that is planning for failure, because not only is human consciousness not that unique, it is the power app running atop an evolved operating system that is actually very common and universal, and until we understand that, trying to understand human intelligence is going to be like trying to understand how a car works when we haven't figured out fire yet.
By the early 90's I had pretty much settled, thanks to the lack of progress in AI, into a belief that it was going to be far too complicated for us to work out. What changed that was an essay by Stephen Jay Gould, about some amateur researchers who were studying the Bee Eating Wasp. In one anecdote, they wanted to show that the wasp locates its home hole by visual cues. So they waited for the wasp to go hunting and then moved all the nearby landmarks six inches to the side. When the wasp returned, it plopped down with great precision six inches away from its hole in the same direction.
While that was clever and useful, though, it was what happened next that really struck me: They wrote that the wasp appeared quite agitated and circled irregularly until it stumbled on its home. Then, after going in and out several times as if to make sure it was home, the wasp hovered and flew around for several minutes as if examining the territory closely.
It struck me that this is exactly how a human would respond if a sufficiently godlike being played a similar trick on us. Here was a creature with barely a million neurons, not even a vertebrate, exhibiting behavior more human-like than the most adaptable and lifelike machines ever made by us. The wasp was conscious, even if its concepts and urges might be very "low resolution" compared to ours. Solve the problem of the wasp, I realized, which certainly is a problem we should be able to attack with our existing technology, and Moore's Law will take care of solving the problem of the human mind.
But AFAIK absolutely nobody, even the "radicals" of AI research like Chomsky, is suggesting that such an approach might be worthwhile. I am certain that consciousness is an emergent property, and current AI study is like a person studying fractals by trying to make different algorithms to draw the different features of the Mandelbrot Set, without realizing that he should be looking for something far simpler from which the pattern arises naturally without being designed.
posted by localroger at 11:25 AM on May 28, 2011 [40 favorites]
It's amazing how often dropping the assumption "We are special" yields scientific goodness.
posted by benito.strauss at 11:28 AM on May 28, 2011 [5 favorites]
Skinner v. Chomsky this is not.
posted by fourcheesemac at 11:31 AM on May 28, 2011
The wasp was conscious
Neat story. I became convinced of insect consciousness when I closely watched an ant crossing a floor. It would walk one way, stop, flail its antennae while looking around, walk another direction, stop and flail some more, and then backtrack before choosing another path. The ant was making decisions. And I was sold on animal self consciousness (and not just those animals that recognize themselves in the mirror) when watching snakes and wolves hunt. They have to exhibit remarkable self control to silently stalk, striking only when they have the best chance of success. They have less impulsivity than many humans.
posted by blargerz at 11:43 AM on May 28, 2011
But AFAIK absolutely nobody, even the "radicals" of AI research like Chomsky, is suggesting that such an approach might be worthwhile
The Blue Brain project is attempting something along those lines. So far they've simulated a rat cortical column.
posted by jedicus at 11:44 AM on May 28, 2011
My research is closely related in that it deals with insertion and decoding of meaning within musical language.
And it turns out -- the proper application of a few, simple Gestalt-based cognition principles goes a long way toward making machines deal sensitively with music in a host of situations.
Related attempts that apply statistical methods to these sorts of problems seem to work 40%-60% of the time -- at best.
AFAICT, Chomsky's general approach is moving us in the right direction.
posted by Dr. Fetish at 11:52 AM on May 28, 2011 [3 favorites]
But AFAIK absolutely nobody, even the "radicals" of AI research like Chomsky, is suggesting that such an approach might be worthwhile. I am certain that consciousness is an emergent property, and current AI study is like a person studying fractals by trying to make different algorithms to draw the different features of the Mandelbrot Set, without realizing that he should be looking for something far simpler from which the pattern arises naturally without being designed.
I'm with you right up to this analogy, localroger. No reason to assume it is simple at all.
posted by Chuckles at 12:00 PM on May 28, 2011 [1 favorite]
Funny, I know Peter Norvig as the guy who created (via computer) the world's longest palindromic sentence, a 17,000-word expansion of "A man, a plan, a canal: Panama."
Knowing that he is the director of research at Google puts that in a whole new light. Not just a goof tinkering with an Excel macro....
posted by msalt at 12:04 PM on May 28, 2011
Hippybear, Chomsky wrote a long demolition of B. F. Skinner which first appeared in The New York Review of Books under the title The Case Against B. F. Skinner.
In the forty-odd years since, I have never read any attack on anyone or anything that even came close to being as devastating as that was.
I hated Skinner and thought he was a fool, but by the time I finished Chomsky's piece I had goose pimples of sympathy for poor old B. F. all over my body.
I think it destroyed Skinner's reputation, and if it didn't, it certainly should have.
posted by jamjam at 12:06 PM on May 28, 2011
I'm with you right up to this analogy, localroger. No reason to assume it is simple at all.
"Far simpler" (than human consciousness) != "simple".
posted by vorfeed at 12:11 PM on May 28, 2011
"Far simpler" (than human consciousness) != "simple".
posted by vorfeed at 12:11 PM on May 28, 2011
It seems like Chomsky is attacking experimental science as such. It's true, all these statistical studies can do is confirm or deny various hypotheses--possibly even the ones being tested for. We still need to do experiments and get high-quality data if we want to understand anything. Having specialists in the field of experimental science--including statisticians--who don't care about the theory, and are just interested in getting good data, seems like a good way to get good data.
posted by LogicalDash at 12:26 PM on May 28, 2011
Barking up the wrong tree. Google-sponsored research is not interested in understanding how language works, but in persuading us that, as something useful for Google's business practices, there is a way we ought to assume it works.
Please stop acting like MIT and Google are equivalent fucking players. It's like listening to what a McDonalds scientist has to contribute to a symposium on cattle genetics without thinking that maybe, just maybe, the people signing the paychecks matter here.
posted by mobunited at 12:28 PM on May 28, 2011 [1 favorite]
As vorfeed says. It is obvious that there is some complexity to how the different brain organs wire and arrange themselves, but it's equally obvious on simple information-theory grounds that the wiring cannot possibly be micro-managed under genetic control, because the wiring contains orders of magnitude more information than the genome, and there must be some serious generality to the underlying functions. If our distant ancestors weren't using the organ that evolved into the cerebral cortex to think in human terms, it stands to reason that they were using it for something else. Solve that, and you are a long way toward solving what looks like a much bigger problem.
posted by localroger at 12:29 PM on May 28, 2011 [4 favorites]
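A back-of-envelope check on the "orders of magnitude" claim above, using commonly cited rough figures (about 3.2 billion base pairs in the human genome, on the order of 10^14 synapses in a brain); the numbers are illustrative estimates, not anything from the thread:
```python
# Rough arithmetic: genome information vs. a bare count of synaptic wiring.
# All figures are approximate, commonly cited estimates chosen for illustration.
genome_base_pairs = 3.2e9
genome_bits = genome_base_pairs * 2          # 4 possible bases -> 2 bits per base pair

synapses = 1e14                              # order-of-magnitude estimate for a human brain
wiring_bits = synapses * 1                   # very conservative: 1 bit per synapse ("present or not")

print(f"genome: ~{genome_bits / 8 / 1e6:.0f} MB")        # ~800 MB
print(f"wiring: ~{wiring_bits / 8 / 1e12:.1f} TB")       # ~12.5 TB, even at 1 bit per synapse
print(f"ratio:  ~{wiring_bits / genome_bits:,.0f}x")     # roughly 16,000x
```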
From Norvig:
Now let's consider the non-statistical model of spelling expressed by the rule "I before E except after C." Compare that to the probabilistic, trained statistical model:
P(IE) = 0.0177    P(CIE) = 0.0014
P(EI) = 0.0046    P(CEI) = 0.0005
This model comes from statistics on a corpus of a trillion words of English text. [emphasis added]
I love how this is brushed off as another casual citation. The data sources and analytical tools that produced these statistics weren't even available until after Chomsky had exceeded his statistical life expectancy (callous, but it's true).
I absolutely understand preferring to speculate about the "underlying principles" of linguistics, if robust statistics were not available for most of your life. However, once it becomes clear that the facts do not support your theories, it's a shame if you use your fame and prominence to dissuade others from taking advantage of the new information.
posted by Riki tiki at 12:35 PM on May 28, 2011 [2 favorites]
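For anyone curious where figures like P(IE) = 0.0177 come from, here is a minimal sketch of the sort of counting involved; it is written for this post rather than taken from Norvig, "corpus.txt" is a placeholder file, and the exact definitions behind his trillion-word numbers may differ:
```python
# Estimate letter-sequence probabilities from a text corpus by counting bigrams
# and trigrams. On a small file the estimates will be noisy and won't exactly
# reproduce the figures quoted above.
from collections import Counter

text = open("corpus.txt", encoding="utf-8").read().lower()
letters = [ch for ch in text if ch.isalpha()]

bigrams = Counter(a + b for a, b in zip(letters, letters[1:]))
trigrams = Counter(a + b + c for a, b, c in zip(letters, letters[1:], letters[2:]))

p_ie = bigrams["ie"] / sum(bigrams.values())
p_ei = bigrams["ei"] / sum(bigrams.values())
p_cie = trigrams["cie"] / sum(trigrams.values())
p_cei = trigrams["cei"] / sum(trigrams.values())
print(p_ie, p_ei, p_cie, p_cei)
```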
jedicus, I am aware of Blue Brain but I think even they are aiming too high in the layers of abstraction. You see this in the turns of phrase used in the literature, e.g. "what is the micro level functionality of a cortical column." Well, that might be no more useful than asking "what is the functionality of an FPGA," where the answer is going to be "whatever you need it to do, within its limits." You see some of the most convincing results in studies of the visual cortex because researchers can readily relate inputs to outputs; they find all kinds of different functionality as information becomes more abstract moving away from V1, and yet the cortex is physically almost completely homogeneous. Whatever microcircuits in V1-5 are processing our vision must also be doing everything else in other parts of the cortex -- sorting out sound, practicing and implementing movements, implementing memory, face recognition, language, and long-term plans. All of those functions are somehow done by different parts of a structure that is, as far as we can tell, physically the same throughout.
We might figure out what the magic algorithm is that creates such versatility when implemented in a column of neurons by studying the neurons, but I have a feeling we would make more progress by looking in the computer science space for algorithms that are that versatile, and trying to get them to self-organize and implement complex behaviors in simulations that do not hew so strongly to biological models.
Everything a computer does can be done by any computer, or without a computer at all by some system that is Turing equivalent. What we need to be looking for is the Turing Machine of consciousness. I think there is such a thing, I think it's a lot simpler than anybody wants to admit, and once we have that all the rest will be details.
posted by localroger at 12:40 PM on May 28, 2011 [4 favorites]
I have to admit being a little taken aback by coming across a reference to Chomsky on the internet that is actually about linguistics.
I've read the article and I have to admit to be a little bit at a loss as to the point under contention. Is this like the thought experiment cited in "Godel, Escher, Bach", about whether or not a man using a purely mechanical algorithm to translate English characters into Mandarin actually "understands" Chinese?
posted by Ipsifendus at 12:54 PM on May 28, 2011 [1 favorite]
localroger: What we need to be looking for is the Turing Machine of consciousness. I think there is such a thing...
Really? Where is it? My God man, tell us...
posted by Dr. Fetish at 12:55 PM on May 28, 2011
I'm sure Chomsky's take-down of Skinner was thorough and effective, but there is a short, devastating knock-out attributed, as usual, to Sidney Morgenbesser:
''Let me see if I understand your thesis,'' he once said to the psychologist B. F. Skinner. ''You think we shouldn't anthropomorphize people?''
posted by benito.strauss at 1:00 PM on May 28, 2011 [8 favorites]
Here is an interesting article which touches on the Physical Church-Turing Thesis and the notion of neural computation:
Computation in Nervous Systems. - Gerhard Werner
posted by kuatto at 1:01 PM on May 28, 2011
Really? Where is it? My God man, tell us...
It's on the hard drive of this very computer I'm using to type this, cleverly disguised as a recipe for Authentic New Orleans Jambalaya. To decode it you have to sort all of the ingredients into anagrams representing common flow chart blocks, and put them in the order you would add the ingredients to the recipe.
I tested it with a Parallax BOE-Bot upgraded to a Propeller Protoboard with an SD card and CMU Cam, and the last day I left it running while I was at work I came home to find out it had nearly finished building an atomic bomb. So I know it works.
The reason I haven't published it is that I haven't figured out whether to give it to the people who will use it to build SkyNet, the people who want to build Colossus, or the people who want to build Prime Intellect. The latter are the most attractive, but also the least likely to be willing to pay me enough to go through what happens next.
posted by localroger at 1:03 PM on May 28, 2011 [6 favorites]
Barking up the wrong tree. Google-sponsored research is not interested in understanding how language works, but in persuading us that, as something useful for Google's business practices, there is a way we ought to assume it works.
I sort of doubt that Google is interested in persuading us of anything. In fact, I suspect they're not even interested in persuading themselves of anything deep about the nature of language (as a company anyway, I'm sure many of the individuals working there are). Their view of language is a purely instrumental one, they want to be able to sell their services to people and a black-box model of language is perfectly fine for that.
In fact, this is better viewed as an academic rivalry between a computer scientist (Norvig spent years at Berkeley and wrote some of the seminal texts in AI theory and this was published on his personal site) and a linguist. Obviously "leave it up to the computers to understand" is going to be a more satisfying answer to an AI expert than to a linguist!
posted by atrazine at 1:07 PM on May 28, 2011 [1 favorite]
It would walk one way, stop, flail its antennae while looking around, walk another direction, stop and flail some more, and then backtrack before choosing another path.
Roombas do this, too.
posted by empath at 1:11 PM on May 28, 2011 [6 favorites]
Chomsky derided researchers in machine learning who use purely statistical methods to produce behavior that mimics something in the world, but who don't try to understand the meaning of that behavior.
Neurons don't know the meaning of the symbols they manipulate, either.
Look, if you ask Watson a question, and he gives you the correct answer, did he understand you, or not?
Is this like the thought experiment cited in "Godel, Escher, Bach", about whether or not a man using a purely mechanical algorithm to translate English characters into Mandarin actually "understands" Chinese?
Yes, and it's just as stupid now as it was then, since it relies on a homunculus manipulating symbols that doesn't need to exist.
posted by empath at 1:17 PM on May 28, 2011 [4 favorites]
There's also a bootstrapping problem involved here. You can't explain intelligence by proposing some underlying intelligent process. You've just pushed the problem off, because now you need to know where that intelligence came from.
At some point, intelligence has to be explained as a statistical phenomenon that arises from the accumulation of a large number of unintelligent processes, in the same way that entropy, friction, quantum decoherence, and most other macroscopic phenomena are explained in physics and biology.
posted by empath at 1:25 PM on May 28, 2011 [2 favorites]
Analogy: suppose you are in charge of industrial design at HP and you decide you want your consumer laptops' designs to look as if they might've been designed by Jony Ive.
Norvig's approach: first, put together a software system capable of generating laptop designs. More specifically, to generate a laptop design the system requires choosing a concrete value for each of, say, 100K input variables, and based on that input the system will produce an industrial design for a laptop. The system is deterministic in the sense that given the same input it will produce the same output.
Now, appoint a panel of, say, 1M consumers, and generate a very large number of random selections of input parameters. Generate images of the laptops specified by those input parameters, keeping track of which inputs generated which images. Each consumer on the panel will be shown a large number of randomly-selected pairs of images and asked to answer the question "which of these two laptops looks more Ive-esque to you?"
The panel's answers will be collected and used as input to a machine learning algorithm, which will try to suss out correlations between choice(s) of parameter(s) and Ive-esque-ness.
Depending on the specific machine learning approach taken from here it is quite likely that further batches of random inputs will need to be generated, very likely using the output of the machine learning algorithm on previous batches to winnow the range of random possibilities considered (for example: if a particular parameter choice had a very strong negative impact upon a design's Ive-esque-ness, subsequent rounds of random generation might exclude that parameter choice from consideration).
The cycle of random generation => consumer panel => machine learning => random generation will continue until the sponsor is happy with the results. For a task like this, it might be something like: "We will convene another panel like the others. For this panel, we will include not only our current designs but also actual Ive-designed-laptops. We'll be happy with the results once our laptops are rated as more-Ive-esque than the actual designs by Ive 50% or more of the time." (Or, more technically: when the consumer panel can't tell the difference between them in any statistically significant way.)
So what has Norvig's approach delivered?
Something useful? Certainly. HP has a machine that can design laptops that look like Ive might've designed them.
A machine that designs like Jony Ive? Leave that debate to the philosophers, please.
Any deeper understanding into what makes an Ive-esque design Ive-esque? It's hard to see how this approach would deliver any deeper understanding on its own. It's *possible* that study of this system might produce deeper understanding -- for example, investigating the interrelationships between (groups of) the input parameter(s) and the Ive-esque-ness of the output -- but all by itself all this approach gets you is a new black box (previously Jony Ive was a black box, but now Jony Ive *and* this system are black boxes).
This is what Chomsky's comments are getting at: even when it's fair to say that the approach sketched above gets better results than an attempt at formulating first principles -- which is fair in the world of computational linguistics -- it's hard to characterize the approach as "science (of language)".
That's not to say there isn't a lot of legitimate science done *in and around* this sort of approach -- eg, developing the mathematics that make the machine learning algorithms work is legitimately scientific -- but that approach isn't one that can, as of yet, be claimed to deliver deeper understanding. Moreover, to the extent it *does* wind up yielding further understanding, it only does so by applying the approach Chomsky would advocate in the first place; in the Ive-esque-ness example, you'd look at the model and see that particular combinations of material choices, edge curvature, etc. are correlating more with Ive-esque design than others.
So Chomsky's defending his turf, yes, but he's defending it against someone who's claiming that being able to produce a system generating empirically useful results is somehow yielding further insight into language itself. Without an actual example of such insight -- for example, a claim other than "if we build a system like this and train the system like that then we get results that seem useful" -- there's no reason to believe such insights have been produced. What's this new understanding?
You see, particularly to a linguist the difference between those who can perform a task and those who understand the task is fairly fundamental; just about everyone reading and commenting on this article, I'd assume, is capable of reading and writing the English language -- and even comprehending what is read and written -- but it's pretty safe to say that absolutely none of you understand how you actually do those things (myself included, of course). For if you did understand, you'd be able to chime in and settle this debate outright.
Norvig's approach is good engineering -- and like all good engineering it depends upon results drawn from science, and applies something resembling the scientific method during its development -- but Chomsky's point was that calling it "science" (the way that physics or chemistry or even linguistics is a science) is more than a bit of a stretch.
posted by hoople at 1:25 PM on May 28, 2011 [7 favorites]
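To make the shape of the loop above concrete, here is a toy sketch of the generate / panel / learn / winnow cycle. Everything in it is invented for illustration: the eight-parameter "designs", the hidden ive_score() standing in for the consumer panel, and the perceptron-style pairwise learner are stand-ins, not anyone's actual system.
```python
import random

DIM = 8  # pretend each laptop design is fully described by 8 numeric parameters

def ive_score(design):
    # Hidden "ground truth" the panel implicitly applies; unknown to the learner.
    secret = [0.9, -0.3, 0.5, 0.0, 0.7, -0.8, 0.2, 0.4]
    return sum(w * x for w, x in zip(secret, design))

def panel_prefers(a, b):
    # One simulated panelist: picks the more Ive-esque of two designs, with some noise.
    return ive_score(a) + random.gauss(0, 0.1) > ive_score(b)

def random_design(center, spread):
    return [c + random.uniform(-spread, spread) for c in center]

weights = [0.0] * DIM                 # the learner's current model of Ive-esque-ness
center, spread = [0.0] * DIM, 1.0     # where, and how widely, to generate new designs

for generation in range(20):
    designs = [random_design(center, spread) for _ in range(200)]
    # Panel phase + learning phase: collect pairwise judgments and update the model
    # perceptron-style whenever it disagrees with the panel.
    for _ in range(500):
        a, b = random.sample(designs, 2)
        winner, loser = (a, b) if panel_prefers(a, b) else (b, a)
        margin = sum(w * (x - y) for w, x, y in zip(weights, winner, loser))
        if margin <= 0:
            weights = [w + 0.05 * (x - y) for w, x, y in zip(weights, winner, loser)]
    # Winnow phase: re-center the next batch on the best design under the current model.
    best = max(designs, key=lambda d: sum(w * x for w, x in zip(weights, d)))
    center, spread = best, spread * 0.8

print("learned direction:", [round(w, 2) for w in weights])
```
The point of the sketch is the loop structure: nothing in it ever needs to know, or ends up stating, what makes a design Ive-esque in any human-interpretable sense, which is exactly the black-box worry raised above.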
P(IE) = 0.0177 P(CIE) = 0.0014
P(EI) = 0.0046 P(CEI) = 0.0005
But for the second column, isn't what you want P(IE|C) vs. P(EI|C)? Or is he making the point that the prior probability of "IE" is so much greater that it doesn't matter?
posted by en forme de poire at 1:28 PM on May 28, 2011
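One way to read the comparison, assuming the question is which sequence to prefer after a C: conditioning on the C divides both joint probabilities by the same P(C), so the ratio survives unchanged and the exact P(C) is never needed:
```python
# P(IE|C) / P(EI|C) = P(CIE) / P(CEI), since the unknown P(C) cancels.
p_cie, p_cei = 0.0014, 0.0005
print(p_cie / p_cei)   # 2.8: after a C, "ie" still comes out ahead, just by less than overall
```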
Norvig's reply was worth reading just for "In this, Chomsky is in complete agreement with O'Reilly."
posted by AsYouKnow Bob at 1:42 PM on May 28, 2011 [2 favorites]
Norvig:
It was reasonable for Plato to think that the ideal of, say, a horse, was more important than any individual horse we can perceive in the world. In 400BC, species were thought to be eternal and unchanging. We now know that is not true; that the horses on another cave wall—in Lascaux—are now extinct, and that current horses continue to evolve slowly over time. Thus there is no such thing as a single ideal eternal "horse" form.
Seriously, did he just say that? Wow.
posted by kuatto at 1:43 PM on May 28, 2011
First of all, I disagree about insect behavior as being "Conscious". It's not hard to build a robot with the abilities of an ant. And check out the ant mill where ants follow each other in a great circle until they die. You can find other examples of programming 'bugs' in bugs. Computers make decisions all the time, even when running old-school 'rules' based logic. The fact that an ant makes a decision doesn't make it 'conscious'.
Here's the thing though, who's to say we'd ever be able to 'understand' the human mind in a sensible way? I mean, take for example the turbulence that follows a jet. It's the result of complex interplay of molecules and you can simulate it, but you can't just use a simple formula to predict it, you have to simulate the whole thing.
With the brain, you have the complex interplay of neurons, but who's to say we'll ever be able to 'figure it out' in a specific way?
I think Chomsky has a point that looking for the 'root causes' of language is an interesting project, but ultimately it means analyzing the neural structures of the human brain in great detail, which is obviously difficult to do. You can't just cut up live human brains and pick them apart.
The other problem is: who's to say that human language isn't done with statistics? The brain is pretty good at learning rules from probabilistic events, and with an innate grammatical structure it's entirely possible that the brain's hard-wired language learning features are highly statistical in nature.
jedicus, I am aware of Blue Brain but I think even they are aiming too high in the layers of abstraction. You see this in the turns of phrase used in the literature, e.g. "what is the micro level functionality of a cortical column." Well, that might be no more useful than asking "what is the functionality of an FPGA," where the answer is going to be "whatever you need it to do, within its limits."
That's because you are totally missing the point of the system. The purpose is to study the brain itself, not "create intelligence." A statistical (or classical) AI system won't help you test drugs to treat Alzheimer's, for example. An AI system can't help you design neural implants or figure out how to use stem cells to cure ALS. I mean, maybe it could, but only by helping you build a model of the brain. I don't know why people think that project is about pure, general AI. It's obviously not.
---
Anyway, Chomsky seems to have a romantic view of science here. It's one that probably made more sense back in the day, when you look at how people viewed science as it was beginning. People really did think that their theories were based on 'real' things and not just statistical aggregation. It's true that Newton used statistical data captured by Brahe and analyzed by Kepler to figure out universal gravitation - but people didn't think of those things as being simple mathematical rules derived from statistical observation, even though they were. They thought of gravity as the "real thing," one of the "laws of the universe" handed down by God.
Now we know they weren't actually totally correct, they were just close approximations that worked for non-relativistic situations. And there are a lot of situations now where we can't really claim to 'know' how things work at the basic level, all we can do is describe what happens in certain situations, which gives us all the 'usefulness' of science but may be emotionally unsatisfying.
posted by delmoi at 2:11 PM on May 28, 2011
delmoi: That's because you are totally missing the point of the system. The purpose is to study the brain itself not "create intelligence"
I get that, but a lot of people in Singularity land think Blue Brain is a good stepping-off point to simulation, which of course would create intelligence if it succeeded. It would be jedicus who missed your point.
Also, I did not advance the ant story as evidence for insect consciousness; that was blargerz. Social insects have a lot more hardwired programming, and I find the account of the Bee Eating Wasp much more compelling.
posted by localroger at 2:34 PM on May 28, 2011
empath:
At some point, intelligence has to be explained as a statistical phenomenon that arises from the accumulation of a large number of unintelligent processes, in the same way that entropy, friction, quantum decoherence, and most other macroscopic phenomena are explained in physics and biology.
If you were completely ignorant of computer science, had lived with and taken computers for granted all your life, and did not know they were created by people, you would find it pretty reasonable to make exactly the same statement about "computation." After all, the magic box can even answer questions humans can't with uncanny accuracy. Must be some great big magic beans in there.
This is, I suppose, where I side solidly with Chomsky against Norvig. Chomsky thinks that there is, at some level, a consistent and firm way to describe the mechanism of consciousness which we haven't found, just as there was always a way to describe a certain type of problem solving space that Alan Turing found. Norvig thinks all we will ever be able to do is describe it, and that there is no underlying relatively simple order to discover.
If you were introduced to the Mandelbrot set without explanation you would probably make exactly the same argument about the amount of work that went into creating and ordering such vast lifelike detail. Even when you know how it's done the idea that it's two lines of code and some general purpose memory buffering seems unbelievable. Yet from such a simple system comes a whole universe of never-repeating intricate detail.
Upthread Chuckles expressed a common attitude, pretty much equivalent to Norvig's, that there's no reason to suspect an underlying simplicity. But I'd turn that right around and say, oh yeah? There's an underlying simplicity to the weather, there's an underlying simplicity to the Mandelbrot Set that is almost beyond belief, so why shouldn't there be an underlying simplicity to gene expression, consciousness, and higher intelligence? It turns out there's such an underlying simplicity to almost everything else we've studied in depth enough to find it.
posted by localroger at 2:59 PM on May 28, 2011 [1 favorite]
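For anyone who hasn't seen it, the "two lines of code" claim is barely an exaggeration. Here is a self-contained sketch; the grid size, 40-iteration cutoff, and ASCII output are arbitrary rendering choices, but the rule generating all the detail is the single line z = z * z + c:
```python
# Minimal ASCII Mandelbrot set: iterate z -> z**2 + c and mark the points that never escape.
for row in range(24):
    line = ""
    for col in range(72):
        c = complex(-2.2 + col * 0.042, -1.2 + row * 0.1)
        z = 0j
        for _ in range(40):
            z = z * z + c          # the entire "algorithm"
            if abs(z) > 2:         # escaped: c is outside the set
                line += " "
                break
        else:
            line += "*"            # never escaped: c is (probably) in the set
    print(line)
```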
localroger:
Chomsky thinks that there is, at some level, a consistent and firm way to describe the mechanism of consciousness which we haven't found, just as there was always a way to describe a certain type of problem solving space that Alan Turing found. Norvig thinks all we will ever be able to do is describe it, and that there is no underlying relatively simple order to discover.
This is a deep schism in their respective philosophies. Norvig strikes the pose, through his reasoning about process stripped of the knowing subject, of a crass materialist. It's interesting in the context of Google's "Don't Be Evil" motto: what is evil in a universe of pure statistical mechanism? Norvig also reveals his strict materialist stance when he needles Chomsky:
Chomsky shows that he is happy with a Mystical answer [.]
I think that Norvig stretches himself here; he is pushing for a totalitarian view of the Natural world, one where statistics and pure mechanism necessarily eliminate the human spirit. This is a view of Science that eliminates hypothesis in favor of asymptotic runtimes.
posted by kuatto at 3:24 PM on May 28, 2011 [1 favorite]
The Case Against B.F. Skinner
Anyway, Chomsky seems to have a romantic view of science here. It's one that probably made more sense back in the day, when you look at how people viewed science as it was beginning. People really did think that their theories were based on 'real' things and not just statistical aggregation. It's true that Newton used statistical data captured by Brahe and analyzed by Kepler to figure out universal gravitation - but people didn't think of those things as being simple mathematical rules derived from statistical observation, even though they were. They thought of gravity as the "real thing," one of the "laws of the universe" handed down by God.
Except that your view of science went out with the positivists maybe 80 years ago.
posted by ennui.bz at 3:31 PM on May 28, 2011 [1 favorite]
Upthread Chuckles expressed a common attitude, pretty much equivalent to Norvig's, that there's no reason to suspect an underlying simplicity.
:)
I do not assume it will be simple, but I also don't like the idea that intelligence must be statistical the way the quantum-physics-minded comments assume. For example, I think Donald Rumsfeld's known knowns and known unknowns (and so on) capture a better understanding of intelligence than empath's view about Watson.
The way we understand known knowns is nothing like Watson. I guess there are also things we guess at statistically, which might be very much like Watson. However, there are also things that we extrapolate, and I don't think statistics is a particularly big part of that process. Probably other modalities too. And if you are really set on a statistical model, maybe you could envision that there is a giant Kalman filter integrating all the data from all the modalities.
I very much like the idea of starting from the simplest organisms and trying to understand, model and emulate their brain functions. There is something different about human intelligence, but it is built with the same tools that made every other biological brain. As far as I know, we still aren't really close to understanding even the simplest brains, so...
posted by Chuckles at 3:48 PM on May 28, 2011
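Since a Kalman filter came up, here is a minimal one-dimensional sketch of the "integrate noisy modalities" idea; the two simulated sensors, their noise levels, and the constant true value are all made up for illustration:
```python
import random

def kalman_update(estimate, variance, measurement, meas_variance):
    # Standard scalar Kalman update: blend the prior estimate and the new
    # measurement in proportion to how much each is trusted.
    gain = variance / (variance + meas_variance)
    return estimate + gain * (measurement - estimate), (1 - gain) * variance

true_value = 10.0                     # the quantity both "modalities" are sensing
estimate, variance = 0.0, 1000.0      # start out knowing essentially nothing

for _ in range(50):
    sensor_a = true_value + random.gauss(0, 2.0)   # noisy modality A (variance 4.0)
    sensor_b = true_value + random.gauss(0, 0.5)   # cleaner modality B (variance 0.25)
    estimate, variance = kalman_update(estimate, variance, sensor_a, 4.0)
    estimate, variance = kalman_update(estimate, variance, sensor_b, 0.25)

print(round(estimate, 2))   # settles very close to 10.0
```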
chuckles...
As far as I know, we still aren't really close to understanding even the simplest brains, so...
Well yeah. I tend to think there is less fundamental difference between us and a wasp or a bee or an ant than between any of those things and a Commodore 64. Trying to figure out the complicated things first is a fool's errand when there are related simpler things to explore.
Kuatto:
Norvig also reveals his strict materialist stance when he needles Chomsky
Well, I haven't read enough of Chomsky to know just how true this criticism is, but the thing is *I* am a pretty strict materialist (at least on this topic), and I see nothing in the arguments here which would support the idea that Chomsky is addicted to woo. I think consciousness will turn out to be a (possibly complex, where complex might mean writing a few thousand lines of application-specific code) subset of Turing completion.
The whole question of "how will we recognize the difference between something that is really intelligent and a dumb machine pretending to be" evaporates if the intelligence self-organizes and, without deliberate programming, starts to do all the things we associate with real animal and human behavior.
I think that will happen. I think it might have already if all the people who are working on the problem weren't looking in the wrong direction.
posted by localroger at 4:05 PM on May 28, 2011 [1 favorite]
What is evil in a universe of pure statistical mechanism?
I think Sam Harris attempts to address this question in his much-maligned but rarely-read book The Moral Landscape.
posted by treepour at 4:32 PM on May 28, 2011
localroger: This is, I suppose, where I side solidly with Chomsky against Norvig. Chomsky thinks that there is, at some level, a consistent and firm way to describe the mechanism of consciousness which we haven't found, just as there was always a way to describe a certain type of problem solving space that Alan Turing found. Norvig thinks all we will ever be able to do is describe it, and that there is no underlying relatively simple order to discover.
This is deeply unfair to Norvig. The fact that Norvig is engaged in collating descriptions of English, in an unprecedented level of depth and detail, most definitely does not imply that he thinks that description is all that can be done with the language. Good description aids the process of derivation of rules. Useful rules cannot be derived at all without good enough description.
Gather data, infer rules, test the rules against the data, make predictions from the rules, gather further data, refine the rules further. This is the never-ending cycle. The more it runs, the closer it gets to describing the underlying actual rule. At some point, the observed data "3.9999823n + 0.999487n ~= 5.000245n" has to imply the rule "4n + 1n = 5n".
posted by aeschenkarnos at 4:51 PM on May 28, 2011 [1 favorite]
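The "gather data, infer rules" loop aeschenkarnos describes is, in its simplest form, a least-squares fit. A toy sketch (not a claim about what any statistical NLP system actually runs): generate noisy observations of an underlying rule y = 4a + 1b and recover coefficients close to 4 and 1, at which point "4n + 1n = 5n" becomes the obvious conjecture.

import numpy as np

rng = np.random.default_rng(0)

# Noisy observations of an underlying rule y = 4*a + 1*b
a = rng.uniform(0, 10, size=1000)
b = rng.uniform(0, 10, size=1000)
y = 4 * a + 1 * b + rng.normal(0, 0.01, size=1000)

# Infer the rule from the data: solve min ||X w - y|| for w
X = np.column_stack([a, b])
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(w)   # something like [3.99999..., 1.00001...] -- close enough to conjecture 4 and 1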
This is a false dichotomy.
It's simply incorrect to state that statistical methods are focused on just trying to make things that work, without reference to the underlying principles. That might be true for google, but loads of people within cognitive science (and, yes, even linguistics) -- who Norvig cites and talks about -- use statistical methods because they actually seem to be helpful in understanding human cognition.
As Norvig points out, many aspects of language seem highly probabilistic. Pro-drop in English, parsing, grammar learning, dealing with errors -- all of these things must have some statistical component. It's not just state-of-the-art engineering that recognises this, it is a lot of the state-of-the-art cognitive science, too. Google "probabilistic models of cognition", or just "statistical learning cognitive science language" and you'll find loads of work on this topic.
In fact, it seems to me as if Chomsky is arguing with a strawman (granted, I wasn't there so I don't know precisely what he said, but this would be consistent with statements of his in other contexts that I'm aware of). His point is that "purely" statistical models aren't useful ways of understanding linguistic behaviour. But there are few people nowadays -- in linguistics and cognitive science at least -- who simply rely on statistical methods: they are usually wrapped around some theoretical constructs, like structured grammars (e.g., PCFGs, dependency grammars) or theorised memory mechanisms (e.g., explaining parsing patterns), or other types of structure (e.g., models of semantics are often networks of statistical connections, or occasionally taxonomies or clusters). Few people use purely statistical methods, but almost nobody eschews statistics entirely: it just doesn't work if your goal is to understand human behaviour.
posted by forza at 4:58 PM on May 28, 2011 [5 favorites]
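For readers who haven't met the PCFGs forza mentions, here is a toy illustration of statistics wrapped around symbolic structure, hand-rolled rather than taken from any particular toolkit. The rules are symbolic, each expansion carries a probability, and generating a sentence samples from those probabilities; the grammar and the numbers are invented.

import random

# A toy PCFG: probabilities per left-hand side sum to 1.  Everything here is
# made up for illustration.
GRAMMAR = {
    "S": [(1.0, ["NP", "VP"])],
    "NP": [(0.6, ["Det", "N"]), (0.4, ["N"])],
    "VP": [(0.7, ["V", "NP"]), (0.3, ["V"])],
    "Det": [(0.5, ["the"]), (0.5, ["a"])],
    "N": [(0.4, ["bee"]), (0.3, ["wasp"]), (0.3, ["dance"])],
    "V": [(0.5, ["watches"]), (0.5, ["simulates"])],
}

def generate(symbol="S"):
    """Expand a symbol by sampling among its rules; terminals pass through."""
    if symbol not in GRAMMAR:
        return [symbol]
    probs, expansions = zip(*GRAMMAR[symbol])
    chosen = random.choices(expansions, weights=probs)[0]
    return [word for part in chosen for word in generate(part)]

for _ in range(3):
    print(" ".join(generate()))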
Probabilistic data-driven models seem entirely appropriate, useful, and descriptive when the thing you're trying to model is a probabilistic data-driven system -- like a human brain.
posted by thandal at 5:10 PM on May 28, 2011
I think I see the problem.
"Chomsky derided researchers in machine learning who use purely statistical methods to produce behavior that mimics something in the world, but who don't try to understand the meaning of that behavior. Chomsky compared such researchers to scientists who might study the dance made by a bee returning to the hive, and who could produce a statistically based simulation of such a dance without attempting to understand why the bee behaved that way."
And rightly so. The key phrase there is "but who don't try to understand". I find it hard to imagine that such a deridable researcher could exist. They would be, if they existed at all, a data-gatherer for actual researchers. It is Norvig's program, not Norvig, that Chomsky is describing; and on the other side of the argument, the entire point of gathering data at all is so that it may be understood and applied.
For the possessor of a mind, it is very difficult to apply data without adding to one's own understanding of that data. Even "did this particular application of data actually work?" is a novel question. Tossing a ball about gives us a heuristic understanding of gravity, our own muscular power generation, ballistics, and air resistance. At some point, we, and those like ourselves, will have tossed balls about enough to have a descriptive understanding of how it works. Toss it this hard, and it will go that far. From this comes the derivation of rules and formulae. The mathematics we use to do that is itself the product of this same process - someone, somewhere, enumerated and computed heuristically, then descriptively, then derived rules for enumeration and computation.
The rules and formulae may be very much more complex (and occasionally less complex) than they actually need to be. But this will only be noticed in the presence of sufficiently refined new data, because that is where the rules make incorrect predictions. Correcting the rules advances the body of knowledge. Eventually, as with simple computation, we will get things to the point where the process is truly perfect.
Personally, I philosophically stand with Stephen Wolfram on the subject of physical laws: I strongly suspect that our formulaic descriptions, such as the inverse-square law of luminosity, are statistical summaries of underlying cellular automata rules applied to nodes of space and time and energy/matter. Much of the discussion of emergent consciousness in this thread is of a similar nature: simple rules that iterate on their own outputs lead to complex consequences.
posted by aeschenkarnos at 5:19 PM on May 28, 2011 [1 favorite]
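The Wolfram-style picture is easy to demonstrate. Below is a one-dimensional cellular automaton (Rule 30, chosen only because it is the standard textbook example) whose entire update rule is an eight-entry lookup table, yet whose output from a single live cell looks statistical.

def rule30_step(cells):
    """One update of the elementary cellular automaton Rule 30."""
    n = len(cells)
    rule = {(1, 1, 1): 0, (1, 1, 0): 0, (1, 0, 1): 0, (1, 0, 0): 1,
            (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}
    return [rule[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])] for i in range(n)]

# Start from a single live cell and iterate: eight fixed rules, complex output.
width, steps = 63, 30
cells = [0] * width
cells[width // 2] = 1
for _ in range(steps):
    print("".join("#" if c else "." for c in cells))
    cells = rule30_step(cells)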
"Chomsky derided researchers in machine learning who use purely statistical methods to produce behavior that mimics something in the world, but who don't try to understand the meaning of that behavior. Chomsky compared such researchers to scientists who might study the dance made by a bee returning to the hive, and who could produce a statistically based simulation of such a dance without attempting to understand why the bee behaved that way."
And rightly so. The key phrase there is "but who don't try to understand". I find it hard to imagine that such a deridable researcher could exist. They would be, if they existed at all, a data-gatherer for actual researchers. It is Norvig's program, not Norvig, that Chomsky is describing; and on the other side of the argument, the entire point of gathering data at all is so that it may be understood and applied.
For the possessor of a mind, it is very difficult to apply data without adding to one's own understanding of that data. Even "did this particular application of data actually work?" is a novel question. Tossing a ball about gives us a heuristic understanding of gravity, our own muscular power generation, ballistics, and air resistance. At some point, we, and those like ourselves, will have tossed balls about enough to have a descriptive understanding of how it works. Toss it this hard, and it will go that far. From this comes the derivation of rules and formulae. The mathematics we use to do that is itself the product of this same process - someone, somewhere, enumerated and computed heuristically, then descriptively, then derived rules for enumeration and computation.
The rules and formulae may be very much more complex (and occasionally less complex) than they actually need to be. But this will only be noticed in the presence of sufficiently refined new data, because that is where the rules make incorrect predictions. Correcting the rules advances the body of knowledge. Eventually, as with simple computation, we will get things to the point where the process is truly perfect.
Personally, I philosophically stand with Stephen Wolfram on the subject of physical laws: I strongly suspect that our formulaic descriptions, such as the inverse-square law of luminosity, are statistical summaries of underlying cellular automata rules applied to nodes of space and time and energy/matter. Much of the discussion of emergent consciousness in this thread is of a similar nature: simple rules that iterate on their own outputs, lead to complex consequences.
posted by aeschenkarnos at 5:19 PM on May 28, 2011 [1 favorite]
You would, if you were completely ignorant of computer science, find it pretty reasonable if you had lived with and taken computers for granted all your life, and you did not know they were created by people, to make exactly the same statement about "computation."
Are you proposing the theory that the human brain was designed by another intelligence?
posted by empath at 5:24 PM on May 28, 2011
empath:
Are you proposing the theory that the human brain was designed by another intelligence?
Of course not. I am proposing that as with computers, there is something like the Turing Machine that might represent the fundamental component of consciousness, and that once that is identified it will be relatively simple to identify systems which are mathematically equivalent to it.
This does not imply design at all. Neither did Turing's thesis, although all the extant expressions of Turing complete machines we know of are in fact so far designed by humans.
posted by localroger at 5:43 PM on May 28, 2011
Forza above is right that the two parties are arguing past each other to some extent, and that it's not clear that the approaches can't be combined. However, these two researchers apparently do have approaches and commitments that are in conflict.
Chomsky's linguistic competence model focuses on the ideal form of language as it is generated by the mind -- he assumes that actual speech will contain some "noise" in the signal. As he is often quoted:
"Linguistic theory is concerned primarily with an ideal speaker-listener, in a completely homogeneous speech-communication, who know its (the speech community's) language perfectly and that it is unaffected by such grammatically irrelevant conditions as memory limitations, distractions, shifts of attention and interest, and errors (random or characteristic) in applying his knowledge of this language in actual performance."
Compare with the theoretical stance of Norvig, who is concerned with evaluating actual output, warts and all, and making connections between that output and linguistic competence.
A (somewhat paltry) example of what I have in mind would be a statistical analysis of a typewritten corpus with spelling mistakes. A pattern of errors could be used as evidence for or against theories about linguistic performance, but it could also arise from something like errors that are systematically common because of the QWERTY keyboard layout and the limitations of human hands.
posted by anotherbrick at 6:01 PM on May 28, 2011
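anotherbrick's QWERTY example can be made concrete. The sketch below uses an abbreviated keyboard-adjacency table and a made-up four-word corpus; a substitution error counts as "mechanically plausible" when the typed letter neighbours the intended one, which is the kind of pattern that would point at the keyboard rather than at linguistic competence.

# Abbreviated QWERTY adjacency table; a real model would cover the whole keyboard.
ADJACENT = {
    "q": "wa", "w": "qase", "e": "wsdr", "r": "edft", "t": "rfgy",
    "a": "qwsz", "s": "awedxz", "d": "serfcx", "f": "drtgvc",
    "z": "asx", "x": "zsdc", "c": "xdfv", "v": "cfgb",
}

def mechanically_plausible(intended, typed):
    """True if 'typed' differs from 'intended' by one neighbouring-key substitution."""
    if len(intended) != len(typed):
        return False
    diffs = [(i, t) for i, t in zip(intended, typed) if i != t]
    if len(diffs) != 1:
        return False
    i, t = diffs[0]
    return t in ADJACENT.get(i, "")

# Hypothetical corpus of (intended, typed) pairs.
corpus = [("cat", "cst"), ("wasp", "wasp"), ("dance", "dsnce"), ("data", "dara")]
errors = [(i, t) for i, t in corpus if i != t]
keyboard_like = sum(mechanically_plausible(i, t) for i, t in errors)
print("%d of %d errors look like keyboard slips" % (keyboard_like, len(errors)))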
Of course not. I am proposing that as with computers, there is something like the Turing Machine that might represent the fundamental component of consciousness, and that once that is identified it will be relatively simple to identify systems which are mathematically equivalent to it.
I personally don't believe a simple turing machine can account for consciousness, because consciousness seems to be massively parallel, without all parts of the mind communicating with all other parts of it, which accounts for the mind's seeming ability to work around Godel's incompleteness theorem. You're never going to come up with a set of logical rules that accounts for the activity of the human mind, unless you simply simulate all of the laws of physics. Consciousness is fragmentary and elusive, and may include a lot of different systems which aren't tightly connected together, or even be very related.
Logic is something that the brain can do, and there may be systems in the brain that might be simulated by a turing machine, but I don't think you are ever going to explain the brain as a turing machine, entirely.
I firmly believe that we will create a truly intelligent computer in the near future, and we will still have no idea how consciousness works, in the way that you're thinking about it. It will still just be a mess of massively interconnected nodes.
posted by empath at 6:14 PM on May 28, 2011
anotherbrick: The question you point out is definitely a real distinction between the two research programs, and it's a very interesting question to boot.[*] I just think that it's not mainly what Chomsky is talking about -- or if it is, the rhetoric about "purely statistical methods" is rather misleading.
[*] Just to make my biases clear, I think Chomsky is wrong on this too. Essentially the situation is that output is some function of the underlying competence plus whatever "error" comes as a result of performance, i.e., f(C,E) = O. Given that we as scientists have to start from the output in some way -- otherwise you're doing philosophy, not science -- it seems completely wrongheaded to me to entirely ignore E. Chomsky et al justify doing so by saying it is "irrelevant", but they can't know that without knowing where the error comes from and how it interacts with competence. Indeed, people who study linguistic performance often end up finding that errors in performance aren't random, but actually deeply related to the underlying competence - so much so that I begin to question whether the competence/performance distinction is meaningful at all. But, well, that's getting on a bit of a tangent so I'll stop ranting now.
posted by forza at 6:17 PM on May 28, 2011
empath: "
I personally don't believe a simple turing machine can account for consciousness, because consciousness seems to be massively parallel, without all parts of the mind communicating with all other parts of it, which accounts for the mind's seeming ability to work around Godel's incompleteness theorem. You're never going to come up with a set of logical rules that accounts for the activity of the human mind, unless you simply simulate all of the laws of physics. Consciousness is fragmentary and elusive, and may include a lot of different systems which aren't tightly connected together, or even be very related.
"
What!?!
The mind can't "work around" Godel's incompleteness theorem, what would that even mean? It's not something you can "work around", either you're performing logical reasoning in a system of well defined axioms or you're not. You can't work around it any more than you can work around the Halting Problem.
posted by Proofs and Refutations at 8:22 PM on May 28, 2011 [2 favorites]
cleverly disguised as a recipe for Authentic New Orleans Jambalaya. To decode it you have to sort all of the ingredients into anagrams representing common flow chart blocks, and put them in the order you would add the ingredients to the recipe.
Trigger warning, please? You just gave me a flashback to horrible adventure game puzzles.
posted by ymgve at 8:29 PM on May 28, 2011
localroger writes:
I think consciousness will turn out to be a (possibly complex, where complex might mean writing a few thousand lines of application specific code) subset of Turing completion.
You believe consciousness can be discretized? Or rather, you believe consciousness is discretized (since it can be represented equivalently in a turing complete machine)?
I'm very sad to hear this.
"application specific"? uhh, yeah.
posted by kuatto at 10:30 PM on May 28, 2011 [1 favorite]
It's not something you can "work around", either you're performing logical reasoning in a system of well defined axioms or you're not.
That was kind of my point. Consciousness is not the result 'logical reasoning in a system of well defined axioms'.
posted by empath at 11:04 PM on May 28, 2011
empath: "
That was kind of my point. Consciousness is not the result 'logical reasoning in a system of well defined axioms'"
But then you're not proving anything and Godel doesn't apply. It's not getting around anything. Really, what on earth were you getting at with that? I keep looking at it and it makes as much sense to me as "the mind's seeming ability to work around the Uncertainty Principle".
posted by Proofs and Refutations at 11:57 PM on May 28, 2011
The human mind isn't limited by things like Godel's theorem or the halting problem because it's not just a turing machine executing a system of logical rules. I'm not sure why it's hard to understand where I was going with that.
posted by empath at 6:02 AM on May 29, 2011
kuatto: You believe consciousness can be discretized?
Well, since consciousness is implemented by brains, and brains are made of matter, and matter is made up of particles which are capable only of expressing discrete quantum states, then I'd say any other belief is pretty unscientific.
The problem is that a simple system can be used to organize a truly massively unsimple amount of information, and that is what happens in a brain; that's what makes consciousness look so complicated and unfathomable. When you have enough discrete points in a fine enough pattern, from the right distance they stop looking discrete and it can even become hard to believe that they are.
It is probably worth mentioning that while the brain may be Turing equivalent that doesn't imply that an actual Turing machine would do a very good job of implementing a brain; an actual Turing machine wouldn't even do a very good job of implementing most computers, and a lot of computers which are supposedly Turing equivalent would actually do a very poor job of implementing each other. I would expect consciousness to be rather more different from existing practical computers than, say, an 1802 is from a modern Intel CPU.
However, once we know what a brain-emulating computer needs to do, we can start making designs aimed at that process. It's possible to estimate how much processing power would be needed to emulate the brain at the level of physical processes; Ray Kurzweil makes a pretty good argument that we will be there around 2050. However, I don't think emulation of physical processes is necessary to implement consciousness, and I would expect even existing computers to be able to implement a system that is as functional and adaptable as, say, a wasp.
So far nobody has figured out how to do even that.
posted by localroger at 6:05 AM on May 29, 2011
matter is made up of particles which are capable only of expressing discrete quantum states,
They're only discrete if you look at them. There's evidence that quantum computation happens in living cells during processes like photosynthesis. I don't know why thought should be any different.
posted by empath at 6:11 AM on May 29, 2011
empath, you are failing to separate what in a computer would be the hardware from the software. You have no basis for saying the brain isn't "a turing machine executing a system of logical rules," and there is some basis for saying it is exactly that. But those rules aren't "I think therefore I am;" those rules would be things like "if this neuron fires more than N times, form a new synapse." We are no more connected to those rules than you are to your computer's assembly language as you use it to browse Metafilter.
And this is important, because I think it's one of the reasons so many AI people are looking through the wrong end of the telescope. If you learn how the brain programs itself through experience you don't need to design a visual system that can do what V1-V5 do in the human brain; you can just build it and throw data at it until it programs itself, and if the physical evidence of the homogeneity of the cerebral cortex is anything to go by, once you get a machine that successfully programs itself to make a human-like visual cortex (which we do pretty well understand nowadays) you can just keep extending it and let it work out the rest of the functionality the way actual animals and humans do.
posted by localroger at 6:15 AM on May 29, 2011 [1 favorite]
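The local rule localroger quotes, "if this neuron fires more than N times, form a new synapse," is the flavour of rule that self-organizing models rely on. Below is a deliberately crude sketch with invented thresholds and sizes, not values from any published model: units that repeatedly fire together end up wired together, without anyone specifying the final wiring.

import random

N_UNITS, THRESHOLD, STEPS = 8, 50, 500
fire_together = [[0] * N_UNITS for _ in range(N_UNITS)]   # co-activity counters
synapse = [[False] * N_UNITS for _ in range(N_UNITS)]     # wiring, initially empty

def stimulus():
    """Structured input: units 0-3 tend to fire together, units 4-7 fire at random."""
    group_on = random.random() < 0.5
    active = {i for i in range(4) if group_on and random.random() < 0.9}
    active |= {i for i in range(4, 8) if random.random() < 0.1}
    return active

for _ in range(STEPS):
    active = stimulus()
    for i in active:
        for j in active:
            if i != j:
                fire_together[i][j] += 1
                # the local rule: co-fire more than THRESHOLD times -> grow a synapse
                if fire_together[i][j] > THRESHOLD:
                    synapse[i][j] = True

for i in range(N_UNITS):
    print("unit %d -> %s" % (i, [j for j in range(N_UNITS) if synapse[i][j]]))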
You believe consciousness can be discretized?
localroger's reply is good, but this comment made me speak involuntarily and incredulously, "How could it not be?"
posted by callmejay at 6:18 AM on May 29, 2011
They're only discrete if you look at them. There's evidence that quantum computation happens in living cells during processes like photosynthesis.
I don't find that idea credible, because there is no evidence of any mechanism in a cell that can maintain quantum coherence. If you feel there is anything to it other than chemical reactions I'm afraid we will have to agree to disagree.
posted by localroger at 6:18 AM on May 29, 2011
I don't find that idea credible, because there is no evidence of any mechanism in a cell that can maintain quantum coherence.
How long do you need to maintain it for it to be usable? There is evidence that they CAN maintain it.
posted by empath at 6:22 AM on May 29, 2011
an actual Turing machine wouldn't even do a very good job of implementing most computers
A Turing machine will emulate any computer we have just fine (by our 'usefulness' standards, just way, way too slowly), but that is not the point of the 'machine'. Remember, it's a mathematical model to make it easier to analyse certain problems.
If 'intelligence' is some sort of algorithm, the Turing machine will represent that just fine; if it is some other 'quantum' (meaning unknown) mechanism, then no.
posted by sammyo at 6:29 AM on May 29, 2011
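sammyo's point, that a Turing machine can emulate anything we build but impractically slowly, is easy to illustrate at small scale. Below is a minimal simulator for a single-tape machine plus a sample machine that increments a binary number; a pedagogical sketch, not a claim about how real hardware is modelled.

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=10000):
    """Run a single-tape Turing machine.

    rules maps (state, symbol) -> (new_symbol, move, new_state), with move
    'L' or 'R'.  The machine halts when no rule applies.
    """
    tape = dict(enumerate(tape))
    head, steps = 0, 0
    while (state, tape.get(head, blank)) in rules and steps < max_steps:
        symbol = tape.get(head, blank)
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
        steps += 1
    cells = [tape.get(i, blank) for i in range(min(tape), max(tape) + 1)]
    return "".join(cells).strip(blank)

# Increment a binary number: walk right to the end, then carry leftwards.
INCREMENT = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "done"),
    ("carry", "_"): ("1", "L", "done"),
}

print(run_turing_machine(INCREMENT, "1011"))   # prints 1100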
if this neuron fires more than N times, form a new synapse.
I'm really, really, not sure that that is all there is to it. If it were, we'd have figured it out by now. People researching consciousness are still missing something, I'm fairly convinced.
Which isn't to say that you can't get some kind of partial consciousness out of something that simulates one level of what's happening in the brain. I just don't think that simulating neurons at the connection level is enough to generate true consciousness, and I don't think that trying to somehow encode thought and language as a system of logical rules ignoring the physical substrate is enough, either.
It's not that I think thought is something magical and non-physical; I just think that there is something happening below the neuronal connection level that is probably important.
posted by empath at 6:31 AM on May 29, 2011
so many AI people are looking through the wrong end of the telescope.
But that's somewhat like positing that we do not need chemistry departments because chemistry is just a special case of physics. We have no idea what the actual way to understand the problem of 'intelligence' will turn out to be.
posted by sammyo at 6:36 AM on May 29, 2011 [1 favorite]
I'm really, really, not sure that that is all there is to it. If it were, we'd have figured it out by now.
That we'd have figured it out by now does not follow at all. Even if the underlying system is simple, it is expressed in a form that stores mountains of data. Benoit Mandelbrot didn't figure out the Mandelbrot Set by inspection, he discovered it, and I feel the basis of consciousness will only be discovered in a similar manner.
The quantum mechanisms of photosynthesis are cool, but they are not very long lasting in terms of neuronal activity (which happens at the millisecond level) and they appear to be doing nothing more computationally intensive than facilitating chemical reactions. For quantum effects to be important in consciousness there would have to be a very large number of such events happening across a large volume of brain, and there is no real evidence that anything like that is happening, while there is a great deal of evidence for neuronal firing making changes to growth patterns, ion pathways, and so on.
The fact that a google search for "quantum photosynthesis" turns up page after page of articles speculating on consciousness is actually cause for increased skepticism. There is a strong inherent desire for people to feel that we are somehow unique and that our specialness as a species cannot be duplicated. People who are motivated by that desire are going to find rationalizations for it.
posted by localroger at 7:39 AM on May 29, 2011
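localroger's Mandelbrot analogy can be made literal: the whole set falls out of iterating z -> z*z + c, a rule nobody would have guessed from staring at pictures of the boundary. A few lines of Python render a coarse ASCII view.

def in_mandelbrot(c, max_iter=50):
    """Iterate z -> z*z + c from zero; points that stay bounded are in the set."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

# Coarse ASCII rendering: one simple rule, an endlessly intricate boundary.
for row in range(24):
    y = 1.2 - row * 0.1
    print("".join("#" if in_mandelbrot(complex(-2.1 + col * 0.04, y)) else " "
                  for col in range(76)))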
> I just don't think that simulating neurons at the connection level is enough to generate true consciousness, and I don't think that trying to somehow encode thought and language as a system of logical rules ignoring the physical substrate is enough, either.
Is this a reference to Penrose and Hameroff and their theory that microtubules can function as quantum computers?
I don't think Penrose understands neurology very well. And I don't think it's necessary to posit some sort of quantum process in the brain to account for its ability to 'work around' Gödel's incompleteness theorem; I think it's sufficient that organisms are embedded in and part of the world. Logic and mathematics are closed axiomatic systems, but brains aren't. Brains are part of organisms which are part of the real world. The results of the reasoning, counting, arithmetic and spatial reasoning that brains do have to work in the real world with a reasonable amount of reliability. Evolution and experience have a way of taking care of bad axioms and alternate systems of logic and geometry.
This is not to say that Hameroff's theory is wrong. But the problem Penrose thinks it could solve is not a problem if you assume a naturalistic (or embodied mind) view of mathematics rather than a Platonic one.
This is somewhat analogous to Barbara Partee's criticism of statistical analysis in linguistics (as I understand it), "Really knowing semantics is a prerequisite for anything to be called intelligence." In practice this usually means analysis of the likelihood that a word or morph will occur within a certain context (in certain patterns with other morphs). This can be very useful up to a point, but language is not a closed system. People use language to communicate. You can't really understand language without understanding semantics and how people use it.
posted by nangar at 8:03 AM on May 29, 2011
empath: "The human mind isn't limited by things by like Godel's theorem or the halting problem because it's not just a turing machine executing a system of logical rules. I'm not sure why it's hard to understand where I was going with that"
You seem really confused as to what the Halting Problem and Godel's theorem are and mean. The human mind absolutely is limited by both of them. The human mind *cannot* prove whether an arbitrary turing machine halts, and *cannot* use an axiomatic system powerful enough to embed the natural numbers with the operations of addition and multiplication to prove its own consistency.
Also probabilistic computation gives no increase in power over deterministic as far as anyone has found, so it's really hard to see what possible system could be more powerful (rather than simply faster) than a turing machine.
posted by Proofs and Refutations at 8:09 AM on May 29, 2011
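The Halting Problem point can be put as the standard diagonal argument, sketched here in Python around a hypothetical halts() oracle. No such function exists; that is the point: if it did, the program below would have to both halt and not halt on itself.

def halts(program, argument):
    """Hypothetical oracle: True iff program(argument) eventually halts.
    Turing's argument shows no total, always-correct version can exist."""
    raise NotImplementedError("no correct implementation is possible")

def contrary(program):
    """Do the opposite of whatever the oracle predicts about program(program)."""
    if halts(program, program):
        while True:          # oracle says it halts, so loop forever
            pass
    return "halted"          # oracle says it loops, so halt immediately

# The contradiction: does contrary(contrary) halt?
#   - If halts(contrary, contrary) is True, contrary(contrary) loops forever.
#   - If it is False, contrary(contrary) halts.
# Either way the oracle is wrong about at least one input, so it cannot exist.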
There is a strong inherent desire for people to feel that we are somehow unique and that our specialness as a species cannot be duplicated.
The fact that bacteria make use of it would imply that humans wouldn't be particularly special if they also use quantum effects. I think human consciousness is a special case of consciousness, but only in terms of complexity and size, not in terms of doing anything fundamentally different at the lowest levels from what cat or dog or fish brains do.
posted by empath at 8:11 AM on May 29, 2011
The human mind absolutely is limited by both of them. The human mind *cannot* prove whether an arbitrary turing machine halts, and *cannot* use an axiomatic system powerful enough to embed the natural numbers with the operations of addition and multiplication to prove its own consistency.
We're talking about two different things. The fact that we can't do those things 'formally', using logic, but nevertheless don't continuously run into difficulties in real life with brains locking up and people being flummoxed by paradoxes implies that the mind is not a turing machine executing machine logic.
posted by empath at 8:17 AM on May 29, 2011
localroger,
For examples of continuous phenomena in neural nets, look at synaptic integration and spike arrival times. These mechanisms are a part of how neural networks operate, and must necessarily be discretized to 'run' on a turing machine.
Breaking it down even more simply, time and space are both continuous and should probably be attributes of a Turing machine neural network model. Will discretizing the parameter 't' have an effect on your model of consciousness? Can you affirm that it has no effect?
posted by kuatto at 8:19 AM on May 29, 2011
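kuatto's question about discretizing t has a standard concrete form: a leaky integrate-and-fire neuron stepped forward with timestep dt. The parameters below are generic textbook-style values, not tied to any biological measurement; running the same model with different dt values shows the spike times shifting slightly with the discretization.

def simulate_lif(dt, duration=0.2, tau=0.02, v_rest=0.0, v_thresh=1.0, drive=1.5):
    """Leaky integrate-and-fire neuron, forward Euler with timestep dt (seconds).

    tau * dV/dt = -(V - v_rest) + drive   (generic textbook form, arbitrary units).
    Returns the times at which V crosses threshold and resets.
    """
    v, t, spikes = v_rest, 0.0, []
    while t < duration:
        v += dt / tau * (-(v - v_rest) + drive)
        if v >= v_thresh:
            spikes.append(round(t, 5))
            v = v_rest
        t += dt
    return spikes

# The same model under three discretizations: the spike times drift with dt.
for dt in (0.002, 0.001, 0.0001):
    print("dt=%g: first spikes at %s" % (dt, simulate_lif(dt)[:3]))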
My hunch is that you are not concerned with consciousness, rather you are concerned with similarity of input/output regimes by some measure. Is this so?
posted by kuatto at 8:27 AM on May 29, 2011
I should really say 'just' a turing machine. I really doubt that evolution had a strict developmental process and the brain is probably a bunch of hacked together systems using various techniques that create the illusion of a unified logical system, but will probably fall apart on closer examination. There are probably systems in the brain which can be well approximated by turing machines, but I am guessing there are others that are probabilistic, and others that are neural networks, and others that will depend on quantum weirdness and others that are using techniques we haven't even thought of yet.
I think the best method of figuring it out is to be agnostic about it and see what kinds of intelligence we can get out of all of those methods and whatever else we can think of, because I'm sure that whatever we can imagine has already been tried in nature somewhere.
posted by empath at 8:29 AM on May 29, 2011
empath, I think we agree more than I thought at first then. I'd be absolutely astonished if the mind *wasn't* some horrendous kludge of hacked together systems. I just don't think our ability to be wrong about things entitles us to the claim of "working around" the limitations of axiomatic systems.
It seems to me rather like claiming that the ability to reach over and unplug Deep Blue then declare a win by default is "working around" the limitations of our chess strategy. I suppose it is, but it seems rather philosophically vacuous when it comes to the theoretical capabilities of AI.
TLDR - I don't see why we can't eventually produce AI just as inconsistent as us.
posted by Proofs and Refutations at 8:42 AM on May 29, 2011 [1 favorite]
kuatto, it is pretty much proven that the basic unit of information transfer from one part of the brain to another is the firing of neurons. These are discrete events which happen on a scale of milliseconds. We have good experiments showing that systems in the brain react to such things as a certain frequency of firing or a certain number of pulses occurring in a timeframe. We have no evidence at all that there is any sensitivity to timing issues which would be affected by time being quantized. We also have no evidence that there is an amplitude component in neural firing; such pulses seem to be very digital in nature.
Also, while we have no evidence for the quantization of time, we have no evidence against it either, and one could easily interpret the Heisenberg uncertainty principle as direct evidence that particle position/velocity information is quantized.
In any case we have no evidence that quantum effects are present or that they are doing anything computationally complex if they are, and we do have lots of evidence that it's neural firing, ion channel mods, dendrition, and synapse building. Appealing to quantum effects on top of all that is akin to someone who only knew a Commodore 64 encountering a modern computer and assuming it must be powered by quantum effects and woo because there's no way you could make transistors switch that fast and be stable.
posted by localroger at 8:47 AM on May 29, 2011
Also: My hunch is that you are not concerned with consciousness, rather you are concerned with similarity of input/output regimes by some measure. Is this so?
If this is directed at me, precisely the opposite is true. I believe it is possible and ultimately inevitable that we will build machines that act like animals and humans not because we have observed how animals and humans act and written software to do the same thing, but because they successfully emulate the algorithms by which animals and humans self-program and develop via exposure to real world input. They will be as complex and adaptable as we are not so much because we figured out how to make them that way as because they figure it out in the course of growth. They would surprise their creators as much and in the same way as children surprise us, and I feel this latter feature is one reason some people are reluctant to explore that path, because the possibility of creating SkyNet is quite real and almost impossible to fully prevent.
posted by localroger at 8:54 AM on May 29, 2011
kuatto, it is pretty much proven that the basic unit of information transfer from one part of the brain to another is the firing of neurons.
a basic unit of information transfer in the brain. The brain distributes information in lots of different ways, though -- the flow of neurotransmitters is just one example I can think of off the top of my head.
And you're still left with the question of how a neuron 'decides' when to fire, though.
In any case we have no evidence that quantum effects are present or that they are doing anything computationally complex if they are,
I don't know how you can say that when we still are barely scratching the surface of understanding how the brain works.
posted by empath at 8:56 AM on May 29, 2011
Was that supposed to be a link?
In any case, I would say that the brain modifies how neurons fire in lots of different ways, but it is neuronal firing which is the foundation of it all. While it is technically true that releasing hormones, for example, transmits information, it is also true that this sort of thing is a rather slow, one-dimensional channel and I seriously doubt it is either basic or necessary to the mechanisms that make us able to form and follow strategies for following urges and optimizing our environment, the sort of activity we consider conscious expression.
posted by localroger at 9:09 AM on May 29, 2011
localroger,
Please examine the reference I gave above: synaptic integration and spike arrival times have nothing (overtly) to do with quantum effects.
posted by kuatto at 9:13 AM on May 29, 2011
localroger:
it is pretty much proven that the basic unit of information transfer from one part of the brain to another is the firing of neurons.
A couple questions:
Is there a significant statement here?
We are talking about equivalence of consciousness to simulation right? This strikes me as a bunch of hand-waving on your part, e.g. "pretty much equivalent."
posted by kuatto at 9:24 AM on May 29, 2011
While it is technically true that releasing hormones, for example, transmits information, it is also true that this sort of thing is a rather slow, one-dimensional channel and I seriously doubt it is either basic or necessary to the mechanisms that make us able to form and follow strategies for following urges and optimizing our environment, the sort of activity we consider conscious expression.
I don't know if you've ever tried any psychedelics, but if you haven't, trust me, you'd be surprised (to put it mildly) by how much a tiny change to the activity of your serotonin system alters how you think and perceive reality.
posted by empath at 9:35 AM on May 29, 2011
In any case we have no evidence that quantum effects are present or that they are doing anything computationally complex if they are,
> I don't know how you can say that when we still are barely scratching the surface of understanding how the brain works
This is basically a God-of-the-gaps argument, but substituting 'quantum stuff' for 'God.' I'm not sure what kind of problem about consciousness quantum magic is supposed to solve here. Why is a parallel system that uses quantum processes supposed to be capable of consciousness, while systems that don't use quantum processes are not? What is it about consciousness that hypothetical quantum processes are supposed to explain?
There are a lot of known and very significant differences between brains and computers. I'm not sure why we need to bring in quantum processing to explain why there are differences in what they can do.
posted by nangar at 9:36 AM on May 29, 2011
empath, I know very well that transmitters can make changes, even big ones, but I maintain that this is a peripheral effect. When you perceive a giant pink bunny rabbit, whether it's real or an LSD hallucination, that means certain neurons are firing to assert (and probably sharpen and confirm) that pattern. Without the LSD you might have no bunny rabbit, but without neurons firing you will see nothing at all.
posted by localroger at 9:42 AM on May 29, 2011
This is basically a God in the gaps argument, but substituting 'quantum stuff' for 'God.' I'm not sure what kind of problem about consciousness quantum magic is supposed to solve here. Why is a parallel system that uses quantum processes supposed to capable of consciousness, while systems that don't use quantum processes are not?
I'm not saying that quantum processes are essential to intelligence. I'm saying that they might be a big part of human consciousness, and we now know that they are used in other cells in other animals for other purposes. Quantum computation is many, many, many times faster for certain tasks than regular computation, and if we know anything about evolution, it's that it finds efficiencies where it can.
I guess the debate here really is whether the brain works like the OSI model where you have all the computation and thinking happening at one level and it gets neatly encapsulated into symbols and rules of some logical mind-language that is then passed down to the neurons which dumbly move the symbols around. And you could, if you wanted, simply swap out the neurons with some other computational substrate and you'd have an equivalent intelligence.
I strongly suspect that it is not the case, and that there is some level of 'intelligence' happening at many layers of the brain, and that there is probably some kind of quantum computation and decision making happening even at the neuronal level, and possibly at larger scales as well (i.e., neurotransmitters, brain waves, etc.).
I think it's clear that a great deal of computation is done by neurons transmitting pulses. Perhaps most of it is done that way, but I doubt it's the whole story, and I doubt we'd be able to simulate a brain just by building enough simulated neurons in the right pattern.
posted by empath at 9:57 AM on May 29, 2011
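To put rough numbers on "many, many, many times faster for certain tasks": the textbook example is unstructured search, where a classical scan needs on the order of N lookups while Grover's algorithm needs roughly (π/4)·√N oracle queries. A minimal sketch of that arithmetic (plain Python, purely illustrative, not a claim about anything happening in neurons):

```python
import math

# Query counts for unstructured search over N items: a classical scan needs
# on the order of N lookups; Grover's algorithm needs about (pi/4)*sqrt(N)
# oracle queries.
for n in (1_000, 1_000_000, 1_000_000_000):
    classical = n / 2                      # expected lookups for a blind scan
    grover = math.pi / 4 * math.sqrt(n)    # Grover iterations (idealized)
    print(f"N = {n:>13,}  classical ~ {classical:>13,.0f}  Grover ~ {grover:>9,.0f}")
```

The speedup is quadratic for this task; for a few special problems (like factoring) the gap is even larger, which is the sense in which speed is the real thing quantum computation buys.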
Without the LSD you might have no bunny rabbit, but without neurons firing you will see nothing at all.
You'll die if you don't drink water, but that doesn't mean you don't also need to breathe.
Sight and thinking, as concepts, as well as time, cause and effect, self-consciousness and many other things that you might consider fundamental to thought, are all tremendously altered when you're on LSD. It's not as simple as 'seeing things'. It's kind of hard to put into words, but let's just say that it is not just a matter of seeing pink bunny rabbits, and it's not a minor change. You can't hand wave away pretty much all of the normal human experience of consciousness as being incidental to it.
posted by empath at 10:04 AM on May 29, 2011
kuatto: I'm familiar with those effects, and there is nothing there that requires time to be continuous. These are events occurring at millisecond scale; you are not going to convince me that timing discrepancies of less than, say, a tenth of a millisecond are of any significance. I base this, incidentally, on a quarter-century career designing industrial controls which have to do a lot of the things brains do, and I've often had to deal with quantized time (in an interrupt system) dealing with continuous activity (such as high-speed in-motion weighing). There is a point at which quantized scans are frequent enough that you don't notice an improvement from making them any faster, and I am quite certain that such a point exists for the simulation of neural activity -- if we even have to simulate actual neurons to get the qualities of conscious behavior to emerge.
We are talking about equivalence of consciousness to simulation right?
I frankly don't even know what you mean by this. What I am asserting is that three things will happen in order, which make it reasonable to conclude that the result is "strong AI," that is something comparable to ourselves:
1. Someone builds a machine which is not directly programmed to do something like emulate the retina and V1-V5, but which teaches itself to do so using live input and a combination of mechanisms which are either known or suspected to be at work in vivo.
2. This system is extended as computers improve and develops characteristics reminiscent of animal behavior, even though it is not exactly understood how those behaviors work. (The machine would be a great study platform for such research, being much more accessible than animal models.)
3. Extended far enough with good enough computers, the system becomes capable of language, reason, and other uniquely human behaviors both positive and negative. At this point anyone unwilling to admit the system is a "strong AI" is just in denial.
Step 2 in that is why I think everyone is barking up the wrong trees. Right now our very best machines do not have the real-world adaptability of even a wasp, which is something we should be able to emulate. We are failing because we are attacking problems that either go in the wrong direction (trying to code something that will deliberately write one whorl of the Mandelbrot Set, instead of letting the whole thing emerge), or are aiming too high (what's unique about humans instead of what makes a wasp different from the winner of the DARPA challenge).
posted by localroger at 10:09 AM on May 29, 2011
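A minimal sketch of the time-quantization point above, using a textbook leaky integrate-and-fire neuron rather than anything localroger actually built; the constants are arbitrary. The idea is just that once the integration step is small enough, shrinking it further stops changing the spike output:

```python
# A leaky integrate-and-fire neuron driven by a constant input, integrated
# with forward Euler at several step sizes.  Past a certain resolution,
# making the time step smaller barely changes the spike count.
def lif_spike_count(dt, t_total=1.0, tau=0.02, v_thresh=1.0, drive=1.5):
    v, t, spikes = 0.0, 0.0, 0
    while t < t_total:
        v += dt / tau * (drive - v)   # membrane relaxes toward the drive level
        if v >= v_thresh:             # threshold crossed: emit a spike, reset
            spikes += 1
            v = 0.0
        t += dt
    return spikes

for dt in (5e-3, 1e-3, 1e-4, 1e-5):
    print(f"dt = {dt:.0e} s  ->  {lif_spike_count(dt)} spikes in 1 s")
```

With these (made-up) constants the spike count settles once dt drops to about a millisecond, which is the "frequent enough scans" threshold in miniature.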
I'm not saying that quantum processes are essential to intelligence ... Quantum computation is many, many, many times faster for certain tasks than regular computation, and if we know anything about evolution ...
Speed is an actual problem that quantum computation could solve. And it's a real problem.
I guess the debate here really is whether the brain works like the OSI model where you have all the computation and thinking happening at one level and it gets neatly encapsulated into symbols and rules of some logical mind-language that is then passed down to the neurons which dumbly move the symbols around. And you could, if you wanted, simply swap out the neurons with some other computational substrate and you'd have an equivalent intelligence.
I don't think anyone studying neurology and trying to understand how brains work really thinks they work like that. What you're describing is a dumb just-like-a-computer-program model. It's a bit of a straw man, though I'm sure this is unintentional. Neural systems don't organize themselves the way we organize human-designed systems. Analogies from those systems carry us only so far; we're dealing with something different.
posted by nangar at 10:19 AM on May 29, 2011
I don't disagree with anything you said. There are others who have stated in this thread that the brain is a Turing machine, though. I think the brain is complicated, and that there is no magic bullet that's going to explain what consciousness is in a way that satisfies everyone. Consciousness isn't one phenomenon, it's a collection of phenomena, which arise from the activity of various parts of the brain working in parallel (and perhaps even working against each other). There's never going to be an aha! moment where we find the soul of the machine.
The experience of consciousness is a bit of a magic show, and I think we'll continue to gradually understand various parts of the brain, explaining one magic trick at a time, and people will say, well, if it's that simple, it wasn't really part of consciousness at all, until eventually we've got nothing left but a guy in a shabby suit pulling rabbits out of a hat on a ramshackle stage. (I think I may have stretched that analogy too far.)
posted by empath at 10:28 AM on May 29, 2011
empath: You'll die if you don't drink water, but that doesn't mean you don't also need to breathe.
W. T. F. ?
Let me get a bit more specific. The particular model I think would lead to success is a modified form of an algorithm published by Erich Harth in the late 1980s, in which the thalamus implements a feature extractor that sharpens input data toward stored patterns in the cerebral cortex. Using fairly primitive computers, Harth et al. built a model, fed it input, and showed it making not just the same kind of discriminations, but the same kind of mistakes known to occur in natural vision systems.
In this model "what we are perceiving" is encoded by the firing of neurons in the thalamus. (Note that this is contrary to what a lot of people assume about consciousness occurring in the cortex.) There may be many other factors involved; Harth's own putative algorithm for what the thalamus is doing contains a couple of magic numbers that have to be adjusted to get it to work. But this sort of thing is a start, and what's more, Harth was able to show that the neural wiring of the thalamus is capable of implementing his algorithm, getting all the data together with the right kind of processing under the assumptions known at the time about how synapses were likely to act.
Now, this leaves a lot out; Harth has no suggestion for how the patterns are selected and stored, or for the effects of hormones and such. But it is a fully artificial system making the same mistakes humans do not because it was told to, but because that behavior emerged from a simple underlying algorithm. You could code the algorithm by simulating neural firing, but Harth didn't have to; he used something more like a computer neural net and with fairly coarse time quantization got similar results to the natural system. That is very, very suggestive.
But even though it was Harth's work (coming just after the SJG essay about the wasps) that really convinced me AI was doable, ironically Harth himself did not believe this (and even argues, unconvincingly to me, against the possibility in his popularization The Creative Loop).
If someone wanted to give me the grant money to pursue it myself (and I didn't want to become a character from one of my own stories) this is where I would start. Extend this model with an (at first simple) emotion system to influence the selection and storage of patterns derived from experience rather than programmed in. Fiddle with it until it starts acting like a real animal visual system, then start extending it. Harth went in other directions, having reached the limits of what could be inspired by known natural architecture. I'd advise taking a leaf from the fractint geeks and start throwing algorithms at the wall until some of them stick.
Because when one of them sticks, there will be absolutely no doubt, since there is nothing quite as magical as directed self-organized behavior that arises without explicit direction.
posted by localroger at 10:34 AM on May 29, 2011 [1 favorite]
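Harth's published algorithm isn't reproduced here, but a toy sketch of the general idea localroger describes -- a feedback loop that iteratively sharpens a noisy input toward whichever stored "cortical" pattern it most resembles -- might look like this (the templates, similarity measure, and gain are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Three stored "cortical" templates (toy 8-element patterns).
templates = np.array([
    [1, 1, 1, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 1, 1, 1],
    [1, 0, 1, 0, 1, 0, 1, 0],
], dtype=float)

def sharpen(signal, templates, gain=0.3, steps=20):
    """Feedback loop: repeatedly nudge the signal toward its best match."""
    for _ in range(steps):
        scores = templates @ signal                   # crude similarity measure
        winner = templates[np.argmax(scores)]         # best-matching template
        signal = (1 - gain) * signal + gain * winner  # feed the winner back in
    return signal

noisy = templates[0] + rng.normal(0.0, 0.4, size=8)   # degraded view of pattern 0
print(np.round(noisy, 2))
print(np.round(sharpen(noisy, templates), 2))          # sharpened toward a stored pattern
```

The only point of the sketch is that the cleaned-up percept falls out of the feedback loop rather than being coded in directly; the gain parameter here stands in for the kind of tunable constants localroger mentions, and Harth's actual model runs on different machinery.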
Extend this model with an at first simple emotion system to influence the selection and storage of patterns derived from experience instead of programmed in. Fiddle with it until it starts acting like a real animal visual system, then start extending it.
You act like people haven't been trying exactly this for decades with limited success. It's an incomplete picture of what is going on in the brain, as pretty much anybody who actually does this research will tell you. Neural networks are clearly part of human thinking, and even the ones people have already made are capable of decision making and perception, but they are not enough to explain what is going on in the brain, imo.
posted by empath at 10:49 AM on May 29, 2011
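For a sense of how minimal the existing artificial decision-makers alluded to can be, here is the textbook single perceptron (not tied to any particular lab's work) learning a linearly separable yes/no rule from examples:

```python
# A single perceptron trained with the classic update rule on a linearly
# separable toy problem: "fire" when at least one of two inputs is active (OR).
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1

for _ in range(20):                        # a few passes over the examples
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out                 # perceptron learning rule
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

for (x1, x2), target in data:
    pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
    print((x1, x2), "->", pred, "(target", target, ")")
```

It converges on this problem in a handful of passes, which is both the appeal of such models and, as empath says, a hint at how far they are from the whole story.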
empath, I have had some contact with some of the larger figures in the Singularity world, some of whom are in turn conversant with vast areas of neuroscience and AI work, and I have never heard of anyone with funding who was aiming at my target, which is an emergently self-sufficient system on the scale of a wasp. They are all aiming much higher in abstraction, either at human level functionality through deliberately coded modules, at useful expert systems that can be sold to pay off their venture capital, or at detailed biological modeling which simply isn't feasible at full scale with current hardware. They get partial functionality and an incomplete picture because they aren't trying for the whole enchilada, because they think even the bite-sized enchilada won't fit in their computers.
posted by localroger at 11:44 AM on May 29, 2011
empath, I have had some contact with some of the larger figures in the Singularity world
The singularity people are off the deep end, imo.
There's AI research on the insect level being done, though, and I remember reading that insect-inspired AI was big in robotics for autonomous motion control.
posted by empath at 1:25 PM on May 29, 2011
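A flavor of that insect-inspired control style: a Braitenberg-vehicle-style sketch (a standard toy model, not any specific robotics project) in which two light sensors drive the opposite-side wheels, so light-seeking emerges from the wiring with no planner or map at all:

```python
import math

# Braitenberg-style "vehicle 2b": each light sensor drives the opposite wheel,
# so steering toward the light emerges from the wiring alone.
LIGHT = (5.0, 5.0)

def intensity(px, py):
    """Toy light intensity, falling off with distance from the source."""
    return 1.0 / (1.0 + math.hypot(px - LIGHT[0], py - LIGHT[1]))

def step(x, y, heading, gain=0.5, wheel_base=0.2, dt=1.0):
    # Two sensors mounted ahead of the body, angled left and right.
    s_left = intensity(x + math.cos(heading + 0.5), y + math.sin(heading + 0.5))
    s_right = intensity(x + math.cos(heading - 0.5), y + math.sin(heading - 0.5))
    v_left, v_right = gain * s_right, gain * s_left       # crossed connections
    heading += (v_right - v_left) / wheel_base * dt       # differential drive turn
    forward = (v_left + v_right) / 2.0 * dt
    return x + forward * math.cos(heading), y + forward * math.sin(heading), heading

x, y, h = 0.0, 0.0, 0.0
for t in range(201):
    if t % 50 == 0:
        # The distance to the light should shrink as the vehicle steers toward it.
        print(f"step {t:3d}: distance to light = {math.hypot(x - LIGHT[0], y - LIGHT[1]):.2f}")
    x, y, h = step(x, y, h)
```

The behavior looks purposeful, but there is nothing inside except two multiplications and a subtraction, which is roughly the spirit of the insect-level robotics work mentioned above.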
I would agree that some -- not all -- of the Singularity people are off the deep end. But they are all much more conversant with the literature on this subject than anybody else, since it's their obsession.
The article you linked to about the Drosophila model is a perfect example of not doing what I'm suggesting. They are trying to model the fly biologically so that they can test how different environmental influences might alter a fly's behavior. They are not in any sense trying to build a model that coaxes behavior on the level of a fly out of an emergent system.
posted by localroger at 1:51 PM on May 29, 2011
I philosophically stand with Stephen Wolfram on the subject
Wolfram Alpha Turns 2: "Our main conclusion is that there is an irreducible amount of work that requires humans and algorithms"
posted by kliuless at 8:11 AM on May 30, 2011
Discussion of Norvig and Chomsky by Mark Liberman on Language Log: Norvig channels Shannon, Straw men and Bee Science. The second article includes a transcript of a relevant passage from Chomsky's talk (made from a cell phone recording).
Barbara Partee has uploaded a pdf of her talk (4 pages).
posted by nangar at 7:17 AM on June 5, 2011