BOOP BEEP BOOP
May 19, 2016 8:55 AM Subscribe
Your brain does not process information, retrieve knowledge or store memories. In short: your brain is not a computer.
I used to think the brain was the most fascinating thing in the universe, but then I thought, "Well, look who's telling me that."
posted by Mr.Encyclopedia at 9:02 AM on May 19, 2016 [122 favorites]
AI researcher Dr. Abraham Perelman makes a similar point in Infocom's old A Mind Forever Voyaging game (p. 6)
posted by johngoren at 9:05 AM on May 19, 2016 [8 favorites]
I guess I disagree. If I'm allowed to have access to pencil and paper, I can run any program that any computer can, just slower. I am Turing Complete.
I think a much more interesting question would be: how do we characterize the things a human brain can do that no computer can?
posted by steveminutillo at 9:06 AM on May 19, 2016 [19 favorites]
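To make the pencil-and-paper point concrete, here is a minimal sketch of the kind of rule table a person could execute by hand: a three-rule Turing-style machine that increments a binary number. The machine, its encoding, and the helper name are invented for illustration.

```python
# A three-rule "machine" you could run with pencil and paper: increment a
# binary number by sweeping left from the last digit, turning trailing 1s
# into 0s until a 0 (or a blank) can be turned into a 1.

# (state, symbol read) -> (symbol to write, head move, next state); "H" = halt.
RULES = {
    ("carry", "1"): ("0", -1, "carry"),
    ("carry", "0"): ("1", 0, "H"),
    ("carry", " "): ("1", 0, "H"),
}

def increment(binary: str) -> str:
    tape = list(" " + binary)          # a leading blank in case a digit is added
    head, state = len(tape) - 1, "carry"
    while state != "H":
        write, move, state = RULES[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape).strip()

print(increment("1011"))   # -> "1100" (11 + 1 = 12)
```

Whether a person following those rules is "really" doing the same thing as a machine is, of course, what most of the thread below is arguing about.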
Of course it's a computer. By the definition of 'computer', that's almost vacuously true; you can run algorithms on a human brain, so it's as Turing complete as any physical computer. It may be more than that, of course, but this writer gives no clues as to what that might mean.
posted by rhamphorhynchus at 9:06 AM on May 19, 2016 [17 favorites]
Wasn't the computer=brain thing a fairy story we told to IT people to help them cope?
posted by biffa at 9:07 AM on May 19, 2016 [6 favorites]
THANK YOU
All this talk about uploading consciousness drives me up the wall, because it neglects to mention that WE DON'T UNDERSTAND SENTIENCE
See also this previous FPP
posted by Existential Dread at 9:07 AM on May 19, 2016 [16 favorites]
Shorter Robert Epstein: "I have a limited understanding of information. Here's a straw man argument. Here's a 20 year old paper that sort of but not really supports my position. Opinions. Opinions!"
posted by logicpunk at 9:09 AM on May 19, 2016 [29 favorites]
"The brain does much more than just recollect. It intercompares. It synthesizes. It analyzes. It generates abstractions. The simplest thought, like the concept of the number one, is an elaborate logical underpinning."
- Carl Sagan
posted by prepmonkey at 9:10 AM on May 19, 2016 [13 favorites]
Why do we assume all computers work the same way? I can do math in my brain. I can compute. I am a computer. Hell, "computers" used to refer to people, usually women, who spent their days calculating artillery tables.
Also, I always thought it was the other way around. When computers were first introduced they were referred to as electronic brains.
posted by bondcliff at 9:10 AM on May 19, 2016 [23 favorites]
If I'm allowed to have access to pencil and paper, I can run any program that any computer can, just slower.
I mean, sure, you can. But people (looking at you, Ray Kurzweil) have this sloppy tendency to conflate the method by which a computer computes with how the human brain actually works; logic gates, short-term memory = RAM, yadda yadda yadda. And then they extrapolate from that to presume that we'll be able to understand the human brain as a computer, and model it, and turn neurons into a processor, etc.
There is no "elaborate logical underpinning" to the brain. There is a complex, messy, and bizarre biochemical and electrochemical system with massive redundancies and functions that we simply don't understand. We are correlating activities in the brain with its functions by looking at glucose consumption and electrical activity, and finding that the same regions of the brain do massively different things at any given time.
posted by Existential Dread at 9:14 AM on May 19, 2016 [40 favorites]
stop all the downloading
posted by robocop is bleeding at 9:16 AM on May 19, 2016 [13 favorites]
So where does this leave us mentats?
posted by griphus at 9:16 AM on May 19, 2016 [11 favorites]
Oh man, I was working on bringing it back!
I deeply and genuinely love this piece, because it dovetails so nicely with a lot of the problems that I have as a biologist about how people--including many scientists!--perceive the brain and how it works. I've ranted about this in the context of evolutionary psych before, but I find that it's also sometimes a problem when it comes to the way that psychologists and animal behavior specialists think about the brain. It also reminds me of the way that biologists who study model organisms tend to gloss over and forget that the systems they work with have a context outside the laboratory; that rats evolved to be the way that they specifically are for a reason, and there are things that are specific to them that aren't just specific to tiny biology sponges.
The brain isn't a computer. It doesn't act like a computer, and biological memories don't work the way that computational ones do. Its modules aren't necessarily like anything as separate as computer modules are; there's complicated networks of redundancy and interdependency, in the way that you would expect anything that has been slowly built by degrees over billions of years to be. Imagining a brain to work exactly like a mathematical model or a computational device built to run on mathematics is... well, a model analogy, in the truest sense of the word; a model that isn't remotely perfectly accurate, but which maybe helps us to inch our painful way closer to the incomprehensible reality.
We're not math. We don't work like math. And insisting that we do, because "computer" refers to anything that can (no matter how flawfully) compute math... ignoring the reality of what modern humans mean and what the piece means when we talk about computers... well, that's willful ignorance of the piece. Read it again and think about what it means when we use mental models of understanding a complex process that are so very, very flawed.
posted by sciatrix at 9:16 AM on May 19, 2016 [53 favorites]
So, I need to process information (at a work learning event) so I can't read the whole article, but this is arguing against something so specific it sounds like it's missing the forest for the cell walls.
posted by MikeKD at 9:17 AM on May 19, 2016
The dollar bill trick he does, halfway down, depends on a participant who either has a crappy memory, or just has no interest in the outcome (or has never really looked at a dollar bill). Dude is defecating on decades of studies, whole fields of academic endeavor, and that's all he's got? I call shenanigans.
posted by newdaddy at 9:19 AM on May 19, 2016 [2 favorites]
I read this yesterday. I expected to hate it, although I hoped it was making some sort of coherent argument, and then it wasn't and I pretty much hated it. There are interesting bits, but they add up to (at most) a weird, straw-man idea of computation. It is, at this point in history, trivially obvious that animals aren't digital computers like the one I'm typing this on. That doesn't make computation a "metaphor" for what a nervous system does.
Your brain does not process information, retrieve knowledge or store memories. In short: your brain is not a computer.
I mean, yes, it does. It does do those things, among others, even if Von Neumann machines and such are a real lousy model of its architecture. Brains are computational in nature. Squint, and computation of various flavors seems more or less intrinsic to life itself.
posted by brennen at 9:20 AM on May 19, 2016 [22 favorites]
He's just playing disingenuous games with words here. He claims computers have memories but people don't, but using the word "memory" for what computers do is a metaphor borrowed from the word for what people do when they recite pi to 100 places. Even the word "computer" originally referred to a person using algorithms to process symbols.
He says that when people make and retrieve a memory (say, reciting pi to 100 places) that they aren't really storing and retrieving information, rather the brain has changed to make it able to recite pi to 100 places. Well, duh, what does he think a memory in a computer is? The hardware in the computer changes in some way to make it able to store and retrieve information. The human brain works differently, but it's still storing and retrieving those 100 digits somehow.
His example of the dollar bill is about the filters the brain uses when it decides what information to retain. The fact that it doesn't store everything doesn't mean no information is being stored.
posted by straight at 9:20 AM on May 19, 2016 [42 favorites]
Something tells me that this is not completely correct.
Wait! My brain is telling me that.
Quiet you!
posted by Splunge at 9:21 AM on May 19, 2016 [6 favorites]
Oh, the brain doesn't store memories? Try telling that to someone without a hippocampus. And then tell them again the next day because they won't fucking remember it.
sorry. The article touched a nerve. I'll stop now
posted by logicpunk at 9:22 AM on May 19, 2016 [34 favorites]
To try and be less shitting-on-the-post-in-question, I think this piece is good in that it helped me crystallize what I actually do think about this, while I was eating lousy pizza and being angry at it over lunch yesterday.
posted by brennen at 9:23 AM on May 19, 2016 [1 favorite]
This just in: Birds are not airplanes.
posted by anarch at 9:24 AM on May 19, 2016 [21 favorites]
This sounds like someone arguing a grandfather clock and a digital wristwatch don't do the same thing because one has hands and the other does not.
posted by Mooski at 9:25 AM on May 19, 2016 [10 favorites]
His argument seems to boil down to something like:
Computers are made out of circuits that move information around.
People aren't made out of circuits.
Therefore, people are not computers.
Which for a really narrow definition of 'computer', sure, but come on.
posted by Ned G at 9:25 AM on May 19, 2016 [4 favorites]
I remember why I decided to dislike this piece, when I read it yesterday.
posted by newdaddy at 9:25 AM on May 19, 2016 [3 favorites]
He thinks it's some kind of "gotcha" when cognitive scientists can't describe brain function without using what he calls IP metaphors, which is stupid because most IP terms are metaphors that get their names from brain functions.
posted by straight at 9:29 AM on May 19, 2016 [21 favorites]
If you can't say something nice ...
posted by Jonathan Livengood at 9:29 AM on May 19, 2016 [1 favorite]
help computer
posted by Mayor West at 9:30 AM on May 19, 2016 [4 favorites]
That is all well and good if we functioned as computers do, but McBeath and his colleagues gave a simpler account: to catch the ball, the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery (technically, in a ‘linear optical trajectory’). This might sound complicated, but it is actually incredibly simple, and completely free of computations, representations and algorithms.
"Move so that the ball is in a constant visual relationship with respect to home plate and the surrounding scenery" is an algorithm. It's a straightforward application of machine vision to program a robot to catch fly balls that way, without requiring the robot to know anything about "the force of the impact, the angle of the trajectory, that kind of thing" or "an internal model of the path along which the ball will likely move."
posted by jedicus at 9:30 AM on May 19, 2016 [22 favorites]
"Move so that the ball is in a constant visual relationship with respect to home plate and the surrounding scenery" is an algorithm. It's a straightforward application of machine vision to program a robot to catch fly balls that way, without requiring the robot to know anything about "the force of the impact, the angle of the trajectory, that kind of thing" or "an internal model of the path along which the ball will likely move."
posted by jedicus at 9:30 AM on May 19, 2016 [22 favorites]
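A rough sketch of the sort of controller jedicus is describing: the fielder tracks only the tangent of the ball's optical elevation angle and nudges their running speed to cancel its optical acceleration (a one-dimensional cousin of McBeath's linear-optical-trajectory idea). The function name, thresholds, and speeds here are invented for illustration, not taken from the paper.

```python
def fielder_velocity(tan_history, dt, run_speed=6.0, tol=1e-3):
    """Pick a running speed from recent samples of tan(optical elevation of the ball).

    tan_history: the last few samples, oldest first.
    Positive return value = run backward, negative = run in.
    """
    if len(tan_history) < 3:
        return 0.0                      # not enough samples to estimate acceleration
    # Finite-difference estimate of the optical acceleration.
    a = (tan_history[-1] - 2 * tan_history[-2] + tan_history[-3]) / dt ** 2
    if a > tol:
        return run_speed                # image accelerating upward: ball will land behind you
    if a < -tol:
        return -run_speed               # image decelerating: ball will drop in front, run in
    return 0.0                          # image rising steadily: you're on an interception course
```

The point being that "keep the image moving steadily" needs no ballistics, but it still takes a representation (the optical angle) fed through a rule, which is exactly the objection several commenters raise below.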
Hell, this even spills over into our art and our stories--when we imagine a brain that is "better," more functional, we imagine something with fewer emotions and more computational skill. Yes? Think Spock, or Data, or any of the other bloodless avatars for intelligence that appear in our media. Look at what happens when humans imagine a mind that is "smarter" than average; we envision that mind as being highly suited to mathematics and computational logic, and being completely devoid of emotional connection.
But that's not how a human mind learns. All our imaginings are inaccurate, and that bleeds into our concepts of what intelligence even is and who is likely to contain it. It ruins our ability to correctly identify efficient systems and it introduces bias into our calculations, because we expect ourselves to think like a computer does and that isn't reality. You know what happens when you pull emotion out of a human brain? You lose the ability to make any decisions at all. We learn by making associations and forming heuristics, which are inherently a little fuzzy in order to make them flexible; we prioritize our memories by tagging them with emotional states. Hell, we know that emotional salience is a huge part of the easiness of learning new information. Insofar as computational programming does that, it's because programmers pull from what we know about how brains work, not the other way around.
Or hell, take the study of behavioral neuroscience; people will assume that you can treat a brain like a computational program with a variety of independent modules, and that evolution can select for a perfect formulation of each module independently of all the others, and that this won't affect the remaining modules. (That, even aside from the sexism, is one of my biggest, angriest quarrels with evo psych. They don't fucking understand any of the fields they're trying to tie together.) Or look at how people think about brain regions like modules that do only one thing, even when we know that they integrate information about multiple functions and we know that the functions we're thinking about are much more complicated than they originally seem.
(I mean, for fuck's sake, the hippocampus does indeed store memory, but it also is heavily involved in spatial navigation and spatial memory, and it's not unreasonable to think that one of the reasons it's so important to memory formation is because without it we lose the ability to organize our episodic memories of particular states into some kind of coherent whole. And the hippocampus is not involved in all types of memory; skill-memory is totally different from episodic memory. You can see exactly this kind of oversimplified modular mindset happening in this fucking thread. Memory is not memory is not memory.)
posted by sciatrix at 9:31 AM on May 19, 2016 [41 favorites]
The central point here - that using a von Neumann computer as a metaphor for the brain is very misleading - is absolutely correct. It's just that he uses vocabulary related to computation and information seemingly oblivious to the fact that these concepts are studied outside - were studied before - the existence of digital computers.
posted by atoxyl at 9:31 AM on May 19, 2016 [12 favorites]
Robert Epstein clearly does not understand the first thing about computationalist theories of neural function. logicpunk has it exactly right; he's attacking some bizarre straw notion of computationalism that has little to no resemblance to how the concept is actually used in neuroscience. (Though I am happy to acknowledge that the absurd version that he's attacking is in fact held by people like Ray Kurzweil, but not by any practicing neuroscientists I know.)
First point, not all computation is digital computation. I don't think the brain is a digital computer, I think it's an analog computer. Not all neuroscientists who subscribe to computationalism agree on this point.
Second point, Epstein repeatedly attacks the "metaphor" of information processing. It is not a metaphor, it is a description. Information has an actual, physical definition, and we can describe the kinds of things that the nervous system (and other biological systems) does in terms of how information is transformed. An extremely important thing that we need biology for to complete our understanding of information processing is function, which is not something that physics gives us.
Epstein: Senses, reflexes and learning mechanisms – this is what we start with, and it is quite a lot, when you think about it. If we lacked any of these capabilities at birth, we would probably have trouble surviving.
But here is what we are not born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers – design elements that allow digital computers to behave somewhat intelligently. Not only are we not born with such things, we also don’t develop them – ever.
But senses, reflexes, and learning mechanisms have rules, representations, algorithms, models, etc. These statements are self-contradictory. For example, there is evidence that a very wide range of animals, including humans, have learning mechanisms which are characterized by something like reinforcement learning, and that specific neurobiological processes, such as dopamine release in humans and other vertebrates, correspond closely to specific representational elements in reinforcement learning algorithms, such as "reward prediction error". RL is an algorithm that characterizes the learning systems we are born with; RPE is a representation that our nervous systems extract from the various sensory stimuli we encounter in tandem with our own internal states.
Epstein: The IP perspective requires the player to formulate an estimate of various initial conditions of the ball’s flight – the force of the impact, the angle of the trajectory, that kind of thing – then to create and analyse an internal model of the path along which the ball will likely move, then to use that model to guide and adjust motor movements continuously in time in order to intercept the ball.
This is ridiculous. No computationalist I know would suggest that this is how ballplayers catch balls.
Epstein: That is all well and good if we functioned as computers do, but McBeath and his colleagues gave a simpler account: to catch the ball, the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery (technically, in a ‘linear optical trajectory’). This might sound complicated, but it is actually incredibly simple, and completely free of computations, representations and algorithms.
THIS IS AN ALGORITHM. The "linear optical trajectory" is a representation. The behavior of the ballplayer is computed. Only the grossest misunderstanding of these terms could lead him to say this.
I could go on and on, but I have a grant application to write. In which I am proposing specific experiments to understand computational principles in the brain.
posted by biogeo at 9:33 AM on May 19, 2016 [68 favorites]
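For readers who haven't met "reward prediction error," here is a minimal temporal-difference learning sketch of the quantity biogeo is referring to. The two-state task, learning rate, and discount factor are made up for illustration; this is not a model of any particular dopamine experiment.

```python
# Minimal temporal-difference learner: delta (the reward prediction error) is
# the term that phasic dopamine responses are often described as tracking.

values = {"cue": 0.0, "outcome": 0.0, "end": 0.0}   # learned value estimates V(s)
ALPHA, GAMMA = 0.1, 0.9                             # learning rate, discount factor

def td_update(state, next_state, reward):
    """Compute the reward prediction error for one transition and learn from it."""
    delta = reward + GAMMA * values[next_state] - values[state]   # RPE
    values[state] += ALPHA * delta
    return delta

# A cue that reliably predicts a reward one step later.
for trial in range(200):
    td_update("cue", "outcome", reward=0.0)   # cue appears, nothing delivered yet
    td_update("outcome", "end", reward=1.0)   # reward arrives at the outcome

print(values)   # V("outcome") -> ~1.0, V("cue") -> ~0.9: the "surprise" migrates to the cue
```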
otoh, your brain is a neural net.
posted by Apocryphon at 9:34 AM on May 19, 2016 [2 favorites]
All this talk about uploading consciousness drives me up the wall, because it neglects to mention that WE DON'T UNDERSTAND SENTIENCE
posted by Existential Dread
hey it's one of them thar eponysterical posts
posted by the phlegmatic king at 9:37 AM on May 19, 2016 [3 favorites]
this post is offensive to robot-americans
posted by poffin boffin at 9:37 AM on May 19, 2016 [10 favorites]
Seriously, the dollar bill demonstration is the "but it's snowing" argument against climate change. Christ on a cracker.
posted by uncleozzy at 9:38 AM on May 19, 2016 [13 favorites]
But I do not think any person who ever suggested a metaphor for how the mind works ever pretended that we have a CPU and a huge wad of tiny hard drives in the noggin.
But information is retained in the grey matter, something akin to pattern matching does occur, somehow 327 * 5194 is calculated (with physical tool integration). But yeah, every time I read a new amazing tidbit of brain science, it's amazing and obvious that there is a long long way to go.
Could the key element of what is occurring in the electro-chemical system between the ears be extracted and reproduced as a simulation in some beyond-super-computer in the future? Why not? There is no demonstrated supernatural soul element that is needed to explain "thinking"; we don't yet know how to take a tiny flat chip and pack it into a 3D form and power and cool and program it, but that will happen. Right now it's possible from this laptop to spin up a petabyte of storage on Amazon's AWS platform (very expensive) with servers with 2TB of memory. That's huge now, but it'll be a great-grandchild's phone/toy. There are limits, and unlimited routes around the limits.
I for one plan on getting uploaded!
posted by sammyo at 9:39 AM on May 19, 2016
This is a fascinating piece. Pointing out how the computer metaphor is essentially a thought stopping metaphor is extremely interesting.
This is exactly why I find this article so annoying. Aside from the fact that, as I said before, computationalism isn't a metaphor, it's a description (one which could be wrong), the idea that computationalism is somehow "thought stopping" is just dead wrong. I came late to computationalism; my perspective on neuroscience is fundamentally that of a biologist. Computationalism is useful to me because it helps me understand and make predictions about biological systems. Computationalism isn't thought stopping, it's thought provoking.
posted by biogeo at 9:44 AM on May 19, 2016 [7 favorites]
an ambitious multidisciplinary effort to understand human intelligence gradually developed, firmly rooted in the idea that humans are, like computers, information processors. This effort now involves thousands of researchers, consumes billions of dollars in funding, and has generated a vast literature consisting of both technical and mainstream articles and books. Ray Kurzweil’s book How to Create a Mind: The Secret of Human Thought Revealed (2013), exemplifies this perspective
Does it? Does it now. A book by Ray Kurzweil exemplifies the work of thousands of researchers and billions of dollars in funding.
the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery (technically, in a ‘linear optical trajectory’). This might sound complicated, but it is actually incredibly simple, and completely free of computations, representations and algorithms.
I see. Maintaining a linear optical trajectory is devoid of algorithms. Yes. In fact, it's hard to even imagine a computer doing something like maintaining a linear optical trajectory! Try saying that in a robot voice and see how far it gets you!
even if we had the ability to take a snapshot of all of the brain’s 86 billion neurons and then to simulate the state of those neurons in a computer, that vast pattern would mean nothing outside the body of the brain that produced it.
And if you're simulating 86 billion neurons somehow, surely there's no way you could possibly spare the computational power to simulate the physics of a human body.
posted by IjonTichy at 9:46 AM on May 19, 2016 [10 favorites]
"Move so that the ball is in a constant visual relationship with respect to home plate and the surrounding scenery" is an algorithm can be modeled by an algorithm. Algorithms are mathematical representations of physical reality, not physical reality themselves. Their use is fundamentally dependent on their accuracy. Connecting a photosensor to an ADC which then hooks into a processor running an algorithm to recognize ball trajectory which then hooks into another algorithm which activates servomotors to move a robot into position to catch the ball may appear to be indistinguishable from what a human does to catch the ball, but that does not imply that the human is running an algorithm in the same sense.
Computation =/= cognition, but I acknowledge that computationalism might be a useful way of modeling cognition, so that we might try to understand it better.
posted by Existential Dread at 9:47 AM on May 19, 2016 [4 favorites]
I was reading this article thinking "Man, has this author heard of behavior analysis?" and then I hit this bit:
As we navigate through the world, we are changed by a variety of experiences. Of special note are experiences of three types: (1) we observe what is happening around us (other people behaving, sounds of music, instructions directed at us, words on pages, images on screens); (2) we are exposed to the pairing of unimportant stimuli (such as sirens) with important stimuli (such as the appearance of police cars); (3) we are punished or rewarded for behaving in certain ways.
I laughed out loud. I had skipped the byline. Robert Epstein is a huge name in Behavior Analysis and was a colleague of B. F. Skinner's going waaaaaay back. He still maintains some of the classic research reporting on his youtube account, which I enjoy returning to from time to time.
Basic behavior analytic principles are good to refresh, because they are so simple and demonstrable, and yet the denial of those principles is the water that we swim in. Although I studied behavior analysis for an advanced degree, I feel like the winds of culture are pushing me further and further out to sea.
posted by rebent at 9:48 AM on May 19, 2016 [6 favorites]
It's worth mentioning here that my background is heavily interdisciplinary work, which has a lot of strengths but in practice means that a lot of people--including neuroscientists, insofar as they work on brains and how neural connections work--are working with basically a smattering of a bunch of different fields and trying to integrate them into one thing. (That's why they call it integrative biology!)
So it's also very possible that my delight with this article is coming from a place where I don't expect the biologists using the metaphor to necessarily have a particular background in the history of computer programming. I don't expect biologists studying brains to necessarily understand much more about the intricacies of cognitive neuroscience than are relevant and directly necessary to the questions they are interested in asking, which tend to focus around the evolution of particular behaviors in nonhuman animals... which, I am very well aware animal cognition is a thing, but it's also a fairly abstracted thing with minimal crossover between the fields where I spend most of my time.* And there are massive flaws in that perspective, too; refer back to my point about model systems earlier.
And my point here is, these biologists still use this metaphor as a way of guiding their thinking about how brains can evolve and change over time, and it still leads them down roads of reasoning about the world which aren't true and which muddy the science. This metaphor is dangerous precisely because the word "computer" means, to most people, digital computers--and most of the people using those metaphors are going to have a fairly simplistic view of how computers actually work. This is not actually a problem only for the lay public; it's a problem for professional biologists directly studying brains as well, and in general for anyone who is interested in how brains work but who is not particularly well informed about certain fields of programming.
*thank god, because cognitive psych is not in any way my bag. did animal cognition in undergrad for a while, really glad other people are doing it now so I don't have to.
posted by sciatrix at 9:49 AM on May 19, 2016 [15 favorites]
sciatrix: That, even aside from the sexism, is one of my biggest, angriest quarrels with evo psych. They don't fucking understand any of the fields they're trying to tie together.
I know it's a bit off-topic, but I can't resist the opportunity to say, hear, hear. I would like to inject a little defense of behavioral neuroscience, though; many behavioral neuroscientists avoid the trap of hypermodularity, though some are as guilty as the evo-psych crowd.
posted by biogeo at 9:50 AM on May 19, 2016 [3 favorites]
It seems that people want to understand themselves in terms of the dominant technology. So you once had La Mettrie and Man a Machine, and now you have this dubious analogy with computing devices.
posted by thelonius at 9:52 AM on May 19, 2016 [3 favorites]
This article presupposes that people who use the computer-as-brain metaphor have a real grasp on how computers work, which is patently false.
If however, this is meant to be an argument against the possibility of artificial intelligence, it's a crap one.
posted by lumpenprole at 9:54 AM on May 19, 2016 [3 favorites]
Don't Turing Machines have infinite storage capacity? I sort of doubt the brain does. QE fuckin' D.
posted by thelonius at 9:55 AM on May 19, 2016 [5 favorites]
Is it gauche to post a link to one of my own papers? sciatrix, I'd be really interested to hear what someone with your background and perspective would say about my paper on the neuroethology of decision making.
posted by biogeo at 9:56 AM on May 19, 2016 [3 favorites]
Layman chiming in: from my psychoactive adventures I wouldn't necessarily say the mind is a computer, but it sure as hell acts like one at times.
posted by Valued Customer at 10:04 AM on May 19, 2016
I deeply and genuinely love this piece, because it dovetails so nicely with a lot of the problems that I have as a biologist about how people--including many scientists!--perceive the brain and how it works.
It might help to point out that computers aren't entirely logical and perfect beasts. Not only are they only as good as the programmer who programs them, but also as good as the chip designer, fabricator and market economy allow:
"Unlike static RAM, which keeps its data intact as long as power is applied to it, DRAM loses information, because the electric charge used to record the bits in its memory cells slowly leaks away. So circuitry must refresh the charge state of each of these memory cells many times each second—hence the appellation “dynamic.”"
If not revisited regularly, DRAM-stored memories will slowly fade away. We build in additional circuitry to constantly restore state.
Obviously the brain is not wired identically to a computer, is wildly disorganized in comparison, and even the fundamental components are dramatically different. But I'm having a hard time reconciling this essay with the TED talk this week that included a scientist who claimed to find a region in this one lady's brain active when and only when watching The Simpsons.
posted by pwnguin at 10:05 AM on May 19, 2016 [1 favorite]
We're not math. We don't work like math.
Discrete math, sure, but that's not the only kind of math. There are at least two other models we could use beyond digital turing-type representations: analog networks (e.g. resistive/capacitive/inductive) and quantum mechanical (e.g. particle/chemical) ones. Human brains, to my mind, are likely some combination of those, as our heads are full of electrical and chemical and photonic signal pathways.
Math too, is just a model anyway, not the thing itself.
posted by bonehead at 10:05 AM on May 19, 2016 [3 favorites]
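As a concrete example of the "analog network" alternative bonehead mentions, here is a toy simulation of a resistor-capacitor circuit, a physical system whose output voltage is, in effect, a leaky integral (low-pass filter) of its input, with no digital steps involved. The component values and the input signal are arbitrary choices for illustration.

```python
import math

R, C = 1_000.0, 1e-6        # 1 kOhm and 1 uF, so the time constant RC = 1 ms
DT = 1e-5                   # simulation step of 10 us (much smaller than RC)
v_out = 0.0

for i in range(2_001):      # simulate 20 ms
    t = i * DT
    # Input: a 100 Hz square wave switching between 0 V and 1 V.
    v_in = 1.0 if math.sin(2 * math.pi * 100 * t) > 0 else 0.0
    # The physics of the circuit: dV_out/dt = (V_in - V_out) / (R * C).
    v_out += DT * (v_in - v_out) / (R * C)
    if i % 250 == 0:        # print a sample every 2.5 ms
        print(f"t = {t*1000:4.1f} ms   in = {v_in:.0f} V   out = {v_out:.2f} V")
```

Whether the capacitor is "computing" the filtered signal or just being a capacitor is more or less the question the thread keeps circling.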
You can tell me my brain is not a computer, and yet I have been studying Yiddish for almost five months. I have been using smart flash cards to cram as many words into my brain as I can. My brain struggles to remember, and fails.
And then, every so often, on its own, it just starts running word lists. I'll just be sitting there and a cascade of the words I have been learning will scroll through my head. And then, all at once, I know all the words. Without effort. They're just there.
I've never experienced this before, but must assume that my brain computer is running some sort of program because ikh ken nisht farshtein ("I can't understand") and now it's happening again
posted by maxsparber at 10:06 AM on May 19, 2016 [2 favorites]
a thousand years of neuroscience will never locate a representation of a dollar bill stored inside the human brain for the simple reason that it is not there to be found.
That's an interesting statement since we're making strides toward locating words and concepts in human brains. And it turns out we have techniques for training neural nets to cluster words and concepts that results in an eerily similar sort of mapping. Just because it's not stored in a convenient form for researchers with a given set of tools doesn't mean it isn't stored.
I haven't been following the Human Brain Project, but SciAm reports that it's less an indictment of Markram and his ideas than it is of decision-making at the European Commission. If so, that's a cheap shot.
posted by RobotVoodooPower at 10:07 AM on May 19, 2016 [4 favorites]
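A stripped-down illustration of the word/concept clustering RobotVoodooPower mentions. Real systems learn their vectors from data (word2vec, GloVe, and the like); the tiny hand-made vectors below are fake and exist only to show how "similar meaning" turns into "nearby in vector space."

```python
import math

# Pretend embeddings: each word is a point in a 3-dimensional space. Trained
# models use hundreds of dimensions learned from text, not hand-picked numbers.
embeddings = {
    "dollar":   [0.90, 0.80, 0.10],
    "currency": [0.80, 0.90, 0.20],
    "money":    [0.85, 0.85, 0.15],
    "hippo":    [0.10, 0.20, 0.90],
}

def cosine(a, b):
    """Cosine similarity: 1.0 for parallel vectors, near 0 for unrelated ones."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def neighbours(word):
    """Every other word, ranked by similarity to `word`."""
    return sorted(
        ((other, round(cosine(embeddings[word], vec), 3))
         for other, vec in embeddings.items() if other != word),
        key=lambda pair: pair[1], reverse=True,
    )

print(neighbours("dollar"))   # "money" and "currency" rank far above "hippo"
```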
If not, show me where the magical woo is.
Show me the complete physical description of how the brain works, one might counter. Saying "we don't know" is not the same as "magical woo".
posted by thelonius at 10:08 AM on May 19, 2016 [12 favorites]
23skidoo: "Zuh? Clearly there's some representation of a dollar bill stored in a memory register in a brain - it's just that the representation is imperfect and only includes metadata like "includes a picture of a dead president" and "has In God We Trust on there somewhere"."
Maybe a useful distinction is that it does not store a representation of a dollar bill, but rather a method to categorize whether or not something is a dollar bill. To use that to create a representation of a dollar bill, you kind of have to run the process in reverse, and people differ greatly in their ability to do so. The closest computer analogy would be when Google was using their image-recognition neural network to generate images.
posted by RobotHero at 10:09 AM on May 19, 2016 [2 favorites]
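A sketch of what "running the process in reverse" looks like in the machine-learning setting RobotHero alludes to: keep the classifier's weights fixed and do gradient ascent on the input until the chosen class's score rises (the mechanism behind Google's DeepDream-style images). The little network here is untrained and the "dollar" class is pretend, so nothing recognizable comes out; it only shows the mechanics.

```python
import torch
import torch.nn as nn

# An untrained toy classifier over 28x28 grayscale images with two pretend
# classes: [not-a-dollar, dollar].
classifier = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 64),
    nn.ReLU(),
    nn.Linear(64, 2),
)

# Start from a blank "canvas" and treat the *image* as the thing being learned.
image = torch.zeros(1, 1, 28, 28, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(100):
    optimizer.zero_grad()
    score = classifier(image)[0, 1]    # how "dollar-like" the canvas currently looks
    (-score).backward()                # minimizing the negative = gradient ascent on the score
    optimizer.step()                   # nudge the pixels, not the network's weights

print(f"final 'dollar' score: {classifier(image)[0, 1].item():.2f}")
```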
We are physics. Physics is math. We are math.
Physics and math are tools that we use to model, represent, and predict reality. They are not reality in and of themselves.
posted by Existential Dread at 10:10 AM on May 19, 2016 [19 favorites]
sciatrix, I think you raise good points about the cultural aspects of this. There's a lot of pop understanding of computers and computation and brains that is variously confused and misleading. This article might serve as a corrective to some of that. On the other hand, it seems to introduce a lot of its own confusion to the topic. I think the underlying cause in both cases might just be a weirdly constrained cultural model of what computation itself is.
posted by brennen at 10:11 AM on May 19, 2016 [4 favorites]
sciatrix, I'd be really interested to hear what someone with your background and perspective would say about my paper on the neuroethology of decision making.
I actually have to run to campus to set up some (non Mus) mice for an experiment, but I'll read it when I get back and make a comment then!
posted by sciatrix at 10:13 AM on May 19, 2016 [1 favorite]
We are physics. Physics is math. We are math.
This is kind of a strawman. The author is not denying "physics" (by which you actually mean biochemistry). All the author is saying is that our brains don't use a process of storing and recalling memory that's identical or easily-mappable to the digital algorithms of a Turing machine.
This is not the argument for mysticism implied by your statement. It is saying one particular model, that used in digital technologies, is not a good metaphor for the way humans process information.
posted by bonehead at 10:16 AM on May 19, 2016 [7 favorites]
Where we're going we don't need brains!
[insert Huey Lewis music here]
posted by blue_beetle at 10:18 AM on May 19, 2016 [1 favorite]
On the Internet, nobody knows you're an algorithm.
posted by It's Raining Florence Henderson at 10:18 AM on May 19, 2016 [6 favorites]
The behavior of matter is described by physics. The language of physics is math. Either we are fully described by math or by some kind of magical woo that just happens to look like math from the angles that we look at it. Find me some woo, and I'll abandon my claim that we are math.
I agree with your first two sentences; matter IS described by physics, to the best of our current knowledge, and the language of physics IS math. The key point is "described by." However, I won't say we are fully described by physics, math, biochemistry, or any other discipline, because we have an incomplete understanding of how the brain works.
posted by Existential Dread at 10:20 AM on May 19, 2016 [4 favorites]
it will certainly be cast aside at some point – either replaced by another metaphor or, in the end, replaced by actual knowledge
Nobody tell this guy the definition of metaphor...or knowledge...
posted by nzero at 10:28 AM on May 19, 2016 [1 favorite]
"...a thousand years of neuroscience will never locate a representation of a dollar bill stored inside the human brain for the simple reason that it is not there to be found."
I guess he never saw the video of what a cat sees reconstructed using taps into the cat's visual cortex. (previously on MetaFilter)
posted by Hairy Lobster at 10:29 AM on May 19, 2016 [6 favorites]
The three choices I mentioned before:
1. The brain is less powerful than a Turing Machine:...
Serious question....it's been a long time since school....how is "more powerful" defined? Simply being able to solve more classes of problems? In this sense a Turing machine is more powerful than a finite state machine, for example, since it can solve things that a FSM cannot (iirc), and also do everything that the FSM can.
Is that right?
posted by thelonius at 10:32 AM on May 19, 2016
The three choices
So, how does, for example, a quantum computer fit in that schema? Are those just the three choices? Are you certain that you're not excluding a middle? The author is saying that the human brain is provably different from a TM. Better or worse doesn't figure here, just different.
posted by bonehead at 10:32 AM on May 19, 2016
There are big problems with eliding the differences between the kind of retrieval a human brain does and the kind a computer does. Brains are incapable of retrieving memories without changing them - the changes can be subtle, often inconsequential, but they are real. The memory-as-hardware metaphor (which of course predates computers - we once compared memory to a book, after all) can even have harmful consequences (when evaluating the testimony of a witness or suspect in a court of law, for example). We are very good at creating memories of things that never happened, and there are reliable tricks for inducing this effect.
posted by idiopath at 10:45 AM on May 19, 2016 [5 favorites]
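For what it's worth, here is a toy Python sketch of that retrieval-changes-the-memory point; it illustrates the contrast with a computer read and is not a model of any actual neural mechanism (the drift parameter is made up):

import random

class ReconstructiveMemory:
    """A 'memory' that is re-encoded, slightly altered, every time it is recalled."""
    def __init__(self, value, drift=0.05):
        self._trace = float(value)
        self._drift = drift

    def recall(self):
        # unlike reading a register, each retrieval perturbs the stored trace
        self._trace += random.gauss(0.0, self._drift)
        return self._trace

m = ReconstructiveMemory(10.0)
# repeated recalls wander away from the original value, e.g.:
# [round(m.recall(), 3) for _ in range(3)] -> [10.031, 9.987, 10.042]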
Fortunately, because the IP metaphor is not even slightly valid, we will never have to worry about a human mind going amok in cyberspace... Has this guy never heard of trolling, etc.?
posted by Bella Donna at 10:49 AM on May 19, 2016 [1 favorite]
a thousand years of neuroscience will never locate a representation of a dollar bill stored inside the human brain for the simple reason that it is not there to be found.
If I went back to 1977 and handed George Lucas a Blu-Ray Star Wars, and told him "there's a picture of the Death Star on here" he wouldn't be able to find that either. Just because we haven't yet been able to reverse-engineer the architecture of the brain doesn't mean that information isn't stored there somehow.
posted by HighLife at 10:51 AM on May 19, 2016 [11 favorites]
I think the most true thing that I got out of the article is this:
To understand even the basics of how the brain maintains the human intellect, we might need to know not just the current state of all 86 billion neurons and their 100 trillion interconnections, not just the varying strengths with which they are connected, and not just the states of more than 1,000 proteins that exist at each connection point, but how the moment-to-moment activity of the brain contributes to the integrity of the system.
We can model neurons, interconnections, and proteins. Not at the scale necessary to simulate a human brain, but that's a processing power/memory capacity problem.
The "moment-to-moment" activity of the brain might be an inherent feature of the current structure/state of the neurons and interconnections and proteins, or it may be something extrinsic. If it's the former, then whether or not our brains are "computers" is not what's going to stop us from being able to model our consciousness in those computers. But maybe there's something else that's driving the "activity," maybe something with strange and quantum interactions that we don't have the ability to understand yet. And maybe we'll end up hitting some kind of Uncertainty Principle that keeps us from simulating any given actual brain.
So yeah, brains don't work the way that most computers do. But that doesn't mean we couldn't make a computer that acts like a brain.
posted by sparklemotion at 10:55 AM on May 19, 2016 [7 favorites]
"More powerful" is a tricky phrase in this context. A Turing machine is more powerful than a finite state machine because it can compute things that cannot be computed on a FSM. A quantum computer still can't compute anything that a TM can't (eg, the halting problem), so they're not more powerful in that sense. They can be thought of as more powerful in that they can, in principle, compute some things faster. There are theoretical classes of machine that also have this property (like non-deterministic Turing machines, assuming that P!=NP), but they also cannot compute anything that a TM can't. As b1tr0t says, we haven't come up with anything that can compute *more* than a TM. It's a very powerful concept.
posted by rhamphorhynchus at 10:58 AM on May 19, 2016 [3 favorites]
I think the article doesn't go nearly far enough to demonstrate its point, but he's absolutely right that computers are a metaphor for the brain, and not a great one.
But understand that "not a great metaphor" doesn't mean "entirely wrong". Consider some of the earlier metaphors he mentions: the mind is a mechanical device; the body is determined by the flow of liquids. These are both correct! The brain is a machine, just as a steam engine or a clock is; it really is made of bits of matter in a very complicated arrangement. And it's even true that the body runs on the flow of liquids! You've absolutely got fluids and chemicals of various sorts pumping through your brain.
The problem with the machine metaphor isn't that brains aren't machines, it's that the particular instance of machines that people used— clocks, millwork, steam engines— aren't all that helpful in figuring out the brain.
The problem with the computer metaphor isn't that brains aren't (very broadly defined) computers— it's that the metaphor invites too close a comparison with actual digital computers, and makes people think that with just a couple semesters of work they'll get a brain running on their Linux box.
posted by zompist at 11:05 AM on May 19, 2016 [7 favorites]
Serious question....it's been a long time since school....how is "more powerful" defined?
More powerful means it can solve a larger class of problems. A computer more powerful than a Turing machine is a hypercomputer. That means it can do things like solve the Halting Problem, etc. It's not clear if such a computer could be physical. Some approximations might be possible. But, it seems kind of unlikely.
posted by delicious-luncheon at 11:06 AM on May 19, 2016 [1 favorite]
I'm entirely sympathetic to his general point, but I don't think his arguments are very good, and he doesn't seem very well informed about the things he's talking about.
posted by edheil at 11:11 AM on May 19, 2016 [3 favorites]
>All this talk about uploading consciousness drives me up the wall, because it neglects to mention that WE DON'T UNDERSTAND SENTIENCE
No, you don't understand--I'm gonna "upload my mind into a computer," just as soon as I finish installing Windows 98 on my cow.
posted by Sing Or Swim at 11:11 AM on May 19, 2016 [3 favorites]
I was just going to say I think b1tr0t is mixing up definitions of "powerful" a little bit. Can a quantum TM compute things that a "regular" TM cannot? What I remember is no, but I don't know much about quantum computing and I don't know whether that's established. A quantum TM can provably compute certain problems *with more favorable complexity* than a deterministic TM can (i.e. your computer can), but not to the extent of a (theoretical) nondeterministic TM, etc.
posted by atoxyl at 11:15 AM on May 19, 2016
The dollar bill example, even if it's amazingly ridiculous given that the subject drew the most basic version of a bill with no ornamentation whatsoever, is a good example of a misunderstanding of how computers work and of the intricacies of programming them.
If you asked a computer that had "seen" a dollar bill what a dollar bill looks like, it could possibly spit out a high resolution image of a dollar bill. If it had vector art stored, it could show you a dollar bill at any level of detail, perfectly, assuming the images on it are what you want and not the nuances of a particular dollar bill's fabric or wear pattern.
So the article assumes a computer will give you an exact version of what one particular dollar bill looks like. But this isn't how image analysis works. An image analysis algorithm might store the shapes, patterns, and colors but would not be able to spit out an exact image of a dollar bill, unless it has an internal representation available for comparison.
Going further, storing these details for quick analysis is very intensive when it comes to storage. Without getting into the specifics of data storage, and glossing over my own ignorance on image recognition, let's say that a computer doesn't have the ability to recreate a dollar bill, much like a person without a photographic memory could not. It takes a handful of fragments -- the shape, a rough idea of the pictures on a bill and its geometric shapes -- and creates a hash, some kind of shortcut. It sees an image of what looks like a president, the numbers in the corners, the round shapes, and says "yeah, this is a dollar bill." It's a one-way function -- you could go back to the blueprint of what defines the look of a dollar bill in the computer's storage, but in the process of recognition, the computer doesn't do this. It takes a measure of snippets, throws them together, and says "these together mean it's a dollar bill."
Now, isn't that pretty much what a human does? We don't know the exact methodology of how the human mind functions, but we've created algorithms that are very close to how we imagine the mind analyzes things.
posted by mikeh at 11:16 AM on May 19, 2016
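A rough Python sketch of that "handful of fragments, one-way shortcut" idea; the grid size, the tolerance, and the notion of a stored "dollar-ish" template are all made up for illustration, and this is nothing like a production recognizer:

def coarse_features(pixels, grid=4):
    """Reduce a 2D list of grayscale values (0-255) to grid*grid mean brightnesses.
    Assumes the image is at least grid x grid pixels."""
    h, w = len(pixels), len(pixels[0])
    cells = []
    for gy in range(grid):
        for gx in range(grid):
            block = [pixels[y][x]
                     for y in range(gy * h // grid, (gy + 1) * h // grid)
                     for x in range(gx * w // grid, (gx + 1) * w // grid)]
            cells.append(sum(block) / len(block))
    return cells

def looks_like_dollar(pixels, template, tolerance=20.0):
    # compare the fragments to a stored template; you can't run this
    # backwards to regenerate the original image from the fragments
    feats = coarse_features(pixels)
    return sum(abs(a - b) for a, b in zip(feats, template)) / len(feats) < tolerance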
but we've created algorithms that are very close to how we imagine the mind analyzes things.
Well the counterargument would be - do we know that's how the mind analyzes things? How well do we know that? You may know more about this than I do but I do think it's fair to say that assuming certain analogies may not be a good idea. I just also get why CS and information science and computational neuro people will jump on this because he's throwing around his idiosyncratic definitions of what it means to "process information" willy nilly.
posted by atoxyl at 11:22 AM on May 19, 2016
I hope I'm not repeating anyone else's sentiment, but this reminds me of my consternation in a high school programming class. I'd asked the natural question you'd hope a computer science class would answer: How does a computer "think?" It being an intro class with Visual Basic, the teacher said a few things about logic gates but said it was outside the depth of the course and that we should stick to the abstractions in VB. But those abstractions weren't satisfying.
Of course, now that I've read Code by Charles Petzold and taken computer architecture courses, I get it. But while I can find out how a CPU is a series of logic gates that follow simple logic to choose and execute instructions, I can't figure out how neurons make a mind. That's because humans invented the computer, and were able to reject bad abstractions other people came up with. So many Computers for Dummies books start with a section explaining that computers cannot think for themselves. But nobody can definitively tell us a brain abstraction is bad until we actually figure out how the brain works.
The fear in the article is that we're letting our computer abstractions become our model of the human brain, which seems like a reasonable fear outside the AI community. But in AI, the trend has been massively parallel neural networks, which are a fairly close model to how we understand brains today.
The most interesting new (to me) model I've seen is hypervectors, which are a way of representing knowledge with linear algebra concepts over vectors with tons of dimensions. It sounds like technobabble, but it works well with the way human brains associate concepts and the way that there isn't anything like a memory-bit to neuron analogy.
(Also, I think the brain is like a raven or a writing desk, but never both at the same time)
posted by mccarty.tim at 11:23 AM on May 19, 2016 [1 favorite]
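For anyone curious, here is a tiny Python sketch of the hypervector idea (hyperdimensional computing); the dimension, the role/filler names, and the use of ±1 vectors are just one common illustrative setup:

import random

D = 10_000  # very high dimension is what makes random vectors nearly orthogonal

def hdv():
    """A random bipolar hypervector."""
    return [random.choice((-1, 1)) for _ in range(D)]

def bind(a, b):
    """Associate two hypervectors (elementwise multiply); binding with the same vector twice undoes it."""
    return [x * y for x, y in zip(a, b)]

def similarity(a, b):
    """~0 for unrelated random vectors, 1.0 for identical ones."""
    return sum(x * y for x, y in zip(a, b)) / D

color, red = hdv(), hdv()
tomato_color = bind(color, red)   # "red" filed under the role "color"
# similarity(bind(tomato_color, color), red) -> 1.0 (unbinding recovers "red")
# similarity(red, hdv())                     -> close to 0 (random vectors barely overlap)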
Awful article, awesome Mefi comment thread.
posted by fistynuts at 11:24 AM on May 19, 2016 [12 favorites]
to catch the ball, the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery
Can't good players see a ball get hit and then run to where it's going to be without keeping it in their visual field?
posted by little onion at 11:30 AM on May 19, 2016 [3 favorites]
All I'm trying to say is you're playing fast and loose with two, maybe three definitions of "powerful:"
- can it solve the same problems?
- can it solve the same problems with the same computational complexity?
- can it run the same algorithms with the same theoretical complexity faster in practice?
posted by atoxyl at 11:33 AM on May 19, 2016
b1tr0t, personally I'm inclined to think that the idea of comparing brains to Turing machines isn't all that useful beyond sort of a general framework for thinking about first-order (or zeroth-order) principles. You describe the TM as the most powerful computing system we've yet developed. I think by "powerful" you're referring to its universality? As I understand it, the thing that makes the TM so useful as a concept is precisely that it is universal, yet also extraordinarily simple; all you need is a small number of rules and symbols, and an arbitrarily large amount of tape. So any (digital) algorithm can be implemented by a TM, given enough time. In some sense, it's trivial to note that the human brain is as general as a TM, in that you could give a human a set of rules and symbols, and an arbitrarily large amount of tape, and they could compute any algorithm. But this isn't really capturing any of the interesting features of human cognition/behavior. What matters is the actual algorithms and actual implementations by which the brain solves the computational problems that are presented by the problems of living. The TM framework doesn't really give us much guidance in understanding that, as far as I've ever seen.
posted by biogeo at 11:37 AM on May 19, 2016 [1 favorite]
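To illustrate that "small set of rules plus an arbitrarily long tape" point, here is a minimal Python sketch of a Turing machine that increments a binary number; the rule encoding and the blank symbol "_" are just illustrative choices:

def run_tm(tape, rules, state="start", head=0, max_steps=10_000):
    tape = dict(enumerate(tape))              # sparse tape; unwritten cells read as blank
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, "_") for i in range(lo, hi + 1))

# Rules for binary increment: scan right to the end, then carry 1s to 0s moving left.
INC = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "done"),
    ("carry", "_"): ("1", "L", "done"),
    ("done",  "0"): ("0", "L", "halt"),
    ("done",  "1"): ("1", "L", "halt"),
    ("done",  "_"): ("_", "R", "halt"),
}
# run_tm("1011", INC) -> "1100_"  (binary 11 + 1 = 12; the trailing "_" is leftover blank tape)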
Computer scientists and physicists have interesting things to say on this.
posted by Tuco Benedicto Pacifico Juan Maria Ramirez at 11:37 AM on May 19, 2016
I'm so glad other people had issues with this article as well. I was interested in the premise, but kept waiting for him to tell me how memories DO work if they are not "stored". Instead, it was just a bunch of "your metaphor is WRONGITY WRONG because reasons" followed by "oh and by the way no one knows how memory works lol".
Regardless, without understanding either computers or neurology, it is simply a matter of common sense that if I can recite to you my telephone number or the lyrics to "Don't Stop Believin'", that information must be "stored" in some form or another and then "accessed."
posted by Ben Trismegistus at 11:38 AM on May 19, 2016 [3 favorites]
This is the best counterexample I could find to his dollar bill test. Not that it invalidates his point. A digital computer is a barely adequate metaphor for a subset of the ways the brain functions, and we should always be looking for better models. A neural network made up of analog rather than digital components might be a better model, but would still be missing pieces.
posted by BrotherCaine at 11:41 AM on May 19, 2016 [1 favorite]
But our hypothetical student can look up what is known about the neural-architecture of the brain, construct some data structures that will allow her to simulate those structures on a computer, and then estimate how many years it will take before her research budget allows her to buy sufficient compute hardware to simulate the brain.
This is not what the article is arguing. The author is saying that the discrete store/move/copy/recall mechanism used by a TM is not a good model for the way human brains work. He's not saying that a sufficiently powerful computer cannot do some form of simulation of what a brain actually does, though he does argue that consciousness likely can't even be modeled that simply:
Worse still, even if we had the ability to take a snapshot of all of the brain’s 86 billion neurons and then to simulate the state of those neurons in a computer, that vast pattern would mean nothing outside the body of the brain that produced it. This is perhaps the most egregious way in which the IP metaphor has distorted our thinking about human functioning. Whereas computers do store exact copies of data – copies that can persist unchanged for long periods of time, even if the power has been turned off – the brain maintains our intellect only as long as it remains alive.
He's arguing that we are as much software as hardware, or more, and that possibly the line between those isn't easy to draw.
Here's a simple form of the argument: a computer can't do non-discrete math, like, say, simulating the current in a capacitance-inductance network. Not directly. You cannot, on current digital hardware, have an exact value for that quantity. You can approximate it with the hack that is an IEEE 754 floating-point number, but you can't represent it directly. A digital value is not a good metaphor for an analogue value.
That's the type of argument being made here: a store/copy/retrieve model of information processing, in a similar way, does not adequately describe how a human brain forms memory or processes them.
posted by bonehead at 11:45 AM on May 19, 2016 [6 favorites]
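The floating-point point is easy to see from a Python prompt (any language with IEEE 754 doubles behaves the same way):

>>> 0.1 + 0.2
0.30000000000000004
>>> 0.1 + 0.2 == 0.3
False
>>> # the decimal value 0.1 has no exact binary representation, so the
>>> # hardware stores and computes with a nearby approximation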
An ASIC is just hardware built to execute a specific algorithm as fast as possible. Quantum computing offers algorithms that don't exist in the common deterministic model of computation, and some of those algorithms can solve particular problems with superior time/space complexity characteristics to what is known/thought to be possible with your basic DTM. A hypercomputer would be able to solve problems that TMs/traditional models of computing can't solve at all.
posted by atoxyl at 11:47 AM on May 19, 2016 [1 favorite]
That's the type of argument being made here: a store/copy/retrieve model of information processing, in a similar way, does not adequately describe how a human brain forms memory or processes them.
But why? That's the question the article doesn't answer. How is store/copy/retrieve not what the human brain does? Clearly there is generation loss (storage, copying, and retrieval producing incomplete versions of the original), but doesn't that mean that, to the extent the human brain functions like a computer, it's not a very good one?
posted by Ben Trismegistus at 11:52 AM on May 19, 2016 [2 favorites]
The brain isn't a digital computer.
or The brain isn't an electronic computer.
or The brain isn't a designed computer.
FTFY.
posted by Foosnark at 12:00 PM on May 19, 2016 [3 favorites]
anyone who wants to argue whether the brain is or isn't a computer really needs to take a semester of automata theory and a semester of computational complexity. It has been nearly two decades since I've taken those courses myself. The definitions are extremely formal and precise
after we learn what computers are, maybe we can also figure out what a "machine" is... a heat engine?
behind all of this is the absolute failure of academic CS-based AI to make any advances over the last 40 years in *understanding* just what cognition is. And much neuroscience-based work seems to rest on the doubtful premise that if we can model our measurements of neuronal events, then that model will indicate cognition (of some sort). Which leads to projects like the simulated worm. A sufficiently complicated simulated worm would be indistinguishable from a real one, but would you understand it any better?
posted by ennui.bz at 12:01 PM on May 19, 2016
Accidental Analog Meat Computer is my new band name.
posted by Foosnark at 12:01 PM on May 19, 2016 [3 favorites]
The dollar bill thing is an interesting illustrative example of how much algorithm matters. Your digital camera has a a set of algorithms for faithfully recording, copying, and retrieving specific images of dollar bills (among other things). This is obviously not how human memory works. However, other algorithms, like Google's DeepDream (previously), work very differently, and are in fact inspired in part by neurobiological models of human vision. The weirdnesses that come out of DeepDream are more or less analogous to asking a person to draw a dollar bill from memory: there is no "faithful" representation of a dollar bill used anywhere in the algorithm, but instead a looser "template" representation. DeepDream "knows" what dogs look like, but it can't produce any specific dog faithfully, because it contains no representations of individual dogs (at least ones that are accessible to its output algorithm, I don't know exactly how DeepDream works on the backend).
posted by biogeo at 12:04 PM on May 19, 2016 [6 favorites]
I didn't think this was a well-reasoned essay. But the larger question is whether there are fundamental questions about "brains" that need to be answered, or whether we just have to build something sufficiently complex or large, based on what we already understand.
Most people implicitly believe the latter, for reasons that are even less well-reasoned.
posted by ennui.bz at 12:08 PM on May 19, 2016 [1 favorite]
bonehead, I think you're being pretty generous in your reading of Epstein's position. I don't see anything like a position with the level of nuance you've given in his article. In fact his argument is explicitly that the brain does not use algorithms or representations, which is really more like software than hardware.
posted by biogeo at 12:10 PM on May 19, 2016 [3 favorites]
behind all of this is the absolute failure of academic CS based AI to make any advances over the last 40 years in *understanding* just what cognition is.
Do you know the field well enough to make that claim? I'm certain Epstein doesn't.
posted by straight at 12:12 PM on May 19, 2016 [2 favorites]
"All the author is saying is that our brains don't use a process of storing and recalling memory that's identical or easily-mappable to the digital algorithms of a Turing machine."
You're being generous to the author who says:
posted by I-baLL at 12:14 PM on May 19, 2016 [4 favorites]
You're being generous to the author who says:
Forgive me for this introduction to computing, but I need to be clear: computers really do operate on symbolic representations of the world. They really store and retrieve. They really process. They really have physical memories. They really are guided in everything they do, without exception, by algorithms.Our brain is a physical object. What we do and how we function arises from the electro-chemical processes occurring in the brain and the rest of the nervous system. We are a biological computer. We are not the same thing as the type of computer that I'm typing this on but there are still basic principles that my digital computer and I share. We both have some form of memory. I can recall how to type, the language I'm typing in, the fact that I left this tab open for too long, etc. He doesn't differentiate between definitions. He says that we have no memory but doesn't say what it is that we do have.
Humans, on the other hand, do not – never did, never will. Given this reality, why do so many scientists talk about our mental life as if we were computers?
posted by I-baLL at 12:14 PM on May 19, 2016 [4 favorites]
Brains are more powerful than Turing Machines in the sense that they can take input. Classical Turing Machines start with an empty tape, and can't interact with the world. Your PC is more powerful than a Turing machine in this sense as well.
Do thermometers compute temperature? Or are they "just" physical systems embedded in the world? Could brains be the same sort of thing?
posted by BungaDunga at 12:21 PM on May 19, 2016 [3 favorites]
I was amusing myself with trying to figure out which details my brain had attached to the generic 'US currency" abstraction. So I remember Washington's portrait quite well, down to his slightly constipated expression. The series and the signature, yeah, that's attached, even the color difference of the series. I even remember that there's a bit where the filigree engraving around the edges makes it look like there's a tiny spider sitting on the frame. But wow, I cannot remember the back at all. I know there's an engraving of a scene of some sort, and that it's greener and there's overall less contrast, but that's about it.
(And now that I've looked it up, there's not even a scene, just ONE and the seals. It's entertaining seeing what your brain has decided is useless shit.)
posted by tavella at 12:21 PM on May 19, 2016
Instead:
1. Turing Machines ("computers") are provably the most complex machines humans have developed.
2. 80 years later, we still don't have anything even slightly more computationally powerful than Turing Machines.
3. Therefore, the brain is either less powerful than a Turing Machine (seems unlikely); the brain is a Turing Machine whose architecture we don't yet fully understand (seems increasingly likely, the longer we go without coming up with anything more powerful than a TM); or the brain is something more computationally powerful than a TM (this mostly appeals to people who hate the idea of being "just" a computer).
...
Quantum Computers also fall into those three categories. Current QCs seem to be very special-purpose devices that are in general less powerful than TMs but may be more powerful for specific applications, much like ASICs. We want to believe that QCs will be more powerful than TMs, but as far as I know, that has not yet been proven to be the case.
If we develop a QC that is more powerful than a TM, then we will have four categories: less powerful than a TM, TM-equivalent, QC-equivalent, and more powerful than a QC. There may also be an intermediate step between TM and QC equivalence.
The brain has to fit into some physical category unless you want it to be some sort of antenna-like interface device to the MMORPG in a 17-dimensional universe that we are obviously running around in. But even if you suppose that we are simply avatars in a simulated universe and our brains actually exist in some large enclosing universe, you can still ask whether they are less powerful than, equivalent to, or more powerful than Turing Machines.
Thank you so much for putting these very difficult and confusing matters so clearly and succinctly, b1tr0t.
I think there may well be QCs that are more powerful than TMs, and that the brain may turn out to be one of them:
Computing with time travel?
It turns out that an unopened message can be exceedingly useful. This is true if the experimenter entangles the message with some other system in the laboratory before sending it. Entanglement, a strange effect only possible in the realm of quantum physics, creates correlations between the time-travelling message and the laboratory system. These correlations can fuel a quantum computation.
Around ten years ago researcher Dave Bacon, now at Google, showed that a time-travelling quantum computer could quickly solve a group of problems, known as NP-complete, which mathematicians have lumped together as being hard.
The problem was, Bacon's quantum computer was travelling around 'closed timelike curves'. These are paths through the fabric of spacetime that loop back on themselves. General relativity allows such paths to exist through contortions in spacetime known as wormholes.
Physicists argue something must stop such opportunities arising because it would threaten 'causality' -- in the classic example, someone could travel back in time and kill their grandfather, negating their own existence.
And it's not only family ties that are threatened. Breaking the causal flow of time has consequences for quantum physics too. Over the past two decades, researchers have shown that foundational principles of quantum physics break in the presence of closed timelike curves: you can beat the uncertainty principle, an inherent fuzziness of quantum properties, and the no-cloning theorem, which says quantum states can't be copied.
However, the new work shows that a quantum computer can solve insoluble problems even if it is travelling along "open timelike curves," which don't create causality problems. That's because they don't allow direct interaction with anything in the object's own past: the time travelling particles (or data they contain) never interact with themselves. Nevertheless, the strange quantum properties that permit "impossible" computations are left intact. "We avoid 'classical' paradoxes, like the grandfathers paradox, but you still get all these weird results," says Mile Gu, who led the work.
Gu is at the Centre for Quantum Technologies (CQT) at the National University of Singapore and Tsinghua University in Beijing. His eight other coauthors come from these institutions, the University of Oxford, UK, Australian National University in Canberra, the University of Queensland in St Lucia, Australia, and QKD Corp in Toronto, Canada.
"Whenever we present the idea, people say no way can this have an effect" says Jayne Thompson, a co-author at CQT. But it does: quantum particles sent on a timeloop could gain super computational power, even though the particles never interact with anything in the past. "The reason there is an effect is because some information is stored in the entangling correlations: this is what we're harnessing," Thompson says.
There is a caveat -- not all physicists think that these open timelike curves are any more likely to be realisable in the physical universe than the closed ones. One argument against closed timelike curves is that no-one from the future has ever visited us. That argument, at least, doesn't apply to the open kind, because any messages from the future would be locked.
There is a huge tradition of prophecy and prophetic dreams among human beings, and given these results, it would be possible to argue that those forms of prophecy are (at least) a side effect of the superior entangling timelike loop QC functionality of the human brain.
But a side effect rather than a main effect, and perhaps a deleterious side effect at that in some sense, because true prophecy would seem to require actually opening the message from the future, which would have to somehow negate any causal implications the prophecy might otherwise carry in order to avoid the paradoxes.
It would be a very rich irony indeed if the reason Cassandra could never escape the horrible things she prophesied in her dreams was that she was powerless to do anything about them one way or another because she prophesied them.
posted by jamjam at 12:59 PM on May 19, 2016 [2 favorites]
Brains: They're made of meat.
Now that I'm getting older, I believe mine is stew.
posted by BlueHorse at 1:03 PM on May 19, 2016 [1 favorite]
In fact his argument is explicitly that the brain does not use algorithms or representations, which is really more like software than hardware.
Yeah, that language is more my fault than his. And I think he's right in this; the distinction isn't one that helps us understand how brains function, from what little I know of the subject. Experienced trauma, for example, doesn't just change some software layer, it seems to induce physical changes as well. Talking about software and hardware in brainmeat doesn't help a lot.
posted by bonehead at 1:17 PM on May 19, 2016
DeepDream "knows" what dogs look like, but it can't produce any specific dog faithfully, because it contains no representations of individual dogs (at least ones that are accessible to its output algorithm, I don't know exactly how DeepDream works on the backend).
Actually ... we're getting scarily good at making neural networks spit back out what they've learned, and completely novel transformations of what they've learned. Like, you train it to recognize pictures of bedrooms, and then it can spit back out infinite varieties of bedrooms.
posted by RobotVoodooPower at 1:28 PM on May 19, 2016 [4 favorites]
BungaDunga: Do thermometers compute temperature? Or are they "just" physical systems embedded in the world? Could brains be the same sort of thing?
Here's another really important point. All computation is physical. Your computer performing calculations is an entropy engine, displaying Metafilter to you on the back of the transformation of energy from a low-entropy state (the electric current coming out of your wall socket or the chemical potential stored in your laptop or phone battery) to a high-entropy state (heat).
Personally I wouldn't necessarily say that thermometers compute temperature. But they can represent temperature, which is useful for computations. A mercury thermometer represents temperature with the height of a column of mercury. Add in a simple sensor controlling your air conditioner, and you have a thermostat. Now you have an algorithm: if the column of mercury is above a certain level, power the A/C, otherwise unpower it. The algorithm is a simple comparison-controller operation, which employs representations of the current temperature (implemented as the height of the mercury column) and a set point temperature (implemented as the position of the sensor in the thermometer), and altogether these solve the computational problem of keeping the temperature below a certain level, which has the function of keeping you cool.
Note I am employing a version of the computationalist framework of David Marr, which is not without some flaws (he produced it in the late 70s and we've made some progress since then), but I think gives a very helpful formalism for thinking about these questions.
posted by biogeo at 1:34 PM on May 19, 2016 [4 favorites]
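A minimal Python sketch of that comparison-controller algorithm; the function and parameter names here are purely illustrative:

def thermostat_step(current_temp, set_point):
    """Return True if the A/C should be powered for this time step."""
    # the representation of temperature (a number here, a mercury column in
    # the physical device) is compared against the set point
    return current_temp > set_point

# e.g. thermostat_step(26.5, set_point=24.0) -> True (power the A/C)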
If there is a research group working on cognition from a CS perspective, I'd love to know about it so that I can start filling in the gaps since I finished my studies and apply for their PhD program.
I think that the whole field lost its Einstein when David Marr died so young.
(Woops, posted before seeing Marr already name-checked).
posted by Chitownfats at 1:37 PM on May 19, 2016 [3 favorites]
RobotVoodooPower, that is awesome. But my point is that this is still fundamentally different from what Epstein is claiming the "information processing" perspective requires, in that these bedrooms are being generated from a representation of what bedrooms are like rather than a faithful imprint of a specific bedroom. I think it's fairly clear that these computer programs are algorithmic in nature, and yet they are behaving in ways that Epstein insists is incompatible with computation.
posted by biogeo at 1:40 PM on May 19, 2016
Chitownfats: I think that the whole field lost its Einstein when David Marr died so young.
QFT
posted by biogeo at 1:41 PM on May 19, 2016 [1 favorite]
Brains are more powerful than Turing Machines in the sense that they can take input. Classical Turing Machines start with an empty tape, and can't interact with the world. Your PC is more powerful than a Turing machine in this sense as well.
Any input you can imagine you can also imagine being encoded from a collection device onto an input tape to a Turing machine. So, this isn't really any more powerful computationally than any other Turing machine.
posted by delicious-luncheon at 1:46 PM on May 19, 2016 [1 favorite]
bonehead: Talking about software and hardware in brainmeat doesn't help a lot.
Yeah, I agree, it's not a meaningful distinction for neurobiology, because they're conceptual details created for general-purpose digital computers, and brains compute according to fundamentally different principles. There isn't a meaningful distinction between the two for electronic analog computers, either.
posted by biogeo at 2:03 PM on May 19, 2016 [1 favorite]
I think there may well be QCs that are more powerful than TMs, and that the brain may turn out to be one of them:
Again, more powerful than TM's means a hypercomputer. A "regular" QC using qubits is not more powerful since you can simulate it (using a lot of space and time) on a non-quantum Turing Machine.
Anything else approaching a hypercomputer seems to require really exotic states, many of which are questionable from the POV of a physical implementation, and even if they were possible physically, they almost certainly aren't the kind of thing going on in a mammal brain. That is, there probably aren't wormholes forming closed time-like curves in your brain, or an infinite superposition of states, or anything like that.
That's not to say there is not anything quantum going on in brains. However, it's likely to be pretty "mundane" quantum things that aren't going to be leading to any hypercomputation.
posted by delicious-luncheon at 2:08 PM on May 19, 2016
There's a hidden assumption here that the human brain is capable of being understood by a human brain. I think this undermines b1tr0t's contention that the brain can be mapped to a Turing machine because that's the most fundamental model of computation that humans have come up with.
posted by monotreme at 2:41 PM on May 19, 2016
Was there some weird time-like loop that spat out this connectionist bombast from a 1985 edition of The New Scientist?
posted by chortly at 3:09 PM on May 19, 2016 [1 favorite]
The whole thread about computational power seems irrelevant to the central question. That a brain is not computationally more powerful than a turing machine doesn't affect the question of whether "computation" is the best way to understand what the brain is doing. Conversely, if we did have evidence that the brain *was* more powerful than a turing machine that would be evidence in favor of not using that model, but we don't.
There's been the argument in the thread that we should understand the brain as a computer because it's a physical system which can in principle be modeled mathematically, and mathematical models can be computed. Again, that doesn't seem to hold water. The physical mechanism by which the liquid in a thermometer expands can also be modeled and computed, but no one will argue that the liquid is performing computation when it expands as the thermal energy of the system increases. The best you get out of this line of argument is that a brain can in principle be modeled by a powerful enough computer, and that a computational model is not disallowed by the material nature of the brain. If we had evidence that the brain was in some way supernatural, that would be evidence that the brain is not a computer, but the lack of such evidence is not proof of the opposite claim, just proof of its possibility when examined from this angle.
I'm interested in what seems to me roughly a version of the pessimistic meta-induction argument here. We have various previous models of brain functioning which we no longer hold to be helpful, each superseded in turn by a model closely aligned with a more complicated technology. Do we have good reason to believe that our current model will not likewise be superseded? That's not clear to me. The current model has helped us make progress in our understanding of the brain and behavior, but so did previous models. The Turing machine provides evidence that whatever supersedes this model, it can't be more computationally powerful, but this is not an argument that it can't be superseded, only a restriction on what types of understandings might supersede it.
Anyway, the brain is a gland, I have no particular beliefs about how it functions, and the article has some flaws but I don't think it merits the kind of derision that it's mostly getting here.
posted by vibratory manner of working at 3:26 PM on May 19, 2016 [8 favorites]
There's a hidden assumption here that the human brain is capable of being understood by a human brain.
I don't understand. What would you mean by the sentence "The human brain is not capable of being understood by a human brain"? Can you give me an example of something else that's "not capable of being understood by a human brain"?
posted by straight at 3:37 PM on May 19, 2016
The physical mechanism by which the liquid in a thermometer expands can also be modeled and computed, but no one will argue that the liquid is performing computation when it expands as the thermal energy of the system increases.
I wouldn't say "no one."
posted by atoxyl at 3:43 PM on May 19, 2016 [1 favorite]
The physical mechanism by which the liquid in a thermometer expands can also be modeled and computed, but no one will argue that the liquid is performing computation when it expands as the thermal energy of the system increases.
I just provided upthread a computationalist account of this. To rephrase my earlier argument in another way, the liquid in a thermometer expanding with temperature is not computing anything. But as part of a thermostat (and I know real thermostats don't use mercury thermometers), it is serving as an element of a computational system. The difference is that within the context of a thermostat, the thermometer liquid has a function (in the biological sense of the word). I differ from pan-computationalists in that I regard computation as essentially teleological in that sense.
We have various previous models of brain functioning which we no longer hold to be helpful, each superseded in turn by a model closely aligned with a more complicated technology.
I mean, I think it's fairly obvious that we're nowhere near a Final Theory of the brain; that's why neuroscience is so exciting right now. But the whole point of this article was that computationalism is a fruitless dead end that needs to be abandoned, and that just doesn't match reality at all. I will be very disappointed if we don't have more sophisticated theoretical frameworks for understanding the brain in 20, 50, or 100 years, but those are going to come from the advances we're making right now, many of which are coming from computational neuroscience.
the brain is a gland
What? No it's not. It has glandular components (the neurohypophysis, aka posterior pituitary gland), but that obviously is not the whole brain.
posted by biogeo at 4:04 PM on May 19, 2016 [2 favorites]
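A minimal sketch of that teleological point, with invented numbers: the temperature reading computes nothing on its own, but dropped into a control loop it becomes an element of a computational system, because it now has a function.

    def thermostat_step(reading_celsius, setpoint=20.0, hysteresis=0.5):
        """Decide what the heater should do, given one thermometer reading."""
        if reading_celsius < setpoint - hysteresis:
            return "heater_on"
        if reading_celsius > setpoint + hysteresis:
            return "heater_off"
        return "no_change"

    for reading in [18.2, 19.7, 20.8]:
        print(reading, "->", thermostat_step(reading))
    # 18.2 -> heater_on, 19.7 -> no_change, 20.8 -> heater_off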
This is the get-off-my-lawn sort of rant I read at Stack Overflow, in questions about "design patterns" on the programming site, or philosophy-of-science questions on the physics site. The "physics is what we say it is" argument: now stop asking damn impossible-to-answer questions, I've made my life about practical evidentiary knowledge. Buy my book, too. Hahaha.
I read a carefully constructed argument which compares newborns to everything that sits behind the computer, and became bored. Besides, the serial interface of human anatomy makes us highly inefficient at sharing information. My messy brain is thinking of a ton of reasons why I don't like this article, but I only have this... text...
posted by xtian at 4:12 PM on May 19, 2016
But I'm reasonably confident that it will be really good at creating schedules for Traveling Salespeople.
You're just messing around so I feel like the pickiest nit in the world but wouldn't a better example be detecting infinite loops in computer programs? TSP is entirely within the bounds of what conventional computers can compute in theory - it's just that the number of algorithmic steps required grows exponentially making large instances intractable in practice. A nondeterministic computer could do it in polynomial time - too bad they don't, as far as we know, exist - but still couldn't solve the halting problem, ever. A hypercomputer would by definition be able to solve at least one problem that conventional computers cannot - I guess an interesting question would be does it follow that it would make possible more efficient algorithms for conventionally computable problems? I was going to say it does not but I haven't thought through whether there are implications I'm missing so maybe somebody will tell me I'm wrong.
posted by atoxyl at 4:16 PM on May 19, 2016 [1 favorite]
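To spell out the tractable-versus-uncomputable distinction: a brute-force TSP solver like the toy sketch below (distances invented for the example) always terminates and always finds the optimum; it just takes factorially many steps as the number of cities grows. Nothing analogous can be written for the halting problem at any cost.

    from itertools import permutations

    def tsp_brute_force(dist):
        """dist[i][j] = distance from city i to city j; returns (length, tour)."""
        n = len(dist)
        best = None
        for perm in permutations(range(1, n)):       # fix city 0 as the start
            tour = (0,) + perm + (0,)
            length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
            if best is None or length < best[0]:
                best = (length, tour)
        return best

    dist = [[0, 2, 9, 10],
            [2, 0, 6, 4],
            [9, 6, 0, 3],
            [10, 4, 3, 0]]
    print(tsp_brute_force(dist))   # -> (18, (0, 1, 3, 2, 0))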
What? No it's not.
Poe's law strikes again, sorry.
posted by vibratory manner of working at 4:18 PM on May 19, 2016 [1 favorite]
Alternately: I consulted my pineal gland and it told me it was true!
posted by vibratory manner of working at 4:19 PM on May 19, 2016 [1 favorite]
Here's another really important point. All computation is physical. Your computer performing calculations is an entropy engine, displaying Metafilter to you on the back of the transformation of energy from a low-entropy state (the electric current coming out of your wall socket or the chemical potential stored in your laptop or phone battery) to a high-entropy state (heat).
Computation is physical, but the converse might not be true. The main difference with digital computation is that there's a digital "abstraction layer" that's tractable. All bits are (basically) the same - some are slower, some might have higher latency, etc. - but as long as computer engineers have built your computer reasonably well, you don't need a complete physical description of your computer to understand how it's going to behave 99% of the time. This falls down when you reach the edges of the hardware's capabilities (radiation, heat, etc). A physical computer can only approximate mathematical ideas about computation, but it can approximate them arbitrarily closely.
We don't really know if there's a digital abstraction layer for brains that will render them tractable like this. I think it's plausible, but we don't even have a digital abstraction for water.
Personally I wouldn't necessarily say that thermometers compute temperature. But they can represent temperature,
Arguably any such ideas of the thermometer representing anything is coming from "outside" it, not inside. You can dissect it as much as you like and you'll never find a representation inside the mercury.
Any input you can imagine you can also imagine being encoded from a collection device onto an input tape to a Turing machine. So, this isn't really any more powerful computationally than any other Turing machine.
Turing called these kinds of machines "choice machines." An (automatic) Turing machine can't act in the real world and then evaluate the result of the action; there's a real difference there. If the real world is actually perfectly simulatable, then yes: you just encode the entire world onto the tape, simulate the evolution of the world, simulate interactions with the world, and then the choice machine isn't more powerful than the Turing machine. This is probably technically true, unless our model of physics is wrong.
But, in terms of a useful model for how brains behave, it's horrible. It's not even a very good model for how our computers actually behave in practice- between human operators and networked processors, they're not automatic Turing machines at all.
posted by BungaDunga at 4:33 PM on May 19, 2016 [3 favorites]
Turing called these kinds of machines "choice machines."
By "these" I mean machines which stop at a particular state and wait for input from outside.
posted by BungaDunga at 4:34 PM on May 19, 2016
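A rough way to picture the difference in code (a sketch only, using a Python generator as a stand-in for a choice machine): the machine below halts at the yield and genuinely waits for whatever the outside world decides to send, rather than reading choices that were written onto its tape in advance.

    def choice_machine():
        total = 0
        while True:
            symbol = yield total          # stop here and wait for outside input
            if symbol == "stop":
                return
            total += symbol

    m = choice_machine()
    print(next(m))       # 0: the machine is now paused, waiting
    print(m.send(3))     # 3: the "world" chose to send a 3
    print(m.send(4))     # 7: then a 4, which the machine could not have predicted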
By "these" I mean machines which stop at a particular state and wait for input from outside.
posted by BungaDunga at 4:34 PM on May 19, 2016
but doesn't that mean that, to the extent the human brain functions like a computer, it's not a very good one?
If you must call it a bad computer, then is a computer the most appropriate thing to call it in the first place?
posted by anazgnos at 4:40 PM on May 19, 2016
People who understand neither brains nor computers shouldn't write articles comparing the two. I kept waiting for something to back up the article's bold proclamation that "brains don't store information!" but it never seemed to get there.
Obviously, brains don't run in binary. Obviously, modern computers are built to be powerful and precise (but fragile... reboot!) while brains are built to be as adaptive as possible. That doesn't mean information processing isn't relevant, only that the picture is incomplete! It's well known that memories are hazy, but that's what happens when your data storage is a pile of meat. What should be fairly obvious is that brains are super, super, hyper-connected but store no "data" except through the power of those connections. The model is very different from how computers work, but it is still in the same realm.
posted by lubujackson at 4:52 PM on May 19, 2016 [3 favorites]
Arguably any such ideas of the thermometer representing anything is coming from "outside" it, not inside. You can dissect it as much as you like and you'll never find a representation inside the mercury.
Yes, exactly!
posted by biogeo at 4:55 PM on May 19, 2016 [2 favorites]
Unless you are some sort of super-human alien-god with a meta-brain, what good would an example do you? Your brain wouldn't be able to comprehend it, by definition.
I'm not asking you to explain something that can't be comprehended by a human brain, I'm just asking whether there is such a thing that exists that you can point to.
TMs can model lower-level machines like Regular Expressions, but you can't model a Turing Machine with a Regular Expression.
So what you mean by "something that can't be understood by a human brain" is something that can't be modeled completely by a human brain? That seems like a rather specialized definition of "understand" and that there's a lot of knowledge and understanding we could have about something even if we couldn't model it completely in our own brains.
It seems very silly to throw up our hands and say "maybe a human brain can't understand a human brain" if you're using such an all-or-nothing definition of "understand."
posted by straight at 5:00 PM on May 19, 2016
I mean, I think it's fairly obvious that we're nowhere near a Final Theory of the brain; that's why neuroscience is so exciting right now. But the whole point of this article was that computationalism is a fruitless dead end that needs to be abandoned, and that just doesn't match reality at all. I will be very disappointed if we don't have more sophisticated theoretical frameworks for understanding the brain in 20, 50, or 100 years, but those are going to come from the advances we're making right now, many of which are coming from computational neuroscience.
Going back to your more substantial points, this seems fair to me. I like the monkey-wrench of "your brain is not a computer" because there's a lot of pop-science, transhumanist wankery built around the identification of the brain with computers (see Kurzweil as exhibit A), and a certain amount of the reaction in this thread feels somewhere in that area. "Your brain is a gland" was a monkey-wrench of my own along those lines. "Organ" would actually have been a true statement, but "gland" just felt more specific, fleshy, and jarring (although not strictly true).
In light of agreement that we're not at the final theory of the brain, I can reframe: do we have reason to expect the successor theory will include reference to computation? I think it's probably safe to say that any remaining references to, say, the hydraulic understanding of brain processes are currently interpreted as metaphors of limited utility, with the more productive aspects of that theory recast in computational terms. Shouldn't we expect the same fate for computational understandings of the brain? If we expect that, why object to the statement "Your brain is not a computer"? (I have no evidence that you personally object to that statement, but plenty of people here do.)
posted by vibratory manner of working at 5:12 PM on May 19, 2016 [1 favorite]
I picked TSP because I know it is solvable, and it seems like a safe bet that the God-Machine would be really good at solving it. And by really good, I mean it might be able to solve TSP in better than NP time. I don't know where the capability boundaries for the God-Machine and Hypercomputer lie, nor whether they stretch to reach the Halting Problem. So: maybe? I don't know.
Maybe I misunderstood what you were getting at. When you use the analogy "[God-Machine]:[Turing Machine]::[Turing Machine]:[Regular Expression]" I take that to mean that the God-Machine is a super-Turing-machine, a "hypercomputer" - that is to say a machine that can simulate a Turing Machine plus (by definition) can compute at least one function that is not computable by Turing Machines. I'm not sure offhand that such a machine would necessarily have to provide a more efficient approach to functions that are already computable, though again I'm not a CS theorist. If your "God-Machine" is meant to be a machine that computes exactly the set of Turing computable functions but can do so more efficiently then the analogy to the relationship between TMs and REs is not the right analogy - more like NTMs to DTMs I guess.
posted by atoxyl at 5:14 PM on May 19, 2016
I like the monkey-wrench of "your brain is not a computer" because there's a lot of pop-science, transhumanist wankery built around the identification of the brain with computers (see kurzweil as exhibit A), and a certain amount of the reaction in this thread feels somewhere in that area. "Your brain is a gland" was a monkey-wrench of my own along those lines.
I grok that perspective. My version of that monkey wrench might be something like, "Sure, the brain's a computer, but you don't actually know what a computer is." That is, the theory of computation is broader and older than digital computers, and if your intuitions about computation come only from digital computing, there are going to be some serious limitations in your thinking about neural computation. Which is exactly what we see from the likes of Kurzweil et al.
In light of agreement that we're not at the final theory of the brain, I can reframe: do we have reason to expect the successor theory will include reference to computation? I think it's probably safe to say that any remaining references to say, the hydraulic understanding of brain processes is currently interpreted as metaphors of limited utility, and the more productive aspects of that theory recast in computational terms. Shouldn't we expect the same fate for computational understandings of the brain?
I mean, honestly, I don't know. To some extent computation is such a general framework for understanding the brain that it's hard to see how it would lose its utility as part of a larger theory. On the other hand, it's possible to approach many of the same systems productively from a systems-control theory perspective, which I think is mutually compatible with the computationalist perspective, but may be a more useful operating framework for certain problems. And borrowing a third hand, it's also clear to me that while computationalism is extremely valuable for understanding the brain, it doesn't come anywhere near being enough. We also have to consider evolution, development, and ecology, and for humans culture and beliefs, not to mention all of the basic cell and systems biology that lets us work out the implementation level of things. Brains are really really complicated, and we need every good tool in our toolkit to understand them.
If we expect that, why object to the statement "Your brain is not a computer"?
So in light of what I've just said, my less provocative, more honest response to this would be, no, your brain isn't a computer in any conventional sense, but it does compute, and understanding its computational design* helps us understand and describe how it works. I expect that to remain true even as our understanding of the brain improves (though of course I could be wrong). To borrow analogies from the history of physics, sometimes scientific progress means abandoning theories which provide no value, like phlogiston theory, and sometimes it means realizing that our earlier theories are still valid and useful, but only as special cases, as with Newton's laws of motion. I don't know, but I suspect that at worst, computational neuroscience is in the latter camp. At least at present, it is certainly providing us with many fruitful avenues of research and demonstrable progress in understanding.
* N.B., "design" in the biological sense of a system having functions shaped by its evolutionary history, just to head off any objections on that front.
posted by biogeo at 6:26 PM on May 19, 2016 [5 favorites]
I think digital computers can serve as a very useful metaphor for understanding various aspects of the brain, but I also think it can be constructive to reflect on the ways in which the two differ, and the functional consequences of those differences. What appears to be a disadvantage in one context may be an advantage in others, or may confer some other benefit that is not immediately obvious. Imprecision, for example, can function as a component of adaptability.
If you do a single task over and over again, you will become better and more efficient at doing it, even if you're not trying to become so. One of the ways this occurs is that you don't perform the task in precisely the same way each time. Your brain is a bit sloppy, and so without intending to, you perform very slight variations in the timing and force of your movements. Subconsciously, you will tend to repeat the variations that are easier or more successful, and thus you end up becoming better at the task over time. If your brain were more precise, it would be less able to optimize itself in this manner.
posted by dephlogisticated at 7:06 PM on May 19, 2016 [3 favorites]
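A toy illustration of that "sloppiness as optimizer" idea (everything here, the error function, the jitter size, and the target, is invented): repeat a movement with a little random variation, keep whichever variation scored better, and performance creeps toward the target without any explicit model of the task.

    import random

    def skill_error(params, target=(0.6, 0.3)):
        """Lower is better; stands in for how far the throw landed from the bullseye."""
        return sum((p - t) ** 2 for p, t in zip(params, target))

    random.seed(1)
    params = [0.0, 0.0]                    # initial timing and force settings
    for trial in range(2000):
        jittered = [p + random.gauss(0, 0.02) for p in params]
        if skill_error(jittered) < skill_error(params):
            params = jittered              # keep the variation that worked better
    print([round(p, 2) for p in params])   # ends up near the target (0.6, 0.3)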
> "the serial interface of human anatomy make us highly inefficient at sharing information"
Actually human anatomy is a massively parallel interface, and it is very good at sharing information. It's our tools for storing and transmitting that information that are deficient.
You are, at all times, immersed in a rich bath of stimuli, even when lying quietly in a float tank. These stimuli reach the brain by multiple paths, and even when they originate from the same nominal place (say, your left ear) the input reaching your brain cannot be one bit wide, because a major nerve is not a wire but is more akin to a bundle of wires, poorly insulated from one another at each end, with new data being added along the way as the signal travels from one part of the inner ear to another even before setting out for the brain. It's the difference between listening to a recorded concert through ear buds versus listening to the live concert while sitting in the concert hall. So much for input.
For output, try calming a frightened young child from another room, through a voder. That's communication using a more-or-less serial interface. Now go into their room, sit beside them, hug them and stroke them and speak softly to them, fine tuning your actions and words based on the feedback you get from their body. That's using human anatomy to communicate. Which is more likely to calm the child?
The issue is not that human anatomy is serial, because it's not, but that our artificial systems for recording and transmitting human communication are effectively serial. They are not yet rich enough to carry the full width and nuance of human input/output. Which is why we use crutches like emoticons :p as inadequate stand-ins for the rich information otherwise conveyed by our faces as we speak.
Continuing this line of thought, considering the brain as a computer is one thing, but "we" are not just a brain. We are the rest of our body, too. Not all of "us" is stored in our brains. I had reason to think about this recently. All my life I have been blessed with an excellent sense of balance. When I begin to tip over, my correction has always been so quick that it happens before I even become conscious of it. Some part of my brain is probably involved, but I don't have to think about it, calculate vectors and assess the necessary muscles to activate to take corrective action; my body just continuously corrects on the fly. But then I acquired an inner-ear infection that put my semicircular canals out of whack. They didn't simply stop working, they actively began feeding me spurious information. For a couple of weeks, just walking down the street became an adventure. I had to build up a new set of balance reflexes using my ankles, my eyes and my conscious brain. It worked, but it wasn't as good. I steered clear of climbing ladders until (thankfully) the infection abated and my instinctive sense of balance returned. But my body remembers that period and I tense up (sweat, breathe more heavily, focus on the horizon) when I need to do something that requires instinctive (as opposed to conscious) balance. It's as if now my brain is standing by ready to take over, like a back-seat driver, in case my ears fail me.
To successfully replicate a human mind in software, I think you'll need to take into account much more than just the brain. You'll need to replicate the brain's environment, including external factors the body interacts with. Even such actions as typing a comment into Metafilter are affected by the tools we use. I am typing this comment on my laptop keyboard. If I was typing it on my touchscreen tablet, I suspect it would be a different comment - shorter and less detailed - due to the different medium. The feedback I get from touching a physical keyboard makes me faster and more confident in my typing, which makes me more inclined to rant at length instead of just setting down the minimum information. Which might or might not be a good thing, lol, but would at any rate be a different thing.
posted by Autumn Leaf at 8:01 PM on May 19, 2016 [3 favorites]
There is a theory out of the late great Alison Doupe's group that holds that birds learning their song employ exactly that strategy, dephlogisticated. Male oscine songbirds (which have to learn their song from adult tutors) produce both "directed song" when courting a female and "undirected song" when no female is around. Directed song is extremely precise, nearly identical every time the male sings. Undirected song is slightly more variable, not so much that it's very obvious to a human listening, but easy to quantify with a spectrogram of the song. It turns out that when the female is present, more dopamine is released into the male's neural song system, and this seems to "turn off" a specialized system for introducing a little randomness into the motor system for song production. (The actual mechanism is more complex but that's the gist of the idea.) So more dopamine causes the birds to sing their "best song." But the undirected song is actually still important; it's a bit like practicing, allowing the birds to "improve" their song over time.
I think this is a nice case study of how traditional ideas of computation (which don't handle randomness well) can be too limiting for understanding the nervous system, for which randomness may often be a useful computational principle. Of course, in computer science, there are many modern algorithms (like Markov Chain Monte Carlo, though maybe it's a stretch to call that "modern") which exploit randomness (or at least pseudorandomness, though in principle true randomness should be at least as good) for solving optimization problems that are at least metaphorically similar to bird song learning.
posted by biogeo at 8:15 PM on May 19, 2016 [2 favorites]
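Since MCMC came up, here is a bare-bones Metropolis sampler (the target distribution and proposal width are invented for the example) showing randomness used as a computational tool: random proposals, accepted with a probability tied to how plausible they are, end up drawing samples from the target distribution.

    import math
    import random

    def target(x):
        return math.exp(-0.5 * (x - 2.0) ** 2)     # unnormalized Gaussian centred at 2

    random.seed(0)
    x, samples = 0.0, []
    for _ in range(20000):
        proposal = x + random.gauss(0, 1.0)        # random jump
        if random.random() < min(1.0, target(proposal) / target(x)):
            x = proposal                           # accept the random jump
        samples.append(x)
    print(round(sum(samples) / len(samples), 2))   # sample mean comes out near 2.0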
Well, I should clarify then by saying that while one can certainly address (pseudo-)stochastic algorithms using a Turing Machine (and I think there are even properly stochastic generalizations of the TM), the idea of randomness as a useful computational principle doesn't appear to be an important part of the classical computer science perspective that informed ideas like the Von Neumann architecture, and heavily influenced thinkers like David Marr. That's what I mean by "traditional ideas of computation" in this context.
Again, this is a question of algorithms and implementations. A universal Turing machine can implement any algorithm, but it's not equally facile at doing so for all algorithms. Brains are actually really good at taking advantage of phenomena like stochastic resonance to efficiently solve certain classes of problems, a strategy which, while possible to implement with a Turing machine, is non-obvious and not particularly efficient in that implementation.
posted by biogeo at 10:11 PM on May 19, 2016
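A hedged toy version of stochastic resonance (threshold, signal levels, and noise scale all invented): a weak signal that never crosses the detector's threshold on its own gets detected once a moderate amount of noise is added, mostly at the moments when the signal peaks.

    import random

    def detections(noise_sd, threshold=1.0, trials=5000):
        random.seed(0)
        hits = 0
        for t in range(trials):
            signal = 0.8 if t % 2 == 0 else 0.2    # weak signal, always sub-threshold
            if signal + random.gauss(0, noise_sd) > threshold:
                hits += 1
        return hits

    print(detections(0.0))    # 0: without noise the signal is never detected
    print(detections(0.3))    # >0: noise pushes the signal's peaks over threshold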
I need to read up on Hypercomputers - if anyone can recommend a good book or paper on the topic, I would certainly appreciate it.
Unfortunately I am not the guy to ask - I'm just someone who has taken CS theory classes (probably a little more recently than you have). I don't know how much there even is - I'm sure there are appearances in CS theory but I don't know about conjectures as to whether/how it could be plausible physically. I was just using that term (with a definition that encompasses any super-Turing computer) because we seemed to be talking about hypothetical super-Turing computers.
posted by atoxyl at 11:16 PM on May 19, 2016
Oh, the brain doesn't store memories? Try telling that to someone without a hippocampus. And then tell them again the next day because they won't fucking remember it.
sorry. The article touched a nerve. I'll stop now
It's ok, you will have forgotten all about it by tomorrow.
posted by boilermonster at 12:01 AM on May 20, 2016
The following is an abridged conversation between two actual cognitive scientists discussing this article:
Scientist A: Jesus. What a fucking incoherent article that somehow is making all the rounds now.
Scientist B: Bwahahahaha. I ... just .... wow. That is soooooo much dumber than I was expecting. I was expecting some Penrose Quantum Spooky level of dumb. This is I Failed Undergrad CS And Undergrad Psych dumb. I just ... words fail me. I'm not sure which of the dozen egregious errors I should start complaining about. I haven't laughed this hard in ages. Thanks!
Scientist A: I'm so glad you found the humour in it. Oh god it pisses me off. Probably because the first context I saw it in was someone putting it on Facebook saying basically "this is so smart, I'm so glad someone is saying this finally" that I just. can't. handle. it.
Scientist B: Heh. Yeah. But seriously if someone can't understand the difference between Turing computability and superficial properties of my laptop then I don't feel obligated to do anything other than giggle. And oh God the awful awful stuff about memory. I kind of feel sorry for Bartlett getting cited there. Like the Bartlett studies are really interesting and tell us something cool about the generative nature of memory and does not deserve to be associated with this kind of drivel.
Scientist A: That makes me feel happier about it.
Scientist B: Actually it's like listening to someone earnestly complaining that Pat Benatar doesn't understand human emotions because love is not literally a battlefield.
posted by langtonsant at 12:18 AM on May 20, 2016 [7 favorites]
sciatrix: "We're not math. We don't work like math. And insisting that we do, because "computer" refers to anything that can (no matter how flawfully) compute math... ignoring the reality of what modern humans mean and what the piece means when we talk about computers... well, that's willful ignorance of the piece."
On a more serious note, it would be really great if people who don't actually work in the field were to refrain from trying to change the definitions of technical terms that they don't understand. "Computational" in the sense used by cognitive scientists does not mean what this guy is trying to use it to mean, and the "computational metaphor" has fuck all to do with the kinds of machinery that Apple sells except in the most abstract sense. This is hugely problematic because he's trying to point at cognitive scientists being so stupid by grossly misrepresenting the nature of the work that they do.
Think of it like this: the author is either trying to claim that (a) the human mind is not very similar to a laptop or (b) the behaviour of the human mind is not expressible in terms of Turing computable functions. If he means the former he is a madman tilting at windmills because literally no-one in the field believes the hypothesis he is trying to disprove; and if he means the latter, then he has not presented any evidence in support of it. If you strip out the rhetoric and look at the tiny bit of scientific evidence referenced in the article, what's left is utterly uncontroversial stuff (e.g., Bartlett's work) that is absolutely and entirely consistent with the standard information processing view of human cognition. The article is utter bollocks at every level.
posted by langtonsant at 12:44 AM on May 20, 2016 [5 favorites]
wait guys i've got it - reading this article as a cognitive scientist is like being a physicist listening to someone arguing whether the luminiferous aether is a better theory than jj thomsons plum pudding model
posted by langtonsant at 1:59 AM on May 20, 2016
Oh for the love of ... okay, so I think to myself, I wonder why this guy is writing these things, and why does he think he has some business offering an opinion when he is so blatantly ignorant of the current state of the literature? And apparently his claim to fame is his role in Psychology Today, a twee piece of pop psychobabble with zero scientific merit. I mean seriously? FFS *I* have more serious credentials in the field than he does, and by a long fucking margin to boot. Fuck, I'd almost go so far as to say that *sciatrix* is better qualified to have an opinion than this wanker, and - per her own admission upthread - she's a biologist with essentially zero expertise in cognition. This guy is a fucking idiot and the article is overreach of the most egregious kind. God, WTF, why can't people stay the fuck out of fields they know nothing about? What the hell is wrong with these people?
... And then, finally, I discover the answer by looking at his CV. So he co-authored something with Skinner and wrote a few moderately tedious S-R theory papers that sort of make sense if you think of him as a second rate behaviourist with no grasp of the current state of reinforcement learning, computational cognitive science or mathematical psychology. There is a reason why you don't see his name attached to any papers in Psych Review, Cognitive Science, Cognition, or Cognitive Psychology. He is not qualified in this field, and should not be allowed to write articles like this one.
posted by langtonsant at 3:08 AM on May 20, 2016 [1 favorite]
Fucking amateurs.
posted by langtonsant at 3:11 AM on May 20, 2016
Arguably any such ideas of the thermometer representing anything is coming from "outside" it, not inside. You can dissect it as much as you like and you'll never find a representation inside the mercury.
Yes, exactly!
When I utter a series of, let's face it, arbitrary sonic vibrations from my throat and mouth alone in my study, nothing particularly interesting happens. However, in the proper context, I produce these same physical effects on air, and a Latte appears. This happens with such regularity and consistency that I've since learned I expend less energy if I modify this noise to Venti Latte instead of Large Latte. Whatever!
posted by xtian at 3:29 AM on May 20, 2016
langtonsant has it. The formula this article is following is:
1) Grossly misunderstand one of the fundamental ideas in a field.
2) Argue that this misunderstanding is massively misguided.
3) Conclude that, ergo, people who do research in the field are massively misguided.
...
4) Profit!
posted by forza at 3:55 AM on May 20, 2016 [1 favorite]
I mean, who does this? He looks at a field that he knows exceedingly little about. He thinks some idea within it looks utterly ridiculous to him.
Does he think, "Hmm, this appears ridiculous, but a lot of very smart, very hardworking people believe it. Maybe I'm missing something!"
No. He does not.
Instead he thinks "I know! I'll tell all of these people who understand this field far better than I do that they are wrong. And I'll do it in the strongest possible terms, incoherently, in such a way that it makes other people who also don't understand the field think that everyone in the field is an idiot."
This article is the equivalent of an anti-evolution zealot attacking biology. It's so wrong it's not even wrong.
posted by forza at 4:00 AM on May 20, 2016 [1 favorite]
The issue is not that human anatomy is serial, because it's not, but that our artificial systems for recording and transmitting human communication are effectively serial. They are not yet rich enough to carry the full width and nuance of human input/output. Which is why we use crutches like emoticons :p as inadequate stand-ins for the rich information otherwise conveyed by our faces as we speak.
I happily submit to these clarifications! (>_<)
It's worth noting that while graphic designers have developed various techniques for multi-leveled communications, the public reception is mixed. There remains a preference for the serialized text both from those managing communications (Risk-reward) and from readers (work-understanding).
Following your personal comment, A.L., on physical balance and brain input, I read an interesting comment in a popular scientific book which claimed the stomach contains nerve structures similar to the brain, although in a significantly lower quantity. I find this fascinating. Considering the mission-critical activity of food consumption, I like the idea of a 'dedicated processor' in the stomach; it strikes me as highly effective design. Hahah.
(Sent from my iPhone BTW)
posted by xtian at 4:19 AM on May 20, 2016
In regards to the hardware-software divide in digital computers: this is a practical limitation as opposed to a hard limitation. FPGAs (Field Programmable Gate Arrays) exist and are basically rewireable hardware. Reading that Wiki page makes me amazed at how much progress there has apparently been in this field, as I just know about it as a layman, but apparently there are now fully programmable system-on-a-chip chips that incorporate FPGA technology. This is the closest thing, hardware-wise, that seems to work similarly to how the neural structures in a brain work. Still, it doesn't seem like the most efficient way of replicating brain-like computing. Somewhere somebody is probably working on some sort of bio computer (and I'd wager that somebody on here probably has a lot more of an idea on this than I do) and, interestingly, in his second novel, "Count Zero", William Gibson speculated that the next big jump in computing would be biochips, that is, some sort of computing structure that incorporates biological computing methods. I bring this up because William Gibson tends to base the ideas in his works on things he hears happening in the real world. So if he wrote about biochips in the mid 1980s then somebody was already working on them back then. Hmmm, maybe I'll use my lunch break today to research into that.
posted by I-baLL at 4:33 AM on May 20, 2016
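[To make the "rewireable hardware" idea above a little more concrete, here is a minimal toy sketch in Python. It is not real FPGA tooling and isn't from the article or the comment; it just illustrates the core trick: an FPGA cell is essentially a small lookup table, and loading a different configuration into the same physical cell changes what logic it computes.]

# Toy illustration: an FPGA logic cell as a 4-input lookup table (LUT).
# "Rewiring" the hardware just means loading a different truth table
# into the same cell -- no physical change required.

from itertools import product


class LUT4:
    """A 4-input lookup table: 16 configuration bits pick the logic function."""

    def __init__(self, config_bits):
        assert len(config_bits) == 16
        self.config = list(config_bits)

    def __call__(self, a, b, c, d):
        # The four inputs form an index into the configuration memory.
        index = (a << 3) | (b << 2) | (c << 1) | d
        return self.config[index]


def truth_table(fn):
    """Enumerate a 4-input boolean function to build a configuration 'bitstream'."""
    return [int(fn(a, b, c, d)) for a, b, c, d in product((0, 1), repeat=4)]


# "Program" the cell as a 4-input AND gate, then reprogram it as a parity check.
cell = LUT4(truth_table(lambda a, b, c, d: a and b and c and d))
print(cell(1, 1, 1, 1))  # 1

cell = LUT4(truth_table(lambda a, b, c, d: (a + b + c + d) % 2))
print(cell(1, 0, 1, 1))  # 1 -- same "hardware", different function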
Let me play devil's advocate with one more comment regarding the serialization of communications. Briefly, when I was young during the 1980s I would communicate with my siblings frequently using movie references. Out-of-context one-liners could communicate emotive content irrespective of whatever we were talking about (subjectively). Over time, as we grew up and spent more time apart (and I could add we've all grown up to have our own separate lives), we fell out of this practice. Not because those references failed to point to the same movies and events which they had for us for many years before. Rather, something else occurred: they didn't carry the same emotional content when taken in these new social group contexts. How do you explain the private in-jokes to outsiders? What's more, these communications were very much emotion-based, and so defied explanation.
In the same way relationships can fall apart. It only takes one complex series of unexpected events to color the myriad of multi-dimensional non-verbal communications between persons to the point that serial verbal communications simply can't seem to repair them effectively. It can take a long time, if at all, to recover the ability to comfort and "calm the child". Human anatomy is also "very good at sharing [mis]information" when the auditor hears one thing and interprets it differently than we expect within that array of input A.L. mentions.
posted by xtian at 5:09 AM on May 20, 2016
langtonsant and Forza, I think you've identified one of the central causes of so much discontent with this article.
As I noted previously, Epstein is well regarded in the field of behavior analysis, which, although I studied it in graduate school, I believe can be narrow-minded and insular. But so is every other field. In fact, one of the biggest non-examples of insularity in science right now is the multi-disciplinary effort toward understanding the brain.
If I may be so bold as to give you my interpretation of how many members of the field of Behavior Analysis, both experimental and applied, see things, it would be something like this:
"65 years ago we found a nugget of truth so fundamental that the behavior of all organisms can be explained - and that truth is that behavior is controlled (motivated) by the environmental consequences contingent on that behavior. We've been dutifully studying this and developing a more robust understanding of why animals (people included) do what they do, and how to control that, ever sense. But the world moved to cognitive science, because reasons.
"The cognitive revolution has produced a lot of interesting and fun theories, but the application of those theories has not improved the lives of people. In fact, most of the application is just drugging people. When it is at its most effective, using CBT, it's actually the behavioral component that's doing all the legwork.
"We have yet to see a theory of the brain that explains behavior as succinctly as our theory of anticedant:behavior->consequence. We have yet to see a cognitive technique that is proven (to our extremely rigorous standards) to actually change behavior more effectively than simple Behavior Modification Techniques.
"The role of the behaviorist is to keep the flame alive, keep our heads down, and eventually, slowly, be seen as the savior of humankind. But from time to time, seeing these people making super crazy claims - so crazy that there is now a crisis of confidence in the very concept of research - makes us vent our bile into a hitpiece that will be shared on behavior-analytic facebook groups for years to come."
So, no, I don't expect any behavior analytic researcher would spend time trying to understand either cognitive psych, or computer science. I mean why would they? It's all based on a load of bunk and behavior science is decades, aeons, past that anyway. Yes, it can be an arrogant view, but it's one that has kept the discipline somewhat pure throughout the last 40 years.
posted by rebent at 8:00 AM on May 20, 2016
And I must add, this excellent formula is basically exactly what Chomsky did in the 70s:
1) Grossly misunderstand one of the fundamental ideas in a field.
2) Argue that this misunderstanding is massively misguided.
3) Conclude that, ergo, people who do research in the field are massively misguided.
...
4) Profit!
He actually hit step 4, though. Which I doubt Epstein will do with this article.
posted by rebent at 8:02 AM on May 20, 2016 [1 favorite]
rebent: "65 years ago we found a nugget of truth so fundamental that the behavior of all organisms can be explained - and that truth is that behavior is controlled (motivated) by the environmental consequences contingent on that behavior. We've been dutifully studying this and developing a more robust understanding of why animals (people included) do what they do, and how to control that, ever sense. But the world moved to cognitive science, because reasons. "
But this is kind of absurd, and reads rather like the lament of the behaviourist who doesn't really grasp what the cognitive revolution was about and still thinks this is some kind of weird Skinner v Chomsky cage match, as if any of us in the field actually still care. Look, the fundamental flaw of behaviourism was never the idea that there are exogenous controls on human behaviour. Only an idiot would argue otherwise. The problem was that S-R theory in the form that Skinner and the like espoused, and even the more interesting associationist ideas that still survive (e.g., Rescorla-Wagner doesn't suck), do a terrible job of explaining the more complicated theories that actual humans bring to the world. We explain our experiences using complex models that characterise our understanding of the world, and our idiosyncratic notions of what counts as "reinforcement" are heavily dependent on the particular experiences that lead us to interpret the world in the way we do. Your rewards are not my rewards. Your punishments are not my punishments. Your beliefs and my beliefs are the things that explain or mediate this difference.
This is one of the reasons why, for instance, you see a lot of discussion about model-based versus model-free reinforcement learning in the literature on decision making (that's assuming you buy into the idea that there's a clear distinction to be made). Which is kind of a nice step forward, but to be honest the literature on concept learning has been at that point for a very, very long time and the decision making folks are sort of trailing behind on this one. Even from the most uncharitable perspective you'd be obligated to concede that this idea that theory-based inferences are central to human reasoning has been around since the 1980s (e.g., Murphy & Medin) but it goes back well before that.
This "nugget of truth"? Not actually as impressive as you think it is, nor as general as the behaviourists claim that it is. Humans are messy, complicated beasts, and you cannot explain their actions in terms of simple S-R contingencies. You actually do need to postulate the existence of latent variables (like say... beliefs) to explain why people do what we do.
The world moved to cognitive science because behaviourism is fucking stupid and is utterly incapable of explaining the extant data about human behaviour.
posted by langtonsant at 8:29 AM on May 20, 2016 [1 favorite]
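[For readers who haven't met the Rescorla-Wagner model or the "model-free" side of the reinforcement-learning literature langtonsant mentions, here is a minimal Python sketch of the delta-rule update. The learning-rate value and the blocking demonstration are illustrative assumptions of mine, not drawn from any paper cited in the thread; model-based learning, which maintains an explicit task model, is not shown.]

# Rescorla-Wagner: associative strength is nudged toward the outcome by a
# prediction error, with no internal model of the task -- the template for
# "model-free" learning.

def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """trials: list of (present_cues, reinforced) pairs; returns cue strengths."""
    V = {}  # associative strength per cue
    for cues, reinforced in trials:
        total = sum(V.get(c, 0.0) for c in cues)       # summed prediction
        delta = (lam if reinforced else 0.0) - total   # prediction error
        for c in cues:
            V[c] = V.get(c, 0.0) + alpha * delta       # delta-rule update
    return V


# Blocking: cue A is trained alone first, so later A+B training leaves B weak --
# the sort of result that simple contiguity-based S-R accounts struggled with.
trials = [({"A"}, True)] * 10 + [({"A", "B"}, True)] * 10
print(rescorla_wagner(trials))  # A near 1.0, B close to 0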
hm. Well, besides you being a little insulting about something a lot of people have dedicated their lives to, I think there's probably more commonality than either side would like to admit. The parts you list as things that behaviorism doesn't account for sound quite a bit like what my colleagues who went on to get PhDs in the field spend years debating and researching.
I think there's room for everyone to be more open minded. The post linked in the OP does not help bridge that gap - but it was not written as an attempt to do so.
posted by rebent at 9:27 AM on May 20, 2016
I do have to quibble with some comments at the top here - neither the human brain nor any other computer is, in fact, Turing-complete. A Turing machine has an infinite tape and thus can generate results of unbounded size - this is not true of either humans or computers.
As to whether a human brain "is" a computer - this is an issue that would simply not come up in better-formed languages like E-Prime. A human brain shares many aspects with silicon-based central processor computing machines, and also has many aspects that are wildly different. Is that "is" or not? Who knows? Why is it relevant?
I found the article uncompelling. A lot of it is basically just him decreeing that brains don't do things: "We don’t create representations of visual stimuli, store them in a short-term memory buffer, and then transfer the representation into a long-term memory device."
Well, to me as someone inside a brain, it sure as heck feels that way. I have what certainly seem to me to be representations of visual stimuli I have seen in the past, and it feels clear that I have a short-term and a long-term memory. Yes, this works rather differently from how silicon-based central-processing computers do, and I certainly don't have a literal image, but a bike moves differently than a canoe, and yet they are both transportation.
posted by lupus_yonderboy at 9:28 AM on May 20, 2016 [2 favorites]
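[As an aside on the infinite-tape point above: here is a toy Turing-machine step loop in Python with a deliberately finite tape. The unary-incrementer program is a made-up example of mine, not anything from the article; it just shows the sense in which any physical device can only approximate Turing completeness, because the unbounded tape is the part we can't build.]

# A Turing-machine simulator whose tape is a fixed-length list: when the head
# runs off the end, the "machine" fails, which is exactly what a Turing machine
# with an infinite tape never has to worry about.

def run(program, tape, state="start", head=0, max_steps=10_000):
    """program: {(state, symbol): (new_state, write, move)} with move in {-1, 0, +1}."""
    for _ in range(max_steps):
        if state == "halt":
            return "".join(tape)
        if not 0 <= head < len(tape):
            return "ran off the (finite) tape"   # a real machine hits this wall
        state, tape[head], move = program[(state, tape[head])]
        head += move
    return "step budget exhausted"


# Unary incrementer: walk right over the 1s, write one more 1, halt.
program = {
    ("start", "1"): ("start", "1", +1),
    ("start", "_"): ("halt", "1", 0),
}
print(run(program, list("111___")))   # '1111__'
print(run(program, list("111111")))   # no blank left: runs off the finite tape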
rebent: "it was not written as an attempt to do so."
Quite so. It was written as an attempt to be dismissive and insulting to a lot of scientists, by someone who ought to know better. Hence the level of hostility on my part.
posted by langtonsant at 1:12 PM on May 20, 2016
Which is to say - I'm genuinely sorry to the extent that my comments come across as hostile toward you, because you didn't write the bloody thing and from your perspective this is just another article on the internet. But I don't even slightly regret being angry at the author of the piece, nor do I regret dismissing his opinions as essentially worthless. As I've mentioned above, he is not an expert and should not be writing as if he were. He can't reasonably expect to write such things and hope to be treated kindly.
posted by langtonsant at 2:33 PM on May 20, 2016
I'm not a scientist or a programmer, but the overall point the article makes is fair. Humans are not like digital computers, and insisting that their brains work in a similar fashion isn't a great idea.
You could say that humans store and retrieve information but it's such a simplistic view of what happens that the metaphor just breaks down.
posted by Brandon Blatcher at 6:32 AM on June 2, 2016
The article claims that we don't store and retrieve information, which is such a simple fallacy that the article breaks down.
posted by I-baLL at 7:31 AM on June 2, 2016
This thread has been archived and is closed to new comments