I know what you're thinking...
September 22, 2011 5:37 PM
UC Berkeley researchers have successfully used functional Magnetic Resonance Imaging (fMRI) to decode and reconstruct people’s dynamic visual experiences - in this case, watching Hollywood movie trailers.
Not quite as "oh wow" as it sounds at first, but still pretty darn neat.
posted by Buckt at 5:43 PM on September 22, 2011
posted by wierdo at 5:48 PM on September 22, 2011
get the fuck out how can zomg
posted by neuromodulator at 5:51 PM on September 22, 2011 [1 favorite]
Holy crap that's incredible. For an understanding of how far, and how quickly, we've come: just three years ago the big news was "we're able to decode neural activity to reproduce letters that the subject was looking at for hours."
posted by Bora Horza Gobuchul at 5:54 PM on September 22, 2011 [7 favorites]
the reconstruction was obtained using only each subject's brain activity and a library of 18 million seconds of random YouTube video that did not include the movies used as stimuli.
I wonder why they did this. Was this just a nifty shortcut, or is there some more fundamental limitation that prevents them from reconstructing the images from scratch?
posted by eugenen at 5:56 PM on September 22, 2011 [1 favorite]
what if someday we could all look at what someone was imagining? i'm sure at first we would all be, "well that sucked," because we'd find our visual fidelity is really actually kind of awful but then some like twelve year old will start practising every day and with the feedback from being able to see his output will hone his skills and drop some serious next level shit on us. damn. i wish i was twelve in the future.
posted by neuromodulator at 5:56 PM on September 22, 2011 [11 favorites]
This is like Galileo's telescope. It could change the world.
posted by bonobothegreat at 6:01 PM on September 22, 2011 [1 favorite]
The face images are really strange. On the link notice that the reconstructed guy isn't wearing a hat.
posted by bhnyc at 6:02 PM on September 22, 2011 [1 favorite]
Same topic. Very different responses.
posted by designbot at 6:03 PM on September 22, 2011 [9 favorites]
I have dreamed a complete Preston Sturges film, Sam Peckinpah's version of Conrad's "The Heart of Darkness" (that was the only way I could describe it) and had Annie Lennox sing an original song as I was flying south (just my body) over Brazil. It was frustrating in a way to have had these narrative elements. So, rock on, UC Berkeley.
posted by goalyeehah at 6:04 PM on September 22, 2011 [2 favorites]
That's amazing. I wonder whether the computer program needs to be tuned to each person. And how bout the use of youtube clips to reconstruct the image. For the future sentient robots, the realm of the eide will be youtube clips. Having Robotcrates question you would be like talking to a meme-generator.
posted by architactor at 6:04 PM on September 22, 2011 [1 favorite]
In my case this MRI would produce a cat scan :3
posted by 2bucksplus at 6:22 PM on September 22, 2011 [1 favorite]
5th amendment, it was nice knowing you.
posted by the jam at 6:28 PM on September 22, 2011 [3 favorites]
Wait, is this ... real?
posted by Avenger at 6:29 PM on September 22, 2011 [1 favorite]
Okay, this is a bit misleading. The image shown was not generated from what they "saw" in the brain. The image was created by merging 100 youtube clips that the computer thought would produce a similar response in the brain.
I'm not saying this isn't interesting, but they're not reading the brain like a video tape.
posted by BlackLeotardFront at 6:29 PM on September 22, 2011 [10 favorites]
Not really very groundbreaking, but it makes good Internet press. I expect to see the web full of "OMG! Mind Reading!" articles tomorrow from the usual suspects (AOLington Post - I'm looking at you!).
From the article, the researchers took brain responses to visual stimuli from 100 videos and then compared them to the brain's responses to a different video. They then did a mashup of stills from frames in the videos that most closely matched the target video. So yeah, no mindreading. I would not be surprised at all if this experiment is not repeatable by other researchers and the original researchers (unknowingly... of course) somehow biased their results to achieve this one frame.
It's as if I hooked up a blood pressure cuff to you and showed you porn or not porn, then matched your readings against some target porn/not-porn movie, took out a still frame, and said "Yep - that's what you were watching!" This approach involved a bit more techie mumbo jumbo, but when it comes down to it, there's not much difference from the blood pressure cuff.
posted by Poet_Lariat at 6:37 PM on September 22, 2011 [5 favorites]
The thing with the direct neural implants in the cat's brain doesn't seem to need the huge database mapping YouTube clips to fMRI patterns.
Your cat never consented to be kept inside all its life! Let it out!!!
posted by LogicalDash at 6:38 PM on September 22, 2011
I have dreamed a complete Preston Sturges film, Sam Peckinpah's version of Conrad's "The Heart of Darkness" (that was the only way I could describe it) and had Annie Lennox sing an original song as I was flying south (just my body) over Brazil. It was frustrating in a way to have had these narrative elements. So, rock on, UC Berkeley.
I've had dreams like that, too. They're great!
And please note: fMRI machines do scan tissues, but they do not replay thoughts on video. Only people who make up shit relying on the power of suggestion can do that.
posted by ovvl at 6:51 PM on September 22, 2011
Sorry: they do not replay VISUAL IMAGES OF thoughts on video.
posted by ovvl at 6:55 PM on September 22, 2011
i think some of these responses here are rather cynical, or maybe people just have very high expectations. how is reconstructing the stimulus out of a basis set of clips from youtube not a valid approach to decoding what someone is viewing? most any decoding algorithm needs some kind of basis set, and stills from youtube are a convenient one.
posted by alk at 6:57 PM on September 22, 2011 [2 favorites]
I haven't read the abstract, to say nothing of the paper behind it, and I'm certainly not going to put a lot of stock into a breathless press release from a university's public relations office.
But:
- fMRI has a temporal resolution of about 1-2 seconds: one TR (repetition time). The scanner doesn't actually shoot in 3D; it shoots successive "slices" of the brain (or another region of interest), and it takes one TR period for the cycle to complete and return to the starting point.
- fMRI has a spatial resolution of about a cubic millimeter or so. This means that a single reconstructed voxel reflects the average activity in millions of neurons.
- To the best of our knowledge, when you see stuff, there isn't a little movie playing in the back of your head that researchers can look at. You don't take in reality as a stream of images. There are parts of your brain that aggregate input from the rods in your retina, and other parts that aggregate input from cones. The information from rods gets passed on to another area that selectively reacts to edge orientation, and also another area that detects motion. There are areas that make faces salient and areas that make us susceptible to optical illusions. There's actually a fascinating open question of why we feel that we see the world as a single stream of images, rather than a wealth of filtered, processed, and abstracted inputs from widely scattered cortical areas.
- Oh, and don't forget that you have two retinas, one in each eye, that each transmit similar, but crucially different images to the rest of the brain. Or, rather, a part of each retina projects to each of the brain's hemispheres separately. And there are parts of the visual cortex that attempt to interpret raw input into depth perception, and so on…
Without reading the paper, but being somewhat familiar with this research area, I would guess the researchers monitored activation in the primary visual cortex as subjects repeatedly watched short clips in the scanner. The resulting data was processed using some sort of pattern identification algorithm to identify relationships like "voxels 145-160 light up when there's something yellow in sector G." The resulting "trained" model was then used to guess what the subject was looking at based on existing data alone, resulting in the reconstructed images in the press release.
I think this kind of research is all kinds of cool, but "brain-jacking" it ain't.
posted by Nomyte at 7:00 PM on September 22, 2011 [14 favorites]
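To make Nomyte's guess concrete, here is a minimal sketch of that kind of voxel/feature relationship. All names, shapes, and data here are hypothetical stand-ins, not the paper's actual code; it just correlates each voxel's time course with one stimulus feature and reports the voxels that track it.

```python
import numpy as np

# Hypothetical training data (stand-ins, not the study's data):
# bold: (n_timepoints, n_voxels) BOLD responses during the training movies
# yellow_in_G: (n_timepoints,) 1.0 if something yellow is in "sector G"
rng = np.random.default_rng(0)
bold = rng.normal(size=(1000, 5000))
yellow_in_G = rng.integers(0, 2, size=1000).astype(float)

# Correlate every voxel's time course with the feature.
bold_z = (bold - bold.mean(axis=0)) / bold.std(axis=0)
feat_z = (yellow_in_G - yellow_in_G.mean()) / yellow_in_G.std()
r = bold_z.T @ feat_z / len(feat_z)  # (n_voxels,) correlation per voxel

# "Voxels 145-160 light up when there's something yellow in sector G":
print("voxels most tuned to the feature:", np.argsort(r)[::-1][:16])
```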
eugenen wrote: I wonder why they did this. Was this just a nifty shortcut, or is there some more fundamental limitation that prevents them from reconstructing the images from scratch?
Yeah, the MRI only detects blood flow, and that's not granular enough data to reconstruct what you're thinking, although it is enough to run a pattern match and get a vague (for some definition of vague) idea of what's going on in that section of the brain. Apparently similar images cause similar patterns of blood flow in the visual cortex.
You'd have to have much better resolution and somehow be able to measure the electrochemical flows in the brain to actually suck an image out of someone's brain.
posted by wierdo at 7:05 PM on September 22, 2011 [1 favorite]
So I did my PhD at Berkeley and know Jack (Gallant) and his students and post-docs pretty well. This is very cool, and the stuff they've got coming out soon is even more amazing. This research is an extension of Kendrick Kay's 2008 Nature paper with Jack that also got a ton of press.
To answer architactor's question, their models are tuned to each person. A regression model is built up for each voxel for each person, trained on tons of data prior to the reconstruction step.
So Nomyte, the reconstruction is based on the whole brain, not just visual cortex.
Gallant's lab is doing some of the most methodologically sound fMRI research in the world right now. They're statistically very tight and sophisticated. They make SPM/AFNI/etc. folks look... very basic.
posted by bradleyvoytek at 7:09 PM on September 22, 2011 [6 favorites]
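For anyone wondering what "a regression model built up for each voxel for each person" looks like in outline, here is a sketch under assumed array shapes. Ridge regression is one common choice for this kind of encoding model; this is an illustration, not necessarily the lab's exact method.

```python
import numpy as np

def fit_voxelwise_ridge(features, bold, alpha=10.0):
    """Map stimulus features (n_timepoints, n_features) to BOLD
    responses (n_timepoints, n_voxels) with one ridge regression
    per voxel. All voxels share the same design matrix, so a single
    solve returns the full (n_features, n_voxels) weight matrix."""
    n_features = features.shape[1]
    gram = features.T @ features + alpha * np.eye(n_features)
    return np.linalg.solve(gram, features.T @ bold)

def predict_bold(features, weights):
    """Predicted responses to new stimuli: (n_timepoints, n_voxels)."""
    return features @ weights

# Trained separately for each subject, on that subject's own scans:
# weights_subj1 = fit_voxelwise_ridge(train_features, train_bold_subj1)
```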
Amusingly, designbot, that research was also done at the Berkeley Helen Wills Neuroscience Institute where Gallant (of this paper) is also faculty.
Gallant adopted the techniques used in that research for this human fMRI work.
So uh... I guess people should be equally angry about this human research because--you know--it's based on animal research?
posted by bradleyvoytek at 7:20 PM on September 22, 2011
Yeah, I don't buy this either. The decoding of the cat's visual neurons by direct electrode linkage, that I can buy. But the analogy I would make of this experiment is a bit different. It's like they have a huge database of YouTube videos indexed to people's reactions, and then they showed one to a guy and he said "wow!" so they pulled all the videos that people said "wow!" to and composited them together.
posted by charlie don't surf at 7:28 PM on September 22, 2011
I'm finding it fascinating that faces are consistently relatively clear, while other elements tend to blur out (the elephant) or try to turn into faces (the flying bird).
I'm not sure whether to ascribe that to the neuron data or the fact that youtube videos probably contain a disproportionate number of faces compared to, say, elephants or flying birds.
posted by ook at 7:28 PM on September 22, 2011
This is kind of like Van Eck phreaking for the brain.
Totally plausible, in theory. The current state of the tech is rather more modest than the imagination can conceive it to be. Still, the theory is demonstrable and a little frightening.
Heh. I'd love* to be involved in a project to develop hand-held/truck-mountable fMRI, though. It could be kind of like the google van, only taking pictures of people's thoughts.
* if only to sabotage the project. really. honest! But really, it'd be rich defense/antiterrorist money and really still many years out from feasibility.
posted by porpoise at 7:43 PM on September 22, 2011 [1 favorite]
I'm not really liking the idea of a van filled with 400 gallons of liquid helium.
posted by Nomyte at 7:55 PM on September 22, 2011
Forget people. I wanna know what happens when a dead Atlantic salmon watches a movie trailer.
posted by asterix at 8:06 PM on September 22, 2011 [2 favorites]
I still don't really get this. How did they map from the brain to the millions of Youtube clips without showing the clips to people getting an fMRI?
posted by smackfu at 8:09 PM on September 22, 2011
When can I record movies of my dreams?
posted by Popular Ethics at 8:19 PM on September 22, 2011
I was all ready to be snarky "That's bullshit! They're not mind-reading, just pattern-matching!" when I saw this on /. earlier.
But, after reading the (much more intelligent ;-) responses here, and taking another look at it, it's pretty bloody amazing. Let's consider it, scene by scene:
- Man on right, moving against background :: person-shaped blob on right, moving against background
- Text centred on screen :: text-like stuff centred on screen
- Blob shrinking & blooming, resolving into an image :: blob blooming and shrinking, resolving into some sort of image
- Man on left, head bobbing :: person-shaped blob on left, head-shaped blob moving
- Elephant's foot on left :: ovoid blob on left (this is quick; you may have missed it)
- Elephants walking left to right :: large blobs moving left to right
- Red parrot, vertically oriented, in centre of screen :: red blob, vertically oriented, in centre of screen
- Woman's head, slightly left of centre, top tilted left :: head-shaped blob, slightly left of centre, top tilted left
- Plane flying horizontally, oriented across centre of screen :: horizontal blob, oriented across centre of screen
- Woman's head on right, top tilted right, caption bottom left :: head-shaped blob on right, top tilted right, caption-ish blur on bottom left
posted by Pinback at 8:21 PM on September 22, 2011 [6 favorites]
Okay, this is a bit misleading. The image shown was not generated from what they "saw" in the brain. The image was created by merging 100 youtube clips that the computer thought would produce a similar response in the brain.
This. It reminds me of doing a reverse image search on google. yeah, kind of neat, but not what it's made out to be. what it really is, is "Hey look at how amazing our thing is! Give us money.", while they promise in 5 to 10 years (the most common thing said) it will come to fruition. To put this in perspective, about 20 years ago i was diagnosed with a health problem, and all the specialists i went to basically told me "There is this thing that is being researched that in 5 to 10 years will fix that." Guess what? It's still essentially vaporware. :P
posted by usagizero at 8:33 PM on September 22, 2011
The PR piece is kind of a reach given the (as noted above) poor resolution of MRI compared to neural columns and relays and the rather chaotic relation of optic nerve input to our reconstructed perception of visual "reality", but I am going to give them a pass because someone has watched Brainstorm. That's a fun movie that deserves to be seen more.
posted by meehawl at 8:50 PM on September 22, 2011
It's really frustrating to see the negativity based solely on the use of youtube clips. Those clips just represent the raw materials that the researchers used to construct the final images. Of course it's not a "picture" of what was happening in the brain -- it's a reconstruction using blended video clips as the palette. Presumably that approach provided the researchers with a good catalog of realistic shapes and motion trajectories to draw on, but anything that they produced would have necessarily relied on artificial visual raw materials of some sort, even if they were computer-generated blobs and colors. I'm a psychologist, and I say that this is fucking awesome.
posted by svenx at 9:01 PM on September 22, 2011 [1 favorite]
I remember when Dr. Mindbender was doing related work back in the 80s at Springfield U.
posted by sevenyearlurk at 9:06 PM on September 22, 2011
Wow, this looks like how I dream...
I love that science is created with naivety. Later, someone figures out how to hurt and kill people with it.
posted by Bridymurphy at 9:12 PM on September 22, 2011
Oh man, I've been on an fMRI kick lately myself; this technology is making amazing advancements in neuroscience and human behavior studies.
Specifically, that the brain activity exhibited by people who have just fallen in love is similar to that of couples who have been in love for 21 years.
It's amazing to see the science of human behavior from a different perspective than that of observational behavior studies, which are notoriously self-selective and short term.
But this? This is - well, Philip K. Dick would be having a field day.
posted by Unicorn on the cob at 9:13 PM on September 22, 2011 [1 favorite]
But what did the salmon watching the clips experience?
posted by Rumple at 9:15 PM on September 22, 2011
Wow, that previous thread. All it takes is one cat to turn MetaFilter into anti-science Freepers.
posted by adamdschneider at 10:10 PM on September 22, 2011 [2 favorites]
this and those cat-visual-cortex experiments are of course gee-whiz and neat but jesus christ we are literally trying to figure out how to see what a person is thinking
it is goddamn terrifying what this has the potential to be used to do
oh well at least the country will be too broke to give this to the cops by the time it's useful for that
fucking yikes, though
posted by This, of course, alludes to you at 10:12 PM on September 22, 2011
uhh sure thanks for the link
posted by This, of course, alludes to you at 10:52 PM on September 22, 2011
Holy shit, that's the most goddamned amazing thing I've ever seen.
And who cares if they're reconstructing it from youtube videos? They train OCR software on real books; why wouldn't they train visual software on videos?
posted by empath at 11:07 PM on September 22, 2011 [1 favorite]
And who cares if they're reconstructing it from youtube videos?
The number of truly random motion paths in Youtube video is pretty low, by my wild-ass guess. There are not many shots of elephants squished right up at the top of the screen, or sideways faces. People don't walk backwards. There are only a few cinematic paths that objects take.
Has anyone tried the same experiment using only eye motion tracking software? For the brief clips presented, it looks like the main advantage of using the fMRI is that you can detect activity in areas responsible for face recognition, reading, and human motion detection (mirror neurons); this allows you to know that it's a face, text, or human body that's moving in a certain direction or visual quadrant.
posted by benzenedream at 12:01 AM on September 23, 2011
There's a sequence in the book Freedom™ where the totally-not-an-artificial-intelligence-well-okay-kind-of uses fMRI to read a prisoner's emotional reaction to a given image, then uses that data to narrow in on the most affecting images it can show, and uses that to suss out a person's political leanings and religion.
creeeeeeepy
posted by LogicalDash at 3:40 AM on September 23, 2011 [1 favorite]
Okay, this is a bit misleading. The image shown was not generated from what they "saw" in the brain. The image was created by merging 100 youtube clips that the computer thought would produce a similar response in the brain.
Neurons don't create a bitmap image (it would be inefficient in terms of use of bandwidth, basically, and waste precious energy), they fire in response to features. What we "see" is the combination of all the features currently activated by our sensory input. If the youtube videos that evoke a similar brain state to a given stimulus all share some subset of common features, averaging them should show the presence of only those features also found in the stimulus image while masking the features that are not held in common (because those are probably somewhat randomly distributed). As such, averaging the youtube images (i.e. sums of features) is a fairly direct proxy for adding up all the individual features themselves (which are rather unattainable), and that really is the relevant act of "seeing."
The youtube videos, being natural stimuli (things we see in normal behavior, as opposed to something like white noise or computer generated moving lines), should collectively possess a lot of the higher order sets of features that we actually use in processing an image, which is how the reconstructions can be so good. That is to say, we don't just think of a face as a round ovoid thing to the left side of an image, but we also have neurons that are actively recognizing it as a face itself. We can see objects as whole objects, not just as the collections of lines and colors that make them up (try getting a computer to do that — it's not trivial). That is being physically handled by certain unique combinations of neurons firing. The actual neuronal encoding of these feature sets is self-organized during brain development, though. Even though there are commonalities in the general way people look at things, the details for every person (exactly which group of neurons fires in response to which features) wind up different. This is why the researchers need to train the model for every person individually, and it's why an untrained brain scanner is never going to be able to work with anywhere near the specificity that these guys can show already. And since I expect that that sort of higher level recognition isn't done in the visual system, the whole brain scanning is useful. I really wouldn't have thought that they could get near this far with fMRI, actually.
posted by Schismatic at 4:10 AM on September 23, 2011 [1 favorite]
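Schismatic's averaging argument is easy to sketch, assuming an already-fit encoding model like the one above and a simple correlation score. The paper's actual weighting is Bayesian, so treat this as an illustration of the idea, not the method; all names and shapes are assumptions.

```python
import numpy as np

def reconstruct_frame(observed_bold, library_features, library_frames,
                      weights, top_k=100):
    """observed_bold: (n_voxels,) response to the unknown stimulus.
    library_features: (n_clips, n_features) for the prior video library.
    library_frames: (n_clips, H, W, 3) one representative frame per clip.
    weights: (n_features, n_voxels) per-subject encoding model."""
    predicted = library_features @ weights        # (n_clips, n_voxels)
    # Score each clip by correlating its predicted response
    # with the observed response.
    pz = predicted - predicted.mean(axis=1, keepdims=True)
    pz /= pz.std(axis=1, keepdims=True)
    oz = (observed_bold - observed_bold.mean()) / observed_bold.std()
    scores = pz @ oz / observed_bold.size         # (n_clips,)
    best = np.argsort(scores)[::-1][:top_k]
    # Averaging the best matches keeps the features they share
    # (head-shaped blob on the left, say) and washes out the rest.
    return library_frames[best].mean(axis=0)
```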
Actually, the authors have some other movies and a really good FAQ about the work on their site. They argue that they aren't putting together particularly high order features from areas outside early visual cortex processing, like I would have thought, but I wonder about that given how consistently people are mapped to people and text to text in the second movie shown. Those may just be particularly clear cases, though.
posted by Schismatic at 4:20 AM on September 23, 2011
I believe this entire subject was covered in a documentary by Wim Wenders entitled "Until The End Of The World" ...
OK, it's not really a documentary. Work with me here.
posted by kcds at 4:35 AM on September 23, 2011
A few thoughts:
- I wonder at what level of the brain's processing these images are extracted. If this is just a roundabout way of reading the optical nerve's output, it's not that impressive, but if it's actually modeling the deeper patterns that watching something triggers, it's much more impressive. (Compare scanning a document - you have the direct representation in the form of a bitmap image, but you also have the semantic representation of the OCRed text.)
- I didn't see it mentioned, but I wonder if they tried to do an fMRI while the subject visualized something instead of just watching a video. Now that would be mind reading.
- I would like to see this experiment performed with a collection of artificially created clips as the source material instead of youtube videos. The simplest idea would be to just have blanks of various colors, then see if you could get the average color of the watched clip out of the measurements (a toy version of this idea is sketched below).
posted by ymgve at 6:01 AM on September 23, 2011
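ymgve's last suggestion can be prototyped end to end with synthetic data. Here is a toy version that assumes a voxel population responding linearly, with noise, to the RGB of a solid-color screen, which is obviously a huge simplification of real visual cortex; every name and number here is made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels, n_train = 200, 500

# Synthetic "brain": each voxel responds linearly to screen RGB, plus noise.
true_weights = rng.normal(size=(3, n_voxels))
def scan(rgb):
    return rgb @ true_weights + rng.normal(scale=0.5, size=n_voxels)

# Training phase: show solid-color blanks, record the responses.
train_rgb = rng.uniform(size=(n_train, 3))
train_bold = np.array([scan(c) for c in train_rgb])

# Decoder: least-squares map from BOLD back to RGB.
decoder, *_ = np.linalg.lstsq(train_bold, train_rgb, rcond=None)

# Show an unseen color and read it back out of the measurements.
target = np.array([0.8, 0.2, 0.1])  # a reddish screen
recovered = scan(target) @ decoder
print("target:", target, "recovered:", recovered.round(2))
```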
It would be difficult to do fMRI scans of dreamers. An MR scanner during operation sounds like the world's loudest dot matrix printer. It also requires participants to be completely stationary during the scanning process to avoid image smearing. And by "completely stationary" I mean wedged into a head coil with padding cushions to prevent even the most minute movement.
posted by Nomyte at 8:46 AM on September 23, 2011
Jesus those head coils look a little too 1984/Winston Smith/Rats gnawing my face off for me to ever even want to ponder engaging in this experiment.
posted by symbioid at 11:00 AM on September 23, 2011
It would be difficult to do fMRI scans of dreamers. An MR scanner during operation sounds like the world's loudest dot matrix printer. It also requires participants to be completely stationary during the scanning process to avoid image smearing. And by "completely stationary" I mean wedged into a head coil with padding cushions to prevent even the most minute movement.
Shouldn't be that hard to pull off if participants are chemically sedated, should it?
posted by saulgoodman at 1:21 PM on September 23, 2011
Chemical sedation reduces/eliminates REM sleep, doesn't it?
posted by wierdo at 3:50 PM on September 23, 2011
what if someday we could all look at what someone was imagining? i'm sure at first we would all be, "well that sucked," because we'd find our visual fidelity is really actually kind of awful but then some like twelve year old will start practising every day and with the feedback from being able to see his output will hone his skills and drop some serious next level shit on us. damn. i wish i was twelve in the future.
Holophoner virtuosos
posted by T.D. Strange at 4:11 PM on September 23, 2011
I don't know. I'm not sure. Some quick google-fu suggests most sedatives suppress or reduce REM sleep, but it's not clear to me all sedatives are equally problematic.
posted by saulgoodman at 5:19 PM on September 24, 2011
This thread has been archived and is closed to new comments