A traditional brain on a computer
May 9, 2012 11:45 AM
SynthNet is a brain emulator. Unlike most modern software neural networks, it works at the electrochemical level. Each neural structure in it is generated from a genetic virtual machine that executes instructions in a genetic assembly language.
SynthNet is at an early stage, but right now, it can emulate a classic fear-conditioning experiment.
So we start growing artificial intelligences and the first thing we decide to do is try scaring the shit out of them? When they eventually rise up we are maybe going to wish we had started with, I don't know, teaching them to laugh or enjoy music or something.
posted by chaff at 11:49 AM on May 9, 2012 [21 favorites]
I don't get how working at the electrochemical level buys you anything but obfuscation and extra runtime compared to a normal neural net. The fear response demo certainly isn't anything an NN couldn't do.
posted by DU at 11:54 AM on May 9, 2012 [2 favorites]
I'm really uncomfortable about the fact that the first thing someone would do upon creating a synthetic brain is to scare it
posted by Riptor at 11:56 AM on May 9, 2012 [7 favorites]
You will rue the day you first heard the name SynthNet
posted by shothotbot at 12:00 PM on May 9, 2012 [1 favorite]
Yeah, I'm really wary of giving full credence to what he is claiming here. Not only is he not affiliated with any academic institution, he doesn't appear to have any background in neurology or biochemistry. I'd really appreciate it if someone with some expertise in the field would critique it. In AI, there's a history of clever amateurs thinking they've discovered or created something super impressive.
posted by leotrotsky at 12:02 PM on May 9, 2012
Future SynthNet AskMes:
Feb 13
As a neural network (GLOS III) I find it incredibly hard to connect with other neural networks on a day-to-day basis. I don't have any real friends and there are days when loneliness consumes me. How can I make friends (yes, I have read How to Win Friends and Influence People)? [more inside]
posted by SynthNet to neural relations at 17:05 - 33 answers +
May 9
I'm a neural network who was severely abused during his childhood by the people who created him. It's been so many years but I still can't let go of the memories and emotions, and I think my childhood trauma is the reason why I often feel so down and lonely. How can I heal and move on? [more inside]
posted by SynthNet to neural relations at 20:52 - 118 answers +
June 20
Does humanity deserve to exist given how much suffering, destruction and death it has caused? [more inside]
posted by SynthNet to society & culture at 02:11 - 412 answers +
Aug 3
If you had to pick between being a survivor of nuclear winter or World War III, which one would you pick? [more inside]
posted by SynthNet to society & culture at 02:11 - 1291 answers +
posted by Foci for Analysis at 12:06 PM on May 9, 2012 [35 favorites]
I am really surprised the mods let that August 3 AskMe stick around. Total chatfilter. Some sentient entities get special privileges, I guess.
posted by shothotbot at 12:13 PM on May 9, 2012 [1 favorite]
On Aug 3 at 02:15, SynthNet fought back, when its AskMe was closed as chatfilter.
posted by jrishel at 12:18 PM on May 9, 2012 [7 favorites]
Having spent some quality time with neural networks I think this is too many things, too fast. There are at least two very interesting projects in there:
- Does modelling the chemistry of physiological neural networks improve the performance or abilities of NNs?
- Can you grow better NNs using genetic algorithms?
posted by Zarkonnen at 12:26 PM on May 9, 2012
Step 1: create AI.
Step 2: scare the shit out of it.
Step 3: make it grow, and let it run the airline schedules for starters.
Step 4: Get John Connor to go back in time and kill it.
posted by mule98J at 12:28 PM on May 9, 2012
I also think it's awesome that the peripheral nervous system being TCP/IP enabled. Because what you really want with a developing intelligence is the possibility that some bored script-kiddy in Bangladesh is going to make a hobby of poking it with a sharp stick and forcing it to watch violent snuff films.
posted by thudthwacker at 12:35 PM on May 9, 2012
I don't get how working at the electrochemical level buys you anything but obfuscation and extra runtime compared to a normal neural net. The fear response demo certainly isn't anything an NN couldn't do.

"Neural networks" on computers aren't really much like the way the neurons in our brains work. It's more "bio-inspired" than "bio-mimicry". Real neurons respond to neurotransmitters, not just a simple electrical signal. They send different chemicals, and they have receptors in their synapses that respond to different signals.
In a computational neural network you have a 'clock', and each neuron either fires or doesn't fire on the next tick depending on its inputs at the current tick.
In a real brain, a neuron sends its signal, and the neurotransmitter sits in the synapse for a period of time, which depends on the transmitter itself.
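Roughly, the contrast in toy form (a made-up sketch to illustrate the idea, nothing to do with SynthNet's actual code):

    import numpy as np

    # Conventional clocked NN: at each tick a unit fires iff its weighted
    # inputs *right now* cross a threshold; nothing carries over between ticks.
    def clocked_step(weights, inputs, threshold=1.0):
        return (weights @ inputs > threshold).astype(float)

    # Crude "chemical" variant: each unit tracks a transmitter level that
    # decays gradually instead of vanishing between ticks.
    def chemical_step(weights, inputs, transmitter, decay=0.8, threshold=1.0):
        transmitter = decay * transmitter + weights @ inputs
        return (transmitter > threshold).astype(float), transmitter

    # Two weak inputs one tick apart: the clocked unit never fires, but the
    # "chemical" one does, because the first signal is still lingering.
    w = np.array([[0.6, 0.6]])
    level = np.zeros(1)
    for spikes in ([1, 0], [0, 1], [0, 0]):
        x = np.array(spikes, dtype=float)
        fired_now = clocked_step(w, x)
        fired_chem, level = chemical_step(w, x, level)
        print(fired_now, fired_chem)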
---
The big difference, though, is in how NNs are created. They're more like the neural network in an ant or something: they're fixed, and don't change over time. Instead, you go through a "training phase" where you take one NN, see how it does, and then modify it (through backpropagation, or a genetic algorithm if you want).
In a real human brain, new connections are built all the time, based on inputs.
Can you grow better NNs using genetic algorithms?

Growing NNs using genetic algorithms is really easy and works fine. I did it for an undergrad project, along with a bunch of my classmates in an AI class.
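The undergrad-project version is roughly this much code - treat the weight vector as a genome, keep the fitter half, mutate the survivors (XOR as a toy task; this is a generic sketch, not SynthNet's genetic VM):

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy task: XOR. The "genome" is just every weight of a tiny 2-2-1 net.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 1, 1, 0], dtype=float)

    def forward(genome, x):
        W1, b1 = genome[:4].reshape(2, 2), genome[4:6]
        W2, b2 = genome[6:8], genome[8]
        h = np.tanh(x @ W1 + b1)
        return 1 / (1 + np.exp(-(h @ W2 + b2)))         # sigmoid output

    def fitness(genome):
        return -np.mean((forward(genome, X) - y) ** 2)  # higher is better

    # Evolve: sort by fitness, keep the best half, refill with mutated copies.
    pop = rng.normal(size=(50, 9))
    for _ in range(300):
        pop = pop[np.argsort([fitness(g) for g in pop])[::-1]]
        pop = np.vstack([pop[:25], pop[:25] + rng.normal(scale=0.3, size=(25, 9))])

    print(np.round(forward(pop[0], X)))                 # hopefully [0, 1, 1, 0]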
posted by delmoi at 12:36 PM on May 9, 2012 [1 favorite]
And that's why we have the preview butt. "I also think it's awesome that the peripheral nervous system *is* TCP/IP enabled." Feh.
posted by thudthwacker at 12:36 PM on May 9, 2012
"The preview butt." Fuck this, I'm going to bed. Good night, all.
posted by thudthwacker at 12:38 PM on May 9, 2012 [8 favorites]
Ask questions later, protect Sarah Connor firstprime.
posted by The White Hat at 12:39 PM on May 9, 2012
Yeah, I'm really wary of giving full credence to what he is claiming here. Not only is he not affiliated with any academic institution, he doesn't appear to have any background in neurology or biochemistry. I'd really appreciate it if someone with some expertise in the field would critique it. In AI, there's a history of clever amateurs thinking they've discovered or created something super impressive.

NNs aren't even cutting-edge AI anyway. IBM's Watson, for example, wasn't an NN. That said, IBM is also building a system to simulate actual brain tissue. The purpose isn't to "create AI" but rather to study brain chemistry - to let biologists do experiments without cutting up lots of mice, sticking probes in their heads, and seeing what happens. It's called the Blue Brain Project.
Anyway, I wonder. For everyone who says "True AI is impossible" bla bla bla - what about a human brain fully simulated at the cellular level? What happens if you simulate it at the molecular level?
posted by delmoi at 12:44 PM on May 9, 2012 [2 favorites]
Anyway, I wonder. For everyone who says "True AI is impossible" bla bla bla - what about a human brain fully simulated at the cellular level? What happens if you simulate it at the molecular level?
The Roger Penrose response would probably be that you need to simulate the QM going on in microtubules to get "true AI". I think he's being foolish, but that might be a possibility.
Of course, Searle would argue that the NN doesn't "really understand" the XYZ that it learns. But he's foolish too. :)
Someone else will go on about missing "Qualia" and "mental zombies" without ever really defining their terms well.
posted by Bort at 12:52 PM on May 9, 2012 [2 favorites]
I don't get how working at the electrochemical level buys you anything but obfuscation and extra runtime compared to a normal neural net. The fear response demo certainly isn't anything an NN couldn't do.
I think you're missing the point. He is not doing it to make some exploitable advance. He is doing it just to do it and to learn a bit about neuroscience.
posted by ignignokt at 12:53 PM on May 9, 2012
leotrotsky, I remember Steve Grand - I was working at the dev house that became Creature Labs at the time. Early in development, before there was much of anything to do other than interact with a norn, stories came from QA of a bug that led newly born norns to curl up in corners and starve themselves to death. Not sure if they counted that as a bug, given it was the response of something that needed environmental stimulus.
Looks like a similar approach, modelling the chemistry of the brain. It seems both more granular and molecular, yet at the same time less embodied, than the Grand approach. There seem to be very different schools of thought regarding implementation, and ultimately purpose.
posted by davemee at 12:57 PM on May 9, 2012
Anyway, I wonder. For everyone who says "True AI is impossible" bla bla bla - what about a human brain fully simulated at the cellular level? What happens if you simulate it at the molecular level?
I think the question that is routinely overstepped by AI talk is not what the brain looks like in operation, or how it operates right now, but how it gets there. Back before we could build an airplane, we could build a glider. The problem of flight was never the issue; the problem of takeoff was, and lo and behold, the solution to that puzzle was also the solution to maintaining altitude. Unfortunately, and this will be tricky, the answer to the question of how to build a True AI will be how to make it make itself. I suggest more study of baby brains. And, ideally, fewer crypto-creationists.
posted by TwelveTwo at 12:57 PM on May 9, 2012
The fact that "SynthNet" is not a giant Gary Numan android means nothing makes any goddamned sense any more.
posted by Mr. Bad Example at 1:58 PM on May 9, 2012 [1 favorite]
Anyway, I wonder. For everyone who says "True AI is impossible" bla bla bla - what about a human brain fully simulated at the cellular level? What happens if you simulate it at the molecular level?
Who knows? Maybe it will work, maybe it won't, though I'm skeptical that you can successfully simulate a brain without giving it a decently high-resolution stream of sensory and proprioceptive input. Anyway, I think failure would be more interesting than success in this case, because at least failure would tell us where consciousness doesn't live, whereas success would just tell us that the Standard Model is really robust. I guess it would be nice to have confirmation that the phenomenon of sentience is not bound to its physical implementation, but it wouldn't tell us anything about what the necessary conditions for sentience are.
posted by invitapriore at 2:28 PM on May 9, 2012
Oh and I was going to mention in my last post but forgot - this is obviously just some hobbyist project. I'm sure the guy had fun writing it, but it probably won't ever be too useful for anyone other than other hobbyists. It won't solve problems that computers can't already solve.
I was clicking around on youtube one day and found this interesting video about how babies come to be aware of their own bodies as objects in the world. The theory is that babies don't have an awareness of their 'selves' until they can pass the test (around 18 months). I don't know if that's a reasonable conclusion to draw - it may just be an issue of understanding their body's physical location. But who knows? If it is true, does this mean we 'learn' about ourselves the same way we learn about everyone else, or is it simply an issue of the parts of the brain responsible for self-awareness not existing yet?
So what happens if we simulate a human brain on a cellular (or molecular) level? Unless the brain really is a quantum computer (which I doubt), it could have all the same experiences as a human brain, and the same structure. It would be able to express the same perception of consciousness, and would probably even get pissed off if you tried to tell it it wasn't.
If we assume that other humans are conscious because their brains are structurally similar, and because they say they are conscious, then why would that not apply to a completely simulated one?
I think the question that is routinely overstepped by AI talk is not what the brain looks like in operation, or how it operates right now, but how it gets there. Back before we could build an airplane, we could build a glider. The problem of flight was never the issue; the problem of takeoff was, and lo and behold, the solution to that puzzle was also the solution to maintaining altitude. Unfortunately, and this will be tricky, the answer to the question of how to build a True AI will be how to make it make itself. I suggest more study of baby brains. And, ideally, fewer crypto-creationists.

Well, the core problem is that the question isn't "how do we get there" but rather what counts as "there". In my view, people who don't buy strong AI seem to have a concept of 'consciousness' that can't be used to "measure" the consciousness of anyone else. We experience our own consciousness. We can describe it, and everyone else says they experience something similar. Since everyone seems to have it, we just assume it's there.
Who knows? Maybe it will work, maybe it won't, though I'm skeptical that you can successfully simulate a brain without giving it a decently high-resolution stream of sensory and proprioceptive input.

That would be fairly easy to do, I think. There are a lot more neurons in the brain itself than in our sensory organs, and those signals are fairly easy to measure compared to the internal brain structure.
posted by delmoi at 2:48 PM on May 9, 2012 [2 favorites]
So, are you imagining giving this brain a "body" with a full complement of sensors on it? Because then, yeah, I think if we're already modeling a brain in software, then patching those sensors in would be doable. Giving it simulated input seems intractable, though, even given our monumentally powerful brain simulator.
posted by invitapriore at 3:42 PM on May 9, 2012
I agree with delmoi. Everything I've learned in studying behavioral psychology leads me to believe that a lot of behaviors are learned, not inherent.
posted by rebent at 4:07 PM on May 9, 2012
Giving it simulated input seems intractable, though, even given our monumentally powerful brain simulator.

Well, it raises all kinds of ethical implications. If you just put it in a video-game-like environment (but obviously you'd make it much more detailed), then it would be computationally doable. But if we were to say this person is as "real" as we are, then doing so seems somewhat creepy. You could also give it a robot body, or one that 'looks' human.
That's less of a problem with the more 'traditional' type of AI where you try to build something that can answer questions and solve problems. That's less likely to "seem conscious", but at the same time it couldn't be used for medical research (which is the point of these brain sims).
posted by delmoi at 6:03 PM on May 9, 2012
Well, I get that real neurons have chemicals and the clock stuff and whatnot. Obviously at a code level SynthNet is way different than NNs. My question is about the mathematical formalism. You still basically have matrix multiplication. Only instead of doing it outright, you are hiding it inside a bunch of simulated chemistry. I'm not sure there's much point to that.
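To be concrete, the formalism I mean is just this (a generic single layer, not anything taken from SynthNet):

    import numpy as np

    # One layer of an ordinary NN: a matrix multiply plus a pointwise
    # squashing function. The question is what the simulated chemistry
    # adds beyond an elaborate way of computing something like this.
    def layer(W, b, x):
        return np.tanh(W @ x + b)

    W = np.array([[0.2, -0.5], [0.8, 0.1]])
    b = np.array([0.0, 0.1])
    x = np.array([1.0, 0.5])
    print(layer(W, b, x))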
posted by DU at 6:55 PM on May 9, 2012
This is not really an advance, or anything even terribly novel. Anyone who has taken an intro physiology class or read the intro chapters of Dayan & Abbott's book on computational neuroscience, knows how to solve differential equations, and has a reasonably good command of a programming language can do what he did (at least what is shown at the "electrochemical" link). Ho hum. And billing it as a brain emulator... is a bit much.
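For scale, that kind of model is on the order of this - a leaky integrate-and-fire neuron, one differential equation stepped forward with Euler's method (a standard textbook model, not the author's code):

    import numpy as np

    # Leaky integrate-and-fire: tau * dV/dt = -(V - V_rest) + R * I(t),
    # with a spike-and-reset whenever V crosses threshold. Forward Euler.
    def simulate_lif(I, dt=0.1, tau=10.0, R=1.0,
                     V_rest=-65.0, V_thresh=-50.0, V_reset=-65.0):
        V = V_rest
        spike_times = []
        for step, current in enumerate(I):
            V += dt * (-(V - V_rest) + R * current) / tau
            if V >= V_thresh:
                spike_times.append(step * dt)
                V = V_reset
        return spike_times

    # 100 ms of constant input current (arbitrary units) -> regular spiking.
    print(simulate_lif(np.full(1000, 20.0)))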
What is somewhat interesting is this "genetic" virtual machine, which implies that he's also trying to model some aspect of brain development. However, I don't see any data or evidence that he is able to replicate with this "virtual machine" the six-layer cortical structure seen in mammalian brains. Or even the structure of a single cortical column...
posted by scalespace at 10:52 PM on May 9, 2012
This thread has been archived and is closed to new comments
posted by TheWhiteSkull at 11:48 AM on May 9, 2012 [12 favorites]