AlphaGo's farewell?
May 28, 2017 4:37 AM
Lessons from AlphaGo: Storytelling, bias and program management "Over the past few days, AlphaGo has taken the world by storm once again. Over a week in Wuzhen, it beat the world's best player Ke Jie three times, a team of players from China, and finally lost a game (unavoidable, since it played against itself in a human pair-go match) ... In fact, the most interesting reveal happened only after the match, and that is when DeepMind released the first set of self-play games where AlphaGo played itself (similar to how it is trained in order to improve the AI). Those games were surprisingly non-human, so much so that it is not clear at a glance if the average human go player can learn anything from them."
posted by dhruva at 4:38 AM on May 28, 2017
From the "previously" link: "The techniques employed in AlphaGo can be used to teach computers to recognise faces, translate between languages, show relevant advertisements to internet users or hunt for subatomic particles in data from atom-smashers."
From the "DeepMind released the first set" link: "...the Future of Go Summit is our final match event with AlphaGo. The research team behind AlphaGo will now throw their energy into the next set of grand challenges, developing advanced general algorithms that could one day help scientists as they tackle some of our most complex problems, such as finding new cures for diseases, dramatically reducing energy consumption, or inventing revolutionary new materials."
While Alphabet Inc's current branding of AlphaGo is aiming at loftier goals, I'm not sure this means the parent company of Google has suddenly lost interest in using their advanced AI to make more advertising revenue.
posted by ardgedee at 5:34 AM on May 28, 2017 [2 favorites]
From the "DeepMind released the first set" link: "...the Future of Go Summit is our final match event with AlphaGo. The research team behind AlphaGo will now throw their energy into the next set of grand challenges, developing advanced general algorithms that could one day help scientists as they tackle some of our most complex problems, such as finding new cures for diseases, dramatically reducing energy consumption, or inventing revolutionary new materials."
While Alphabet Inc's current branding of AlphaGo is aiming at loftier goals, I'm not sure this means the parent company of Google has suddenly lost interest in using their advanced AI to make more advertising revenue.
posted by ardgedee at 5:34 AM on May 28, 2017 [2 favorites]
Since I don't see it in the article, here's the first game against Ke Jie with commentary once again from Michael Redmond. You can easily find the later games from there.
The tag should be baduk, not baiduk.
posted by Wolfdog at 5:34 AM on May 28, 2017 [2 favorites]
the Future of Go Summit is our final match event with AlphaGo.
The new version is undefeated in 3 games? Well, now that we're certain nobody could ever possibly beat it, it's obviously time for it to retire. At least Ke Jie will apparently get to join Fan Hui and whoever else gets to play with it in private under a strict non-disclosure agreement. The paper they publish will explain the basics of the architecture without revealing too many details of how the training was done or what problems were encountered. The self-play games will be doled out one at a time for so long as it is in accord with their marketing strategy. It seems the future of go is a bit like 18th-century Japan, when rival schools jealously guarded their secrets.
It's an unstable situation though, and not representative of the future, only the present. The confusion between the two is understandable, with all the futuristic nonsense going on in the world, the space race heating up, artificial intelligence making headlines every week, everyone carrying around pocket computers, farmers getting into firmware hacking, and so on. I would've really liked some better hints about the future of Go and of AI, but it seems as uncertain as last week.
Good games, though. The first of the self-play games looks strikingly alien and incomprehensible to me, but then so did the 2nd game with Ke Jie.
posted by sfenders at 6:27 AM on May 28, 2017 [8 favorites]
This question of AIs doing things we don't really understand is super fascinating to me. Trained neural networks are basically impossible to understand, they don't operate much like humans think at all. I'd love to see more work on visualizing the insides of a neural network in ways that help us comprehend what it's doing. That's what Deep Dream does with its nightmare images; it shows that network's bias towards eyes and puppies as high level features. More work like this please.
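A minimal sketch of the kind of visualization being described, assuming nothing about any particular network: Deep Dream-style images come from "activation maximization", ascending the gradient of one unit's response with respect to the input to see what pattern that unit likes best. The tiny one-layer network and all of its sizes below are made up purely for illustration.

```python
# Activation maximization, the core idea behind Deep Dream-style visualization:
# nudge the input in the direction that most increases one hidden unit's response.
# The single ReLU layer and its random weights here are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 64, 16                        # e.g. an 8x8 "image" flattened to 64 values
W = rng.normal(scale=0.1, size=(n_hidden, n_in))

def maximize_unit(unit, steps=200, lr=0.5):
    """Find the input pattern that most excites one hidden unit."""
    x = rng.normal(scale=0.01, size=n_in)
    for _ in range(steps):
        # gradient of the unit's pre-activation (W @ x)[unit] w.r.t. x is just W[unit]
        x += lr * W[unit]
        x /= max(np.linalg.norm(x), 1e-8)      # keep the "image" bounded
    return x

x_star = maximize_unit(unit=3)
print("unit 3's response to its preferred input:", max(0.0, W[3] @ x_star))
```

For a real multi-layer network you would get the gradient from a framework's autodiff instead of writing it by hand, but the loop has the same shape; for a single linear unit the preferred input is simply its own weight vector, which is why first-layer filters are often visualized directly.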
Understanding what an AI is doing is going to be particularly important as we apply AIs more to human systems. There's a lot of hand-wringing about using AI as a predictor for criminal behavior, for instance. I'm willing to believe that software may more accurately predict whether a given convict is likely to re-offend, for instance. But if it can't explain why it came to that conclusion I'm not sure the prediction is useful at all, and certainly not ethical to use in sentencing or the like.
posted by Nelson at 6:35 AM on May 28, 2017 [12 favorites]
The AI and ML practitioners that I know don't care how it works. They dump in training data, turn on production data, and the magic weights in the matrices do their thing. Pointing out that AI doesn't work like humans is met with a shrug. Of course it doesn't. It is just algebra.
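The "just algebra" point is fairly literal. As a minimal sketch, with made-up sizes and untrained weights, this is everything a small feed-forward net does at inference time:

```python
# Once trained, a feed-forward network is nothing but matrix multiplications and
# elementwise nonlinearities. All shapes and weights here are made up.
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(32, 10)), np.zeros(32)   # layer 1: 10 inputs -> 32 hidden units
W2, b2 = rng.normal(size=(3, 32)), np.zeros(3)     # layer 2: 32 hidden units -> 3 classes

def predict(x):
    h = np.maximum(0.0, W1 @ x + b1)               # ReLU layer
    logits = W2 @ h + b2
    logits -= logits.max()                         # for numerical stability
    return np.exp(logits) / np.exp(logits).sum()   # softmax: three class probabilities

print(predict(rng.normal(size=10)))                # three probabilities summing to 1
```

Training is the part that resists interpretation: the same algebra run in reverse, nudging W1 and W2 until the outputs fit the data.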
I think that we are going to discover that our minds are alien and mostly algebra.
posted by pdoege at 7:10 AM on May 28, 2017 [8 favorites]
Impossible to understand... so far. It looks like analyzing nets is an active research topic, and a really hard problem, but we have loads of mathematicians who need good projects. When we have competitive machine learning strategies IRL (say, military or financial strategies), it'll probably be a topic with a lot of funding and equivalently powerful systems to deconstruct (deconvolve?) the output of neural nets.
So has anyone watched the matches? Omg, six hours. Near the end one commentator anticipated one of AlphaGo's moves; probably totally obvious, but perhaps interesting that the machine took extra time on an "obvious" move, even though it had been taking significantly less time per move over the course of the game.
posted by sammyo at 7:10 AM on May 28, 2017 [1 favorite]
I'm not up on ML and the human-machine dichotomy, but is it not true that the human nervous system doesn't much operate like human thought either? As in, there is a disconnect between higher order logic and reasoning and the underlying processes? In some ways AI reasoning could be seen as more understandable, because we can set the loss function or what have you.
Human reasoning seems familiar to us, because we can imagine arriving at a similar decision having been presented with the same raw information. Can we say that AIs with similar architectures would see each others' decisions as familiar?
posted by cichlid ceilidh at 7:11 AM on May 28, 2017 [5 favorites]
There is a fundamental issue with any kind of machine learning. The entire point is to get the machine to do something that you didn't have to tell it to do. The result is a machine that does things you didn't tell it to do.
posted by notoriety public at 7:59 AM on May 28, 2017 [7 favorites]
What rough beast, its hour come round at last, slouches toward Menlo Park to be born?
posted by flabdablet at 8:12 AM on May 28, 2017 [13 favorites]
is it not true that the human nervous system doesn't much operate like human thought either?
It is true, and is the central problem in philosophy of mind. We have only the most limited understanding of how our biology implements cognition. But as humans we have thousands of years of history of being OK with that. We develop rigorous forms of thinking, like predicate logic, that are verifiable by other people. We spend our entire lives becoming masters (or failures) at understanding how other people think. We have empathy. We have models of other people's minds, what they are thinking.
None of that theory-of-mind stuff applies very well to a trained neural network. It's not possible to argue with AlphaGo about its reason for making a move, or to get it to reflect on the effect of that move and maybe learn in an understandable way how to make a better move next time. Instead you can have it analyze millions of games and play millions more games against itself, refining a bunch of weights for a better statistical model. Which results in a great Go playing program, but doesn't make it any easier to relate to it.
(The exception to all this foreignness is visual perception. In fact the first neural networks perceptrons were explicitly modelled after neurons in the retina. We have a pretty good understanding of 10-15 layers of neurons in the eye / input to the brain and exactly what visual functions they perform, functions like edge detection. We also know how to trick our neurons with fun optical illusions. Neural networks trained for vision work in a very similar way, by design, with specific layers for edge detection, etc. But there's a world of optical illusions that apply to, say, character recognition AI that doesn't look anything like how you can fool a human.)
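For concreteness, a minimal sketch of the edge-detection point, using a made-up 6x6 "image" and a fixed Sobel-style kernel rather than a learned one:

```python
# A single convolution with a Sobel-like kernel responds strongly at vertical
# edges, roughly what both early visual neurons and the first layer of a vision
# network end up computing. Image and kernel are illustrative only.
import numpy as np

image = np.zeros((6, 6))
image[:, 3:] = 1.0                        # dark left half, bright right half

sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

def convolve2d(img, kernel):
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

print(convolve2d(image, sobel_x))         # large responses along the vertical edge
```

A trained vision network's first layer usually learns filters that look a lot like this one, which is part of why those early layers are the easiest piece of the whole system to interpret.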
posted by Nelson at 8:22 AM on May 28, 2017 [8 favorites]
The Alpha Go vs Alpha Go Game 2 seems completely insane to me around move 220 and onward. Although my beliefs aren't worth much when it comes to Go -- I'll never be particularly capable at it -- it's hard for me to believe some of those plays are optimal.
posted by Coventry at 9:14 AM on May 28, 2017
In fact the first neural networks perceptrons
are you trying to say that "perceptrons" are or are not neural networks?
posted by thelonius at 9:16 AM on May 28, 2017
The paper they publish will explain the basics of the architecture without revealing too many details of how the training was done or what problems were encountered.
I've read the first Alpha Go paper closely, and I think they gave enough information to reproduce it without too much stumbling.
posted by Coventry at 9:19 AM on May 28, 2017 [2 favorites]
I think they gave enough information to reproduce it
So, when they say "the policy network alternates between convolutional layers with weights σ, and rectifier non-linearities", this is a sufficient description? All the layers are just the same and we can guess what they were and what kind of rectifier to use? Compared to other neural networks I've seen described it seems a bit vague, but I don't know enough to judge for myself. However, the others who have tried to reproduce or perhaps improve upon it have thus far somehow failed to do nearly as well.
posted by sfenders at 11:46 AM on May 28, 2017
Perceptrons are the individual nodes which make up a neural network, thelonius.
posted by ambrosen at 12:16 PM on May 28, 2017
I was using the word "perceptron" casually as an early synonym for "neural network". Mostly because I liked how the first applications were perceiving: image recognition. Although Wikipedia has a delicious 1958 quote about "the embryo of an electronic computer that [the Navy] expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence", so ambitions were high from the beginning.
posted by Nelson at 12:29 PM on May 28, 2017 [1 favorite]
this is a sufficient description? All the layers are just the same and we can guess what they were and kind of rectifier to use?
The architectures they used are described in Extended Data Tables 2-5 of the paper (pp. 31-33.)
In a Deep Learning context, "rectifier non-linearities" without further specification means ReLUs.
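For anyone trying to picture it, here is a minimal sketch of what "alternating convolutional layers and rectifier non-linearities" over a 19x19 board could look like. This is only a reading of that one sentence, not DeepMind's code; the channel counts, kernel sizes and layer count are placeholders, not the published architecture.

```python
# A toy conv/ReLU stack that maps board feature planes to a probability for each
# of the 361 points. Hyperparameters are placeholders, not AlphaGo's.
import torch
import torch.nn as nn

class TinyPolicyNet(nn.Module):
    def __init__(self, in_planes=48, width=64, n_layers=6):
        super().__init__()
        layers = [nn.Conv2d(in_planes, width, kernel_size=5, padding=2), nn.ReLU()]
        for _ in range(n_layers - 2):
            layers += [nn.Conv2d(width, width, kernel_size=3, padding=1), nn.ReLU()]
        layers += [nn.Conv2d(width, 1, kernel_size=1)]   # one logit per board point
        self.net = nn.Sequential(*layers)

    def forward(self, board_features):
        logits = self.net(board_features).flatten(1)     # (batch, 361)
        return torch.softmax(logits, dim=1)              # move probabilities

net = TinyPolicyNet()
fake_position = torch.randn(1, 48, 19, 19)   # 48 made-up input feature planes
print(net(fake_position).shape)              # torch.Size([1, 361])
```

The network in the paper is much deeper and wider, but the overall shape, a conv/ReLU stack ending in a softmax over board points, is the same.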
However, the others who have tried to reproduce or perhaps improve upon it have thus far somehow failed to do nearly as well.
Has anyone documented a serious replication attempt? What was their computation budget?
posted by Coventry at 12:49 PM on May 28, 2017
Has anyone documented a serious replication attempt?
Deep Zen and Jue Yi seem like they could be described as serious efforts at basically the same thing, though if there's any documentation of exactly what they've done I've not heard of it. There are other attempts, some of which would be more severely limited by budget or other constraints.
I see they get into things in more detail on page 27 with the heading Neural Network Architecture. Somehow I missed that bit earlier. As I recall, the paper only describes what was done up until some time before the Lee Sedol match, so it will certainly be interesting to see some of what they've done since.
posted by sfenders at 1:52 PM on May 28, 2017
We develop rigorous forms of thinking, like predicate logic, that are verifiable by other people.
OK, this got me thinking. What predicate logic is good for, mainly, is communicating with other people. Its other main use is communicating with our future selves, by creating a model proof of something that we know to be true at that moment but the knowledge of which relies on too many internal states (our emotions, intuitions, half-noticed observations etc.) to be easily stored and recalled.
It seems to me that we're basically like the AI. Our internal drives are only knowable to the extent they can be communicated, but the communication is not the process. Analysing an AI to see why it does something is a category error; it would be like dissecting a human to see why they're religious or vote Democratic or whatever. And even if we taught AlphaGo to communicate, it couldn't tell us why it reached a particular conclusion; at best it would be a map of how that conclusion could be reached using predicate logic.
I experienced something like this recently myself. We were discussing a problem and I had this huge intuitive sensation of knowing the answer, so I blurted it out. It took much longer for me to come up with a proof, not because the intuition was at all wrong but because it was not the sort of thing that could be conveyed. Similarly, the state of AlphaGo that led to a particular move may not be comprehensible, even though it could be recorded and duplicated: it makes sense on the hardware, but not by itself. An explanation that can be comprehended off the hardware is by definition not the one that was actually used.
posted by Joe in Australia at 4:14 PM on May 28, 2017 [8 favorites]
I'd love to see more work on visualizing the insides of a neural network in ways that help us comprehend what it's doing.
I too keenly feel this lack. I appreciate that NN functioning can't easily succumb to rigorous analysis - we can't breakpoint it and step through, saying 'and here we see that object A is being transformed by function B' - but in terms of pattern and flow our eyes are pretty good tools. (I likewise think that other complex networks would benefit from visualisation, as in how the internet reacts to attacks or other insults. I would get on board any art project that sought to dig under the skin here.)
The exception to all this foreignness is visual perception. In fact the first neural networks perceptrons were explicitly modelled after neurons in the retina. We have a pretty good understanding of 10-15 layers of neurons in the eye / input to the brain and exactly what visual functions they perform, functions like edge detection.
QED. My retina and anterior optic nerve bundle got shotgunned by $evolutionaryfuckup not so long ago, and so I have a first-class lifetime ticket to the viewing gallery of what happens when those layers of visual processing, having been trained by 40-odd years of functioning input, behave when that input goes gloriously piecemeal haywire. Being a curious fellow with reasonable literacy in scientific research, I've hit the literature even harder than the gin, and the point at which those very well described processing pipelines feed 'conscious awareness' is like flying into a fogbank. Some of my enduring visual artefacts are well explained by the model; others are vastly mysterious.
We have a way to go before we understand our own cognition, provided by hundreds of millions of years of evolutionary happenstance, let alone anything that comes from our cargo-cultish efforts at replication. Let's keep at it, but not beat ourselves up too badly because it still makes us go 'uuhhh...'.
posted by Devonian at 4:38 PM on May 28, 2017 [2 favorites]
It is true, and is the central problem in philosophy of mind.
That is far from the central problem in philosophy of mind, in virtue, if of nothing else, of the fact that "the central problem in philosophy of mind" is not a referring term.
posted by kenko at 5:14 PM on May 28, 2017
This question of AIs doing things we don't really understand is super fascinating to me. Trained neural networks are basically impossible to understand, they don't operate much like humans think at all.
That doesn't mean it's impossible in principle. It just means it's impossible to understand now. People will set to work on analyzing their actions, and they'll have computers to help with that, too.
posted by JHarris at 7:55 PM on May 28, 2017
> Understanding what an AI is doing is going to be particularly important as we apply AIs more to human systems.
It will be for as long as humans remain relevant. Which may not be for much longer.
posted by empath at 1:09 AM on May 29, 2017
I still have little idea how much human players can learn from AlphaGo, and how the game will change as a result. It's only just beginning.
Apparently in response to public demand they've now released all 50 of the self-play games, so kudos to DeepMind on that. I wonder whether they are chosen out of a set of hundreds, or they are the last set run with the final version for testing before the big match, or what.
posted by sfenders at 5:27 AM on May 29, 2017
I think there would be a lot to be learned from a legible representation of the highest-probability plays from Alpha Go's Monte-Carlo Tree Search (assuming the current generation still uses MCTS). I hope the teaching program they're planning to release will include something like that.
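Something like the sketch below is probably the sort of legible representation meant, assuming the search exposes per-move visit counts; the moves and numbers here are invented, not engine output.

```python
# Given the children of an MCTS root (move, visit count, mean value), list the
# top candidates. In AlphaGo-style search the move actually played is normally
# the most-visited child, so visit shares read naturally as move preferences.
from dataclasses import dataclass

@dataclass
class RootChild:
    move: str          # board coordinate, e.g. "Q16"
    visits: int        # N(s, a): how often the search explored this move
    mean_value: float  # Q(s, a): average predicted outcome after this move

root_children = [                         # made-up data for illustration
    RootChild("Q16", 4210, 0.52),
    RootChild("R14", 1875, 0.49),
    RootChild("C4",   640, 0.47),
    RootChild("K10",   75, 0.41),
]

def top_plays(children, n=3):
    total = sum(c.visits for c in children)
    ranked = sorted(children, key=lambda c: c.visits, reverse=True)[:n]
    return [(c.move, c.visits / total, c.mean_value) for c in ranked]

for move, share, value in top_plays(root_children):
    print(f"{move}: searched {share:.0%} of the time, value {value:.2f}")
```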
posted by Coventry at 7:22 AM on May 29, 2017
If it relies too much on MCTS, that might make it less helpful than it could be otherwise. In the worst case I can think of, it seems plausible that the policy network is in no way better or more interesting than human ones. I mean I'm sure it's better than me, but perhaps it has nothing to teach professionals. Maybe it's only the application of super-human computing power and a simple tree search that takes it from mediocre to brilliant. Then we can't hope to emulate it directly at all and the odds of anyone learning much from it are reduced. I don't think the situation is likely to be quite that bad, and even if it's close to it, perhaps its mere existence is enough to prompt a lot of innovation. But it seems to me there is a wide range of possible utility for improving human play, from doing little more than shaking things up with a few new moves to pointing somewhat directly to grand new theories.
posted by sfenders at 8:23 PM on May 29, 2017
from what i understand, this is a game playing neural network that is told what the ultimate goal is (i.e., the winning conditions of the game Go) and learns via trial and error to achieve that result most efficiently. it isn't surprising to me that the result would feel foreign to humans, since i think most people don't approach most problems or learning that way. when confronted with a problem, i think most people try to intuit and understand its processes, mechanics, and meanings before jumping in, because our brains don't have the speediness to simply try all possible moves and analyze retrospectively what works best. so while the computer's approach to problems like this may in some cases be superior to ours, it seems fundamentally different from human intelligence.
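As a minimal sketch of that "told only the goal, learn by trial and error" setup, here is a made-up three-choice game rather than go; everything about it is invented for illustration.

```python
# The agent is told only whether it won. It shifts its preference toward
# whatever happened to work, without ever modelling why it works.
import random

win_prob = {0: 0.1, 1: 0.3, 2: 0.8}        # hidden from the agent
counts = {a: 1 for a in win_prob}          # times each action was tried
wins = {a: 0 for a in win_prob}            # times each action led to a win

for trial in range(5000):
    if random.random() < 0.1:              # occasionally explore at random
        action = random.choice(list(win_prob))
    else:                                  # otherwise exploit the best estimate so far
        action = max(counts, key=lambda a: wins[a] / counts[a])
    counts[action] += 1
    wins[action] += random.random() < win_prob[action]

print({a: round(wins[a] / counts[a], 2) for a in win_prob})   # roughly the true win rates
```

The agent ends up preferring the winning action without ever producing anything a human would recognize as an explanation, which is roughly the foreignness being described.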
posted by wibari at 1:28 AM on May 30, 2017
The entire point is to get the machine to do something that you didn't have to tell it to do. The result is a machine that does things you didn't tell it to do.
I'm not sure that the first statement is correct. The goal is to get a machine to do something without telling the machine exactly how to do it. Additional emphasis on the how. But the end state is communicated, and is part of training the network.
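A minimal sketch of that distinction: the only thing the code below ever states is the goal (a loss to shrink); the "how" (the weights) is discovered. The rule behind the data, y = 2x + 1, is made up and never given to the program.

```python
# Fit a two-parameter linear model by gradient descent on a stated loss.
# We communicate the end state (small squared error), not the procedure.
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, size=200)
y = 2 * x + 1 + rng.normal(scale=0.05, size=200)   # hidden rule plus noise

w, b = 0.0, 0.0
lr = 0.1
for _ in range(500):
    err = (w * x + b) - y
    # gradient of the mean squared error, the only "goal" we ever state
    w -= lr * (2 * err * x).mean()
    b -= lr * (2 * err).mean()

print(round(w, 2), round(b, 2))   # close to 2 and 1, though the rule was never stated
```

Everything the trained model "knows" afterwards is the two recovered numbers; nothing in the procedure records why they are what they are.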
If you care intensely about how the machine gets to the end state, probably a NN is not a good choice; if having a sort of "black box" is tolerable, then it can be.
Personally, I am not sure that the black-box-ness of neural networks is really that much of an impediment to their use; as others have pointed out, humans are inherently that way to each other, and a great deal of effort is spent, and has been spent historically, building up linguistic and cultural constructs primarily for the purpose of communicating one's internal mental state to another. (I would probably go further and argue that this is perhaps the primary purpose of language, but I suspect that's a rabbithole without a satisfying end to it, since we don't have an entirely complete understanding of the origins of language.)
The feeling of knowing an answer but then having to painstakingly step through a sort of proof, despite not having actually gone mechanistically through those same steps yourself to generate the original conclusion, is not particularly rare — and in some disciplines we basically just accept that communicating how a particular conclusion was arrived at, in a way that lets someone else experience the solution themselves, is not possible. (E.g. the creation of art or music seem largely resistant to explanation, at least not in a sort of methodological way. Someone can say "it will sound better if you do [A] and not [B]", and we might all agree that yes, within whatever constraints and shared assumptions we're working under, "A" definitely sounds better, but asking them to explain how they knew that "A" was going to sound better than "B" in non-trivial cases is often an exercise in frustration. Typically that sort of thing is just chalked up to some combination of intrinsic skill and learned experience and left at that.) So it's not as though we as humans are unused to dealing with this sort of thing.
posted by Kadin2048 at 10:28 AM on May 30, 2017 [2 favorites]