I'll See Your Hand, and Raise You the Future: Computer Learning of Games via Video Input
July 13, 2012 2:10 PM
I See What You Did There: Software Uses Video to Infer Game Rules and Achieve Victory Conditions.
A French computer scientist has constructed a system that successfully divines the rules to simple games just by using video input of human players at work.
I'd like to see it try to learn Mao.
Wait, now that I think about it, it seems like this program is doing almost exactly what a human player would be doing in a game of Mao. I was being sarcastic but now I seriously would like to see that!
posted by kmz at 2:19 PM on July 13, 2012 [1 favorite]
OK, but let's just not teach the machines to learn from each other or Skynetblahblahblah etc.
Too late, by about 20 years.
posted by jedicus at 2:19 PM on July 13, 2012
So let's decide now to make it impossible for robots to learn how to kill a human being, hmm?
posted by PipRuss at 2:21 PM on July 13, 2012
It will only learn games with consistent rules. I.e., it will never play Calvinball very well.
posted by jeffamaphone at 2:24 PM on July 13, 2012 [1 favorite]
So let's decide now to make it impossible for robots to learn how to kill a human being, hmm?
But what if by not killing somebody, the robot allows another human to come to harm? Or, humanity as a whole?
posted by kmz at 2:26 PM on July 13, 2012 [2 favorites]
The only winning move is not to play.
posted by hanoixan at 2:28 PM on July 13, 2012 [1 favorite]
The computer vision task doesn't seem all that difficult, and the games it's analyzing are all extremely simple, so it wouldn't take a very sophisticated machine learning program to analyze them. It strikes me as being more of an interesting project than a state-of-the-art breakthrough in either field.
posted by burnmp3s at 2:31 PM on July 13, 2012
>It will only learn games with consistent rules. I.e., it will never play Calvinball very well.
1. Move toward existing Victory Conditions, using existing movement repertoire.
2. RND on set of possible new rules.
3. Test: Would new rule make victory more likely than playing with existing rules?
4. If so, play Move: Add New Rule. (If not, play with existing rules.)
posted by darth_tedious at 2:31 PM on July 13, 2012
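[The four steps above can be sketched as a toy Python loop. This is a hypothetical illustration of darth_tedious's joke algorithm, not anything from the linked system; `propose_rule` and `win_prob` are invented stand-ins for a rule generator and a victory-probability estimate.]

```python
import random

def calvinball_turn(state, rules, propose_rule, win_prob):
    """One turn of the strategy above: randomly propose a new rule
    and adopt it only if it makes victory more likely than playing
    under the existing rules (steps 2-4)."""
    candidate = propose_rule(random.random())  # step 2: RND on possible new rules
    # step 3: test the candidate against the status quo
    if win_prob(state, rules + [candidate]) > win_prob(state, rules):
        return rules + [candidate]             # step 4: play Move: Add New Rule
    return rules                               # otherwise keep existing rules

# Toy demo: a "rule" is just a numeric bonus, and the chance of
# victory grows with the sum of adopted bonuses (capped at 1.0).
rules = [0.1]
better = calvinball_turn(None, rules,
                         propose_rule=lambda r: r,
                         win_prob=lambda s, rs: min(1.0, sum(rs)))
assert len(better) >= len(rules)  # rules are only ever added, never lost
```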
But what if by not killing somebody, the robot allows another human to come to harm? Or, humanity as a whole?
An interesting question, kmz; perhaps it would be best not to rely on robots to save us.
posted by PipRuss at 2:36 PM on July 13, 2012
"It will only learn games with consistent rules."
Nomic has "consistent" rules (in the sense that Calvinball doesn't), but I bet it won't learn to play it, either.
You might say that a game that changes its rules is by definition inconsistent, but I think that a large number of games have conditional rules — rules that apply or don't apply depending upon the gamestate. I'm not sure how you could rigorously distinguish such conditional rules in principle from rules which can be arbitrarily altered but only according to prior rules. We'd like to say that Nomic is a game with meta-rules, but it's not. It's just got rules.
posted by Ivan Fyodorovich at 3:03 PM on July 13, 2012
I will bring my copy of Diplomacy over.
posted by ricochet biscuit at 3:19 PM on July 13, 2012 [3 favorites]
I will bring my copy of Diplomacy over.
Now there's a way to convince SkyNet to kill us.
"You promised to convoy me!"
posted by verb at 4:03 PM on July 13, 2012 [5 favorites]
"Now there's a way to convince SkyNet to kill us."
That's funny, but it resonates more deeply with me than just that it's funny. An AI that was trained to understand human psychology through a game theoretical format and in the context of the intersection between diplomacy and war would come to have a very true but very limited understanding of human beings — an understanding that would, I think, lead to SkyNet starting the Kill All Humans initiative at the earliest opportunity.
And the really scary thing to me is that this is exactly the sort of thing that the military-industrial complex would develop and build an artificially intelligent computer to do (not kill us all, of course, but to analyse and plan diplomacy and war in a game theoretical context). It's like the exact opposite of War Games, because the context in this imagined future would make winning-by-genocide the only winning move from the perspective of such an intelligence. Because we'd be teaching it that we absolutely cannot be trusted. That there's nothing more fundamental to be understood about human beings than that.
posted by Ivan Fyodorovich at 5:00 PM on July 13, 2012 [1 favorite]
posted by Aquaman at 2:18 PM on July 13, 2012