We all love robot pancakes.
October 31, 2010 1:55 PM
You've seen them here before: serving ice cream, pole-socking, with teddy bear heads, climbing trees, and sporting hands. But now robots are truly Metafilteranean, because they want to know: Who here likes pancakes?
TUM-Rosie and PRJames of the Munich-based Cluster of Excellence Cognition for Technical Systems work together to prepare and serve pancakes.
Want more robot-pancake interface? Check out a robot learning to flip pancakes and robots picking pancakes (the latter mentioned previously).
("Who here likes pancakes?" "I love pancakes," is a running gag spawned from a comment first posted by aaron in 2001 and, if MeFi had one, would be its call-and-response song.)
posted by elizardbits at 2:46 PM on October 31, 2010
"Do you like pancakes?"
"Yes we like pancakes..."
posted by ejfox at 3:07 PM on October 31, 2010 [1 favorite]
"Yes we like pancakes..."
posted by ejfox at 3:07 PM on October 31, 2010 [1 favorite]
But can the robots get the rabbit to sit still for a picture?
posted by kuujjuarapik at 3:40 PM on October 31, 2010
"Who here likes pancakes?"
I like flapjax at midnight :)
posted by puny human at 4:11 PM on October 31, 2010 [1 favorite]
Something about its expression makes it always look so curious.
posted by codacorolla at 5:17 PM on October 31, 2010
which hotel in Seattle? Enquiring minds want to know.
posted by warbaby at 5:38 PM on October 31, 2010
pole-socking
Socking (to hit or strike forcefully) might be better expressed as gloving (to cover with or as if with a glove). Yeah I know, but hey, it's Metafilter.
posted by StickyCarpet at 7:23 PM on October 31, 2010
But they're not gloves; they're socks!
If you just can't live with "socking" as an ambiguous word, I propose "pole ensockening."
posted by nebulawindphone at 9:13 PM on October 31, 2010 [1 favorite]
How do you positively reinforce a robot's behavior if it can't feel pleasure or reward? Is it simply programmed to "want" to flip a pancake the right way? This robot operant conditioning is very interesting.
posted by tamagogirl at 7:49 AM on November 1, 2010
It turns out that this sort of "conditioning" (well, it's usually called "machine learning," but I guess "machine conditioning" would have been an appropriate name too) doesn't require pleasure or desire at all. What it does require is expectations — or, well, predictions, at least.
The machine has a mathematical model of how pancakes move. The model lets it make predictions. ("If I move the pan like this, these equations say that the pancake should move like this.") Of course, the computer doesn't care if the model's predictions are right. It doesn't "enjoy" being right. But it can still recognize when the predictions are right and when they are not. ("The model says the pancake should move like this, but it actually moved like that.") So you can program it to make certain corrections to the model's parameters whenever a prediction turns out wrong. ("The pancake spun faster than I predicted. My programmer says when that happens, I should change this angular momentum parameter here...")
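In code, that correction step can be as dumb as this. A minimal Python sketch; the pancake "physics," the parameter name, and the learning rate are all made up for illustration and have nothing to do with the actual TUM/CoTeSys software:

```python
# Minimal sketch of the predict/compare/correct loop described above.
# All numbers and names are invented for illustration.

def predict_spin(flip_speed, momentum_param):
    """The model's prediction of how fast the pancake spins for a given flip."""
    return momentum_param * flip_speed

def observe_spin(flip_speed):
    """Stand-in for what the robot's cameras actually measure."""
    true_param = 1.7  # the "real physics" the model hasn't learned yet
    return true_param * flip_speed

momentum_param = 1.0   # initial guess
learning_rate = 0.1

for trial in range(50):
    flip_speed = 2.0
    predicted = predict_spin(flip_speed, momentum_param)
    observed = observe_spin(flip_speed)          # "but it actually moved like that"
    error = observed - predicted
    # "My programmer says when that happens, I should change this parameter..."
    momentum_param += learning_rate * error * flip_speed

print(momentum_param)  # converges toward 1.7, no pleasure or desire required
```

The update rule is just "nudge the parameter in whatever direction shrinks the error"; run it enough times and the guess settles on the value that makes the predictions come out right.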
Conveniently, the machine doesn't have to "know why" it's making those corrections. It just has to make them, following the program it's been given. Hopefully the programmer knows why he's calling for the corrections to be made in a certain way. But the machine is like a very dumb lab assistant in a physics lab: "My boss says I should work out this equation and try this experiment; and then if it doesn't work, I should try this instead; and if that doesn't work.... — No, I don't know why he told me to do that. I'm just following directions."
If the machine keeps repeating this process — make a prediction; test it; adjust the model's parameters if the prediction was wrong — it will eventually arrive at a very precisely tuned model indeed. Then you can query that model.
"Find a set of input motions that you predict will make the pancake bounce. Now carry out those motions."And so on. The better-tuned the model, the more accurate the answer. And of course, if its answer to your query turns out to be wrong, the machine will change its model parameters yet again in response, and so on until everything is just right.
"Okay, what input motions will make the pancake fly across the room and hit my annoying labmate? Carry out those motions."
"Okay, now what input motions will make it turn over once and land without bouncing?"
It will look like the robot is "trying" to follow your instructions, like it "wants" to do a good job. But really it's just making and testing predictions, making and testing them, until the model is tuned properly and its predictions match with reality.
posted by nebulawindphone at 9:10 AM on November 1, 2010 [2 favorites]
This is ridiculously awesome. People are betting a lot of money that domestic robots that can accomplish a variety of household tasks are possible -- the Japanese government, faced with a rapidly declining population, spends $25 billion a year on robotics research. Given the amazing progress I've seen just in the last 2-3 years, I think we'll have robots that can (say) make a meal or do the laundry in my lifetime. Whether they're cost effective or reliable for routine use is another matter entirely.
posted by miyabo at 1:37 PM on November 2, 2010
This thread has been archived and is closed to new comments