Peekaboom!
August 4, 2005 11:50 AM Subscribe
Peekaboom! It's not Friday -- then again this isn't Flash -- but it sure is fun. Partner with another anonymous player to identify pictures by gradually revealing them. The kicker is that as we play, the system gets smarter -- the goal is to teach computers how to identify photos the same way we can.
Entertaining game, I was going to post this too but hadn't gotten around to it. Also try The ESP game from the same people.
posted by fvw at 1:35 PM on August 4, 2005
It's an interesting game, but I really don't understand how one might go about guessing the answers. Some of them I couldn't even identify when the puzzle was revealed. Then again, I ain't so braht.
posted by Pokeyzilla at 2:59 PM on August 4, 2005
Some are near-impossible; in that case, go for the pass button as soon as possible.
posted by fvw at 4:09 PM on August 4, 2005
For a site claiming to be about training computers to recognize images by having humans play their game, there's very little info about how they're doing that.
The images seem to be chosen "from the web" -- probably using something like Google's image search. That would explain why some of the target words don't seem to match up with the images. I imagine the goal is to build a machine-searchable database of images without metadata markup, but, again, the site doesn't make that clear.
(And what are the bonus rounds teaching? And for that matter, how are they scored?)
Is there a more in-depth info page I'm missing?
posted by nobody at 4:18 PM on August 4, 2005
The bonus round, I believe, tests the computer's ability to identify objects in an image. For example, if there's an orange ball in the picture, then the computer is doing its job well, and as a side benefit you get points for clicking on the right portion of the picture.
That being said, someone come play Verbosity with me!
posted by Parannoyed at 5:28 PM on August 4, 2005
When you type 'penis' it enters 'compassion'... And really, the picture showed a penis.
posted by kika at 5:59 PM on August 4, 2005
This game is fantastic. Thanks for posting it.
posted by Samsonov14 at 7:35 PM on August 4, 2005
This is way more fun than it should be. It's amusing how people try to communicate through such a limited interface--sending chatty messages instead of guesses, spelling out letters with mouse clicks, etc.
Someone wrote "bemine" just as our game was ending. I'm totally gonna find a way to hook up through this thing.
posted by Galvatron at 11:53 PM on August 4, 2005
My best game ever started with "usethehints" and ended with, I assume, both of us sitting back and going "damn, that guy was on the ball."
Peekaboom probably culls its guesswords from the ESP game. This explains the low quality of some words.
"Shit" returns "kiss."
posted by NickDouglas at 7:45 AM on August 5, 2005
For a site claiming to be about training computers to recognize images by having humans play their game, there's very little info about how they're doing that.
The images seem to be chosen "from the web" -- probably using something like google's image search. That would explain why some of the target words don't seem to match up with the images. I imagine the goal is to allow a machine searchable database of images without metadata markup, but, again, the site doesn't make that clear.
nobody: There is some information here and here. Although Luis's paper refers to the previous game project, ESP, it looks pretty straightforward to extrapolate what's going on. In ESP, the researchers "use human cycles" to label a vast number of web images by making the task enjoyable to the human users. The human-applied tags are much more reliable than those applied by a computer, and can be used, for example, as indices in a searchable image database.
This game is geared not towards merely labelling images, but annotating them, by getting the humans to mark the parts of the image which are most relevant to its identification. Again, the annotation procedure is made into a game so that people will enjoy doing it -- in most research projects, annotation is a deadly dull process. With respect to your observation that some of the target words don't seem to match up with the images, my guess is that the labels are in fact the human-applied labels from the ESP game.
In other words, the point of this game is to collect a vast database of images, labels, and explicit annotations of the visual cues that a human uses to deduce the label from the image. This information could then be fed to a machine learning algorithm which can then be used to automatically label new images. The specifics of the ML algorithm are irrelevant (suffice it to say that there are hundreds of them; they tend to be mathematical and esoteric) -- probably any number of them could be applied. The real value in this project is the ease with which they can collect the data used to train those algorithms. Data collection for many ML processes is a huge time-consuming bottleneck.
This is cool.
posted by 김치 at 8:19 AM on August 5, 2005
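[To make the data-collection idea above concrete: a minimal sketch of the kind of record such games could produce and how the crowd's inputs might be aggregated. All names, URLs, and the grid-voting heuristic here are illustrative assumptions, not details from the actual Peekaboom/ESP implementation.]

```python
# Hypothetical sketch: an image, a human-agreed label (ESP-style), and the
# pixel regions players revealed (Peekaboom-style) to communicate that label.
from dataclasses import dataclass, field
from collections import Counter

@dataclass
class AnnotatedImage:
    url: str                       # source image, e.g. found via web crawl
    label: str                     # human-applied tag from the labeling game
    revealed: list = field(default_factory=list)  # (x, y) reveal clicks

def esp_agreement(guesses_a, guesses_b):
    """ESP-style labeling: a tag counts only when both players type it."""
    return set(guesses_a) & set(guesses_b)

def hot_region(records, label, cell=32):
    """Aggregate many players' reveals into the grid cell most often
    uncovered for a label -- a crude stand-in for 'where the object is'."""
    votes = Counter()
    for r in records:
        if r.label == label:
            for x, y in r.revealed:
                votes[(x // cell, y // cell)] += 1
    return votes.most_common(1)[0][0] if votes else None

records = [
    AnnotatedImage("http://example.com/dog.jpg", "dog", [(40, 40), (44, 50)]),
    AnnotatedImage("http://example.com/dog2.jpg", "dog", [(35, 60)]),
]
print(esp_agreement(["dog", "cute", "brown"], ["dog", "puppy"]))  # {'dog'}
print(hot_region(records, "dog"))  # (1, 1)
```

A dataset of such records -- label plus the regions humans found most diagnostic -- is exactly the kind of training input the machine learning algorithms mentioned above would consume.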
posted by skynxnex at 1:06 PM on August 4, 2005