“The reinforcement learning process involves making gradual progress—”
April 11, 2018 9:36 AM
Virtual robots that teach themselves kung fu could revolutionize video games [MIT Technology Review] “In the not-so-distant future, characters might practice kung-fu kicks in a digital dojo before bringing their moves into the latest video game. AI researchers at UC Berkeley and the University of British Columbia have created virtual characters capable of imitating the way a person performs martial arts, parkour, and acrobatics, practicing moves relentlessly until they get them just right. The work could transform the way video games and movies are made. Instead of planning a character’s actions in excruciating detail, animators might feed real footage into a program and have their characters master them through practice. Such a character could be dropped into a scene and left to perform the actions.” [Example-Guided Deep Reinforcement Learning of Physics-Based Character Skills][YouTube]
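The core idea in the paper is to reward the simulated character for matching a reference motion-capture clip frame by frame, so "practicing until they get it just right" means maximizing that reward. A minimal sketch of that kind of imitation reward — the pose representation, squared-error metric, and scale factor here are illustrative assumptions, not the paper's exact formulation:

```python
import math

def imitation_reward(char_pose, ref_pose, scale=2.0):
    """Score how closely the simulated character's pose matches the
    reference mocap pose at the current frame.  Poses are flat lists
    of joint values here; the real system tracks positions, rotations,
    and velocities."""
    err = sum((c - r) ** 2 for c, r in zip(char_pose, ref_pose))
    return math.exp(-scale * err)  # 1.0 at a perfect match, falls toward 0
```

A full training loop would combine a reward like this with a task objective and update the policy via reinforcement learning, but the key idea — score each simulated frame against the example clip — is just this.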
[obligatory Keanu "whoah"]
posted by adamgreenfield at 11:12 AM on April 11, 2018 [1 favorite]
Wow, I've been thinking for years that it would be really cool to have behaviorally trained AI for gaming. This sounds similar.
I'd love to see it happen in real time as a game is happening - characters that learn, that change, that grow. And not enemies either - but pets etc.
posted by rebent at 12:57 PM on April 11, 2018
They just have to teach these virtual robots to hurl an endless stream of racist and homophobic slurs and they will be indistinguishable from online human gamers.
posted by Sangermaine at 2:13 PM on April 11, 2018 [2 favorites]
Making an AI that plays well has never been particularly difficult. The real challenge is making AI that's fun to play against.
One of the nice things about this kind of approach, I think, is that you can generate a ton of different AI "styles" and then select a limited subset based on manual "funness" ratings. So you can eliminate the AIs that learn optimal strategies that would be considered "bullshit" because they're not replicable by human players (or are just repetitive and one-dimensional).
posted by tobascodagama at 2:44 PM on April 11, 2018 [1 favorite]
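That selection step could be as simple as training many policies, having playtesters rate them, and keeping only the ones above a cutoff. A toy sketch — the style names, ratings, 0-10 scale, and cutoff are all made up for illustration:

```python
# Hypothetical trained AI "styles" with manual funness ratings
# collected from playtesters.
styles = {
    "aggressive":    7.5,
    "turtle":        3.0,  # repetitive and one-dimensional
    "feint-heavy":   8.2,
    "frame-perfect": 1.0,  # "bullshit" optimal play humans can't replicate
}

def shippable(ratings, cutoff=5.0):
    """Keep only the styles players actually rated as fun to face."""
    return sorted(name for name, score in ratings.items() if score >= cutoff)

print(shippable(styles))  # ['aggressive', 'feint-heavy']
```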
Hey, it's me, your time-travelling pal from the future. Just wanted to pop back in time again to let all my fellow human friends know that teaching computers (and ultimately robots) how to learn to do sweet-ass ninja kicks and flips on their own also does not backfire on the human race.
posted by mhum at 3:39 PM on April 11, 2018 [4 favorites]
This is really more about making movement both look right and be right (a hard problem) than about opponent AI, though.
And also possibly for making more lifelike but agile hardware robots.
posted by Foosnark at 6:09 PM on April 11, 2018 [2 favorites]
This is great. I really want to see simulated gaits in lunar and martian gravity.
posted by fzx101 at 8:07 AM on April 12, 2018
I worked on a character navigation behavior system for an MMO -- the "brains" and the glue between collision detection/physics, pathing, path smoothing, player controls, selecting the appropriate artist-created animation to get it where it wants to go, and trying to hide all but the most egregious network latency issues. You could tell it "run forward" or "follow this particular entity" or "maintain appropriate melee range" or "go to this exact location and face this exact heading" -- and it would do its best.
One of the fun parts was that pathing data was built by a process that would take the character and try to make it move between subdivided points in the terrain, using those same rules and animations -- a crude sort of neural network that didn't teach the characters how to move, but where they could move. When trying to walk uphill, if it kept sliding backwards and didn't reach its destination, it would map movement in that direction as impassable -- but if it was moving downhill it might be fine as long as it didn't fall too far.
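That pathing-data build might be sketched like this — `build_passability`, `try_move`, the toy height grid, and the slope/fall thresholds are all illustrative stand-ins for the real character simulation, not anything from the actual MMO code:

```python
def build_passability(points, neighbors, try_move):
    """For each pair of adjacent terrain points, simulate the move using
    the same rules and animations the characters use in-game, and record
    whether the character actually arrived.  The result is directional:
    a downhill edge can be passable while the reverse uphill move fails."""
    passable = set()
    for a in points:
        for b in neighbors(a):
            if try_move(a, b):        # run the character sim for this hop
                passable.add((a, b))  # directed edge: a -> b only
    return passable

# Toy stand-in for the real simulation: a climb steeper than 0.5 units
# slides the character back, and a drop of more than 2.0 is too far to fall.
heights = {0: 0.0, 1: 0.6, 2: 0.7}
adj = {0: [1], 1: [0, 2], 2: [1]}

def try_move(a, b):
    rise = heights[b] - heights[a]
    return -2.0 <= rise <= 0.5

edges = build_passability(list(adj), lambda p: adj[p], try_move)
# 0 -> 1 is impassable (too steep uphill), but 1 -> 0 works going downhill.
```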
The system in the article is frankly much better than that.
posted by Foosnark at 10:29 AM on April 11, 2018 [3 favorites]