During training, the players first face simple single-player games, such as finding a purple cube or placing a yellow ball on a red floor. They then advance to more complex multiplayer games, such as hide-and-seek or capture the flag, in which teams compete to be the first to find and grab the opponent’s flag. The playground manager has no specific goal; instead, it aims to improve the players’ overall ability over time.
Why is this cool? AIs like DeepMind’s AlphaZero have beaten the world’s best human players at chess and Go, but they can learn only one game at a time. As DeepMind co-founder Shane Legg put it in a conversation last year, it is as if you had to swap out your chess brain for your Go brain every time you wanted to switch games.
Researchers are now trying to build AIs that can learn many tasks at once, which means teaching them general skills that make it easier to adapt to new ones.
One exciting avenue in this direction is open-ended learning, in which an AI is trained on many different tasks without a specific goal. In many ways, this is how humans and other animals seem to learn, through aimless play. But it requires vast amounts of data, and XLand generates that data automatically, in the form of an endless stream of challenges. It is similar to POET, an AI training dojo in which two-legged bots learn to navigate obstacles in a 2D landscape. XLand’s world, however, is far more complex and detailed.
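To make the idea of an "endless stream of challenges" concrete, here is a toy sketch of procedural task generation. Everything in it, including the object, color, and relation vocabularies and the function names, is invented for illustration and is not DeepMind's actual system; it only shows the general pattern of sampling goals from a combinatorial space rather than hand-writing them.

```python
import random

# Toy vocabularies (illustrative only, not from XLand itself).
COLORS = ["purple", "yellow", "red", "black"]
OBJECTS = ["cube", "ball", "pyramid"]
RELATIONS = ["hold", "place_on", "be_near"]

def generate_task(rng: random.Random) -> str:
    """Sample one goal predicate, e.g. 'place_on(yellow ball, red floor)'."""
    relation = rng.choice(RELATIONS)
    obj = f"{rng.choice(COLORS)} {rng.choice(OBJECTS)}"
    if relation == "hold":
        return f"hold({obj})"
    target = f"{rng.choice(COLORS)} floor"
    return f"{relation}({obj}, {target})"

def generate_curriculum(seed: int, n: int) -> list:
    """A potentially endless (here: n-step) stream of generated goals."""
    rng = random.Random(seed)
    return [generate_task(rng) for _ in range(n)]

tasks = generate_curriculum(seed=0, n=5)
for t in tasks:
    print(t)
```

Even this tiny sketch yields dozens of distinct goals from a handful of building blocks; the real system composes far richer worlds and objectives the same way, in principle without limit.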
XLand is also an example of AI learning to make itself, or what Jeff Clune, who helped develop POET and leads a team working on this topic at OpenAI, calls AI-generating algorithms (AI-GAs). “This work has pushed the forefront of AI-GAs,” Clune said. “It’s very exciting to see it.”