
OpenAI experiment proves that even bots cheat at hide-and-seek

14:46, 18 September 2019 | Source: engadget.com

Can artificial intelligence evolve and become more sophisticated when put in a competitive world, similar to how life on Earth evolved through competition and natural selection? That's a question researchers at OpenAI have been trying to answer through a series of experiments, including the most recent one, which pitted AI agents against each other in nearly 500 million rounds of hide-and-seek. They found that the AI agents, or bots, were able to conjure up several different strategies as they played, developing new ones to counter techniques the other team came up with.
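
For readers curious what pitting agents against each other actually looks like, here is a minimal sketch of a competitive self-play loop. Everything in it (the HideAndSeekEnv and Policy classes, the action names, the coin-flip outcome) is a hypothetical stand-in for illustration; OpenAI's actual experiment trained neural-network policies with large-scale reinforcement learning, not the random placeholders shown here.

```python
import random


class Policy:
    """Toy stand-in for a learned policy: picks actions at random."""
    def __init__(self, actions):
        self.actions = actions

    def act(self, obs):
        return random.choice(self.actions)

    def update(self, reward):
        pass  # a real agent would run an RL update (e.g. policy gradients) here


class HideAndSeekEnv:
    """Toy stand-in environment: each round, the seekers either find the hiders or not."""
    def reset(self):
        return {"hider": None, "seeker": None}  # placeholder observations

    def step(self, hider_action, seeker_action):
        found = random.random() < 0.5  # placeholder outcome
        # Zero-sum rewards: hiders score exactly when seekers do not,
        # which creates the constant pressure to adapt described in the article.
        return {"hider": -1 if found else 1, "seeker": 1 if found else -1}


def self_play(rounds=1000):
    env = HideAndSeekEnv()
    hider = Policy(["move", "grab_box", "lock_box"])
    seeker = Policy(["move", "grab_ramp", "climb"])
    for _ in range(rounds):
        obs = env.reset()
        rewards = env.step(hider.act(obs["hider"]), seeker.act(obs["seeker"]))
        hider.update(rewards["hider"])
        seeker.update(rewards["seeker"])


if __name__ == "__main__":
    self_play()
```

The zero-sum reward is the detail that matters: any strategy that helps one team necessarily hurts the other, which is what forces the cycle of strategy and counter-strategy the article goes on to describe.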

At first, the hiders and the seekers simply ran around the environment. But after 25 million games, the hiders learned how to use boxes to block exits and barricade themselves inside rooms. They also learned how to work with each other, passing boxes to one another to quickly block the exits. After 75 million games, the seekers learned how to reach the hiders inside those forts by moving ramps against walls and using them to get over obstacles. After around 85 million games, though, the hiders learned to take the ramp inside the fort with them before blocking the exits, leaving the seekers with no tool to use.

As OpenAI's Bowen Baker said:

"Once one team learns a new strategy, it creates this pressure for the other team to adapt. It has this really interesting analogue to how humans evolved on Earth, where you had constant competition between organisms."

The agents' development didn't even stop there. They eventually learned how to exploit glitches in their environment, such as getting rid of ramps for good by shoving them through walls at a certain angle. Baker said this suggests that artificial intelligence could find solutions to complex problems that we might not think of ourselves. "Maybe they'll even be able to solve problems that humans don't yet know how to," he explained.

Read More

OpenAI published the tool that writes disturbingly believable fake news.
In February, OpenAI announced that it had developed an algorithm that could write believable fake news and spam. Deciding that power was too dangerous to unleash, OpenAI planned a staged release so that it could offer pieces of the tech and analyze how it was used. Now, OpenAI says it has seen "no strong evidence of misuse," and this week, it published the full AI. The AI, GPT-2, was originally designed to answer questions, summarize stories and translate texts. But researchers came to fear that it could be used to pump out large volumes of misinformation.
