First AI that sees like a human could lead to automated search and rescue robots, scientists say

21:11  17 May 2019  Source: dailymail.co.uk


Pictured: computer scientists have taught an artificial intelligence agent to do something that usually only humans can do, taking a few quick glimpses around and inferring its whole environment

Computer scientists have taught an artificial intelligence agent how to take in its whole environment from just a few quick glimpses.

The new technology can gather visual information that can be used for a wide range of tasks including search-and-rescue.

Researchers have taught the computer system how to take quick glimpses around a room it has never seen before to create a 'full scene'.


The scientists used deep learning, a type of machine learning inspired by the brain's neural networks, to train their agent on thousands of 360-degree images of different environments.
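In broad strokes, the kind of training the article describes could be framed as a masked-reconstruction problem: the network is shown only part of a 360-degree panorama and learns to predict the rest. The sketch below is a simplified illustration in PyTorch, with random tensors standing in for real panoramas; the architecture, image sizes and loss are assumptions made for illustration, not the authors' actual model.

```python
# Hypothetical sketch: learn to reconstruct a full 360-degree panorama
# from a partial (masked) view of it. Random data stands in for the
# thousands of real panoramas mentioned in the article.
import torch
import torch.nn as nn

panorama = torch.rand(8, 3, 64, 128)      # fake batch of 360-degree images (B, C, H, W)
mask = torch.zeros_like(panorama)
mask[:, :, :, 32:64] = 1.0                # pretend only one viewing direction was glimpsed
glimpsed = panorama * mask

# Tiny encoder-decoder standing in for the real deep network.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1),
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(20):                    # in practice: many epochs over real panoramas
    pred = model(glimpsed)                # predict the full scene from the partial view
    loss = loss_fn(pred, panorama)        # reconstruction error over the whole panorama
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
```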

They say that their research could aid effective search-and-rescue missions by making robots that could relay information to authorities.

Most computer systems are trained for very specific tasks - such as to recognise an object or estimate its volume - in an environment they have experienced before.

The tech, developed by a team of computer scientists from the University of Texas, gathers visual information that can then be used for a wide range of tasks.


The main aim is for it to quickly locate people, flames and hazardous materials and relay that information to firefighters, the researchers said.

After each glimpse, it chooses the next shot that it predicts will add the most new information about the whole scene.

They use the example of a person in a shopping centre they have never visited before: if you saw apples, you would expect to find oranges nearby, but to locate the milk, you might glance the other way.

Based on these glances, the agent infers what it would have seen if it had looked in all the other directions, reconstructing a full 360-degree image of its surroundings.

When presented with a scene it has never seen before, the agent uses its experience to choose a few glimpses.
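The glimpse-selection behaviour described above can be pictured as a simple greedy loop: after each glimpse, score every remaining viewing direction by how much new information it is expected to add, then look there next. The sketch below is a stand-in illustration only; the scoring function is faked with a novelty heuristic plus noise, whereas the researchers' agent uses a learned model.

```python
# Hypothetical sketch of an active glimpse-selection loop: pick the next
# viewing direction expected to add the most new information.
import numpy as np

rng = np.random.default_rng(0)
num_directions = 8            # discretised viewing directions around the agent
budget = 3                    # how many glimpses the agent is allowed
observed = []                 # directions already glimpsed

def expected_new_information(direction, observed):
    """Stand-in for the learned estimate of how much a view would reduce
    uncertainty about the full 360-degree scene."""
    novelty = min(abs(direction - o) for o in observed) if observed else num_directions
    return novelty + rng.normal(scale=0.1)   # learned score, faked here with noise

for _ in range(budget):
    candidates = [d for d in range(num_directions) if d not in observed]
    next_view = max(candidates, key=lambda d: expected_new_information(d, observed))
    observed.append(next_view)               # "take" the glimpse
    # ...update the agent's reconstruction of the full scene here...

print("glimpsed directions:", observed)
```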

Professor Kristen Grauman, who led the study, said: 'Just as you bring in prior information about the regularities that exist in previously experienced environments - like all the grocery stores you have ever been to - this agent searches in a nonexhaustive way.'


'We want an agent that's generally equipped to enter environments and be ready for new perception tasks as they arise.

'It behaves in a way that's versatile and able to succeed at different tasks because it has learned useful patterns about the visual world.'

'What makes this system so effective is that it's not just taking pictures in random directions but, after each glimpse, choosing the next shot that it predicts will add the most new information about the whole scene,' Professor Grauman said.

The research was supported, in part, by the US Defense Advanced Research Projects Agency and the US Air Force Office of Scientific Research.

