Author: Tronserve admin
Monday 26th July 2021 10:14 PM
Let’s Build Robots That Are as Smart as Babies
Let’s face it: Robots are dumb. At best they are idiot savants, ideal for doing one thing really well. Worse, even those robots require particular environments in which to do their one thing really well. This is precisely why autonomous cars and robots for home health care are so hard to build: they may need to react to an uncountable number of situations, and they will need a generalized understanding of the world in order to navigate them all.
Infants as young as two months already perceive that an unsupported object will fall, while five-month-old babies know that materials like sand and water will pour from a container rather than plop out as a single chunk. Robots lack these understandings, which hinders them when they try to navigate the world without a prescribed task and set of movements.
Despite this, we could soon see robots with a generalized understanding of the world (and the processing power required to wield it), thanks to the video-game industry. Researchers are bringing physics engines — the software that provides real-time physical interactions in complex video-game worlds — to robotics. The goal is to give robots an understanding that lets them learn about the world the same way babies do.
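At its core, a physics engine steps simulated time forward and integrates forces acting on bodies. The toy loop below (an illustrative sketch, not the API of any engine named in this article) captures that idea: an unsupported object accelerates under gravity and falls, mirroring the two-month-old's intuition described above.

```python
# Minimal sketch of the core loop inside a physics engine: step simulated
# time forward and integrate gravity, so an unsupported body falls.
# This is an illustrative toy, not any real engine's API.

GRAVITY = -9.81  # m/s^2

class Body:
    def __init__(self, height, supported=False):
        self.height = height       # meters above the floor
        self.velocity = 0.0        # vertical velocity, m/s
        self.supported = supported

    def step(self, dt):
        """Advance one simulation tick of length dt (semi-implicit Euler)."""
        if self.supported:
            return
        self.velocity += GRAVITY * dt
        self.height = max(0.0, self.height + self.velocity * dt)

# A baby's intuition, in code: the unsupported cup falls to the floor.
cup = Body(height=1.0, supported=False)
for _ in range(100):   # simulate one second at 100 Hz
    cup.step(0.01)
print(cup.height)      # -> 0.0 (the cup has hit the floor)
```

Real engines add collision detection, friction, and constraint solving on top of this integration loop, but the principle is the same: predict what the world will do next, many times per second.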
Giving robots a baby’s sense of physics helps them navigate the real world and can even save on computing power, according to Lochlainn Wilson, the CEO of SE4, a Japanese company developing robots that could operate on Mars. SE4 plans to sidestep the latency caused by the distance between Earth and Mars by building robots that can operate independently for a few hours before receiving more instructions from Earth.
Wilson says that his company uses simple physics engines like PhysX to help build more-independent robots. He adds that if you can tie a physics engine to a coprocessor on the robot, the real-time basic physics intuitions won’t take compute cycles away from the robot’s key processor, which will often be focused on a more complicated task.
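One way to picture the offloading Wilson describes is to run the physics loop concurrently with the main task, so the primary processor never spends cycles on it. The sketch below is purely illustrative (it is not SE4's architecture); a background thread stands in for the dedicated coprocessor.

```python
# Illustrative sketch: run a basic physics-update loop on its own worker so
# the "main" planner loop never blocks on it. In hardware, a coprocessor
# plays this role; here a daemon thread stands in for it.
import threading
import time

class PhysicsWorker:
    def __init__(self, rate_hz=100):
        self.dt = 1.0 / rate_hz
        self.ticks = 0
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def _run(self):
        while not self._stop.is_set():
            # ... integrate forces, detect contacts, update intuitions ...
            self.ticks += 1
            time.sleep(self.dt)

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()

worker = PhysicsWorker(rate_hz=1000)
worker.start()
# Meanwhile, the main processor is free for the complicated task
# (planning, vision, and so on).
time.sleep(0.05)
worker.stop()
print(worker.ticks > 0)  # physics kept ticking in the background
```

The design point is separation of concerns: the physics "intuition" runs at a fixed high rate regardless of what the main task is doing.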
Wilson’s firm sometimes still turns to a traditional graphics engine, like Unity or the Unreal Engine, to handle the demands of a robot’s movement. In certain cases, however, such as a robot accounting for friction or understanding force, you really need a robust physics engine, Wilson says, and not a graphics engine that simply simulates a virtual environment. For his projects, he often turns to the open-source Bullet Physics engine built by Erwin Coumans, who now works at Google.
Bullet is a popular physics-engine option, but it is not the only one out there. Nvidia Corp., for example, has realized that its gaming and physics engines are well placed to handle the computing demands necessary for robots. In a lab in Seattle, Nvidia is working with teams from the University of Washington to develop kitchen robots, fully articulated robot hands and more, all equipped with Nvidia’s tech.
Such a robot could also understand that less pressure is needed to grasp something like a cardboard box of Cheez-It crackers than something more durable like an aluminum can of tomato soup. Nvidia’s silicon has already helped advance the fields of artificial intelligence and computer vision by making it possible to process multiple decisions in parallel. It’s possible that the company’s new focus on virtual worlds will help advance the field of robotics and teach robots to think like babies.
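The grasping intuition above can be sketched as a simple force-selection rule: squeeze hard enough to hold the object against gravity, but never harder than the object can withstand. All names, numbers, and thresholds below are invented for illustration; a real grasp planner would use measured material properties and contact models.

```python
# Hypothetical illustration of pressure-aware grasping: choose a grip force
# from an object's stiffness and weight, squeezing a crushable cardboard box
# more gently than a rigid aluminum can. Values are invented for the example.

def grip_force(stiffness_n_per_mm, weight_n, friction_coeff=0.5, margin=1.5):
    """Return a grip force (newtons) that holds the object without crushing it.

    The force needed to hold the object scales with its weight and the
    finger-pad friction; the force we are allowed to apply is capped by
    what the object can withstand (proxied here by its stiffness).
    """
    needed = margin * weight_n / (2 * friction_coeff)  # two-finger pinch
    crush_limit = 2.0 * stiffness_n_per_mm             # toy crush model
    return min(needed, crush_limit)

box_force = grip_force(stiffness_n_per_mm=1.5, weight_n=3.0)   # soft cardboard
can_force = grip_force(stiffness_n_per_mm=400.0, weight_n=3.5) # rigid aluminum
print(box_force < can_force)  # gentler on the cracker box: True
```

The interesting case is the soft box: the crush limit, not the holding requirement, decides the force, which is exactly the judgment a baby makes without thinking.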