
Author: Tronserve admin

Wednesday 28th July 2021 02:36 PM

Human Reflexes Help MIT's HERMES Rescue Robot Keep Its Footing



A sudden, tragic wake-up call: That’s how many roboticists view the Fukushima Daiichi nuclear disaster, caused by the massive earthquake and tsunami that struck Japan in 2011. Reports following the accident described how high levels of radiation foiled workers’ attempts to carry out urgent measures, such as operating pressure valves. It was the perfect mission for a robot, but none in Japan or elsewhere had the capabilities to pull it off. Fukushima forced many of us in the robotics community to realize that we needed to get our technology out of the lab and into the world.

 

Disaster-response robots have made significant progress since Fukushima. Research groups around the world have shown unmanned ground vehicles that can traverse rubble, robotic snakes that can squeeze through narrow gaps, and drones that can map a site from above. Researchers are also building humanoid robots that can survey the damage and perform critical tasks such as accessing instrumentation panels or transporting first-aid equipment.

 

But despite the advances, creating robots with the same motor and decision-making skills as emergency workers remains a challenge. Pushing open a heavy door, discharging a fire extinguisher, and other simple but strenuous tasks demand a level of coordination that robots have yet to master.

 

One way of compensating for this limitation is tele-operation — having a human operator remotely control the robot, either continuously or during specific tasks, to help it accomplish more than it could on its own.

 

Tele-operated robots have long been used in industrial, aerospace, and underwater settings. More recently, researchers have experimented with motion-capture systems to transfer a person’s movements to a humanoid robot in real time: You wave your arms and the robot mimics your gestures. For a fully immersive experience, special goggles can let the operator see what the robot sees through its cameras, and a haptic vest and gloves can provide tactile sensations to the operator’s body.

 

At MIT’s Biomimetic Robotics Lab, our group is taking the melding of human and machine even further, in hopes of accelerating the development of practical disaster robots. With support from the Defense Advanced Research Projects Agency (DARPA), we are building a telerobotic system that has two parts: a humanoid capable of nimble, dynamic behaviors, and a new kind of two-way human-machine interface that sends your motions to the robot and the robot’s motions to you. So if the robot steps on debris and starts to lose its balance, the operator feels the same instability and instinctively reacts to avoid falling. We then capture that physical response and send it back to the robot, which helps it avoid falling, too. Through this human-robot link, the robot can harness the operator’s innate motor skills and split-second reflexes to keep its footing.

 

You could say we’re putting a human brain inside the machine.

 

Future disaster robots will eventually have a great deal of autonomy. Someday, we want to be able to send a robot into a burning building to search for victims all on its own, or deploy a robot at a damaged industrial facility and have it locate which valve it needs to shut off. We’re nowhere near that level of capability. Hence the growing interest in teleoperation.

 

The DARPA Robotics Challenge in the United States and Japan’s ImPACT Tough Robotics Challenge are among the recent efforts that have demonstrated the possibilities of teleoperation. One reason to have humans in the loop is the unpredictable nature of a disaster scene. Navigating these chaotic environments requires a high degree of adaptability that current artificial-intelligence algorithms can’t yet achieve.

 

For example, if an autonomous robot encounters a door handle but can’t find a match in its database of door handles, the mission fails. If the robot gets its arm stuck and doesn’t know how to free itself, the mission fails. Humans, on the other hand, can readily deal with such situations: We can adapt and learn on the fly, and we do that on a daily basis. We can recognize variations in the shapes of objects, cope with poor visibility, and even figure out how to use a new tool on the spot.

 

The same goes for our motor skills. Consider running with a heavy backpack. You may run slower or not as far as you would without the extra weight, but you can still carry out the task. Our bodies can adapt to new dynamics with surprising ease.

 

The tele-operation system we are building is not designed to replace the autonomous controllers that legged robots use to balance themselves and perform other tasks. We’re still equipping our robots with as much autonomy as we can. But by coupling the robot to a human, we get the best of both worlds: robot endurance and strength combined with human versatility and perception.

 

Our lab has long explored how biological systems can inspire the design of better machines. A particular limitation of existing robots is their inability to perform what we call power manipulation—strenuous feats like knocking a chunk of concrete out of the way or swinging an axe into a door. Most robots are designed for more delicate and precise motions and gentle contact.

 

We designed our humanoid robot, called HERMES (for Highly Efficient Robotic Mechanisms and Electromechanical System), specifically for this type of heavy manipulation. The robot is relatively light—weighing in at 45 kilograms—and yet strong and robust. Its body is about 90 percent of the size of an average human, which is big enough to allow it to naturally maneuver in human environments.

 

Instead of using regular DC motors, we built custom actuators to power HERMES’s joints, drawing on years of experience with our Cheetah platform, a quadruped robot capable of explosive motions such as sprinting and jumping. The actuators consist of brushless DC motors coupled to a planetary gearbox—so called because its three “planet” gears revolve around a “sun” gear—and they can generate a large amount of torque for their weight. The robot’s shoulders and hips are actuated directly, while its knees and elbows are driven by metal bars connected to the actuators. This makes HERMES less rigid than other humanoids, able to absorb mechanical shocks without its gears shattering to pieces.
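To get a feel for why a planetary stage suits this design, here is a minimal sketch of how such a gearbox multiplies motor torque. The tooth counts, motor torque, and efficiency below are illustrative assumptions, not HERMES’s actual specifications.

```python
# Illustrative only: torque multiplication through a planetary gearbox.
# All numbers are hypothetical, not HERMES's real actuator parameters.

def planetary_ratio(ring_teeth: int, sun_teeth: int) -> float:
    """Reduction ratio for a planetary stage with a fixed ring gear,
    sun gear as input, and planet carrier as output: 1 + R/S."""
    return 1.0 + ring_teeth / sun_teeth

def output_torque(motor_torque_nm: float, ratio: float,
                  efficiency: float = 0.9) -> float:
    """Torque at the carrier, accounting for gear-train losses."""
    return motor_torque_nm * ratio * efficiency

ratio = planetary_ratio(ring_teeth=72, sun_teeth=24)    # 1 + 72/24 = 4.0
print(output_torque(motor_torque_nm=2.5, ratio=ratio))  # 2.5 * 4 * 0.9 = 9.0
```

A low ratio like this keeps the joint backdrivable, which is one reason such actuators tolerate impacts that would strip the gears of a highly geared servo.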

 

The first time we powered HERMES on, it was still just a pair of legs. The robot couldn’t even stand on its own, so we suspended it from a harness. As a simple test, we programmed its left leg to kick. We grabbed the first thing we found lying around the lab—a plastic trash can—and placed it in front of the robot. It was satisfying to see HERMES kick the trash can across the room.

 

The human-machine interface we made for controlling HERMES is different from conventional ones in that it relies on the operator’s reflexes to improve the robot’s stability. We call it the balance-feedback interface, or BFI.

 

The BFI took months and multiple iterations to develop. The initial concept had some resemblance to the full-body virtual-reality suits featured in the 2018 Steven Spielberg movie Ready Player One. That design never left the drawing board. We realized that physically tracking and moving a person’s body—with more than 200 bones and 600 muscles—isn’t a straightforward task, so we decided to start with a simpler system.

 

To work with HERMES, the operator stands on a square platform, about 90 centimeters on a side. Load cells measure the forces on the platform’s surface, so we know where the operator’s feet are pushing down. A set of linkages attaches to the operator’s limbs and waist (the human body’s center of mass, basically) and uses rotary encoders to measure displacements accurately, to within less than a centimeter. But some of the linkages aren’t just for sensing: They also have motors in them, to apply forces and torques to the operator’s torso. If you strap yourself to the BFI, those linkages can apply up to 80 newtons of force to your body, which is enough to give you a good shove.
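The platform measurement reduces to a classic center-of-pressure calculation. The sketch below assumes one load cell at each corner of the 90-centimeter platform, which is a common layout but our guess at the BFI’s actual sensor arrangement.

```python
# Hedged sketch: locating where the operator's feet press on a square
# platform from four corner load cells. The corner layout is an assumed
# configuration, not necessarily the BFI's real one.

def center_of_pressure(forces, half_side=0.45):
    """forces: vertical forces in newtons at the front-left, front-right,
    rear-left, and rear-right corners of a 0.9 m platform. Returns (x, y)
    of the center of pressure in meters, origin at the platform center,
    x pointing forward and y pointing left."""
    f_fl, f_fr, f_rl, f_rr = forces
    total = f_fl + f_fr + f_rl + f_rr
    if total <= 0:
        raise ValueError("no load on platform")
    x = half_side * ((f_fl + f_fr) - (f_rl + f_rr)) / total
    y = half_side * ((f_fl + f_rl) - (f_fr + f_rr)) / total
    return x, y

# Operator leaning forward: more force on the two front cells.
print(center_of_pressure((300.0, 300.0, 100.0, 100.0)))  # (0.225, 0.0)
```

How this point moves relative to the operator’s center of mass is exactly the signal a balance controller needs.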

 

We set up two separate computers to control HERMES and the BFI. Each computer runs its own control loop, but the two sides constantly exchange data. At the start of each loop, HERMES collects data about its posture and compares it with the data received from the BFI about the operator’s posture. Based on how the data differs, the robot adjusts its actuators and then immediately sends its new posture data to the BFI. The BFI then carries out a similar control loop to adjust the operator’s posture. This process repeats 1,000 times per second.
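The coupled loops can be sketched in a few lines. This is a deliberately simplified model, assuming posture is just a one-dimensional center-of-mass offset and using a hypothetical proportional gain; the real controllers exchange much richer state.

```python
# Minimal sketch of the two coupled 1 kHz control loops described above.
# Posture is reduced to a single center-of-mass offset; the gain value
# and Posture type are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Posture:
    com_x: float  # center-of-mass offset in meters

DT = 0.001  # 1,000 loop iterations per second

def robot_step(robot: Posture, operator: Posture, gain: float = 50.0) -> None:
    """Robot side: drive its posture toward the operator's."""
    robot.com_x += gain * (operator.com_x - robot.com_x) * DT

def bfi_step(operator: Posture, robot: Posture, gain: float = 50.0) -> None:
    """BFI side: apply forces so the operator feels the robot's state."""
    operator.com_x += gain * (robot.com_x - operator.com_x) * DT

robot, operator = Posture(0.0), Posture(0.05)  # operator leans 5 cm
for _ in range(1000):  # one second of coupled control
    robot_step(robot, operator)
    bfi_step(operator, robot)
print(f"robot {robot.com_x:.3f}, operator {operator.com_x:.3f}")
```

After a second of simulated coupling, the two postures settle at a shared lean between the starting values, which illustrates the two-way pull of the interface: each side converges toward the other.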

 

To allow the two sides to operate at such fast rates, we had to condense the information they share. For example, rather than sending a detailed representation of the operator’s posture, the BFI sends only the position of the person’s center of mass and the relative position of each hand and foot. The robot’s computer scales these measurements proportionally to the dimensions of HERMES, which then reproduces that reference posture. As in any other two-way teleoperation loop, this coupling can cause vibration or instability. We reduced those effects by fine-tuning the scaling parameters that map the postures of the human and the robot.
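The proportional scaling step is simple enough to show directly. The 0.9 factor mirrors the article’s statement that HERMES is about 90 percent of human size; the dictionary layout of the posture message is a stand-in for whatever format the real system uses.

```python
# Sketch of the proportional posture scaling described above. The message
# layout and coordinate values are illustrative; only the ~0.9 size ratio
# comes from the article.

HUMAN_TO_HERMES = 0.9  # HERMES is about 90% the size of an average human

def scale_posture(human_posture: dict, scale: float = HUMAN_TO_HERMES) -> dict:
    """Scale the operator's center of mass and relative hand/foot
    positions down to the robot's dimensions."""
    return {key: tuple(scale * coord for coord in point)
            for key, point in human_posture.items()}

operator = {
    "com":        (0.02, 0.00, 0.95),   # meters, in the operator's frame
    "left_hand":  (0.30, 0.25, 1.10),
    "right_foot": (0.05, -0.15, 0.00),
}
scaled = scale_posture(operator)
print(tuple(round(c, 3) for c in scaled["com"]))  # (0.018, 0.0, 0.855)
```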

 

To test the BFI, one of us (Ramos) volunteered to be the operator. After all, if you’ve built the core parts of the system, you’re probably best equipped to debug it.

 

In one of the first tests, we evaluated an early balancing algorithm for HERMES to see how human and robot would behave when coupled together. One of the researchers used a rubber mallet to hit HERMES on its upper body. With every hit, the BFI exerted a similar jolt on Ramos, who reflexively shifted his body to regain balance, causing the robot to catch itself as well.

 

Up to this point, HERMES was still just a pair of legs and a torso, but we eventually completed the rest of its body. We built arms that use the same actuators as the legs, and hands made of 3D-printed parts reinforced with carbon fiber. The head features a stereo camera, for streaming video to a headset worn by the operator. We also added a hard hat, just because.

 

In another round of experiments, we had HERMES punch through drywall, swing an axe against a board, and, with oversight from the local fire department, put out a controlled blaze using a fire extinguisher. Disaster robots will need more than just brute force, though, so HERMES and Ramos also performed tasks that require more dexterity, like pouring water from a jug into a cup.

 

In each case, as the operator simulated performing the task while strapped to the BFI, we observed how well the robot mirrored those actions. We also looked at the scenarios in which the operator’s reactions could help the robot the most. When HERMES punched the drywall, for instance, its torso rebounded backward. Almost immediately, a corresponding force pushed the operator, who reflexively leaned forward, helping HERMES to adjust its posture.

 

We were ready for more tests, but we realized that HERMES is too big and powerful for many of the experiments we wanted to do. Although a human-scale machine allows you to carry out realistic tasks, it is also time-consuming to move, and it demands lots of safety precautions — it’s wielding an axe! Testing more dynamic behaviors, or even walking, proved difficult. We decided HERMES needed a little sibling.

 

Little HERMES is a scaled-down version of HERMES. Like its big brother, it uses custom high-torque actuators, which are attached closer to the body rather than on the legs. This arrangement allows the legs to swing much faster. For a more compact design, we cut the number of axes of motion—or degrees of freedom, in robotic parlance—from six to three per limb, and we replaced the original two-toed feet with simple rubber spheres, each having a three-axis force sensor tucked inside.

 

Connecting the BFI to Little HERMES required some modifications. There’s a big difference in scale between a human adult and this smaller robot, and when we tried to link their movements directly—mapping the position of the human’s knees to the robot’s knees, and so forth—it resulted in jerky motion. We needed a different mathematical model to mediate between the two systems. The model we came up with tracks parameters such as ground contact forces and the operator’s center of mass. It captures a sort of “outline” of the operator’s intended motion, which Little HERMES is able to execute.
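One common way to mediate between bodies of very different scale, and a plausible reading of the model described above, is to pass dimensionless quantities instead of raw positions: center-of-mass height as a fraction of leg length, contact force as a multiple of body weight. The sketch below illustrates that idea; the specific normalization and all numbers are our assumptions, not the lab’s published model.

```python
# Hedged sketch of scale-invariant motion mapping: transfer normalized
# quantities rather than joint angles. The normalization scheme and the
# body dimensions below are illustrative assumptions.

G = 9.81  # gravitational acceleration, m/s^2

def normalize(com_height_m, contact_force_n, leg_length_m, mass_kg):
    """Express the operator's state in dimensionless terms."""
    return com_height_m / leg_length_m, contact_force_n / (mass_kg * G)

def denormalize(com_frac, force_frac, leg_length_m, mass_kg):
    """Recover robot-scale targets from the dimensionless state."""
    return com_frac * leg_length_m, force_frac * mass_kg * G

# 80 kg operator with 0.9 m legs pushing 900 N maps onto a
# hypothetical 10 kg robot with 0.3 m legs.
frac = normalize(0.85, 900.0, leg_length_m=0.9, mass_kg=80.0)
com_target, force_target = denormalize(*frac, leg_length_m=0.3, mass_kg=10.0)
print(round(com_target, 3))  # 0.283
```

Because both sides reason in fractions of their own geometry and weight, a smooth human motion produces a proportionally smooth robot motion instead of the jerky result of direct position mapping.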

 

In one experiment, we had the operator step in place, slowly at first and then faster. We were happy to see Little HERMES marching in just the same way. When the operator hopped, Little HERMES jumped too.

 

In a sequence of photos we took, you can see both human and robot in midair for a brief instant. We also placed pieces of wood underneath the robot’s feet as obstacles, and the robot’s controller was able to keep it from falling.

 

Much of this was still preliminary work, and Little HERMES wasn’t yet standing freely or walking around. A supporting pole attached to its back prevented it from tipping over. At some point, we’d like to develop the robot further and set it loose to amble around the lab and perhaps even outdoors, as we’ve done with Cheetah and Mini Cheetah (yes, it too has a little sibling).

 

Our next steps include addressing a host of challenges. One of them is the mental fatigue an operator experiences after using the BFI for extended periods or for tasks that demand a lot of concentration. Our experiments suggest that when you have to command not only your own body but also a machine’s, your brain tires quickly. The effect is especially pronounced for fine-manipulation tasks, such as pouring water into a cup. After repeating the experiment three times in a row, the operator had to take a break.

 



This article is originally posted on IEEESPECTRUM.com


Tags:
robotics hermes humanoid bots