Training Robots To Get Along With Humans


Robots are changing – again. The latest buzzword for industrial robots is "co-bot," short for collaborative robot: a machine designed to work either with other robots or, more interestingly, with humans. Cute name aside, robots that can work safely with and around humans will be in big demand for more than just manufacturing.

The latest robot designs range from human-looking replicas of famous people to torso-less arms or just legs, depending on what they are designed to do.

A Professional Robot Chef In Your Kitchen

Arm-only robots are great for cooking and cleaning.

New robot from Agility Robotics will allow students access to the latest hardware

Leg-only robots are being tested for balance and agility. At some point we will put these pieces together and make a robot chef that can run really fast. Joking aside, creating and perfecting robots that work with humans, and not just for humans, is a great challenge. Engineers and scientists the world over are dedicating resources to developing innovative new ways for robots to help humanity.

Like toddlers, robots can use a little help as they learn to function in the physical world. That’s the purpose of a Rice University program that gently guides robots toward the most helpful, human-like ways to collaborate on tasks.

Rice engineer Marcia O’Malley and graduate student Dylan Losey have refined their method to train robots by applying gentle physical feedback to machines while they perform tasks. The goal is to simplify the training of robots expected to work efficiently side by side with humans.

A paper on their study appears in IEEE Xplore.

“Historically, the role of robots was to take over the mundane tasks we don’t want to do: manufacturing, assembly lines, welding, painting,” said O’Malley, a professor of mechanical engineering, electrical and computer engineering and computer science.

"As we become more willing to share personal information with technology, like the way my watch records how many steps I take, that technology moves into embodied hardware as well," she said.

"Robots are already in our homes vacuuming or controlling our thermostats or mowing the lawn. There are all sorts of ways technology permeates our lives. I already talk to Alexa in the kitchen, so why not also have machines we can physically collaborate with? A lot of our work is about making human-robot interactions safe."

According to the researchers, robots adapted to respond to physical human-robot interaction (pHRI) traditionally treat such interactions as disturbances and resume their original behaviors when the interactions end. The Rice researchers have enhanced pHRI with a method that allows humans to physically adjust a robot’s trajectory in real time.

At the heart of the program is the concept of impedance control, literally a way to manage what happens when push comes to shove. A robot that allows for impedance control through physical input adjusts its programmed trajectory to respond but returns to its initial trajectory when the input ends.
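That return-to-plan behavior can be sketched in a few lines. The minimal 1-D simulation below is purely illustrative (the gains, mass, and force profile are assumed, not taken from the Rice work): a virtual spring-damper pulls the robot toward a fixed desired trajectory, a brief human push deflects it, and because the plan itself never changes, the tracking error settles back to its pre-push level once the push ends.

```python
# Minimal 1-D impedance-control sketch (illustrative values; not the Rice code).
# A virtual spring-damper tracks a fixed desired trajectory. An external push
# deflects the robot, but the desired trajectory itself never changes, so the
# robot settles back onto its original plan once the push ends.

def simulate(k=50.0, b=15.0, m=1.0, dt=0.01, steps=400):
    x, v = 0.0, 0.0
    errors = []
    for t in range(steps):
        x_des = 0.005 * t                        # planned path: constant-rate motion
        f_ext = -5.0 if 100 <= t < 150 else 0.0  # human briefly pushes against the motion
        a = (k * (x_des - x) - b * v + f_ext) / m  # spring-damper force toward the plan
        v += a * dt
        x += v * dt
        errors.append(abs(x_des - x))
    return errors

errors = simulate()
# Tracking error grows while the human pushes (steps 100-149), then decays
# back to the same small steady-state lag it had before the interaction.
```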

The Rice algorithm builds upon that concept as it allows the robot to adjust its path beyond the input and calculate a new route to its goal, something like a GPS system that recalculates the route to its destination when a driver misses a turn.
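A toy version of that idea can be written down directly. The sketch below is a simplification under assumed shapes and gains, not the published algorithm (which uses a minimum-jerk deformation profile): a push at the current waypoint is propagated into the future waypoints with a fading weight, so the correction persists in the replanned path instead of being treated as a momentary disturbance.

```python
import numpy as np

# Hedged sketch of physically interactive trajectory deformation.
# A human push at waypoint t is propagated into the *future* waypoints with a
# fading weight, so the replanned path keeps the correction. The linear fade
# here is a stand-in for the minimum-jerk profile used in the actual papers.

def deform(waypoints, t, u, horizon=10, scale=0.1):
    """Shift waypoints t..t+horizon-1 by human input u, fading to zero."""
    wp = np.array(waypoints, dtype=float)
    n = min(horizon, len(wp) - t)
    fade = np.linspace(1.0, 0.0, n)   # full effect now, none at the horizon
    wp[t:t + n] += scale * u * fade
    return wp

plan = np.zeros(30)                # original plan: cup carried at height 0
plan = deform(plan, t=5, u=-2.0)   # human presses the cup downward at step 5
# The replanned path dips at step 5 and blends back into the original
# trajectory by step 15; the correction is kept, not discarded.
```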

Losey spent much of last summer in the lab of Anca Dragan, an assistant professor of electrical engineering and computer sciences at the University of California, Berkeley, testing the theory. He and other students trained a robot arm and hand to deliver a coffee cup across a desktop, and then used enhanced pHRI to keep it away from a computer keyboard and low enough so that the cup wouldn’t break if dropped. (A separate paper on the experiments appears in the Proceedings of Machine Learning Research.)

The goal was to deform the robot’s programmed trajectory through physical interaction.

"Here the robot has a plan, or desired trajectory, which describes how the robot thinks it should perform the task," Losey wrote in an essay about the Berkeley experiments. "We introduced a real-time algorithm that modified, or deformed, the robot's future desired trajectory."

In impedance mode, the robot consistently returned to its original trajectory after an interaction. In learning mode, the feedback altered not only the robot’s state at the time of interaction but also how it proceeded to the goal, Losey said. If the user directed it to keep the cup from passing over the keyboard, for instance, it would continue to do so in the future.

"By replanning the robot's desired trajectory after each new observation, the robot was able to generate behavior that matches the human's preference," he said.
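One way to picture learning mode is as an online update of a preference weight from the human's correction. The sketch below is an assumption-laden simplification of that idea (the feature, learning rate, and update rule are illustrative, not the published method): the robot compares a feature of the corrected trajectory against the same feature of its original plan and shifts its objective weight accordingly, so future replans match the demonstrated preference.

```python
import numpy as np

# Hedged sketch of learning a preference from a physical correction.
# The feature choice, learning rate, and update rule are illustrative only.

def mean_height(traj):
    """Illustrative feature: average cup height along the trajectory."""
    return float(np.mean(traj))

theta = 0.0                          # weight on the height feature (0 = no preference)
alpha = 0.5                          # assumed learning rate

original = np.full(20, 1.0)          # robot's plan carries the cup high
corrected = original.copy()
corrected[8:12] = 0.4                # human pushes the cup down mid-path

# Shift theta toward the behavior the human demonstrated: the corrected
# trajectory has a lower mean height, so theta becomes negative, and a
# planner weighting this feature now prefers lower cup heights.
theta += alpha * (mean_height(corrected) - mean_height(original))
```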

Further tests employed 10 Rice students who used the O’Malley lab’s rehabilitative force-feedback robot, the OpenWrist, to manipulate a cursor around obstacles on a computer screen and land on a blue dot. The tests first used standard impedance control and then impedance control with physically interactive trajectory deformation, an analog of pHRI that allowed the students to train the device to learn new trajectories.

The results showed trials with trajectory deformation were physically easier and required significantly less interaction to achieve the goal. The experiments demonstrated that interactions can program otherwise-autonomous robots that have several degrees of freedom, in this case flexing an arm and rotating a wrist.

One current limitation is that pHRI cannot yet modify the amount of time it takes a robot to perform a task, but that is on the Rice team’s agenda.

“The paradigm shift in this work is that instead of treating a human as a random disturbance, the robot should treat the human as a rational being who has a reason to interact and is trying to convey something important,” Losey said.

"The robot shouldn't just try to get out of the way. It should learn what's going on and do its job better."

Rice University researchers led by graduate student Dylan Losey want to help humans and robots collaborate by enabling interactive tasks like rehabilitation, surgery and training programs in which environments are less predictable. In early studies, Losey and colleagues at the University of California, Berkeley, used gentle feedback to train a robot arm to manipulate a coffee cup in real time.

Credit: Andrea Bajcsy
