Robot's in-hand eye maps surroundings & determines location

Posted by: Jordan Mulcare

Before a robot arm can reach into a tight space or pick up a delicate object, the robot needs to know precisely where its hand is. Researchers at Carnegie Mellon University’s Robotics Institute have shown that a camera attached to the robot’s hand can rapidly create a 3D model of its environment and also locate the hand within that 3D world.

Doing so with imprecise cameras and wobbly arms in real time is tough, but the CMU team found they could improve the accuracy of the map by incorporating the arm itself as a sensor, using the angles of its joints to better determine the pose of the camera. This would be important for a number of applications, including inspection tasks, said Matthew Klingensmith, a Ph.D. student in robotics.
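
The key ingredient is forward kinematics: the arm’s joint encoders already pin down where a wrist-mounted camera must be. As a rough illustration (a minimal planar sketch with made-up names; a real arm such as the Mico uses full 3D kinematics derived from its link geometry), the camera pose follows directly from the measured joint angles:

```python
import numpy as np

def camera_pose_from_joints(joint_angles, link_lengths):
    # Planar forward kinematics: accumulate each joint's rotation,
    # then advance along the link to find where the end of the arm
    # (and hence the wrist-mounted camera) sits in the plane.
    x, y, heading = 0.0, 0.0, 0.0
    for q, length in zip(joint_angles, link_lengths):
        heading += q
        x += length * np.cos(heading)
        y += length * np.sin(heading)
    return x, y, heading

# Example: a two-link arm with 0.3 m links.
print(camera_pose_from_joints([np.pi / 4, -np.pi / 6], [0.3, 0.3]))
```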

The researchers will present their findings on May 17 at the IEEE International Conference on Robotics and Automation in Stockholm, Sweden. Siddhartha Srinivasa, associate professor of robotics, and Michael Kaess, assistant research professor of robotics, joined Klingensmith in the study.

Placing a camera or other sensor in the hand of a robot has become feasible as sensors have grown smaller and more power-efficient, Srinivasa said. That’s important, he explained, because robots “usually have heads that consist of a stick with a camera on it.” They can’t bend over like a person could to get a better view of a work space.

But an eye in the hand isn’t much good if the robot can’t see its hand and doesn’t know where its hand is relative to objects in its environment. It’s a problem shared with mobile robots that must operate in an unknown environment. A popular solution for mobile robots is called simultaneous localization and mapping, or SLAM, in which the robot pieces together input from sensors such as cameras, lidar and wheel odometry to create a 3D map of the new environment and to figure out where the robot is within that 3D world.
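
At its core, SLAM repeatedly blends a motion prediction with what the sensors actually observe, trusting each source in proportion to its reliability. A one-dimensional toy version of that update step (real systems estimate full 6-DoF poses and thousands of map points jointly) conveys the idea:

```python
def fuse(prediction, pred_var, observation, obs_var):
    # Standard 1D Kalman update: weight each source by the
    # inverse of its uncertainty.
    gain = pred_var / (pred_var + obs_var)
    estimate = prediction + gain * (observation - prediction)
    variance = (1.0 - gain) * pred_var
    return estimate, variance

position, variance = 0.0, 0.01   # start with a confident pose
position += 0.50                 # odometry reports 0.5 m of motion...
variance += 0.05                 # ...but dead reckoning drifts
# A camera sighting of a known landmark corrects the drift.
position, variance = fuse(position, variance, 0.47, 0.02)
print(position, variance)
```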

“There are several algorithms available to build these detailed worlds, but they require accurate sensors and a ridiculous amount of computation,” Srinivasa said.

Those algorithms often assume that little is known about the pose of the sensors, as might be the case if the camera were handheld, Klingensmith said. But if the camera is mounted on a robot arm, he added, the geometry of the arm will constrain how it can move.

“Automatically tracking the joint angles enables the system to produce a high-quality map even if the camera is moving very fast or if some of the sensor data is missing or misleading,” Klingensmith said.
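
One simple way to picture that advantage (a hypothetical fusion rule for illustration only; the paper itself formulates the estimate in the space of joint angles) is to treat the encoder-derived pose as a prior that never drops out:

```python
def estimate_camera_pose(visual_pose, tracking_ok, kinematic_pose, w=0.7):
    # Hypothetical fusion rule, not the paper's exact formulation:
    # the kinematic pose from the joint encoders is always available,
    # so fall back to it when visual tracking breaks (fast motion,
    # dropped or misleading frames), and otherwise blend it in to
    # suppress visual drift.
    if not tracking_ok:
        return kinematic_pose
    return [w * v + (1.0 - w) * k
            for v, k in zip(visual_pose, kinematic_pose)]
```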

The researchers demonstrated their Articulated Robot Motion for SLAM (ARM-SLAM) using a small depth camera attached to a lightweight manipulator arm, the Kinova Mico. Using it to build a 3D model of a bookshelf, they found that it produced reconstructions equivalent to, or better than, those of other mapping techniques.
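
Building the bookshelf model amounts to fusing many depth images into a single volumetric map. The sketch below shows the basic back-project-and-accumulate step under assumed pinhole intrinsics (fx, fy, cx, cy); the paper’s pipeline uses a more sophisticated truncated-signed-distance fusion rather than this simple per-voxel average:

```python
import numpy as np

VOXEL = 0.01        # 1 cm voxels
grid = {}           # voxel index -> (depth sum, sample count)

def integrate(depth, fx, fy, cx, cy, cam_to_world):
    # Back-project every valid depth pixel into 3D using the camera
    # intrinsics, carry it into world coordinates with the estimated
    # camera pose, and accumulate it into the matching voxel.
    h, w = depth.shape
    for v in range(h):
        for u in range(w):
            z = depth[v, u]
            if z <= 0.0:
                continue
            point = np.array([(u - cx) * z / fx,
                              (v - cy) * z / fy,
                              z, 1.0])
            world = cam_to_world @ point      # 4x4 homogeneous transform
            key = tuple(np.floor(world[:3] / VOXEL).astype(int))
            total, count = grid.get(key, (0.0, 0))
            grid[key] = (total + z, count + 1)
```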

“We still have much to do to improve this approach, but we believe it has huge potential for robot manipulation,” Srinivasa said.

Toyota, the U.S. Office of Naval Research and the National Science Foundation supported this research.

