Methods of Robot Perception and Control for Human-Robot Interaction

IP.com Disclosure Number: IPCOM000200694D
Publication Date: 2010-Oct-25
Document File: 6 page(s) / 652K

Publishing Venue

The IP.com Prior Art Database

Abstract

The invention concerns the core methods of robot perception and control for human-robot interaction. Building on these core techniques, robots can perform intelligent tasks such as following humans through buildings, imitating stealthy behaviors, manipulating tools and weapons, learning complex assembly by observation, and moving as a team to provide cover or surveillance. The human-robot interaction (or coordination) techniques can be illustrated with an example application in which a robot picks up a rubber duck at a commander's request:

(1) Interaction/attention: The commander says "Hey! Robot" loudly to draw the robot's attention.
(2) Audio source localization using a microphone array: The robot turns toward the commander based on the sound source.
(3) Human pose estimation: The commander says "Go pick up that rubber duck (or any other object)" while pointing at the duck with his/her hand.
(4) Image data collection for learning: The robot does not know what a 'rubber duck' is, so it runs a Google/Yahoo/MSN image search and collects matching images.
(5) Training a classifier or candidate templates: Given the collected candidate images, the robot trains a classifier from them or simply saves them for matching.
(6) Bio-inspired preprocessing using visual saliency: The robot goes to the area the commander pointed at and generates a visual saliency map.
(7) Object recognition: The robot performs object recognition and localizes the object within an area constrained by the hot candidate regions of the saliency map.
(8) Manipulation/control: The robot picks up the rubber duck and brings it back to the commander.



The robotic perception and control techniques developed in our invention for C2M machines are demonstrated on three different robotic platforms. First, a mini autonomous ground vehicle (an R/C car) automatically follows another car; this involves robust visual tracking on unstable wireless video and wireless PC-based car control. Second, we show that a humanoid version of the Lego Mindstorm NXT can follow a ball automatically, using an autonomous ball tracker and a control algorithm embedded in RoboRealm (a robot development environment, http://www.roborealm.com). Third, a more advanced humanoid, named NAO, demonstrates audio source localization using a microphone array, visually guided grasping of an object with a fiducial marker, and object detection/localization for faces and any objects trained for classification. The rest of the section covers visual tracking, audio source localization, and object detection, followed by conclusions and future work.

Visual Tracking

Visual tracking has been thoroughly studied by computer vision researchers and is widely used in many application areas such as video surveillance, bio-medical image analysis, gaming, and robotics. There are many ways to perform object tracking from video, from simple color blob tracking to sophisticated probabilistic methods such as particle filtering or mean-shift. The capability of following an independently moving object is important for unmanned autonomous vehicles and humanoids in human-robot interaction/coordination.
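For illustration, a minimal mean-shift color tracker of the kind mentioned above can be sketched in Python with OpenCV. This is a generic sketch rather than the tracker used on our platforms; the video file name, the initial target window, and the HSV threshold values below are placeholder assumptions.

    # Minimal mean-shift tracking sketch (Python/OpenCV). The file name,
    # initial window, and thresholds are placeholders, not values from the
    # actual system.
    import cv2

    cap = cv2.VideoCapture("video.avi")
    ok, frame = cap.read()
    x, y, w, h = 300, 200, 60, 60                  # initial target window
    roi = frame[y:y + h, x:x + w]

    # Build a hue histogram of the target region as its appearance model.
    hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv_roi, (0, 60, 32), (180, 255, 255))
    roi_hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0, 180])
    cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    track_window = (x, y, w, h)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        # Back-project the model histogram to get a likelihood map, then
        # shift the window toward the local mode of that map.
        back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
        _, track_window = cv2.meanShift(back_proj, track_window, term)
        x, y, w, h = track_window
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("tracking", frame)
        if cv2.waitKey(30) & 0xFF == 27:           # Esc to quit
            break

    cap.release()
    cv2.destroyAllWindows()

A particle filter tracker would replace the single mean-shift window with a set of weighted hypotheses, at higher computational cost but with better robustness to clutter and occlusion.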

   We implemented and embedded visual tracking modules into two robotic platforms - an R/C car following another car and a humanoid version of the Lego Mindstorm NXT following a small ball.


Figure 1. Humanoid version of Lego Mindstorm NXT following a red ball autonomously.

Figure 1 shows the humanoid version of the Lego Mindstorm NXT automatically following a colored ball. It has a USB camera as its head for visual inputs. Object following by visual tracking is made possible by a ball tracker and a control algorithm, both embedded in RoboRealm, which provides programming interfaces for USB cameras and the Lego Mindstorm NXT. The robot is controlled wirelessly through Bluetooth. The humanoid can move forward, backward, and left/right by changing the speed and turning direction of the motor in each leg. The ball tracker uses a simple color blob tracking method based on color thresholding and connected component analysis; the centroid and size of the largest detected blob are taken as the ball location and size. Given the ball size and location (within the image) measured by the ball tracker, the proper control command is sent to the robot so that it keeps the ball in the center of the image (moving left/right) and at the initial ball size (moving forward/backward).
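A rough Python/OpenCV sketch of this tracker-and-controller loop is given below. The HSV thresholds, the controller gains, and the function structure are illustrative assumptions only, since the actual implementation was assembled from RoboRealm modules and sends its commands to the NXT over Bluetooth.

    # Sketch of the color blob ball tracker and the follow controller
    # described above. The thresholds and gains are assumed values; the
    # real system used RoboRealm modules and Bluetooth control of the NXT.
    import cv2
    import numpy as np

    LOWER_RED = np.array([0, 120, 70])     # assumed hue range for a red ball
    UPPER_RED = np.array([10, 255, 255])

    def detect_ball(frame):
        """Color thresholding + connected component analysis; returns the
        centroid and area of the largest blob, or None if nothing is found."""
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, LOWER_RED, UPPER_RED)
        num, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
        if num < 2:                        # label 0 is the background
            return None
        largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
        cx, cy = centroids[largest]
        area = stats[largest, cv2.CC_STAT_AREA]
        return cx, cy, area

    def follow_step(frame, target_area, k_turn=0.004, k_size=0.00005):
        """One control step: turn to keep the ball centered horizontally and
        drive to restore its initial apparent size. Returns (turn, forward)
        commands normalized to [-1, 1]."""
        det = detect_ball(frame)
        if det is None:
            return 0.0, 0.0                # stop when the ball is lost
        cx, cy, area = det
        err_x = cx - frame.shape[1] / 2.0  # offset from image center
        err_size = target_area - area      # ball looks small => move forward
        turn = float(np.clip(k_turn * err_x, -1.0, 1.0))
        forward = float(np.clip(k_size * err_size, -1.0, 1.0))
        return turn, forward

On the robot, the turn and forward values would be mapped onto the left/right leg motor speeds over the NXT Bluetooth link; the exact mapping and gains used in the demonstration are not part of this sketch.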

