# Learning Hardware Software System for the Recognition and Pick Up of Objects

IP.com Disclosure Number: IPCOM000082550D
Original Publication Date: 1974-Dec-01
Included in the Prior Art Database: 2005-Feb-28
Document File: 4 page(s) / 95K

IBM

## Related People

Ayoub, EE: AUTHOR

## Abstract

A computer hardware software system recognizes and picks up objects. The hardware shown in Fig. 1 includes computer 10, sensing device 11 (TV camera, ultrasonic detectors, fiber optic detectors, etc.) and manipulator 12.



The software consists of an algorithm specifically tailored to recognizing and picking up an object 14 through minimization of an objective function (see Fig. 2). System operation involves four stages:

**Stage 1 (Learning stage):**

- Step 1: Determine the "canonical" positions of the object. The canonical positions (1, 2, ..., pi, ...) of an object can be defined, for example, as the positions that minimize the potential energy (e) of the object with respect to the surface(s) on which the object rests. The order of the canonical positions must be chosen to be consistent with the condition p(1) ≥ p(2) ≥ ... ≥ p(pi) ≥ ..., where p(pi) is the probability of occurrence of position pi.
- Step 2: Using the sensing device, obtain the images I(1), I(2), ..., I(pi), ... that correspond to the object in canonical positions 1, 2, ..., pi, .... The images are stored in the computer as matrices I(i,j) (see Fig. 3a).
- Step 3: Construct (using a joystick, for example) and store the "canonical" procedures required to pick up the object, the canonical procedures P(1), P(2), ..., P(pi), ... corresponding to the canonical positions 1, 2, ..., pi, ....

**Stage 2 (Recognition stage):**

- Step 1: Using the sensing device, the computer obtains an image I of the object (the object occupying an unknown position) and stores the image as a matrix I(i,j) (see Fig. 3b).
- Step 2: Using the algorithm of Fig. 2, the computer determines both the canonical position pi of the object and the optimal translation and rotation parameters (a and b, and theta, respectively) that satisfy the relation I(pi) ≈ [R(theta) ∘ T(a,b)] I, where R(theta) stands for a rotation and T(a,b) for a translation.

**Stage 3 (Learning stage):**

- Step 1: If the position of the object cannot be determined, the computer updates the results obtained through the execution of Stage 1, Steps 1, 2, and 3.

**Stage 4 (Pick-up stage):**

- Step 1: Using the stored procedure P(pi) of Stage 1, Step 3, and the parameters a, b, and theta of Stage 2, the computer constructs a new procedure whose execution allows the manipulator to pick up the object. The new procedure could be obtained, for example, simply by rotating and translating the coordinates generated through execution of procedure P(pi).
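The Stage 4 coordinate adaptation can be sketched as a plain rigid-body transform of the stored procedure's waypoints. The function name and the (x, y) waypoint representation are illustrative assumptions, not part of the disclosure; theta is taken in radians and (a, b) in the manipulator's planar coordinates:

```python
import math

def adapt_procedure(waypoints, a, b, theta):
    """Rotate each (x, y) waypoint of a stored canonical pick-up
    procedure P(pi) by theta, then translate it by (a, b)."""
    adapted = []
    for x, y in waypoints:
        xr = x * math.cos(theta) - y * math.sin(theta)
        yr = x * math.sin(theta) + y * math.cos(theta)
        adapted.append((xr + a, yr + b))
    return adapted
```

For example, a quarter-turn (theta = pi/2) carries the waypoint (1, 0) to (0, 1) before the (a, b) offset is added.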

In Fig. 2, the recognition algorithm is shown. In step 20, pi is set to 1, assuming the object is in position 1. In step 22, which follows, the matrix I(pi) is obtained. Next, in step 23, the minimization iteration counter upsilon is set to 0. In step 24, values for a(upsilon), b(upsilon), and theta(upsilon) are estimated. In step 25, a new matrix is constructed

(Image Omitted)

In step 26, the objective f...
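The Stage 2 search that Fig. 2 performs iteratively can be sketched as an exhaustive grid search over a discretized parameter space. The quarter-turn rotations, integer wrap-around shifts, and squared-difference objective are all simplifying assumptions of this sketch, not details given in the disclosure:

```python
import numpy as np

def transform(image, a, b, theta):
    """Translate an image by (a, b) grid cells (wrap-around, via np.roll),
    then rotate it by theta quarter turns (via np.rot90)."""
    return np.rot90(np.roll(image, (a, b), axis=(0, 1)), theta)

def recognize(image, templates, shifts=range(-2, 3)):
    """Search over canonical positions pi and parameters (a, b, theta),
    minimizing the objective f = sum((I(pi) - [R(theta) . T(a,b)] I)**2).
    Returns (f, pi, a, b, theta) for the best match found."""
    best = None
    for pi, template in enumerate(templates):
        for a in shifts:
            for b in shifts:
                for theta in range(4):
                    f = np.sum((template - transform(image, a, b, theta)) ** 2)
                    if best is None or f < best[0]:
                        best = (f, pi, a, b, theta)
    return best
```

With an asymmetric template (e.g. an L-shaped blob) and an input image that is a shifted, rotated copy of it, the search recovers a parameter set that maps the image exactly onto the template, driving the objective to zero.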