
# A Measurement Reduction Technique

IP.com Disclosure Number: IPCOM000096229D
Original Publication Date: 1963-Feb-01
Included in the Prior Art Database: 2005-Mar-07
Document File: 3 page(s) / 27K

IBM

Hu, KC: AUTHOR

## Abstract

Given a set of measurements of size L, measurement reduction in a recognition system finds the sub-set of measurements of size S, drawn from the full set with S << L, which gives the best performance among all L!/(S!(L-S)!) possible sub-sets of size S.

This text was extracted from a PDF file.
At least one non-text object (such as an image or picture) has been suppressed.
This is the abbreviated version, containing approximately 54% of the total text.


A method of reduction is developed by first measuring the information content of each measurement and then, starting from the measurement with the highest information measure, finding the first S pairwise most independent measurements; these approximate the best sub-set of measurements of size S.

The following function is used to measure information and the pairwise independence of measurements:

$$I(x) = -\sum_{i=1}^{M} P(i)\,\log P(i) + \sum_{j=1}^{N} P(j) \sum_{i=1}^{M} P(ji)\,\log P(ji)$$

I(x) is the information gain of measurement x,

M is the total number of distinct classes (identities) across all the samples used,

N is the number of discrete levels (states) of measurement x,

P(i) is the a priori probability of a sample being in class i,

P(j) is the probability that a sample, after being measured by measurement x, is in state j,

P(ji) is the conditional probability of a sample being in class i, given that the value of the measurement for this sample is in state j.

The base of the logarithm is arbitrary; it determines the unit in which I(x) is expressed.

The first term on the right-hand side is the uncertainty given only a priori knowledge, with no observation. The second term is the negative of the uncertainty that remains after the measurement is observed.
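As a concrete illustration (a minimal sketch, not part of the original disclosure; the function name and data layout are assumptions), the information gain above can be computed from labelled samples as follows:

```python
import math
from collections import Counter

def information_gain(states, classes, base=2.0):
    """I(x) = -sum_i P(i) log P(i) + sum_j P(j) sum_i P(ji) log P(ji),
    where P(ji) is the probability of class i given measurement state j."""
    n = len(states)
    # A priori class uncertainty: -sum_i P(i) log P(i)
    prior = -sum((c / n) * math.log(c / n, base)
                 for c in Counter(classes).values())
    # Negative of the remaining uncertainty after observing the state
    by_state = Counter(states)
    joint = Counter(zip(states, classes))
    post = 0.0
    for j, nj in by_state.items():
        for (js, i), nji in joint.items():
            if js == j:
                pji = nji / nj  # P(ji): class i given state j
                post += (nj / n) * pji * math.log(pji, base)
    return prior + post  # the second term enters with a plus sign

# A binary measurement that perfectly separates two classes yields one bit
print(information_gain([0, 0, 1, 1], ['a', 'a', 'b', 'b']))  # -> 1.0
```

A measurement whose state is independent of the class (e.g. states `[0, 1, 0, 1]` against the same labels) yields a gain of zero, matching the interpretation of I(x) as the reduction in uncertainty.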

For example, if the given discrete measurements are binary, the equation is first used to calculate the information of each of the L measurements, with N = 2. After the measurement with the highest information gain is chosen as the first member of the sub-set S, the second is chosen by considering each of the remaining measurements jointly with the first one, S(1), as a four-level discrete measurement. Its information gain I(S(1)x) is...
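The pairing step for binary measurements might be sketched as follows (an illustrative reconstruction, not the disclosure's own code; the helper names and the sample data are assumptions). Each remaining candidate is combined with the already-chosen measurement into a single four-level measurement, and the gain I(S(1)x) of that combination is evaluated:

```python
import math
from collections import Counter

def information_gain(states, classes, base=2.0):
    # I(x) = -sum_i P(i) log P(i) + sum_j P(j) sum_i P(ji) log P(ji)
    n = len(states)
    prior = -sum((c / n) * math.log(c / n, base)
                 for c in Counter(classes).values())
    post = 0.0
    joint = Counter(zip(states, classes))
    for j, nj in Counter(states).items():
        for (js, i), nji in joint.items():
            if js == j:
                post += (nj / n) * (nji / nj) * math.log(nji / nj, base)
    return prior + post

def select_second(measurements, classes, first):
    """Among the remaining binary measurements, pick the one whose pairing
    with the chosen measurement `first` (viewed as one four-level
    measurement) has the highest information gain I(S(1)x)."""
    best, best_gain = None, -1.0
    for name, m in measurements.items():
        if name == first:
            continue
        # Two binary values combined into one four-level state: 2*a + b
        paired = [2 * a + b for a, b in zip(measurements[first], m)]
        gain = information_gain(paired, classes)
        if gain > best_gain:
            best, best_gain = name, gain
    return best, best_gain

# Hypothetical data: three binary measurements over six labelled samples
measurements = {
    'x1': [0, 0, 0, 1, 1, 1],
    'x2': [0, 1, 0, 1, 0, 1],
    'x3': [0, 0, 1, 1, 0, 0],
}
classes = ['a', 'a', 'b', 'b', 'c', 'c']
print(select_second(measurements, classes, 'x1'))
```

On this data the pairing of `x1` with `x3` resolves all three classes, so `x3` is selected; the same combining step would be repeated, with N doubling each time, to grow the sub-set toward size S.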