
# A Statistical Quantizer

IP.com Disclosure Number: IPCOM000097424D
Original Publication Date: 1962-Nov-01
Included in the Prior Art Database: 2005-Mar-07
Document File: 3 page(s) / 26K

IBM

Author: Hu, K. C.

## Abstract

A technique for quantizing real-valued measurements into discrete-valued measurements is to calculate the total amount of uncertainty before quantizing and the amount of uncertainty after quantizing, and then to maximize the difference. Thus, for any given number of discrete levels, the information content of a measurement is maximized. To measure the uncertainty, the a priori probability distribution of the samples and the conditional probability distribution of the samples with respect to the discrete levels of the measurement are both taken into account. Based on practical considerations, exhaustive computation is avoided.



The following function is used to measure information:

I(x) = −Σ_{i=1}^{M} P(i) log P(i) + Σ_{j=1}^{N} P(j) Σ_{i=1}^{M} P_j(i) log P_j(i)   (1)

where:

- I(x) is the information gain of measurement x,
- M is the total number of distinct classes (identities) of all the samples used,
- N is the number of discrete levels (states) of measurement x,
- P(i) is the a priori probability of a sample being in class i,
- P(j) is the probability of a sample, after being measured by measurement x, being in state j,
- P_j(i) is the conditional probability of a sample being in class i, given that the value of the measurement for this sample is in state j.

The base of the logarithm is arbitrary and depends on the unit chosen for I(x).

The first term on the right-hand side is the uncertainty with a priori knowledge but no observation. The second term is the negative of the amount of uncertainty after some observation.
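Equation (1) can be evaluated directly once the three probability distributions are in hand. The sketch below is an illustrative implementation, not the disclosure's own procedure; the function name and argument layout are assumptions.

```python
import math

def information_gain(p_class, p_state, p_class_given_state, base=2.0):
    """Information gain I(x) of a quantized measurement x, per equation (1).

    p_class[i]                -- a priori probability P(i) of class i (M classes)
    p_state[j]                -- probability P(j) of quantized state j (N states)
    p_class_given_state[j][i] -- conditional probability P_j(i) of class i given state j
    """
    # First term: uncertainty with a priori knowledge but no observation,
    # H = -sum_i P(i) log P(i).
    prior = -sum(p * math.log(p, base) for p in p_class if p > 0)
    # Second term: negative of the uncertainty remaining after observation,
    # sum_j P(j) sum_i P_j(i) log P_j(i).  Zero-probability terms contribute nothing.
    posterior = sum(
        pj * sum(p * math.log(p, base) for p in row if p > 0)
        for pj, row in zip(p_state, p_class_given_state)
    )
    return prior + posterior

# Two equiprobable classes: a perfectly separating binary measurement yields
# one bit of gain, while an uninformative one yields zero.
perfect = information_gain([0.5, 0.5], [0.5, 0.5], [[1.0, 0.0], [0.0, 1.0]])
useless = information_gain([0.5, 0.5], [0.5, 0.5], [[0.5, 0.5], [0.5, 0.5]])
```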

An example of quantizing a measurement with real value into a binary measurement...
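The disclosure's worked example is truncated here, so the following sketch is an assumption: it estimates the probabilities in equation (1) by counting labeled samples, scores a candidate threshold for a binary (two-state) quantizer, and picks the best threshold from a candidate grid. The grid search stands in for whatever shortcut the disclosure uses to avoid exhaustive computation; all names are hypothetical.

```python
import math

def binary_quantizer_gain(samples, labels, threshold, base=2.0):
    """Estimate I(x) of equation (1) for a binary quantizer mapping a
    real-valued sample to state 0 (value <= threshold) or state 1 (value >
    threshold).  Probabilities are estimated by simple counting."""
    classes = sorted(set(labels))
    n = len(samples)
    # P(i): a priori class probabilities estimated from the labels.
    p_class = [labels.count(c) / n for c in classes]
    # Joint counts over (state j, class i).
    counts = [[0] * len(classes) for _ in range(2)]
    for x, y in zip(samples, labels):
        j = 0 if x <= threshold else 1
        counts[j][classes.index(y)] += 1
    prior = -sum(p * math.log(p, base) for p in p_class if p > 0)
    posterior = 0.0
    for row in counts:
        nj = sum(row)
        if nj == 0:
            continue  # empty state contributes nothing
        pj = nj / n
        posterior += pj * sum((c / nj) * math.log(c / nj, base)
                              for c in row if c > 0)
    return prior + posterior

def best_threshold(samples, labels, candidates):
    """Pick the candidate threshold maximizing the estimated gain."""
    return max(candidates, key=lambda t: binary_quantizer_gain(samples, labels, t))

# Two well-separated classes: the threshold between the clusters recovers
# the full one bit of information.
xs = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]
ys = [0, 0, 0, 1, 1, 1]
t_star = best_threshold(xs, ys, [0.25, 0.5, 0.75])
```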