
Optimizing the Weights in the Weighted Distance Calculation Of a Vector Quantizer

IP.com Disclosure Number: IPCOM000121024D
Original Publication Date: 1991-Jul-01
Included in the Prior Art Database: 2005-Apr-03
Document File: 2 page(s) / 80K

Publishing Venue

IBM

Related People

Bahl, LR: AUTHOR, and 4 others

Abstract

A vector quantizer identifies the "nearest" reference vector to a given sample vector. Common definitions of "nearest" include those based upon a weighted Euclidean distance, diagonal Gaussian likelihood, or full-covariance Gaussian likelihood. All of these lead to distance calculations of the form d(X,R) = SUM(k=0..K) wk yk, where X denotes a sample vector, R denotes a reference vector, y0 = 1, and for k > 0, yk = (xi - ri)(xj - rj) for some pair of indices i, j.


Optimizing the Weights in the Weighted Distance Calculation Of a Vector Quantizer

      A vector quantizer identifies the "nearest" reference
vector to a given sample vector.  Common definitions of "nearest"
include those based upon a weighted Euclidean distance, diagonal
Gaussian likelihood, or full-covariance Gaussian likelihood.  All of
these lead to distance calculations of the form

      d(X,R) = SUM(k=0..K) wk yk

where X denotes a sample vector, R denotes a reference vector,
y0 = 1, and for k > 0, yk = (xi - ri)(xj - rj) for some pair of
indices i, j.
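
      For concreteness, the following Python sketch implements this
distance form.  It is an illustration only: the assignment of an
index pair (i, j) to each term k, and all names used here (features,
distance, pairs), are assumptions, not part of the disclosure.

import numpy as np

def features(x, r, pairs):
    # Y(X,R): y0 = 1, and for k > 0, yk = (x[i] - r[i]) * (x[j] - r[j]),
    # where (i, j) is the index pair assigned to term k.
    y = [1.0]
    for i, j in pairs:
        y.append((x[i] - r[i]) * (x[j] - r[j]))
    return np.array(y)

def distance(w, x, r, pairs):
    # d(X,R) = SUM(k=0..K) wk yk
    return float(w @ features(x, r, pairs))

# Weighted Euclidean distance is the special case i == j for every k:
dim = 3
pairs = [(i, i) for i in range(dim)]        # yk = (xk - rk)^2
w = np.concatenate(([0.0], np.ones(dim)))   # w0 = 0, wk = 1 for k > 0
x = np.array([1.0, 2.0, 3.0])
r = np.array([1.0, 0.0, 3.0])
print(distance(w, x, r, pairs))             # 4.0, the squared Euclidean distance

      Allowing one term per pair i <= j covers the full-covariance
Gaussian case under the same form.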

      For vector quantizers using a distance measure of this form,
the invention detailed below optimizes the weights without assuming
the underlying model to be correct.

      We will assume the existence of some correctly labeled training
vectors, a set of reference vectors, and initial estimates of the
weights.  If no other estimates of the weights exist, they may be
initialized as w0 = 0, wk = 1, for k > 0.
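
      In code, and reusing the hypothetical pairs from the sketch
above, this default initialization is simply:

K = len(pairs)                            # number of product terms yk, k > 0
w = np.concatenate(([0.0], np.ones(K)))   # w0 = 0, wk = 1 for k > 0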

      The algorithm, which is a hill-climbing procedure, is similar
in concept to the well-known error-correcting training algorithm for
linear classifiers [*].  The steps are as follows (a code sketch
follows the list).
(1)  Select an initial step size s; s = 1 is a reasonable first
choice.
(2)  For each reference vector R initialize an adjustment vector AR
to zero and an adjustment count cR to zero. Set a global error
counter e to zero.
(3)  Perform steps (4) - (5) for each training vector X.
(4)  Find the nearest reference vector to X and denote it as N.  If N
is correct, make no adjustments; skip step (5).  If N is incorrect,
make some adjustments; perform step (5).
(5)  Denote the correct reference prototype as M.  Compute YM from X
and M, and YN from X and N.  Update the accumulators as follows:
      AM = AM - h.YM
     ...
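
      The available text is truncated at this point, so the
remaining accumulator updates and the final weight adjustment are
not reproduced here.  The Python sketch below covers only the
published steps (2) - (5) for a single pass, reusing features(),
distance(), and pairs from the earlier sketch.  Reading the factor h
as the step size s of step (1), and the symmetric update for N, are
assumptions made for illustration.

import numpy as np

def nearest(w, x, refs, pairs):
    # Step (4): index of the reference vector nearest to X.
    return min(range(len(refs)), key=lambda m: distance(w, x, refs[m], pairs))

def training_pass(w, samples, labels, refs, pairs, s=1.0):
    # Step (2): zero each adjustment vector AR and adjustment count cR,
    # and the global error counter e.
    A = [np.zeros(len(w)) for _ in refs]
    c = [0] * len(refs)
    e = 0
    # Step (3): visit every training vector X.
    for x, m in zip(samples, labels):
        n = nearest(w, x, refs, pairs)
        if n == m:
            continue                    # step (4): N is correct, no adjustment
        # Step (5): N is incorrect; M (index m) is the correct prototype.
        A[m] -= s * features(x, refs[m], pairs)   # AM = AM - h.YM
        A[n] += s * features(x, refs[n], pairs)   # assumed update for N (truncated)
        c[m] += 1
        c[n] += 1
        e += 1
    return A, c, e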