
# Vector Processor Method for Computing Multiple Multivariate Gaussian Probabilities

IP.com Disclosure Number: IPCOM000034039D
Original Publication Date: 1989-Jan-01
Included in the Prior Art Database: 2005-Jan-26
Document File: 3 page(s) / 45K

IBM

## Abstract

In many speech recognition systems, it is necessary to compute the probability of an observation over many multivariate Gaussian probability density functions. A vector processor algorithm is described for computing multiple multivariate Gaussians that provides substantial performance improvements over scalar computation even when the dimensionality of the observation vector is small.

This is the abbreviated version, containing approximately 52% of the total text.



In many speech recognition systems, it is necessary to compute the probability of an observation over many multivariate Gaussian probability density functions. A vector processor algorithm is described for computing multiple multivariate Gaussians that provides substantial performance improvements over scalar computation even when the dimensionality of the observation vector is small.

## 1. Background

Many speech recognition algorithms [1,2,3] model an input parameter vector y as having an underlying Gaussian distribution of the form

$$p(y) = (2\pi)^{-N/2}\,|\Sigma|^{-1/2}\exp\!\left(-\tfrac{1}{2}(y-m)^{T}\Sigma^{-1}(y-m)\right) \tag{1}$$

where N is the dimension of y, m is the mean vector, and Σ is a (positive definite) covariance matrix. Many of these algorithms require computation of p(y) for many different values of m and Σ.
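As an illustration only (not part of the original disclosure), the per-Gaussian log-density calculation can be sketched in NumPy; the function name `gaussian_log_density` is hypothetical:

```python
import numpy as np

def gaussian_log_density(y, m, cov):
    """log p(y) for a single multivariate Gaussian.

    The first (constant) term depends only on cov and could be
    precomputed, as the disclosure notes.
    """
    N = y.shape[0]
    d = y - m
    # log|cov|; the covariance is assumed symmetric positive definite.
    sign, logdet = np.linalg.slogdet(cov)
    const = -0.5 * (N * np.log(2.0 * np.pi) + logdet)
    # (y - m)^T cov^{-1} (y - m), via a linear solve rather than an
    # explicit inverse.
    quad = -0.5 * (d @ np.linalg.solve(cov, d))
    return const + quad
```

Computed this way for one (m, Σ) pair at a time, the matrix and vector sizes are too small (on the order of 20) for vector hardware to pay off, which is the problem the algorithm below addresses.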

Typically, one is able to deal with log p(y), since the log is a monotonic transformation. Hence, the basic calculation reduces to

$$\log p(y) = -\tfrac{1}{2}\left(N\log 2\pi + \log|\Sigma|\right) - \tfrac{1}{2}(y-m)^{T}\Sigma^{-1}(y-m). \tag{2}$$

The first term on the right-hand side of Equation 2 can be precomputed and presents no particular computational burden. The second term on the right-hand side, however, requires a matrix-vector multiply and a dot product. For many speech applications, the dimension of the vector y is small, e.g., on the order of 20. Although a modern vector processor, such as an IBM 3090, is very efficient at matrix and vector operations, there is little or no improvement in performance when the dimensions of the matrices and vectors are on the order of 20. Therefore, straightforward use of matrix-vector routines to compute

$$q(y;\,m,\Sigma) = (y-m)^{T}\Sigma^{-1}(y-m) \tag{3}$$

for fixed y and many values of m and Σ produces no performance improvement, and may even degrade performance, relative to carrying out the above calculation in a scalar fashion. A reformulation of the above calculation is described that takes full advantage of the IBM 3090's vector performance.

## 2. Algorithm

The goal of the proposed algorithm is to rapidly compute q(y; m_i, Σ_i) for fixed y and many different pairs m_i and Σ_i. Expanding the quadratic form, the second term of Equation 2 (for the i-th Gaussian) can be rewritten as

$$q_i(y) = \sum_{j,k} a_{ijk}\,y_j\,y_k \;-\; 2\sum_{j} v_{ij}\,y_j \;+\; c_i \tag{4}$$

where

$$a_{ijk} = \left(\Sigma_i^{-1}\right)_{jk}, \qquad v_{ij} = \sum_{k} a_{ijk}\,m_{ik}, \qquad c_i = \sum_{j,k} a_{ijk}\,m_{ij}\,m_{ik}. \tag{5}$$

Now reinterpret a_{ijk} as the elements of a matrix with ij rows and k columns. In a typical speech recognition system, the number of multivariate Gaussian distributions is 200 and the dimensionality of y is 20, so that the number of rows in a_{ijk} is 4000. Computing

$$b_{ij} = \sum_{k} a_{ijk}\,y_k \tag{6}$$

is then a matrix-vector multiply in which one can obtain a large gain in performance because of the large size of the ij index; the first part of Equation 4 is recovered as the sum over j of b_{ij} y_j. Similarly, defining V as the matrix with elements v_{ij}, one can calculate the second part of Equation 4 as

$$\sum_{j} v_{ij}\,y_j = (Vy)_i. \tag{7}$$

Again, if the number of Gaussians is on the order of 200, there is once again a large performance advantage for a matrix-vector multiply over a straight scalar calculation. It should be pointed out that the data in a_{ijk} must be arra...
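The reformulation above can be sketched in NumPy as follows. This is a sketch under assumed shapes — 200 Gaussians of dimension 20, matching the disclosure's typical system — with hypothetical variable names; the disclosure targets IBM 3090 vector hardware, but the data layout is the same idea: stack all inverse covariances into one 4000 × 20 matrix so the per-observation work becomes two large matrix-vector multiplies.

```python
import numpy as np

rng = np.random.default_rng(0)
G, N = 200, 20                      # number of Gaussians, vector dimension

y = rng.standard_normal(N)          # fixed observation
means = rng.standard_normal((G, N))
# Build G random symmetric positive definite inverse covariances S_i.
A = rng.standard_normal((G, N, N))
S = A @ A.transpose(0, 2, 1) + N * np.eye(N)

# Precomputed quantities, independent of y:
a = S.reshape(G * N, N)                    # a_{ijk} as a (G*N) x N matrix
v = np.einsum('ijk,ik->ij', S, means)      # v_{ij} = sum_k a_{ijk} m_{ik}
c = np.einsum('ij,ij->i', v, means)        # c_i = sum_{j,k} a_{ijk} m_ij m_ik

# Per observation: two large matrix-vector multiplies.
b = (a @ y).reshape(G, N)                  # b_{ij} = sum_k a_{ijk} y_k
q = b @ y - 2.0 * (v @ y) + c              # q_i for all G Gaussians at once

# Check against the straightforward per-Gaussian (scalar-style) calculation.
q_ref = np.array([(y - m) @ Si @ (y - m) for m, Si in zip(means, S)])
assert np.allclose(q, q_ref)
```

The two products `a @ y` (4000 × 20 times 20) and `v @ y` (200 × 20 times 20) are large enough for vector hardware to deliver the performance gain the disclosure describes, whereas the per-Gaussian loop in `q_ref` works only with 20 × 20 operands.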