
Using Information Entropy to Select Learning Words

IP.com Disclosure Number: IPCOM000107693D
Original Publication Date: 1992-Mar-01
Included in the Prior Art Database: 2005-Mar-22
Document File: 2 page(s) / 56K

Publishing Venue

IBM

Related People

Nishimura, M: AUTHOR

Abstract

The speaker adaptation efficiency of a speech recognition system is greatly affected by the selection of learning words, but no specific ways have been proposed for selecting such words effectively. This article describes how to select learning words for speaker adaptation on the basis of information entropy.


Using Information Entropy to Select Learning Words

       The speaker adaptation efficiency of a speech recognition
system is greatly affected by the selection of learning words, but no
specific ways have been proposed for selecting such words
effectively.  This article describes how to select learning words for
speaker adaptation on the basis of information entropy.

      We previously proposed a speaker adaptation method for a
fenonic Markov-model-based speech recognizer (1).  In this method, a
mapping between speakers is estimated by matching the label sequences
of the fenonic word baseforms against the utterances of the learning
words spoken by the speaker to be adapted to.  Consequently, the
learning words should be selected so that they include as many
different labels (fenones) as possible.  Since the label sequences of
the learning utterances cannot be predicted in advance, the learning
words are selected on the basis of the fenone (label) sequences of
the fenonic word baseforms.  Information entropy is used to select
efficiently a set of learning words that covers many different
fenones, as sketched below.
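      As an illustration of this selection criterion, the following
minimal sketch shows one plausible greedy procedure: it repeatedly
picks the candidate word whose baseform fenone sequence most
increases the entropy of the accumulated fenone distribution.  The
candidate data, the helper names, and the greedy strategy itself are
assumptions made for illustration; the article does not specify the
search procedure.

from collections import Counter
from math import log2

def fenone_entropy(counts):
    # Shannon entropy (in bits) of the fenone distribution given by occurrence counts.
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return -sum((c / total) * log2(c / total) for c in counts.values() if c > 0)

def select_learning_words(baseforms, num_words):
    # baseforms: hypothetical dict mapping each candidate word to the fenone (label)
    #            sequence of its fenonic word baseform.
    # num_words: number of learning words to select.
    selected, counts = [], Counter()
    remaining = dict(baseforms)
    for _ in range(min(num_words, len(remaining))):
        # Pick the word that maximizes the entropy of the pooled fenone counts.
        best = max(remaining,
                   key=lambda w: fenone_entropy(counts + Counter(remaining[w])))
        counts += Counter(remaining.pop(best))
        selected.append(best)
    return selected

# Hypothetical candidate vocabulary with made-up fenone sequences.
candidates = {
    "alpha": ["f1", "f2", "f3"],
    "beta":  ["f1", "f1", "f2"],
    "gamma": ["f4", "f5", "f6"],
}
print(select_learning_words(candidates, 2))   # -> ['alpha', 'gamma']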

      Each fenone consists of two kinds of sub-fenones, namely the
static-feature fenone Ms and the dynamic-feature fenone Md, which are
independent by definition (2).  Given a set of learning words W and
the conditional probabilities of the fenones, the information entropy
H of an information source X is calculated by the following formula:

                      ...
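
      The exact formula is not preserved in this extract.  For
orientation only, a standard Shannon-entropy expression consistent
with the description above (an assumption, not necessarily the
article's formula) is

      H(X) = - \sum_i P(x_i \mid W) \log P(x_i \mid W)

where x_i ranges over the fenones and P(x_i \mid W) denotes the
conditional probability of fenone x_i given the learning-word set W.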