Syllabic Speech Recognition for Real-time Phonetic Subtitling for the Deaf
Original Publication Date: 2000-May-01
Included in the Prior Art Database: 2003-Jun-18
Disclosed is a method for using an off-the-shelf speech recognition system for real-time phonetic subtitling, as an aid to other methods of speech perception (lip reading, residual hearing) for deaf people. The novelty of this approach is to reuse as much of a commercial speech recognition system as possible, instead of developing an ad hoc system specific to this problem. The modifications pertain to recognizing syllables instead of words, which enables real-time decoding of unlimited-vocabulary speech.

1 Background

Many deaf (or severely hearing-impaired) people rely heavily on lip-reading. However, many sounds look alike on the lips (e.g. /p/, /b/ and /m/). It is estimated that lip-reading conveys only about 30% of the speech information. Deaf people have to draw on other knowledge (such as contextual or semantic cues) to distinguish words that are visual look-alikes, like "party", "Marty", "bar tea", etc. This makes lip-reading a tiring and somewhat inefficient exercise. Some manual methods practiced by the speaker (like Cued Speech [Cornett, 1967]) can complement the information conveyed on the lips, and have proven very successful. Yet they require training on the part of the people wanting to address deaf people, thereby limiting the circle of people able to communicate with them.
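The visual ambiguity described above can be sketched in a few lines of Python. Phonemes that share the same mouth shape collapse into a single visible class (a "viseme"), so words that sound different become indistinguishable on the lips. The phoneme-to-viseme grouping and the phoneme transcriptions below are simplified assumptions for illustration, not a full linguistic inventory:

```python
# Simplified, hypothetical phoneme-to-viseme mapping.
# The bilabials /p/, /b/ and /m/ look identical on the lips,
# so they all map to the same viseme class "B1".
VISEME = {
    "p": "B1", "b": "B1", "m": "B1",
    "aa": "V1", "iy": "V2",
    "r": "R", "t": "T",
}

def visemes(phonemes):
    """Map a phoneme sequence to the viseme sequence a lip-reader sees."""
    return [VISEME[p] for p in phonemes]

# Rough phoneme transcriptions of the look-alike words from the text.
party  = ["p", "aa", "r", "t", "iy"]
marty  = ["m", "aa", "r", "t", "iy"]
bartea = ["b", "aa", "r", "t", "iy"]

# All three words yield the same viseme sequence: the lip-reader
# must rely on context to tell them apart.
assert visemes(party) == visemes(marty) == visemes(bartea)
```

This is precisely the ambiguity that a phonetic subtitle can resolve: the subtitle distinguishes /p/, /b/ and /m/ explicitly, while the lips alone cannot.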