
KANA-KANJI Segmentation Utilization for Text to Speech

IP.com Disclosure Number: IPCOM000116512D
Original Publication Date: 1995-Sep-01
Included in the Prior Art Database: 2005-Mar-30
Document File: 2 page(s) / 38K

Publishing Venue

IBM

Related People

Baba, M: AUTHOR [+3]

Abstract

Disclosed is a mechanism that lets voice synthesis programs skip dividing a given sentence into segments when the caller has already segmented it.



      Voice synthesis programs are normally implemented to first
divide the input sentences into segments, since Japanese sentences
usually consist of multiple segments containing Kanji and/or Kana.
In certain cases, however, the programs that call the voice synthesis
program have already completed the same segmentation by the time they
invoke voice synthesis; the Kana-to-Kanji conversion function is one
such case.  When such callers provide the voice synthesis program
with segments rather than whole sentences, the segmentation step can
be eliminated.
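The idea above can be sketched as follows. This is a minimal illustration, not the disclosed implementation; all function and parameter names (segment, synthesize, pre_segmented) are hypothetical, and the "|"-delimited splitting merely stands in for real morphological analysis.

```python
# Hypothetical sketch: a TTS front end that reuses caller-supplied
# segments instead of re-analyzing the sentence. Names are illustrative.

def segment(sentence):
    # Stand-in for the segmentation analysis a Japanese TTS front end
    # would perform; a "|"-delimited split is used for illustration only.
    return sentence.split("|")

def synthesize(text, pre_segmented=False):
    # If the caller (e.g., a Kana-Kanji conversion program) has already
    # segmented the text, reuse its segments and skip the analysis.
    segments = text if pre_segmented else segment(text)
    # Downstream phonetic processing would consume the segments here.
    return segments

# Caller passes a raw sentence: the TTS front end segments it itself.
synthesize("今日は|良い|天気です")
# Caller passes segments (e.g., KKC output): segmentation is skipped.
synthesize(["今日は", "良い", "天気です"], pre_segmented=True)
```

Either path yields the same segment list; the second simply avoids repeating work the caller has already done.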

      To make the voice synthesis program skip segmentation analysis
when it is invoked from the KANA-KANJI Conversion (KKC) program,
there are two implementation methods as follows:
  1.  Set the mode statically at the initial setting.
  2.  Provide a flag in the Application Program Interface (API)
       which activates the voice synthesis process, and set/reset
       the flag dynamically each time the KANA-KANJI conversion
       program calls the API.
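The two methods above can be contrasted in a short sketch. This is an assumed interface for illustration; the class, constructor parameter, and flag names are hypothetical, and whitespace splitting stands in for real segmentation.

```python
# Hypothetical sketch of the two implementation methods. All names
# (VoiceSynthesizer, skip_segmentation, pre_segmented) are illustrative.

class VoiceSynthesizer:
    def __init__(self, skip_segmentation=False):
        # Method 1: the mode is fixed statically at the initial setting.
        self.static_skip = skip_segmentation

    def speak(self, text, pre_segmented=None):
        # Method 2: a per-call flag in the API; the KKC program can
        # set/reset it on each invocation, overriding the static mode.
        skip = self.static_skip if pre_segmented is None else pre_segmented
        # Whitespace splitting stands in for segmentation analysis.
        segments = text if skip else text.split()
        # Downstream phonetic processing would consume the segments here.
        return segments
```

The static mode suits a synthesizer dedicated to one caller; the per-call flag lets a shared synthesizer serve both pre-segmenting callers (such as a KKC program) and callers that pass raw sentences.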

      This method has two merits:
  1.  It saves the time the voice synthesis program would consume
       analyzing the segmentation of the sentence wh...