
Speaker's emotion rich scripts

Disclosure Number: IPCOM000246961D
Publication Date: 2016-Jul-19
Document File: 4 page(s) / 65K

Publishing Venue

The Prior Art Database


Disclosed is a system to simulate a speaker's emotion from a text-based script. Applying emoji or Virtual Reality (VR) techniques, the system can visualize the emotion and present vivid communication to the receivers.




When talking with friends on the phone, you can infer their emotion, feeling, and thinking from their voice: for example, you know your mom is angry because her tone is higher, and you know your best friend is sad from his or her low voice. Inferring feeling and emotion from voice and tone is a common part of human communication. However, current voice-to-text input methods are not this smart.

Voice-to-text is a popular technology, especially since smart glasses and VR devices came to market, and voice recognition technology has made significant progress in recent years. However, current voice recognition systems cannot capture the user's emotion, tone, or feeling from the voice when converting speech to text. This is a major drawback in Instant Messenger software, because the human voice carries feeling and emotion.

Therefore, we propose a method to detect the user's emotion, tone, and feeling from speech and to present it in the UI through text font size, style, and marks. This method can also help text-to-speech technology sound more human-like, especially for robots.
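The disclosure does not specify how emotion is detected from speech. As a minimal sketch, assuming two common acoustic features (mean pitch and mean energy, compared against a per-user baseline), a simple heuristic classifier might look like the following; the function name, thresholds, and emotion labels are all illustrative assumptions, not part of the original disclosure:

```python
def classify_emotion(mean_pitch_hz, mean_energy_db,
                     baseline_pitch_hz=150.0, baseline_energy_db=-20.0):
    """Heuristic emotion guess from two acoustic features.

    Assumption: higher pitch and louder speech suggest anger or
    excitement; lower pitch and quieter speech suggest sadness.
    Thresholds are illustrative, not tuned values.
    """
    pitch_ratio = mean_pitch_hz / baseline_pitch_hz
    loud = mean_energy_db > baseline_energy_db + 6   # roughly 2x louder
    quiet = mean_energy_db < baseline_energy_db - 6  # roughly half as loud

    if pitch_ratio > 1.3 and loud:
        return "angry"
    if pitch_ratio > 1.2:
        return "excited"
    if pitch_ratio < 0.85 and quiet:
        return "sad"
    return "neutral"
```

A real system would replace this heuristic with a trained classifier over richer features (prosody, speaking rate, spectral shape), but the input/output contract would be similar: acoustic features in, an emotion tag out.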

With the proposed voice input analysis, the input content becomes more vivid: it can be displayed with different font sizes, styles, and marks, or accompanied by a simulation of the original voice.

1. The speaker's emotional expression can be described in the content.
2. Emotional content can be presented via different media.
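Once an emotion tag is attached to a transcribed utterance, rendering it in the UI amounts to mapping the tag to display attributes (font size, weight, and trailing marks, as the disclosure suggests). A minimal sketch, assuming an HTML-based messenger UI and an illustrative style table of our own choosing:

```python
# Illustrative mapping from emotion tag to display style; the
# specific sizes, weights, and marks are assumptions, not values
# taken from the disclosure.
EMOTION_STYLE = {
    "angry":   {"size": "x-large", "weight": "bold",   "mark": "!!"},
    "excited": {"size": "large",   "weight": "bold",   "mark": "!"},
    "sad":     {"size": "small",   "weight": "normal", "mark": "..."},
    "neutral": {"size": "medium",  "weight": "normal", "mark": ""},
}

def render_emotional_text(text, emotion):
    """Wrap transcribed text in an HTML span styled for the emotion."""
    style = EMOTION_STYLE.get(emotion, EMOTION_STYLE["neutral"])
    return ('<span style="font-size:{size};font-weight:{weight}">'
            '{text}{mark}</span>').format(text=text, **style)
```

For example, `render_emotional_text("I am fine", "sad")` would produce a small-font span ending in "...", so the receiver sees the sender's tone as well as the words.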

System Diagram:

Fig. 1

Sample tags:


