
Gesture-Based Mood And Emotion Indicators

IP.com Disclosure Number: IPCOM000237941D
Publication Date: 2014-Jul-23
Document File: 2 page(s) / 23K

Publishing Venue

The IP.com Prior Art Database

Abstract

Disclosed is a system that uses a gesture-based Software Development Kit (SDK) and webcam to analyze the user's expression or action in real time and then translate it to a textual or image-based representation or mood identifier. That mood identifier is then shared via the active method of online communication (e.g., chat, webcam, video stream, etc.).



The system disclosed herein uses a gesture-based Software Development Kit (SDK) and webcam to analyze a user's expression or action in real time and then translate it to a textual or image-based representation or mood identifier.

The use of emoticons and acronyms to convey mood, emotion, or actions the user is performing (such as laughing) is widespread. Acronyms also act as a form of shorthand, which reduces the need to type commonly used phrases. Reducing the amount of typing necessary is desirable, as evidenced by the common use of shorthand, acronyms, etc.

Webcams can be leveraged to enhance text interchanges and reduce typing effort.

According to a preferred embodiment of the present invention, the disclosed system is implemented as follows (a code sketch appears after this list):

1. User identifies gestures and facial expressions to associate with any one or more of the following:

   A. Text

   B. Images

   C. Emotions

2. User stores these associations using the SDK

3. User identifies a particular hotkey or keystroke sequence that launches the code for analysis of the current image captured by a webcam

4. System compares the image analysis to the stored gestures and facial expressions

5. System identifies the current user context:

   A. Typing in a document, email, or chat

   B. Viewing a movie, show, or other media

6. System identifies the appropriate stored gesture or expression and looks up what to insert

7. Depending on the context identified in Step 5, the system:

   A. Inserts text or an image into the active document at the cursor point

   B. Inserts an emotion tag into the video stream at that point
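
The following is a minimal sketch of this flow in Python. The GestureBinding and MoodIndicatorSystem names, the context strings, and the stubbed _analyze_frame routine are hypothetical stand-ins for the gesture SDK and host application; they are illustrative assumptions, not part of the original disclosure.

    from dataclasses import dataclass

    @dataclass
    class GestureBinding:
        """Association stored by the user in Steps 1-2."""
        gesture_id: str             # identifier produced by gesture analysis
        text: str | None = None    # Step 1A: text to insert
        image: str | None = None   # Step 1B: image reference to insert
        emotion: str | None = None # Step 1C: emotion tag for video streams

    class MoodIndicatorSystem:
        def __init__(self) -> None:
            self._bindings: dict[str, GestureBinding] = {}

        def register(self, binding: GestureBinding) -> None:
            """Steps 1-2: store a gesture/expression association via the SDK."""
            self._bindings[binding.gesture_id] = binding

        def on_hotkey(self, frame: bytes, context: str) -> str | None:
            """Step 3: invoked by the user's chosen hotkey.

            `frame` is the webcam image captured immediately prior to the
            keystroke; `context` (Step 5) is assumed to be detected by the
            host application.
            """
            gesture_id = self._analyze_frame(frame)    # Step 4
            binding = self._bindings.get(gesture_id)   # Step 6
            if binding is None:
                return None
            # Step 7: choose what to insert based on the Step 5 context.
            if context in ("document", "email", "chat"):
                return binding.text or binding.image   # Step 7A
            if context == "video":
                return binding.emotion                 # Step 7B
            return None

        def _analyze_frame(self, frame: bytes) -> str:
            # Placeholder: a real implementation would call the gesture
            # SDK's recognition routine and return the matched gesture ID.
            return "crossed_arms"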

The following examples provide additional details regarding the disclosed system:

Example Embodiment #1: User A may be sitting at a computer with the webcam on and the application running in the background. User A opens a chat window or an email editor. While typing, User A performs a gesture to simulate an embrace (e.g., arms crossed with hands on shoulders) and hits a predefined hotkey. The software analyzes the image captured by the webcam immediately prior to user entry of the hotkey, identifies the crossed-arm position, and looks it up to find the corr...
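
Under the same assumptions, the embrace gesture from this example could be registered and triggered through the hypothetical MoodIndicatorSystem sketched above (the gesture ID and inserted text are illustrative, not from the original disclosure):

    system = MoodIndicatorSystem()
    system.register(GestureBinding(
        gesture_id="crossed_arms",  # arms crossed with hands on shoulders
        text="*hug*",               # hypothetical text chosen by User A
        emotion="affection",        # hypothetical tag for video contexts
    ))

    # User A hits the predefined hotkey while typing in a chat window; the
    # frame captured just before the keystroke resolves to the stored gesture.
    print(system.on_hotkey(frame=b"<webcam frame>", context="chat"))  # *hug*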