
Method and Apparatus to Support Live Avatar Expression in Virtual World

IP.com Disclosure Number: IPCOM000198086D
Publication Date: 2010-Jul-26
Document File: 3 page(s) / 132K

Publishing Venue

The IP.com Prior Art Database

Abstract

Disclosed is a method and device for supporting live avatar expression in a 3D virtual world. "Live expression" means that the avatar's facial expression is updated in real time to present its controlling user's actual facial expression. The main idea is to capture the user's expression with a computer camera, then update the avatar's facial texture and mesh model in real time and distribute expression control points to all other virtual world client-side applications.





Stiff avatar expressions are one of the apparent problems in current virtual worlds. During communication between avatars/users, current virtual world platforms provide only limited motion and expression animations. If an avatar could present its user's facial expression, communication would be more immersive; however, presenting users' rich and delicate expressions is hard with current technology. This invention solves that problem, making virtual world-based human interaction more vivid, more immersive, and more similar to face-to-face communication.

Our main idea is to capture the user's expression with a computer camera, then update the avatar's facial texture and mesh model in real time and distribute expression control points to all other virtual world client-side applications.
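As a concrete illustration, below is a minimal sketch of this capture/update/distribute loop in Python. It assumes OpenCV for camera capture; the landmark detector, the avatar update call, and the peer broadcast are hypothetical placeholders, since the disclosure does not name specific libraries or protocols.

    import cv2  # assumed capture library; the disclosure does not name one

    def detect_control_points(frame):
        """Hypothetical detector of key expression control points;
        refs. [1],[2] are cited in the text for such algorithms."""
        raise NotImplementedError

    def live_expression_loop(avatar, peers):
        cam = cv2.VideoCapture(0)  # the user's computer camera
        try:
            while True:
                ok, frame = cam.read()
                if not ok:
                    break
                points = detect_control_points(frame)    # key control points
                avatar.update_expression(frame, points)  # update local texture + mesh
                for peer in peers:                       # distribute to other clients
                    peer.send_control_points(points)
        finally:
            cam.release()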

The detailed approach includes the following steps:
Step 1: Set up the standard expression
- The user builds his/her own 3D avatar head model in (x,y,z) space according to his/her head. See fig. 1.
- The user inputs his/her photo (Istd) with a normal expression; Istd is in (x,y) space. See fig. 2.
- The system generates a projected image (Gstd, in (x,y) space) with a triangle grid from the 3D head model. See fig. 3.
- Match Istd and Gstd together in (x,y) space. See fig. 4.
- Transform Istd to (u,v) space to get the standard expression texture Tstd. See fig. 5.
- Recognize the key expression control points on Istd. See fig. 6 (see refs. [1],[2] for the algorithm).
- Get the corresponding control points on the mesh model, producing the control point mapping list Lcp (a sketch of one possible mapping follows this list). See fig. 6.
- Distribute the 3D model, the standard user expression texture (Tstd), and the mapping relation (Lcp) to the other clients.
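The disclosure does not spell out how the recognized 2D control points are paired with mesh vertices to form Lcp. One plausible construction, sketched below, assumes the mesh vertices have already been projected into the same (x,y) space as Istd (figs. 3-4) and pairs each control point with its nearest projected vertex (numpy only):

    import numpy as np

    def map_control_points(landmarks_xy, projected_vertices_xy):
        # landmarks_xy: (N, 2) control points recognized on Istd;
        # projected_vertices_xy: (M, 2) mesh vertices projected to (x,y);
        # both in the shared image space after the matching step (fig. 4).
        verts = np.asarray(projected_vertices_xy, dtype=float)
        L_cp = []
        for i, p in enumerate(np.asarray(landmarks_xy, dtype=float)):
            d = np.linalg.norm(verts - p, axis=1)  # distance to each projected vertex
            L_cp.append((i, int(np.argmin(d))))    # (control point index, vertex index)
        return L_cp

Nearest-vertex pairing is only one reasonable choice; any correspondence scheme works as long as every client receives the same Lcp along with the 3D model and Tstd.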

Step 2: Update local expression
- Capture user vi...