Method to capture and use user emotions and reactions from audio/video based on real time detection

IP.com Disclosure Number: IPCOM000234839D
Original Publication Date: 2014-Feb-10
Included in the Prior Art Database: 2014-Feb-10
Document File: 1 page(s) / 10K

Publishing Venue

Lenovo

Related People

Nathan J Peterson: INVENTOR (and 5 others)

Abstract

One problem today is that audio/video is not rated based on real user reaction data; it is generally rated based on what a person chooses to say about it rather than how they actually react to it. Another issue this disclosure addresses is the inability to jump to a specific portion of audio/video that produced a large user response. For example, if users cried or laughed at a particular offset in the audio/video, others might want to know about it.


There are no methods today that take real user reactions to audio/video and tag the audio/video stream with this data, nor any methods that collect real user reactions and present this metric as a rating system for the content.

What can be done is to monitor user reactions, whether in a movie theater or via a computer camera, for specific responses using facial recognition and audio detection. During collection, several kinds of data can be captured: the duration of a given reaction; the count or percentage of people in the audience, or of PC users, who reacted; the offset in the audio/video at which these reactions took place; and so on. This data supports a couple of useful applications, described below.
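As an illustration only, the following Python sketch shows one way the collected data might be represented and summarized. The ReactionEvent structure, its field names, and the aggregate() helper are assumptions made for this example, not part of the disclosure.

    # Illustrative only: one possible representation of a reaction event and a
    # helper that summarizes events into the metrics described above.
    from dataclasses import dataclass
    from collections import defaultdict

    @dataclass
    class ReactionEvent:
        viewer_id: str      # anonymous ID for one audience member or PC user
        reaction: str       # e.g. "laugh" or "cry", from face/audio detection
        offset_s: float     # offset into the audio/video where it occurred
        duration_s: float   # how long the reaction lasted

    def aggregate(events, audience_size):
        """Total time per reaction type and the percentage of the audience
        that showed each reaction at least once."""
        total_time = defaultdict(float)
        viewers = defaultdict(set)
        for e in events:
            total_time[e.reaction] += e.duration_s
            viewers[e.reaction].add(e.viewer_id)
        return {r: {"total_seconds": total_time[r],
                    "audience_pct": 100.0 * len(viewers[r]) / audience_size}
                for r in total_time}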

One way to use this data is to create a new kind of video or audio review based on real emotional reaction data. For example, if on average people laughed 20 times during a movie, this figure could be listed as a "Real Review".
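A minimal sketch of how such a "Real Review" figure could be derived from the event records above; the real_review() helper and its parameters are hypothetical.

    # Illustrative sketch: average number of times each viewer showed a given
    # reaction, which could be surfaced as a "Real Review" figure such as
    # "people laughed 20 times on average".
    def real_review(events, audience_size, reaction="laugh"):
        matching = [e for e in events if e.reaction == reaction]
        return len(matching) / audience_size if audience_size else 0.0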

Another way to use the data is to tag the video/audio so that users can jump to specific events within the stream. For example, rather than watching a video for 10 minutes to find the funny part, the video could be tagged with a "funny event" at time offset 6:30. These tags could be shown on the video/audio progress bar as easy jump-to locations.
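One possible way to generate such jump-to tags from the same event records is sketched below; the tag_events() helper, the window size, and the audience threshold are illustrative assumptions.

    # Illustrative sketch: bucket reactions into fixed time windows and emit a
    # tag wherever enough of the audience reacted, suitable for marking the
    # progress bar. Window size and threshold are arbitrary example values.
    from collections import defaultdict

    def tag_events(events, audience_size, window_s=10.0, min_audience_pct=30.0):
        buckets = defaultdict(set)      # (reaction, window index) -> viewer IDs
        for e in events:
            buckets[(e.reaction, int(e.offset_s // window_s))].add(e.viewer_id)
        tags = []
        for (reaction, idx) in sorted(buckets, key=lambda k: k[1]):
            if 100.0 * len(buckets[(reaction, idx)]) / audience_size >= min_audience_pct:
                offset = idx * window_s
                tags.append({"label": f"{reaction} event",
                             "offset": f"{int(offset // 60)}:{int(offset % 60):02d}"})
        return tags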