Using Cognitive Computing to Create "Descriptive Video" Narrations Disclosure Number: IPCOM000250557D
Publication Date: 2017-Aug-02
Document File: 1 page(s) / 66K

Publishing Venue

The Prior Art Database

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 51% of the total text.

Using Cognitive Computing to Create "Descriptive Video" Narrations

Descriptive Video (also known as Audio Description) services provide a narration that describes the visual scene for visually impaired viewers. Producing this additional audio has traditionally required studio time to generate a script, adapt it to fit the "audio pauses" between dialog and crucial sound effects, and extensive recording time for the narration artist.

Existing cognitive computing technology has been applied to analyze a movie for emotional impact, content, and importance. In doing so, such technology has displayed capabilities similar to the skills humans use to create Descriptive Video. A method or system is needed that leverages this technology to adapt visual scenes for the visually impaired.

The novel contribution is a process and technology that uses cognitive analytics components to ingest all supporting material that accompanies a video (e.g., scripts, producer's notes) along with the video itself, in order to cognitively generate an effective and evocative narration script, with the option to synthesize the audio track with a synthetic voice.

The process is to feed an artificial intelligence/cognitive computing solution all available information about a selected movie, including the video, scripts, acting notes, commentaries, and reviews. The cognitive computing system identifies the important visual elements (e.g., actors, props, or a scene) using visual recognition technology. Addition...
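One step of the process described above — fitting generated descriptions into the pauses between dialog segments — can be sketched as follows. This is a minimal illustration only, not the disclosed implementation: the `Description` class, the gap-finding routine, the greedy placement strategy, and all timing values are hypothetical assumptions for the sketch.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Description:
    """A generated narration line and its estimated spoken duration."""
    text: str
    seconds: float

def find_gaps(dialogue: List[Tuple[float, float]], total: float,
              min_len: float = 1.0) -> List[Tuple[float, float]]:
    """Return (start, end) pauses between dialogue segments that are
    long enough to hold narration."""
    gaps, cursor = [], 0.0
    for start, end in sorted(dialogue):
        if start - cursor >= min_len:
            gaps.append((cursor, start))
        cursor = max(cursor, end)
    if total - cursor >= min_len:
        gaps.append((cursor, total))
    return gaps

def schedule(descriptions: List[Description],
             gaps: List[Tuple[float, float]]) -> List[Tuple[float, str]]:
    """Greedily place each description at the start of the first
    remaining gap it fits into, consuming that portion of the gap."""
    placed, remaining = [], list(gaps)
    for d in descriptions:
        for i, (g0, g1) in enumerate(remaining):
            if g1 - g0 >= d.seconds:
                placed.append((g0, d.text))
                remaining[i] = (g0 + d.seconds, g1)
                break
    return placed
```

For example, with dialog occupying 0–5 s and 9–15 s of a 20 s scene, `find_gaps` yields pauses at 5–9 s and 15–20 s, and `schedule` would place a 2 s description at the 5 s mark and a 1.5 s description immediately after it. A production system would derive the dialog intervals from audio analysis and the descriptions from the visual recognition stage.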