
IMPROVED AUDIO/VIDEO SYNCHRONIZATION METHOD

IP.com Disclosure Number: IPCOM000014074D
Original Publication Date: 2000-Jun-01
Included in the Prior Art Database: 2003-Jun-19
Document File: 1 page(s) / 39K

Publishing Venue

IBM

Abstract

Synchronization between Audio and Video during decode/play is defined by the MPEG Standard at the "frame" level. These are the smallest presentation units as defined by MPEG. Audio and Video frames are of different time duration and are relatively long when compared to the allowable synchronization error. This can lead to an unacceptable synchronization error between Audio and Video during playback.

Audio and Video streams are made up of fairly long duration frames that must be made to line up precisely. Adding to the difficulty, Audio and Video frames do not have the same duration in time, and the duration of an Audio frame depends on the Sample Rate, which can change.
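
As a rough illustration of the mismatch (the exact figures depend on the streams involved), the following sketch computes the nominal duration of an MPEG-1 Layer II audio frame, which carries 1152 PCM samples, against video frames at common frame rates:

    #include <stdio.h>

    /* Illustrative only: nominal frame durations for MPEG-1 Layer II audio
     * (1152 PCM samples per frame) versus video at common frame rates.   */
    int main(void)
    {
        const double sample_rates[] = { 32000.0, 44100.0, 48000.0 };  /* Hz  */
        const double frame_rates[]  = { 25.0, 29.97 };                /* fps */
        const int samples_per_frame = 1152;

        for (int i = 0; i < 3; i++)
            printf("audio frame @ %5.0f Hz : %6.2f ms\n",
                   sample_rates[i],
                   1000.0 * samples_per_frame / sample_rates[i]);

        for (int i = 0; i < 2; i++)
            printf("video frame @ %5.2f fps: %6.2f ms\n",
                   frame_rates[i], 1000.0 / frame_rates[i]);

        return 0;
    }

At 44.1 kHz an audio frame lasts about 26 ms while a 25 fps video frame lasts 40 ms, so neither period divides the other evenly and the two frame grids drift in and out of alignment.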

A solution to the problem of synchronization is to change the way that Audio frames are treated by the Audio decoder. During synchronization, MPEG treats frames as "atomic" units. That is, they are the smallest unit of Audio or Video playback. Each Presentation Time Stamp (PTS) refers only to a frame and nothing smaller. Each frame is regarded as being of a pre-determined time duration.
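
A minimal sketch of what frame-atomic synchronization looks like in a decoder follows; the names and the "drop or repeat a whole frame" policy are assumptions for illustration, not part of the disclosure:

    /* Hypothetical frame-level sync check: corrections can only be made
     * in whole-frame steps, so the residual error can approach one full
     * frame duration (roughly 24-36 ms of audio).                       */
    typedef struct {
        double pts_ms;        /* presentation time stamp of this frame  */
        double duration_ms;   /* nominal, pre-determined frame duration */
    } frame_t;

    /* Returns <0 to drop the frame, >0 to repeat it, 0 to play it as-is. */
    int frame_level_sync(const frame_t *f, double clock_ms, double threshold_ms)
    {
        double error_ms = f->pts_ms - clock_ms;   /* + means frame is early */

        if (error_ms < -threshold_ms)
            return -1;    /* frame is late: drop it entirely      */
        if (error_ms >  threshold_ms)
            return  1;    /* frame is early: repeat or stall      */
        return 0;         /* within tolerance: play normally      */
    }

Because the smallest possible correction is a whole frame, the error remaining after a correction can still be a large fraction of a frame period.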

Audio synchronization is defined by MPEG in the same way, but this is merely to allow MPEG to treat Audio in a similar manner to Video. Once Audio decode is complete, there is no reason that a frame cannot be "expanded" or "shrunk" to adjust synchronization. There is nothing truly "atomic" about an Audio frame after decode. The "atomic" level is the PCM Sample. A number of PCM Samples are grouped together to create the Audio frame.
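
Once a frame has been decoded to PCM, its playback duration can be trimmed or stretched one sample at a time. A minimal sketch of that idea follows, assuming mono PCM and a simple repeat-last-sample / drop-from-the-end policy; the function name, parameters, and policy are illustrative and not taken from the disclosure:

    #include <stddef.h>

    /* Hypothetical sample-level adjustment: correct a measured sync error
     * by repeating or dropping PCM samples in a decoded audio frame.
     * Positive error_ms means the audio should be delayed (lengthen the
     * frame); negative means it should catch up (shorten the frame).  The
     * caller must supply a buffer with room to grow by max_adjust samples. */
    void adjust_frame_duration(short *pcm, size_t *nsamples,
                               double error_ms, double sample_rate,
                               long max_adjust)
    {
        /* Convert the timing error into a whole number of samples. */
        long adjust = (long)(error_ms * sample_rate / 1000.0);

        /* Clamp so a single frame is never stretched or shrunk too far. */
        if (adjust >  max_adjust) adjust =  max_adjust;
        if (adjust < -max_adjust) adjust = -max_adjust;

        if (adjust > 0 && *nsamples > 0) {
            /* Lengthen: repeat the last sample `adjust` times. */
            for (long i = 0; i < adjust; i++)
                pcm[*nsamples + (size_t)i] = pcm[*nsamples - 1];
            *nsamples += (size_t)adjust;
        } else if (adjust < 0 && (size_t)(-adjust) < *nsamples) {
            /* Shorten: drop samples from the end of the frame. */
            *nsamples -= (size_t)(-adjust);
        }
    }

In practice such a correction would presumably be spread over many frames, a few samples at a time, so it remains inaudible; the granularity of control becomes one sample period (about 21 microseconds at 48 kHz) rather than one frame (about 24 ms).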

The synchronization solution is to adjust the time duration of the Audio frame by dealing with the actual "atomic" level, the Sample, which is the lowest level of data. By adding o...