IMPROVED AUDIO/VIDEO SYNCHRONIZATION METHOD
Original Publication Date: 2000-Jun-01
Included in the Prior Art Database: 2003-Jun-19
Synchronization between Audio and Video during decode/playback is defined by the MPEG Standard at the "frame" level; frames are the smallest presentation units defined by MPEG. Audio and Video frames differ in time duration, and both are long relative to the allowable synchronization error, which can lead to unacceptable synchronization error between Audio and Video during playback. Adding to the difficulty, the duration of an Audio frame depends on the Sample Rate, which can change.

A solution to the synchronization problem is to change the way that Audio frames are treated by the Audio decoder. During synchronization, MPEG treats frames as "atomic" units, that is, the smallest units of Audio or Video playback. Each Presentation Time Stamp (PTS) refers only to a frame and nothing smaller, and each frame is regarded as having a pre-determined time duration. Audio synchronization is defined by MPEG in the same way, but this is merely to allow MPEG to treat Audio in a similar manner to Video. Once Audio decode is complete, there is no reason that a frame cannot be "expanded" or "shrunk" to adjust synchronization. There is nothing truly "atomic" about an Audio frame after decode: the atomic level is the PCM Sample, a number of which are grouped together to create the Audio frame.
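The sample-level adjustment described above can be sketched in code. The following is a minimal illustration, not the disclosed implementation: it assumes a decoded mono PCM frame held as a list of integer samples, and expands or shrinks the frame to a target length by duplicating or dropping samples via nearest-neighbor index mapping. (For scale: an MPEG-1 Layer II audio frame of 1152 samples at 44.1 kHz lasts about 26.12 ms, while a video frame at 29.97 fps lasts about 33.37 ms, so frame boundaries never line up exactly and sub-frame adjustment is needed.)

```python
def resample_nearest(samples, new_len):
    """Expand or shrink a decoded PCM frame to new_len samples.

    Nearest-neighbor index mapping: when new_len > len(samples),
    some samples are duplicated; when new_len < len(samples),
    some samples are dropped, spread evenly across the frame.
    Hypothetical helper for illustration only.
    """
    n = len(samples)
    if new_len <= 0:
        return []
    return [samples[min(n - 1, (i * n) // new_len)] for i in range(new_len)]


def adjust_frame(samples, delta_samples):
    """Shift audio timing by delta_samples without touching the PTS.

    delta_samples > 0 stretches the frame (audio plays later content
    slightly slower); delta_samples < 0 compresses it.
    """
    return resample_nearest(samples, len(samples) + delta_samples)


if __name__ == "__main__":
    frame = list(range(1152))          # one MPEG-1 Layer II frame's worth of samples
    longer = adjust_frame(frame, 10)   # expand by 10 samples (~0.23 ms at 44.1 kHz)
    shorter = adjust_frame(frame, -10) # shrink by 10 samples
    print(len(longer), len(shorter))
```

A real decoder would likely apply such corrections only a few samples at a time, or use interpolation rather than raw duplication/dropping, to keep the adjustment inaudible; the point here is only that the post-decode frame is divisible down to individual PCM samples.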