
Parallel Multicast Fast Zapping

IP.com Disclosure Number: IPCOM000171768D
Original Publication Date: 2008-Jul-10
Included in the Prior Art Database: 2008-Jul-10
Document File: 7 page(s) / 320K

Publishing Venue

Siemens

Related People

Juergen Carstens: CONTACT

Abstract

Reducing channel zapping time is an important issue in current video network technology. Delays stem mainly from the waiting time required to receive the first video I-frame (intra-coded frame) and from buffering a few segments of the video stream to cope with network latency and jitter; as a result, zapping times typically range from 300 to 600 ms. Current solutions address this problem by adding unicast channels that rapidly deliver the content to the end user. A second option is to add further multicast channels carrying the same information, but with a phase delay. In the following, an idea is proposed that is based on a parallel multicast channel for each broadcast content channel, carrying repeated pieces of information. By sending a burst of video data at a high transmission rate, the waiting time for the first video I-frame can be reduced significantly, as can the video stream buffering at the STB (set-top box) used to compensate for network jitter. Figure 1 illustrates the new concept. The parallel multicast channel in this figure requires a transmission rate three times higher; however, the maximum waiting time for the first I-frame and/or for video buffering is reduced by a factor of three. ΔT represents the time between two consecutive I-frames on the broadcast channel. Typically, the frames following each I-frame are highly compressed during encoding. In order to start displaying a channel, the application has to wait for the first available I-frame.

This text was extracted from a PDF file.
At least one non-text object (such as an image or picture) has been suppressed.
This is the abbreviated version, containing approximately 40% of the total text.



Idea: Artur Miguel do Amaral Arsenio, PT-Lisbon

Reducing channel zapping time is an important issue in current video network technology. Delays stem mainly from the waiting time required to receive the first video I-frame (intra-coded frame) and from buffering a few segments of the video stream to cope with network latency and jitter; as a result, zapping times typically range from 300 to 600 ms.

Current solutions address this problem by adding unicast channels that rapidly deliver the content to the end user. A second option is to add further multicast channels carrying the same information, but with a phase delay.

In the following, an idea is proposed that is based on a parallel multicast channel for each broadcast content channel, carrying repeated pieces of information. By sending a burst of video data at a high transmission rate, the waiting time for the first video I-frame can be reduced significantly, as can the video stream buffering at the STB (set-top box) used to compensate for network jitter. Figure 1 illustrates the new concept. The parallel multicast channel in this figure requires a transmission rate three times higher; however, the maximum waiting time for the first I-frame and/or for video buffering is reduced by a factor of three. ΔT represents the time between two consecutive I-frames on the broadcast channel. Typically, the frames following each I-frame are highly compressed during encoding. In order to start displaying a channel, the application has to wait for the first available I-frame.
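The rate/latency trade-off described above can be sketched with a small worked calculation. This is a minimal illustration, not from the disclosure: the function name, the millisecond units, and the 600 ms example value are assumptions chosen to match the zapping-time range mentioned in the text.

```python
# Hypothetical sketch of the worst-case I-frame waiting time.
# Assumption (not from the disclosure): the parallel multicast channel
# replays each inter-I-frame interval at m times the broadcast rate,
# so the worst-case wait shrinks by a factor of m.

def worst_case_iframe_wait(delta_t_ms, m):
    """delta_t_ms: ΔT, the time between consecutive I-frames on the
    broadcast channel, in milliseconds.
    m: rate multiplier of the parallel multicast channel (m=1 means
    broadcast channel only).
    Returns the maximum time the STB may wait for the first I-frame."""
    return delta_t_ms / m

# Broadcast channel alone: the STB may wait a full ΔT for an I-frame.
broadcast_only = worst_case_iframe_wait(600, 1)   # up to 600 ms
# With a 3x parallel channel, as in Figure 1, the wait drops threefold.
with_parallel = worst_case_iframe_wait(600, 3)    # up to 200 ms
```

The same factor-of-m reduction applies to filling the de-jitter buffer, since the parallel channel delivers the buffered interval m times faster than real time.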

Figure 2 shows an application to reduce video de-jitter buffering time. The parallel multicast channel, seen in the quadrate of Figure 2, requires a transmission rate which is m times higher. It contains a copy of past m-blocks information's (video from t-m-1 to t-1) for the original broadcasted video (from t- 1 to t). Two alternative approaches can be realized. In the first alternative a parallel buffering at the STB will be arranged. Before the original video stream is inserted into the buffer, the parallel multicast channel has to send, for every m blocks, the buffer size required for these blocks. So the original video is buffered and starts from a parallel buffering position. An alternative is to include an additional block containing a copy of the original video from t-1 to t into the parallel multicast channel. This insertion may be at the end of the m-blocks. The inserting of the data from the original broadcast into the buffer can be started after the de-jitter buffer is filled by data from the parallel channel. This type of buffering is called sequential buffering. Figure 3 illustrates an application to reduce the waiting time for the first video I-frame. The parallel multicast channel, which is seen in the figur...