DCT Interpolation for Motion Video

IP.com Disclosure Number: IPCOM000119792D
Original Publication Date: 1991-Feb-01
Included in the Prior Art Database: 2005-Apr-02
Document File: 1 page(s) / 59K

Publishing Venue

IBM

Related People

Feig, E: AUTHOR [+2]

Abstract

A DCT-based interpolation is disclosed for motion video applications. It is MPEG compatible, that is, in keeping with proposed standards of the Motion Picture Experts Group of the International Standards Organization, and extensible to other similar environments.

This text was extracted from an ASCII text file.
This is the abbreviated version, containing approximately 60% of the total text.

DCT Interpolation for Motion Video

      A very popular method for motion video compression is to
transmit a scene, frequently subsampled two-to-one in each of two
spatial dimensions, together with motion vectors for predicting
ensuing scenes. One may predict many subsequent scenes; an MPEG
proposal, for example, calls for predicting 4 scenes after each
transmitted scene, corresponding to the 4th, 7th, 10th and 13th
frames in the motion image, and coding the difference between the
real and predicted values (lossy coding). In decompressing, the
missing frames are interpolated back using auxiliary motion vector
information. This interpolation is frequently the bottleneck in the
computation. The present disclosure introduces a computationally
efficient way of mitigating this delay.
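
      As a purely illustrative summary of the frame pattern just
described (the indices follow the text above; the short Python
sketch itself is an assumption, not part of the disclosure):

    coded = [1, 4, 7, 10, 13]     # frame 1: subsampled scene;
                                  # 4, 7, 10, 13: predicted, residuals coded
    missing = [i for i in range(1, 14) if i not in coded]
    print(missing)                # [2, 3, 5, 6, 8, 9, 11, 12]
    # The missing frames are interpolated back at the decoder from the
    # auxiliary motion vector information; that interpolation is the
    # bottleneck this disclosure addresses.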

      The scenes are sectioned into M x N blocks; in general, these
are square blocks with M = N, but this is not necessary. The
subsampled and predicted frames indexed 1, 4, 7, 10 and 13 (in the
temporal domain) are subjected to an M x N x 5 point DCT. The output
is padded with 0's: M additional rows, N additional columns, and 11
additional units in the temporal direction. A 2M x 2N x 16 inverse
DCT is then applied to the resulting 3-d...
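
      A rough sketch of the zero-pad-and-inverse-DCT interpolation
described above is given below, assuming an orthonormal DCT (here
via scipy.fft); the function name dct_interpolate, the rescaling
step, and the 8 x 8 block size in the example are illustrative
choices, not taken from the disclosure.

    import numpy as np
    from scipy.fft import dctn, idctn

    def dct_interpolate(block, out_shape):
        # Forward DCT of the small block, zero-pad the coefficient
        # array to the target shape, inverse DCT, and rescale. With
        # the orthonormal DCT, zero-padding scales the signal by
        # sqrt(old_size / new_size) along each axis; the factor below
        # undoes that.
        coeffs = dctn(block, type=2, norm='ortho')
        padded = np.zeros(out_shape)
        padded[tuple(slice(0, s) for s in block.shape)] = coeffs
        scale = np.sqrt(np.prod(np.array(out_shape) / np.array(block.shape)))
        return idctn(padded, type=2, norm='ortho') * scale

    # Dimensions from the disclosure: an M x N x 5 stack of subsampled
    # and predicted frames is interpolated to a 2M x 2N x 16 volume.
    M = N = 8
    stack = np.random.rand(M, N, 5)
    volume = dct_interpolate(stack, (2 * M, 2 * N, 16))
    print(volume.shape)           # (16, 16, 16)

      With this convention the interpolated samples fall on a grid
that is slightly offset from the original sample positions, a detail
a practical decoder would account for.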