
Time Buffering for Parallel Graphics and Video

IP.com Disclosure Number: IPCOM000116447D
Original Publication Date: 1995-Sep-01
Included in the Prior Art Database: 2005-Mar-30
Document File: 2 page(s) / 44K

Publishing Venue

IBM

Related People

Narayanaswami, C: AUTHOR [+2]

Abstract

A "rendering scenario" is one in which geometric primitives are scan-converted into pixels which are then written into a frame buffer. A "parallel-rendering scenario" is one in which rendering of primitives is done by more than one processor. In a parallel-rendering scenario there are two simple ways of parallelization. They are: 1. Distribute primitives among processors. 2. Distribute the frame buffer among the processors.


Time Buffering for Parallel Graphics and Video

      A "rendering scenario" is one in which geometric primitives are
scan-converted into pixels which are then written into a frame
buffer.  A "parallel-rendering scenario" is one in which rendering of
primitives is done by more than one processor.  In a
parallel-rendering scenario there are two simple ways of
parallelization.  They are:
  1.  Distribute primitives among processors.
  2.  Distribute the frame buffer among the processors.
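As an illustration only, the two schemes might be sketched as follows; the round-robin and scanline-interleaved assignments are assumptions of this sketch, not prescribed by the disclosure:

```python
# Illustrative sketch of the two parallelization schemes for N
# processors.  The concrete assignments (round-robin primitives,
# interleaved scanlines) are our assumptions, not the disclosure's.
N = 4

def by_primitive(primitives):
    """Scheme 1: deal incoming primitives round-robin among N processors."""
    buckets = [[] for _ in range(N)]
    for i, prim in enumerate(primitives):
        buckets[i % N].append(prim)
    return buckets

def by_scanline(height):
    """Scheme 2: split the frame buffer, giving each processor
    every N-th scanline of the image."""
    return [[y for y in range(height) if y % N == p] for p in range(N)]
```

In the first scheme any processor may touch any pixel, so writes to shared pixels must be ordered; in the second, each processor owns a fixed region of the frame buffer and handles only the work that falls inside it.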

      "Time" buffers are used for the above purposes.  Assume in the
rest of the discussion, that there are N processors.  Now examine the
proposed buffer in greater detail.

TIME BUFFERING - The graphics primitives, such as polygons and lines,
are received for rendering in a certain sequential order.  To
maintain the semantics of the application, these commands need to be
processed in the order in which they were received.  For example,
two primitives could have the same depth value at a pixel.  Which of
the two primitives then gets rendered in the frame buffer?  According
to one possible definition of the depth-buffer test, the primitive
drawn last should be rendered in the frame buffer.
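A minimal sketch of that "last drawn wins" convention (the function name is ours), assuming smaller depth values mean nearer to the viewer:

```python
def passes_depth(incoming_z, stored_z):
    # Using <= rather than < lets a fragment whose depth EQUALS the
    # stored value pass the test, so the primitive drawn last wins ties.
    return incoming_z <= stored_z

# Two primitives with the same depth at one pixel: the later write passes.
stored_z, color = float("inf"), None
for z, c in [(0.5, "first"), (0.5, "second")]:
    if passes_depth(z, stored_z):
        stored_z, color = z, c
# color ends up as "second"
```

This in-order tie-breaking is exactly what becomes hard to preserve once fragments are produced concurrently, which motivates the time buffer below.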

      However, in a parallel-processing scenario it is often
advantageous to perform concurrent or out-of-order execution.  To
facilitate this, a new buffer, called the "time" buffer, is
proposed.  This buffer stores the time stamp for each pixel in the
frame buffer...
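The passage is truncated here, but based on the description so far, a per-pixel time stamp could let out-of-order fragment writes reproduce the sequential semantics, roughly as in this sketch (class and field names are our assumptions, not from the disclosure):

```python
class TimeBufferedFramebuffer:
    """Sketch of a frame buffer augmented with a per-pixel "time" buffer.

    Each fragment carries t, the issue order of its primitive.  A
    strictly nearer fragment always wins; on a depth tie, the fragment
    from the later-issued primitive wins.  This matches the in-order
    "drawn last" semantics even when fragments arrive out of order.
    """

    def __init__(self, width, height):
        n = width * height
        self.width = width
        self.color = [None] * n
        self.depth = [float("inf")] * n   # smaller z = nearer
        self.time = [-1] * n              # issue order of the winning primitive

    def write(self, x, y, z, color, t):
        i = y * self.width + x
        wins = z < self.depth[i] or (z == self.depth[i] and t > self.time[i])
        if wins:
            self.color[i] = color
            self.depth[i] = z
            self.time[i] = t
```

For example, if the equal-depth fragment of an earlier primitive reaches a pixel after the fragment of a later one, the late arrival is rejected because its time stamp is older, so the result is the same as if the primitives had been rendered in issue order.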