
High-Speed Algorithm for Integrated Rendering Using Adaptive Under Sampling

IP.com Disclosure Number: IPCOM000120893D
Original Publication Date: 1991-Jun-01
Included in the Prior Art Database: 2005-Apr-02
Document File: 6 page(s) / 188K

Publishing Venue

IBM

Related People

Miyazawa, T: AUTHOR

Abstract

Disclosed is an algorithm for improving the performance of a volume rendering method or an integrated rendering method by using hierarchical adaptive undersampling over a pixel plane. (The integrated rendering method described here extends a ray-tracing-based volume rendering method to handle both volume and geometric data, such as points, lines, and polygons.) The algorithm exploits area coherence, the tendency for small differences to occur among pixel intensities within a sample region of a generated image. The image plane is divided into square sample regions; rays are first cast from the corner pixels of each sample region, and the intensities of these pixels are calculated by ray-tracing.

This text was extracted from an ASCII text file.
This is the abbreviated version, containing approximately 51% of the total text.

High-Speed Algorithm for Integrated Rendering Using Adaptive Under Sampling

      Disclosed is an algorithm for improving the performance of a volume rendering method or an integrated rendering method by using hierarchical adaptive undersampling over a pixel plane. (The integrated rendering method described here extends a ray-tracing-based volume rendering method to handle both volume and geometric data, such as points, lines, and polygons.) The algorithm exploits area coherence, the tendency for small differences to occur among pixel intensities within a sample region of a generated image. The image plane is divided into square sample regions; rays are first cast from the corner pixels of each sample region, and the intensities of these pixels are calculated by ray-tracing. Then, according to the degree of similarity of the pixel intensities at the four corners of each region, the pixel intensity at the center of the region is calculated by one of two procedures: ray-tracing or interpolation. This process is repeated hierarchically until the sampling interval over the image plane reaches the pixel resolution. The similarity of the pixel intensities is determined from the ranges of variance of the accumulated color and accumulated transparency of the four pixels.

      The algorithm also employs a mechanism for expressing the shapes of polygons that might otherwise be partially or completely unvisualized (*). The intensities of pixels that include rasterized lines or points are marked initially; these pixels are never interpolated but always ray-traced. These processes guarantee that geometric data are visualized. According to our experiments, the algorithm performs three to seven times faster than the brute-force approach.
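The hierarchical subdivision described above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: it assumes a hypothetical trace(x, y) callback that ray-traces one pixel and returns a scalar intensity, it uses the spread (max minus min) of the four corner values as the similarity measure, and it omits the accumulated-color and transparency channels as well as the marking of pixels that contain rasterized lines or points.

```python
def render_adaptive(trace, size, threshold):
    """Render a (size+1) x (size+1) image; size should be a power of two.

    trace(x, y) is a caller-supplied ray-tracing function returning a
    scalar pixel intensity.  Coherent regions are filled by bilinear
    interpolation from their four corner rays; incoherent regions are
    quartered, their center ray-traced, until pixel resolution.
    """
    image = {}

    def sample(x, y):
        # Ray-trace a pixel once and cache the result.
        if (x, y) not in image:
            image[(x, y)] = trace(x, y)
        return image[(x, y)]

    def region(x0, y0, x1, y1):
        corners = [sample(x0, y0), sample(x1, y0),
                   sample(x0, y1), sample(x1, y1)]
        if x1 - x0 <= 1:                      # reached pixel resolution
            return
        if max(corners) - min(corners) < threshold:
            # Strong similarity: interpolate the interior bilinearly.
            for y in range(y0, y1 + 1):
                for x in range(x0, x1 + 1):
                    if (x, y) not in image:
                        u = (x - x0) / (x1 - x0)
                        v = (y - y0) / (y1 - y0)
                        top = corners[0] * (1 - u) + corners[1] * u
                        bot = corners[2] * (1 - u) + corners[3] * u
                        image[(x, y)] = top * (1 - v) + bot * v
        else:
            # Little similarity: ray-trace the center, recurse into quadrants.
            xm, ym = (x0 + x1) // 2, (y0 + y1) // 2
            sample(xm, ym)
            region(x0, y0, xm, ym)
            region(xm, y0, x1, ym)
            region(x0, ym, xm, y1)
            region(xm, ym, x1, y1)

    region(0, 0, size, size)
    return image
```

Over a perfectly coherent region the four corner rays suffice and every interior pixel is interpolated; where the corners disagree, subdivision continues down to single-pixel spacing, which is where the reported three- to seven-fold speedup over brute-force per-pixel ray casting comes from.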

      Fig. 1 shows an overview of the data flow in the integrated rendering algorithm. Fig. 2 shows the rendering algorithm for a cast ray. The main steps of the algorithm are as follows:
 STEP 1:  Calculate the accumulated colors (RGB components) and accumulated transparencies of the pixels in the pixel interval 'dmax' by using ray-tracing.
 STEP 2:  Examine the similarity of the pixels at the four corners of each square sample region.
      STEP 2-1:  If there is little similarity, calculate the accumulated color and transparency of the central pixel in the sample region by using ray-tracing, and proceed to STEP 3.
      STEP 2-2:  If there is a strong similarity, calculate the accumulated color and transparency of the central pixel in the sample region by using interpolation, and repeat STEP 2 for the next region.
 STEP 3:  Detect erroneously interpolated pixels and re-calculate the accumulated colors and transparencies of the pixels by u...
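The similarity test of STEP 2 can be sketched as follows. This is a hedged reading of the disclosure: it assumes that the "ranges of variance" are the spreads (max minus min) of each accumulated RGB channel and of the accumulated transparency across the four corner pixels, each checked against its own tolerance. The tolerance values in the usage below are illustrative.

```python
def corners_similar(corners, color_tol, alpha_tol):
    """corners: four (r, g, b, transparency) tuples of accumulated values.

    The four pixels are deemed similar when the spread (max - min) of
    every RGB channel stays within color_tol and the spread of the
    accumulated transparency stays within alpha_tol.
    """
    for channel in range(3):
        values = [c[channel] for c in corners]
        if max(values) - min(values) > color_tol:
            return False
    alphas = [c[3] for c in corners]
    return max(alphas) - min(alphas) <= alpha_tol


# Illustrative use: identical corners pass; a large red spread fails.
uniform = [(0.5, 0.5, 0.5, 1.0)] * 4
mixed = [(0.1, 0.5, 0.5, 1.0), (0.9, 0.5, 0.5, 1.0),
         (0.5, 0.5, 0.5, 1.0), (0.5, 0.5, 0.5, 1.0)]
```

When this test passes, the central pixel of the region is interpolated (STEP 2-2); when it fails, the central pixel is ray-traced and control proceeds to STEP 3 (STEP 2-1).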