
Region of Interest Caching for Improving the Performance of a Shapes Processing System

IP.com Disclosure Number: IPCOM000016281D
Original Publication Date: 2002-Sep-09
Included in the Prior Art Database: 2003-Jun-21
Document File: 2 page(s) / 43K

Publishing Venue

IBM

Abstract

Disclosed is a caching algorithm on the arguments of shape processing functions for VLSI designs. When a function is called more than once with the same set of shapes as its argument, its value is computed only once and taken from the cache for all subsequent calls. Because many VLSI designs are highly repetitive, this yields large runtime improvements: model-based optical proximity correction (MBOPC), which previously took from 2 days to several weeks, now runs 5 times faster or more, and even simpler functions such as 'shrink' gain 1.5x or more. Each set of shapes in the region of interest is identified by a compact tag stored in a red-black-tree dictionary, and caching is enabled only for functions on a list of time-expensive functions, so that extremely fast functions are not slowed down by cache lookups.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 52% of the total text.



Disclosed is a caching algorithm on the arguments of shape processing functions for VLSI designs. The arguments to a function are typically sets of shapes. When a function is called two or more times with the same arguments, its value is computed only once; all subsequent calls take the value from the cache. Due to the repetitive structure of many VLSI designs, this results in a significant improvement in execution time.

This invention gives a very high practical benefit. As an example, the average time for model-based optical proximity correction (MBOPC) on a design, which previously ranged from 2 days to several weeks, is now reduced by a factor of 5 or more. This brings MBOPC into the class of practically executable tasks. The improvement is completely independent of, i.e. comes on top of, any speedup obtained by improving the implementation of the MBOPC function code itself. The invention also applies to simpler (faster) functions on VLSI designs, such as the 'shrink' function, where runtime improvements start at about 1.5x.

The main idea of the invention is to capture repeating patterns of shapes in the region of interest of a shape processing function. For each such set of shapes, we generate a tag that completely identifies it. We store the tags in a red-black-tree-based dictionary, and for each new call of the function we first check whether the argument is already cached. If so, we skip the execution of the function and take its value from the cache instead. Because the tags are much more compact than the full descriptions of the sets of shapes, both the extra memory usage and the lookup time are kept to a minimum.
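A minimal sketch of this tag-based caching follows. All names, the rectangle representation of shapes, and the example function are assumptions for illustration; the disclosure stores tags in a red-black-tree dictionary, which a hash-based Python dict stands in for here.

```python
import hashlib

def shape_tag(shapes):
    """Build a compact tag that completely identifies a set of shapes.
    Shapes are assumed to be (layer, x1, y1, x2, y2) rectangle tuples;
    sorting makes the tag independent of enumeration order."""
    canonical = repr(sorted(shapes)).encode()
    return hashlib.sha1(canonical).hexdigest()

class RegionOfInterestCache:
    """Cache keyed by (function name, shape-set tag)."""
    def __init__(self):
        self._cache = {}

    def call(self, func, shapes):
        key = (func.__name__, shape_tag(shapes))
        if key not in self._cache:        # first call: compute and store
            self._cache[key] = func(shapes)
        return self._cache[key]           # repeats: served from the cache

# Hypothetical shape processing function: total rectangle area.
def total_area(shapes):
    return sum((x2 - x1) * (y2 - y1) for _, x1, y1, x2, y2 in shapes)

cache = RegionOfInterestCache()
roi = [("M1", 0, 0, 4, 2), ("M1", 5, 0, 9, 2)]
print(cache.call(total_area, roi))   # computed once: 16
print(cache.call(total_area, roi))   # identical argument: taken from cache
```

Because only the SHA-1 tag is stored, the cache key stays the same small size regardless of how many shapes the region of interest contains.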

The time needed to check whether the current argument is already cached is much less than the execution time of most of the functions. However, for extremely fast functions, such as counting the number of shapes in the argument, the execution time may actually increase. To avoid this, the names of all time-expensive functions are stored in a list. The caching mechanism turns on only for the functions from this list; otherwise the cache is ignored.
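The expensive-function list can be sketched as a gate in front of the cache. The function names and implementations below are hypothetical stand-ins, not taken from the disclosure:

```python
# Only functions whose names appear here go through the cache.
EXPENSIVE_FUNCTIONS = {"mbopc_correct", "shrink"}

class SelectiveCache:
    def __init__(self, expensive_names):
        self.expensive = expensive_names
        self.cache = {}

    def call(self, func, shapes):
        if func.__name__ not in self.expensive:
            return func(shapes)           # fast function: bypass the cache
        key = (func.__name__, tuple(sorted(shapes)))
        if key not in self.cache:
            self.cache[key] = func(shapes)
        return self.cache[key]

def count_shapes(shapes):                 # extremely fast: never cached
    return len(shapes)

def shrink(shapes):                       # stand-in for an expensive function
    return [(l, x1 + 1, y1 + 1, x2 - 1, y2 - 1)
            for l, x1, y1, x2, y2 in shapes]

sel = SelectiveCache(EXPENSIVE_FUNCTIONS)
roi = [("M1", 0, 0, 4, 4)]
print(sel.call(count_shapes, roi))   # 1, computed directly every time
print(sel.call(shrink, roi))         # cached under its key on first use
```

The gate costs one set lookup per call, which is cheaper than building a tag for an argument that would not benefit from caching anyway.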

The algorithm has the following two memory-saving features:

i) Each argument of a shape processing function consists of a basic shape and its surrounding shapes. The shape hierarchy is traversed in such a way that all arguments with the same basic shape are processed consecutively. After the algorithm leaves a basic shape, it never returns to it, so the cache is reset each time the basic shape changes. This reduces memory usage by approximately a factor of #_of_different_basic_shapes, which might be, depending on the desi...
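The cache-reset feature in (i) might be sketched as follows. The traversal order, shape representations, and function names are assumptions for illustration:

```python
class ResettingCache:
    """Cache that is emptied whenever the traversal moves on to a new basic
    shape, so memory holds entries for at most one basic shape at a time."""
    def __init__(self):
        self.current_basic = None
        self.cache = {}

    def call(self, func, basic, surrounding):
        if basic != self.current_basic:   # traversal left the old basic shape
            self.cache.clear()            # it never comes back, so reset
            self.current_basic = basic
        key = (func.__name__, tuple(sorted(surrounding)))
        if key not in self.cache:
            self.cache[key] = func(basic, surrounding)
        return self.cache[key]

def neighbour_count(basic, surrounding):  # hypothetical processing function
    return len(surrounding)

rc = ResettingCache()
rc.call(neighbour_count, "cellA", [(0, 0), (1, 1)])
print(len(rc.cache))                      # 1 entry for cellA
rc.call(neighbour_count, "cellB", [(2, 2)])
print(len(rc.cache))                      # cellA's entry was reset: 1
```

This relies on the traversal guarantee stated above: once all arguments sharing a basic shape have been processed consecutively, their cache entries can never be hit again and are safe to discard.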