
Modelling Process Variation in System-Level Static Timing Analysis

IP.com Disclosure Number: IPCOM000117292D
Original Publication Date: 1996-Jan-01
Included in the Prior Art Database: 2005-Mar-31
Document File: 6 page(s) / 271K

Publishing Venue

IBM

Related People

Fredrickson, MS: AUTHOR [+3]

Abstract

Disclosed is a method for performing static timing analysis of electronic systems that accounts for the possibility that some chips in the system will, because of manufacturing process variation, operate faster or slower than other chips relative to their nominal speeds. This method is a generalization of the method currently used for modelling on-chip process variation during chip-level static timing analysis.

This is the abbreviated version, containing approximately 19% of the total text.


The modelling of delay (process) variation during chip-level static
timing analysis can be generalized into the following two steps:
  1.  The delay models which represent circuit elements or wires must
       be able to model the full range of possible delays of those
       elements, from best-case to worst-case.
  2.  The timing analysis program must be able to use the timing
       models in such a way as to select, from the full range of
       possible delays, the appropriate sub-range of delays for each
       element, in order to model the desired amount of uncertainty
       and the correlation between the delays of different elements.
       Statistical methods and "Linear Combination of Delays" are two
       of the ways currently used in timing programs to control the
       selection of the delay sub-ranges.
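The two steps above can be sketched in code. The following is a minimal illustration (function names and the linear-interpolation scheme are assumptions for illustration, not the disclosure's implementation): a delay model carries its full best-case-to-worst-case range, and the analysis selects a sub-range of that range around a chosen process point, whose width models the desired amount of uncertainty.

```python
def lerp(best, worst, x):
    """Linearly interpolate between the best- and worst-case delay.

    x = 0.0 gives best-case, x = 1.0 gives worst-case.
    """
    return best + x * (worst - best)


def delay_subrange(best, worst, process_point, uncertainty):
    """Select a sub-range of [best, worst] centred on a process point.

    process_point: 0.0 = fully best-case process, 1.0 = fully worst-case.
    uncertainty:   width (0.0 to 1.0) of the variation modelled around
                   that point; clipped to stay within the full range.
    Returns the (fastest, slowest) delays of the selected sub-range.
    """
    lo = max(0.0, process_point - uncertainty / 2.0)
    hi = min(1.0, process_point + uncertainty / 2.0)
    return lerp(best, worst, lo), lerp(best, worst, hi)
```

For example, an element with a 1.0 ns best-case and 2.0 ns worst-case delay, analyzed near the slow end of the process (process point 0.9) with a narrow uncertainty of 0.2, is assigned the sub-range 1.8 ns to 2.0 ns; a process point of 0.5 with uncertainty 1.0 recovers the full range.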

      When a chip is built, there is a high degree of correlation
between the delays of the different circuits on the chip.  So if the
process creates a fast chip, most of the circuits on that chip will
have delays closer to best-case than the delays of circuits on a slow
chip.  However, the delays on a fast chip will not all be exactly
best-case.  There will be some variation in the delays, but they will
be correlated close to the best-case end of the spectrum.  Similarly,
for a slow chip, the circuit delays will be correlated near the
worst-case end of the range.
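This correlation can be illustrated with a small simulation (the sampling scheme and numbers are assumptions for illustration, not part of the disclosure): each chip gets a process point, and each circuit on the chip draws its delay independently but only within a narrow band around that point, so fast-chip delays cluster near best-case and slow-chip delays near worst-case.

```python
import random


def sample_circuit_delays(best, worst, chip_point, onchip_spread, n, seed=0):
    """Draw n circuit delays for one chip.

    Delays are correlated around the chip's process point (0.0 = fast,
    1.0 = slow), with a small independent on-chip variation of width
    +/- onchip_spread, clipped to the full best/worst range.
    """
    rng = random.Random(seed)
    delays = []
    for _ in range(n):
        x = chip_point + rng.uniform(-onchip_spread, onchip_spread)
        x = min(1.0, max(0.0, x))
        delays.append(best + x * (worst - best))
    return delays
```

With a 1.0-2.0 ns delay range and an on-chip spread of 0.05, a fast chip (process point 0.1) yields delays between 1.05 and 1.15 ns, while a slow chip (process point 0.9) yields delays between 1.85 and 1.95 ns: every delay on the fast chip beats every delay on the slow chip, yet neither chip's delays are all exactly at the corner.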

      When doing an analysis of the timing behavior of a "slow" chip,
it would not be accurate to use worst-case delays for all circuits.
Consider, for example, a setup test, where the data transition is
expected to reach a memory element before the clock transition
arrives.  It is possible, since the delays vary a little bit around
worst-case, that when the chip is built, the delays on the data path
could be slightly slower than usual, while the delays on the clock
path could be slightly faster than usual.  If worst-case delays were
used in the analysis for both the clock path and the data path, the
setup test results would be erroneously optimistic.  On the other
hand, if best-case delays were used to model the clock path, and
worst-case delays were used for the data path, the setup test results
would be unrealistically pessimistic because the delays on a slow
real chip will be correlated around worst-case.  So, to model the
worst possible situation for a setup test on a slow chip, the data
path could be modelled...
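The three modelling choices discussed for the setup test can be compared numerically. The path delays and the sub-range width below are illustrative assumptions, not values from the disclosure; the setup time of the memory element is ignored for simplicity.

```python
def setup_slack(clock_arrival, data_arrival):
    """Setup slack: positive means the data transition beats the
    clock transition (memory-element setup time ignored)."""
    return clock_arrival - data_arrival


def subrange(best, worst, lo, hi):
    """Sub-range of [best, worst] between fractional positions lo, hi."""
    return best + lo * (worst - best), best + hi * (worst - best)


# Hypothetical path arrival-time ranges (best-case, worst-case), in ns:
data_best, data_worst = 4.0, 8.0
clock_best, clock_worst = 9.0, 12.0

# 1. Worst-case delays on both paths: erroneously optimistic, because
#    it ignores the small variation around worst-case.
optimistic = setup_slack(clock_worst, data_worst)

# 2. Best-case clock vs. worst-case data: unrealistically pessimistic,
#    because on a slow chip both paths are correlated near worst-case.
pessimistic = setup_slack(clock_best, data_worst)

# 3. Correlated slow chip: both paths stay in the slow sub-range (here,
#    the top 10% of each range); the data path takes the slow end of
#    its sub-range and the clock path the fast end of its sub-range.
clock_lo, _ = subrange(clock_best, clock_worst, 0.9, 1.0)
_, data_hi = subrange(data_best, data_worst, 0.9, 1.0)
correlated = setup_slack(clock_lo, data_hi)
```

As the paragraph above argues, the correlated slack falls between the two extremes: tighter than the optimistic all-worst-case analysis, but not as severe as the uncorrelated best-clock/worst-data analysis.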