A process for detecting performance design problems during random functional verification

Disclosure Number: IPCOM000012860D
Original Publication Date: 2003-Jun-04
Included in the Prior Art Database: 2003-Jun-04
Document File: 4 page(s) / 57K

Traditionally, performance verification of logic systems has been a difficult manual process. It frequently fails to detect logic errors that can cause significant performance loss. This paper presents a process for detecting most of these errors during random functional verification. It consists of capturing the times of logic signals that mark the start and end of performance-critical functions. The durations of these functions are then displayed in histograms that show the distribution of function times under various conditions. A performance-related logic design error appears as an anomaly in a histogram.

This is the abbreviated version, containing approximately 53% of the total text.


Performance verification of logic systems has traditionally been done using manually generated test cases. This approach has significant drawbacks:

1. Manual performance tests are difficult to write and debug. Typically, a test requires a detailed understanding of the system being tested: the test writer needs to know how to control the input signals to generate the conditions that demonstrate the desired performance. It is particularly difficult to set up conditions that verify performance-acceleration hardware such as history-based mechanisms (speculation or prediction logic), bypass path logic, and caches.

2. Design changes or derivative designs frequently cause many performance tests to fail. Generally, manual verification uses expected values that are either constants or fall within a small range. A minor design change or a technology change can increase or reduce the actual time, which can require replacing large numbers of expected values.

3. Manual performance tests provide insufficient test coverage. Experience with one microprocessor design showed that its 130 manual tests failed to reveal three performance design problems that together caused an 11% performance loss. These design problems were detected on the hardware in the performance test lab just before the final design was released.

An experiment was conducted to test a new performance verification process. The results demonstrated that performance design errors can be detected during random functional simulation. The following is a high level flow of this process.


This process generates performance histograms of the times between selected Performance Indicator Signals during random functional simulation. These histograms can be used to verify performance by displaying timing distributions of various logic algorithms for different hardware configurations.
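The core measurement step can be sketched as follows. This is a minimal illustration, not the disclosure's implementation: the `(time, function, kind)` event-tuple format, the function name `build_histograms`, and the fixed-width binning are all assumptions chosen for clarity.

```python
from collections import defaultdict

def build_histograms(events, bin_width):
    """Pair start/end events and bin the resulting durations per function.

    `events` is an iterable of (time, function_name, kind) tuples, where
    kind is "start" or "end". The tuple format is an illustrative
    assumption, not the disclosure's actual trace format.
    """
    open_starts = {}  # function name -> pending start time
    histograms = defaultdict(lambda: defaultdict(int))
    for time, func, kind in sorted(events):
        if kind == "start":
            open_starts[func] = time
        elif kind == "end" and func in open_starts:
            duration = time - open_starts.pop(func)
            histograms[func][duration // bin_width] += 1
    return histograms

# Hypothetical trace: a cache read that normally takes ~4 cycles, plus one
# anomalous 40-cycle access that would stand out as an outlier bin.
trace = [(0, "cache_read", "start"), (4, "cache_read", "end"),
         (10, "cache_read", "start"), (14, "cache_read", "end"),
         (20, "cache_read", "start"), (60, "cache_read", "end")]
hist = build_histograms(trace, bin_width=5)
```

The anomalous duration lands in a bin far from the main cluster, which is exactly the kind of histogram anomaly the process is designed to surface.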


[This page contains 1 picture or other non-text object]

Logic signals are selected for tracing during functional simulation. These signals fall into two categories: Configuration Signals and Performance Indicator Signals. Configuration Signals need to be captured only at the start of each functional test; they are used to categorize the Performance Indicator Signal traces. Performance I...
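The categorization role of Configuration Signals can be sketched as below. The pairing of a per-test configuration tuple with that test's measured durations is an assumption for illustration; the disclosure does not specify a data layout, and `categorize_durations` is a hypothetical helper name.

```python
from collections import defaultdict

def categorize_durations(tests):
    """Bucket measured durations by the configuration captured at test start.

    `tests` is a list of (config, durations) pairs: `config` is a tuple of
    Configuration Signal values sampled once at the start of the test, and
    `durations` are the function times measured during that test.
    """
    by_config = defaultdict(list)
    for config, durations in tests:
        by_config[config].extend(durations)
    return by_config

# Hypothetical runs: two tests with the cache enabled, one with it
# disabled. Building a separate histogram per configuration keeps a slow
# path under one setting from being averaged away by the other.
runs = [(("cache=on",), [4, 4, 5]),
        (("cache=on",), [4, 6]),
        (("cache=off",), [40, 41])]
buckets = categorize_durations(runs)
```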