Test Case Evaluation Methodology

IP.com Disclosure Number: IPCOM000117802D
Original Publication Date: 1996-Jun-01
Included in the Prior Art Database: 2005-Mar-31
Document File: 4 page(s) / 191K

Publishing Venue

IBM

Related People

Beer, I: AUTHOR [+4]

Abstract

Disclosed is a programming tool suite that analyzes a hardware description language representation of logic together with simulation results and produces a detailed report measuring how thoroughly the test-cases used in simulating the hardware exercise the data-flow aspects of the design.

Test Case Evaluation Methodology

      Disclosed is a programming tool suite that analyzes a hardware
description language representation of logic together with
simulation results and produces a detailed report measuring how
thoroughly the test-cases used in simulating the hardware exercise
the data-flow aspects of the design.

      Simulation is a valuable tool for verifying the correctness of
a logic implementation before committing the design to silicon.  As
the size and scope of simulation have grown, it has become
increasingly difficult to determine the adequacy of the simulation
test-cases.  Designers cannot easily determine quantitatively how
much of their design has been simulated, or to what extent.

      The Test Case Evaluation (TCE) is a programming tool suite
that accepts a description of a hardware unit together with a set of
test-case results and produces a detailed measure of the quality of
the test-cases with respect to that hardware unit.  In particular,
the description of the hardware unit may be given in a Hardware
Description Language, the test-cases may be run during simulation of
the design, and the output measure may serve as a measure of
confidence in the correctness of the design before realizing it in
silicon.
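
      As one possible illustration of this flow, the following
sketch assumes that the HDL description has already been reduced to
a list of assignments (a target signal driven by a set of source
signals) and that each test-case result is available as a per-cycle
trace of signal values.  The names Assignment, Trace and
coverage_report are hypothetical and are not taken from the
disclosed tool.

    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class Assignment:
        """One assignment extracted from the HDL: a target driven by sources."""
        target: str
        sources: List[str]

    # A trace maps each signal name to its value (0 or 1) at every cycle.
    Trace = Dict[str, List[int]]

    def coverage_report(assignments: List[Assignment],
                        traces: List[Trace],
                        criterion: Callable[[Assignment, Trace], bool]) -> float:
        """Fraction of assignments for which at least one test-case trace
        satisfies the chosen coverage criterion."""
        if not assignments:
            return 0.0
        hit = sum(1 for a in assignments if any(criterion(a, t) for t in traces))
        return hit / len(assignments)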

      The detailed measure of quality supplied by the tool evaluates
how thoroughly the test-cases check the data-flow of the design;
this aspect is not evaluated by currently available tools.  The
computational resources and human intervention required by the tool
are acceptable even for large designs and large sets of test-cases
(Figure).

      The tool employs a new family of test coverage criteria called
local-propagation criteria.  A local-propagation occurs on an
assignment when a target signal and a source signal switch
simultaneously.  The criteria include loose (requiring only
simultaneous transitions), directional (confirming that the target
rises and falls) and strict (requiring that other source signals are
stable).
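
      The disclosure does not give the exact definitions used by
TCE, so the following sketch is only an assumed reading of the three
criteria, continuing the hypothetical Assignment and Trace
representation of the earlier sketch.

    def transitions(values):
        """Cycles at which a signal changes value (0 -> 1 or 1 -> 0)."""
        return {c for c in range(1, len(values)) if values[c] != values[c - 1]}

    def loose(a, trace):
        """Loose: the target and at least one source switch in the same cycle."""
        tgt = transitions(trace[a.target])
        return any(tgt & transitions(trace[s]) for s in a.sources)

    def directional(a, trace):
        """Directional: simultaneous switches are observed both on a rising
        and on a falling transition of the target."""
        vals = trace[a.target]
        hits = {c for s in a.sources
                for c in transitions(vals) & transitions(trace[s])}
        return any(vals[c] == 1 for c in hits) and any(vals[c] == 0 for c in hits)

    def strict(a, trace):
        """Strict: the target switches together with one source while every
        other source signal holds its value in that cycle."""
        tgt = transitions(trace[a.target])
        for s in a.sources:
            for c in tgt & transitions(trace[s]):
                if all(c not in transitions(trace[o])
                       for o in a.sources if o != s):
                    return True
        return False

      Any of these functions could be passed as the criterion
argument of the coverage_report sketch above, for example
coverage_report(assignments, traces, strict).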

      Management of a design verification process requires a means
of measuring the progress and quality of the verification.  Two
basic approaches exist for measuring verification: 1) Statistical
quality control estimates the reliability of the design by counting
design bugs as a function of accumulated simulation cycles.  2) Test
coverage indicates which areas and functionalities of the design
have been properly covered with respect to pre-defined coverage
criteria.  In a sense, the two approaches represent functional-level
verification (statistical control) and implementation-level testing
(coverage).
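
      Purely as an illustration of the difference between the two
measures (the function names below are hypothetical and not part of
the disclosure):

    def statistical_measure(bugs_found, simulation_cycles):
        """Statistical quality control: bugs found per million accumulated
        simulation cycles; a falling rate suggests increasing reliability."""
        return bugs_found / (simulation_cycles / 1_000_000.0)

    def coverage_measure(covered_instances, total_instances):
        """Test coverage: fraction of pre-defined coverage-criterion
        instances that the test-cases have exercised."""
        return covered_instances / total_instances if total_instances else 0.0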

      Coverage is also important in the different but related field
of hardware production testing.  Two relatively simple criteria have
been shown in practice to gauge production-testing adequacy well.
These are the stuck-at criteria, which demand that: 1) Each bit in
the design is changed...