
Computing Minimal Test Assemblages

IP.com Disclosure Number: IPCOM000083812D
Original Publication Date: 1975-Jul-01
Included in the Prior Art Database: 2005-Mar-01
Document File: 3 page(s) / 17K

Publishing Venue

IBM

Related People

Roth, JP: AUTHOR

Abstract

In the functioning of computer hardware, it is mandatory in a reliable system that there be an assemblage of tests with the ability to detect -- and possibly also to diagnose -- each failure that the hardware may assume. In some applications high reliability is a leading requirement, a sine qua non of performance. Furthermore, error detection followed by failure location may have to be performed in real time: in order to minimize damage, these diagnostic tests should be performed as quickly as possible.



Computing Minimal Test Assemblages


To this end, the method described here constitutes a practical means for dealing with large hardware designs. In such designs, the number of possible tests and possible failures will in general be astronomically large. Thus, the array methods, which use a row for each test and a column for each failure, would be completely out of the question.

The failures of the system under consideration will, in general, be described by an algorithm or formula capable of generating any failure in the assumed category. In general, failures organized as an explicit list would be incomputable -- too large. A typical category of failures might be all lines stuck at 1 or 0, together with any of a set of "critical" pairs of lines in danger of shorting.
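As a minimal sketch (not part of the original disclosure; the names and the Python rendering are assumptions), such a category can be expressed as a generator that produces each stuck-at and shorting failure on demand, so that the full list is never materialized:

    from itertools import product

    def stuck_at_faults(lines):
        # One failure for each line stuck at 0 and stuck at 1, produced on demand.
        for line, value in product(lines, (0, 1)):
            yield ("stuck", line, value)

    def short_faults(critical_pairs):
        # One failure for each "critical" pair of lines in danger of shorting.
        for a, b in critical_pairs:
            yield ("short", a, b)

    def failure_category(lines, critical_pairs):
        # The whole assumed category of failures, generated lazily rather than listed.
        yield from stuck_at_faults(lines)
        yield from short_faults(critical_pairs)

    # Example: a tiny circuit with four named lines and one critical pair.
    for f in failure_category(["a", "b", "c", "z"], [("b", "c")]):
        print(f)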

The first step is to SELECT a failure f by a procedure such as its distance from the primary outputs. Next the D-algorithm (1) is invoked to compute a test t for failure f. (The D-algorithm guarantees, given sufficient time and space, to compute such a test if one exists.) The algorithm TESTDETECT (2) in turn computes the set of all failures detected by this test. (Tests have shown TESTDETECT to be computationally superior to SIMULATION, by at least an order of magnitude.)
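The overall flow can be pictured as the following sketch (an assumption about how the pieces fit together, with SELECT, the D-algorithm, and TESTDETECT treated as black-box functions supplied by the caller; none of this is the original code). Each constructed test is credited with every remaining failure it detects, which tends to keep the assemblage small:

    def compute_test_set(failures, select, d_algorithm, testdetect):
        # Covering loop: pick an undetected failure, construct a test for it,
        # then remove every remaining failure that test detects.
        remaining = set(failures)
        tests = []
        while remaining:
            f = select(remaining)                 # e.g., the failure farthest from the primary outputs
            t = d_algorithm(f)                    # a test for f, if one exists
            if t is None:
                remaining.discard(f)              # no test exists; set this failure aside
                continue
            detected = testdetect(t, remaining)   # set of remaining failures detected by t (includes f)
            tests.append((t, detected))
            remaining -= detected
        return tests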

Consider a test ta together with the set Ka of failures it detects. Rather than simply listing the names of these failures, e.g., in array form, encode or embed the failures, in particular their names, into an n-cube. (Later it will be shown how to perform this same operation more generally in an n-cell, wherein each variable is allowed to assume more than two values.)

The embedding method chosen here is simply to enumerate the failures as binary numbers, retaining a correspondence between each number assigned and the original name. A failure is assigned a number, viewed as a vertex of an n-cube, the first time it is encountered as being detected by some constructed test. Thereafter, whenever it is subsequently encountered (covered by some new test), this same number persists.
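This bookkeeping amounts to the following sketch (hypothetical names, assumed for illustration only): the first encounter with a failure assigns it the next binary number, and every later encounter returns the same vertex.

    class FailureEmbedding:
        # Assign each failure a persistent binary number, viewed as a vertex of an n-cube.
        def __init__(self, n_bits):
            self.n_bits = n_bits
            self.vertex_of = {}   # failure name -> binary vertex string
            self.name_of = {}     # binary vertex string -> failure name
            self.next_number = 0

        def vertex(self, failure):
            # Return the vertex for this failure, assigning a fresh number on first encounter.
            if failure not in self.vertex_of:
                v = format(self.next_number, "0{}b".format(self.n_bits))
                self.vertex_of[failure] = v
                self.name_of[v] = failure
                self.next_number += 1
            return self.vertex_of[failure]

    # Example: three failures embedded as vertices of a 3-cube.
    emb = FailureEmbedding(n_bits=3)
    print(emb.vertex(("stuck", "a", 1)))   # 000 -- first encounter assigns a new vertex
    print(emb.vertex(("stuck", "b", 0)))   # 001
    print(emb.vertex(("stuck", "a", 1)))   # 000 -- the same number persists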

Let Ka denote the complex of all failures, in their vertex representation, detected by a given test. Actually, Ka may be any subset of vertices of the totality of all "failure vertices". This complex Ka will be represented by a cover consisting of higher-dimensional cubes, all of whose vertices belong to the complex. The operations which will be performed, notably the P-operation, will tend to increase with the size...