
Determining Occurrence of Undetected Errors in Logic

IP.com Disclosure Number: IPCOM000082545D
Original Publication Date: 1974-Dec-01
Included in the Prior Art Database: 2005-Feb-28
Document File: 1 page(s) / 12K

IBM

Roth, JP: AUTHOR

Abstract

In the effort to design logics alleged to have the property that no undetected error, caused by any of a prescribed category of failures, can occur, it is crucial to be able to ascertain whether or not this allegation is true. One procedure, which has been used for small pieces of logic, is to simulate the logic's behavior for all input patterns and for all failures; this method is effective only for small logic designs. Another method, also used, is to run a periodic exhaustive testing procedure over every possible input pattern. Although this is an order of magnitude more efficient than simulation, it is still too limited for many practical problems.



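The exhaustive fault-simulation baseline described above can be sketched for a toy circuit. The circuit, fault list, and helper names below are hypothetical, chosen only to illustrate the approach; they show both why the cost (every input pattern times every failure) limits it to small designs, and how an error can go undetected when the complementary output is derived from the same signal rather than computed independently.

```python
from itertools import product

def good_circuit(a, b, c):
    # Fault-free logic: y = (a AND b) OR c, paired with a complementary
    # primary output that should always disagree with y.
    y = (a & b) | c
    return y, 1 - y

def faulty_circuit(a, b, c, fault):
    # Same logic with a single stuck-at fault injected on the AND output.
    and_out = a & b
    if fault == "and_stuck_at_0":
        and_out = 0
    elif fault == "and_stuck_at_1":
        and_out = 1
    y = and_out | c
    # The complement is derived from y, so the fault corrupts both rails.
    return y, 1 - y

def undetected_errors(faults):
    """Simulate every input pattern under every fault; report patterns where
    the output is erroneous yet the PO pair still disagrees (no alarm)."""
    missed = []
    for fault in faults:
        for a, b, c in product((0, 1), repeat=3):
            good = good_circuit(a, b, c)
            bad = faulty_circuit(a, b, c, fault)
            erroneous = bad[0] != good[0]
            detected = bad[0] == bad[1]  # checker fires when the pair agrees
            if erroneous and not detected:
                missed.append((fault, (a, b, c)))
    return missed
```

For this circuit every erroneous pattern goes unflagged, because the derived complement tracks the corrupted output; and the loop body runs 2^n times per fault, which is what confines the method to small designs.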

The method described here determines, for each potential failure, whether any test (i.e., input pattern) can cause an error which is not detected.

The reader is assumed to be familiar with the D-calculus and the D-algorithm * **. The actions are as follows:

1) Perform a D-drive from the site of the assumed failure to any single primary output (PO).

It is assumed that each primary output has associated with it a complementary primary output. With no loss of generality, it may be assumed that the two are logically complementary, so that in the absence of error they should always disagree in value.

2) Continue the D-drive to the complementary PO.

3) If these POs have opposite values - but both of them incorrect - then proceed to CONSISTENCY, a subroutine of the D-algorithm...
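The steps above can be sketched with a minimal pair-valued encoding of the D-calculus: each symbol is held as a (good-machine value, faulty-machine value) pair, so gate evaluation is two ordinary Boolean evaluations, and the condition in step 3 (opposite values on the PO pair, but both incorrect) becomes a direct comparison. The toy cone and the names below are hypothetical; this is a sketch of the bookkeeping only, not of the CONSISTENCY procedure.

```python
# D-calculus symbols as (good-machine, faulty-machine) pairs:
# 0 -> (0,0), 1 -> (1,1), D -> (1,0), D-bar -> (0,1).
V = {"0": (0, 0), "1": (1, 1), "D": (1, 0), "Dbar": (0, 1)}

def AND(x, y): return (x[0] & y[0], x[1] & y[1])
def OR(x, y):  return (x[0] | y[0], x[1] | y[1])
def NOT(x):    return (1 - x[0], 1 - x[1])

def classify(po, cpo):
    """Apply the step-3 check to a primary output and its complementary PO."""
    agrees_when_faulty = po[1] == cpo[1]   # checker fires when the pair agrees
    po_wrong = po[1] != po[0]
    cpo_wrong = cpo[1] != cpo[0]
    if po_wrong and cpo_wrong and not agrees_when_faulty:
        return "undetected error"          # opposite values, both incorrect
    if agrees_when_faulty:
        return "detected"
    return "no error"

# Steps 1-2: drive the fault symbol D from its site s through a toy cone to a
# PO (s OR a) and to the complementary PO (NOT(s) OR b), with a = b = 0.
s, a, b = V["D"], V["0"], V["0"]
po, cpo = OR(s, a), OR(NOT(s), b)
assert classify(po, cpo) == "undetected error"
```

Here the PO carries D (good 1, faulty 0) and the complementary PO carries D-bar (good 0, faulty 1): the faulty machine's outputs still disagree, so the checker stays silent even though both values are wrong - exactly the case step 3 hands to CONSISTENCY.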