
Automated Electrical Validation of Microprocessor and Shared Memory Subsystem

IP.com Disclosure Number: IPCOM000238081D
Publication Date: 2014-Jul-31
Document File: 6 page(s) / 309K

Publishing Venue

The IP.com Prior Art Database

Title

Automated Electrical Validation of Microprocessor and Shared Memory Subsystem

Abstract

A memory validation test environment is presented that exercises the underlying cache architecture of every cache in the memory subsystem hierarchy of a microprocessor system.  Tests are run from the main microprocessor(s) and use an algorithm that drives controlled data traffic on the local bus as well as the interconnect.  Because the tests execute from the main microprocessor(s), we are able to stress code paths such as the load, store, and invalidate queues, where timing constraints can make a significant difference at non-nominal physical and electrical conditions.  To increase the likelihood of exposing faults sooner and more easily, we use a mix of memory patterns as input vectors, some random and some static, the static ones being derived from performance modeling of various memory fault models.

Background

When testing the memories of a microprocessor system, which can contain multiple processors and multiple different types of memory, most traditional memory I/O approaches do not stress the electrical parameters of the system satisfactorily and therefore cannot reproduce (or push beyond the specification) the worst-case electrical numbers obtainable from automated test equipment (ATE) built-in self-test results.  Memory built-in self tests (BIST) concentrate mainly on Double Data Rate (DDR) memories.  Even where tests exist for verifying the cache, they are unlikely to be as complex and complete as what can be achieved by bringing the core into the picture so that the algorithms actually execute from the main microprocessor(s).

In our validation environment, we never actually used any I/O vectors obtainable from the Design For Test (DFT) and performance modeling teams.  We have therefore designed a memory validation test environment that exercises the underlying cache architecture of every cache in the memory subsystem hierarchy of a microprocessor system.  Tests are run from the main microprocessor(s) and use an algorithm that drives controlled data traffic on the local bus as well as the interconnect.  Because the tests execute from the main microprocessor(s), we are able to stress code paths such as the load, store, and invalidate queues, where timing constraints can make a significant difference at non-nominal physical and electrical conditions.  To increase the likelihood of exposing faults sooner and more easily, we use a mix of memory patterns as input vectors, some random and some static, the static ones being derived from performance modeling of various memory fault models.
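
As a concrete illustration of the mixed pattern pool described above, the following C sketch builds a small set of input vectors: a few static patterns of the kind commonly derived from memory fault models (solid zeros/ones, checkerboard, walking ones) combined with pseudo-random fillers.  The pool size, pattern choices, and function name are illustrative assumptions, not details taken from the disclosure.

    /*
     * Illustrative sketch only (not the disclosure's code): building a mixed
     * pool of 64-bit data patterns.  A few static patterns of the kind
     * commonly derived from memory fault models (solid zeros/ones,
     * checkerboard, walking ones) are combined with pseudo-random fillers.
     */
    #include <stdint.h>
    #include <stdlib.h>
    #include <stddef.h>

    #define NUM_PATTERNS 16

    static void build_pattern_pool(uint64_t pool[NUM_PATTERNS], unsigned seed)
    {
        size_t n = 0;

        /* Static patterns: solid zeros/ones and the checkerboard pair. */
        pool[n++] = 0x0000000000000000ULL;
        pool[n++] = 0xFFFFFFFFFFFFFFFFULL;
        pool[n++] = 0xAAAAAAAAAAAAAAAAULL;
        pool[n++] = 0x5555555555555555ULL;

        /* Walking-ones patterns exercise individual data lines. */
        for (unsigned bit = 0; bit < 4; bit++)
            pool[n++] = 1ULL << (bit * 16);

        /* Pseudo-random patterns fill the remaining slots. */
        srand(seed);
        while (n < NUM_PATTERNS)
            pool[n++] = ((uint64_t)rand() << 32) ^ (uint64_t)rand();
    }

Such a pool could then be written to the selected cache lines and read back for comparison at the non-nominal physical and electrical conditions of interest.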

The algorithm manipulates the initial patterns to generate various kinds of interim patterns, exposing the underlying system to a new set of input vectors that may produce new and interesting scenarios. All of the I/O is performed on cache lines only. The cache lines are carefully selected so that they all map into the same set of the cach...
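
To make the cache-line selection step concrete, the following C sketch, under assumed parameters (64-byte lines, 1024 sets) and a simple set-indexing scheme, picks line-aligned addresses from a buffer that all fall into the same cache set; the constants and function name are illustrative assumptions rather than details from the disclosure.

    /*
     * Minimal sketch (assumptions, not the disclosure's code): selecting
     * cache-line-aligned addresses that all map to the same cache set.
     * For a cache with LINE_SIZE-byte lines and NUM_SETS sets, the set
     * index repeats every LINE_SIZE * NUM_SETS bytes, so stepping through
     * a buffer at that stride yields lines that compete for one set's ways.
     * This assumes the set index comes directly from these address bits
     * (e.g., a virtually indexed cache or a 1:1 address mapping).
     */
    #include <stdint.h>
    #include <stddef.h>

    #define LINE_SIZE  64u            /* assumed cache-line size in bytes */
    #define NUM_SETS   1024u          /* assumed number of sets           */
    #define SET_STRIDE (LINE_SIZE * NUM_SETS)

    /* Collect up to max_lines pointers into buf that alias to one set. */
    static size_t pick_aliasing_lines(uint8_t *buf, size_t buf_len,
                                      uint8_t *lines[], size_t max_lines)
    {
        /* Round up to the next cache-line boundary inside the buffer. */
        uintptr_t addr = ((uintptr_t)buf + LINE_SIZE - 1) &
                         ~(uintptr_t)(LINE_SIZE - 1);
        size_t found = 0;

        while (found < max_lines &&
               addr + LINE_SIZE <= (uintptr_t)buf + buf_len) {
            lines[found++] = (uint8_t *)addr;
            addr += SET_STRIDE;       /* same set index, next alias */
        }
        return found;
    }

Touching more such aliasing lines than the set's associativity forces evictions and refills through the cache hierarchy, which is one way to drive the controlled data traffic on the local bus and the interconnect described earlier.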