Method for cache dump compression

IP.com Disclosure Number: IPCOM000033796D
Publication Date: 2004-Dec-28
Document File: 2 page(s) / 17K

Publishing Venue

The IP.com Prior Art Database

Abstract

Disclosed is a method for cache dump compression. Benefits include an improved development process, improved performance, and improved cost effectiveness.

Background

One of the limiters in acquiring the useful internal cache state for debug is the time required to extract this data from the component. As cache sizes grow, both the extraction time and the amount of trace memory required grow, which slows debug and dramatically increases the cost of the logic analyzers required by the debug and validation teams.

Conventionally, memory dumps output all available information.

 

General description

The disclosed method applies an algorithm to the dump process to reduce the amount of data that must be moved or written without reducing the information transmitted.
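
The disclosure does not name a specific compression algorithm, so the following is only a minimal illustrative sketch, in C, of one lossless scheme that fits the description: run-length encoding over fixed-size cache lines, which collapses runs of identical (often all-zero) lines that typically dominate a cache image. The 64-byte line size, function names, and record format are assumptions, not part of the disclosure.

/*
 * Hypothetical dump-side compressor: run-length encoding over 64-byte
 * cache lines. The disclosure does not specify an algorithm; this is
 * one possible lossless instance, with an assumed record format of a
 * 4-byte run count followed by one sample line.
 */
#include <stdint.h>
#include <string.h>

#define LINE_BYTES 64u

/* Write one (run count, line data) record and return its size in bytes. */
static size_t emit_run(uint8_t *out, uint32_t run, const uint8_t *line)
{
    memcpy(out, &run, sizeof run);               /* 4-byte run count */
    memcpy(out + sizeof run, line, LINE_BYTES);  /* one sample line  */
    return sizeof run + LINE_BYTES;
}

/* Compress n_lines cache lines into out; identical consecutive lines
 * (for example, all-zero lines) collapse into a single record. The
 * caller sizes out for the worst case: n_lines * (4 + LINE_BYTES). */
size_t compress_dump(const uint8_t *cache, size_t n_lines, uint8_t *out)
{
    size_t written = 0;
    uint32_t run = 1;

    for (size_t i = 1; i <= n_lines; i++) {
        const uint8_t *prev = cache + (i - 1) * LINE_BYTES;
        if (i < n_lines &&
            memcmp(cache + i * LINE_BYTES, prev, LINE_BYTES) == 0) {
            run++;                                /* extend current run */
        } else {
            written += emit_run(out + written, run, prev);
            run = 1;                              /* start a new run    */
        }
    }
    return written;                               /* compressed size    */
}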

The disclosed method consists of a standard CPU or chipset with a cache and associated debug logic, plus a functional block that compresses the data in the cache for faster write-out (dump) to a debug port or the external bus for debug purposes. The resulting information is captured by a logic analyzer or other means, and the actual cache state is reconstructed by decompression post-processing software.
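
Continuing the assumed record format above, the post-processing decompression software could rebuild the cache image from the captured stream roughly as follows; again, this is a sketch under stated assumptions rather than the disclosed implementation.

/*
 * Hypothetical post-processing decompressor for the (run count, line)
 * records produced by the sketch above; it losslessly reconstructs the
 * original cache image from a captured compressed dump.
 */
#include <stdint.h>
#include <string.h>

#define LINE_BYTES 64u

/* Expand the compressed stream (in, in_len) into cache_out, which the
 * caller sizes for the full cache. Returns the number of lines rebuilt. */
size_t decompress_dump(const uint8_t *in, size_t in_len, uint8_t *cache_out)
{
    size_t lines = 0;
    size_t pos = 0;

    while (pos + sizeof(uint32_t) + LINE_BYTES <= in_len) {
        uint32_t run;
        memcpy(&run, in + pos, sizeof run);       /* run count          */
        const uint8_t *line = in + pos + sizeof run;

        for (uint32_t r = 0; r < run; r++) {      /* replicate the line */
            memcpy(cache_out + lines * LINE_BYTES, line, LINE_BYTES);
            lines++;
        }
        pos += sizeof run + LINE_BYTES;           /* next record        */
    }
    return lines;                                 /* lines recovered    */
}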

The method can be applied to all levels of cache.

Advantages

The disclosed method provides advantages, including:

• Improved development process due to reducing the time required to extract large volumes of information from the component for use in debug

• Improved performance due to using the L2 cache rather than a general-purpose data bus

• Improved cost effectiveness due to reducing the memory requirements for logic analyzers

Detailed description

The disclosed method is cache dump compression. The method can be implemented using level 2 (L2) cache circuitry, including memory, tags, and control circuitry. A data exchange unit (DXU) transfers data between the processor core(s), the external bus, dedicated debug ports, and any other on-die agents that might require access to the L2 data. Typically, a request for data is made from an on- or off-die agent to the D...