
Processor Performance Enhancement Using a Memory Cache Scheme

IP.com Disclosure Number: IPCOM000036982D
Original Publication Date: 1989-Nov-01
Included in the Prior Art Database: 2005-Jan-29
Document File: 6 page(s) / 57K

Publishing Venue

IBM

Related People

Hoover, RD: AUTHOR [+3]

Abstract

Within a processor, obtaining a faster access to main storage usually results in enhanced performance. One way of increasing this access speed is to provide a copy of main storage within the processor itself. This storage buffering is commonly known as a main-store cache.



Performance enhancements with a cache depend upon three factors:

Hit Ratio: the fraction of cache accesses in which the cache is addressed and valid data is found
Access Time: the time required to obtain the data from the cache
Operation Parallelism: the number of functions that can be done in parallel

(Image Omitted)

If the hit ratio of the cache is maximized, the access time to the cache minimized, and processor operation parallelism maximized, the cache will deliver the best possible results.
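The interaction of the first two factors can be illustrated with the standard average-access-time relation. The cycle counts below are illustrative assumptions, not MSP measurements:

```python
# Average memory access time as a function of hit ratio.
# CACHE_CYCLES and MEMORY_CYCLES are assumed values for
# illustration, not figures from the MSP design.

CACHE_CYCLES = 2     # cycles for a cache hit
MEMORY_CYCLES = 10   # assumed cycles for a main-storage access

def average_access_cycles(hit_ratio: float) -> float:
    """Hits are served from the cache; misses go to main storage."""
    return hit_ratio * CACHE_CYCLES + (1.0 - hit_ratio) * MEMORY_CYCLES

# Raising the hit ratio from 0.80 to 0.95 cuts the average
# access time from roughly 3.6 to 2.4 cycles in this example.
print(average_access_cycles(0.80), average_access_cycles(0.95))
```

The example shows why the hit ratio dominates: once the cache hit time is fixed by the hardware, every percentage point of hit ratio directly trades slow main-storage cycles for fast cache cycles.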

To tailor the cache to a particular processor, the operating characteristics of that processor must be investigated. For the Main Storage Processor (MSP) in the IBM System/36, this investigation yielded a set of unique problems, which in turn led to a specific set of features for this cache. These features, their advantages for cache designs in general, and their effects on the hit ratio, access time, and parallelism of the processor are described in further detail below.

The features of this cache scheme are:

Direct-Mapped Real Addressing
Separate Instruction and Data Areas
Access Request to Main Storage, Instruction Cache and Data Cache Initiated in Parallel
Cache Filled in Parallel With Execution Unit
Execution Unit Write-Through
Hardware-Implemented Search/Invalidate
Programmable Single-Row Invalidate
Internal Chip Implementation
Support for Task Switching
Support for Instruction Restart

Direct-Mapped Real Addressing: Addressing the cache affects both the access time to the cache and the hit ratio obtained from it. This cache uses a direct-mapped real-addressing scheme: the cache control logic selects the row address from several bits of the real address, at which point the cache arrays are accessed. For the MSP, the real address is the physical memory address obtained by translating the virtual address. The sequence of events performed is as follows:

1. The execution unit (a subset of the MSP) requests a memory access at a virtual address.
2. The virtual address is translated to a physical (real) address.
3. Several bits of the real address are used to access the cache directly.
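The row-selection step can be sketched as follows. The cache geometry used here (64 rows of 32-byte lines) is an illustrative assumption, not the MSP's actual configuration:

```python
# Sketch of direct-mapped row selection from a real address.
# LINE_SIZE and NUM_ROWS are assumed values for illustration,
# not the MSP's actual cache geometry.

LINE_SIZE = 32    # bytes per cache line
NUM_ROWS = 64     # rows in the cache array

def cache_row(real_address: int) -> int:
    """Select the cache row directly from bits of the real address."""
    return (real_address // LINE_SIZE) % NUM_ROWS

def cache_tag(real_address: int) -> int:
    """Remaining high-order bits, stored alongside the row to validate a hit."""
    return real_address // (LINE_SIZE * NUM_ROWS)

# Two addresses that differ only in their high-order bits map to
# the same row; the stored tag is what distinguishes them on lookup.
a, b = 0x1040, 0x2040
assert cache_row(a) == cache_row(b)
assert cache_tag(a) != cache_tag(b)
```

Because the row is computed with a simple bit selection rather than an associative search, the array access can begin as soon as the real address is available, which is what keeps the access time low.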

Access time to the cache in the MSP is defined as the time from the fetch or write request through the cache to the execution unit latching the data. This access time in the MSP is two machine cycles. All references to a machine cycle assume a 50 ns cycle period; thus, this access takes 100 ns. Without direct-mapped real addressing, this access time would be increased, reducing overall performance.

Using...