
Inexpensive Cache Memory for Microprocessors

IP.com Disclosure Number: IPCOM000039927D
Original Publication Date: 1987-Aug-01
Included in the Prior Art Database: 2005-Feb-01
Document File: 2 page(s) / 46K

Publishing Venue

IBM

Related People

Nielsen, CV: AUTHOR

Abstract

A technique is described whereby an inexpensive cache memory is added to microprocessor systems to increase performance during sequential address execution of operating programs. The concept is particularly applicable to systems whose memories run with non-zero wait states and which have not incorporated a cache. It relies on the assumption that the majority of memory accesses are sequential. After completion of a memory read operation, a second read is immediately started for the next sequential address, and the data is held for the next request. If the next requested address matches the held address, the data is supplied without another memory access. The effective performance gain is realized through the use of the internal cycle overlap.


Inexpensive Cache Memory for Microprocessors

A technique is described whereby an inexpensive cache memory is added to microprocessor systems to increase performance during sequential address execution of operating programs. The concept is particularly applicable to systems whose memories run with non-zero wait states and which have not incorporated a cache. It relies on the assumption that the majority of memory accesses are sequential. After completion of a memory read operation, a second read is immediately started for the next sequential address, and the data is held for the next request. If the next requested address matches the held address, the data is supplied without another memory access. The effective performance gain is realized through the use of the internal cycle

(Image Omitted)

overlap. The cache consists of plus-one adder 10, as shown in Fig. 1, comparator 11, control state machine 12, holding register 13 and flush logic 14. The timing chart, as shown in Fig. 2, indicates the relative degree of performance increase which may be realized from the concept.
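The behavior of the components above can be sketched in software as a single-entry prefetch buffer. This is a minimal illustrative model, not the disclosed hardware: the class and variable names are assumptions, while the comments tie each step back to the numbered components of Fig. 1 (plus-one adder 10, comparator 11, holding register 13, flush logic 14).

```python
# Behavioral sketch of the single-entry prefetch cache described above.
# Names are illustrative assumptions; comments reference the disclosure's
# Fig. 1 components.

class PrefetchCache:
    """After every read, immediately prefetch the next sequential address."""

    def __init__(self, memory):
        self.memory = memory      # backing store running with wait states
        self.held_addr = None     # address side of holding register 13
        self.held_data = None     # data side of holding register 13
        self.hits = 0
        self.misses = 0

    def read(self, addr):
        if addr == self.held_addr:      # comparator 11 sees a match:
            self.hits += 1              # use the held data, no memory access
            data = self.held_data
        else:
            self.misses += 1            # full (wait-state) memory access
            data = self.memory[addr]
        # Plus-one adder 10: start the read of the next sequential address
        # immediately, overlapping it with the processor's internal cycle.
        nxt = addr + 1
        if nxt < len(self.memory):
            self.held_addr, self.held_data = nxt, self.memory[nxt]
        else:
            self.held_addr = self.held_data = None
        return data

    def flush(self):
        # Flush logic 14: invalidate the held entry (e.g. after a write or
        # a branch) so stale data is never returned.
        self.held_addr = self.held_data = None
```

In a run of sequential reads only the first pays the full memory-access penalty; every subsequent read is satisfied from the holding register while the next fetch overlaps the processor's internal cycle, which is where the performance gain in Fig. 2 comes from.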
