Improved Data Cache Reload Performance

IP.com Disclosure Number: IPCOM000036360D
Original Publication Date: 1989-Sep-01
Included in the Prior Art Database: 2005-Jan-29
Document File: 3 page(s) / 31K

Publishing Venue

IBM

Related People

Hardell, WR: AUTHOR [+3]

Abstract

Dirty data cache lines must be stored back to memory; the challenge is to do so without impacting system performance. This disclosure provides a unique method of overlapping data cache store-back operations with processor instruction execution.

This text was extracted from a PDF file.
At least one non-text object (such as an image or picture) has been suppressed.
This is the abbreviated version, containing approximately 55% of the total text.


In processor implementations incorporating a store-in data cache, executing a LOAD/STORE instruction can cause a data cache miss in which the replaced cache line is dirty. This dirty line must then be stored back to memory. In typical data cache implementations, a reload followed by a store-back cache line sequence is generated. When a second LOAD/STORE instruction is executed and results in a second data cache miss, another reload sequence is generated. Any dependent operation encountered must now hold execution until the first reload and store-back sequence completes and the second reload sequence returns the dependent data. To solve this problem, a Pending Store-Back Queue (PSBQ) is implemented. Store-back sequences are queued, allowing the second reload to go ahead of the first store-back. The figure below illustrates the new store-back timing sequence.

Note that writing the PSBQ contents to memory should be delayed only up to the point where it does not create a gap in the memory read/write sequences, because once a write sequence starts, all read sequences must wait. Considering the time required to fetch a data cache line from memory, however, this delay window can be quite large.

(Image Omitted)


A) In executing LOAD 1, a data cache miss, with the re...