
Speculative Fetch Memory Protocol Method

IP.com Disclosure Number: IPCOM000015780D
Original Publication Date: 2002-May-02
Included in the Prior Art Database: 2003-Jun-21
Document File: 2 page(s) / 39K

Publishing Venue

IBM

Abstract

This disclosure describes a Speculative Fetch protocol for a command response interface between a memory requester and a central storage memory controller of a computer memory subsystem that 1) improves the utilization of the bidirectional data bus between them, and 2) improves the utilization of fetch resources, while not adding any additional latency to a memory access. The protocol allows the memory requester either to acquire the requested data or to decline it, and, when the data is declined, to re-use a memory controller fetch resource before the operation currently being processed by that resource has fully completed. In a highly pipelined memory subsystem with a distributed L2 cache, in order to minimize the access time required to obtain data from central storage when it is not found in the local L2 cache (cache miss), the memory requester initiates a directory search of the remote L2 caches while simultaneously initiating a fetch access to central storage for the data.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 51% of the total text.



  This disclosure describes a Speculative Fetch protocol for a command response interface between a memory requester and a central storage memory controller of a computer memory subsystem that 1) improves the utilization of the bidirectional data bus between them, and 2) improves the utilization of fetch resources, while not adding any additional latency to a memory access. The protocol allows the memory requester either to acquire the requested data or to decline it, and, when the data is declined, to re-use a memory controller fetch resource before the operation currently being processed by that resource has fully completed.

In a highly pipelined memory subsystem with a distributed L2 cache, in order to minimize the access time required to obtain data from central storage when it is not found in the local L2 cache (cache miss), the memory requester initiates a directory search of the remote L2 caches while simultaneously initiating a fetch access to central storage for the data.

Once the directory searches determine that the data resides in a remote L2 cache (remote cache hit), the data associated with the fetch access to central storage is no longer needed.

The central storage fetch resources could still be in use processing a memory fetch access to the DRAMs when the remote L2 cache hit determination is made, since there are no fixed timing relationships among the receipt of the fetch request by the central storage memory controller, the processing of the fetch request by that controller, and the remote L2 cache hit/miss determination. The memory requester is not permitted to re-use the central storage fetch resources before the DRAM access cycle completes. In order to re-use the central storage fetch resources after a remote L2 cache hit has been determined, the requester must be synchronized to the current fetch operation by receiving status from the central storage memory controller indicating that the current fetch operation has completed the DRAM access portion of the operation.

In the memory subsystem described here, there could be as many as 8 outstanding fetch requests with data, queued in internal fetch buffers in central storage, waiting to be sent back to the memory requester. Reducing the use of the data port between the memory requester and controller by eliminating the need to send unwanted data increases the availability of the port for needed data transfers. Eliminating the need to send unwanted data also makes the fetch resources available for use again when the DRAM access is complete, rather than only after the data transfer is complete.

For con...