
Cache Arbitration Prioritization

IP.com Disclosure Number: IPCOM000108819D
Original Publication Date: 1992-Jun-01
Included in the Prior Art Database: 2005-Mar-23
Document File: 3 page(s) / 120K

Publishing Venue

IBM

Related People

Balser, DM: AUTHOR [+4]

Abstract

Selecting a priority scheme for cache requests that allows high levels of performance, is easy to implement, and will not cause deadlocks within a processor can be a challenging task. What follows is a description of, and the reasons for, the priorities chosen in the processor that powers the IBM RISC System/6000* (RS/6000) Model 220.

This is the abbreviated version, containing approximately 52% of the total text.

Cache Arbitration Prioritization

       Selecting a priority scheme for cache requests that allows
high levels of performance, is easy to implement, and will not cause
deadlocks within a processor can be a challenging task. What follows
is a description of, and the reasons for, the priorities chosen in the
processor that powers the IBM RISC System/6000* (RS/6000) Model 220.

      Multiple sources can be requesting use of the cache in any
cycle, and only one of the requesters can be serviced at any given
time.  Selecting the priority in which to grant the requests can have
a significant effect on the performance of the processor.  The
problem is selecting the priority that will give good performance and
cache utilization without introducing high complexity.  The
priorities must be defined such that a given request can never be
dependent upon a lower priority request completing ahead of it, which
would result in a deadlock.
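The deadlock condition above can be restated as an invariant: under a fixed-priority scheme, anything a request waits on must itself be of strictly higher priority. A minimal sketch of that check, with hypothetical names (`deadlock_free` and `depends_on` are illustrative, not from the disclosure):

```python
def deadlock_free(priority, depends_on):
    """Check the deadlock invariant for a fixed-priority scheme.

    priority:   list of request names, highest priority first.
    depends_on: dict mapping a request to the set of requests that
                must complete before it can be granted.

    Safe only if no request depends on a lower-priority request
    completing ahead of it.
    """
    rank = {req: i for i, req in enumerate(priority)}  # 0 = highest
    return all(rank[dep] < rank[req]
               for req, deps in depends_on.items()
               for dep in deps)

# A request may depend on a higher-priority one...
print(deadlock_free(["A", "B", "C"], {"B": {"A"}}))   # -> True
# ...but never on a lower-priority one: that is a potential deadlock.
print(deadlock_free(["A", "B", "C"], {"A": {"C"}}))   # -> False
```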

      The possible types of requests for use of the cache are listed
below, from highest priority to lowest.  The reasons for assigning
the given priority are explained for each possible type.
1.  Storing of load data from memory.
2.  Retrieval of data for a store request accepted by the memory
system.
3.  Splicing of load data for unaligned requests.
4.  Data transfer requests from the co-processor.
5.  Branch requests out of the fixed-point writeback stage.
6.  Load requests out of the fixed-point writeback stage.
7.  Store requests out of the fixed-point writeback stage.
8.  Branch requests out of the fixed-point execute stage.
9.  Fetch requests resulting from the prediction of a branch.
10. Load requests out of the fixed-point execute stage.
11. Store requests out of the fixed-point execute stage.
12. Retrieval of data for the next store to be written to memory.
13. Fetch requests for the next sequential instruction.
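The list above behaves as a fixed-priority arbiter: in each cycle the highest-priority pending request is granted and all others wait. A minimal sketch of that grant decision in Python; the identifier names paraphrase the thirteen request types and are not taken from the original disclosure:

```python
# Hypothetical fixed-priority cache arbiter modeled on the list above.
# Index 0 is the highest priority.
REQUEST_TYPES = [
    "store_load_data_from_memory",       # 1
    "retrieve_data_for_accepted_store",  # 2
    "splice_unaligned_load_data",        # 3
    "coprocessor_data_transfer",         # 4
    "branch_from_writeback",             # 5
    "load_from_writeback",               # 6
    "store_from_writeback",              # 7
    "branch_from_execute",               # 8
    "fetch_for_predicted_branch",        # 9
    "load_from_execute",                 # 10
    "store_from_execute",                # 11
    "retrieve_data_for_next_store",      # 12
    "fetch_next_sequential",             # 13
]

def arbitrate(pending):
    """Grant the highest-priority pending request, or None if idle.

    `pending` is a set of request-type names.  Only one requester is
    serviced per cycle, mirroring the single cache port described in
    the text.
    """
    for req in REQUEST_TYPES:  # scan from highest priority downward
        if req in pending:
            return req
    return None

# Example: a predicted-branch fetch loses to load data arriving
# from memory, which must be accepted immediately.
print(arbitrate({"fetch_for_predicted_branch",
                 "store_load_data_from_memory"}))
# -> store_load_data_from_memory
```

In hardware this decision would be combinational logic (a priority encoder) rather than a loop, but the grant outcome is the same.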

      The storing of load data from memory was given highest priority
because it is imperative that the load buffers be emptied and able to
accept the next piece of memory data. There is no mechanism in the
processor for forcing the memory data to remain on the bus for more
than one cycle.

      Retrieval of store data is given a high priority when the store
request has already been made to the memory system and we know that
it will be expecting data to be sent next cycle.  The memory system
will read the data bus and store whatever is there, so we must ensure
that the data is on the bus.  This request cannot conflict with a
request to store load data from memory because the memory system
cannot switch from a load to a store operation quickly eno...