Memory Queue Priority Mechanism for a RISC Processor

IP.com Disclosure Number: IPCOM000112681D
Original Publication Date: 1994-Jun-01
Included in the Prior Art Database: 2005-Mar-27
Document File: 4 page(s) / 160K

Publishing Venue

IBM

Related People

Garcia, MJ: AUTHOR [+5]

Abstract

Disclosed is a priority scheme used in a RISC microprocessor memory queue. The memory queue is used to buffer read, write, I/O and cache control operations between the processor's logic and the external memory and I/O bus. The priority scheme described herein is used both to improve the processor performance and to avoid bus deadlock situations in coherent memory systems.

This text was extracted from an ASCII text file.
This is the abbreviated version, containing approximately 35% of the total text.

      The Figure shows a RISC microprocessor cache and memory queue
system.  It consists of the cache logic feeding two read queues and
three write queues.  The read queues are used to queue fetch and load
operations and cacheable store operations, while the write queues are
used for non-cacheable writes; cast-outs of modified data (to make
room in the cache for other data); data pushed out of the cache under
program control (data cache line flush, data cache line store, etc.);
and snoop copy-backs (modified data in the cache written back to
memory because another device on the snooped bus needs to access the
data).

      In addition to the read and write queues, there is a path from
the cache logic to the memory bus for executing various address-only
transactions, which include cache line invalidation, synchronization,
and similar operations.
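The queue organization described above can be pictured with a small data-structure sketch. The names, queue depths, and fields below are illustrative assumptions for this sketch only; they are not taken from the original design.

```python
from collections import deque

# Illustrative assumption: each queue holds a handful of entries.
QUEUE_DEPTH = 4

class MemQueue:
    """A small FIFO buffering operations bound for the memory bus."""
    def __init__(self, name):
        self.name = name
        self.ops = deque()

    def enqueue(self, op):
        if len(self.ops) >= QUEUE_DEPTH:
            return False          # queue full: the source must stall
        self.ops.append(op)
        return True

# Two read queues (fetch/load and cacheable-store reads) and three
# write queues (non-cacheable writes, cast-outs, and snoop copy-back /
# program-directed pushes).  Address-only transactions (invalidate,
# synchronization, etc.) take a direct path and bypass these queues.
read_queues  = [MemQueue("read0"), MemQueue("read1")]
write_queues = [MemQueue("noncacheable"), MemQueue("castout"),
                MemQueue("snoop_push")]
```

Together with the address-only path, these five queues form the six sources of memory-bus operations discussed next.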

      With these six sources for memory bus operations, there are
several types of operations which can be queued:

1.  READS: cacheable and non-cacheable load and fetch operations.

2.  DYNAMIC RELOADS: cacheable reads of the second sector of a cache
    line, following a normal READ of the first cache line sector.

3.  CAST OUT WRITES:  cacheable sector writes of modified data to
    free up the cache line for use by another address.

4.  SNOOP COPYBACK WRITES: writing modified data from cache back into
    memory in response to some other bus device's request for the
    data on the memory bus.

5.  NON-CACHEABLE WRITES: non-burst write operations which should go
    onto the bus in correct temporal order.

6.  CACHE OPERATIONS: Data Cache Sector Flush, Invalidate, or zero.

7.  SYNCHRONIZING OPERATIONS: DSYNC operation to ensure data has been
    written to memory before another processor operation occurs.

8.  PROGRAMMED I/O OPERATIONS: input/output operations generated by
    the processor logic.
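One way to picture how these eight operation classes compete for the bus is a fixed-priority arbiter. The ordering below is a hypothetical illustration only, not the disclosure's actual priority assignment: it places snoop copy-backs high (another bus device is blocked waiting, which matters for deadlock avoidance) and cast-outs low (no immediate consequence to program execution), as the surrounding discussion suggests.

```python
# Hypothetical priority ordering over the eight queued operation
# classes (highest first).  This is an illustrative assumption, not
# the actual scheme from the disclosure.
PRIORITY = [
    "SNOOP_COPYBACK",  # 4. another bus device is waiting on this data
    "SYNC",            # 7. DSYNC synchronizing operations
    "READ",            # 1. loads and fetches; may stall the processor
    "DYNAMIC_RELOAD",  # 2. second-sector cache line reload
    "PIO",             # 8. programmed I/O operations
    "NONCACHE_WRITE",  # 5. ordered non-burst writes
    "CACHE_OP",        # 6. flush / invalidate / zero
    "CAST_OUT",        # 3. lowest: frees a line, nothing waits on it
]

def next_bus_op(pending):
    """Return the highest-priority operation class with work pending.

    `pending` maps an operation-class name to its count of queued
    operations; returns None when nothing is waiting.
    """
    for kind in PRIORITY:
        if pending.get(kind):
            return kind
    return None
```

For example, when a cast-out and a load are both waiting, `next_bus_op({"CAST_OUT": 1, "READ": 1})` selects the load, since a stalled fetch costs performance while the cast-out can wait.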

    For these operations, the priority with which queued operations go
    to the bus is critical in several ways:
     a.  Performance:  Some operations, such as cast-outs, can be very
        low priority, since the operation deals with data of no
        immediate consequence to program execution.  Other
        operations, such as loads and fetches, may stall the
        processor if not executed immediately.
     b.  Sequential Operation:  Some operations, such as non-cacheable
        loads or stores, should execute sequentially in the order
        requested by th...