
Asynchronous Pipeline for Queueing Synchronous DMA Cache Management Requests

IP.com Disclosure Number: IPCOM000110287D
Original Publication Date: 1992-Nov-01
Included in the Prior Art Database: 2005-Mar-25
Document File: 2 page(s) / 98K

Publishing Venue

IBM

Related People

Arimilli, RK: AUTHOR [+3]

Abstract

I/O Bus DMA throughput may be significantly improved by providing a DMA controller with a cache, and allowing reads or writes to the cache by a bus master to be overlapped with memory accesses which pre-fill or post-flush the cache. Using such a scheme, a small DMA cache (e.g., 128 bytes) can be used to provide a bus master with uninterrupted access to a relatively large area (e.g., 4048 bytes) of memory. But conditions which indicate the need for additional cache fill/flush may arise during the overlapped operation. Moreover, those conditions are inherently transient (i.e., the bus master access may continue beyond the address which indicates that another fill/flush is needed) and they occur asynchronously with respect to the memory-cache accesses with which they are overlapped.

This text was extracted from an ASCII text file.
This is the abbreviated version, containing approximately 52% of the total text.

Asynchronous Pipeline for Queueing Synchronous DMA Cache Management Requests

       I/O Bus DMA throughput may be significantly improved by
providing a DMA controller with a cache, and allowing reads or writes
to the cache by a bus master to be overlapped with memory accesses
which pre-fill or post-flush the cache.  Using such a scheme, a small
DMA cache (e.g., 128 bytes) can be used to provide a bus master with
uninterrupted access to a relatively large area (e.g., 4048 bytes) of
memory.  But conditions which indicate the need for additional cache
fill/flush may arise during the overlapped operation.  Moreover,
those conditions are inherently transient (i.e., the bus master
access may continue beyond the address which indicates that another
fill/flush is needed) and they occur asynchronously with respect to
the memory-cache accesses with which they are overlapped.  The
problem of using transient, asynchronous conditions to generate
deferred requests for cache fill/flush actions must be solved in
order to achieve overlapped DMA cache operation.  The invention
disclosed here is an asynchronous pipeline structure which solves
this problem for a DMA cache which is partitioned into two sections
such that a bus master can perform DMA accesses to one cache section
while the other cache section is accessing memory.
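      The overlapped operation of the two cache sections can be
sketched in software.  The following Python model is a hypothetical
behavioral simulation only: the class and function names, the
section-rotation policy, and the constants are illustrative
assumptions, not taken from the disclosure (the 128-byte cache and
4048-byte region sizes echo the examples above).  The bus master
reads one section while the controller pre-fills the other, so a
sequential read of the whole region never stalls on memory.

```python
# Hypothetical two-section DMA cache model. Names and the fill policy are
# illustrative assumptions; the disclosure specifies only that the cache is
# partitioned so master accesses to one section overlap memory accesses to
# the other.

SECTION_BYTES = 64           # two sections of a 128-byte DMA cache
REGION_BYTES = 4048          # memory area covered by the transfer

class DmaCache:
    def __init__(self, memory):
        self.memory = memory
        self.data = [None, None]   # bytes currently held by each section
        self.base = [None, None]   # memory base address of each section

    def fill(self, sec, addr):
        """Overlapped memory access: pre-fill one section from memory."""
        self.base[sec] = addr
        self.data[sec] = self.memory[addr:addr + SECTION_BYTES]

    def read(self, addr):
        """Bus-master access: must hit a section that is already filled."""
        for sec in (0, 1):
            b = self.base[sec]
            if b is not None and b <= addr < b + SECTION_BYTES:
                return self.data[sec][addr - b]
        raise RuntimeError("bus-master access missed both sections")

def dma_read_region(memory):
    """Stream REGION_BYTES out of memory through the two-section cache."""
    cache = DmaCache(memory)
    cache.fill(0, 0)                          # prime section 0
    out = bytearray()
    for addr in range(REGION_BYTES):
        if addr % SECTION_BYTES == 0:         # crossing into a new section:
            nxt = addr + SECTION_BYTES        # pre-fill the *other* section
            if nxt < REGION_BYTES:
                cache.fill((addr // SECTION_BYTES + 1) % 2, nxt)
        out.append(cache.read(addr))
    return bytes(out)
```

In this sketch the fill is instantaneous; in the hardware it proceeds
concurrently with the master's accesses to the opposite section, which
is exactly why the fill request must be captured and deferred as the
text goes on to describe.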

      The invention is a pipeline of two stages such that the second
stage is clocked synchronously with the cache manager and the first
stage is clocked asynchronously based on I/O bus signals controlled
by the bus master.  The asynchronous stage captures a fill or flush
request which is a Boolean function of indicators which are valid
when the active edge of the asynchronous clock occurs.  The second
stage has dual roles: it acts both as a pipeline stage and as a
synchronizing latch.
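      The two-stage handshake can be modeled behaviorally.  The
Python sketch below is a hypothetical illustration: the class name,
the two indicator signals, and the consume-and-clear handshake are
assumptions chosen to show the principle, not details from the
disclosure.  Stage 1 latches the request on the active edge of the
asynchronous bus clock and holds it even after the transient
condition passes; stage 2 re-samples stage 1 on the cache manager's
clock, acting as both pipeline register and synchronizer.

```python
# Hypothetical behavioral model of the two-stage request pipeline.
# Names and the exact handshake are illustrative assumptions.

class RequestPipeline:
    def __init__(self):
        self.stage1 = False   # asynchronously clocked capture latch
        self.stage2 = False   # synchronously clocked pipeline/sync latch

    def bus_clock_edge(self, indicator_a, indicator_b):
        # Stage 1: a Boolean function of indicators that are valid at
        # this edge. The condition is transient, so the latch holds the
        # request until the cache manager accepts it.
        if indicator_a and indicator_b:
            self.stage1 = True

    def manager_clock_edge(self):
        # Stage 2: synchronize stage 1 into the cache manager's clock
        # domain and present the request; clear stage 1 once consumed.
        self.stage2 = self.stage1
        if self.stage2:
            self.stage1 = False
        return self.stage2
```

A transient condition asserted for a single bus-clock edge is thus
still delivered to the cache manager on its next synchronous edge,
and delivered exactly once.

```python
p = RequestPipeline()
p.bus_clock_edge(True, True)    # transient condition present
p.bus_clock_edge(False, True)   # condition has already passed
p.manager_clock_edge()          # request delivered anyway (True)
p.manager_clock_edge()          # request consumed, not repeated (False)
```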

      The figure illustrates a realization of...