
Dual I/O Cache Buffers

IP.com Disclosure Number: IPCOM000109363D
Original Publication Date: 1992-Aug-01
Included in the Prior Art Database: 2005-Mar-24
Document File: 1 page(s) / 49K

Publishing Venue

IBM

Related People

Arimilli, RK: AUTHOR [+2]

Abstract

Disclosed is an implementation that doubles throughput between two buses operating at different frequencies using dual I/O cache buffers.

This is the abbreviated version, containing approximately 68% of the total text.

Dual I/O Cache Buffers

       Disclosed is an implementation that doubles throughput
between two buses operating at different frequencies using dual I/O
cache buffers.

      The IBM RISC System/6000* I/O Channel Controller (IOCC)
implements sixteen DMA cache buffers.  During a DMA transfer, these
cache buffers match the speeds of the two buses and transfer data
between them with minimal overhead.
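
      As a rough illustration only, the following C sketch models the
organization described above: sixteen cache buffers, each 128 bytes
split into two 64-byte components.  All names (dma_cache_buffer,
valid, current, and so on) are assumptions made for the sketch, not
terms from the IOCC design.

#include <stdint.h>

#define NUM_DMA_BUFFERS  16   /* sixteen DMA cache buffers in the IOCC     */
#define COMPONENT_BYTES  64   /* each buffer is split into two 64-byte     */
#define NUM_COMPONENTS    2   /* components, 128 bytes in total            */

struct dma_cache_buffer {
    uint8_t  data[NUM_COMPONENTS][COMPONENT_BYTES]; /* 128 bytes of data      */
    uint32_t addr[NUM_COMPONENTS];   /* system address held by each component */
    int      valid[NUM_COMPONENTS];  /* component holds prefetched data       */
    int      current;                /* index of the component being accessed */
};

static struct dma_cache_buffer iocc_buffers[NUM_DMA_BUFFERS];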

      Most DMA transfers are sequential.  To take advantage of this,
the DMA cache buffer size was chosen as 128 bytes, logically split
into two 64-byte components.  During reads, the two components are
managed as the current and the next 64-byte cache buffer.  While data
is accessed from the current 64-byte component, the next 64-byte
component is prefetched.  If the fill rate is faster than the empty
rate, the device can read without any hold-offs, except at a 4K page
boundary, where the IOCC must access a Translation Control Word.
When the device terminates the read operation, the prefetched
component is held in the cache buffer along with its address, in case
the device resumes the read operation later; that subsequent read
incurs no hold-off.  During reads, therefore, the 128-byte cache
buffer acts as a ring buffer composed of two 64-byte components.  The
number of components may be increased to further amortize bus
overhead.
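
      The read-path management can be sketched as follows, building
on the structure above.  This is a simplified software model, not the
IOCC implementation: fetch_64_from_system_bus() and the page-boundary
test are stand-ins for controller internals, and in hardware the
prefetch overlaps the device's access rather than running
sequentially as written here.

#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 4096   /* 4K page; crossing it requires a new TCW */

/* Hypothetical helper: fill one 64-byte component from the system bus. */
void fetch_64_from_system_bus(uint8_t *dst, uint32_t addr);

/* Deliver 64 bytes at 'addr' to the device and prefetch the next 64 bytes. */
void device_read_64(struct dma_cache_buffer *b, uint32_t addr, uint8_t *out)
{
    int cur = b->current;
    int nxt = cur ^ 1;                       /* the other component */

    if (!b->valid[cur] || b->addr[cur] != addr) {
        /* Miss: fetch the requested 64 bytes (this is a device hold-off). */
        fetch_64_from_system_bus(b->data[cur], addr);
        b->addr[cur]  = addr;
        b->valid[cur] = 1;
    }

    /* Prefetch the next sequential 64 bytes, unless that crosses a 4K page
     * boundary, where the IOCC must first access a Translation Control Word. */
    uint32_t next_addr = addr + COMPONENT_BYTES;
    if ((next_addr % PAGE_SIZE) != 0 &&
        (!b->valid[nxt] || b->addr[nxt] != next_addr)) {
        fetch_64_from_system_bus(b->data[nxt], next_addr);
        b->addr[nxt]  = next_addr;
        b->valid[nxt] = 1;
    }

    memcpy(out, b->data[cur], COMPONENT_BYTES); /* hand data to the device  */
    b->current = nxt;              /* ring behavior: next becomes current   */
}

Because the valid component and its address are retained when the
device stops reading, a later call that resumes at the prefetched
address hits in the buffer and, as the disclosure notes, incurs no
hold-off.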

      During writes, the 128-byte cache buffer is split logically
into two 64...