
MICRO CHANNEL Data Streaming and Input/Output Snooping Facility for Personal Computer Systems

IP.com Disclosure Number: IPCOM000106206D
Original Publication Date: 1993-Oct-01
Included in the Prior Art Database: 2005-Mar-20
Document File: 6 page(s) / 231K

Publishing Venue

IBM

Related People

Genduso, TB: AUTHOR [+3]

Abstract

Described is an architectural implementation to provide an improved input/output (I/O) acquisition streaming and snooping facility for store-in caches for personal computer systems equipped with a Micro Channel* (MC).

This text was extracted from an ASCII text file.
This is the abbreviated version, containing approximately 30% of the total text.

MICRO CHANNEL Data Streaming and Input/Output Snooping Facility for Personal Computer Systems

      Described is an architectural implementation to provide an
improved input/output (I/O) acquisition streaming and snooping
facility for store-in caches for personal computer systems equipped
with a Micro Channel* (MC).

      Enhancement of data transfer rates is provided by a unique I/O
acquisition and snooping method for store-in caches, which enables MC
data streaming to come closer to its rated transfer potential while
coexisting with the store-in caches of high performance processors.

      Typically, high performance processors incorporate store-in
cache facilities to improve processor performance.  The store-in
cache facility allows processor store and fetch operations to access
a high speed cache at clock speed, or close to clock speed.  The
store-in cache is more closely coupled to the processor than a
store-thru cache, so I/O must perform a snoop operation on both I/O
read and write operations, because the cache may contain the latest
copy of the data.  As a result, on an I/O read the cache must be
snooped to determine whether the line has been updated by the
processor; if it has, the data must be fetched from the cache and
supplied to the I/O device.

      On an I/O write operation, if the addressed data has been
updated, the cache line is first stored back into memory and marked
invalid; only then is the I/O write operation performed to memory.
The snoop takes several clock cycles to perform its operations at the
cache, and an I/O write that hits an updated cache line expends an
extra memory cycle before the write may proceed.  These additional
steps increase memory utilization and add I/O latency, which can be
detrimental to MC data streaming performance.
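
      As an illustration only (not part of the original disclosure),
the baseline read and write snoop paths described in the two
paragraphs above might be modeled in C roughly as follows.  The cache
geometry, the direct-mapped organization, and all names are
assumptions made for this sketch; invalidating a clean write hit is
likewise an assumption, since the text discusses only the updated
case.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical geometry; the disclosure does not give these values. */
    #define LINE_SIZE  32u                /* bytes per cache line        */
    #define NUM_LINES  256u               /* lines in the modeled cache  */
    #define MEM_SIZE   (64u * 1024u)      /* bytes of modeled memory     */

    enum line_state { INVALID, CLEAN, MODIFIED };

    struct cache_line {
        uint32_t        line_addr;        /* address of the cached line  */
        enum line_state state;
        uint8_t         data[LINE_SIZE];
    };

    static struct cache_line cache[NUM_LINES];  /* modeled store-in cache */
    static uint8_t           memory[MEM_SIZE];  /* modeled system memory  */

    /* Snoop: return the cache slot holding line_addr, or NULL on a miss. */
    static struct cache_line *snoop(uint32_t line_addr)
    {
        struct cache_line *slot = &cache[(line_addr / LINE_SIZE) % NUM_LINES];
        if (slot->state != INVALID && slot->line_addr == line_addr)
            return slot;
        return NULL;
    }

    /* Baseline I/O read: the store-in cache may hold the only current
     * copy, so every read snoops the cache before touching memory.      */
    static void io_read_line(uint32_t line_addr, uint8_t *io_buf)
    {
        struct cache_line *hit = snoop(line_addr);
        if (hit != NULL && hit->state == MODIFIED)
            memcpy(io_buf, hit->data, LINE_SIZE);           /* fetch from cache  */
        else
            memcpy(io_buf, &memory[line_addr], LINE_SIZE);  /* memory is current */
    }

    /* Baseline I/O write: a modified line is first cast out to memory and
     * invalidated (the extra memory cycle), then the I/O data is written. */
    static void io_write_line(uint32_t line_addr, const uint8_t *io_buf)
    {
        struct cache_line *hit = snoop(line_addr);
        if (hit != NULL) {
            if (hit->state == MODIFIED)
                memcpy(&memory[line_addr], hit->data, LINE_SIZE);  /* cast out */
            hit->state = INVALID;   /* clean hits are also invalidated here
                                       (an assumption, see the note above)  */
        }
        memcpy(&memory[line_addr], io_buf, LINE_SIZE);             /* I/O write */
    }

    int main(void)
    {
        uint8_t out[LINE_SIZE] = { 0xAA };
        uint8_t in[LINE_SIZE];

        /* Pretend the processor left line 0x100 modified in the cache. */
        struct cache_line *slot = &cache[(0x100u / LINE_SIZE) % NUM_LINES];
        slot->line_addr = 0x100u;
        slot->state     = MODIFIED;
        memset(slot->data, 0x55, LINE_SIZE);

        io_read_line(0x100u, in);    /* snoop hits a modified line        */
        io_write_line(0x100u, out);  /* cast out, invalidate, then write  */

        printf("first byte read from cache: 0x%02X\n", in[0]);
        return 0;
    }

      The extra copy of the modified line back into memory in
io_write_line corresponds to the extra memory cycle and added I/O
latency noted above.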

      The significant aspect of the concept is improved operational
performance during data streaming; it is implemented in four steps:

1.  I/O data is accumulated to a full cache line, and only that cache
    line is snooped in the processor cache.  This reduces memory
    utilization, I/O latency, and snoop traffic to the processor
    cache tag array, while increasing the potential for a higher I/O
    data rate.  (Steps 1 and 2 are sketched in code after this list.)

2.  Both the I/O read and write operations are pre-snooped by one
    sequential cache line address ahead.  This anticipatory action
    reduces latency and increases the potential for higher I/O data
    rates.

3.  On write operations into a store-in cache, if the data has not
    been updated and the I/O operation does not cover a full cache
    line, the non-updated data is cast out and the cache line is
    marked invalid; the I/O write operation is then performed to
    memory.  When the I/O operation is on a full cache line,...
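
      Steps 1 and 2 can be illustrated with a rough C sketch of a
streaming write controller (only the write side is shown; the
truncated remainder of step 3 and the fourth step are not modeled).
The line size, the controller structure, and the stub hardware
interfaces below are assumptions made for the sketch rather than
details taken from the disclosure.

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    #define LINE_SIZE 32u     /* hypothetical bytes per cache line */

    /* Stubs standing in for the processor cache tag array and the memory
     * controller; a real design would drive the actual snoop hardware.   */
    static bool snoop_line_modified(uint32_t line_addr)
    {
        (void)line_addr;
        return false;         /* stub: line never found modified */
    }
    static void castout_and_invalidate(uint32_t line_addr) { (void)line_addr; }
    static void memory_write(uint32_t line_addr, const uint8_t *buf)
    {
        (void)line_addr;
        (void)buf;
    }

    /* Streaming write controller: I/O data is accumulated to a full cache
     * line and only that line is snooped (step 1); the next sequential
     * line is pre-snooped so its result is ready in advance (step 2).    */
    struct stream_ctl {
        uint32_t line_addr;      /* line currently being accumulated      */
        uint32_t fill;           /* bytes accumulated so far              */
        uint8_t  buf[LINE_SIZE];
        bool     next_valid;     /* pre-snoop result held for line + 1    */
        bool     next_modified;
    };

    static void flush_line(struct stream_ctl *s)
    {
        /* One snoop per cache line; the pre-snoop result is reused when
         * the stream has advanced sequentially from the previous line.   */
        bool modified = s->next_valid ? s->next_modified
                                      : snoop_line_modified(s->line_addr);
        if (modified)
            castout_and_invalidate(s->line_addr);
        memory_write(s->line_addr, s->buf);

        /* Step 2: pre-snoop one sequential cache line ahead.  In hardware
         * this would overlap with the data transfer, not follow it.      */
        s->next_modified = snoop_line_modified(s->line_addr + LINE_SIZE);
        s->next_valid    = true;

        s->line_addr += LINE_SIZE;
        s->fill = 0;
    }

    /* Feed streaming transfers into the controller; only a completed
     * cache line triggers a snoop and a memory write.                    */
    void stream_write(struct stream_ctl *s, const uint8_t *data, uint32_t len)
    {
        while (len > 0) {
            uint32_t n = LINE_SIZE - s->fill;
            if (n > len)
                n = len;
            memcpy(&s->buf[s->fill], data, n);
            s->fill += n;
            data    += n;
            len     -= n;
            if (s->fill == LINE_SIZE)
                flush_line(s);
        }
    }

    int main(void)
    {
        uint8_t chunk[24];
        struct stream_ctl s = { .line_addr = 0x1000u };  /* aligned start */

        memset(chunk, 0x5A, sizeof(chunk));
        for (int i = 0; i < 8; i++)     /* 8 x 24 bytes = exactly 6 lines */
            stream_write(&s, chunk, (uint32_t)sizeof(chunk));
        return 0;
    }

      In hardware, the pre-snoop of the next sequential line would
overlap with the current line's data transfer rather than follow it,
which is what removes the snoop latency from the streaming path.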