
Initialization of I/O Subsystems According to Cache Technique

IP.com Disclosure Number: IPCOM000111397D
Original Publication Date: 1994-Feb-01
Included in the Prior Art Database: 2005-Mar-26
Document File: 2 page(s) / 98K

Publishing Venue

IBM

Related People

Genduso, TB: AUTHOR [+2]

Abstract

Disclosed is a method for reducing latency caused by snooping of complex processor caches, such as "write-back" caches. This latency impedes the operation of I/O subsystems implementing high-speed data streaming, such as the EISA bus, or such as the Micro Channel* bus implementing data streaming at 80 or 160 MB/sec.

This text was extracted from an ASCII text file.
This is the abbreviated version, containing approximately 52% of the total text.

      To preserve data coherence, I/O subsystems must snoop the
processor cache, and the manner of snooping depends on whether that
cache is "store-in" or "store-thru."  Latency is
introduced to an extent depending on the caching property of the
system memory and on how it is used by software.  For "store-in"
cache, the I/O subsystem must snoop both read and write operations.
This process typically adds cycles to an I/O operation because it is
necessary to determine whether the read/write data is "dirty" in the
cache before the I/O operation can be started.  For "store-thru"
cache, the I/O subsystem must snoop the write operation only, in a
process which can occur concurrently with the I/O operation.  For
"uncache" memory, the I/O subsystem does not need to snoop at all.
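The three cases above can be sketched as a small C table (an illustration only; the enum and function names are invented here and are not part of the disclosure):

```c
#include <assert.h>

/* Cache techniques named in the disclosure, and the snooping that each
 * one obliges the I/O subsystem to perform. */
enum cache_policy { STORE_IN, STORE_THRU, UNCACHE };

enum snoop_mode {
    SNOOP_NONE,         /* "uncache" memory: no snooping at all           */
    SNOOP_WRITES,       /* "store-thru": snoop writes only, concurrently
                           with the I/O operation                         */
    SNOOP_READS_WRITES  /* "store-in": snoop reads and writes, resolving
                           any "dirty" lines before the I/O can start     */
};

static enum snoop_mode snoop_required(enum cache_policy p)
{
    switch (p) {
    case STORE_IN:   return SNOOP_READS_WRITES;
    case STORE_THRU: return SNOOP_WRITES;
    default:         return SNOOP_NONE;
    }
}
```

The extra cycles come only from the "store-in" case, where the dirty-line check must complete before the I/O operation begins.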

      Certain processors let the paging manager of the operating
system define memory as "store-in," "store-thru," or "uncache" on a
4K page boundary.  This characterization aids memory management
of the operating system and directs the caching of memory accesses by
the processor.  Memory that is "shared" is marked as "store-thru."
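A page-granular attribute table of this kind can be sketched as follows (a minimal illustration; the table size, region size, and function name are assumptions, not the processor's actual paging structures):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12      /* 4K page boundary, as in the disclosure    */
#define NPAGES     1024    /* hypothetical 4MB region, for illustration */

enum page_attr { ATTR_STORE_IN, ATTR_STORE_THRU, ATTR_UNCACHE };

static enum page_attr page_attrs[NPAGES];

/* Mark every 4K page overlapping [addr, addr + len) with one caching
 * attribute, e.g. marking a "shared" buffer as "store-thru". */
static void mark_pages(uint32_t addr, uint32_t len, enum page_attr a)
{
    uint32_t p;
    for (p = addr >> PAGE_SHIFT; p <= (addr + len - 1) >> PAGE_SHIFT; p++)
        page_attrs[p] = a;
}
```

For example, marking a shared buffer at address 0x3000 of length 0x2000 as "store-thru" sets the attribute on pages 3 and 4 and leaves neighboring pages untouched.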

      Operating systems allocate memory based on a storage
management policy, with some assuring that I/O
operations are isolated from application space for security as well
as for ease of management.  Thus, an I/O operation is read or written
into 4K pages that are separated from the user space, to be copied
into the user space when the I/O operation is completed.  In this
way, I/O operations are "common supervisor services," which should be
assigned to "store-thru" cache and, in some instances, to "uncache"
memory.
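The copy-out step described above can be sketched in C (a hedged illustration; the page and function names are invented here, and the single static page stands in for the supervisor's I/O buffer pool):

```c
#include <assert.h>
#include <string.h>

#define PAGE_SIZE 4096

/* Supervisor-owned 4K page, isolated from application space; under the
 * policy above it would be assigned "store-thru" or "uncache" memory. */
static char io_page[PAGE_SIZE];

/* Once the I/O operation completes, the data is copied into the
 * application's buffer in user space. */
static void io_complete_copyout(char *user_buf, size_t len)
{
    memcpy(user_buf, io_page, len < PAGE_SIZE ? len : PAGE_SIZE);
}
```

Keeping the device's target pages out of application space gives the isolation the text describes, at the cost of one copy per completed operation.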

      To exploit data streaming fully and use I/O subsystems more
efficiently, the memory properties and storage management
policy of the operating system are made known to the I/O bridge in
the Central Electronic Complex (CEC).  This information, which
indicates how the operating system treats I/O buffers and memory
allocated for I/O operations, is transferred by invoking an ABIOS
routine during IPL.  In this way, th...
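One way the IPL-time hand-off could look, sketched in C (the routine name and bridge structure are invented for illustration; the disclosure does not specify the ABIOS interface):

```c
#include <assert.h>

enum io_mem_policy { IO_MEM_STORE_IN, IO_MEM_STORE_THRU, IO_MEM_UNCACHE };

struct io_bridge {
    enum io_mem_policy io_policy;   /* latched once during IPL */
};

/* Hypothetical ABIOS-style routine: the operating system tells the I/O
 * bridge in the CEC how it allocates memory for I/O buffers. */
static void abios_set_io_mem_policy(struct io_bridge *b, enum io_mem_policy p)
{
    b->io_policy = p;
}

/* Per-transfer decision: only "store-in" I/O memory forces the bridge to
 * snoop and wait before starting a transfer; "store-thru" snoops writes
 * concurrently, and "uncache" skips snooping entirely. */
static int must_snoop_before_transfer(const struct io_bridge *b)
{
    return b->io_policy == IO_MEM_STORE_IN;
}
```

With the policy latched at IPL, a bridge serving "store-thru" or "uncache" I/O buffers can stream data without inserting snoop-wait cycles into each operation.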