
Apparatus for High Throughput Protocols in High-Performance Computer Systems

IP.com Disclosure Number: IPCOM000116153D
Original Publication Date: 1995-Aug-01
Included in the Prior Art Database: 2005-Mar-30
Document File: 2 page(s) / 55K

Publishing Venue

IBM

Related People

So, K: AUTHOR [+2]

Abstract

A scheme is disclosed to double the message bandwidth in high-performance or multiprocessor systems, so that the high data bandwidth can be fully utilized without the expense of two full uni-directional message and address buses. The basic idea is to classify the messages in the system into requests and replies and, accordingly, to allocate a full bi-directional bus to all requests and a very narrow but uni-directional bus to replies from the memory side.

This text was extracted from an ASCII text file.
This is the abbreviated version, containing approximately 54% of the total text.


      Several types of messages are sent between the components in
the system: for example, a cache miss from the on-chip caches to the
L2 caches, the return of a cache miss, a cross-interrogation (XI)
check from one cache to a remote cache, and the return of data from a
cache to the memory or vice versa.  To simplify the description, the
following implementation steps are confined to the interconnection
between a processor chip (with on-chip L1 caches) and its L2 cache
only.  Extensions to other interconnections, say between the L2
caches and the memory, are done in the same way; furthermore, the L2
cache can be shared by more than one processor.  The disclosed
interconnect structure contains the following ingredients:
  1.  Message types: There are two types of messages in the system:
       request and reply.  All requests between components carry an
       address and a request id; the r...
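The request/reply split above can be sketched in software terms: each request carries an address and a request id, and replies returned on the narrow bus are matched to their outstanding request by that id. The following is a minimal illustrative sketch, not part of the disclosure; the names Request, Reply, and RequestTable are hypothetical.

```python
# Hypothetical sketch of the two message types and id-based matching.
from dataclasses import dataclass


@dataclass
class Request:
    req_id: int   # tag carried with every request
    address: int  # travels on the wide bi-directional request bus


@dataclass
class Reply:
    req_id: int   # tag matching an outstanding request
    data: bytes   # returned on the narrow uni-directional reply bus


class RequestTable:
    """Tracks outstanding requests so replies can be matched by id."""

    def __init__(self):
        self._pending = {}
        self._next_id = 0

    def issue(self, address):
        # Assign the next id and remember the request as outstanding.
        req = Request(self._next_id, address)
        self._pending[req.req_id] = req
        self._next_id += 1
        return req

    def complete(self, reply):
        # A reply retires the outstanding request with the same id.
        return self._pending.pop(reply.req_id)
```

With such a table, the reply bus need only carry the id tag and the data, since the address and other request fields are recoverable on the requesting side.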