
Distributed Main Store Bus Arbitration Algorithm and Tie Break Register

IP.com Disclosure Number: IPCOM000101901D
Original Publication Date: 1990-Sep-01
Included in the Prior Art Database: 2005-Mar-17
Document File: 3 page(s) / 136K

Publishing Venue

IBM

Related People

Eikill, RG: AUTHOR [+2]

Abstract

The design and implementation of a multi-processor system requires overcoming a significant number of design and implementation hurdles. This article deals with the problems encountered in sharing a main store bus among symmetric multi-processors without a central hub.

This text was extracted from an ASCII text file.
This is the abbreviated version, containing approximately 42% of the total text.

Distributed Main Store Bus Arbitration Algorithm and Tie Break Register

       The design and implementation of a multi-processor system
requires overcoming a significant number of design and implementation
hurdles.  This article deals with the problems encountered in sharing
a main store bus among symmetric multi-processors without a central
hub.

      This article describes a high-performance arbitration algorithm
for the main store bus that can be implemented in a distributed
fashion across multiple processors.  It assumes an arbitrary number
of processors connected by a shared memory bus, each receiving some
information from every processor on the shared bus.  Each processor
receives the control buses from all other processors and drives its
own control bus.  The arbitration logic is assumed to be duplicated
in all processors.  The invention consists of a tie break register
which works hand-in-hand with the multiple control buses to provide
a high-speed main store bus arbitration algorithm.  It also consists
of queueing registers, one per processor, duplicated in every
processor.  A high-speed arbitration algorithm with no dead cycles
for bus arbitration overhead is described; the algorithm speeds up
cache miss arbitration.  This invention avoids a central arbitration
unit and allows all main store control interfaces to be driven every
cycle.
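
      Because every processor runs identical arbitration logic on the
same globally visible control-bus inputs, all copies reach the same
decision without exchanging a grant.  The following C sketch
illustrates that structure only; the NUM_PROCESSORS value, the
wants_bus representation and the round-robin use of the tie break
register are assumptions, since the extracted text does not give the
exact selection rule.

    #include <stdint.h>

    #define NUM_PROCESSORS 4            /* assumed system size        */

    /* Per-cycle request state visible to every processor.           */
    typedef struct {
        uint8_t wants_bus[NUM_PROCESSORS]; /* processor i wants bus?  */
        uint8_t tie_break;                 /* tie break register      */
    } arb_inputs_t;

    /* Each processor evaluates this same function on the same       */
    /* inputs, so all copies agree on the winner with no central     */
    /* arbiter and no dead cycle spent distributing a grant.         */
    int arbitrate(const arb_inputs_t *in)
    {
        for (int i = 0; i < NUM_PROCESSORS; i++) {
            int cand = (in->tie_break + i) % NUM_PROCESSORS;
            if (in->wants_bus[cand])
                return cand;            /* winning processor index    */
        }
        return -1;                      /* no request this cycle      */
    }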

      The arbitration algorithm receives information from all
processors every cycle.  The information used by the bus arbitration
algorithm comes from the control buses of all processors, including
the processor's own bus.  The purpose of the bus arbitration
algorithm is to maximize use of the main store bus and minimize
decision-making time.  The goal of this algorithm is to keep both
the processor and memory interfaces running as fast as possible
without encountering a deadlock.  The algorithm is consistent with a
single-bus architecture using a cache snooping strategy, which
reduces I/O constraints and simplifies the system.

      Information received from all processors by this algorithm (one
possible encoding is sketched after the list):
      -  Access mode for this cycle (2 bits)
         --  00=No cache access this cycle
         --  01=Cache store this cycle
         --  10=Cache fetch this cycle
         --  11=Primary directory search request
      -  Cache miss on last fetch (1 bit)
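
      As an illustration only, the per-cycle control information could
be packed as in the C sketch below.  The enum and struct names and
the use of bit-fields are assumptions; the disclosure specifies only
the 2-bit access mode encodings listed above and the 1-bit cache
miss indication.

    /* 2-bit access mode driven on each processor's control bus.     */
    typedef enum {
        ACCESS_NONE      = 0x0,   /* 00 = no cache access this cycle */
        ACCESS_STORE     = 0x1,   /* 01 = cache store this cycle     */
        ACCESS_FETCH     = 0x2,   /* 10 = cache fetch this cycle     */
        ACCESS_PD_SEARCH = 0x3    /* 11 = primary directory search   */
    } access_mode_t;

    /* Hypothetical packing of the per-cycle control word.           */
    typedef struct {
        unsigned int access_mode        : 2;  /* access_mode_t value */
        unsigned int miss_on_last_fetch : 1;  /* miss on last fetch  */
    } control_word_t;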

      There are queueing registers for all processors in each
processor.  The queueing registers are tied to the control buses in
one-to-one correspondence, one queueing register for each bus.
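
      A minimal sketch of the duplicated queueing-register state,
under the same assumptions as the earlier sketches, is shown below;
the type and field names are hypothetical, since the disclosure does
not give a concrete layout.

    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_PROCESSORS 4            /* assumed system size        */

    /* One queueing register per control bus, i.e. per processor.    */
    /* It holds an old fetch, store or PD search request that is     */
    /* still pending (only cache misses are queued, not hits).       */
    typedef struct {
        bool    pending;                /* queue non-empty            */
        uint8_t access_mode;            /* mode of the queued access  */
    } queue_reg_t;

    /* Every processor keeps the full array, so the queue state is   */
    /* duplicated identically in all processors.                     */
    typedef struct {
        queue_reg_t queue[NUM_PROCESSORS];
        uint8_t     tie_break;          /* tie break register         */
    } arb_state_t;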

      The queueing registers track old fetches, stores and PD search
requests.  Only cache misses are tracked in the queueing sequencer,
not cache hits.  The queues can either be empty or have one or more
pending requests.  If the queues are not empty, then there are old
accesses pending. If a...