
Virtual Channels Connect Cache Memories

IP.com Disclosure Number: IPCOM000118818D
Original Publication Date: 1997-Jul-01
Included in the Prior Art Database: 2005-Apr-01
Document File: 2 page(s) / 76K

Publishing Venue

IBM

Related People

Getzlaff, K: AUTHOR [+3]

Abstract

A multiprocessor system with private cache memories (per processor) necessitates a method of maintaining cache coherency. In a system that uses a Modified, Exclusive, Shared, Invalid (MESI) protocol (or some similar protocol), one cache must be able to change the state of a copy of a line in any other cache to "invalid" or "shared".

This text was extracted from an ASCII text file.
This is the abbreviated version, containing approximately 52% of the total text.

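The MESI requirement described above can be illustrated with a minimal sketch. This is a hypothetical model (the class and function names are illustrative, not from the disclosure) of one cache forcing the copies in all other caches to "invalid" before taking a line exclusive:

```python
# Minimal sketch of the MESI cross-invalidation requirement: before one
# cache may hold a line exclusively, every other cache's copy of that
# line must be changed to "invalid" (or "shared" for a read request).
# All names here are illustrative assumptions.

MODIFIED, EXCLUSIVE, SHARED, INVALID = "M", "E", "S", "I"

class L1Cache:
    def __init__(self, name):
        self.name = name
        self.lines = {}          # line address -> MESI state

    def set_state(self, line, state):
        self.lines[line] = state

def request_exclusive(requester, all_caches, line):
    """Grant `requester` the line exclusively by invalidating all
    other copies -- the cross-cache state change MESI requires."""
    for cache in all_caches:
        if cache is not requester and cache.lines.get(line) not in (None, INVALID):
            cache.set_state(line, INVALID)      # cross-invalidate
    requester.set_state(line, EXCLUSIVE)

pu_a, pu_b = L1Cache("PU_A"), L1Cache("PU_B")
pu_a.set_state(0x100, SHARED)
pu_b.set_state(0x100, SHARED)
request_exclusive(pu_a, [pu_a, pu_b], 0x100)
print(pu_a.lines[0x100], pu_b.lines[0x100])     # E I
```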

      The disclosed system has a shared L2 cache with a connection to
each L1 cache.  All communication between L1 caches is done via the
shared L2 cache.  The L2 cache also records which L1 has a copy of
which cache line.  The L1 caches in the described system are "write
through", while the L2 is "write back".  The described network can be
used without any change for a "write back" implementation of the L1
caches.
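The bookkeeping that the shared L2 performs can be sketched as a small directory. This is an assumed structure (class and method names are hypothetical), showing only the idea that the L2 records which L1 holds a copy of which cache line, so an exclusive request can be forwarded to exactly the L1s that must give up their copies:

```python
# Sketch of the L2's record-keeping: for each cached line, the set of
# L1 caches holding a copy.  All names are illustrative assumptions.

class L2Directory:
    def __init__(self):
        self.copies = {}                     # line address -> set of L1 ids

    def record_fill(self, line, l1_id):
        """Note that L1 `l1_id` now holds a copy of `line`."""
        self.copies.setdefault(line, set()).add(l1_id)

    def grant_exclusive(self, line, requester):
        """Grant `requester` the line exclusively; return the L1s that
        must be sent a cross-invalidate first."""
        targets = self.copies.get(line, set()) - {requester}
        self.copies[line] = {requester}      # only the requester keeps a copy
        return targets

directory = L2Directory()
directory.record_fill(0x100, "PU_A")
directory.record_fill(0x100, "PU_B")
print(directory.grant_exclusive(0x100, "PU_A"))   # {'PU_B'}
```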

      As packaging restricts the number of connecting wires to a
minimum, it is desirable to implement the connection between the
caches as a single bus (either a Bidirectional Bus (BIDI) or one bus
per direction).  This gives the highest bandwidth for a given number
of connecting wires.  But this approach has a deadlock problem:
  o  PU_A and PU_B have a copy of line X as shared.
  o  PU_A and PU_B request the exclusive state for line X at
      about the same time.
  o  Both PUs send a message to the shared L2 requesting the
      exclusive state.
  o  The request of PU_A is processed in the L2, while the
      request of PU_B has to wait in the inbound message queue
      of the L2.  (Only one of the two requests can be granted.
      If PU_A's request is granted, then PU_B's request must wait
      or vice versa.)
  o  The L2 forwards PU_A's request to PU_B.
  o  PU_B responds, but this message never gets through
      to the L2.  It waits in the incoming message queue
      of the L2 behind the first...
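The deadlock sequence above can be sketched as a toy queue model. This is a hypothetical illustration (the queue and message names are assumptions, not from the disclosure) of why a single channel into the L2 deadlocks: PU_B's response to the forwarded invalidate ends up behind PU_B's own still-unserviced exclusive request in the same queue.

```python
# Toy model of the single-channel deadlock.  One inbound queue at the
# L2 carries both requests and responses; names are illustrative.

from collections import deque

l2_inbound = deque()

# Both PUs request the exclusive state for line X at about the same time:
l2_inbound.append(("PU_A", "request_exclusive", "X"))
l2_inbound.append(("PU_B", "request_exclusive", "X"))

# The L2 services PU_A's request and forwards an invalidate to PU_B.
# PU_B's response must travel over the same single channel:
l2_inbound.popleft()                       # PU_A's request is processed
l2_inbound.append(("PU_B", "invalidate_response", "X"))

# The L2 cannot service PU_B's pending exclusive request (line X is
# busy until the response arrives), yet the response sits behind it:
head = l2_inbound[0]
print(head)    # ('PU_B', 'request_exclusive', 'X') -- blocks the queue
```

With separate (virtual) channels for requests and responses, the response could bypass the stalled request instead of waiting behind it.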