
PIPELINED SYNCHRONOUS MEMORY ACCESS

IP.com Disclosure Number: IPCOM000006435D
Original Publication Date: 1992-May-01
Included in the Prior Art Database: 2002-Jan-03
Document File: 2 page(s) / 117K

Publishing Venue

Motorola

Related People

Paul Reed: AUTHOR
Brad Beavers: AUTHOR

Abstract

Single ported synchronous memories usually have a single address access path referenced to a given clock edge. Memory applications with multiple address sources must arbitrate between these sources for access to the single port. Adding additional ports to a memory array may exceed the allowable chip area or complexity. Operating the memory at twice the speed of the external clock is not possible for large memory arrays operating in fast environments. An example of a memory application using multiple address paths is a multiprocessing cache tag which must service processor, snoop, and flush requests. This paper describes a method for "pipelining" the accesses to a single port tag memory for better utilization of the memory array bandwidth.

At least one non-text object (such as an image or picture) has been suppressed.
This is the abbreviated version, containing approximately 50% of the total text.


MOTOROLA INC. Technical Developments Volume 15

May 1992

PIPELINED SYNCHRONOUS MEMORY ACCESS

by Paul Reed and Brad Beavers

ABSTRACT

  Single ported synchronous memories usually have a single address access path referenced to a given clock edge. Memory applications with multiple address sources must arbitrate between these sources for access to the single port. Adding additional ports to a memory array may exceed the allowable chip area or complexity. Operating the memory at twice the speed of the external clock is not possible for large memory arrays operating in fast environments. An example of a memory application using multiple address paths is a multiprocessing cache tag which must service processor, snoop, and flush requests. This paper describes a method for "pipelining" the accesses to a single port tag memory for better utilization of the memory array bandwidth.

INTRODUCTION

  A typical synchronous memory such as a Static RAM will have address inputs referenced to a certain clock edge with data out referenced to a later clock. Internally, the address decoding, array sensing, and data driving or writing occur in a predetermined sequence relative to the clock input. More complex designs may have the memory embedded within a logic chip having multiple sources of addresses such as a dual address bus or an internal address co...

OPERATION

  The Cache Controller tag memory contains a superset of the address entries stored within an associated microprocessor. These addresses are passed between the controller's processor address bus and the controller's system address bus. The address busses and associated transaction signals have input setup times referenced to external clock rise. Transactions may be initiated from either bus asynchronously, and therefore, both may occur simultaneously.

  The chip generates an internal four phase clocking scheme with 4 "T" states, each with a 25% duty cycle. Normally, each access consumes an entire clock cycle. See Figure 1. The address decoding is performed in T1. The array sensing occurs in T2. A comparison of the address read from the TAG memory and the address presented occurs in T3. The output of the address comparison (match/miss) generates the proper response to the access (driven by clock fall for a setup to clock rise). Any writing of the TAG memory occurs in T4.

  In the case of simultaneous accesses 2 and 3, access 3 would have to be delayed by one cycle if a conventional synchronous tag RAM design was used (whether to delay access 2 or 3 is a function of internal arbitration hardware). This delay may generate unacceptable performance degradation.
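The arbitration penalty described above can be illustrated with a toy timing model. The sketch below is not Motorola's implementation (the paper's actual pipelining method falls in the truncated portion of this extract); it only models the conventional single-port behavior, where each access occupies the four "T" states of one full clock cycle, so of two simultaneous requests the arbitration loser slips by a whole cycle. The request arrival pattern is hypothetical.

```python
# Toy model of the four-phase access timing described in the text:
#   T1: address decode
#   T2: array sensing
#   T3: tag compare (match/miss)
#   T4: tag write, if any
# One access consumes all four T states of a single clock cycle.
PHASES = ("T1:decode", "T2:sense", "T3:compare", "T4:write")

def conventional_schedule(request_cycles):
    """Conventional single-port tag RAM: one access per clock cycle.
    Simultaneous requests are arbitrated, so the loser is pushed back
    a full cycle.  Returns {access id: cycle in which it runs}."""
    schedule = {}
    next_free = 0  # first cycle in which the single port is available
    for access_id, cycle in enumerate(request_cycles):
        start = max(cycle, next_free)
        schedule[access_id] = start
        next_free = start + 1
    return schedule

# Processor and snoop requests arriving together in cycle 0,
# plus a third request arriving in cycle 1.
requests = [0, 0, 1]
print(conventional_schedule(requests))  # {0: 0, 1: 1, 2: 2}
```

With simultaneous arrivals the second access always completes a cycle late, which is the "unacceptable performance degradation" the disclosure's pipelining scheme is meant to avoid.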