
System Memory Access Latency Reduction when Crossing Single in Line Memory Module/Dual in Line Memory Module Boundaries

IP.com Disclosure Number: IPCOM000117617D
Original Publication Date: 1996-Apr-01
Included in the Prior Art Database: 2005-Mar-31
Document File: 2 page(s) / 58K

Publishing Venue

IBM

Related People

Wolford, BJ: AUTHOR

Abstract

Disclosed is a method of improving System Memory Dynamic Random Access Memory (DRAM) access latency for page-mode memory accesses which cross Single in Line Memory Module/Dual in Line Memory Module (SIMM/DIMM) address boundaries and, thus, page boundaries. A fast page-mode memory access which crosses a page boundary is known as a page miss. Therefore, a fast page-mode access which crosses both a page boundary and a SIMM/DIMM boundary can be thought of as a bank miss and is, by definition, also a page miss.

This is the abbreviated version, containing approximately 66% of the total text.

System Memory Access Latency Reduction when Crossing Single in Line Memory Module/Dual in Line Memory Module Boundaries

Disclosed is a method of improving System Memory Dynamic Random Access Memory (DRAM) access latency for page-mode memory accesses which cross Single in Line Memory Module/Dual in Line Memory Module (SIMM/DIMM) address boundaries and, thus, page boundaries. A fast page-mode memory access which crosses a page boundary is known as a page miss. Therefore, a fast page-mode access which crosses both a page boundary and a SIMM/DIMM boundary can be thought of as a bank miss and is, by definition, also a page miss.
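
This classification can be illustrated with a short C sketch. The sketch is not part of the original disclosure; the type and function names (open_page_t, classify_access) are assumed purely for illustration. It compares an incoming access against the bank and row currently held open: a different row in the same bank is a page miss, while a different SIMM/DIMM bank is a bank miss and, by definition, also a page miss.

    /*
     * Minimal sketch (hypothetical names, not from the disclosure):
     * classify a fast page-mode access against the currently open page.
     */
    #include <stdint.h>

    typedef enum { PAGE_HIT, PAGE_MISS, BANK_MISS } access_type_t;

    typedef struct {
        uint8_t  bank;   /* SIMM/DIMM bank whose RAS_ is currently asserted */
        uint32_t row;    /* row address latched in that bank                */
    } open_page_t;

    static access_type_t classify_access(const open_page_t *open,
                                          uint8_t bank, uint32_t row)
    {
        if (bank != open->bank)
            return BANK_MISS;    /* different SIMM/DIMM bank             */
        if (row != open->row)
            return PAGE_MISS;    /* same bank, different row (page)      */
        return PAGE_HIT;         /* same bank and row: stay in page mode */
    }

The bank-miss case is the one the disclosure targets: as described below, its precharge wait can be skipped.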

Fast page-mode accesses which page miss require that RAS(X)_ be precharged for a finite period of time (e.g., a 50 ns precharge for 70 ns DRAM) before the DRAM can be reaccessed. DRAM controllers typically wait out this precharge time even before reaccessing DRAM contained in a separate bank, as depicted in Fig. 1. However, this precharge time translates directly into increased access latency to system memory. When a fast page-mode page miss is also a bank miss, access latency can be reduced by recognizing that the bank being accessed (which is different from the previously selected bank) is already precharged, by virtue of the fact that that memory bank was not enabled. As a result, the current access can begin immediately by driving the new row address, deasserting RAS(X)_, and asserting RAS(Y)_ on the targeted SIMM/DIMM, as depicted in Fig. 2. ...
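
The latency saving can be sketched, purely for illustration, as C pseudocode for the controller's row-open decision. The helper routines, their names, and the 50 ns precharge figure are assumptions, not part of the disclosure; a real memory controller would implement this behavior in hardware state machines rather than software.

    /*
     * Minimal sketch (hypothetical interface): page miss vs. bank miss.
     * On a page miss within the same bank, the controller must wait out the
     * RAS_ precharge time; on a bank miss, the target bank's RAS_ is already
     * deasserted (precharged), so the new row can be opened immediately.
     */
    #include <stdint.h>
    #include <stdio.h>

    #define T_RP_NS 50U   /* RAS_ precharge time, e.g. 50 ns for 70 ns DRAM */

    /* Stubbed controller hooks; real hardware drives these signals directly. */
    static void drive_row_address(uint32_t row) { printf("row address <= 0x%05x\n", (unsigned)row); }
    static void deassert_ras(uint8_t bank)      { printf("RAS(%u)_ deasserted\n", (unsigned)bank); }
    static void assert_ras(uint8_t bank)        { printf("RAS(%u)_ asserted\n", (unsigned)bank); }
    static void wait_ns(uint32_t ns)            { printf("wait %u ns (precharge)\n", (unsigned)ns); }

    static void open_row(uint8_t prev_bank, uint8_t new_bank, uint32_t row)
    {
        if (new_bank == prev_bank) {
            /* Page miss in the same bank: precharge RAS(X)_, wait the full
             * precharge time, then reopen the new row (conventional path, Fig. 1). */
            deassert_ras(prev_bank);
            wait_ns(T_RP_NS);
            drive_row_address(row);
            assert_ras(prev_bank);
        } else {
            /* Bank miss: the target bank is already precharged because its RAS_
             * was never asserted.  Drive the new row address, deassert RAS(X)_,
             * and assert RAS(Y)_ with no precharge wait (Fig. 2). */
            drive_row_address(row);
            deassert_ras(prev_bank);
            assert_ras(new_bank);
        }
    }

    int main(void)
    {
        open_row(0, 0, 0x123);   /* page miss: incurs the precharge wait */
        open_row(0, 1, 0x456);   /* bank miss: no precharge wait needed  */
        return 0;
    }

In the bank-miss path, the precharge of RAS(X)_ overlaps the row access on RAS(Y)_, so the precharge time no longer adds to the access latency.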