Memory Compression Latency Enhancement via "Pinned" Addresses

IP.com Disclosure Number: IPCOM000115096D
Original Publication Date: 1995-Mar-01
Included in the Prior Art Database: 2005-Mar-30
Document File: 2 page(s) / 53K

Publishing Venue

IBM

Related People

Brown, JD: AUTHOR [+5 others]

Abstract

Described is a method that will reduce latency to highly accessed memory locations in systems with a Compressed Memory Architecture. It will directly map a portion or portions of the "real" address space (as seen on the system bus) to the "physical" address space (as seen in the memory subsystem). The portions to be directly mapped will be physically small but highly referenced. This will speed up these accesses and decrease the average latency.

This text was extracted from an ASCII text file.
This is the abbreviated version, containing approximately 59% of the total text.

Memory Compression Latency Enhancement via "Pinned" Addresses

      Described is a method that will reduce latency to highly accessed memory locations in systems with a Compressed Memory Architecture. It will directly map a portion or portions of the "real" address space (as seen on the system bus) to the "physical" address space (as seen in the memory subsystem). The portions to be directly mapped will be physically small but highly referenced. This will speed up these accesses and decrease the average latency.
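
      The following minimal C sketch illustrates the idea of a direct-map check placed in front of the normal compressed-memory translation. The structure and function names (pinned_range_t, translate, compressed_translate) are illustrative assumptions, not taken from the disclosure.

    /* Illustrative sketch only: a small table of "pinned" real-address
     * ranges is checked first; a hit maps directly to a fixed physical
     * location, a miss takes the normal (higher latency) compressed-
     * memory translation path.  All names are assumptions. */
    #include <stdint.h>
    #include <stddef.h>

    typedef struct {
        uint64_t real_base;   /* start of pinned range in real address space */
        uint64_t length;      /* size of the pinned range in bytes           */
        uint64_t phys_base;   /* fixed physical location reserved for it     */
    } pinned_range_t;

    #define MAX_PINNED 4
    static pinned_range_t pinned[MAX_PINNED];
    static size_t num_pinned;

    /* Stand-in for the normal compressed-memory lookup (directory walk,
     * possible decompression); shown here only as a stub. */
    static uint64_t compressed_translate(uint64_t real_addr)
    {
        return real_addr;   /* placeholder for the slow, translated path */
    }

    uint64_t translate(uint64_t real_addr)
    {
        for (size_t i = 0; i < num_pinned; i++) {
            if (real_addr >= pinned[i].real_base &&
                real_addr - pinned[i].real_base < pinned[i].length)
                return pinned[i].phys_base +
                       (real_addr - pinned[i].real_base);   /* direct map */
        }
        return compressed_translate(real_addr);   /* compressed-memory path */
    }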

      Most computers and operating systems have an address range that receives a much higher than average access rate. This range may cover only a fraction of a percent of the available main store address range, yet receive a double-digit percentage of all memory accesses. It may be a single contiguous range or a few smaller non-contiguous ranges. These areas typically hold operating system primitives or other low-level system objects. This code and data are normally loaded at Initial Program Load (IPL) time and left at the same memory addresses for the entire time the system is running.
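
      A natural time to establish such a mapping is IPL, when the resident operating system code and data are laid down. The sketch below continues the hypothetical pinned-range table from the previous sketch and shows how such ranges might be registered once at IPL; register_pinned_range and its parameters are assumptions for illustration.

    /* Illustrative continuation of the previous sketch: record one
     * highly referenced, resident region so that translate() maps it
     * directly instead of through the compression translation. */
    int register_pinned_range(uint64_t real_base, uint64_t length,
                              uint64_t phys_base)
    {
        if (num_pinned >= MAX_PINNED)
            return -1;    /* the direct-map table is deliberately small */
        pinned[num_pinned].real_base = real_base;
        pinned[num_pinned].length    = length;
        pinned[num_pinned].phys_base = phys_base;
        num_pinned++;
        return 0;
    }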

      Also, the access time of any individual memory request is usually not as large a concern for performance as the average access time of a large number of memory requests; therefore, system performance can be substantially enhanced by speeding up only a portion of the memory requests. These two facts taken together imply that if a way could be found...
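
      To make the averaging argument concrete with purely illustrative numbers (none are given in the text): if a compressed-memory access takes 150 ns, a directly mapped access takes 75 ns, and 20 percent of all accesses fall in the pinned ranges, the average latency becomes 0.20 x 75 + 0.80 x 150 = 135 ns instead of 150 ns, a 10 percent reduction obtained by speeding up only one fifth of the requests.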