Means for Improving the Performance of Multiprogrammed Cache

IP.com Disclosure Number: IPCOM000101781D
Original Publication Date: 1990-Sep-01
Included in the Prior Art Database: 2005-Mar-16
Document File: 3 page(s) / 167K

Publishing Venue

IBM

Related People

Stone, HS: AUTHOR

Abstract

This article describes how to control a cache to improve its performance in multiprogrammed systems. The characteristics of cache memory in multiprogrammed systems are discussed in (*). Each process is shown to spend an initial period of time reloading itself into cache memory, occupying as much cache as it can use. As the reload transient dies, the process then uses cache as it would in a single-user system until the process reaches the end of its time quantum. At that point, the next process then loads itself into cache memory and occupies as much cache as it can use. Then the second process goes into its single-user behavior until its quantum ends. This repeats for all the processes in the multiprogramming mix.

Means for Improving the Performance of Multiprogrammed Cache

      Caches normally replace the least recently used item from among
a set of replaceable items.  As caches become large, this policy is
not optimal, because the cache may contain items that will soon be
accessed by a currently idle process.  Under certain conditions it is
better to retain such items than to replace them with items from the
currently running process.  We describe how to control a cache in
such a way that the replacement decision tends to retain old items
when it should.
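
      The baseline policy the article starts from, LRU within a set of
replaceable items, can be modeled as follows (an illustrative Python
sketch, not IBM's implementation; the class and method names are
invented for this example):

```python
from collections import OrderedDict

class LRUSet:
    """One set of a set-associative cache with pure LRU replacement.

    A miss evicts the least recently used line regardless of which
    process owns it -- the baseline policy the article argues is
    suboptimal for large multiprogrammed caches.
    """
    def __init__(self, ways):
        self.ways = ways
        self.lines = OrderedDict()  # tag -> owning process id, LRU first

    def access(self, tag, pid):
        """Return True on a hit, False on a miss."""
        if tag in self.lines:
            self.lines.move_to_end(tag)  # mark as most recently used
            return True
        if len(self.lines) >= self.ways:
            self.lines.popitem(last=False)  # evict the LRU line
        self.lines[tag] = pid
        return False
```

Note that `access` is oblivious to `pid` when choosing a victim; the
owner is recorded only so a process-aware variant can consult it.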

      The Cache-Control Mechanism

      Initially, consider a cache for a system that executes two
processes, Process A and Process B.  Each process executes for a time
quantum of Q seconds, and then the system changes state to run the
other process.  What is the optimal replacement policy for a cache in
such a system?

      The cache replacement policy should increase the space
allocated to the currently running process only if that space
increase reduces the miss rate.  If the space increase does not
reduce the miss rate, then no additional space should be allocated to
the running process.
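
      The allocation rule above reduces to a simple marginal test:
grant the running process one more cell only when doing so lowers its
miss rate.  A minimal sketch, using a made-up miss-rate function purely
for illustration:

```python
def should_allocate(miss_rate, x):
    """Grant the running process an extra cache cell only if the
    marginal miss-rate reduction is positive: M(x + 1) < M(x)."""
    return miss_rate(x + 1) < miss_rate(x)

# Hypothetical miss-rate curve: falls with allocation, then flattens.
miss_rate = lambda x: max(0.5 - 0.1 * x, 0.1)
```

With this curve, allocation pays off at small sizes and stops paying
off once the curve flattens, which is exactly the point at which the
policy should stop taking space from the idle process.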

      For example, assume that Process A has run through its quantum
and Process B has just begun.  As misses occur for Process B, the
replacement policy would normally discard references to Process A
because they are less recent than Process B references.  Under this
policy the amount of cache memory allocated to Process B is strictly
increasing.  As more and more items of B are loaded into cache, each
additional cell of cache allocated to B produces a smaller reduction
in miss rate.  Moreover, as B comes closer to the end of its quantum,
it becomes less likely that changing some cell's allocation from A to
B will be worthwhile because that extra cell of B may not necessarily
be touched again, whereas the discarded cell of A will probably have
to be reloaded.  Therefore, replacing the A cell with the B cell has
a high likelihood of producing a miss in the future without a
corresponding reduction in the number of misses in the short term.
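
      One way to act on this observation (an illustrative heuristic,
not necessarily the article's exact mechanism; the threshold parameter
is invented) is to stop stealing the idle process's lines once the
running process nears the end of its quantum, evicting the running
process's own least recently used line instead:

```python
from collections import OrderedDict

class QuantumAwareSet:
    """One cache set that preserves the idle process's lines late in
    the running process's quantum.  Illustrative sketch only."""

    def __init__(self, ways, quantum, threshold=0.8):
        self.ways = ways
        self.quantum = quantum      # quantum length Q
        self.threshold = threshold  # fraction of Q after which stealing stops
        self.lines = OrderedDict()  # tag -> owning process id, LRU first

    def access(self, tag, pid, elapsed):
        """elapsed: time since the running process's quantum began."""
        if tag in self.lines:
            self.lines.move_to_end(tag)
            return True
        if len(self.lines) >= self.ways:
            self._evict(pid, elapsed)
        self.lines[tag] = pid
        return False

    def _evict(self, pid, elapsed):
        if elapsed >= self.threshold * self.quantum:
            # Late in the quantum: evict the running process's own LRU
            # line if it has one, keeping the idle process's lines.
            for tag, owner in self.lines.items():  # iterates LRU -> MRU
                if owner == pid:
                    del self.lines[tag]
                    return
        # Early in the quantum (or pid owns no lines): plain LRU.
        self.lines.popitem(last=False)
```

Early in B's quantum this behaves exactly like LRU, so B can still
build up its working set; only near the quantum's end does the policy
start protecting A's soon-to-be-reused cells.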

      To be mathematically precise, let M_B(x) be the miss rate for
Process B for a cache allocation of size x. If we incre...