
Memory Management Mechanism to Reduce Cache-line Contention

IP.com Disclosure Number: IPCOM000099336D
Original Publication Date: 1990-Jan-01
Included in the Prior Art Database: 2005-Mar-14
Document File: 2 page(s) / 91K

Publishing Venue

IBM

Related People

Kawachiya, K: AUTHOR [+4]

Abstract

Disclosed is a method for reducing cache-line contention. The basic idea is to allocate different cache-lines to different types of data in order to avoid line contention among them.

This text was extracted from an ASCII text file.
This is the abbreviated version, containing approximately 52% of the total text.

Memory Management Mechanism to Reduce Cache-line Contention

      Disclosed is a method for reducing cache-line contention. The basic idea is to allocate different cache-lines to different types of data in order to avoid line contention among them.

      The cache memory is a small, fast buffer between the CPU and main memory. It consists of many 'cache-lines,' and each data item is cached into a specific cache-line according to its physical address. Therefore, the way in which data is allocated in physical memory is important for reducing cache-line contention.
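
      To illustrate why placement in physical memory matters, the following minimal sketch shows how a physical address selects a cache-line, so that two items whose addresses map to the same index contend for that line. It assumes a direct-mapped cache with 64-byte lines and 1,024 lines; the parameters and code are illustrative assumptions, not part of the disclosure.

#include <stdint.h>
#include <stdio.h>

#define LINE_SIZE 64u      /* bytes per cache-line (assumed)     */
#define NUM_LINES 1024u    /* cache-lines in the cache (assumed) */

/* A data item is cached into the line selected by its physical address,
 * so two items whose addresses map to the same index contend for it. */
static unsigned cache_line_index(uintptr_t phys_addr)
{
    return (unsigned)((phys_addr / LINE_SIZE) % NUM_LINES);
}

int main(void)
{
    uintptr_t a = 0x00010040, b = 0x00020040;
    /* Both addresses select cache-line 1 and would contend. */
    printf("%u %u\n", cache_line_index(a), cache_line_index(b));
    return 0;
}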

      The disclosed physical memory allocation method reduces line contention in the following three steps:
      Step 1. Sorting the data into several data-types.
      Step 2. Partitioning the cache-lines into several portions, each of which corresponds to a data-type.
      Step 3. Allocating appropriate physical memory when it is requested.
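
      A minimal, self-contained sketch of how the three steps might fit together is shown below. It assumes 64-byte cache-lines, a 1,024-line cache, and 4KB pages, so each page frame falls on one of 16 "page colors" (groups of cache-lines); the partition sizes, names, and toy frame pool are illustrative assumptions, not the disclosure's implementation.

#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE      4096u
#define LINE_SIZE      64u
#define NUM_LINES      1024u
#define LINES_PER_PAGE (PAGE_SIZE / LINE_SIZE)        /* 64 lines per page */
#define NUM_COLORS     (NUM_LINES / LINES_PER_PAGE)   /* 16 page "colors"  */
#define NUM_FRAMES     256u                           /* toy frame pool    */

enum data_type { DT_TEXT, DT_DATA, DT_STACK, DT_COUNT };        /* step 1 */

/* Step 2: give each data-type its own range of page colors, i.e., a
 * disjoint portion of the cache-lines (the split is an assumed example). */
static const struct { unsigned first, count; } partition[DT_COUNT] = {
    [DT_TEXT]  = { 0,  6 },   /* colors  0..5  -> 384 cache-lines */
    [DT_DATA]  = { 6,  8 },   /* colors  6..13 -> 512 cache-lines */
    [DT_STACK] = { 14, 2 },   /* colors 14..15 -> 128 cache-lines */
};

static bool frame_used[NUM_FRAMES];   /* zero-initialized: all frames free */

/* Step 3: on a request tagged with a data-type, hand out a free frame
 * whose color lies inside that type's partition, so the new data can only
 * occupy that type's share of the cache-lines. */
static long alloc_frame(enum data_type type)
{
    for (unsigned f = 0; f < NUM_FRAMES; f++) {
        unsigned color = f % NUM_COLORS;
        if (color >= partition[type].first &&
            color <  partition[type].first + partition[type].count &&
            !frame_used[f]) {
            frame_used[f] = true;
            return (long)f;
        }
    }
    return -1;   /* no free frame left in this type's partition */
}

int main(void)
{
    printf("text frame:  %ld\n", alloc_frame(DT_TEXT));   /* colors 0..5   */
    printf("stack frame: %ld\n", alloc_frame(DT_STACK));  /* colors 14, 15 */
    return 0;
}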

      The following sorting methods can be considered as examples of the data-types in step 1:
      - Sorting by program-text, program-data, and stack.
      - Sorting by processes (data for process A, data for process B, ...).
      The rest of this disclosure explains sorting by program-text, program-data, and stack.
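
      As a hypothetical sketch of step 1 for this sorting, the virtual-memory layer usually knows which segment a newly allocated page backs, so it can tag the frame request with a data-type before physical memory is chosen. The segment names and the mapping below are assumptions, not from the disclosure.

#include <stdio.h>

enum data_type { DT_TEXT, DT_DATA, DT_STACK };
enum segment   { SEG_TEXT, SEG_DATA, SEG_HEAP, SEG_STACK };

/* Tag a frame request with a data-type based on the backing segment. */
static enum data_type classify(enum segment seg)
{
    switch (seg) {
    case SEG_TEXT:  return DT_TEXT;    /* program-text                */
    case SEG_STACK: return DT_STACK;   /* stack                       */
    default:        return DT_DATA;    /* program-data: globals, heap */
    }
}

int main(void)
{
    printf("stack fault -> data-type %d\n", classify(SEG_STACK));
    return 0;
}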

      In step 2, the cache-lines are partitioned taking into account the characteristics of each data-type. For example, most programs access only the stack-top area and do not touch the rest of the stack, so the number of cache-lines for the stack can be smaller...
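
      Building on that observation, an assumed example of how the step-2 partition might reflect these characteristics is sketched below; the numbers are illustrative, not from the disclosure.

#include <stddef.h>
#include <stdio.h>

#define NUM_LINES 1024u

/* Example split: the stack touches mainly its top area, so it gets a much
 * smaller share of the cache-lines than program-text and program-data. */
static const struct { const char *type; unsigned lines; } share[] = {
    { "program-text", 384 },  /* instructions stream across many lines */
    { "program-data", 512 },  /* globals and heap are spread widely    */
    { "stack",        128 },  /* accesses cluster around the stack top */
};

int main(void)
{
    unsigned total = 0;
    for (size_t i = 0; i < sizeof share / sizeof share[0]; i++) {
        printf("%-12s %4u lines\n", share[i].type, share[i].lines);
        total += share[i].lines;
    }
    printf("total        %4u / %u\n", total, NUM_LINES);  /* 1024 / 1024 */
    return 0;
}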