Cache Line Prefetch for Translations with Hashed Page Table

IP.com Disclosure Number: IPCOM000114105D
Original Publication Date: 1994-Nov-01
Included in the Prior Art Database: 2005-Mar-27
Document File: 2 page(s) / 85K

Publishing Venue

IBM

Related People

Liu, L: AUTHOR [+2]

Abstract

In a Hashed Page Table (HTAB) organization, entries for neighboring virtual pages tend to fall into different cache lines. As a result, severe performance degradation may be associated with virtual-to-real address translation in certain systems. Cache line prefetching can be utilized to reduce cache misses due to virtual/real address translation in such an environment.

This text was extracted from an ASCII text file.
This is the abbreviated version, containing approximately 52% of the total text.

Cache Line Prefetch for Translations with Hashed Page Table

      In a Hashed Page Table (HTAB) organization, entries for
neighboring virtual pages tend to fall into different cache lines.
As a result, severe performance degradation may be associated with
virtual-to-real address translation in certain systems.  Cache line
prefetching can be utilized to reduce cache misses due to
virtual/real address translation in such an environment.

      An HTAB is structured as a set-associative table in main memory.
A virtual page address to be translated is hashed into a set (or
class) called a PTEG (Page Table Group).  Each PTEG consists of a
fixed number of Page Table Entries (PTEs) that are physically packed
contiguously in a cache line.  For instance, each PTE may have 64
bits (2 words), and each PTEG may consist of 16 PTEs that can be
packed to fill a 128-byte cache line.  In order to randomize the set
selection, the lowest-order bits of the virtual page address are
often involved in the HTAB index.  As a result, the PTEGs for two
adjacent virtual pages are guaranteed to fall into different HTAB
congruence classes.

      It is observed that the spatial locality between adjacent
virtual pages is still significant enough to make it worthwhile to
translate multiple page entries upon each TLB miss.  In the 370-type
segment/page table architecture, the page table entries (PTEs) for
two successive virtual pages are usually physically adjacent in
memory, and hence after the first TLB miss (e.g., for page i) the
PTE for page i+1 is likely to be in the cache already.  With the
HTAB architecture, however, these PTEs are virtually guaranteed to
be in different lines.  Since a TLB miss is usually an infrequent
event, the PTEG line needed for a translation through the HTAB is
unlikely to still be cached, so cache misses upon PTEG access become
highly likely (unless a very large cache is used).  In certain
workloads with higher TLB miss rates (e.g., a numerically intensive
computing (NIC) program accessing large matrices at long strides),
the cache misses due to HTAB accesses can cause significant
performance loss.  One way to remedy this deficiency is to use a
cache line prefetch to avoid the cache miss for the translation of
page i+1.

      The invention relates to prefetching cache lines of PTEGs in
a system with HTA...