METHOD FOR ACHIEVING HIGH HIT RATE FOR AN ADDRESS TRANSLATION CACHE IN BINARY TRANSLATION

IP.com Disclosure Number: IPCOM000014270D
Original Publication Date: 2000-Mar-01
Included in the Prior Art Database: 2003-Jun-19
Document File: 2 page(s) / 38K

Publishing Venue

IBM

Abstract

Disclosed is a method for increasing the hit rate of a branch address translation cache by significantly reducing the number of cache misses caused by function returns. The described approach preloads function return address translations to reduce the number of cache misses and is employed in conjunction with the previously disclosed instruction address translation cache. While instruction address translation caches have shown significant overall performance, they incur a significant number of cache misses on function returns, even though these mappings are easily predictable.

In previous work, a return address stack that keeps address translations for function call/return mappings has been proposed to solve this problem, but this comes at the cost of additional hardware and processor state that has to be maintained. In addition, programs that do not always strictly follow the function call/function return path, such as C programs using the longjmp mechanism or C++/Java programs that use the throw/catch exception mechanism, may incur significant performance penalties by disturbing the return stack mechanism. Furthermore, special handling is required for cases where a function return does not take the expected path. (Typically, a regular translation lookup has to be performed in this case.)
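
To make the prior-art mechanism concrete, the following C sketch models such a return stack of address translations. The names, sizes, and recovery policy shown (ret_stack, tc_lookup, clearing the stack on a mismatch) are assumptions made only for illustration and are not taken from the earlier proposal.

    #include <stdint.h>
    #include <stdio.h>

    /*
     * Illustrative (hypothetical) model of the prior-art scheme: a small
     * stack of (source return address, translated return address) pairs.
     * A call pushes a pair and a return pops it; when the program bypasses
     * the strict call/return order (longjmp, a thrown exception), the top
     * entry no longer matches and a regular translation lookup is needed.
     */

    #define RETURN_STACK_DEPTH 16

    typedef struct {
        uint64_t src_ret;   /* return address in the source binary     */
        uint64_t xlat_ret;  /* corresponding translated-code address   */
    } ret_entry;

    typedef struct {
        ret_entry entries[RETURN_STACK_DEPTH];
        int top;            /* number of valid entries                 */
    } ret_stack;

    /* Stand-in for the regular (slow) translation lookup.             */
    static uint64_t tc_lookup(uint64_t src)
    {
        printf("  slow lookup for 0x%llx\n", (unsigned long long)src);
        return src + 0x100000;               /* dummy mapping          */
    }

    /* Executed on function call: remember the return-address mapping. */
    static void rs_push(ret_stack *rs, uint64_t src_ret, uint64_t xlat_ret)
    {
        if (rs->top < RETURN_STACK_DEPTH)
            rs->entries[rs->top++] = (ret_entry){ src_ret, xlat_ret };
    }

    /* Executed on function return: fast path if the top entry matches. */
    static uint64_t rs_pop(ret_stack *rs, uint64_t src_ret)
    {
        if (rs->top > 0 && rs->entries[rs->top - 1].src_ret == src_ret)
            return rs->entries[--rs->top].xlat_ret;
        rs->top = 0;                 /* stack disturbed, e.g. by longjmp */
        return tc_lookup(src_ret);   /* fall back to the regular lookup  */
    }

    int main(void)
    {
        ret_stack rs = { .top = 0 };
        rs_push(&rs, 0x4008, 0x104008);                         /* call  */
        printf("0x%llx\n", (unsigned long long)rs_pop(&rs, 0x4008)); /* hit  */
        printf("0x%llx\n", (unsigned long long)rs_pop(&rs, 0x4008)); /* miss */
        return 0;
    }

In this model, any return that does not match the top of the stack, such as one reached through longjmp or an exception, forces the slow lookup path; this is the fragility that the unified approach described below avoids.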

The present invention increases the branch address translation cache hit rate for function return address translation with a unified instruction address translation mechanism, which eliminates the cost of maintaining both a translation cache and a dedicated function return stack. The presented approach gives performance similar to a combined address translation cache with return stack, but requires only a single hardware resource and is also more robust in the presence of unexpected function return behavior, since strict function return order is not required.

The present disclosure is based on the previously disclosed branch address translation cache and adds an improved cache management mechanism to increase the cache hit rate. This management mechanism is based on preloading translations that have a high probability of being accessed in the near future. The cache is preferably preloaded from problem state to minimize the cost of preloading.
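
As one way to picture this management mechanism, the following C sketch preloads the return address translation into a unified branch address translation cache at the call site, so that the subsequent return hits on its first lookup. The cache organization and all identifiers (btac_t, btac_preload, btac_translate) are assumptions made for this illustration, not details taken from the disclosure.

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /*
     * Hypothetical sketch of the preloading idea: instead of a dedicated
     * return stack, the translated code for a call inserts the mapping
     * (source return address -> translated return address) into the
     * ordinary branch address translation cache before the call.  The
     * later return is then a normal register-indirect branch that hits
     * in the cache.
     */

    #define BTAC_SIZE 256            /* direct-mapped, power of two      */

    typedef struct {
        uint64_t src;                /* source (untranslated) target     */
        uint64_t xlat;               /* translated target                */
        int      valid;
    } btac_entry;

    typedef struct {
        btac_entry entries[BTAC_SIZE];
    } btac_t;

    static size_t btac_index(uint64_t src)
    {
        return (size_t)(src >> 2) & (BTAC_SIZE - 1);
    }

    /* Slow path: full translation lookup (stub for this sketch).        */
    static uint64_t slow_translate(uint64_t src)
    {
        printf("  miss: slow translation of 0x%llx\n", (unsigned long long)src);
        return src + 0x100000;       /* dummy mapping                    */
    }

    /* Preload a mapping; emitted by the translator just before a call.  */
    static void btac_preload(btac_t *c, uint64_t src, uint64_t xlat)
    {
        btac_entry *e = &c->entries[btac_index(src)];
        e->src = src;
        e->xlat = xlat;
        e->valid = 1;
    }

    /* Translate an indirect-branch target, filling the cache on a miss. */
    static uint64_t btac_translate(btac_t *c, uint64_t src)
    {
        btac_entry *e = &c->entries[btac_index(src)];
        if (e->valid && e->src == src)
            return e->xlat;                      /* cache hit            */
        uint64_t xlat = slow_translate(src);     /* cache miss           */
        btac_preload(c, src, xlat);              /* fill for next time   */
        return xlat;
    }

    int main(void)
    {
        btac_t cache = { 0 };

        /* At the call site the translator already knows where the
         * translated return point lives, so it preloads the mapping...  */
        btac_preload(&cache, 0x4008, 0x104008);

        /* ...and the eventual function return hits without a miss.      */
        printf("return to 0x4008 -> 0x%llx\n",
               (unsigned long long)btac_translate(&cache, 0x4008));
        return 0;
    }

In this model the preload is simply a store into the cache's lookup structure, which is consistent with performing it from problem state to keep the preloading cost low, and an out-of-order return is handled by the same miss path as any other register-indirect branch rather than by dedicated special handling.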

According to the present invention, the following methods are used to implement the various branch instructions:

* Register-indirect branc...