Decode Loop Cache

IP.com Disclosure Number: IPCOM000237072D
Publication Date: 2014-May-29
Document File: 2 page(s) / 61K

Publishing Venue

The IP.com Prior Art Database

Abstract

A method for a Decode Loop Cache is disclosed.


Disclosed is a method for a decode loop cache.

In a typical out-of-order superscalar processor, a decode cache can be used to shorten the flush penalty: in the event of a flush, instruction data is fetched from the decode cache, which is much closer to execution, instead of from the instruction cache (icache). This approach may also be used in other cases where instructions that have been seen before, and are recorded in the decode cache, must be fetched again. Typically, the flush penalty can be reduced further if the decode cache is located closer to the execution unit. The approach also cuts power consumption, because the instruction data comes from the decode cache, which is much smaller than the icache.
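The flush-recovery behavior described above can be sketched as a small simulation. All names (`DecodeCache`, `fetch_after_flush`) and the latency values are illustrative assumptions, not details from the disclosure:

```python
# Minimal sketch of a decode cache consulted on a pipeline flush.
# Latencies are assumed placeholder values for illustration only.

ICACHE_LATENCY = 5   # assumed cycles to refetch and redecode from the icache
DCACHE_LATENCY = 1   # assumed cycles to replay decoded uops from the decode cache

def decode(raw_instruction):
    """Stand-in for the decoder: turn raw instruction data into a decoded uop."""
    return ("uop", raw_instruction)

class DecodeCache:
    def __init__(self):
        self.entries = {}  # fetch address -> decoded uop

    def fetch_after_flush(self, addr, icache):
        """On a flush redirect, prefer the decode cache; fall back to the icache."""
        if addr in self.entries:
            return self.entries[addr], DCACHE_LATENCY   # hit: short flush penalty
        uop = decode(icache[addr])                      # miss: full refetch + decode
        self.entries[addr] = uop                        # record for the next flush
        return uop, ICACHE_LATENCY
```

A second flush to the same address then pays the short decode-cache latency rather than the full icache path, which is the penalty reduction the disclosure describes.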

To remove the dead cycles caused by a predicted taken branch, a Branch Target Address Cache (BTAC), a loop buffer, or a loop cache can be used to detect the loop and eliminate those cycles.
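A simple loop-buffer-style detector can be sketched as follows. The detection rule used here (a backward taken branch whose target is already buffered) is a common heuristic assumed for illustration; the disclosure does not specify the detection mechanism:

```python
# Sketch of taken-branch loop detection feeding a small loop buffer.
# Capacity and the backward-branch heuristic are illustrative assumptions.

class LoopBuffer:
    def __init__(self, capacity=16):
        self.capacity = capacity
        self.buffer = {}          # address -> instruction
        self.loop_active = False  # once set, fetch replays from the buffer

    def observe(self, addr, instruction, taken_target=None):
        """Record each fetched instruction; enter loop mode on a backward
        taken branch whose target is already in the buffer."""
        if len(self.buffer) < self.capacity:
            self.buffer[addr] = instruction
        if (taken_target is not None
                and taken_target <= addr          # backward branch
                and taken_target in self.buffer): # loop body fully captured
            self.loop_active = True               # replay: no fetch bubble
        return self.loop_active
```

Once `loop_active` is set, subsequent iterations are supplied from the buffer, so the fetch-redirect dead cycles of the predicted taken branch disappear.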

In order to support both the loop and flush optimizations, significant hardware cost can go into implementing both a decode cache and a loop function such as a BTAC, loop buffer, or loop cache.

A new decode loop cache function is disclosed that can detect a branch loop, record part or all of the branch loop, fetch the recorded branch loop, and also support the normal decode cache function to reduce the flush penalty. The disclosed approach reduces the area needed to implement both the decode cache and the loop function while achieving the same performance.
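The idea of one structure serving both roles can be sketched by combining the two mechanisms above: decoded uops keyed by address serve flush recovery, and loop-head marking lets a detected loop replay from the same entries. The structure, capacity, and policies here are assumptions for illustration, not the disclosed implementation:

```python
# Sketch of a unified decode loop cache: one array of decoded uops
# shared by flush recovery and loop replay. All details are illustrative.

class DecodeLoopCache:
    def __init__(self, capacity=32):
        self.capacity = capacity
        self.uops = {}           # address -> decoded uop (shared storage)
        self.loop_heads = set()  # targets of backward taken branches

    def record(self, addr, uop):
        """Record a decoded uop as it leaves the decoder."""
        if len(self.uops) < self.capacity:
            self.uops[addr] = uop

    def on_taken_branch(self, branch_addr, target):
        """A backward taken branch to a cached address marks a loop head."""
        if target <= branch_addr and target in self.uops:
            self.loop_heads.add(target)

    def fetch(self, addr):
        """Serve decoded uops on a flush redirect or a loop re-entry;
        None means fall back to the icache and redecode."""
        return self.uops.get(addr)

    def in_loop(self, addr):
        return addr in self.loop_heads
```

Because both optimizations read from the same `uops` storage, the area of a separate loop buffer or loop cache is saved, matching the stated goal of the disclosure.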

Figure 1 depicts an example implementation of an Instr...