Publication Date: 2003-Aug-20
The IP.com Prior Art Database
Abstract: Disclosed is a method for a small cache that is designed to fit inside a small re-execution loop. Benefits include a cache that will not change in size during proliferations.
Typically, a cache is sized to fit inside the re-execution loop. Many other pipelines in the processor are also timed to the re-execution loop latency. This means that there is a large amount of work to be done if the re-execution loop latency must be changed.
When cache size increases, latency increases, and if the cache no longer fits inside the re-execution loop, the re-execution loop must be changed. It is desirable to change the cache size for proliferations of a processor, but it is prohibitively expensive to change the re-execution loop. On the other hand, if the re-execution loop is designed with longer latency than the first-generation cache requires, performance is lost in that generation, and the extra latency still bounds how much the cache can grow in later proliferations.
The disclosed method adds another level of cache (the Big Cache in the figure) that is decoupled from the re-execution loop timing. This cache level delivers data to the mid-level cache as quickly as possible, but it can be increased in size without affecting the design of the re-execution loop. Because a short re-execution loop is beneficial, the mid-level cache is made smaller than usual; the smaller size also enables increased bandwidth, which is a further performance benefit.
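The trade-off above can be sketched with a simple average-latency model. All hit rates and cycle counts below are illustrative assumptions, not figures from the disclosure: a single large cache forces the re-execution loop to carry its full latency, while a small mid-level cache backed by a decoupled Big Cache keeps the loop short and still catches most misses before memory.

```python
def avg_access_latency(mid_hit, big_hit, mid_lat, big_lat, mem_lat):
    """Average load latency for a small mid-level cache inside the
    re-execution loop, backed by a larger decoupled Big Cache.
    big_hit is the Big Cache's hit rate on mid-level misses."""
    mid_miss = 1.0 - mid_hit
    return (mid_hit * mid_lat
            + mid_miss * big_hit * big_lat
            + mid_miss * (1.0 - big_hit) * mem_lat)

# One large cache sized to the re-execution loop: the loop must carry
# the large cache's 5-cycle latency, and misses go straight to memory.
single_level = avg_access_latency(0.97, 0.0, 5, 0, 100)

# Disclosed arrangement (hypothetical numbers): a smaller, faster
# mid-level cache keeps the re-execution loop at 3 cycles; the Big
# Cache catches most mid-level misses and can grow across
# proliferations without touching the loop timing.
two_level = avg_access_latency(0.90, 0.7, 3, 12, 100)

print(f"single level: {single_level:.2f} cycles")
print(f"two level:    {two_level:.2f} cycles")
```

With these assumed numbers the two-level arrangement wins on average latency, and, more importantly for the disclosure, the Big Cache parameters can change without altering the 3-cycle loop term.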
The disclosed method...