
Fast Packet Bus for Microprocessor Systems With Caches

IP.com Disclosure Number: IPCOM000034231D
Original Publication Date: 1989-Jan-01
Included in the Prior Art Database: 2005-Jan-27
Document File: 4 page(s) / 43K

Publishing Venue

IBM

Related People

Butler, ND: AUTHOR [+3]

Abstract

Increased microprocessor performance requires increased processor-to-memory bandwidth. Adding a cache with a line size greater than one word offers the highest processor performance, at the cost of still higher bus bandwidth requirements. A fast local bus is defined that allows packetized transfers, which both maximize memory subsystem efficiency and minimize processor delay during a cache line reload. In microprocessor systems the processor must access memory for both instructions and data. Memory access time is inversely related to memory cost: fast memory costs more than slower memory. A frequent solution is to provide a small amount of expensive, fast memory and a large amount of cheaper, slower memory. The small, fast memory is called a cache.



Fast Packet Bus for Microprocessor Systems With Caches

Increased microprocessor performance requires increased processor-to-memory bandwidth. Adding a cache with a line size greater than one word offers the highest processor performance, at the cost of still higher bus bandwidth requirements. A fast local bus is defined that allows packetized transfers, which both maximize memory subsystem efficiency and minimize processor delay during a cache line reload. In microprocessor systems the processor must access memory for both instructions and data. Memory access time is inversely related to memory cost: fast memory costs more than slower memory. A frequent solution is to provide a small amount of expensive, fast memory and a large amount of cheaper, slower memory. The small, fast memory is called a cache.

The performance of the system is usually related directly to the memory subsystem's average access time. The average access time equals the fraction of accesses that hit in the cache (its hit rate) multiplied by the cache access time, plus the miss rate multiplied by the main-memory access time. The hit rate of a cache can be increased either by increasing its size (and cost) or by using a larger line size. The cache line size is the number of bytes reloaded every time the cache misses. Because processor references exhibit locality, the words neighboring the missing word have a high probability of being required by the processor in the near future.

One disadvantage of a line size greater than the system bus width is that it increases the number of transfers across the system bus compared to a line size equal to the bus width, because some of the neighboring words will never be accessed and, with the smaller line size, would not have been loaded into the cache during normal operation. The larger line size is an overall performance benefit as long as the cache reloads do not exceed the system bus bandwidth. A standard bus transfer has two parts: the address portion of the transfer and the data portion.
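The average access time relation above can be written as t_avg = hit_rate x t_cache + (1 - hit_rate) x t_main. The following C sketch works through that formula and the line-reload transfer count; all parameter values are purely illustrative assumptions, since the disclosure gives no specific numbers.

    /*
     * Minimal sketch of the average access time formula described above:
     *     t_avg = hit_rate * t_cache + (1 - hit_rate) * t_main
     * The parameter values are illustrative assumptions only.
     */
    #include <stdio.h>

    int main(void)
    {
        double hit_rate   = 0.95;   /* fraction of accesses served by the cache */
        double t_cache_ns = 20.0;   /* assumed cache access time (ns)           */
        double t_main_ns  = 200.0;  /* assumed main-memory access time (ns)     */

        double miss_rate = 1.0 - hit_rate;
        double t_avg_ns  = hit_rate * t_cache_ns + miss_rate * t_main_ns;

        /* Bus transfers per line reload: one per bus-width word in the line. */
        int bus_width_bytes = 4;    /* assumed 32-bit system bus */
        int line_size_bytes = 16;   /* assumed 4-word cache line */
        int transfers_per_miss = line_size_bytes / bus_width_bytes;

        printf("average access time      : %.1f ns\n", t_avg_ns);
        printf("bus transfers per reload : %d\n", transfers_per_miss);
        return 0;
    }

With these assumed numbers the 95% hit rate gives a 29 ns average access time, and each miss generates four bus transfers instead of one; as noted above, the larger line pays off only while the reload traffic stays within the system bus bandwidth.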

The address specifies where in memory to perform the load (or store); the data returns the existing contents of the memory location in the case of a load, or carries the new contents of the location in the case of a store. The overall bus bandwidth can be increased if the transfers are packetized: one address portion is followed by several data transfers, with the addresses of the "extra" data words understood implicitly. This is especially suitable for dynamic memories operating in page mode. Page mode is a higher-bandwidth way of using dynamic memory modules that allows faster access to memory locations when only a subset of the address changes (typically the row address of the array remains constant while the column address changes).

Using packet mode, the master on the bus can specify a number of words in a row, aligned on the word boundary corresponding to the size of the packet. The memory control...
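As an illustration of the packet addressing rule just described, the following C sketch derives the aligned packet base address from a missing byte address and lists the word addresses the memory controller could generate on its own. The word size, packet size, and ascending transfer order are assumptions made for the example, not details taken from the disclosure.

    /*
     * Minimal sketch of the packet addressing rule described above: a packet
     * of PACKET_WORDS bus words is aligned on the boundary corresponding to
     * the packet size, so the master sends a single address and the memory
     * controller derives the remaining word addresses itself.
     */
    #include <stdio.h>

    #define WORD_BYTES   4u   /* assumed 32-bit bus word                */
    #define PACKET_WORDS 4u   /* assumed packet size == cache line size */

    int main(void)
    {
        unsigned miss_addr = 0x102Cu;   /* example byte address that missed */

        /* Align the packet on the boundary given by the packet size in bytes. */
        unsigned packet_bytes = WORD_BYTES * PACKET_WORDS;
        unsigned packet_base  = miss_addr & ~(packet_bytes - 1u);

        printf("miss address: 0x%08X\n", miss_addr);
        printf("packet base : 0x%08X\n", packet_base);

        /* With page-mode DRAM, only the column address changes between words. */
        for (unsigned i = 0; i < PACKET_WORDS; i++)
            printf("  word %u at 0x%08X\n", i, packet_base + i * WORD_BYTES);

        return 0;
    }

Aligning the packet this way keeps all of its words within one small address range, so, provided the packet fits within a single DRAM row, page mode can supply the subsequent words without a new row address.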