Direct Memory Access for Multiple Requesting Computer Devices

IP.com Disclosure Number: IPCOM000034799D
Original Publication Date: 1989-Apr-01
Included in the Prior Art Database: 2005-Jan-27
Document File: 5 page(s) / 131K

Publishing Venue

IBM

Related People

Keung, TW: AUTHOR (and three others)

Abstract

A technique is described whereby cache memory implementation is provided for direct memory access (DMA) data fetches when multiple communication input/output (I/O) devices are attached to a common bus. Discussed are the efficiencies of accessing the I/O processor's (IOP) memory in sequential access modes so as to better utilize the bandwidth on a synchronous communication bus and to take advantage of dynamic random access memory (DRAM) page mode characteristics.

As background, the subsystem shown in Fig. 1 is designed as an IOP intended for use with communication devices and protocols. In this subsystem, the memory can be accessed using page-mode and interleaving techniques so as to achieve a four-to-one speed advantage over single-word-mode accesses. The intent is to provide a sequential access mode that aids in managing IOP memory traffic and prevents this resource from becoming a bottleneck in system performance. In addition, data rates can be increased and average response times improved at the I/O adapter (IOA) bus, ensuring that communication devices can be connected to this bus with no degradation in performance. The concept described herein is essentially an extension of the architecture provided in the subsystem of Fig. 1: it provides a means of increasing the data-handling capability in sequential mode and speeds up transfers on the IOA bus by at least two-to-one for data residing in the cache.
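To make the stated four-to-one figure concrete, the following back-of-envelope model (a minimal sketch in C, not part of the disclosure) contrasts eight single-word DRAM accesses, each paying a full random-access cycle, with a sequential page-mode access in which interleaving across banks hides most of the per-word latency. The cycle counts are illustrative assumptions only.

#include <stdio.h>

/* Assumed timings, chosen only to illustrate the page-mode advantage:
 * a random access pays a full RAS+CAS cycle, while in page mode with
 * interleaved banks each subsequent sequential word costs roughly one
 * cycle. These numbers are not taken from the disclosure. */
#define FULL_CYCLE 8   /* cycles for a random single-word access          */
#define SEQ_CYCLE  1   /* effective cycles per subsequent sequential word */

int main(void)
{
    int words = 8;  /* one eight-word block, as used by the cache below */

    int single_word = words * FULL_CYCLE;                   /* 64 cycles */
    int sequential  = FULL_CYCLE + (words - 1) * SEQ_CYCLE; /* 15 cycles */

    printf("single-word mode: %d cycles\n", single_word);
    printf("sequential mode:  %d cycles (about %.1f:1)\n",
           sequential, (double)single_word / sequential);
    return 0;
}

With these assumed timings the sequential transfer comes out at roughly the four-to-one advantage the disclosure cites; the exact ratio depends on the actual DRAM timings.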

The cache memory implementation applies to read operations, with an IOA acting as the IOA bus master. When a request is received on IOA bus 10, as shown in Fig. 1, and it is determined that the data is not contained in the cache memory unit of the control unit, the data is fetched from storage unit 13, under the control of MPS unit 11, and sent to the requesting IOA. Once the first data unit is fetched, IOB circuit 12 continues to prefetch the next "n" words within the eight-word block of the original request. The number of words "n" fetched after the initial request depends on the address of the original request: it is the number of words remaining in the eight-word block after the initially requested word. For example, if the original request address has the three least significant bits of the word address equal to B'000', seven words are prefetched.

The words are stored in a one-way set-associative cache. The cache is organized as one bank of thirty-two words, with four blocks of eight words each. The index array for the cache bank is four entries deep and contains an address field (17 bits), a validity field (4 bits), and a miss-count field (4 bits). All of the RAMs in the cache have parity bits associated with each entry. Fig. 2 illustrates the cache organization. Set associativity, block size, and the depth of the cache may be expanded, without loss of generality, to suit the...
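The lookup and prefetch behavior described above can be sketched in C as follows. This is an illustrative model only: the names cache_read, fetch_from_storage, and first_valid are hypothetical, and the disclosure's 4-bit validity field is modeled here as a per-entry flag plus a first-fetched-word marker, which is one possible reading of the index-array description.

#include <stdint.h>
#include <stdbool.h>

#define BLOCK_WORDS 8          /* words per block                   */
#define NUM_BLOCKS  4          /* blocks in the single 32-word bank */

struct index_entry {
    uint32_t tag;              /* address field (17 bits in the text)      */
    bool     valid;            /* entry validity                           */
    uint8_t  first_valid;      /* first word of the block fetched so far;
                                  an illustrative stand-in, since only the
                                  words from the requested address onward
                                  are prefetched                           */
    uint8_t  miss_count;       /* 4-bit miss-count field; its use (e.g. in
                                  replacement) is not detailed in the text */
};

static struct index_entry index_array[NUM_BLOCKS];
static uint32_t cache_data[NUM_BLOCKS][BLOCK_WORDS];

/* Hypothetical stand-in for a fetch from storage unit 13 performed
 * under the control of MPS unit 11. */
extern uint32_t fetch_from_storage(uint32_t word_addr);

/* Service a read request arriving on the IOA bus. */
uint32_t cache_read(uint32_t word_addr)
{
    uint32_t offset = word_addr % BLOCK_WORDS;               /* 3 LSBs */
    uint32_t block  = (word_addr / BLOCK_WORDS) % NUM_BLOCKS;
    uint32_t tag    = word_addr / (BLOCK_WORDS * NUM_BLOCKS);
    struct index_entry *e = &index_array[block];

    /* Hit: the word is already in the cache. */
    if (e->valid && e->tag == tag && offset >= e->first_valid)
        return cache_data[block][offset];

    /* Miss: fetch the requested word first (in hardware it is returned
     * to the requesting IOA immediately)... */
    if (e->miss_count < 15)                /* saturate the 4-bit counter */
        e->miss_count++;
    cache_data[block][offset] = fetch_from_storage(word_addr);

    /* ...then prefetch the "n" words remaining in the eight-word block.
     * For an offset of B'000', n = 7, matching the example above. */
    for (uint32_t i = offset + 1; i < BLOCK_WORDS; i++)
        cache_data[block][i] = fetch_from_storage(word_addr - offset + i);

    e->tag         = tag;
    e->valid       = true;
    e->first_valid = (uint8_t)offset;
    return cache_data[block][offset];
}

Note that the trailing words of the block are fetched in strictly ascending order; this sequential pattern is exactly the access mode that benefits from the page-mode and interleaving timings modeled earlier.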