Method for split DMA processing

IP.com Disclosure Number: IPCOM000012023D
Publication Date: 2003-Apr-02
Document File: 3 page(s) / 84K

Publishing Venue

The IP.com Prior Art Database

Abstract

Disclosed is a method for split direct memory access (DMA) processing. Benefits include improved performance.

This is the abbreviated version, containing approximately 52% of the total text.

Background

              Conventionally, a host software (S/W) process must regulate the rate at which it pushes data to an input/output (I/O) device so that it matches the rate at which the device can process that data. Failure to do so can overrun the device's buffer memory.

              DMA engines conventionally use scatter-gather lists to direct data movement, with the DMA engine moving data between system memory and memory on an I/O device. In conventional I/O designs, a host-based S/W process (such as an I/O driver) constructs a scatter-gather list that specifies the data items to be moved. The list instructs a DMA engine to move those data items between host memory and I/O device-controlled memory (a buffer). That memory may be located, for example, on an outboard storage device or a network interface card (NIC). However, the DMA engine cannot move a data item until the I/O device has made the memory buffer available.
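A scatter-gather list is typically a chain of descriptors, each naming a source, a destination, and a length. The sketch below is illustrative only; the field names and layout are assumptions, not taken from the disclosure.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical DMA descriptor: one entry in a scatter-gather list.
 * All field names are illustrative. */
struct dma_desc {
    uint64_t src_addr;       /* host physical address of the data item */
    uint64_t dst_addr;       /* address in the I/O device's buffer     */
    uint32_t length;         /* bytes to move                          */
    uint32_t flags;          /* e.g. an end-of-chain marker            */
    struct dma_desc *next;   /* link to the next descriptor            */
};

/* Total bytes a scatter-gather chain asks the engine to move. */
static size_t chain_bytes(const struct dma_desc *d)
{
    size_t total = 0;
    for (; d != NULL; d = d->next)
        total += d->length;
    return total;
}
```

Walking the chain this way is what the DMA engine does in hardware; the host driver only builds and extends the list.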

              When the DMA engine hardware has moved a block of data into the buffer, it may have to wait for the I/O device to process that data and free buffer space before it can move in another data item. The metering of the DMA engine is typically performed by the host S/W driver, which tracks the progress of the I/O device and adds new descriptors to the DMA engine's descriptor chain as the I/O device makes new buffer space available.
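The conventional host-side metering described above can be sketched as follows. The structure and function names are hypothetical; the point is that the driver, not the device, decides when a descriptor may be posted.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical view of device buffer state held by the host driver. */
struct io_dev {
    uint32_t buf_free;  /* bytes of device buffer currently free */
};

/* Conventional metering: the host driver appends a descriptor to the
 * DMA chain only when the device has room for the data item. */
static bool try_post(struct io_dev *dev, uint32_t item_len)
{
    if (dev->buf_free < item_len)
        return false;            /* device busy: hold the descriptor */
    dev->buf_free -= item_len;   /* reserve the space, then enqueue  */
    return true;
}

/* The device frees space as it consumes data; the host learns of this
 * by reading device status, which is a comparatively costly I/O read. */
static void dev_consumed(struct io_dev *dev, uint32_t len)
{
    dev->buf_free += len;
}
```

The latency between `dev_consumed` happening on the device and the host observing it is exactly the gap the disclosure targets.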

              Data transfer conventionally occurs in one of two ways. The I/O process can fetch descriptors from system memory and move the data itself. Alternatively, the host process can monitor I/O progress by reading the I/O status to determine when the host can transfer more data. In either case, significant latency can occur between the moment the I/O process is able to receive more data and the moment the host process determines that it can transfer the data.

              The I/O device requires enough buffer space to hold as many data packets as the device can process during the latency period. This allows the device to continue processing data packets during the period between the I/O device freeing its internal buffer space and the host posting more data. As the latency period lengthens, more buffers are required. Conversely, reducing the latency reduces buffering requirements, thus reducing the cost of the I/O device.
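The sizing rule implied above is simple arithmetic: the device must buffer whatever arrives during the latency period, so the minimum buffer count scales with packet rate times latency. A small sketch of that calculation, with illustrative units:

```c
#include <assert.h>
#include <stdint.h>

/* Minimum packet buffers the device needs to ride out the host's
 * reaction latency.  Rounds up, since a partially covered packet
 * still needs a whole buffer.  Units and rates are illustrative. */
static uint64_t min_buffers(uint64_t pkt_rate_per_s, uint64_t latency_ns)
{
    return (pkt_rate_per_s * latency_ns + 999999999ULL) / 1000000000ULL;
}
```

For example, a device handling one million packets per second behind a 10 ms notification latency would need on the order of ten thousand packet buffers; halving the latency halves the buffering, and with it the device cost.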

              Pushing data across the I/O bus is the most efficient form of I/O data transfer. Memory reads by an I/O device are significantly less efficient and can diminish data bandwidth. I/O register reads by the host CPU have extremely poor performance characteristics.

General description

              The disclosed method is split DMA processing. The I/O device controls the rate that the DMA engine processes DMA descriptors to match its own I/O rate. The host S/W process is free to add new descripto...
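Although the passage is truncated, the arrangement it describes, in which the host posts descriptors freely while the device consumes them at its own rate, resembles a producer-consumer descriptor ring. The following is a minimal sketch under that assumption; the ring layout and all names are hypothetical, not taken from the disclosure.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define RING_SIZE 8u  /* illustrative descriptor ring depth */

/* Hypothetical split-DMA descriptor ring: the host produces
 * descriptors whenever it has data; the I/O device consumes them
 * only as its own processing rate allows, so the host no longer
 * meters the DMA engine. */
struct desc_ring {
    uint32_t desc[RING_SIZE]; /* stand-in for full descriptors */
    uint32_t head;            /* host write index (producer)   */
    uint32_t tail;            /* device read index (consumer)  */
};

/* Host side: append a descriptor whenever the ring has a free slot. */
static bool host_post(struct desc_ring *r, uint32_t d)
{
    if (r->head - r->tail == RING_SIZE)
        return false;                  /* ring full */
    r->desc[r->head % RING_SIZE] = d;
    r->head++;
    return true;
}

/* Device side: take the next descriptor when buffer space frees up. */
static bool dev_consume(struct desc_ring *r, uint32_t *d)
{
    if (r->tail == r->head)
        return false;                  /* nothing posted yet */
    *d = r->desc[r->tail % RING_SIZE];
    r->tail++;
    return true;
}
```

In this arrangement the device's own consumption of `tail` is the metering, removing the status reads and host-side bookkeeping described in the background.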