
High Line Utilization Direct Memory Access

IP.com Disclosure Number: IPCOM000036502D
Original Publication Date: 1989-Oct-01
Included in the Prior Art Database: 2005-Jan-29
Document File: 5 page(s) / 47K

Publishing Venue

IBM

Related People

Dias, WC: AUTHOR

Abstract

The speed of communication links has increased to the point where the time the microprocessor needs to allocate buffers for the link causes a significant loss in line utilization. Cutting line utilization in half doubles the cost of moving data, so it is desirable to end one message or packet and immediately start the next. At a line speed of 9600 bps, the least amount of time between packets (one flag time) was 833 us (one 8-bit flag at 9600 bit/s: 8/9600 s = 833 us). This left a microprocessor ample time to schedule the first received (inbound) data buffer for processing and assign a new buffer to the interface. At today's higher line speeds, assigning the current buffer to a task and locating a free buffer for the next message in the available time is impossible for most microprocessors.



Hardware, such as a simple direct memory access (DMA) controller, will move data quickly, but more is needed. To meet this challenge, a DMA must not only move data but also control the buffers used to store it. Inbound data arrives at the adapter at totally unpredictable times, and any lag in supplying buffers results in data loss and the high overhead of packet recovery. To prevent this from happening, the DMA must be able to:
1. obtain its own buffers from a buffer pool;
2. fill the buffers; and
3. transfer a pointer to each filled buffer to a "filled buffer queue".

In order to avoid conflicts with the microprocessor trying to use these buffers, the DMA must also ensure it does not overwrite a buffer still in use.
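The three requirements above, together with the in-use check, can be modeled in software as a pair of pointer queues: a free-buffer pool the DMA draws from, and a filled-buffer FIFO it hands completed buffers to. This is only an illustrative sketch of the hardware behavior described in the disclosure; all names and sizes are hypothetical.

```c
#include <stddef.h>

#define POOL_SIZE 4   /* illustrative pool depth */
#define BUF_BYTES 64  /* illustrative buffer size */

static unsigned char buffers[POOL_SIZE][BUF_BYTES];

/* Free-buffer pool and filled-buffer FIFO, each a simple ring of pointers. */
static unsigned char *free_pool[POOL_SIZE];
static int free_head, free_tail, free_count;

static unsigned char *filled_fifo[POOL_SIZE];
static int fill_head, fill_tail, fill_count;

void pool_init(void)
{
    free_head = free_tail = free_count = 0;
    fill_head = fill_tail = fill_count = 0;
    for (int i = 0; i < POOL_SIZE; i++) {       /* stock the pool */
        free_pool[free_tail] = buffers[i];
        free_tail = (free_tail + 1) % POOL_SIZE;
        free_count++;
    }
}

/* Step 1: the DMA obtains its own buffer; NULL means the pool is empty,
   the overrun case the text warns about. */
unsigned char *dma_get_buffer(void)
{
    if (free_count == 0)
        return NULL;
    unsigned char *b = free_pool[free_head];
    free_head = (free_head + 1) % POOL_SIZE;
    free_count--;
    return b;
}

/* Steps 2-3: after filling the buffer, the DMA queues its pointer on the
   "filled buffer queue" for the microprocessor. */
void dma_buffer_filled(unsigned char *b)
{
    filled_fifo[fill_tail] = b;
    fill_tail = (fill_tail + 1) % POOL_SIZE;
    fill_count++;
}

/* Microprocessor side: take a filled buffer, then recycle it. Because a
   buffer is recycled only here, the DMA can never overwrite one in use. */
unsigned char *cpu_take_filled(void)
{
    if (fill_count == 0)
        return NULL;
    unsigned char *b = filled_fifo[fill_head];
    fill_head = (fill_head + 1) % POOL_SIZE;
    fill_count--;
    return b;
}

void cpu_release_buffer(unsigned char *b)
{
    free_pool[free_tail] = b;
    free_tail = (free_tail + 1) % POOL_SIZE;
    free_count++;
}
```

Because only the release path returns buffers to the pool, the DMA cannot conflict with the microprocessor: a buffer it obtains is, by construction, not still in use.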

Outbound data comes from the microprocessor. Any lag in supplying buffers with complete blocks of data results only in a loss of line utilization, not a loss of data. This article defines hardware that would restore line utilization.

Advantages Over Prior Art

This design is able to receive many transmissions without microprocessor assistance.

DMA designs, such as Motorola's 68350 and Intel's 8257, require the microprocessor to assign a new buffer each time a transmission completes. The transmission link must remain idle while the microprocessor locates a buffer, which results in poor line utilization on the newer, higher-speed links.

Circulating buffers have been used in the past, but with a RAM of fixed size entirely separate from main system memory; this separation required extra microprocessor operations to access the data. Past circulating buffers also lacked a FIFO queue to mark the ends of messages, which placed a heavy burden on software to locate message boundaries.

Circulating Buffer with FIFO Address Queue

The microprocessor will assign a block of RAM as a circulating buffer. An overview of the system is shown in Fig. 1.

Inbound Operation


The DMA will store data into the area of RAM, incrementing its address after every store of data, until it reaches the upper bound of this area. When the DMA's address pointer reaches the upper bound of its block of RAM, it...
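The abbreviated text breaks off here, but the circulating-buffer store it begins to describe can be sketched as follows, assuming the address pointer wraps from the upper bound back to the lower bound (the usual circulating-buffer behavior) and that the overwrite check from the inbound requirements applies. All names and sizes are hypothetical.

```c
#define LOWER 0
#define UPPER 8   /* exclusive upper bound of the circulating region */

static unsigned char ram[UPPER];
static int dma_ptr = LOWER;   /* DMA store address */
static int cpu_ptr = LOWER;   /* oldest byte still owned by the CPU */

/* Store one inbound byte, incrementing the address after every store.
   Returns 0 on overrun, i.e. when advancing would overwrite data the
   microprocessor has not yet consumed. */
int dma_store(unsigned char byte)
{
    int next = dma_ptr + 1;
    if (next == UPPER)         /* upper bound reached: wrap (assumed) */
        next = LOWER;
    if (next == cpu_ptr)       /* buffer still in use: refuse to overwrite */
        return 0;
    ram[dma_ptr] = byte;
    dma_ptr = next;
    return 1;
}
```

One slot is deliberately left unused so that `dma_ptr == cpu_ptr` unambiguously means "empty" rather than "full".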