Direct Memory Access Queue Mechanism for Sharing a Single Direct Memory Access Channel for Multiple Processor with Common Data Memory

IP.com Disclosure Number: IPCOM000110421D
Original Publication Date: 1992-Nov-01
Included in the Prior Art Database: 2005-Mar-25
Document File: 4 page(s) / 173K

Publishing Venue

IBM

Related People

Eijan, UG: AUTHOR [+4]

Abstract

This article describes a technique for priority contention for direct memory access (DMA) channel access when multiple processors share the same data memory.

This text was extracted from an ASCII text file.
This is the abbreviated version, containing approximately 51% of the total text.

      The use of multiple processors is popular in various
environments.  Sharing of memory and contention resolution become
important when external hardware is shared.  An application of
sharing a DMA channel in a multiprocessor environment is disclosed
herein.  Fig. 1 is a block diagram showing such an environment.
Fig. 2 shows the DMA queue structure between two digital signal
processors (DSPs).

      In the DMA queue mechanism the DMA hardware is shared by both
processors.  In general there can be more than two processors; this
article is limited to the two-processor case in a network adapter.
Processor 1 handles transmit data and Processor 2 handles receive
data for all channels.  Transmit and receive data are transferred to
and from the host processor via DMA access.  Processor 1 accesses
the DMA hardware directly; Processor 2 requests DMA service from
Processor 1.

      With multiple data I/O channels it is likely that there can be
more requests than can be handled at one time.  This creates the
need for a circular DMA queue (DMAQUE) buffer which keeps the
addresses of all the pending DMA set-up requests.  There are two
queue pointers (the DMA Queue Write PoinTeR, DMAQWPTR, and the DMA
Queue Read PoinTeR, DMAQRPTR) for write and read access to the
queue.  A counter (DMA Queue LENgth, DMAQLEN) is kept to avoid
overrun by the write pointer.  As long as there is more than one
request in the queue, the DMA completion interrupt handler will
process the queue.  When no request is pending, the requesting
routine will set up a request block and write the address of the
request block in the queue at the location pointed to by DMAQWPTR.
The requesting routine will then decrement DMAQLEN, which indicates
to the background DMA queue mechanism that a request is pending.
Normally DMAQLEN is 40H, indicating that all queue slots are
available; anything less means requests are pending in the queue.
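
      As an illustration only, the following C sketch shows how the
circular DMAQUE buffer, its two pointers and the DMAQLEN counter
might be declared, and how the requesting routine could post the
address of a request block.  The names DMAQUE, DMAQWPTR, DMAQRPTR,
DMAQLEN and the 40H queue size come from the description above; the
C types and the routine name dmaq_post are assumptions.

      #define DMAQ_SIZE 0x40              /* 40H slots, per the text  */

      struct dma_req;                     /* set-up request block     */

      /* Circular queue of addresses of pending DMA set-up requests.  */
      static struct dma_req *DMAQUE[DMAQ_SIZE];
      static unsigned DMAQWPTR = 0;       /* write index              */
      static unsigned DMAQRPTR = 0;       /* read index               */
      static volatile unsigned DMAQLEN = DMAQ_SIZE;   /* free slots   */

      /* Requesting routine: queue the address of a filled block.     */
      static void dmaq_post(struct dma_req *req)
      {
          DMAQUE[DMAQWPTR] = req;                /* store at DMAQWPTR */
          DMAQWPTR = (DMAQWPTR + 1) % DMAQ_SIZE; /* advance, wrapping */
          DMAQLEN--;              /* below 40H signals pending work   */
      }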

      The DMA set-up request block has the following structure:
           1 word    DMA page address (Destination)
           1 word    DMA sub-address  (Destination)
           1 word    DMA Byte count
           1 word    Network Adapter (source)
           1 word    DMA status
           5 words total
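
      A C rendering of this request block might look as follows; the
16-bit word size and the structure and field names are assumptions,
while the five fields themselves follow the layout above.

      #include <stdint.h>

      /* DMA set-up request block: five one-word fields               */
      /* (16-bit words assumed).                                      */
      struct dma_req {
          uint16_t page_addr;     /* DMA page address  (destination)  */
          uint16_t sub_addr;      /* DMA sub-address   (destination)  */
          uint16_t byte_count;    /* DMA byte count                   */
          uint16_t adapter_addr;  /* network adapter buffer (source)  */
          uint16_t status;        /* DMA status                       */
      };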

      The DMA queue will contain the pointers to each of these
request blocks.  When a request is issued, the request block is
filled and its status is set to DMA request pending.  DMAQLEN is
decremented and checked for overflow.  If an overflow occurs, the
network adapter will flag the error to the Host.  Upon co...
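
      Continuing the sketch above, the request-issue path and the DMA
completion interrupt handler could be outlined as follows.  The
overflow check and the pending status come from the description; the
status value, the error-reporting hook flag_error_to_host and the DMA
set-up call start_dma_transfer are hypothetical place-holders.

      #define DMA_REQ_PENDING 0x0001        /* assumed status code    */

      void flag_error_to_host(void);                /* hypothetical   */
      void start_dma_transfer(struct dma_req *req); /* hypothetical   */

      /* Issue a request: mark the filled block pending, check the    */
      /* queue for overflow, then post it.                            */
      static int dma_issue(struct dma_req *req)
      {
          req->status = DMA_REQ_PENDING;
          if (DMAQLEN == 0) {         /* no free slot: queue overflow */
              flag_error_to_host();   /* adapter flags error to Host  */
              return -1;
          }
          dmaq_post(req);
          return 0;
      }

      /* DMA completion interrupt handler: if another request is      */
      /* queued, take its block from DMAQRPTR and set up the next     */
      /* transfer.                                                    */
      static void dma_complete_isr(void)
      {
          if (DMAQLEN < DMAQ_SIZE) {     /* requests still pending    */
              struct dma_req *next = DMAQUE[DMAQRPTR];
              DMAQRPTR = (DMAQRPTR + 1) % DMAQ_SIZE;
              DMAQLEN++;                 /* slot freed                */
              start_dma_transfer(next);
          }
      }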