
High Performance Protocol to Grant a Data Bus Without Data Buffer Reservations on the Receiving End that Avoids Data Buffer Overflow and System Deadlocks

IP.com Disclosure Number: IPCOM000014100D
Original Publication Date: 2000-Mar-01
Included in the Prior Art Database: 2003-Jun-19
Document File: 2 page(s) / 41K

Publishing Venue

IBM

Abstract

Disclosed is a method for arbitrating a data bus with limited data buffer space using sideband full and nearly-full signals from the data buffer control logic. These signals are used instead of counters that track the number of remaining buffers, which frees the arbiter from having to be updated whenever the size of the data buffers changes. Also disclosed is a mechanism for avoiding deadlocks when only one data buffer remains empty for an indefinite period of time, by retrying the address portion of an address/data transfer.

With a limited number of data buffers available to temporarily store the data for a potentially large number of data transfers from a given source, data buffer overflow or system deadlock due to lack of forward progress can result. This invention couples a data bus arbiter to a data buffer storage queue via buffer full signals, in addition to the usual request and grant lines. Two buffer full signals, completely full and nearly full, are sent from the data buffers to the data bus arbiter. The arbiter and the data buffer queue need not reside on the same chip.
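
As a rough illustration (not taken from the disclosure itself), the sideband interface can be modeled in C as two status bits that the buffer control logic derives from its private occupancy count; the names buffer_status_t, completely_full, and nearly_full are assumptions for this sketch:

    #include <stdbool.h>

    typedef struct {
        bool completely_full;   /* no buffer positions left          */
        bool nearly_full;       /* exactly one buffer position left  */
    } buffer_status_t;

    /* The buffer control logic derives the two signals from its own
     * occupancy count. The queue depth stays private to this logic, so
     * resizing the data buffers never requires a change to the arbiter. */
    static buffer_status_t buffer_status(unsigned occupied, unsigned depth)
    {
        buffer_status_t s;
        s.completely_full = (occupied >= depth);
        s.nearly_full     = (occupied == depth - 1u);
        return s;
    }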

When a data source requests to send data to the data buffer queue, the full signals are consulted. If the buffers are completely full, no data bus grant is given. If the data buffers are nearly full, with only one position left empty, the status of the data bus grants is reviewed to determine whether a previous data bus grant is outstanding that might source data to fill that remaining position. If there is none, the grant is given immediately. If there is one, the arbiter waits until it sees the data arrive at the data buffer queues, and then waits until the queues have had time to determine whether they are completely full or nearly full. Because data is also being transferred out of the queues, it is not possible to know ahead of time what state they will be in. This invention allows maximum utilization of the limited buffer resources while protecting against data overflow. A form of data streaming with very few buffers (2 to 4) can be achieved as long as data can be moved out of the queues in a timely manner.
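
A minimal sketch of this grant decision in C, assuming the same two status bits plus a count of outstanding data bus grants kept by the arbiter (all identifiers are illustrative, not from the disclosure):

    #include <stdbool.h>

    typedef struct {
        bool completely_full;
        bool nearly_full;
    } buffer_status_t;

    typedef enum { GRANT_NOW, GRANT_WAIT } grant_decision_t;

    /* outstanding_grants counts data bus grants already issued whose data
     * has not yet arrived at the buffer queue and therefore is not yet
     * reflected in the status signals. */
    static grant_decision_t arbitrate(buffer_status_t status,
                                      unsigned outstanding_grants)
    {
        if (status.completely_full)
            return GRANT_WAIT;   /* no room at all: withhold the grant */

        if (status.nearly_full && outstanding_grants > 0)
            return GRANT_WAIT;   /* one slot left and in-flight data may
                                    claim it: wait for that data to land
                                    and for the status signals to settle */

        return GRANT_NOW;        /* room is available, or one slot is left
                                    with nothing in flight: grant now */
    }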

A second part of the invention protects against system deadlocks when the data transfer also involves an address portion, as in the case of writes. The above algorithm is sufficient for data only transfer...
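
The remainder of this part is truncated in the extracted text. Purely as a hedged sketch of the idea stated in the abstract (retrying the address portion when the last buffer stays unavailable indefinitely), the arbiter's address-phase response might look like the following, where the timeout mechanism and all names are assumptions rather than details from the disclosure:

    #include <stdbool.h>

    typedef enum { RESP_PROCEED, RESP_RETRY_ADDRESS } addr_response_t;

    /* If the queue has been nearly full for longer than a chosen threshold,
     * a write (address/data transfer) is not held waiting for a data grant;
     * its address phase is retried so the requester backs off and forward
     * progress elsewhere can drain the queue. The threshold is an assumed
     * parameter, not specified in the disclosure. */
    static addr_response_t address_phase_response(bool nearly_full,
                                                  unsigned cycles_nearly_full,
                                                  unsigned retry_threshold)
    {
        if (nearly_full && cycles_nearly_full >= retry_threshold)
            return RESP_RETRY_ADDRESS;
        return RESP_PROCEED;
    }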