
Preallocating Buffers to Receive Replies to a Multicast

IP.com Disclosure Number: IPCOM000104050D
Original Publication Date: 1993-Mar-01
Included in the Prior Art Database: 2005-Mar-18
Document File: 6 page(s) / 205K

Publishing Venue

IBM

Related People

Auerbach, J: AUTHOR [+6]

Abstract

Disclosed is a method to ensure that an application processor (in a communication network that supports multicasting) has enough buffers to hold the large uncoordinated burst of traffic it may receive in reply to a multicast. A multicast is a transmission by one sender to more than one receiver. Multicasts can occur in a wide area network utilizing fast packet switching technology, e.g., [1], or in a network utilizing a shared broadcast medium, such as a local area network (LAN) or metropolitan area network (MAN). A processor that expects replies determines, in advance, the number of potential replies, then preallocates enough buffers in which to receive them.

This is the abbreviated version, containing approximately 45% of the total text.

Preallocating Buffers to Receive Replies to a Multicast

      Disclosed is a method to ensure that an application processor
(in a communication network that supports multicasting) has enough
buffers to hold the large uncoordinated burst of traffic it may
receive in reply to a multicast.  A multicast is a transmission by
one sender to more than one receiver.  Multicasts can occur in a wide
area network utilizing fast packet switching technology, e.g., [1],
or in a network utilizing a shared broadcast medium, such as a local
area network (LAN) or metropolitan area network (MAN).  A processor
that expects replies determines, in advance, the number of potential
replies, then preallocates enough buffers in which to receive them.
This disclosure reduces overhead on a "receive" data path by enabling
a processor to shift much of the fixed per-packet processing time to
the non-critical period before the communications link needs to be
serviced.  It facilitates a hardware implementation that can improve
the service time for fast links by receiving data in a stream fashion
into a list of preallocated buffers.  This solves a problem in a
one-to-many communications model that flow control solves in a
one-to-one communications model.

      It is a common principle of software engineering that in order
to match a slow processor with a fast link, buffering of data
received from the link is required; the processor can then process
the data later, at its own pace.  It is also well known that if a
processor is to perform a communications "receive" task that consists
simply of transferring incoming data to a buffer and putting the
buffer aside for later inspection, it must be able to handle each
received packet in a time interval that is shorter, on average, than
the interval between successive packet arrivals on the link.  In fast
networks, e.g., [1], the usual cause of errors and message loss is a
node's inability to receive or buffer data due to slow service time,
rather than errors on a transmission link, since bit error rates on
digital and optical facilities are typically in the range of 10^-7 to
10^-14 [2].  In general, the time it takes a processor to service a
fast link can be reduced by making packets longer, making the
processor faster, reducing the fixed overhead associated with
processing each packet, or enforcing packet spacing.
In a one-to-one communications model, conventional solutions to this
problem are flow control techniques, whereby a single sender and a
single receiver agree on the number of receive buffers required in
advance of the transmission, thus enforcing packet spacing.  In a
one-to-many communications model, however, traditional flow control
mechanisms are impractical, as they would impose an entirely new
burden of coordination upon the potentially many servers replying
concurrently to a single processor's multicast.  A better solution is
for the multicasting processor to conservatively estimate...