Combining Multiple Shared-Buffer Packet Switching Modules to Improve Switch Buffer Capacity

IP.com Disclosure Number: IPCOM000106560D
Original Publication Date: 1993-Nov-01
Included in the Prior Art Database: 2005-Mar-21
Document File: 4 page(s) / 122K

Publishing Venue

IBM

Related People

Heusler, L: AUTHOR

Abstract

Throughput, and thus performance, of a packet switching system depends on the size of the switch-internal packet buffer. Certain traffic scenarios, e.g., long bursts of packets destined for the same destination, may require more buffer space than fits on a single module. In the following, a scheme is presented that increases the effective buffer capacity in a modular fashion by combining multiple identical switching modules into a structure that maintains the shared packet buffer concept and preserves packet sequence.

This text was extracted from an ASCII text file.
This is the abbreviated version, containing approximately 52% of the total text.

Combining Multiple Shared-Buffer Packet Switching Modules to Improve Switch Buffer Capacity

      Throughput, and thus performance, of a packet switching system
depends on the size of the switch-internal packet buffer.  Certain
traffic scenarios, e.g., long bursts of packets destined for the
same destination, may require more buffer space than fits on a
single module.  In the following, a scheme is presented that
increases the effective buffer capacity in a modular fashion by
combining multiple identical switching modules into a structure
that maintains the shared packet buffer concept and preserves
packet sequence.

      In [*], a possible implementation of such a modular buffer
expansion scheme was described.  Multiple shared-buffer switching
modules are arranged in parallel such that the input and output
lines branch to all modules.  Its principal operation is best
described by focusing on the combination of several physical
output queues into one larger logical output queue.  Incoming
packets are first stored in one queue; whenever a module becomes
full, storage proceeds to the queue in the next module.  This is
illustrated in Fig. 1.
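This fill process can be sketched as follows (a minimal Python illustration; the module count, per-module capacity, and function names are assumptions, not part of the disclosure):

```python
# Minimal sketch (assumed names and sizes): incoming packets fill one
# module's queue and spill to the next module's queue when it is full,
# forming one larger logical output queue across the modules.
from collections import deque

NUM_MODULES = 3       # illustrative module count
MODULE_CAPACITY = 4   # illustrative per-module buffer size

modules = [deque() for _ in range(NUM_MODULES)]
fill_index = 0        # module currently accepting packets

def enqueue(packet):
    """Store a packet in the current fill module; advance when full.

    Overflow of the last module (total buffer exhaustion) is not
    handled in this sketch.
    """
    global fill_index
    if len(modules[fill_index]) >= MODULE_CAPACITY:
        fill_index = (fill_index + 1) % NUM_MODULES
    modules[fill_index].append(packet)
```

With ten packets and a capacity of four per module, module 0 fills first and the remainder spills into modules 1 and 2.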

      Packets are output in FIFO order, and packet sequence across
multiple input rounds is maintained by introducing special markers
that identify the entries written before a module switch.
Whenever the output server detects such a marker, it disables
output from the current module and activates the following module
in round-robin fashion.
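A minimal sketch of this marker-based output service, assuming a Python model in which a MARKER sentinel was written into each queue at the module switch during input (the names and the pre-filled state are illustrative, not from the disclosure):

```python
# Hypothetical sketch: the output server drains the active module in
# FIFO order; a MARKER entry written at each module switch tells it to
# move on to the next module round-robin, preserving packet sequence.
from collections import deque

MARKER = object()     # sentinel identifying a module-switch boundary

# Illustrative state: packets 1-4 went to module 0, packets 5-8 to
# module 1, with a marker written before each switch to the next module.
modules = [deque([1, 2, 3, 4, MARKER]),
           deque([5, 6, 7, 8, MARKER]),
           deque()]
out_index = 0         # module currently serving output

def dequeue():
    """Return the next in-sequence packet, or None if all are drained.

    Runs of empty or marker-only modules beyond one full round are not
    handled in this sketch.
    """
    global out_index
    for _ in range(len(modules)):
        q = modules[out_index]
        while q:
            entry = q.popleft()
            if entry is MARKER:
                break                     # switch to the next module
            return entry
        out_index = (out_index + 1) % len(modules)
    return None
```

Draining this state yields packets 1 through 8 in their original arrival order, even though they sit in different physical modules.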

      Unfortunately, the queues within a module are not independent
of each other, since they all contend for a limited number of
packet entries in the shared common packet buffer.  It is
therefore quite likely that, during the fill process for one
queue, some modules have to be skipped because they are already
occupied by traffic destined for other ports.  Depending on the
implementation, these skips may cause intermittent periods during
which no packets can be accepted, which lowers the maximum
achievable throughput.  Also, in order not to disturb the output
service, the rule is to write a marker each time a module is
visited.  But since the duration of such a congestion is
unpredictable, an unpredictable number of marker entries may have
to be written, which prohibits the definition of an absolute upper
queue-size limit.

      The solution is based on the same topology, with multiple
identical modules connected in parallel (Fig. 3).  However, there
are two main differences:  first, not only the output but also the
input service is handled on a per queu...