Buffer Management Scheme for Gigabit IP Routers

IP.com Disclosure Number: IPCOM000103827D
Original Publication Date: 1993-Feb-01
Included in the Prior Art Database: 2005-Mar-18
Document File: 4 page(s) / 126K

Publishing Venue

IBM

Related People

Abler, JM: AUTHOR [+4]

Abstract

The disclosed data structure minimizes the number of cycles needed for enqueueing and dequeueing Internet Protocol (IP) datagrams in a buffer memory. This is achieved because the header buffer containing the IP datagram header, as well as the data for short packets, is directly attached to the header descriptor without further indirection through a pointer. Moreover, the data structure enables fragmentation of IP datagrams within a reduced number of cycles. Multicasting is also supported.

Buffer Management Scheme for Gigabit IP Routers

      In high speed networks, high performance internetworking units,
such as IP routers, are indispensable.  A proper design of such units
has to address several important issues: fast IP header processing,
high speed routing table look-up, and efficient buffer management.
The buffer management issue is the focus here.

      In general, incoming IP datagrams may have to be stored in an
IP router for various reasons before being forwarded to the
corresponding output port(s).  In the targeted switch-based
architecture, [* ]  the queue length depends in part on the type of
switch used to interconnect the different Network Attachment Units
(NAUs).  Separate buffer management components are incorporated in
the receive and send parts of an NAU, the Receive Memory Management
Unit (RMMU) and the Send Memory Management Unit (SMMU) respectively.
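
      As a rough illustration of this separation, the following C
sketch models each NAU as holding independent queueing state for its
receive and send directions.  All names and fields are hypothetical;
only the RMMU/SMMU terminology comes from the disclosure.

    #include <stddef.h>

    /* Queue elements are the header descriptors described further
     * below; an opaque forward declaration suffices here. */
    struct header_descriptor;

    /* Per-direction buffer-management state (assumed layout). */
    struct mmu_state {
        struct header_descriptor *head;   /* oldest queued datagram */
        struct header_descriptor *tail;   /* newest queued datagram */
        size_t                    count;  /* queued datagram count  */
    };

    /* A Network Attachment Unit keeps separate buffer-management
     * state for its receive (RMMU) and send (SMMU) parts. */
    struct nau {
        struct mmu_state rmmu;
        struct mmu_state smmu;
    };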

      As described, [* ]  the receive and send memories of the NAU
are subdivided into data and header memories.  The header memory can
be implemented using very fast dual-port SRAM, whereas the larger
data memory can use cheaper single-port SRAM.  The first 128 bytes
of each IP datagram are written in the header memory and the rest of
the datagram resides in the data memory.
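
      A minimal C sketch of this receive-side split is given below.
The memory regions, offsets, and function name are assumptions made
for illustration; only the 128-byte boundary and the two memories are
taken from the description above.

    #include <stddef.h>
    #include <string.h>

    #define HEADER_SPLIT 128   /* first 128 bytes go to header memory */

    /* Hypothetical memory regions: in the disclosed design the header
     * memory is fast dual-port SRAM and the data memory is larger,
     * single-port SRAM. */
    extern unsigned char header_memory[];
    extern unsigned char data_memory[];

    /* Copy the first HEADER_SPLIT bytes of an arriving datagram into
     * the header memory and the remainder, if any, into the data
     * memory. */
    static void split_datagram(const unsigned char *dgram, size_t len,
                               size_t hdr_off, size_t data_off)
    {
        size_t hdr_len = len < HEADER_SPLIT ? len : HEADER_SPLIT;

        memcpy(&header_memory[hdr_off], dgram, hdr_len);
        if (len > HEADER_SPLIT)
            memcpy(&data_memory[data_off], dgram + HEADER_SPLIT,
                   len - HEADER_SPLIT);
    }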

      An element in a queue consists of a header descriptor (figure)
with a 128 byte header buffer appended to it.  The header descriptor
carries all the information required for further handling of the IP
datagram.  Thus, it includes indicators for multicasting (MC) and
fragmentation (FR), along with additional information to specify
these operations in more detail as described below.  Furthermore, the
length of valid data in the header buffer is included.  The Link
Pointer refers to the header descriptor of the next element in the
queue, implementing a simple linked list of IP datagrams.  A header
descriptor may also point to a linked list of additional data buffers
if the length of the received IP datagram exceeds 128 bytes.  This is
done by using a Data Block Pointer which points to the first data
block following the header buffer.  Each data buffer is accompanied
by a data descriptor consisting of a length field and a Data Block
Pointer to the next data buffer of the IP datagram.  The length field
is only interpreted in the last buffer.  All intermediate buffers are
completely filled during reception.
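
      The queue element described above can be sketched in C as
follows.  The fields themselves (MC, FR, the valid-data length, the
Link Pointer, the Data Block Pointer) and the 128-byte header buffer
come from the description; the field widths and the data-buffer size
are assumptions for illustration.

    #include <stddef.h>
    #include <stdint.h>

    #define HDR_BUF_SIZE  128    /* header buffer size per the text */
    #define DATA_BUF_SIZE 2048   /* assumed data buffer size        */

    /* Data buffer with its descriptor: a length field, interpreted
     * only in the last buffer, and a Data Block Pointer to the next
     * data buffer of the datagram. */
    struct data_descriptor {
        uint16_t                length;  /* valid bytes (last buffer) */
        struct data_descriptor *next;    /* Data Block Pointer        */
        unsigned char           buf[DATA_BUF_SIZE];
    };

    /* Header descriptor with the 128-byte header buffer appended
     * directly, so no further pointer indirection is needed to reach
     * the IP header or, for short packets, the whole datagram. */
    struct header_descriptor {
        unsigned                  mc : 1;     /* multicast indicator     */
        unsigned                  fr : 1;     /* fragmentation indicator */
        uint16_t                  hdr_length; /* valid header bytes      */
        struct header_descriptor *link;       /* next element in queue   */
        struct data_descriptor   *data;       /* first data block, NULL
                                                 for short packets       */
        unsigned char             hdr_buf[HDR_BUF_SIZE];
    };

    /* Because all intermediate data buffers are completely filled,
     * the total datagram length can be computed by counting full
     * buffers and reading the length field of the last one only. */
    static size_t datagram_length(const struct header_descriptor *h)
    {
        size_t len = h->hdr_length;
        const struct data_descriptor *d = h->data;

        while (d != NULL) {
            len += (d->next != NULL) ? DATA_BUF_SIZE : d->length;
            d = d->next;
        }
        return len;
    }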

      This data structure provides fast ac...