
Quick Memory Allocation For A Received Frame

IP.com Disclosure Number: IPCOM000014277D
Original Publication Date: 2000-Jun-01
Included in the Prior Art Database: 2003-Jun-19
Document File: 3 page(s) / 446K

Publishing Venue

IBM

Abstract

Disclosed is an algorithm for a communication adapter to quickly allocate a host memory buffer for an incoming frame.




When an Inter Node Communication frame arrives at a destination node, the communication stack has to determine where in memory to place the incoming frame. The vanilla TCP/IP communication stack does not provide a pre-posting mechanism. With pre-posting, an application could post a buffer (to the adapter) on a connection prior to the arrival of a frame. Since vanilla TCP/IP connections are multiplexed through the TCP/IP stack, the adapter or the adapter's device driver (DD) has to allocate memory buffers on the fly, as frames arrive.
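For contrast, the following is a minimal C sketch of the two receive paths just described. Every name in it (struct connection, pre_post_buffer, on_frame_arrival) is hypothetical and stands in for whatever interface a given stack or adapter actually exposes.

#include <stddef.h>
#include <stdlib.h>

/* One multiplexed TCP/IP connection (placeholder type). */
struct connection { int id; };

/* Pre-posting model: the application hands the adapter a buffer for a
 * connection before any frame arrives, so the destination of the data is
 * known in advance.  Vanilla TCP/IP offers no such call. */
static int pre_post_buffer(struct connection *conn, void *buf, size_t len)
{
    (void)conn; (void)buf; (void)len;
    return 0;                   /* buffer queued on the adapter */
}

/* Vanilla model: nothing is posted ahead of time, so the adapter or its
 * device driver must find host memory at the moment the frame lands. */
static void *on_frame_arrival(struct connection *conn, size_t frame_len)
{
    (void)conn;
    return malloc(frame_len);   /* stand-in for the adapter's pool allocator */
}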

Smart (a.k.a. deep) adapters offload various communication overheads from the host CPU to the adapter. Many of them also take over buffer management for incoming frames. The adapter is given a (receive) memory pool at initialization. This is memory on the host that the adapter manages and into which it places incoming frames. The adapter is then responsible for carving out memory sections on an as-needed basis. It is also responsible for freeing memory once the host has finished processing the incoming frame.
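A rough interface sketch of that division of responsibility is shown below. The structure and function names are assumptions made for illustration, not an actual adapter API.

#include <stddef.h>
#include <stdint.h>

/* Hypothetical view of the host receive pool handed to a deep adapter at
 * initialization.  All names are illustrative. */
struct rx_pool {
    uint8_t *base;      /* start of the host memory the adapter manages */
    size_t   size;      /* total pool size in bytes */
};

/* Host side: describe the pool to the adapter once, at initialization. */
void adapter_post_rx_pool(struct rx_pool *pool);

/* Adapter side: carve a section out of the pool for each arriving frame,
 * and release it once the host signals it has consumed that frame. */
void *adapter_carve(struct rx_pool *pool, size_t frame_len);
void  adapter_release(struct rx_pool *pool, void *section, size_t frame_len);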

Typically, deep adapters employ some type of memory management technique to partition the receive memory pool (allocate a chunk of memory for an incoming frame) and coalesce buffer sections (when a buffer is freed by the host) in order to keep buffer fragmentation low. The idea is to try to allocate contiguous virtual memory on the host for an incoming frame, since contiguous virtual memory allows efficient processing on the host. Partitioning the receive memory pool and coalescing it back (when buffers are freed) is quite complex and requires processing on the adapter. As we offload many tasks to the adapter, we need to ensure that adapter processing is kept simple and efficient.
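To make that complexity concrete, here is a minimal first-fit allocator with coalescing on free, the kind of partition/coalesce bookkeeping a conventional deep adapter would have to run over its receive pool. It is a generic illustration of the technique the disclosure is trying to avoid, not the adapter's actual code, and all names are assumed.

#include <stddef.h>
#include <stdint.h>

struct free_node {
    size_t            size;   /* bytes in this free section, header included */
    struct free_node *next;   /* next free section, kept in address order */
};

static struct free_node *free_list;

/* Hand the whole receive pool to the allocator once, at initialization. */
static void pool_init(void *pool, size_t size)
{
    free_list = pool;
    free_list->size = size;
    free_list->next = NULL;
}

/* Partition: carve a section for an incoming frame (first fit). */
static void *pool_alloc(size_t len)
{
    size_t need = len + sizeof(struct free_node);
    struct free_node **pp = &free_list;

    for (struct free_node *n = free_list; n; pp = &n->next, n = n->next) {
        if (n->size < need)
            continue;
        if (n->size - need > sizeof(struct free_node)) {
            /* Split: leave the tail of this section on the free list. */
            struct free_node *rest = (struct free_node *)((uint8_t *)n + need);
            rest->size = n->size - need;
            rest->next = n->next;
            n->size = need;
            *pp = rest;
        } else {
            *pp = n->next;          /* take the whole section */
        }
        return (uint8_t *)n + sizeof(struct free_node);
    }
    return NULL;                    /* pool exhausted or too fragmented */
}

/* Coalesce: return a section and merge it with a free neighbour. */
static void pool_free(void *buf)
{
    struct free_node *n =
        (struct free_node *)((uint8_t *)buf - sizeof(struct free_node));
    struct free_node **pp = &free_list;

    while (*pp && *pp < n)          /* keep the list in address order */
        pp = &(*pp)->next;

    n->next = *pp;
    *pp = n;

    if (n->next && (uint8_t *)n + n->size == (uint8_t *)n->next) {
        n->size += n->next->size;   /* merge with the following section */
        n->next = n->next->next;
    }
    /* A full implementation would also merge with the preceding section. */
}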

On adapter initialization, a virtually contiguous memory pool for receiving incoming frames is posted to the adapter. The posting is done on an OS page boundary. The adapter keeps a sequential page ta...
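The remainder of the description is cut off in the extracted text. As a sketch of only what the surviving sentences establish (a virtually contiguous, page-aligned pool posted at initialization and tracked by the adapter as a sequential list of pages), one might picture something like the following; the descriptor layout, the PAGE_SIZE value, and all names are assumptions.

#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 4096u              /* assumed OS page size */

/* Hypothetical descriptor built at adapter initialization: a virtually
 * contiguous receive pool, posted on OS page boundaries, which the adapter
 * records as a sequential list of page addresses. */
struct rx_page_table {
    uint64_t *page_addr;             /* address of each posted page */
    uint32_t  num_pages;             /* pool size / PAGE_SIZE */
    uint32_t  next_page;             /* next page to hand to an incoming frame */
};

/* Record the page-aligned pool as a sequential page table.  Only the setup
 * described in the surviving text is sketched; the allocation step itself
 * is cut off in the source. */
static void post_rx_pool(struct rx_page_table *tbl, uint64_t *page_addrs,
                         uint32_t num_pages)
{
    tbl->page_addr = page_addrs;
    tbl->num_pages = num_pages;
    tbl->next_page = 0;
}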