
Network Performance and Latency Enhancement by Scheduling Network Memory Frees

IP.com Disclosure Number: IPCOM000188469D
Original Publication Date: 2009-Oct-09
Included in the Prior Art Database: 2009-Oct-09
Document File: 3 page(s) / 92K

Publishing Venue

IBM

Abstract

Freeing network memory in high performance computing environments extends the runtime path for any given transmit thread, introducing latency and creating a barrier to better system network performance. To combat this negative effect, this invention proposes a novel method for moving the freeing of network memory outside the scope of a transmit thread.




Disclosed is a method to reduce the latency introduced by freeing network memory buffers (mbufs) on the transmit thread. During transmit, the network stack passes down a chain of mbufs to the device driver for processing and transmission. The network device driver must continually free mbufs after transmission so that the host network stack can then reuse them. Current network device drivers free the mbufs on the same transmit thread. Depending on the amount of data requested to transmit, this freeing can become quite costly and introduce significant latency into the life of the thread. This situation becomes even more critical should the freeing occur under a lock.

This invention involves scheduling the mbuf free operations to be performed outside the scope of the transmit path. By grouping the frees to run on a single thread on a different processor, the processor handling the transmit operations is able to avoid performing the free operations and return to the calling thread faster. The overall effect is a more balanced processor workload (see Figure 1), rather than one with constant spikes of higher activity and longer path lengths, which also increase latency, as seen in Figure 2.

During transmission, a thread will no longer free the mbufs...