
Method for Efficient TCP Processing with Prediction

IP.com Disclosure Number: IPCOM000127246D
Original Publication Date: 2005-Aug-18
Included in the Prior Art Database: 2005-Aug-18
Document File: 2 page(s) / 26K

Publishing Venue

IBM

Abstract

Described is a technique to boost the performance of receive-side TCP/IP processing and, consequently, application performance on computers. TCP/IP processing consumes many CPU cycles and is a dominant bottleneck for network-intensive applications. One method proposed to improve the performance of TCP/IP for receive traffic is to coalesce segments before TCP/IP processing. The most important consideration for segment coalescing is the ability to predict the inter-arrival delay of packets from the network. This prediction is used to decide when sufficient segments have been coalesced while maintaining latency constraints. The method used to predict packet arrivals is the exponential smoothing technique. The segment coalescing technique can demonstrate better latency characteristics if it can predict the arrival of the next packet.




Segment Coalescing Technique

    To understand the adaptive segment coalescing technique, the basic segment coalescing technique is first reviewed. In this design, a stream bucket is a logical queue consisting of packets belonging to the same TCP connection. A stream TCP/IP buffer is a large buffer that can accommodate a fixed number of 1500-byte TCP packets; its size is determined by the number of segments that need to be coalesced. The steps are shown in figure 1. The high-level design is described below:

    1) The NIC receives a packet (step a), DMAs the data into the device driver's buffer (steps b and e), and interrupts the CPU. The "fast" interrupt handler executes and schedules the "slow" handler (step c).

    2) The slow handler examines the packet and builds a 5-tuple key consisting of the source IP address, destination IP address, Layer 4 protocol, TCP source port and TCP destination port.

3) Using the 5-tuple key, the driver hashes into the correct stream bucket.
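Steps 2 and 3 can be sketched as follows. This is an illustrative Python sketch only; the packet field names and the dictionary-of-deques bucket structure are assumptions, since a real driver would read these fields out of the IP and TCP headers and use a fixed-size hash table:

```python
from collections import defaultdict, deque

def stream_key(pkt):
    """Build the 5-tuple key of step 2: source IP, destination IP,
    Layer 4 protocol, TCP source port, TCP destination port.
    Field names are hypothetical."""
    return (pkt["src_ip"], pkt["dst_ip"], pkt["proto"],
            pkt["src_port"], pkt["dst_port"])

# One logical stream bucket per TCP connection, keyed by the 5-tuple
# (step 3). defaultdict stands in for the driver's hash table.
buckets = defaultdict(deque)

def enqueue(pkt):
    """Hash the packet into its stream bucket and return the key."""
    key = stream_key(pkt)
    buckets[key].append(pkt)
    return key
```

In a real driver the key would be hashed to a fixed bucket index; the dictionary here simply makes the per-connection grouping explicit.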
4) The driver examines the packet for special conditions. It checks to ensure that the packet does not have IP or TCP options. The driver also checks if the packet has SYN, RST or FIN flags set.

If any of these conditions is true, the stream bucket is emptied, and the packet is copied to a TCP/IP buffer and pushed up the TCP/IP stack (step g).

    5) If the stream bucket is empty (step d), the packet is copied into a stream TCP/IP buffer.

    6) The packet may remain in the stream bucket until a certain number of packets has arrived or a latency threshold is reached. This decision is made by the prediction algorithm. While the packet is held, the remaining steps are skipped.
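The abstract names exponential smoothing as the arrival-prediction method behind the decision in step 6. A minimal Python sketch of one plausible realization follows; the smoothing weight `alpha`, the `should_flush` policy, and all names are assumptions, as the disclosure does not give concrete parameters:

```python
class ArrivalPredictor:
    """Exponentially smoothed estimate of packet inter-arrival time:
    estimate = alpha * sample + (1 - alpha) * estimate."""

    def __init__(self, alpha=0.125):
        self.alpha = alpha          # hypothetical smoothing weight
        self.estimate = None        # smoothed inter-arrival time
        self.last_arrival = None    # timestamp of previous packet

    def observe(self, now):
        """Record an arrival timestamp; return the updated estimate
        (None until two arrivals have been seen)."""
        if self.last_arrival is not None:
            sample = now - self.last_arrival
            if self.estimate is None:
                self.estimate = sample
            else:
                self.estimate = (self.alpha * sample
                                 + (1 - self.alpha) * self.estimate)
        self.last_arrival = now
        return self.estimate

def should_flush(count, max_count, elapsed, latency_budget, predicted_gap):
    """Flush the stream bucket when enough segments are coalesced, or
    when waiting for the predicted next arrival would exceed the
    latency budget (a hypothetical policy consistent with step 6)."""
    if count >= max_count:
        return True
    if predicted_gap is None:
        return elapsed >= latency_budget
    return elapsed + predicted_gap > latency_budget
```

The key property is that a steady arrival rate converges the estimate quickly, so the driver can hold packets only when the next arrival is predicted to fit within the latency budget.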

    7) The driver then uses the TCP sequence number in the packet to check whether the packet is contiguous with the packets already coalesced in the stream buffer. If it is not, the stream bucket is emptied, and the packet is copied to a TCP/IP buffer and pushed up the TCP/IP stack (step g).
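The contiguity test of step 7 amounts to comparing the arriving segment's sequence number against the byte immediately following the last coalesced segment. A hypothetical sketch (field names assumed; 32-bit sequence-number wraparound is ignored for brevity):

```python
def is_contiguous(next_expected_seq, pkt_seq):
    """A segment is contiguous when its sequence number equals the
    sequence number just past the last coalesced byte."""
    return pkt_seq == next_expected_seq

def on_packet(stream, pkt_seq, pkt_len):
    """Decide between coalescing (step 8) and flushing (step 7).
    'stream' tracks next_seq, a hypothetical per-connection field."""
    if stream["next_seq"] is None or is_contiguous(stream["next_seq"], pkt_seq):
        stream["next_seq"] = pkt_seq + pkt_len
        return "coalesce"
    return "flush"
```

A real implementation would use modulo-2^32 sequence arithmetic; the comparison shown captures only the in-order case the design relies on.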

    8) The driver copies the TCP data in the packet into the stream TCP/IP buffer (step f). The driver updates the packet size in the IP header and also updates the acknowledgement number in the TCP header of the stream TCP/IP buffer with the latest acknowledgement num...