Method for reducing the serialization overhead when an application is using User Datagram Protocol (UDP) to send to large numbers of destinations

IP.com Disclosure Number: IPCOM000238587D
Publication Date: 2014-Sep-04
Document File: 2 page(s) / 55K

Publishing Venue

The IP.com Prior Art Database

Abstract

The UDP protocol provides a connection-less low overhead transport for delivering data between applications. However, as data is sent or received from a destination, information about the destination needs to be cached. If a UDP socket is used to communicate with multiple destinations, this cached information is lost each time a new destination is used. The extra processing resulting from cache misses as well as the delay incurred to serialize access to the cache for each destination can significantly increase the overhead for using this connection-less protocol. Disclosed is a method to add connection-oriented structures for each destination to eliminate cache misses and serialization delays.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 52% of the total text.


The UDP protocol provides a connection-less, low-processing-overhead transport for delivering data between applications. It relies on the applications using the protocol to guarantee delivery of the data and to provide flow control.

However, because the protocol is connection-less, an application sending to multiple destinations can incur increased processing. As data is sent to a destination, the route, outbound IPSEC filter rules, and policy rules are cached. As data is received from a destination, the route, inbound IPSEC filter rules, and policy information are cached for that destination. If the next packet is sent to or received from a different destination or a different port, the cached outbound or inbound information is discarded.
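The cache-discard behavior described above can be sketched as a single-entry, per-socket destination cache. This is a hypothetical illustration of the behavior, not the actual z/OS data structures; all names are invented for the sketch.

```c
#include <stdint.h>

/* Hypothetical single-entry destination cache: a UDP socket caches
 * route/filter/policy data for the most recent destination only, and
 * discards it whenever the destination or port changes. */
typedef struct {
    uint32_t dest_addr;  /* destination IP of the cached information */
    uint16_t dest_port;  /* destination port of the cached information */
    int      valid;      /* is the cached route/filter/policy usable? */
} dest_cache;

static int cache_misses = 0;

/* Simulate one send: reuse the cache on a hit, rebuild it on a miss. */
static void udp_send(dest_cache *c, uint32_t addr, uint16_t port)
{
    if (!c->valid || c->dest_addr != addr || c->dest_port != port) {
        /* Cache miss: the route lookup and the IPSEC filter and policy
         * rules would be re-resolved here, then cached. */
        cache_misses++;
        c->dest_addr = addr;
        c->dest_port = port;
        c->valid = 1;
    }
    /* ... transmit the datagram using the cached information ... */
}
```

Note that alternating between two destinations defeats the cache entirely: each send overwrites the entry the next send would have needed.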

In addition, since multiple data sends or receives might occur concurrently, updates to the route, IPSEC, and policy caches must be serialized using the local UDP socket's lock, which adds delay.
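The serialization described above can be sketched with a mutex standing in for the UDP socket's lock. This is an illustrative model only; the lock type, structure layout, and names are assumptions, not the actual z/OS implementation.

```c
#include <stdint.h>
#include <pthread.h>

/* Hypothetical socket with a single-entry cache guarded by one lock:
 * every concurrent sender and receiver must take the same lock to
 * update the shared cache, so traffic to unrelated destinations
 * contends on it. */
typedef struct {
    pthread_mutex_t lock;        /* stands in for the UDP socket's lock */
    uint32_t        cached_dest; /* destination the cache describes     */
    long            updates;     /* how often the cache was rebuilt     */
} udp_socket;

static void cache_update(udp_socket *s, uint32_t dest)
{
    pthread_mutex_lock(&s->lock);  /* all senders serialize here */
    if (s->cached_dest != dest) {
        /* Rebuild route/IPSEC/policy information for this destination. */
        s->cached_dest = dest;
        s->updates++;
    }
    pthread_mutex_unlock(&s->lock);
}
```

With thousands of destinations sharing one socket, this one lock becomes the bottleneck even when the individual cache rebuilds are cheap.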

Enterprise Extender (EE), an existing function implemented by z/OS Comm Server, is a specific example of SNA applications using UDP as a transport to reduce overhead. Customers with large numbers of SNA sessions experience these performance issues.

Enterprise Extender uses five UDP ports to send and receive data; the default port values are 12000 through 12004. The five UDP ports are mapped to four SNA transmission priorities for data traffic, with one port reserved for LLC signaling. When an EE UDP connection to a destination is created, a table of five UDP sockets is opened, one per port.
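The port layout above can be sketched as a lookup table. The text does not say which port carries which traffic class, so the assignment below is purely illustrative, as are all names.

```c
#include <stdint.h>

#define EE_BASE_PORT 12000u  /* default base port; configurable */
#define EE_NUM_PORTS 5

/* Five traffic classes, one per port: LLC signaling plus four SNA
 * transmission priorities. Which class maps to which port is an
 * illustrative assumption, not taken from the text. */
typedef enum {
    EE_LLC_SIGNALING,  /* LLC signaling traffic          */
    EE_PRIO_NETWORK,   /* SNA transmission priorities... */
    EE_PRIO_HIGH,
    EE_PRIO_MEDIUM,
    EE_PRIO_LOW
} ee_traffic_class;

/* Map a traffic class to its default UDP port (illustrative). */
static uint16_t ee_port_for(ee_traffic_class tc)
{
    return (uint16_t)(EE_BASE_PORT + (unsigned)tc);
}
```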

A UDP socket is represented internally by a control block called a UCB (UDP control block).

This is the current implementation for Enterprise Extender:

- Data traffic is serialized at the UCB level:
  - Outbound traffic is serialized using the UCB lock.
  - Outbound IPSEC filter rule and policy information is cached in the UCB.
  - Inbound traffic is serialized using EE policy locks, one per UCB.
  - Inbound IPSEC filter rule and policy information is cached in the UCB.
  - The route information associated with the UCB and port is cached.
- An EE link represents the five "UDP connections" (one per port) that might be used between the local and remote node.
- A separate route is obtained for sending data from each local port to the remote node.
- A UCB table contains the five UCB control blocks (one for each UDP socket).
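The relationship between the UCB, the UCB table, and the EE link described above can be sketched as C structs. The field names and layout are illustrative placeholders, not the actual z/OS control-block definitions.

```c
#include <stdint.h>
#include <pthread.h>

#define EE_NUM_PORTS 5  /* default ports 12000-12004 */

/* Hypothetical sketch of a UDP control block (UCB): one per UDP
 * socket, holding the lock and the per-destination cached
 * information described in the text. */
typedef struct ucb {
    uint16_t        local_port;    /* e.g. one of 12000..12004        */
    pthread_mutex_t lock;          /* serializes outbound traffic      */
    void           *route_cache;   /* cached route for the last peer   */
    void           *ipsec_cache;   /* cached IPSEC filter rule info    */
    void           *policy_cache;  /* cached policy information        */
} ucb;

/* An EE link to one remote node: a table of the five UCBs, one per
 * "UDP connection" (port) that may be used to that node. */
typedef struct ee_link {
    ucb sockets[EE_NUM_PORTS];
} ee_link;
```

In this model every remote node's traffic funnels into one of the same five UCBs, which is why their locks and single-entry caches become the contention point.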

Using the UCB for serialization causes performance issues when there are thousands of remote nodes, each routing to one of the local node's five UCBs.

The route information, IPSEC filtering rules, and policy information are cached for each UCB. The cache changes constantly as packets are received from or sent to different remote nodes. This res...