
Apparatus and implementation for optimistic look ahead processing of network stream data

IP.com Disclosure Number: IPCOM000201286D
Publication Date: 2010-Nov-10
Document File: 7 page(s) / 56K

Publishing Venue

The IP.com Prior Art Database

Abstract

Many network applications, such as electronic trading, require extremely low latency. Applying parallel speculative processing to the whole chain of operations on network data is an approach to minimize all latencies.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 20% of the total text.

Speculative Processing of Network Stream Data

Many network applications, such as electronic trading, require extremely low latency. This disclosure applies parallel speculative processing to the whole chain of operations on network data to minimize all latencies.

Disclosed is a system, or hardware-software co-design, to process network data with significantly reduced latency compared to currently available solutions. This is achieved by processing every piece of network data speculatively through the full hardware and software stack. Here, network data comprises both network protocol headers and customer payload. The system design staggers the fully pipelined operations to further reduce latency.

The main idea of speculative processing applied to network data streams is to reduce latency by processing non-validated data. In real networks, errors occur rarely, so the system can assume that incoming data is usually correct and ready for immediate processing. The system calculates and verifies checksums in parallel with the payload data processing. If verification fails, it discards the data or replaces it with redundant good data. The cost of speculative processing is slightly higher design complexity in the communication between processing steps, compared to strictly serialized steps. Overall, this approach can significantly reduce processing latency without sacrificing data integrity.
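The scheme above can be illustrated with a minimal, hypothetical sketch: payload processing begins immediately while checksum verification runs in parallel, and the speculative result is committed only if verification succeeds. All names (`speculative_receive`, the CRC-32 stand-in for a protocol checksum, the summing stand-in for payload processing) are illustrative, not taken from the disclosure.

```python
# Sketch of speculative processing: payload work starts immediately
# while checksum verification runs concurrently; the speculative
# result is committed only if verification succeeds.
import zlib
from concurrent.futures import ThreadPoolExecutor

def verify_checksum(payload: bytes, expected: int) -> bool:
    # Stand-in for a protocol checksum (CRC-32 here, for illustration).
    return zlib.crc32(payload) == expected

def process_payload(payload: bytes) -> int:
    # Stand-in for application processing (e.g. a derivative measure).
    return sum(payload)

def speculative_receive(payload: bytes, expected_crc: int):
    with ThreadPoolExecutor(max_workers=2) as pool:
        check = pool.submit(verify_checksum, payload, expected_crc)
        result = pool.submit(process_payload, payload)  # runs speculatively
        if check.result():
            return result.result()  # verification passed: commit
        return None                 # verification failed: discard

data = b"market data tick"
good = speculative_receive(data, zlib.crc32(data))   # commits
bad = speculative_receive(data, 0xDEADBEEF)          # discards
```

In hardware, the two branches would be parallel pipeline stages rather than threads, but the commit-or-discard decision at the end is the same.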

The hardware stack optimized in this disclosure starts right above the physical media access control, often referred to as Open Systems Interconnection (OSI [1]) layer 2b. Typical hardware operations consist of network protocol header processing, payload data processing, and transfer to main memory. Header and payload processing may include operations such as filtering based on addresses or content, compression, computation of derivative measures, averaging, or similar algorithms. Software operations usually comprise system setup, data transfer control, and device drivers, up to the user application at OSI layer 7.
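As an example of the kind of header verification such hardware performs, the following sketch implements the 16-bit ones'-complement Internet checksum (RFC 1071) used by IPv4, TCP, and UDP headers. The disclosure does not name a specific checksum; this is an assumption chosen for illustration.

```python
# Minimal sketch of the RFC 1071 Internet checksum, the integrity
# check carried by IPv4/TCP/UDP headers. A verifier recomputes the
# sum over the header including its stored checksum field; a correct
# header yields zero.
def internet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"  # pad odd-length input with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold carry back in
    return ~total & 0xFFFF

# Appending the computed checksum makes the overall sum verify to 0.
payload = b"\x12\x34"
csum = internet_checksum(payload)
verified = internet_checksum(payload + bytes([csum >> 8, csum & 0xFF]))
```

In the staggered-pipeline design, this computation proceeds in parallel with payload processing rather than gating it.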

The novel approach considers the whole processing chain instead of optimizing individual steps in isolation. Hardware-software co-design can therefore move the boundary between hardware and software as needed without introducing a discontinuity in the speculative processing.

A growing number of applications demands the lowest possible latency to gain a competitive business advantage. Electronic trading, for example, requires minimal latency to decide whether to buy, sell, or abstain: the fastest decision makes the deal. For other applications, throughput is limited by processing latency, for example when the decision which data to request next depends on the processing result of the previous data.

Existing network processing systems commonly do not process data before its integrity has been ensured by some sort of checksum p...