Method for hyper-threading to reduce I/O processing overhead

IP.com Disclosure Number: IPCOM000012027D
Publication Date: 2003-Apr-02
Document File: 3 page(s) / 121K

Publishing Venue

The IP.com Prior Art Database

Abstract

Disclosed is a method for hyper-threading to reduce I/O processing overhead. Benefits include improved performance.

This is the abbreviated version, containing approximately 55% of the total text.

Background

              Conventionally, when application software performs an input/output (I/O) operation, such as a disk read, significant processing overhead occurs. This overhead results from several factors, including:

•             I/O operations are initiated by use of a system call trap.

•             For a synchronous I/O operation, the initiating thread is suspended until the operation completes and another thread is scheduled.

•             Completion of the I/O operation causes an interrupt and the invocation of an interrupt handler.

•             The suspended thread is rescheduled to run.

              Each item in the list above results in the execution of hundreds to thousands of instructions. Other factors contribute to overhead as well, including poor cache utilization due to the lack of locality in the executed code and translation look-aside buffer (TLB) flushes that result from context switches among threads.

              Hyper-threading conventionally enables processor hardware to multiplex the execution of multiple virtual-thread contexts among one or more sets of physical hardware resources. When an execution thread is blocked due to waiting for an external dependency, the processor (hardware) switches to another execution thread that is ready to run. This switch occurs without operating system (OS) software intervention. External dependencies that may cause a thread to be blocked are typically short-lived events, such as a cache miss.

              A completion queue is an area of host memory in which a device indicates that an event has occurred.

General description

              The disclosed method extends the capabilities of hyper-threads to reduce the processing overhead of I/O operations by changing the way threads block while waiting for I/O completions. Additionally, the method provides a mechanism for eliminating the interrupts caused by I/O devices.

Advantages

              The disclosed method provides advantages, including:

•             Improved performance due to reduced processing overhead: system-call traps are replaced with message queue-based system calls, eliminating the processing required for issuing and handling interrupts.

•             Improved performance due to I/O devices signaling completion directly to the processor.

Detailed description

              The disclosed method provides a general-purpose mechanism that extends the types of supported blocking events to include I/O operations, enabling the following functions:

•             Threads can block execution while waiting for the completion of an I/O operation.

•             Hardware I/O devices signal to the processor...