Method to improve response time for threads running in a multi-threaded environment

IP.com Disclosure Number: IPCOM000189111D
Original Publication Date: 2009-Oct-28
Included in the Prior Art Database: 2009-Oct-28
Document File: 2 page(s) / 30K





Main Idea

Hardware multi-threading is a well-known technique for improving the throughput of a microprocessor core by providing more than one hardware thread of execution for the core. Multi-threading allows the microprocessor to overlap typical single-thread bottlenecks such as cache misses and branch mispredicts with instructions from other threads.

One less discussed characteristic of hardware multi-threading is its impact on the performance of a single operation passing through the core. Consider some typical measurements on a system:

    number of active threads/core    core throughput    per-thread throughput
    1                                1                  1
    2                                1.3                0.65

Essentially, with a higher degree of multi-threading, the performance delivered by each hardware thread is reduced, even though total core throughput increases. At high utilization the actual impact on response time is smaller (because there is less queuing for CPU resources), but the impact on single-thread performance is still noticeable. The problem becomes most extreme for cores with even higher degrees of threading, such as four or eight threads per core.
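The trade-off in the measurements above can be made concrete with a little arithmetic. The following sketch reuses the sample numbers from the table (1.3x core throughput, 0.65x per-thread throughput for two threads); the 100 ms transaction is an illustrative assumption, not a figure from the disclosure.

```python
# Illustration of the throughput trade-off from the sample measurements.
# The throughput numbers are the examples given in the text, not
# constants of any particular processor.

ST_CORE_THROUGHPUT = 1.0    # single-threaded baseline
SMT2_CORE_THROUGHPUT = 1.3  # two active hardware threads per core

def per_thread_throughput(core_throughput, active_threads):
    """Throughput seen by each individual hardware thread."""
    return core_throughput / active_threads

smt2_per_thread = per_thread_throughput(SMT2_CORE_THROUGHPUT, 2)
print(smt2_per_thread)          # 0.65

# A transaction needing 100 ms of core time when single-threaded
# stretches to ~154 ms when its thread runs at 0.65x speed.
st_latency_ms = 100.0
smt_latency_ms = st_latency_ms / smt2_per_thread
print(round(smt_latency_ms))    # 154
```

So the core does 30% more total work, yet any single operation passing through it takes roughly 54% longer, which is exactly the single-thread response-time impact described above.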

Some multi-threading implementations allow software to dynamically change the degree of multi-threading. For example, an operating system can switch a core from single-threaded (ST) to simultaneously multi-threaded (SMT) and vice versa in a matter of hundreds of cycles. An operating system can do this based on the number of active software threads and the overall utilization.
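A policy of the kind described above can be sketched as follows. The mode names, the 0.5 utilization threshold, and the two-way SMT limit are illustrative assumptions; real operating systems use their own heuristics and hardware interfaces for the mode switch.

```python
# Sketch of an OS-level policy that picks a core's threading mode
# from the number of runnable software threads and recent utilization.
# Thresholds here are assumptions for illustration only.

ST, SMT2 = 1, 2  # hardware threads enabled per core

def choose_mode(runnable_threads, utilization):
    """Return the threading degree a core should run at.

    Prefer single-threaded mode when there is little work, so each
    operation gets the full core; switch to SMT only when there are
    enough runnable threads and the system is busy enough to benefit
    from the extra total throughput.
    """
    if runnable_threads <= 1 or utilization < 0.5:
        return ST
    return SMT2

print(choose_mode(1, 0.9))  # 1  (only one runnable thread: stay ST)
print(choose_mode(4, 0.8))  # 2  (busy and parallel: enable SMT)
```

Because the hardware switch itself costs only hundreds of cycles, such a policy can be re-evaluated frequently without noticeable overhead.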

Applications, such as databases, can use workload managers to prioritize work. The database optimizes response time for short-lived transactions by initiating them in a higher-priority workload class and then moving them to a lower-priority workload class once the CPU time consumed by the transaction exceeds some limit. This is implemented by the database keeping track of the start time of the transaction and explicitly moving it from one class to another as the transaction's execution progresses.
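The workload-manager technique described above can be sketched as follows. The class names and the 50 ms limit are illustrative assumptions; an actual database would use its own workload classes, limits, and CPU accounting hooks.

```python
# Sketch of workload-class demotion: a transaction starts in a
# high-priority class and is moved to a low-priority class once its
# accumulated CPU time passes a limit. Names and the limit are
# assumptions for illustration.

HIGH, LOW = "high", "low"
CPU_LIMIT_S = 0.050  # demote after 50 ms of consumed CPU time

class Transaction:
    def __init__(self):
        self.cpu_time_s = 0.0
        self.workload_class = HIGH  # new transactions start high priority

    def account_cpu(self, delta_s):
        """Called by the database as the transaction consumes CPU time."""
        self.cpu_time_s += delta_s
        if self.workload_class == HIGH and self.cpu_time_s > CPU_LIMIT_S:
            self.workload_class = LOW  # long-running: stop favoring it

txn = Transaction()
txn.account_cpu(0.030)
print(txn.workload_class)  # high
txn.account_cpu(0.030)
print(txn.workload_class)  # low
```

The effect is that short transactions finish entirely inside the favored class, while long-running work is pushed out of the way before it can crowd out newly arriving short transactions.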

A typical problem in a transaction processing system, and a way of solving it, is discussed here. The scenario involves dividing all the tasks in a transaction processing system into two groups, one consisting of all high-priority tasks and the other of low-priority tasks. The high-priority group is entitled to more CPU and memory resources than the low-priority group. The problem observed is that lowering the CPU resource entitlement of the low-priority group still does not improve the response time of transactions initiated by tasks in the high-priority group, even at higher CPU utilization. Also, it is much easier to measure or sample the CPU utilization of a given task over a period of time than to measure or sample its response time.
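The problem described above can be illustrated with the earlier sample measurements: on a two-way SMT core, a high-priority task's effective speed depends on whether the sibling hardware thread is busy at all, not on how small the low-priority group's entitlement is. The entitlement values below are hypothetical.

```python
# Sketch of why shrinking the low-priority group's CPU entitlement
# does not restore high-priority response time on an SMT core. The
# 0.65 per-thread rate is the sample measurement from the text;
# entitlement percentages are assumptions for illustration.

def high_priority_speed(low_entitlement, low_group_busy):
    """Effective speed of a high-priority task on a 2-way SMT core.

    Whenever any low-priority work runs on the sibling hardware
    thread, the high-priority task runs at the reduced per-thread
    rate, regardless of the low group's entitlement.
    """
    if low_group_busy and low_entitlement > 0:
        return 0.65  # sharing the core with the sibling thread
    return 1.0       # core to itself: full single-thread speed

# Cutting the low group's entitlement from 30% to 5% does not help:
print(high_priority_speed(0.30, True))  # 0.65
print(high_priority_speed(0.05, True))  # 0.65
```

In other words, the entitlement controls how much CPU time the low-priority group receives over an interval, but not whether its threads occupy the sibling hardware thread at the moments the high-priority transaction is running.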

This invention addresses this problem...