
Hardware Multi-Threading Pacing in Lieu of Cache Hierarchy Resource Over-Commit

IP.com Disclosure Number: IPCOM000248569D
Publication Date: 2016-Dec-19
Document File: 4 page(s) / 96K

Publishing Venue

The IP.com Prior Art Database


Described is hardware multi-threading pacing in lieu of cache hierarchy resource over-commit.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 59% of the total text.


Hardware Multi-Threading Pacing in Lieu of Cache Hierarchy Resource Over-Commit

To increase system throughput, most modern microprocessors employ multiple independent threads of execution. These independent threads make better use of processor resources and help reduce idle time. Today's microprocessors typically allow 2 to 8 threads of execution per core. Infrequently, however, higher levels of threading may actually degrade system throughput. For example, 4 threads per core may yield better throughput than 8 threads per core. This creates a challenge for the usage and management of such systems: should the threading level be kept at a conservative low, or should one risk losing performance, and perhaps quality of service, for workloads with more active threads? Such a question is best addressed by allowing the hardware to dynamically pace the number of hardware threads, so that users and administrators of the system do not need to take specific actions to optimize it.
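The pacing decision above can be illustrated with a minimal software model. This is only an assumption-laden sketch, not the disclosed hardware mechanism: the function name `choose_smt_level` and the idea of picking the threads-per-core setting with the best observed throughput are illustrative stand-ins for the hardware's dynamic pacing.

```python
def choose_smt_level(throughput_by_level):
    """Hypothetical model of dynamic thread pacing: given observed core
    throughput at each SMT level (threads per core), select the level
    that delivered the best throughput rather than always running at
    the maximum thread count."""
    # e.g. if 4 threads/core outperformed 8, pace back down to 4.
    return max(throughput_by_level, key=throughput_by_level.get)
```

In this toy model, `choose_smt_level({4: 100.0, 8: 92.0})` returns 4, capturing the paper's example where 4 threads per core outperform 8.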

As depicted in Figure 1 below, the invention involves a per-core management agent which has the following capabilities:

1. The ability to monitor key event signatures that are correlated with the over-commit of processor resources in a multi-threaded environment.

2. The ability to keep track of a history of processor efficiency and thread usage.

3. The ability to pace the instruction dispatch rate to the core for execution on a thread-by-thread basis.
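The three capabilities above can be sketched as a small software model. Everything here is an assumption for illustration: the class name, the use of a per-thread cache-miss rate as the "event signature," and the halve-the-dispatch-rate policy are hypothetical choices, not details from the disclosure.

```python
from collections import deque

class CoreManagementAgent:
    """Hypothetical sketch of the per-core management agent. Names,
    thresholds, and the pacing policy are illustrative assumptions."""

    def __init__(self, history_len=16, miss_rate_threshold=0.3):
        # Capability 2: a bounded history of processor efficiency
        # and thread usage samples.
        self.history = deque(maxlen=history_len)
        self.miss_rate_threshold = miss_rate_threshold

    def observe(self, thread_id, cache_miss_rate, ipc):
        # Capability 1: monitor event signatures correlated with
        # over-commit (a per-thread cache miss rate stands in for
        # the hardware's key event signatures here).
        self.history.append((thread_id, cache_miss_rate, ipc))

    def dispatch_rate(self, thread_id, full_rate=1.0):
        # Capability 3: pace instruction dispatch on a thread-by-thread
        # basis. If this thread's recent samples show over-commit
        # symptoms, throttle its dispatch rate; otherwise run at full rate.
        samples = [m for t, m, _ in self.history if t == thread_id]
        if samples and sum(samples) / len(samples) > self.miss_rate_threshold:
            return full_rate / 2
        return full_rate
```

With this model, a thread whose recent cache-miss rate exceeds the threshold is dispatched at half rate, while well-behaved threads keep their full dispatch rate.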



Figure 1

When a po...