Named Dynamic Run Queues
Original Publication Date: 2003-Jun-17
Included in the Prior Art Database: 2003-Jun-17
This proposal describes non-static, named run queues to which specific processors can be allocated for service.
Processor run queues are fixed and created during system initialization. In current implementations, a run queue may be global (i.e., system-wide) or local to a processor; in the case of a NUMA system, a run queue could be considered node-local.
This proposal holds that run queues need not be static: processors could be added to or removed from them, and run queues themselves could be dynamically created and destroyed. These new named dynamic run queues would replace the local run queues. Once processors are associated with run queues, the queues are a very efficient means of managing work, unlike a resource manager such as WLM, which requires additional management of the work each time a processor takes work.
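The disclosure does not define an interface for these operations, but the mechanism can be illustrated with a small user-space model. The class and method names below (Scheduler, create_queue, pick_work, and so on) are hypothetical, chosen only to sketch the idea of queues that are created, destroyed, and re-provisioned with processors at run time:

```python
from collections import deque

class NamedRunQueue:
    """A run queue identified by name, serviced by a dynamic set of processors."""
    def __init__(self, name, cpus=()):
        self.name = name
        self.cpus = set(cpus)   # processors currently servicing this queue
        self.work = deque()     # runnable threads, FIFO for simplicity

class Scheduler:
    def __init__(self):
        self.queues = {}        # name -> NamedRunQueue

    def create_queue(self, name, cpus=()):
        self.queues[name] = NamedRunQueue(name, cpus)

    def destroy_queue(self, name):
        # A real system would migrate pending work; this sketch requires
        # the queue to be drained first.
        q = self.queues.pop(name)
        assert not q.work, "migrate work before destroying"

    def add_cpu(self, name, cpu):
        self.queues[name].cpus.add(cpu)

    def remove_cpu(self, name, cpu):
        self.queues[name].cpus.discard(cpu)

    def enqueue(self, name, thread):
        self.queues[name].work.append(thread)

    def pick_work(self, cpu):
        # A processor takes work only from queues it currently services.
        for q in self.queues.values():
            if cpu in q.cpus and q.work:
                return q.work.popleft()
        return None
```

Note that once the processor set is attached to the queue, dispatch requires no per-dispatch policy decision: a processor simply takes the next item from a queue it services, which is the efficiency claim made above.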
The ability to dynamically create run queues and allocate processors to service them enables very simple, efficient management of the processors' workload. Any work on a run queue is deemed eligible to execute on any processor that services the queue. In addition, depending on load, processor time can be shifted across these soft boundaries: the work on a given run queue can as a whole be treated as a workload entity, and processor time can be shifted dynamically between these entities.
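The disclosure does not specify how processor time would be apportioned between these workload entities; a minimal sketch, assuming a simple load-proportional policy (one of many possible), might look like this:

```python
def processor_shares(loads, total_cpu_time):
    """Split available processor time across run-queue entities in
    proportion to each entity's current load (e.g., runnable threads).

    `loads` maps run-queue name -> load; returns name -> time share.
    """
    total_load = sum(loads.values())
    if total_load == 0:
        return {name: 0.0 for name in loads}
    return {name: total_cpu_time * load / total_load
            for name, load in loads.items()}
```

For example, with entity loads of 3 and 1 and 8 processor-seconds available, the shares come out as 6 and 2; as load moves between the run queues, the shares shift with it while the queue boundaries themselves stay in place.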
For example, consider a system running two databases concurrently. A run queue could be created for Database A and another for Database B, each allocated the appropriate processors. On systems with the concept of local and remote memory, enabling hard memory affinity (affinitizing memory and disallowing remote memory allocations) would provide further segregation of the workload, provided the processor assignments to the run queues fall on processor hardware boundaries that correspond to memory hardware boundaries.
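This two-database layout can be sketched as follows. The two-node NUMA topology, the queue names, and the `may_allocate` check are all hypothetical; the point is that when each queue's processors cover a whole node, hard memory affinity confines each database's allocations to its own node:

```python
# Hypothetical 2-node NUMA layout: node id -> CPUs on that node.
NODES = {0: {0, 1, 2, 3}, 1: {4, 5, 6, 7}}

# Place each database's run queue on a whole node, so the processor
# boundary matches the memory boundary.
run_queues = {"dbA": NODES[0], "dbB": NODES[1]}

def local_node(cpu):
    """Return the node whose memory is local to the given CPU."""
    return next(n for n, cpus in NODES.items() if cpu in cpus)

def may_allocate(queue_name, node, hard_affinity=True):
    """With hard memory affinity, a queue's threads may allocate memory
    only on nodes whose CPUs service that queue; remote allocations
    are disallowed."""
    if not hard_affinity:
        return True
    queue_nodes = {local_node(c) for c in run_queues[queue_name]}
    return node in queue_nodes
```

If the queue boundaries did not line up with the node boundaries (say, a queue spanning CPUs 2 through 5), the same check would show that hard affinity can no longer cleanly segregate the two workloads' memory.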
Where hardware boundaries exist and there are advantages to partitioning on them, named dynamic run queues could be used to expose those boundaries. For example, consider MCMs (multi-chip modules) containing dual-chip modules: eight processors arranged in pairs, with each pair sharing an L2 cache. A run queue could be created for each dual-chip module, or alternatively one for each MCM. This would effectively bind the workload to dual-chip modules or to MCMs, depending on the choice made.
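Deriving the queues from the topology is mechanical. The sketch below, assuming contiguously numbered processors and a hypothetical naming scheme, generates one named run queue per hardware module; the same function covers both choices above by varying the module size:

```python
def queues_per_module(num_cpus, cpus_per_module):
    """Group processor ids into one named run queue per hardware module.

    Assumes CPUs are numbered contiguously within each module, as in an
    8-way MCM of four dual-chip modules (pairs sharing an L2 cache).
    """
    return {
        f"module{m}": set(range(m * cpus_per_module, (m + 1) * cpus_per_module))
        for m in range(num_cpus // cpus_per_module)
    }
```

For the 8-processor MCM described above, `queues_per_module(8, 2)` yields four per-pair queues (binding work at L2-cache granularity), while `queues_per_module(8, 8)` yields a single per-MCM queue.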
This proposal also retains the option of keeping the local run queues as individual run queues: a local run queue could be created for any processor where one is required. By default, an individual run queue would be created for each processor not included in a named dynamic run queue, to maintain compatibility.
Introducing a name for the run queue makes workload assignment simple. Instead of needing to bind a process to a logical processor number, there is the ability to bind to a named run queue, which is simpler and easier for the customer. When a process is
started, an environment variable will be defined to n...
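The binding step can be sketched as a lookup by name rather than by processor number. The environment-variable name (`RUN_QUEUE`) and the `bind_to_named_queue` helper below are hypothetical, since the disclosure's text is cut off before naming the variable:

```python
import os

# name -> CPUs servicing the queue (illustrative configuration)
run_queues = {"dbA": {0, 1}, "dbB": {2, 3}}

def bind_to_named_queue(pid, name):
    """Bind a process to a run queue by name rather than by logical CPU
    number; the process becomes runnable on any CPU servicing that queue."""
    if name not in run_queues:
        raise KeyError(f"no run queue named {name!r}")
    return {"pid": pid, "queue": name, "eligible_cpus": run_queues[name]}

# Hypothetical use at process start: the environment carries the queue name,
# so the customer never deals with processor numbers directly.
queue_name = os.environ.get("RUN_QUEUE", "dbA")
binding = bind_to_named_queue(os.getpid(), queue_name)
```

A key property of name-based binding is that the queue's processor set can later be grown or shrunk by the administrator without touching the process: the binding follows the name, not the numbers.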