
Collaborative Scheduling

IP.com Disclosure Number: IPCOM000240244D
Publication Date: 2015-Jan-15
Document File: 7 page(s) / 110K

Publishing Venue

The IP.com Prior Art Database


The article provides methods to optimize virtualization scheduling decisions. Today, many virtualization scheduling decisions are based on guesswork; this idea strives to significantly improve on that by allowing guest and host to collaborate. A rather trivial example is a guest with two virtual CPUs that schedules an important job on one and an unimportant job on the other. Without this idea, the Hypervisor has no clue about the guest's job priorities when forced to decide between the two. At the core of the idea is the ability to share guest scheduling information with the host, which allows the host to make smarter decisions.
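The two-vCPU example above can be sketched in a few lines of C. This is a minimal illustration, not the disclosed implementation; the structure and function names are assumptions for the sake of the example.

```c
#include <assert.h>

/* Illustrative sketch: the guest publishes a per-vCPU priority that the
 * host can use when forced to choose between two vCPUs. The field and
 * function names here are hypothetical, not from the disclosure. */
struct vcpu {
    int id;
    int guest_priority; /* higher = more important, published by the guest */
};

/* Host side: prefer the vCPU running the more important guest job. */
static struct vcpu *pick_vcpu(struct vcpu *a, struct vcpu *b)
{
    return (a->guest_priority >= b->guest_priority) ? a : b;
}
```

Without the shared priority, the host would have to pick between the two vCPUs blindly; with it, the choice becomes informed.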

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 38% of the total text.



Disclosed is a device that provides methods to improve the ability of virtualization implementations to make smarter scheduling decisions by enabling collaboration between different virtualization layers.


Hypervisor technology has evolved tremendously in recent years, and overhead has been reduced close to the minimum. But sometimes a Hypervisor simply does not have the right information available to make the right decisions.

The idea addresses this lack of knowledge via a register-and-share concept that allows a Hypervisor to better understand the guests it manages.


Illustration 1: States of a virtual CPU



Illustration 2: Host Guest interaction



Illustration 3: Extended states of a virtual CPU



Illustration 4: Guest Task Priority usage

Basics & State of the Art

[501] In virtualization, a Host [160] manages virtual CPUs [100], [110] for its guests.



[502] At any point in time, those virtual CPUs can be backed by real resources of the Hypervisor ("run" [10]) or not.
[503] When a vCPU is running guest code and an event occurs [20] that requires Hypervisor activity, the vCPU is virtually stopped [30]. The host then handles the event and can let the vCPU run again [40].
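The vCPU lifecycle of [502] and [503] can be sketched as a small state machine. This is an illustrative model of the states in the text ("run" [10], event [20], stopped [30], run again [40]), with assumed names:

```c
#include <assert.h>

/* Illustrative vCPU state machine following the text:
 * "run" [10] -> event [20] -> virtually stopped [30] -> run again [40]. */
enum vcpu_state { VCPU_RUN, VCPU_STOPPED };

enum vcpu_event { EVENT_NEEDS_HOST, EVENT_HANDLED };

static enum vcpu_state vcpu_transition(enum vcpu_state s, enum vcpu_event e)
{
    if (s == VCPU_RUN && e == EVENT_NEEDS_HOST)
        return VCPU_STOPPED;   /* [20] -> [30]: host must intervene */
    if (s == VCPU_STOPPED && e == EVENT_HANDLED)
        return VCPU_RUN;       /* [40]: host lets the vCPU run again */
    return s;                  /* no transition otherwise */
}
```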
[504] The current state of the art often handles guest vCPUs like processes in terms of Host scheduling.
[505] But a guest's virtual CPUs are not entirely like processes, especially when one gets interrupted as in [503]. Due to the delivery of the event to the guest, the processes in the guest's run queue [100], [110] can become runnable or non-runnable.
[506] A multitude of collaborative scheduling technologies have been patented, but all of them use active components, such as agents in the guest, to make guest information accessible. This stands in sharp contrast to the solution described here, where the hypervisor host can decide if and when to look at guest information, because, as pointed out, the information is more of a tie breaker than a commonly required data set.
[507] Also, regarding chargeback: in the prior art of [506], CPU consumption is charged to the party collecting the data, not necessarily the one benefitting from it. The hypervisor, on the other hand, could decide either to do it transparently or to charge it to the receiver of the scheduling improvement.
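The register-and-share concept of [506] might look like the following sketch: the guest registers a small, passively readable region, and the host consults it only when it needs a tie breaker. All names and fields here are assumptions for illustration, not the disclosed format.

```c
#include <assert.h>

/* Hypothetical register-and-share layout: the guest publishes this
 * read-only structure; the host may consult it as a tie breaker, or
 * ignore it entirely. No active agent in the guest is required. */
struct guest_sched_info {
    int runnable_tasks; /* size of the guest run queue */
    int top_priority;   /* priority of the most important runnable task */
};

/* Host side: consult the shared regions only to break a tie between two
 * otherwise equal vCPUs. Returns 0 to prefer a, 1 to prefer b. */
static int host_tiebreak(const struct guest_sched_info *a,
                         const struct guest_sched_info *b)
{
    if (a->top_priority != b->top_priority)
        return (a->top_priority > b->top_priority) ? 0 : 1;
    return (a->runnable_tasks >= b->runnable_tasks) ? 0 : 1;
}
```

Because the host reads the region on its own initiative, the cost of consulting it can be attributed however the hypervisor chooses, as discussed in [507].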

Prior Art - Pseudo page faults

[510] One common cause of events that force a guest to exit is page faults caused by a guest process. But there are two kinds of faults.
[511] One type is a fault inside the guest that the guest can resolve itself - with proper hardware support, this does not even need to make the vCPU exit.
[512] The other type is caused by the fact that the Host can overcommit memory and, to achieve this, swap pages of the guest out to disk. As soon as a guest process accesses such memory, the host must be involved to bring that page back into memory before the guest can continue.
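The distinction between the two fault kinds of [511] and [512] can be captured in a tiny helper. This is an assumed classification for illustration only:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative classification of the two fault kinds described above
 * (names assumed): a guest-resolvable fault need not exit the vCPU,
 * while a host-swapped page forces host involvement. */
enum fault_kind { FAULT_GUEST_RESOLVABLE, FAULT_HOST_SWAPPED };

static bool fault_needs_vcpu_exit(enum fault_kind k)
{
    /* Only a page the host swapped out requires the host to step in
     * and bring it back before the guest can continue. */
    return k == FAULT_HOST_SWAPPED;
}
```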

If the host would...