
Low Latency Kernel-Level Interpartition Communications Via Shared-Memory IO

IP.com Disclosure Number: IPCOM000229293D
Publication Date: 2013-Jul-18
Document File: 5 page(s) / 47K

Publishing Venue

The IP.com Prior Art Database

Abstract

Described is a low-latency, kernel-level interpartition communications mechanism based on shared-memory IO.



The ideal for inter-process communications is that a message residing in a buffer of a "sender" is copied directly into the message buffer of a "receiver", waking that receiver when the message arrives. Such buffers, though, typically reside in different address spaces, so this ideal is not achievable in practice. Similarly, the ideal for inter-partition communications between processes is that the communications occur at least as fast as inter-process communications within a single partition. But, just as with partitions residing on separate systems, the storage available to each of these partitions is intentionally isolated from that of the others; the source and target buffers reside in intentionally isolated address spaces.

    This normally means that inter-partition communications within the same SMP is provided via a virtual Ethernet bridge supported by a hypervisor between these partitions. As a result, there is significant processing overhead to do what is logically a very simple function. Much of the associated processing proceeds as though the communications were between partitions on different systems using Ethernet, even though the partitions reside in the same SMP. The difference between virtual Ethernet and actual Ethernet DMA-based data transport is that the bridging hypervisor provides the actual transport via real address-based memory copy operations. In short, this is a low-level emulation of an Ethernet bridge. It works, of course, but it incurs far more processing and latency than is strictly needed: considerable processing is spent setting up high-level messaging so that it can actually be sent and received over low-level hardware architectures. This introduces CPU overhead and message latency well in excess of the ideal.

    What is wanted instead is a data transport which approaches the ideal latency, but one which also shields most high-level communications architectures from any awareness that a faster, higher-level transport is being used. At the same time, one would like to give these higher-level communications architectures the opportunity to use the new transport more directly if still better latencies are desired.

    This can be done through coordinated use of both a new Intelligent Interrupt Capability (IIC) and inter-partition shared memory addressable via higher-level effective addressing. In this disclosure, the actual data transport will typically be done at the OS kernel level using the high-level effective addresses of the associated message buffers, with control efficiently provided by the IIC.
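    The following is a minimal sketch, not taken from the disclosure, of how such a kernel-level transport might look in C. It assumes both partitions map the same physical window at a known effective address, that the sender copies the message payload directly into a slot of a queue residing in that window, and that a stubbed iic_notify() stands in for the Intelligent Interrupt Capability that would wake the receiving partition's kernel. The structure layout, sizes, and function names are illustrative assumptions, not the disclosure's actual interfaces.

/*
 * Sketch of a single-copy, shared-memory message send between partitions.
 * The sm_queue is assumed to live in the inter-partition shared-memory
 * window, visible to both sender and receiver at effective addresses.
 */
#include <stdatomic.h>
#include <stdint.h>
#include <string.h>

#define SLOT_SIZE 256
#define NUM_SLOTS 64

struct msg_slot {
    atomic_uint_fast32_t ready;      /* 0 = free, 1 = message present     */
    uint32_t             length;     /* payload bytes actually used       */
    uint8_t              data[SLOT_SIZE];
};

struct sm_queue {                    /* resides in the shared-memory window */
    atomic_uint_fast32_t head;       /* next slot the sender will fill    */
    atomic_uint_fast32_t tail;       /* next slot the receiver will drain */
    struct msg_slot      slots[NUM_SLOTS];
};

/* Stub for illustration; a real system would ask the IIC to interrupt
 * the target partition here. */
static void iic_notify(uint32_t target_partition) { (void)target_partition; }

/* Copy a message directly into the receiver-visible queue and signal it. */
int sm_send(struct sm_queue *q, uint32_t target_partition,
            const void *buf, uint32_t len)
{
    if (len > SLOT_SIZE)
        return -1;                                 /* message too large  */

    uint32_t head = (uint32_t)atomic_load(&q->head);
    struct msg_slot *slot = &q->slots[head % NUM_SLOTS];

    if (atomic_load(&slot->ready))
        return -2;                                 /* queue is full      */

    memcpy(slot->data, buf, len);                  /* the single copy,   */
    slot->length = len;                            /* done via effective */
    atomic_store(&slot->ready, 1);                 /* addressing         */
    atomic_store(&q->head, head + 1);

    iic_notify(target_partition);                  /* wake the receiver  */
    return 0;
}

    The point of the sketch is that only one memory copy stands between the sender's buffer and storage the receiver can read directly; the notification, rather than a hypervisor-mediated Ethernet emulation, is what remains of the per-message control path.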

    The physical memory used by partitions residing in a single cache-coherent SMP is normally partitioned (isolated from other partitions) by first ensuring that the partitions proper use only higher-level forms of addressing - called Virtual Addressing in the Power architecture - and only allowing the hypervisor common to many parti...