Method and apparatus to virtualize and share NVMe devices in a performance efficient manner

IP.com Disclosure Number: IPCOM000249207D
Publication Date: 2017-Feb-10
Document File: 4 page(s) / 53K

Publishing Venue

The IP.com Prior Art Database

Abstract

NVM Express (NVMe) is a logical device interface specification for accessing non-volatile storage media such as flash devices. The NVMe protocol supports a very large number of IO queues and a large number of commands in each queue. Not all hosts may use all of the bandwidth and storage offered by NVMe devices, so by virtualizing NVMe devices they can be better utilized. This article proposes a method of virtualizing NVMe devices by sharing and managing NVMe Input/Output queues across multiple Virtual Machines (VMs).


NVM Express (NVMe) is a logical device interface specification for accessing non-volatile storage media such as flash devices attached via the PCIe (PCI Express) bus or over fabrics. It has been designed to make the best use of the low latency and parallelism of flash devices. The NVMe protocol supports a vast number (64K) of IO Submission and Completion queues and a vast number (64K) of commands in each queue.

Problem: Not all Virtual Machines (VMs) require all of the storage and performance characteristics of an NVMe device for themselves. In many use cases it is sufficient for a virtual machine to get just a fraction of the device. So, an NVMe adapter can be better utilized and shared in a virtualized environment by virtualizing the device, just as was done for SCSI (using vSCSI) and FC (using NPIV).

In the proposed method, at a high level, each VM is assigned one or more IO Submission/Completion queue pairs as per the VM’s requirement. These IO queues are mapped directly to the VM, and mainline IO processing happens without (or with minimal) intervention of the “IO hosting server” (e.g. the Virtual IO Server in PowerVM). The VM communicates with the physical NVMe adapter directly through these IO queue pairs: memory for each Submission Queue (SQ) / Completion Queue (CQ) pair is allocated in the VM’s memory, and the physical adapter’s queue registers are programmed with these addresses. A technology such as PowerVM Logical Remote DMA (LRDMA) can be used so that the physical NVMe adapter efficiently (directly) accesses the VM’s memory without the intervention of the IO hosting server.
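
As a rough illustration of this queue-mapping step, the following C sketch shows an IO hosting server assigning an SQ/CQ pair to a VM and programming the adapter’s queue registers with the addresses of memory allocated on the VM’s side. All names (nvme_adapter_regs, vm_queue_pair, assign_queue_pair) are hypothetical; the VM’s memory is simulated with ordinary heap allocations, and the register programming and LRDMA setup are reduced to plain structure writes.

/*
 * Hypothetical sketch of per-VM IO queue pair assignment.
 * None of these names come from PowerVM, VIOS or an NVMe driver;
 * they only illustrate the mapping described above.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define SQ_ENTRY_SIZE   64      /* NVMe submission queue entry size in bytes */
#define CQ_ENTRY_SIZE   16      /* NVMe completion queue entry size in bytes */
#define MAX_QUEUE_PAIRS 16      /* small illustrative limit */

/* Queue registers of the physical adapter as seen by the IO hosting server.
 * In a real system these would be set up through NVMe admin commands. */
struct nvme_adapter_regs {
    uint64_t sq_base[MAX_QUEUE_PAIRS];  /* base address of each SQ */
    uint64_t cq_base[MAX_QUEUE_PAIRS];  /* base address of each CQ */
    uint16_t q_depth[MAX_QUEUE_PAIRS];  /* entries per queue */
};

/* One IO queue pair owned directly by a VM. */
struct vm_queue_pair {
    int      vm_id;
    int      qid;       /* queue id on the physical adapter */
    void    *sq_mem;    /* SQ memory allocated from the VM's memory */
    void    *cq_mem;    /* CQ memory allocated from the VM's memory */
    uint16_t depth;
};

/* Allocate SQ/CQ memory "in the VM's memory" (simulated with calloc) and
 * program the adapter's queue registers with those addresses, so the
 * adapter can reach them through an LRDMA-like mechanism. */
static int assign_queue_pair(struct nvme_adapter_regs *regs,
                             struct vm_queue_pair *qp,
                             int vm_id, int qid, uint16_t depth)
{
    qp->vm_id  = vm_id;
    qp->qid    = qid;
    qp->depth  = depth;
    qp->sq_mem = calloc(depth, SQ_ENTRY_SIZE);
    qp->cq_mem = calloc(depth, CQ_ENTRY_SIZE);
    if (qp->sq_mem == NULL || qp->cq_mem == NULL)
        return -1;

    regs->sq_base[qid] = (uint64_t)(uintptr_t)qp->sq_mem;
    regs->cq_base[qid] = (uint64_t)(uintptr_t)qp->cq_mem;
    regs->q_depth[qid] = depth;
    return 0;
}

int main(void)
{
    struct nvme_adapter_regs regs = { 0 };
    struct vm_queue_pair qp;

    /* Give VM 1 one queue pair of 256 entries on adapter queue id 1. */
    if (assign_queue_pair(&regs, &qp, 1, 1, 256) != 0)
        return 1;

    printf("VM %d: qid %d, SQ at %p, CQ at %p, depth %u\n",
           qp.vm_id, qp.qid, qp.sq_mem, qp.cq_mem, (unsigned)qp.depth);
    free(qp.sq_mem);
    free(qp.cq_mem);
    return 0;
}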

Major advantages of this solution are:

1. Virtual Machines can share and benefit from the advantages of NVMe without needing to own a complete device.

2. All NVMe-specific applications work as-is on the VM, as the details of NVMe virtualization are abstracted at the virtual NVMe client adapter layer itself.

3. Live partition migration (aka VM motion) works with NVMe adapter virtualization.


Following is a high-level overview of the proposed method for virtualizing the NVMe adapter:

1. The “IO hosting server”, i.e. the Virtual IO Server (VIOS), owns the physical NVMe device/controller and facilitates sharing it across multiple Virtual Machine (VM) clients with the help of the hypervisor.

2. The VIOS performs the NVMe controller management tasks, and owns and handles the NVMe Admin Submission and Completion queues.

3. The VIOS, with the support and knowledge of the hypervisor, manages a light-weight “virtual NVMe host adapter”, i.e. vNVMeHost (a minimal sketch of this ownership split follows the list).

4. Virtual Machine (VM) with...
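
To make the split of responsibilities in items 1-3 concrete, here is a minimal, hypothetical C sketch: queue id 0 (the Admin SQ/CQ pair) stays with the VIOS, while a vNVMeHost-style allocator hands out the remaining IO queue ids to requesting VM clients. The names (queue_slot, vnvmehost_grant_io_queue) are illustrative only, and the real admin-command interaction with the adapter is reduced to a comment.

/*
 * Hypothetical sketch of the ownership split described above: the VIOS
 * keeps the Admin queue pair (qid 0) and grants IO queue ids to VM
 * clients through its vNVMeHost. Illustrative only.
 */
#include <stdio.h>

enum queue_owner { OWNER_VIOS_ADMIN, OWNER_VM };

struct queue_slot {
    int              qid;
    enum queue_owner owner;
    int              vm_id;    /* valid only when owner == OWNER_VM */
};

#define MAX_QUEUES 8

/* qid 0 is always the Admin SQ/CQ pair, handled by the VIOS. Slots
 * 1..MAX_QUEUES-1 start zeroed; qid == 0 there means "not yet granted". */
static struct queue_slot queues[MAX_QUEUES] = {
    { .qid = 0, .owner = OWNER_VIOS_ADMIN, .vm_id = -1 },
};

/* The vNVMeHost on the VIOS side grants the next free IO queue pair to a
 * requesting VM client; returns the qid, or -1 if none are free. */
static int vnvmehost_grant_io_queue(int vm_id)
{
    for (int qid = 1; qid < MAX_QUEUES; qid++) {
        if (queues[qid].qid == 0) {
            queues[qid].qid   = qid;
            queues[qid].owner = OWNER_VM;
            queues[qid].vm_id = vm_id;
            /* Here the VIOS would issue Create IO SQ/CQ admin commands on
             * the adapter's Admin queues on behalf of this VM. */
            return qid;
        }
    }
    return -1;
}

int main(void)
{
    int q_for_vm1 = vnvmehost_grant_io_queue(1);
    int q_for_vm2 = vnvmehost_grant_io_queue(2);

    printf("VM1 got qid %d, VM2 got qid %d; qid 0 stays with the VIOS\n",
           q_for_vm1, q_for_vm2);
    return 0;
}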