Method for I/O device virtualization with multiple I/O servers

IP.com Disclosure Number: IPCOM000236023D
Publication Date: 2014-Apr-02
Document File: 2 page(s) / 30K

Publishing Venue

The IP.com Prior Art Database

Abstract

In a multiple I/O server scenario, clients are usually tied to a single I/O server, which leads to less than optimal use of storage bandwidth. We propose a scheme that allows clients to submit I/O requests to a common pool and have multiple I/O servers service those requests optimally.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 43% of the total text.

In a virtualized I/O environment a client computer operating system (client OS) makes I/O requests to a virtual storage device presented by an I/O server. The I/O server maps the client's virtual storage device onto some backing device accessible to the I/O server. The virtual storage device is often a virtual disk drive backed by (some portion of) a physical disk drive attached to the I/O server.

I/O requests are conveyed between the client OS and the I/O server via some communications channel. In the client OS, the channel is typically associated with a virtual controller device. On the I/O server, the set of virtual devices to be presented to the client is bound to the channel, and the devices are enumerated in some way to distinguish them. On the client, the channel may therefore appear, for example, as a virtual SCSI adapter which has access to a set of Logical Units representing virtual disks.

The backing device need not be private to a single I/O server, and the client can have multiple communications channels with various I/O servers. This allows configurations such as a client with multiple virtual SCSI adapters capable of accessing the same backing devices via multiple I/O servers.
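
As a rough illustration of this conventional model, the sketch below (in Go, with type and field names invented for illustration rather than taken from the disclosure) shows a client with two virtual SCSI channels, each bound to a single I/O server that maps the channel's logical units onto a shared backing device.

    package main

    import "fmt"

    // BackingDevice is a physical disk (or a portion of one) reachable from
    // one or more I/O servers.
    type BackingDevice struct {
        Name string
    }

    // IOServer maps each logical unit presented on a channel onto a backing device.
    type IOServer struct {
        Name   string
        LUNMap map[string]map[int]*BackingDevice // channel ID -> LUN -> backing device
    }

    // Channel is the client-side view: it appears as a virtual SCSI adapter
    // and is bound to exactly one I/O server.
    type Channel struct {
        ID     string
        Server *IOServer
    }

    // ClientOS holds one or more channels; several channels may reach the
    // same backing device through different I/O servers.
    type ClientOS struct {
        Channels []*Channel
    }

    func main() {
        disk := &BackingDevice{Name: "shared-disk-0"}

        s1 := &IOServer{Name: "ios-1", LUNMap: map[string]map[int]*BackingDevice{"vscsi0": {0: disk}}}
        s2 := &IOServer{Name: "ios-2", LUNMap: map[string]map[int]*BackingDevice{"vscsi1": {0: disk}}}

        client := &ClientOS{Channels: []*Channel{
            {ID: "vscsi0", Server: s1},
            {ID: "vscsi1", Server: s2},
        }}

        // In the conventional model, issuing a request on a channel implicitly
        // chooses that channel's I/O server.
        for _, ch := range client.Channels {
            dev := ch.Server.LUNMap[ch.ID][0]
            fmt.Printf("channel %s -> server %s -> %s\n", ch.ID, ch.Server.Name, dev.Name)
        }
    }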

The problem with this model is that the client's I/O requests are issued via particular channels, and those channels are bound to particular I/O servers. This effectively forces the client to choose the I/O server for each I/O request. But the client lacks the information necessary to choose the _best_ I/O server for each request, information such as: the current health and workload of the I/O servers, the ease and speed with which the various I/O servers can access the backing storage, etc.

Using this invention, the client publishes I/O requests to all I/O servers at once, rather than sending each request to a single I/O server. This has two significant advantages:


1) It shifts the burden of choosing a "best" I/O server from the client to the I/O servers, which have better information about accessibility to the backing devices, etc.


2) It allows fail-over of the I/O request from one I/O server to another without the client's involvement. If the request is fielded by an I/O server which is ultimately unable to complete it, the request can be assigned to a second I/O server without the client's interaction (or even knowledge). In the existing model, the failure by the first I/O server would flow back to the client which would then have to choose a second I/O server in order to retry the request.
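
A minimal sketch of this publish-once model follows, again in Go with invented names. The client places each request into a pool visible to all I/O servers; whichever server claims a request first services it, and a server that cannot complete a request returns it to the pool so another server can take over without the client's involvement. The first-come claim and requeue-on-failure policy shown here is one plausible realization, not necessarily the one in the full disclosure.

    package main

    import (
        "fmt"
        "sync"
    )

    // IORequest is published once by the client; any I/O server may claim it.
    type IORequest struct {
        ID       int
        DeviceID string // environment-wide identifier of the backing device
    }

    // Pool is the shared request pool visible to all I/O servers.  A buffered
    // channel gives first-come, atomic "claim" semantics.
    type Pool struct {
        requests chan IORequest
        results  chan string
    }

    // server claims requests from the pool.  If it cannot complete one
    // (simulated here by failingID), it returns the request to the pool so
    // another server can take over; the client is never involved.
    func server(name string, failingID int, p *Pool, wg *sync.WaitGroup) {
        defer wg.Done()
        for req := range p.requests {
            if req.ID == failingID {
                fmt.Printf("%s: cannot complete request %d, returning it to the pool\n", name, req.ID)
                p.requests <- req
                continue
            }
            p.results <- fmt.Sprintf("request %d on %s completed by %s", req.ID, req.DeviceID, name)
        }
    }

    func main() {
        p := &Pool{
            requests: make(chan IORequest, 16),
            results:  make(chan string, 16),
        }

        var wg sync.WaitGroup
        wg.Add(2)
        // ios-1 pretends it cannot reach the backing device for request 2, so
        // that request fails over to ios-2 without the client's knowledge.
        go server("ios-1", 2, p, &wg)
        go server("ios-2", -1, p, &wg)

        // The client publishes requests; it does not choose an I/O server.
        for i := 1; i <= 4; i++ {
            p.requests <- IORequest{ID: i, DeviceID: "shared-disk-0"}
        }

        // The client sees only completions, regardless of which server did the work.
        for i := 0; i < 4; i++ {
            fmt.Println(<-p.results)
        }
        close(p.requests)
        wg.Wait()
    }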

The invention works as follows:


- The participating I/O servers coordinate to assign a unique, environment-wide identifier to the backing device that is to be virtualized (one option for this is sketched at the end of this excerpt). This could be done via any of the following:


- Identifiers assigned by a human administrator
- Identifiers inherited from the underlying storage (e.g. based on file names in an underlying shared filesystem)
- Identifiers chosen by consensus among the I/O servers

- When a client O...
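
To illustrate the identifier-assignment step in the first bullet above, the sketch below (in Go, with a hypothetical path and naming scheme) shows the second option, identifiers inherited from the underlying storage: each I/O server independently derives the same environment-wide identifier by hashing the backing file's name in a shared filesystem. The hashing scheme itself is an illustrative assumption, not prescribed by the disclosure.

    package main

    import (
        "crypto/sha256"
        "fmt"
    )

    // deviceIDFromBackingPath derives a stable, environment-wide identifier for
    // a backing device from its name in an underlying shared filesystem (the
    // "identifiers inherited from the underlying storage" option).  The exact
    // hashing scheme is an illustrative assumption.
    func deviceIDFromBackingPath(path string) string {
        sum := sha256.Sum256([]byte(path))
        return fmt.Sprintf("vdev-%x", sum[:8])
    }

    func main() {
        // Every I/O server that can see this file computes the same identifier,
        // so they all agree on which virtual device a client request refers to.
        path := "/shared/pool/volumes/client-disk0.img"
        fmt.Println(deviceIDFromBackingPath(path))
    }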