
Host Operation Precedence with Parity Update Groupings for Raid Performance

IP.com Disclosure Number: IPCOM000104194D
Original Publication Date: 1993-Mar-01
Included in the Prior Art Database: 2005-Mar-18
Document File: 4 page(s) / 88K

Publishing Venue

IBM

Related People

Champion, JR: AUTHOR [+6]

Abstract

A RAID 5 Host Operation (op) prioritizing algorithm is presented. Two op queues are maintained at the DASD interface - a high priority Host request queue and a low priority parity update queue. The high priority queue is favored whenever possible in order to reduce Host op response times. A group of parity update requests is serviced at one time in order to reduce seek times. Together, these features improve the overall performance (response time and throughput) of a RAID 5 configuration.


Host Operation Precedence with Parity Update Groupings for Raid Performance


      The algorithm is best introduced with a short illustration.

      Fig. 1.A represents 4 drives in an idle RAID 5 configuration.
The drives are numbered across the top, and 2 FIFO op queues for each
drive are represented across the bottom (only the top 3 elements of a
queue are shown).  Let the top of each queue represent the next op to
be processed in that queue.  The leftmost drive op queue represents a
High priority request queue and the rightmost queue represents a Low
priority request queue.  In general, Host I/O gets placed on a
drive's High priority queue, and parity band requests are placed on
the Low priority queue.  When a drive is ready to process a request,
the queues are examined.  High priority requests are favored and are
usually processed ahead of Low priority requests.  The exact rules for
selecting a queue follow.
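
      As a rough sketch of the two-queue arrangement just described,
the C fragment below models one drive's pair of FIFO queues and a
simple strict-priority pick.  The structure and routine names (struct
drive, select_next_op, and so on) are illustrative assumptions; the
sketch does not model the grouping of parity updates or the cases in
which a Low priority request may be taken first.

/* Sketch only: one drive's High/Low priority FIFO queues and a
 * strict-priority selection.  Names are hypothetical. */
#include <stddef.h>

enum op_type { HOST_READ, HOST_WRITE, PARITY_UPDATE };

struct op {
    enum op_type type;
    unsigned long lba;       /* target block address on the drive */
    struct op *next;         /* FIFO chaining */
};

struct fifo {
    struct op *head, *tail;
};

struct drive {
    struct fifo high;        /* Host request queue (High priority) */
    struct fifo low;         /* parity update queue (Low priority) */
};

static void fifo_push(struct fifo *q, struct op *o)
{
    o->next = NULL;
    if (q->tail)
        q->tail->next = o;
    else
        q->head = o;
    q->tail = o;
}

static struct op *fifo_pop(struct fifo *q)
{
    struct op *o = q->head;
    if (o) {
        q->head = o->next;
        if (q->head == NULL)
            q->tail = NULL;
    }
    return o;
}

/* Take the next Host op if one is queued; otherwise fall back to the
 * parity update queue.  In the full algorithm, Low priority requests
 * would be serviced as a group to reduce seek time. */
static struct op *select_next_op(struct drive *d)
{
    return d->high.head ? fifo_pop(&d->high) : fifo_pop(&d->low);
}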

      Fig. 1.B shows a Host write op directed to drive 1 with the
associated parity update op sent to drive 2.  The Host op is
considered High priority and is placed accordingly.  The parity op is
Low priority and is loaded onto drive 2's Low priority queue.
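
      The Fig. 1.B routing maps onto the hypothetical sketch above
roughly as follows, with the Host write pushed to the data drive's
High priority queue and the parity update pushed to the parity
drive's Low priority queue (drive indices are illustrative):

/* Fig. 1.B, expressed with the hypothetical structures sketched
 * above: the Host write goes to drive 1's High queue, its parity
 * update to drive 2's Low queue. */
void route_host_write(struct drive drives[], int data_drive,
                      int parity_drive, struct op *write_op,
                      struct op *parity_op)
{
    fifo_push(&drives[data_drive].high, write_op);    /* e.g. drive 1 */
    fifo_push(&drives[parity_drive].low, parity_op);  /* e.g. drive 2 */
}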

      Fig. 1.C shows a High priority Host read op being loaded onto
drive 2's High priority queue before drive 2 has a chance to process
requests.  When drive 2 is ready to process, it will select the Host
read over the parity op.
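
      Continuing the sketch, the Fig. 1.C selection behaves as shown
below: with a Host read on drive 2's High queue and a parity update
waiting on its Low queue, the next selection returns the Host read.

/* Fig. 1.C with the hypothetical sketch above: the Host read arrives
 * before drive 2 is serviced, so it is selected ahead of the pending
 * parity update. */
void fig_1c_example(struct drive *drive2, struct op *host_read)
{
    fifo_push(&drive2->high, host_read);       /* Host read queued      */
    struct op *next = select_next_op(drive2);  /* returns the Host read */
    (void)next;                                /* parity op still queued */
}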

      Fig. 1.D shows a Host write op being directed to drive 3 with a
parity update once again hitting drive 2.  After drive 2 processes
its pending Host read op, it may service t...