Dynamic Cache Destage Queue Depth For Optimum Performance

IP.com Disclosure Number: IPCOM000234771D
Publication Date: 2014-Feb-03
Document File: 7 page(s) / 78K

Publishing Venue

The IP.com Prior Art Database

Abstract

Described is a method of dynamically altering the destage threshold that provides a consistent and even destage rate, the lowest possible destage queue depth, and thus the best possible read response time.



In a caching storage system or subsystem, the cache provides three main benefits. First, data written into the cache does not need to wait to be written into slower back-end storage, so system write requests have very low latency. Second, system read requests that "hit", i.e., requests for data that is currently cached, do not need to fetch the data from the back-end storage, and thus also have very low latency. Finally, the cache can act as a coalescence buffer, where data is held in an attempt to coalesce data from multiple system requests into a single back-end storage device write, also known as a destage. Turning a large number of system write requests into fewer back-end storage writes reduces the demand on the back-end storage, allowing a higher overall system throughput when the back-end storage is the bottleneck. As a coalescence buffer, the cache must be sufficiently full to achieve good coalescence, yet must remain empty enough to absorb a large burst of write data.
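    The coalescence benefit can be illustrated with a short sketch. The C fragment below is not taken from the disclosure; it assumes a hypothetical cache that tracks dirty pages per track and merges contiguous dirty pages into a single back-end write. Names such as cached_track and coalesce_destage, and the page/track sizes, are illustrative only.

/*
 * Illustrative sketch only: coalescing several cached host writes into a
 * smaller number of back-end destages.  Structures, names, and sizes are
 * hypothetical, not taken from the disclosure.
 */
#include <stdbool.h>
#include <stdio.h>

#define PAGES_PER_TRACK 16          /* assumed cache granularity */

/* One cached track: a bitmap of pages dirtied by host writes. */
struct cached_track {
    unsigned long track_no;
    bool dirty[PAGES_PER_TRACK];
};

/*
 * Walk the dirty bitmap and issue one back-end write per contiguous run of
 * dirty pages.  Several host writes that landed in adjacent pages become a
 * single destage, which is the coalescence described above.
 */
static int coalesce_destage(const struct cached_track *t)
{
    int destages = 0;
    int i = 0;

    while (i < PAGES_PER_TRACK) {
        if (!t->dirty[i]) {
            i++;
            continue;
        }
        int start = i;
        while (i < PAGES_PER_TRACK && t->dirty[i])
            i++;
        printf("destage track %lu, pages %d-%d as one back-end write\n",
               t->track_no, start, i - 1);
        destages++;
    }
    return destages;
}

int main(void)
{
    struct cached_track t = { .track_no = 42 };

    /* Three host writes that happen to land in adjacent pages ... */
    t.dirty[3] = t.dirty[4] = t.dirty[5] = true;
    /* ... plus one isolated host write. */
    t.dirty[10] = true;

    /* Four host writes are reduced to two back-end writes. */
    printf("back-end writes issued: %d\n", coalesce_destage(&t));
    return 0;
}

    In this toy example four host writes are destaged with two back-end writes; with larger or more sequential write bursts the reduction is correspondingly larger.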

    Overall performance is determined by the balance of reads and destages. The read rate and read queue depth are not within the control of the storage system; the destage queue depth is, so varying it will impact the overall system performance. If the destage queue depth is too low, the cache will fill and system write performance will be severely impacted. If the destage queue depth is too high, the storage devices will be busier with writes and read response time will increase, hurting system read performance.

    In existing designs, a threshold is set at a cache fullness level where sufficient coalescence is obtained and where a large burst of write data can still be absorbed. If the amount of data in cache is above this threshold, destages are initiated; if it is below the threshold, data is not destaged. To avoid a "saw-tooth" of destage bursts followed by inactivity, data that is already being destaged is not counted against the threshold when determining whether a new destage is necessary. With this method, a relatively small difference in the amount of cached data will cause the destage queue depth to go from 0 to the maximum, resulting in a very uneven workload on the drives (a sketch of this behavior appears after the list below). An uneven workload on the drives results in longer latencies when the drives are handling many ops and lower utilization when the drives are idle. To avoid longer read latency in this case, the maximum destage queue depth must be set low. However, setting the maximum destage queue depth low will result in a low maximum write throughput and leave the system more vulnerable to hitting cache full under lighter load. This disclosure describes a method of dynamically altering the destage threshold that provides the following:

1. A consistent and even destage rate.

2. The lowest possible destage queue depth, and thus the best possible read response time.

3. A destage queue depth that is self-optimizing to give priority to reads whil...
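    For contrast with the dynamic method being disclosed (whose details are abbreviated in this extract), the sketch below illustrates the fixed-threshold behavior of the existing designs described above: data already being destaged is excluded from the fullness check, and a small increase in cached data beyond the threshold drives the destage queue straight to its maximum depth. All names and figures (threshold, queue limit, track-sized destages) are assumptions for illustration, not details from the disclosure.

/*
 * Sketch of the existing fixed-threshold destage policy described in the
 * text, using hypothetical names and numbers.  It shows why a small change
 * in the amount of cached data swings the destage queue depth from 0 to the
 * maximum, producing the uneven "saw-tooth" workload on the drives.
 */
#include <stdio.h>

#define MAX_DESTAGE_QUEUE_DEPTH 8   /* assumed per-device limit             */
#define TRACKS_PER_DESTAGE      1   /* assume each destage writes one track */

struct cache_state {
    unsigned long dirty_tracks;       /* modified tracks held in cache       */
    unsigned long destaging_tracks;   /* tracks already queued to the drives */
    unsigned long destage_threshold;  /* fixed fullness threshold, in tracks */
    int           destage_queue_depth;
};

/* Decide whether another destage should be started right now. */
static int should_start_destage(const struct cache_state *c)
{
    /* Data already being destaged is not counted against the threshold. */
    unsigned long counted = c->dirty_tracks - c->destaging_tracks;

    return counted > c->destage_threshold &&
           c->destage_queue_depth < MAX_DESTAGE_QUEUE_DEPTH;
}

/* Keep starting destages until the policy says stop. */
static void drive_destage_queue(struct cache_state *c)
{
    while (should_start_destage(c)) {
        c->destage_queue_depth++;
        c->destaging_tracks += TRACKS_PER_DESTAGE;
    }
}

int main(void)
{
    struct cache_state c = { .destage_threshold = 1000 };

    /* At the threshold: no destages are started and the drives sit idle. */
    c.dirty_tracks = 1000;
    drive_destage_queue(&c);
    printf("1000 dirty tracks -> destage queue depth %d\n",
           c.destage_queue_depth);

    /* Only eight more dirty tracks (a tiny fraction of the cache) and the
     * queue jumps straight to its maximum depth. */
    c.dirty_tracks = 1008;
    drive_destage_queue(&c);
    printf("1008 dirty tracks -> destage queue depth %d\n",
           c.destage_queue_depth);

    return 0;
}

    The method described in this disclosure instead alters the threshold dynamically to obtain a consistent, even destage rate rather than this all-or-nothing behavior.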