Effective Run-Time Thrashing Control Method for Concurrent Mergesorts Using Parallel Prefetching

IP.com Disclosure Number: IPCOM000106523D
Original Publication Date: 1993-Nov-01
Included in the Prior Art Database: 2005-Mar-21
Document File: 2 page(s) / 82K

Publishing Venue

IBM

Related People

Teng, JZ: AUTHOR [+3]

Abstract

Parallel prefetching can be used to improve the system response time for concurrent mergesorts. However, thrashing may develop, i.e., prefetched pages are replaced before they are referenced, and the effectiveness of parallel prefetching can therefore be greatly compromised.

This text was extracted from an ASCII text file.
This is the abbreviated version, containing approximately 52% of the total text.

Disclosed is an effective run-time thrashing control method for the
merge phase of concurrent mergesorts using parallel prefetching.

      Parallel prefetching via multiple disks can be attractive for
improving the average response time of the merge phase of concurrent
mergesorts, where the initial sorted runs are stored on multiple
disks and the final sorted run is written back to another dedicated
disk.  In addition to prefetching on reads, write requests can be
deferred and batched to improve output I/O efficiency.  However,
severe thrashing may develop if the buffer space is not large enough:
a large number of prefetched pages in the buffer can be replaced
before they are referenced.  With fewer output disks than input
disks, deferred writes can further aggravate the thrashing effect.
When thrashing occurs, not only is the earlier work of prefetching
the pages wasted, but a synchronous read (one for which the merge
process must be put into a wait state until the read I/O completes)
is also needed for each replaced page, severely worsening the system
response time.  As a result, thrashing can significantly reduce the
effectiveness of parallel prefetching and degrade system performance.
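As a toy illustration (not part of the disclosure), the following Python sketch models this thrashing effect: prefetched pages of N sorted runs share a fixed-size buffer with FIFO replacement, and any page evicted before the merge references it costs one synchronous read. All names here (`merge_with_prefetch`, `buffer_frames`, `pfq`) are illustrative assumptions, not terms from the original method.

```python
import heapq
from collections import deque

def merge_with_prefetch(runs, buffer_frames, pfq):
    """Toy model of the merge phase.  Each run is a sorted list of keys,
    one key standing in for one page.  Up to `pfq` pages per run are
    prefetched into a shared buffer of `buffer_frames` frames; when the
    buffer is full, the oldest prefetched page is evicted (FIFO).
    Returns (merged keys, sync_reads), where sync_reads counts pages
    that were replaced before being referenced (thrashing)."""
    buffer = deque()                  # FIFO of (run, index) page ids
    in_buf = set()
    sync_reads = 0
    next_pf = [0] * len(runs)         # next page of each run to prefetch

    def prefetch(run):
        if next_pf[run] >= len(runs[run]):
            return                    # run fully prefetched
        if len(buffer) >= buffer_frames:
            victim = buffer.popleft() # thrashing: evict an unread page
            in_buf.discard(victim)
        page = (run, next_pf[run])
        buffer.append(page)
        in_buf.add(page)
        next_pf[run] += 1

    for run in range(len(runs)):      # initial parallel prefetch
        for _ in range(pfq):
            prefetch(run)

    heap = [(keys[0], r, 0) for r, keys in enumerate(runs) if keys]
    heapq.heapify(heap)
    out = []
    while heap:
        key, r, i = heapq.heappop(heap)
        page = (r, i)
        if page in in_buf:            # prefetched page still resident
            in_buf.discard(page)
            buffer.remove(page)
        else:
            sync_reads += 1           # replaced before referenced
        prefetch(r)                   # keep the prefetch pipeline full
        out.append(key)
        if i + 1 < len(runs[r]):
            heapq.heappush(heap, (runs[r][i + 1], r, i + 1))
    return out, sync_reads
```

With a buffer large enough to hold every prefetched page, no synchronous reads occur; shrinking the buffer below N * PFQ frames makes evicted-before-referenced pages, and hence synchronous reads, appear.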

      To merge N sorted runs into a single sorted run, a certain
number of pages from each run can be prefetched from multiple disks
concurrently into the buffer.  The number of pages prefetched in a
single read I/O request is referred to as the prefetch quantity
(PFQ).  After being fetched into the buffer, the pages are merged
into sorted order, and the merged output becomes dirty pages in the
buffer.  The dirty buffer pages are then asynchronously written out
to disk in a batch.  An asynchronous write I/O request is issued when
a certain number...
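The deferred, batched output writes described above can be sketched as follows. `BatchedWriter`, `batch_size`, and `write_io` are hypothetical names chosen for this illustration; dirty pages accumulate until the threshold is reached, and then a single write I/O request covers the whole batch.

```python
class BatchedWriter:
    """Defer dirty output pages and write them in batches, so that one
    asynchronous write I/O request covers `batch_size` pages instead of
    issuing one I/O per page."""

    def __init__(self, batch_size, write_io):
        self.batch_size = batch_size  # pages per asynchronous write I/O
        self.write_io = write_io      # callback standing in for the async write
        self.dirty = []               # deferred dirty pages

    def emit(self, page):
        """Buffer one dirty page; issue a batched write at the threshold."""
        self.dirty.append(page)
        if len(self.dirty) >= self.batch_size:
            self.flush()

    def flush(self):
        """Write out any remaining deferred pages as one batch."""
        if self.dirty:
            self.write_io(list(self.dirty))  # one I/O for the whole batch
            self.dirty.clear()
```

For example, with `batch_size = 4`, emitting ten merged pages issues write I/Os of 4 and 4 pages, and a final `flush()` writes the remaining 2.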