FAST ACCESS TO LARGE MEMORY BLOCKS USING DELAYED RETURN

IP.com Disclosure Number: IPCOM000013404D
Original Publication Date: 2000-Sep-01
Included in the Prior Art Database: 2003-Jun-18
Document File: 2 page(s) / 42K

Publishing Venue

IBM

Abstract

Disclosed is an algorithm to provide fast access to large memory blocks using delayed return: freed blocks of commonly requested sizes are held in a small cache table rather than returned immediately to the free list, so that subsequent requests of similar sizes can be satisfied without traversing the free list.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 52% of the total text.

Disclosed is an algorithm to provide fast access to large memory blocks using delayed return.

There are standard search methods for memory allocation: first fit, best fit, and worst fit. Each method searches a free space list until a hole of the right size is located. However, none of these methods avoids external fragmentation, which can make the linear search very time-consuming. Many methods have been developed to speed up the search. The basic idea behind this one is that if returned blocks are not inserted back into the free list, but instead cached in a cache table, an auxiliary data structure added to the heap, the subsequent search time can be cut down significantly.
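As context for the slow path being avoided, the sketch below shows a first-fit search over a singly linked free list. The `hole` structure and `first_fit` function are hypothetical illustrations, not part of the disclosure; a real heap manager would also split oversized holes and coalesce neighbors.

```c
#include <stddef.h>

/* Hypothetical free-list node: each hole records its size and the next hole. */
struct hole {
    size_t size;
    struct hole *next;
};

/* First-fit search: walk the free list and unlink the first hole that is
 * large enough. Returns the hole, or NULL if no hole fits (the case that
 * external fragmentation makes increasingly likely). */
static struct hole *first_fit(struct hole **head, size_t want) {
    for (struct hole **p = head; *p != NULL; p = &(*p)->next) {
        if ((*p)->size >= want) {
            struct hole *h = *p;
            *p = h->next;      /* unlink the hole from the free list */
            return h;
        }
    }
    return NULL;
}
```

Every allocation and every free pays for a walk like this, which is what motivates caching returned blocks.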

The cache table can be constructed as follows. Based on statistics of requested block sizes, a cache table is created to hold returned blocks of those sizes. To keep searches fast, the table is kept small. To keep internal fragmentation low, the maximum cache block size should not be too large either, say a few thousand bytes. The following example illustrates the design. Given two consecutive cache block sizes, say 44 bytes and 84 bytes, suppose a block of 56 bytes is requested for the first time: an 84-byte block is located by traversing the free list. When the application frees the block, it is held in the cache table instead of being returned to the free list. The next time a block of, say, 64 bytes is requested, the cache table supplies this 84-byte block very quickly without traversing the free list at all.

PROBLEM:

The cache table indeed speeds up searches significantly. At the same time, it prevents the cached blocks from merging back with the free blocks on the free list, so in many situations external fragmentation builds up quickly. When a block larger than the maximum cache block size is requested, the heap manager has to traverse a long free list to locate a suitable hole, and when that block is released it has to traverse the long free list again to reinsert it. This causes significant performance problems for region operations and bitblt operations, since they usually require muc...