
Method for Managing Fixed-Size Memory Blocks when Block Initialization is Costly and Future Need is Unknown

IP.com Disclosure Number: IPCOM000246319D
Publication Date: 2016-May-30
Document File: 6 page(s) / 69K

Publishing Venue

The IP.com Prior Art Database

Abstract

Disclosed is a method for managing fixed-size memory blocks within a software system in order to increase efficiency and reduce cost when the future need for the blocks is unknown. The method uses pointers in the system to track block usage; when an object needs a block again, a block allocator locates the block in the cache and gives it back to the object that last used it.




Software systems sometimes need to allocate fixed-size blocks of memory for brief periods. Initializing a block and its related data structures is costly, and at the end of the usage period it is unknown whether the block will be needed again in the near future or never again. More specifically, assume that the software system has a set of objects, each of which needs to use one or more memory blocks under certain conditions.

As one condition, the size of each block is drawn from a fixed set of block sizes (e.g., 4K, 8K, 16K, etc.). In addition, before an object can use one of the blocks, the block must be initialized, at considerable cost, to the same content it had when the object last used it. Finally, the block is only needed for a brief time, and it is not known whether the object will need the block again after that time (i.e., the object might need the block again after a short period, after a long period, or never again).
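One way to picture the first condition is a small table of size classes, with each request rounded up to the nearest supported size. The C++ sketch below is only illustrative; the particular size list and the function name do not come from the disclosure.

    #include <array>
    #include <cstddef>
    #include <optional>

    // Illustrative fixed set of supported block sizes (4K, 8K, 16K, 32K).
    constexpr std::array<std::size_t, 4> kBlockSizes = {4096, 8192, 16384, 32768};

    // Round a requested size up to the nearest supported block size.
    // Returns std::nullopt when the request exceeds the largest size class.
    std::optional<std::size_t> pickBlockSize(std::size_t requested) {
        for (std::size_t size : kBlockSizes) {
            if (requested <= size) {
                return size;
            }
        }
        return std::nullopt;
    }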

Current approaches to allocating fixed-size blocks of memory for brief periods are often redundant (i.e., repeatedly freeing, allocating, initializing, and re-initializing blocks) and expensive, particularly when future need is unknown.

The novel solution for managing fixed-size memory blocks when block initialization is costly and future need is unknown also maintains a cache of memory blocks of various sizes; however, the new method includes several enhancements.

Each object that uses the memory blocks and needs to perform the costly initialization keeps a pointer to the block that it last used. Thus, when it needs that same block again, the block allocator can quickly locate the block in the cache and give it back to the object (if the block is still available). This allows the object to quickly get a block without re-initializing it.
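A minimal C++ sketch of this mechanism follows; the cache is modeled as a simple set of block pointers, and the names (Block, BlockAllocator, reacquire, release) are illustrative rather than taken from the disclosure.

    #include <cstddef>
    #include <unordered_set>

    // A cached block: raw storage plus its size (illustrative).
    struct Block {
        void*       data;
        std::size_t size;
    };

    // Sketch of a block allocator that keeps returned blocks in a cache so an
    // object can get back the exact block it last used, still initialized.
    class BlockAllocator {
    public:
        // Called when an object wants its previous block back. If the block is
        // still cached, it is handed back without re-initialization.
        Block* reacquire(Block* lastUsed) {
            auto it = cache_.find(lastUsed);
            if (it == cache_.end()) {
                return nullptr;   // block was freed; caller must allocate and re-initialize
            }
            cache_.erase(it);     // block is in use again, so remove it from the cache
            return lastUsed;
        }

        // Called when an object is done with a block for now but may need it later.
        void release(Block* block) {
            cache_.insert(block); // contents stay intact in case the object asks again
        }

    private:
        std::unordered_set<Block*> cache_;
    };

In this sketch, each object keeps a pointer (lastUsed above) to the block it used most recently and passes it to reacquire; only when that returns null does the object fall back to a fresh allocation followed by the costly initialization.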

To prevent excessive memory usage, whenever an object returns a block to the cache and the amount of memory in the cache reaches a certain threshold, the block allocator frees one or more of the least recently used blocks to bring the cached memory back below the threshold. The threshold can be user-specified or set as some percentage of the total memory on the machine.
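The eviction step could be sketched as follows, assuming the cache tracks its total size in bytes and keeps blocks ordered from most to least recently used; the threshold handling and names are illustrative, not from the disclosure.

    #include <cstddef>
    #include <cstdlib>
    #include <list>

    struct CachedBlock {
        void*       data;
        std::size_t size;
    };

    // Cache of returned blocks, ordered from most to least recently used.
    // When the cached bytes exceed the threshold, least recently used blocks are freed.
    class BlockCache {
    public:
        explicit BlockCache(std::size_t thresholdBytes) : threshold_(thresholdBytes) {}

        void add(CachedBlock block) {
            blocks_.push_front(block);        // a newly returned block is most recently used
            cachedBytes_ += block.size;
            trim();
        }

    private:
        void trim() {
            // Free least recently used blocks until the cache is back below the threshold.
            while (cachedBytes_ > threshold_ && !blocks_.empty()) {
                CachedBlock victim = blocks_.back();
                blocks_.pop_back();
                cachedBytes_ -= victim.size;
                std::free(victim.data);       // return the memory to the heap
            }
        }

        std::list<CachedBlock> blocks_;       // front = most recent, back = least recent
        std::size_t cachedBytes_ = 0;
        std::size_t threshold_;
    };

The threshold passed to the constructor could equally be derived at startup as a percentage of the machine's total memory, as the disclosure suggests.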

A background process periodically identifies cases in which the cache has been holding memory blocks for a long period without any object using them. In that case, the block allocator frees one or more of the memory blocks in the cache. The logic is that if the memory has remained unused for that long, the probability of it being needed again in the immediate future is low; returning it to the heap makes it...
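Such a background process might look like the following sketch, assuming each cached block records when it was returned to the cache; the sweep interval and idle limit are illustrative parameters, not values from the disclosure.

    #include <chrono>
    #include <cstdlib>
    #include <list>
    #include <mutex>
    #include <thread>

    struct CachedBlock {
        void* data;
        std::chrono::steady_clock::time_point returnedAt; // when the block entered the cache
    };

    std::mutex cacheMutex;
    std::list<CachedBlock> cache;

    // Background task: periodically free blocks that have sat unused in the cache
    // longer than maxIdle, on the assumption that they are unlikely to be needed soon.
    void reaper(std::chrono::seconds interval, std::chrono::seconds maxIdle) {
        for (;;) {
            std::this_thread::sleep_for(interval);
            auto now = std::chrono::steady_clock::now();
            std::lock_guard<std::mutex> lock(cacheMutex);
            for (auto it = cache.begin(); it != cache.end();) {
                if (now - it->returnedAt > maxIdle) {
                    std::free(it->data);       // return long-idle memory to the heap
                    it = cache.erase(it);
                } else {
                    ++it;
                }
            }
        }
    }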