
Improving performance of memory management subsystems through allocation pattern detection

IP.com Disclosure Number: IPCOM000029340D
Original Publication Date: 2004-Jun-24
Included in the Prior Art Database: 2004-Jun-24
Document File: 2 page(s) / 43K

Publishing Venue

IBM

Abstract

Most modern memory management subsystems for large software systems manage a global memory pool and several separate heaps, where each heap handles the memory requirements of a particular subsection or feature of the software system. Under some memory usage conditions, memory may be thrashed constantly between the global pool and a heap, which can cause severe contention on the global memory pool latch. A memory management subsystem that detects these scenarios can cache additional memory at the heap level to avoid the global pool latch contention, improving memory management subsystem performance considerably.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 51% of the total text.



     In most modern memory management subsystems, there is a single global memory pool containing memory allocated directly from the OS. This pool may be controlled directly by the software system (as is required to manage shared memory allocations), or it may be the default private memory pool maintained, for instance, by the malloc() function in the standard C runtime library. Individual heaps are then allocated to handle one particular aspect of the software system; in a database server, for instance, there could be one heap for each buffer pool, one heap for each active sort, and so on.

     Memory is typically transferred from the global pool to a heap in fixed amounts, determined at development time based on projected heap usage, such that each global pool request transfers enough memory to handle several heap requests. This is countered by a desire not to waste memory at the heap level: if the fixed transfer size is too large and most heaps only ever use a fraction of it, the overall software system will have much higher memory requirements, which may adversely affect its usability and performance. To reduce the overall memory requirements of the system, typical memory management subsystems will also free memory back to the global pool when there is a large contiguous section of free memory at the heap level. Every instance where a heap must either acquire memory from the global pool or release memory back to it requires serialization on the global pool latch.
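The fixed-size transfer scheme described above can be sketched as follows. This is a minimal model, not code from the disclosure: the names (TRANSFER_SIZE, heap_t, heap_alloc) are illustrative, and the global pool latch is modeled as a trip counter rather than a real lock.

```c
/* Sketch of the two-level scheme: a global pool drawn down in fixed
 * TRANSFER_SIZE chunks, with each transfer serialized on the global
 * pool latch (modeled here as a trip counter). Names are illustrative. */
#include <assert.h>
#include <stddef.h>

#define TRANSFER_SIZE 65536u            /* fixed global-pool -> heap amount */

static size_t global_free = 8u * 1024 * 1024; /* memory held by the pool   */
static size_t latch_trips = 0;          /* each transfer takes the latch once */

typedef struct { size_t cached, used; } heap_t;

/* Satisfy a heap request from memory cached at the heap, transferring
 * another fixed chunk from the global pool only on a miss. */
static int heap_alloc(heap_t *h, size_t n)
{
    while (h->cached - h->used < n) {
        if (global_free < TRANSFER_SIZE)
            return -1;                  /* global pool exhausted */
        latch_trips++;                  /* serialize on the global pool latch */
        global_free -= TRANSFER_SIZE;
        h->cached   += TRANSFER_SIZE;
    }
    h->used += n;
    return 0;
}
```

One 64 KB transfer can serve many small heap requests, so the latch is taken once per chunk rather than once per request; the trade-off noted above is that a transfer size that is too large strands memory in heaps that use only a fraction of it.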

     There are certain scenarios in which memory management subsystems can fall into degenerate states that cause excessive serialization on the global pool latch. If the amount of memory transferred from the global pool to the heap is sized too small at development time, or if at runtime the software system issues heap requests of vastly different sizes, excessive global pool latch contention results, degrading the response time of the memory management subsystem considerably. Alternatively, if a particular heap has a fairly constant memory requirement but experiences very short memory usage spikes, each spike will likely result in acquiring the global pool latch to transfer memory to the heap to handle the spike, and shortly afterwards acquiring the global pool latch again to transfer the memory back to the global pool.
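The spike pattern just described can be shown with a small model. All names, sizes, and the eager release policy here are illustrative assumptions, not details from the disclosure; the point is that every short spike costs one latch trip to grow the heap and a second to shrink it back.

```c
/* Model of the spike scenario: a heap with a constant baseline plus
 * short usage spikes, under an eager policy that returns any fully
 * unused chunk to the global pool immediately. Names are illustrative. */
#include <assert.h>
#include <stddef.h>

#define TRANSFER_SIZE 65536u

static size_t latch_trips = 0;     /* each grow or shrink takes the latch once */

typedef struct { size_t cached, used; } heap_t;

static void heap_alloc(heap_t *h, size_t n)
{
    while (h->cached - h->used < n) {   /* miss: grow from the global pool */
        latch_trips++;                  /* global pool latch taken         */
        h->cached += TRANSFER_SIZE;
    }
    h->used += n;
}

static void heap_free(heap_t *h, size_t n)
{
    h->used -= n;
    /* Eager policy: a fully unused chunk goes straight back to the pool. */
    if (h->cached - h->used >= TRANSFER_SIZE) {
        latch_trips++;                  /* global pool latch taken again   */
        h->cached -= TRANSFER_SIZE;
    }
}
```

With a 60000-byte baseline and 8192-byte spikes, each spike overflows the cached chunk, so the heap grows and then immediately shrinks: two latch trips per spike, even though the heap's overall requirement is essentially constant.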

     Disclosed is a system that detects the scenarios mentioned above individually for each heap, and alters the memory management...
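One way the per-heap detection could work is sketched below. This is a hedged reconstruction of the idea, not the disclosed implementation: the counters, the THRASH_LIMIT threshold, and the single-spare-chunk policy are all assumptions. Each heap counts how often a chunk it released to the global pool must be re-acquired on the very next miss; past a threshold, the heap retains a spare chunk instead of releasing it, so later spikes never touch the global pool latch.

```c
/* Hedged sketch of allocation pattern detection per heap; names and
 * thresholds are illustrative assumptions, not from the publication. */
#include <assert.h>
#include <stddef.h>

#define TRANSFER_SIZE 65536u
#define THRASH_LIMIT  3            /* release-then-reacquire events before adapting */

static size_t latch_trips = 0;     /* each grow or shrink takes the latch once */

typedef struct {
    size_t cached, used;
    size_t retain_chunks;          /* spare chunks the heap may keep cached    */
    int    just_shrunk;            /* did the last operation release a chunk?  */
    int    thrash_events;          /* shrink immediately followed by a grow    */
} heap_t;

static void heap_alloc(heap_t *h, size_t n)
{
    if (h->cached - h->used < n) {                  /* miss: must grow */
        if (h->just_shrunk && ++h->thrash_events >= THRASH_LIMIT)
            h->retain_chunks = 1;                   /* pattern detected: adapt */
        do {
            latch_trips++;                          /* global pool latch taken */
            h->cached += TRANSFER_SIZE;
        } while (h->cached - h->used < n);
    }
    h->just_shrunk = 0;
    h->used += n;
}

static void heap_free(heap_t *h, size_t n)
{
    h->used -= n;
    /* Release a chunk only beyond what the heap is allowed to retain. */
    if (h->cached - h->used >= (h->retain_chunks + 1) * TRANSFER_SIZE) {
        latch_trips++;                              /* global pool latch taken */
        h->cached -= TRANSFER_SIZE;
        h->just_shrunk = 1;
    }
}
```

Under the same baseline-plus-spikes workload as before, the first few spikes still pay two latch trips each; once the thrash pattern is detected, the retained spare chunk absorbs every subsequent spike with no global pool latch traffic at all.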