
Reduction of working set used by a message cache by using a free stack with efficient reset.

IP.com Disclosure Number: IPCOM000013716D
Original Publication Date: 2000-Dec-01
Included in the Prior Art Database: 2003-Jun-18
Document File: 2 page(s) / 44K

Publishing Venue

IBM

Abstract

A mechanism is provided for determining when a message cache should be reset and for triggering the reset. It applies to asynchronous, store-and-forward messaging, where an in-memory cache of message headers is used to speed retrieval of messages stored on disk. After a communication failure fills the disk file and resumed service then drains it again, the cache can be left with only a few, widely spaced in-use slots; the mechanism identifies this non-optimal state and triggers a cache reset.


A mechanism is provided for determining when a cache should be reset and for triggering the reset.

This is applicable to store-and-forward communications in which a cache is used to make retrieval of a stored communication more efficient. In asynchronous, store-and-forward messaging, typically only very few messages are stored (e.g. in a file on disk) at any one time, since placing a message in disk storage is closely followed by its retrieval. However, when a communication failure occurs, a large number of messages can remain stored on disk until normal service is resumed. This change of state of the disk file, from nearly empty to very full and then back to nearly empty, can leave the cache in a state that would benefit from a reset.

If the cache is used to store the headers of messages placed in storage, and retrieval of messages then returns the file to the nearly empty state, the cache of message headers can be left with only a few, widely spaced in-use slots. This is a non-optimal use of resources. A mechanism is provided for identifying when this non-optimal situation has arisen and then triggering a cache reset. A specific solution is described below by way of an example.

In a message queuing environment, messages are PUT to a queue, which is typically a file on disk. When a process GETs a message, depending on the options requested on the GET, some or all of the message headers may be accessed to determine the correct message to return to the requesting process. To improve GET times, each message's header may be held in an in-memory cache. The cache consists of a large, fixed number of fixed-size slots, each of which holds one message's header. The cache has a NEXT pointer associated with it, which points to the next available slot in the cache. When a message is PUT to the file, the message header is saved in the slot pointed to by NEXT, and NEXT is incremented. When NEXT reaches the end of the cache, it is reset to the first slot.
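
As an illustration only, such a round-robin header cache might look like the following C sketch. The names and sizes (HeaderCache, HEADER_SIZE, the match callback) are assumptions for the example, not taken from any actual product code:

    #include <string.h>

    #define CACHE_SLOTS 4096        /* illustrative: a large, fixed number of slots */
    #define HEADER_SIZE 64          /* illustrative fixed slot (header) size        */

    typedef struct {
        char data[HEADER_SIZE];     /* one message's header per slot */
        int  in_use;
    } Slot;

    typedef struct {
        Slot slots[CACHE_SLOTS];
        int  next;                  /* the NEXT pointer: next available slot */
    } HeaderCache;

    /* PUT: save the header in the slot pointed to by NEXT, then increment NEXT,
       wrapping back to the first slot when the end of the cache is reached.    */
    static int cache_put(HeaderCache *c, const char *header)
    {
        int slot = c->next;
        memcpy(c->slots[slot].data, header, HEADER_SIZE);
        c->slots[slot].in_use = 1;
        c->next = (c->next + 1) % CACHE_SLOTS;      /* round-robin advance */
        return slot;
    }

    /* GET: scan the cached headers to find the message that satisfies the
       request, release its slot, and return its index (-1 if no match).    */
    static int cache_get(HeaderCache *c,
                         int (*match)(const char *header, void *ctx), void *ctx)
    {
        for (int i = 0; i < CACHE_SLOTS; i++) {
            if (c->slots[i].in_use && match(c->slots[i].data, ctx)) {
                c->slots[i].in_use = 0;
                return i;
            }
        }
        return -1;
    }

The round-robin advance of next in cache_put is the behaviour examined below.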

This improves performance but also increases the working set of the process, because cache slots are used in a "round-robin" fashion, causing every page in the cache to be touched regardless of the number of messages on the queue. For example, in a typical queue access pattern (the slot allocated by NEXT is shown under each PUT):

action     PUT  PUT  GET  GET  PUT  GET  PUT  GET
slot used    1    2              3         4
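
A small, standalone simulation of that pattern (again an illustrative sketch, not the disclosed code) shows the effect: four distinct slots are written even though the queue never holds more than two messages at once:

    #include <stdio.h>

    #define CACHE_SLOTS 4096                /* illustrative cache size */

    int main(void)
    {
        const char *actions[] = { "PUT", "PUT", "GET", "GET",
                                  "PUT", "GET", "PUT", "GET" };
        int next = 0;                       /* the NEXT pointer               */
        int slots_touched = 0;              /* distinct slots written so far  */
        int depth = 0, max_depth = 0;       /* messages currently on the queue */

        for (int i = 0; i < 8; i++) {
            if (actions[i][0] == 'P') {     /* PUT consumes the slot at NEXT  */
                printf("%s -> slot %d\n", actions[i], next + 1);
                next = (next + 1) % CACHE_SLOTS;
                if (next > slots_touched)
                    slots_touched = next;
                if (++depth > max_depth)
                    max_depth = depth;
            } else {                        /* GET frees a slot; NEXT is unchanged */
                printf("%s\n", actions[i]);
                depth--;
            }
        }
        printf("distinct slots touched: %d, peak queue depth: %d\n",
               slots_touched, max_depth);   /* prints 4 slots, depth 2 */
        return 0;
    }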

A stack of slot addresses can be used to allow the same quick access while improving upon the round-robin access of the above scheme. Such a stack can be greatly improved by provision of a stack-reset feature wh...
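
The disclosure is truncated at this point, but the approach it begins to describe can be sketched: freed slot addresses are pushed onto a stack and reused in LIFO order, so a lightly loaded queue keeps revisiting the same few slots instead of marching through the whole cache. A minimal C sketch under that reading follows; the reset criterion shown is an illustrative assumption, since the disclosed stack-reset feature is cut off above:

    #define CACHE_SLOTS 4096                  /* illustrative cache size */

    typedef struct {
        int free_slots[CACHE_SLOTS];          /* stack of free slot addresses  */
        int top;                              /* number of entries on the stack */
    } FreeStack;

    /* Reset (and initialise) the stack so that the lowest-numbered slots are
       handed out first, confining a lightly loaded queue to the first few
       pages of the cache.                                                     */
    static void stack_reset(FreeStack *s)
    {
        for (int i = 0; i < CACHE_SLOTS; i++)
            s->free_slots[i] = CACHE_SLOTS - 1 - i;   /* slot 0 ends up on top */
        s->top = CACHE_SLOTS;
    }

    /* PUT pops the most recently freed slot; GET pushes its slot back.
       While the queue stays shallow, the same few slots are reused.          */
    static int  slot_alloc(FreeStack *s)          { return s->free_slots[--s->top]; }
    static void slot_free(FreeStack *s, int slot) { s->free_slots[s->top++] = slot; }

    /* Illustrative trigger only (an assumption, not the disclosed criterion):
       once the queue has drained, reset if the next slot that would be handed
       out lies far into the cache, i.e. the stack order has drifted.          */
    static int reset_wanted(const FreeStack *s, int msgs_on_queue)
    {
        return msgs_on_queue == 0
            && s->top > 0
            && s->free_slots[s->top - 1] > CACHE_SLOTS / 4;
    }

In this sketch the reset is a single pass over a contiguous array of integers, with no header pages touched, which is consistent with the "efficient reset" of the title; the disclosed reset mechanism itself is not visible in the truncated text.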