Method and System for Fast Cache Entry Access to Cached, Access Controlled Data

IP.com Disclosure Number: IPCOM000237049D
Publication Date: 2014-May-29
Document File: 3 page(s) / 54K

Publishing Venue

The IP.com Prior Art Database

Abstract

Disclosed are a method and system for fast cache entry access to authorization-sensitive cached data, allowing the authorization check to occur in the same step as the cache entry retrieval operation. This technique satisfies strict access control requirements while allowing cache entries to be shared among application users belonging to the same groups, thus avoiding multiplication of cache entries that could be shared.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 52% of the total text.


High-performance multi-tier analytic applications cache extensively to achieve fast response times and high page throughput. A cache is typically implemented as a hash map containing a large number of key-value pairs. In many high-volume analytic applications, the cache can become quite large, reaching sizes of tens or hundreds of gigabytes and containing hundreds of thousands or even millions of cache entries. The speed with which a cache entry can be retrieved and served to the calling code, and the extent to which each entry can be reused among application users (the cache hit rate), together determine the overall performance of the cache and of the application.
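
The hash-map cache described above can be sketched as follows. The key scheme (report URL plus serialized input parameters, as suggested by the example later in this disclosure) and all class and variable names are illustrative assumptions, not part of the disclosure itself.

```python
class ReportCache:
    """Minimal in-memory key-value cache for rendered report results."""

    def __init__(self):
        self._entries = {}  # key -> cached value

    def _key(self, url, params):
        # Build a deterministic, hashable key from the report URL
        # and its input parameters.
        return (url, tuple(sorted(params.items())))

    def get(self, url, params):
        # Returns the cached value, or None on a cache miss.
        return self._entries.get(self._key(url, params))

    def put(self, url, params, value):
        self._entries[self._key(url, params)] = value


cache = ReportCache()
cache.put("/reports/q4", {"region": "EMEA"}, "rendered-report")
assert cache.get("/reports/q4", {"region": "EMEA"}) == "rendered-report"
assert cache.get("/reports/q4", {"region": "APAC"}) is None  # miss
```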

Analytic applications place stringent security constraints on access to cached data. For example, an analyst working for the corporate financial department can execute a report detailing financial results over the last quarter. The data describing these results can be cached using a key composed of the Uniform Resource Locator (URL) and the report input parameters. However, if an analyst from one of the corporate Line of Business (LOB) units executes the same report with the same input parameters against the same data warehouse, that analyst may see very different data and would not be allowed to see overall corporate financial information. This restriction often has legislative grounds. To satisfy performance requirements, cache entries must be shared among application users. Underlying analytic data are also cached as cubelets, members, and other domain-specific objects, and these also need to be shared.

A current solution to this problem is to create separate cache instances for different groups of users. This results in a large memory footprint: most cached objects are common to all users, yet they must be duplicated (actually multiplied n times, where n is the number of cache instances). This also increases infrastructure costs, since Random Access Memory (RAM) is expensive, and causes performance problems related to garbage collection of very large Java* Virtual Machine (JVM) heaps.
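
The duplication problem of the per-group approach can be illustrated with a small sketch. The group names and entry values are hypothetical; the point is only that an entry common to every group is stored once per cache instance.

```python
# Current approach: one cache instance per user group, so entries
# common to all groups are stored n times (n = number of instances).
group_caches = {
    "finance": {},
    "lob-a": {},
    "lob-b": {},
}

# An entry whose data is identical for every group must still be
# duplicated into each group's cache instance.
shared_key = ("/reports/common", ())
for cache in group_caches.values():
    cache[shared_key] = "same-data-for-everyone"

copies = sum(1 for c in group_caches.values() if shared_key in c)
assert copies == 3  # one redundant copy per cache instance
```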

The disclosed solution is a method and system for fast cache entry access to authorization-sensitive cached data. This technique satisfies strict access control requirements while allowing cache entries to be shared among application users belonging to the same groups, thus avoiding multiplication of cache entries that could be shared. The new and unique contribution of this method and system is that the authorization check occurs in the same step as the cache entry retrieval operation, thereby reducing the time needed to access each cache entry and providing a significant performance boost for the broader workload.
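
One plausible realization of this idea (an assumption on our part, since the abbreviated text omits implementation details) is to fold the caller's group identity into the cache key. A single shared cache then serves all groups, users within a group share entries, and a lookup can only ever return entries cached for the caller's own group, so authorization is enforced by the retrieval step itself.

```python
class GroupScopedCache:
    """One shared cache for all groups; the group identity is part of
    the key, so the authorization check happens in the same step as
    the cache entry retrieval operation."""

    def __init__(self):
        self._entries = {}

    def _key(self, group, url, params):
        return (group, url, tuple(sorted(params.items())))

    def get(self, user_group, url, params):
        # Lookup and authorization in one step: a caller can only hit
        # entries cached under the caller's own group.
        return self._entries.get(self._key(user_group, url, params))

    def put(self, user_group, url, params, value):
        self._entries[self._key(user_group, url, params)] = value


cache = GroupScopedCache()
cache.put("finance", "/reports/q4", {}, "corporate-wide-results")
cache.put("lob-a", "/reports/q4", {}, "lob-a-results")

# Same report, same parameters: each group sees only its own entry,
# while all users within a group share a single entry.
assert cache.get("finance", "/reports/q4", {}) == "corporate-wide-results"
assert cache.get("lob-a", "/reports/q4", {}) == "lob-a-results"
assert cache.get("lob-b", "/reports/q4", {}) is None  # no entry, no access
```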
