
USING "DEFUNCT" CACHE ENTRIES TO AVOID ARTIFICIAL DEADLOCKS WHEN CACHING CATALOG TABLES

IP.com Disclosure Number: IPCOM000014786D
Original Publication Date: 2001-Jun-16
Included in the Prior Art Database: 2003-Jun-20
Document File: 3 page(s) / 57K

Publishing Venue

IBM

Abstract

Disclosed is a method for increasing concurrency in a distributed database environment where authorizations are cached at each database node. This method increases concurrency by eliminating artificial cache deadlocks that may occur when authorizations are cached, but would not occur in a non-cached environment.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 42% of the total text.


In a distributed (MPP -- Massively Parallel Processor) database environment, it is desirable to cache authorizations at local/coordinator nodes to increase performance and to reduce the need to communicate with the catalog node. In an environment where authorizations are not cached, each time an application is required to check authorities/privileges, it is necessary to perform a table or index lookup at the catalog node. This results in the catalog node becoming a source of contention. Caching authorizations in a distributed cache allows applications to avoid contacting the catalog node and performing a table lookup to find authorizations. Instead, the applications can access cached authorizations at the local node (i.e. the node where the database connection exists).
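The lookup path described above can be sketched as follows. This is a minimal illustration, not the disclosure's implementation; the class and method names (`CatalogNode`, `LocalNode`, `check_authorization`) are invented for the example.

```python
# Hypothetical sketch: a local/coordinator node checks its own cache
# first and contacts the catalog node only on a miss.

class CatalogNode:
    """Stands in for the catalog node's authorization table."""
    def __init__(self, table):
        self.table = dict(table)
        self.lookups = 0          # counts remote lookups, for illustration

    def fetch(self, auth_id):
        self.lookups += 1
        return self.table[auth_id]

class LocalNode:
    """A coordinator node holding its own authorization cache."""
    def __init__(self, catalog):
        self.catalog = catalog
        self.cache = {}

    def check_authorization(self, auth_id):
        if auth_id in self.cache:            # local hit: no catalog traffic
            return self.cache[auth_id]
        entry = self.catalog.fetch(auth_id)  # miss: one trip to the catalog
        self.cache[auth_id] = entry
        return entry

catalog = CatalogNode({"G1": {"SELECT", "INSERT"}})
n2 = LocalNode(catalog)
n2.check_authorization("G1")   # miss: goes to the catalog node
n2.check_authorization("G1")   # hit: served from the local cache
print(catalog.lookups)         # -> 1
```

Repeated authorization checks at N2 then cost nothing at the catalog node, which is the contention relief the disclosure is after.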

Caching database authorities in a distributed (MPP) environment poses the following problem:

If authorizations are requested on a non-catalog node and a cache entry for these authorizations does not exist at this node, it is desirable to have only one application go to the catalog node to retrieve these authorizations. If multiple agents/applications were allowed to retrieve the same authorizations from the catalog node, there would be an increase in unnecessary inter-node communication (i.e., multiple requests for the same information). In order to prevent multiple applications from loading the same cache entry, an exclusive serialization lock (loading lock) is used. Using an exclusive loading lock on a cache entry may cause deadlock scenarios in the catalog cache that would not occur if the same catalog table were not cached. Consider the following example:
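The loading-lock idea can be sketched as below: concurrent agents that miss on the same entry serialize on a per-key exclusive lock, so only the first one reaches the catalog node. This is a simplified single-process model, assuming invented names (`AuthCache`, `get`); the disclosure's lock manager is not shown.

```python
# Hypothetical sketch of the exclusive "loading lock": of several agents
# missing on the same cache entry, only one fetches from the catalog
# node; the others wait on the lock and then reuse the loaded entry.
import threading

class AuthCache:
    def __init__(self, fetch_from_catalog):
        self.fetch = fetch_from_catalog
        self.entries = {}
        self.loading_locks = {}
        self.guard = threading.Lock()   # protects the two dicts

    def get(self, key):
        with self.guard:
            if key in self.entries:
                return self.entries[key]
            lock = self.loading_locks.setdefault(key, threading.Lock())
        with lock:                       # exclusive loading lock
            with self.guard:
                if key in self.entries:  # another agent loaded it first
                    return self.entries[key]
            value = self.fetch(key)      # only one agent goes to the catalog
            with self.guard:
                self.entries[key] = value
            return value

fetches = []
def fetch(key):
    fetches.append(key)          # record each trip to the catalog node
    return {"SELECT"}

cache = AuthCache(fetch)
threads = [threading.Thread(target=cache.get, args=("G1",)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(len(fetches))              # -> 1
```

The double-check after acquiring the loading lock is what lets the waiters return the cached entry instead of repeating the fetch.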

Application A (A): Requests authorizations for group 1 (G1)
Application B (B): Grants additional authorities for group 1 (G1) (the authorization id for B is a member of group 1)

Node 1 (N1): Catalog node
Node 2 (N2): Non-catalog node

Application A and B are connected to the database at N2.

With cached authorization table (deadlock):

1. B attempts to connect at N2
2. A issues a grant to G1
3. B creates and locks cache entry at N2
4. A locks catalog row for grant at N1
5. B goes to N1 to retrieve authorizations for G1
6. A creates and locks cache entry for G1 at N1
7. B finds cache entry for G1 at N1 and waits on the loading lock of A
8. A broadcasts to all participating nodes to lock...

Without cached authorization table:

1. B attempts to connect at N2
2. A issues a grant to G1
3. A locks catalog row for grant at N1
4. B goes to N1 to retrieve authorizations for G1 from the catalog table
5. B waits on lock by A on the catalog row
6. A completes grant and releases lock.
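The artificial deadlock above is an ordinary cycle in the waits-for graph: A holds the catalog row and loading lock at N1 and blocks on B's cache-entry lock at N2, while B holds the N2 entry and blocks on A's loading lock at N1. A minimal sketch of that cycle check (the graph encoding and `has_cycle` helper are illustrative, not part of the disclosure):

```python
# Hypothetical waits-for graph for the cached scenario: each edge says
# "holder X is blocked waiting on a lock held by Y".

waits_for = {
    "A": "B",   # A's broadcast blocks on B's cache-entry lock at N2
    "B": "A",   # B blocks on A's loading lock at N1
}

def has_cycle(graph, start):
    """Follow the waits-for edges; revisiting a node means deadlock."""
    seen, node = set(), start
    while node in graph:
        if node in seen:
            return True
        seen.add(node)
        node = graph[node]
    return False

print(has_cycle(waits_for, "A"))   # -> True
```

In the non-cached column the only edge is B waiting on A's catalog row, so no cycle exists and B simply proceeds once A's grant commits.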