
Mechanism to seamlessly route cache requests based on cache status discovery

IP.com Disclosure Number: IPCOM000206049D
Publication Date: 2011-Apr-13
Document File: 3 page(s) / 37K

Publishing Venue

The IP.com Prior Art Database

Abstract

This invention describes (a) a simple mechanism for synchronizing data between a database and a cache, and (b) a method for using this information to seamlessly route application requests between the database and the cache, based on cache readiness.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 45% of the total text.

Page 01 of 3

Mechanism to seamlessly route cache requests based on cache status discovery

The following section describes the algorithm/process being disclosed (namely a mechanism outlining how cache routing may be implemented).

    As a pre-requisite to the system, it is assumed that the "Cache System" consists of:
1) a fast in-memory database,
2) a database to cache, and
3) an asynchronous replication system connecting the in-memory database and the database (e.g. a log-based replication engine).

    The use of an asynchronous replication engine allows changes to the cache to be reflected in the database, and changes in the database to be reflected in the cache.

    The process outlined here builds on top of this system to describe how a driver can seamlessly detect when a cache is ready to service requests (i.e. when a given table in the cache is fully up-to-date with any changes that have been made to the backend database) and use this information to automatically route requests to the cache.

Implementation/Method

Phase 1 - INITIAL START-UP

    The CHECKPOINTS table is (pre)-configured to be continuously replicated to the cache database (front-end) using continuous asynchronous replication. Tables are configured by default to be routed to the backend database (rather than the cache).
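For illustration, the start-up state can be sketched as a small routing map in which every table defaults to the backend and only the CHECKPOINTS table is marked for continuous replication to the frontend. The class, constant, and table names below are assumptions for the sketch, not part of the disclosure.

```python
# Illustrative sketch of the Phase 1 start-up state (all names are assumed):
# every table defaults to the backend, and only the CHECKPOINTS table is
# configured for continuous asynchronous replication to the cache (frontend).

BACKEND = "backend"
FRONTEND = "frontend"

class RoutingTable:
    def __init__(self, tables):
        # By default, all data requests are routed to the backend database.
        self.route = {t: BACKEND for t in tables}
        # Tables continuously replicated to the cache database (frontend).
        self.replicated = {"CHECKPOINTS"}

    def target(self, table):
        return self.route.get(table, BACKEND)

routing = RoutingTable(["ORDERS", "CUSTOMERS"])
print(routing.target("ORDERS"))  # initially routed to the backend
```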

Phase 2 - MAIN

    The connection handler threads route all data requests for tables flagged as CACHED to the frontend. Where a write is made to the cache, a write-through mechanism is used to replicate the change to the database (backend) (write-through is a synchronous write).
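This routing decision can be sketched as follows, assuming hypothetical frontend/backend connection objects with an execute() method; none of the class or method names come from the disclosure.

```python
# Sketch (assumed API) of a connection handler's routing decision in Phase 2:
# CACHED tables are served by the frontend, and writes to a cached table are
# synchronously written through to the backend as well.

CACHED = "CACHED"

class RecordingDatabase:
    """Stand-in for a real database connection; records executed statements."""
    def __init__(self, name):
        self.name = name
        self.statements = []

    def execute(self, stmt):
        self.statements.append(stmt)
        return f"{self.name} executed: {stmt}"

class ConnectionHandler:
    def __init__(self, frontend, backend, table_state):
        self.frontend = frontend
        self.backend = backend
        self.table_state = table_state  # table -> "CACHED" / "REQUEST_CACHING" / None

    def execute(self, table, stmt, is_write=False):
        if self.table_state.get(table) == CACHED:
            result = self.frontend.execute(stmt)
            if is_write:
                # Write-through: synchronously replicate the change to the backend.
                self.backend.execute(stmt)
            return result
        # Tables that are not cached (or only REQUEST_CACHING) go to the backend.
        return self.backend.execute(stmt)

fe, be = RecordingDatabase("frontend"), RecordingDatabase("backend")
handler = ConnectionHandler(fe, be, {"ORDERS": CACHED})
handler.execute("ORDERS", "UPDATE ORDERS SET ...", is_write=True)
```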

    The connection handler threads route all data requests for tables that are not cached, or that are flagged as REQUEST_CACHING, to the backend. For such requests:

o Any write requests are stored in a local ordered list, REPLAY-STATEMENTS. Entries in this list consist of the write statement being executed and a transaction identifier.

o Where a rollback occurs, all statements for the rolled-back transaction are removed from REPLAY-STATEMENTS.

o Where a commit occurs, all statements for the committed transaction are removed from REPLAY-STATEMENTS.

Phase 3 - CACHE ENABLEMENT

    A cache status thread polls the CACHE-META-DATA table in the cache database (frontend). An entry in this table indicates the table state (REQUEST_CACHING or CACHED).

    When the cache status thread discovers a new table to cache, the following actions are performed:
a. The table is added to BE-REPLICATION-LIST and the table undergoes CONTINUOUS COPY from the backend database to the cache database (frontend).

b. The transparency layer performs a replication checkpoint by inserting a unique identifier into the CHECKPOINTS table.

c. The cache status thread is blocked.

    The connection handler threads route all data requests for tables flagged as CACHED to the frontend. Where a write is made to the cache, the write-through mechanism is used to replicate the change to the database (backend).

    The connection handler threads route all data requests for tables that are not cached,...
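The REPLAY-STATEMENTS bookkeeping described under Phase 2 can be sketched as a small ordered list keyed by transaction identifier; the class and method names here are illustrative assumptions. Note that, as the disclosure states, both rollback and commit remove the transaction's statements from the list.

```python
# Sketch (assumed names) of the REPLAY-STATEMENTS bookkeeping from Phase 2:
# writes to not-yet-cached tables are recorded with their transaction id, and
# both rollback and commit discard the transaction's entries, as described.

class ReplayStatements:
    def __init__(self):
        self.entries = []  # ordered list of (transaction_id, statement)

    def record_write(self, txn_id, statement):
        self.entries.append((txn_id, statement))

    def on_rollback(self, txn_id):
        # Drop every statement belonging to the rolled-back transaction.
        self.entries = [(t, s) for t, s in self.entries if t != txn_id]

    def on_commit(self, txn_id):
        # Committed statements are likewise removed from the list.
        self.entries = [(t, s) for t, s in self.entries if t != txn_id]
```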
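The checkpoint steps (a)-(c) under Phase 3 can be sketched as follows, using hypothetical store objects and a generated unique identifier; once the marker inserted on the backend has been replicated to the frontend, the newly copied table is known to be up-to-date as of the checkpoint.

```python
# Sketch (all names assumed) of the Phase 3 checkpoint: a unique marker is
# inserted into CHECKPOINTS on the backend; when asynchronous replication
# delivers that marker to the frontend, the cache has caught up to the
# checkpoint and the table can be flagged CACHED.

import uuid

class SimpleTableStore:
    """Stand-in for a database holding a CHECKPOINTS table."""
    def __init__(self):
        self.checkpoints = []

    def insert_checkpoint(self, marker):
        self.checkpoints.append(marker)

    def has_checkpoint(self, marker):
        return marker in self.checkpoints

def start_caching(table, replication_list, backend):
    # a. Put the table under CONTINUOUS COPY from backend to frontend.
    replication_list.add(table)
    # b. Perform a replication checkpoint with a unique identifier.
    marker = str(uuid.uuid4())
    backend.insert_checkpoint(marker)
    return marker

def cache_ready(frontend, marker):
    # c. The cache status thread blocks until the marker has been replicated
    # to the frontend, i.e. the cache has caught up to the checkpoint.
    return frontend.has_checkpoint(marker)

backend, frontend = SimpleTableStore(), SimpleTableStore()
replication_list = set()
marker = start_caching("ORDERS", replication_list, backend)
# Simulate asynchronous replication delivering the checkpoint row:
frontend.insert_checkpoint(marker)
assert cache_ready(frontend, marker)
```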