
A mechanism for communicating event information robustly between a distributed runtime and a database, using the injection of control commands into the event stream.

IP.com Disclosure Number: IPCOM000201699D
Publication Date: 2010-Nov-18
Document File: 6 page(s) / 80K

Publishing Venue

The IP.com Prior Art Database

Abstract

This article describes a mechanism whereby multiple processes provide event files for a store-and-forward listener, which can also detect control files injected into the event stream and modify its behaviour accordingly. In particular, the listener process is "watched" by another service which ensures the listener is restarted if it fails, but which also uses the control file mechanism to decide whether a restart is in fact necessary.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 24% of the total text.



Disclosed is a mechanism whereby, if the file system is being used as a store-and-forward channel to communicate events between two services, it can also be used to control the service that is processing the files. This mechanism can in turn be used, via another service, to ensure the robustness of the processing service.

BACKGROUND

    The DataStage job runtime mechanism involves each run being composed of a number of autonomous processes which can only communicate with each other via the file system. It is also implemented in a non-mainstream programming language (UV/BASIC) which has limited access to external libraries.

    The requirement is for information available at job run time ("events") to be collected in the order it is produced, and used to update an operational database that holds the history of the job runs. It is also a requirement that events shall not be lost - for example, if the database they are being sent to is temporarily unavailable. The process that collects events needs to be robust: it must keep as up to date as possible with the production of events, and the event producers cannot afford to keep checking whether the collection service is still up. It is also a requirement that there is some way of interrupting the event stream so that action can be taken on the database, again without losing any event information or communicating directly with the distributed set of processes that are running jobs.

    Because of the distributed nature of the job runtime there is no central process that such events could funnel through. Although UV/BASIC has the ability to access sockets and pass events to a central listener via a port, the requirement never to lose an event means that a store-and-forward mechanism of some kind is needed for the case where such a listener is not running. It is not desirable, for performance reasons, for each run process to make a direct connection to the database, and connection pooling cannot easily be implemented in UV/BASIC. Nor is it possible from within the UV/BASIC environment to directly call an external queuing mechanism (such as MQ Series).

    The use of the file system as a store-and-forward mechanism is fairly common; for example it is used in Information Server by the Operational Metadata Service and the Logging Service. This article describes the use of such a system to provide a simple means to control the service that is processing the files; plus its use via a second service to ensure the robustness of the first.

SUMMARY OF MECHANISM
There are four parties involved in the process:
· Producers of events: running jobs, in this case.
· A consumer of these events: the "Listener" service that updates the database.
· A third party that wishes to control the listener from outside.
· An "AppWatcher" process whose job is...