Dynamic post processor Disclosure Number: IPCOM000032779D
Original Publication Date: 2004-Dec-25
Included in the Prior Art Database: 2004-Dec-25

This idea describes a new software concept for a post processor. The post processor performs one additional processing step after a root cause analysis engine has run. Such an engine is used to correlate alarms in layered transport networks; once the correlation engine has run, it has generated several groups of trouble ticket candidates. Post processors then generate the actual trouble tickets by grouping two or more trouble ticket candidates and/or adding a suitable root cause.

Usually, adding a new type of post processor would require not only taking the system out of operation but also changing functionality that is already implemented. Current solutions to this issue are ineffective: even when they are based on languages or technologies that can be extended at run time, the underlying architecture usually does not support this extension. As a result, extending the functionality by adding new components also implies changes to components already in place, increasing the probability of introducing undesired behavior changes in existing functionality.

Related work presents a generic concept for a library of reusable pipeline components (pipe stages) based on Java Beans. The overhead introduced by the Java Beans technology, which is suited to building distributed applications, makes it a poor fit for the correlation engine of which the concept described in this idea is part, as alarm correlation performance is of paramount importance. US patent US20020078251A1 presents a concept for a software pipeline in which the information about the processing stages to be executed for a given source data object is embedded in the object itself. There, the data object is aware of its path through the pipeline; this is not intended in the present idea, where new stages should be supported without changes to the existing data objects.
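The mechanism described above can be sketched in Java. This is a minimal illustration, not the disclosed implementation: the type and method names (`PostProcessor`, `PostProcessorPipeline`, `TicketCandidate`, `TroubleTicket`, `register`) are hypothetical. The point it demonstrates is that a new post processor can be registered against a small, stable interface at run time, so existing stages and data objects need no changes.

```java
import java.util.ArrayList;
import java.util.List;

/** A trouble ticket candidate produced by the correlation engine (hypothetical type). */
class TicketCandidate {
    final String alarmGroup;
    TicketCandidate(String alarmGroup) { this.alarmGroup = alarmGroup; }
}

/** A trouble ticket, optionally annotated with a root cause. */
class TroubleTicket {
    final List<TicketCandidate> candidates;
    final String rootCause;
    TroubleTicket(List<TicketCandidate> candidates, String rootCause) {
        this.candidates = candidates;
        this.rootCause = rootCause;
    }
}

/** Contract every post processor implements; new ones plug in without touching old ones. */
interface PostProcessor {
    List<TroubleTicket> process(List<TroubleTicket> tickets);
}

/** Runs registered post processors in order, after the correlation engine has produced candidates. */
class PostProcessorPipeline {
    private final List<PostProcessor> stages = new ArrayList<>();

    /** Registering a stage requires no change to stages already in place. */
    void register(PostProcessor p) { stages.add(p); }

    List<TroubleTicket> run(List<TicketCandidate> candidates) {
        // Start with one ticket per candidate; stages may merge tickets or add root causes.
        List<TroubleTicket> tickets = new ArrayList<>();
        for (TicketCandidate c : candidates) {
            tickets.add(new TroubleTicket(List.of(c), null));
        }
        for (PostProcessor p : stages) {
            tickets = p.process(tickets);
        }
        return tickets;
    }
}

public class Demo {
    public static void main(String[] args) {
        PostProcessorPipeline pipeline = new PostProcessorPipeline();
        // Example stage: group all candidates into one ticket and attach a root cause.
        pipeline.register(tickets -> {
            List<TicketCandidate> all = new ArrayList<>();
            for (TroubleTicket t : tickets) all.addAll(t.candidates);
            List<TroubleTicket> merged = new ArrayList<>();
            merged.add(new TroubleTicket(all, "fiber cut"));
            return merged;
        });
        List<TroubleTicket> result = pipeline.run(
            List.of(new TicketCandidate("SDH"), new TicketCandidate("WDM")));
        System.out.println(result.size() + " ticket(s), root cause: " + result.get(0).rootCause);
    }
}
```

Because each stage only sees the list of tickets handed to it, the data objects carry no knowledge of the pipeline path, in contrast to the approach of US20020078251A1.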