Method for automatic generation of code to analyze files suffering from a not completely defined structure
Publication Date: 2016-Feb-23
The IP.com Prior Art Database
Compiler-construction techniques for the automatic generation of programs that analyze program source text using parser or scanner generators require a mathematically precise definition of the syntax rules. Disclosed here is a method to automatically generate analysis programs for text files that lack a formally defined structure, such as log files generated by simulators or operating systems.
Lexical and syntax analysis
Disclosed is a method for the automatic generation of the source code of a program that can analyze text files that do not obey formal syntax rules.
The state of the art for analyzing text files is to apply techniques that evolved in the field of compiler construction for scanning and parsing program source code.
The first step is lexical analysis, which, in short, examines all parts of the language that can be described by regular expressions. Typically, it recognizes character sequences that form a particular keyword, a floating-point constant, and the like. The output of this phase is a stream of so-called tokens, which are basically integer numbers denoting the lexical elements of the language, i.e. keywords, constants, and such. Some tokens carry data attributes; e.g. a "number token" has the value of the original constant stored as an attribute. It is state of the art to generate the source code of such a lexical analyzer automatically from a list of regular expressions by means of a "scanner generator".
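As an illustration, the working principle of such a generated scanner can be sketched in a few lines of Python. The token classes and rule names below are illustrative assumptions, not part of the disclosure; a real scanner generator (e.g. lex/flex) emits table-driven code, but it starts from the same kind of rule list:

```python
import re

# Illustrative rule list: each entry pairs a token name with a regular
# expression; rule order decides ties (keywords before identifiers).
TOKEN_RULES = [
    ("KEYWORD", r"\b(?:while|if|else)\b"),
    ("NUMBER",  r"\d+\.\d+|\d+"),      # floating-point or integer constant
    ("IDENT",   r"[A-Za-z_]\w*"),
    ("SYMBOL",  r"[(){};=<>+-]"),
    ("SKIP",    r"\s+"),               # whitespace is discarded
]
MASTER = re.compile("|".join(f"(?P<{n}>{p})" for n, p in TOKEN_RULES))

def tokenize(text):
    """Yield (token_name, attribute) pairs; NUMBER carries its value."""
    for m in MASTER.finditer(text):
        kind = m.lastgroup
        if kind == "SKIP":
            continue
        value = float(m.group()) if kind == "NUMBER" else m.group()
        yield kind, value
```

For instance, `tokenize("while (x < 3)")` yields the keyword, symbol, identifier, and number tokens in order, with the number's value attached as an attribute.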
The next compilation step is syntax analysis, which receives the token stream as input. It analyzes the part of the programming language typically described by a context-free grammar. Typical syntax rules, like the one stating that in C the keyword "while" must be followed by a "(" symbol, then by an expression, by a ")" symbol, and finally by a statement, belong here. Again, it is common practice to generate the source code of this step from a set of grammar rules by means of a "parser generator". The basis of the automated source code generation is that the grammar rules follow a mathematical framework. In practice, LL(1) and LALR(1) grammars are used.
While accepting the input text, the syntax analyzer generates an internal representation, usually some graph data structure. For example, a while statement may yield a node labelled with a marker identifying it as a while loop and having two successor nodes representing the expression and the loop body.
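The while rule and the construction of its graph node can be sketched as a small recursive-descent parser. The class names, the token tuples, and the deliberately trivial expression and statement rules are illustrative assumptions, not the generated code itself:

```python
class Node:
    """One node of the internal representation: a marker plus successors."""
    def __init__(self, kind, **children):
        self.kind = kind              # marker, e.g. "while"
        self.children = children      # named successor nodes

class Parser:
    def __init__(self, tokens):
        self.tokens = list(tokens)    # (name, attribute) pairs
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else ("EOF", None)

    def expect(self, kind, value=None):
        tok = self.peek()
        if tok[0] != kind or (value is not None and tok[1] != value):
            raise SyntaxError(f"expected {value or kind}, got {tok}")
        self.pos += 1
        return tok

    # while_stmt : "while" "(" expression ")" statement
    def while_stmt(self):
        self.expect("KEYWORD", "while")
        self.expect("SYMBOL", "(")
        cond = self.expression()
        self.expect("SYMBOL", ")")
        body = self.statement()
        # Node labelled as a while loop, with two successors.
        return Node("while", condition=cond, body=body)

    def expression(self):
        # Placeholder: a single identifier or number stands in for a
        # full expression grammar.
        kind, value = self.peek()
        if kind in ("IDENT", "NUMBER"):
            self.pos += 1
            return Node(kind.lower(), value=value)
        raise SyntaxError(f"expected expression, got {self.peek()}")

    def statement(self):
        # Placeholder: only the empty statement ";" is handled here.
        self.expect("SYMBOL", ";")
        return Node("empty")
```

Feeding the token stream for `while (x) ;` to `Parser(...).while_stmt()` returns a "while" node whose two successors represent the condition and the (empty) loop body.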
The grammar rules that are the input of such a generator have interspersed program code that is manually written and used, for example, to generate an intermediate representation of the program being translated by the compiler. While generating source code from the grammar rules, the generator inserts the interspersed code at appropriate positions.
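The pairing of a grammar rule with manually written action code can be sketched as follows; the dictionary-based rule format and node representation are illustrative assumptions, not the input format of any particular generator:

```python
def make_while(cond, body):
    """Manually written action code: build the intermediate-representation
    node for a recognized while statement."""
    return {"kind": "while", "condition": cond, "body": body}

# Each entry pairs a production with its interspersed action.  The
# generator copies the action into the generated parser at the point
# where the production has been fully recognized.
GRAMMAR = {
    "while_stmt": (["WHILE", "LPAREN", "expr", "RPAREN", "stmt"], make_while),
}

# When the generated parser reduces a while_stmt, it invokes the action
# with the values produced for the non-terminals on the right-hand side:
production, action = GRAMMAR["while_stmt"]
node = action({"kind": "ident", "name": "x"}, {"kind": "empty"})
```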
The remaining steps of the compiler work on such internal representations, performing different kinds of code optimization and finally generating machine code.
Text files without formal definition
Trace or log...