
Tracing with minimal size but maximum error data capture

IP.com Disclosure Number: IPCOM000181640D
Original Publication Date: 2009-Apr-08
Included in the Prior Art Database: 2009-Apr-08
Document File: 3 page(s) / 169K

Publishing Venue

IBM

Abstract

This article describes a technique for maximising the amount of relevant trace and debug data generated by a program, while keeping its size to a minimum. The technique can be described as setting a marker at a point in the code execution and then performing some processing that may result in an error. Copious, detailed trace is written to a temporary trace file. At the end of the processing, a decision can be made either to keep the detail, because an error did occur, or to erase the trace, because processing of this section was successful.


Tracing with minimal size but maximum error data capture

One of the most common ways of helping customers investigate problems with our software is to have that software write trace information to a file, detailing the options selected, the decisions made and, crucially, the errors and events encountered. There is a conflict between the requirement to keep these trace files small, to save disk space and transmission time, and the requirement that they contain as much useful information as possible.

    Some current solutions are centred on minimising the amount of data stored, whilst still trying to keep it useful when errors occur. The data stored is therefore always a subset of what is actually available, which makes fast and efficient investigation, debugging and fixing of problems difficult, and sometimes impossible.

    Another known solution is to have a "fast" and a "slow" trace file. The fast trace is a circular file into which detailed data is written, but which quickly wraps. The drawbacks here are that a fixed amount of data is stored, which may be too much or too little for the current problem, and that by the time the program eventually completes, the relevant details of the error situation may already have been overwritten by more recent, but irrelevant, data.
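    To illustrate the wrapping behaviour, such a "fast" circular trace can be sketched as below, shown here as an in-memory buffer for brevity. The fixed capacity and the trace_fast name are hypothetical, chosen only to show that once the buffer wraps, the oldest (and possibly most relevant) entries are silently lost.

    # Sketch of a circular ("fast") trace: a fixed-size buffer that wraps,
    # discarding the oldest entries once its capacity is reached.
    from collections import deque

    FAST_TRACE_CAPACITY = 1000            # hypothetical fixed size
    fast_trace = deque(maxlen=FAST_TRACE_CAPACITY)

    def trace_fast(message: str) -> None:
        """Record a detailed trace entry; the oldest entry is dropped
        once FAST_TRACE_CAPACITY entries have been written."""
        fast_trace.append(message)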

    The core idea is that during its execution, a program can be writing to one of two trace files. One is for permanent, persistent data that will always be written out; the other is transient. At a suitable, logical point in the program, when a function is about to be undertaken, a new transient (or local) trace is started. During the processing of that function, detailed information is written to the local trace file. When processing is complete, a decision is made: if the processing was successful, the local trace file is deleted and a new local trace file is opened for the next logical function; but if any errors were encountered, all the information in the local trace file can be copied to the persistent file. This te...
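    A minimal sketch of this two-file scheme follows, assuming hypothetical file names (persistent.trc, local.trc) and a hypothetical run_traced_function wrapper: detail is written to the local trace while a logical function runs; on success the local trace is simply deleted, and on error its contents are appended to the persistent trace.

    # Sketch: a persistent trace plus a transient (local) trace per logical function.
    import os
    import shutil

    PERSISTENT_TRACE = "persistent.trc"   # hypothetical file names
    LOCAL_TRACE = "local.trc"

    def run_traced_function(func, *args):
        """Run one logical function with a fresh local (transient) trace file.
        On success the detailed local trace is deleted; on error it is
        appended to the persistent trace before the error is re-raised."""
        with open(LOCAL_TRACE, "w") as local:
            try:
                result = func(local, *args)   # func writes its detail via local.write(...)
            except Exception:
                local.flush()
                # An error occurred: keep the detail by copying it to the persistent trace.
                with open(PERSISTENT_TRACE, "a") as persistent, open(LOCAL_TRACE) as detail:
                    shutil.copyfileobj(detail, persistent)
                raise
        # Processing was successful: the detailed local trace is no longer needed.
        os.remove(LOCAL_TRACE)
        return result

    In use, each logical section of the program would be invoked through run_traced_function, so that a new local trace is started for that section and is discarded or preserved depending on its outcome.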