
A new uniform approach to formatting data area instances in dumps

Disclosure Number: IPCOM000185390D
Original Publication Date: 2009-Jul-23
Included in the Prior Art Database: 2009-Jul-23
Document File: 5 page(s) / 70K


Disclosed is an approach to uniform formatting of data area instances that reside in unformatted storage dumps within a computing system. The approach couples standard documentation on the data areas (data area maps) with actual dump contents to produce reports that are structured field by field and contain field-level documentation entries together with the corresponding pieces of dump data in both unformatted and formatted form. As a realization of this idea, automatic transformation of the documentation into a set of formatting routines (programs capable of producing such reports when invoked against a dump) is proposed. A sample implementation (for the z/OS* platform in the IPCS environment) is demonstrated.
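The "automatic transformation" idea above can be illustrated with a minimal sketch: a generator that turns a list of field-level documentation entries into the source code of a formatting routine. Everything here is an illustrative assumption, not the disclosed implementation: the entry layout, the generated function's shape, and the sample field (the name CVTPRODN is borrowed from a real z/OS data area for flavor, but its offset and length here are made up).

```python
# Hypothetical sketch: compile documentation entries into a formatting routine.
# An entry is (field_name, offset, length_in_bytes, description) -- the same
# kind of information a data area map carries for each field.

def generate_formatter(area_name, entries):
    """Emit Python source for a routine that formats one data area instance.

    The generated routine takes the raw dump bytes and the data area's base
    offset, and returns one row per documented field: the field name, its
    documentation text, and the corresponding dump bytes in hex.
    """
    lines = [f"def format_{area_name.lower()}(dump, base):"]
    lines.append('    """Auto-generated field-by-field formatter."""')
    lines.append("    rows = []")
    for name, off, length, desc in entries:
        lines.append(f"    raw = dump[base + {off}: base + {off + length}]")
        lines.append(f"    rows.append(({name!r}, {desc!r}, raw.hex().upper()))")
    lines.append("    return rows")
    return "\n".join(lines)


# Generate a formatter for a toy one-field "CVT" map and run it on fake dump
# bytes. (Offset 0 / length 8 are illustrative, not the real CVT layout; real
# z/OS dump data would also be EBCDIC-encoded rather than ASCII.)
src = generate_formatter("CVT", [("CVTPRODN", 0, 8, "Product name")])
namespace = {}
exec(src, namespace)
rows = namespace["format_cvt"](b"ZOSV2R05", 0)
```

The point of generating routines (rather than interpreting the map on every invocation) is that the routines can then be packaged and shipped to the dump-browsing environment as ordinary programs.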




When resolving problems caused by abnormal behavior of software running on a computing system, the unformatted storage dump is the main source of information for the software support specialists who perform the analysis. The root difficulty of this kind of analysis is that unformatted storage dumps are in machine format: a human can perceive a dump only as a stream of binary or hexadecimal data that is quite hard to interpret. At the same time, support specialists rely on materials in human-readable form, such as software source code, log records, and technical documentation. There is therefore a serious gap between machine-oriented dump data and the readable materials that can be used to actually decode that data.

A key subtask of dump analysis is the exploration of blocks of dumped storage that represent instances of a program's data areas (structures of fields of various data types) as they were at the moment the dump was taken. These instances are usually interconnected by pointer fields and form a graph that, once entered and traversed, can give an investigator plenty of detail about the program's state at dump time and about the actual data being processed then. To make this possible, software support specialists normally consult data area reference documentation that records vital information about every field of every data area: its data type, length in bytes, offset from the beginning of the data area, and a detailed description (expected usage, range of allowed values, logical relationships with other fields, etc.). With such documentation it is possible to reveal the actual meaning of any segment of a dumped data area, given the data area's address in the dump, the segment's length, and its offset from the beginning of the area. Conversely, it is also possible to find in a dump the actual value of any field described in the documentation.
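The mechanics described above can be sketched as follows: a data area map lists each field's name, offset, length, type, and description, and a small routine slices the corresponding bytes out of the dump and renders them both raw (hex) and formatted. The map below is entirely hypothetical (field names, offsets, and types are invented for illustration), and ASCII decoding stands in for what would be EBCDIC on a real z/OS dump.

```python
# Hypothetical data area map: per-field name, offset, length, type, and
# description -- the same kinds of entries a data area reference manual gives.
FIELD_MAP = [
    {"name": "BLKID",   "offset": 0, "length": 4, "type": "char",    "desc": "Block identifier (eyecatcher)"},
    {"name": "BLKLEN",  "offset": 4, "length": 2, "type": "uint",    "desc": "Total block length in bytes"},
    {"name": "FLAGS",   "offset": 6, "length": 1, "type": "bits",    "desc": "Processing flags"},
    {"name": "NEXTPTR", "offset": 8, "length": 4, "type": "pointer", "desc": "Address of next block in chain"},
]


def decode_field(raw, ftype):
    """Render a field's raw bytes according to its declared type."""
    if ftype == "char":
        # Real z/OS character data would be EBCDIC; ASCII keeps the sketch simple.
        return raw.decode("ascii", errors="replace")
    if ftype == "uint":
        return int.from_bytes(raw, "big")
    if ftype == "pointer":
        return hex(int.from_bytes(raw, "big"))
    if ftype == "bits":
        return format(raw[0], "08b")
    return raw.hex()


def format_data_area(dump, base):
    """Produce a field-by-field report: for each documented field, its name,
    description, and the dump bytes in both unformatted (hex) and formatted form."""
    report = []
    for f in FIELD_MAP:
        raw = dump[base + f["offset"]: base + f["offset"] + f["length"]]
        report.append((f["name"], f["desc"], raw.hex().upper(), decode_field(raw, f["type"])))
    return report
```

Running `format_data_area` against a fabricated 12-byte block (eyecatcher "ABLK", length 32, flag byte 0xA0, a pad byte, and a pointer 0x1F40) yields one report row per field, pairing each documentation entry with its raw and decoded value, which is the report structure the disclosure describes.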

Naturally, while this principle is the same for any computing system, different platforms employ different technologies. On computing systems based on the z/OS platform, dumps are usually browsed with IPCS [1], while information on the system data areas can be found in the MVS* manuals "MVS Data Areas, Vol. 1-5" [2]-[6]. These manuals contain detailed data area reference information consisting of field-level entries, ordered the same way the field variables appear in the corresponding data structure in the source code. Each field entry includes the field's offset from the beginning of the data area in both decimal and hexadecimal, the field type, the decimal field length in bytes, the field n...