Method to Prune Debug Dumps and Error Logs on a Service Processor to Create Space to Accommodate More Dumps Disclosure Number: IPCOM000244495D
Publication Date: 2015-Dec-16
Document File: 5 page(s) / 60K

Publishing Venue

The Prior Art Database


Described is a method to prune debug dumps and error logs on a service processor to create space to accommodate more dumps.

This is the abbreviated version, containing approximately 57% of the total text.

Error logs and debug dumps are created on baseboard management controllers or service processors in a server whenever errors are encountered. Both error logs and debug dumps have several sections depicting different information captured from the service processor (both hardware and firmware). Typically, a debug dump consists of error logs, component traces, specific files, generated cores, the Linux proc file system, etc.

    The dumps are stored on a dedicated partition of the flash attached to the service processor. Flash size on embedded systems is typically constrained. To be able to debug problems occurring in servers, the service processor is expected to store as much information as possible. With the limited flash size, the size of the partitions where the dumps and error logs are stored is also limited, and no more than a few dumps can be accommodated on the partition. Further, these dumps are reported to a management console managing the server, after which the dumps and error logs are purged on the service processor, thereby creating space for new dumps or error logs.

    There are always cases where there is not enough space for newer dumps or error logs, so they get dropped without being stored on the flash, which limits the debugging capability of the server. An alternate mechanism would be to adopt FIFO logic and purge the oldest logs, thereby accommodating the newer ones. However, this risks losing the capability to debug genuine issues whose dumps and error logs were purged. Another use case is the cloud, where storage is billed on a per-gigabyte basis; optimizing the dumps captured per use case reduces the cost of cloud storage. Proposed is an idea in which weights are assigned to each section of the debug dumps and error logs generated from a service processor, depending upon the context or scenario in which they were generated. Sections of the generated dumps and error logs are then deleted based on their weights in order to create space and accommodate more debug information for newer error scenarios.
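As a rough sketch of the pruning idea described above, the lowest-weight sections across the stored dumps could be deleted first until enough space is freed for a new dump. The section names, weights, sizes, and data layout here are illustrative assumptions, not the disclosure's actual look-up table or implementation:

```python
# Hypothetical sketch: delete the lowest-weight sections of stored dumps
# until at least `space_needed` units of flash are free.
# Each dump is modeled as {"id": ..., "sections": {name: (weight, size)}}.

def prune_for_space(dumps, space_needed, free_space):
    """Prune sections in ascending weight order until free_space >= space_needed.
    Returns (list of pruned (dump_id, section_name) pairs, new free_space)."""
    # Flatten all sections across all dumps, lowest weight first.
    candidates = sorted(
        (weight, size, d["id"], name)
        for d in dumps
        for name, (weight, size) in d["sections"].items()
    )
    pruned = []
    for weight, size, dump_id, name in candidates:
        if free_space >= space_needed:
            break  # enough room for the incoming dump
        # Remove the low-value section and reclaim its space.
        dump = next(d for d in dumps if d["id"] == dump_id)
        del dump["sections"][name]
        free_space += size
        pruned.append((dump_id, name))
    return pruned, free_space
```

Because pruning stops as soon as the space target is met, high-weight sections such as error logs survive for as long as possible while bulky, low-weight sections are sacrificed first.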

Creation of dumps

    The service processor initiates a dump when a problem is encountered in the system. The debug dump capture process retrieves various sections (firmware version, error logs, component traces, critical files, application cores, kernel traces, proc file system, mount points, environment variables, complete memory dump). Upon identifying a use case for creating a dump, the sections are assigned weights on an increasing scale of 1 to 10 based on a look-up table as shown below. Further, depending upon space availability, sections with weights below a specified threshold are not collected when free space falls below a defined limit. Section...
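A minimal sketch of this weighted collection step might look as follows. The weight values and free-space thresholds are invented for illustration; the disclosure's actual look-up table is not reproduced in this abbreviated text:

```python
# Illustrative weight table for dump sections (1 = least valuable, 10 = most).
# These values are assumptions, not the disclosure's actual table.
SECTION_WEIGHTS = {
    "firmware_version": 10,
    "error_logs": 9,
    "application_cores": 8,
    "component_traces": 6,
    "kernel_traces": 6,
    "critical_files": 5,
    "proc_file_system": 4,
    "mount_points": 3,
    "environment_variables": 2,
    "complete_memory_dump": 1,
}

def sections_to_collect(free_space_pct):
    """Pick a minimum weight from the remaining free-space percentage,
    then collect only the sections at or above that weight."""
    if free_space_pct >= 50:
        min_weight = 1   # plenty of space: collect every section
    elif free_space_pct >= 25:
        min_weight = 4   # moderate space: skip bulky, low-value sections
    else:
        min_weight = 8   # scarce space: keep only the most valuable sections
    return [name for name, w in SECTION_WEIGHTS.items() if w >= min_weight]
```

As space tightens, a bulky but low-weight section such as a complete memory dump is dropped first, while the firmware version and error logs are collected even when the partition is nearly full.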