A method for optimizing the identification of vulnerable elements in software systems Disclosure Number: IPCOM000242190D
Publication Date: 2015-Jun-24
Document File: 2 page(s) / 25K

Publishing Venue

The Prior Art Database


We compute a defect density ratio which overcomes the inherent bias in the traditional, widely used defect ratio. We expand the ratio to cover any ratio between problem metrics and software size metrics. We focus on nonlinear ratios, and suggest means for deriving an optimal nonlinear denominator. This approach allows much more accurate identification of high-risk elements of a software system compared to the state of the art.



Many traditional methods for identifying highly vulnerable software elements are based on computing the ratio between a metric measuring problem indicators (e.g., the number of changes) and a measure of software size such as LOC. A higher ratio serves as an indicator of higher vulnerability. These methods are inherently biased, as they tend to assign higher ranks to elements of smaller size. Our invention systematically overcomes the inherent bias of traditional methods and optimizes the ratio computation to deliver highly accurate indications of problem proneness.
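The small-size bias can be seen in a minimal sketch. The file names, sizes, and change counts below are hypothetical, invented purely for illustration; the ratio itself is the traditional linear one described above.

```python
# Hypothetical data: (name, LOC, number of changes).
files = [
    ("tiny_util.c",    50,  3),
    ("core_engine.c", 5000, 60),
    ("parser.c",      1200, 20),
]

def linear_ratio(loc, changes):
    """Traditional defect-density style ratio: problems per line of code."""
    return changes / loc

# Rank by the linear ratio, highest (most "vulnerable") first.
ranked = sorted(files, key=lambda f: linear_ratio(f[1], f[2]), reverse=True)
for name, loc, changes in ranked:
    print(f"{name:14s} LOC={loc:5d} changes={changes:3d} ratio={changes/loc:.4f}")
```

Here the smallest file tops the ranking with only three changes, while the large component carrying twenty times as many changes ranks last; this is the bias the method aims to remove.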

Instead of the traditional linear ratio, we compute a nonlinear ratio. We learn the preferred ratio either by trying several nonlinear functions for the ratio computation until we arrive at the best one, or by computing the ratio from earlier releases in which error proneness is known in retrospect. Doing so removes the inherent bias and provides an optimized normalization ratio, thus identifying a much more accurate subset of system elements that are at a higher level of problem proneness.
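One way to realize this learning step can be sketched as follows. The sketch assumes a family of nonlinear denominators of the form size**alpha, scans a few candidate exponents, and keeps the one whose ranking best agrees (by Spearman rank correlation) with error proneness known in retrospect from an earlier release. The function names, the power-law family, and all data values are assumptions for illustration, not the disclosure's exact procedure.

```python
def rank(values):
    """Map each value to its 0-based rank in ascending order (no ties assumed)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for pos, i in enumerate(order):
        r[i] = pos
    return r

def spearman(a, b):
    """Spearman rank correlation between two equal-length sequences."""
    ra, rb = rank(a), rank(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n * n - 1))

def best_exponent(sizes, metrics, known_proneness, candidates):
    """Pick the exponent whose ratio ranking best matches known proneness."""
    scored = []
    for alpha in candidates:
        ratios = [m / (s ** alpha) for m, s in zip(metrics, sizes)]
        scored.append((spearman(ratios, known_proneness), alpha))
    return max(scored)[1]

# Hypothetical earlier-release data where proneness is known in retrospect.
sizes   = [100, 400, 900, 1600]       # element sizes (e.g., LOC)
metrics = [5, 25, 20, 30]             # problem indicators (e.g., changes)
known   = [0.7, 0.9, 0.5, 0.3]        # higher = more error-prone in retrospect
alpha = best_exponent(sizes, metrics, known, [0.5, 1.0, 1.5])
```

With this particular data, the scan selects the exponent whose nonlinear ratio reproduces the retrospective ranking exactly; on real release history the candidate set would of course be richer.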

Large and complex software systems exhibit an array of problem types, such as security vulnerabilities, fault proneness, functional incorrectness, and performance issues, to name a few.

To keep system quality at a desirable level, it is necessary to identify such problems and vulnerabilities early on and handle them to remove the risk they introduce. In large and complex systems, it is infeasible to find and address all of the potential issues in the system. Therefore, one needs a method that facilitates the discovery of those system elements which introduce higher levels of risk. That is, the goal of such methods should be to identify a small subset of system elements that have a higher probability of faults and vulnerabilities than the rest of the system.

Indeed, this need was identified in the art, and multiple methods have been introduced in an attempt to provide such preferred subsets. Commonly, these methods are based on a metric that measures some aspect which directly or indirectly associates with the vulnerabilities of interest. For instance, one may measure the number of bugs identified in a file during the development process, or the number of changes made to the file in that process. Early on, it was recognized that such metrics are biased, as they tend to place larger system elements at a higher rank: as size grows, the number of vulnerability indicators associated with the element tends to grow with it.
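This opposite bias, toward large elements, can also be sketched with hypothetical data. Here the raw, unnormalized change count is used directly as the vulnerability metric, and the largest component tops the ranking simply because more code accumulates more changes.

```python
# Hypothetical data: (name, LOC, number of changes).
files = [
    ("tiny_util.c",    50,  3),
    ("core_engine.c", 5000, 60),
    ("parser.c",      1200, 20),
]

# Rank by the raw problem metric alone, with no size normalization.
ranked = sorted(files, key=lambda f: f[2], reverse=True)
```

The largest file ranks first even though its changes-per-line density is the lowest of the three, which is precisely why size normalization was introduced in the first place.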

To overcome this bias, the basic metric is typically adjusted or normalized by size or by another characteristic metric of the software itself (and not a metric of vulnerability). The practice known in the art is linear normalization. That is, the vulnerability metric is divided...