
Optimized Detection of Previously Identified Problems

IP.com Disclosure Number: IPCOM000248032D
Publication Date: 2016-Oct-19
Document File: 5 page(s) / 409K

Publishing Venue

The IP.com Prior Art Database

Abstract

A method for efficiently identifying a previously discovered problem is disclosed.

Disclosed is a method for efficiently identifying a previously discovered problem. The method reduces the time and steps necessary to determine whether a test failure or bug has already been discovered (i.e., is a duplicate of an existing issue).

The following approaches are combined to efficiently determine whether a current problem matches a previously identified problem: 1. Use the software release time and the time the user has been using the software release to find the optimal search time window. Figure 1 depicts an example of a release-time-to-originator correlation. Figure 2 depicts an example of a release-time-to-owner-area correlation. These graphs show when the majority of bugs are opened and who they are opened against as a release progresses over time.

Figure 3 depicts a flow using these graphs. When a user starts using a release, that start time is their time 0. There is an intrinsic multiplier (calculated over time for a given release) that correlates the time a user has been using a software release to where they are on those two graphs. That multiplier, combined with the amount of time the user has been on the release, determines the ideal locations to search for the input bug.
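To illustrate approach 1, the following Python sketch projects the user's time on a release onto the release timeline and returns a date window in which to search for existing bugs. The multiplier value, the window half-width, and all names and dates are assumptions made for illustration; the disclosure itself computes the multiplier over time for a given release.

from datetime import datetime, timedelta

def search_window(release_start: datetime,
                  user_start: datetime,
                  now: datetime,
                  multiplier: float,
                  half_width_days: int = 7) -> tuple[datetime, datetime]:
    """The user's start on the release is their time 0; the release-specific
    multiplier maps their time on the release onto the release-time graphs
    (Figures 1 and 2) to find where most matching bugs were opened."""
    time_on_release = now - user_start
    projected = release_start + timedelta(days=time_on_release.days * multiplier)
    return (projected - timedelta(days=half_width_days),
            projected + timedelta(days=half_width_days))

# Example with illustrative dates: search bugs opened near the projected point.
lo, hi = search_window(datetime(2016, 3, 1), datetime(2016, 6, 1),
                       datetime(2016, 6, 20), multiplier=1.5)
print(f"Search bugs opened between {lo:%Y-%m-%d} and {hi:%Y-%m-%d}")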

2. Use historical user data to determine the most likely software area to search. Users of a software release tend to test the same functional code. The user's historical data is used to determine which component is most likely to contain the bug causing the problem, by answering questions such as:

Which test area usually finds the user's problems at Tx?

Which developer usually ends up fixing this user's bugs at Tx?

This uses the same concept as approach 1, except it is customized to the user.

Approach 2 should be more accurate in finding a duplicate, but it requires some prior user history data to be available. The more data that is available, the more accurate approach 2 becomes.
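A minimal sketch of approach 2 follows, assuming a simple per-user bug history is available. The record fields (tx_days, test_area, fixed_by), the tolerance around Tx, and the sample data are hypothetical, used only to show how the two questions above can be answered by frequency counts over the user's history.

from collections import Counter

def likely_targets(history: list[dict], user: str, tx_days: int,
                   tolerance_days: int = 14):
    """Return the test area that usually finds this user's problems near Tx
    and the developer who usually ends up fixing this user's bugs near Tx."""
    relevant = [r for r in history
                if r["user"] == user and abs(r["tx_days"] - tx_days) <= tolerance_days]
    top_area = Counter(r["test_area"] for r in relevant).most_common(1)
    top_dev = Counter(r["fixed_by"] for r in relevant).most_common(1)
    return top_area, top_dev

# Example with illustrative history records:
history = [
    {"user": "alice", "tx_days": 30, "test_area": "install", "fixed_by": "dev1"},
    {"user": "alice", "tx_days": 35, "test_area": "install", "fixed_by": "dev1"},
    {"user": "alice", "tx_days": 90, "test_area": "upgrade", "fixed_by": "dev2"},
]
print(likely_targets(history, "alice", tx_days=32))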

3. Use test coverage data to reduce the software areas to search. Figure 4 depicts a cov...
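For approach 3, a minimal sketch follows, assuming coverage data is available as a mapping from a failing test to the software areas it exercises; the data structure and names are assumptions used only to show how coverage can shrink the set of areas searched for a duplicate.

def areas_to_search(coverage: dict[str, set[str]], failing_test: str,
                    candidate_areas: set[str]) -> set[str]:
    """Keep only candidate areas that the failing test actually covers;
    if no coverage data exists for the test, fall back to all candidates."""
    covered = coverage.get(failing_test, set())
    return candidate_areas & covered if covered else candidate_areas

# Example with illustrative coverage data:
coverage = {"test_login_timeout": {"auth", "session"}}
print(areas_to_search(coverage, "test_login_timeout", {"auth", "ui", "session"}))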