
Smart App Selection for Regression Testing

IP.com Disclosure Number: IPCOM000235029D
Original Publication Date: 2014-Feb-25
Included in the Prior Art Database: 2014-Feb-25
Document File: 3 page(s) / 297K

Publishing Venue

Microsoft

Related People

Vincent Sibal: INVENTOR

Abstract

Self-host validation, application compatibility testing, and regression testing are common aspects of validating a product or platform that supports 3rd party application development. Validation is required to ensure existing 3rd party applications are not regressed as the platform is continually updated with new releases. Because a scalable platform can publish hundreds of thousands of apps, not all applications can be tested in a timely manner, so an effective solution is required to test application compatibility.

A common solution employed in the past is to outsource validation to a vendor team that runs manual test passes on a subset of these apps. The subset is usually defined by app popularity and/or hand-chosen apps based on impact to the platform. This has the benefit of testing thousands of apps on a recurring basis, but it can still be a slow process, with delays before issues are found and fixed in the product. There is also a greater chance of human error in a largely redundant process, and there are still limitations on testing across different hardware configurations.

To solve this problem a tool has been developed, Smart App Selection (SAS), which can pick the most effective apps for self-host validation and regression testing. Applications are prioritized by how much the product code changes overlap with each app's code coverage, as well as by equivalence-class coverage between apps. Less coverage from product changes indicates that the code changes are less likely to impact the given app. Additionally, similar coverage between apps enables creating the necessary equivalence classes of apps to test. Overall, this helps reduce the manual effort of validating each and every app by targeting only the most impacted apps.
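The prioritization idea described above can be sketched in a few lines. Everything here is an illustrative assumption rather than SAS or Scout's actual implementation: coverage traces are modeled as sets of exercised product functions, and an app's relevance is the fraction of changed functions its trace covers.

```python
# Hypothetical sketch of churn-based app prioritization.
# A "trace" is a set of product function names an app exercises;
# a changeset is the set of functions touched by product changes.
# These data shapes and names are illustrative, not Scout's API.

def impact_score(app_trace, changed_functions):
    """Fraction of changed product functions covered by this app's trace."""
    if not changed_functions:
        return 0.0
    covered = app_trace & changed_functions
    return len(covered) / len(changed_functions)

def prioritize(app_traces, changed_functions):
    """Return (app, score) pairs sorted by descending churn impact."""
    scores = {app: impact_score(trace, changed_functions)
              for app, trace in app_traces.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Example: three apps covering product functions f1..f5,
# and a changeset touching f1, f2, and f4.
traces = {
    "photo_app": {"f1", "f2", "f3"},
    "music_app": {"f3", "f5"},
    "game_app":  {"f1", "f5"},
}
ranked = prioritize(traces, {"f1", "f2", "f4"})
```

An app with no coverage of the changed code scores zero and can be deprioritized, matching the observation that less coverage from product changes implies a lower likelihood of impact.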
Smart App Selection is based on the process and toolset “Smart Test Selection” (STS), which leverages an internal tool called Scout. Scout does the work of producing percentages relating code changes to code coverage: given a set of product code changes, the tool analyzes code coverage traces and reports which traces are most impacted by OS churn. Scout also compares each coverage trace against the others to find similar matches between traces. Smart Test Selection leverages code coverage from test cases and compares it to product changes, whereas Smart App Selection uses Scout to leverage code coverage from apps and compare it against product changes.

To implement the process, code coverage is first obtained against a list of apps, both via automation and by manually exercising the apps. Each app has its own unique code coverage trace, which is fed into a code coverage database. Scout is then pointed at this code coverage database and given product change sets to analyze against. The tool generates a priority list of coverage traces (or app traces) with percentages showing which apps are exercised the most by the product churn. It also provides percentages showing which apps share similar code paths (equivalence classes of apps). This list provides the information needed to decide which apps to target for regression testing and self-host validation.

[Figure: visual representation of the workflow for the process and toolset; image not included in this text extraction.]

To summarize, Smart App Selection (SAS) is a process and toolset for selecting the most effective apps for self-host validation and regression testing. It provides a method to identify and calculate the relevance of applications based on OS churn, the proper sets of applications to test based on OS churn, and the equivalence classes of applications based on OS churn.
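The equivalence-class step can likewise be sketched with sets. This is an illustrative sketch, not Scout itself: trace similarity is approximated with Jaccard overlap, and the grouping threshold is an assumed tuning parameter.

```python
# Illustrative sketch (not Scout): group apps whose coverage traces largely
# overlap into equivalence classes, so only one representative per class
# needs manual validation. The 0.8 threshold is an assumption.

def jaccard(a, b):
    """Similarity of two coverage traces, as sets of exercised functions."""
    return len(a & b) / len(a | b) if (a or b) else 1.0

def equivalence_classes(app_traces, threshold=0.8):
    """Greedy grouping: an app joins the first class whose representative
    trace it matches at or above the threshold; otherwise it starts a new
    class. Returns a list of (representative_app, member_apps) pairs."""
    classes = []
    for app, trace in app_traces.items():
        for rep, members in classes:
            if jaccard(app_traces[rep], trace) >= threshold:
                members.append(app)
                break
        else:
            classes.append((app, [app]))
    return classes

traces = {
    "app_a": {"f1", "f2", "f3", "f4"},
    "app_b": {"f1", "f2", "f3", "f4", "f5"},  # near-duplicate of app_a
    "app_c": {"f6", "f7"},                    # distinct code paths
}
classes = equivalence_classes(traces)
```

Here app_a and app_b share 4 of their 5 combined covered functions (Jaccard 0.8), so they fall into one class and only one of them needs a manual pass; app_c exercises disjoint code paths and forms its own class.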


Document Author (alias)

vinsibal

Defensive Publication Title 

Smart App Selection for Regression Testing

Name(s) of All Contributors

Vincent Sibal

Eduardo Leal-Tostada

Peres Bayer

Ioana Pop

Herman Widjaja
