
System and procedure to track manual tester’s actions while testing the application to confirm if they followed the required steps as planned

IP.com Disclosure Number: IPCOM000239282D
Publication Date: 2014-Oct-27
Document File: 4 page(s) / 83K

Publishing Venue

The IP.com Prior Art Database

Abstract

Disclosed is an arrangement that allows testing teams to perform manual testing more productively. It lets testers concentrate on running the manual tests on the application and on the validations (i.e., marking steps pass/fail). Manual testers can hand-type or record the actions to be performed on the application once and then proceed with performing the steps on the application. The actions captured while the manual tester performs the test are compared with the pre-authored steps to automatically mark them pass or fail. Thus, the arrangement continues to utilize the benefits of manual testing while automating the extraneous parts of the process.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 36% of the total text.

Page 01 of 4


Manual testing by its nature requires human intervention. When a manual tester reads the steps to be followed and performs them manually on the application, there is room for error: a step written by one tester may be understood differently by another tester. Organizations that must meet compliance requirements often want to validate that a step was actually executed in the manner expected. This creates a growing need to track who is executing the steps of a manual test and what they actually did to test them. Commonly known ways to track a tester's actions include systems that track key and mouse movements at the operating-system level; another is to record a video, in the background while testing is in progress, of the steps the tester followed. However, most systems work on a trust basis, with the onus on the tester.

A deviation from manual testing would be to automate the tests and then use commercially available tools to run them automatically. However, this may still be a utopian goal: there are often problems with automating certain scenarios, with domain support, and with the overhead of maintaining automated scripts as applications change.

Herein is provided a unique way to track what the manual tester did while running the test. The whole system is a combination of sub-systems: some parts help create the test script, and other parts use the script created at authoring time to evaluate the actions the user performs during the actual manual execution. With this, the complete activity of the tester can be captured and then compared with the pre-existing steps (which were created during authoring time) to validate whether the tester performed the test correctly.
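The comparison described above can be sketched as follows. This is a minimal illustration, not part of the disclosure itself: the `Step`, `Action`, and `compare_run` names, and the canonical action strings, are all assumptions made for the example.

```python
# Sketch: compare actions captured during manual execution against
# pre-authored steps and mark each step pass/fail.
from dataclasses import dataclass

@dataclass
class Step:
    description: str          # natural-language step authored up front
    expected_action: str      # canonical action, e.g. "click:LoginButton"

@dataclass
class Action:
    performed: str            # action captured by the recorder

def compare_run(steps, actions):
    """Pair each authored step with the captured action and mark pass/fail."""
    results = []
    for i, step in enumerate(steps):
        performed = actions[i].performed if i < len(actions) else None
        verdict = "pass" if performed == step.expected_action else "fail"
        results.append((step.description, verdict))
    return results

steps = [Step("Click the Login button", "click:LoginButton"),
         Step("Type the user name", "type:UserField")]
actions = [Action("click:LoginButton"), Action("type:PasswordField")]
print(compare_run(steps, actions))
# [('Click the Login button', 'pass'), ('Type the user name', 'fail')]
```

A real system would match actions more loosely (e.g. by target control rather than exact string equality), but the pass/fail marking follows the same shape.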

The system consists of the following parts:

Fig. 1 shows how the manual test is recorded.

Fig. 2 shows how the execution of the test is tracked.


Fig 2

Requirements:
a) An automation tool that can record the user's actions on the application.
b) A converter that understands the automation tool's output and converts it to natural language.

This may be part of the automation tool, part of the tool adapter, or an individual entity in itself.
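The converter in (b) can be sketched as a small templating step. The raw event format and the phrase templates below are assumptions for illustration; an actual automation tool would emit its own event schema.

```python
# Sketch: translate a raw event emitted by a recording/automation tool
# into a natural-language test step.
TEMPLATES = {
    "click":  "Click the '{target}' control",
    "type":   "Type '{value}' into the '{target}' field",
    "select": "Select '{value}' in the '{target}' list",
}

def to_natural_language(event):
    """event: dict like {'action': 'click', 'target': 'Login', 'value': ''}."""
    template = TEMPLATES.get(event["action"])
    if template is None:
        # Fall back to a generic phrasing for unrecognized actions.
        return "Perform '{action}' on '{target}'".format(**event)
    return template.format(**event)

print(to_natural_language({"action": "click", "target": "Login", "value": ""}))
# Click the 'Login' control
```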


c) A server where the authored scripts are stored and where the results of the test can later be stored. The correlation of this information happens on the test management server.
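The correlation role of the server in (c) can be sketched as an in-memory store that keeps authored scripts and later attaches execution verdicts to the same test. The class name, method names, and test identifiers are hypothetical, chosen only for this example.

```python
# Sketch: store authored scripts, then correlate later results against them.
class TestManagementStore:
    def __init__(self):
        self.scripts = {}   # test_id -> list of authored steps
        self.results = {}   # test_id -> list of (step, verdict) pairs

    def save_script(self, test_id, steps):
        self.scripts[test_id] = steps

    def save_results(self, test_id, verdicts):
        # Correlation: a result only makes sense against an authored script.
        if test_id not in self.scripts:
            raise KeyError("no authored script to correlate with")
        self.results[test_id] = list(zip(self.scripts[test_id], verdicts))

store = TestManagementStore()
store.save_script(7, ["Click the Login button", "Type the user name"])
store.save_results(7, ["pass", "fail"])
print(store.results[7])
# [('Click the Login button', 'pass'), ('Type the user name', 'fail')]
```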

d) A tool adapter, which is the binding between the test management server [c] and the automation tool [a]. The tool adapter contacts the server to get its tasks in order to initiate a RECORD or an Assisted Execute request (or similarly named tasks).
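The adapter's poll-and-dispatch behavior can be sketched as below. The task shape, task-type names, and callback interface are assumptions for illustration; the disclosure only specifies that the adapter fetches tasks from the server and triggers the automation tool accordingly.

```python
# Sketch: tool adapter polls the test management server for a task and
# dispatches it to the automation tool as a record or assisted-execute run.
def poll_and_dispatch(fetch_task, start_record, start_assisted_execute):
    """fetch_task() returns a dict like {'type': 'RECORD', 'test_id': 7},
    or None when the server has no pending work."""
    task = fetch_task()
    if task is None:
        return "idle"
    if task["type"] == "RECORD":
        start_record(task["test_id"])            # tool records the authored run
        return "recording"
    if task["type"] == "ASSISTED_EXECUTE":
        start_assisted_execute(task["test_id"])  # tool tracks the manual run
        return "executing"
    return "unknown-task"

# Usage with stub callbacks standing in for the automation tool:
print(poll_and_dispatch(lambda: {"type": "RECORD", "test_id": 7},
                        lambda tid: None, lambda tid: None))
# recording
```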

e) Optional: a browser through which a tester initiates the sessions. Sessions can equally be initiated from a command line or a trigger.

f) Application under Test - This is the application...