Method to Automate Manual Screen Reader Testing and Reporting Disclosure Number: IPCOM000239616D
Publication Date: 2014-Nov-19
Document File: 2 page(s) / 31K

Publishing Venue

The Prior Art Database


A method to automate manual screen reader testing and reporting is disclosed.

Assistive technology used by individuals with disabilities depends on the proper implementation of accessibility application programming interfaces (APIs) in the software with which it interacts.

What a screen reader (an assistive technology) actually speaks depends both on the implementation of the accessibility APIs in the program being read and on the algorithms and abilities of the assistive technology itself. Details that affect the result include the ability of the assistive technology to obtain information through the accessibility API and its interpretation and formatting of that information. For example, some screen readers add instructions, such as "button, press space to activate". As another example, some screen readers guess at structure when none is provided.

Because of these differences between what is coded and what the user experiences, accessibility tests often require manual testing with the actual assistive technology.

Modern software development based on principles of continuous development demands that, wherever possible, testing processes be automated. Manual testing requirements and practices are therefore problematic in a continuous development process: they risk being bypassed or even dropped from testing cycles, which can lead to undiscovered bugs and user-facing problems.

What is needed is a method and process to automate this manual part of accessibility testing.

The disclosed method uses a process where tools cause the results from the assistive technology (specifically screen readers) to be created and collected. These results are then compared to the expected results. Differences between the expected results and actual results are flagged as errors. Once the expected results are specified and the automated steps to create and collect the results are in place, the process can be run in a fully automated fashion.
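The compare-and-flag step described above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: it assumes the screen reader's spoken output has already been captured as a list of transcript lines, and the function name is hypothetical.

```python
def diff_transcripts(expected_lines, actual_lines):
    """Compare an expected transcript against the actual screen reader
    output; return (index, expected, actual) tuples for each difference,
    which the process would flag as errors."""
    errors = []
    # Compare the lines both transcripts have in common.
    for i, (exp, act) in enumerate(zip(expected_lines, actual_lines)):
        if exp != act:
            errors.append((i, exp, act))
    # A length mismatch (missing or extra speech) is also an error.
    shorter = min(len(expected_lines), len(actual_lines))
    longer = max(len(expected_lines), len(actual_lines))
    for i in range(shorter, longer):
        exp = expected_lines[i] if i < len(expected_lines) else None
        act = actual_lines[i] if i < len(actual_lines) else None
        errors.append((i, exp, act))
    return errors
```

Once the expected transcript files exist, a comparison like this can run unattended after every build, returning an empty list when the run passes.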

Pattern extraction is used where possible to increase the resilience of the test case. This works for any granularity of code that can be run individually in an automated tester.
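One way such pattern matching could work is to let an expected-results line be either literal text or a pattern, so that assistive-technology-specific wording (such as an added "press space to activate" instruction) does not break the test. The `pattern:` prefix convention below is an assumption for illustration only.

```python
import re

def line_matches(expected, actual):
    """Match one expected-results line against one line of actual
    screen reader output. Lines prefixed with 'pattern:' are treated
    as regular expressions; all other lines must match literally."""
    prefix = "pattern:"
    if expected.startswith(prefix):
        return re.fullmatch(expected[len(prefix):], actual) is not None
    return expected == actual
```

For example, the expected line `pattern:Submit, button.*` would accept both "Submit, button" and "Submit, button, press space to activate", making the same test case resilient across screen readers.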

A unit of development code is identified as belonging to an automated testing unit and expected results file set.
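This association could be recorded in a simple manifest, for example a mapping from a code unit to its testing unit and expected-results files. All names and paths below are hypothetical placeholders, and the per-screen-reader file split is an assumption.

```python
# Hypothetical manifest tying a unit of development code to its
# automated testing unit and expected-results file set.
TEST_MANIFEST = {
    "ui/login_dialog.js": {
        "test_unit": "tests/at/login_dialog_at_test.py",
        "expected_results": [
            "expected/login_dialog.reader_a.txt",
            "expected/login_dialog.reader_b.txt",
        ],
    },
}
```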

The automated testing unit has a series of steps to:

Activate the assistive technology (or an assistive technology simulator)...