
Dynamically injecting temporary errors into a review document for quality control in a body of content

IP.com Disclosure Number: IPCOM000238333D
Publication Date: 2014-Aug-18
Document File: 3 page(s) / 57K

Publishing Venue

The IP.com Prior Art Database

Abstract

Disclosed are a method and system to increase accuracy in document reviews by periodically inserting artificial errors to test reviewer competency.



Document review systems and source control management (SCM) tools commonly require documents to be reviewed and approved. For a written document, a fellow editor might review an article for grammatical, spelling, or flow issues; for source control, a peer developer might review new code for logical correctness. The reviewer plays an important role in the document lifecycle, providing a second (or third, etc.) opinion and ensuring quality before the document is published or progresses further in a process.

Ideally, each reviewer spends sufficient time with the document to observe any errors or required changes. In practice, however, some reviewers rush through the process or even blindly approve a review request without looking at the document. Both behaviors represent a breakdown in the review process that introduces cost later in the lifecycle, when problems are more expensive to fix than they would have been earlier.

The solution is a method that augments a review system to periodically test a reviewer's effectiveness, reducing the likelihood of poor-quality reviews and the subsequent cost they incur. The primary goal of the method and system is to increase accuracy in document reviews by periodically inserting artificial errors to test reviewer competency.

The method includes a system with a cross-contextual understanding of errors and corrections applicable to multiple types of content (e.g., code, plain text, etc.). The system intentionally inserts temporary errors of varying severity and frequency into the reviewer's version of a document, then determines whether the reviewer successfully detected them. This determination contributes to an effectiveness score for the current document review and factors into an overall competence score for the reviewer. Before the review begins, the system can notify the reviewer that one or more artificial errors are present in the document to encourage thorough inspection.
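As a minimal sketch of how such a review session might track injected errors and score the reviewer, consider the Java outline below. All names here (ReviewSession, TemporaryError, the flag() call, and the scoring rule) are illustrative assumptions; the disclosure does not define a concrete API or scoring formula.

// Sketch of error injection and effectiveness scoring; names are assumed.
import java.util.*;

public class ReviewSession {
    // An artificial error the system planted, keyed by the line it occupies.
    record TemporaryError(int line, String description, int difficulty) {}

    private final List<TemporaryError> injected = new ArrayList<>();
    private final Set<Integer> flaggedLines = new HashSet<>();

    // Insert an artificial error into the reviewer's copy of the document.
    // The original document is never modified; only the review copy changes.
    public List<String> inject(List<String> reviewCopy, TemporaryError err,
                               String corruptedLine) {
        List<String> copy = new ArrayList<>(reviewCopy);
        copy.set(err.line(), corruptedLine);
        injected.add(err);
        return copy;
    }

    // Record a line the reviewer flagged during the review.
    public void flag(int line) {
        flaggedLines.add(line);
    }

    // Effectiveness score for this review: the fraction of injected errors
    // the reviewer caught. Per the disclosure, this would also feed an
    // overall competence score for the reviewer over time.
    public double effectivenessScore() {
        if (injected.isEmpty()) return 1.0;
        long caught = injected.stream()
                .filter(e -> flaggedLines.contains(e.line()))
                .count();
        return (double) caught / injected.size();
    }

    public static void main(String[] args) {
        ReviewSession session = new ReviewSession();
        List<String> doc = List.of("int total = a + b;", "return total;");
        // Plant one artificial off-by-one error on line 0 of the review copy.
        session.inject(doc, new TemporaryError(0, "off-by-one", 1),
                       "int total = a + b + 1;");
        session.flag(0); // the reviewer catches it
        System.out.println("effectiveness = " + session.effectivenessScore());
    }
}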

The system uses analytics to understand what type of content is under review (e.g., programming language source code, plain text, etc.) and correlates the document with an appropriate error type. The system then inserts the errors into the document. Knowing that artificial errors may be present, reviewers must be diligent and conduct a thorough review of the material. In the process of discovering the artificial errors in the document, the reviewers are likely to discover any actual errors as well.
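The disclosure does not detail the analytics; as a minimal sketch of the classification-and-correlation step, the example below uses file extensions as a simple stand-in for real content analysis and maps each content type to a family of artificial errors. The enum values and the mapping are assumptions for illustration.

// Sketch of content-type detection and error-type correlation; assumed names.
import java.util.Map;

public class ContentClassifier {
    enum ContentType { JAVA_SOURCE, PLAIN_TEXT, UNKNOWN }

    // Correlate each content type with the family of artificial errors that
    // fits it: logic/syntax errors for code, spelling/grammar for prose.
    static final Map<ContentType, String> ERROR_FAMILY = Map.of(
            ContentType.JAVA_SOURCE, "logic-and-syntax",
            ContentType.PLAIN_TEXT, "spelling-and-grammar");

    // A real system might analyze the content itself; a file extension is
    // used here only as a simple proxy.
    static ContentType classify(String fileName) {
        if (fileName.endsWith(".java")) return ContentType.JAVA_SOURCE;
        if (fileName.endsWith(".txt") || fileName.endsWith(".md"))
            return ContentType.PLAIN_TEXT;
        return ContentType.UNKNOWN;
    }

    public static void main(String[] args) {
        ContentType type = classify("Worker.java");
        System.out.println(type + " -> " + ERROR_FAMILY.get(type));
    }
}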

For example, a developer submits a changeset, related to synchronization between threads, for a group of Java* classes in a lifecycle management tool. The system recognizes that the document under review is Java source code and prepares three artificial errors to inject into the review changeset. The three errors vary in degree of difficulty. Le...
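To make the scenario concrete, one injected error of moderate difficulty might be the removal of a synchronized keyword, creating a race condition the reviewer should catch. The Counter class below is invented for illustration; the disclosure does not show the actual changeset.

// Hypothetical reviewer's copy containing one injected error.
public class Counter {
    private int count = 0;

    // In the original changeset this method is declared
    // "public synchronized void increment()". In the reviewer's copy, the
    // system has removed "synchronized", so concurrent calls can lose
    // updates because the read-modify-write is no longer atomic.
    public void increment() {
        count++;
    }

    public synchronized int get() {
        return count;
    }
}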