
A self-diagnosing question answering system Disclosure Number: IPCOM000247244D
Publication Date: 2016-Aug-17
Document File: 5 page(s) / 132K

Publishing Venue

The Prior Art Database


A method and system for automatically diagnosing problems in a system capable of answering questions is disclosed.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 20% of the total text.



Disclosed is a method and system for automatically diagnosing problems in a system capable of answering questions. The disclosed system addresses the problem of diagnosing and fixing substantive problems in the logic-level functioning of a question answering (QA) system that prevent it from producing correct answers to a given set of questions. In contrast with current state-of-the-art practices, which rely either on statistical machine learning algorithms to improve performance or on intensive manual diagnosis, the disclosed method automates the diagnostic, solution-exploration, and testing processes.

There are many reasons why a QA system can fail to produce a correct answer to a question in any given case. The more complex and comprehensive the QA system, the greater the number of possible failure modes. That is one reason why a statistical machine learning approach that focuses mainly on the scoring module of the QA system is an attractive solution, and such approaches have been used effectively in certain domains. However, they require an enormous amount of training data, and in most real-world domains and applications such quantities of data are simply not available. Additionally, statistical systems are limited in the kinds of "corrections" they can make to a QA system: typically the corrections are adjustments to weights in some function used to score candidate answers on the basis of a fixed set of features. Some categories of error cannot be corrected by such an approach at all, e.g., the absence from the corpus of information that contains a correct answer to the question at hand.
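To make the limitation concrete, the kind of "correction" available to a statistical approach can be sketched as reweighting a fixed-feature scoring function. The feature names and weights below are hypothetical illustrations, not part of the disclosure:

```python
# Hypothetical sketch: a statistical approach can only adjust the weights
# in a scoring function defined over a fixed set of features.
def score_candidate(features, weights):
    """Score a candidate answer as a weighted sum over a fixed feature set."""
    return sum(weights[name] * value for name, value in features.items())

# Two candidate answers described by the same fixed features (illustrative values).
candidates = {
    "answer_a": {"passage_match": 0.9, "type_match": 0.2},
    "answer_b": {"passage_match": 0.4, "type_match": 1.0},
}
weights = {"passage_match": 1.0, "type_match": 1.0}  # would be learned from training data

# The system's "answer" is whichever candidate scores highest.
best = max(candidates, key=lambda a: score_candidate(candidates[a], weights))
```

No adjustment of `weights` can help when the correct answer never appears among the candidates, e.g., because the corpus lacks the relevant information, which is exactly the class of error the text notes such approaches cannot correct.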

Previously, manual diagnosis may have been the only way to improve a QA system's performance. Such an approach involves attempting to figure out what went wrong in a particular QA episode by determining which system algorithms and features played a significant role in putting forth incorrect answers, and then determining why the algorithms and features that might have produced the correct answer did not perform as well as they might have. By analyzing failures in this way, one can hope to arrive at hypotheses about measures that can be taken to improve the performance of the system, not only on a single QA episode, but on a more general set of similar episodes.

An example implementation of a QA system that self-diagnoses errors in operation and then proposes and tests possible solutions is as follows:

0. Input: Triples of a question, an answer, and a supporting evidence passage for the answer
For each triple, run the question through the QA system.

If the answer output by the system matches the answer from the input triple, the answer is considered correct; continue to the next input triple.

If the answer is incorrect, continue to Passage/Document analysis (1):

1. Passage/Document analysis: Compare the pas...
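The driver loop of step 0 can be sketched as follows. The `qa_system` and `diagnose_passage` callables stand in for the real components, which the disclosure does not specify in detail; this is a minimal illustration, not the disclosed implementation:

```python
# Hypothetical sketch of the self-diagnosis driver loop (step 0 above).
def run_self_diagnosis(triples, qa_system, diagnose_passage):
    """For each (question, answer, passage) triple, run the QA system and
    hand incorrect answers to passage/document analysis (step 1)."""
    diagnoses = []
    for question, expected_answer, passage in triples:
        produced = qa_system(question)
        if produced == expected_answer:
            continue  # answer considered correct: move to the next triple
        # Answer incorrect: pass the triple to passage/document analysis.
        diagnoses.append(diagnose_passage(question, expected_answer, passage))
    return diagnoses
```

Each incorrect answer thus triggers the comparison against its supporting evidence passage, which is where the passage/document analysis of step 1 takes over.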