
Method for Utilizing Watched Hypotheses to Annotate Corpus Documents in Question Answering (QA) System to Enhance Discovery of Search Results

IP.com Disclosure Number: IPCOM000250362D
Publication Date: 2017-Jul-05

Publishing Venue

The IP.com Prior Art Database

Abstract

A method is disclosed for utilizing watched hypotheses to annotate corpus documents in a question answering (QA) system to enhance discovery of search results.



In general, a question answering (QA) system facilitates discovering and ranking the information that is most relevant to a user's query. However, if the user's query is phrased in imprecise language, or if the user does not clearly explain the intent behind the query, the QA system fails to fetch the exact information the user requested. Further, deep QA systems have been developed that enable the user to take a more active role in communicating additional information to the search engine. These systems feature watched questions, which automatically re-run a saved question whenever new documents are ingested, in order to check whether the new documents are relevant to the user's query. However, the watched questions feature infers the user's interest or intent only from weak signals such as opening a document or following a link. There is therefore a need for a method that enhances discovery of search results by annotating corpus documents in the QA system.

Disclosed is a method for leveraging watched hypotheses to annotate corpus documents in the QA system in order to enhance discovery of search results. The watched hypotheses process infers the user's interest by analyzing a separate, explicit action, which is a far more affirmative indication of the user's interest or assessment when fetching relevant documents. The watched hypotheses, along with their evidence passages, are then used to annotate documents with metadata, wherein the documents are those from which the passages were extracted.
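The disclosure does not specify concrete data structures, but the relationship between a watched hypothesis, its evidence passages, and the resulting annotation bundle can be illustrated with a minimal sketch. The class and field names below (EvidencePassage, WatchedHypothesis, AnnotationBundle, source_document_id) are illustrative assumptions and are not taken from the disclosure.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Illustrative structures only; names and fields are assumptions,
# not taken from the disclosure.

@dataclass
class EvidencePassage:
    text: str
    source_document_id: str   # document the passage was extracted from
    rank: int

@dataclass
class WatchedHypothesis:
    query: str                          # the original user query
    hypothesis: str                     # the ranked answer the user chose to watch
    passages: List[EvidencePassage]     # ranked evidence passages returned with it

@dataclass
class AnnotationBundle:
    # Metadata harvested from the watched hypothesis and the input query,
    # later appended to the source document's metadata fields.
    fields: Dict[str, str] = field(default_factory=dict)
```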

In an embodiment, the method enables the user to submit a query to the deep QA system. The deep QA system processes the user query and returns a result set, which includes a set of ranked hypotheses (answers) along with a set of ranked evidence passages. Based on this result set, the user indicates that an individual hypothesis, together with its ranked evidence passages, should be watched, so that a document identified from the set of ranked evidence passages can be annotated. An annotation bundle is then harvested from the watched hypothesis and the input query and appended to one or more metadata fields of the document. The metadata fields are stored along with the document in a corpus. Pertinent elements are then identified from the analysis of the user query and used to form a search query over the pertinent metadata fields.
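A minimal end-to-end sketch of this flow is shown below, reusing the illustrative structures above and representing the corpus as a simple dictionary of per-document metadata. The function names (harvest_annotation_bundle, annotate_document, search_by_metadata) are assumptions chosen for illustration and do not come from the disclosure.

```python
def harvest_annotation_bundle(watched: WatchedHypothesis) -> AnnotationBundle:
    """Harvest an annotation bundle from the watched hypothesis and input query."""
    bundle = AnnotationBundle()
    bundle.fields["watched_query"] = watched.query
    bundle.fields["watched_hypothesis"] = watched.hypothesis
    return bundle

def annotate_document(corpus: Dict[str, Dict[str, str]],
                      document_id: str,
                      bundle: AnnotationBundle) -> None:
    """Append the annotation bundle to the document's metadata fields in the corpus."""
    metadata = corpus.setdefault(document_id, {})
    metadata.update(bundle.fields)

def search_by_metadata(corpus: Dict[str, Dict[str, str]],
                       pertinent_terms: List[str]) -> List[str]:
    """Form a search query over the pertinent metadata fields and return matching documents."""
    matches = []
    for doc_id, metadata in corpus.items():
        text = " ".join(metadata.values()).lower()
        if any(term.lower() in text for term in pertinent_terms):
            matches.append(doc_id)
    return matches

# Example flow: the user watches a hypothesis, the bundle is harvested and
# appended to the source document's metadata, and a later query is matched
# against those metadata fields.
corpus: Dict[str, Dict[str, str]] = {}
watched = WatchedHypothesis(
    query="What is the most popular household pet in the United States?",
    hypothesis="dog",
    passages=[EvidencePassage("Dogs are kept in many US households.", "doc-42", 1)],
)
bundle = harvest_annotation_bundle(watched)
annotate_document(corpus, watched.passages[0].source_document_id, bundle)
print(search_by_metadata(corpus, ["household pet"]))   # ['doc-42']
```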

Consider an exemplary scenario, where a user submits a query such as, “What is the most popular household pet in the United States?” The method enables the deep QA system to process the user query and return the set of ranked hypotheses along with the set of ranked evidenc...