Answering complex, list and context questions with LCC's Question-Answering Server
[Figure 1: Architecture of LCC's QAS™. Modules shown include Processing of Complex Questions, Processing of List Questions, Quantification Scalar, and Processing of Context Questions.]
Three different kinds of questions were evaluated: (1) complex questions, which expect an answer from the text collections without knowing whether such an answer exists; (2) list questions, requiring a list of answers; and (3) context questions, in which the question was considered in the context of the previous questions and answers processed by the system. Three distinct evaluations were conducted, but a single question-answering architecture handled all three cases.
The Question Processing is different for each of the
three kinds of questions that were evaluated. For com-
plex, factual questions like "Q1147: What is the Statue
of Liberty made of?", the processing involves at first
the recognition of the expected answer type from an
off-line taxonomy of semantic types. In TREC-10, the
factual questions were far more complex than those
evaluated in TREC-9 and TREC-8 because frequently
the expected answer type could not be easily identi-
fied. For example, in the case of the question Q1147
virtually anything could be a criterion. To help narrow
down the search for the expected answer type and to
generate robust processing at the same time, a set of bridging inference procedures was encoded. For example, in the case of the question Q1147, the bridging inference between the question and the expected answer type encodes several meronymy relations between
different materials and the Statue of Liberty. Instead of searching for the expected answer type in each retrieved paragraph, LCC's QAS™ looks for meronymy relations involving any of the keywords used in the question.
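The meronymy-based bridging lookup described above can be sketched as follows. This is a toy illustration, not LCC's implementation: the relation table, function name, and matching heuristic are all assumptions (a lexical resource such as WordNet might supply the part-whole pairs in practice).

```python
# Toy meronymy table: (whole, part) pairs a lexical resource might supply.
MERONYMS = {
    ("statue of liberty", "copper"),
    ("statue of liberty", "iron"),
    ("statue of liberty", "steel"),
}

def bridging_candidates(question_keywords, paragraph_tokens):
    """Return paragraph tokens that stand in a meronymy relation
    with any question keyword, instead of matching a single
    expected answer type."""
    hits = []
    for whole in question_keywords:
        for token in paragraph_tokens:
            if (whole, token) in MERONYMS:
                hits.append(token)
    return hits

paragraph = "the skin of the statue is made of copper over an iron frame"
print(bridging_candidates(["statue of liberty"], paragraph.split()))
# ['copper', 'iron']
```

For Q1147 this lets the system accept "copper" or "iron" as answer candidates even though no single semantic type in the taxonomy covers "what something is made of".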
For questions expecting a list of answers, the quan-
tification scalar, defining the size of the list, is iden-
tified at the time of question processing and used
when the answers are extracted and fused together.
For example, in the case of question "Name 15 reli-
gious cults." the expected answer type is ORGANIZA-
TION of the type religious cult and the quantifier is 15.
Sometimes, the expected answer type has multiple attributes, e.g. "Name 4 people from Massachusetts who
were candidates for vice-president." Such attributes
are translated into keywords that retrieve the relevant
document passages or paragraphs.
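The quantifier-extraction and answer-fusion steps described above can be sketched as follows; the tokenization and deduplication heuristics here are illustrative assumptions, not the paper's actual procedures.

```python
def parse_list_question(question):
    """Split a list question into its quantification scalar
    (the requested list size) and the remaining keywords used
    for passage retrieval."""
    quantifier, keywords = None, []
    for tok in question.rstrip(".").split():
        if tok.isdigit() and quantifier is None:
            quantifier = int(tok)
        else:
            keywords.append(tok.lower())
    return quantifier, keywords

def fuse_answers(candidates, quantifier):
    """Deduplicate extracted answers (case-insensitively) and cap
    the fused list at the quantifier."""
    seen, fused = set(), []
    for answer in candidates:
        if answer.lower() not in seen:
            seen.add(answer.lower())
            fused.append(answer)
        if len(fused) == quantifier:
            break
    return fused

print(parse_list_question("Name 15 religious cults."))
# (15, ['name', 'religious', 'cults'])
```

The quantifier is thus carried from question processing through to answer extraction, where `fuse_answers` enforces the requested list size.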
If the question needs to be processed in the context
of the previous questions and answers, a coreference
resolution process takes place prior to the recognition
of the expected answer type. For example, the pro-
noun this from the question "On what day did this
happen?" is resolved as the event mentioned in its
preceding question, i.e. "Which museum in Florence
was damaged by a major bomb explosion in 1993?".
Reference resolution entails using the keywords that define the antecedent along with the keywords extracted from the current question.
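The context-question step above can be sketched as a small resolution-and-merge routine. The demonstrative list, stopword set, and keyword heuristic are illustrative assumptions standing in for the system's actual coreference resolution.

```python
STOPWORDS = {"on", "what", "did", "this", "which", "in",
             "was", "by", "a", "the", "of"}

def keywords(question):
    """Crude content-word extraction used for retrieval."""
    return [w.strip("?.").lower() for w in question.split()
            if w.strip("?.").lower() not in STOPWORDS]

def resolve_context(current, previous):
    """If the current question contains a demonstrative pronoun,
    prepend the antecedent keywords from the previous question."""
    merged = keywords(current)
    if any(p in current.lower().split() for p in ("this", "that", "it")):
        merged = keywords(previous) + merged
    return merged

prev = ("Which museum in Florence was damaged by a major "
        "bomb explosion in 1993?")
curr = "On what day did this happen?"
print(resolve_context(curr, prev))
# ['museum', 'florence', 'damaged', 'major', 'bomb',
#  'explosion', '1993', 'day', 'happen']
```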
The Document Processing module uses a paragraph
index to retrieve document passages that (a) contain
the keywords from the query, and (b) contain either a
concept of the expected answer type or a relation indicated by the bridging inference mechanisms. However, if insufficient evidence of the paragraph's relevance exists, pragmatic information is passed back to the feedback loop that reformulates the query.
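The two-condition paragraph filter and its feedback loop can be sketched as follows. The relevance test (plain substring matching) and the reformulation step (dropping the lowest-priority keyword) are assumptions for illustration; the paper does not specify these heuristics.

```python
def retrieve(paragraphs, query_keywords, answer_type_concepts,
             max_loops=3):
    """Return paragraphs that (a) contain all query keywords and
    (b) contain a concept of the expected answer type (or bridging
    relation). On failure, relax the query and retry."""
    keywords = list(query_keywords)
    for _ in range(max_loops):
        hits = [p for p in paragraphs
                if all(k in p for k in keywords)
                and any(c in p for c in answer_type_concepts)]
        if hits:
            return hits
        if len(keywords) <= 1:
            break
        keywords.pop()  # feedback: drop lowest-priority keyword, retry
    return []

docs = ["the statue skin is made of copper sheets",
        "liberty island hosts the statue of liberty"]
print(retrieve(docs, ["statue", "liberty"], ["copper", "iron"]))
# ['the statue skin is made of copper sheets']
```

The first pass finds no paragraph satisfying both conditions, so the loop relaxes the query to `["statue"]` and succeeds on the second pass.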
Harabagiu, Sanda M.; Moldovan, Dan I.; Paşca, Marius; Surdeanu, Mihai; Mihalcea, Rada; Gîrju, Corina R. et al. Answering complex, list and context questions with LCC's Question-Answering Server. Gaithersburg, Maryland, November 2001. University of North Texas Libraries Digital Library, digital.library.unt.edu/ark:/67531/metadc83297/m1/2/; crediting UNT College of Engineering.