
Answering complex, list and context questions with LCC's
Question-Answering Server
Sanda Harabagiu*, Dan Moldovan*,
Marius Paşca*, Mihai Surdeanu*, Rada Mihalcea, Roxana Gîrju, Vasile Rus,
Finley Lăcătuşu, Paul Morărescu and Răzvan Bunescu
Language Computer Corporation*
Dallas, TX 75206
{sanda,moldovan}@languagecomputer.com

This paper presents the architecture of the Question-Answering Server (QAS™) developed at the Language Computer Corporation (LCC) and used in the TREC-10 evaluations. LCC's QAS™ extracts answers for (a) factual questions of variable degrees of difficulty; (b) questions that expect lists of answers; and (c) questions posed in the context of previous questions and answers. One of the major novelties is the implementation of bridging inference mechanisms that guide the search for answers to complex questions. Additionally, LCC's QAS™ encodes an efficient way of modeling context via reference resolution. In TREC-10, this system generated an RAR of 0.58 on the main task and 0.78 on the context task.
Systems providing question-answering services need to process questions of variable degrees of complexity, ranging from inquiries about definitions of concepts, e.g. "What is semolina?", to details about attributes of events or entities, e.g. "For how long is an elephant pregnant?". Finding the answer to a question often involves various degrees of bridging inference, depending on the formulation of the question and the actual expression of the answer extracted from the underlying collection of documents. For example, the question "How do you measure earthquakes?" is answered by the following text snippet extracted from the TREC collection: "Richter scale that measures earthquakes", because the required inference is very simple: a measuring scalar, i.e. the Richter scale, has a relative adjunct introduced by the same verb as in the question, having the same object of measurement. Yet a different, more complex form of inference is imposed by questions like "What is done with worn and outdated flags?".
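The simplest bridging case above can be illustrated in code. The following is a minimal sketch, not LCC's actual algorithm: a hypothetical check that favors a candidate snippet when it repeats the question's main verb (in some inflected form) together with its object of measurement.

```python
# Hypothetical illustration of the simplest bridging inference: the
# snippet shares the question's verb and its object of measurement.

def shares_verb_and_object(question_verb, question_object, snippet):
    """True if the snippet contains the question's verb (in a simple
    inflected form) together with the question's object."""
    tokens = set(snippet.lower().replace(",", " ").split())
    verb_forms = {question_verb, question_verb + "s",
                  question_verb + "d", question_verb + "ed",
                  question_verb + "ing"}
    return bool(verb_forms & tokens) and question_object.lower() in tokens

# "How do you measure earthquakes?" vs. the TREC snippet quoted above
print(shares_verb_and_object("measure", "earthquakes",
                             "Richter scale that measures earthquakes"))  # True
```

A real system would of course use lemmatization and parse-level adjunct detection rather than string inflections; the sketch only shows why this particular question-answer pair requires so little inference.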
The Question-Answering Server (QAS™) developed at the Language Computer Corporation (LCC) encodes methods of performing several different bridging inferences that recognize the answers to questions of variable degrees of complexity. The pragmatic knowledge required by different forms of inference is distributed along the three main modules of LCC's QAS™: the Question Processing module, the Document Processing module and the Answer Processing module. Some of the inference forms enabled by LCC's QAS™ determine the answer fusion mechanisms that assemble the list-answers expected by questions like "Name 20 countries that produce coffee."
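The fusion of partial answers into a single list-answer can be sketched as follows. This is a hypothetical simplification under assumed data structures, not the paper's mechanism: ranked extractions from several documents are merged, duplicates are collapsed, and assembly stops once the requested count is reached.

```python
# Hypothetical sketch of list-answer fusion: merge ranked partial
# answers, collapse duplicates, stop at the requested count.

def fuse_list_answers(partial_answers, requested_count):
    """Merge ranked partial answers into one deduplicated list-answer."""
    seen, fused = set(), []
    for answer in partial_answers:        # assumed already ranked
        key = answer.strip().lower()      # naive duplicate detection
        if key not in seen:
            seen.add(key)
            fused.append(answer.strip())
        if len(fused) == requested_count:
            break
    return fused

# "Name 20 countries that produce coffee." with overlapping extractions
candidates = ["Brazil", "Colombia", "brazil", "Kenya", "Colombia ", "Ethiopia"]
print(fuse_list_answers(candidates, 20))
# ['Brazil', 'Colombia', 'Kenya', 'Ethiopia']
```

Real fusion must also decide when two differently worded extractions denote the same entity, which is where the inference forms mentioned above come into play.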
Questions are rarely asked in isolation. When satisfied by an answer, a user may have follow-up questions requiring additional information. If the answer is not satisfactory, a new question may clarify the user's intentions, thus enabling a better disambiguation of the question. LCC's QAS™ is capable of answering questions in context, thus exploiting the common ground generated between the answer of a question like "Which museum in Florence was damaged by a major bomb explosion in 1993?" and its follow-up questions "On what day did this happen?" or "Which galleries were involved?". These new capabilities of (a) answering more complex questions than those evaluated in TREC-8 and TREC-9; (b) detecting when a question does not have an answer in the collection; (c) fusing several answers that provide partial information for questions expecting list-answers; and (d) answering questions in context stem from a new architecture that enhances the three-module streamlined operation used in the previous TREC Q/A evaluations.
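The kind of context modeling just described can be illustrated with a deliberately naive sketch. This is not the paper's reference-resolution method: it merely substitutes an assumed focus of the previous exchange for a demonstrative or pronoun in the follow-up question.

```python
# Hypothetical sketch of context modeling via reference resolution:
# rewrite a follow-up question against the previous question's focus.

DEMONSTRATIVES = {"this", "that", "it"}

def resolve_in_context(follow_up, previous_focus):
    """Replace a demonstrative/pronoun with the prior focus phrase,
    preserving any trailing punctuation."""
    resolved = []
    for word in follow_up.split():
        bare = word.strip("?.,")
        if bare.lower() in DEMONSTRATIVES:
            resolved.append(word.replace(bare, previous_focus))
        else:
            resolved.append(word)
    return " ".join(resolved)

# Follow-up to the Florence museum question; the focus phrase is assumed.
print(resolve_in_context("On what day did this happen?",
                         "the 1993 explosion"))
# On what day did the 1993 explosion happen?
```

An actual system must first determine what the focus is, choosing among the entities and events of both the previous question and its answer, which is the hard part of the problem.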
The architecture of LCC's QAS™
The architecture of LCC's QAS™ used in the TREC-10 evaluations is illustrated in Figure 1. Three dif-

¹In TREC-8, the Q/A evaluations showed that the best performing systems exploited the combination of Named Entity semantics with the semantics of question stems. In TREC-9, two trends could be observed: (1) systems that used advanced pragmatic and semantic knowledge in the processing of questions and answers, and (2) systems that improved on new ways of indexing and retrieving the paragraphs where the answers may lie.

Harabagiu, Sanda M.; Moldovan, Dan I.; Paşca, Marius; Surdeanu, Mihai; Mihalcea, Rada; Gîrju, Roxana et al. Answering complex, list and context questions with LCC's Question-Answering Server. Paper, November 2001, Gaithersburg, Maryland.