Extracting a Representation from Text for Semantic Analysis Page: 244


lexical entailment baselines based on Glickman et al. (2005).
4 Discussion and Future Work
Analysis of the reference facet extraction results reveals many interesting open linguistic issues in this area, including the need for a more sophisticated treatment of adjectives, conjunctions, plurals, and quantifiers, all of which are known to be beyond the abilities of state-of-the-art parsers.
In an analysis of the dependency parses of 51 of the student answers, about 24% contained errors that could easily lead to problems in assessment. Over half of these errors resulted from inopportune sentence segmentation of run-on student sentences conjoined by and (e.g., in the parse of a shorter string makes a higher pitch and a longer string makes a lower pitch, the parser errantly conjoined a higher pitch and a longer string as the subject of makes a lower pitch, leaving a shorter string makes without an object). We are working on approaches to mitigate this problem.
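One conceivable mitigation (our own heuristic sketch, not the authors' method) is to split a run-on student sentence at and only when both sides look like independent clauses, e.g., when each side contains a verb cue; noun-phrase conjunctions like a higher pitch and a longer string are left intact:

```python
import re

# Hypothetical heuristic, for illustration only: the verb-cue list and the
# splitting rule are our assumptions, not part of the paper's system.
VERB_CUES = {"makes", "make", "is", "are", "has", "have", "produces"}

def split_run_on(sentence):
    """Split at 'and' only when both sides contain a verb cue."""
    parts = [p.strip() for p in re.split(r"\band\b", sentence)]
    clauses, current = [], parts[0]
    for part in parts[1:]:
        left_is_clause = any(w in VERB_CUES for w in current.split())
        right_is_clause = any(w in VERB_CUES for w in part.split())
        if left_is_clause and right_is_clause:
            clauses.append(current)  # both sides clausal: segment here
            current = part
        else:
            current = current + " and " + part  # NP conjunction: keep joined
    clauses.append(current)
    return clauses
```

On the example above, this yields two separately parseable clauses, a shorter string makes a higher pitch and a longer string makes a lower pitch, so neither verb loses its arguments to a spurious conjunction.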
In the long term, when the ITS generates its own questions and reference answers, it will also have to construct the corresponding reference answer facets. The automatic construction of reference answer facets must deal with all of the issues described in this paper and is a significant area of future research. Other key areas of future research involve integrating the representation described here into an ITS and evaluating its impact.
5 Conclusion
We presented a novel fine-grained semantic representation and evaluated it in the context of automated tutoring. A significant contribution of this representation is that it will facilitate more precise tutor feedback, targeted to the specific facet of the reference answer and pertaining to the specific level of understanding expressed by the student. This representation could also be useful in areas such as question answering or document summarization, where a series of entailed facets could be composed to form a full answer or summary.
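As a rough illustration of the facet-level idea (all names below are our own; the paper's facets are richer than bare dependency triples), a reference answer can be decomposed into governor-relation-dependent triples and each one labeled against the student answer, so feedback can target exactly the facet that is missing:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Facet:
    """A single reference-answer facet, sketched as a dependency-style triple."""
    governor: str
    relation: str
    dependent: str

def assess(reference_facets, student_facets):
    """Label each reference facet as 'expressed' or 'unaddressed'."""
    return {f: ("expressed" if f in student_facets else "unaddressed")
            for f in reference_facets}

# Reference answer: "a longer string makes a lower pitch"
reference = {
    Facet("string", "mod", "longer"),
    Facet("makes", "agent", "string"),
    Facet("makes", "product", "pitch"),
    Facet("pitch", "mod", "lower"),
}
# Student answer: "the string makes a pitch" -- misses both modifiers
student = {Facet("makes", "agent", "string"),
           Facet("makes", "product", "pitch")}

labels = assess(reference, student)
```

Here the two modifier facets come back "unaddressed", which is precisely the kind of targeted signal a tutor can act on ("what kind of pitch does a longer string make?").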
The representation's validity is partially demonstrated by the ability of annotators to reliably annotate inferences at this facet level, achieving substantial agreement (86%, Kappa = 0.72), and by promising results in the automatic assessment of student answers at this facet level (up to 26% over baseline). The latter is particularly promising given that, in addition to the manual reference answer facet representation, an automatically extracted approximation of the representation was a key factor in the features utilized by the classifier.
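As a quick sanity check (our own arithmetic, not from the paper), the two reported agreement figures are mutually consistent: Cohen's kappa is (p_o - p_e) / (1 - p_e), so 86% raw agreement with Kappa = 0.72 implies chance agreement p_e = 0.5:

```python
def cohens_kappa(p_observed, p_expected):
    """Chance-corrected inter-annotator agreement."""
    return (p_observed - p_expected) / (1 - p_expected)

# With p_o = 0.86 and an assumed chance agreement p_e = 0.5,
# kappa = (0.86 - 0.5) / (1 - 0.5) = 0.72, matching the reported value.
kappa = cohens_kappa(0.86, 0.5)
```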
The domain-independent approach described here enables systems that can easily scale up to new content and learning environments, avoiding the need for lesson planners or technologists to create extensive new rules or classifiers for each new question the system must handle. This is an obligatory first step toward the long-term goal of creating ITSs that can truly engage children in natural, unrestricted dialog, such as is required to perform high-quality, student-directed Socratic tutoring.
Acknowledgments
This work was partially funded by Award Number
0551723 from the National Science Foundation.
References
Briscoe, E., Carroll, J., Graham, J., and Copestake, A. 2002. Relational evaluation schemes. In Proc. of the Beyond PARSEVAL Workshop at LREC.
Gildea, D. and Jurafsky, D. 2002. Automatic labeling of semantic roles. Computational Linguistics.
Glickman, O., Dagan, I., and Koppel, M. 2005. Web Based Probabilistic Textual Entailment. In Proc. RTE.
Jordan, P., Makatchev, M., and VanLehn, K. 2004. Combining competing language understanding approaches in an intelligent tutoring system. In Proc. ITS.
Kipper, K., Dang, H., and Palmer, M. 2000. Class-Based Construction of a Verb Lexicon. In Proc. AAAI.
Lawrence Hall of Science. 2006. Assessing Science Knowledge (ASK). UC Berkeley, NSF-0242510.
Leacock, C. 2004. Scoring free-response automatically: A case study of a large-scale assessment. Examens.
Lin, D. and Pantel, P. 2001. Discovery of inference rules for Question Answering. Natural Language Engineering.
Nielsen, R., Ward, W., and Martin, J.H. 2008a. Learning to Assess Low-level Conceptual Understanding. In Proc. FLAIRS.
Nielsen, R., Ward, W., Martin, J.H., and Palmer, M. 2008b. Annotating Students' Understanding of Science Concepts. In Proc. LREC.
Nivre, J., Hall, J., Nilsson, J., Eryigit, G., and Marinov, S. 2006. Labeled Pseudo-Projective Dependency Parsing with Support Vector Machines. In Proc. CoNLL.
Palmer, M., Gildea, D., and Kingsbury, P. 2005. The Proposition Bank: An annotated corpus of semantic roles. Computational Linguistics.


Nielsen, Rodney D.; Ward, Wayne; Martin, James H. & Palmer, Martha. Extracting a Representation from Text for Semantic Analysis, paper, June 2008; Stroudsburg, Pennsylvania. (https://digital.library.unt.edu/ark:/67531/metadc1042597/m1/4/ocr/: accessed March 24, 2019), University of North Texas Libraries, Digital Library, https://digital.library.unt.edu; crediting UNT College of Engineering.

International Image Interoperability Framework (This Page)