Publications by Hoa T. Dang (Fed)


Event Nugget and Event Coreference Annotation

June 17, 2017
Author(s)
Zhiyi Song, Ann Bies, Stephanie Strassel, Teruko Mitamura, Hoa T. Dang, Joe Ellis, Sue Holm, Yukari Yamakawa
In this paper, we describe the event nugget annotation created in support of the pilot Event Nugget Detection evaluation in 2014 and in support of the Event Nugget Detection and TAC KBP open evaluation in 2015. We present the data volume for both training…

SemEval-2013 Task 7: The Joint Student Response Analysis and 8th Recognizing Textual Entailment Challenge

June 14, 2013
Author(s)
Myroslava Dzikovska, Rodney Nielsen, Chris Brew, Claudia Leacock, Danilo Giampiccolo, Luisa Bentivogli, Peter Clark, Ido Dagan, Hoa T. Dang
We present the results of the Joint Student Response Analysis and 8th Recognizing Textual Entailment Challenge, aiming to bring together researchers in educational NLP technology and textual entailment. The task of giving feedback on student answers…

An Assessment of the Accuracy of Automatic Evaluation in Summarization

June 8, 2012
Author(s)
Karolina K. Owczarzak, John M. Conroy, Hoa T. Dang, Ani Nenkova
Automatic evaluation has greatly facilitated system development in summarization. At the same time, the use of automatic evaluation has been viewed with mistrust by many, as its accuracy and correct application are not well understood. In this paper we…

Assessing the Effect of Inconsistent Assessors on Summarization Evaluation

June 8, 2012
Author(s)
Karolina K. Owczarzak, Peter Rankel, Hoa T. Dang, John M. Conroy
We investigate the consistency of human assessors involved in summarization evaluation to understand its effect on system ranking and automatic evaluation techniques. Using Text Analysis Conference data, we measure annotator consistency based on human…

Who wrote What Where: Analyzing the content of human and automatic summaries

June 23, 2011
Author(s)
Karolina K. Owczarzak, Hoa T. Dang
Abstractive summarization has been a long-standing and long-term goal in automatic summarization, because systems that can generate abstracts demonstrate a deeper understanding of language and the meaning of documents than systems that merely extract…

An Evaluation of Technologies for Knowledge Base Population

May 21, 2010
Author(s)
Hoa T. Dang, Paul McNamee, Heather Simpson, Patrick Schone, Stephanie Strassel
Previous content extraction evaluations have neglected to address problems which complicate the incorporation of extracted information into an existing knowledge base. Previous question answering evaluations have likewise avoided tasks such as explicit…

Considering Discourse References in Textual Entailment Annotation

September 17, 2009
Author(s)
Luisa Bentivogli, Ido Dagan, Hoa T. Dang, Danilo Giampiccolo, Medea L. Leggio, Bernardo Magnini
In the 2009 Recognizing Textual Entailment (RTE) challenge, a Search Pilot task has been introduced, aimed at finding all the sentences in a corpus which entail a set of given hypotheses. The preparation of the data set for this task has provided an…

Overview of the TAC 2008 Update Summarization Task

September 4, 2009
Author(s)
Hoa T. Dang, Karolina K. Owczarzak
The summarization track at the Text Analysis Conference (TAC) is a direct continuation of the Document Understanding Conference (DUC) series of workshops, focused on providing a common data and evaluation framework for research in automatic summarization. In…

Overview of the TREC 2007 Question Answering Track

December 1, 2008
Author(s)
Hoa T. Dang, Diane Kelly, Jimmy Lin
The TREC 2007 question answering (QA) track contained two tasks: the main task, consisting of series of factoid, list, and "Other" questions organized around a set of targets, and the complex, interactive question answering (ciQA) task. The main task…

Overview of the TREC 2006 Question Answering Track

November 5, 2008
Author(s)
Hoa T. Dang, Jimmy Lin, Diane Kelly
The TREC 2006 question answering track contained two tasks: a main task and a complex, interactive question answering (ciQA) task. As in 2005, the main task consisted of series of factoid, list, and "Other" questions organized around a set of targets…

DUC in Context

November 21, 2007
Author(s)
Paul D. Over, Hoa T. Dang, Donna K. Harman
Recent years have seen increased interest in text summarization with emphasis on evaluation of prototype systems. Many factors can affect the design of such evaluations, requiring choices among competing alternatives. This paper examines several major…

DUC 2005: Evaluation of Question-Focused Summarization Systems

January 22, 2007
Author(s)
Hoa T. Dang
The Document Understanding Conference (DUC) 2005 evaluation had a single user-oriented, question-focused summarization task, which was to synthesize from a set of 25-50 documents a well-organized, fluent answer to a complex question. The evaluation shows…

Overview of the TREC 2005 Question Answering Track

October 2, 2006
Author(s)
Ellen M. Voorhees, Hoa T. Dang
The TREC 2005 Question Answering (QA) track contained three tasks: the main question answering task, the document ranking task, and the relationship task. In the main task, question series were used to define a set of targets. Each series was about a…

The Role of Semantic Roles in Disambiguating Verb Senses

December 19, 2005
Author(s)
Hoa T. Dang, Martha S. Palmer
We describe an automatic Word Sense Disambiguation (WSD) system that disambiguates verb senses using syntactic and semantic features that encode information about predicate arguments and semantic classes. Our system performs better than the best published…