LabelVizier: Interactive Validation and Relabeling for Technical Text Annotations
Published
Author(s)
Xiaoyu Zhang, Xiwei Xuan, Rachael Sexton, Alden A. Dima
Abstract
With the rapid accumulation of text data brought about by advances in data-driven techniques, the task of extracting "data annotations" (concise, high-quality data summaries from unstructured raw text) has become increasingly important. Researchers in the Technical Language Processing (TLP) and Machine Learning (ML) domains have developed weak supervision techniques to efficiently create annotations (labels) for large-scale unlabeled data. However, weakly supervised annotation involves a trade-off between annotation quality and speed, and even annotations generated by state-of-the-art weak supervision techniques may fail in practice because of conflicts between user requirements, application scenarios, and modeling goals. There is a pressing need for efficient validation and relabeling of the output of weak supervision techniques that incorporates human knowledge and domain-specific requirements. Inspired by the practice of debugging in software engineering, we address this problem with LabelVizier, a human-in-the-loop workflow that provides actionable insights into annotation flaws in large-scale multi-label datasets. We present the workflow as an interactive notebook with editable code cells for flexible data processing and a seamlessly integrated visual interface, which together support annotation validation for multiple error types and relabeling suggestions at different data scales. We evaluated the efficiency and generalizability of LabelVizier for improving the quality of technical text annotations in two use cases and five expert reviews. Our findings indicate that the workflow adapts smoothly to various application scenarios and is valued by domain experts with different levels of computer science background as a practical tool for improving annotation quality.
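To make the described workflow concrete, the following is a minimal sketch of a notebook-style validate-then-relabel loop for weakly supervised multi-label annotations. It is an illustration only, assuming hypothetical names (`Record`, `validate`, `relabel`, and the example rules); it is not the LabelVizier API.

```python
# Hypothetical sketch: flag likely annotation errors in a multi-label dataset
# and apply expert-approved corrections. Names are illustrative, not LabelVizier's.
from dataclasses import dataclass, field

@dataclass
class Record:
    text: str
    labels: set                      # weakly supervised multi-label annotation
    flags: list = field(default_factory=list)

def validate(records, required_cooccurrence):
    """Flag records whose label sets violate simple domain rules."""
    for rec in records:
        if not rec.labels:
            rec.flags.append("missing label")
        for a, b in required_cooccurrence:
            if a in rec.labels and b not in rec.labels:
                rec.flags.append(f"'{a}' usually co-occurs with '{b}'")
    return [rec for rec in records if rec.flags]

def relabel(flagged, corrections):
    """Apply expert-approved corrections (old_label -> new_label) in bulk."""
    for rec in flagged:
        rec.labels = {corrections.get(lbl, lbl) for lbl in rec.labels}
        rec.flags.clear()

if __name__ == "__main__":
    data = [
        Record("pump bearing overheating", {"bearing"}),
        Record("replaced seal on valve", set()),
    ]
    flagged = validate(data, required_cooccurrence=[("bearing", "pump")])
    for rec in flagged:
        print(rec.text, "->", rec.flags)
    # In an interactive notebook, a domain expert would inspect the flagged
    # records in a visual interface and supply the corrections dictionary.
    relabel(flagged, corrections={"bearing": "pump bearing"})
```

In a notebook setting, the editable cells would replace the hard-coded rules and corrections above, which is the flexibility the abstract attributes to the interactive workflow.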