LabelVizier: Interactive Validation and Relabeling for Technical Text Annotations
Xiaoyu Zhang, Xiwei Xuan, Rachael Sexton, Alden A. Dima
With the rapid accumulation of text data brought forth by advances in data-driven techniques, the task of extracting "data annotations"—concise, high-quality data summaries from unstructured raw text—has become increasingly important. Researchers in the Technical Language Processing (TLP) and Machine Learning (ML) domains have developed weak supervision techniques to efficiently create annotations (labels) for large-scale unlabeled data. However, weakly supervised annotation must often trade annotation quality for speed. Annotations generated by state-of-the-art weak supervision techniques may still fail in practice because of conflicts between user requirements, application scenarios, and modeling goals. There is thus a pressing need for efficient validation and relabeling of the output of weak supervision techniques that incorporates human knowledge and domain-specific requirements. Inspired by the practice of debugging in software engineering, we address this problem by presenting LabelVizier, a human-in-the-loop workflow that provides actionable insights into annotation flaws in large-scale multi-label datasets. We implement our workflow as an interactive notebook with editable code cells for flexible data processing and a seamlessly integrated visual interface, which facilitates annotation validation for multiple error types and relabeling suggestions at different data scales. We evaluated the efficiency and generalizability of LabelVizier for improving the quality of technical text annotations through two use cases and five expert reviews. Our findings indicate that our workflow can be smoothly adapted to various application scenarios and is appreciated by domain experts with different levels of computer science background as a practical tool for improving annotation quality.