An Assistive Learning Workflow on Annotating Images for Object Detection
Author(s)
Vivian W. Wong, Max K. Ferguson, Kincho H. Law, Yung-Tsun Lee
Abstract
We present an end-to-end workflow for generating annotated image datasets for object detection. With this workflow, which we call assistive learning, we reduce manual annotation time on two experimental datasets by 79.4% and 83.1%. The experimental results show three contributions of the assistive learning workflow: (1) savings in human annotation time; (2) generalizability to variable dataset sizes, domains, and convolutional neural network (CNN) models; and (3) faster CNN training with a limited amount of labeled data using a novel contextual sampling method, thereby reducing human workload early in the assistive learning process. In addition, we wrap the workflow in an interactive annotation interface, allowing annotators without any machine learning experience to speed up the annotation process for training the CNN models.
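To illustrate the general idea of an assistive (human-in-the-loop) annotation workflow, the following Python sketch shows one plausible structure: the CNN pre-annotates each batch of images, the annotator only corrects the proposals, and the model is retrained on the growing labeled pool so later batches need less correction. This is not the authors' implementation; all function names (propose_boxes, correct_boxes, retrain, select_batch) are hypothetical placeholders, and the simple sampler stands in for the paper's contextual sampling method.

# Illustrative sketch of an assistive annotation loop.
# All names below are hypothetical placeholders, not the authors' code or any library API.

import random
from typing import Dict, List, Tuple

Box = Tuple[int, int, int, int]  # (x_min, y_min, x_max, y_max)


def propose_boxes(model: Dict, image_id: int) -> List[Box]:
    """Stand-in for CNN inference that pre-annotates an image."""
    random.seed(image_id + model.get("version", 0))
    n = random.randint(0, 3)
    return [(i * 10, i * 10, i * 10 + 50, i * 10 + 50) for i in range(n)]


def correct_boxes(proposed: List[Box]) -> List[Box]:
    """Stand-in for the annotator fixing only the boxes the model got wrong."""
    return proposed  # in practice this step is interactive human correction


def retrain(model: Dict, labeled: Dict[int, List[Box]]) -> Dict:
    """Stand-in for fine-tuning the CNN on the labeled pool collected so far."""
    return {"version": model.get("version", 0) + 1, "n_train": len(labeled)}


def select_batch(unlabeled: List[int], batch_size: int) -> List[int]:
    """Simplified sampler; the paper's contextual sampling would go here."""
    return unlabeled[:batch_size]


def assistive_annotation(image_ids: List[int], batch_size: int = 4) -> Dict[int, List[Box]]:
    """Annotate images in batches, retraining the model between batches."""
    model: Dict = {"version": 0}
    labeled: Dict[int, List[Box]] = {}
    remaining = list(image_ids)
    while remaining:
        batch = select_batch(remaining, batch_size)
        for image_id in batch:
            labeled[image_id] = correct_boxes(propose_boxes(model, image_id))
            remaining.remove(image_id)
        model = retrain(model, labeled)  # better proposals for the next batch
    return labeled


if __name__ == "__main__":
    annotations = assistive_annotation(list(range(12)))
    print(f"Annotated {len(annotations)} images.")

The key design point this sketch captures is that annotation effort shifts from drawing every box to verifying and correcting model proposals, which is where the reported time savings come from.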
Wong, V., Ferguson, M., Law, K. and Lee, Y. (2019), An Assistive Learning Workflow on Annotating Images for Object Detection, 2019 IEEE International Conference on Big Data, Los Angeles, CA, US, [online], https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=928783 (Accessed October 12, 2025)