
ITL Speakers Bureau: Peter Bajcsy



This talk will address the problem of measuring class encodings in neural networks (NNs). The work is motivated by the need to understand adversarial attacks in NNs designed for classification tasks; specifically, NN models trained with poisoned classes embedded into traffic-sign images, also called trojans. Trojans are defined as physically realizable triggers in input images; for example, a sticky note placed on top of a STOP sign that causes one type of traffic sign to be misclassified as another, which is unintended by the NN model design and undesirable for the NN application (e.g., classification of scene objects by self-driving cars). 
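To make the trojan concept concrete, the sketch below shows one common way such poisoning is simulated in software: a small patch (standing in for the sticky note) is pasted onto a training image and the image is relabeled to the attacker's target class. This is an illustrative assumption about the mechanics, not the talk's specific pipeline; the function and array shapes are hypothetical.

```python
import numpy as np

def poison_image(img, trigger, x, y, target_label):
    """Embed a physically realizable trigger (e.g., a sticky-note patch)
    into an image and relabel it to the attacker's target class."""
    poisoned = img.copy()
    h, w = trigger.shape[:2]
    poisoned[y:y + h, x:x + w] = trigger  # paste the patch over the sign
    return poisoned, target_label

# Toy example: a 32x32 RGB "traffic sign" image with an 8x8 yellow patch.
img = np.zeros((32, 32, 3), dtype=np.uint8)
trigger = np.full((8, 8, 3), (255, 255, 0), dtype=np.uint8)  # yellow note
poisoned, label = poison_image(img, trigger, x=20, y=4, target_label=7)
```

A model trained on enough such (poisoned image, target label) pairs learns to misclassify any sign carrying the patch, while behaving normally on clean inputs.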

Our approach is based on (a) designing a web-based interactive NN simulator of trojans for visual understanding of two-class encodings in input dot patterns [1], (b) extending the measurements of class encodings from simulations to NN models a thousand times larger, with tens to hundreds of classes and complex input images generated for four rounds of the TrojAI challenge [2] (4000+ NN models, 22 architectures, 2500 training images per traffic-sign class), and (c) using the class encodings to generate fingerprints that trace NN models back to their training images and to guide the design of trojan detectors [3]. The expected impacts of the work are in the use of (a) the publicly accessible NN simulator for educational purposes, (b) class encodings as NN fingerprints for enabling traceability of trained models to training data, and (c) clean and poisoned class encodings for improved design of trojans and trojan detectors. 
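One simple reading of a "class encoding" usable as a fingerprint is the average hidden-layer activation vector over a class's inputs. The toy sketch below illustrates that idea with a single random ReLU layer; the layer, data, and similarity measure are all assumptions for illustration and are far simpler than the measurements described in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def hidden_activations(W, X):
    """One hidden layer with ReLU; its activations serve as the
    representation in which classes are encoded."""
    return np.maximum(0.0, X @ W)

def class_encoding(W, X, labels, cls):
    """Average hidden-activation vector over all inputs of one class --
    a simple stand-in for a per-class encoding / model fingerprint."""
    return hidden_activations(W, X[labels == cls]).mean(axis=0)

# Toy data: two classes drawn from shifted Gaussians.
W = rng.normal(size=(4, 8))
X = np.vstack([rng.normal(0.0, 1.0, (50, 4)),
               rng.normal(3.0, 1.0, (50, 4))])
labels = np.array([0] * 50 + [1] * 50)

enc0 = class_encoding(W, X, labels, 0)
enc1 = class_encoding(W, X, labels, 1)
# Cosine similarity between the two class encodings.
cos = enc0 @ enc1 / (np.linalg.norm(enc0) * np.linalg.norm(enc1))
```

Comparing such encodings across models, or between clean and poisoned classes within one model, is one way a fingerprint could reveal training-data provenance or a trojan's presence.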


[1] Peter Bajcsy, Nicholas J. Schaub, and Michael Majurski, “Designing Trojan Detectors in Neural Networks Using Interactive Simulations,” Special Issue on Machine Learning for Cybersecurity Threats, Challenges, and Opportunities, Computing and Artificial Intelligence Section, Appl. Sci. 2021, 11. 

[2] TrojAI Challenge Datasets. 

[3] Peter Bajcsy and Michael Majurski, “Baseline Pruning-Based Approach to Trojan Detection in Neural Networks,” Security and Safety in Machine Learning Systems Workshop at ICLR 2021 (oral presentation), May 7, 2021. 

Keywords: neural networks, artificial intelligence-based modeling, computer vision, large scale image processing 


Peter Bajcsy Speakers Bureau

Peter Bajcsy received his Ph.D. in Electrical and Computer Engineering in 1997 from the University of Illinois at Urbana-Champaign (UIUC) and an M.S. in Electrical and Computer Engineering in 1994 from the University of Pennsylvania (UPENN). He worked in machine vision, government contracting, and research and educational institutions before joining the National Institute of Standards and Technology (NIST) in 2011. At NIST, he has been leading a project focused on applying computational science to biological metrology, specifically stem cell characterization at very large scales. Peter’s area of research is large-scale image-based analysis and synthesis using analytical, statistical, machine learning, and computational models, while leveraging computer science foundations in image processing, computer vision, pattern recognition, and artificial intelligence. He has authored more than 45 papers in peer-reviewed journals and co-authored eight books or book chapters and 110+ conference papers. 


Created March 7, 2022, Updated June 12, 2023