From Neuron Coverage to Steering Angle: Testing Autonomous Vehicles Effectively
Jack Toohey, M. S. Raunak, Dave Binkley
A Deep Neural Network (DNN) based system, such as one used for autonomous vehicle operation, is a "black box" of complex interactions resulting in a classification or prediction. An important question for any such system is how to increase the reliability of, and consequently the trust in, the underlying model. To this end, researchers have largely resorted to adapting existing testing techniques. For example, analogous to statement or branch coverage in traditional software testing, neuron coverage has been hypothesized to be an effective metric for assessing a test suite's strength at uncovering failures and anomalies in the DNN. We investigate the use of realistic transformations to create new images for testing a trained autonomous vehicle DNN, and the impact of those transformations on both neuron coverage and the model's output.
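The neuron-coverage metric referred to above is commonly defined (following DeepXplore-style formulations) as the fraction of neurons whose scaled activation exceeds a threshold for at least one input in the test suite. A minimal sketch of that computation, assuming a hypothetical `activations` list of per-layer NumPy arrays of shape (batch, neurons):

```python
import numpy as np

def neuron_coverage(activations, threshold=0.5):
    """Fraction of neurons whose min-max-scaled activation exceeds
    `threshold` for at least one input in the batch.

    `activations` is assumed to be a list of arrays, one per layer,
    each of shape (num_inputs, num_neurons)."""
    covered = 0
    total = 0
    for layer in activations:
        # Scale per layer so a single threshold is comparable across layers.
        lo, hi = layer.min(), layer.max()
        scaled = (layer - lo) / (hi - lo + 1e-12)
        # A neuron counts as covered if any input activates it above threshold.
        covered += int((scaled > threshold).any(axis=0).sum())
        total += layer.shape[1]
    return covered / total

# Example: one layer, one input, two neurons; only the second exceeds 0.5.
acts = [np.array([[0.0, 1.0]])]
print(neuron_coverage(acts, threshold=0.5))  # → 0.5
```

The threshold and per-layer scaling are design choices that vary across the literature; the paper's own definition may differ in detail.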
Special Issue on Safety, Security, and Reliability of Autonomous Vehicle Software
Toohey, J., Raunak, M. S. and Binkley, D., "From Neuron Coverage to Steering Angle: Testing Autonomous Vehicles Effectively," Special Issue on Safety, Security, and Reliability of Autonomous Vehicle Software, [online], https://doi.org/10.1109/MC.2021.3079921, https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=932505 (Accessed December 8, 2021)