On February 7 and 8, 2024, the PSCR UAS portfolio hosted the Workshop on Cybersecurity and Artificial Intelligence (AI) Risk Management (CSAIRM) for UAS in Public Safety at the Montgomery County Fire & Rescue Training Academy in Gaithersburg, Maryland, USA. This workshop brought together a wide spectrum of stakeholders, including public safety end users, management, academia, and developers, with the goals of:
- Networking and learning about each other's capabilities and challenges.
- Beginning to develop a roadmap for improving cybersecurity and AI risk management in this domain.
One of the main outcomes of the workshop was the formulation of an initial “Top-10” checklist: a set of questions that public safety management personnel, such as fire and police chiefs, could ask their vendors, IT staff, and other people in their UAS ecosystem to better understand their CSAIRM posture.
The draft list is included below. Following the workshop, the UAS portfolio began engaging further with stakeholders across the public safety, AI, and cybersecurity ecosystem to validate the utility of the questions and the interpretability of potential answers, with the ultimate goal of providing useful, expert-driven guidance and educational materials that support public safety organizations in assessing and improving their CSAIRM posture across UAS program implementation and operations.
Preliminary Top-10 Questions
- How secure is the system? What is the attack surface? How, and in what timeframe, will we be notified of breaches of the various severity levels? What cybersecurity stress testing or “Red Teaming” has been performed, on both the system and systems on which it depends?
- How and where is the data stored? What is the level of encryption and other security methods for the data and its derivatives, such as metadata, error logs, vendor telemetry, and temporary files, in transit and at rest? Who can decrypt or modify the data?
- How can we determine if the AI is giving incorrect information? What are the possible mission-critical failure modes of the system? Do we have enough access to the system to determine whether a decision we disagree with is due to an error in the AI system, or to otherwise perform root-cause analysis? How can we correct the behavior of the system? What measures are taken to ensure an appropriate level of data privacy? What levels of data privacy have been chosen, and how were those decisions made?
- In what physical locations is the AI processing being done? For instance, on the UAS, on the controller, on our own servers, or in a cloud datacenter in-state, in-country, or elsewhere?
- What AI and cybersecurity challenges have other users run into with this system and what steps have they taken to mitigate risk?
- Is information about the confidence that the AI system has in its decision available to the operator and/or investigator? What information is available to assist in interpreting this?
- What will the system do in the event it encounters a situation that is unusual or unexpected? Is the system capable of detecting if it is operating in a situation that was not represented during its development, training, or testing?
- Where does additional input data to the system, such as maps and positioning, come from? How is it updated and authenticated, and what measures are in place to address intentional and accidental interference, corruption, jamming, spoofing, or similar threats?
- What measures are taken to ensure continuity of critical operations in the event that the system undergoes planned or unplanned downtime or end-of-life? How likely is it that the information and systems required for recovery are affected by the same cyber attack as the main system? If the AI system suffers a failure, such as effects from a bad model or input data, what plans exist to roll back to a good model?
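To make the authentication question above more concrete, the sketch below shows one simple way a system might verify that a data update (for example, a map tile) came from a trusted source and was not corrupted in transit. This is an illustrative example only, not guidance from the workshop; the key, function names, and payload format are all hypothetical, and a real deployment would need proper key management.

```python
import hashlib
import hmac

# Hypothetical shared secret, distributed out-of-band to the ground station.
# Illustrative only; real systems need managed, rotated keys.
SHARED_KEY = b"example-key-not-for-production"

def sign_update(payload: bytes) -> str:
    """Compute an HMAC-SHA256 tag for a data update (e.g., a map tile)."""
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

def verify_update(payload: bytes, tag: str) -> bool:
    """Accept the update only if its tag matches, using a timing-safe compare."""
    expected = sign_update(payload)
    return hmac.compare_digest(expected, tag)

map_tile = b"tile:39.14,-77.22;elev=112m"
tag = sign_update(map_tile)
assert verify_update(map_tile, tag)              # authentic update accepted
assert not verify_update(b"tile:tampered", tag)  # altered update rejected
```

A vendor's answer to the authentication question might describe a mechanism of this general shape (or a stronger one, such as public-key signatures), along with how the keys themselves are protected.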
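Similarly, the question about detecting situations not represented during training can be illustrated with a very simple out-of-distribution check: flag any input whose values fall outside the envelope seen in training data. The feature names and ranges below are hypothetical, and real systems typically use far more sophisticated detectors; the sketch only shows the kind of capability the question is probing for.

```python
# Hypothetical per-feature (min, max) ranges observed in training data.
TRAINING_RANGES = {
    "altitude_m": (0.0, 120.0),
    "wind_speed_mps": (0.0, 15.0),
}

def out_of_distribution(sample: dict) -> list:
    """Return the features whose values fall outside the training envelope."""
    flagged = []
    for name, value in sample.items():
        low, high = TRAINING_RANGES[name]
        if not (low <= value <= high):
            flagged.append(name)
    return flagged

# A nominal flight profile raises no flags; an extreme one does.
assert out_of_distribution({"altitude_m": 50.0, "wind_speed_mps": 5.0}) == []
assert out_of_distribution({"altitude_m": 140.0, "wind_speed_mps": 22.0}) == [
    "altitude_m", "wind_speed_mps"]
```

An operator-facing system could surface such flags alongside the AI's confidence information, helping answer whether the system knows when it is outside familiar territory.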
Next Steps
As of early 2025, PSCR continues to engage with stakeholders across the broader UAS, AI, cybersecurity, and public safety ecosystem on cybersecurity and AI risk management, with the objective of supporting an improved risk management posture across public safety UAS operations. If you are interested in contributing to this project or have opportunities for collaboration, please reach out to our team directly at psprizes [at] nist.gov.