University of Virginia
In April 2021, the University of Virginia (UVA) was awarded $1.14M for the Public Safety Innovation Accelerator Program: Augmented Reality (AR) cooperative agreement.
With advances in sensing, computing, and visualization technologies, vast amounts of previously inaccessible data streams from mobile and wearable devices are becoming available to first responders. AR interfaces can enable the seamless integration of multimodal sensor data from the scene with responder workflows to assist in situation assessment, decision making, and timely response. However, existing technologies and current applications of AR for Emergency Medical Services (EMS) primarily focus on training and education rather than real-time situational awareness.
UVA proposes to combine existing AR technology with wearable sensing and computing devices and machine intelligence capabilities to develop a cognitive assistant system that captures, integrates, and assesses incident data in real time. The system could provide prioritized, aggregated situational awareness to a team of responders through timely, smart notifications on AR interfaces.
UVA’s research team will include three principal investigators, three Ph.D. students, and several undergraduate student researchers, who will collaborate with local public safety agencies in Virginia, including the North Garden Fire Department and the Richmond Ambulance Authority.
The main objective is to develop an AR cognitive assistant system that supports responders with real-time inference of incident context based on observations at the scene and provides context-dependent feedback on timely and safe execution of response actions.
The proposed system will integrate AR interfaces (with audio and video measurement capabilities) with responder-worn sensors and data analytics algorithms (for speech recognition, natural language processing (NLP), image/video classification, and activity recognition) to perform data fusion and context inference based on multimodal sensor data from multiple responders on the scene.
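As an illustration of the multimodal fusion step described above, the sketch below accumulates confidence-weighted labels from several responders' sensor pipelines into a single ranked estimate of the incident context. All modality names, labels, and confidence values are hypothetical and chosen only for illustration; they are not drawn from the proposal.

```python
from collections import defaultdict

def fuse_observations(observations):
    """Fuse confidence-weighted labels from multiple sensor pipelines
    into a ranked list of candidate incident contexts."""
    scores = defaultdict(float)
    for modality, label, confidence in observations:
        # Each observation: (source modality, inferred label, confidence in [0, 1])
        scores[label] += confidence
    # Rank candidate contexts by total accumulated evidence
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical outputs from speech recognition, video classification,
# and activity recognition running on different responders' devices
observations = [
    ("speech", "cardiac_arrest", 0.8),
    ("video", "cpr_in_progress", 0.6),
    ("activity", "cpr_in_progress", 0.7),
]
ranked = fuse_observations(observations)
# ranked[0] is the context label with the most accumulated evidence
```

A deployed system would of course weight modalities and handle conflicting evidence far more carefully; the point here is only the shape of the fusion step, in which independent per-responder inferences are pooled into one shared context estimate.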
The UVA team will combine the knowledge of EMS protocol guidelines and pre-collected incident data with machine learning methods to develop a computational framework for risk-aware decision support that captures the behavioral models of EMS protocols and requirements for safe execution of interventions.
The proposed framework will be used to track the timing and quality of response actions in real time and to provide just-in-time smart reminders customized to each responder.
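The kind of protocol tracking described here could be sketched as follows: steps of a protocol are checked against target completion times, and any step that is overdue and not yet marked done triggers a reminder. The step names and target times below are invented for illustration and do not reflect any actual EMS protocol.

```python
class ProtocolTracker:
    """Track protocol steps against target times and surface
    just-in-time reminders for overdue steps (illustrative only)."""

    def __init__(self, steps):
        # steps: list of (step name, deadline in seconds from incident start)
        self.steps = dict(steps)
        self.completed = set()

    def mark_done(self, step):
        self.completed.add(step)

    def reminders(self, elapsed_s):
        """Return steps that are past their deadline and not yet completed."""
        return [name for name, deadline in self.steps.items()
                if name not in self.completed and elapsed_s > deadline]

# Hypothetical two-step protocol with 30 s and 60 s targets
tracker = ProtocolTracker([("check_airway", 30), ("start_compressions", 60)])
tracker.mark_done("check_airway")
overdue = tracker.reminders(elapsed_s=90)  # -> ["start_compressions"]
```

In the proposed system, the "mark done" events would come from the activity-recognition and speech pipelines rather than manual input, and the reminders would be rendered as notifications on the responder's AR interface.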
This project has the potential to significantly improve responders’ situational awareness and operational practices by enabling automated, hands-free data collection and situation assessment based on multimodal sensor streams from incident scenes and by supporting responders’ efforts in managing and reporting incidents. Just-in-time, context-dependent feedback delivered through AR interfaces can improve patient health outcomes and responder safety by reducing responders’ cognitive load and ensuring timely, accurate execution of response actions. The prototype AR system, datasets, and simulated emergency scenarios developed as part of this project can benefit responder training by creating more realistic and instructive training environments. The data analytics algorithms for real-time context inference and decision support can be shared with the public safety and research communities and further extended into tools for evaluating responder performance during training and in-field operations.