Principal Investigator: Homa Alemzadeh
University of Virginia
Emergency medical responders and firefighters are the very first people to arrive at an incident scene to assess and control the situation and assist victims by providing medical care. In medical emergencies, natural disasters, and terrorist attacks, minutes can make the difference between life and death. First responders need to process substantial amounts of data with different levels of importance and confidence and quickly prioritize available information for situation assessment and response. They may consider the circumstances and history of the incident and communications with the command center, other responders, and the victims, and base their actions on this information plus knowledge of established emergency response protocols. In addition, with the rise of the Internet of Things, large streams of previously inaccessible data from wearables, mobile devices, smart buildings, and smart utilities are becoming available to responders and the public safety community. Manual collection, aggregation, filtering, and interpretation of such data at the incident scene or control center require significant human cognitive effort that could be better spent addressing other incident complexities.
Previous work has demonstrated several architectures for the collection and sharing of heterogeneous data across emergency personnel and organizations to optimize emergency management activities. However, the public safety community still lacks mechanisms for real-time processing and aggregation of both structured and unstructured data and for transforming it into actionable knowledge. For example, evidence from the incident scene is often collected manually by first responders in unstructured formats such as video, audio, and free-form text and is only later entered into archival databases.
The main objective of this project is to develop a cognitive assistant system that improves the situational awareness and safety of emergency responders through real-time collection and analysis of data from the incident scene and dynamic, data-driven feedback. The proposed system leverages responder-worn devices and smart sensors to monitor activities and communications at the incident scene and aggregates this data with static data sources, such as protocols and guidelines, to generate insights that assist first responders in making effective decisions and taking safe response actions.
The proposed cognitive assistant system will consist of a suite of wearable sensing and computing devices, together with signal processing, natural language processing, and learning algorithms that monitor responders’ activities and communications as well as changes in the environment, and fuse this information in real time to infer the emergency situation and the most relevant response actions. The key challenges in the design of this system include the limited processing power and battery life of wearable devices, the high energy consumption of fusion-based learning algorithms, the creation of labeled training data for supervised learning of responder actions, and ensuring privacy and security in the collection, transmission, analysis, and sharing of data. While the proposed system will exploit broadband communication when available, its core capabilities must remain available to responders when broadband is not.
This project can potentially make a significant impact on improving health outcomes and first responders’ safety by promoting evidence-based emergency response decision making. Automated incident monitoring and data collection will benefit first responders by reducing cognitive burden and response time, allowing them to focus on more critical tasks. The collected data and analytic results can be shared with the public safety community and other researchers, and further used for assessing responders’ performance, identifying the most critical emergency scenarios and response actions, and designing more effective training modules.