NIST will hold a virtual workshop on Explainable Artificial Intelligence (AI). Explainable AI is a key element of trustworthy AI, and there is significant interest in it from stakeholders and communities across this multidisciplinary field. As part of NIST’s efforts to provide foundational tools, guidance, and best practices for AI-related research, we released a draft whitepaper, Four Principles of Explainable Artificial Intelligence, for public comment. Inspired by the comments received, this workshop will delve further into developing an understanding of explainable AI.
UPDATE:
For the breakout sessions, all participants who registered for the workshop on January 26, 2021 have been emailed a link to sign up for the breakout sessions.
All workshop participants have also been automatically registered for the Closing Remarks on Thursday January 28, 2021 from 5:00pm–6:00pm ET. A link will be provided closer to the time of the closing remarks.
Please contact explainable-AI [at] nist.gov if you registered for the workshop and did not receive a breakout session registration link, or if you have any questions.
Stay tuned for further announcements and related activity by checking this page or by subscribing to Artificial Intelligence updates through GovDelivery at https://public.govdelivery.com/accounts/USNIST/subscriber/new.
This workshop will consist of plenary speakers, panels, and breakout sessions. Plenary speakers and panels will provide expert insight and a starting point for the discussions. Breakout sessions on Wednesday and Thursday will allow for further facilitated discussions among attendees on the panel topics. All times are Eastern Standard Time.
Day 1: Tuesday, January 26, 2021

Time Start | Time End | Topic
11:00 AM | 11:30 AM | Welcome to NIST: Charles Romine – ITL Director, NIST; Opening Remarks: Elham Tabassi – ITL Chief of Staff, NIST
11:30 AM | 12:00 PM | Overview of Four Principles of Explainable Artificial Intelligence, Draft NISTIR 8312 – P. Jonathon Phillips, Electronic Engineer, NIST
12:00 PM | 1:30 PM | Panel: What are the roles of explanations throughout the AI life cycle? Panelists: Michael Hind – Distinguished Research Staff Member, IBM; David Leslie – Ethics Team Lead and Ethics Fellow, The Alan Turing Institute
1:30 PM | 2:30 PM | Lunch Break
2:30 PM | 4:00 PM | Panel: How does explainable AI fit into the trustworthy AI ecosystem? Panelists: Sumeet Chabria – Head of Global Business Services, Bank of America; Frank Torres – Director of Public Policy, Office of Responsible AI, Microsoft
4:00 PM | 4:15 PM | Break
4:15 PM | 5:45 PM | Panel: What does a risk management strategy look like for explainable AI? Panelists: Nahla Ivy – Enterprise Risk Management Officer, NIST; Courtney Lang – Director of Policy, Information Technology Industry Council (ITI)
5:45 PM | 6:00 PM | Closing Remarks – Carina Hahn, NIST
Day 2: Wednesday, January 27, 2021 (Breakout Sessions)

Time Start | Time End | Topic
9:00 AM | 10:00 AM | Trustworthy Ecosystem and Explainable AI
10:00 AM | 11:00 AM | Break
11:00 AM | 12:00 PM | Risk Management and Explainable AI
12:00 PM | 1:00 PM | Lunch Break
1:00 PM | 2:00 PM | Explanations in AI Life Cycle
2:00 PM | 3:00 PM | Break
3:00 PM | 4:00 PM | Trustworthy Ecosystem and Explainable AI
4:00 PM | 5:00 PM | Break
5:00 PM | 6:00 PM | Risk Management and Explainable AI
Day 3: Thursday, January 28, 2021 (Breakout Sessions)

Time Start | Time End | Topic
9:00 AM | 10:00 AM | Explanations in AI Life Cycle
10:00 AM | 11:00 AM | Break
11:00 AM | 12:00 PM | Trustworthy Ecosystem and Explainable AI
12:00 PM | 1:00 PM | Lunch Break
1:00 PM | 2:00 PM | Risk Management and Explainable AI
2:00 PM | 3:00 PM | Break
3:00 PM | 4:00 PM | Explanations in AI Life Cycle
4:00 PM | 5:00 PM | Break
5:00 PM | 6:00 PM | Breakout Debrief and Closing Remarks – Amy N. Yates, NIST