
The workshop was originally scheduled to be held at NIST in Gaithersburg, MD. While we strive to accommodate global participants across multiple time zones, staffing for the virtual workshop will primarily be from the NIST Gaithersburg campus. As a result, workshop times are in the Eastern Time Zone. Please note: virtual sessions will be recorded and available for on-demand viewing. We appreciate your understanding.

 

Workshop Recordings

On August 18, NIST will host the Bias in AI Workshop, a virtual event aimed at developing a shared understanding of bias in AI: what it is and how to measure it.

Workshop goal: To develop a shared understanding of bias in AI that can guide and speed the innovation and adoption of trustworthy AI systems – including the measurements, standards, and related tools which are critical parts of the AI foundation.  

The intensity of dialogue around trustworthy AI has been increasing over the past year. A key but still insufficiently defined building block of trustworthiness is bias in AI-based products and systems. Insights shared by speakers and other participants during this interactive workshop are intended to move the AI community closer to agreement on defining bias in AI, which will guide and strengthen efforts by NIST and others to make progress in multiple aspects of trustworthy AI. The workshop will also help build a community of interest for NIST's multi-faceted work in this arena, which includes foundational and use-inspired research, evaluation, standards, and policy engagement.

Workshop participants will:

  • Hear from and interact with leading experts during plenary sessions and discussion panels 
  • Take part in virtual breakout sessions which will feed into the workshop's conclusions and recommendations
  • Be a part of the growing community of interest in addressing bias in AI and AI trustworthiness, more broadly

Agenda (PDF)

NIST Workshop on Bias in AI
August 18, 2020
9:00am - 5:00pm EDT

 

– AGENDA –

9:00 AM

Welcome and Introduction – Elham Tabassi, Acting Chief of Staff, NIST Information Technology Laboratory

9:15 AM

Plenary: Panel Session

Foundational Juggernaut: Addressing data bias challenges in AI

Darrell West (Moderator), Brookings Institution

Andrew Burt, BNH, Immuta

Alexandra Chouldechova, Partnership on AI, Carnegie Mellon University

Fernando Diaz

Teresa Tung, Accenture Labs

10:30 AM

Break / Transition to Breakout Session #1 (multiple breakouts, locations varied)

10:45 AM

Breakout Session #1 – Data Bias

This breakout session will build on topics from the morning panel session. Participants will engage in a facilitated discussion focused on the following questions:

  • What is the “right data”?
  • What are the biggest barriers to success at this point?
  • How can technology developers and practitioners effectively work together and inform each other to mitigate data bias?
  • What shared understandings or assumptions about bias in AI do we need to operate from before moving on to the next step?
  • What do we need in order to measure bias within data?
  • How does bias exhibit itself in data? Are there particular indicators?

11:30 AM

Break - 20 minutes

11:50 AM

Plenary: Report Outs from Breakout Sessions - Data Bias

This plenary session will be an opportunity to hear the key takeaways from each breakout session about data bias. (Breakout Session Facilitators)

12:30 PM

1-hour Lunch Break

1:30 PM

Plenary: Panel Session

Algorithmic bias is in the question, not the answer: Measuring and managing bias beyond data

Joshua Kroll (Moderator), Naval Postgraduate School

Aylin Caliskan, The George Washington University

Abigail Jacobs, University of Michigan

Nicol Turner Lee, Brookings Institution

Kush Varshney, IBM

 

Transition to Breakout Session #2 (multiple breakouts, locations varied)

2:45 PM

Breakout Session #2 – Bias in Algorithmic Modeling

This breakout session will build on topics from the afternoon panel session. Participants will engage in a facilitated discussion focused on the following questions: 

  • Algorithms are highly dependent on data, which can be biased. But how does the algorithmic process insert additional biases?
  • Is it possible to use algorithms to mitigate data biases?
  • What do we need in order to measure algorithmic bias?
  • How does bias exhibit itself in the model?
  • How can algorithms exacerbate biases? How can they be used to mitigate biases?
  • How can we think about transparency and its role in bias in AI?

3:30 PM

20-minute break

3:50 PM

Plenary: Report Outs from Breakout Sessions - Algorithmic Bias

This plenary session will be an opportunity to hear the key takeaways from each breakout session about algorithmic bias. (Breakout Session Facilitators)

4:30 PM

Closeout (Elham Tabassi, NIST Information Technology Laboratory)

4:45 PM

Adjourn

Morning Panel

Darrell West

Darrell M. West is the vice president and director of Governance Studies at the Brookings Institution, a Senior Fellow at Brookings' Center for Technology Innovation, and holder of the Douglas Dillon Chair. He is co-author, with Brookings President John R. Allen, of Turning Point: Policymaking in the Era of Artificial Intelligence (Brookings Institution Press, 2020), and Co-Editor-in-Chief of TechTank. His current research focuses on artificial intelligence, robotics, and the future of work. West is also director of the John Hazen White Manufacturing Initiative. Prior to coming to Brookings, he was the John Hazen White Professor of Political Science and Public Policy and Director of the Taubman Center for Public Policy at Brown University.

Andrew Burt

Andrew Burt is managing partner at bnh.ai, a boutique law firm focused on AI and analytics, and chief legal officer at Immuta. He is also a visiting fellow at Yale Law School's Information Society Project.

Previously, Andrew was Special Advisor for Policy to the head of the FBI Cyber Division, where he served as lead author on the FBI’s after action report on the 2014 Sony data breach, in addition to serving as chief compliance and chief privacy officer for the division.

A frequent speaker and writer, Andrew has published articles on law and technology for the New York Times, the Financial Times, and Harvard Business Review, where he is a regular contributor. He holds a JD from Yale Law School.

Alexandra Chouldechova

Alexandra Chouldechova is the Estella Loomis McCandless Assistant Professor of Statistics and Public Policy at Carnegie Mellon University's Heinz College of Information Systems and Public Policy.  She is also a 2020 Research Fellow with the Partnership on AI. Dr. Chouldechova’s research investigates questions of algorithmic fairness and accountability in data-driven decision-making systems, with a domain focus on criminal justice and human services.  She is a member of the executive committee for the ACM Conference on Fairness, Accountability and Transparency (FAccT), and previously served as a Program Committee co-Chair for the conference.  Dr. Chouldechova received her PhD in Statistics from Stanford University and an H.B.Sc. in Mathematical Statistics from the University of Toronto.

Fernando Diaz

Fernando Diaz is an Assistant Research Director at Microsoft Research Montréal, where he leads a multi-disciplinary group studying the societal impacts of artificial intelligence. Previously, he was a director of research at Spotify. His primary research background is in the design and evaluation of algorithmic information access systems, including web search engines, crisis informatics, and recommendation systems. His research has received best paper awards at several academic conferences, and he was awarded the 2017 Karen Spärck Jones Award for young information retrieval and natural language processing researchers. Fernando received his PhD from the University of Massachusetts Amherst in 2008. He is a co-organizer of the TREC Fair Ranking track, an evaluation initiative focused on fairness in information access systems.

Teresa Tung

Teresa Tung is a Managing Director in Accenture Labs responsible for its global Software & Platforms R&D group whose projects include semantic modeling, edge analytics, and robotics. Teresa leads her team in evaluating best-of-breed next-generation software architecture solutions from industry, start-ups and academia for relevance to Accenture and its clients, then building experimental prototypes and delivering pioneering pilot projects.   

Accenture’s most active inventor with over 150 patents filed or granted, Dr. Tung has had several leadership roles in her career with Accenture. In previous roles, she led Accenture Labs research and development activities for Cloud, Big Data, the Internet of Things and APIs, creating a wide range of key assets that are still in use across Accenture and with clients today.

Dr. Tung holds a Ph.D. in Electrical Engineering and Computer Science from the University of California at Berkeley. She is based in Accenture’s San Francisco Innovation Hub.
 

Afternoon Panel

Joshua Kroll

Joshua A. Kroll is an Assistant Professor of Computer Science at the Naval Postgraduate School. His research focuses on building software-driven automation that is trustworthy even as it integrates with humans, organizations, and society. His work on “Accountable Algorithms” was recognized by the Future of Privacy Forum as a top paper for policymakers in 2016. Previously, Joshua was a postdoctoral researcher at the UC Berkeley School of Information, an engineer at Cloudflare, Inc., and a graduate research fellow at Princeton, from which he holds his PhD.

Aylin Caliskan

Aylin Caliskan is an Assistant Professor of Computer Science at George Washington University. Her research interests lie in AI ethics, bias in AI, machine learning, and the implications of machine intelligence for fairness and privacy. She investigates the reasoning behind biased AI representations and decisions by developing explainability methods that uncover and quantify the biases of machines. Her presentations on de-anonymization and on bias in machine learning have received best talk awards, and her work on semi-automated anonymization of writing style received the Privacy Enhancing Technologies Symposium Best Paper Award.

Dr. Caliskan holds a PhD in Computer Science from Drexel University and a Master of Science in Robotics from the University of Pennsylvania. Before joining the faculty at George Washington University, she was a Postdoctoral Researcher and a Fellow at Princeton University's Center for Information Technology Policy.

Abigail Jacobs

Abigail Jacobs is a computational social scientist and an Assistant Professor at the University of Michigan in the School of Information and the Center for the Study of Complex Systems, and an affiliate of the Center for Ethics, Society, and Computing (ESC) and the Michigan Institute for Data Science (MIDAS).

Dr. Jacobs’ current research interests are around structure, governance, and inequality in sociotechnical systems; measurement; and social networks.

Dr. Jacobs received a BA in Mathematical Methods in the Social Sciences and Mathematics from Northwestern University. She received her PhD in Computer Science from the University of Colorado Boulder.

Nicol Turner Lee

Nicol Turner Lee is a senior fellow in Governance Studies at the Brookings Institution, director of the Center for Technology Innovation, and Co-Editor-in-Chief of TechTank. Dr. Turner Lee researches public policy designed to enable equitable access to technology across the U.S. and to harness its power to create change in communities across the world. Her work also explores global and domestic broadband deployment and internet governance issues. She is an expert on the intersection of race, wealth, and technology within the context of civic engagement, criminal justice, and economic development.

Dr. Turner Lee’s current research portfolio also includes artificial intelligence (AI), particularly machine learning algorithms and their unintended consequences for marginalized communities. Her recent co-authored paper on the subject has made her a sought-after speaker in the U.S. and around the world on digital futures, AI and ethics, algorithmic bias, and the intersection between technology and civil/human rights. She is also an expert on topics including online privacy, 5G networks, and the digital divide. Dr. Turner Lee has a forthcoming book on the U.S. digital divide titled Digitally Invisible: How the Internet is Creating the New Underclass (Brookings Institution Press, forthcoming 2021). She sits on various U.S. federal agency and civil society boards. Dr. Turner Lee has a Ph.D. and M.A. from Northwestern University and graduated from Colgate University.

Kush Varshney

Kush R. Varshney was born in Syracuse, NY in 1982. He received the B.S. degree (magna cum laude) in electrical and computer engineering with honors from Cornell University, Ithaca, NY, in 2004. He received the S.M. degree in 2006 and the Ph.D. degree in 2010, both in electrical engineering and computer science from the Massachusetts Institute of Technology (MIT), Cambridge. While at MIT, he was a National Science Foundation Graduate Research Fellow.

Dr. Varshney is a distinguished research staff member and manager with IBM Research at the Thomas J. Watson Research Center, Yorktown Heights, NY, where he leads the machine learning group in the Foundations of Trustworthy AI department. He was a visiting scientist at IBM Research - Africa, Nairobi, Kenya in 2019. He is the founding co-director of the IBM Science for Social Good initiative. He applies data science and predictive analytics to human capital management, healthcare, olfaction, computational creativity, public affairs, international development, and algorithmic fairness, which has led to recognitions such as the 2013 Gerstner Award for Client Excellence for contributions to the WellPoint team and the Extraordinary IBM Research Technical Accomplishment for contributions to workforce innovation and enterprise transformation. He conducts academic research on the theory and methods of trustworthy machine learning. His work has been recognized through best paper awards at the Fusion 2009, SOLI 2013, KDD 2014, and SDM 2015 conferences and the 2019 Computing Community Consortium / Schmidt Futures Computer Science for Social Good White Paper Competition. He is currently writing a book entitled Trustworthy Machine Learning with Manning Publications. He is a senior member of the IEEE and a member of the Partnership on AI's Safety-Critical AI expert group.

 

Created June 8, 2020, Updated August 21, 2020