The National Cybersecurity Center of Excellence (NCCoE) has released a final project description, Mitigating AI/ML Bias in Context: Establishing Practices for Testing, Evaluation, Verification, and Validation of AI Systems. Publication of this project description continues the process of identifying the project's requirements and scope, along with the hardware and software components to be used in a laboratory environment.
Managing bias in an AI system is critical to establishing and maintaining trust in its operation. To tackle this complex problem, the project will adopt a comprehensive socio-technical approach to testing, evaluation, verification, and validation (TEVV) of AI systems in context. This approach connects the technology to societal values in order to develop recommended guidance for deploying AI/ML decision-making applications. The project will also examine the interplay between bias and cybersecurity.
The initial phase of the project will focus on a proof-of-concept implementation for credit underwriting decisions in the financial services sector. This project will result in a freely available NIST AI/ML Practice Guide.
In the coming months, the NCCoE AI Bias team will publish a Federal Register Notice (FRN) based on the final project description. If you are interested in collaborating on this project, you will have the opportunity to complete a Letter of Interest (LOI) in which you can present your capabilities. Completed LOIs are considered on a first-come, first-served basis within each category of components or characteristics listed in the FRN, up to the number of participants needed in each category to carry out the project build. Please stay tuned for more information.
If you have any questions, please reach out to our project team at ai-bias [at] nist.gov.