
Taking Measure

Just a Standard Blog

Powerful AI Is Already Here: To Use It Responsibly, We Need to Mitigate Bias

Apostol Vassilev is studying the role of AI bias in the credit underwriting process.
Credit: R. Wilson/NIST

We are all witnessing a revolution in the progress and adoption of artificial intelligence (AI). AI's latest content-creation capabilities have generated enormous interest among the media and the public.

ChatGPT recently demonstrated an astonishing ability to generate coherent text when you ask it to write an essay or answer a question. Along with the excitement, many ethical questions immediately popped up. Are these systems able to reason on par with humans? Are they aware of the content they generate? Are their answers fair and unbiased?

But AI is not just about chatbots anymore. Even if you’ve never used a tool like ChatGPT, AI is already affecting your life. AI may have been used in your doctor’s office or bank without you even knowing. 

Here too, we need to worry about the same issues, including: 

  • fairness,
  • bias,
  • security, and
  • robustness and resilience.

These are all part of what’s known as trustworthy AI.  

As the power of AI grows, businesses, governments and the public will have to manage AI’s impact on society. 

The key will be to allow the industry to innovate while managing risk. Although it will be challenging, we will have to find the right balance among self-interest, business interest and societal interest. 

Our Human Brains Have Their Limits

I’ve long been interested in the human brain. Despite the great progress in science, many questions remain unresolved: How do we comprehend information? How do we store and retrieve information in our brains? What is intelligence?  

I’ve learned through the work of Nobel-prize winning psychologist Daniel Kahneman and others in behavioral economics that humans are terrible at being consistent in reasoning and coming up with the best solutions. I’m fascinated by how we as people can be both very limited and so creative and capable of deep thinking. 

As AI advances, I’m curious how it can be good for all people. How can it serve the needs of everyone equally well? It’s an intriguing question. 

How Do You Define Fairness in AI?

Last year, we developed a comprehensive report on AI bias. This report established that fairness and bias are not just abstract or universal statistical problems. 

Fairness and bias issues in AI have complex aspects that cannot be easily captured by mathematics alone.

The definition of fairness in lending, for example, has evolved over time and will continue to do so. Interest-based loans started thousands of years ago in Mesopotamia, and the idea of fairness in lending has certainly changed since then. 

Fairness is also a social concept. Many people may feel something is unfair, even if the data shows it’s unbiased. 

We need to ensure fairness across the board, but the best approaches to managing bias and fairness vary with the context and application. For example, the process to get a mortgage on a home is much more rigorous than the process to get a car loan because you're being loaned much more money for a home than for a car.

We need to consider both the technological and human factors in AI and have an approach to fairness and bias that has realistic definitions for different contexts, such as finances, health care or hiring. We also need task-specific datasets for machine learning model development and evaluation. 
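
To make that point concrete, here is a minimal sketch in Python, using made-up numbers rather than real lending data, of why no single formula settles the question: two widely used statistical criteria, demographic parity (equal approval rates across groups) and equal opportunity (equal approval rates among qualified applicants), can disagree about the very same set of decisions.

    # Minimal sketch with hypothetical outcomes (illustration only, not real lending data).
    # Each record: (group, qualified, approved).
    records = [
        ("A", True, True), ("A", True, True), ("A", False, False), ("A", False, False),
        ("B", True, True), ("B", True, True), ("B", True, False), ("B", False, False),
    ]

    def approval_rate(group):
        # P(approved) within a group -- the quantity behind demographic parity.
        rows = [r for r in records if r[0] == group]
        return sum(r[2] for r in rows) / len(rows)

    def qualified_approval_rate(group):
        # P(approved | qualified) within a group -- the quantity behind equal opportunity.
        rows = [r for r in records if r[0] == group and r[1]]
        return sum(r[2] for r in rows) / len(rows)

    for g in ("A", "B"):
        print(g, approval_rate(g), round(qualified_approval_rate(g), 2))
    # A: approval rate 0.5, qualified approval rate 1.0
    # B: approval rate 0.5, qualified approval rate 0.67
    # Demographic parity holds while equal opportunity is violated, so which
    # criterion "counts" depends on the context and its legal and social framing.

Both criteria are reasonable on their face, yet a single set of decisions can satisfy one and violate the other, which is exactly why the definitions must be chosen context by context.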

AI Decides if You Get a Loan. Is It Fair?

Whether you realize it or not, AI is usually a part of the process of determining whether someone gets a loan or other forms of credit.
Credit: Wright Studio/Shutterstock

Building on the seminal work started in last year's report, we are continuing to develop guidance and testing infrastructure for managing AI bias in context, one context at a time.

We are starting with a project on consumer credit underwriting in financial services because it touches many lives.  

Whether it’s applying for a credit card, a mortgage or a car loan, nearly everyone has interacted with the credit system. You may not realize it, but when you apply for credit, AI is nearly always a part of that process. We need to make sure these systems are fair. Bias in the system could lead to an unfairly high interest rate or someone being denied credit entirely. 

We humans have numerous cognitive biases. Studies show these biases result in inconsistent decision making, regardless of the decision maker's level of experience or training.

Machines tend to be more consistent than humans, but that does not make them fair. One of the problems with machine learning systems is we train them on historical data to make future decisions. Well, that historical data reflects the societal biases at the time, which our research shows are many. 

If we’re not careful to interrupt those biases, AI will replicate those historical mistakes. We need to fully understand exactly what data is going into AI decision making, so we can make sure to adjust for biases. This is one of the limitations of machine learning. 
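
As one concrete illustration of what checking for such bias can look like, here is a minimal sketch in Python, using hypothetical group labels and outcomes rather than our project's actual data or methodology, of a common audit step: comparing approval rates across groups against the four-fifths rule long used in U.S. disparate-impact analysis.

    # Minimal sketch with hypothetical decisions (illustration only, not NIST guidance).
    decisions = {
        "group_a": [1, 1, 1, 0, 1, 1],  # 1 = approved, 0 = denied
        "group_b": [1, 0, 0, 1, 0, 0],
    }

    def selection_rate(outcomes):
        # Fraction of applicants in a group who were approved.
        return sum(outcomes) / len(outcomes)

    rates = {group: selection_rate(o) for group, o in decisions.items()}
    impact_ratio = min(rates.values()) / max(rates.values())

    print("selection rates:", rates)
    print("disparate impact ratio: %.2f" % impact_ratio)
    if impact_ratio < 0.8:  # the conventional four-fifths threshold
        print("flag: decisions may replicate historical bias; inspect the training data")

A failed check like this does not prove unfairness by itself; it is a signal to dig into the training data and the model, which is exactly the kind of context-specific judgment our work is meant to inform.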

People + AI = More or Less Bias? 

Another interesting, but not well understood, aspect of AI-assisted decision making is what happens when a human works with an AI system to make credit decisions.

Both AI and people have their own biases. How does that play out in decision making? Do the biases compound each other? Do they cancel each other out? How do we address both biases? This is something that’s not well understood, so we’re planning to study it and answer some of these questions. 

At NIST, we’re working with partners in the financial and technology industries, from small companies and startups to large banks and technology companies. We’re working to see what we can learn from their experiences, their data and the tools that detect and manage bias. 

We’re assembling a representative sample of the industry in both financial services and tech to help us come up with recommendations on how we can responsibly use this technology in consumer credit underwriting. We’re also talking with consumer groups and other organizations for their input.

In August 2022, we hosted a workshop and invited participants to help us frame the research questions. What’s the problem? What are the tools we have? How best can we deploy them to have the most effect? What guidance is needed to assist the financial services industry in adopting good business practices in deploying AI technology?  

There are many steps we must follow, as part of a long-established process, to begin work on this study. We completed and published the project description document, which describes the specific scenarios we will be working on and reflects feedback from the workshop. While we work through the approvals and public notices, we are recruiting potential collaborators from the financial and technology sectors.

We want to work with companies that are willing to work in good faith to solve these hard problems. Doing the right thing also makes good business sense. It doesn’t have to be one or the other; you can do both things.

Eliminating Bias in AI Is a Hard Problem. We Will Keep Studying It. 

Once our study on bias in credit underwriting is complete, we will map the findings in our reports to the high-level principles established in our comprehensive report from last year. This will ensure consistency and harmony between the general principles and sector-specific recommendations. We will continue to study bias in AI and how we can eliminate it in other sectors of the economy where AI is used. 

AI is going to continue to be a part of everyone’s lives for years to come, so we want to make sure it’s operating fairly.  

About the author

Apostol Vassilev

Apostol Vassilev is a research team supervisor in the Computer Security Division at NIST. His team’s research agenda covers a range of topics in trustworthy AI and cybersecurity. Vassilev works closely with academia, industry and government agencies on the development and adoption of standards in artificial intelligence and cybersecurity and contributes to national and international standards groups. Vassilev holds a Ph.D. in mathematics from Texas A&M University. He has authored over 50 scientific papers and holds five U.S. patents. Apostol enjoys life with his wife. They are empty nesters, and when Apostol is not working, they enjoy hiking together in nature, taking care of and playing with their cat and dog, and pursuing environmental causes.

Comments

A.I. is very interesting, confusing, and technically forward-thinking. After reading this article, I look into the past to see what major changes in the world altered mankind's ability to work, play, and survive, taking a humanist approach to human welfare. There have always been changes that push us into another world of work and mental anguish and divide us into different social classes, and I believe A.I. is no different. When you take out the human factor, you take out personal beliefs, cultural differences, and economic norms. A machine has no personal biases or feelings. It can be programmed to do whatever you enter into its chips. There will come a day, maybe when I am dead and gone, when mankind loses control over A.I. and will be at its mercy, I believe. Possibly I have been reading too many books and watching too many movies. But danger is always around the corner with new systems, and mankind will have to evolve to stop them when new ideas become the monster they said it would not become.

Thank you for the great work on Trustworthy AI. Those of us working on AI guidelines/education/mission assistance for DoD IT/Software teams are eternally grateful for NIST's excellent work.

I propose there's a key angle missing in most of our AI conversations, and now seems as good a time as any to raise it. Regarding guidance and practice for traditional IT/Software/Data activities vice AI IT/Software/Data activities... I feel strongly that we should avoid asking teams to bifurcate what they do for traditional IT/SW and what they do for AI. Certainly, some things are germane to AI (over and above), but there is much that is not. AI is already ubiquitous and IT/SW teams usually have hybrid systems. Asking them to be non-biased, trustworthy and so on about the AI parts of the system, but not the traditional parts of the system, feels disingenuous to the cause and does not serve anybody. Especially when the data is usually the same data. (I jokingly say one way to think about AI is "more advanced software running on more advanced hardware, with the same data you've always had - initially anyway.")

Given AI was born in the Computer Science realm, and Computer Science has now reached the age when our study of computation and information is due to receive a greater philosophy and ethics treatment anyway, can we not conclude that the foundational tenets of AI Ethics/RAI, etc. apply to all software, the hardware it runs on, and data? I propose we up our game across the board and then of course create practice that is necessary for AI (aka modernized computer science techniques) and future special techniques as they emerge.

Recognizing the buzzwords serve a purpose (and become memorialized in statutes, funding, programs, etc.), I am not proposing we do away with AI terminology. I am proposing we strongly think about a holistic approach to Computer Science in general that includes much of what we prescribe for AI. There's a reason there aren't more standards and guidelines out for AI (to date DoD doesn't have a single MIL-STD), because it's IT/SW/Data and we already have standards for those areas. The answer should be to modernize existing standards where needed (and do a much better job of continuously improving them for Industry 4.0/5.0 pace of emerging techniques), and only add where we must.
(Sidebar: 'Engineering of AI' and 'TE/VV of AI' come to mind as two areas where we must add. The world is seriously hurting on these two fronts and there's little we can derive from Silicon Valley practices that is sufficient for fundamental AI ENG and TE/VV.)

If we continue to bifurcate the IT/SW/Infrastructure process into permanent silos beyond R&D, government digital systems will continue to break down despite all of our best efforts. With that, I respectfully recommend we apply your excellent bias-finding efforts (and eventual practices) to ALL of the data. Thank you for your good work.

Nice article to read. Thanks, Yoglica

As an avid reader and technology enthusiast, I cannot help but express my sincere appreciation for this enlightening article. It brilliantly captures the essence of our current AI landscape and the pressing need to ensure responsible use while mitigating bias. The advent of powerful AI has undoubtedly transformed numerous industries, revolutionizing the way we live, work, and interact. However, it is imperative that we recognize the inherent biases that can seep into AI systems and proactively address them.

The author's emphasis on responsible AI usage is both timely and crucial. While AI holds immense potential, it also has the potential to perpetuate and amplify societal biases if not carefully developed and deployed. It is encouraging to see the call for proactive measures to mitigate bias, as this is an issue that cannot be ignored. The need for diversity in AI development teams, rigorous testing, and ongoing monitoring to identify and rectify biases cannot be overstated.

Moreover, the article effectively highlights the ethical considerations surrounding biased AI algorithms. These algorithms have the potential to influence critical decisions in areas such as employment, criminal justice, and financial services. It is vital that we address biases within AI systems to ensure fair and just outcomes for all individuals, irrespective of their backgrounds or circumstances.

In conclusion, this article serves as a wake-up call, urging us to take collective responsibility for shaping the future of AI. By acknowledging the existence of biases and striving to mitigate them, we can pave the way for a more inclusive and equitable society. Kudos to the author for shedding light on this important issue and prompting us to reflect on the potential impact of AI. This article has undoubtedly sparked a necessary conversation and will hopefully inspire action.

Thank you for sharing your insights and driving the discourse forward!

Best regards,

Malik Johnson

Thank you for sharing.
