Hallucination Detection in Large Language Models Using Diversion Decoding
Published
Author(s)
Basel Abdeen, S M Tahmid Siddiqui, Meah Tahmeed Ahmed, Anoop Singhal, Latifur Khan, Punya Modi, Ehab Al-Shaer
Abstract
Large language models (LLMs) have emerged as powerful tools for retrieving knowledge through seamless, human-like interactions. Despite their advanced text generation capabilities, LLMs exhibit hallucination tendencies, generating factually incorrect statements and fabricating knowledge, which undermines their reliability and trustworthiness. Multiple studies have explored methods to evaluate LLM uncertainty and detect hallucinations. However, existing approaches are often probabilistic and computationally expensive, limiting their practical applicability. In this paper, we introduce diversion decoding, a novel method for developing an LLM uncertainty heuristic by actively challenging model-generated responses during the decoding phase. Through diversion decoding, we extract features that capture the LLM's resistance to producing alternative answers and use these features to train a machine-learning model that serves as a heuristic measure of the LLM's uncertainty. Our experimental results demonstrate that diversion decoding outperforms existing methods with significantly lower computational complexity, making it an efficient and robust solution for hallucination detection.
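The abstract only outlines the idea, so the sketch below is a minimal, hypothetical illustration of how "resistance to diversion" might be operationalized with an open model: score the model's own answer against injected alternative answers and treat the log-probability gap as a resistance feature for a downstream classifier. The model choice (gpt2), the prompt handling, and the three features are assumptions made for illustration, not the authors' implementation.

```python
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model for illustration; the paper does not specify this setup.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def mean_logprob(prompt: str, answer: str) -> float:
    """Mean per-token log-probability the model assigns to `answer` given `prompt`.

    Caveat: concatenating strings before tokenizing can merge tokens at the
    boundary; acceptable for a sketch, handled more carefully in practice.
    """
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(prompt + answer, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Logits at position i predict token i+1: shift by one, keep answer tokens.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = full_ids[0, 1:]
    token_lp = log_probs[torch.arange(targets.shape[0]), targets]
    return token_lp[prompt_len - 1:].mean().item()

def resistance_features(prompt, answer, alternatives):
    """Gap between the model's answer and each diversion.

    Large positive gaps suggest the model strongly resists the alternatives
    (assumed here to indicate lower hallucination risk); small or negative
    gaps suggest weak resistance.
    """
    base = mean_logprob(prompt, answer)
    gaps = [base - mean_logprob(prompt, alt) for alt in alternatives]
    return [base, min(gaps), sum(gaps) / len(gaps)]

# Given labeled examples (y = 1 for hallucinated answers), the features feed a
# lightweight classifier that acts as the uncertainty heuristic:
#   X = [resistance_features(p, a, alts) for (p, a, alts) in examples]
#   clf = LogisticRegression().fit(X, y)
```

Because each probe is a single forward pass over a short answer rather than repeated full-length sampling, a scheme along these lines would plausibly cost far less than sampling-based uncertainty estimates, consistent with the efficiency claim in the abstract.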
Proceedings Title
Data and Applications Security and Privacy XXXIX
Volume
15722
Conference Dates
June 23-24, 2025
Conference Location
Gjøvik, NO
Conference Title
39th IFIP WG 11.3 Annual Conference on Data and Applications Security and Privacy, DBSec 2025
Abdeen, B., Siddiqui, S., Ahmed, M., Singhal, A., Khan, L., Modi, P. and Al-Shaer, E. (2025), Hallucination Detection in Large Language Models Using Diversion Decoding, Data and Applications Security and Privacy XXXIX, Gjøvik, NO, [online], https://doi.org/10.1007/978-3-031-96590-6_7, https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=959860 (Accessed October 13, 2025)