LLM-Assisted Relevance Assessments

Published

Author(s)

Rikiya Takehi, Ellen Voorhees, Tetsuya Sakai, Ian Soboroff

Abstract

Test collections are information retrieval tools that allow researchers to quickly and easily evaluate ranking algorithms. While test collections have become an integral part of IR research, creating them requires substantial manual annotation effort, which often makes them very expensive and time-consuming to build. As a result, test collections can become too small when the budget is limited, leading to unstable evaluations. As an alternative, recent studies have proposed using large language models (LLMs) to completely replace human assessors. However, while LLM judgments correlate with human judgments to some extent, they are not perfect and tend to be biased. Moreover, even if a well-performing LLM or prompt is found on one dataset, there is no guarantee that it will perform similarly in practice, owing to differences in tasks and data. A complete replacement with LLMs is therefore argued to be too risky and not fully trustworthy. In this paper, we propose LLM-Assisted Relevance Assessments (LARA), an effective method for balancing manual annotations with LLM annotations, which helps build a rich and reliable test collection. We use the LLM's predicted relevance probability to select the most profitable documents to annotate manually under a budget constraint. While relying solely on the LLM's predicted probability to direct manual annotation performs fairly well, LARA, supported by theoretical reasoning, guides the human annotation process even more effectively via online learning. Then, using the model learned from the limited manual annotations, we debias the LLM predictions to annotate the remaining, non-assessed data. In short, our method guides the human annotations using LLM predictions, then guides the LLM predictions using the collected ground-truth labels. Empirical evaluations on various datasets show that LARA outperforms existing solutions under almost any budget constraint.
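
The sketch below illustrates the budget-splitting idea the abstract describes: LLM-predicted relevance probabilities decide which documents receive the manual budget, a calibration model is fitted on the resulting human labels, and that model then debiases the LLM predictions for the remaining documents. This is a minimal illustration, not the authors' implementation; the uncertainty-based selection rule, the batch logistic-regression calibrator (standing in for the paper's online-learning procedure), and all simulated data and variable names are assumptions.

# Minimal sketch of a LARA-style annotation budget split (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical inputs: one LLM-predicted relevance probability per document,
# plus hidden "true" labels generated with a deliberate bias for simulation.
n_docs, budget = 1000, 100
llm_prob = rng.uniform(0.0, 1.0, n_docs)
true_label = (rng.uniform(0.0, 1.0, n_docs) < llm_prob ** 1.5).astype(int)

# 1) Spend the manual budget on the documents where the LLM is least certain.
uncertainty = np.abs(llm_prob - 0.5)
human_idx = np.argsort(uncertainty)[:budget]
rest_idx = np.setdiff1d(np.arange(n_docs), human_idx)
human_labels = true_label[human_idx]  # stands in for collected human judgments

# 2) Learn a debiasing model mapping LLM probability -> relevance from the
#    human-labelled subset (the paper does this online; a batch fit suffices here).
calibrator = LogisticRegression()
calibrator.fit(llm_prob[human_idx].reshape(-1, 1), human_labels)

# 3) Use the calibrated model to annotate the remaining, non-assessed documents.
debiased_prob = calibrator.predict_proba(llm_prob[rest_idx].reshape(-1, 1))[:, 1]

# Final assessments: human labels where available, debiased LLM scores elsewhere.
final = np.empty(n_docs)
final[human_idx] = human_labels
final[rest_idx] = debiased_prob
print(f"Manually judged: {budget}, debiased by model: {rest_idx.size}")

The key design point is the two-way guidance: LLM probabilities direct where the scarce human budget goes, and the human labels in turn correct the bias in the LLM's predictions for everything left unjudged.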
Proceedings Title
SIGIR 2025 (if accepted)
Conference Dates
July 15-18, 2025
Conference Location
Padova, IT
Conference Title
The 48th International ACM SIGIR Conference on Research and Development in Information Retrieval

Keywords

information retrieval, evaluation, llm

Citation

Takehi, R., Voorhees, E., Sakai, T. and Soboroff, I. (2025), LLM-Assisted Relevance Assessments, SIGIR 2025 (if accepted), Padova, IT, [online], https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=959057 (Accessed September 24, 2025)

Issues

If you have any questions about this publication or are having problems accessing it, please contact [email protected].

Created July 13, 2025, Updated September 18, 2025