
The UK-US Blog Series on Privacy-Preserving Federated Learning: Introduction

This post is the first in a series on privacy-preserving federated learning.
The series is a collaboration between the UK's Centre for Data Ethics and Innovation (CDEI) and the US National Institute of Standards and Technology (NIST).

Advances in machine learning and AI, fueled by large-scale data availability and high-performance computing, have had a significant impact across the world in the past two decades. Machine learning techniques shape what information we see online, influence critical business decisions, and aid scientific discovery, which is driving advances in healthcare, climate modelling, and more.

Training Models: Conventional vs Federated Learning

The standard way to train machine learning models is to collect the training data centrally, on one server or in one data center, where it can be used to train the model. A model is the output of the training process; once trained, it can be used to make predictions or generate new content. For example, a consortium of banks may want to train a model to detect fraudulent transactions; to do so, they’ll need to collect transaction data from many different banks on a central server, as shown in Figure 1(a).

Figure 1: Comparing centralized and federated learning. In centralized learning (a), participants send data to a central aggregator, who trains the model. In federated learning (b), participants send model updates instead; in federated learning, data never leaves the participant’s organization.
Credit: NIST
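
To make the centralized setup in Figure 1(a) concrete, here is a minimal sketch in Python. The synthetic data, the feature layout, and the synthetic_transactions helper are invented for illustration and are not taken from the challenges.

```python
# A toy, self-contained sketch of the centralized approach in Figure 1(a):
# each bank's transaction data is pooled on one server and a single fraud
# classifier is trained on the combined dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression

def synthetic_transactions(n, seed):
    """Stand-in for one bank's transaction records (features plus fraud label)."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, 5))                     # e.g. amount, velocity, ...
    y = (X[:, 0] + 0.5 * X[:, 1] > 1).astype(int)   # toy "fraud" rule
    return X, y

# Each bank ships its raw data to the central server...
bank_datasets = [synthetic_transactions(1000, seed=s) for s in range(3)]

# ...where it is concatenated and used to train one model.
X_all = np.vstack([X for X, _ in bank_datasets])
y_all = np.concatenate([y for _, y in bank_datasets])
model = LogisticRegression().fit(X_all, y_all)
print("training accuracy on pooled data:", model.score(X_all, y_all))
```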

Collecting sensitive financial data raises serious privacy concerns, however. If the central server is compromised or its owner is not trustworthy, then the data may be leaked or used for other purposes. Sharing sensitive information like financial data may not even be possible due to privacy regulations or other legal restrictions. Apart from security and privacy concerns, in some cases the data may raise intellectual property issues.

Because of these challenges, many organizations simply decide to not share data. As a result, useful models—like the fraud detector in our example—are impossible to build.

This blog series focuses on federated learning, an approach that addresses the fundamental privacy challenge of traditional machine learning by avoiding the centralized collection of training data. In federated learning, the central server sends a copy of the partially trained model to each participating organization and collects model updates instead of data, as shown in Figure 1(b). Each organization constructs its model update by training the model locally on its own sensitive data — which never leaves the organization. The resulting model updates can be aggregated and assembled to construct an improved model. This process repeats until the model is trained. Examples of successful use of federated learning include Google’s Gboard, Speech, and Messages, and Apple’s news personalization and speech recognition. However, there are currently limited examples of federated learning being used to enable collaboration between different organizations without direct sharing of data — such a setup has the potential to unlock new use cases that could provide significant societal and economic benefits.
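
To make this training loop concrete, here is a minimal sketch of federated averaging (FedAvg-style) rounds in Python, assuming a simple logistic-regression model trained locally with plain gradient descent. The local_update helper, the synthetic data partitions, and the unweighted averaging step are all simplifications for illustration; production systems typically build on federated learning frameworks such as TensorFlow Federated or Flower.

```python
# A minimal sketch of federated averaging: the server sends the current model
# to each participant, each participant trains locally on its own data, and
# the server averages the returned model weights.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Train locally on one organization's data; only the new weights leave."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)   # logistic-loss gradient
        w -= lr * grad
    return w

# Three organizations, each holding its own private data partition.
rng = np.random.default_rng(1)
partitions = []
for _ in range(3):
    X = rng.normal(size=(200, 5))
    y = (X[:, 0] > 0).astype(float)
    partitions.append((X, y))

global_weights = np.zeros(5)
for round_number in range(10):
    # Server sends the current model out; each participant trains locally
    # on data that never leaves its organization.
    local_weights = [local_update(global_weights, X, y) for X, y in partitions]
    # Server aggregates the model updates (here, an unweighted average).
    global_weights = np.mean(local_weights, axis=0)

print("federated model weights:", global_weights)
```

Note that only model weights cross organizational boundaries in this loop; that is precisely the information the privacy attacks described in the next section try to exploit.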

Privacy Challenges in Federated Learning

Even though federated learning helps to address the privacy challenge of centralized data collection, in the past decade, researchers have discovered new kinds of privacy attacks that can recover sensitive training data even when federated learning is used!

  1. Attacks on model updates: Model updates are determined by the training data. In some cases, it’s possible to recover information about the training data from the model updates used in federated learning (see the toy sketch after this list).
  2. Attacks on trained models: The final trained model also reflects the training data. In some cases, it’s possible to infer information about the training data from the trained model, whether or not federated learning was used to train it.
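
To see how the first kind of attack can work, consider a deliberately simple case: a linear model with squared-error loss and a participant who holds a single training record. The gradient for that record is just a scaled copy of the record itself, so an aggregator that observes the raw update learns the record's direction exactly. The NumPy sketch below is a toy illustration under those assumptions; real attacks on deep networks, such as gradient inversion, are more involved but exploit the same principle.

```python
# Toy illustration of leakage from a model update: for a linear model and a
# single training record, the gradient is a scaled copy of that record.
import numpy as np

rng = np.random.default_rng(2)
x_secret = rng.normal(size=5)      # one participant's sensitive record
y_secret = 3.0                     # its label
w = rng.normal(size=5)             # current global model weights

# Gradient of the squared error for this single record: 2 * (w.x - y) * x
residual = w @ x_secret - y_secret
update = 2 * residual * x_secret   # this is what the aggregator would receive

# The update is (anti)parallel to the secret record.
cos = update @ x_secret / (np.linalg.norm(update) * np.linalg.norm(x_secret))
print("cosine similarity between model update and secret record:", abs(cos))
```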

Developing Solutions: The UK-US PETs Prize Challenges

Announced at the inaugural Summit for Democracy in December 2021, the UK-US PETs prize challenges were a collaborative effort between the U.K. and U.S. governments to drive innovation in privacy-preserving federated learning (PPFL). Contestants could tackle two high-impact use cases: combating financial crime and responding to public health emergencies. In both cases, contestants were tasked with training a machine learning classifier on a federated dataset, whilst providing end-to-end privacy guarantees.

The winning teams were selected by an independent panel of PETs experts from academia, industry, and the public sector, taking into consideration the quantitative metrics from the evaluation platform, the outcomes from red teaming, and the code and written reports submitted by teams.

The winning solutions were announced at the second Summit for Democracy in March 2023; you can watch the announcement video here, which also includes a useful animation of the challenge setup. You can learn more about the challenges on the NIST and UK challenge websites.

Coming up in this blog series

Throughout this series, we plan to present techniques for PPFL alongside lessons learned in the PETs prize challenges. The challenge results demonstrated the increasing practicality of some PPFL techniques — in some cases, these techniques are already being incorporated into commercial products. The results also highlighted areas where existing techniques are not yet practical, and more research is needed.

Over the coming months, we’ll be publishing a number of blogs to provide practical guidance for exploring privacy-preserving federated learning in practice. The series will feature guest authors from organizations involved in the U.K.-U.S. prize challenges, and other leading experts in the field. Topics will include:

  • Privacy threat models in federated learning
  • Solutions developed during the prize challenges
  • Resources for getting started with federated learning

Finally, we want this blog series and our other work in this space to meaningfully contribute to making AI safer and fairer, whilst protecting important shared values such as privacy and autonomy. Through the Atlantic Declaration, the U.S. and U.K. governments have reaffirmed their commitment to developing safe and responsible AI, including further collaboration on privacy enhancing technologies. If you feel you could help us in this ambition, we want to hear from you. Please do get in touch via pets [at] cdei.gov.uk or PrivacyEng [at] nist.gov.

About the author

Joseph Near

Joseph Near is an assistant professor of computer science at the University of Vermont who supports NIST as a moderator for the Privacy Engineering Collaboration Space. His research interests include data privacy, computer security, and programming languages. Joseph received his B.S. in computer science from Indiana University, and his M.S. and Ph.D. in computer science from MIT.

David Darais

David Darais is a Principal Scientist at Galois, Inc. and supports NIST as a moderator for the Privacy Engineering Collaboration Space. David's research focuses on tools for achieving reliable software in critical, security-sensitive, and privacy-sensitive systems. David received his B.S. from the University of Utah, M.S. from Harvard University and Ph.D. from the University of Maryland.

Naomi Lefkovitz

Naomi Lefkovitz is the Senior Privacy Policy Advisor in the Information Technology Lab at the National Institute of Standards and Technology, U.S. Department of Commerce. Her portfolio includes work on the National Strategy for Trusted Identities in Cyberspace (NSTIC), privacy engineering, privacy-enhancing technologies, cybersecurity and standards development.

FierceGovernmentIT named Ms. Lefkovitz on their 2013 “Fierce15” list of the most forward-thinking people working within government information technology, and she is a 2014 Federal 100 Awards winner.

Before joining NIST, she was the Director for Privacy and Civil Liberties in the Cybersecurity Directorate of the National Security Staff in the Executive Office of the President. Her portfolio included the NSTIC as well as addressing the privacy and civil liberties impact of the Obama Administration’s cybersecurity initiatives and programs.

Prior to her tenure at the White House, Ms. Lefkovitz was a senior attorney with the Division of Privacy and Identity Protection at the Federal Trade Commission. Her responsibilities focused primarily on policy matters, including legislation, rulemakings, and business and consumer education in the areas of identity theft, data security and privacy.

At the outset of her career, she was Assistant General Counsel at CDnow, Inc., an early online music retailer.

Ms. Lefkovitz holds a B.A. with honors in French Literature from Bryn Mawr College and a J.D. with honors from Temple University School of Law.

Dave Buckley

Dave Buckley is a Senior Technology Adviser at the UK’s Centre for Data Ethics and Innovation, where he leads the Centre’s work on privacy enhancing technologies. He has led development of CDEI’s PETs Adoption Guide, was a co-editor of the 2023 UN PET Guide, and was a member of the core delivery team for the UK-US PETs prize challenges. Dave is a software engineer and data scientist by background, with a strong interest in quantitative methods for social science.
