
All Contributions



Approximate Minima Perturbation (AMP)

De-identification Tool
Keywords: Differential Privacy, Machine Learning

GitHub POC: @jnear
Affiliation/Organization(s) Contributing: Carnegie Mellon University; Boston University; University of California, Berkeley; University of California, Santa Cruz; Peking University

This work presents a novel algorithm, Approximate Minima Perturbation (AMP), for differentially private convex optimization, along with an extensive empirical evaluation on real datasets of both AMP and a number of previous approaches to this problem. The GitHub repository contains Python implementations of AMP, noisy stochastic gradient descent, noisy Frank-Wolfe, objective perturbation, and two variants of output perturbation, as well as a number of benchmarks for generating experimental results.
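As a rough illustration of the approach (not the repository's implementation), the sketch below perturbs the objective, minimizes it only approximately, and then adds output noise to absorb the remaining optimization error; the noise scales are placeholders rather than the calibrated values derived in the paper.

```python
# Hedged sketch of the approximate-minima-perturbation idea for a generic
# convex loss. Noise scales are illustrative placeholders, not the values
# calibrated in the AMP paper.
import numpy as np
from scipy.optimize import minimize

def amp_sketch(loss, dim, obj_noise_scale, out_noise_scale, tol=1e-8):
    b = np.random.normal(0.0, obj_noise_scale, size=dim)      # objective-perturbation noise
    perturbed = lambda w: loss(w) + b @ w                      # linearly perturbed objective
    w_approx = minimize(perturbed, np.zeros(dim), tol=tol).x   # approximate minimizer only
    return w_approx + np.random.normal(0.0, out_noise_scale, size=dim)  # output noise covers the gap
```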

Notes: The AMP algorithm and associated experimental results are described in a paper presented at the 2019 IEEE Symposium on Security and Privacy.

AMP on GitHub  Share Feedback


ARX Data Anonymization Tool

De-identification Tool
Keywords: Differential Privacy, K-Anonymity, Anonymization, Machine Learning

GitHub POC: @prasser
Affiliation/Organization(s) Contributing: TUM - Technical University of Munich

ARX is comprehensive open-source software for anonymizing sensitive personal data. It supports a wide variety of (1) privacy and risk models, (2) methods for transforming data, and (3) methods for analyzing the usefulness of output data.

ARX  Share Feedback


City of Seattle Open Data Risk Assessment

Privacy Risk Assessment Use Case

GitHub POC and Email Address: @k-finch | kfinch [at] fpf.org
Affiliation/Organization(s) Contributing: Future of Privacy Forum (FPF)

While the transparency goals of the open data movement serve important functions in cities like Seattle, some municipal datasets about the city and its citizens’ activities carry inherent risks to individual privacy when shared publicly. In 2016, the City of Seattle declared in its Open Data Policy that the city’s data would be “open by preference,” except when doing so may affect individual privacy. To ensure its Open Data Program effectively protects individuals, Seattle committed to performing an annual risk assessment and tasked the Future of Privacy Forum (FPF) with creating and deploying an initial privacy risk assessment methodology for open data.

This Report first describes inherent privacy risks in an open data landscape, with an emphasis on potential harms related to re-identification, data quality, and fairness. To address these risks, the Report includes a Model Open Data Benefit-Risk Analysis (“Model Analysis”). The Model Analysis evaluates the types of data contained in a proposed open dataset, the potential benefits – and concomitant risks – of releasing the dataset publicly, and strategies for effective de-identification and risk mitigation. This holistic assessment guides city officials in determining whether to release the dataset openly, to release it in a limited-access environment, or to withhold it from publication (absent countervailing public policy considerations). The Report methodology builds on extensive work done in this field by experts at the National Institute of Standards and Technology, the University of Washington, the Berkman Klein Center for Internet & Society at Harvard University, and others, and adapts existing frameworks to the unique challenges faced by cities as local governments, technological system integrators, and consumer-facing service providers. The Report concludes with concrete technical, operational, and organizational recommendations that enable the Seattle Open Data Program to identify and address key privacy, ethical, and equity risks, in light of the city’s current policies and practices.

Notes: Templates for the Model Benefit-Risk Assessment (https://fpf.org/wp-content/uploads/2018/01/Model-Benefit-Risk-Analysis.pdf) and the Program Maturity Assessment (https://fpf.org/wp-content/uploads/2018/01/Program-Maturity-Assessment.pdf) are available separately, as well.

Related blog post: https://fpf.org/2018/01/22/public-comments-on-proposed-open-data-risk-assessment-for-the-city-of-seattle/

Future of Privacy Forum website: https://fpf.org/

City of Seattle Executive Order 2016-01: http://murray.seattle.gov/wp-content/uploads/2016/02/2.26-EO.pdf

Risk Assessment Report (PDF)   Share Feedback


Differential Privacy Synthetic Data Challenge Algorithms

De-identification Tools
Keywords: Differential Privacy, Synthetic Data Generation

Participants in Match #3 of NIST's 2018 Public Safety Communications Research Differential Privacy Synthetic Data Challenge developed these open-source algorithms as part of an effort to advance differential privacy. Participants were challenged to create new methods of data de-identification, or improve existing ones, while preserving the dataset’s utility for analysis. All solutions were required to satisfy the differential privacy guarantee, a provable guarantee of individual privacy protection. Participants used a dataset of emergency response events occurring in San Francisco and a sub-sample of the IPUMS USA data for the 1940 U.S. Census. Contributions are listed in alphabetical order.

DP_WGAN-UCLANESL

Team Members: Prof. Mani Srivastava (@msrivastava) - Team Captain (Match 1 and Match 3), Moustafa Alzantot (@malzantot) - (Match 1 and Match 3), Nat Snyder (@natsnyder1) - Match 1, Supriyo Chakraborty (@supriyogit) - Match 1
This repo contains an implementation of the award-winning solution to the 2018 Differential Privacy Synthetic Data Challenge by team UCLANESL. The solution was awarded 5th place in Match #3 of the challenge, and an earlier version won 4th place in Match #1. The solution trains a Wasserstein generative adversarial network (WGAN) on the real, private dataset. Differentially private training is achieved by sanitizing the discriminator's gradients (norm clipping and adding Gaussian noise). Once the model is trained, it can be used to generate a synthetic dataset by feeding random noise into the generator.
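For illustration only, the sketch below shows the kind of gradient sanitization step described above (per-example norm clipping followed by Gaussian noise); the function and parameter names are hypothetical and do not come from the team's code.

```python
# Hedged sketch of differentially private gradient sanitization for the
# discriminator update: clip each example's gradient, sum, add Gaussian noise.
import numpy as np

def sanitize_gradients(per_example_grads, clip_norm, noise_multiplier):
    clipped = []
    for g in per_example_grads:                        # bound each example's influence
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    summed = np.sum(clipped, axis=0)
    noise = np.random.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    return (summed + noise) / len(per_example_grads)   # noisy average gradient
```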

 DP_WGAN-UCLANESL on GitHub More Information Share Feedback

DPFieldGroups

Team Members & Affiliation: John Gardner (no affiliation)
This is the fourth-place entry in the third round of the NIST Differential Privacy Synthetic Data Challenge. The goal of the challenge is to produce differentially private synthetic data while retaining as much useful information as possible about the original dataset. Colorado census data from 1940, with 98 field columns, was provided for algorithm development, with census data from other states used for testing. This solution groups together fields that have been found to be highly correlated. For each of these groups, a histogram is created that counts the number of occurrences of every possible combination of values of all fields in the group. For privatization, Laplace noise is added to every bin, with scale proportional to the number of groups divided by the total epsilon. Synthetic data is generated by selecting a random bin for each group with probability weighted by the noisy bin counts. The field values corresponding to each group's selected bin are written out as a single row of synthetic data.
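The sketch below illustrates the per-group mechanism in a few lines of Python; names and parameters are illustrative and not taken from the entry's code, and `epsilon_share` stands for the total epsilon divided by the number of groups.

```python
# Hedged sketch: joint histogram over one correlated field group, Laplace
# noise on the bin counts, then weighted sampling of synthetic rows.
import numpy as np
import pandas as pd

def synthesize_group(df, group_cols, epsilon_share, n_rows):
    counts = df.groupby(group_cols).size()                               # joint histogram
    noisy = counts.to_numpy() + np.random.laplace(0.0, 1.0 / epsilon_share, len(counts))
    probs = np.clip(noisy, 0, None)
    probs = probs / probs.sum()                                          # sampling weights
    picks = np.random.choice(len(counts), size=n_rows, p=probs)          # pick noisy bins
    return pd.DataFrame([counts.index[i] for i in picks], columns=group_cols)
```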

DPFieldGroups on GitHub Share Feedback

DPSyn

Team Members & Affiliations: Ninghui Li (Purdue University), Zhikun Zhang (Zhejiang University), Tianhao Wang (Purdue University)
We present DPSyn, an algorithm for synthesizing microdata while satisfying differential privacy, and its instantiation to the dataset used in the competition, namely Public Use Microdata Sample (PUMS) of the 1940 USA Census Data.

DPSyn GitHub Share Feedback

rmckenna

Team Member & Affiliation: Ryan McKenna (UMass Amherst)
The first-place entry in the third round of the NIST Differential Privacy Synthetic Data Challenge. The high-level idea is to (1) use the Gaussian mechanism to obtain noisy answers to a carefully selected set of counting queries (1-, 2-, and 3-way marginals) and (2) find a synthetic dataset that approximates the true data with respect to those queries. The latter step is accomplished with [3], and the former step uses ideas inspired by [1] and [2]. More specifically, query selection is done by calculating the mutual information (on the public dataset) for each pair of attributes and selecting the marginal queries that have high mutual information.

[1] Zhang, Jun, et al. "PrivBayes: Private data release via Bayesian networks." ACM Transactions on Database Systems (TODS) 42.4 (2017): 25.
[2] Chen, Rui, et al. "Differentially private high-dimensional data publication via sampling-based inference." Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2015.
[3] McKenna, Ryan, Daniel Sheldon, and Gerome Miklau. "Graphical-model based estimation and inference for differential privacy." Proceedings of the 36th International Conference on Machine Learning. 2019.
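A rough sketch of the measurement side of this pipeline appears below: attribute pairs are ranked by mutual information computed on a public dataset, and the selected 2-way marginals are measured with the Gaussian mechanism. The post-processing step that fits a synthetic dataset to the noisy marginals (reference [3]) is omitted, and all names are illustrative rather than taken from the winning code.

```python
# Hedged sketch: select high-mutual-information attribute pairs on public
# data, then measure those marginals on private data with Gaussian noise.
import itertools
import numpy as np
import pandas as pd
from sklearn.metrics import mutual_info_score

def noisy_top_marginals(public_df, private_df, k, sigma):
    pairs = itertools.combinations(public_df.columns, 2)
    scored = sorted(pairs,
                    key=lambda p: mutual_info_score(public_df[p[0]], public_df[p[1]]),
                    reverse=True)[:k]                                  # highest-MI pairs
    measurements = {}
    for a, b in scored:
        marginal = pd.crosstab(private_df[a], private_df[b]).to_numpy().astype(float)
        measurements[(a, b)] = marginal + np.random.normal(0.0, sigma, marginal.shape)
    return measurements
```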

rmckenna Algorithm on GitHub Share Feedback


Differentially Private Stochastic Gradient Descent (DP-SGD)

De-identification Tool
Keywords: Differential Privacy, Machine Learning

GitHub POC: @ilyamironov

Train machine learning models with differential privacy by clipping gradients and adding noise to them during stochastic gradient descent.
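A minimal NumPy sketch of one such update is shown below, assuming a hypothetical per-example gradient function `grad_fn(w, x, y)`; it illustrates the technique rather than the repository's implementation.

```python
# Hedged sketch of one DP-SGD step: per-example clipping, Gaussian noise,
# then an ordinary gradient update with the noisy average.
import numpy as np

def dp_sgd_step(w, batch, grad_fn, lr, clip_norm, noise_multiplier):
    grads = []
    for x, y in batch:
        g = grad_fn(w, x, y)
        g = g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))    # per-example clipping
        grads.append(g)
    noise = np.random.normal(0.0, noise_multiplier * clip_norm, size=w.shape)
    noisy_mean = (np.sum(grads, axis=0) + noise) / len(batch)        # noisy average gradient
    return w - lr * noisy_mean
```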

Notes: Paper with full details: https://arxiv.org/abs/1607.00133

DP-SGD on GitHub  Share Feedback


Ektelo

De-identification Tool
Keywords: Differential Privacy

GitHub POC: @michaelghay

Ektelo is a programming framework and system that aids programmers in developing differentially private programs with high utility. Ektelo can be used to author programs for a variety of statistical tasks that involve answering counting queries over a table of arbitrary dimension.
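For flavor, the toy example below answers a single counting query with the Laplace mechanism; it is not Ektelo's API and only conveys the kind of counting query the framework supports.

```python
# Hedged, standalone Laplace-mechanism counting query (not Ektelo code).
import numpy as np

def noisy_count(table, predicate, epsilon):
    true_count = sum(1 for row in table if predicate(row))      # sensitivity-1 counting query
    return true_count + np.random.laplace(0.0, 1.0 / epsilon)   # Laplace noise at scale 1/epsilon

# Example: noisy number of records with age over 40, under epsilon = 0.1.
records = [{"age": 34}, {"age": 51}, {"age": 47}]
print(noisy_count(records, lambda r: r["age"] > 40, epsilon=0.1))
```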

Notes: Ektelo is described in detail in a SIGMOD 2018 paper, titled "EKTELO: A Framework for Defining Differentially-Private Computations." https://dl.acm.org/citation.cfm?id=3196921

Ektelo on GitHub   Share Feedback


FAIR Privacy

Privacy Risk Assessment Tool

GitHub POC: @privacymaverick
Affiliation/Organization(s) Contributing: Enterprivacy Consulting Group 

FAIR Privacy is a quantitative privacy risk framework based on FAIR (Factor Analysis of Information Risk). FAIR Privacy examines personal privacy risks (to individuals), not organizational risks. Included in this tool is a PowerPoint deck illustrating the components of FAIR Privacy and an example based on the U.S. Census. In addition, an Excel spreadsheet provides a risk calculator based on Monte Carlo simulation.
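As a generic illustration of the frequency-times-magnitude style of estimate such a calculator produces, the snippet below runs a small Monte Carlo simulation; the distributions and parameters are placeholders and do not reflect the included spreadsheet.

```python
# Hedged, generic Monte Carlo risk illustration: sample event frequency and
# harm magnitude, then summarize the simulated annual harm distribution.
import numpy as np

rng = np.random.default_rng(0)
trials = 10_000
events_per_year = rng.poisson(lam=2.0, size=trials)                # how often a problematic data action occurs
harm_per_event = rng.lognormal(mean=1.0, sigma=0.75, size=trials)  # magnitude of harm per occurrence
annual_harm = events_per_year * harm_per_event
print("median annual harm:", np.median(annual_harm))
print("95th percentile:   ", np.percentile(annual_harm, 95))
```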

Notes: Some additional resources are provided in the PowerPoint deck.

Feedback and suggestions for improvement on both the framework and the included calculator are welcome. Additionally, analysis of the spreadsheet by a statistician is most welcome.

FAIR Privacy on GitHub   Share Feedback


GUPT: Privacy preserving data analysis made easy

De-identification Tool
Keywords: Differential Privacy, Machine Learning, Database Queries

GitHub POC: @prashmohan
Affiliation/Organization(s) Contributing: University of California, Berkeley; University of California, Santa Cruz; Cornell University

The tool provides differential privacy guarantees for statistical/machine learning algorithms by treating the underlying algorithm as a black box and relying only on its input/output signature. It implements a variant of the celebrated sample-and-aggregate framework of Nissim, Raskhodnikova, and Smith (2007). The empirical evaluation shows that the system performs well on a variety of learning tasks (such as clustering and regression).
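The sketch below shows the general sample-and-aggregate pattern (run the black-box analysis on disjoint blocks, then combine the block outputs with calibrated noise); parameter names are illustrative and not GUPT's API.

```python
# Hedged sketch of sample-and-aggregate: each record lands in one block, so
# the average of block outputs has sensitivity (output range) / n_blocks.
import numpy as np

def sample_and_aggregate(data, analysis_fn, n_blocks, epsilon, output_range):
    blocks = np.array_split(np.random.permutation(data), n_blocks)
    outputs = np.array([analysis_fn(b) for b in blocks])        # black-box runs per block
    sensitivity = (output_range[1] - output_range[0]) / n_blocks
    return outputs.mean() + np.random.laplace(0.0, sensitivity / epsilon)

# Example: a private mean of values known to lie in [0, 100].
values = np.random.uniform(0, 100, size=1000)
print(sample_and_aggregate(values, np.mean, n_blocks=25, epsilon=0.5, output_range=(0, 100)))
```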

Additional Notes: GUPT is described in detail in a SIGMOD 2012 paper titled "GUPT: Privacy Preserving Data Analysis Made Easy."

GUPT on GitHub   Share Feedback


NIST Privacy Risk Assessment Methodology (PRAM)

Privacy Risk Assessment Tool

GitHub POC: @kboeckl
Affiliation/Organization(s) Contributing: NIST

The PRAM is a tool that applies the risk model from NISTIR 8062 and helps organizations analyze, assess, and prioritize privacy risks to determine how to respond and select appropriate solutions. The PRAM can help drive collaboration and communication between various components of an organization, including privacy, cybersecurity, business, and IT personnel.

Worksheet 1: Framing Business Objectives and Organizational Privacy Governance
Worksheet 2: Assessing System Design; Supporting Data Map
Worksheet 3: Prioritizing Risk
Worksheet 4: Selecting Controls
Catalog of Problematic Data Actions and Problems

Notes: NIST welcomes organizations to use the PRAM and share feedback to improve it.

PRAM on GitHub   Share Feedback


PixelDP

De-identification Tool
De-identification Keywords: Differential Privacy, Verification of Algorithms, Machine Learning, Adversarial Examples

GitHub POC: @matlecu
Affiliation/Organization(s) Contributing: Columbia University

Adversarial examples that fool prediction models are a new class of attacks introduced by machine learning deployments. PixelDP is the first certified defense that both offers provable guarantees of robustness against these attacks and scales to large models and datasets, such as Google’s Inception on the ImageNet dataset. PixelDP's design relies on a novel use of differential privacy at prediction time.
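As a hedged illustration of prediction-time noise (not the PixelDP code, which injects noise into a network layer), the sketch below averages a model's class probabilities over many Gaussian-noised copies of the input; `model` is any function returning class probabilities.

```python
# Hedged sketch: smooth a classifier's prediction by averaging its output
# over Gaussian-noised inputs; the smoothed output's stability is what a
# robustness certificate reasons about.
import numpy as np

def noisy_prediction(model, x, sigma, n_draws=100):
    probs = np.mean(
        [model(x + np.random.normal(0.0, sigma, size=x.shape)) for _ in range(n_draws)],
        axis=0,
    )                                        # expected output under the noise distribution
    return int(np.argmax(probs)), probs
```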

Additional Notes: PixelDP is described in an IEEE S&P 2019 research paper.

PixelDP on GitHub   Share Feedback


Privacy Protection Application (PPA)

De-identification Tool
Keywords: K-Anonymity, Anonymization, Information Leakage, Algorithmic Fairness, Database Queries, Location Data

POC: carterjm [at] ornl.gov

The Privacy Protection Application de-identifies databases that contain sequential geolocation data, sometimes called moving object databases. A record of a personally-owned vehicle’s route of travel is an example, but the tool can process other types of geolocation sequences. The application has a graphical user interface and operates on Linux, OS X, and Windows. Location suppression is the de-identification strategy used, and decisions about which locations to suppress are based on information theory. This strategy does not modify the precision of retained location information. One of the objectives is to produce data usable for vehicle safety analysis and transportation application development.

Notes: This tool handles static databases and has two versions. The main GUI version uses a very efficient map-matching strategy that may identify false roads for certain types of road structures. The tagged version (https://github.com/usdot-its-jpo-data-portal/privacy-protection-application/releases/tag/hmm-mm) uses a Hidden Markov Model map-matching algorithm that is more accurate but less efficient; this version is a command-line tool that runs in Docker. Additionally, a streaming de-identification tool was developed for a USDOT Safety Pilot Study. This tool uses geofencing to identify locations that can be retained. It can also be found on GitHub: https://github.com/usdot-jpo-ode/jpo-cvdp

PPA on GitHub   Share Feedback


Private Aggregation of Teacher Ensembles (PATE)

De-identification Tool
Keywords: Differential Privacy, Machine Learning

GitHub POC: @npapernot 

The PATE framework achieves differentially private learning by coordinating an ensemble of "teacher" models trained on disjoint partitions of the sensitive data; their predictions are aggregated with noise to supervise a "student" model that can be released.
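A rough sketch of the aggregation step is shown below: teachers trained on disjoint partitions vote on a label, Laplace noise is added to the vote counts, and the noisy plurality supervises the student; all names and the noise parameter are illustrative.

```python
# Hedged sketch of noisy teacher-vote aggregation (not the repository's code).
import numpy as np

def noisy_teacher_label(teachers, x, n_classes, gamma):
    votes = np.zeros(n_classes)
    for teacher in teachers:                  # each teacher predicts a class for x
        votes[teacher(x)] += 1
    votes += np.random.laplace(0.0, 1.0 / gamma, size=n_classes)  # noisy vote counts
    return int(np.argmax(votes))              # label released to train the student
```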

Notes: Paper with full details: https://arxiv.org/abs/1802.08908

PATE Framework on GitHub   Share Feedback


Interested in contributing? 

Contribute your de-identification tool.


Created March 1, 2019, Updated September 19, 2019