Search Publications

Displaying 51 - 74 of 74

SATE VI Ockham Sound Analysis Criteria

July 11, 2017
Author(s)
Paul E. Black
In preparation for SATE VI, we present our current thoughts on the Ockham Sound Analysis Criteria track. First, we explain the purpose of the Ockham track and define some terms, such as "sound", "finding", and "site". Then we present the general flow for

Dramatically Reducing Software Vulnerabilities

January 18, 2017
Author(s)
Paul E. Black, Larry Feldman, Gregory A. Witte
This bulletin summarizes the information presented in NISTIR 8151: Dramatically Reducing Software Vulnerabilities: Report to the White House Office of Science and Technology Policy. The publication starts by describing well-known security risks and

Resilience and System Level Security

December 20, 2016
Author(s)
Mark L. Badger
One approach for reducing damage caused by software vulnerabilities is to take advantage of emerging systems architecture patterns to strategically improve assurance. Emerging systems architectures embody significant choices about where computation takes

MPInterfaces: A Materials Project based Python Tool for a High Throughput Computational Screening of Interfacial Systems

May 31, 2016
Author(s)
Arunima Singh, Kiran Mathew, Joshua Gabriel, Kamal Choudhary, Susan B. Sinnott, Albert Davydov, Francesca M. Tavazza, Richard G. Hennig
A Materials Project based open-source Python tool, MPInterfaces, has been developed to automate the high throughput computational screening and study of interfacial systems. The framework encompasses creation and manipulation of interface structures for

SATE V Ockham Sound Analysis Criteria

March 22, 2016
Author(s)
Paul E. Black, Athos Ribeiro
Static analyzers examine the source or executable code of programs to find problems. Many static analyzers use heuristics or approximations to handle programs of up to millions of lines of code. We established the Ockham Sound Analysis Criteria to
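A minimal sketch, assuming a simple pattern-based heuristic, of the kind of check a static analyzer performs when scanning source for problems; the flagged function and the scanning approach are illustrative only and are not the criteria defined in the paper.

import re

# Heuristic check: flag calls to gets(), a C function with no bounds checking.
DANGEROUS_CALL = re.compile(r"\bgets\s*\(")

def scan(source: str):
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if DANGEROUS_CALL.search(line):
            findings.append((lineno, "call to gets() may overflow its buffer"))
    return findings

example_c = 'int main(void){ char buf[16]; gets(buf); return 0; }'
print(scan(example_c))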

Towards a Periodic Table of Bugs

June 19, 2015
Author(s)
Paul E. Black, Irena V. Bojanova, Yaacov Yesha, Yan Wu
High-confidence systems must not be vulnerable to attacks that reduce the security, reliability, or availability of the system as a whole. One collection of vulnerabilities is the Common Weakness Enumeration (CWE). It represents a considerable community

Towards a "Periodic Table" of Bugs

April 8, 2015
Author(s)
Irena Bojanova
Our vision for a "periodic table" of bugs is a "natural" organization of a catalog or dictionary or taxonomy to describe software weaknesses and vulnerabilities. Such an organization will help the community to: a) more closely explain the nature of

Fuzz Testing for Software Assurance

March 1, 2015
Author(s)
Vadim Okun, Elizabeth N. Fong
Fuzz Testing, or fuzzing, is a software testing technique that involves providing invalid, unexpected, or random test inputs to the software system under test. The system is then monitored for crashes and other undesirable behavior. Fuzz testing can be
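A minimal illustrative sketch of the fuzzing loop the abstract describes: feed random inputs to a target and monitor for crashes. The toy parser, iteration count, and input sizes are assumptions for illustration, not part of the publication.

import random

def toy_parser(data: bytes) -> None:
    # Hypothetical system under test: mishandles inputs that start with 0xFF.
    if data[:1] == b"\xff":
        raise ValueError("malformed header")  # stands in for a crash

def fuzz(target, trials: int = 10_000, max_len: int = 64):
    crashes = []
    for _ in range(trials):
        # Generate an invalid, unexpected, or random input.
        data = bytes(random.randrange(256) for _ in range(random.randrange(max_len)))
        try:
            target(data)
        except Exception as exc:   # monitor for crashes and other undesirable behavior
            crashes.append((data, repr(exc)))
    return crashes

if __name__ == "__main__":
    found = fuzz(toy_parser)
    print(f"{len(found)} crashing inputs found")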

Formalizing Software Bugs

December 8, 2014
Author(s)
Irena Bojanova
Knowing what makes a software system vulnerable to attacks is critical, as software vulnerabilities hurt the security, reliability, and availability of the system as a whole. In addition, understanding how an adversary operates is essential to effective cyber

Characterizing and Back-Porting Performance Improvement

September 25, 2013
Author(s)
Clif Flynt, Phil Brooks, Donald G. Porter
The Tcl interpreter is constantly being modified and improved. Improvements include new features and performance boosts. Everyone wants to use the latest releases with the newest improvements, but corporate users with large code bases may not be able to do

The Juliet 1.1 C/C++ and Java Test Suite

October 1, 2012
Author(s)
Frederick E. Boland Jr., Paul E. Black
The Juliet Test Suite 1.1 is a collection of over 81,000 synthetic C/C++ and Java programs with known flaws. These programs are useful as test cases for testing the effectiveness of static analyzers and other software assurance tools, and are in the public
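A hedged sketch of how such a suite of programs with known flaws might be used to score a static analyzer: compare the tool's reported (file, line) findings against known flaw locations. The manifest and finding formats below are assumptions, not the actual Juliet metadata.

# Score a static analyzer against known flaw locations (hypothetical formats).
known_flaws = {                      # (file, line) -> CWE id, serving as ground truth
    ("CWE121_example_01.c", 42): "CWE-121",
    ("CWE476_example_03.c", 17): "CWE-476",
}
tool_findings = [                    # (file, line) pairs reported by the tool
    ("CWE121_example_01.c", 42),
    ("CWE121_example_01.c", 90),     # not a known flaw -> counted as a false positive
]

reported = set(tool_findings)
expected = set(known_flaws)

true_positives = reported & expected
false_negatives = expected - reported
false_positives = reported - expected

recall = len(true_positives) / len(expected)
print(f"recall={recall:.2f}, TP={len(true_positives)}, "
      f"FN={len(false_negatives)}, FP={len(false_positives)}")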

Static Analyzers: Seat Belts for Your Code

January 10, 2012
Author(s)
Paul Black
Just as seat belt use is widespread, we argue that the use of static analysis should be part of ethical software development. We explain some of the procedures of the four Static Analysis Tool Expositions (SATE), and some of the lessons we learned

Counting Bugs is Harder Than You Think

October 20, 2011
Author(s)
Paul E. Black
Software Assurance Metrics And Tool Evaluation (SAMATE) is a broad, inclusive project at the U.S. National Institute of Standards and Technology (NIST) with the goal of improving software assurance by developing materials, specifications, and methods to

An IEEE 1588 Performance Testing Dashboard for Power Industry Requirements

September 12, 2011
Author(s)
Julien M. Amelot, Ya-Shian Li-Baboud, Clement Vasseur, Jeffrey Fletcher, Dhananjay Anand, James Moyne
The numerous time synchronization performance requirements in the Smart Grid entail the need for a set of common metrics and test methods to verify the ability of the network system and its components to meet the power industry's accuracy, reliability and
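As one hedged example of such a metric, the sketch below computes clock offset statistics from paired master/slave timestamps and checks them against an accuracy bound; the sample data and the 1 microsecond bound are illustrative assumptions, not requirements taken from the paper.

# Offset of a slave clock relative to the master, from paired timestamps (seconds).
samples = [                     # (master_time, slave_time) pairs, illustrative only
    (0.000000, 0.0000004),
    (1.000000, 1.0000007),
    (2.000000, 1.9999992),
]

offsets = [slave - master for master, slave in samples]
mean_offset = sum(offsets) / len(offsets)
worst_offset = max(abs(o) for o in offsets)

ACCURACY_BOUND = 1e-6           # hypothetical 1 microsecond accuracy requirement
print(f"mean offset = {mean_offset:+.1e} s, worst = {worst_offset:.1e} s, "
      f"meets bound: {worst_offset <= ACCURACY_BOUND}")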

Maintainers Manual for Version 2.2.1 of the NIST DMIS Test Suite

October 25, 2010
Author(s)
Thomas R. Kramer, John A. Horst
This manual is a maintainers manual for the NIST DMIS Test Suite, version 2.2.1. DMIS (Dimensional Measuring Interface Standard) is a language for writing programs for coordinate measuring machines and other dimensional measurement equipment. The manual is

System Builders Manual for Version 2.2.1 of the NIST DMIS Test Suite

October 25, 2010
Author(s)
Thomas R. Kramer, John A. Horst
This manual is a system builders manual for the NIST DMIS Test Suite, version 2.2.1. DMIS (Dimensional Measuring Interface Standard) is a language for writing programs for coordinate measuring machines and other dimensional measurement equipment. The

Users Manual for Version 2.2.1 of the NIST DMIS Test Suite

October 25, 2010
Author(s)
Thomas R. Kramer, John A. Horst
This manual is a users manual for the NIST DMIS Test Suite, version 2.2.1. DMIS (Dimensional Measuring Interface Standard) is a language for writing programs for coordinate measuring machines and other dimensional measurement equipment. The manual

Software Interoperability: Enabling New Technologies

May 19, 2005
Author(s)
John V. Messina, Matthew L. Aronoff, Eric D. Simmon
New and potentially disruptive technologies are constantly being introduced into the electronic interconnection industry as companies seek to improve their manufacturing chain and stay competitive. These new technologies do not exist in a vacuum and must

SARD: A Software Assurance Reference Dataset

Author(s)
Paul E. Black
Software assurance tools examine code for problems. To test such tools, we need programs with known bugs as ground truth. The Software Assurance Reference Dataset (SARD) is a publicly accessible collection of over 100,000 test cases in different