
National Institute of Standards and Technology (NIST) workshop on Software Security Assurance Tools, Techniques, and Metrics
7-8 November 2005
Co-located with ASE 2005 Long Beach, California, USA

Funded in part by the Department of Homeland Security (DHS), the National Institute of Standards and Technology (NIST) has started a long-term, ambitious project to improve software security assurance tools. Security is the ability of a system to maintain the confidentiality, integrity, and availability of the information it processes and stores. Software security assurance tools help make software more secure, either by building security into software or by determining how secure software is. Among the project's goals are to

(1) develop a taxonomy of software security flaws and vulnerabilities,

(2) develop a taxonomy of software security assurance (SSA) tool functions and techniques which detect or prevent flaws, and

(3) develop testable specifications of SSA functions and explicit tests to evaluate how closely tools implement the functions. The test material includes reference sets of buggy code.

These goals extend into all phases of the software life cycle from requirements capture through design and implementation to operation and auditing.

The goal of the workshop is to convene researchers, developers, and government and industrial users of SSA tools to

* discuss and refine the taxonomy of flaws and the taxonomy of functions, which are under development,

* come to a consensus on which SSA functions should first have specifications and standard tests developed,

* gather SSA tools suppliers for "target practice" on reference datasets of code, and

* identify gaps or research needs in SSA functions.


Sets of code with known flaws and vulnerabilities, together with corresponding correct versions, can serve as references for tool testing, making research easier and providing a standard of evaluation. Working with others, we will assemble reference datasets covering many types of code, such as Java, C, binaries, and bytecode. We welcome contributions of code you have used.

To help validate the reference datasets, we solicit proposals not exceeding 2 pages to participate in SSA tool "target practice" on the datasets. Tools can range from university projects to commercial products. Participation is intended to demonstrate the state of the art in finding flaws; consequently, proposals should not be marketing write-ups but should highlight technical contributions: techniques used, precision achieved, classes of vulnerabilities detected, suggestions for extending and improving the reference datasets, etc. Participants are expected to provide their own equipment.


SSATTM encourages contributions describing basic research, novel applications, and experience relevant to SSA tools and their evaluation. Topics of particular interest include:

- Benchmarks or reference datasets for SSA tools

- Comparisons of tools

- ROI effectiveness of SSA functions

- Flaw catching effectiveness of SSA functions

- Evaluating SSA tools

- Gaps or research needs in SSA functions

- SSA tool metrics

- Software security assurance metrics

- Surveys of SSA tools

- Relation between flaws and the techniques that catch them

- Taxonomy of software security flaws and vulnerabilities

- Taxonomy of SSA functions or techniques


Papers should not exceed 8 pages in the conference format. Papers exceeding the length restriction will not be reviewed. Papers will be reviewed by at least two program committee members. All papers should clearly identify their novel contributions. All papers should be submitted electronically in PDF format by 26 August 2005 to Liz Fong <efong[at]nist[dot]gov>.


Accepted papers will be published in the workshop proceedings. The workshop proceedings, along with a summary of discussions and the output of the reference dataset "target practice", will be published as a NIST Special Publication.


Program committee:
Freeland Abbott, Georgia Tech
Jim Alves-Foss, U. Idaho
Paul Ammann, George Mason U.
Paul E. Black, NIST
Elizabeth Fong, NIST
Michael Hicks, U. Maryland
Michael Kass, NIST
Michael Koo, NIST
Richard Lippmann, MIT
Robert A. Martin, MITRE Corp.
W. Bradley Martin, NSA
Nachiappan Nagappan, Microsoft Research
Samuel Redwine, James Madison U.
Ravi Sandhu, George Mason U.
Larry D. Wagoner, NSA
Jeffrey M. Voas, SAIC
Important dates:
26 Aug: Paper and tool proposal submission deadline
19 Sep: Paper and proposal notification
15 Oct: Final camera-ready copy due
7-8 Nov: Workshop

Created March 30, 2021, Updated May 17, 2021