
Workshop on Software Security Assurance Tools, Techniques, and Metrics


7 and 8 November 2005
Long Beach, California, USA
Co-located with the 20th IEEE/ACM International Conference on Automated Software Engineering (ASE 2005)


PURPOSE

Funded in part by the Department of Homeland Security (DHS), the National Institute of Standards and Technology (NIST) started a long-term, ambitious project, SAMATE, to improve software security assurance (SSA) tools. Security is the ability of a system to maintain the confidentiality, integrity, and availability of the information it processes and stores. SSA tools help make software more secure, either by building security into software as it is developed or by determining how secure existing software is. Among the project's goals are:

  1. develop a taxonomy of software security flaws and vulnerabilities,
  2. develop a taxonomy of SSA tool functions and techniques that detect or prevent flaws, and
  3. develop testable specifications of SSA functions and explicit tests to evaluate how closely tools implement the functions.

These goals extend into all phases of the software life cycle from requirements capture through design and implementation to operation and auditing.
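
To make goal 3 concrete, a testable specification relies on test cases whose flaws are known and labeled in advance, so a tool's output can be checked against them. The following is a minimal sketch (ours, purely illustrative; the FLAW annotation stands in for whatever class label the flaw taxonomy assigns):

    /* Illustrative seeded-flaw test case; the FLAW annotation is a
       hypothetical stand-in for a taxonomy class label. */
    #include <string.h>

    /* FLAW: stack buffer overflow -- strcpy() performs no bounds check,
       so any name longer than 9 characters overruns buf. */
    void greet(const char *name) {
        char buf[10];
        strcpy(buf, name);   /* overflow when strlen(name) >= 10 */
    }

A checker implementing the corresponding SSA function should flag the strcpy() call; an explicit test can then score tools by whether they report the labeled line.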

As noted in the Call for Papers, the purpose of the workshop is to convene researchers, developers, and government and industrial users of SSA tools to

  • discuss and refine the taxonomy of flaws and the taxonomy of functions, which are under development,
  • come to a consensus on which SSA functions should first have specifications and standard tests developed,
  • gather SSA tool developers for "target practice": see how reference datasets fare against various tools, and
  • identify gaps or requirements for research in SSA functions.

We encourage contributions describing basic research, novel applications, and experience relevant to SSA tools and their evaluation. Topics of particular interest are:

  • Benchmarks or reference datasets for SSA tools
  • Comparisons of tools
  • Return-on-investment (ROI) effectiveness of SSA functions
  • Flaw catching effectiveness of SSA functions
  • Evaluating SSA tools
  • Gaps or research requirements in SSA functions
  • SSA tool effectiveness metrics
  • Software security assurance metrics
  • Surveys of SSA tools
  • Relation between flaws and the techniques that catch them
  • Taxonomy of software security flaws and vulnerabilities
  • Taxonomy of SSA functions or techniques

PROGRAM

November 7, 2005

8:30 - 9:00 : Welcome - Paul E. Black

9:00 - 10:30 : Tools and Metrics - Liz Fong

  • Where do Software Security Assurance Tools Add Value? - David Jackson, David Cooper
  • Metrics that Matter - Brian Chess
  • The Case for Common Flaw Enumeration - Robert Martin, Steven Christey, Joe Jarzombek

10:30 - 11:00 : Break

11:00 - 12:30 : Flaw Taxonomy and Benchmarks - Robert Martin

  • Seven Pernicious Kingdoms: A Taxonomy of Software Security Errors - Katrina Tsipenyuk, Brian Chess, Gary McGraw
  • A Taxonomy of Buffer Overflows for Evaluating Static and Dynamic Software Testing Tools - Kendra Kratkiewicz, Richard Lippmann
  • ABM - A Prototype for Benchmarking Source Code Analyzers - Tim Newsham, Brian Chess

12:30 - 1:30 : Lunch

1:30 - 4:00 : New Techniques - Larry Wagoner

  • A Benchmark Suite for Behavior-Based Security Mechanisms - Dong Ye, Micha Moffie, David Kaeli
  • Testing and Evaluation of Virus Detectors for Handheld Devices - Jose A. Morales, Peter Clarke, Yi Deng
  • Eliminating Buffer Overflows, Using the Compiler or a Standalone Tool - Thomas Plum, David Keaton
  • A Secure Software Architecture Description Language - Jie Ren, Richard Taylor
  • Prioritization of Threats Using the K/M Algebra - Supreeth Venkataraman, Warren Harrison

End of day 1


November 8, 2005

9:00 - 11:30 : Target Practice and Reference Dataset Discussion - Michael Kass

11:30 - 1:00 : Lunch

1:00 - 2:30 : Invited Presentation - Vadim Okun

  • Correctness by Construction: The Case for Constructive Static Verification - Rod Chapman

2:30 - whenever : Open Discussion - Michael Kass

End of workshop

REFERENCE DATASET "TARGET PRACTICE"

Sets of code with known flaws and vulnerabilities, paired with corresponding correct versions, can serve as references for tool testing, easing research and providing a standard of evaluation. Working with others, we will bring reference datasets covering many types of code, such as Java, C, binaries, and bytecode. We welcome contributions of code you've used.
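
As a sketch of what such a paired entry might look like (our illustration, not an actual dataset item), the flawed version gives tools a known target while the correct version checks for false alarms:

    /* Illustrative flawed/fixed pair for a reference dataset entry. */
    #include <string.h>

    /* Flawed version: unbounded copy into a caller-supplied buffer. */
    void copy_bad(char *dst, const char *src) {
        strcpy(dst, src);                  /* no bounds check */
    }

    /* Corrected version: the copy is bounded and always NUL-terminated. */
    void copy_good(char *dst, size_t dstlen, const char *src) {
        strncpy(dst, src, dstlen - 1);     /* bounded copy */
        dst[dstlen - 1] = '\0';            /* guarantee termination */
    }

A tool should warn on copy_bad() and stay silent on copy_good(); disagreement on either half of the pair is informative.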

To help validate the reference datasets, we solicit proposals of no more than 2 pages to participate in SSA tool "target practice" on the datasets. Tools can range from university projects to commercial products. Come "shoot holes" in the reference dataset. Participation is intended to demonstrate the state of the art; consequently, proposals should not be marketing write-ups but should highlight technical contributions: techniques used, precision achieved, classes of vulnerabilities detected, suggestions for extending and improving the reference datasets, and so on. The content and detail of any observations, suggestions, or results shared for publication are completely voluntary and may be anonymous: participants are not obligated to share any results at all.
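
For context, one common way to report "precision achieved" against a reference dataset (our gloss, not a workshop-mandated definition) is to count warnings that match seeded flaws as true positives (TP), warnings matching no flaw as false positives (FP), and seeded flaws a tool misses as false negatives (FN):

    precision = TP / (TP + FP)   (fraction of warnings that are real flaws)
    recall    = TP / (TP + FN)   (fraction of seeded flaws that are reported)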

Participants are expected to provide their own equipment.

PUBLICATION

Accepted papers will be published in the workshop proceedings. The workshop proceedings, along with a summary of discussions and voluntary results of the reference dataset "target practice", will be published as a NIST Special Publication.

Published as "Proceedings of Workshop on Software Security Assurance Tools, Techniques, and Metrics", Elizabeth Fong, ed., U.S. National Institute of Standards and Technology (NIST) Special Publication (SP) 500-265, February 2006.

Informal proceedings, including accepted papers, several flaw taxonomies, and the reference dataset, will be given to attendees on a 32 MB USB drive.

REGISTRATION

On-line conference registration ended 28 October. Walk-up registration is welcome.

IMPORTANT DATES

26 August       Paper and tool proposal submission deadline
19 September    Notification of acceptance
15 October      Final camera-ready version of papers due
7-8 November    Workshop

Note that there is a particularly complementary workshop on Software Certificate Management (SoftCeMent) on 8 November.

PROGRAM COMMITTEE

Freeland Abbott, Georgia Tech
Paul Ammann, George Mason U.
Paul E. Black, NIST
Elizabeth Fong, NIST
Michael Hicks, U. Maryland
Michael Kass, NIST
Michael Koo, NIST
Richard Lippmann, MIT
Robert A. Martin, MITRE Corp.
W. Bradley Martin, NSA
Nachiappan Nagappan, Microsoft Research
Samuel Redwine, James Madison U.
Ravi Sandhu, George Mason U.
Larry D. Wagoner, NSA