Welcome to the workshop on Software Security Assurance Tools, Techniques, and Metrics, organized by the U.S. National Institute of Standards and Technology (NIST). The purpose of the workshop is to convene researchers, developers, and government and industrial users of Software Security Assurance (SSA) tools to
The material and papers for the workshop will be distributed on USB drives to the participants. The contents of the USB drives are:
We thank those who worked to organize this workshop, particularly Elizabeth Fong, who handled much of the correspondence, and Debra A. Brodbeck, who provided conference support. We appreciate the program committee for their efforts in reviewing the papers. We are grateful to NIST, especially the Software Diagnostics and Conformance Testing Division, for providing the organizers' time. On behalf of the program committee and the whole SAMATE team, thank you to everyone for devoting their time and resources to join us.
Sincerely,
Dr. Paul E. Black
Call for Papers
Agenda
Description of Reference Data Set
Software Flaw Taxonomy
National Institute of Standards and Technology (NIST) workshop on
Software Assurance Tools, Techniques, and Metrics
7-8 November 2005
Co-located with ASE 2005
Long Beach, California, USA
With funding in part from the Department of Homeland Security (DHS), the National Institute of Standards and Technology (NIST) started a long-term, ambitious project to improve software security assurance tools. Security is the ability of a system to maintain the confidentiality, integrity, and availability of information processed and stored by a computer. Software security assurance tools help make software more secure, either by building security into software or by determining how secure software is. Among the project's goals are:
These goals extend into all phases of the software life cycle from requirements capture through design and implementation to operation and auditing. The goal of the workshop is to convene researchers, developers, and government and industrial users of SSA tools to
Sets of code with known flaws and vulnerabilities, along with corresponding correct versions, can serve as references for tool testing, making research easier and providing a standard for evaluation. Working with others, we will assemble reference datasets covering many types of code, such as Java, C, binaries, and bytecode. We welcome contributions of code you've used. To help validate the reference datasets, we solicit proposals not exceeding 2 pages to participate in SSA tool "target practice" on the datasets. Tools can range from university projects to commercial products. Participation is intended to demonstrate the state of the art in finding flaws; consequently, the proposals should not be marketing write-ups, but should highlight technical contributions: techniques used, precision achieved, classes of vulnerabilities detected, suggestions for extensions to and improvements of the reference datasets, etc. Participants are expected to provide their own equipment.
SSATTM encourages contributions describing basic research, novel applications, and experience relevant to SSA tools and their evaluation. Topics of particular interest are:
Papers should not exceed 8 pages in the conference format. Papers exceeding the length restriction will not be reviewed. Papers will be reviewed by at least two program committee members. All papers should clearly identify their novel contributions. All papers should be submitted electronically in PDF format by 26 August 2005 to Elizabeth Fong (efong[at]nist[dot]gov).
Accepted papers will be published in the workshop proceedings. The workshop proceedings, along with a summary of discussions and the output of the reference dataset "target practice", will be published as a NIST Special Publication.
----------------------------------------------------------------------
Workshop Program

Monday, November 7, 2005

Time          | Session
8:30 – 9:00   | Welcome – Paul Black
9:00 – 10:30  | Tools and Metrics – Elizabeth Fong
10:30 – 11:00 | Break
11:00 – 12:30 | Flaw Taxonomy and Benchmarks – Robert Martin
12:30 – 1:30  | Lunch
1:30 – 4:00   | New Techniques – Larry Wagoner
End of day one

Tuesday, November 8, 2005

Time          | Session
9:00 – 11:30  | Discussion of Test Results and Reference Dataset
11:30 – 1:00  | Lunch
1:00 – 2:30   | Invited Presentation – Vadim Okun
End of workshop
The SAMATE Reference Dataset (SRD) is a rapidly growing set of contributed test cases for measuring software assurance (SwA) tool capability against a functional specification for that tool.
This initial distribution is a compilation of C source code test cases that will be used for evaluating the functional capability of C source code scanning tools. Contributions from MIT Lincoln Lab and Fortify Software Inc. make up this initial set. Additional contributions from Klocwork Inc. and Ounce Labs Inc. will be added soon.
We expect to expand the SRD to include other languages (e.g., C++ and Java) as well as test suites for other SwA tools (such as those that examine requirements and software design documents).
MIT Contribution
Documentation for each test case is contained in the source files themselves. In the case of the MIT contribution, the first line of each test case contains a classification code describing the test case “signature” (in terms of code complexity). All MIT discrete test cases are “buffer overflow” examples, with permutations of some of the 22 coding variation factors to challenge a tool's ability to discover a buffer overflow or recognize a patched version of the overflow. Also, MIT contributed 14 models (scaled-down versions) of 3 real world applications (bind, sendmail, and wu-ftpd).
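As a purely illustrative sketch (not an actual MIT test case, and omitting the exact classification-code format, which is documented in the files themselves), a discrete buffer-overflow test-case pair of this kind might consist of a flawed version and its patched counterpart:

    /* bad.c -- hypothetical flawed version (illustration only).
       In an actual MIT test case, the first line carries the
       classification code describing the test-case signature. */
    #include <string.h>

    int main(void)
    {
        char buf[10];
        /* FLAW: the copied string is longer than the 10-byte buffer,
           so strcpy writes past its end. */
        strcpy(buf, "This string does not fit in ten bytes");
        return 0;
    }

    /* good.c -- hypothetical patched version of the same test case. */
    #include <string.h>

    int main(void)
    {
        char buf[64];
        /* FIX: the buffer is large enough, the copy is bounded, and the
           result is explicitly terminated. */
        strncpy(buf, "This string does not fit in ten bytes", sizeof(buf) - 1);
        buf[sizeof(buf) - 1] = '\0';
        return 0;
    }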
Fortify Software Test Case Contribution
Fortify Software has contributed C code test cases, the majority of which are also buffer overflow vulnerabilities. A number of race condition, command injection, and other vulnerabilities are also included in the test suite. Like the MIT test cases, the Fortify test cases are "self-documenting", with keywords describing the type of software flaw present in the code. Additionally, to provide a uniform way of classifying the complexity of the test cases, the MIT classification code is placed at the top of each test file.
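For illustration, a command-injection flaw of the kind such a test case might exercise can be sketched as follows; this is a hypothetical example, not drawn from the Fortify contribution:

    /* Hypothetical command-injection example (illustration only). */
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char *argv[])
    {
        char cmd[256];
        if (argc < 2)
            return 1;
        /* FLAW: untrusted input is pasted into a shell command line, so an
           argument such as "file.txt; rm -rf ~" injects a second command. */
        snprintf(cmd, sizeof(cmd), "cat %s", argv[1]);
        system(cmd);
        return 0;
    }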
Klocwork Test Case Contribution
Klocwork Inc. has donated an initial contribution of C++ test cases, the majority of which are memory management related (e.g., memory leaks, bad frees, and use-after-free errors). They intend to follow up with an additional donation of Java test cases.
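As a sketch of the kind of memory-management flaw such cases exercise (shown here in C for consistency with the earlier examples; the Klocwork test cases themselves are C++, and this is not one of them), a use-after-free might look like:

    /* Hypothetical use-after-free example (illustration only). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char *p = malloc(16);
        if (p == NULL)
            return 1;
        strcpy(p, "hello");
        free(p);
        /* FLAW: p is dereferenced after it has been freed. */
        printf("%s\n", p);
        return 0;
    }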
Target Practice Test Suite - [Download the files (zip)]
A subset of both the MIT (152 discrete test cases and 3 models) and Fortify (12) test cases makes up the "target practice" test suite. A representative group of well-understood and documented tests is presented as a "starting point" to get initial feedback from tool developers and users on how useful the test suite is. Both a "bad" (flawed) and a "good" (patched) version exist for each test case.
Confidentiality of Test Results - At no time is a tool developer required to report anything about their tool's performance against the Target Practice test suite. The purpose of the target practice is to solicit feedback on the SRD, not on the tools that run against it. If a tool developer wishes to provide further insight into the usefulness of the SRD by disclosing how their tool performed against it, they may do so at their own discretion.
9:00 AM – 11:30 AM – Discussion of Test Results and Reference Dataset by target practice participants and workshop attendees:
One of the conclusions from the August ’05 workshop “Defining the State of the Art in Software Security Tools” was the need for a reference taxonomy of software flaws and vulnerabilities. To further this goal, the NIST SAMATE team developed a harmonization scenario extending ideas in the Tsipenyuk/Chess/McGraw paper Seven Pernicious Kingdoms: A Taxonomy of Software Security Errors.
This scenario was created from the following publicly available taxonomies:
The scenario is unavailable at this time. You can contact us for further information. Construction details may be found below. This working document was developed by the NIST SAMATE team from publicly available sources without consultation with any of the taxonomy authors. The goal is to stimulate discussion.
Please join the samate email group (samate[at]yahoogroups[dot]com) to comment; to subscribe, send a message to samate-subscribe[at]groups.yahoo.com with the subject "subscribe".
Notes on the Construction of the Harmonization Scenario:

Kingdoms                            | CLASP
Environment                         | <------ Environmental problems
Errors                              | <------ General logic errors
Security Features                   | <------ Protocol errors
Input Validation and Representation | <------ Range and type errors
Time and State errors               | <------ Synchronization and timing errors
Disclaimer: Any commercial product mentioned is for information only; it does not imply recommendation or endorsement by NIST nor does it imply that the products mentioned are necessarily the best available for the purpose.