This page credits the people, groups, companies, and other entities who contributed sets of test cases to the SARD, with a short description of each set. These test cases represent considerable intellectual effort: reducing reported vulnerabilities to examples, classifying them, generating elaborations of particular flaws, writing corresponding correct examples, and so on. We are grateful to everyone who has generously shared this work.
Contributors are listed in alphabetical order:
[CAS20] The National Security Agency's Center for Assured Software created the Juliet Test Suite for C#, version 1.3: almost 29,000 test cases in C# covering 105 CWEs. The C# test cases and supporting files can be downloaded from the Test Suites page.
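As a loose illustration of what such a test case looks like, here is a minimal sketch in the Juliet style: a flawed method paired with a corresponding correct one. The class name, CWE, and method bodies below are hypothetical examples written for this page, not code taken from the suite itself.

    // Hypothetical sketch in the Juliet style; not an actual case from the suite.
    // Each case pairs a flawed "bad" method with a corresponding correct one,
    // here for an integer overflow (CWE-190).
    using System;

    class CWE190_Integer_Overflow_Sketch
    {
        // Flawed: in C#'s default unchecked context, the addition wraps
        // around silently when data == int.MaxValue.
        public static int Bad(int data)
        {
            return data + 1;
        }

        // Correct: the operand is checked before the addition is performed.
        public static int Good(int data)
        {
            if (data < int.MaxValue)
            {
                return data + 1;
            }
            Console.WriteLine("data too large to increment");
            return data;
        }

        static void Main()
        {
            Console.WriteLine(Bad(int.MaxValue));  // wraps to int.MinValue
            Console.WriteLine(Good(int.MaxValue)); // refuses and returns input
        }
    }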
The following publications either use SARD cases or comment directly on them. They are listed newest first.
Matteo Mauro, Regole di Programmazione per la Safety e Security: Analisi, Strumenti e Relazioni (Programming Rules for Safety and Security: Analysis, Tools and Relations), bachelor's thesis, Università degli Studi di Firenze, 2018, unpublished. Mauro ran several static analyzers checking MISRA rules on some Juliet test cases, with the goal of studying which MISRA rules can also be helpful for security.
Gabriel Díaz and Juan Ramón Bermejo, Static analysis of source code security: Assessment of tools against SAMATE tests, Information and Software Technology, 55(8):1462–1476, August 2013, DOI: 10.1016/j.infsof.2013.02.005. "The study compares the performance of nine tools (CBMC, K8-Insight, PC-lint, Prevent, Satabs, SCA, Goanna, Cx-enterprise, Codesonar) ... against SAMATE Reference Dataset test suites 45 and 46 for C language."
Anne Rawland Gabriel, NIST Tool Boosts Software Security, FedTech, 8 February 2013. "Using the SARD test suites for internal testing and evaluation allows our researchers to gain insight into how their technology fares against a wide range of vulnerabilities ..."
Robert Auger, NIST publishes 50kish vulnerable code samples in Java/C/C++, is officially krad, cgisecurity.com blog, 31 March 2011.
He calls the Juliet test suite "a fantastic project" and says, "If you're new to software security and wish to learn what vulnerabilities in code look like, this is a great central repository ..."
Cristina Cifuentes, Christian Hoermann, Nathan Keynes, Lian Li, Simon Long, Erica Mealy, Michael Mounteney, and Bernhard Scholz, BegBunch: benchmarking for C bug detection tools, Proc. 2nd International Workshop on Defects in Large Software Systems; held in conjunction with Int'l Symposium on Software Testing and Analysis (ISSTA 2009), Chicago, Illinois, July 2009.
The paper describes BegBunch and compares it with the SARD and other collections.
Henny Sipma, SAMATE Case Analysis Report, Kestrel Technology, April 2008.
The description is "An application of CodeHawk to a NIST benchmark suite." The first page reads "CodeHawk Buffer-overflow Analysis Report: Benchmarks 115-1278". CodeHawk found a previously unrecognized underflow vulnerability in case 834.
John Anton, Eric Bush, Allen Goldberg, Klaus Havelund, Doug Smith, and Arnaud Venet, Towards the Industrial Scale Development of Custom Static Analyzers, Kestrel Technology, 2006.
"The SAMATE database will provide the basis for studying the specification language." Specifically uses cases 1314 and 54.
Redge Bartholomew, Evaluation of Static Source Code Analyzers for Safety-Critical Software Development, 1st International Workshop on Aerospace Software Engineering (AeroSE 07), 21-22 May 2007.
Robert C. Seacord and Jason A. Rafail, Secure Coding Standards, Cyber Security and Information Intelligence Research Workshop (CSIIRW 2007), 14-15 May 2007.