Software Assurance Metrics And Tool Evaluation (SAMATE) is a broad, inclusive project at the U.S. National Institute of Standards and Technology (NIST) whose goal is to improve software assurance by developing materials, specifications, and methods to test tools and techniques and to measure their effectiveness. We review several SAMATE sub-projects: web application security scanners, a malware research protocol, electronic voting systems, the SAMATE Reference Dataset (a public repository of thousands of example programs with known weaknesses for evaluating tools), and the Static Analysis Tool Exposition (SATE). Along the way we list over two dozen open research questions, which are also opportunities for collaboration. Software metrics are incomplete without metrics for what are variously called bugs, flaws, or faults. We detail numerous critical research problems related to such metrics. For instance, is a warning from a source code scanner a real bug, a false positive, or something else? If a numeric overflow leads to a buffer overflow, which in turn leads to command injection, what is the error? How many bugs are there if two sources reach two sinks: 1, 2, or 4? Where is a missing feature? We conclude with a list of concepts that may be a useful basis for bug metrics.
Proceedings Title: Eleventh IEEE International Working Conference on Source Code Analysis and Manipulation (SCAM)
Conference Dates: September 25-26, 2011
Conference Location: Williamsburg, VA
Pub Type: Conferences
Keywords: software engineering, software tools, software metrics, software assurance, SAMATE, source code