Author(s): Paul E. Black
Title: Counting Bugs is Harder Than You Think
Published: October 20, 2011
Abstract: Software Assurance Metrics And Tool Evaluation (SAMATE) is a broad, inclusive project at the U.S. National Institute of Standards and Technology (NIST) with the goal of improving software assurance by developing materials, specifications, and methods to test tools and techniques and measure their effectiveness. We review some SAMATE sub-projects: web application security scanners, a malware research protocol, electronic voting systems, the SAMATE Reference Dataset (a public repository of thousands of example programs with known weaknesses for evaluating tools), and the Static Analysis Tool Exposition (SATE). Along the way we list over two dozen possible research questions, which are also collaboration opportunities. The field of software metrics is incomplete without metrics for what is variously called bugs, flaws, or faults. We detail numerous critical research problems related to such metrics. For instance, is a warning from a source code scanner a real bug, a false positive, or something else? If a numeric overflow leads to a buffer overflow, which in turn leads to command injection, what is the error? How many bugs are there if two sources reach two sinks: 1, 2, or 4? Where is a missing feature? We conclude with a list of concepts that may be a useful basis for bug metrics.
Proceedings: Eleventh IEEE International Working Conference on Source Code Analysis and Manipulation (SCAM)
Dates: September 25-26, 2011
Keywords: software engineering, software tools, software metrics, software assurance, SAMATE, source code
PDF version: available (114 KB)
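The abstract's source/sink counting question can be made concrete with a small sketch. This is a hypothetical illustration, not code from the paper: the source and sink names are invented, and every source is assumed to reach every sink through shared code. Depending on whether a tool counts findings by tainted source, by dangerous sink, or by distinct source-to-sink flow, the same program yields different bug counts.

```python
from itertools import product

# Hypothetical taint flows: each tainted source reaches each dangerous sink.
sources = ["read_from_network", "read_from_file"]  # tainted inputs (invented names)
sinks = ["exec_command", "build_sql_query"]        # dangerous operations (invented names)

# One entry per distinct source-to-sink flow.
paths = list(product(sources, sinks))

# Three defensible ways to count "bugs" in the same program:
by_source = len(sources)  # fix sanitization at each input
by_sink = len(sinks)      # guard each dangerous call
by_path = len(paths)      # report every distinct flow

print(by_source, by_sink, by_path)  # 2 2 4
```

All three counts are internally consistent, which is exactly why comparing warning totals across static analysis tools is so hard: the tools may simply be counting different things.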