November 8 & 9, 2007
at SIGAda 2007
Hyatt Fair Lakes Hotel
Fairfax, Virginia, USA
"Black-box" software testing cannot realistically find maliciously implanted Trojan horses or subtle errors that have many preconditions. For maximum reliability and assurance, static analysis must be applied to all levels of software artifacts, from models to source code to byte code to binaries. Static analyzers are quite capable and are developing quickly, yet developers, auditors, and examiners could use far more capability. As noted in the CFP, the goal of this summit is to convene researchers, developers, and government and industrial users to define the obstacles to such urgently needed capabilities and to identify feasible approaches, whether engineering (applying "solved" problems) or research, to overcome them.
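As a concrete illustration (not drawn from any summit paper), a source-level checker can flag a dangerous construct even when it is reachable only under rare preconditions that black-box testing is unlikely to exercise. The sketch below uses Python's standard ast module; the handle function and the check itself are hypothetical examples:

```python
import ast

# Hypothetical code under review: the eval() call executes only when
# debug is set AND the command starts with "!", so black-box testing
# would likely never reach it.
SOURCE = """
def handle(cmd, debug=False):
    if debug and cmd.startswith("!"):
        return eval(cmd[1:])   # reachable only under rare preconditions
    return cmd
"""

def find_eval_calls(source):
    """Return line numbers of direct eval() calls, found by walking the AST
    without ever executing the code under analysis."""
    tree = ast.parse(source)
    return [node.lineno
            for node in ast.walk(tree)
            if isinstance(node, ast.Call)
            and isinstance(node.func, ast.Name)
            and node.func.id == "eval"]

print(find_eval_calls(SOURCE))  # → [4]
```

Real analyzers go far beyond such pattern matching (data flow, abstract interpretation, and so on), but the principle is the same: the defect is found by examining the artifact, not by exercising it.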
This follows the first Static Analysis Summit in June 2006. The next workshop will be co-located with PLDI in 2008.
We solicit contributions of papers or proposals for discussion sessions. Contributions should describe basic research, applications, experience, or proposals relevant to static analysis tools, techniques, and their evaluation. Questions and topics of interest include but are not limited to:
Papers should be one to eight pages long; papers over eight pages will not be reviewed. Papers should clearly identify their novel contributions.
Discussion session proposals should give a session title and name a moderator and at least two other participants. The proposal should clearly identify the topic or question for discussion.
Submit papers and proposals electronically in PDF or ASCII text by 3 September 2007 to Wendy Havens <wendy.havens [at] nist.gov>. (We will need ACM copyright forms.)
You need not have an accepted paper or discussion proposal to attend. We invite those who develop, use, purchase, or review software security evaluation tools. Academics working on semi- or fully automated tools to review or assess the security properties of software are especially welcome. We encourage participation from researchers, students, developers, and users in industry, government, and universities.
You must register for at least one day of SIGAda 2007; there is no additional charge to attend the summit. One-day registration costs only $25 for full-time students.
12:30 PM: Program Presentation and Charge to Attendees - Paul E. Black
12:45 PM: Static Analysis for Improving Secure Software Development at Motorola - R Krishnan, Margaret Nadworny, and Nishil Bharill
1:10 PM: Discussion: most urgently-needed capabilities in static analysis
1:40 PM: Evaluation of Static Source Code Analyzers for Real-Time Embedded Software Development - Redge Bartholomew
2:05 PM: Discussion: greatest obstacles in static analysis
2:35 PM: Break
2:50 PM: Common Weakness Enumeration (CWE) Status Update - Robert Martin and Sean Barnum
3:15 PM: Discussion: possible approaches to overcome obstacles
3:45 PM: Panel: Obfuscation vs. Analysis - Who Will Win?
4:30 PM: New Technology Demonstration Fair
8:30 AM: Discussion: Static Analysis at Other Levels
9:00 AM: Keynote: Bill Pugh
10:00 AM: Break
10:15 AM: A Practical Approach to Formal Software Verification by Static Analysis - Arnaud Venet (to be given by Henny Sipma)
10:40 AM: Discussion: inter-tool information sharing
11:10 AM: Logical Foundation for Static Analysis: Application to Binary Static Analysis for Security - Hassen Saidi
11:35 AM: Wrap-up discussion: needs, obstacles, and approaches
Accepted papers and discussion notes will be published in Ada Letters.
23 August 2007 - on-line registration opens
3 September 2007 - Paper submission deadline
3 October 2007 - Author notification
22 October 2007 - Final publication-ready copy due
1 November 2007 - Last date to register for Early Conference Rate
8 & 9 November 2007 - Summit
Paul E. Black, NIST, paul.black [at] nist.gov
Judging the value of static analysis
Keynote address at SASII, 8 & 9 November 2007
There is a great deal of work on developing new static analysis techniques that can find errors in software. However, efforts to judge or measure the effectiveness of static analysis at improving software security or quality are much less mature. Developers and managers want to know whether static analysis techniques are worth the investment in time and money, or whether those resources could be better spent elsewhere. Government agencies are interested in developing benchmarks or checklists that would let them determine which static analysis tools are approved, so that they can restrict government procurement to approved tools and to software checked with approved tools. Static analysis tool vendors are exceptionally secretive about the capabilities of their tools.
Unfortunately, there are no easy solutions. Getting from static analysis results to improved security and quality is a complicated process that is not well understood or documented. In other areas, such as parallel programming, benchmarks have proven corrosive to good science, focusing immense resources on narrow problems that did not address the real problems of the field. I believe that developing standard benchmarks for static analysis would have much the same effect.
I will talk about the problems associated with evaluating static analysis, and some ways the field might improve the situation.