in conjunction with SERE (SSIRI) 2012
20 June 2012*
National Institute of Standards and Technology
Gaithersburg, Maryland, USA
The goal of this workshop is to bring together researchers and practitioners to (1) understand the state of the art and state of practice in software testing, (2) map work needed for improved methods and tools for software testing, and (3) list any important problems needing to be solved.
Software has been tested for decades, yet there is still so much we desire of testing but cannot get: creating a small set of tests to expose bugs or build assurance, understanding how much confidence is justified by certain coverage, and combining the strengths of static analysis with testing. Why is fuzzing (still) so effective? How does the nondeterminism of scheduling, memory use, parallelism, virtual machines, and cloud computing change testing? What is the measure of effectiveness of testing in light of a determined adversary? How can we supply information for decisions based on assurance, robustness, resilience, security, portability, maintainability, and so forth? Just as civil engineering uses building codes, what styles, constructs, or patterns can be used in software so the product can be analyzed and tested for assurance that it satisfies requirements for safety or confidentiality or scalability?
There are severe theoretical limits, such as the halting problem, on our ability to measure software properties. However, Heisenberg's uncertainty principle has not stopped the advancement of theoretical or applied atomic physics. NIST seeks to identify needed research and technology in measurement, metrics, and standards for software testing, based on sound principles of science and engineering.
Topics of interest include, but are not limited to, the following areas:
Submissions may be either position statements or full papers.
Position Statements: A position statement should be one paragraph to one page long. A "position" may articulate a problem or an issue to discuss, or present a solution or opinion. The program committee will review the position statements and invite some authors to present them. Position statements will be published if both the author and the program committee agree.
Papers: Papers should be 2 to 8 pages long; papers over 8 pages will not be reviewed. Papers should clearly identify their novel contributions. If your paper is accepted, we will invite you to present it at the workshop. Accepted papers will be published by IEEE.
We will notify submitters of acceptance by 19 March 2012.
Register through the SERE (SSIRI) web site.
The deadline for author registration is 20 April 2012.
The program will be a mix of discussion sessions and presentations based on position statements and papers.
Stuart Reid PhD FBCS CITP
Stuart Reid is Chief Technology Officer at Testing Solutions Group. He has 29 years' experience in the IT industry, working in development, testing, and education; application areas range from safety-critical to financial and media. Stuart also supports the worldwide testing community in a number of roles. He is convener of the ISO Software Testing Working Group, which is developing the new ISO 29119 Software Testing standard, and he is the software testing representative at BSI. He chairs the BCS Specialist Group in Software Testing and founded ISTQB to promote software testing qualifications on a global scale. Stuart chaired EuroSTAR 2007, Europe's largest ever software testing conference with over 1200 attendees, won the European Testing Excellence award in 2001, and regularly writes magazine articles on software testing.
0900 SERE keynote
1030 MaSST keynote: Standards for Testing?, Stuart Reid, Testing Solutions Group
1100 Discussion session: Validating Measures, led by Paul E. Black, NIST
1130 Paradigm in Verification of Access Control, Jeehyun Hwang, North Carolina State University
1300 SERE invited talk
1400 Why Fuzzing (Still) Works, Allen D. Householder, CERT
1430 Viewpoint based Test Architecture Design, Yasuharu Nishi, University of Electro-Communications
1530 Discussion session: Gaps in and Roadmap for Measures and Standards in Software Testing, led by Taz Daughtrey, DACS
1600 Software Testing of Business Applications, Vijay Sampath, Tata Consultancy Services
1630 Discussion session: Lessons Learned and Next Steps led by Paul E. Black, NIST
1700 SERE reception
Paul E. Black (National Institute of Standards and Technology)
paul.black [at] nist.gov
Elizabeth Fong (National Institute of Standards and Technology)
efong [at] nist.gov
Paul Ammann (George Mason University)
Taz Daughtrey (DACS)
Mary Ann Davidson (Oracle)
Helen Gill (NSF)
Mark Harman (University College London)
Cem Kaner (Florida Institute of Technology)
Satoshi Masuda (IBM Japan)
Thomas Ostrand (DIMACS-Rutgers University, visitor)
Alexander Pretschner (Technische Universitaet Muenchen)
Gregg Rothermel (University of Nebraska - Lincoln)
Laurie Williams (North Carolina State University)
* This workshop was originally scheduled for two days; it will now be one day only, 20 June.