Attendees: Alan Goldfine, Allan Eustis, Dan Schutzer, David Flater, John Wack, Lynne Rosenthal, Mat Masterson (EAC), Nelson Hastings, Paul Miller, Sharon Laskowski, Steve Berger
Discussion of QA/CM Research Paper: This paper has been posted on the web for a week. The material in this paper has not yet been included in the VVSG; it should go to the paper's coordinator within the next week.
Alan Goldfine discussed the plenary presentations. The CRT subcommittee plans to cover 5 topics: 1) electromagnetic compatibility, 2) quality assurance/configuration management requirements, 3) review of CRT changes, 4) discussion of benchmarks, and 5) general discussion.
For electromagnetic compatibility, the requirements are still in the process of being developed. These requirements have been divided into three parts: conducted disturbances (should be completed next week), radiated disturbances (draft set of requirements by March 30), and telecommunications (incomplete; 1st draft by March 30). (Steve Berger forwarded these to IEEE for comments, which are forthcoming.)
The QA/CM requirements raise an issue. The requirements are being developed as the result of a TGDC resolution directing a review of QA/CM. At the December 2006 meeting, it was decided that the ISO 9000/9001 standards would be used for quality assurance. This is just the beginning of the process; details are needed to state what conformance means. The vendor requirements are more specific: required documentation, and data delivered to the test labs and the EAC. The draft requirements state two alternatives, which were discussed in great detail. The first alternative requires that vendors provide their quality assurance procedures (the quality manual) early in the process, before design/development of the system begins. The other alternative requires that the quality assurance procedures be provided, but does not specify when. The problem identified: if a system's quality assurance procedures are not approved once design/development is well under way, the system may not be approved and there would be wasted time and effort. Opinions were expressed that this did not seem to be a problem; the real question is that, if the system passes certification, the manufacturing process must produce systems of the same quality. This topic did not appear to be resolved. It will be brought up at the TGDC plenary meeting for discussion and possible consensus. It was pointed out that the EAC manual specifies no time frame for the registration process. Mat Masterson will bring this up to the EAC.
David Flater then discussed what he would be presenting at the plenary meeting. The first topic will be a review of the changes discussed at the last CRT subcommittee meeting. David will also present information regarding benchmarks. CRT has received feedback from NASED on the question of resetting the reliability and accuracy benchmarks in the next VVSG. Defensible numbers need to be used for the benchmarks. NASED's response on reliability was that no failures leading to unrecoverable votes are acceptable; in other cases, the tolerance for failure depends on how hard it is to recover from those failures; and there is no typical volume on which to base a benchmark. We currently do not have a particular volume on which to base a benchmark; however, if test labs are going to advise rejection of systems that perform unreliably during testing, there needs to be a benchmark for what constitutes an unacceptable rate of failure.
With respect to accuracy, we have a real requirement that does not map to a particular benchmark. The acceptable number of errors is one less than the vote margin between first and second place. The current benchmark is 1 error in 10 million ballot positions; this was set as a compromise based on the costs of testing. NASED acknowledged the need to review test methods and also expressed concern that 1 in 10M is probably not achievable for real ballots. Since we want volume testing to reflect realistic ballots, the benchmark should meet everyone's requirements but also be attainable. Should the benchmark be relaxed so that it is attainable? The process we followed did not produce the data needed to set defensible benchmark numbers, and there is not a lot of time left. At the TGDC meeting, we plan to ask our customers for input about what this number should be.
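The arithmetic behind the two quantities discussed above can be sketched as follows. This is a minimal illustration only; the function names and the sample figures are hypothetical and are not part of the draft requirements:

```python
# Illustrative sketch of the accuracy discussion above.
# Assumption: "acceptable errors" means errors that could not change the
# outcome, i.e. one less than the first-to-second-place margin.
BENCHMARK_ERROR_RATE = 1 / 10_000_000  # current benchmark: 1 in 10M ballot positions

def max_acceptable_errors(first_place_votes: int, second_place_votes: int) -> int:
    """One less than the vote margin between first and second place."""
    margin = first_place_votes - second_place_votes
    return max(margin - 1, 0)

def expected_errors(ballot_positions: int) -> float:
    """Expected error count if a system performs exactly at the benchmark rate."""
    return ballot_positions * BENCHMARK_ERROR_RATE

# Hypothetical example: a contest decided by 120 votes,
# tabulated from 5 million ballot positions.
print(max_acceptable_errors(10_120, 10_000))  # 119
print(expected_errors(5_000_000))             # 0.5
```

The sketch shows why the requirement and the benchmark do not map onto each other: the tolerable error count depends on a contest's margin, while the 1-in-10M rate is fixed regardless of margin or volume.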
Paul Miller informed the group that NASED had concerns about putting an acceptable rate of failure into writing. There was also a concern about the definition of terms: we have to look at what constitutes a "failure". If poll workers can't set up systems, that's a failure. If a printer jams, that's a failure. However, there are manageable failures, and those should be identified. David stated that failures were defined as equipment breakdown, including software, such that continued service is worrisome or impossible. Systems cannot be completely tested in the voting environment, as we do not know how many types of machines a precinct will have, or how many will serve as backups, if any. Failures fall anywhere in a large spectrum. If we ask test labs to try to evaluate the severity of failures, we will get ambiguous results.
Any comments/questions should be sent via email.
The meeting adjourned at 12:10 p.m.
Pursuant to the Help America Vote Act of 2002, the TGDC is charged with directing NIST in performing voting systems research so that the TGDC can fulfill its role of recommending technical standards for voting equipment to the EAC. This teleconference discussion served the purposes of the CRT subcommittee of the TGDC to direct NIST staff and coordinate its voting-related research relevant to the VVSG 2007. Discussions on this telecon are preliminary and pre-decisional and do not necessarily reflect the views of NIST or the TGDC.