CRT Teleconference
Thursday, April 5, 2007
11:00 a.m. EDT
Meeting Minutes

Draft Agenda:

1) Administrative updates (Eustis)
2) Reliability and Accuracy Benchmarks (Flater)
3) Ongoing Maintenance of Draft (Flater)
4) Any other items.

Attendees: Allan Eustis, Brit Williams, David Flater, John Crickenberger, Mat Masterson (EAC), Nelson Hastings, Paul Miller, Sharon Laskowski

Administrative Updates:

  • The next TGDC plenary meeting is scheduled for May 21-22, 2007, here at NIST. The agenda will focus on review of the near-final VVSG draft.
  • All material from the March meeting has been posted, including the two resolutions that were passed.
  • The April-May telecon dates have been posted on the web; any member of the TGDC is invited to attend any of the telecons.

Reliability and Accuracy Benchmarks (David Flater):

The current state of the benchmarks research paper by Paul Miller and David Flater was forwarded to the subcommittee. The paper was prepared in response to the direction of the TGDC at the last plenary meeting: namely, to gather what data and guesstimates we could about the different volumes of usage of different devices, so as to arrive at benchmarks for reliability and accuracy that are at least of the correct order of magnitude. NASED feedback indicates that no single number is going to be correct for the usage in every possible jurisdiction. Part of the direction taken was that different severities would be assigned to different types of failures, in a manner not unlike what was done in the 1990 voting system standards.

[NOTE: David Flater asked why the scoring standard in the 1990 standard, which described what a relevant failure is and assigned different weights depending on the kind of failure, was taken out of the 2002 revision. Brit Williams said there was no major, show-stopping reason it was deleted. He noted that the Appendices were not part of the 1990 VSS.]

Paul Miller is going to talk to other members of NASED to confirm the numbers and approach. Assuming that process comes out favorably, the draft will be changed to expand the single benchmark into a collection of benchmarks tailored to individual types of equipment.
The paper contains definitions of several different types of voting equipment. These definitions are compatible with those in the draft; the paper's are written in plain language, while the draft's are more formal.

After that, there is an attempt to answer what constitutes a typical volume for these different types of equipment, understanding that each jurisdiction will have a different sense of what "typical" means. There is a breakdown of how many errors/failures a precinct can tolerate on Election Day. Failures are classified differently depending on whether the personnel present can easily remedy the situation or specialized assistance must be called in.
There is a system-level benchmark for accuracy. Apart from the general consensus that we do not want any vote to become unrecoverable, there was a feeling that the benchmark in the current standard was fine; we just need to understand how it will be applied. The two benchmarks in the '05 standard were done using a sequential probability ratio test, which involves selecting a lower benchmark for which conformity could be demonstrated at 95% confidence. The new test is different, using a fixed-length test plan. [The difference between the tests was explained later by David Flater.] We need a single benchmark to say what the requirement is, and if the equipment does not meet this requirement, we cannot recommend it for certification. [Dave's December plenary presentation has a number of slides showing the difference in testing methods.]
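
[Illustrative note, not part of the call discussion: the sketch below (Python) contrasts the two approaches mentioned above. The benchmark error rate p0, the alternative rate p1, the 95% confidence levels, and the observation count are hypothetical values chosen only to show how a fixed-length test plan differs from a sequential probability ratio test, not values discussed on the call.]

    # Minimal sketch only -- p0, p1, alpha, beta, and the observation count
    # are assumed for illustration.
    import math

    p0 = 1e-6      # hypothetical benchmark error rate (acceptable)
    p1 = 1e-5      # hypothetical unacceptable error rate for the sequential test
    alpha = 0.05   # risk of rejecting a conforming system (95% confidence)
    beta = 0.05    # risk of accepting a non-conforming system

    # Fixed-length test plan: the number of error-free observations needed to
    # demonstrate, at (1 - alpha) confidence, an error rate no worse than p0.
    n_fixed = math.ceil(math.log(alpha) / math.log(1.0 - p0))
    print(f"fixed-length plan: {n_fixed} error-free observations required")

    # Sequential probability ratio test (Wald): after n observations with k
    # errors, compare the log-likelihood ratio against two boundaries and stop
    # as soon as either one is crossed; otherwise keep testing.
    def sprt_decision(n, k):
        llr = (k * math.log(p1 / p0)
               + (n - k) * math.log((1.0 - p1) / (1.0 - p0)))
        accept_bound = math.log(beta / (1.0 - alpha))   # conclude rate <= p0
        reject_bound = math.log((1.0 - beta) / alpha)   # conclude rate >= p1
        if llr <= accept_bound:
            return "accept"
        if llr >= reject_bound:
            return "reject"
        return "continue"

    print(sprt_decision(n=3_500_000, k=0))

[Under the fixed-length plan the amount of testing is decided in advance; under the sequential test the test stops as soon as a boundary is crossed, so a clearly conforming or clearly non-conforming system finishes sooner.]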

Next Step: Paul Miller will confirm or modify numbers based on discussion with NASED members. David will get material prepped for inclusion into draft VVSG.

Ongoing Maintenance of Draft (David Flater):

David is still catching up with the changes discussed at the plenary meeting, including:

  • Clarify testing terminology ("test protocol" is out)
  • Change all procedural requirements to informative text explaining the assumptions made by product requirements
  • Purge use of the phrase "may not," which is open to different interpretations
  • A whole series of changes in response to comments from David Wagner
  • Definition change for EBMs
  • Define what a logic defect is
  • Clarify that logic defects found during conformity assessment are not field serviceable
  • Clarify use of assertion and constraint
  • Clarify vendor versus lab responsibility in logic verification
  • Discuss phenomena such as numeric overflows and invocation of undefined behavior in both the coding conventions and logic verification sections
  • Positional changes still need to be done
  • Clarify reporting of irrelevant contests
  • Word-smithing in conformance clause
  • Paper durability - start with JCP (Joint Committee on Printing) standards
  • Add new device classes - audit devices, activation devices
  • Add system-level classes - one for independent dual verification (for handling the innovation class) and one for handling election verification, which maps to end-to-end cryptographic systems

Meeting adjourned at 11:45 a.m.

[* Pursuant to the Help America Vote Act of 2002, the TGDC is charged with directing NIST in performing voting systems research so that the TGDC can fulfill its role of recommending technical standards for voting equipment to the EAC. This teleconference discussion served the purposes of the CRT subcommittee of the TGDC to direct NIST staff and coordinate its voting-related research relevant to the VVSG 2007. Discussions on this telecon are preliminary, pre-decisional and do not necessarily reflect the views of NIST or the TGDC.]

 
