CRT Teleconference*
January 11, 2007

Agenda:

1) Administrative updates (Allan E.)
2) Volume testing, reliability and accuracy testing discussion (David F.)
3) Election Management Systems discussion (David F.)
4) Data collection for reliability and accuracy benchmarks (David F.)
5) Any other items.

Participants: Alan Goldfine, Allan Eustis, Brit Williams, Dan Schutzer, David Flater, Max Etschmaier, Nelson Hastings, Paul Miller, Sharon Laskowski, Stephen Berger, Philip Pearce, Wendy Havens

Administrative Updates (Allan Eustis):

  • A new disclaimer will be read at the beginning of every telecon. Meetings are formally announced in the Federal Register. What is said at these meetings is public, and NIST welcomes any and all new listeners.
  • New articles, documents, and public comments have been placed on the TGDC website.
  • Mark Skall and Allan attended Donetta Davidson's installation as Chair of the EAC. The outgoing chair expects approval of new EAC members in January.
  • Transcripts from the TGDC December 4 and 5, 2006, meeting are on the web page.
  • Any combined TGDC subcommittee meetings will be worked into the current schedule of meetings. New meetings require a couple of weeks' notice to schedule.

Volume testing, reliability, and accuracy testing discussion paper (David Flater):

This write-up is a realization of items discussed at the TGDC meeting. The first major addition is a set of requirements to conduct volume testing similar to the California volume testing reliability protocol. Specific parameters were modeled after that protocol, with one exception with respect to central tabulators. The other part deals with test methods for assessing conformance to the benchmarks for reliability, accuracy, and probability of misfeed for paper-based tabulators. Following that are the specific requirements for reliability, accuracy, and probability of misfeed.
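
As a rough illustration of the kind of test method described here (not the actual procedure in the draft), the sketch below checks hypothetical volume-test counts against an assumed misfeed-rate benchmark using an exact binomial tail probability; the benchmark rate, ballot count, and misfeed count are invented for this example.

    # Hedged sketch only: checking volume-test results against a misfeed-rate
    # benchmark with an exact binomial tail probability. All numbers are invented.
    from math import comb

    def tail_probability(k: int, n: int, p: float) -> float:
        """P(X >= k) for X ~ Binomial(n, p): the chance of seeing k or more
        misfeeds in n ballots if the device were exactly at the benchmark rate p."""
        return 1.0 - sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k))

    benchmark_rate = 1 / 5000   # assumed benchmark: at most 1 misfeed per 5,000 ballots
    ballots_fed = 10_000        # hypothetical volume-test size
    misfeeds_seen = 5           # hypothetical observed misfeeds

    p_value = tail_probability(misfeeds_seen, ballots_fed, benchmark_rate)
    print(f"P(>= {misfeeds_seen} misfeeds at the benchmark rate) = {p_value:.4f}")
    # A small probability suggests the true misfeed rate exceeds the benchmark;
    # a large one is consistent with conformance.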

Discussion regarding this paper among subcommittee members followed. After discussion, Allan asked Steve Berger to summarize his concerns so that David Flater would know which direction to proceed.

Steve: Looking at the testing resources required by the draft (Table 7 below), we should look at it two ways. First, determine what resources are being assumed and explicitly state that the current draft would require "x" number of man-hours of testing. Then, if only the minimum level of testing is performed, what confidence can be stated? If there is not enough testing to make a credible statement, we need to look at why we are doing the test in the first place. His second point: do the tests, as currently prescribed, isolate the problems being observed in the field? For example, the calibration issue. Do the tests we currently have screen out future systems that may require too-frequent calibration? Do we understand the causal factor? If we don't, then we need to do research.

[Table 7 from David Flater's report on volume testing, reliability, and accuracy testing]
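
To make the confidence question concrete, here is a back-of-the-envelope sketch (an assumption-laden illustration, not material from the draft or the meeting) of the one-sided upper bound on the per-ballot error rate that could be claimed if a volume test of n ballots finished with zero observed errors; at 95% confidence this is roughly 3/n, the so-called rule of three.

    # Illustration only: what error-rate claim does a given amount of error-free
    # volume testing support? The ballot counts below are arbitrary examples.
    def upper_bound_zero_errors(n_ballots: int, confidence: float = 0.95) -> float:
        """Exact one-sided upper confidence bound on the per-ballot error rate
        when 0 errors are observed in n_ballots (about 3/n at 95% confidence)."""
        return 1.0 - (1.0 - confidence) ** (1.0 / n_ballots)

    for n in (10_000, 100_000, 1_000_000):
        bound = upper_bound_zero_errors(n)
        print(f"{n:>9,} ballots, 0 errors -> error rate below {bound:.2e} at 95% confidence")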

David Flater provided a high-level preliminary answer. We have not specified a particular test suite; we have specified a test method. The specific resources required and the results at the end of conformity assessment depend in large part on the test suite used, which is currently developed by the test labs and approved by the EAC. With respect to the isolation of specific issues, those issues are targeted by a series of environmental tests, from both operating and non-operating points of view. We can always do research on causal issues.

Allan Eustis pointed out that some of these issues would be examined at an upcoming EAC workshop on the cost of testing. Also, relevant policy-related issues need to be referred to the EAC.

Steve Berger stated that the fundamental question had to be asked: would the problems seen in the field today be prevented by VVSG 2005 (as published), and if not, have we rectified those concerns so that such issues would be effectively identified and handled under VVSG 2007?

We can use this methodology on a number of field issues. We should take a systems view. (Max's next paper on Quality Assurance will address some of these issues and will be released in the next week or so. It will be a discussion item at an upcoming CRT meeting.)

Election Management Systems (EMS) Discussion Paper (David Flater):

There is confusion over the definition of the Election Management System (EMS) as given in the 2002 VSS. The current definition covers both pre- and post-voting functions. The question posed is whether the definition should be revised to make a distinction between pre- and post-voting requirements. (Brit Williams pointed out that originally (VSS 1990) the definition referred to firmware and software, and that the pre- and post-election functions were added later. Some vendors group both functions into one software package; others separate them by function. The EMS is run on standard desktop/laptop computers.) Consensus at the meeting was that the current definition was on track and that follow-up concurrence would be sought from the ITAA.

Data Collection for Reliability and Accuracy Benchmarks (David Flater):

In order to set benchmarks for reliability and accuracy, we need to know what the customer wants to see in terms of reliability and accuracy. Since we have controversy over testing, the benchmarks are on hold. The way we do this data collection is currently being reviewed by the legal office.

The question arose about availability: how is availability defined? One view is that if you turn on a machine and it works, it is available (a more conventional definition is sketched below). Concerns were expressed over the fact that logic and accuracy testing was not performed at the precincts before use. It was also noted that some problems thought to be calibration issues turned out to be usability problems. Some of these issues are being looked at by the HFP subcommittee.
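
For comparison with the informal "turn it on and it works" view, one conventional reliability-engineering definition of steady-state availability is MTBF / (MTBF + MTTR). The sketch below uses invented MTBF and MTTR figures purely as an illustration; they are not data from the teleconference.

    # Illustration only: a conventional steady-state availability calculation.
    # The MTBF and MTTR values are invented, not figures from the discussion.
    def availability(mtbf_hours: float, mttr_hours: float) -> float:
        """Fraction of time a unit is operable: A = MTBF / (MTBF + MTTR)."""
        return mtbf_hours / (mtbf_hours + mttr_hours)

    print(f"Availability: {availability(mtbf_hours=500.0, mttr_hours=2.0):.4f}")  # about 0.996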

Concerns expressed by CRT participants:

  • Does a system go out of calibration during transport? Are the environmental tests being performed sufficient to determine this?
  • What about older-technology problems, such as memory packs that are old or that use old technology?
  • Separate component issues were raised. We need to evaluate the components of the system as well as the system as a whole, and manufacturers of components need to be contacted about expectations for individual components.
  • We need to know whether touch screen systems are being deployed in a manner that goes against manufacturers' advice, or are being subjected to expectations beyond what manufacturers say are reasonable. We then need to look at the process for detecting when this is happening.
  • We also need to make sure that vendors are in agreement with component manufacturers about what to expect; vendors do not want their products used in unacceptable operating conditions.

Next teleconference is scheduled for February 1, 2007, at 11:00 a.m. EST.

[* Pursuant to the Help America Vote Act of 2002, the TGDC is charged with directing NIST in performing voting systems research so that the TGDC can fulfill its role of recommending technical standards for voting equipment to the EAC. This teleconference discussion served the purposes of the CRT subcommittee of the TGDC in directing NIST staff and coordinating its voting-related research relevant to the VVSG 2007. Discussions on this telecon are preliminary and do not necessarily reflect the views of NIST or the TGDC.]

 
