TGDC Subcommittee Work - Historical Meetings - CRT Meetings - 2006

CRT Teleconference
November 30, 2006, 11:00 a.m. EST

Agenda

1) Administrative updates (Allan E.)

2) Proposed agenda for the December 4-5 TGDC plenary (http://vote.nist.gov/TGDCagenda120406.htm) (Allan E. and John W.)

3) Overview of CRT plenary presentations (Alan G. and David F.)

4) Other Items

Attendees: Allan Eustis, Stephen Berger, Sharon Laskowski, Nelson Hastings, David Flater, Alan Goldfine, Britt Williams, Max Etschmaier, Philip Pearce

The meeting convened at 11:03 a.m. EST.

1) Administrative updates (Allan E.)

AE reminded participants of the Sunday night informal reception at the Gaithersburg Hilton, beginning at 7 pm in the Rockville Room.

2) Proposed agenda for the December 4-5 TGDC plenary

The Agenda and meeting materials were sent out on a CD to all members. All updated meeting materials are also available on the web at: http://vote.nist.gov/TGDC/TGDCpresentations120406.htm.


3) Overview of CRT plenary presentations

AG and DF reviewed their presentation slides for the upcoming plenary. See:
http://vote.nist.gov/TGDC/TGDCpresentations120406.htm.

AG noted ME's papers and the discussion of MTBF and alternative accuracy metrics, as well as the need to implement a quality control program. AG noted that ISO 9000/9001 is an accepted industry quality standard.

SB raised concerns about advocating ISO 9000/9001, especially implementation dates and whether a vendor can be held to what he says he will do; there are potential horror stories here. AG will note the possibility of this scenario in his talk. The process does need supplementing. SB noted that ISO 9000 adds little value without specifying what should be in the quality program.

Participants discussed the costs of implementing an ISO 9000 certification framework. Is this a NIST issue or a policy issue for the EAC?

ME agreed that, along with ISO 9000, vendors also need a quality handbook.

SB brought up the linkage between reliability and usability. Is there mistake tolerance convergence? Do we need to handle this?

DF noted a source of confusion between reliability (which applies to the equipment) and usability (which applies to the operator, not the equipment).

SB wants to make sure the requirements cover potential human error and the transposition of races. ME noted the value of functional failure analysis here. DF noted that this is also an issue of system integrity. There are requirements in VVSG 2007 to deal with system tolerance.

DF noted that you want to design the system to achieve reliability. Discussion ensued on the conflict between reliability benchmarks in theory and in practice.

DF noted the inclusion of voting system variations and the arrival at conformity assessment. There was a discussion of open-ended testing beyond conformity assessment.

BW initiated discussion of testing of optional features by ITAs. Historically this is not called out in the standards. DF noted that optional features are outside of conformity assessment to the standards.

BW noted that you do not want to have states repeating testing of VVSG done at the Federal level. Discussion continued on "fitness for use" evaluations. Can or should a voting system contain additional features?

DF noted that you will be writing tests to the requirements. BW noted that voting systems should conform to vendor documentation. SB noted that you want the system resistant to unauthorized use.

BW illustrated issues here with VVPAT spools, which are not covered in VSS 2002. The VVPAT systems are tested to the vendor documentation. DF noted the issue of a vendor simply rewriting the documentation to conform. AG noted the issue should be raised with the EAC.


BW indicated the need to supplement the test reports with the concerns of the lab, within the standard requirements for the test reports. States require this within their reports. DF brought up litigation issues here. He gave an example of a vendor-specific feature that met general requirements but misbehaved later in a costly way to the end user. (Good intentions, but not conformity assessment.) There is a relevant discussion paper in the meeting written materials.

DF noted that the California Volume Test will be discussed at the meeting.

 

4) Other Items

Stephen Berger offered for review three resolutions (below) that he planned to introduce at the December TGDC plenary. AE proposed, and SB agreed, that all three resolutions would be forwarded to all TGDC members in advance of the plenary. A copy of the resolutions would also be sent to EAC Commissioner Davidson. No edits were made to the proposed resolutions during the telecon.

 

U.S. Election Assistance Commission
Technical Guidelines Development Committee

Resolution for consideration by the TGDC at their plenary meeting, December, 2006

Resolution #__-__, Offered by: Stephen Berger

For a variety of historical reasons, the voting systems used in the US have developed in response to the varied needs and desires of state and local election officials and others involved in the selection and use of voting systems. As a result, current systems may be characterized as:

i) Flexible,
ii) Adaptable,
iii) Individually modified for jurisdictions, and therefore non-standard, and
iv) Economical.

These characteristics have resulted in a mismatch between the systems and the use environment. Specific areas of contrast:

i) The flexibility and adaptability of the systems versus the level of technical support and training generally available to jurisdictions to properly configure the systems for use,
ii) Flexibility and adaptability also come into conflict with resistance to human error and mistakes,
iii) Flexibility and adaptability are in conflict with a well-designed security strategy and the security generally desired for elections,
iv) The modification of systems for individual states and jurisdictions blocks certain economies of scale, the development of various support tools for use by many jurisdictions, and the ability to easily build expertise among all users of a system, and
v) Budgetary constraints and pricing pressure have resulted in system quality levels that meet contractual requirements but fall short of general expectations for quality.

In order to assure that the 2007 revision is maximally helpful, the NIST staff are requested to study and prepare a report on how the proposed changes to the VVSG address these issues and specifically whether they:

i) Result in systems that are appropriate to the level of technical support and training generally available to jurisdictions.
ii) Will significantly reduce the number of human errors and mistakes.
iii) Will result in a system that implements a security strategy that addresses specified security threats. This requirement will require specific identification of the threat model being addressed.
iv) Will bring greater uniformity to voting systems and allow development of general use tools in support of election administration.
v) Will raise the system quality level at appropriate points but equally resist adding cost where there is not sufficient value added.


U.S. Election Assistance Commission
Technical Guidelines Development Committee

Resolution for consideration by the TGDC at their plenary meeting, December, 2006

Resolution #__-__, Offered by: Stephen Berger

NIST is requested to prepare a report surveying problems experienced in the 2004 and 2006 elections and analyzing these experiences for trends, causal factors and patterns of problems.

The report should then compare the changes made with the introduction of the 2002 and 2005 standards and the changes proposed for the 2007 standard.

This report should then answer the following questions:

i) In what ways was the introduction of the 2002 version of the VSS helpful, harmful, or irrelevant to addressing the issues identified? Were these impacts due to the standard itself, transitional factors related to its introduction, or other simultaneously introduced factors?
ii) In what ways would the 2005 version of the VVSG have been helpful, harmful or irrelevant to addressing the issues identified? What transitional issues can be foreseen for the introduction of the 2005 guidelines?
iii) In what ways would the 2007 version of the VVSG have been helpful, harmful or irrelevant to addressing the issues identified? What transitional issues can be foreseen for the introduction of the 2007 guidelines?


U.S. Election Assistance Commission
Technical Guidelines Development Committee

Resolution for consideration by the TGDC at their plenary meeting, December, 2006

Resolution #__-__, Offered by: Stephen Berger

NIST is requested to prepare a report analyzing the relevance and effectiveness of recent and currently proposed changes to the voting system certification process, specifically addressing the role and contribution of the VVSG, in addressing recent election problems. The report should analyze the effect of recent and proposed changes with the purpose of identifying the most effective means of bringing improvement to problems and concerns with current voting systems and election administration.

Meeting adjourned at 12:25 pm EST.

************

 

CRT Teleconference
November 16, 2006, 11:00 a.m. EST

Agenda:

1) Administrative updates (Allan E.)
2) Discussion of "Voting Machines: Quality and Configuration Management Requirements" (Max E.). This is Max's initial paper discussing proposed quality assurance and configuration management requirements for the VVSG. Please read: http://vote.nist.gov/TGDC/QualityConfigMgtReqs.doc
3) Continuation of discussion: What should CRT present at the December TGDC plenary? (David F., Alan G.)
4) Any other items.

Participants: Alan Goldfine, Allan Eustis, Brit Williams, David Flater, Donetta Davidson, John Wack, Max Etschmaier, Nelson Hastings, Paul Miller, Steve Berger, Wendy Havens, Dan Schutzer

Administrative Updates:

  • Allan: Providing to the committee on Monday (and also on the web page) drafts of papers to be presented, the draft agenda, and other TGDC meeting information. The focus of the December TGDC meeting is on critical things for VVSG 07, so we are presenting white papers for decisions. The following TGDC meeting will be in March.
  • John W: Everyone working hard preparing for TGDC meeting. If December meeting doesn't go well, it will be because group was overwhelmed with too much good material.

Voting Machines: Quality and Configuration Management Requirements - Max

The report has been available for review on the website for a couple of weeks.

Max gave a general overview of the report:

This is the third report that Max has written. The first one defined what a voting system is all about, what is required of the voting machine, and the concept of reliability. The second report builds on the first one and shows what reliability performance could and should be expected of a voting machine: it is reasonable to expect the voting machine not to have any critical failures, or at least a very small number (a low probability of failure). The centerpiece of the second report was a model of a generic voting machine and a functional analysis of it.

The current (third) report builds on the earlier two and develops requirements for quality and configuration management. It defines quality and configuration management and examines what was written in VVSG 05. It develops a framework where regulator and vendor are separated. There are both a product and a quality process. The system develops product quality through process quality. The standard is outlined in ISO 9000. It defines the terms and the general principles. You want a measure of rigor to ensure quality and avoid pitfalls of workarounds.

Further details regarding the report were given, followed by a discussion period.

Donetta: When talking about all 3 presentations, when we have the December meeting, how much of this are you going to explain to the others? Answer: Decision forthcoming. It would be beneficial to spend time educating the TGDC on a number of these issues. They should be stated simply - we may end up sacrificing issues in the interest of making them clear. Members should also realize the VVSG 07 must be written clearly, that we do have a firm deadline, and that we need to put as much information as possible in the guidelines.

Some of this proposal looked like it would result in new hardware; if that is the case it needs to be made clear to the members. It would be nice to have a cost estimate for Max's proposal. New equipment also affects the timeframe when everything becomes effective. When considering the deadline for implementation, take into consideration design, building, and testing of any new hardware. The two year deadline may not be enough.

Max restated the purpose of his three papers. Following an outline prescribed by CRT, he was to develop new quality assurance language for the VVSG 07. These three documents provide the basis for further work. The next step is to look at implementation. Currently, we have not looked at the economics of transitioning from the current system to future systems. Recommendations will be based on this analysis, followed by discussion of specific recommendations for the next guidelines. These documents presented by Max are a precursor to the language going into the document.

Alan G: Cost implications have come up before - it seems that the general consensus has been that although we should not eliminate the cost issues, NIST's responsibility is to develop the best possible technical recommendations.

Steve B: Cost is not the issue; it is implementation and understanding the possible interruptions and consequences. He would like to see a gap analysis from both the CRT and STS subcommittees. Compare these new concepts versus the current class of equipment being sold and used. The TGDC has to understand the size of the gap with equipment currently in use and how much would have to change. Second, what is the time period for having plans for these concepts for deployment and implementation? Some aspects of this concept are a long way from being effectively implemented. Next, what is a credible transition plan that wouldn't be unduly disruptive? Vendors must design equipment taking into account that new requirements will be forthcoming and they will be harder to certify. We have to look at all three of these things before we're asked to make decisions.

Max pointed out that new machines that meet new requirements will be no more expensive than current machines.

New machines would not be open systems. Some input/output mechanisms that generate the vulnerabilities we've been discussing would be disabled. Unnecessary software would be disabled in the new systems -- any function that is not needed on a system should not be included. General purpose code is not necessary. Once you know that the software in the machines works as advertised, there is no need to further modify that software. (This eliminates the emergency patch scenario.) Interaction with the outside is strictly prohibited with the closed box, which does not need any modification. The outside voting system (which is part of the internet) will need to be patched, modified, and adapted as needed, but it will never change what is in the secure voting machine that will be available for verification. If we go with this protocol, we can ensure safe and credible elections. [Steve: To go with this, we would have to replace every electronic voting station currently in the field.]

How far along are we? What do we want to do? We shouldn't be designing voting machines. If we only design guidelines, we are well along to meeting the VVSG 07 deadline.

Transition issue: It would be unthinkable to throw out all current systems. It is a learning process. We can modify old systems to get them close to what a reliable voting machine should look like. Voting machines being recommended require an understanding of everyone in the voting system. Use transition of the systems to reevaluate voting process.

Paul Miller has concerns over implementation, and the fact that cost of building new equipment does have to be considered - it will have impact on how we implement these changes. Vendors have been contacted and asked for comments on the proposal that Max is suggesting through ITAA.

With the scope of the things we're changing, it would be helpful for CRT to develop a field deployment scenario like when major upgrades are made to the telecom system. This is to understand the scope and challenges of changes. [This will be looked at in next report, as well as an implementation plan.]

We need TGDC buy-in at the December meeting if this is the way to go. It needs to be decided if this makes sense or whether we should change course. After that we would begin writing text for the VVSG 07 for approval by the TGDC.

There is support for this concept and a realization that there will be much improvement. We need to figure out how much is specifiable and how much is implementable by VVSG 07. In December, we are going to propose formal adoption of ISO 9000, or that vendors be formally certified to ISO 9000 after a transition period. Max's approach requires coordination with the other subcommittees, especially STS.

There are several things that can be accomplished near term. We should not only be looking at long-term goals.

Steve expressed concern that there has not been enough discussion on the deployment and implementation of COTS. There have been white papers addressing this issue. [John W: This requires that the EAC implement things differently.]

John W: We have to consider VVSG 07 as a major upgrade, a standard that will stand for 4 years. How far should we be going in our 2007 recommendations? NIST is addressing future voting systems, not looking at current systems.

Donetta: The subcommittee needs to provide guidance on how long this will take - two years, four years? We had originally set a two-year window for manufacturers to meet, but looking ahead this is not a two-year implementation time frame for core requirements. You have to design, build, and test. We also need to think about the negativity associated with the election process. Congress made a mistake with HAVA and the time requirements - requiring new machines by 2006, which weren't ready with new standards. People had an issue buying systems built to the 2002 standards that they felt were going to change right away. We're looking at the future election equipment and process.

Max's opinion is that, under his proposal, developing new systems in two years should not be a problem. What would cause a problem would be replacing current systems; this is a financial issue, but technologically it should not be a problem. [Group disagrees; we haven't discussed the local jurisdictions who have to buy them.] We don't have to get rid of present machines - there is a lot we can do to make current systems as reliable as possible.

We are suggesting two types of requirements: ones that can be used with existing software, and ones that cannot. There is a grandfather clause; we're not suggesting anyone throw away any machines.

It would be nice to know how long a system actually lasts, e.g., plastic deterioration, etc. to see how long an actual machine may be in use.

At the next teleconference, actual presentations for the December meeting should be available for comment. A suggestion was made to form a small task group to think about what we need to go into the meeting with, so we know what the TGDC needs to give guidance on. EAC and TGDC need to decide if we should go further than current systems. We do not want new guidelines every two years. Several vendors would like to start building on VVSG 07 guidelines instead of worrying about 05 ones that will change.

At the end of Max's report, it was noted that, together with the certification for quality management, we might also require certification for ISO 14000 (environmental quality), which would be consistent and provide overall quality management for the manufacturers. Something we might also expect of the machines is that they conform to a standard that most new equipment today routinely conforms to - Energy Star compliance.

When talking about implementation, in the certification arena, NVLAP will have to go back and reassess whether they can test to the new standard.

Next meeting will be November 30 at 11:00 a.m.

Meeting adjourned at 12:10 pm.

************

 

CRT Teleconference
October 26, 2006 at 11:00 a.m. EDT

Agenda:

1) Administrative updates (Allan E.)

2) Conclusion of discussion on "Voting Machines: Reliability Requirements, Metrics, and Certification" (Max E.). NOTE: A slightly revised version of Max's paper has been posted to the web page
http://vote.nist.gov/TGDC/crt/index.html.

3) Discussion: What should CRT present at the December TGDC plenary? (David F., Alan G.)

4) Any other items.

Future CRT phone meetings are scheduled for: November 16 and 30.

Participants: Alan Goldfine, Allan Eustis, Dan Schutzer, David Flater, Max Etschmaier, Nelson Hastings, Paul Miller, Philip Pearce, Sharon Laskowski, Steve Berger

Administrative Updates:

  • Allan E.: Welcome Philip Pearce and Paul Miller as new members of the TGDC. New members will be receiving an introductory package shortly. Short bios of our new members are on the web. Working on an introductory meeting for the end of November.
     
  • Allan E: EAC meeting regarding the certification procedure being held today, October 26. It is being recorded and will be available for webcast next week. Mary Saunders of NIST's Technology Services is presenting in one of the panels. A place to comment on the certification process is available. The final version is scheduled for release around December 7.

Voting Machines: Reliability Requirements, Metrics, and Certification -- Max E.

  • Last meeting Max presented his paper, "Voting Machines, Reliability…" Today's meeting will continue that discussion.
  • At the next meeting Max will have a new paper entitled, "Rethinking the Quality Assurance and Configuration Management Aspect of the VVSG".
  • A brief overview was given by Max. The work consists of two parts. First task: defining reliability requirements for voting machines. In the analysis he looked at the environment, laws, the design of current machines, and what reliability means in context; he looked at system and function and critical failures. Second task: define a structure for the analysis; he looked at a generic model of a voting machine and followed where the analysis led.
  • Conclusions of the analysis: It is possible to build a voting machine that will not experience critical failures and will experience only infrequent non-critical failures during an election cycle. A prototype should be constructed for testing. Second conclusion: verification of the machine's behavior through statistical analysis of end-to-end testing would be inconclusive, expensive, or both. A metric based on functional failure analysis was defined; the first machine meets the requirements, and the second doesn't meet the requirements unless certain features are changed. Looked at how we can assure a machine does meet the requirements. The current process (current VVSG and certification process) shares the blame for the current difficulties. Before fixing, we need to know what an ideal process will look like.
  • At the beginning we delineate the responsibilities and authority for the voting machine and certification. Second, we should not freeze the technology. The vendor knows more about the product than the regulator; therefore the vendor should do all analysis and certify that his machine meets all requirements, and the regulator should verify this is accurate. Volume testing should be done for suitability testing in every use. Testing will also be done through an ongoing monitoring system.
  • The definition of the process is different from today's. It is a generic process that doesn't identify any institutions.
  • Need to figure out how this plays into the other aspects of the NIST voting work. We need to discuss implications with entire team.
  • DISCUSSION:
  • David F: Conflicts or obliviousness? Are you concerned about the accuracy paper? Max feels his paper may be in conflict. David sees Max's idea as a validation of testing; he sees no conflict.
  • Max sees the accuracy testing being applied to the components of the machine instead of the overall machine.
  • An end-to-end test for reliability, as well as for accuracy, will not give you the results you're looking for.
  • There are testability issues when breaking it down to individual components. Mechanical reliability could look at the mean time between failures of individual parts based on stress, but looking at the accuracy of the voting system, some components that affect the end-to-end accuracy may be untestable. What components? When looking at a complete system, unless you're willing to tear it apart, most of the behaviors (including the optical sensor) are unobservable in the complete system. [Max - this is why you need this certification process.]
  • Purpose of test (it's not going to give you complete confidence) is to give you confidence in the analysis that was done.
  • Allan asks that TGDC members read Max's conclusions and provide feedback.
  • Steve Berger: A lot of us are expecting feedback after next week's election, and then we'll want to think through whether our standards protect where we need them to. If we see a pattern in the problems, how do you see that folding into the test regimen proposed? [Basic flaws in voting systems. The voting machine is an integral part of the system. There are no boundaries of the system. No control over election management people. It is a very complex system that requires system analysis. Max looked at the whole thing and distilled a voting machine that resembles the current machine, but it is totally isolated, clearly delineated, with a fixed boundary that cannot be violated.] If there are problems that after analysis appear to be weaknesses in the equipment, we should ask whether either the current standard or the changes we're working on would have prevented those flaws from being fielded in future systems.
  • Max feels vendors are unfairly blamed for machine failures when it is the systems that are failing.
  • Paul Miller: Agrees with the comment about vendor blame. A lot of problems are procedural. Concerned that testing will work like a recall effort of machines after they are in the field.
  • There is a requirement in the EAC certification paper that says there has to be a process to de-certify systems as well as to certify them. Everyone should read it to see if this is an equitable process.
  • What deliverables are we expecting on Max's proposal for revising the whole process? For reliability, there are two more deliverables: a more detailed, concrete examination of the metrics in Max's formulation, including a detailed examination of the components of failures, and draft requirements for the upcoming draft of the VVSG that would implement this strategy in terms of specific requirements on vendors and testing authorities. We need revisions throughout the standard to accommodate this proposal. [Max will examine his ideal system against today's real election system and try to design how it will fit into the current system, or what kind of changes are necessary to accommodate his design. Figure out how to map the certifying agency into current agencies.]
  • John W sent Max's paper to STS for comments. Not sure if STS and TGDC will want to go Max's route. We need to have an alternative plan. [The analysis of the reliability is not affected by future activities. The certification process without clear relationships will not be as good but the metric can still be applied.]
  • Alan G: If the process is shot down by the TGDC, then the fallback position would have to be something along the lines of what is currently there, with a significantly larger mean time between failures.
  • Steve B: Would it prevent the problems we are currently seeing, and how implementable is it in the distributed certification system we have?
  • We need more data flow analysis.
  • Software is an integral part of the functionality in the process that Max has described.
  • CRT will be presenting this work at the December 4&5 TGDC meeting. It remains to be seen if this will be accompanied by any resolution. We will present this paper and the next one on Quality Assurance and Configuration Management.
  • This working document needs to be on the web by mid-November for TGDC review. Max should develop a 3 page summary for upcoming discussions. If we want the go ahead from TGDC, we need to have something focused, easily readable and discussed.

Discussion: What should CRT present at the December TGDC plenary?

  • Voting Machines: Reliability Requirements, Metrics, and Certification - Max E. [We do not want to present anything that conflicts with this.]
  • Accuracy Benchmark, Metrics and Test Methods - David F. [Might be too technical; might be in conflict with Max's. We need a shorter summary to present. Steve B feels that even if there is conflict, we should discuss. Max, David, and Alan G will get together to see what the conflicts are, if any.]
  • Discussion paper on COTS
  • Discussion paper for Testing for VVSG for Voting System Requirements (responsibility of test labs and how it is scoped in the VVSG)
  • Volume Reliability Testing Protocol as part of the federal certification process
  • Discussion paper on Coding Conventions and Logic Verification
  • Discussion paper on Marginal Marks and Optical Scan Systems

The next meetings are November 16 and November 30.

**************

 

CRT Teleconference
Thursday, October 12, 2006

Participants: Alan Goldfine, Allan Eustis, David Flater, Max E., Nelson Hastings, Paul Miller, Philip Pearce, Sharon Laskowski, Wendy Havens

Draft Agenda:

1) Administrative updates (Allan E.)

2) Rescheduling the November 2 CRT phone meeting to October 26.

3) Discussion of revised "On Accuracy Benchmarks, Metrics, and Test Methods" (David F.). Please read:
http://vote.nist.gov/TGDC/crt/CRT-WorkingDraft-20061003/AccuracyWriteUp.html

4) Discussion of "Voting Machines: Reliability Requirements, Metrics, and Certification" (Max E.). Please read:
http://vote.nist.gov/TGDC/Reliability_Reqs_Metrics_Certification.doc
NOTE: Max's PowerPoint presentation is attached to this email.

5) Discussion of the remaining issues from "Issues List" (David F.). Please read:
http://vote.nist.gov/TGDC/crt/CRT-WorkingDraft-20061003/Issues.html

6) Any other items.

Administrative Updates

  • Allan welcomed Philip Pearce as an official member of the TGDC. Paul Miller, as of today, October 12, 2006, is also now an official member.
     
  • New members will be getting an orientation package very soon. As soon as the fourth new member joins (hopefully within the next couple of weeks) we will have an orientation teleconference with EAC Commissioner Davidson.
     
  • Alan Goldfine discussed rescheduling CRT meetings, moving the meeting on Nov. 2 to Oct. 26 at 11:00 a.m. Nov. 16 & 30 meetings also moved to 11:00 a.m.

On Accuracy Benchmarks, Metrics, and Test Methods - David Flater

This came about because there was an issue highlighted in the draft about whether we want to use a single high-level end-to-end error rate for the system or retain the individual error rates that were specified for each low-level operation in the previous editions of the standard.

In accuracy assessment, there is no value in having low-level error rates; other issues were also found:

  • Low-level versus single end-to-end error rate - when done as a full analysis, we get predictions of the end-to-end error rate. The context of doing this in a test lab is too narrow; there is no value. The low-level error rates are not observable in a system-level test.
     
  • Earlier versions used a probability ratio sequential test for the design of the accuracy assessment. This assumes you're doing a single test; it is more valuable to collect data through the entire test campaign.
     
  • Fixed length versus sequential test plan - when enough evidence is collected to verify system doesn't meet accuracy benchmark, you can terminate testing. We may want to run the entire test campaign for other reasons.
     
  • Validity as a system test - The accuracy testing specified in VVSG 05 allowed the test lab to bypass portions of the system that would be exercised during an actual election. There are issues such as those reported in CA volume testing, and the cost issue. David's position is that we have to do end-to-end testing.

Definition of accuracy method.

  • The accuracy metric in the 2002 VSS and VVSG'05 is ambiguous. Need to clarify.
     
  • Definition of "volume", i.e., a filled-in oval. The 1990 VSS: votes versus ballots. Define volume as votes, not as detection of marks on a ballot.
     
  • The Bernoulli process assumed by the 2002 VSS and VVSG'05 is an invalid model of tabulation. The system can do worse than miscount; it can manufacture votes. A Poisson model is more valid, allowing for the possibility of more than one error per unit of volume.
     
  • In the determination of error, it is unclear how inaccuracies in ballot counts and in totals of undervotes and overvotes factor in.
     
  • All of these changes have ramifications for what kind of benchmark makes sense. The old standard specified 1 error in a volume of 10 million; in testing, only 1 error in 500K was demonstrated (an illustrative calculation follows this list).
     
  • We may want to modify the benchmark to what can be practically demonstrated in testing.
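
The numbers above invite a back-of-the-envelope check. The following sketch is not from the minutes; it is an illustrative calculation, assuming a Poisson error model and zero errors observed during testing, of how much test volume would be needed to demonstrate a given error-rate benchmark at a given confidence level. The 1-in-10-million benchmark and the 90% confidence figure are the values under discussion; everything else is an assumption for illustration.

```python
import math

def required_volume(benchmark_rate: float, confidence: float) -> float:
    """Test volume needed to demonstrate an error rate no worse than
    `benchmark_rate` at the given confidence, assuming a Poisson error
    model and zero errors observed.  With zero errors in volume n, the
    probability of that outcome under a true rate p is exp(-n * p); it
    must fall to at most (1 - confidence) to rule out worse rates.
    """
    return -math.log(1.0 - confidence) / benchmark_rate

# 1 error in 10 million, 90% confidence: roughly 23 million votes.
print(f"{required_volume(1e-7, 0.90):,.0f}")

# Conversely, 500,000 error-free votes only demonstrate a rate of
# about 1 in 217,000 at 90% confidence.
print(f"1 in {500_000 / -math.log(1.0 - 0.90):,.0f}")
```

Under these assumptions, demonstrating the 1-in-10-million benchmark directly would take on the order of 23 million error-free votes, which bears on the benchmark and confidence questions listed under Draft Requirements below.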

Draft Requirements

  • Is the new approach OK overall?
     
  • Is the 1 in 10M benchmark still appropriate? If not, what should it be?
     
  • Is the 90% confidence level appropriate? If not, what should it be?
     
  • Should the test plan be fixed length, or should we stop as soon as there is sufficient evidence that the accuracy benchmark is not satisfied?

Discussion

  • Allan - Is this dependent on the acceptance by TGDC of the volume test program?
     
  • David F - Some changes move us closer to a "California" test method - an end-to-end system test. The only requirement is to run sufficient volume to mitigate the risks.
     
  • Alan G - One disadvantage of the California approach is that it decreases the reproducibility of our (volume) tests.
     
  • Paul Miller - Saw the volume test in San Diego - concerns about user interaction with the interface, the screen freezing, and jams with the VVPAT systems. Are these considered errors, and how can they be fixed? (David: These are operational (reliability) errors.)

Voting Machines: Reliability Requirements, Metrics and Certification - Max Etschmaier

  • Voting system functions: Look at the whole thing - but then look at individual details.
     
  • In my last presentation I laid out my general philosophy. Hopefully you've looked at the report. This section is limited in its scope. A lot of analysis is documented. This is the first phase of reliability requirements; it transcends into other subjects.
     
  • Example of a voting system. Options generation: sets up the voting machine for the election; it is separate from the machine but inputs data, so there are security issues. Control program: the core of the voting machine; the model is invariant and does not change. I/O device: a self-contained device; if it fails, there is no critical effect on other systems. Verification unit: verifies the machine is working properly before, during, and after the election. Machine core: the physical structure; holds all other components together. Alternate record: depends on legal requirements as to whether it's needed.
     
  • Discussed critical failures and usage pattern at last meeting.
     
  • With all we have, we can do a functional failure analysis.
     
  • Procedure: For every critical function, identify design requirements to avoid failure; if none are found, set a limit on the failure probability. For every non-critical function, determine the failure probability and set a limit if possible.
     
  • Design requirements from the analysis. Do we look at the machine as a whole, or do we want to break it down and understand the pieces to make the analysis yield more meaningful statements? The architecture needs to be transparent, which requires separation of code and data. Components need to be separated between input and output devices. Fail-safe architecture where possible. No modification after certification. [Discussion: Alan G - you're explicitly forbidding a scenario where any changes can be made by the vendor before the machine is used. David F - EAC posted a firm statement that says if changes to machines have not been reviewed, the machine will not be certified. Legally it is the responsibility of the state to make sure machines are certified. Paul M - Any modifications that do not affect machine workings will be considered certified - a vendor and county (now state) decision.]
     
  • Reliability requirements. Critical and non-critical components.
     
  • The results are a voting machine that provides a secure repository of the original vote, feeds into precinct and regional systems, makes a recount (through the verification unit) available anytime, and has very few failures - any precinct would be able to make do with one spare voting machine. With this rate of failure, strategy #2 discussed before is not necessary.
     
  • A prototype should be produced.
     
  • Two parallel paths increase failure probability and decrease reliability.
     
  • Certification requires very careful analysis. The vendor is responsible for compliance. Volume testing serves as validation and as the start of ongoing in-service performance monitoring. No modification after certification.
     
  • Submission for certification. Allan - What's different from the current EAC requirements? David F - the requirements stating certification of component reliability and enforceable assurance of technical and financial fitness.
     
  • Next steps. Find a path to implementation: 1) Define requirements for quality and configuration management, 2) transfer results to other parts of the VVSG, 3) examine transition for jurisdictions, and 4) examine transition for vendors. Formulate reliability requirements for VVSG: 1) specify format and 2) set limits on probabilities. [Decisions for TGDC as a hold]
     
  • Currently working on the quality assurance and configuration management, which was handled completely differently in VVSG 05 and previous versions. Max will be delivering a report on this by the December meeting.
     
  • This is a framework. Decision needs to be made on how we want to go.
     
  • Discussion will continue at the next CRT meeting.

Next CRT meeting will be October 26 at 11:00 a.m.

**************

 

CRT Teleconference
Thursday, September 21, 2006

Agenda:

1) Administrative updates (Allan E.)

2) Follow-up to last meeting's discussion of "Critical Issues for Formulating Reliability Requirements" (Max E.)

3) Discussion of "On Accuracy Benchmarks, Metrics, and Test Methods" (David F.) Please read:
http://vote.nist.gov/TGDC/crt/AccuracyWriteUp-20060914/AccuracyWriteUp.html.

4) Discussion of "Issues List" (David F.) Please read:
http://vote.nist.gov/TGDC/crt/CRT-WorkingDraft-20060823/Issues.html.

5) Any other items

Participants: Alan Goldfine, Allan Eustis, Dan Schutzer, David Flater, John Wack, Max Etschmaier, Nelson Hastings, Sharon Laskowski, Steve Berger, Thelma Allen

Administrative Updates:

  • Allan: A number of us have been visiting various states out west during primaries and post-election activities -- WY and WA -- as well as visiting various counties in MD after the primaries. John W was an election judge in Montgomery County, and Allan was an election technician in DC. We'll keep observing and participating in various aspects of the election process in multiple states to learn more about procedures in the pre/post-election process. (Note: Rene Peralta observed L&A testing in Washington state.)
  • John W: Just got a first formatted draft of the (incomplete) VVSG 2007 back from the contractors. It contains mostly requirements sections from Alan Goldfine's and David Flater's work, as well as some from HFP and some draft security work. It will be posted on the TGDC internal web site soon. Note: the draft is quite rough; John is sending back lots of comments to the contractors who are formatting the document.

Critical Issues for Formulating Reliability Requirements - Max E

[Introduction of agenda by Alan Goldfine: Max is looking into reliability issues such as the mean time between failure requirements of the VVSG, rethinking them from the ground up. As a first "strawman" he prepared the document that was presented at the last meeting.]

  • Max: The primary objective is to give a brief update. The first report identified the concept of reliability that would be used in the analysis and defined reliability in a very broad sense. The first report showed that the functionality of software needed to be included in system functions, as well as the functionality of hardware. Different functions have different levels of importance - we need to separate critical and non-critical. Usage of the machines has an effect on the reliability of the machines. Two potential basic strategies were identified in the paper. The first: to expect the voting machine not to fail, period. The second: allowing for corrections of non-critical failures during the voting period.
     
  • Max has tried to elicit comments/feedback. No comments have been received yet. He is using this paper as a model for future work. He has developed a generic model of the voting machine and performed the functional reliability analysis. One conclusion is that it is indeed possible to build a voting machine that will not fail, or will not fail with high probability, and in which critical failures can be almost completely eliminated. The analysis also showed that there are a number of conditions that the voting machine must meet in order to support the statement of reliability requirements. These cut across the complete spectrum of VVSG requirements.
     
  • A metric has been developed that could be published in new guidelines. It contains a few statements of probability. Max has also defined the process of testing and certification that voting machines need to undergo in order to meet reliability requirements. He has also identified problems that should be expected in the implementation of these new reliability requirements. The report is almost finished and should be available by the end of next week. The conference call was opened for questions.
     
  • Steve Berger: We need to figure out where the model Max has developed will lead us, including unforeseen consequences and any downsides. [Max: The purpose of the paper was to get these concerns.] When discussing reliability, what are the failure modes that you envision identifying? What kinds of things would cause the system to fail? [Max: A list of critical failures is identified in the paper - page 10, figure 3 - e.g., the display of ballots that provides inaccurate data.] [David F: This is where we have the clash between reliability and accuracy.]
     
  • Max: If there is an appearance of suspicious activity, it is almost as bad if something actually went wrong.
     
  • Steve: Some of these symptoms could be caused by underlying mechanisms - are we looking at these mechanisms as efficiently as possible (including mechanical errors, memory errors, and error rate)? [Max: We have to look at all of them; the purpose of this analysis is to show how we can avoid all of them. This imposes certain conditions on the physical part of the voting system and also on the functionality of the software.] [David F: One of the concerns is that if we define all of these as critical failures and take the strategy of designing the system to prevent all of them, we run into a quandary when we get to the case of the error rate on optical scanning of paper ballots, etc., where achieving an error rate of zero is considered impossible.] [Steve: And proving it is impossible.] Max's goal is to look at every failure in the nature in which it occurs, taking a statistical approach to every failure.
     
  • Steve: In the use scenario, it is pointed out that the machines are used for short periods, but our ultimate requirement is that a number of units used over a short period have a high level of accuracy - more of a population assumption. [Max: Yes, this is different from any other system. We have to expect that we can't know everything and our analysis will never be perfect. Accept human error in the process. Must include field anomalies.] Steve: How should the field information be structured to achieve these goals? [Max: We need exception reports - data on the failure of machines in actual use, and a system for collecting and analyzing it. This should be easy to get.]
     
  • Max: What we are developing is radically different from what we have today. Technically he does not see any problems. Culturally, there may be problems.
     
  • Alan: What are the ramifications and unexpected implications? One of the main ramifications will be that, if the proposed system is put into effect, many of the currently certified systems are not going to pass.
     
  • Allan E: At the last TGDC meeting, discussion ensued concerning an anomaly reporting mechanism that the EAC/NASED could manage for localities to report election day equipment failures down to the county level. [Steve will get more details. The plan is to have an anomaly reporting system in place.]
     
  • John W: STS is starting to consider whether the VVSG 2007 should effectively permit DREs as they are constituted today and propose standards for future DREs. They are studying a white paper for the TGDC about whether voting systems must adhere to this independent verification notion or use a cryptographic protocol. Max's proposal crosses over to the STS area. [Max: The definition of a voting machine, in a narrow sense, would be welcome.]

Accuracy - David Flater

  • The paper has been posted. (http://vote.nist.gov/TGDC/crt/AccuracyWriteUp-20060914/AccuracyWriteUp.html). Of the drafts that we have, it's a confusing situation because of the expansion of the scope of Max's mean time between failure work to overlap into accuracy and some aspects of security. Previously the narrow focus was on the accuracy benchmark, metric, and testing method as defined in the current standard. There was an issue that the standard set out error rate benchmarks for a collection of low-level operations in the system, as opposed to a single end-to-end error rate. These low-level benchmarks are neither necessary nor sufficient to accomplish a good end-to-end error rate. The proposal was to replace them with an end-to-end error rate. Upon implementation, other issues arose in relation to accuracy in the present standard; hence the discussion paper. If Max's work proceeds as is, it may obviate all that David has written about accuracy.
     
  • Issues: First, the use of a probability ratio sequential test method to assess conformity with the accuracy requirement. While it is widely approved of and has numerous advantages, this test design leaves the test lab in a quandary if errors begin to occur in other parts of the test campaign after the system fulfills the accuracy criteria for acceptance. (A small illustrative sketch of such a sequential test follows.)
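
The following sketch is not part of the minutes; it is a minimal illustration of the kind of probability ratio sequential test referred to above, assuming a simple Bernoulli error model. The benchmark rate, the "unacceptable" rate, and the risk levels are illustrative assumptions, not values taken from the VVSG.

```python
import math

def sprt_decision(errors: int, volume: int,
                  p0: float = 1e-7,    # assumed "acceptable" benchmark rate
                  p1: float = 1e-6,    # assumed "unacceptable" rate
                  alpha: float = 0.10, # risk of rejecting a good system
                  beta: float = 0.10   # risk of accepting a bad system
                  ) -> str:
    """Wald sequential probability ratio test on an observed error count.

    Returns "accept" (consistent with the benchmark rate p0),
    "reject" (consistent with the unacceptable rate p1), or
    "continue" (not enough evidence yet; keep testing).
    """
    llr = (errors * math.log(p1 / p0)
           + (volume - errors) * math.log((1 - p1) / (1 - p0)))
    if llr >= math.log((1 - beta) / alpha):
        return "reject"
    if llr <= math.log(beta / (1 - alpha)):
        return "accept"
    return "continue"

# Five errors in a million votes already weigh heavily against the
# benchmark rate, so the test could stop early and fail the system.
print(sprt_decision(errors=5, volume=1_000_000))   # -> "reject"
```

The quandary noted above is that once the "accept" boundary has been crossed, a test designed this way offers no guidance on how to treat errors that surface later in the campaign.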

Presentation of December TGDC Meeting

  • Dan Schutzer wants to know what the CRT will be presenting at the December TGDC plenary meeting. [John W: Discussions are going on now. That information will be available soon.] Will it include something on accuracy and reliability? [Alan G: Yes, definitely reliability, and presumably accuracy as well.] Volume testing? [David F: That is included in the discussion of accuracy. We might want to increase the frequency of our teleconferences to discuss these issues.]
     
  • John W: We want to discuss mostly the controversial issues.
     
  • Allan E: We'll be developing a meeting agenda soon, as well as allowing everyone to see white papers and the issue paper by Max so the meeting can be focused on the issues of greatest concern for the next set of standards. [John W feels that COTS testing might be on the agenda because it's something that is frequently discussed along with coding standards.]
     
  • David F: With respect to the volume test, he looked at the current VVSG standard and noticed that the accuracy test, which is the closest thing to a volume test, appears to allow portions of the system to be bypassed with a test harness or instrumentation. This is probably related to the discrepancies between test reports and the results which the state of California (CA) has reported. Continuing to use this kind of test instrumentation is probably not a defensible strategy, and we want to move toward end-to-end testing similar to the CA volume test. That will have ramifications for the kind of accuracy benchmark we specify, among other things.
     
  • John W: We need to merge the accuracy test with the other sorts of functional performance tests on the voting system. Does it presume we're going to change the way the labs work with the vendors on tests, because we may not get the whole picture on accuracy the way it's done now? [David F: It is fixed-length versus sequential testing; there are two approaches. If you assume your approach is correct, then there's no difference in your confidence in the results. If we have enough evidence, to a sufficient level of confidence, that the system does not meet the accuracy benchmark, there's no reason not to stop and require the system to be fixed. However, you could run the entire test suite and calculate an estimate of the true accuracy based on the evidence collected, thereby deferring the decision until all data are collected. This is different from a policy allowing vendors to withdraw.]

Possibly schedule another telecon next Thursday afternoon to continue this agenda.

Issues List - David Flater

  • There are 3 major sections. The first identifies 3 EAC opportunities: what we would write in the product standard could be significantly influenced by EAC plans to support certain decision making. Section 2 lists some lower-level technical decisions within the product standard; most are issues discussed before. Finally, there is a note on the test reports and the conformity assessment process: the language is up in the air, and there are ramifications if the EAC comes out with a process that alters what was said in VVSG 05. [Alan G: There will be a meeting in the near future with the EAC to discuss this.]
 

***********

CRT Teleconference
Thursday, August 31, 2006

Participants: Alan Goldfine, Allan Eustis, Dan Schutzer, David Flater, John Wack, Max Etschmaier, Sharon Laskowski, Thelma Allen, Wendy Havens

Agenda:

1) Administrative updates (Allan E. and John W.)

2) Critical Issues for Formulating Reliability Requirements (Alan G. and John W.)

Max Etschmaier presentation: http://vote.nist.gov/TGDC/crt/Etschmaier20060831.ppt

Max Etschmaier paper: http://vote.nist.gov/TGDC/crt/CriticalReliabilityIssues.doc

3) Issues List (David F.) See: http://vote.nist.gov/TGDC/crt/CRT-WorkingDraft-20060823/Issues.html

4) Any other items.

Meeting began with introductions at 10:05 a.m.

Administrative Updates:

  • Allan Eustis - Just returned from the Wyoming primaries where he was reviewing post election activities. Met with Laramie County IT people - they would very much appreciate a standard XML for voting result outputs. They also thought other states would find a uniform XML valuable.
     
  • John Wack - Attended the EAC meeting. There was discussion about expanded scope and issues, and concerns about poll workers, including electronic poll books. Brian Hancock says these systems will have to be certified. Walked away with thoughts on how requirements must be satisfied; confusion arose about how to include this.

 

Critical Issues for Formulating Reliability Requirements - Max Etschmaier

(Note paper and presentation URL above)

  • Alan Goldfine introduced Max and informed the group that he would be looking at old reliability issues in existing specifications. He is not developing conclusions or requirements at this time.
     
  • Max began with a quick background introduction of his experience, most from aviation (both civil and military). Aviation was quite different then, but still better than our voting systems of today. Reliability was looked at because of their concerns with costs.
     
  • Although voting machines are different from airlines, the idea of looking at system reliability still applies. Hopefully the same successful outcome will happen.
     
  • Many of the barriers are institutional barriers.
     
  • The definition of reliability emphasizes that reliability analysis cannot be focused entirely on obtaining measures, but needs to look at the purpose of the system and the environment, to define measures. Reliability defines the frequency with which "failures" occur.
     
  • Reliability depends on equipment, maintenance process, and logistics support.
     
  • Our analysis will require 3 steps: 1) examine the definition of "voting system", 2) examine other requirements to see what functions are required of the voting system, and 3) examine the operating environment.
     
  • Current guidelines state reliability requirements at the machine level as well as the precinct level.
     
  • Voting machine examination must include software examination.
     
  • Definition of voting system is consistent with HAVA definition.
     
  • Functions of a voting system were discussed. Other functions were suggested: verify eligibility of the voter, make sure the voter has not voted previously, and exception handling (disputes, problems).
     
  • Threats to the integrity of these functions come from malfunctions, damage, and tampering.
     
  • There are two types of failures: critical (essential for completion of system mission or may lead to unacceptable consequences) and non-critical (may disturb operation or have economic consequences).
     
  • The language of VVSG 2005 says critical failures have to be avoided - the requirement is unconditional. If a machine does not exclude critical failures, it cannot be certified.
     
  • Next step is to determine cost of a "critical" failure.
     
  • Max's recommendation is to stay with the current unconditional requirement of no critical failures.
     
  • Usage pattern - machines are idle most of the time. Use is predictable.
     
  • Critical failures in the form of interference can only occur during the active phase; however, a skilled person could access the machine and manipulate it, outside the active phase, to cause a critical failure - the failure would be that the protection against tampering failed.
     
  • Two strategies to look at - neither permits critical failures during active stage.
     
  • Strategy 1: No failure of any kind will occur during the active phase. Easy to manage; works well for simple machines. Certification: Demonstrate that machines, under conditions similar to actual operations, would not fail at a rate that exceeds the limit. The analysis is performed by the vendor and audited by the certifying agency.
     
  • Strategy 2: Only critical failures are excluded during the active phase; non-critical failures are permitted and corrected with maintenance. This requires management and maintenance to be in place during the active phase, as well as spare machines. Certification: A disciplined process for managing the logistics system is included in the certification of reliability.
     
  • Discussion of MTBF under strategies 1 and 2 (no failures during the critical phase vs. failures by activity); an illustrative MTBF calculation follows this list.
  • Discussion of narrowing scope
  • Need to approach accepted reliability of today's computers
  • Reliability does not stand alone
  • Need to consider failure in terms of testing context
  • Max will continue one on one dialogue with TGDC members to refine approach.
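
The following sketch is not from the minutes; it illustrates, under the usual assumption of an exponential (constant failure rate) model, how an MTBF figure relates to Strategy 1's goal of no failures during the active phase. The 15-hour active phase and the 0.1% per-machine failure probability are illustrative assumptions only.

```python
import math

def mtbf_for_active_phase(active_hours: float, max_fail_prob: float) -> float:
    """MTBF (in hours) needed so that the probability of any failure
    during one active phase of length `active_hours` is at most
    `max_fail_prob`, assuming an exponential failure model:
        P(failure within t) = 1 - exp(-t / MTBF)
    Both inputs are illustrative assumptions, not VVSG values.
    """
    return active_hours / -math.log(1.0 - max_fail_prob)

# Example: a 15-hour election day and a 0.1% per-machine chance of any
# failure imply an MTBF on the order of 15,000 hours.
print(f"{mtbf_for_active_phase(15.0, 0.001):,.0f} hours")
```

The same arithmetic, applied across the handful of machines in a precinct, is what makes an expectation such as "one spare machine per precinct" (noted in the October 12 minutes) plausible under Strategy 1.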

Meeting adjourned at 11:35 a.m.

 

*************

 

CRT Teleconference
Thursday, August 10, 2006

Participants: Allan Eustis, John Wack, Alan Goldfine, David Flater, Nelson Hastings, Philip Pearce, Max Etschmaier, Daniel Schutzer, Paul Miller, Wendy Havens

Agenda:

1) Administrative Updates: Allan Eustis

2) Test reports: David Flater and Stephen Berger (tentative)

3) Reconciliation of aliases and overvotes in write-ins: David Flater

4) FYI: retiring the term "ballot format": David Flater

5) Other Items

Administrative Updates:

  • Allan Eustis welcomed to the teleconference Max Etschmaier (a new NIST contractor for work in quality assurance); Philip Pearce (one of two new members joining from the Access Board); and Paul Miller, in nomination to replace NASED charter member Paul Craft, who resigned as a result of his retirement from the State of Florida.

JW: Ron Rivest sent an email suggesting NIST/STS look into the gaming standards

  • links to technical standards and setup validation
  • looks like there could be a "good amount" of overlap
  • Commissioners Martinez and Hillman suggested it might be good to add an "ad hoc sub-group" in the Standards Board. Four individuals were named. JW participated in a conference call with the EAC and Standards Board representatives.
    • idea is to look at procedural issues related to VVSG
    • allows more participation among the EAC sub-committees
  • AE: Under HAVA, Alice Miller, Sharon Turner-Buie, Helen Purcell, and John Gale are TGDC members representing the Standards and Advisory Boards.

[Test reports deferred to later in the meeting ]

Reconciliation on aliases and over-votes:

  • David Flater discusses corrective actions on definition(s) re: double votes
  • add improvements to understand terminology more clearly
  • refer to Mike Shamos's advice that reconciliation of aliases is handled by manual adjustment.
  • Robust reconciliation of aliases would imply that double vote resolution could be automated.

Term-ballot format:

  • revision to text; re: terms like ballot formatting, ballot style, ballot configuration
  • concern that terminology will be confusing
  • in some cases, such as "ballot formatting", the term is not used (or very seldom used) anymore; JW - the ad hoc standards group could have input in such a case
  • bottom line: it is better to use "ballot style" terminology, consistent with VSS 2002

AE: status of the "action item" list from David Flater and Stephen Berger

  • re: their outlines of the test report are not in "harmony" with each other

Test reports

  • Mr. Berger not present at today's meeting; will defer to the next teleconference
    • follow-up email to be sent out to Mr. Berger
    • action item; waiting to receive revisions from Mr. Berger

Other Items

Next scheduled meeting:
Thursday, August 31, 2006 at 10:00 AM EDT

*************

 

CRT Teleconference
Thursday, July 20, 2006
10 AM EDT

Participants: John Wack, Alan Goldfine, David Flater, Sharon Laskowski, Steve Berger, Wendy Havens

Agenda:

 

1) Administrative Updates: John Wack
2) Test reports (comparison): David Flater & Stephen Berger
3) Reconciliation of aliases and over-votes in write-ins: David Flater
4) FYI: retiring the term "ballot format": David Flater
5) Any other items.
6) Meeting Action items

Administrative Updates:

JW- Updates everyone on the House of Representatives Joint Science/Administration Committees' hearing yesterday in which Dr. William Jeffrey gave testimony on voting system standards and related issues. (His testimony has been posted at:
http://vote.nist.gov/jeffrey_science20060719.pdf.) In addition to Dr. Jeffrey, the following witnesses provided testimony:

  • Ms. Donetta Davidson - Commissioner, Election Assistance Commission;
  • Ms. Mary Kiffmeyer - Secretary of State for Minnesota;
  • Ms. Linda Lamone - Administrator of Elections, Maryland State Board of Elections;
  • Mr. John Groh - Chairman, Election Technology Council, Information Technology Association of America; and
  • Dr. David Wagner - Professor of Computer Science, University of California at Berkeley.

The full hearing web cast is available for viewing at: http://boss.streamos.com/real/science/sci06/071906.smi

Discussion on Test Reports:

David Flater along with Steve Berger discussed and compared their draft notes regarding voting system test reports. There were numerous issues that were reviewed and David Flater put together an action items list that has been appended at the end of these meeting minutes.

Other items:

Agenda items deferred to next CRT telcon:

 

3) Reconciliation of aliases and over-votes in write-ins,
4) Retiring the term "ballot format";

Next scheduled meeting is on Thursday, August 10, at 10:00 AM EDT

Meeting Action Items:

  • SB to revise his outline to separate the Technical Data Package, Voting Equipment User Documentation, Test Plan, Test Report, and Public Information Package from one another, to make it possible to integrate with DWF's working draft.
     
  • SB to remove line items found to be redundant.
     
  • SB to review VVSG'05 TDP content with BH to determine which subsections remain relevant.
     
  • SB to write memo and coordinate with STS regarding proposal for test lab to maintain custody of the build environment and act as EAC's deputy in ensuring that no unauthorized changes are incorporated into the voting system. (Revises witness build requirements.)
     
  • SB to call BH regarding publication of election management practices and determine their relevance to the test report or PIP (indications where certain practices are necessary for the system to meet requirements).
     
  • SB to forward example of requirements for attestation in test report.
     
  • SB to clarify "label" requirement.
     
  • DWF to move requirement "include a reference to the specific section or sections of the Voting Equipment User Documentation where the voting variations that the voting system was found to support are documented by the vendor" from Test Report to Implementation Statement.
     
  • DWF to add signature requirement(s) to Implementation Statement.
     
  • DWF to add test report requirement or placeholder for warrant of accepting change control responsibility (attestation that vendor will implement changes as required for certification). (Still a little fuzzy on this.)
     
  • DWF to make changes to reflect that EAC will be in the loop during the testing campaign: include test report revision history ("predecessor configurations and test reports that are connected to the current evaluation"), TDP, and system change notes in the Test Report for EAC.
     
  • DWF to add requirements about use of photos for (1) system hardware identification (coordinate with STS setup validation) and (2) illustration of correct system set-up.

Notes (not actionable until outline is integrated):

  • Desire to chunk according to the specialties of reviewers and order based on EAC workflow.
  • Desire to keep all potentially confidential information in one easily redacted chunk (though the determination that it is actually confidential is not ours to make).
  • Desire to keep all setup validation type information in one easily accessible place, to be used by several parties including end users.

***********
