TECHNICAL GUIDELINES AND DEVELOPMENT COMMITTEE (TGDC)
The VVSG contains considerable new material, as well as material expanded from previous versions of the voting standards. This section provides an introduction to and an overview of the major features of the VVSG.
The structure of the VVSG is markedly different from that of previous versions. First and foremost, the VVSG should be considered a foundation for voting system requirements: a foundation that provides precision, that reduces ambiguity and redundant requirements, and that provides for change, i.e., the addition of new requirements for new types of voting devices or voting variations.
It was necessary to focus on providing this robust foundation for several reasons. First, previous versions suffered from ambiguity, which resulted in a less-robust testing effort. In essence, it has been more difficult to test voting systems when the requirements themselves are subject to multiple interpretations. This new version should go a long way towards reducing that ambiguity.
Second, there are simply more types of voting devices than were anticipated by previous versions, and new devices will continue to be marketed over time. This proliferation of new devices requires a strong organizational foundation so that existing devices can be unambiguously described and the development of new devices can proceed in an orderly, structured fashion.
The VVSG has been reorganized to bring it in line with applicable standards practices of ISO, W3C, and other standards-creating organizations. It contains three volumes, or "Parts," for different types of requirements:
The requirements in these Parts rely on the definition and strict usage of certain terms, included in Appendix A, Definition of Words with Special Meaning in the VVSG. This appendix covers terminology that must be sufficiently precise and formal to avoid ambiguity in the interpretation and testing of the standard. Terms are defined to mean exactly what is intended in the requirements of the standard, no more and no less. Note: readers may already be familiar with definitions for many of the words in this section, but the definitions here may differ in small or significant ways from everyday usage because the terms are used in special ways in the VVSG.
The VVSG also contains a table of requirement summaries, to be used as a quick reference for locating specific requirements within sections/subsections. Appendix B contains references and end notes.
Voting system and device classes are new to the VVSG. Classes in essence form profiles of voting systems and devices. They are used as fields in requirements and indicate what each requirement applies to. For example, Figure 1 shows the high-level device class called vote-capture device. There are various requirements that apply to the vote-capture device class; this means that all vote-capture devices must satisfy these requirements (e.g., for security, usability, etc.).
There are also requirements that apply more specifically to, say, the IVVR vote-capture device class and the explicit devices beneath it, such as VVPAT. These devices inherit the requirements that apply to vote-capture devices; that is, they must satisfy all the general vote-capture device requirements as well as the more specific requirements that apply to them. In this way, new types of specific vote-capture devices can be added in the future: they must satisfy the general requirements that all vote-capture devices are expected to satisfy, while also satisfying specific requirements that apply only to the new device. This structure makes it unambiguous to vendors and test labs which requirements apply to ALL vote-capture devices, for example, as opposed to which requirements apply specifically to VVPAT systems. It also allows new device requirements to be added, or existing ones modified, without impacting the rest of the standard.
Figure 1: Voting device class hierarchy
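The inheritance idea behind these device classes can be sketched in code. The sketch below is illustrative only: the class names follow Figure 1, but the requirement identifiers and wording are invented for the example and do not come from the VVSG itself.

```python
# Hypothetical sketch of the VVSG device-class idea: requirements attached to a
# general class are inherited by every more specific device class beneath it.
# Requirement identifiers here are invented for illustration.

class VoteCaptureDevice:
    """General profile: requirements that apply to ALL vote-capture devices."""
    requirements = {"SEC-1: protect cast-vote records",
                    "USA-1: meet general usability benchmarks"}

class IVVRVoteCaptureDevice(VoteCaptureDevice):
    """More specific profile: adds IVVR-specific requirements."""
    requirements = {"IVVR-1: produce an independent voter-verifiable record"}

class VVPAT(IVVRVoteCaptureDevice):
    """Concrete device class: adds paper-trail-specific requirements."""
    requirements = {"VVPAT-1: paper record must be human-readable"}

def applicable_requirements(device_cls):
    """Union of the device's own requirements and all inherited ones."""
    reqs = set()
    for cls in device_cls.__mro__:
        reqs |= cls.__dict__.get("requirements", set())
    return reqs

# A VVPAT device must satisfy its own requirement plus everything it inherits:
print(sorted(applicable_requirements(VVPAT)))
```

The point of the sketch is the lookup in `applicable_requirements`: a test lab evaluating a specific device collects the general requirements along with the device-specific ones, just as the class hierarchy in Figure 1 intends.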
Requirements are now very specific to either a type of voting variation or a type of voting device (as stated in the previous section, the voting device can be a general profile of voting devices or a more specific voting device). They contain expanded text and more precise language to make explicit what exactly is required and what type of testing is to be used by the test lab to determine whether the requirement is satisfied. If possible, the requirement also contains a reference to versions of the requirement in previous standards (e.g., VVSG 2005 or the 2002 VSS) so as to show its genesis and to better convey its purpose.
The terminology used in the VVSG has been considered carefully and is used strictly and consistently. In this way, requirements language can be made more clear and unambiguous. Hypertext links are used throughout the VVSG for definitions of terminology so as to reinforce the importance of understanding and using the terminology in the same way. However, it is important to understand that the terminology used in the VVSG is specific to the VVSG. An effort has been made to ensure that the terms used in the VVSG mean essentially the same thing as in other contexts, but at times the definitions in the VVSG may vary in small or significant ways.
Figure 2 illustrates the relationships and interaction between requirements, device classes, types of testing from Part 3, all in the framework of strictly used terminology.
Usability Performance Requirements
Usability is conventionally defined as: "the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use" ([ISO98a] ISO 9241-11: 1998 Ergonomic requirements for office work with visual display terminals (VDTs) -- Part 11: Guidance on usability). In VVSG'05, the usability guidelines basically relied on three assessment methods:
While all these help to promote usability, they are all somewhat indirect methods. The actual "effectiveness, efficiency and satisfaction" of voting systems are never evaluated directly.
This version of the VVSG uses a new method based on summative usability testing that directly addresses usability itself: how well do users achieve their goals? The features of this new method include:
Obviously, the implementation of such complex tests is more difficult than simply checking design features. However, performance-based testing using human subjects yields the most meaningful measurement of usability because it is based on their interaction with the system's voter interface, whereas design guidelines, while useful, cannot be relied upon to discover all the potential problems that may arise. The inclusion of requirements for performance testing in these guidelines advances the goal of providing the voter with a voting system that is accurate, efficient, and easy to use.
Table 1 shows all five benchmarks; the actual values are included in Part 1 Section 3.2.1. Please see this section for full details.
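As an illustration of how the three ISO 9241-11 measures might be computed from summative test sessions, here is a hedged sketch. The session data, the specific metric definitions, and the output format are all invented for this example; the actual benchmark measures and values are those given in Part 1 Section 3.2.1.

```python
# Illustrative computation of summative usability measures from test-voter
# sessions. The metric names follow ISO 9241-11 (effectiveness, efficiency,
# satisfaction); the session data below are invented for this sketch.

sessions = [
    # (ballot cast without error?, time to vote in seconds, satisfaction 1-5)
    (True, 210, 4), (True, 185, 5), (False, 320, 2),
    (True, 240, 4), (True, 200, 3),
]

def usability_measures(sessions):
    """Return (effectiveness, efficiency, satisfaction) over all sessions."""
    n = len(sessions)
    effectiveness = sum(ok for ok, _, _ in sessions) / n   # share of error-free ballots
    efficiency = sum(t for _, t, _ in sessions) / n        # mean time to vote
    satisfaction = sum(s for _, _, s in sessions) / n      # mean rating
    return effectiveness, efficiency, satisfaction

eff, mean_time, sat = usability_measures(sessions)
print(f"effectiveness={eff:.2f}, mean time={mean_time:.0f}s, satisfaction={sat:.1f}")
```

Unlike a design-guideline check, these numbers come directly from observed voter behavior, which is the essential shift this version of the VVSG makes.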
In addition to usability performance benchmarks, the treatment of human factors, i.e., usability, accessibility, and privacy, has been expanded considerably. Table 2 summarizes the new and expanded material.
Software independence [Rivest06] means that an undetected error or fault in the voting system's software is not capable of causing an undetectable change in election results. All voting systems must be software independent in order to conform to the VVSG.
There are essentially two issues behind the concept of software independence: first, it must be possible to audit voting systems to verify that ballots are being recorded correctly; second, testing software is so difficult that audits of voting system correctness cannot rely on the software itself being correct. Therefore, voting systems must be "software independent" so that audits do not have to trust that the software is correct; the voting system must provide other proof that the ballots have been recorded correctly, e.g., voting records produced in ways whose accuracy does not rely on the correctness of the voting system's software.
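A minimal sketch can make the audit idea concrete: the tally is checked against independently verified records rather than by trusting the software that produced it. The record format and candidate names below are hypothetical.

```python
# Minimal sketch of why software independence matters for audits: a tally
# is compared against records the voter verified independently (e.g., VVPR).
# A mismatch is detectable regardless of whether the voting software that
# produced the electronic records is correct.

from collections import Counter

def audit(electronic_records, voter_verified_records):
    """True if the two tallies agree, False if a discrepancy is detected."""
    return Counter(electronic_records) == Counter(voter_verified_records)

electronic = ["alice", "alice", "bob", "alice"]
paper      = ["alice", "alice", "bob", "bob"]   # hypothetical flipped vote

print(audit(electronic, electronic))  # tallies agree
print(audit(electronic, paper))       # discrepancy detected
```

In a software-dependent system there is no independent record to put on the right-hand side of this comparison, so an undetected software error can remain an undetectable error in the results.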
This is a major change from previous versions of the VVSG, because previous versions permitted software-dependent voting systems, that is, systems whose audits must rely on the correctness of the software, to be conformant. One example of a software-dependent voting system is the DRE, which is non-conformant to the VVSG.
Independent voter-verifiable records
There are several general types of voting systems that can satisfy the definition of software independence, but the VVSG currently contains requirements for only one: voting systems that use voter-verifiable paper records (VVPR).
The relevant requirements, though, have been abstracted to apply to a higher-level type of voter-verifiable records called independent voter-verifiable records (IVVR), which can be audited independently of the voting system software much the same as with VVPR but do not necessarily have to be paper-based. IVVR relies on voter-verification, that is, the voter must verify that the electronic record is being captured correctly by examining a copy that is maintained independently of the voting system's software, i.e., the independent voter-verifiable record.
NOTE: If different types of IVVR are developed that do not use paper, systems that use them can also be conformant to the VVSG "as is." In other words, new types of IVVR that do not use paper are already "covered" by the IVVR requirements in the VVSG; new requirements do not necessarily need to be added.
Figure 3 illustrates this in a tree-like structure. At the top of the tree is software independence; as stated previously, all voting systems that are conformant to the VVSG must be software independent. One route to achieving software independence is to use IVVR. The VVSG contains requirements for IVVR, of which VVPR is one (currently the only) type. New types of IVVR voting systems, as long as they meet the current requirements in the VVSG, will also be conformant without the need for additional requirements.
Figure 3: Voting systems that can conform to current requirements in the VVSG
Use of IVVR is currently the only method specified by requirements in the VVSG for achieving software independence. Vendors that produce systems that do NOT use IVVR must use the innovation class as a way of proving and testing conformance to the VVSG. The innovation class exists to ensure a path to conformance for new and innovative voting systems that meet the requirement of software independence but for which there may not be requirements in the VVSG. Technologies in the innovation class must be different enough from other technologies permitted by the VVSG to justify their submission. They must meet the relevant requirements of the VVSG as well as further the general goals of holding fair, accurate, transparent, secure, accessible, timely, and verifiable elections.
A review panel process, separate from the VVSG conformance process, will review innovation class submissions and make recommendations as to their eventual conformance to the VVSG.
Open-Ended Vulnerability Testing
The goal of open-ended vulnerability testing (OEVT) is to discover architecture, design, and implementation flaws in the system that may not be detected using systematic functional, reliability, and security testing and that could be exploited to change the outcome of an election, interfere with voters' ability to cast ballots or have their votes counted, or compromise the secrecy of the vote. The goal of OEVT also includes attempts to discover logic bombs, time bombs, or other Trojan horses that may have been introduced into the system hardware, firmware, or software for such purposes. OEVT relies heavily on the experience and expertise of the OEVT team members, their knowledge of the system, its component devices, and associated vulnerabilities, and their ability to exploit those vulnerabilities.
In addition to software independence and OEVT, the treatment of security in voting systems has been expanded considerably. There are now detailed sets of requirements for eight aspects of voting system functionality and features, as shown in Table 3.
To clarify the treatment of components that are neither manufacturer-developed nor unmodified COTS (commercial off-the-shelf software/hardware) and to allow different levels of scrutiny to be applied depending on the sensitivity of the components being reviewed, new terminology has been introduced: application logic, border logic, configuration data, core logic, COTS, hardwired logic, and third-party logic. Using this terminology, requirements have been scoped more precisely than they were in previous iterations of the VVSG.
The way in which COTS is tested has also changed; the manufacturer must deliver the system to test without the COTS installed, and the test lab must procure the COTS separately and integrate it.
End-to-End Testing for Accuracy and Reliability
The testing specified in previous versions of the VVSG for accuracy and reliability was not required to be end-to-end and could bypass significant portions of the system that would be exercised during an actual election, such as the touch-screen or keyboard interface. A volume test is now included that is analogous to the California Volume Reliability Testing Protocol.
The metric for reliability has been changed from Mean Time Between Failure (MTBF) to a failure rate based on volume that varies by device class and severity of failure.
Reliability, accuracy, and probability of misfeed are now assessed using data collected through the course of the entire test campaign, including the volume testing. This increases the amount of data available for assessment of conformity to these performance requirements without necessarily increasing the duration of testing.
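A hedged sketch of how such a volume-based failure rate might be assessed from pooled campaign data follows. The phase data and the benchmark value are invented for the example; the VVSG itself specifies benchmarks per device class and failure severity.

```python
# Hedged sketch of a volume-based failure-rate assessment: failures observed
# across the whole test campaign (including the volume test) are pooled and
# divided by the total volume processed. All numbers below are invented.

def observed_failure_rate(failures, ballots_processed):
    """Failures per ballot, pooled over all phases of the test campaign."""
    return failures / ballots_processed

# Pooled (failures, ballots) per hypothetical test phase:
phases = [(0, 3000), (1, 12000), (0, 5000)]
total_failures = sum(f for f, _ in phases)
total_ballots = sum(b for _, b in phases)

rate = observed_failure_rate(total_failures, total_ballots)
benchmark = 1 / 10000   # hypothetical maximum allowed failure rate
print(rate, rate <= benchmark)
```

Pooling all phases is what lets the assessment gain statistical power without lengthening the test campaign, which is the rationale stated above; a per-ballot rate also scales naturally with volume in a way that a single MTBF figure does not.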
The general core requirements for voting systems have been expanded greatly. In addition to the already noted improvements in COTS coverage, end-to-end testing for accuracy and reliability, and the new reliability metric, the topics in Table 4 have been added or expanded.