READ-ONLY SITE MATERIALS: Historical voting TWiki site (2015-2020) ARCHIVED from https://collaborate.nist.gov/voting/bin/view/Voting
I noticed that there isn't currently a process defined (that I can see) for post-election equipment and operational audits. I'm posting this topic as a place for people to discuss potential options for post-election equipment and operational audits, in hopes of collaboratively developing a process for this component of the project.
At ES&S, we've been working on a set of standard metrics that any election operation can use as part of its post-election analysis. We call them the V.I.T.A.L.s, as they represent a fairly complete retrospective view of an election and can provide some assistance in planning election operations for the future. Here is what we've developed. The following is an excerpt from an upcoming white paper on the subject of election analysis.
What makes a “successful election” for a voter? In looking at voter experience studies, such as the comprehensive academic work of the Caltech/MIT Voting Technology Project and jurisdiction efforts such as Los Angeles County's Voting Systems Assessment Project, much of the feedback from voters comes down to four clear statements of success. Many voters feel they’ve had a successful day at the polls when they can say the following:
These statements underscore the key drivers for success in election operations: access to critical information, convenience, accuracy, and transparency.
Measuring against these key drivers can be challenging, but the emergence of better data makes it possible…right now. As part of a comprehensive approach to providing election officials with better data analysis for their elections, Election Systems & Software developed five key metrics that map to these key drivers of success. They are derived not only from the excellent work done in the aforementioned voting studies, but also from extensive conversations with customers that highlighted the challenges “keeping election officials up at night.” The list is not exhaustive, nor does it capture every metric that indicates election success; further metrics around accessibility, accuracy of the count, security, voter experience, and election efficiency are certainly available. However, for election officials who want to quickly and accurately measure vital performance factors, these five metrics create a great starting point.
The five V.I.T.A.L. metrics are:
Let’s review each of these key metrics in depth:
Measuring voting volumes can be key to pre-empting long lines and creating a smooth voter flow throughout Election Day. Voting volumes can be measured by the number of voters checking in at a voting location at a specific time, or by the flow of voters starting and completing voting sessions on your voting equipment. Consider the average number of voters per hour per location as a starting point.
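As a minimal sketch of the voters-per-hour-per-location metric, the snippet below tallies check-ins by location and hour. The record format (a location name plus a timestamp) is an assumption for illustration; adapt it to whatever your pollbook or check-in system exports.

```python
# Sketch: hourly voter volume per location from check-in records.
# The (location, timestamp) record format is hypothetical; adapt it
# to your pollbook export.
from collections import Counter
from datetime import datetime

check_ins = [
    ("Precinct 12", "2020-11-03 07:05"),
    ("Precinct 12", "2020-11-03 07:40"),
    ("Precinct 12", "2020-11-03 08:15"),
    ("Precinct 7",  "2020-11-03 07:55"),
]

volume = Counter()
for location, stamp in check_ins:
    hour = datetime.strptime(stamp, "%Y-%m-%d %H:%M").hour
    volume[(location, hour)] += 1

for (location, hour), count in sorted(volume.items()):
    print(f"{location} {hour:02d}:00 - {count} voters")
```

Plotting these hourly counts across locations makes peak periods, and the sites most at risk of lines, easy to spot.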
An “interrupted voting session” is any occurrence that takes the voter off the prescribed path of the ideal voting experience. Interruptions can stem from voter behavior, poll worker challenges, and equipment or ballot operation, so it’s very helpful to separate interruption data by cause in order to know how best to mitigate each one. For equipment that tracks error messages, such as voters making unreadable marks, a log file can provide helpful information about interruptions, when they occurred, and what caused them. Reported challenges or interruptions found in call center data may also be helpful here.
Interruptions to voting can also indicate opportunities for measuring the usability and legibility of ballots and election equipment. The number of voters making too many or too few selections, making unreadable marks, or inserting ballots incorrectly may all be indicators of usability issues with ballots or equipment. Consider a “usability issues” count as a percentage of total votes to capture this important metric.
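The “usability issues as a percentage of total votes” metric is simple arithmetic once the event counts are in hand. The counts below are made up for illustration; in practice they would come from your equipment logs or call center tickets.

```python
# Sketch: usability-issue rate as a percentage of total votes cast.
# All counts here are illustrative placeholders, not real data.
overvotes = 42          # too many selections
undervotes = 175        # too few selections, where flagged
unreadable_marks = 19   # marks the scanner could not read
misfeeds = 8            # ballots inserted incorrectly
total_votes = 18_250

usability_issues = overvotes + undervotes + unreadable_marks + misfeeds
rate = 100 * usability_issues / total_votes
print(f"Usability issues: {usability_issues} "
      f"({rate:.2f}% of {total_votes} votes)")
```

Tracking this rate election over election, broken out by cause, shows whether ballot or equipment design changes are actually reducing voter confusion.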
The images of officials peering at the not-quite-punched holes of ballots fifteen years ago still haunt many election officials. Knowing how your equipment is performing, which ballots worked best with that equipment, and how ballots might be better designed in the future is key for any voting operation.
Machine-level performance is available through log file analysis, as well as through call center ticket analysis. Ballot performance can also be measured through certain equipment log files and ballot design usability studies.
Metrics around voter accommodations can be utilized to better estimate needs for resource allocations. Whether you’re deciding how many of each non-English ballot to provide at a location, or when to staff interpreters at polling locations, real data around utilization can help optimize your Election Day resources.
Consider metrics that show peak voting times and places for those utilizing provided accommodations. These may include accessible voting equipment (such as audio features, high contrast screen, specialized equipment, and large print ballots) or ballots and equipment interfaces that are in languages other than English. Peak voting times by language selection; peak voting time at accessible voting equipment; and total number of users per location using voter accommodations can provide helpful insights. Minority language ballot utilization by location may also provide useful information.
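A minimal sketch of the peak-time-by-accommodation metric described above: group accommodation sessions by type, then find each type's busiest hour. The session record format is an assumption; derive the real fields from your equipment logs.

```python
# Sketch: peak hour per accommodation type from session records.
# The (accommodation, hour) records are hypothetical examples.
from collections import Counter, defaultdict

sessions = [
    ("audio ballot", 9), ("audio ballot", 9), ("audio ballot", 14),
    ("Spanish ballot", 10), ("Spanish ballot", 17), ("Spanish ballot", 17),
]

by_type = defaultdict(Counter)
for accommodation, hour in sessions:
    by_type[accommodation][hour] += 1

for accommodation, hours in by_type.items():
    peak_hour, count = hours.most_common(1)[0]
    print(f"{accommodation}: peak at {peak_hour:02d}:00 ({count} sessions)")
```

Adding a location field to each record and grouping on (accommodation, location) extends the same approach to per-site resource planning.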
It seems simple, but most election officials rely largely on anecdotal information to know how long people waited, whether their polls are “open for business” on time on Election Day, and whether they stay open until the mandated poll closing time. Call center check-ins and equipment log messages can provide real data about when your sites are ready for voters.
Review log files showing when voting equipment reports “poll opened” or “poll closed,” or when it otherwise notes that it is ready for voting. Alternatively, review call center logs, or consider having poll workers check in with the call center when their polls are ready for voters. These metrics can best be used in poll worker training: late-opening or early-closing polls may require additional documentation, training, or management to ensure better performance in the next election.
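Extracting open and close times from equipment logs can be as simple as a pattern match over the log lines. The log line format below is an assumption for illustration; real formats vary by vendor, so match the pattern to your equipment's actual messages.

```python
# Sketch: pull "poll opened" / "poll closed" times from an equipment log.
# The log line format is a hypothetical example, not any vendor's format.
import re

log_lines = [
    "2020-11-03 06:58:12 INFO poll opened",
    "2020-11-03 12:01:03 INFO ballot accepted",
    "2020-11-03 19:02:47 INFO poll closed",
]

events = {}
pattern = re.compile(r"^(\S+ \S+) \S+ (poll opened|poll closed)$")
for line in log_lines:
    match = pattern.match(line)
    if match:
        timestamp, event = match.groups()
        events[event] = timestamp

print(events)
```

Comparing each location's extracted open time against the mandated opening time flags the late openers worth following up on in poll worker training.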
How long people waited to vote, and which locations had the longest wait times, are highly valuable insights for your election. For those who can measure it, wait times at various polling place stops (such as in front of a pollbook, at the voting booth, or in front of a scanner) can enrich the total picture.
Voter waiting times and line lengths have proven to be among the most challenging metrics to measure, however, largely because collecting wait-time data often relies on in-person observation or voter surveys, both of which can be inaccurate. Given the challenge of gathering this information, ensure that your method of measuring voting wait times is well documented and consistent. Consider determining the average voting time per voter, and estimating line lengths at different times of day by comparing voter counts to the average voting time.
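One rough way to carry out the estimate suggested above: compare the booth-minutes of demand in an hour (arrivals times average voting time) against the booth-minutes of capacity, and convert any shortfall back into waiting voters. All numbers below are made up for illustration.

```python
# Sketch: rough end-of-hour line-length estimate from arrival counts
# and average voting time. All figures are illustrative placeholders.
arrivals_per_hour = 120      # voters checking in during the 10:00 hour
avg_voting_minutes = 6.5     # measured average time per voting session
booths = 10                  # voting stations at this location

# Demand versus capacity, both in booth-minutes, for the hour.
demand = arrivals_per_hour * avg_voting_minutes
capacity = booths * 60
backlog_minutes = max(0.0, demand - capacity)

# Voters still waiting at the end of the hour, if demand exceeds capacity.
waiting_voters = backlog_minutes / avg_voting_minutes
print(f"Estimated backlog at end of hour: {waiting_voters:.0f} voters")
```

This is a coarse steady-state approximation, not a queueing model; it is most useful for comparing hours and locations against each other rather than for predicting exact wait times.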
As in all data analyses, these five metrics come with challenges. The V.I.T.A.L.s will give a helpful picture of election success, but they are limited by what we can measure consistently and accurately. A few specific limitations:
To reduce blind spots in your analysis, regularly re-assess both your measures and their limitations as you go. Ask “What aren’t we capturing that we should be?” and update your reporting accordingly as new data becomes available. The V.I.T.A.L.s are a starting place for election data analysis, not the ending place.
To ensure success with these five metrics, consider the following strategies for each metric:
These metrics don’t require a single type of hardware or software, and are not dependent on a specific voting process. They can be analyzed using different data sources, depending on the type of voting (in-person versus by mail), the equipment used (direct recording electronic – DRE versus paper ballots with electronic scanners versus all paper) and where votes are counted (precinct count versus central count).
While understanding total voter volumes, equipment challenges, and poll worker actions is useful, drilling down to a specific location, or even a specific machine, creates a “magnet for the needle in the haystack.” Increase the level of granularity of your data whenever possible.
“Real time” Election Day reporting can be incredibly helpful in providing voters and officials with information about election results and voter turnout. But Election Day isn’t the time for making long-term operational changes, so a post-election analysis is useful as well. A complete analysis of one’s election allows for planning, training, and outreach activities leading up to the next election.
ARCHIVE SITE DESCRIPTION AND DISCLAIMER
This page, and related pages, represent archived materials (pages, documents, links, and content) that were produced and/or provided by members of public working groups engaged in collaborative activities to support the development of the Voluntary Voting System Guidelines (VVSG) 2.0. These TWiki activities began in 2015 and continued until early 2020, during which time this content was hosted on a Voting TWiki site. That TWiki site was decommissioned in 2020 due to technology migration needs. The working-group activities that generated this content ceased to operate through the TWiki when the draft VVSG 2.0 was released, in February of 2020. The historical pages and documents produced there have now been archived in read-only, static form.
ARCHIVED VOTING TWIKI SITE MATERIALS
This wiki was a collaborative website. NIST does not necessarily endorse the views expressed, or concur with the facts presented on these archived TWiki materials. Further, NIST does not endorse any commercial products that may be mentioned in these materials. Archived material on this TWiki site is made available to interested parties for informational and research purposes. Materials were contributed by Participants with the understanding that all contributed material would be publicly available. Contributions were made by Participants with the understanding that no copyright or patent right shall be deemed to have been waived by such contribution or disclosure. Any data or information provided is for illustrative purposes only, and does not imply a validation of results by NIST. By selecting external links, users of these materials will be leaving NIST webspace. Links to other websites were provided because they may have information that would be of interest to readers of this TWiki. No inferences should be drawn on account of other sites being referenced, or not referenced, from this page or these materials. There may be other websites or references that are more appropriate for a particular reader's purpose.