
Post Election Equipment and Operational Analysis


I noticed that there isn't currently a process defined (that I can see) for post-election equipment and operational audits. I'm posting this topic as a place for people to discuss potential options for post-election equipment and operational audits, in hopes of collaboratively developing a process for this component of the project.

At ES&S, we've been working on a set of standard metrics that any election operation can use as part of its post-election analysis. We call them the V.I.T.A.L.s, as they represent a fairly complete retrospective view of an election and can assist in planning future election operations. Here is what we've developed. The following is an excerpt from an upcoming white paper on the subject of election analysis.

Five Metrics for Election Success

How to Know What to Measure

What makes a “successful election” for a voter? Voter experience studies, including the comprehensive academic work of the Caltech/MIT Voting Technology Project and jurisdiction efforts such as the Voter Systems Assessment Project in Los Angeles County, show that much of the feedback from voters comes down to four clear statements of success. Many voters feel they’ve had a successful day at the polls when they can say the following:

  1. I knew how and where to vote.
  2. I voted with little or no trouble.
  3. I trust that my vote counted.
  4. The election results were provided in a timely manner.

These statements underscore the key drivers for success in election operations: access to critical information, convenience, accuracy, and transparency.

Five V.I.T.A.L. Signs for Election Success

Measuring against these key drivers can be challenging, but the emergence of better data makes it possible right now. As part of a comprehensive approach to providing election officials with better data analysis for their elections, Election Systems & Software developed five key metrics that map to these key drivers of success. They are derived not only from the excellent work done in the aforementioned voting studies, but also from extensive conversations with customers that highlighted the challenges “keeping election officials up at night.” The list is by no means exhaustive; further metrics around accessibility, accuracy of the count, security, voter experience, and election efficiency are certainly available. For election officials who want to quickly and accurately measure vital performance factors, however, these five metrics are a great starting point.

The five V.I.T.A.L. metrics are:

  1. Voting Volumes: how many people voted, at what times and in what places?
  2. Interruptions to Voting: what percentage of voting sessions included an interruption, and what were the top challenges that voters encountered?
  3. Tabulator Performance: what was the percentage of “up time” for voting equipment, and what were the top challenges with equipment or ballots?
  4. Accommodations for Voters: How many people utilized language or capability accommodations, at what times and in what places?
  5. Lines, Poll Opening and Closing Success: what percentage of polls were opened and closed on time? how long did people wait to vote?

Let’s review each of these key metrics in depth:

  1. Voting Volumes: how many people voted, at what times and in what places?

Measuring voting volumes can be key to pre-empting long lines and creating a smooth voter flow throughout Election Day. Voting volumes can be measured by the number of voters checking in at a voting location at a specific time, or by the flow of voters starting and completing voting sessions on your voting equipment. Consider average number of voters per hour per location as a starting point.
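As a minimal sketch of the "voters per hour per location" starting point, the snippet below buckets check-in records by location and hour of day. The record format and the location names are illustrative assumptions, not a real e-pollbook export schema.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical check-in records as (location, ISO timestamp) pairs;
# the format and names here are illustrative assumptions.
check_ins = [
    ("Precinct 12", "2020-11-03T07:05:00"),
    ("Precinct 12", "2020-11-03T07:40:00"),
    ("Precinct 12", "2020-11-03T08:10:00"),
    ("Library Annex", "2020-11-03T07:15:00"),
]

def voters_per_hour(records):
    """Count check-ins per (location, hour-of-day) bucket."""
    buckets = defaultdict(int)
    for location, ts in records:
        buckets[(location, datetime.fromisoformat(ts).hour)] += 1
    return dict(buckets)

counts = voters_per_hour(check_ins)
# counts[("Precinct 12", 7)] == 2, i.e. two check-ins in the 7 a.m. hour
```

The same bucketing works whether the timestamps come from check-in stations or from voting-session start events on equipment, as described above.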

  2. Interruptions to Voting: what percentage of voting sessions included an interruption, and what were the top challenges that voters encountered?

An “interrupted voting session” is any occurrence that takes the voter off the prescribed path of the ideal voting experience. Interruptions can stem from voter behavior, poll worker challenges, and equipment or ballot operation, so it’s very helpful to separate interruption data by cause in order to know how best to mitigate each one. For equipment that tracks error messages, such as voters making unreadable marks, a log file can provide helpful information about interruptions, when they occurred, and what caused them. Reported challenges or interruptions found in call center data may also be helpful here.

Interruptions to voting can also indicate opportunities for measuring usability and legibility of ballots and election equipment. The number of voters making too many or too few selections, making unreadable marks, or inserting ballots incorrectly may all be indicators of usability issues with ballots or equipment. Consider a “usability issues” count as a % of total votes to capture this important metric.
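The "usability issues as a percentage of total votes" count could be computed roughly as follows. The cause categories are illustrative assumptions; real tabulator logs will use their own event names.

```python
def usability_issue_rate(event_counts, total_votes):
    """Return usability-related events as a percentage of total votes.

    event_counts maps an interruption cause to its occurrence count,
    e.g. tallied from tabulator logs; the categories are illustrative.
    """
    usability_causes = {"overvote", "undervote", "unreadable_mark", "misfeed"}
    issues = sum(n for cause, n in event_counts.items() if cause in usability_causes)
    return 100.0 * issues / total_votes

rate = usability_issue_rate(
    {"overvote": 12, "unreadable_mark": 8, "power_loss": 1}, total_votes=4000
)
# 20 usability-related events out of 4,000 votes -> 0.5%
```

Note that non-usability causes (here, "power_loss") are deliberately excluded, mirroring the separation of interruption data by cause discussed above.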

  3. Tabulator Performance: what was the performance level for voting equipment, and what were the top challenges with equipment or ballots?

The images of election officials peering through the not-quite-punched holes of ballots fifteen years ago still haunt many election officials. Knowing how equipment is performing, which ballots worked best with your equipment, and how ballots might be designed better in the future is a key metric for any voting operation.

Machine-level performance is available through log file analysis, as well as through call center ticket analysis. Ballot performance can also be measured through certain equipment log files and ballot design usability studies.

  4. Accommodations for Voters: How many people utilized language or capability accommodations, at what times and in what places?

Metrics around voter accommodations can be utilized to better estimate needs for resource allocations. Whether you’re deciding how many of each non-English ballot to provide at a location, or when to staff interpreters at polling locations, real data around utilization can help optimize your Election Day resources.

Consider metrics that show peak voting times and places for those utilizing provided accommodations. These may include accessible voting equipment (such as audio features, high contrast screen, specialized equipment, and large print ballots) or ballots and equipment interfaces that are in languages other than English. Peak voting times by language selection; peak voting time at accessible voting equipment; and total number of users per location using voter accommodations can provide helpful insights. Minority language ballot utilization by location may also provide useful information.
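A simple way to get per-location utilization counts is to tally accommodation-usage events by location and type, as sketched below. The event records and accommodation names are illustrative assumptions.

```python
from collections import Counter

# Hypothetical accommodation-usage events as (location, accommodation)
# pairs, e.g. derived from equipment logs; names are illustrative.
events = [
    ("Precinct 12", "audio_ballot"),
    ("Precinct 12", "spanish_ballot"),
    ("Precinct 12", "spanish_ballot"),
    ("Library Annex", "large_print"),
]

# Utilization per (location, accommodation) pair, and totals per location.
usage_by_location = Counter(events)
total_per_location = Counter(loc for loc, _ in events)
```

Adding a timestamp to each event and bucketing by hour, as in the voting-volumes metric, would yield the peak-time views suggested above.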

  5. Lines, Poll Opening and Closing Success: what percentage of polls were opened and closed on time? how long did people wait to vote?

It seems simple, but most election officials rely largely on anecdotal information to know how long people waited, whether their polls are “open for business” on time on Election Day, and whether they stay open until the mandated poll closing time. Call center check-ins and equipment log messages can provide real data about when your sites are ready for voters.

Review log files showing when voting equipment reports “poll opened” or “poll closed,” or when it otherwise notes that it is ready for voting. Alternatively, review call center logs, or consider having poll workers check in with the call center when their polls are ready for voters. These metrics are best used in poll worker training: late-opening or early-closing polls may require additional documentation, training, or management to ensure better performance in the next election.
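Once "poll opened" and "poll closed" times have been extracted from logs, the on-time percentage is a straightforward comparison against the mandated hours. The mandated times and the record schema below are illustrative assumptions.

```python
from datetime import time

MANDATED_OPEN = time(7, 0)    # assumed mandated hours, for illustration
MANDATED_CLOSE = time(20, 0)

# Hypothetical "poll opened" / "poll closed" times extracted from
# equipment log files; the schema is an assumption.
polls = {
    "Precinct 12": {"opened": time(6, 58), "closed": time(20, 0)},
    "Library Annex": {"opened": time(7, 25), "closed": time(19, 45)},
}

opened_on_time = sum(1 for p in polls.values() if p["opened"] <= MANDATED_OPEN)
closed_on_time = sum(1 for p in polls.values() if p["closed"] >= MANDATED_CLOSE)
pct_opened_on_time = 100.0 * opened_on_time / len(polls)
# Here one of two polls opened on time -> 50%
```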

Knowing how long people waited to vote, and which locations had the longest waits, provides highly valuable insight into your election. For those who can measure it, wait times at the various polling place stops (such as in front of a pollbook, at the voting booth, or in front of a scanner) can enrich the total picture.

Voter waiting times and line lengths have proven to be among the most challenging metrics to measure, however, largely because collecting wait-time data relies on either in-person observation or voter surveys, both of which can be inaccurate. Given this challenge, ensure that your method of measuring voting wait times is well documented and consistent. Consider determining the average voting time per voter, and estimating line lengths at different times of day by comparing voting numbers to that average voting time.
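The "compare voting numbers to average voting time" estimate can be sketched as a simple capacity model: the line grows whenever hourly arrivals exceed what the booths can serve. This is a back-of-the-envelope illustration under assumed inputs, not a full queueing analysis (which would also need arrival-time variance).

```python
def queue_growth_per_hour(arrivals_per_hour, booths, avg_voting_minutes):
    """Voters per hour added to the line when arrivals exceed capacity.

    All inputs are illustrative assumptions; a jurisdiction would take
    arrivals from volume data and the average from observed timings.
    """
    capacity_per_hour = booths * 60.0 / avg_voting_minutes
    return max(0.0, arrivals_per_hour - capacity_per_hour)

# 10 booths at 6 minutes per voter can serve 100 voters/hour, so 120
# arrivals/hour would lengthen the line by roughly 20 voters each hour.
growth = queue_growth_per_hour(120, 10, 6)
```

Combined with the hourly volume buckets from the Voting Volumes metric, this gives a rough per-hour line-length trend by location.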

Key Challenges to Election Measurement

As in all data analyses, these five metrics come with challenges. The V.I.T.A.L.s will give a helpful picture of election success, but they are limited by what we can measure consistently and accurately. A few specific limitations:

  • Measurability: Some measures of election success are simply more challenging to measure than others. Measures that require extensive staffing resources, specialized equipment, or a challenging measurement process should be considered carefully before using.
  • Variance in Election Operations: Every jurisdiction operates differently. We’ve tried to capture key metrics that can be measured with a variety of different data points, but still, election processes vary. What works for one jurisdiction may not translate perfectly to another.
  • Voter Privacy: Measuring election success simply cannot come at the cost of voter or ballot privacy. No measure is worth gathering if it compromises this vital element of our election system.
  • Ease of Reporting: Data is only as useful as its ability to be consumed and shared with stakeholders. In selecting the measures you’ll use, consider the tools you’ll use to report on the data. Keeping your measures constrained to a set that you can report on with a single tool or process will help make the analysis more useful to your team.

To reduce blind spots in your analysis, regularly re-assess your chosen measures and their limitations. Ask the question “What aren’t we capturing that we should be?” and adjust your reporting accordingly as data becomes available. The V.I.T.A.L.s are a starting place for election data analysis, not the ending place.

Best Practices for Success

To ensure success with these five metrics, consider the following strategies for each metric:

Tailor your metrics to your operation

These metrics don’t require a single type of hardware or software, and are not dependent on a specific voting process. They can be analyzed using different data sources, depending on the type of voting (in-person versus by mail), the equipment used (direct recording electronic – DRE versus paper ballots with electronic scanners versus all paper) and where votes are counted (precinct count versus central count).

Granularity wins: work as small as you can

While understanding total voter volumes, equipment challenges, and poll worker actions is useful, drilling down to a specific location, or even a specific machine, creates a “magnet for the needle in the haystack.” Increase the level of granularity of your data whenever possible.

Combine real-time data and historical information for a complete analysis

“Real time” Election Day reporting can be incredibly helpful in providing voters and officials with information about election results and voter turnout. But Election Day isn’t the time for making long-term operational changes, so a post-election analysis is useful as well. A complete analysis of one’s election allows for planning, training, and outreach activities leading up to the next election.

Voting TWiki Archive (2015-2020): read-only, archived wiki site, National Institute of Standards and Technology (NIST)


This page, and related pages, represent archived materials (pages, documents, links, and content) that were produced and/or provided by members of public working groups engaged in collaborative activities to support the development of the Voluntary Voting System Guidelines (VVSG) 2.0. These TWiki activities began in 2015 and continued until early 2020. During that time period, this content was hosted on a Voting TWiki site. That TWiki site was decommissioned in 2020 due to technology migration needs. The TWiki activities that generated this content ceased to operate actively through the TWiki at the time the draft VVSG 2.0 was released, in February of 2020. The historical pages and documents produced there have been archived now in read-only, static form.

  • The archived materials of this TWiki (including pages, documents, links, content) are provided for historical purposes only.
  • They are not actively maintained.
  • They are provided "as is" as a public service.
  • They represent the "work in progress" efforts of a community of volunteer members of public working groups collaborating from late 2015 to February of 2020.
  • These archived materials do not necessarily represent official or peer-reviewed NIST documents nor do they necessarily represent official views or statements of NIST.
  • Unless otherwise stated these materials should be treated as historical, pre-decisional, artifacts of public working group activities only.
  • NIST does not warrant or make any representations regarding the correctness, accuracy, reliability or usefulness of the archived materials.


This wiki was a collaborative website. NIST does not necessarily endorse the views expressed, or concur with the facts presented on these archived TWiki materials. Further, NIST does not endorse any commercial products that may be mentioned in these materials. Archived material on this TWiki site is made available to interested parties for informational and research purposes. Materials were contributed by Participants with the understanding that all contributed material would be publicly available. Contributions were made by Participants with the understanding that no copyright or patent right shall be deemed to have been waived by such contribution or disclosure. Any data or information provided is for illustrative purposes only, and does not imply a validation of results by NIST. By selecting external links, users of these materials will be leaving NIST webspace. Links to other websites were provided because they may have information that would be of interest to readers of this TWiki. No inferences should be drawn on account of other sites being referenced, or not referenced, from this page or these materials. There may be other websites or references that are more appropriate for a particular reader's purpose.


Created August 28, 2020, Updated February 5, 2021