Testimony for the Voter Assistance Commission Hearing, June 28, 2007

New York, NY

 

FROM:  Diana Finch

 

I am a literary agent, and several of my clients write about voting and elections.  I am here today to present the findings of a recent PhD thesis, submitted this May by Sarah P. Everett at Rice University in Houston, Texas:  THE USABILITY OF ELECTRONIC VOTING MACHINES AND HOW VOTES CAN BE CHANGED WITHOUT DETECTION.

 

The thesis is available online as a PDF download at the following URL: http://chil.rice.edu/research/pdf/EverettDissertation.pdf  Everett's study focuses specifically on DRE (direct recording electronic) voting machines, the review screen function, and whether voters catch mistakes made on the screen, whether by the voter or by the machine.

 

Everett explains that most voting machine critiques focus on accuracy and security.  But she maintains that 'usability' is equally critical.  Usability means that voters must actually be able to cast their votes as intended, without having obstacles placed in their way.  I believe this speaks directly to the mandate of the Voter Assistance Commission: that no obstacles be put in the way of voters in New York City.

 

The usability problems that threaten elections are undervotes, overvotes, and votes for the wrong candidate.  Everett reminds us how serious these problems are by analyzing the results of the 2000 Presidential election, showing that problems in these three areas led to a result different from the one the voters intended.

 

From the thesis:  "Voting on Election Day can be a stressful experience for many, possibly leading to emotional reactions like anxiety and frustration.  Polling places can be loud and distracting, and voters may feel time pressure to complete their ballots quickly. Studies have shown that high stress leads to errors.  Noise slows performance on manual tasks, and high levels of time pressure speed performance but at the expense of accuracy."

 

Everett focuses on a very specific aspect of the voting experience:  whether voters notice changes made to their ballots.  She pays special attention to the review screen on the DREs, where, in technical terms, voters must distinguish a mistake amongst all the correctly displayed choices - a 'signal' amongst background 'noise.'

 

The thesis is based on three studies of simulated elections, conducted on the VoteBox system, where voters used a mouse rather than a touch screen to make their selections.

 

The first study compared DREs that offered voters the opportunity to review their choices before casting their ballots with DREs lacking this review function.  Everett's significant finding: participants were most satisfied with the DRE experience (as opposed to bubble ballots, lever machines, or punch cards).  But what they were satisfied with was the experience, not the result - the test voters, all tech-savvy college students in this first study, did not know whether or not there were any errors in their vote tallies.

 

The second study looked at whether voters noticed when races were added to or missing from the review screen.  We should note that these test voters were highly motivated to vote correctly - they understood that this was a study and knew what it was measuring.  Those who found the most errors were the oldest voters, who spent the most time on the review screen.  Yet only 75% of these careful voters found the errors; one out of four voters missed them.  Over 27% of the ballots contained at least one error - a decisive amount, particularly in a close election.

 

But the most disturbing finding of the second study was that among voters with only a high school education or less, only 9% - fewer than one in ten - found the errors.

 

The third study looked at whether voters noticed when changes to their votes were inserted into the review screen.  Overall, fewer than half the voters noticed the changes.  Among those with a high school education or less, only one-third noticed.

 

The group that missed the most changes was the voters who did not follow news of voting machine security problems: only 15% of them noticed the inserted changes.  Yet even among those who follow this news closely, only just over half - 54% - noticed the changes.  Why do voters not notice the inserted changes?  Because they do not check the review screen.

 

Overall, in the second and third studies, up to eight races were added or changed, and fewer than 40% of the voters noticed the changes.

 

Everett also cites another study in her thesis, in which 6% of actual voters on DREs with review screens walked away without having cast their votes.  They completed their selections and checked the review screen, but thought they were done at that point and failed to make the final selection, 'cast your vote.'  And 6% - or even 3% - can be a margin of victory in many close races.

 

It is interesting that Everett makes a distinction between what voters like and what works for voters.  Voters may like the experience of voting on a touch screen, and studies show that indeed they do.  However, the important thing is not creating a pleasant experience but electing a legitimate government.  In spite of voters' best intentions, DRE review screens do not work to create legitimate election results.  Everett thinks it highly unlikely that voters will notice changes on a print-out - a VVPAT, or voter-verified paper audit trail - if they do not notice them on the review screen in front of them.  And if a system fails to help voters catch errors, the very legitimacy of our government is called into question.