Voting System Transparency and Security:

Comments on the Sept 20 hearing

Part of the Voting and Elections web pages, http://homepage.cs.uiowa.edu/~dwjones/voting/
by Douglas W. Jones
THE UNIVERSITY OF IOWA Department of Computer Science

Submitted to the
U. S. Election Assistance Commission
Technical Guidelines Development Committee
September 23, 2004

 

General

Many good things were said at the EAC Technical Guidelines Development Committee hearing on Sept 20, 2004, in Gaithersburg, and I endorse much of what was said. Craig Burkhardt's recognition of the constituencies or customer groups for election equipment was quite valuable -- he is right that election administrators and pollworkers, candidates and committees, voters, and the press each make different demands on voting systems. In fact, this observation reinforces David Chaum's suggestion of multiple rating scales for voting systems -- one could easily imagine ratings that are specific to the needs of each of these groups, so that, for example, a voting system could be rated on the quality of its election administrator interface quite separately from the quality of its voter interface, the quality of the information released to the press, and so on.

Paul Craft's statement that we have traded a small increase in voter convenience for a large increase in difficulty for the election administrators reinforces my own observations about the impact of technology on transparency; his suggestion of a "dissertation defense model" of source code audit disclosure is an extremely valuable idea.

Fred Berghoefer of the Arlington County, Virginia, election office had some very valuable observations, illustrating the real value of inviting public comment. His suspicion of including time-of-day clocks in voting equipment is justified -- I would suggest that access to time information is something that should be closely inspected; it may be needed to time-stamp audit-log entries or to help seed pseudo-random number generators, but outside these narrow confines, such access should be closely guarded, and there may well be large components of the voting system that should have no access at all to the date or time.
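
As a minimal sketch of what this kind of compartmentalization might look like in software (the class and module names here are hypothetical, not drawn from any actual voting system), one could hand a narrow clock interface only to the components that legitimately need it, so that a code audit need only check which components are given that interface:

    # Hypothetical sketch: only the audit log (and, say, a PRNG seeder) would
    # receive a clock; the vote-recording path is constructed without one.
    import time

    class TimeSource:
        """The only sanctioned way to read the clock."""
        def now(self) -> float:
            return time.time()

    class AuditLog:
        def __init__(self, clock: TimeSource):
            self._clock = clock
            self._entries = []

        def record(self, event: str) -> None:
            # Time-stamping audit-log entries is a legitimate use of the clock.
            self._entries.append((self._clock.now(), event))

    class BallotRecorder:
        """Deliberately constructed with no access to date or time."""
        def __init__(self):
            self._ballots = []

        def cast(self, ballot: dict) -> None:
            self._ballots.append(ballot)

    clock = TimeSource()
    log = AuditLog(clock)
    recorder = BallotRecorder()
    log.record("polls opened")
    recorder.cast({"hypothetical contest": "hypothetical candidate"})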

Some of the things said at the hearing require additional comment, and I will focus on those in the remainder of these comments.

Questions about Mark-Sense ballots

Michael Shamos and Fred Berghoefer both commented about problems with optical mark-sense ballot scanners. Fred Berghoefer's comments on residual vote rates on mark-sense absentee ballots accurately describe what I would expect from older central-count mark-sense tabulating machines using infrared mark sensors, and the evolution in ballot processing rules that he described as having taken place in Arlington, Virginia, shows a remarkable convergence with the absentee ballot processing rules that are currently in place in Miami, Florida, and that I have long advocated. (See my recommendations at http://homepage.cs.uiowa.edu/~dwjones/voting/optical/.)

I believe that the time has come to ban the use of infrared optical mark-sense ballot scanners. In the late 1960s and early 1970s, when this technology was developed, infrared emitter-photosensor pairs may have been a reasonable choice, but today, we have nearly white LEDs and broad-spectrum photosensors on the market that can easily sense any mark that would be visible to the human eye, and we ought to demand their use.

I also believe that election authorities should not rely entirely on the voting system vendor's standard calibration protocols for mark-sense scanners, but should routinely test scanner calibration as part of their pre-election tests. Particularly with absentee ballots, no matter how voters are instructed to mark their ballots and make corrections, many voters will use whatever pens or pencils are convenient, and some will make corrections by erasing, using white-out, or similar means. It is very appropriate, therefore, to test the calibration of mark-sense scanners with a variety of nonstandard ballot markings. I have documented the tests I used in the August 13, 2004 pre-election tests in Miami-Dade County in section 8 (page 15) of my Observations and Recommendations on Pre-election testing in Miami-Dade County (see http://homepage.cs.uiowa.edu/~dwjones/voting/miamitest.pdf).
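
A minimal sketch of the kind of calibration check I have in mind follows; the darkness readings and the counting threshold are entirely hypothetical, not measurements of any real scanner, but they show how a deck of nonstandard test marks can be compared against the threshold at which the scanner counts a mark:

    # Hypothetical calibration check: the darkness readings and the threshold
    # are illustrative assumptions, not data from any particular scanner.
    TEST_MARKS = {
        "filled oval, supplied marker": 0.95,
        "single pencil stroke":         0.40,
        "light pencil check mark":      0.30,
        "erased mark (residue)":        0.12,
        "white-out over a mark":        0.05,
    }

    COUNT_THRESHOLD = 0.35  # scanner counts a mark at or above this darkness

    def classify(darkness: float) -> str:
        return "counted" if darkness >= COUNT_THRESHOLD else "not counted"

    for description, darkness in TEST_MARKS.items():
        print(f"{description:30s} -> {classify(darkness)}")

    # One would typically insist that a single pencil stroke be counted while
    # erasure residue is not; with these assumed numbers, this calibration
    # passes that check.
    assert classify(TEST_MARKS["single pencil stroke"]) == "counted"
    assert classify(TEST_MARKS["erased mark (residue)"]) == "not counted"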

Jurisdictions that follow these recommendations, eliminating infrared ballot scanners, referring all ballots containing overvotes to the canvassing board, and insisting on scanner calibration thresholds that accept a single pencil stroke across the voting target, should reduce the residual vote on mark-sense absentee ballots to an acceptable level.

Sequoia's Voter Verifiable Paper Trail

The new voter-verifiable paper trail technology just introduced for use in Nevada came under fire from several people, among them Michael Shamos. He pointed out three problems: first, that machine-readable marks such as bar codes on the paper compromise voter verifiability; second, that a voter-verifiable paper trail that most voters fail to verify is of no great value; and finally, that maintaining a continuous tape of ballots preserves the order of the votes and therefore destroys anonymity.

Bar codes

I agree entirely with the first criticism. Everything printed on a voter-verifiable paper ballot should be readable by the voter. Given that the range of type fonts used on such ballots will be very limited, there is no reason to use bar codes or other non-human-readable information on the ballot; any machine counting of such paper can and should be done from the human-readable text itself.

Most voters don't bother to verify

I am not so sure about the second criticism. While it would be good if every voter verified their ballot, we can gain considerable confidence in the system if a substantial fraction of the voters take the time to do so. With current DRE systems, the strongest competing model for voting system validation is to run parallel tests on election day with some small fraction of the voting systems. Let us imagine that 1 percent of the voting machines are pulled, at random, for parallel testing on election day. I have discussed parallel testing in my Recommendations for the Conduct of Elections in Miami-Dade County using the ES&S iVotronic System, section 7 (see http://homepage.cs.uiowa.edu/~dwjones/voting/miami.pdf), and in my Parallel Testing: A menu of options (see http://homepage.cs.uiowa.edu/~dwjones/voting/miamiparallel.pdf).

If just 1 percent of the voters inspect their voter-verifiable ballots and would be willing to complain vocally if they saw problems, then the voter-verifiable model will test the machines more thoroughly than the parallel testing model. In fact, as Jim Adler has shown in Confidence -- What it is and how to achieve it, presented at the NIST Symposium on Building Trust and Confidence in Voting Systems in December 2003 (see http://www.votehere.net/papers/NIST_121003.pdf), random voters checking 1 percent of the ballots offer a far stronger test than random testing of 1 percent of the machines.
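
To make the comparison concrete, here is a back-of-the-envelope calculation of my own; the model is deliberately crude and the numbers are only illustrative, not a restatement of Adler's analysis. Suppose a single corrupted machine miscounts m of the ballots cast on it. Pulling 1 percent of the machines for parallel testing catches that machine with probability about 0.01, while independent verification of the paper record by a random 1 percent of voters catches it with probability 1 - 0.99^m:

    # Back-of-the-envelope comparison under crude, illustrative assumptions:
    # one machine miscounts m of its ballots.

    def p_parallel_testing(fraction_of_machines_tested=0.01):
        # The corrupted machine is caught only if it happens to be pulled.
        return fraction_of_machines_tested

    def p_voter_verification(m, fraction_of_voters_checking=0.01):
        # Caught if at least one of the m miscounted ballots is inspected by
        # a voter who notices the error and complains.
        return 1.0 - (1.0 - fraction_of_voters_checking) ** m

    for m in (1, 10, 50, 100):
        print(f"m = {m:3d}   parallel testing: {p_parallel_testing():.3f}"
              f"   voter verification: {p_voter_verification(m):.3f}")

    # With 100 miscounted ballots, 1 percent voter verification catches the
    # machine with probability near 0.63, versus 0.01 for pulling 1 percent
    # of the machines.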

Of course, I would prefer to see voter-verifiable paper ballots that were inviting enough to attract more voters to inspect them, but the fact that most voters do not take the time to do so is not a sufficient reason to write off the technology out of hand.

Voter privacy on scrolled paper ballots

Finally, there is the question of whether recording paper ballots on a scroll destroys voter privacy. I disagree that it necessarily does so; the answer depends on a crucial point of law, on the one hand, and on the procedures used, on the other.

There are two working definitions of the voter's right to a secret ballot, as I have pointed out in Auditing Elections in the Oct. 2004 issue of Communications of the ACM (see http://homepage.cs.uiowa.edu/~dwjones/voting/cacm2004.shtml). One legal model makes secrecy an absolute right. "The ... voting system shall secure that the votes in the ... ballot box ... remain anonymous, and that it is not possible to reconstruct a link between the vote and the voter" (quoted from the July 13, 2004 draft Recommendation of the Committee of Ministers to member states on legal, operational and technical standards for E-voting, Appendix I, paragraph 17). This definition is absolute, speaking in terms of possibility.

The other definition of the right to a secret ballot is provisional, as pioneered in the British Ballot Act of 1872, where the law actually requires that each ballot be tied to the voter who cast it, but then makes that connection a closely held secret. If we accept a provisional definition of ballot secrecy, as is the case in many jurisdictions today, we can work out procedures for retaining ballot secrecy despite the fact that the voting system preserves the order of the votes. If we require absolute secrecy, we cannot do this.

I believe that the decision to adopt absolute secrecy or provisional secrecy is essentially a political decision, and we should not make this decision in the voting system standards. What we must do, however, is identify those features of voting systems that make ballot secrecy provisional and work to prevent jurisdictions from unknowingly adopting voting systems where ballot secrecy is provisional when their laws require absolute secrecy.

I should note that David Chaum's and Jim Adler's schemes for end-to-end voter verifiability also involve provisional ballot secrecy because, if the voter and the custodians of the cryptographic keys for these systems collude, they can, together, prove how that voter voted.

In the case of the Sequoia voter-verifiable paper ballot system, one administrative procedure that would preserve the voter's right to a provisionally secret ballot is as follows:

Step 1: At the close of the polls, the spool of voted paper ballot records is sealed with a tamper-evident adhesive seal, and held in secure custody until such time as a recount or election audit requires its examination.

Step 2: In the event of a recount or an audit of the election data from this particular voting machine, the seal is broken and then the ballot spool is unrolled over a ballot box, printed side down, and cut into separate paper ballots which are then shuffled before examination.

To do this, the dividing line between ballots must be visible from the back of the tape. This procedure can, of course, be strengthened, for example, by using a machine to cut the ballots, or by cutting twice and discarding the confetti so that alignment of paper fibers cannot be used to reconstruct the connection between ballots, but the basic idea should suffice if we are willing to make ballot secrecy conditional on the correct conduct of this procedure.
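
A small simulation may help make the point; the ballot contents below are invented, and the code is only a toy model of the physical procedure, but it shows that once the ballots are cut apart and shuffled, the vote totals survive while the casting order, which is what could link a ballot to a voter's place in line, does not:

    # Toy model of the procedure above: the spool preserves the order in which
    # votes were cast; cutting and shuffling discards that ordering.
    import random

    # The spool, in casting order (invented example data).
    spool = [("cast order %d" % i, choice)
             for i, choice in enumerate(["A", "B", "A", "A", "B", "A"], start=1)]

    # Step 2: cut the spool into separate ballots -- only the ballot contents
    # survive the cut -- and shuffle them before anyone examines them.
    ballots = [choice for _, choice in spool]
    random.shuffle(ballots)

    # The totals are unchanged, but the sequence information is gone.
    print("votes for A:", ballots.count("A"))
    print("votes for B:", ballots.count("B"))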

Auditing Elections

Aviel Rubin commented, in passing, that the paper ballots used in a mark-sense voting system are the audit trail. I disagree with this assertion, and with his use of terminology. In general, I believe that the terminology used by Paul Craft is more accurate, and I have the following general suggestion:

We should eliminate the term audit trail, as it is currently used, from the voting system standards. The term, as currently used, is misleading and should be replaced by the term event log, because the audit trail of a process is properly the totality of all the information an auditor might want to inspect, including event logs, actual ballot images, and all of the paper records surrounding the process. This observation was first made to me by Donald Llopis of the Miami-Dade Elections office.

In the context of precinct-count mark-sense systems, the relevant audit records coming out of the precinct are: the paper ballots themselves, the electronic records from the precinct-count tabulator (for example, as held in a memory pack or PCMCIA card), the paper printout of the election totals printed at the close of the polls, the pollbook or affidavits of eligibility, and the records of the number of ballots distributed to voters and the number of those ballots that were spoiled. (For this discussion, I exclude provisional ballots, but they too must of course be accounted for.)

If the law gives primacy to any one of these records, then a crook attacking the vote collected at that polling place need only corrupt that one record. This is the primary weakness of HR 2239, the Voter Confidence and Increased Accessibility Act of 2003, where subparagraph 4.a.2.B.iii says "The ... paper record ... shall be the official record used for any recount conducted with respect to any election in which the system is used."

In a recount or audit, it is far better to examine the consistency of the multiple records from the precinct. If the number of paper ballots is consistent with the number of ballot-cast records in the event log for the ballot tabulator, and this, in turn, is consistent with the number of signatures in the pollbook, then indeed, it is reasonable to trust the paper records. If, on the other hand, the pollbook, the printout from the ballot tabulator taken when the polls were closed, and the totals extracted from the electronic memory cartridge of the tabulator are in agreement with each other but differ from the paper ballots found in the ballot box, it would be foolish to accept those paper ballots as the definitive record of the election, and reasonable to guess that the ballot box had been tampered with. See my Auditing Elections in the Oct. 2004 issue of Communications of the ACM for additional discussion (http://homepage.cs.uiowa.edu/~dwjones/voting/cacm2004.shtml).
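
As a sketch of the kind of cross-check I am describing (the record names and the counts are hypothetical, invented only for illustration), an auditor might compare the precinct's records roughly as follows:

    # Hypothetical cross-check of the precinct records discussed above;
    # the figures are invented for illustration.
    precinct_records = {
        "paper ballots in the ballot box":        412,
        "ballot-cast events in the event log":    412,
        "total on the close-of-polls printout":   412,
        "totals from the memory cartridge":       412,
        "signatures in the pollbook":             412,
        "ballots issued minus ballots spoiled":   412,
    }

    if len(set(precinct_records.values())) == 1:
        print("All records agree; it is reasonable to trust the paper ballots.")
    else:
        # Report the disagreement so the canvassing board can judge which
        # record was most likely altered; no single record is definitive.
        for name, count in precinct_records.items():
            print(f"{name:40s} {count}")
        print("Records disagree; none should automatically be taken as definitive.")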

The State Certification Process

Dan Wallach commented that he thought that there might be some benefit in standardizing the state certification process. As I have stated in my tutorial on Testing Voting Systems (see http://homepage.cs.uiowa.edu/~dwjones/voting/testing.shtml), this is not a good idea, for two reasons.

The first of these is that the state tests are the place where the voting system's conformance to each state's eccentric requirements is tested. Each of the 50 states has its own legal requirements for voting systems, and there is some variation, for example, in requirements for ballot rotation, straight-party voting, the number of parties that must be accommodated, full-face ballot display, and many other issues. Obviously, to the extent that state laws differ, state testing must differ accordingly.

I believe that the second reason is more important! Up to the point where the voting system is submitted to the states for testing, all of the tests to which it is subjected are predictable. Certainly, this is true of the testing at the ITAs (the independent testing authorities) under the FEC/NASED Voting System Standards, where the vendor knows that each objectively testable criterion in the standards is likely to lead directly to some test.

At the state level, however, the vendors cannot currently guess what tests they will be subjected to. Some states have minimal tests, some rely entirely on the ITAs, but others offer significant and surprising tests. The ITA asks whether the voting system satisfies each objectively measurable criterion in the voluntary Federal standards. The states first ask whether the machine meets the requirements of state law, but having asked this, the examiners in many states are asked to give an opinion about whether the machine will accurately and fairly capture and record the intent of the voters. This is an open-ended requirement that allows them to exercise some creativity in their tests, and across the 50 states, this creativity has led those of us involved in state testing to find significant problems.

What the current system lacks is a channel for the problems uncovered in state testing to be disseminated to other states, the ITAs, those maintaining the voting system standards, and the public. When one state uncovers a problem with a voting system, this should warn other states and the ITAs that they may have overlooked something, and it should warn those who set voting system standards that there may be a deficiency in the standards. The public, of course, has a right to know about the existence of shortcomings in the voting systems purchased in their local jurisdictions!