This article originally appeared in the January 2001 issue (Number 16) of *spark.
What can one say about the recent American election? My impression is that a corrupt political party tried to steal a victory it hadn't earned for a candidate who wasn't fit to be dogcatcher, let alone hold arguably the most powerful political office in the world. However, since my writing for *spark is primarily about technology, not politics, I won't say this.
Instead, one aspect of the fiasco in Florida intrigues me: the debate over the relative accuracy of hand-counted versus machine-read ballots. Owing to a large number of rejected ballots, representatives of presidential candidate Al Gore asked for hand recounts of ballots in three Florida counties. Representatives of candidate George W. Bush responded that machine-read results were more accurate, and that when human beings were involved, bias and error would skew the results.
I could point out that the Bush camp came late to this party: as Governor of Texas, Bush had signed into law a bill allowing the hand counting of election ballots. His opposition to hand counting in Florida--to the point where his representatives argued in court that hand counts were open to fraud--could be seen as opportunistic and hypocritical. But, again, since I'm a tech writer, I won't point this out.
The question this does raise that is appropriate for a technology writer to consider is: what are the relative merits of machines and of people for various tasks?
The assumption behind the Bush camp's rejection of hand counting is that machines are infallible and incorruptible. Neither claim is entirely true. Suppose you're a voter in a Florida booth. You begin to punch a hole on the ballot form, then realize it doesn't represent the candidate you want to vote for, so you fully punch the hole next to your candidate's name instead.
Now, you and I could clearly see the voter's intention. A machine, however, may well read the partially punched hole as a second attempted vote and declare the ballot spoiled. In this way (and many others), error can creep into the behaviour of even the most carefully designed machines.
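To make the point concrete, here is a minimal sketch of that kind of rigid, rules-based counting. The threshold, the data format, and the function names are all invented for illustration; no real tabulator works exactly this way.

```python
# A toy sketch (not any real tabulator's logic) of the kind of rule a
# punch-card counter might apply: any ballot registering more than one
# mark in a race is rejected as an "overvote".

def read_marks(ballot, threshold=0.5):
    """Return candidates whose hole is punched beyond `threshold`.

    `ballot` maps candidate names to how fully the chad is detached,
    from 0.0 (untouched) to 1.0 (cleanly punched).
    """
    return [name for name, punch in ballot.items() if punch >= threshold]

def machine_count(ballot):
    marks = read_marks(ballot)
    if len(marks) == 1:
        return marks[0]   # exactly one mark: count the vote
    return None           # zero or several marks: ballot rejected

# A voter starts to punch one hole (0.6 detached), stops, and cleanly
# punches another (1.0). A person sees the intent; the rule does not.
ballot = {"Candidate A": 0.6, "Candidate B": 1.0}
print(machine_count(ballot))  # None -- the ballot is declared spoiled
```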
As for machines not being corruptible, the past 20 or so years of research into the workings of science (e.g., Latour) and the development of technology (e.g., the technological constructivist studies of Pinch and Bijker) clearly shows that the design of machines is never value-neutral: machines have the assumptions, and therefore the biases, of their designers built into them. The values of the creators of a technology can easily corrupt its use, as when the promise of unlimited nuclear energy turns into the nightmare of nuclear war.
Perhaps another example--a more important one--may help us explore the difference between machine and human actions. One of the more successful areas of artificial intelligence research has been the creation of "expert systems." To build one, a large number of experts on a given subject are interviewed, and the rules of thumb by which they work are distilled and codified into a set of rules that a computer can apply. One area in which expert systems have been applied is medicine. Part of a doctor's routine is listening to a patient's symptoms and choosing a diagnosis from among the many possible illnesses the patient may have. A well-programmed expert system can make the diagnostic procedure much simpler.
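As a rough illustration of what such a codified rule base looks like, here is a toy sketch. The rules and symptoms are invented, and real systems (MYCIN, famously) used far more sophisticated rule languages and measures of certainty.

```python
# A toy illustration (invented rules, not medical advice) of the
# rule-based core of an expert system: each rule pairs a set of
# observed symptoms with the diagnosis it suggests.

RULES = [
    ({"fever", "cough", "aches"}, "influenza"),
    ({"sneezing", "runny nose"}, "common cold"),
    ({"fever", "stiff neck"}, "possible meningitis -- urgent referral"),
]

def suggest_diagnoses(symptoms):
    """Return the diagnoses whose rule conditions are all present."""
    observed = set(symptoms)
    return [diagnosis for condition, diagnosis in RULES
            if condition <= observed]  # rule fires if all its symptoms match

print(suggest_diagnoses(["fever", "cough", "aches", "sneezing"]))
# ['influenza'] -- the machine applies the rules it was given, nothing more
```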
However, the ultimate decision about a diagnosis always rests with the human doctor. There is a practical reason for this: The legal liability for the mistakes of computers which are allowed to make diagnoses has not been determined. There is a more basic reason, though.
We don't trust computers to make important decisions about our lives. While there may be an irrational component to this belief, it is, for the most part, based on a very real difference between machines and people: machines follow rules; human beings exercise judgment.
There are human factors in a medical diagnosis that only a human being can appreciate. It may be a simple matter of knowing the environment in which a patient lives, which can be a factor in illness that doesn't show up in a simple list of symptoms. It may be as complex as interpreting a hysterical patient's description of his or her symptoms. Rules-based computing makes sorting through lists of symptoms and diagnoses quick and easy; dealing with the messiness of human existence requires human judgment.
Judgment is the ability to make decisions in situations where the rules are fuzzy or simply do not apply. It also includes a set of rules over and above those that can be programmed (for instance, knowing when to draw on information from outside the rules). Ultimately, it takes in all of the human experience that doctors apply to their work, and the knowledge of when to apply it. For expert systems (the cutting edge of current artificial intelligence research, since they have actually worked in limited capacities), this is simply not possible.
In the case of the election, the voting machines used a very simple set of rules to determine how ballots were filled out. Imagine how much human reality they missed.
I like computers. I work with computers. I play with computers. But, for anything important, I trust human beings.
Can’t get enough of *spark? For more intellectual stimulation, go to the source: *spark online.