Chapter 1: Who're You Calling a Dummy?

Addison Wesley Professional

"That'll never sell," I sneered at the title in the bookstore. "Who would publicly buy a book that proclaimed to all the world that he's a dummy? It'd be like buying condoms labeled 'extra small.' "

We all know how that one turned out, don't we? DOS for Dummies and its companion, Windows for Dummies, became the best-selling computer books of all time. The concept has spread to fields far beyond computing, with titles as disparate as Wine for Dummies, Saltwater Aquariums for Dummies, and Breast Cancer for Dummies. The series has sold more than 100 million copies, according to Getting Your Book Published for Dummies, which I bought to help me find a publisher for the volume you are now reading.1

Computers make users feel dumb. Literate, educated people can't make that infuriating beige box do what they want it to do, and instead of marching on Microsoft with torches and pitchforks and hanging Bill Gates in effigy, they blame themselves and say, "Gee, I must be dumb." In a society where nothing is ever the fault of the person doing it, where people sue a restaurant when they spill their own coffee, getting users to blame themselves for anything is a magnificent accomplishment, albeit probably not the main one the software vendor intended. Why do programmers design applications that make people feel this way, and why do people meekly accept this abuse from their computers?

Where We Came From

The designers of the earliest computer programs didn't care about making their products easy to use. Solving the computing problem at hand—for example, dealing with a printer to make the words come out properly on paper—was so difficult that no one had time or money left over for making a user's life easier. A computer's thinking time was enormously expensive, much more so than the user's time. Forcing the human user to memorize complicated commands instead of using computer power to provide a menu listing them made economic sense. The relative costs are now reversed, but almost everyone in the industry older than about 30 grew up in that type of environment. It can't help but shape our thinking today, no matter how hard we try to leave it behind. Think of your older relatives who grew up in the Great Depression of the 1930s, who even today can't bear to throw away a sock with only one hole in it.

As with driving a car in the early years of the twentieth century, early users expected computers to be a pain in the butt, and we were rarely disappointed. Almost all users were programmers themselves. Few of them felt the need, or often even the desire, to make things easier. We accepted the difficulties—the rationed computer time, the arcane commands, the awful documentation—as those motoring pioneers accepted hand-cranked engines and constant tire punctures. It was the best anyone had. We were happy to get our important computing jobs (tabulating the census, cracking enemy codes) done at all, as they were happy to stop shoveling horse manure out of the barn every day. We liked fiddling with our programs, using them in ways their designers never intended, as the early motorists liked tinkering with their engines. If someone had told Henry Ford that his Model T needed a cup holder, he'd have laughed in that person's face.

There was a feeling in those days that making programs easy to use was just plain wrong. If a program was hard to write, it should be hard to use so that only those who had proven themselves worthy through intellectual struggle could benefit from the programmer's effort. I remember, with surprising fondness even today, the pride I felt on discovering that the command to print a document on the first major computer system I ever used (1975, freshman year in college) wasn't Print or P, but rather, the letter Q, since you were placing the document in a queue to be printed. I had learned a magic word. I was becoming one of the elect. I was Smart!

But as hardware got cheaper, and computers moved from the air-conditioned glass rooms attended by high priests to the workbenches of geeky hobbyists and then to the desktops of individual employees and the homes of real people, they had to become easier to use. So the developers of applications had to start putting time and money into designing a program that users could actually use. Why hasn't it worked?

Why It Still Sucks Today

The piece of a computer program that deals with the human user—getting commands and input data from him, displaying messages and output data to him—is known as the user interface. As with many areas of computing, user interface design is a highly specialized skill, of which most programmers know nothing. They became programmers because they're good at communicating with a microprocessor, the silicon chip at the heart of the machine. But the user interface, by definition, exists to communicate with an entirely different piece of hardware and software: a live human being. It should not surprise anyone that the skill of talking with the logical, error-free, stupid chip is completely different from the skill of talking with the irrational, error-prone, intelligent human. But the guy who's good at the former is automatically assumed to be good at the latter. He's usually not, and he almost never realizes that he's not. That's what causes programmers' user interface designs to suck, at least from the standpoint of the poor schmoe that's stuck using that piece of junk.

How does this happen? Programmers have to have a certain level of intelligence in order to program. Most of them are pretty good at dealing with the silicon chip; otherwise, they get fired very quickly and encouraged to take up another profession in which they might possibly benefit society, such as roofing. How can they turn into lobotomized morons when designing a user interface? For one simple reason, the same reason behind every communication failure in the universe: They don't know their users.

Every programmer thinks he knows exactly what users want. After all, he uses a computer all day, every day, so he ought to know. He says to himself, "If I design a user interface that I like, the users will love it." Wrong! Unless he's writing programs for the use of burned-out computer geeks, his user is not him. I tell my programming students to engrave on their hearts, along with the phrases "Garbage In, Garbage Out" and "Always Cut the Cards," Platt's First, Last, and Only Law of User Interface Design:

Know Thy User, for He Is Not Thee

To take the simplest example, consider a personal finance program, such as Quicken or Microsoft Money. These get used for a few hours every couple of weeks. A user won't—can't—remember as much of the program's operation from the previous session as she would for an application she used every day. She will therefore need more prompting and guidance, which an all-day every-day user (such as the programmer) finds intrusive and annoying. It's impossible for a programmer to put himself into the shoes of such a user. The programmer knows too much about the program and can't conceive of anyone who doesn't.

Because they're laboring under the misconception that their users are like them, programmers make two main mistakes when they design user interfaces. They value control more than ease of use, concentrating on making complex things possible instead of making simple things simple. And they expect users to learn and understand the internal workings of their programs, instead of the other way around. I've done them both, and I now repent the error of my foolish younger ways.

Control versus Ease of Use

Every time I teach a class at a company, I ask how many of the students drive cars with a manual, stick-shift transmission (as I do). Usually about half the students raise their hands. I then ask how many more would drive stick shifts if their wives would let them, or if they came on the minivans that they need to drive because they're turning into old-fart curmudgeons like me. Usually about half the remaining students raise their hands.2 "Now, would you not agree," I ask, "that a stick shift takes more work to learn and to use than an automatic, but gives somewhat better control and performance if you do it right?" They know they're being led somewhere they don't want to go, but they can't usually wriggle out at this point, so they agree suspiciously. "Now, what percentage of cars do you think are sold with stick shifts in the U.S.?" They squirm uncomfortably and say something like, "I bet it's low; 30 percent?" They wish. Sales estimates vary from about 10 percent to 14 percent. Let's call it 12.5 percent, or one out of eight, for easy comparison.

This means that six out of eight programmer geeks value a slight increase in control and performance so highly that when they spend $25,000 or more on Motor City iron, they're willing to do more work continuously over the life of the product to get it. But only one out of eight of the general population makes the same decision when offered the same choice. And it's actually much lower than that, because all six of those geeks are in that one out of eight. The percentage of normal people willing to tolerate the extra effort is almost zero. Programmers value control. Users value ease of use. Your user is not you.

Here's an example of doing it wrong. AT&T directory assistance was once simple and easy. You'd ask for someone's number and the automatic voice would say, "The number you requested is 555-1212. Please make a note of it." If you stayed on the line, it'd repeat the number so that you could be sure you'd written it down correctly. Simple. Easy. Impossible to screw up. Good. Then AT&T added the capability of automatically dialing the number for you. They'd say, "The number you requested, 555-1212, can be automatically dialed for an extra charge of 50 cents. Press 1 to accept and 2 to decline." The simple thing was as easy as ever, and the newer, more powerful feature was available to those who wanted it enough to pay for it. Anyone who didn't like the new feature could simply hang up. Then some idoit [sic, see note3] had an absolutely awful idea. The last time I tried AT&T directory assistance, it said, "The number you requested can be automatically dialed for an extra charge of 50 cents. Press 1 to accept and 2 to decline." It wouldn't give me the number until I entered my choice. I had to take the phone away from my ear, visually reacquire the keypad (which gets harder after age 45 or so), put down the pencil I was holding in my other hand to write down the number, press the correct button, pick up the pencil again, and put the phone back to my ear. Only then would it tell me that the number was 555-1212. The complex, powerful operation is possible, but the simple operation is no longer simple. The designer of this system clearly valued control over ease of use, but I guarantee that his users don't. Whoever inflicted this on the world should be forced to do it 500 times every day. He'd shoot himself by the end of a week.

My cell carrier, Verizon, on the other hand, has taken ease of use to new heights. Verizon realized that almost everyone calls directory assistance because she wants to phone someone immediately, so why not just do it? When I dial directory assistance from my cell phone, the automated voice says, "The number is 555-1212. I'll connect you now." It happens automatically, without any motion or even thought on my part. The new number stays on my phone's recently dialed list so that I can add it to my contact book if I want to. The few callers who only want to write the number down can simply hang up, which they'd be doing then anyway. Simple things are simple. Complex, powerful things are simple, too. This design is as good as AT&T's is bad.4

I Don't Care How Your Program Works

The second mistake programmers make when they design user interfaces is to force users to understand the internal workings of their programs. Instead of the programmer adjusting her user interface to the user's thought processes, she forces the user to adjust to hers. Furthermore, she'll usually see nothing wrong with that approach. "That's how my program works," she'll say, puzzled that anyone would even ask why her user interface works the way it does.

Here's an example of what I mean. Open Windows Notepad, or any other type of editor program, and type in any random text. Now select File, Exit from the main menu, or click on the X box in the upper right of the title bar. You'll see the message box shown in Figure 1-1.

Figure 1-1: Notepad asking the user whether to save changes

What exactly is this box asking us? It seems to be saying that some file changed, but I haven't seen any file anywhere. What the hell does "save the changes" mean?
