Modern computer and internet technology is amazing, allowing us to do an incredible number of things that simply weren’t possible before, both individually and as part of larger organizations.
But anyone who works with computers and mobile devices knows that everything isn’t perfect. Too often, computer systems are frustratingly hard to use. And now, the Nielsen Norman Group has identified a new problem stemming from sub-optimal user interfaces: computer-assisted embarrassment.
Earlier this month, Susan Farrell described the phenomenon this way: “Smart devices have invaded our world and inserted themselves in almost every context of our existence. Their flaws and faulty interactions are no longer only theirs—they reflect badly on their users and embarrass them in front of others.”
That is, as technology has become more social, UX failures are becoming social failures.
At first, I wondered if this was just another made-up first-world problem. But the Nielsen Norman Group is probably my favorite UX consultancy—I’ve been a fan of Jakob Nielsen’s groundbreaking web-usability work for literally decades—so I looked a little deeper, and Farrell’s argument started to resonate.
She’s right when she says, “Computers and smart devices have become social actors.” And with the rise of chatbots and voice-powered assistants, the trend is only accelerating. As technology becomes an increasingly central part of family, work and play, Farrell argues, our devices need to do much more than just “function according to specifications.” We increasingly demand “software to be truthful, trustworthy and polite to us.” That’s especially important when software is used in the presence of others and by communities of users, she adds.
The social implications of software are a bigger deal than you might think. Farrell notes six categories of computer-assisted embarrassment:
- Communication disasters: Think of email and messaging systems that make it easy to reply all by accident, or to include the wrong people on lists.
- Privacy leaks: Do we even need to go into this one?
- Terrible timing: Remember that 2 p.m. meeting you missed because it was in your calendar for 2 a.m.? Or because your calendar thought you were on Central Time when you were actually on Pacific Time in Oregon?
- Aggressive helpfulness: Who hasn’t had autocorrect mishaps? My latest: replacing “mom” with “monster.” That one took some explaining.
- One-size-fits-all alerts: Remember when Farmville spammed your Facebook friends with idiotic updates on your progress in the game? Or when you got a meeting notification that interrupted a much more important meeting? Or when you missed an important text because you were busy telling your smartphone you didn’t want to update the operating system right now—for the 234th time!
- Systems designed for only one gender, ethnic group or set of abilities: Farrell reports on things like image-search systems that don’t recognize black people, voice recognition systems that struggle with dialects and interface actions that require two hands.
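The “terrible timing” failures above usually trace back to a single engineering mistake: storing or displaying event times without an explicit time zone. A minimal Python sketch (the meeting time and zone names here are hypothetical, assuming the standard-library zoneinfo module on Python 3.9+) of the safer pattern, where the event is stored once in UTC and rendered in each user’s local zone:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # stdlib IANA time-zone database, Python 3.9+

# Store the meeting time once, unambiguously, in UTC.
meeting_utc = datetime(2024, 3, 14, 21, 0, tzinfo=timezone.utc)

# Render it in each attendee's own zone instead of assuming one zone
# for everybody (the bug behind the Central-vs-Oregon mix-up).
for tz_name in ("America/Chicago", "America/Los_Angeles"):
    local = meeting_utc.astimezone(ZoneInfo(tz_name))
    print(tz_name, local.strftime("%I:%M %p"))
# → America/Chicago 04:00 PM
# → America/Los_Angeles 02:00 PM
```

The design point is that the naive alternative (saving “2:00” with no zone attached) silently shifts meaning the moment the user travels, which is exactly the kind of context failure Farrell describes.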
The point is, there are lots and lots of ways that bad software can be more than just annoying. It can actually make us look bad in front of our family, friends and colleagues.
Context is the key
The best way to minimize so-called “social defects,” Farrell says, is to pay more attention to context—when and how the software is actually being used. Instead of testing interfaces only in the lab with one user at a time, it’s increasingly important to make sure they work as intended in the field, with groups of people. You also have to make it easy for users to report problems, and be able to fix those problems even after the software is released (SaaS can be a real advantage here).
This isn’t just an academic exercise. As Farrell warns in my favorite line of her article: “Don’t underestimate the costs of humiliating customers.”