Usability and Dangerous Technology

Over the last week, like many people, I’ve been reading and hearing numerous commentaries on what needs to be done to prevent attacks like those in New York and Washington.  Some of these commentaries include calls for technological improvements to help prevent misuse of technologies that might be dangerous, including airplanes, airplane simulators, and even gene splicing equipment.

I work for a company, Rockwell Automation, that produces technology for automating industrial processes.  The applications using our products can be quite dangerous, whether they are transporting large quantities of molten steel, moving continuous rolls of paper at high speed through miles of machinery, or simply moving heavy chunks of material between machining stations. 

Over the years, I have developed some sense of how to make tradeoffs between safety and utility.  This is the crux of usable design with these products:

How do we prevent mistakes with, and misuse of, potentially dangerous technologies without blocking their intended use? 

As I will discuss in the coming days, preventing mistakes is often the best way to increase safety, while adding security measures often increases the probability of mistakes.

First, let’s define what I mean by dangerous technology.  After all, any technology is potentially dangerous depending on how it is used (or misused).  Running with scissors can be dangerous, but I do not classify scissors as dangerous technology.  Of course, the following definition could be refined quite a bit, but it will do. 

Technology is dangerous if there is significant risk that it will cause severe injury or death in its use.

I also distinguish between two classifications of dangerous technology, depending on whether its fundamental purpose is to cause harm or not.

Inherently dangerous technology is designed to cause severe injury or death in its use.

Weapons of all sorts fit into this category: handguns, land mines, fighter jets, and nuclear warheads, for example.  If these technologies do not result in harm when used, then their design will be improved to do so. 

This does not mean, however, that they need to be dangerous to their users.  Usability design in these types of products concentrates on facility and safety for users, or those on the users' side, not for the targets.

Potentially dangerous technology is designed for some other purpose, but may result in severe injury or death in its use.

This is a much broader and more prevalent type of dangerous technology (at least in most places).  There are many examples in your home of products that can be harmful if they are misused or if errors are made in their use.  Toxic and corrosive chemicals, motorized blades, and electrical devices are all examples.  Industrial automation systems and tools fit into this category, as do commercial airliners, power plants, and automobiles.

Usability design for this class of technologies includes both utility and safety for the user, as well as for others who may come into contact with the products.

It seems clear why we should be concerned in applying usability techniques to potentially dangerous technology, but why on earth should we make inherently dangerous stuff more usable?  Why should we make it easier to hurt and kill people?

I asked this question a few years ago, a bit naively, in an e-mail conversation with Jerry Weinberg.  He pointed out that, if you choose to work at all on such technologies, shouldn’t they be as safe to use as possible?  Shouldn’t your design minimize errors in use so as not to accidentally harm unintended targets?  This conversation, simple as it may seem, radically refocused my understanding of usability’s relation to safety in systems.

I now fundamentally believe that the most important thing you can do to improve safety is to improve usability.  If your design is easy to learn and remember, improves the user’s productivity, clearly supports the work, reduces user mistakes, and allows the user to stay immersed in the purpose of his or her work, the system will be safer for all concerned.

Now, contrast that with calls for changes to dangerous technology to add guards and constraints to provide more safety.   Unless particularly well designed, most technological guards and protections will create additional opportunities for error, block the flow of the work, and interfere with making life-or-death decisions at the exact point that those decisions must be made.

Do you want to have to key in a password in order to open an airline cabin door for a doctor to tend to a pilot whose aneurysm has just burst?  (I’ve read many calls for password-protected access to plane cockpits and other areas in airline security.)

Would you rather a plane be unable to take evasive action by moving into urban airspace to avoid a collision with another plane? (One article I read said that planes should be hard-wired to not allow flying over cities.)

Notice, however, that I do allow for such protection to be included if it is “particularly well designed.”  Protections can be done well, such that they don’t interfere with work.  The key is whether the user, in the normal case, has to interact with the protections at all in order to do their work.

_______

This article was originally serialized on the following days:

19 September 2001: Usability and Dangerous Technology, Part 1
21 September 2001: Usability and Dangerous Technology, Part 2
25 September 2001: Usability and Dangerous Technology, Part 3
