Metaphors, Not Conversations
Rather than make interacting with the computer act like a conversation with an assistant, make it like a tool you use yourself.
My essay The "Computer as Assistant" Fallacy has proven popular. Among other places, it was linked to from Jakob Nielsen's Useit.com and Lawrence Lee's Tomalak's Realm. Lawrence also linked to a July 2000 John Markoff article in the NY Times about Microsoft research. (Jakob is also quoted in that NY Times article.) Reading that old article inspired me to be more explicit in describing the types of software interfaces I prefer and led to this essay.

Introduction
In John Markoff's July 2000 article "Microsoft sees software 'agent' as way to avoid distractions" there is a description of a general human interface design: "Using statistical probability and decision-theory techniques that draw inferences from a user's behavior, the team is developing software meant to shield people from information overload [in email] while they are working." The software "decides" whether a message is something you should see and when to show it based upon sophisticated statistical analysis of various inputs. This is the same group that did the work behind the Microsoft Office "Paper Clip" and the email filter that blocked Blue Mountain Arts cards for a while. (John writes that the filtering "...was an important lesson, he said, in the risk of artificial intelligence making poor judgments.")
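
To make the flavor of that approach concrete, here is a tiny sketch of the kind of "deciding" such software does. The signals, weights, and threshold are all invented for illustration; this is not Microsoft's actual system, just the general shape of one.

    # Illustrative sketch only -- not Microsoft's algorithm. It combines a few
    # hypothetical signals about an incoming message into a single score and
    # "decides" whether to interrupt the user. The weights and threshold are invented.

    def interruption_score(sender_is_frequent_contact, mentions_user_by_name,
                           user_is_typing, hours_until_deadline_mentioned):
        score = 0.0
        if sender_is_frequent_contact:
            score += 0.4   # people you answer often matter more
        if mentions_user_by_name:
            score += 0.3   # "Dan, can you..." looks directed at you
        if user_is_typing:
            score -= 0.3   # you appear busy right now
        if hours_until_deadline_mentioned is not None and hours_until_deadline_mentioned < 4:
            score += 0.4   # the message claims urgency
        return score

    def should_interrupt(message_signals, threshold=0.5):
        # The user never sees this arithmetic -- that is exactly the
        # transparency problem described below.
        return interruption_score(**message_signals) >= threshold

    print(should_interrupt({"sender_is_frequent_contact": True,
                            "mentions_user_by_name": True,
                            "user_is_typing": True,
                            "hours_until_deadline_mentioned": None}))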

Reading about this style of "agent" program, one that "decides" what to do for me in the background, bothered me. What, I thought as I walked the dog that night, is wrong? The answer is what makes up the difference between an "agent" or "assistant" and a "tool". It isn't the end result (e.g., only reading what I want to at the time I want to) that's the problem, but how the tool interfaces with me. That missing difference, I realized, is "transparency".

Programs that are "transparent"
The word "transparency" came to mind because of the reading I've done in the area of Globalization. The term is used to describe (countries') financial and political systems and measure how open they are with information that allows outsiders to understand what's really going on. Countries with transparent systems have detailed, up to date statistics telling you the state of loans, money flows, agriculture, etc. Non-transparent countries have ministries that say "everything's fine -- trust us". Investors like transparency. Countries whose systems are not transparent enough don't get much investment. If you don't know the details and can't find them out, it's hard to develop trust.

To me, a transparent user interface is one in which the user is presented with all the information they want, in a form that makes sense in light of their mental model of what's going on. The operations of the program should be consistent within the constraints of that model. An interface that isn't transparent just provides data with little context: no model of where it came from, how it was derived, or how to make adjustments.

Metaphors
The key to making a transparent interface work is in the presentation of the model of the world in which the program is operating. In the old days, we used to talk of the "metaphor" represented by the program. A good metaphor aids in developing trust between the program and the user. Its strengths and weaknesses are apparent. It is a tool that the user can work "with". It provides a "space" of some sort that can be explored and manipulated for the user's purpose.

The metaphor proposed for many of the agents and assistants that I find so bothersome is of a "magic" program that says "I know, trust me, I'll tell you". That's an easy metaphor to invent, but one with very little transparency. The idea of sophisticated software analyzing diverse inputs on my behalf is fine, but ending with a "this is the answer, trust me" interface is missing an important part of the product design.

What you want in a metaphor is a presentation of the data in a way that emphasizes what the user needs to see, exposing whatever they need when they need it in a visual or some other visceral manner that supports the meaning and manipulation of the data.

Too caught up in Artificial Intelligence
The problem with many of these Artificial Intelligence-style metaphors is that they seem to be designed to pass a Turing Test: a human types a free-format question and the computer types back in prose, or, in the "Holy Grail" version, the two converse in voice. That goal misses the rest of the problem: the total interface presented to the user.

Just after I wrote the first draft of this essay, News.com posted an interview with Microsoft Research's assistant director in which he's quoted as saying: "We'd like to be able to interact with a computer just like you interact with another human being." Again, there's that desire for the computer to be a person. Now, in the movies, people work with assistants all the time, but when they really need to get something done right, they decide to be "hands on" and do it themselves. Shouldn't we have the computer help us do it ourselves for more control? Shouldn't the goal be to create tools that magnify what we can do, like tools in other areas? We want leverage like a Star Trek Tricorder, not 2001's HAL.

(You might think that Microsoft, with those very rich employees, must be run by people who are used to servants at their beck and call. But from what I've seen, given their great wealth, Microsoft's leaders like Gates and Ballmer are amazingly unpretentious and hands-on in many aspects of their lives. It must not come from that. Maybe too much of the wrong science fiction?)

Examples of useful metaphors
There are many examples of useful metaphors. A classic one, of course, is the spreadsheet. The calculations, the formatting, and the presentation are all visible and under the user's control. The user-determined, two-dimensional nature of the data layout, along with optional text in the same layout, supports the understanding of the meaning of the data. There are "automatic" operations, such as copying cells, that make certain assumptions, but most of that is presented in an obvious way. The more clever automatic operations are often the most error-prone for the user, since they may not fit into an obvious mental model. We don't "ask" the computer to forecast costs, we "refine" our model.
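
To make those "certain assumptions" concrete: when you copy a formula, the spreadsheet quietly assumes that cell references should shift along with it. Here is a tiny sketch of that rule (invented for illustration, not any real spreadsheet's code); the point is that the assumption is simple enough to hold in your head.

    import re

    # Minimal sketch of relative-reference adjustment when a formula is copied.
    # Not the code of any real spreadsheet; it just shows the assumption being
    # made: "=A1+B1" copied one row down becomes "=A2+B2".

    def shift_reference(ref, d_cols, d_rows):
        col_letters, row_digits = re.match(r"([A-Z]+)([0-9]+)", ref).groups()
        # Convert column letters (A, B, ..., Z, AA, ...) to a number and back.
        col = 0
        for ch in col_letters:
            col = col * 26 + (ord(ch) - ord("A") + 1)
        col += d_cols
        row = int(row_digits) + d_rows
        letters = ""
        while col > 0:
            col, rem = divmod(col - 1, 26)
            letters = chr(ord("A") + rem) + letters
        return f"{letters}{row}"

    def copy_formula(formula, d_cols, d_rows):
        return re.sub(r"[A-Z]+[0-9]+",
                      lambda m: shift_reference(m.group(0), d_cols, d_rows),
                      formula)

    print(copy_formula("=A1+B1", 0, 1))   # copied one row down -> "=A2+B2"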

Another popular metaphor is the word processor. The formatting of the text is right there for the user to see, and the automatic operations, such as word wrap, pagination, spell check, line numbering, etc., fit well in the metaphor and are obvious to the user. (Microsoft has done some useful innovation here.)
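
Word wrap is a good example: the automatic behavior is simple enough that the user can predict it. A tiny greedy-wrap sketch (my own illustration, not any word processor's actual code) shows how little "magic" is required:

    # Minimal greedy word-wrap sketch -- not any real word processor's code.
    # The "automatic" behavior is easy to predict: fill each line with words
    # until the next word would cross the margin, then start a new line.

    def word_wrap(text, width):
        lines, current = [], ""
        for word in text.split():
            candidate = word if not current else current + " " + word
            if current and len(candidate) > width:
                lines.append(current)
                current = word
            else:
                current = candidate
        if current:
            lines.append(current)
        return "\n".join(lines)

    print(word_wrap("The formatting of the text is right there for the user to see", 20))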

CAD/CAM products are a popular way to display and manipulate design data. Video editing tools let you manipulate snippets of recorded material. Sound editing tools like Cool Edit and Sound Forge let you work in a space unapproachable without such tools. Image editing and music creation tools give indescribable control to the artist. (I use the term "indescribable" purposely...) These are all tools people value highly (even though some are quite inexpensive).

These applications have proven very popular. They are examples of WYSIWYG ("what you see is what you get") metaphors. That term refers to the printed output, though in many cases now the results are rarely printed. More importantly, they are direct-manipulation metaphors: the user feels as if they are operating on a world that responds appropriately. That world is constructed to provide leverage in the space covered by the application.

Rather than a would-be robot Paper Clip winking at me as it condescendingly tries to show how much better it knows the program than I do, I would expect a help program to display a more inviting set of instructions than today's Help. Add to the "Contents", "Index" and "Find" tabs other useful tools, perhaps using (in a "transparent", understandable way) the information collected from my recent operations that would have driven the Paper Clip.

It's interesting that AOL, which had an awful problem with spam email, came up with the very easy to understand, "transparent" idea of a Buddy List. That simple solution worked for millions of users. Not much AI.
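
The mechanism is something anyone can hold in their head: a sender is either on your list or not, and you can see and edit the list yourself. A sketch of the idea (my illustration, not AOL's code) is only a few lines:

    # Sketch of the buddy-list idea -- my illustration, not AOL's code.
    # No statistics, no inference: a sender is either on your list or not,
    # and you can see and edit the list yourself.

    buddies = {"mom@example.com", "bob@example.com", "editor@example.com"}

    def show_message_first(sender):
        return sender in buddies

    print(show_message_first("mom@example.com"))       # True -- on the list
    print(show_message_first("stranger@example.com"))  # False -- not on the list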

Wizards are not a complete answer
Since good, easy-to-understand metaphors are hard to create, many developers just present all of the data and expect the user to figure out how to make sense of it. To get things started, they create "Wizards" that extract initial information from the user, step by step, in an interrogation process. This is good in that the application gets populated with data relevant to the user. It is bad if the metaphor the user ends up in is not understandable. Wizards don't make up for poor interfaces if you ever need to go further. It's like a taxi driver who takes all the shortcuts and drops you off at a restaurant in an unfamiliar city: without a map, if the restaurant is closed, you have no idea how to get anywhere else.

Allow user control to do the unanticipated
In addition to the presentation of data, it is helpful to have tools that leave the user with enough control to do things not anticipated by the tool designers. When I worked on VisiCalc, there was no way (despite my lauded MBA...) that I could foresee all of the uses. Often, each individual has their own special needs or special insights into what needs to be done. Tools that allow for such user control seem to win out over more circumscribed ones. There were many "forecast my business" systems, based on the latest AI and business school teachings, but they didn't catch on once the more user-programmable spreadsheet was available. There is a tendency in product design to think you understand things better than the user does. In the long run, when it comes to their particular needs, you often don't. Product designers should leave the users with the control. Bob Frankston addressed some of this in his Prerogatives of Innovation essay on ZDNet.

This is not to say that tools cannot do things beyond the understanding of the users. Having "magic" under the covers is fine as long as it is presented in an understandable way to the user. For example, sound editing programs perform mathematical transforms whose theory most users will never understand, but the effects have useful names and can be tried to see what they do. Artists may not understand the chemistry of their paints, but they can still use them and do things unimagined by the paints' creators.
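
Here is a small sketch of what "magic under the covers, presented understandably" can look like: the user picks an effect by name and hears the result; the arithmetic stays hidden, but the cause and effect do not. (My own illustration, not the code of Cool Edit or Sound Forge.)

    # Sketch of named effects over hidden math -- my illustration, not the code
    # of Cool Edit or Sound Forge. The user chooses "fade out" or "quieter";
    # the multiplication underneath is invisible but the result is not.

    def fade_out(samples):
        n = len(samples)
        # Linearly ramp the volume from full to silence.
        return [s * (n - i) / n for i, s in enumerate(samples)]

    def quieter(samples):
        # Halve the volume.
        return [s * 0.5 for s in samples]

    EFFECTS = {"fade out": fade_out, "quieter": quieter}

    def apply_effect(name, samples):
        return EFFECTS[name](samples)

    print(apply_effect("fade out", [1.0, 1.0, 1.0, 1.0]))   # [1.0, 0.75, 0.5, 0.25]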

Conclusion
Finding appropriate metaphors is a challenge. Neglecting to do so, though, will leave many needed applications unadopted.

- Dan Bricklin, 4 April 2001

© Copyright 1999-2018 by Daniel Bricklin
All Rights Reserved.