May 25, 2012
Graphical interface design —or User Experience (UX) design, as it is nowadays called— is not a science. And when you see 99% of all the interfaces out there you’d be hard-pressed to call it “art” instead. To many software developers UX seems to be a necessary evil. At one end of the spectrum we find applications whose interface exactly mirrors the program code. If there is an SDK function called MergeReferencedTemplateStructures() there will be a menu item called Merge Referenced Template Structures Ctrl+Shift+Alt+M. In many cases I wouldn’t even call this “design”.
Other programmers seem to treat UX design as a challenge to make their software as distinctive as possible. At this end we’ll find hand-crafted GUIs that do not in any way conform to the OS standard. Have you ever noticed, for example, how all virus scanner software looks completely unlike any other software? Virus scanners use custom images to represent tabs, buttons and toggles. They tend to reserve large areas of the main interface for logos and background images. I do not know why this should be so; the only logical explanation I can think of is that virus scanner developers believe end users won’t trust software whose purpose is to protect the system if it seems subordinate to that system. The result is that it is often not clear which parts of a virus scanner window are clickable, and due to the non-standard nature of the interface, tooltips are often missing entirely. Not all UX that breaks away from OS standards is bad, by the way. Sometimes there are good reasons for re-designing a UX standard. Adobe Lightroom and Blender are good examples of non-standard UX design that seem to work very well.
The most important part of any UX decision is the U. The interface serves only the user. It is not there to make you feel good about being a UX designer. It is not there to provide good screenshots for marketing brochures. It is not there so you can show off your Photoshop icon skills. The only guiding principle worth a damn —and I cannot stress this enough— is the user. There’s just one problem: the user doesn’t exist. Unless you’re writing a program for a single client, you’ll have to deal with users, and as soon as there’s more than one of ’em there will be conflict. All users want features specifically crafted to help them with the idiosyncrasies of their unique and individual tasks. All users want not to be confronted with features made for others. Some users want shortcuts, some users want drag+drop, some users want all features to be in the main menu, some users want all features to be in context menus, some users want toolbars, some users want command-lines, some users want text, some users want icons, some users want animated icons with text. It is easy for a software company to say that it listens to its users, because once you have enough users, every single possible opinion can be distilled from the hubbub. It’s like reading tea-leaves.
There are of course ideas about how to approach this conundrum. ‘Personas’ are popular, and indeed I have found them quite helpful myself. While I was in Seattle last year working with the core Rhino team, Brian Gillespie showed me an effective way to design the mock-up for a brand new program or feature. You basically fill out the following sentence template until you’re out of ideas: “As a [x] I want [y] because [z]”. For example, “As a reseller I want to have automatic updates because it will reduce my support workload”. Or “As a teacher I want to have a customizable UX because I want to hide complicated features from students”. If [x] is not important, don’t add the feature. If you cannot come up with a good [z], don’t add the feature.
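The filtering rule above can be sketched in a few lines of code. Everything here is hypothetical: the stories, the set of personas considered important, and the `worth_building` helper are invented purely for illustration; in practice all of it comes out of actual user research.

```python
# A sketch of the "As a [x] I want [y] because [z]" filter.
# The stories and the personas deemed important are invented for this example.

stories = [
    ("reseller", "automatic updates", "it will reduce my support workload"),
    ("teacher", "a customizable UX", "I want to hide complicated features from students"),
    ("developer", "an animated splash screen", None),        # no good [z]: drop it
    ("passer-by", "3D sound effects", "it would look cool"),  # [x] not important: drop it
]

important_personas = {"reseller", "teacher", "student"}

def worth_building(x, y, z):
    """A feature makes the cut only if the persona matters and a real [z] exists."""
    return x in important_personas and z is not None

accepted = [(x, y) for x, y, z in stories if worth_building(x, y, z)]
print(accepted)  # [('reseller', 'automatic updates'), ('teacher', 'a customizable UX')]
```

The point of encoding it this way is that both rejection rules are checked mechanically: a story dies either on an unimportant [x] or on a missing [z].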
A problem with personas is that they often collide. The desires of experts rarely line up with those of beginners. Alan Cooper et al., in the excellent book About Face 3, claim that the distribution of users along the skill axis is somewhat Gaussian:
New users with zero experience trickle in on the left side of the graph. Either they become better over time and move into the Intermediate area, or they give up and disappear from the graph. Intermediates who use the software a lot become experts over time, and experts who stop using the software drift back into the Intermediate sphere. If you believe this distribution to be correct —as I do— your first instinct may be to optimize your software for intermediates. It’s certainly not the worst thing you can do, but it cannot be denied that you’re doing a disservice to both beginners and experts. More beginners will abandon the software because it is too difficult to learn, and fewer intermediates will become experts because the learning curve remains steep. But the size of each category is not all that matters. When you look at how much time individuals spend using your software, a very different distribution pops out:
From this point of view, optimizing your UX for intermediates means you’re perpetrating an injustice against those who use your software the most.
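The shift from headcount to time spent is easy to see with some back-of-the-envelope arithmetic. All numbers below are invented purely to illustrate the effect: weight each skill category by the hours its members put in, and the peak moves from the intermediates to the experts.

```python
# Illustrative only: the user counts and hours are made up to show how
# weighting a roughly Gaussian headcount by usage time shifts the peak
# toward experts.

categories = ["beginner", "intermediate", "expert"]
user_count = {"beginner": 25, "intermediate": 60, "expert": 15}    # headcount peaks at intermediates
hours_per_week = {"beginner": 1, "intermediate": 5, "expert": 30}  # experts practically live in the app

# Total hours contributed by each category, and each category's share of all usage.
total_hours = {c: user_count[c] * hours_per_week[c] for c in categories}
share = {c: total_hours[c] / sum(total_hours.values()) for c in categories}

for c in categories:
    print(f"{c:12s} {share[c]:.0%} of all usage time")
```

With these made-up figures the experts, the smallest group by headcount, account for well over half of all time spent in the software, which is the injustice the paragraph above describes.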
Some UX designers like to explicitly separate their users into different skill categories. Their interfaces will have a ‘Novice’ and an ‘Expert’ mode that you can toggle between at will. Although this approach does solve some problems, I’m not a big fan of it for the following reasons:
- Instead of a single UX, you’re now shipping two. Anybody who wishes to promote themselves to high intermediate or expert must learn a new interface. Sometimes even the terms differ: what beginners know as “anti-aliasing” is now suddenly called “super-sampling”.
- People have a nasty habit of switching to Expert mode right away regardless of their skill level, meaning they are now even worse off because the Expert mode pulls no punches.
- It creates a dissonance between beginners and experts, i.e. between those most likely to ask questions and those most likely to answer them.
- It overtly stigmatizes those using the ‘Novice’ interface.
- It doubles investment for the developer and the documenter.
Another popular approach is to layer the interface, so that features typically associated with expert users are not prominent on the top layer. Perhaps they are in sub-menus, or on the collapsed portion of toolbars and menus. Perhaps they are missing entirely until they are specifically switched on or created: a sort of trial-by-fire approach where we assume that anyone expert enough to modify an interface is also expert enough to be awarded the new features. Again, it solves some problems, but I don’t think this approach works everywhere. For one, it actually makes expert tools harder to reach, and experts care far more about flow than beginners do.
A concept closely associated with layering is ‘discoverability’. It is an article of faith among many UX designers that a feature which is undiscoverable is pointless, because nobody will ever use it. Take for example many drag+drop implementations. Dragging data around the screen is a great way to quickly modify the state of a program. It is faster than pressing buttons or menu items and it doesn’t overwrite the clipboard contents. But it is often not obvious that you can drag and drop data. In part, I suspect, because drag+drop is simply not used enough and therefore people don’t expect it to work, but also because it is an action that doesn’t involve any additional pixels on the screen. However, it is a mischaracterization to label drag+drop as undiscoverable. It is certainly difficult to discover, but an expert user who spends a lot of time with the program stands a good chance of finding it. There are degrees of discoverability, and there can be worthwhile benefits to making certain things hard to find.
A golden rule of story-telling is that one does not give away the ending. There is a lot of excitement and pleasure to be had from slowly peeling apart the layers that make up a story. Similarly, finding a new feature in a program you’ve been using for years, just when you need it*, can be a real source of joy. Rather than besmirching beginners by making them suffer the taint of the Novice Interface, we now praise experts for having found the buried treasures.
* Obviously one is most likely to find a new feature when one is looking for it, and one is looking for it when one needs it.