Whither Instant Feedback?
How many times have we wanted to pull an author, creator, or editor directly over to us and ask, "Why did you write this?" or "Why is this designed this way?" Of course, this never, ever happens. Instead, we usually lament whatever snag we've encountered in some software experience, or puzzle over a line of text that has no clear meaning. Imagine my complete surprise when, at dinner with a group of friends for someone's birthday, I found that the guy sitting next to me could answer a specific software question I had. He had quite literally designed the exact feature I was talking about. Mind you, this happened in Cambridge, Massachusetts, something of a hub of technology in the modern world, so maybe the odds were a little better than average, but still… In any event, a few of us had been talking about the (very good) open source sound editing program Audacity, and I mentioned with some annoyance that I didn't like its visualization features - you know, the simple stuff like expanding your view of a track, or closing it up again. The developer sitting next to me had created the shortcuts for them, and within a minute my laptop was out and we went through everything. In about 30 seconds, I learned all I needed to know about expanding and collapsing views. I also got to check his name in the "About Audacity" drop-down ("Hey - that's you!").
This incident was pretty direct proof of the importance of feedback. What he and I had done was essentially the same as "in the field" user testing, with me as a more expert-level user and him as a not-so-unbiased voice, but one who could clearly lead me through a process in his app. I noticed that he was also listening to my remarks and taking mental notes about what I was having problems with. This exchange is what designers and builders would love to have during every stage of the creation process: on-the-fly testing and validation, a kind of immediate gut check as to whether something actually makes sense.
People outside of this process may wonder why it isn't employed all the time. There are a number of reasons, among them time, expense, and complexity. Beyond these potential inhibitors, not all design solutions can come from a simple question to one user. Take the case of my visual problems with Audacity - am I the only one with this problem? Has anyone else reported it? Have users grown used to the difficulty of zooming in on the view of a single audio track, or is the problem mine alone? Let's face it, I was used to different interfaces and had grown accustomed to a quick Command / + to zoom, whereas now it is a Command / 1.
Let’s Not Make Assumptions…
But there are broader concerns here about process, for all of us who design systems and especially for those of us working in education and learning. Because many of our products are built around existing metaphors (books, courses, applications), it becomes easy for us to assume what workflows, processes, and expected behaviors will be. It is common, and often necessary, to design based on some expected behaviors, but when do you stop and rethink the assumptions you've made? Or do you find yourself in territory where it's very difficult to make assumptions at all, whether as the designer or as the user? At some point, for instance, someone had to create the language of how to navigate a smartphone - and that language has not changed dramatically since the iPhone's 2007 interface.

A good example of interactions that live in the realm of conjecture, assumption, and educated guess comes from a recent conversation between two of my colleagues about a desired feature flow for uploading a document to the mobile version of a (mostly) desktop application. The idea is that the user would be able to upload a finished document to the app on their phone. Because mobile development on this project is somewhat limited in scope for the present, not many other features are going to be made available. But this simple action of uploading a document raises a lot of questions - ones that would be hard for many of us as designers (who are probably not unbiased learners using our own products) to hazard a guess at.
For example, what is this flow exactly? When and where would a learner need to take a pre-written document saved in some kind of phone- or tablet-accessible file and upload it to an app in order to complete an assignment? The caveat is that editing the writing assignment wouldn't really be a viable option for the user except in rare cases - i.e., if the document is easily editable and savable, if it can be accessed again easily, and if it can be read or viewed easily. That raises even more questions: if it's so potentially challenging to view and edit this document from a mobile device, should the capability even be offered as a service?
These open questions are a challenge for the designer and the researcher because they still require a lot of assumptions just to get started. In other words, offering an upload capability implies that users need it in the first place - and what causes us to make that assumption? Are we answering a design need that wasn't filled before? But there are ways to ask these questions, answer them, and plan for them: they just require some thoughtful use of our powers of assumption, and only as much of it as we need to get us started.
“First, I Brush my Teeth…”
We should always begin with some kind of user flow. User flows are usually based on what a user or learner needs to accomplish, but we can also look at flows from the point of view of an application, to see what needs it fulfills, where the two meet, and where they diverge. There are a number of names for these flows: task flows, user journeys, workflows. They are slightly different ways to slice the same melon, but all of them are diagrams that let us see how people interact with a system, how they accomplish a task, and how a system answers their needs. In the case of my (simple) need from Audacity, you could look at that workflow like this:
The area of concern for me (and area of interest to developers, designers and researchers) would be here:
In the case of the written assignment upload, it might look like this:
And our areas of concern and relevant questions would be here:
In the ideal case, much like my friend in the restaurant, we would have a human being (one of our users) right next to us to help answer our questions as we design. Naturally, we don't always have that - in fact, we rarely do. But there are a few things we can do to help fill that gap:
Use our existing data
Revisit who our learners are, and ask: Are we sure that the general demographic for this product / service is the same as it was two years ago? Three or more years ago?
Look at the date of the last research - is it more than a year or two old? Do we need to ask more questions about our current users?
How can we get access to current users?
As a team (product managers, learning designers, user experience designers, user assistance writers, developers, QA / QE testers), what do WE think typical user behavior might be, and can we use that as a starting point until it can be validated by user testing or market research?
Are we willing to admit that which we don’t know about our learners?
What would our personas do at specific pain points in a design?
What metrics are in place to validate that the assumptions we've made are correct?
The best way to ensure that we're not just assuming we're designing the right thing in the right way, but actually doing it, is to follow three basic principles:
Know who the design will impact
Design to meet or exceed a need
Have metrics ready to know, quantitatively and/or qualitatively, that it “worked”
OR - we can just have someone follow us around and say, “No, do it like this.”