Saturday, 25 August 2018

Getting and Interpreting Feedback

If I'd asked customers what they wanted, they would have said 'a faster horse'.
Henry Ford

Well, there is something slightly wrong with that statement, since Henry Ford didn't exactly invent the motor car; rather, he invented the process to mass produce cheap cars. Sure, everybody at the time, unless you were Warren Buffett sort of rich, still travelled using horses, but the point being made is quite correct - users don't always know what they want, which is why we need techniques to uncover their desires and their frustrations.

Couple on buggy being pulled by a horse
I think they would have preferred a car

Now, as previously mentioned, you do have to sort the proverbial wheat from the chaff, because you are going to have devil's advocates, those who insist on something that pretty much everybody else hates, as well as those who nitpick every little problem. This, however, is the essence of getting feedback: it is not so much about asking as about observing. Henry Ford observed that only the filthy rich could afford to drive cars, so he looked for a way of lowering the price, much in the way that Ryanair looked for a way of lowering the price of airfares.

Ethnographic Testing

So, we have this concept of ethnography, where people are studied in their native environment. Initially it related to tribal people living in remote places, however it has since been transferred to our modern society, where we study people in their native habitats, whether that be the home, the workplace, or the local pub. Once again, it comes down not so much to asking as to observing what they do. This can be applied to user design, since we can learn a lot about users' frustrations when we study them using our product in their native environment (though sometimes that simply seems to be scrolling through one's Facebook feed while wandering along a crowded street, which is why councils have resorted to painting warnings on the ground, not that anybody will see them).

There are a few ways we can do this, such as the fly-on-the-wall approach (which could raise the issue of privacy, but then again there is a lot of scope within the UX arena that could fall foul of the privacy laws). The day-in-the-life approach involves watching the person go about their daily business, while shadowing involves following them and noting what they are doing. Then we have the inventory method, where we study what they own, which can produce some very interesting insights into their behaviour.

Group in cafe staring at their smart phones
Some friends catching up for some brunch. Image source.

The Lifecycle

Usability testing isn't a one-off process - it is something that is ongoing. Okay, some things work so well that they seem to never change, such as the Google home page, and at other times things are simply being tweaked around the edges. However, UX development is continually progressing as developers find problems, implement fixes, find problems with those fixes, and so on. In a sense, there isn't a time when one can simply relax, because as technology advances, new, and easier, implementations come online. Take uploading pictures - there was a time when, to upload a picture, you had to open a dialogue box from which you could choose one picture at a time. These days pretty much every website that allows you to upload pictures has implemented a drag-and-drop method, and if you haven't, you will pretty quickly find yourself being left behind.
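To make that concrete, here is a minimal sketch of the kind of drag-and-drop handler most sites now ship. The drop-zone element and the /upload endpoint are placeholders I have made up for illustration, not any particular site's API:

```typescript
// Minimal drag-and-drop upload sketch (browser TypeScript).
// Assumes a <div id="drop-zone"> exists and a hypothetical /upload endpoint
// that accepts multipart form data -- both are placeholders for illustration.
const dropZone = document.getElementById("drop-zone") as HTMLDivElement;

dropZone.addEventListener("dragover", (event: DragEvent) => {
  event.preventDefault();               // allow the drop
  dropZone.classList.add("highlight");  // give the user some visual feedback
});

dropZone.addEventListener("dragleave", () => {
  dropZone.classList.remove("highlight");
});

dropZone.addEventListener("drop", async (event: DragEvent) => {
  event.preventDefault();
  dropZone.classList.remove("highlight");

  const files = Array.from(event.dataTransfer?.files ?? []);
  for (const file of files) {
    const body = new FormData();
    body.append("picture", file, file.name);
    // Fire-and-forget upload; real code would report progress and errors.
    await fetch("/upload", { method: "POST", body });
  }
});
```

The point isn't the code itself, but that a whole class of user frustration (pick one file, wait, pick the next) disappears once multiple files can simply be dropped onto the page.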

The development lifecycle looks a bit like this:

design process



So, how do we get feedback? Well, there are a number of ways, including paper prototypes, surveys, interviews, usability tests, focus groups, and log analysis. For instance, you can ask a user to keep a diary tracking their use of the product. You can have them come in and use the product while thinking out loud about their experience. Then there are the surveys and the interviews, where questions are asked of the user directly.
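Log analysis in particular is cheap once the product is live. As a rough sketch (the events.log file and its field names are invented for illustration, not any real analytics format), you could tally events to see where users give up on a flow:

```typescript
// Rough log-analysis sketch (Node + TypeScript). The events.log format and
// field names below are made up for illustration -- adapt them to whatever
// your own analytics pipeline actually records.
import { readFileSync } from "fs";

interface LogEvent {
  userId: string;
  action: string; // e.g. "upload_started", "upload_completed", "upload_abandoned"
}

const lines = readFileSync("events.log", "utf8").split("\n").filter(Boolean);
const events: LogEvent[] = lines.map((line) => JSON.parse(line));

// Tally each action so we can see where users give up.
const counts = new Map<string, number>();
for (const event of events) {
  counts.set(event.action, (counts.get(event.action) ?? 0) + 1);
}

const started = counts.get("upload_started") ?? 0;
const completed = counts.get("upload_completed") ?? 0;
if (started > 0) {
  console.log(`Upload completion rate: ${((completed / started) * 100).toFixed(1)}%`);
}
```

A low completion rate doesn't tell you why users are abandoning the flow, but it tells you where to point the other methods, such as interviews or think-aloud sessions.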

Now, remember, users probably don't actually know what they want until they are presented with it, which is why observation, and interpretation of those observations, becomes important. Let us go back to the uploading picture example. None of the users may actually say outright that they would like a way of simply dragging and dropping pictures to upload. However, they may express a lot of frustration at having to manually upload the pictures, or even spend time labeling the pictures that they want to upload. It is then our job to think outside the box and work out ways to solve these frustrations.

Now, to get an idea of how some of these methods work, consider this video where they perform a UX analysis on fruit:


Now, one thing that we need to consider is the users of our app. First of all, there is going to be a segment of society that we can pretty much exclude - somebody who hates football isn't going to want to use an app that provides updates on the games. However, once we have narrowed this down, we will then need to identify the different types of users. For instance, let us consider a share-trading app. We can assume that pretty much everybody using the app is going to have some interest in share trading, but not all share traders are created equal. You have the buy-and-hold investor, the casual investor, the professional investor, the day trader, and the guy with some dodgy financial advice certificate flogging off a massively overpriced newsletter that isn't worth the electrons used to send it by email. This is the case with pretty much any app.
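One way to keep these distinctions front of mind is to record them against each user, so that feedback can be weighted by the kind of user it came from. The following is only an illustrative sketch - the segment names come from the example above, and the thresholds are arbitrary numbers, not any real product's rules:

```typescript
// A sketch of how those share-trading segments might be captured as data,
// so that feedback can be tagged by the type of user it came from.
// Segment names follow the example in the text; thresholds are arbitrary.
type TraderSegment =
  | "buy-and-hold"
  | "casual-investor"
  | "professional-investor"
  | "day-trader"
  | "newsletter-spruiker";

interface UserProfile {
  id: string;
  segment: TraderSegment;
  tradesPerMonth: number; // a rough behavioural signal for classification
}

// A naive rule of thumb for assigning a segment from observed behaviour.
function classify(tradesPerMonth: number): TraderSegment {
  if (tradesPerMonth > 40) return "day-trader";
  if (tradesPerMonth > 10) return "professional-investor";
  if (tradesPerMonth > 2) return "casual-investor";
  return "buy-and-hold";
}
```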

So, how do we identify these users? Well, through the usual methods - in fact, the same methods that we use to get generalised user feedback. The thing is that we aren't just interested in the user of the app, we are interested in the type of user, and how much attention we wish to pay to that type. One way is through a survey, though we can also cast our nets wide through the use of social media. Notice that a lot of apps actually have their own Facebook pages. In one sense this has a lot to do with promotion, but in another sense it is a means of reaching out to users and getting their opinion of the product.

We also need to look at behavioural and attitudinal methods, as well as qualitative (often formative) and quantitative (often summative) methods. This chart outlines a lot of these methods quite well:

3d framework.png

Now, the main difference between formative and summative assessment is one of timing and purpose: formative assessment is usually conducted while the app is still being designed and developed, to guide changes, while summative assessment occurs afterwards, to measure how well the finished product performs. In a way, once the app has been developed, it is much easier to simply sit back and let the feedback data roll in.

However, let us consider surveys, as they can be one of the cheapest ways of gathering user feedback.

Surveying the Surveyed

In many cases, what applies to surveys also applies to interviews. First and foremost, we must remember that people aren't necessarily going to articulate what they like and dislike about a product, so we need to be able to guide them in a direction that gets us useful answers. The catch is that we need to avoid asking leading questions, so that we don't simply get the answers we were gunning for. Sure, we may have filled out hundreds and hundreds of surveys, but that doesn't mean we are experts in creating them - writing a good survey is an art in and of itself.

So, first of all we need to screen out people that we don't really need answers from. A person who hasn't used Facebook for something like two years isn't going to provide any useful information on its latest incarnation. Sometimes we might not want feedback from casual users either, so we need to screen them out as well. Then there comes the problem of being able to identify the correct users, since we might capture some, but we might not catch all of them. Sure, posting the survey on our Facebook page might help, but that creates its own sampling problem: we need to capture more than just those users who happen to like our Facebook page.
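In practice, this screening often happens with a couple of qualifying questions at the start of the survey. Here is a minimal sketch of that logic; the thresholds are arbitrary examples rather than recommendations:

```typescript
// A sketch of a screening rule for the start of a survey: drop respondents
// who haven't used the product recently or who only use it casually.
// The thresholds are arbitrary examples, not recommendations.
interface Respondent {
  monthsSinceLastUse: number;
  sessionsPerWeek: number;
}

function passesScreener(r: Respondent): boolean {
  const lapsed = r.monthsSinceLastUse > 6; // e.g. hasn't opened the app in six months
  const casual = r.sessionsPerWeek < 1;    // e.g. less than one session a week
  return !lapsed && !casual;
}

// Only respondents who pass the screener are shown the full questionnaire.
const sample: Respondent[] = [
  { monthsSinceLastUse: 0, sessionsPerWeek: 5 },
  { monthsSinceLastUse: 24, sessionsPerWeek: 0 },
];
console.log(sample.filter(passesScreener).length); // -> 1
```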

There are a few other errors we can run into as well, such as nonresponse error, where questions simply aren't answered (which can be mitigated by not asking so many questions - five minutes should be the absolute maximum for a survey), and measurement error, which occurs because, well, the right questions aren't asked. So, we need to not only ask the right question, but ask it in a way that elicits the right responses. For instance, free-form questions are a bit of a no-no. Why? Because people can write any old rubbish down there. If you ask them to outline how the app could be improved, they might suddenly start describing a night out on the town where they missed out on a great opportunity, but not actually say how the app could have helped them in that particular instance.

Interviews tend to be much more expensive, particularly since you generally need to compensate the user for their time. What does help, though, is that you can pick up on body language, which can give you insights that an online survey might not. Further, you can tailor the questions to specific individuals. For instance, a particular answer might suddenly lead you down a track that you had never considered going down, something that would be impossible with an online survey.

Finally, let us take another look at leading questions, and I believe that this clip from Yes Prime Minister pretty much says it all:


