Tuesday, 6 November 2018

And More Testing - It Never Ends

Well, they also call this user behavioural analysis, but in the end it is simply more userbility (not usability, but user-bility) testing, and as I have indicated it basically never seems to end. I guess that is not surprising, because the reality of the commercial world is that you either change or die. Okay, that actually isn't all that pleasant for the plebs, namely because they have little say in the change, and while corporations carry on about those who are resistant to change, I can assure you that if your job simply consists of answering phones, then no matter how much you may hate the change, your opinion means squat. In reality, the people you really have to worry about being resistant to change tend to be management - you see, change is good, as long as it doesn't affect them.

However, the reality is that society changes, and users change. If another platform offers a better service you can be assured that your users will desert yours for it - just ask Yahoo and Myspace, who discovered that the hard way. This is also why sites like Facebook and Google always seem to be on top of their game, and spend up big on their UX departments - they don't want what they effectively did to others to happen to them.

Anyway, usability testing can actually be mapped out in the following way, as Whitney Hess demonstrates:


In a way this is a three-dimensional grid - we have an axis that moves from attitudinal to behavioural, another from qualitative to quantitative, and a third for the context of use. Considering context, note that we are looking at three forms - whether the product is being used or not, and whether it is being used in a natural or a scripted environment. One of the key things with user testing is that we really shouldn't be telling users how to use the app, but observing how they actually use it. In one sense each method has its own pros and cons, and you can also see that each appears on one part of the graph.
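To make that grid a little more concrete, here is a rough sketch in Python of how some of these methods might be placed on those axes. The placements are my own approximations for illustration, not a definitive taxonomy:

```python
# Rough sketch: each research method sits somewhere on the
# attitudinal/behavioural and qualitative/quantitative axes, with a context
# of use. Placements below are approximations, not a definitive taxonomy.
METHODS = {
    "interviews":          {"axis": "attitudinal", "data": "qualitative",  "context": "not using the product"},
    "surveys":             {"axis": "attitudinal", "data": "quantitative", "context": "not using the product"},
    "usability lab study": {"axis": "behavioural", "data": "qualitative",  "context": "scripted use"},
    "eye tracking":        {"axis": "behavioural", "data": "quantitative", "context": "scripted or natural use"},
    "A/B testing":         {"axis": "behavioural", "data": "quantitative", "context": "natural use"},
    "data mining":         {"axis": "behavioural", "data": "quantitative", "context": "natural use"},
}

def methods_for(axis=None, data=None):
    """Filter methods by where they sit on the grid."""
    return [name for name, m in METHODS.items()
            if (axis is None or m["axis"] == axis)
            and (data is None or m["data"] == data)]

print(methods_for(axis="behavioural", data="quantitative"))
# ['eye tracking', 'A/B testing', 'data mining']
```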

One of the interesting things to consider is the natural use of the product, and the privacy issues that come with it. Notice that there are methods like data mining and message board mining, as well as eye tracking studies. There are already concerns that the little camera on the front of your phone, along with the GPS and other sensors, is mining more information about you than you actually realise. It also seems that some companies have mining programs that trawl the internet looking for flags regarding their product. In fact, if a customer made a disparaging comment about the company I was working at over the internet, their social media department knew about it (in the same way that they would know if you slagged the company off online as well). Note also that these methods edge towards the indirect end of the spectrum, meaning that information is being collected, as opposed to being requested.

Another thing about UX design is the acronyms (though you find them in pretty much any industry). The first is the HIPPO, or the highest paid person's opinion, and the second is the ZEBRA, zero experience (or evidence) but really arrogant. I'm sure you know those guys - they don't get to where they are because they actually know anything, but rather because they are able to make the right friends at the right time. Dealing with these people is a gift, and a skill, because there is always the chance that they can completely destroy a project, and the ironic thing is that they never seem to take the blame for destroying it. However, that probably needs to be addressed elsewhere by somebody who has much more experience in behavioural science than I happen to have.

The key thing here comes down to evidence. Look, honestly, there are going to be times when, despite the evidence that you have gathered, people are simply not going to want to listen - take climate change for example. The thing is that people always have something (whether it be money or reputation) riding on an opinion, and people simply do not like to admit that they are wrong, especially when it results in them losing face. However, that does not mean we can simply dispense with collecting evidence - evidence helps us understand how our product is being used, and how we can make it better - and stay ahead of the game.

So Onto Testing Users

Well, this is the obvious point, but what we are trying to do is use the scientific method to gauge user behaviour. Getting users to use the product in their natural environment helps if, say, you happen to be Google or Facebook (or even Yelp), but sometimes that simply isn't possible, especially if your product hasn't been released to the market. However, lab testing, while useful, can also be fraught with problems since it happens within a controlled environment. For instance, if the user knows that an expert is present, the user is always tempted to ask the expert for help. On the other hand, testing remotely can sometimes yield incomplete data, namely because you are not privy to things such as the user's thought processes.

Let us consider eyeball tracking for a moment because it can reveal a lot of interesting information. Take a look at this Wikipedia page:


This actually reveals a lot of information as to how people view the site. Notice that the user looks at the introduction and at the contents, but very rarely at any of the main text. Interestingly, they also seem to look at the top of the navigation bar. Eyeball tracking can be used to see what information people look at and how they interact with your app. In the Wikipedia example, I can attest to being one of those people who tends to only read the introduction.
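For the curious, here is a minimal sketch in Python of how the raw data behind such a heat map might be aggregated - the sample format (x, y pixel coordinates) and the grid size are assumptions on my part, not tied to any particular eye-tracking tool:

```python
# Minimal sketch: bucket raw gaze samples into a coarse grid so that the
# "hot" regions of a page (introduction, contents, navigation bar) stand out.
from collections import Counter

def gaze_heatmap(samples, page_width, page_height, cols=10, rows=10):
    """Bucket (x, y) gaze samples into a cols x rows grid of fixation counts."""
    counts = Counter()
    for x, y in samples:
        col = min(int(x / page_width * cols), cols - 1)
        row = min(int(y / page_height * rows), rows - 1)
        counts[(row, col)] += 1
    return counts

# Hypothetical samples: most fall near the top of a 1000 x 3000 px page,
# mirroring the "reads the introduction, skips the body" pattern.
samples = [(500, 120), (480, 150), (510, 200), (900, 90), (520, 2500)]
for cell, count in gaze_heatmap(samples, 1000, 3000).most_common(3):
    print(cell, count)
```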

Now for a couple of videos, the first being on a UX team trying out usability testing on fruit of all things, and some of the results may surprise you:


Not surprisingly, the Google development team also have a number of videos on YouTube regarding usability testing. Here is one on a method known as guerilla testing, where you go to a cafe and randomly ask people to test your app.


Preparation is Key

Honestly, things don't simply appear out of thin air, and while some people are able to operate on the spur of the moment, nothing ever beats preparation. This is the same with userbility testing. We should first of all have a plan, but we should also have a script, resources, and an idea of where the testing should occur - in a cafe, or in a lab. We need to know what we are testing, what we want from our test, who we are going to be testing the product on, when and where, and what outcomes we are seeking. In fact, we should be putting as much time as we can afford into preparing the test.

Furthermore, we should be screening the test subjects, and in doing so we should be asking specific questions. For instance, instead of asking what their favourite websites are, ask them to list their top ten websites. Ask them how often they would use a similar app, and also give them a list of important features and ask whether or not each appeals to them. We don't want open questions because open questions can lead to vague answers.
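As a rough illustration of what a closed-ended screener could look like, here is a small Python sketch - the questions, the hypothetical travel-booking app, and the pass rule are all illustrative assumptions:

```python
# Sketch of a closed-ended screener: specific questions with fixed answer
# formats rather than open questions that invite vague answers.
# The questions and pass rule are illustrative assumptions only.
SCREENER = [
    {"id": "top_sites", "prompt": "List your top ten websites.",
     "type": "list", "max_items": 10},
    {"id": "usage", "prompt": "How often would you use a travel-booking app?",
     "type": "choice", "options": ["daily", "weekly", "monthly", "rarely", "never"]},
    {"id": "features", "prompt": "Which of these features appeal to you?",
     "type": "multi_choice",
     "options": ["offline maps", "price alerts", "reviews", "itineraries"]},
]

def passes_screen(answers):
    """Recruit only people who would plausibly use the product."""
    return answers.get("usage") in {"daily", "weekly", "monthly"}

print(passes_screen({"usage": "weekly", "features": ["price alerts"]}))  # True
print(passes_screen({"usage": "never"}))                                 # False
```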

Having a tight script is also very important, as this provides consistency. The thing with a good script is that it gives the users a feeling of professionalism and expertise. It isn't good if we are working off the cuff, particularly since we might forget to ask one person something that we would ask of another, and so forth.

When it comes to the lab, we also need to be prepared. For instance, we can't have it in a high-traffic area where the user is likely to be disturbed. We need good lighting, and we also need to be able to observe without the user feeling uncomfortable - a one-way mirror can be great in this regard. Also, while it goes without saying, good equipment is another requirement.

Now, we get down to the tasks. We need to know what tasks the user will be performing, the approximate time the tasks will take, and what possible deviations there might be from these tasks. In addition to that, we need to be prepared to record our observations, noting what we expect to see and what we actually see. Finally, there comes our post-test survey, where we can delve into the user's thoughts on the product, their likes and dislikes, and of course room for improvement.
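Here is one way such a test plan might be written down, as a Python sketch - the field names and the example tasks are assumptions for the sake of illustration:

```python
# Sketch of a written test plan: each task carries an expected duration and
# an expected outcome, and observations are recorded against it during the
# session. Field names and tasks are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Task:
    description: str
    expected_minutes: int
    expected_outcome: str
    observations: list = field(default_factory=list)

@dataclass
class SessionPlan:
    participant: str
    tasks: list
    post_test_survey: list

plan = SessionPlan(
    participant="P01",
    tasks=[
        Task("Find the contact page", 2, "Reaches the contact page from the footer"),
        Task("Complete the sign-up form", 5, "Submits the form without assistance"),
    ],
    post_test_survey=[
        "What did you like about the product?",
        "What did you dislike?",
        "What would you improve?",
    ],
)

plan.tasks[0].observations.append("Scrolled past the footer twice before finding the link")
```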

In Progress

Now that we have the tasks, we should hand them to the user one at a time. The thing is that handing a whole stack of paper to a user and asking them to work through it can be quite overwhelming. Think about that to-do pile at work, and the thoughts of whether you are actually going to be able to complete it. We should also use the 'think aloud' process, that is, the user really shouldn't be keeping their thoughts to themselves. What we want to know is the thought process, and we would like to encourage the user to share it with us.

Honestly, we need to remind the user that we are not testing them; we are testing the application. As such, we can't make them feel that they have failed in the task. If there is a failure, in reality it is the application and the design process that have failed. This is not a university exam. However, we also need to avoid guiding them through the application, as that sort of defeats the purpose of the whole testing process. If the user isn't able to figure something out, then maybe we need to go back to the drawing board and try to work out how we can solve it.

Another thing we can consider is controlled and natural testing. This is done a lot in the medical research industry, where one group is given a tablet and the other a placebo, and the results of both are studied. In the same way we can present two groups of users with two different versions of the product and watch to see how each group responds. In fact you may have experienced that yourself with apps that you use, though you generally don't realise that you are part of a controlled experiment.
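As a sketch of the 'two versions, two groups' idea, here is one way users might be assigned to a variant deterministically - the hashing approach, the salt, and the 50/50 split are assumptions, not how any particular platform does it:

```python
# Sketch: deterministically assign each user to variant A or B by hashing
# their id, so the same user always sees the same version of the product.
import hashlib

def assign_variant(user_id: str, salt: str = "checkout-test") -> str:
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Each group's behaviour (e.g. task completion rate) is then compared
# across the two variants.
print(assign_variant("user-1001"), assign_variant("user-1002"))
```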

I probably should mention the concept of alpha/beta testing. You may have encountered the term 'beta test' previously. Well, you have pre-alpha, where the UI has been completed but the functionality not so much. Alpha testing is usually done in-house to test for bugs and flaws. Beta testing is done on a limited scale with outside users, before the pre-release and the eventual release. Look, I've heard people carry on about how they have received the beta test version of a game, but really, it isn't the finished product, it is just a version where they haven't ironed out all the problems yet.

Oh, and there are other resources, such as Google Analytics, which you can also use in addition to your testing.
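As a sketch only - this is not the Google Analytics API - here is how you might tally page views from a hypothetical CSV export, as one more quantitative input alongside your testing:

```python
# Sketch only: assumes a hypothetical CSV export with "page" and "event"
# columns, and simply counts how often each page was viewed.
import csv
from collections import Counter

def pageview_counts(path):
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row.get("event") == "pageview":
                counts[row["page"]] += 1
    return counts

# counts = pageview_counts("analytics_export.csv")  # hypothetical export file
# print(counts.most_common(5))
```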

To Recap

Well, this has been a bit of a slog, but it is an important facet of UX development. Anyway, there are effectively three phases: strategy, the start, where the design and thought processes take place; execute, which covers the go/no-go decision and is where much of the development and implementation occurs; and assess, where we look over the results we have gathered and work out how we can apply them in the future.

Finally, remember that qualitative describes and quantitative counts, and both are useful for understanding how users respond to our product.

Creative Commons License

And More Testing - It Never Ends by David Alfred Sarkies is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. This license only applies to the text and any image that is within the public domain. Any images or videos that are the subject of copyright are not covered by this license. Use of these images is for illustrative purposes only and is not intended to assert ownership. If you wish to use this work commercially, please feel free to contact me.
