The validity problem in UX Research

All aspiring UX researchers and designers start their training by learning about usability testing, the classic tool for uncovering the pain points users face when trying to complete a goal with an application. Standard curriculum stuff, really, though it can be done badly or professionally. Granted, there is some disagreement about certain points in between (does the researcher need to behave like an automaton to be objective, or display a bleeding heart to show immense empathy with the user?). That is fodder for another post. (Story: I was once criticized for being too happy to see the participant in a session. Whatever. These are minutiae.)

The fundamental question is: Can we trust anything we learn from usability sessions?

My answer is… well, take it with a big chunk of salt. A usability session is a highly artificial setting: the participant is given ample time and an incentive to concentrate on one narrow aspect of an application, in a quiet, calm room, enjoying the special attention of the researcher (and perhaps even more people behind the one-way mirror), while being told to relax, since nothing they do or say can have harmful consequences.

Needless to say, when most of us try to use an app

  • we are short on time
  • our attention is divided among competing websites, notifications, a screaming child or spouse, and phone calls
  • we are in a noisy grocery store, on a playground, etc.
  • we are trying to hold on to a stroller, wallet, or pushcart
  • we don’t have our glasses with us
  • mistakes are costly, and
  • nobody is there to make us feel special

I guess we are all aware of this: our environment simply won’t let us focus on what we are trying to accomplish. If only we had time and quiet, we could concentrate and get whatever we need done in no time. Right? So all we UX researchers need to do to make our sessions more authentic is provide a lot of noise, some tote bags, and blurry screens, and yell at the participants to speed it up, we don’t have all day. Well, not so fast. It turns out that it is not just the external environment that makes these sessions far from realistic use of the app.

Recent research, however, shows that even when we are actively using our computers, trying to get something done, we change what we see on our screens, clicking to a different tab or app, every 13 seconds! The finding that our attention span is shorter than that of a goldfish is an often-cited meme, but most people associate it with social media browsing, designed to let us just dip our toes into some content and then move on to something else.

The participants in Leo Yeykelis et al.’s study, however, recorded over four days while engaged in all kinds of activities, including work and watching videos, did not, as one might expect, complete one task (such as reading an article or watching a video) before moving on to the next. Instead, they kept switching from one activity to another: starting a transaction, jumping to something else, then coming back again. This was true for every single participant in the study, indicating that it is not just people with severe ADHD who flit from one attractive website to another. While the participants’ screenograms (the highly individual composition and sequence of screens visited) varied widely, and no two were alike, the one feature they all shared was frequent task switching.

Where does that leave us, UX researchers? Let’s start thinking about how we can make our sessions closer to people’s real experiences. Besides the standard background noise and other disruptions, we may need to integrate flow-interrupting activities into our sessions, such as watching short videos, reading a neutral tweet, solving math problems, or browsing unrelated products. If our participants can still confidently complete the assigned task (granted, over a longer period), then we can be more reassured that we have indeed tested the design.
