Usability testing in Agile environments

I had the pleasure of returning to a Team I had been doing Scrum coaching with earlier in the year to do usability testing on their products. This was the first significant opportunity the project had to engage users to do some end-to-end testing and I was looking forward to seeing how everything hung together.
The aim of the testing was fairly straightforward — to ascertain whether improvements could be made to enhance users’ ability to work with the system before it was deployed in a few Sprints’ time.
The approach we took was a fairly straightforward one that will be familiar to many UX practitioners:

  • Cognitive walk-through — design scenarios in which users would attempt to use the system based on the knowledge of their current processes.
  • Heuristics — assess the following aspects of the system:
    • Visibility of system status — is the system keeping users informed about what is going on through appropriate feedback at the right time?
    • Match between the system and real-world cognitive and behavioural metaphors — does the system use the same language/terms/jargon/concepts that users are already familiar with?
    • User control and freedom — is the user free to leave an unwanted state through, for example, using back buttons, undo/redo, or closing a screen?
    • Consistency and standards — does the system avoid using different words and interactions to do the same thing?
    • Error prevention — does the system help prevent users from making errors before they commit to an action?
    • Recognition rather than recall — does the system avoid requiring users to remember information or processes from one part of the system to the next?
    • Flexibility and efficiency of use — does the system speed up frequently performed actions rather than leaving users to work through them manually every time?
    • Aesthetic and minimalist design — the brain needs to process (at an unconscious level) everything it sees, so does the system take this into account when displaying information or laying out functionality on the screen?
    • Assistance in recognition, diagnosis, and recovery from errors — does the system not only tell users when an error has occurred but also tell them what actions to take to rectify the situation?
  • Expert review of the interface — assess how the system behaves in terms of some of the science behind interface design, specifically cognitive load, information hierarchy, visual hierarchy, and visual contrast.
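
Whichever of these techniques surfaces an issue, it helps to capture each finding in a consistent, repeatable format so it can later be turned into a recommendation (and, ultimately, a User Story). The record below is purely illustrative — the observation, severity rating and recommendation are invented for the sake of the example — but it shows the kind of structure that works well:

  Heuristic:      Visibility of system status
  Observation:    Saving a record gave no confirmation, so the user re-opened it to check the save had worked
  Severity:       3 (major) on a 0 (cosmetic) to 4 (blocker) scale
  Recommendation: Display a confirmation message, including the record reference, immediately after a successful save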

The Set-up

In working with a Scrum Team, the timing of the usability testing was critical — it needed to coincide with the team’s Sprint cycle. This meant doing:

  • Pre-planning — We worked with the Product Owner, subject matter experts and the developers to identify which of the already-delivered User Stories would be tested. Ultimately, this gave us traceability back to the User Stories that would need to change on the back of our recommendations to enhance usability.
  • Collaboration — We went through the test scenarios with one of the developers to gain a shared understanding of the implications of some of the tests (particularly given that there were still system features waiting to be delivered in subsequent Sprints that might make our tests moot). As a result, we ended up not testing some functionality.
  • Testing pre-conditions as User Stories — Once we’d finalised the scenarios and had traceability to the User Stories that were being tested, the Product Owner created new User Stories for the Team for their next Sprint. These User Stories outlined the outcomes required of them to prepare for usability testing.
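
To give a feel for what those preparation stories covered (the details here are hypothetical rather than the project’s actual stories), a pre-condition story might read:

  As the usability testing facilitator,
  I want a test environment loaded with realistic sample data and a login for each test participant,
  so that participants can attempt the scenarios end-to-end without touching production data.

  Acceptance criteria: the environment is reachable from the participants’ own desks; every test scenario can be completed against the sample data; there is space and power for the video recording set-up.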

Executing the usability testing

We returned the following Sprint to find that, as planned, everything was set up and ready to go. Each test session with users was broken into two parts:

  • Part A — We handed the user a sheet that outlined the outcomes they needed to achieve in a 20-minute time box. Their actions were recorded using a Nikon D7000, and they were asked to use self-talk so we had audible cues about what they were doing when we reviewed the video footage later. Users were instructed to simply do as many scenarios as they could and, if they ran into trouble or an error, to move on to the next scenario. Once they were done, we began the set-up for the next user and copied the video file onto their PC for Part B (which took about 10 minutes).
  • Part B — Users then walked us through the video of themselves attempting the scenarios. What we were most interested in was what they wanted to do, what they expected to happen, and what the system then did as a result. We time-boxed this exercise to 20 minutes as well because, while they were doing the walk-through, we were recording the next user.

This staggered approach meant we could cover more users in a shorter period of time than would otherwise have been possible. We also conducted the testing in the environment in which they would actually use the system, to take account of the effect of their normal, everyday interruptions on their ability to use it — e.g. do some data entry, take a phone call, and then return to the system and remember where they were up to and what they were doing.
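
To make the staggering concrete, a morning with three users might run something like this (the times are illustrative only, based on the 20-minute time boxes and roughly 10 minutes of set-up and file copying between sessions):

  09:00-09:20  User 1, Part A (recorded task attempts)
  09:20-09:30  Copy User 1's video; set up for User 2
  09:30-09:50  User 1, Part B (video walk-through) while User 2 does Part A
  09:50-10:00  Copy User 2's video; set up for User 3
  10:00-10:20  User 2, Part B while User 3 does Part A

The saving comes from overlapping one user's walk-through with the next user's recording; run back-to-back instead, each additional user would add roughly another 20 minutes to the schedule.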

We used a D7000 to film users going through their tasks. We then played back the video and had them walk us through their decisions and expectations about the system.

Reviewing the findings and making User Stories

Once we had analysed the outcomes of the testing, we wrote up all of the recommendations as User Stories in BDD format with an associated Definition of Done. This was done so that the Product Owner could immediately negotiate the usability features with his Business Sponsor (and later with the Team) regarding their value, their priority and where they should go in the Product Backlog. Because we did the testing toward the beginning of the Sprint, there was still time for the Team to prep and plan for these User Stories prior to the next Sprint using their normal Backlog Grooming process. And because the usability testing wasn’t done at the very end of the project, as is typical with a Waterfall process, there were still opportunities to improve the usability of the system and, in turn, its value to users.
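
To give a feel for the format (this is a made-up example rather than one of the project’s actual stories), a usability recommendation written up this way might look like:

  As a data entry operator,
  I want the search screen to keep my filters when I return from viewing a record,
  so that I don’t have to re-enter them after every interruption.

  Given I have filtered the search results
  When I open a record and then navigate back to the results
  Then my filters and my position in the list are preserved

  Definition of Done: the behaviour is verified against the relevant usability test scenario, the existing search tests still pass, and the Product Owner has accepted the story.

Writing the recommendations in this shape meant they arrived looking like every other item in the Product Backlog, ready to be ranked, groomed and planned without any translation.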

Conclusions

Usability testing is difficult at the best of times in Waterfall environments. I often hear of user-experience professionals finding themselves being criticised for their recommendations — “it’s too late for these changes”, “you’re being a blocker”, “we’ve run out of money to make any of these changes”, “we only have time for bug fixes, not ‘nice to have’ enhancements”.
Agile processes like Scrum provide a greater opportunity to involve non-developer stakeholders in creating working code and usable features every Sprint, rather than toward the very end of the project when money has often run out or the project is out of time. But UXers need to adapt the way they engage with Scrum Teams if they are to ensure that their recommendations play nicely with the project’s methodology. This means:

  • Working within the Sprint system and timing activities with its regular, scheduled meetings — Sprint Planning, Backlog Grooming, Sprint Review.
  • Working closely with the Product Owner prior to testing and after testing — they’re the one who will rank the usability enhancements against the rest of the features still to be implemented in the Product Backlog, so they need a clear understanding of the value in what you’re recommending to add to or change in the system.
  • Collaborating with developers to increase an understanding of what you’re testing and the outcomes of the tests — this helps everyone know and plan for what work is coming up in subsequent Sprints.
  • A multidisciplinary team working together with the Product Owner to add value to the end Product, just as the Agile process intends — because we’re all there to make the very best software we can.

M
