diff --git a/research/2025-04-01-Branching_usability_testing_round_4.md b/research/2025-04-01-Branching_usability_testing_round_4.md
new file mode 100644
index 000000000..b1130768b
--- /dev/null
+++ b/research/2025-04-01-Branching_usability_testing_round_4.md
@@ -0,0 +1,77 @@
+# 2 branches usability testing - round 4
+
+## 2025-04-01 / Sprint 15 (Firebreak)
+
+## Aims
+This round aimed to test different iterations of the latest design for two branches, to assess whether they would meet our pre-decided criteria for confidence to launch.
+
+### The criteria
+
+We would consider launching Design 1 if:
+- 4 out of 6 people could create 2 branches from a question and demonstrate that they understood how they did it.
+- 4 out of 6 people could assess a question with routing and decide whether the design would enable them to build it.
+- 4 out of 6 people could show someone else how to set up 2 routes.
+
+### Designs to test
+There were originally 3 designs to test, but time constraints meant that we prioritised testing only 2 versions (Design 1 and Design 3).
+
+- **Design 1** proposed content changes. These aimed to help users understand what routing could do and what they needed to do to implement it.
+- **Design 2** proposed a small change in functionality. Instead of selecting which questions to skip from and to, users would select the first and last questions to be skipped.
+- **Design 3** proposed changes in functionality. When adding a 2nd route, the platform suggests which question the user might want to add their second routing to. Also, if the user chose not to route from the suggested question, the 'skip from' and 'skip to' selections would take place on separate pages rather than on the same page.
+
+## Participants
+- Civil servants - existing users
+- 8 participants (only 7 usable for analysis)
+
+## Methodology
+Pre-call task with one prototype + 60-minute usability testing of two prototypes.
+
+## Key headlines
+### Design 1
+
+#### The outcomes in line with the criteria
+
+We had set the criteria based on 6 participants, but as we ended up with 7, we discussed whether the thresholds should change. We decided that 4 out of 7 would be enough for us to consider whether to launch the design.
+
+**Outcomes**
+- 4 out of 7 people could create 2 branches from a question and demonstrate that they understood how they did it.
+- 4 out of 7 people could assess a question with routing and decide whether the design would enable them to build it.
+- 4 out of 7 people could show someone else how to set up 2 routes.
+
+#### Findings
+- All participants originally put their questions in the wrong order for applying the routing (pre-call task).
+- They found re-ordering the questions frustrating.
+- Adding the routing took a lot of thought or trying things out.
+- Participants frequently mentioned a need to “see” the routing in order to understand what is happening.
+- Several described routing as challenging in some way, and mentioned other platforms they preferred.
+- Having the question text (as well as numbers) in the dropdowns was helpful.
+- They didn’t seem to mind the amount of time they spent doing it (an average of 30 minutes).
+- When using the prototype for the second time, participants:
+  - still didn’t always remember what order they needed to put the questions in
+  - still found it tricky and confusing to work out which routes to apply
+
+#### Conclusion
+- The prototype is still challenging for users to understand and incurs high cognitive load.
+- However, on the whole, users end up being able to apply two routes to a question.
+- Design 1 just meets the criteria for us to feel confident enough to launch it.
+
+### Design 3
+
+#### Findings
+- On the whole, this was reasonably easy for people to use, with a few struggles.
+- However, there will have been a learning effect from the previous task/prototype.
+- They still needed to take time to read and understand the ‘For any other answer’ section.
+- Several said they found it easier than Design 1.
+- With some participants, it was unclear whether they had actually understood the difference between the designs.
+- We again heard people saying that a visual would help.
+
+#### Conclusion
+It has potential for future consideration, as it did seem to make it easier for some users.
+
+However, we would need to be sure that:
+- this was not due to learning effects during the test
+- users understood what the system had done for them, otherwise this could lead to confusion later on.
+
+## Supporting evidence
+- [Report](https://docs.google.com/presentation/d/13Prvth6ftZimaJNKvGr3opcTHOkbDm8eQeKr8zUe-vM/edit?slide=id.g32d739b8369_0_4&pli=1#slide=id.g32d739b8369_0_4)
+- [Further documentation](https://drive.google.com/drive/folders/1gBo1RNktzyd2TcfaSLT6B0bDXeHsu-ze)