Capella tasked the team with conducting usability testing on the platform to identify user pain points, bugs, and areas for potential improvement.
The goal was to improve the software’s functionality and user experience, creating a more seamless and intuitive interface.
Through structured interviews with four users on the demo software, the team collected both quantitative and qualitative data to analyze its performance.
This process revealed key pain points and areas for improvement, providing actionable insights. The findings were presented to Capella to guide future development and enhance the platform’s usability and user experience.
There were three main tasks we specifically wanted users to complete during usability testing.
Each task was designed with clear goals to evaluate the functionality, usability, and overall user experience of the software.
A structured approach, based on a defined set of data-collection points, was applied to gather both quantitative and qualitative data.
Feedback was provided to Capella on four primary pain points.
These insights highlighted specific pain points and usability challenges, offering actionable recommendations to enhance the platform's functionality and overall user experience.
The software provided minimal feedback following user actions.
For parking, "no available spaces" was displayed before booking, and after booking no ‘successful booking’ indication was provided.
There was no ‘edit booking’ option, meaning users could not change times, locations, or any other details without first deleting the booking and making a new one.
Participants expected “book a meeting room” and “book a meeting” to be different and were confused when both led to the same process.
Participants expected the “need to collaborate?” button on the dashboard to provide a way to invite others to a meeting; however, this function was missing.
Some buttons/links on the dashboard were unclear or difficult to find.
Specific contextual details about each button/link were provided to Capella.
There was no in-context help.
The only support was the full demo at the beginning; there was nowhere to look for help if a user got stuck during a task.
The ‘change meeting schedule/invite participants’ task caused the most problems and took the longest for every participant.
The ‘book parking space’ task also caused some issues, particularly because very little or no feedback was provided on booking details.
‘Check office bookings’ and ‘delete office bookings’ were found to be the most straightforward tasks and were the fastest to complete.
Recruiting a mix of participants with prior experience of hybrid-working platforms and participants with no prior exposure to them proved to be a challenge.
Experienced users provided valuable insights into existing mental models, including which features were intuitive, which felt cumbersome, and what functionality they deemed essential or missing.
On the other hand, including participants without prior experience was crucial to evaluating the platform’s ease of use for first-time users. Balancing these perspectives was essential but difficult to achieve.
Preventing observer bias and ensuring results remained authentic required careful moderation. This involved asking neutral, non-leading questions, avoiding offering assistance during usability challenges, and allowing participants to navigate tasks independently.
Managing the Hawthorne effect, where participants alter their behaviour because they know they’re being observed, added complexity to maintaining objective results.
Ensuring the testing environment was technically sound posed multiple challenges. It was critical to capture clear audio of both participant and moderator voices, record screen interactions, and observe facial expressions for non-verbal cues.
Remote usability tests introduced additional difficulties, such as software and hardware compatibility issues, internet disruptions, and prototype performance limitations, all of which required proactive troubleshooting to avoid interruptions.
Avoiding bias during usability testing is essential to gathering authentic insights. This reinforces the importance of developing skills in neutral facilitation, such as crafting non-leading questions, managing silence effectively, and resisting the urge to intervene.
These techniques ensure results reflect genuine user experiences rather than guided outcomes.
Usability testing rarely goes exactly as planned. Flexibility in adapting to unexpected challenges, such as prototype limitations or user misunderstandings, is an essential skill.
Being agile and responsive during sessions ensures the testing process remains effective even when conditions change.
A well-prepared testing environment minimizes disruptions caused by technical issues. Thorough pre-test checks for recording tools, software compatibility, and internet stability are critical.
This experience underscores the value of contingency plans and backup solutions to ensure smooth test execution.
To improve future usability tests, I would conduct a full pilot test before engaging participants. This would help ensure that all task instructions are clear and intuitive, minimizing the risk of issues arising from unclear moderator guidance.
By refining the test script in advance, I could better focus on uncovering genuine system usability challenges rather than addressing misunderstandings caused by ambiguous instructions.