Identifying, Testing and Resolving User Experience Design Issues
8 months ago, the HotPlate App was launched for all of Evanston to use. As developers, we were immensely proud of the app we had built and couldn't wait to see others love it as much as we did. The big day finally arrived, and the team was eagerly looking forward to what people had to say. In just a few days, we had over 300 users. It was probably the most exciting time... until we discovered all the issues our users were facing.
The interface for rating a dish was clunky, requiring multiple clicks, steps, and screens for a user to rate a single dish.
Asking users to rate a dish on a scale of 1-10 gave them ten options, which proved overwhelming rather than simple and produced clusters of near-identical ratings. For example, we could not identify any difference in how users interpreted a dish rated 1/10 versus one rated 2/10.
Right after the launch, the truth behind user experience hit me: what developers think is intuitive might not be. The only way to understand your users is to go out into the field and user test. So that's what we did.
Since the core value of our app lies in individual dish ratings, we began to investigate how we could redesign and optimize the rating experience. As we began prototyping different rating systems, these were some of the questions we had in mind:
1) Is the interpretation of the rating metric consistent across users?
2) Does the system match the way users think about food?
3) How can we make the user experience simple and seamless?
4) What rating system encourages user involvement?
We designed and tested four rating systems
The final rating system was a spin on the 5-star rating system
As any designer knows, trade-offs are a part of the process.
Did we want to be consistent with users' mental model (what they're familiar with) and give them star ratings, or did we want to be more original? After multiple discussions, we decided to build on the familiarity of the 5-star system, but instead of displaying stars, display 'HotPlates'. We felt this was a great compromise between the clarity of the star system and the originality users desired.
Deciding to move forward with the five "HotPlates" design gave us a rating system that users clearly understood, but we still had to think about how to make the interaction seamless. As we designed the rating system, we quickly realized that multiple small interactions occur as users rate dishes: after rating a dish, users may also wish to update or delete their rating. This raised several questions:
Do users need to delete their dish rating in order to update their rating?
Should users delete their rating by tapping on a delete button?
How should the interface provide feedback to users that their rating has been submitted?
We began to design and test various prototypes to address these interactions. After testing, we decided that users should be given the option of directly updating their ratings, as well as deleting and re-adding them. Although the delete function might not be used often, it gave users a sense of control over their ratings and increased their trust. Additionally, we realized that users needed feedback after completing a rating, so we now display a "Rated!" message after a user rates a dish.
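To make the interaction decisions concrete, here is a minimal sketch of the rating logic described above. All names (`DishRatings`, `rate`, `delete`) are hypothetical illustrations, not HotPlate's actual implementation; the point is that rating again simply overwrites, so users can update directly without deleting first, while an explicit delete remains available for control.

```python
class DishRatings:
    """Stores one 1-5 'HotPlate' rating per (user, dish) pair.

    Hypothetical sketch of the interactions described above,
    not HotPlate's actual code.
    """

    def __init__(self):
        self._ratings = {}  # (user_id, dish_id) -> int

    def rate(self, user_id, dish_id, hotplates):
        if not 1 <= hotplates <= 5:
            raise ValueError("rating must be 1-5 HotPlates")
        # Rating again overwrites the old value, so updating a rating
        # does not require deleting it first.
        self._ratings[(user_id, dish_id)] = hotplates
        return "Rated!"  # feedback message shown after submission

    def delete(self, user_id, dish_id):
        # Explicit delete stays available for users who want control.
        self._ratings.pop((user_id, dish_id), None)

    def get(self, user_id, dish_id):
        return self._ratings.get((user_id, dish_id))
```

Returning the "Rated!" message from `rate` mirrors the feedback decision: every submission path, whether a first rating or an update, confirms to the user that their rating went through.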
TL;DR - if there are two things I want you to take from this post, they are the following:
First, the goal of user testing is to find a design that minimizes the confusion users face while interacting with your app. To set yourself up to meet this goal, design multiple prototypes and observe how your users interact with each.
Second, even ‘simple’ interactions have multiple layers for the user that designers and developers can overlook - always conduct user testing in order to iron out those smaller details.