Enjoy Discovering Wine
Enjoy Discovering Wine is a wine training course provider in England. Their goals were to build their brand and increase sales. As part of their website revamp, I evaluated the website against both basic usability heuristics and e-commerce heuristics.
I was asked to evaluate the website, but I had to operate within a rigid set of constraints:
No information on target users or groups
No information on user journeys
Not able to conduct user testing
Given this, I knew I wouldn't be able to provide any answers about specific users. To add value, I decided to conduct a blended expert/heuristic evaluation. This would allow me to evaluate the website against general usability and UX principles while grounding my analysis in substantive guidelines.
I began by collecting the heuristics I would use: Nielsen's 10 usability heuristics combined with Nielsen Norman Group's publicly available e-commerce heuristics. Combined, I had 14 dimensions against which to evaluate the website.
I chose these because Nielsen's 10 heuristics are well researched and cover the major issues found on websites. I added the e-commerce heuristics to address potential gaps in Nielsen's general-purpose set, since the site's primary purpose is selling. Because I don't have much e-commerce experience, I wanted those considerations captured explicitly in my evaluation.
Next, I examined each unique page type and interactive element against these heuristics and recorded my findings in a table. By unique page type, I mean I examined a single product page instead of every product's page. This rests on the assumption that every page of the same type (e.g. blog post, event page) will be mostly identical, in which case evaluating the user experience and usability of one is effectively evaluating all of them. That assumption doesn't always hold, but I think it does for a smaller, template-based website such as this one.
In evaluating each page against the heuristics, I initially tried to implement a rating system (a scale of 1-3) but ultimately realized two things: (1) I needed to record every single problem anyway, and (2) I didn't know how I would use those numbers to rank problems. After recording an issue, I'd take a screenshot.
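The recording step can be sketched as a simple data model. The field names and the sample entry below are illustrative assumptions, not the actual columns or findings from the evaluation:

```python
from dataclasses import dataclass

@dataclass
class Issue:
    """One usability problem found during the walkthrough."""
    page_type: str    # e.g. "product page", "blog post" — one row per unique page type
    heuristic: str    # which of the 14 heuristics the problem violates
    description: str  # what the problem is, in plain language
    screenshot: str   # path to the supporting screenshot

# Hypothetical example entry, not a real finding
issues = [
    Issue(
        page_type="product page",
        heuristic="Visibility of system status",
        description="No feedback after adding a course to the basket",
        screenshot="screenshots/product-basket.png",
    )
]
```

Keeping one structured record per problem makes the later cleanup steps (deduplication, grouping, ranking) straightforward.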
Once I completed my initial analysis and data recording, I reviewed my work and deleted duplicates. I also tried to define the heuristics more specifically, because I struggled to apply all of Nielsen's heuristics to every page: sometimes only some of them applied, sometimes all of them. After I cleaned the data, I cataloged global problems and page-specific problems, then grouped similar problems together. Finally, I ranked each problem's severity.
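The cleanup pipeline above can be sketched in a few lines. The findings and the severity scale here are hypothetical stand-ins for the real data:

```python
from collections import defaultdict

# Hypothetical findings: (scope, heuristic, description, severity)
# severity: 3 = blocks a purchase, 2 = causes friction, 1 = cosmetic
findings = [
    ("global", "Consistency and standards", "Inconsistent button styles", 1),
    ("global", "Consistency and standards", "Inconsistent button styles", 1),  # duplicate
    ("product page", "Visibility of system status", "No add-to-basket feedback", 3),
    ("checkout", "Error prevention", "No confirmation step before payment", 2),
]

# 1. Delete duplicates while preserving the original order
deduped = list(dict.fromkeys(findings))

# 2. Group similar problems (here, by the heuristic they violate)
groups = defaultdict(list)
for scope, heuristic, description, severity in deduped:
    groups[heuristic].append((scope, description, severity))

# 3. Rank problems by severity, most severe first
ranked = sorted(deduped, key=lambda f: f[3], reverse=True)
```

In practice this all happened in a spreadsheet rather than code, but the logic is the same: dedupe, group, then sort by severity.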
Based on these findings, I made recommendations on what could be improved, along with usability testing suggestions that could further inform improvements.
After I finished my review of the website, I wrote up my findings in a report detailing the usability strengths, the weaknesses, and my recommendations.
After I presented my findings, many of my recommendations were implemented, though without testing there was no way to validate their impact.
However, some of the changes I was suggesting came too late in the revamp to make, or even to assess whether they were worth making. So, as a follow-up, I was asked to develop a strategy for conducting user research to learn more about the users, which would inform future improvements.