Don't trust loud customers
How to figure out if user feedback is representative of your most important customers
Customer feedback is tricky. Whenever someone complains about or suggests improvements for your product, you’ve got to consider some questions before deciding whether to act on it, like:
Is this person the type of customer we’re actually building our product for?
Is this the right problem to solve now? Or are there more impactful problems we should prioritize instead?
What underlying problem does their feature suggestion solve? Do others share this problem or is it an isolated issue for this customer?
These questions are hard enough to answer normally, but when the same feedback starts appearing over and over again, it’s almost impossible to ignore. I’ve written before about how a bunch of identical feature requests almost killed our startup during our first weeks of traction and how, in the nick of time, we figured out it was the wrong thing for us to build. Today’s newsletter covers a similar story about some recent research led by Tudor Cristian Bogdan, a UX Researcher at Labster.
Tudor was dealing with a difficult scenario earlier this year — Labster’s sales teams were keen to pursue a broad and challenging technical build based on a bunch of consistent feedback pouring in from customers. Tudor was tasked with figuring out whether this would be the right solution for the problem (and, more importantly, whether the problem was the right one to prioritize solving in the first place).
Not only did Tudor’s research show that it wasn’t — it completely changed how his team thinks about customer feedback today…
Background
Labster is an award-winning science learning platform that offers educators a catalog of immersive virtual lab simulations that are proven to transform students from disengaged to inspired and prepared. The team of over 350 employees has raised $147 million to date and serves millions of students through 3,000+ educational institutions around the world.
Challenge
The more tailored Labster is for each educator’s use case, the more immersive the experience is for their students. But every new educator that signs up for Labster comes with their own unique context that the platform must adapt to.
Until now, the only way educators could tailor Labster was through the interactive quizzes at the end of each lab simulation. With limited resources, Labster’s Product Management team had to figure out which customization opportunities would have the most impact for the greatest number of educators. The challenge was knowing where to start.
Process
Tudor was tasked with finding the right answer to this question.
He started by running two internal workshops at Labster — one with Account Managers to gather customization problems from existing customers and one with Sales Executives to uncover problems raised by new users. Through these workshops, Tudor gathered a bunch of pain points and feature suggestions that later turned into his list of customer problem statements.
Before jumping into active research, Tudor identified the three key questions he needed to answer:
Which problems are customers most keen to solve?
How often do customers experience or consider these problems?
How does a customer’s use case influence the problems they’re most keen to solve?
Based on these objectives, Tudor created a ‘Customer Problem Stack Ranking’ survey on OpinionX and sent it to 1000 Labster users. Those users were shown Tudor’s problem statements as a series of simple head-to-head votes to measure which problems were most negatively impacting their experience using Labster. After the ranking exercise, Tudor’s survey asked some quick questions about problem frequency and use cases so that he could later split out the ranked results to compare the priorities of different customer segments.
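For readers curious how head-to-head votes turn into a ranked list: one simple way to aggregate pairwise votes is by win rate (wins divided by total appearances). The sketch below is purely illustrative — the function, the example problem statements, and the votes are all invented here; it is not how OpinionX computes its rankings internally.

```python
from collections import defaultdict

def rank_by_win_rate(votes):
    """Rank items from head-to-head votes.

    votes: list of (winner, loser) pairs, one per vote.
    Returns items sorted by win rate (wins / appearances), descending.
    """
    wins = defaultdict(int)
    appearances = defaultdict(int)
    for winner, loser in votes:
        wins[winner] += 1
        appearances[winner] += 1
        appearances[loser] += 1
    return sorted(
        appearances,
        key=lambda item: wins[item] / appearances[item],
        reverse=True,
    )

# Hypothetical problem statements and votes, for illustration only:
votes = [
    ("Quiz customization is too limited", "Hard to find relevant simulations"),
    ("Quiz customization is too limited", "No progress dashboard"),
    ("No progress dashboard", "Hard to find relevant simulations"),
]
ranking = rank_by_win_rate(votes)
```

More sophisticated aggregation methods (e.g. Bradley-Terry or Elo-style scoring) handle sparse or inconsistent votes better, but win rate is enough to convey the idea: every vote is a forced comparison, so the final ranking reflects relative importance rather than whether a problem merely exists.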
Results
Tudor’s survey results revealed three surprising insights:
The problem that initiated the whole research project ended up ranked 17th!
A different opportunity emerged as a top priority for customers that the team had not expected.
The highest-ranked problems varied significantly depending on the customer’s main use case, and this data enabled Tudor to focus on the problems that were most important to Labster’s best-fit customers.
These results proved to Tudor how crucial it is to understand the importance of users’ problems, not just whether these problems exist:
“We already knew what our customers’ common problems were, but we didn’t know which ones were most important to them. We had never asked them to compare the importance of these problems before. Having a set of problems prioritized by our users was the game changer that OpinionX enabled.”
Tudor’s research went on to inform multiple roadmap projects for the product team and changed how he now thinks about customer feedback and feature requests:
“The survey results helped me demonstrate to the broader team that the loudest voices are not always the most representative customers — just because you’ve got a very loud customer complaining about something does not mean they represent your broader customer base.”
Company growth is about a lot more than just identifying and solving customer problems. You’ve got to know (1) which customers are your most important segment and (2) which problems those customers are most urgently trying to solve.
“As a rapidly scaling company, almost anything you do will move the needle, but this research made me realize that solving certain problems moves the needle a lot more than others.”
If Tudor had just blindly trusted that customer feedback accurately represents what the entire customer base cares about, he would’ve caused Labster to pursue the wrong priorities. Or he could’ve opted for a bunch of user interviews to find patterns in user problems — but at a scaling startup like Labster, you don’t always have time for that.
The ideal middle ground was a customer problem stack ranking survey. It allowed Tudor to identify which barriers to value within the product his customers actually cared about addressing, and it produced objective data that helped align other teams behind a clear direction to pursue instead of their original assumptions.
If you enjoyed this case study and want to see more like it in the future, please let me know! Or if you have a research story you’d like to share on The Full-Stack Researcher, hit reply and send an overview my way :)
— Daniel