Designers are capable of offering more to a business than simply executing visuals like wireframes and hi-res designs. But many struggle with articulating exactly how.
Anyone who has been in the product design industry for a few years now knows that simply creating visual deliverables is no longer enough. Designers need to be able to talk in terms of value, outcomes, and data. And that’s a good thing. Using best practices and design principles is important. But it’s only half of the job in terms of designing truly great products.
Analyzing user data enables us to make informed decisions that help in reducing risk, tailoring our products to meet specific user needs, and iterating on them based on actual usage. Let’s take a closer look.
The full article will be presented in two parts. This is part one, which focuses on the value of and methods for measuring user data. Part two will outline how we do this at Pendo.
Why data analysis is important for UX
Having access to both quantitative and qualitative information is critical in helping designers make better, more educated decisions regarding their products. Quantitative insights are typically numerical or boolean in nature, and are measurable. They add value in telling us the “what.” Qualitative insights, on the other hand, are subjective and tell us the “why.” This is essential in getting a holistic view of the user’s experience.
Here are some examples of each:

Quantitative:
- CSAT (Customer Satisfaction)
How users rate their satisfaction with your product or service on a numerical scale
- NPS (Net Promoter Score)
How likely users are to recommend your product or service to others
- Success Rate
The percentage of users who complete a specified task (e.g. product onboarding, upgrading to a paid plan, exploring a new feature)

Qualitative:
- User interviews
- Focus groups
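To make the quantitative examples concrete, here is a minimal sketch of how the three scores above are typically computed from raw responses. The function names and sample inputs are illustrative, not taken from any particular tool:

```python
def csat(ratings, scale_max=5):
    """CSAT: percentage of respondents choosing the top two ratings (e.g. 4-5 on a 1-5 scale)."""
    satisfied = sum(1 for r in ratings if r >= scale_max - 1)
    return 100 * satisfied / len(ratings)

def nps(scores):
    """NPS: % promoters (9-10) minus % detractors (0-6) on a 0-10 scale."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

def success_rate(attempts):
    """Success rate: share of users who completed the task (attempts is a list of booleans)."""
    return 100 * sum(attempts) / len(attempts)
```

Note that NPS can range from -100 (all detractors) to +100 (all promoters), which is why it is reported as a score rather than a percentage.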
If we only focus on quantitative metrics like page views and button clicks, we’re only getting half of the story. For example, there could be many reasons why a particular action is frequently taken that may not be directly tied to users actually finding value in the task. But layering in qualitative insights helps us understand why users may be doing (or not doing) a specific action. Qualitative data is also essential for helping designers empathize with users and understand the emotional journey associated with their experiences.
On the other side of the coin, if we only focus on qualitative insights like data from user interviews, it’s difficult to measure the impact of our design decisions—which makes it harder to articulate that value to the business. Additionally, what users say in interviews may not align with the story the data tells, so it’s important to review both together to see if there is an opportunity to dig deeper to get to the root problem.
Designers who learn to analyze data trends, understand user patterns, and create an ongoing customer feedback loop to make informed decisions over time will be able to more effectively communicate the business value of what they do. Ultimately, this helps designers get out of the “just make it pretty” trap—giving them the ability to have a bigger impact on the customer, the business, and the products they design.
Here are some specific ways conducting this type of analysis can have an impact. It…
- Helps teams prioritize features and enhancements that align with the needs of the user—ensuring that investments are being made in areas that will yield the highest value and impact.
- Reduces the risk of investing in features that may not resonate with users.
- Ensures that features and changes we implement remain relevant and aligned with users’ expectations as they evolve over time—due to factors like learning the product, exposure to competitors’ products, changing skill sets related to technology, or an increase in industry knowledge.
- Gives teams the ability to understand the value and efficacy of their work by seeing the impact on the user through data. For example, reviewing trends in the data after a recent feature release may help designers gain an understanding of how their work reduces churn and improves retention. This can also be shared with engineering to help them connect their work to user and business value.
- Helps to strengthen the confidence of designers in identifying usability opportunities. Ongoing analysis of user data enables them to be closer to the ever-changing needs of their users. Designers can take the lead in identifying existing pain points for users, as well as opportunities for improvement within the product. Insights like this can help them be proactive in addressing issues like inconsistencies in the product, and help them find ways to reduce time to value when making design and interaction decisions.
- Strengthens user trust and loyalty through understanding user sentiment and experiences. This also enables product teams to have a stronger, more empathetic understanding of their users.
This all sounds useful, right? But what level of impact does analyzing these types of data actually have on the product, the user, and the organization?
From the perspective of the product itself, designers who have a deep understanding of user data can identify inconsistencies and unnecessary redundancy, and get data-driven insights into ways in which specific flows can be simplified.
For example, seeing that it takes a user 12 clicks to complete what should be a simple task can reveal an opportunity for faster time to value. While it’s not always valuable to analyze an experience solely based on the number of clicks it takes to complete, there’s still value in understanding where there may be opportunities to simplify and reduce steps. Separately, having two similar pages with drastically different conversion rates can reveal an inconsistency in the UI that may be causing confusion or unnecessary work for the user.
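The two-pages comparison above can be sketched in a few lines. The page names and event counts here are invented purely to show the arithmetic:

```python
# Hypothetical visit/conversion counts for two similar pages. A large gap
# between their conversion rates can flag a UI inconsistency worth investigating.
pages = {
    "settings_v1": {"visits": 1200, "conversions": 420},
    "settings_v2": {"visits": 1150, "conversions": 95},
}

def conversion_rate(page):
    """Percentage of visits in which the desired action was completed."""
    return 100 * page["conversions"] / page["visits"]

rates = {name: conversion_rate(p) for name, p in pages.items()}
gap = max(rates.values()) - min(rates.values())
print(rates, f"gap: {gap:.1f} percentage points")
```

In practice the interesting part isn’t the arithmetic but the follow-up: a gap this large is a prompt for qualitative work (session replays, interviews) to find out *why* one page underperforms.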
The benefits to your end users or customers are ample, too. Because the teams making decisions about the product gain clarity into which areas need improvement, they can make adjustments that lead to greater customer satisfaction and faster time to value—thanks to their increased empathy with users and understanding of the contexts in which they engage with the product.
What are some examples of these metrics?
There are several metrics designers can review to get a deeper understanding of their customers’ needs and tie their decisions back to business value:
Behavioral: The understanding of what people do and how
- User engagement
Measures how actively users interact with a product over time.
- Task success rate
The percentage of participants that successfully complete a task. The important—and sometimes challenging—part of this is defining the success criteria. There may be many ways to successfully complete a task, so the team evaluating this metric will need to ensure their goals are clearly defined.
- Time on task
Measures the amount of time it takes for a user to complete a task. This is often useful when evaluating an existing experience in the product and trying to determine if there’s opportunity to reduce complexity.
- Error rate
Calculated by dividing the total number of errors made by the total number of task attempts (or total number of possible errors). This is an excellent way to understand the level of complexity for the user associated with completing a task.
- Conversion rate
Refers to the percentage of users who perform a desired action. This is a particularly useful UX metric because it ties directly to business goals.
- Misclick rate
The average number of clicks that land outside clickable areas such as links or buttons in a product.
- User retention
A key growth metric that looks at first-time users within a specific time frame and calculates the percentage of those users who return in subsequent time periods.
- Customer churn rates
Measures how many customers choose not to renew at the end of their subscription.
- Exit rate
The percentage of visits to a given page that were the last page viewed in their session.
- Bounce rate
The percentage of sessions in which the user viewed only a single page before leaving.
- Feature adoption
Another crucial metric, which measures usage of a product’s specific features. The more features users adopt, the more value they receive and the less likely they are to abandon the product. Feature adoption is a key retention metric.
- Product Engagement Score (PES)
A composite metric made up of an equal average of Adoption, Stickiness and Growth. In Pendo, Adoption is measured by the average number of Core Events adopted across all active Visitors or Accounts. Stickiness is the average of the percentage of weekly active users (WAU) who return daily (DAU/WAU), or the percentage of monthly active users that return daily (DAU/MAU) or weekly (WAU/MAU). Finally, Growth is the sum of new and recovered Accounts or Visitors divided by dropped Accounts or Visitors (known as the Quick Ratio).
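The PES arithmetic described above can be sketched as follows. The helper functions and input values are illustrative, and this reduces Pendo’s calculation to the equal three-way average the text describes:

```python
def stickiness(dau, wau):
    """Stickiness as DAU/WAU: the share of weekly active users who return daily."""
    return 100 * dau / wau

def quick_ratio(new, recovered, dropped):
    """Growth (Quick Ratio): (new + recovered) accounts or visitors divided by dropped ones."""
    return (new + recovered) / dropped

def pes(adoption_pct, stickiness_pct, growth_pct):
    """Product Engagement Score: an equal average of the three component percentages."""
    return (adoption_pct + stickiness_pct + growth_pct) / 3
```

For example, a product with 60% adoption, `stickiness(300, 1000)` = 30% stickiness, and 45% growth would have a PES of 45. A Quick Ratio above 1 means the product is gaining more accounts or visitors than it is losing.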
Attitudinal: What people believe, feel, think; motivations and attitudes
- System Usability Scale (SUS) is a questionnaire that allows designers to evaluate the user’s perspective of an experience for both products and services. It consists of ten statements describing the experience in terms of complexity, ease of use, user confidence and more, and uses a five-point Likert scale where the participant can share if they strongly agree or disagree (or somewhere in between) about their view of the statement.
- User satisfaction is measured using Net Promoter Score, Customer Effort Score, Customer Satisfaction Score, surveys, feedback forms, and other means.
- Standardized User Experience Percentile Rank Questionnaire (SUPR-Q) is similar to SUS in that it’s also a five-point Likert scale questionnaire—with eight statements for understanding the level of ease surrounding usability, user trust/credibility, appearance, and loyalty. Each of the four sections has two statements for users to give feedback on. NPS is also a part of this measurement. SUPR-Q can also be used to compare similar products from different organizations.
- UMUX-Lite is a two-statement questionnaire, shortened from the slightly longer four-statement UMUX original version. It also uses a five- or seven-point Likert scale like SUS and SUPR-Q—but where SUS considers user perception surrounding usability and learnability, and SUPR-Q evaluates usability, credibility, appearance, and loyalty, UMUX-Lite assesses effectiveness, efficiency, and satisfaction.
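The standard SUS scoring arithmetic, which the description above doesn’t spell out, looks like this (the example responses are invented):

```python
def sus_score(responses):
    """Score a single SUS questionnaire: 10 responses on a 1-5 Likert scale.

    Odd-numbered items are positively worded (contribution = response - 1);
    even-numbered items are negatively worded (contribution = 5 - response).
    The summed contributions (0-40) are multiplied by 2.5 for a 0-100 score.
    """
    assert len(responses) == 10, "SUS has exactly ten statements"
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = odd-numbered item
        for i, r in enumerate(responses)
    )
    return total * 2.5
```

A respondent who strongly agrees with every positive statement and strongly disagrees with every negative one scores 100; all-neutral responses (3 across the board) score 50. Note that the result is not a percentage—a 68 is roughly the average SUS score, not a 68% satisfaction rate.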
That’s a lot… How do I actually use these metrics?!
This is just a snapshot of the many metrics designers can consider in order to better understand their users. It’s best to view them as tools in a toolkit, rather than essential pieces of a single process. Understanding the methods available to you—and determining which make the most sense to use, and when—is critical for getting value out of this type of analysis.
The best place to begin is to simply start with some questions for yourself and your team.
What’s the company’s business model or strategy for making a profit?
Nope, it’s not just for product managers anymore! Design needs to understand this as well.
Consider this example: Let’s say that you’ve recently begun designing a new page that gives users simplified access to a set of features compared to what they had before. However, it’s part of a larger feature set that’s only available to paid users. It’s important to ask what the desired business outcome of this work is—because if the goal is to increase adoption by converting more freemium users into paid subscribers, those free users will never see the new page.
What Key Performance Indicators (KPIs) is the organization tracking to measure design impact?
Design KPIs are a big topic (and worth their own post!), but many of the metrics listed above are evaluated on a regular basis by organizations hoping to understand this very concept. Teams should consider what makes sense to them, and build a plan to evaluate and measure. This not only has a positive impact on the product, but also on the organization as a whole, as it helps designers know how their own work can be improved. Additionally, it gives the designers themselves a stronger skill set in the market, because they’re then able to articulate the impact of their decisions.
What is our design strategy for improving these metrics?
Dozens of decisions are made every time a design is worked on. How can those decisions, for example, help take the statement of “This application’s capabilities meet my requirements” from a “strongly disagree” to a “strongly agree”?
While there may be deep, systemic opportunities with a product, it’s also entirely possible that changing simple labels for form fields could have a huge, positive impact. Establishing a baseline for and then reviewing these metrics over time can provide a clearer understanding of what needs to be improved—and of how to know when it has been.
Where do I begin if I’m new to this?
Product managers, researchers, and data analysts likely already review this kind of data at your company. So if you’re new to this process (but have access to the folks in these roles), it wouldn’t hurt to have a conversation with them and ask if you can get access to the data your company may already provide.
If that’s not an option for you, talk with your team leadership about investing in a product that collects—and makes it easy to analyze—this kind of data. Pendo offers a free option (with both quantitative and qualitative data) to get you started, fast. Put together a simple proposal outlining why investing in data-driven design is valuable, and share it with your leadership team to help with the conversation. Start small: Propose a few metrics you want to begin evaluating (SUS, NPS, and PES are great), and make a plan to review and share those metrics on a regular basis.
“Show me the data” is one of our core values here at Pendo. So if you’re curious about how we do this for ourselves, stay tuned for part two of this article—where we’ll dig deeper into our process.