Cognitive bias in product management: 10 traps product managers can avoid
Product managers today face a unique challenge. Not only do they need to design and build innovative features, they have to understand how those features are going to drive value for customers and advance key business goals.
Left unchecked, cognitive biases can lead to misalignment between what you build and what customers want or the business needs. Do you know how cognitive bias may be affecting how you work or make product decisions?
10 common biases in product management
1. Confirmation bias
Seeking out sources, points of view, and supporting materials that confirm one's prior view or stance on an issue.
Bias in the wild: A PM who favors one version of a product roadmap seeks to build "consensus" by soliciting feedback only from colleagues they know will agree, and avoiding those who may not.
2. Halo effect
Building a positive holistic picture of a path, product, or person based on one or a few traits that don't justify doing so.
Bias in the wild: Product teams may want to take a certain path on a feature based on preliminary positive anecdotal feedback from one "superuser," obscuring real issues or problems that would otherwise come to light.
3. Sunk cost fallacy
Following through on a project or feature that may not be right, or may even be doomed to fail, simply because one has already put a ton of work into it.
Bias in the wild: Data suggests a particularly difficult-to-build feature is seeing low adoption and is a low priority for users; it may no longer be worth working on. But rather than abandon the project, the product team continues prioritizing it because they've already invested so heavily in it.
4. Authority bias
Privileging the opinions or judgments of someone in a position of authority and giving them unmerited weight.
Bias in the wild: The CPO feels strongly that the product team should prioritize one proposed feature over another. The PM, who is closest to the actual work and disagrees, chooses to remain silent out of deference.
5. Availability heuristic
Giving too much weight to information that's top of mind or easily accessible when making product decisions.
Bias in the wild: Product teams may rely too heavily on anecdotal information about a particular product or feature without examining the full data set related to it.
6. Survivorship bias
Conflating a successful subgroup in a given area with the entire group.
Bias in the wild: Product teams may prioritize a particular type of feature based on the one or two companies that have seen success with it, ignoring the 95% of companies that have failed to do it well.
7. Recency bias
Privileging the most recent (and thus often most top-of-mind) information over potentially more relevant information that's somewhat older or less current.
Bias in the wild: Adoption of a given feature looks lower than expected, but the measurement window falls during a period of generally low usage (say, late summer or over the holidays), producing a distorted picture of how customers are engaging with the product.
8. Bandwagon effect
Prioritizing a point of view or direction based on the number of people voicing support for it.
Bias in the wild: A PM holds a meeting about which feature updates to pursue. The PM fails to account for the fact that team members have different preferred ways of giving feedback and pressures people to voice thoughts "on the spot." Feeling unsure of themselves, anxious teammates back the view of the most vocal teammate who speaks up first.
9. Ostrich effect
Choosing to ignore information that threatens our preferred way of doing things or our point of view on a given question.
Bias in the wild: Early signals show that key customers or customer segments are dissatisfied with some aspect of a product compared to a competitor. Rather than act on the feedback and work alongside customer success to improve it, the team chooses to "stay positive" and focus on the favorable feedback, leading to increased churn over time.
10. Clustering illusion
Spotting a pattern in the data where there is none.
Bias in the wild: Product teams may look at feedback or adoption over too limited a time horizon and start discerning "signals in the noise" where there aren't any.
4 ways to break free from cognitive bias
No one is completely free of bias, and that's OK. Here are some effective ways to keep it in check:
1. Acknowledge your bias
We are all biased as humans. The important thing is to acknowledge these biases and build ways of combating them into your product management process.
Where to start? Build a culture of feedback, iteration, and experimentation within and across your product teams. Welcome all points of view. Let data guide your decision-making.
2. Seek the facts
It's unrealistic to think the right data will always be your guiding star on every product decision. But do your best to spot biases and rely on data-driven insights where possible.
Leverage product analytics to understand the user journey. Examine your users by segment: are you seeing patterns in key metrics like adoption, time to value, or NPS? This can reveal whether the needs or concerns of different segments are going unaddressed and whether customer priorities align with your own.
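To make the segment comparison concrete, here is a minimal sketch of computing adoption rate and median time to value per segment. The data, segment names, and field names ("segment", "adopted", "days_to_value") are purely illustrative assumptions, not the output of any particular analytics tool:

```python
from collections import defaultdict
from statistics import median

# Hypothetical per-user export; in practice this would come from your
# product analytics platform.
users = [
    {"segment": "enterprise", "adopted": True,  "days_to_value": 14},
    {"segment": "enterprise", "adopted": False, "days_to_value": 30},
    {"segment": "smb",        "adopted": True,  "days_to_value": 5},
    {"segment": "smb",        "adopted": True,  "days_to_value": 7},
    {"segment": "smb",        "adopted": False, "days_to_value": 21},
    {"segment": "self_serve", "adopted": False, "days_to_value": 45},
]

def metrics_by_segment(users):
    """Group users by segment; compute adoption rate and median time to value."""
    groups = defaultdict(list)
    for u in users:
        groups[u["segment"]].append(u)
    return {
        seg: {
            "adoption_rate": sum(u["adopted"] for u in rows) / len(rows),
            "median_days_to_value": median(u["days_to_value"] for u in rows),
        }
        for seg, rows in groups.items()
    }

report = metrics_by_segment(users)
for seg, m in sorted(report.items()):
    print(f'{seg}: adoption {m["adoption_rate"]:.0%}, '
          f'median TTV {m["median_days_to_value"]} days')
```

Breaking metrics out per segment like this makes it harder for one loud segment (or one "superuser") to stand in for the whole user base.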
3. Explore other perspectives
Feedback is a gift, no matter what it’s telling you. Don’t just focus on customers who are already happy. Get fresh points of view from multiple areas, including users you consider your “haters.”
Build a complete picture by soliciting regular feedback: poll and survey your users where and when it matters most (while they're in the app) to gauge the efficacy of a new feature or surface the wants and needs that should inform the roadmap. Proactively target user segments whose voices may be underrepresented in product planning meetings.
4. Be curious, yet specific
Being data-driven is necessary for a product team to succeed, but it's not sufficient. Be curious. Experiment. Test new approaches. But be clear about what you're trying to learn and what "success" looks like, including both positive and negative outcomes. Be specific about your KPIs and get relevant stakeholders aligned.
Challenge your own ideas as much as possible, and when data supports your point of view, verify that it's relevant and statistically significant.
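The significance check above can be sketched as a quick two-proportion z-test: given adoption counts from two cohorts, is the difference large enough that it's unlikely to be noise? The cohort sizes and counts below are purely illustrative:

```python
from math import sqrt, erf

def two_proportion_z_test(success_a, n_a, success_b, n_b):
    """Return the two-sided p-value for a difference in two proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value via the standard normal CDF, Phi(x) = (1 + erf(x/sqrt(2)))/2.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# E.g. 62 of 400 users adopted a feature in one cohort vs 45 of 410 in another.
p_value = two_proportion_z_test(62, 400, 45, 410)
print(f"p-value: {p_value:.3f}")
if p_value < 0.05:
    print("Difference is unlikely to be noise at the 5% level.")
else:
    print("Difference could plausibly be noise; gather more data.")
```

Even a simple check like this guards against the clustering illusion: an adoption gap that looks decisive at a glance may not clear a conventional significance threshold.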