Collect Data

You're reading page three of the product development handbook.
Why we invest in research
Research is the process of bringing together data to create usable insights. The financial investment in research provides both tangible and intangible outcomes.
The tangible benefits:
reduced churn
increased conversion rate
reduced customer acquisition cost
increased total lifetime value
reduced customer support tickets
The intangible benefits:
better user experience
less customer frustration
increased team purpose and sense of direction
better word of mouth marketing
We invest in research to reduce risk, test assumptions, and make decisions. Everyone has assumptions, and the difference between the assumed reality and the real world needs to be fully understood.
To read more on why we test assumptions, check out this book → Just Enough Research by Erika Hall
Prioritising is like gambling. We use research to help us win more often.
Good research helps us prioritise what's important and what deserves the most attention. Prioritisation is like gambling - you place bets on the items ranked highest in priority, and you either walk away with big rewards or lose what you put in.
What to build and when to build it?
What you build and when you build it are the hardest questions to answer: 1. because it costs money to follow through with building a Feature, and 2. because that time could have been used to build something more valuable.
What you build: The Feature that goes into production is designed to solve a problem. But there could be many different ways to solve the problem, and each option is optimised for a different experience, focus, effort, value, and timing. What you build affects every item downstream in your backlog, and whatever you build also has to be maintained.
When you build: For every second you spend working on one thing, you are not working on something else. Prioritising Feature A over Feature B is a gamble that Feature A is actually more important than Feature B. In the moment, almost everything feels equally important, and it's your job to cut through the noise.
The cost of making a bad decision can be amplified by reputation damage, lost future business, and the loss of current customers. The total cost can be higher than your initial bet plus the opportunity cost - a bad decision could cost your company everything.
All inputs are research
All inputs are research, not just conversations with customers or statistics about them. Security Features are less visible and easier for the team to ignore. But a decision to de-prioritise security Features could lead to your customers' or company's information being accessed by people who shouldn't have access. This is called a data breach.
According to IBM, the average data breach cost $3.92 million in 2019, and $1.42 million of that total comes from 'lost business'. This research into the cost of a data breach is one piece of data that can be used to prioritise a security Feature over all other Features. And although it isn't direct research with the customer, this information is an input that contributes to decision making for the wider product.
Maintaining a Research Pipeline
The Research Pipeline is a separately managed backlog of all things that need validation and research. Research is needed to back your decisions with data.
Who owns the work: Researcher. Who's involved: Product Manager, UX Designer, UI Designer, Business Development, Software Developer, Marketing.
There are 5 main types of research that are maintained in a Research Pipeline. These different types of research occur at different times in the Feature lifecycle.
1. Discovery → 2. Problem validation → 3. Solution validation → 4. Usability testing → 5. Measuring outcome
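As a sketch only, the five pipeline stages above could be modelled as a simple backlog item structure. All names here (`Stage`, `ResearchItem`, the example item) are illustrative, not part of the handbook's tooling:

```python
from dataclasses import dataclass, field
from enum import Enum

class Stage(Enum):
    DISCOVERY = 1
    PROBLEM_VALIDATION = 2
    SOLUTION_VALIDATION = 3
    USABILITY_TESTING = 4
    MEASURING_OUTCOME = 5

@dataclass
class ResearchItem:
    """One item in the Research Pipeline backlog."""
    title: str
    stage: Stage = Stage.DISCOVERY
    assumptions: list[str] = field(default_factory=list)
    insights: list[str] = field(default_factory=list)

    def advance(self) -> None:
        # Move to the next stage of the Feature lifecycle,
        # stopping at the final (measuring outcome) stage.
        if self.stage is not Stage.MEASURING_OUTCOME:
            self.stage = Stage(self.stage.value + 1)

item = ResearchItem("Export to CSV", assumptions=["Users need offline copies"])
item.advance()
assert item.stage is Stage.PROBLEM_VALIDATION
```

Keeping research items in a structure like this makes it easy to see, at a glance, which stage of validation each idea has actually reached.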
1. Discovery
If you have an idea and want to build it into your product - start with discovery. Identify your assumptions and create a test strategy for how you can validate or invalidate your assumptions. Discovery is the process of building understanding and gathering insights.
Build an ideas board
Your team needs a place to put every idea that can be captured from silly and absurd to reasonable and common sense. The goal of discovery is to uncover problems and identify if your thinking was on the right track. Once you understand more fully what the problem space is like for your end-user, you can work towards solving their problem.
What does discovery look like?
Discovery sessions are run in rounds. They combine data from interviews, research papers, surveys, statistics, usability sessions, analytics, or any other possible source of information that could be useful.
Usually discovery rounds include interviewing people to learn more. Semi-structured interviews are best for discovery because they can provide both quantitative and qualitative insights.
Steps for running discovery interviews:
Define your cohort
How is it segmented?
How many people should be included for accuracy?
Run a recruitment round for interviews
How will you find people who match your user segment?
Prepare for your sessions
Who should be in the room?
How will you be taking notes?
Are there any templates, handouts, or forms you need to prepare?
What questions are you going to ask?
Will your questions provide both quantitative and qualitative data?
Will there be any activities such as card sorting, user journey mapping, or drawing?
Run the interview
Take notes and transcribe them
Process the data into insights
Report on the insights
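For the "how many people should be included for accuracy?" question, the standard sample-size formula for estimating a proportion is a useful back-of-the-envelope check when your discovery round includes survey-style quantitative questions (qualitative interviews need far fewer people). This is a general statistics sketch, not a formula from the handbook:

```python
import math

def sample_size(z: float = 1.96, p: float = 0.5, e: float = 0.05) -> int:
    """Minimum sample size for estimating a proportion.
    z: z-score for the confidence level (1.96 ≈ 95% confidence)
    p: expected proportion (0.5 is the most conservative choice)
    e: acceptable margin of error
    """
    return math.ceil((z ** 2 * p * (1 - p)) / e ** 2)

print(sample_size())       # 385 responses for ±5% at 95% confidence
print(sample_size(e=0.1))  # 97 responses if ±10% is acceptable
```

Loosening the margin of error shrinks the required cohort dramatically, which is often the practical trade-off in discovery.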
2. Problem validation
Take all the insights you've learned and collect enough data from enough sources to prove:
That the problem is real and not assumed.
What the problem actually is and the full context of the problem.
The people who experience the problem and their pushes and pulls.
Problem validation helps build your roadmap.
3. Solution validation
Once you know the most important problem that needs solving, you go through a process of designing the ✨best✨ solution. Best could mean many things: fastest to build, first to market, lowest cost to build, best user experience, etc.
The solution validation lives in the Feature Spec. It's the primary source of data that backs up the investment into building a Feature. Once you have picked a solution, you can choose to go through the process of testing assumptions about that solution before you build it.
4. Usability testing
Chances are, at some point in your life you’ve been angry at a computer. Or mad when the form you spent the past 45 minutes filling out loses all of your information. Usability research is the most valuable component of design research. It's how you know that what you are designing will function as intended.
Good usability testing is what makes software feel ‘intuitive’. What that means is a user instinctively knows where to go and what to press - elements and functionality make sense. It helps you to identify confusing or frustrating parts of your software and draws attention to the bits that users engage with and enjoy.
Everybody has experienced bad design before; products which are frustrating, confusing, or just not that useful. Bad design can be found everywhere from websites to voting systems. To avoid bad design, there’s a fundamental first step you have to take - understanding your user’s point of view. That’s what design research is all about - understanding your user, so you can make them something that fits their needs.
What's a usability session?
A usability session is a one-on-one test, where a researcher observes a user as they complete a series of tasks. This method is commonly used to uncover usability issues in web and mobile apps, or on websites. You can run tests on just about anything: from prototypes to live products.
How to run usability sessions?
To read more on how to run usability research → Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests by Jeffrey Rubin and Dana Chisnell. This book is a bit old, but it offers some pretty solid advice, and even goes so far as to explain best practice for where to sit in an interview room.
Recruit five users. Believe it or not, the optimal number of participants is five. If you run a test with more than five users, you are likely to only discover the same issues, again and again, meaning it becomes a waste of time and resources. It’s better to test with five users, fix the issues and test again with another five users than it is to run a large test with 10-15 users. Usability testing doesn’t have to be costly to get results.
You also need to ensure you test with a range of users. For example, we trialled Accreditron with both really small organisations and really large organisations - which represent the extremes of our range of users.
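The diminishing returns behind the five-user rule can be sketched with the classic Nielsen–Landauer model, which assumes each participant independently uncovers a fixed fraction of the usability problems. The 31% default below is their published empirical average, not a number from this handbook:

```python
def problems_found(n_users: int, p: float = 0.31) -> float:
    """Expected share of usability problems uncovered by n independent
    test users, where each user finds a fraction p of the problems."""
    return 1 - (1 - p) ** n_users

# Diminishing returns: two rounds of five users (with fixes in between)
# beat one big round of fifteen.
for n in (1, 5, 10, 15):
    print(f"{n:2d} users → {problems_found(n):.0%} of problems found")
```

With p = 0.31, five users already surface roughly 84% of the problems, which is why fixing and re-testing with a fresh five is the better use of the remaining budget.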
Define your tasks. During the test, you ask users to complete a series of tasks. Choose tasks that are crucial to your project’s success or that challenge assumptions your team may have made.
For example, if you were testing an online shop, you could ask a user to purchase an item, find the sale page, or save an item to their wishlist.
Start the session with an interview. This is a great chance to ask your participant a few questions about their thoughts and experiences. This foundational information will give you context to help ensure the best results. If you plan on recording the session, make sure to collect the participant's consent at the start of the session.
Explain the session. It’s important that your participant is comfortable with giving you honest feedback. Make it clear that this is a test of the software, not of them. Ask the participant to complete the tasks while talking aloud about their thought process and feelings.
Observe. As the researcher, your goal is to watch the user complete the tasks, and notice when they encounter problems or have moments of confusion. This could be a participant not finding the right page, or not being sure what a sentence means right away. This is why you ask them to speak aloud; it lets you understand their thought process. You’re not only looking for issues - take note of what your user liked and enjoyed.
Don’t help them! Most people want to be helpful, but if you offer assistance you may inadvertently affect the results. The only time you can step in is if they are honestly stuck and can’t progress at all.
Record. Usability sessions need to be recorded or the finer details will be lost. It is recommended that you both record the user’s screen and capture audio of the whole session.
Repeat! Once you have completed all of your sessions and analysed your data, you’ll have a list of issues to fix. When you’ve solved the identified issues, the best thing you can do is to run another session. This will ensure you have fixed the issues and that you haven’t created any new ones. This iterative process is how software goes from good to great.
5. Impact & outcome validation
This is the final stage of research. Successful outcome validation is the definition of done for product Feature development. Once you've set measurable success criteria for each Feature, showing that the outcome of the solution has been achieved means that Feature is done.
Can research be trusted all the time?
There are four places that you can pull information from to make a decision:
your past experience
your team's past experience
research and data
assumptions or guesses
You can't always rely on research to make decisions. Running quality research to fully understand an issue may be out of budget or take too long. We have to accept that sometimes research shows only a sliver of the full picture. Not every decision needs to be backed with data; you are allowed to use your experience and trust your gut. Still, I recommend you strive for perfection and try to make informed decisions as often as possible. And remember, the minimum number of interviews for research to be valuable is 5. I have full faith that you can manage to interview 5 people when building a digital solution.
Misleading statistics
Data can be used to mislead, to manipulate, and to back up an argument. A skilled arguer can leave data out, or use it only when it suits their point. The recommended position is to always have data to back your decisions. But sometimes that just isn't possible or feasible.
I went home to visit my family for the holidays. My parents live in a neighbourhood in Texas with a homeowners' association. The roads in the neighbourhood are managed and maintained by the neighbourhood, and the speed limit and its enforcement are also managed by the neighbourhood, not the city.

One person in the neighbourhood is convinced that there is a speeding problem and has petitioned to lower the speed limit from 30mph to 20mph. Others in the neighbourhood argued that they aren't interested in lowering the speed limit and don't think there is a problem. Two points of view: 1. there is a speeding problem, or 2. there is not a speeding problem.

In an effort to validate that there really is a speeding problem, the neighbour who wants to lower the speed limit rented a radar speed-detecting sign and placed it at the bottom of a really big hill. She collected data on how fast cars were going when they passed the sign, then shared it with the neighbourhood to back up her argument. "More than 75% of cars are travelling faster than 30mph! We must fix this and lower the speed limit!" she argued.

But what she didn't share was the context of the data: 1. What is the margin of error on a radar machine? Answer: usually 5mph. 2. What trends does the data show? It turns out that the cars between the 75th and 95th percentiles were travelling between 30mph and 35mph, and only 5% of cars were going more than 5mph over the limit. In other words, 95% of the cars were travelling below the limit or within the margin of error of the radar machine.

The original argument was that 3/4 of cars in the neighbourhood are speeding, yet insights pulled from the same data could back up either side of the argument, depending on how the insights were framed.
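With some hypothetical radar readings shaped like the story above, the framing problem is easy to reproduce: the same dataset supports both "most cars speed" and "almost nobody meaningfully speeds", depending on whether you account for the radar's margin of error. The numbers below are invented to match the percentages in the anecdote:

```python
# Hypothetical radar readings (mph): 25% at/under the 30 mph limit,
# 70% within the radar's ~5 mph margin of error, 5% clearly speeding.
speeds = [28] * 25 + [32] * 70 + [38] * 5

limit, margin = 30, 5
share_over_limit = sum(s > limit for s in speeds) / len(speeds)
share_over_margin = sum(s > limit + margin for s in speeds) / len(speeds)

print(f"Over the limit:            {share_over_limit:.0%}")   # her framing
print(f"Over limit + radar margin: {share_over_margin:.0%}")  # fuller picture
```

Both numbers come from the exact same data; only the threshold used to frame the insight changes.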
It is important to not trust insights alone, and to always be able to reference the data used to develop those insights. Always keep your original data and organise it in an accessible way for all of your relevant stakeholders.
To listen to another example of data being used to mislead, check out this podcast → the 'Basement Tapes' episode of Revisionist History by Malcolm Gladwell. This episode dives into the saturated fat debate and argues that the US population was misinformed through bad use of data.

go to next page → Capture Context