There’s long been a debate in the design community about quantitative vs qualitative research, and which is thought of as “better”. Both quant and qual methods can have value in the design process, as long as they’re geared toward action. So rather than judging one research method as “the best”, acknowledge the strengths, weaknesses, and best uses of each.
This article outlines six examples of research activities, both quant and qual, and when to use each to its strengths.
Research activity selection process
The numbers and statistics produced by quantitative methods can signal the existence of a problem, but tell us little about its cause, or how to solve it.
This is the main reason most design researchers use qualitative methods exclusively, or alongside quantitative ones. Qualitative research helps us empathize with users, to understand their needs and environment. This kind of empathy not only tells us why a problem exists in the first place, it also lets us build a narrative about a potential solution.
A narrative based on real user experiences helps us share intent and knowledge between the stakeholders who assign budgets, the designers who propose the solution, the developers who implement it, and the testers who verify it.
The narrative is just as valuable a research outcome as data or insights.
What are we trying to achieve with user engagements?
Everything we do should get us closer to a solution. So every user engagement should have some impact on how we eventually execute the solution. This could mean nailing the project definition, isolating a research question or direction of inquiry, deciding on features, or crystallizing a design detail. From the design team’s perspective, the most valuable data or insight that comes from a user engagement enables design action. We know it’s valuable if it does one or more of the following:
- It helps us distinguish between wants and needs
- It points out a new opportunity
- It narrows down or changes the task at hand
- It tells us what to avoid (and why it should be avoided)
- It gives us empathy for the user’s context, in a way that changes how the product is designed
- It clarifies what is required for the product to be valuable to users
To extract these types of insights from user engagements, it becomes very important to ask “why”. What was the reasoning behind this reply, or this choice?
For example, when users give a blinking blue light a poor rating, is it really about the color? Or is it actually about something else? Perhaps the blinking pattern is too aggressive, or the bulb size is too big. Does the light need to be removed altogether? The “why” behind the “what” has an enormous impact on the action we eventually take.
Qualitative Methods: Gather insights to get to action.
There are myriad qualitative research methodologies out there, but let’s focus on three specific ones, because they’re scalable, and they lead to action.
“Scalable” means that they can yield useful results after meeting 7–10 users, and can also be used at a large enough scale that you’re essentially doing in-market testing. As a rule of thumb, when you surpass 10 users you start getting repetition in responses, resulting in diminishing returns of insights. When you get past 15–20 users, the engagement can take months, and the benefits of choosing a qualitative methodology diminish.
Observations
Observation is a good way to find inspiration at the start of a project. Observations give you insights about behaviors: what people actually do, not what they say they do. The design actions that come out of observations are ideas to try, problems to address, and values to aim for. The effort you put into observations scales well: even a single session of passively observing and taking notes for a few hours can yield insights.
Monitored Testing
Monitored testing is a good way to inform how to iterate or inspect the execution of a design. It’s not enough to just imagine what would happen if you implemented a particular solution — the emotional aspects of most products demand that you actually experience them. Experimenting and making multiple prototypes is the key to evolving a design quickly. By monitoring a prototype in use, in combination with interviews, you’ll encounter both direct problems and new opportunities.
One misconception about testing is that it has to be done according to a protocol that you repeat strictly each time. But it also yields valuable results when performed iteratively, on prototypes that change with each iteration.
Interviews
Interviews can complement observation and testing; they can also stand alone. As a stand-alone method, they’re a good way to find inspiration and thoroughly understand a topic. As a complement to other methods, they’re a good way to understand motivations and fears (the “why”).
Quantitative Methods: Gather data to get to decisions.
User engagements that gather data points are often done to facilitate decisions on where to spend effort before defining a final product. But quantitative methods can also be used to inspect and iterate on close-to-finished products in a way that leads to design action. Here are three examples.
Surveys
Surveying is a rather limited method, as it mostly provides answers to the questions you know to ask. But it can be a low-effort tool to inform a position: confirm or debunk a prejudice, answer a question, or verify a proposition.
Smoke Tests
A smoke test is a way of measuring the desirability of a digital experience idea, for example by creating a fictional ad campaign and website, then seeing how many people click through on it. When used to evaluate fictional solutions, smoke tests sit in an ethical grey zone: they don’t cause any direct harm, but they can definitely be misleading. They do, however, have the advantage of delivering hard numbers based on the reactions of real-world users.
You perform smoke tests in order to inform or iterate; to eliminate ideas from a large pool in order to focus your efforts. They’re much less useful, though, for generating new ideas or solving problems. Smoke tests only answer the question “is this more desirable than that?”, and tell you nothing about why. You need a large enough volume of smoke tests, with similar executions, in order to produce comparable results.
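For readers who want a sense of what “comparable results” can mean in practice, here is a minimal sketch in Python. The campaign names and numbers are invented, and the two-proportion z-test shown is just one common way to check whether a difference in click-through rates is likely to be noise rather than a real difference in desirability:

```python
from math import sqrt

def click_through_rate(clicks, impressions):
    """Share of people who clicked after seeing the ad."""
    return clicks / impressions

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    """z-statistic for the difference between two click-through rates.
    Roughly: |z| above ~1.96 means the difference is unlikely to be
    pure chance at the usual 5% level."""
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (clicks_a / n_a - clicks_b / n_b) / se

# Hypothetical results from two smoke tests with similar executions.
idea_a = {"clicks": 120, "impressions": 4000}
idea_b = {"clicks": 95, "impressions": 4100}

print(f"Idea A CTR: {click_through_rate(**idea_a):.2%}")
print(f"Idea B CTR: {click_through_rate(**idea_b):.2%}")
print(f"z: {two_proportion_z(idea_a['clicks'], idea_a['impressions'],
                             idea_b['clicks'], idea_b['impressions']):.2f}")
```

Even a clean numerical winner only tells you that one idea was clicked more than the other, not why; that question sends you back to qualitative methods.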
In-Market Testing
In-market testing means releasing a product in a limited, monitored way. This can be a good way to inspect a nearly finished product, by gathering data on key indicators like performance, adoption, and discoverability. Depending on how the test is set up for monitoring, different types of outputs can be generated. Like smoke tests, in-market testing can be used to kill off ideas, but it can also be a way to find new problems. In-market testing requires a nearly finished product or feature to be valuable. The more complex the product, the more complex the testing.
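As a purely illustrative example of what that data gathering might look like, here is a short Python sketch that derives adoption and discoverability figures from a simple event log. The event names and log format are assumptions made up for this sketch, not part of any particular analytics tool:

```python
from collections import defaultdict

# Hypothetical event log from a limited release: (user_id, event) pairs.
# The event names are invented for illustration.
events = [
    ("u1", "app_opened"), ("u1", "feature_x_used"),
    ("u2", "app_opened"),
    ("u3", "app_opened"), ("u3", "feature_x_seen"), ("u3", "feature_x_used"),
    ("u4", "app_opened"), ("u4", "feature_x_seen"),
]

# Collect the set of events observed for each user.
per_user = defaultdict(set)
for user, event in events:
    per_user[user].add(event)

active = [u for u, evs in per_user.items() if "app_opened" in evs]
# Discoverability: active users who at least found the new feature.
discovered = [u for u in active
              if per_user[u] & {"feature_x_seen", "feature_x_used"}]
# Adoption: active users who actually used it.
adopted = [u for u in active if "feature_x_used" in per_user[u]]

print(f"Discoverability: {len(discovered)}/{len(active)} active users")
print(f"Adoption: {len(adopted)}/{len(active)} active users")
```

Numbers like these can kill off a feature or flag a new problem (for example, high discoverability but low adoption), but as with smoke tests, they won’t explain the behavior on their own.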
Conclusion
It’s always about increased confidence.
All types of user engagements should be designed to enable the team to act with more confidence. They should give stakeholders confidence that the product is desirable, give developers confidence that it’s stable, and make designers confident that they are solving the right problem.
User engagements are active. They create opportunities, and they let you iterate and execute versions of a product. Engagements done within the design process may take more or less time, and produce outcomes with higher or lower fidelity, but they should always propel the product forward. They should be performed with small groups of users, preferably face to face. This allows multiple things to happen at once. Non-verbalized problems that would never appear in a phone interview or survey are often spotted in face-to-face settings.
This is when new behaviors and opportunities are found, because similar tendencies are observed across users. When users say one thing and do another, new ideas are born. More importantly, these moments offer clarity and bring us closer to action: the highest goal of any kind of engagement.
Design research is only as good as the action it enables
PS. With user testing, it’s usually better to see what people do instead of listening to what they say. It’s also important to notice what they don’t do. When you notice their actions, or lack thereof, start asking questions, but not directly: not “Why did you do X?” or “Why didn’t you do Y?”.