You should talk to users. Mostly, that’s good advice. But the good decisions rarely come from interviews. They come from building, using, and fixing things. Most teams do research not to find direction, but to avoid being wrong. They treat it as a layer between themselves and the responsibility of making a call. It’s easy to hide behind “x/m users understood this.”
This is especially common in teams that don’t use their own product. Instead of living the product, they schedule interviews. The further they are from the problem, the more research they ask for. It looks like diligence. It’s often just distance from accountability.
The problem isn’t research itself. It’s how it gets used. Most research is backward-facing. It tells you what users did, what they said, or what they remember. But building good products is forward-facing. It’s about deciding what should exist. That’s not something you can outsource. If you’re not already close to the problem, no number of calls will fix that. Ever.
I’ve sat through full research cycles where the only real outcome was the illusion of alignment and a quiet handoff of responsibility to the ‘user group’. I’ve seen insight decks that said things like “users value ease of use” or “onboarding needs to be smooth and is currently confusing.” True, but useless. These are insights you can find in a Medium post. No one actually makes better decisions because of them.
AI is now part of this too. Tools can summarise interviews, tag emotions, surface highlights. But a tidy research report doesn’t mean the right product will be built. It just means the team will feel more confident about the one they already wanted to build. I’ve seen teams fall apart after months of research. They can’t agree on which feedback to prioritise. They debate outliers. They debate inputs from the people who are actually building the product or using it. They cling to quotes that support their opinion. Everyone has data, but no one has clarity.
The thing is, you can’t ask users to predict what they’ll like. A survey won’t tell you which new design will work better, because people respond based on what they already know. Familiarity feels safer than change. If you show them two options, they’ll usually pick the one that looks more like the old one, even if the new one solves the real problem. That doesn’t mean the new design is wrong. It means research reflects comfort, not potential.
The best ideas usually come from usage. From trying to install your own smart lock and realising it’s harder than expected. From noticing that users always skip the “Learn more” link. From observing someone struggle to repack your product back into the box. These are observations. And they only happen when you’re paying attention to the product itself.
There’s a difference between asking and watching. You can ask users what they want and get vague answers. Or you can watch how they use what you’ve already built and see what they avoid. The first gives you opinions. You can’t depend on opinions. The second gives you behaviour. And behaviour is always clearer. Research becomes dangerous when you use it to replace your judgment. A good builder uses research to sharpen their instincts. The moment you start outsourcing your thinking to interviews, you end up solving reported problems instead of real ones.
Now, it’s not that research is bad. It’s that it’s rarely the thing that changes the outcome. If you’re close to the problem, it helps. The most useful research is usually informal: a phone call after something breaks, a visit to a service centre, a screen recording of an actual customer doing something weird. People often say: “But what about bias?” “What about edge cases?” “What about people who aren’t like you?” These are valid concerns. But they assume that builders who use judgment are flying blind. They aren’t. Or at least, not in the way one would assume.
Cognitive bias is real. Founders often over-index on their own use cases. Engineers optimise for what’s easy to build. Designers sometimes value aesthetics over function. But the response to this isn’t always “do more research.” It’s “get closer to the right kinds of problems.” A team building a purifier for Indian kitchens shouldn’t be doing 50 user interviews across income segments. They should be fixing the one tray that keeps breaking, or the flow that keeps confusing support staff. The feedback is already there. It’s just not sitting on a lab workbench.
There’s also the argument that taste and intuition are subjective and exclude people. They can be, when used carelessly. But most good judgment is built from repeated exposure to failure. The designer who removed the onboarding screen didn’t do it because they felt like it. They did it because they saw users skip it 100 times. The hardware lead who rejected the OLED screen didn’t do it because they lacked empathy. They did it because no one used it in testing, it added cost, and it made servicing harder. That’s not ego. That’s pattern recognition.
If you’re building for a market you don’t live in, then yes, you need to do fieldwork. But if you’re solving a problem you’ve already experienced 50 times, research becomes a way to delay responsibility. No amount of persona-building will match what you learn from debugging a live feature that thousands of people use every day. You don’t need more empathy exercises. You need to own the product long enough that its pain points become personal.
The teams that consistently ship well rarely lead with research. They lead with usage, observation, instinct, and conviction. Then they use research to catch what they missed, not to decide where to go. If you’re already close to the product, trust yourself more. If you’re not close, get closer. Use the purifier. Install the lock. Repack the box. Sit with support tickets. You’ll learn more than five interviews ever could. The distance between you and the user is closed with effort.
You don’t need more research. You need more responsibility. That’s where real product sense comes from.