the distance from researcher to insight

I’m a people researcher. Call it user research, customer research, or product research. The work involves building rich qualitative or quantitative datasets, digging into that data, analyzing patterns, surfacing insights, and translating all of it into something stakeholders can use to make better design, product, engineering, and business decisions.

What makes this type of work satisfying isn’t just the intellectual challenge. It’s the connection. When I’m interviewing people or reading survey responses or deep in analysis, wrestling with the data, I’m building understanding. I’m connecting to someone else’s experience, making sense of how they navigate the world, and finding ways to articulate that understanding so others can see it too. The insight is personal and meaningful. And the relationship between the work I do and the understanding I produce is what makes it matter.

But I’ve been thinking about how easy it is to cheapen that relationship and about the risks associated with doing so.

The modes of production in research are shifting: both the productive forces themselves (the tools) and the social and technical relations of production, including the relation between workers and the objects of their work.

AI research tools promise to automate analysis, summarize findings, and generate actionable, prioritized insights for stakeholders. Notwithstanding quality issues with current outputs, these tools can speed things up and, in expert hands, they can sometimes get from question to insight just a bit faster. But they also introduce distance between the researcher and the objects of their work. They decouple the process of understanding from the act of producing insights. When that decoupling happens, the work becomes less meaningful, and that matters.

I’ve been at this long enough to know that part of what motivates people is the meaning they find in their work: “I love problem-solving and [this work] lets me exercise that muscle every day!” … “I like the creativity of it.” … “I’m a critical thinker, and those knotty problems help me hone that craft.” … “I’m all about understanding other people, and I don’t think I could work in a role where I didn’t get to do that.” I’ve heard versions of this feedback from app developers, software engineers, solutions architects, business analysts, operations managers, and even from C-level leadership in tech and banking.

Chipping away at the relationship between a researcher (or research team) and an insight destroys essential qualities: the personal connection to data, the wrestling with ambiguity, the moment of recognition when you finally understand someone or something. What remains is output without understanding, and that’s bad for decision-makers.

When insights are distant from the people who produce them, they become riskier for the decision-makers who rely on them. Deep understanding de-risks decisions. You might be thinking, “hmmm, but no insights are perfect, and superficial understanding is better than guessing.” Is it, though? I’m not so sure. Arguing that bad data is better than no data is arguing for wasting a firm’s resources, and arguing that surface-level understanding is better than guessing is much the same. The surface offers no depth and, in the context of applied research, I’d argue it provides little or no competitive advantage. Insights forged through a deep relationship between the researcher and their data are better insights. Depth of engagement surfaces nuance, identifies edge cases, and reveals the context and tensions that determine whether and how an insight applies to a decision. Surface-level insights generated through disconnected processes may approximate that work, but that’s about all they can do.

Historical materialism tells us to pay attention to the relationship between workers and the objects of their work. When they’re alienated from their output (and it doesn’t matter what the role or the output is), the work degrades both for them and for everyone who depends on it. The degradation varies by setting: it may look different in a call center than on a factory line, and different on the line than in a fulfillment center or a corporate office.

In research, that degradation shows up as insights that look legitimate but lack depth. Reports that check boxes but miss meaning. Outputs that inform decisions without providing understanding and, thus, introduce risk. The knotty problem for researchers is figuring out how to strike the right balance between integrating AI tools (and here I’m adopting a much broader definition of AI than the one that currently dominates popular discourse) and maintaining a strong relationship with the objects of our work, so that we continue to deliver value to decision-makers through timely, high-quality insights imbued with deep engagement and understanding.

We need to be smart and intentional about adopting tools that risk introducing distance between researchers and insights. The connection between a researcher and their work isn’t just about job satisfaction. It’s about the quality and reliability of the understanding we’re tasked with producing.

If you don’t believe me, I highly recommend reading any of Clay Christensen’s writings on innovation, including some of the Harvard Business School cases featured in his disruptive innovation course. Deep understanding of customers is *the* way for incumbents both to protect themselves against disruption and to innovate on behalf of those customers.


I wrote a separate entry about the value of prioritizing timeliness over speed of delivery. You can read that post here: timeliness, not speed.
