kayfabe research

Kayfabe is a term for scripted conflict and wrestling storylines that fans know are not real but engage with anyway. It’s performance masquerading as reality, and it works partly because the emotional response, which is real for the audience, matters more than truth. It doesn’t matter that the wrestlers aren’t actually enemies or that the wrestling isn’t real in the sense that a mixed martial arts fight is real. It matters that it’s entertaining and provocative.

When we’re talking about something designed to entertain (wrestling), I’m on board. But when performance masquerades as reality in domains designed not to entertain but to de-risk and strengthen decision-making (like customer research), I’m opposed.

Research should never be performance masquerading as reality, but the risk that it’s becoming performative is real. And in some cases performance masquerading as reality is an accurate description of how customer research is done. Call it the kayfabe model of research.

In the kayfabe model, you perform the role of a researcher—delivering lines about expertise, methods, insights, and implications that seem rigorous or valid even though they’re not. They make for a compelling story, but there’s no substance behind them. What makes this approach especially bad is that the audience for research—product managers, designers, managers, directors, VPs, and the C-suite—doesn’t know there’s no substance. They don’t know it’s a performance. They think they’re getting expertise, methodological rigor, valid insights, and implications that take strategies, roadmaps, and resources (among other things) into consideration. This is where the analogy breaks down: in wrestling, the audience knows it’s fake but engages anyway; in research, the audience thinks it’s real and engages accordingly.

In the kayfabe model, whether insights reflect the truth matters less than their fitness for a narrative or their ability to provoke a reaction from an audience. But when researchers deliver kayfabe insights, they’re not doing research. They’re performing it and merely portraying intellectual rigor.

Now, there’s always some level of role performance in everyday life, but there’s a difference between performing something without the substance that goes along with it and doing the thing.

Think of an actor playing a researcher versus an experienced research practitioner doing their job. The practitioner’s artifacts should meet certain criteria for research quality (e.g., rigor, validity). In my view, it’s part of a researcher’s job to ensure these criteria are met within reason and given the constraints of the project. The actor’s performance is not measured against those same criteria. It has the substance of performance, not the substance of (research) practice: it is designed to appear as research and to entertain.

The kayfabe model introduces risk into the PDLC that can compound over time. More kayfabe research produces more bad insights, creating an asymmetry between quality insights and kayfabe insights in a research repository. And because kayfabe insights have the appearance of rigor or validity without the substance, it becomes harder to distinguish one from the other. Is this a substantive insight that I can trust and use in my decision-making? Or is it a kayfabe insight? Judgment becomes important, as does access to project artifacts (planning documents, transcripts, notes, other media, and analytic process descriptions) that enable insight audits.

Researchers need to care more about whether our insights are true than whether they’re compelling, and that means being willing to deliver findings that don’t fit the preferred narrative. This is actively working against the proliferation of the kayfabe model of customer research. “True” and “compelling” are not mutually exclusive categories, by the way. But if pressed, prioritize truth first and then figure out how to deliver truth in a compelling story. That’s part of the craft.

I think decision-makers, whatever their role, do care about the quality of the research insights—whether produced by a researcher or by themselves—that inform their work. What would it mean if they didn’t care?

The same thing that happens if you don’t care about the quality of the food and drink that goes into your body. If you don’t care about the quality of the education (learning environment, teachers, books, activities/exercises) you or your family get. If you don’t care about the quality of the tests run on the materials used to make the bridge you drive across to get to work. The outcomes (physical and mental health, critical thinking capacity, personal safety) are all at risk.

Good outcomes require good inputs, and even though there is certainly lots of room for disagreement about just what “good” means, customer research that appears rigorous and valid but lacks the substance introduces unnecessary risk into the very processes that it is meant to de-risk.
