The THINKerry
By Kerry Edelstein
@researchkerry
March 25, 2024
For years, I’ve been trying to wrap my head around two significant challenges in the MRX industry:
1. A precipitous decline in data quality, despite significant investments in fraud mitigation and digital fingerprinting
It used to be that we lost and had to replace 5-10% of quantitative sample to bots, fraud, speedsters, or general inattention. More recently, I’ve had conversations where researchers quietly admit to throwing out and replacing 30-50% of their respondents during data collection. And it’s happening in qualitative research too.
2. A curious lack of creative innovation when it comes to research technology.
How many more qualitative research platforms do we need that have “polling” as a feature, but no actual qualitative creative tools like mood boards, storyboards, collages, or Mad Libs? Sure, we can integrate Miro and Pinterest boards, but why should we have to? And in quant research, if we don’t have enough Black research participants, and 20 years of tech investments haven’t changed that, why aren’t we trying new non-technical and collaborative approaches like partnerships with the NAACP, HBCUs, or Black churches?
My first boss from my 1990s research assistant years said something to me a couple years ago that made me laugh, and then groan, and then reflect:
“Research is the only industry where technology has made us worse at our jobs.”
I’m not sure he’s right that it’s the ONLY industry facing that, but I definitely agree that over the past decade, technology has introduced more errors and problems into market research than solutions. (Influential names in our field will invariably disagree, precisely because they have a financial interest in that statement being wrong. That doesn’t mean it is.) It’s mystifying, frustrating, and in my estimation the dark side of taking piles of investment dollars from funders whose business objectives are more about flipping companies and less about collecting accurate data that can be used to make meaningful business decisions. But that’s a different blog, for a different day. Regardless of the reason, the reality remains: we have a data quality problem, we aren’t using technology to elevate quality or possibility, and we need to fix that.
Of course, you can’t really fix these things if you’re not at the table. And one such table in the “restech” space (as we’ve dubbed it) is at SampleCon. I must confess that I’ve largely avoided SampleCon for the past decade, on account of not wanting to spend thousands of dollars so that vendors can pitch me for two days straight. This year, the conference was driving distance from home, at an idyllic Ritz Carlton lakefront resort, and I obtained a comp pass. The seat at the table was looking a lot more scenic and far less expensive this year, so I drove myself in the southern sunshine through the highways of North Carolina, South Carolina, and Georgia to the table in Lake Oconee.
Here’s what I took away.
1. As an industry, we’ve become much too engineering driven and not enough art driven.
Art? What are you talking about, Kerry? You might be thinking that sounds like a crazy thing to say. But it’s far from crazy; it’s an educational movement. We’ve become a STEM industry that needs to be a STEAM industry. If you’re not already familiar with the difference, here’s a useful blog about what distinguishes the two and why STEAM has begun to replace STEM in the education sector.
During one of the panels, I heard a lot about “innovation”, and yet no idea struck me as either new or forward thinking; there wasn’t anything I hadn’t been hearing for almost 20 years. So when the Q&A silence got a touch deafening, I stood up and walked to the mic. And I asked the panel:
“Where can we innovate in the sampling industry without technology, and how can those ideas help heighten representation in our sampling frames? How can we innovate our processes, our thinking, even our entire approach to recruitment?”
Nobody had an answer.
One panelist did proactively address representation as a necessary industry commitment (the others didn’t even touch that part of the question) – and with that I agree. The other three almost immediately used the word technology in their answers, even though I’d specifically asked them not to. Then those same three collectively agreed that technology and innovation are inextricably linked.
But they’re not inextricably linked. Think of the once-innovative business concepts like the gig economy, agile project management, and upcycling – these concepts can certainly be applied to technology or utilize technology, but none of them inherently rely on it. I once agile project managed my way through a cross-country move, iterating in sprints toward completion while hiring part-time (AKA “gig”) contractors to help maintain Research Narrative while my bandwidth was constrained, as I packed up my old size 4 clothes that will definitely never fit again but could maybe be upcycled into some cool accessories one day. No technology involved.
At SampleCon, nobody broached any of the ideas I’ve considered over the years, like partnering with the NAACP to augment Black representation in sampling, or Rotary clubs to increase rural representation. Nobody talked about different types of gamification design and how that might better engage younger respondents toward higher feasibility. Nobody talked about new funding sources, like a non-profit foundation model for research panel funding.
Nobody thought outside the tech black box. I truly believe you can’t be innovative without also being imaginative. The research industry is full of brilliant engineers and data professionals, but we’re being steamrolled by a STEM mindset that is neither imaginative nor creative. And that’s how I found myself wondering, “What would an artist tell us to do?” We need to start thinking more STEAM, and less STEM, to harness opportunity and solve the real problems in front of us.
2. We are a research industry that doesn’t do research on ourselves.
At one point during the conference, I found myself talking to one of our own vendors about the features they’d built into their research tech platforms. You know, one of those platforms that offers polling for qual, but no creative exercises. Not one feature they’ve launched in the past several years has been what our agency is looking for. And when I offered half a dozen ideas in under one minute, a senior team member at that company looked stunned. “That is great feedback. Email me those ideas!” he said. And I will. But I think the greater issue is, why has nobody ever asked?
Here’s the reality check of restech: it’s never started with the customer. Nobody asked for hundreds of digital measurement platforms that don’t speak to each other; they asked for total cross-platform audience measurement. Nobody asked for programmatic sampling; we mostly asked for double opt-in panels with identity verification. Nobody asked for synthetic data; in fact, I know end-clients who won’t touch anything AI-driven because a black box of algorithms is a no-go for a federally regulated industry like banking or telecom. The latest thing nobody asked for, that I’m suddenly hearing, is “qualitative at scale.” Nobody asked for that; what would actually be useful is existing LLMs that can accurately read typos, sarcasm, emojis, slang, and acronyms – in multiple languages and dialects. Let’s optimize what we have before we rush off to the next shiny new object.
It’s become clear to me that Silicon Valley investors and the engineers they hire are driving the evolution of restech, instead of brand and agency buyers driving that development. It’s deeply ironic that we’re in the business of consumer research, and our “innovation” is being dictated by a tech mindset that quite literally ignores its customer. We’re in the business of market research, and yet our research tech partners do almost no market research on their customer market: us. Fellow researchers, why are we letting people who’ve never designed and conducted research, let alone defended it to executives making multi-million-dollar decisions, dictate what methods and tools we need? We’re the experts, not them. We’ve dangerously ceded that expertise, and we need to own it again.
So to my fellow insights researchers, here’s my biggest takeaway: We’re the experts. We need to stop letting tech investors and product engineers with no insights experience tell us otherwise, or “tech-timidate” us into thinking we’re fearful or anti-innovation. Once upon a time, I embraced and helped pioneer the early years of tech innovation in research, when suddenly we could do ad testing at scale, visual conjoints, online dial testing, and online focus groups that cut travel and time away from family. But causing harm to quality is not progress; destroying trust is not progress; spending all our technology resources fighting fraud that these platforms brought on in the first place is not progress; leaving humans out of human behavior analysis is not progress. The buyers of research technology should be dictating what we need, and the restech industry should be asking and listening. And if they’re not going to ask, we need to demand that they listen. That means showing up to conferences, getting up on stage, and countering restech mythology out loud in public. I know, it’s annoying to pay a fee to be pitched misguided technology. But the alternative is being left out of that decision process and steamrolled with bad progress. I prefer good progress; don’t you?
3. The tech side of our industry is gravely lacking in transparency – including internal transparency.
SampleCon was an eye opener for me not because I’m ignorant of the challenges of our industry, but precisely because I’m deeply entrenched in them and still didn’t realize how much hasn’t been communicated outside of technology and engineering circles. And we don’t just have a transparency and communication problem between research technology companies and their customers. Within research technology (including sample) companies, there’s a disturbing lack of communication across teams. And at SampleCon, people admitted this out loud.
I vividly remember when one of our qualitative recruitment partners moved recruitment online a few years ago. I remember it because they didn’t tell us. We figured it out, because the “grids” coming back to us looked different: they had survey data formats and the kind of pithy typos you see in online survey data but not in phone-transcribed data.
So we asked, “Did you change your recruitment process? Do we need to write our screeners for online now?” We didn’t much care either way; we just wanted to know what scenario we were dealing with, so that we could craft a relevant recruitment instrument.
Here’s the crazy part: Our project and account managers didn’t know the answer.
The even crazier part: They didn’t know how to get an answer.
But it gets even crazier: Years later, they still don’t know, because their own internal teams don’t tell them.
And the craziest part? We should have cared, because the switch to online recruitment destroyed recruitment quality. If you haven’t had someone smoking weed, driving, napping, taking an online class, or my favorite – playing TopGolf – in the middle of an online focus group, you haven’t experienced the absurdist dark side of online recruitment. (And yes, we have kicked people out or at least made them pull over on the side of the road, in such scenarios. Though to be fair, the TopGolf guy was a truly exceptional multitasker and a thoughtful, articulate participant; we let him stay.)
I don’t perceive that these problems persist because vendors are lying to us. I perceive they persist because our point people have absolutely no idea how their own sausage is made. The transparency isn’t simply lost between vendors and clients; it’s lost within restech and sample companies. Houston, we have a communication breakdown.
I have a question I periodically ask my husband when he says things that sound questionable while declaring them true with the utmost confidence. The question I ask is:
“Do you know that, or do you believe that?”
In our industry, we believe we understand what’s going on. But many of us don’t actually know. And that’s exactly the issue. Before this year’s SampleCon, I believed something was up in our industry that felt disingenuous and fundamentally problematic. And there were some things I did know: I knew quality metrics were falling. I knew that a survey dashboard and its SPSS file didn’t match in the number of completes. I knew online recruits were increasingly garbage and tedious to fix. I knew emerging platforms weren’t delivering incremental insights, just different and often more complicated and expensive ways of getting the same answers.
What I didn’t know was why. I didn’t know about the lack of internal transparency, about the world of ghost completes and end link encryption that create disparities in data, about the glaring bluntness with which PE and VC-backed companies will say the inside voice out loud, “I have to report to my investors.” Not customers, investors.
I believed that attending SampleCon would help me surface the root causes of the challenges I noted at the top of this blog. I now know I was right.
I’m not the sort of optimist who sees the glass as half full. Rather, I’m the sort of optimist who might see it as empty and bone dry, but who takes that glass – along with a bottle and a bucket – and marches off to find a water fountain, or maybe a hose. Our glass is teetering on empty right now, and I’m not here to shed false optimism on that. Instead, I’m sharing this experience because I believe we can fill our cups. I believe we can fix our quality and creative shortcomings. I hope others will join me in turning that “believe” into a “know.”
RESOURCE LIST FOR DATA QUALITY INITIATIVES:
LEARN MORE ABOUT STEAM: