
The Candor Paradox: What AI-Moderated Research Reveals About GLP-1 and Honesty
Published on: Dec 1, 2025

We recently ran an AI-moderated study with people using GLP-1 weight-loss medication.
It was designed to explore the emotional and social context around adopting the medication. What it revealed was something more fundamental: how and when people tell the truth.
Participants spoke to Tellet's AI moderator with a level of openness that is still rare in traditional qualitative research. Not rehearsed answers. Not socially polished narratives. Real admissions, including things they actively hide from partners, friends, and colleagues.
People did not describe their decision to use GLP-1 in neat, rational terms. They talked about shame, loss of control, fear, and moments where self-image tipped into self-rejection.
One participant put it bluntly:
“I hated the way I looked. I couldn’t illusion myself anymore without something changing.”
These are not responses shaped for a human interviewer. They are emotionally unguarded. Sometimes messy. Often contradictory. Exactly the kinds of inputs that explain behaviour rather than decorate it.
A recurring theme was secrecy. Many participants deliberately hide their GLP-1 use. Some frame their weight loss as discipline or lifestyle change. Others say nothing at all.
Why? Because GLP-1 is still perceived by many as “cheating”. As weakness. As the lazy option.
What is striking is not that this stigma exists. It is that participants were explicit about the strategies they use to manage it. They openly described lying, omitting details, and constructing alternative stories for the people closest to them.
They were honest about their dishonesty.
Tellet’s AI moderator does not judge. It does not interrupt. It does not react socially. There is no fear of being evaluated as a person rather than understood as a respondent.
That absence matters.
Participants could talk about misconceptions, ambivalence, self-image, and unresolved discomfort without managing someone else’s reactions in real time. The usual social friction simply was not there.
Some described meaningful improvements in confidence and wellbeing after starting GLP-1. Others reported that weight loss did not resolve deeper issues with identity or self-worth. Both truths coexisted. No one tried to resolve the contradiction for the interviewer’s benefit.
People do not lie randomly. They tailor the truth to the listener.
Human interviewers, no matter how skilled, bring social context into the room. That context shapes what gets said and what gets withheld. AI removes a layer of that pressure.
This does not make AI moderation “better” in all cases. It makes it particularly powerful when the topic involves shame, stigma, health, or moral judgement.
In those contexts, AI can create a form of psychological safety that is difficult for humans to replicate consistently.
If you care about authenticity, this matters.
AI-moderated research is not just faster or more scalable. In certain domains, it may be more valid. Not because people like machines more than humans, but because machines do not trigger the same self-protective behaviours.
When participants trust the listener enough to admit what they hide from family and friends, you are no longer just collecting opinions. You are observing how people actually navigate their lives.
That has implications far beyond GLP-1.
To everyone who participated, thank you. Your candor contributes to a more realistic and more compassionate understanding of decisions millions of people are making.

Tellet uses AI to conduct and analyse consumer research interviews for faster, deeper and more affordable insights.
Want a free trial? Book a demo with us.