Technology Should Bridge Gaps, Not Widen Them

Language barriers can become missed opportunities, and AI can be the solution.

AI using voice technology to translate different languages.

Published on: Jul 9, 2025

The fact is that language barriers affect far more than casual conversation. They impact how we understand people, make decisions, and build solutions. In research, those barriers can mean the difference between relevant insights and flawed assumptions. Missed nuance becomes missed opportunity.

As research becomes more global, Language AI, defined by the DeepL Language AI Report as "designed for precise, context-aware communication", is emerging as a viable path to scale without sacrificing depth. Demand reflects this: among executives planning to integrate AI into their operations in 2025, 25% are focused on using it for specialised tasks such as translation.

But here’s the problem: not all languages are treated equally by today’s AI.

A 2025 Stanford study shows that large language models, like ChatGPT and Gemini, perform significantly better in English than in languages with fewer digitised resources, such as Vietnamese, Swahili, or Nahuatl. That gap isn't just technical; it's systemic.

And if left unaddressed, it risks excluding entire communities from the very technologies meant to include them.

The Risks of Linguistic Inequality in AI Research

When AI doesn’t support a language well, research becomes skewed, limited, or outright inaccessible. The implications are real:

  • Missed voices create blind spots where the stakes are high.

  • Access to AI-powered services, insights, and tools that are supposed to improve lives becomes inequitable.

  • Bias and misinformation become more likely, because poor language support increases the risk of inaccurate responses or distorted translations.

It’s easy to think of this as a “future problem”, but it isn’t. It’s happening now, and it’s shaping whose stories are told and whose are ignored.

Conversation Is the Way In

Research is, above all, about listening. But too often, the tech isn’t built to hear everyone. Translation tools flatten meaning. Models trained primarily in English often struggle to grasp underrepresented languages. The result? A dangerous gap, not just in comprehension, but in representation.

This isn’t a volume issue. It’s about who the system was built for. And without thoughtful, inclusive design, the voices of less digitised languages get pushed to the margins.

That’s where conversational AI comes in: not just to process words but to understand context, tone, and cultural nuance. The kind of understanding that leads to true insight.

If we want research that reflects the full picture, we need tools that listen like people. Anything less risks missing the point entirely.

At Tellet, Inclusion Isn’t an Afterthought

We believe people should be able to share their stories in the language they speak, not the one AI happens to prefer.

That’s why our platform supports 57 languages, with more to come. You can review responses in their original language, in English, or in any other language. If it’s a voice or video response, you can always return to the source to hear exactly how it was said.

We also adapt Tellet’s AI to understand your discussion and probing guides, and to execute your study in line with relevant cultural norms. That means your research isn’t just translated, it’s interpreted in context, to uncover the insight you’re actually looking for.

We’re not just checking boxes. We’re committed to capturing stories with accuracy, empathy, and cultural context. Because research that doesn’t include everyone isn’t research, it’s a partial view.

That’s why we designed a tool that:

  • Prioritises multilingual, voice-first research, enabling authentic conversations across regions and cultures.

  • Scales without flattening nuance, because depth shouldn’t be sacrificed for reach.

The Way Forward

The Stanford study is a wake-up call: if we want AI to serve everyone, we need to build it with everyone in mind. This means investing in underrepresented languages, engaging in participatory data collection, and designing tools that reflect the full range of human experiences.

At Tellet, we’re not waiting for that to happen. We’re building toward it because technology should make it easier to hear more voices, not filter them out.

If you’re ready to listen more broadly, dig more deeply, and do research that reflects the real world, we’re here to help.

Photo of Greg Burke

I’m Greg and I’m the co-founder of a new kind of research platform called Tellet. We use AI to conduct and analyse consumer research interviews for faster, deeper and more affordable insights.

Want a free trial? Book a demo with us, or drop me an email – greg@tellet.ai.

Keep up to date with Tellet

Sign up for our communications if you fancy a regular dose of research inspo in your inbox.
