Your competitor is showing up in LLMs
Are you?
Hi team,
A year ago, LLM sounded like an airport code. Now your mom is asking ChatGPT how to decorate her living room, and your dad is Claudemaxxing to plan vacations.
Commerce moved first.
According to Adobe, 55% of consumers now use AI for product research. That is no longer fringe behavior; it is mainstream discovery.
At the same time, according to Yotpo data, traffic to brand websites is falling across segments. Brands doing over $100 million are down 10-25%. Mid-market brands are down 35-40%. Smaller brands are seeing declines as steep as 70%.
And yet purchases attributed to AI-powered search have increased 11x this year. Orders coming through that channel carry 30% higher AOV compared to other sources.
TL;DR: Fewer people are visiting your site. The ones who do are higher intent and spend more, and after all the legwork they did in LLMs, their decision is largely shaped before they ever land on your site.
Which means:
If you are not trusted by the machine, you never make the shortlist.
And if you never make the shortlist, you never get the highest intent customers.
Today I want to break down how LLMs actually decide what to recommend, why most brands are underbuilding the trust layer that feeds those decisions, and what you can do this quarter to fix it.
Let’s dive in, shall we?
The 71% Statistic You Can’t Ignore
Whitelisting, Spark Ads, Partnership Ads—whatever you call them, one thing is certain: creator-led ads are crushing it heading into 2026, and minisocial is our go-to partner.
Meta reported that 71% of consumers say they make a purchase within a couple of days of seeing creator content. Partnering with minisocial is the best way to work with high-quality creators and streamline your UGC production.
Brands including Topicals, Ayoh!, MadeGood, and Rocksbox already trust minisocial for creator-made content that performs!
Here’s what brands working with minisocial say:
“Ads made with minisocial content are performing better than anything we’ve made in-house.” - Michael, Growth Marketing Manager
“minisocial’s cost per creator is cheaper than our in-house program.” - Andrea, Digital Campaigns Manager
“The price for what you’re getting is just insane... minisocial handles every step of the influencer management process.” - Kaylee, Director of Marketing
Ready to See Results?
Work with the team behind the best Whitelisting assets:
Before we dig into how this actually works, two things worth taking a peek at:
First, Yotpo published research on what shoppers are actually asking LLMs: 53 million shopping prompts every single day. If you want to see what people are searching and who shows up when they do, it’s worth reviewing.
Second, if you want to know where you actually rank on LLMs right now and quick fixes you can deploy, there’s a free tool that shows you.
Your customers are choosing before they ever visit your site
For years we optimized what happens after the click. Stronger PDPs, faster load times, tighter checkout, better copy. That still matters.
But more of the decision now happens before the click.
When someone asks ChatGPT, “What is the best prenatal supplement for conception?” a shortlist gets assembled before your site is even in play. If your brand does not appear in that answer, your beautiful PDP might as well not exist for that buyer.
The frustrating part is that these customers convert better and spend more when they do show up, because they have already done the research.
If you are invisible at the LLM layer, you are missing your highest-intent customers.
How LLMs actually evaluate your brand
LLMs do not read your page like a human, and they do not react to your hero banner. They evaluate signals drawn from across the web.
That signal usually comes from three areas: user sentiment, independent validation, and structured product data.
User sentiment reflects what real customers say in reviews, Reddit threads, Q&A sections, and community forums: the specifics, the complaints, and the details that show how a product performs in real life. Independent validation includes media coverage, expert commentary, and third-party reviews that exist outside your owned channels.
Structured product data means your information is clear and consistent enough that a machine can match you to a query without guessing: ingredient breakdowns, certifications, pricing tiers, compatibility notes, and clearly defined use cases, all presented in a structured way.
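For illustration, "structured" in practice often means schema.org Product markup embedded in your PDP as JSON-LD, which crawlers and models can parse without guessing. The brand, product, price, and rating values below are invented placeholders, not a real listing:

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Acme Prenatal Multivitamin",
  "description": "Prenatal supplement formulated for conception support.",
  "brand": { "@type": "Brand", "name": "Acme" },
  "offers": {
    "@type": "Offer",
    "price": "29.99",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock"
  },
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.7",
    "reviewCount": "1843"
  }
}
```

The point is not this exact snippet; it is that attributes like price, availability, and ratings live in machine-readable fields instead of being buried in marketing copy.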
When those layers are strong and aligned, you are easier to recommend. When they are thin or inconsistent across sources, you gradually fall out of answers, and most brands do not notice until the damage is done.
The funnel did not disappear. It moved earlier.
At the top, the model builds a shortlist from broad awareness questions like “What are the best running shoes for flat feet?” It relies on review depth, clear use-case language, and category clarity. If your reviews never mention overpronation, you are giving the model little reason to connect you to that query.
In the consideration phase, buyers narrow down with prompts like “Brand X vs Brand Y.” The model gathers ingredients, pricing tiers, policy details, recurring themes from reviews, and how third parties describe each option, then forms a summary based on the weight of that evidence. If a competitor has clearer attributes and stronger off-site coverage, the comparison will tend to lean in their direction.
At the decision phase, buyers want reassurance. Prompts like “Is Brand X worth it?” or “Do people like their customer service?” draw heavily from recent reviews, shipping feedback, and real-world outcomes. If that signal is generic or outdated, confidence erodes and recommendations shift.
Most brands assume comparison and final decision happen on their PDP. Increasingly, much of that work happens before the first visit.
A decent chunk of trust signal lives somewhere you don’t (easily) control
A significant portion of what shapes whether an LLM recommends you comes from outside your domain, including forums, Reddit, independent review platforms, and media coverage. The trust calculation is happening whether you are actively managing those surfaces or not.
Authentic, specific, and recent reviews used to feel like retention hygiene. In an AI-mediated discovery world, they function more like infra. When review banks become generic or overly polished, the signal weakens because the model has less real-world evidence to rely on.
The audit is simple. Use the same tools your customers are using. Ask ChatGPT or Claude the questions buyers ask at each stage. See who appears and which sources are cited. Then look inward. Are your reviews recent and detailed? Do customers describe real use cases and outcomes? Is your product data structured clearly and consistently? Are independent sources discussing your brand?
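The audit above can be turned into a repeatable prompt list you run monthly, one set per funnel stage. A minimal sketch, where the brand name "Acme" and the category are placeholders you would swap for your own:

```python
# Hypothetical LLM-visibility audit: builds the stage-by-stage prompts
# described above. Paste each into ChatGPT or Claude and record which
# brands appear and which sources get cited.

BRAND = "Acme"                          # placeholder brand
CATEGORY = "running shoes for flat feet"  # placeholder category

audit_prompts = {
    "awareness": [
        f"What are the best {CATEGORY}?",
    ],
    "consideration": [
        f"{BRAND} vs its top competitors for {CATEGORY}",
        f"How does {BRAND} compare on price and ingredients?",
    ],
    "decision": [
        f"Is {BRAND} worth it?",
        f"Do people like {BRAND}'s customer service?",
    ],
}

for stage, prompts in audit_prompts.items():
    for prompt in prompts:
        print(f"[{stage}] {prompt}")
```

Running the same list on a schedule gives you a crude but honest trendline: if you drop out of the awareness answers, you will feel it in the consideration and decision stages a few weeks later.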
If those answers are uncomfortable, adjusting a flow will not fix it.
Your trust layer needs attention, and right now that layer determines whether you are even in the conversation.
If you want deep dives on topics like these, plus actionable ways to show up in LLMs, check out the newsletter from Tomer, Yotpo’s CEO, here.
That’s it for this week!
Any topics you’d like to see me cover in the future?
Just shoot me a DM or an email!
Cheers,
Eli 💛
P.S. If you want to figure out how to get your brand to rank high in LLMs and show up in ChatGPT, Gemini, and more… check this out.