AI-driven discovery has moved fast from novelty to habit, changing how people make decisions online. LLMs are already making a play against traditional search, and for many users they have become the first point of discovery.
That’s powerful, but it also puts trust firmly under the spotlight. When answers feel authoritative, the invisible forces shaping them suddenly matter a lot more. For brands and users alike, an understandable question follows: can we trust what LLMs show us, and how fragile is that trust as monetisation enters the picture?
At BrightonSEO, I explored what OpenAI’s early advertising tests tell us about intent, trust and what brands should be focusing on next.
The set-up
In late 2025, OpenAI began testing advertising within ChatGPT in North America. It’s worth being clear about how these ads work: they don’t appear within the AI’s answer itself; they surface alongside responses, explicitly labelled and clearly separate from the generated content.
Despite massive user growth, advertiser feedback has been... challenging, to say the least. Some brands struggled to spend allocated budgets – one advertiser committed $250k per quarter but managed to spend just 3% of it. On top of that, beta-stage limitations like reporting glitches and blocked data access made measurement difficult.
But these issues are surface‑level. The bigger challenge sits in intent and trust.
The reality: attention is not intent
Data from Adthena, an IDHL partner, shows that Google ads typically see click-through rates of around 6%. Early testing within ChatGPT puts the figure closer to 0.9% – more than six times lower.
This doesn’t mean LLMs lack value. Audiences are clearly using them for discovery and exploration, and they’re becoming important waypoints in the search journey. But high‑intent behaviour still overwhelmingly lives elsewhere, particularly in Google and social search.
A strong illustration comes from Walmart’s agentic commerce test in 2025. When Walmart enabled AI agents to guide and complete purchases within OpenAI, conversion rates were three times lower than on its own website. That reflects two things: the advantage brands still have when controlling and optimising their own environments through CRO testing, and—more importantly—how much trust audiences currently place in AI systems acting on their behalf.
The data tells us we’re not quite there yet. Attention doesn’t equal intent. Just because LLMs have huge user bases doesn’t mean ad revenue will naturally follow. Ads are surfacing, but demand, usability and trust all need to catch up.
Reframing the trust question
To understand where this goes next, we need to start looking at LLMs differently. They are not search engines. Search engines surface links so you can go and consume media elsewhere. An LLM is the medium. It’s a form of communication in its own right.
History gives us useful context here. From the first newspaper ads in 1600s Holland to the original digital ad in 1994, user trust has endured when two conditions are met: ads are explicit, and they’re relevant.
This is why the introduction of ads into LLMs doesn’t automatically erode trust. Users are familiar with the value exchange of advertising; trust is damaged when intent is unclear.
Influencer marketing teaches us a good lesson here: trusted by only around 25% of consumers (the lowest of any ad format), it leaves many users struggling to tell whether they’re hearing a genuine opinion or a paid endorsement. That lack of transparency quickly undermines credibility.

What users don’t realise about ‘organic’ LLM answers
Advertising is already playing a major role in so-called organic LLM answers.
At IDHL, we’ve analysed citation sources across our client base to understand what actually drives brand mentions in LLM outputs. The results were surprising: 45% of citations came from editorial sources, but 55% were driven by advertising – primarily affiliate and advertorial content.
In simple terms, getting your brand mentioned positively in LLMs is already pay to play.
This isn’t inherently a problem. The real risk to user trust sits in how platforms evolve from here. If citation algorithms become too easy to influence through low‑quality paid content, answers will deteriorate, users will lose confidence, and usage will fall. Equally, if ads begin blending directly into answers rather than sitting clearly alongside them, trust will erode, just as we’ve seen previously with search ads.
The answer: stop optimising for LLMs in isolation
The biggest mistake brands can make is viewing this purely through an LLM visibility lens.
Trust in AI‑driven discovery is built through integrated earned, owned and paid media strategies, not platform‑specific hacks. LLMs learn from the same ecosystem we already operate in: traditional search, social platforms, ecommerce marketplaces and video.
Awareness, engagement, conversion and advocacy signals across these channels are exactly what LLMs use to form answers, sentiment and recommendations.
Which leads to the real takeaway from Brighton: this isn’t about learning how to ‘optimise for AI’. It’s about asking: how do we build trust, visibility and positive sentiment with our audiences, wherever they are?
Do that well, and LLM visibility becomes a natural by‑product.
Integrate your approach with IDHL
LLMs are already influencing how brands are discovered; now is the time to act. Get in touch with our experts to learn how an integrated earned, owned and paid approach can build trust and visibility across AI-driven platforms.