Insight | 8 min read
Observations on the state of AI tooling
Wed Nov 12 2025 | Jonathan Healey

AI and browsers are merging
Recently OpenAI announced the launch of Atlas, a brand new browser with ChatGPT built into it. The launch generated significant interest across the sector, as you might expect given that OpenAI have the largest share of consumer AI use.
OpenAI are not the first to launch a browser with AI built in. Perplexity launched their Comet browser in July, promising an intelligent tool that can complete actions on your behalf. With much less coverage, Microsoft announced Copilot Mode in Edge (also in July, with significant updates announced in October – possibly an attempt to compete with the Atlas announcement).
LLM capability is hard to monetise directly
Despite sounding very similar, there are some key differences between each of these offerings. Perhaps more interesting than the strengths and weaknesses of each is what the launch of these products signals about the direction of travel for AI tooling.
The frontier models are maturing, and further improvement is costly and time consuming relative to the immediate benefit it brings. There is now a cluster of models that perform at a similar level to GPT-5. The competitive edge is shifting beyond pure model capability.
For AI companies looking to monetise their base LLM capabilities, commoditisation of the market is an unwelcome challenge. Developing value-add capabilities is no longer a luxury but a necessity. We’ve seen an example of how OpenAI are reacting with the checkout capabilities recently announced in ChatGPT. ChatGPT’s commerce features remain early stage and unproven, but they show movement toward a profitable business model and innovation beyond pure generative capability.
Are LLMs powerful engines still looking for a home?
AI vendors are still learning and adapting to how users actually want to use AI tools. The extent to which users will continue to return to a specialist tool once the initial excitement wears off is not yet well understood. If you can’t bring the users to your platform, then bring your platform to the users.
A great car needs an excellent engine, but very few drivers buy a car solely for what’s under the hood – and even fewer buy an engine in isolation. LLMs offer exceptional processing capabilities, but on their own lack general usefulness. Once you place an engine in front of a comfortable seat and wrap it in a beautiful chassis, the opportunity to excite both engineers and the general public is limitless.
So launching new browsers shouldn’t be that surprising a development. Given how dominant Chrome is in the marketplace it’s an interesting gamble, but then again Internet Explorer and Netscape Navigator looked unassailable until something more compelling came along. Perplexity’s $35b offer to Alphabet signals a strong belief in browser-led AI.
User interfaces are evolving at pace to meet user need
As users become familiar with using LLM-based tools day to day, they start to realise how important context is for every prompt sent. Single-thread chats have limited utility; users want to pool information to create wider context, especially in business use cases, where the greatest revenue market for LLM providers is likely to be.
We see a lot of users at the moment discovering “Projects” (ChatGPT and Claude), “NotebookLM” (Gemini) and “Notebooks” (Copilot). Creating these micro RAG (Retrieval Augmented Generation) systems allows LLMs to be deployed against specific clients or projects. This approach is more scalable and user-centric than building custom GPTs or agents. Right now these solutions rely on referencing or uploading associated materials to the AI platform, which introduces inefficiencies for organisations that have already invested in structured storage solutions with considered taxonomies.
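To illustrate the pattern these tools implement, here is a minimal, self-contained sketch of retrieval-augmented prompting. The bag-of-words similarity, the stopword list and the sample documents are all illustrative assumptions – real products use dense vector embeddings and proper document stores – but the retrieve-then-augment flow is the same.

```python
import math
import re
from collections import Counter

# Words too common to be useful for matching (illustrative, not exhaustive).
STOPWORDS = {"the", "is", "a", "for", "in", "of", "with", "was", "by", "what"}

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words term-frequency vector.
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    return Counter(t for t in tokens if t not in STOPWORDS)

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def build_prompt(question: str, documents: list[str], top_k: int = 2) -> str:
    # Retrieve: rank the project's documents by similarity to the question.
    q = embed(question)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    # Augment: prepend the best matches as context for the LLM call.
    context = "\n".join(ranked[:top_k])
    return f"Context:\n{context}\n\nQuestion: {question}"

docs = [
    "Project Alpha launches in Q3 with a budget of 200k.",
    "The office fire drill is scheduled for Friday.",
    "Alpha's budget was approved by the steering group.",
]
print(build_prompt("What is the budget for Project Alpha?", docs))
```

In a Projects or Notebooks style tool, the uploaded files play the role of `docs`, and the assembled prompt is what actually reaches the model – which is one reason the quality and organisation of the source material matters so much.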
All of this adds up to user interfaces that are not yet optimised for the most powerful and useful use cases.
Separating UI from LLM presents powerful opportunities
Microsoft’s Copilot is an interesting approach to this challenge. Under the hood it leverages the same GPT-5 LLM developed by OpenAI that ChatGPT uses. In effect it is an alternative user interface to ChatGPT – except it’s a user interface routed through a layer of enterprise security (eliminating significant risk and uncertainty inherent in some AI implementations) and designed to be embedded within the tools that enterprise knowledge workers use every day (Outlook, Word, Excel, PowerPoint, OneNote, Teams etc). The current integration presents both opportunities and areas for refinement, but you can see what the thinking is, and the pace at which features are being deployed is impressive.
Further evidence of the separation between LLM and user interface can be seen in the recently announced Office Agent from Microsoft. It is currently unavailable in the UK, but what it promises looks impressive. Rather than using an OpenAI LLM, it makes use of models built by Anthropic. One user interface, multiple LLMs. This emerging trend suggests a shift toward flexible LLM integration models.
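The “one user interface, multiple LLMs” idea can be sketched as a thin abstraction layer in code. The names below (`LLMClient`, `complete`, the per-task routing) are hypothetical illustrations, not any vendor’s actual API; the point is simply that a UI written against a minimal contract can swap the model behind each task.

```python
from typing import Protocol

class LLMClient(Protocol):
    """The minimal contract a UI layer needs from any model provider."""
    def complete(self, prompt: str) -> str: ...

class OpenAIBackend:
    def complete(self, prompt: str) -> str:
        return f"[gpt] {prompt}"  # stand-in for a real API call

class AnthropicBackend:
    def complete(self, prompt: str) -> str:
        return f"[claude] {prompt}"  # stand-in for a real API call

class AssistantUI:
    """One user interface; the model behind it is swappable per task."""
    def __init__(self, backends: dict[str, LLMClient]):
        self.backends = backends

    def ask(self, prompt: str, task: str = "chat") -> str:
        # Route each task type to whichever provider suits it best.
        backend = self.backends.get(task, self.backends["chat"])
        return backend.complete(prompt)

ui = AssistantUI({"chat": OpenAIBackend(), "documents": AnthropicBackend()})
print(ui.ask("Summarise this deck", task="documents"))  # → [claude] Summarise this deck
```

Because the UI depends only on the contract, adding a third provider – or a domain-specific model – is a one-line change to the routing table rather than a rebuild of the interface.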
Business and consumer applications seem likely to diverge
From a business user perspective it would seem likely that the creators of the software users interact with every day are best positioned to take advantage in the short term. That means Microsoft and Google. Google’s advantage right now is that they’ve been building their own LLM capability for a comparatively long time. Gemini benchmarks as one of the best engines available and is becoming seamlessly integrated into the Google product set.
On the other hand, Microsoft’s advantage may lie in an LLM agnostic approach. At present they have access to the dominant LLM of the moment through their relationship with OpenAI, but if relationships with Anthropic and others continue to develop things could get really interesting. Organisations could be presented with the capability to incorporate custom LLM capabilities into the Copilot toolset and that could allow corporate customers to differentiate using AI. For example consider a law firm deploying a specific England and Wales law LLM directly into the existing integrated toolset rather than building separate agents on top of someone else’s pretrained generic LLM.
From a consumer perspective it’s likely a different picture. ChatGPT still dominates this space, and at the moment nothing seems likely to challenge that dominance. It would seem likely that OpenAI will monetise the offering by licensing its capabilities to third parties (e.g. Microsoft), selling advertising and leveraging affiliate arrangements such as the one they’ve announced with Stripe.
Changes in AI are not yet done
It's clear that the battleground for AI vendors is shifting from who can build the most powerful engine to who can build the user interface most useful to users. This shift opens new opportunities and signals continued momentum in AI innovation.
Navigating the next chapter of AI with IDHL
As AI tooling and interfaces evolve, the real opportunity is connecting technology with real user needs. Get in touch with our experts to turn insights into action for your business.


