A customer opens ChatGPT and asks: "What is the minimum deposit for a Halifax first-time buyer mortgage?" The AI answers confidently. The figure it gives is outdated. The customer plans their finances around it, and the discrepancy only surfaces at the application stage.
This scenario is not hypothetical. AI systems are actively generating answers to financial product questions using information they have indexed, synthesised, and summarised — without real-time access to your current documentation, terms, or rates. And unlike a Google search result, the AI presents its answer as a direct statement, not a link to verify.
The compliance exposure most firms haven't considered
For FCA-regulated firms, the regulatory framework around financial promotions is well established. A communication about a financial product that is unclear, unfair, or misleading is a compliance matter — regardless of who produced it.
The harder question is whether an AI-generated description of your product, which you did not produce and cannot fully control, constitutes a financial promotion for regulatory purposes. Legal opinion on this is evolving. But the spirit of Consumer Duty — requiring firms to ensure customers receive accurate information and are not harmed by it — provides a clearer direction of travel.
A firm that knows AI systems are misrepresenting its products, and takes no action, is in a materially different position to a firm that actively monitors and corrects those representations.
The distinction matters. Regulators have historically taken a dim view of systemic gaps that a firm should reasonably have identified and addressed. AI-generated misrepresentation is increasingly within that territory.
What AI systems are actually saying about financial products
Our analysis of major AI platforms — including ChatGPT, Google's AI Overviews, Perplexity, and Microsoft Copilot — reveals several consistent patterns in how financial products are described.
First, AI systems frequently use outdated information. Rates, thresholds, and eligibility criteria change regularly, but AI training data does not update in real time. A savings rate from 18 months ago may still be cited as current.
Second, AI systems often conflate products across providers. A general description of "how ISAs work" may blend information from multiple sources, creating a composite that accurately describes no specific product.
Third, nuance is frequently lost. Eligibility conditions, exceptions, and product-specific terms are often omitted in favour of a cleaner, simpler answer. For financial products, these are precisely the details that matter most.
What responsible firms should do now
The honest answer is that full control over what AI systems say about your products is not yet achievable. But the gap between doing nothing and taking reasonable steps is significant — and it is likely to become more significant as regulatory expectations develop.
Three things are within reach for most regulated firms today.
Understand your current AI-generated presence. Systematically review what major AI platforms are saying about your key products. This is not a one-time exercise — it requires ongoing monitoring as AI systems update.
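In practice, this kind of monitoring can be partly automated. The sketch below, in Python, compares the figures an AI platform states against an approved reference answer and flags any divergence for review. The `query_platform` function, the `REFERENCE` entries, and all figures are illustrative placeholders — a real implementation would call each platform's API (or log manual captures) and maintain the reference answers against current product terms.

```python
import re

# Illustrative approved answer, maintained against current product terms.
REFERENCE = {
    "ftb-min-deposit": "The minimum deposit for this first-time buyer mortgage is 5%.",
}

def query_platform(platform: str, question: str) -> str:
    """Placeholder: in practice, call each platform's API or record a manual capture."""
    return "The minimum deposit for a first-time buyer mortgage is 10%."

def extract_figures(text: str) -> set:
    """Pull out the percentage figures a customer would act on."""
    return set(re.findall(r"\d+(?:\.\d+)?%", text))

def check_representation(platform: str, question: str, product_key: str) -> dict:
    """Flag any answer whose stated figures differ from the approved reference."""
    answer = query_platform(platform, question)
    stated = extract_figures(answer)
    expected = extract_figures(REFERENCE[product_key])
    return {
        "platform": platform,
        "product": product_key,
        "stated": stated,
        "expected": expected,
        "flagged": stated != expected,  # route to compliance review if True
    }

result = check_representation(
    "chatgpt",
    "What is the minimum deposit for a first-time buyer mortgage?",
    "ftb-min-deposit",
)
```

Comparing extracted figures rather than whole answers matters here: a rate change from 5% to 10% barely alters the surrounding text, so fuzzy text similarity would miss exactly the discrepancies that cause customer harm.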
Improve the quality of the information AI systems are reading. AI systems synthesise information from the sources they can access. Firms that provide clear, structured, schema-marked content give AI systems better material to work from. This does not guarantee accuracy, but it improves the probability of accurate representation.
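One concrete form this takes is schema.org structured data embedded in product pages. The sketch below builds a minimal JSON-LD block using the schema.org `FinancialProduct` type; the product name, provider, rate, and URL are illustrative values, not real product terms.

```python
import json

# Minimal schema.org structured data for a product page, so crawlers can read
# rates and terms from markup rather than inferring them from surrounding prose.
# All values below are illustrative.
product_jsonld = {
    "@context": "https://schema.org",
    "@type": "FinancialProduct",
    "name": "First-Time Buyer Fixed Rate Mortgage",
    "provider": {"@type": "BankOrCreditUnion", "name": "Example Bank"},
    "annualPercentageRate": {
        "@type": "QuantitativeValue",
        "value": 5.1,
        "unitText": "percent",
    },
    "url": "https://www.example.com/mortgages/first-time-buyer",
}

# Serialised for embedding in a <script type="application/ld+json"> tag.
markup = json.dumps(product_jsonld, indent=2)
```

The point is not that markup guarantees faithful AI output — it does not — but that a machine-readable statement of the current rate gives synthesis systems something less ambiguous than marketing copy to work from.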
Put an escalation framework in place. When a material misrepresentation is identified, who is responsible for responding? What is the process? Having this documented is both good practice and defensible if the question is ever asked by a regulator.
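A documented framework can be as simple as a structured record that fixes ownership and response deadlines by severity. The sketch below assumes severity tiers, owners, and SLA periods defined by the firm's own governance; every name and figure is illustrative.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Illustrative severity tiers: response SLAs (days) and accountable owners.
SEVERITY_SLA_DAYS = {"critical": 1, "high": 5, "routine": 20}
SEVERITY_OWNER = {
    "critical": "Head of Compliance",
    "high": "Product Compliance Lead",
    "routine": "Digital Content Team",
}

@dataclass
class Misrepresentation:
    """One identified AI misrepresentation, routed by severity."""
    platform: str
    product: str
    description: str
    severity: str  # "critical" | "high" | "routine"
    identified: date = field(default_factory=date.today)

    @property
    def owner(self) -> str:
        return SEVERITY_OWNER[self.severity]

    @property
    def respond_by(self) -> date:
        return self.identified + timedelta(days=SEVERITY_SLA_DAYS[self.severity])

issue = Misrepresentation(
    platform="chatgpt",
    product="First-Time Buyer Mortgage",
    description="Quotes a withdrawn minimum-deposit figure as current",
    severity="high",
)
```

Encoding the routing rules in one place, rather than leaving them to judgement on the day, is what makes the process auditable if a regulator asks how a known misrepresentation was handled.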
Is your firm monitoring what AI systems are saying about your products?
Our AI Search Visibility & Compliance Diagnostic provides a structured assessment of your current AI-generated presence across major platforms — including where representations are inaccurate or non-compliant.