AI Search · Financial Services · Regulatory Compliance

Customers are asking AI which financial products to choose.
Your firm has no control over the answer.

AI systems are already describing your products, eligibility criteria, and rates to customers — without your input, without your oversight, and without FCA scrutiny. Most regulated firms do not know what is being said. Fewer still have a strategy to address it.

Experience inside
Halifax
Sainsbury's Bank
NatWest

AI systems are already describing your products to customers.
No one authorised this.

The information AI platforms present about your firm — its products, rates, eligibility criteria — was not provided by you, has not been approved by compliance, and is not monitored. For FCA-regulated firms, this is not a marketing problem. It is a governance gap.

01

AI platforms answer financial questions without reference to your current documentation: rates, criteria, and terms may be outdated or wrong

02

Inaccurate AI-generated descriptions of regulated products carry potential Consumer Duty and FCA compliance exposure

03

Most firms have no monitoring in place; no escalation process when a misrepresentation is found

04

AI search operates on fundamentally different logic to conventional search; existing agency relationships do not address it

05

Firms that establish control early will define how AI platforms represent their sector. That advantage compounds over time

Three distinct problems.
One structured response.

AI search exposure in financial services is not a single issue. It has three layers — each requiring a different discipline, each dependent on the one before it.

01
Presence

Foundation & Visibility

AI systems synthesise answers from the sources they can read and trust. Firms without structured, schema-marked, entity-optimised content are systematically excluded from AI-generated responses, regardless of their prominence in conventional search.

02
Accuracy & compliance

Representation & Compliance

Where firms do appear, AI platforms frequently synthesise inaccurate descriptions, drawing on outdated training data, conflating products across providers, or omitting material eligibility conditions. For regulated firms, this carries Consumer Duty and FCA exposure that most compliance functions have not yet assessed.

03
Ongoing control

Monitoring & Response

AI representations are not static. Model updates, newly indexed content, and competitor activity change how firms are described, often without notice. Without systematic monitoring and a defined escalation path, material misrepresentations persist undetected.

Strategic entry point

AI Search Visibility &
Compliance Diagnostic

A structured, independent assessment of your firm's current position in AI-generated search. Covers visibility gaps, representation accuracy, compliance exposure, and content architecture, delivered as a board-ready report with a clear action framework.

Assessment areas
AI visibility across major platforms
Representation accuracy review
FCA compliance exposure assessment
Content & entity architecture
Board-ready action framework

Senior stakeholders at FCA-regulated financial services firms.

This work operates at the intersection of marketing strategy, compliance, and digital infrastructure. Acting on it requires stakeholders from more than one discipline.

Marketing Directors
Responsible for where the firm appears and how it is represented across all search surfaces
Compliance Leads
Responsible for ensuring AI-generated content about your products meets FCA standards
Digital Directors
Responsible for the content and technical infrastructure AI systems read and trust
Executives & ExCo
Accountable for the strategic response to a search landscape undergoing structural change

Beyond the diagnostic.

Digital Marketing Audit Paid search, SEO, analytics, attribution, and agency accountability, independently assessed
AI Search Implementation Executing the roadmap from the diagnostic: content architecture, schema, knowledge graph
Monitoring Retainer Continuous monitoring across AI platforms with monthly reporting and escalation management
Board Briefings A structured briefing on AI search risk and opportunity for executive and board audiences
Governance · Monitoring

What FCA-regulated firms should be monitoring in AI-generated answers

Strategy · AI Search

Why traditional SEO does not solve AI search visibility


What AI platforms say about your firm
is already shaping customer decisions.
The question is whether you know.

Fixed fee. No retainer until findings are delivered.

What we do

AI search visibility and compliance for FCA-regulated firms.

AI search exposure in financial services operates across three distinct layers. Each requires a different discipline. Each is dependent on the one before it.

01
Presence

Foundation & Visibility

AI systems synthesise answers from sources they can read, index, and trust. Firms without structured, schema-marked, entity-optimised content are systematically absent from AI-generated responses, regardless of their size or reputation. Foundation work corrects this at the data and architecture level, ensuring the information AI platforms draw on is accurate, current, and properly attributed to your firm.

02
Accuracy & compliance

Representation & Compliance

Where firms do appear in AI-generated responses, the descriptions are frequently inaccurate, drawn from outdated training data, conflated with competitor products, or stripped of material eligibility conditions. For FCA-regulated firms, inaccurate AI-generated product descriptions carry Consumer Duty and financial promotions exposure that most compliance functions have not yet formally assessed.

03
Ongoing control

Monitoring & Response

AI representations shift as models are retrained, new content enters the training corpus, and competitors optimise their own presence. Without systematic monitoring across platforms and a defined internal escalation path, material misrepresentations accumulate undetected. This pillar establishes the operational infrastructure to identify, escalate, and resolve issues as they arise.

Strategic entry point

AI Search Visibility & Compliance Diagnostic

An independent, evidenced assessment of your firm's current AI search position, covering visibility gaps, representation accuracy, compliance exposure, and content architecture. Findings are delivered as a board-ready report with a prioritised action framework, drawn entirely from real AI platform data.

Fixed fee · Money-back guarantee if no material gaps are identified
Assessment areas
AI visibility across major platforms
Representation accuracy review
FCA compliance exposure assessment
Content & entity architecture
Board-ready action framework

Beyond the diagnostic.

Digital Marketing Audit Independent review of paid search, SEO, attribution, analytics infrastructure, and agency performance
AI Search Implementation Executing the roadmap from the diagnostic: content architecture, schema, and knowledge graph optimisation
Monitoring Retainer Continuous monitoring with monthly reporting and managed escalation. Available following the diagnostic.
Board & ExCo Briefings A structured briefing on AI search visibility and compliance risk for executive and board audiences

Ready to understand where you stand in AI search?

How it works

From first conversation to board-ready findings.

Structured. Transparent. Low-risk for you.

No discovery retainer. No long sales process. You see the evidence before committing to anything further. The diagnostic stands entirely on its own.

01 · Discovery A 30-minute conversation. We understand your current position and whether the diagnostic makes sense for your firm right now. No commitment required.
02 · Scope Fixed fee confirmed before work begins. NDA in place from day one. We agree exactly what we will assess and what access we need.
03 · Analysis 4–6 weeks of quantitative analysis across all major AI platforms. Visibility, representation accuracy, compliance exposure, and content architecture. Every finding evidenced.
04 · Delivery A board-ready report and in-person or remote presentation. Where you stand. Where the gaps are. What the compliance exposure is. What the roadmap looks like.
05 · Your decision Act internally, engage us to implement, or move to a monitoring retainer. The diagnostic stands alone. No obligation to continue beyond it.

Before you engage us.

Who is this appropriate for? FCA-regulated financial services firms — banks, building societies, insurers, fintechs, and consumer credit lenders — whose senior leadership recognises that AI search is changing how customers find financial products and wants to understand and act on their current position.
We already have an SEO agency. Is this different? Entirely different. Traditional SEO optimises for Google's ranking algorithm. AI search visibility requires entity optimisation, knowledge graph presence, and content architecture designed for AI synthesis. Most SEO agencies have not yet made this transition.
What is the compliance risk specifically? AI systems sometimes misrepresent financial products: incorrect eligibility criteria, inaccurate rates, outdated product descriptions. For FCA-regulated firms, this creates regulatory exposure that most compliance functions are not currently monitoring. We identify where this is occurring and what to do about it.
What data and access do you require? Access to your existing content infrastructure and analytics platforms. We also conduct independent research across AI platforms. All access is formalised under NDA before any work begins.

Start with a conversation. No commitment required.

Thirty minutes. No pitch. A direct conversation about whether this is appropriate for your firm.

About JOAN

A focused consultancy working at the intersection of AI search, financial services, and regulatory compliance.

A focused consultancy. Senior-led. Specialist.

JOAN works with a small number of FCA-regulated financial services firms at a time. Each engagement is led by a senior practitioner throughout — not sold at a senior level and delivered by a junior team.

The practice addresses a specific and urgent problem: regulated firms need to understand and control how AI platforms represent them — and most do not know where to begin. Both principals have worked inside regulated institutions, building the capabilities they now help others develop.

Edinburgh-based. UK-focused. FCA-environment fluent.

Independence

We have no products to sell and no platforms to promote. Our only commercial interest is giving you an accurate picture and a useful roadmap.

Evidence first

Every finding is drawn from real AI platform data. We do not offer opinions without data to support them.

Restraint

We take on a limited number of engagements. Senior involvement throughout, not front-loaded and delegated.

Two principals. Complementary disciplines. Both with direct, senior experience inside the financial services sector.

Anurag

AI Search · SEO Strategy · Financial Services

Led AI search transformation and built the strategic playbook at one of the UK's largest retail banks, not as an external adviser but as a senior internal practitioner accountable for the outcome. Brings direct knowledge of what AI search strategy looks like inside a major regulated institution, the compliance constraints that shape it, and what good implementation actually requires.

  • AI search strategy and implementation at scale
  • SEO centre of excellence, retail banking products
  • Knowledge graph and entity optimisation
  • FCA-regulated content architecture

Joseph

Data Analytics · Digital Marketing · HCI

Senior performance analyst with a PhD in Human-Computer Interaction and direct experience inside FCA-regulated financial services. Has built performance frameworks at government scale, restructured paid media accounts, and led agency consolidation programmes that generated significant verified savings. Brings analytical rigour, an HCI researcher's understanding of why customers behave as they do in digital environments, and the operational discipline to translate findings into actions.

  • PhD, Human-Computer Interaction
  • Performance frameworks, dashboards, measurement infrastructure
  • Paid media restructure and agency accountability
  • Digital analytics and attribution

Want to understand whether JOAN is the right fit?

A direct conversation. No pitch deck.

Insights

Thinking on AI search, compliance, and financial services.

Compliance · AI

How AI-generated search is creating compliance risk for financial services firms

Governance · Monitoring

What FCA-regulated firms should be monitoring in AI-generated answers

Strategy · AI Search

Why traditional SEO does not solve AI search visibility

Architecture

Entity-based content architecture: the technical foundation for AI search presence

Paid Search

Why your paid search cost-per-acquisition is systematically overstated, and what to do about it

Consumer Duty

Consumer Duty and AI-generated search: what regulated firms need to assess now

Compliance · AI Search

When an AI misrepresents your financial product — who is responsible?

AI platforms are already describing mortgages, savings accounts, and investment products to customers. When those descriptions are wrong, FCA-regulated firms face a compliance exposure most have not yet formally assessed.

A customer opens ChatGPT and asks: "What is the minimum deposit for a Halifax first-time buyer mortgage?" The AI answers confidently. The figure it gives is outdated. The customer plans their finances around it. Later, they are surprised at the application stage.

This scenario is not hypothetical. AI systems are actively generating answers to financial product questions using information they have indexed, synthesised, and summarised — without real-time access to your current documentation, terms, or rates. And unlike a Google search result, the AI presents its answer as a direct statement, not a link to verify.

The compliance exposure most firms haven't considered

For FCA-regulated firms, the regulatory framework around financial promotions is well established. A communication that is misleading, unfair, or not clear about a financial product is a compliance matter — regardless of who produced it.

The harder question is whether an AI-generated description of your product, which you did not produce and cannot fully control, constitutes a financial promotion for regulatory purposes. Legal opinion on this is evolving. But the spirit of Consumer Duty — requiring firms to ensure customers receive accurate information and are not harmed by it — provides a clearer direction of travel.

A firm that knows AI systems are misrepresenting its products, and takes no action, is in a materially different position to a firm that actively monitors and corrects those representations.

The distinction matters. Regulators have historically taken a dim view of systemic gaps that a firm should reasonably have identified and addressed. AI-generated misrepresentation is increasingly within that territory.

What AI systems are actually saying about financial products

Our analysis of major AI platforms — including ChatGPT, Google's AI Overview, Perplexity, and Microsoft Copilot — reveals several consistent patterns in how financial products are described.

First, AI systems frequently use outdated information. Rates, thresholds, and eligibility criteria change regularly, but AI training data does not update in real time. A savings rate from 18 months ago may still be cited as current.

Second, AI systems often conflate products across providers. A general description of "how ISAs work" may blend information from multiple sources, creating a composite that accurately describes no specific product.

Third, nuance is frequently lost. Eligibility conditions, exceptions, and product-specific terms are often omitted in favour of a cleaner, simpler answer. For financial products, these are precisely the details that matter most.

What responsible firms should do now

The honest answer is that full control over what AI systems say about your products is not yet achievable. But the gap between doing nothing and taking reasonable steps is significant — and it is likely to become more significant as regulatory expectations develop.

Three things are within reach for most regulated firms today.

Understand your current AI-generated presence. Systematically review what major AI platforms are saying about your key products. This is not a one-time exercise — it requires ongoing monitoring as AI systems update.

Improve the quality of the information AI systems are reading. AI systems synthesise information from the sources they can access. Firms that provide clear, structured, schema-marked content give AI systems better material to work from. This does not guarantee accuracy, but it improves the probability of accurate representation.

Put an escalation framework in place. When a material misrepresentation is identified, who is responsible for responding? What is the process? Having this documented is both good practice and defensible if the question is ever asked by a regulator.

Is your firm monitoring what AI systems are saying about your products?

Our AI Search Visibility & Compliance Diagnostic provides a structured assessment of your current AI-generated presence across major platforms — including where representations are inaccurate or non-compliant.

Get in touch

A direct conversation. No commitment required.

30 minutes. No pitch deck. Just honesty.

We will tell you quickly whether the AI Search Visibility & Compliance Diagnostic is appropriate for your firm. If it is not — we will tell you that too.

This conversation is for senior stakeholders at FCA-regulated UK financial services firms who want to understand their current AI search position and whether to act on it.

5 South Charlotte Street, Edinburgh, EH2 4AN
Serving FCA-regulated firms across the UK


Our entry point

AI Search Visibility &
Compliance Diagnostic

A structured, independent assessment of your firm's current position in AI-generated search.

The Diagnostic is not a consulting engagement. It is a defined, fixed-scope piece of analytical work that produces a precise, evidenced picture of where your firm stands — and what needs to change. Every finding is drawn from real AI platform data, not assumption or inference.

The output is a board-ready report, structured for compliance and executive audiences, with a clear action framework. No retainer is required before findings are delivered.

AI Visibility Assessment Systematic review of whether and how your firm appears across ChatGPT, Google AI Overview, Perplexity, and Microsoft Copilot
Representation Accuracy Review Analysis of what AI platforms are stating about your products, rates, and eligibility criteria, and where those statements are inaccurate or outdated
FCA Compliance Exposure Assessment of where AI-generated descriptions of your products may carry Consumer Duty or financial promotions exposure under current FCA frameworks
Content & Entity Architecture Review of the structural factors that determine how AI systems read and synthesise information about your firm: schema, knowledge graph presence, and content architecture
Board-Ready Action Framework A prioritised roadmap of interventions, structured for presentation to marketing leadership, compliance, and board-level stakeholders

The regulatory and commercial case for acting before the problem is visible.

AI platforms are already describing financial products to customers. Most FCA-regulated firms have not assessed what is being said, whether it is accurate, or whether it carries regulatory exposure.

Consumer Duty requires firms to ensure customers receive accurate, fair, and clear information. It does not make an exception for information generated by third-party AI systems. A firm that is aware of material AI misrepresentations and takes no action is in a materially different regulatory position to one that has assessed and addressed the issue.

The commercial case is equally direct. Firms that establish AI search presence and accuracy now will define how their products are represented as AI-generated search becomes the primary discovery channel. That advantage compounds. It does not wait.

Senior stakeholders at FCA-regulated firms where AI search is not yet formally owned.

Marketing Directors
Accountable for how the firm appears and is represented across all customer-facing search surfaces, including AI
Compliance Leads
Responsible for ensuring AI-generated product descriptions do not create FCA or Consumer Duty exposure
Digital Directors
Responsible for the content and data infrastructure AI systems draw on when generating answers about your firm
Executive & ExCo
Accountable for the strategic response to a search environment undergoing structural change at pace
Ready to proceed

Request an AI Search Diagnostic

Begin with a direct conversation. Scope is confirmed before any work starts. Fixed fee. NDA from day one. No retainer required before findings are delivered.

Fixed fee · Money-back guarantee if no material gaps are identified
What you receive
AI visibility assessment report
Representation accuracy findings
FCA compliance exposure note
Content architecture review
Prioritised action framework
Compliance · AI Search

How AI-generated search is creating compliance risk for financial services firms

AI platforms are actively describing financial products to customers — without input from the firms whose products they describe, without compliance review, and without FCA oversight. For regulated firms, this is not a marketing gap. It is a governance exposure.

A customer asks an AI platform: "What is the minimum deposit for a first-time buyer mortgage?" The platform answers — confidently, immediately, and in detail. The information it provides may be accurate. It may be outdated. It may conflate criteria across multiple lenders. The customer, receiving what appears to be a direct and authoritative answer, plans accordingly.

This is not a hypothetical scenario. It is the current operating reality across every major AI platform — ChatGPT, Google AI Overview, Perplexity, Microsoft Copilot. Customers are receiving AI-generated answers to financial product questions at scale. Most FCA-regulated firms have not assessed what those answers say.

The governance gap

The information AI platforms generate about financial products was not submitted by the firms whose products are being described. It was not reviewed by compliance. It was not approved as a financial promotion. It is drawn from training data — a synthesis of web content, product documentation, and third-party sources — that may be months or years out of date.

For most regulated products, this creates an immediate governance question: who is responsible for the accuracy of AI-generated descriptions of your products?

The information AI platforms generate about financial products was not submitted, reviewed, or approved by the firms whose products are being described.

The FCA's Consumer Duty framework requires firms to ensure customers receive information that is fair, clear, and not misleading — and to take reasonable steps to prevent harm. It does not carve out an exception for information generated by third-party AI systems. A firm that knows its products are being misrepresented in AI-generated search and takes no action is in a materially different regulatory position to one that has assessed the issue and implemented a response.

What misrepresentation looks like in practice

Analysis of AI-generated responses about regulated financial products reveals several consistent failure modes.

Outdated information. AI training data has a cutoff. Rate changes, updated eligibility criteria, and revised product terms introduced after that cutoff are not reflected. A savings rate from eighteen months ago may still be cited as current. A mortgage product that has been withdrawn may still appear as available.

Cross-provider conflation. AI platforms frequently synthesise information from multiple sources. A response about "fixed-rate mortgage eligibility" may blend criteria from several lenders into a description that accurately represents none of them.

Omission of material conditions. Eligibility thresholds, exclusion criteria, and product-specific conditions are frequently absent from AI-generated summaries. For regulated products, these are precisely the details that carry most compliance weight.

The escalation question

Most regulated firms do not yet have a defined owner for AI search representation. Marketing is often focused on visibility rather than accuracy. Compliance is often unaware that AI-generated descriptions exist, let alone that they may be inconsistent with approved product documentation. Legal has typically not been asked to assess the question.

The absence of ownership means there is also no escalation path. When a material misrepresentation is identified — an incorrect rate, an inaccurate eligibility criterion, a withdrawn product still described as available — there is no defined process for responding. This is the governance gap that most regulated firms currently have.

Assess your firm's current AI search position

The AI Search Visibility & Compliance Diagnostic provides a structured, evidenced assessment of how your firm is represented across major AI platforms — and where that representation creates compliance exposure.

Governance · Monitoring

What FCA-regulated firms should be monitoring in AI-generated answers

Monitoring AI-generated search is not yet a standard practice inside regulated financial services firms. It should be. The question is not whether AI platforms are describing your products — they are — but whether what they say is accurate, current, and compliant.

Most FCA-regulated firms have some form of media monitoring in place. Press coverage, social media, review platforms — these are tracked, logged, and in many cases reviewed by compliance before action is taken. AI-generated search is absent from almost every monitoring framework we have encountered.

This is a material gap. AI platforms are generating answers to financial product questions at a scale and frequency that dwarfs most other channels. A customer who types a mortgage question into ChatGPT or asks Google's AI Overview about ISA rates receives an immediate, confident, synthesised answer — drawn from sources the firm has no visibility over and did not approve.

What to monitor, and why

Product descriptions and feature claims. The most fundamental monitoring requirement is accuracy of product description — rates, terms, features, and eligibility criteria. These change regularly. AI training data does not update in real time. The gap between what is currently true and what AI platforms are saying can be significant, and it accumulates over time without intervention.

Eligibility criteria and exclusions. Eligibility conditions are among the highest-risk elements of AI-generated financial product descriptions. A response that omits a material exclusion criterion — income thresholds, residency requirements, credit history conditions — may lead a customer to apply for a product they do not qualify for, or to make financial decisions based on incorrect assumptions about their eligibility.

The gap between what is currently true and what AI platforms are saying can be significant — and it accumulates over time without active monitoring.

Comparative positioning. AI platforms frequently generate responses that implicitly or explicitly compare providers. How your firm's products are positioned relative to competitors — on rate, on features, on accessibility — is determined by AI synthesis, not by your marketing or communications function. Understanding this positioning is a prerequisite to influencing it.

Regulatory and disclosure language. FCA-regulated products carry specific disclosure and regulatory language requirements. AI-generated descriptions frequently strip these out in favour of simplicity, creating summaries that may not meet the standards expected of approved financial promotions — even though they are not technically promotions in the traditional sense.

How to structure a monitoring framework

Effective monitoring of AI-generated search representations requires three components.

First, a defined query set — the specific questions customers are likely to ask about your products on AI platforms. These should be drawn from customer research, search data, and product knowledge, and reviewed regularly as product ranges and customer behaviour evolve.

Second, a systematic review process — regular sampling of AI platform responses to those queries, with a structured framework for assessing accuracy against current approved product documentation.

Third, an escalation path — a defined internal process for what happens when a material misrepresentation is identified. Who is informed? Who is responsible for the response? What remediation options exist? Without this, monitoring produces findings that sit in a report without consequence.
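The three components above can be sketched in code. The following is a minimal illustrative Python sketch, not a production tool: the query set, approved facts, platform name, and the string-matching "review" are all placeholder assumptions. In practice, the review step would be a structured human and compliance assessment against current approved product documentation.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    query: str
    platform: str
    issue: str

# Component 1: a defined query set — the questions customers
# actually ask AI platforms (placeholder examples).
QUERY_SET = [
    "What is the minimum deposit for a first-time buyer mortgage?",
    "What is the current easy-access savings rate?",
]

# Approved facts drawn from current product documentation
# (values are placeholders, not real product data).
APPROVED_FACTS = {
    "minimum deposit": "5%",
    "easy-access savings rate": "4.1%",
}

def review_response(query: str, platform: str, response: str) -> list[Finding]:
    """Component 2: systematic review — compare a sampled AI answer
    against approved documentation. A naive string check stands in
    for the structured assessment a real framework would use."""
    findings = []
    for fact, approved_value in APPROVED_FACTS.items():
        if fact in response.lower() and approved_value not in response:
            findings.append(Finding(
                query, platform,
                f"'{fact}' does not match approved value {approved_value}",
            ))
    return findings

def escalate(findings: list[Finding]) -> None:
    """Component 3: escalation path — route material findings
    to a named internal owner rather than leaving them in a report."""
    for f in findings:
        print(f"ESCALATE [{f.platform}] {f.query}: {f.issue}")

# Example: a sampled response citing an outdated rate.
sampled = "The easy-access savings rate is 2.9%."
findings = review_response(QUERY_SET[1], "ChatGPT", sampled)
escalate(findings)
```

The point of the sketch is the shape of the loop: monitoring only has value when a finding reliably reaches someone with the authority and process to act on it.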

Who owns this?

In most regulated firms, AI search monitoring currently sits in no one's remit. Marketing is focused on visibility and acquisition. Compliance is focused on approved communications and financial promotions. Digital teams are focused on conventional search and owned channels.

Establishing ownership is the necessary first step. In practice, this is most effectively approached as a cross-functional responsibility — with compliance setting the accuracy and disclosure standards, marketing owning the visibility and representation strategy, and digital owning the technical infrastructure that influences what AI platforms can read and synthesise.

Understand your current AI monitoring position

The AI Search Visibility & Compliance Diagnostic includes a structured review of your firm's current AI-generated representations and a framework for establishing ongoing monitoring.

Strategy · AI Search

Why traditional SEO does not solve AI search visibility

The techniques that improved a firm's position in conventional search rankings do not transfer to AI-generated search. The underlying logic is different. The inputs are different. The outputs are different. Firms that treat AI search as an SEO problem will find the solutions do not work.

For the past two decades, the dominant question in search strategy has been: how do we rank higher in Google? The answer to that question has been refined continuously — keyword optimisation, backlink strategies, technical site performance, content quality signals. An entire industry has developed around it.

AI-generated search asks a fundamentally different question: which sources does this AI system trust enough to synthesise answers from — and what does it say when it does?

These are not the same question. The skills, tools, and methodologies that address one do not address the other. Firms and agencies that treat AI search as an extension of conventional SEO are starting from an incorrect model of the problem.

How conventional search works

Conventional search engines return a ranked list of URLs. The ranking is determined by a complex set of signals — relevance, authority, technical performance, user behaviour — that have been the subject of optimisation for years.

The user clicks a link. They visit a page. They form their own judgment about the content. The search engine's job is to surface relevant pages, not to interpret or summarise them.

How AI-generated search works

AI search platforms do not return a ranked list of URLs. They return a synthesised answer — a direct response to the user's question, drawn from multiple sources, presented as a single coherent output.

AI systems do not rank pages. They synthesise answers. The inputs that determine what appears in those answers are structurally different from conventional search ranking signals.

The sources the AI draws on are not determined by the same signals that drive conventional search rankings. They are determined by what the AI system can read, parse, and trust — which depends on how information is structured, attributed, and made available to the system's indexing and training processes.

A firm with strong conventional search rankings may have poor AI search presence. A firm with weaker conventional rankings may be well-represented in AI-generated answers because its content is structured in a way that AI systems can synthesise with confidence. The correlation between the two is weak and unreliable.

What AI search visibility actually requires

Entity optimisation. AI systems understand the world through entities — named things, with defined attributes and relationships. A financial services firm that is clearly defined as an entity — with consistent, structured information about what it is, what it offers, and how it relates to other entities in the financial services landscape — is more reliably represented in AI-generated answers.

Schema and structured data. Machine-readable markup tells AI systems what information on a page means, not just what it says. Product terms, eligibility criteria, rates, and features expressed in structured schema formats are more reliably read, understood, and synthesised by AI systems than the same information presented in unstructured prose.

Knowledge graph presence. AI systems draw on knowledge graph data — structured information about entities and their relationships — as a high-confidence source. Firms that are well-represented in knowledge graphs are more consistently included in AI-generated answers, and the information presented tends to be more accurate.

Source authority and indexability. AI systems apply their own assessment of source authority and reliability. This is not identical to conventional domain authority, and it does not respond to the same optimisation signals. Understanding what makes a source trusted by AI systems — and ensuring your firm's content meets those criteria — requires a different analytical approach.
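To make the structured-data point concrete, the sketch below generates a schema.org JSON-LD block for a savings product. This is an illustrative example only: the firm name, product name, rate, and URL are hypothetical placeholders, and the specific properties a firm should publish will depend on its products and compliance sign-off.

```python
import json

# Minimal schema.org markup for a financial product, using the
# FinancialProduct type. All names, rates, and URLs below are
# illustrative placeholders, not real data.
product_markup = {
    "@context": "https://schema.org",
    "@type": "FinancialProduct",
    "name": "Example Fixed-Rate Savings Account",  # hypothetical product
    "provider": {
        "@type": "BankOrCreditUnion",
        "name": "Example Bank plc",  # hypothetical firm
    },
    "annualPercentageRate": {
        "@type": "QuantitativeValue",
        "value": 4.5,
        "unitText": "percent",
    },
    "feesAndCommissionsSpecification": "https://example.com/fees",
}

# Embedded in a page as a JSON-LD script element, this markup states
# explicitly what the product is, who provides it, and what its rate
# is -- the kind of machine-readable signal the article describes.
json_ld = (
    '<script type="application/ld+json">\n'
    + json.dumps(product_markup, indent=2)
    + "\n</script>"
)
print(json_ld)
```

The value of markup like this is that it removes ambiguity: an AI system reading the page does not have to infer the rate or the provider from prose, because both are declared as typed properties.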

The agency problem

Most digital agencies, including those with strong SEO capabilities, have not yet built the methodologies to address AI search visibility. This is not a criticism — the field is genuinely new, and the tools and frameworks are still developing. But it does mean that delegating AI search strategy to an existing agency relationship, without verifying that the agency has specific AI search capability, is unlikely to produce meaningful results.

The first step is to understand your current position accurately — what AI platforms are saying about your firm, where the gaps and inaccuracies are, and what structural factors are driving them. That understanding is the prerequisite for any meaningful strategy.

Understand your firm's current AI search position

The AI Search Visibility & Compliance Diagnostic provides a structured, evidenced assessment of where your firm stands in AI-generated search — and what the structural factors are that determine it.

Insights

This piece is in preparation and will be published shortly.

If this topic is relevant to your firm's current position, a direct conversation may be more useful than waiting for the published piece.

Legal

Privacy Policy

Last updated: May 2026

Who we are

JOAN Digital Ltd is a company registered in Scotland. Our registered office is 5 South Charlotte Street, Edinburgh, EH2 4AN. We operate as JOAN. References to "JOAN", "we", "us", or "our" in this policy refer to JOAN Digital Ltd.

We are the data controller for personal information collected through this website. For questions about this policy, contact us at info@joandigital.com.

What information we collect

When you complete our contact form, we collect your name, organisation, role, email address, and any information you choose to include in your message. We collect only what is necessary to respond to your enquiry.

We do not use cookies beyond those strictly necessary for the website to function. We do not use tracking pixels, advertising cookies, or analytics platforms that process personal data.

How we use your information

Information submitted through our contact form is used solely to respond to your enquiry and, where relevant, to follow up on matters arising from that enquiry. We do not use your contact information for marketing without your explicit consent.

Our lawful basis for processing contact form submissions is legitimate interest — specifically, responding to business enquiries directed to us.

How long we keep your information

Enquiry correspondence is retained for up to 24 months from the date of last contact, unless a longer retention period is required by law or contractual obligation. If no engagement follows from an initial enquiry, data is deleted within 12 months.

Sharing your information

We do not sell, rent, or share your personal information with third parties for marketing purposes. Information may be shared with service providers who support our operations (such as email infrastructure), under appropriate data processing agreements, and only to the extent necessary.

We do not transfer personal data outside the United Kingdom or European Economic Area without appropriate safeguards.

Your rights

Under UK data protection law, you have the right to access the personal information we hold about you, to request correction of inaccurate information, to request deletion of your information, to object to processing, and to request restriction of processing.

To exercise any of these rights, contact us at info@joandigital.com. We will respond within 30 days. You also have the right to lodge a complaint with the Information Commissioner's Office (ICO) at ico.org.uk.

Changes to this policy

This policy may be updated from time to time. The date at the top of this page indicates when it was last revised. Continued use of this website following any changes constitutes acceptance of the updated policy.