TECHNOLOGY
Opinion
Look under the
bonnet before you
implement AI
Artificial intelligence (AI) is no longer just a buzzword.
Tools that once
seemed futuristic
– automating
suitability reports, summarising
meetings, or tailoring client
communications – are now widely
available and surprisingly easy
to deploy.
Used well, these tools free advisers
to focus on the human side of advice.
But that doesn’t mean firms should
rush headlong into implementation
without first understanding what
they’re deploying and the risks that
come with it.
It’s tempting to assess AI tools solely
based on their outputs. Can they draft
a decent report? Do they pull out the
key points from my meetings? Do they
help me communicate more clearly
with clients?
If the answer is yes, job done, right?
Not quite. In a regulated profession
like ours, it’s not enough for a tool
just to generate usable outputs.
Advisers need to understand how
the technology functions under
the bonnet.
That means asking where the data
comes from, how it’s protected and
how the model is trained. You should
also understand how hallucinations
and inaccuracies are handled, what
oversight is built in, how bias is
managed and how changes to the
model are communicated to you.
Without clear answers, you’re
effectively placing blind trust in
a black box – one that’s making
decisions on your behalf. That leaves
you personally liable if something
goes wrong.
While the Financial Conduct
Authority (FCA) hasn’t yet published
detailed guidance on AI, it will in
time. In the meantime, the Consumer
Duty already applies. That means
you’re expected to demonstrate high
standards of care, diligence and
transparency – whether you’re using
AI or not.
Of course, tech providers are
responsible for ensuring their
products function safely and ethically.
But advisers aren’t off the hook. You’re
still the one giving the advice. It’s still
your name on the file.
That’s why you should document
how you’re using AI in the advice
process. Keep a clear record of what
systems you’re using, what data is
being shared and what checks and
balances are in place to ensure good
client outcomes.
Data protection is a critical part
of this. Client data is sacred and the
risks of mishandling it are enormous.
Advisers must think very carefully
before uploading personal or financial
information into tools like ChatGPT
or Microsoft Copilot.
Even if a breach occurs at the
software provider’s end, your firm
could still be held responsible under
General Data Protection Regulation
(GDPR) and other data protection
rules if you’re inputting sensitive
client details into an AI model.
Before rolling out any AI-powered
tool across your business, test it in a
controlled environment. Understand
exactly what’s happening to the data,
where it goes, how it’s stored, who can
access it and ensure you’re meeting
regulatory obligations at every stage.
Just as importantly, take the time
to work out what problem you’re
trying to solve. What inefficiencies
are you addressing? What value will
the tool bring to your clients? AI is not
a solution in search of a problem. It
should support your advice process,
not define it.
CLAIRE CHERRINGTON
is director of PMS and Bankhall
at Sesame Bankhall Group
The most effective AI solutions are
the ones that integrate seamlessly with
your existing systems and workflows.
That’s where you’ll see meaningful
gains in efficiency, consistency
and scale.
But secure, accurate, compliant
tools come at a cost. And rightly so. In
this area, the old adage holds true: you
get what you pay for.
Free or cheap AI might be useful
for internal brainstorming or admin
tasks. But when it comes to anything
client-facing or advice-related,
cutting corners is a risk you simply
can’t afford.
AI has the potential to transform the
advice profession. But it’s not magic.
It’s software. And like all software, it
must be properly understood, tested,
governed and integrated.
Used responsibly, AI can drive
productivity, enhance personalisation
and improve your firm’s profitability.
But used blindly, it introduces
regulatory, reputational and
client risks.
So, before you implement AI, sit
down with your chosen provider. Ask
the right questions. Understand the
model. Think through how it fits into
your business.
Because ultimately, the only safe
way to adopt AI is to take a good look
under the bonnet and get to grips with
how it works. ●