I was gutted to miss Joe Craig’s session in Manchester on the opportunities and challenges associated with AI. Apparently, they’d made a deepfake video of him selling an earpiece that translates pensions jargon – the pensions codpiece! It sounded both fascinating and entertaining. But I did get to hear the gist over drinks at Hymans’ Aperitivo on the Wednesday night, and it seemed like an apt topic for today’s blog.

Joe is concerned that AI firms are buying up financial advice apps and what that might do to the quality and accuracy of the advice people receive over time. His point was timely: only days earlier, OpenAI announced its acquisition of Roi, a personal finance app offering AI-driven investment advice. The move could mark the start of a new era in which global tech firms – not financial regulators – shape how advice is delivered, and perhaps how it’s understood.

That concern isn’t theoretical. Quietroom’s paper How AI Is Changing the Way Your Customers Make Decisions shows that tools such as Apple Intelligence, Microsoft Copilot, and Google Gemini already rewrite and summarise financial content before people even read it. Search engines and email clients now decide what information customers see, what they ignore, and how messages are interpreted. A pension newsletter can be condensed by AI into a few lines, sometimes losing critical details about payment rules or legal rights.

Multiply’s Mike Curtis made a similar point in his discussion with Quietroom earlier this year: AI can enhance personalisation but must be grounded in tested, rule-based systems for compliance and accuracy. Large language models still “hallucinate”, and in finance that’s not a quirk, it’s a risk. As Curtis put it, traditional systems remain essential because they hold the regulatory logic that prevents mistakes from becoming harm.
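
To make that pairing concrete, here’s a minimal sketch of what a rule-based guardrail over an AI summary might look like. Everything in it – the sample text, the crude rules, the names like audit_summary – is invented for illustration, not drawn from any real compliance system or from Curtis’s own tooling.

```python
import re

# Hypothetical sketch: check whether an AI-generated summary of a pension
# letter still contains the details that must survive condensation.
# The rule set below is deliberately crude and invented for this example.

SOURCE = (
    "Your annual pension statement. Your normal retirement age is 65. "
    "Contributions of 8% are deducted monthly. You must claim your "
    "transfer value before 31 March 2026."
)

AI_SUMMARY = (
    "Your pension statement is available. Contributions are deducted monthly."
)

def extract_critical_facts(text: str) -> set[str]:
    """Pull out the kinds of detail a summary must not drop:
    percentages, deadlines and stated ages."""
    patterns = [
        r"\b\d{1,2}%",                     # contribution rates like '8%'
        r"\b\d{1,2} [A-Z][a-z]+ \d{4}\b",  # deadlines like '31 March 2026'
        r"\bage is \d{2}\b",               # retirement age statements
    ]
    facts = set()
    for pattern in patterns:
        facts.update(re.findall(pattern, text))
    return facts

def audit_summary(source: str, summary: str) -> list[str]:
    """Return critical facts present in the source but missing from the
    summary - each one is a reason to route the output to a human."""
    return sorted(extract_critical_facts(source) - extract_critical_facts(summary))

if __name__ == "__main__":
    missing = audit_summary(SOURCE, AI_SUMMARY)
    if missing:
        print("Summary dropped critical details:", missing)
    else:
        print("Summary preserved all flagged details.")
```

Run as written, the script flags the contribution rate, the retirement age and the transfer deadline as details the summary silently dropped – exactly the kind of loss Curtis warns about, caught by deterministic rules rather than by another model.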

So, the issue isn’t whether AI will enter financial advice – it already has. The real question is how we build governance around it. Who checks the accuracy of an AI summary when it’s the first thing a member sees? How do we protect consumers from acting on an automated interpretation of complex pension information? And what happens when global AI platforms start to blend marketing, search, and advice into a single, persuasive “answer”?

For now, the safest route is integration, not replacement. As Curtis argued, the future lies in AI and human expertise working hand-in-hand, with clear accountability and ongoing testing. Quietroom adds that firms should audit, simplify, and structure their content so that both humans and machines can interpret it accurately.
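
One small, hypothetical sketch of what “structuring content for machines” could mean in practice: publishing the facts that must survive summarisation as explicit labelled fields alongside the prose, rather than burying them in paragraphs. The field names and JSON shape below are my own invention, not taken from Quietroom’s paper or any published schema.

```python
import json

# Invented example of a newsletter item with its critical facts broken out
# as machine-readable fields, so an AI assistant can quote them verbatim
# instead of paraphrasing them away.

newsletter = {
    "headline": "Changes to your pension scheme from April",
    "body": (
        "From April the scheme moves to monthly processing. Most members "
        "will see no change to their payments."
    ),
    "key_facts": [
        {"label": "Effective date", "value": "1 April 2026"},
        {"label": "Action required", "value": "None for most members"},
        {"label": "Your legal rights", "value": "Unchanged"},
    ],
}

print(json.dumps(newsletter, indent=2))
```

The thinking is simple: an assistant that copies a labelled field is far less likely to garble a deadline or a legal right than one that compresses a paragraph.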

Maybe that’s the message behind Joe’s deepfake too: we can laugh at the “codpiece”, but the translation problem it mocks is real. If AI becomes the new interpreter of pensions language, we need to make sure it’s fluent in both accuracy and ethics.

