
Finovate Europe Keynote: How AI Agents are Driving Higher CSAT in Finance

Missed Finovate Europe? Here's everything we covered in our mainstage presentation.

Dimitri Masin · Feb 17, 2026


Last week at Finovate Europe, I shared results that would have seemed impossible just a few years ago: AI agents consistently outperforming human support teams on customer satisfaction scores in financial services.

If you missed my presentation, here's an overview of what we've learned from deploying AI at scale with leading global banks and fintechs.

The financial services support problem

Financial services support is complex and fragmented. A single customer query often requires multiple touchpoints, different systems, and various team members. A customer calling about a declined card might need account verification, transaction analysis, fraud checking, and potentially a card replacement. Each step traditionally involves different departments, different tools, and multiple handoffs.

Most AI implementations focus on only one part of the problem at a time. Either they're automating customer-facing interactions with chatbots and voice systems, or they're automating back-office tasks like data processing and workflow management. But they're not connecting these pieces together.

We take a more holistic approach. During my presentation, I showed how, instead of automating fragments, we built one agent to automate your entire support operation.

Our guiding principle: "What would a human do?"

This question drives every design decision we make. Drawing from my seven years building the AI and Data team at Monzo, I outlined five principles that separate AI agents customers love from those they tolerate:

Optimise for understanding first

During my presentation, I showed two side-by-side examples that illustrate this principle perfectly.

The typical "best effort" agent misses critical nuance. A customer calls about a card decline, and the agent immediately starts explaining generic reasons why cards get declined. It doesn't clarify the customer's actual intent or dig into their specific situation. This leads to multiple back-and-forth exchanges. The customer gets upset. The problem doesn't get solved.

Our AI agent approaches the same case like a skilled human would. It seeks to understand the problem first before offering solutions. Instead of assuming why the card was declined, it asks clarifying questions, checks the specific transaction details, and understands the customer's immediate need. The result? It resolves the issue on the first try, and the customer feels heard and understood.

The difference isn't just about having better responses. It's about fundamentally changing how the AI approaches each conversation.
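To make "understanding first" concrete, here is a minimal sketch of a clarify-before-resolve loop. Everything in it (the intent labels, the confidence threshold, the action names) is an illustrative assumption, not our production system:

```python
from dataclasses import dataclass

@dataclass
class Turn:
    intent: str | None   # e.g. "card_declined_at_pos", or None if unclear
    confidence: float    # how sure the agent is about that intent

def next_action(turn: Turn) -> str:
    """Decide whether to clarify or resolve.

    A "best effort" agent jumps straight to a generic answer. An
    understanding-first agent only acts once the intent is clear.
    """
    CLARITY_THRESHOLD = 0.8  # hypothetical cut-off
    if turn.intent is None or turn.confidence < CLARITY_THRESHOLD:
        return "ask_clarifying_question"   # e.g. "Which transaction was declined?"
    return "resolve_with_specifics"        # look up the actual decline reason

print(next_action(Turn(intent=None, confidence=0.3)))               # ask_clarifying_question
print(next_action(Turn(intent="card_declined", confidence=0.92)))   # resolve_with_specifics
```

The point of the threshold is that clarifying is cheap and guessing is expensive: one extra question up front avoids the multiple back-and-forth exchanges described above.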

Know when to step back

This might be the most important principle, and it's where many AI implementations fail. During my presentation, I showed how we pre-empt scenarios where a human should step in.

Great human agents know their limits. They recognise when a situation involves complex emotions, requires nuanced judgment, or falls into grey areas that need human discretion. Our AI agents learn the same skill.

We don't wait for the AI to fail before escalating. Instead, we teach our agents to identify high-risk scenarios upfront. A customer reporting potential fraud? Immediate escalation. Someone dealing with a bereavement and needing account changes? Pass to a human agent.

The key is recognising these situations before the customer gets frustrated. When we pre-empt correctly, customers appreciate that they're immediately connected to the right person. When we guess wrong, we lose their trust.
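As an illustration of what pre-emption looks like, here is a minimal triage sketch. The two high-risk categories come straight from the examples above; the hard-coded table and function names are stand-ins for what is, in practice, a learned classifier:

```python
# Illustrative only: a fixed triage table standing in for a learned model.
HIGH_RISK_TOPICS = {
    "suspected_fraud": "escalate_immediately",
    "bereavement": "hand_to_human",
}

def triage(topic: str) -> str:
    """Route high-risk scenarios to a human *before* the AI attempts them.

    The key property: escalation happens upfront, not after a failed
    AI attempt has already frustrated the customer.
    """
    return HIGH_RISK_TOPICS.get(topic, "handle_with_ai")

assert triage("suspected_fraud") == "escalate_immediately"
assert triage("card_declined") == "handle_with_ai"
```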

Connect to the right data safely

Connecting to data safely is crucial in financial services, yet it's something most AI vendors completely overlook.

Our agents operate on a principle of least privilege. They only access the data needed to complete the specific procedure at hand. Not everything in the customer's file, not their entire transaction history, just what's relevant to solving their current issue.

The tools side is equally important. Our agents come with dozens of out-of-the-box integrations that let them take real action and execute back-office processes. They can actually freeze a card, process a refund, or update account details. For unique cases, we have custom tools available through our solve API and MCP integrations.

This approach works because it balances capability with compliance. The agent can solve problems without compromising on the strict data governance that financial services requires. It's not just about having access to information. It's about having the right access to take meaningful action on the customer's behalf.
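Here is a minimal sketch of what the least-privilege and tooling ideas above can look like together: each procedure declares exactly the fields it may read and the actions it may take, and the agent gets nothing else. The class and field names are assumptions for illustration; this is not the actual solve API or MCP schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ToolGrant:
    """Illustrative least-privilege grant: a procedure declares exactly
    what it may read and what it may do, and nothing more."""
    procedure: str
    readable_fields: frozenset
    allowed_actions: frozenset

# Hypothetical grant for a card-decline procedure: the agent can see the
# one declined transaction and the card status -- not the whole file.
CARD_DECLINE = ToolGrant(
    procedure="investigate_card_decline",
    readable_fields=frozenset({"last_declined_transaction", "card_status"}),
    allowed_actions=frozenset({"freeze_card", "order_replacement_card"}),
)

def read_field(grant: ToolGrant, field_name: str, customer_record: dict):
    """Fail closed: any field outside the declared scope is denied."""
    if field_name not in grant.readable_fields:
        raise PermissionError(f"{grant.procedure} may not read '{field_name}'")
    return customer_record.get(field_name)
```

The design choice worth noting is that the grant is scoped to the procedure, not the agent: the same agent handling a different procedure would receive a different, equally narrow grant.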

Continually improve with domain-specific reinforcement learning

Our AI agents learn from millions of financial services interactions. We train our own reinforcement learning models specifically for this industry. I call it the "data flywheel": we discover edge cases unique to finance, generate training data, and continuously improve the models.
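At a very high level, the flywheel is just a loop: collect interactions, mine the edge cases, turn them into training data, retrain, redeploy. The sketch below shows only the loop's shape; every function in it is a placeholder, not a real training pipeline:

```python
def mine_edge_cases(interactions: list) -> list:
    """Placeholder mining step: in practice, this is where the
    finance-specific domain knowledge lives."""
    return [i for i in interactions if not i["resolved"]]

def to_training_examples(cases: list) -> list:
    """Placeholder labelling step: pair each hard case with an
    expert-reviewed response."""
    return [(c["prompt"], c["expert_response"]) for c in cases]

def flywheel_iteration(interactions: list, train_step) -> None:
    """One turn of the flywheel: mine edge cases, build data, retrain."""
    train_step(to_training_examples(mine_edge_cases(interactions)))

# Usage with a stand-in train step that just reports dataset size:
flywheel_iteration(
    [{"prompt": "Why was my card declined?", "expert_response": "...", "resolved": False}],
    train_step=lambda data: print(f"fine-tuning on {len(data)} examples"),
)
```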

Build trust gradually

Financial services customers have zero tolerance for AI mistakes. I call this the "Trust Escalator": a careful scale-up process. Start with offline testing and red-teaming. Move to 10-20 live conversations per day with 100% quality review. Only scale up after you get your first customer thank-you note.
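One way to think about the Trust Escalator is as a gated rollout: each stage has a volume cap and an exit criterion, and you only advance when the criterion is met. The first two stages below come directly from the talk; the scale-up caps and the code structure are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    max_live_conversations_per_day: int
    review_rate: float   # fraction of conversations reviewed by humans
    exit_criterion: str

# "offline" and "pilot" come from the talk; the scale-up caps are illustrative.
TRUST_ESCALATOR = [
    Stage("offline", 0, 1.0, "passes offline tests and red-teaming"),
    Stage("pilot", 20, 1.0, "first customer thank-you note received"),
    Stage("scale-up", 1000, 0.1, "CSAT holds at or above the human baseline"),
]

def daily_cap(stage_index: int) -> int:
    """Volume limit for the stage the deployment is currently in."""
    return TRUST_ESCALATOR[stage_index].max_live_conversations_per_day

print(daily_cap(1))  # 20 conversations/day, every one quality-reviewed
```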

Why this works now

The companies seeing these CSAT improvements treat AI as a way to deliver better experiences, not just cut costs. They're making relationship-quality service available to every customer.

My key insight from the presentation: AI agents built specifically for financial services handle both the scale customers expect and the precision regulators require. They deliver better outcomes because they understand the domain well enough to think like experienced human agents.

It’s a journey, but it’s worth it

As I wrapped up my presentation, I made the point that this approach takes patience. But the results justify the effort. When you scale properly, you get better customer experiences at lower cost.

For financial services leaders looking at AI support options, purpose-built AI agents can actually exceed human performance on customer satisfaction. The future here involves using AI to deliver the level of understanding your customers deserve, at the scale your business requires.

Want to see how our agents deliver measurably higher CSAT than human teams? Get in touch for a demo.
