The Three Pillars of AI Programme Growth for Financial Services

While some companies buy simple AI agents for quick cost savings, the long-term competitive advantage hinges on how you optimise and extend your AI agent’s capability.

Neal Lathia · Feb 17, 2026


Last week in London, the Gradient Labs team met with a group of innovative financial services customers at different stages of launching and scaling their AI agents. AI strategy is a new frontier, and we found in our conversations that most teams are learning similar lessons as they grow, scale, and optimise their AI agent’s performance.

The truth is, the best results come from treating AI adoption as organisational change, not just technology deployment. While adding an AI agent to your customer support team can yield immediate ROI — our new customers often see a 50%+ resolution rate almost immediately — the true long-term value comes from building a roadmap that grows and optimises the agent over time.

The pressure to successfully execute an AI programme has never been stronger. As analyst Joshua Lloyd-Lyons recently shared on stage at Finovate Europe, 99% of financial services organisations intend to deploy AI agents. But only 11% of those organisations have made it to deployment, and only the top 2% of firms have reached the “scaling” phase. These gaps indicate a failure in execution and maintenance.

Having spent collective decades working with AI agents, the Gradient Labs team put together a set of benchmarks to help our financial services customers identify best practices and growth opportunities in their live programmes. The benchmarks touch on three main pillars: ongoing maintenance, increased participation rate, and an optimisation plan. Here are the key highlights from our customer workshop.

1. Adopt a teaching mindset

Yulia Yan, a member of the AI Delivery Team at Gradient Labs, started the workshop with a simple piece of advice: treat your AI agent like a new team member.

Like a new human member of your support team, the AI agent is context-dependent. It only knows what it has been explicitly told, and it can’t infer gaps in its knowledge. It needs structured guidance for complex tasks and clear boundaries to operate safely.

It’s important to remember that the quality of the model is only half the equation needed for exceptional results. The context provided to the model is the other, vital half, and this is up to the customer to evaluate and maintain.

At Gradient Labs, the AI agent draws from a hierarchy of sources: the customer’s knowledge base, facts (AI-generated insights extracted from past conversations), and notes, which are added by customers for private or time-sensitive information like outages or campaigns.

For more complex reasoning and multi-step tasks, procedures come into play: these let customers specify actions, branching logic, or data-driven decisions in natural language.
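
To make this concrete, here is a rough sketch of how those layers could be represented in code. It is purely illustrative: the class, field, and source names below are hypothetical and are not Gradient Labs’ actual data model.

```python
from dataclasses import dataclass

# Illustrative only: a simplified stand-in for the knowledge hierarchy described
# above. Names are hypothetical, not Gradient Labs' real schema.
@dataclass
class KnowledgeItem:
    source: str                     # "knowledge_base", "fact", "note" or "procedure"
    content: str
    expires_at: str | None = None   # notes are often time-sensitive (outages, campaigns)

# Rough consultation order mirroring the hierarchy in the text: public help-centre
# articles, then AI-extracted facts, then private notes, with procedures driving
# multi-step, branching tasks.
HIERARCHY = ["knowledge_base", "fact", "note", "procedure"]

def group_by_source(items: list[KnowledgeItem]) -> dict[str, list[str]]:
    """Group available context by source type, preserving the hierarchy order."""
    grouped: dict[str, list[str]] = {layer: [] for layer in HIERARCHY}
    for item in items:
        grouped.setdefault(item.source, []).append(item.content)
    return grouped
```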

To ensure that all these knowledge sources are sufficient and up to date, customers should monitor a key metric: handoff rate. If the AI agent hands off a conversation, it means it doesn’t have sufficient knowledge to resolve the query, either because of an outdated knowledge base, incomplete procedures, or new gaps due to policy changes, product rollouts, or seasonal challenges.

By using diagnostic thinking to work backwards from the handoff, customers can evaluate the agent’s reasoning process to identify the root cause of any knowledge gaps. Yulia closed her session by reminding the room that the agent doesn’t learn from mistakes: humans do. Maintaining the agent’s consistency and quality comes from a practice of deliberate iteration with a cycle of reviewing, diagnosing, and teaching.
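
As a minimal sketch of what that review-diagnose-teach loop can start from, the snippet below computes handoff rate and ranks the topics driving handoffs. The record format is an assumption for illustration, not a Gradient Labs export schema.

```python
from collections import Counter

# Toy conversation records: each has a topic label and an outcome.
# This shape is assumed purely for illustration.
conversations = [
    {"topic": "card_disputes", "outcome": "resolved"},
    {"topic": "card_disputes", "outcome": "handed_off"},
    {"topic": "account_closure", "outcome": "handed_off"},
    {"topic": "login_issues", "outcome": "resolved"},
]

handed_off = [c for c in conversations if c["outcome"] == "handed_off"]
handoff_rate = len(handed_off) / len(conversations)
print(f"Handoff rate: {handoff_rate:.0%}")  # 50% in this toy sample

# Rank topics by handoff volume: the top entries are the first candidates for
# review and diagnosis (outdated articles, missing procedures, new policy gaps).
for topic, count in Counter(c["topic"] for c in handed_off).most_common():
    print(topic, count)
```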

2. Increase participation rate

Next, Head of Delivery Rob Dickinson talked about an important factor in driving ROI: participation rate. Increasing participation rate requires a roadmap towards complexity, which is particularly important in financial services, where support cases frequently involve a host of regulations, identity checks, and sensitive data.
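
As a back-of-the-envelope illustration, and assuming participation rate here means the share of inbound conversations the AI agent takes part in at all (the precise definition may differ), the arithmetic looks like this:

```python
# Illustrative figures only; assumes participation rate = conversations the
# agent takes part in / total inbound conversations.
total_conversations = 10_000   # monthly inbound volume (example figure)
agent_participated = 6_500     # conversations the agent handled or triaged (example figure)

participation_rate = agent_participated / total_conversations
print(f"Participation rate: {participation_rate:.0%}")  # 65%
```

Growing the numerator is what the two expansion opportunities below are about: more breadth (channels) and more depth (capabilities).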

A basic rule of AI support agents is that expanding automation becomes more efficient over time. Failing to embrace complexity not only caps the automation rate and leaves human agents tied up in expensive problems; it also misses the point — AI wasn’t invented for the basics.

The first, and easiest, opportunity for expansion is to increase the breadth of the programme by adding more volume, such as duplicating working use cases across channels. If a use case is implemented on chat, for example, then most of the work is already done; it only takes a small amount of effort to optimise the same use case for another channel like email or voice.

At Gradient Labs, the AI agent uses similar logic and financial services-specific guardrails across all channels, so the work comes down to honing particular channel nuances, rather than ensuring the agent knows how to handle the problem at all.

The second opportunity for expansion is to increase your programme’s depth by giving the agent more capabilities, building towards higher coverage. This involves expanding the agent’s sources of information beyond the knowledge base, activating features such as Tools and Resources that allow for customised actions based on the use case. By committing to this roadmap, customers can empower the agent to resolve complex, multi-step enquiries that are customer-specific and may involve taking action on the customer’s behalf.

3. Build a roadmap around gaps to close

Lastly, Wei Han Lim from the AI Delivery team talked about how to identify current gaps for roadmap planning and how to prioritise items by impact. He talked through the primary metric most customers use to measure their programme’s health as a whole: resolution rate.

Resolution rate can be maintained and improved by closing both knowledge gaps (as discussed earlier) and capability gaps. Capability gaps tend to require more work, which is why it’s important to understand them thoroughly and prioritise the biggest wins in a long-term roadmap.
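
One way to make that prioritisation concrete is to rank gaps by the resolution-rate uplift available from closing them. The sketch below assumes handoffs can be labelled as knowledge or capability gaps during review; the gap names and volumes are illustrative, not customer data.

```python
# Illustrative gap list: each entry is a named gap, its type, and how many
# handoffs per month it currently causes.
gaps = [
    {"name": "outdated KYC article",         "type": "knowledge",  "monthly_handoffs": 420},
    {"name": "card replacement action",      "type": "capability", "monthly_handoffs": 310},
    {"name": "missing chargeback procedure", "type": "knowledge",  "monthly_handoffs": 150},
]
monthly_volume = 10_000  # total conversations per month (example figure)

# Rank by the share of total volume each gap blocks: a rough proxy for the
# resolution-rate uplift from closing it.
for gap in sorted(gaps, key=lambda g: g["monthly_handoffs"], reverse=True):
    uplift = gap["monthly_handoffs"] / monthly_volume
    print(f"{gap['name']:<32} {gap['type']:<11} potential uplift ~ {uplift:.1%}")
```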

While closing both types of gaps is critical to overall ROI, capability gaps often represent the thorniest challenges and could require greater team involvement. The Gradient Labs team has a system for identifying these gaps, working hand in hand with customers to deliver personalised reports according to business priorities, as well as recommendations for next steps.

The ultimate goal is a clear path to an 80-90% resolution rate. This level of scale and accuracy, while often achievable by AI agents in certain low-stakes use cases, has industry-changing power in tightly regulated, complex verticals like financial services. Here, scalability is a prize that not only builds customer trust and satisfaction, but also minimises human error in an environment where a single mistake can be extremely costly.

The key takeaway

Ultimately, the success of an AI programme hinges on long-term organisational support, not just successful implementation. By prioritising maintenance, expansion, and closing capability gaps over time, financial services organisations can unlock the true long-term value of automated support.


