
How a fraud model taught us to build AI that understands people

We built an AI model that accurately warned customers of fraud. They ignored it. Learn how this experience shaped our approach to building AI agents, and the importance of designing agentic systems that deeply understand people.

Danai Antoniou · Nov 17, 2025

Warnings never heeded

In 2020, during the first months of the pandemic, I was working at Monzo and the UK was seeing a surge in sophisticated scams. With most people at home ordering packages, scammers sent fake delivery texts and followed up with phone calls pretending to be the victim’s bank. They coached people through sending their own money to the scammers’ accounts.

We needed to move quickly, so we built a machine learning model that detected fraudulent payments with very high accuracy. When it spotted a likely scam, it triggered a bright red warning in our app. The message was clear. It said the payment looked fraudulent and the customer should not continue.

We were convinced we had solved the problem.

Then we checked the data. The model was accurate, but the warning was getting ignored. People were still pressing “approve” and sending the money.

We eventually understood why. The victims were being coached by scammers on the phone. The scammer would say something like, “You will see a warning. Everyone gets it. Ignore it and continue.” Under pressure, most people followed the voice on the phone rather than the message on the screen.

Our accurate model had run into a very human form of manipulation. It never stood a chance.

Breaking the scammers’ spell

The failure of that warning alert left us with a real sense of responsibility. We had built something that detected fraud but ultimately still didn’t protect our customers from it.

So we tried a different approach. Over one long weekend, we created a new system. Instead of only showing a warning, the system would block the suspicious payment altogether and immediately connect the customer to a real fraud analyst.

It worked. Once a human expert joined the conversation, everything shifted. They could ask simple questions in a calm voice. They could build enough trust to help the customer pause and think. They could cut through the pressure the scammer had created.

Our fraud losses dropped sharply. The graph from that week still sits in my files because of how steep the change was.

Human intervention was a win, but we couldn't keep putting an expert on every single case. Analysts' time was limited and the number of scams kept growing. We had a volume problem.

Scaling empathy to prevent an increasing volume of fraud

This challenge eventually led me to co-found Gradient Labs. We're building AI support agents that do far more than handle simple queries: agents that extend the skills, judgment, and reasoning of expert analysts to meet the volume of modern fraud.

Our approach is to train AI agents on thousands of real cases and conversations from the best human fraud analysts. The focus is on how analysts speak to customers, how they ask questions, and how they anticipate what a scammer has already told the victim. A good analyst knows exactly when to say something like, "I understand someone may have asked you to move this money quickly," instead of just, "Our models indicate this is a scam."

Our team has lived this problem from within a regulated bank. We know how scammers change their playbooks and how money moves once it has been stolen. We have seen these patterns first-hand. That experience is what we are now teaching our AI.

What we do next

To prevent fraud, we need to build agentic systems that understand people as much as they understand data. A model can be accurate and still fail if it cannot recognise the pressure someone is under or the story they have been coached to believe.

If we can build AI agents that provide the same expertise and empathy as the world's best fraud analysts, we can reduce the number of fraud victims by an order of magnitude.

This is just one of the things we’re tackling with our AI agents at Gradient Labs. Follow along for the journey here.



Ready to automate more?

Meet the only AI customer service built for Finance
