Customer support teams don't have a knowledge problem; they have a knowledge distribution problem. It may be that only your most experienced agents know the three-step workaround for failed international transfers. Or there's a system outage that hasn't been widely announced yet. And almost certainly, your knowledge base still has some outdated policies.
The knowledge exists. It's just scattered across your help centre, internal docs, emails, Slack messages, and the heads of different people on your team. An individual customer issue gets resolved, but that knowledge doesn't spread. It stays with whoever handled it.
AI agents trained only on your knowledge base can't access any of this. At best, they plateau at a 30-40% resolution rate. At worst, this fragmented knowledge leads to compliance violations and customer-facing mistakes that your human agents would never make.
We built Gradient Labs so your AI agent could launch with the knowledge your best human agents already have, on day one.
We accumulate the knowledge of your best human teams
We don't wait for your AI agent to escalate to a human to learn the right way to do something. We extract knowledge automatically from the conversations your human agents have already had.
The system works across three knowledge layers:
1. Start with your documented knowledge
This is your foundation: everything you've already written down for customers and your internal teams.
We auto-sync your entire help centre and internal documentation, so if it's written down, your agent can use it. This covers the documented answers: account setup, product features, standard policies.
But most knowledge bases (KBs) are written for humans browsing your site, not for AI agents in live conversations. And they're missing the implicit knowledge that makes your human agents effective.
2. Capture what your best agents already know
After we ingest your documented knowledge, we turn to Facts. Facts are snippets automatically extracted from past conversations between your human agents and customers. They capture the implicit knowledge that never makes it into official documentation—the tone, the edge cases, the things your agents know but would never write down.
For example:
"We apologise, we can't process international wires on weekends" (tone + limitation your help centre wouldn't state this way)
"Try closing the app completely and reopening, not just backgrounding it" (a workaround agents know but isn't documented)
"Your statement generates on the 3rd business day of the month, not the calendar date" (nuance that gets lost in KB simplification)
In financial services, this implicit knowledge is critical. Your human agents know that "dispute" means different things for card transactions versus direct debits versus international transfers. Your knowledge base doesn't spell this out. It assumes the agent will figure it out from context.
And since we specialise in financial services, some of the most critical regulatory knowledge is already baked in. Take tipping-off, for example: if a card is blocked due to a fraud investigation, it's illegal in the UK to tell the customer why. Your human agents learned this in training and would never make that mistake. We've built that same constraint directly into the agent, so it's not something your team needs to worry about extracting or maintaining.
But beyond these hard regulatory boundaries, your AI agent still needs to learn how your team actually handles these situations. The careful language they use. The alternative explanations they give. The specific edge cases where they deviate from the script. That's what the extracted facts capture: not the regulatory rules themselves, but the practised, compliant way your best agents work within them.
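One way to picture how the hard regulatory boundary sits apart from that learned behaviour: a fixed guardrail runs on every draft reply, and the extracted facts only shape the compliant wording that goes out instead. The sketch below is deliberately simplified; the keyword check and the fallback copy are toy stand-ins, not how the real safeguard works.

```python
# Simplified illustration of a hard guardrail, kept separate from learned Facts.
# A real tipping-off safeguard is far more robust than this keyword check.

BLOCK_REASON_FRAUD = "fraud_investigation"

def guardrail_check(draft_reply: str, card_block_reason: str) -> str:
    """Never disclose that a card block is due to a fraud investigation."""
    if card_block_reason == BLOCK_REASON_FRAUD and "fraud" in draft_reply.lower():
        # Fall back to compliant wording of the kind learned from human agents,
        # rather than revealing the reason for the block.
        return ("Your card is currently restricted. Our team is reviewing the "
                "account and will be in touch as soon as we can share more.")
    return draft_reply
```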
New facts are generated automatically as your agent handles conversations. You review, edit, and delete outdated information through a curation interface. The system automatically flags contradictions with your public KB so you catch conflicts before customers do.
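As a rough illustration of that contradiction check, the sketch below compares a newly extracted fact against your published articles using a pluggable predicate. In practice the judgement would come from a model rather than the toy keyword rule shown here, and the function names are ours, not the product's.

```python
from typing import Callable

def flag_contradictions(
    fact_text: str,
    kb_articles: list[str],
    contradicts: Callable[[str, str], bool],
) -> list[str]:
    """Return the KB articles a new Fact appears to conflict with, for human review."""
    return [article for article in kb_articles if contradicts(fact_text, article)]

# Toy stand-in predicate, purely for illustration: flag a conflict when the KB
# promises same-day processing but the extracted fact says it can't be done.
conflicts = flag_contradictions(
    "We can't process international wires on weekends.",
    ["International wires are processed the same day, seven days a week."],
    contradicts=lambda fact, article: "same day" in article and "can't" in fact,
)
print(conflicts)  # surfaced in the curation queue before customers see a wrong answer
```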
3. Update your agent without overhead
Sometimes you need to give your agent information that doesn't belong in your official documentation.
A system outage happening right now. A limited-time promotion. A temporary policy change while you're migrating platforms.
Notes let you add time-sensitive information in minutes. We recommend using them sparingly, as permanent knowledge still belongs in your knowledge base. But for urgent updates, Notes keep your agent accurate without forcing you to publish articles you'll delete next week.
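One way to think about Notes is as knowledge with an expiry date attached. The sketch below shows the idea; the fields and the filtering step are simplified for illustration, not a description of how Notes work internally.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Note:
    """Time-sensitive knowledge given directly to the agent (illustrative fields only)."""
    text: str
    created_at: datetime
    expires_at: datetime | None  # unlike KB articles, Notes are expected to lapse

def active_notes(notes: list[Note], now: datetime) -> list[Note]:
    """Only surface Notes that are still current, so stale updates never reach customers."""
    return [n for n in notes if n.expires_at is None or n.expires_at > now]

now = datetime.now()
notes = [
    Note("Card payments are delayed due to an ongoing outage.", now, now + timedelta(hours=6)),
    Note("The spring fee waiver ended last month.", now - timedelta(days=40), now - timedelta(days=10)),
]
print(active_notes(notes, now))  # only the outage note is still active
```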
Why this approach gets better results
Most AI support tools learn gradually. When your AI agent escalates to a human, some tools might generate a draft article from that single resolution for you to implement later. But that means making a lot of mistakes first and hoping that, over time, your agent gets better.
Gradient Labs works differently. We learn from all your past conversations before your agent launches. That's why our customers see high resolution rates from day one.
Want to see how this three-layer system works in practice? Book a demo with us here.