Thought Leadership
David Ralston · 24 February 2025 · 9 min read

AI Won't Replace Your Team. But It Will Change Everything About How They Work

The leadership challenge

Something fundamental is shifting in how contact centres operate. For decades, managing these environments meant balancing headcount against call volume, monitoring average handle times, and adjusting shift patterns to meet demand. Those tasks remain, but a new dimension has emerged: the need to lead teams where artificial intelligence and human agents work side by side.

This is not a theoretical prospect. Organisations across financial services, telecommunications, government, and healthcare are already deploying conversational AI agents that handle live customer interactions. Gartner projects that by 2026, one in ten agent interactions will be automated, up from an estimated 1.6 per cent in 2022. The shift is well under way, and its pace is accelerating.

For leaders, the implications run deeper than selecting a technology vendor. Orchestrating a hybrid workforce demands new competencies: understanding where AI excels and where it falls short, redesigning workflows so humans and machines complement one another, rethinking training programmes, confronting ethical questions that did not exist five years ago, and measuring success with metrics that capture the full picture of organisational health.

None of this is simple. But the contact centre leaders who engage with these challenges now will build operations that are more resilient, more responsive, and ultimately more rewarding for customers and employees alike. Those who delay will find themselves managing increasingly outdated models in an environment that has moved on without them.

Why refining legacy systems falls short

Most contact centres have spent years optimising their interactive voice response (IVR) trees and deploying rule-based chatbots. These tools were meaningful improvements when they were introduced. They reduced the volume of straightforward enquiries reaching human agents, shortened average handle times, and lowered cost per contact.

The problem is that customer expectations have moved faster than these systems can adapt. A menu-driven IVR that was adequate in 2018 now feels cumbersome to callers accustomed to natural language interactions with voice assistants in their homes. A scripted chatbot that can handle password resets and balance enquiries frustrates customers whose issues fall outside its narrow predefined paths.

73%
of customers expect organisations to understand their unique needs, according to Salesforce research

Incremental refinements to these legacy tools produce diminishing returns. Adding another branch to an IVR tree does not make the experience more intuitive. Expanding a chatbot's scripted responses does not give it the ability to reason through an unfamiliar scenario. These systems operate on fixed logic: if the customer says X, do Y. They cannot adapt when a caller phrases a request in an unexpected way, combines two issues in a single sentence, or expresses frustration that requires a different conversational approach.

The gap between what legacy systems can deliver and what customers now expect is widening. Closing it requires a different architectural approach altogether: multi-agent systems built on large language models that understand context, maintain conversational state, and reason through complex interactions in real time. This is not about making old tools slightly better. It is about recognising that the underlying paradigm has changed.

Building effective human and AI collaboration

The most productive framing for AI in the contact centre is not replacement but redistribution. Conversational AI agents are exceptionally well suited to high-volume, repetitive interactions: verifying account details, processing straightforward requests, providing status updates, scheduling appointments, and answering frequently asked questions. These tasks follow predictable patterns, draw on structured data, and do not typically require emotional nuance or complex judgement.

Human agents, by contrast, bring capabilities that remain beyond the reach of current AI: genuine empathy in sensitive situations, creative problem solving for novel issues, the ability to navigate organisational ambiguity, and the social intelligence required to de-escalate a conversation that has become adversarial. When a recently bereaved customer calls to close a family member's account, that interaction demands human sensitivity. When a business client needs to restructure a complex service arrangement that spans multiple product lines, that requires human reasoning and relationship management.

64%
of contact centre agents say that handling fewer routine tasks allows them to provide better service on complex enquiries (ICMI)

The leadership task is to design workflows that route each interaction to the resource best equipped to handle it. This means establishing clear escalation protocols so AI agents transfer seamlessly to humans when an interaction exceeds their capability. It means ensuring that human agents receive full conversational context during handoffs, so customers never repeat themselves. And it means building feedback loops where human agents flag AI responses that missed the mark, feeding continuous improvement into the system.
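The routing and handoff logic described above can be sketched in a few lines. This is a minimal illustration only: the intent labels, confidence and sentiment thresholds, and field names are assumptions for the sake of the example, not any vendor's actual API.

```python
from dataclasses import dataclass, field

# Intents the AI agent is trusted to handle end to end (illustrative list).
ROUTINE_INTENTS = {"balance_enquiry", "password_reset", "status_update", "appointment"}

@dataclass
class Interaction:
    intent: str
    ai_confidence: float                # model's confidence in its intent classification
    sentiment: float                    # -1.0 (hostile) .. 1.0 (positive)
    transcript: list[str] = field(default_factory=list)

def route(interaction: Interaction) -> str:
    """Decide whether the AI agent keeps the interaction or escalates to a human."""
    if interaction.intent not in ROUTINE_INTENTS:
        return "human"                  # novel or complex issue
    if interaction.ai_confidence < 0.75:
        return "human"                  # AI is unsure what the customer wants
    if interaction.sentiment < -0.4:
        return "human"                  # frustrated caller needs de-escalation
    return "ai"

def handoff_context(interaction: Interaction) -> dict:
    """Package full conversational context so the customer never repeats themselves."""
    return {
        "intent": interaction.intent,
        "transcript": interaction.transcript,
        "summary": (
            f"Escalated {interaction.intent} (confidence "
            f"{interaction.ai_confidence:.2f}, sentiment {interaction.sentiment:+.1f})"
        ),
    }
```

The point of the sketch is the shape, not the thresholds: every escalation path carries the full transcript and a summary, which is what makes the handoff seamless from the customer's side.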

Effective collaboration also requires cultural groundwork. Agents who fear that AI will eliminate their positions are unlikely to engage constructively with the technology. Leaders must communicate clearly that automation of routine work is intended to elevate human roles, not eliminate them. This message carries weight only when it is accompanied by tangible investment in skills development and visible changes to how human agents spend their working hours.

Developing the workforce of tomorrow

As intelligent automation absorbs routine tasks, the competencies that organisations need from human agents shift significantly. Proficiency in following scripts and navigating knowledge bases becomes less central. The capacity for critical thinking, emotional intelligence, complex problem resolution, and cross-functional collaboration becomes more important.

This transition does not happen automatically. It requires deliberate investment in training and development programmes that prepare agents for their evolving responsibilities. Practical areas for development include advanced communication skills for handling sensitive or high-stakes conversations, analytical thinking to diagnose issues that fall outside standard resolution paths, and technical literacy to work effectively alongside AI tools and interpret their outputs.

The career architecture of the contact centre also needs to change. In traditional models, the progression from frontline agent to team leader to operations manager follows a relatively narrow path. In a hybrid workforce, new roles emerge: AI trainers who review and refine agent behaviour, escalation specialists who handle the most complex interactions, quality analysts who assess AI and human performance against shared standards, and workflow designers who optimise the division of labour between machines and people.

40%
of current contact centre tasks could be augmented by AI within two years, creating demand for new hybrid roles (McKinsey Global Institute)

Organisations that invest in these transitions stand to benefit from lower attrition. When agents see a credible path to more skilled, more varied, and better-compensated work, they are less likely to view the contact centre as a temporary stop on the way to something else. This matters enormously in an industry where annual turnover routinely exceeds 30 per cent and the cost of replacing a single agent can run into the tens of thousands of dollars.

Navigating the ethics of AI integration

Deploying AI agents that interact directly with customers introduces ethical considerations that traditional contact centre management never had to address. These are not abstract philosophical questions. They have practical consequences for customer trust, regulatory compliance, and brand reputation.

Transparency sits at the top of the list. Customers have a right to know when they are speaking with an AI agent rather than a human. This is increasingly a regulatory expectation in several jurisdictions, but it is also a matter of organisational integrity. Attempting to disguise AI interactions as human ones may produce short-term satisfaction scores, but it erodes trust when the deception is discovered. The more sustainable approach is to be straightforward about the nature of the interaction while demonstrating that the AI agent is genuinely capable of helping.

Bias mitigation is another critical area. Large language models can reflect and amplify biases present in their training data. In a contact centre context, this could manifest as inconsistent service quality across different demographics, varied tone or language complexity based on a caller's accent or speech patterns, or differential outcomes in processes like claims assessment or credit decisions. Leaders need clear protocols for auditing AI behaviour, identifying patterns of unequal treatment, and correcting them promptly.
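An audit of the kind described above can start very simply: compare outcome rates across groups and flag any group that trails the best-served one. The field names and the five-point disparity threshold below are illustrative assumptions, not a regulatory standard.

```python
from collections import defaultdict

def resolution_rates(interactions):
    """Resolution rate per demographic group from logged AI interactions.

    Each row is assumed to look like {"group": "A", "resolved": 1} (illustrative schema).
    """
    totals, resolved = defaultdict(int), defaultdict(int)
    for row in interactions:
        group = row["group"]
        totals[group] += 1
        resolved[group] += row["resolved"]
    return {g: resolved[g] / totals[g] for g in totals}

def flag_disparities(rates, max_gap=0.05):
    """Flag groups whose resolution rate trails the best-served group by more than max_gap."""
    best = max(rates.values())
    return [g for g, r in rates.items() if best - r > max_gap]
```

A real audit would add statistical significance testing and segment by interaction type, but even this crude comparison turns "audit AI behaviour" from a principle into a repeatable weekly check.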

Data privacy adds a further layer of complexity. AI agents that access customer records, transaction histories, and personal information must operate within strict data governance frameworks. This encompasses not only compliance with regulations like the Australian Privacy Act and GDPR, but also internal policies about data retention, access logging, and the separation of training data from live customer information. Customers must be confident that their personal details shared during an AI interaction are handled with the same rigour as those shared with a human agent.

There is also the question of accountability. When an AI agent provides incorrect information that leads to a customer making a poor financial decision, who bears responsibility? When an automated process denies a claim that a human agent would have approved, what recourse does the customer have? These are not hypothetical scenarios. They occur in live deployments, and organisations need clear frameworks for adjudicating them before they arise, not after.

Leaders should establish formal AI governance committees that include representatives from operations, technology, legal, and compliance. These groups need the authority and resources to set guidelines, review incidents, and mandate changes when AI behaviour falls outside acceptable boundaries. Choosing technology partners who build these safeguards into their platforms from the ground up, rather than bolting them on as an afterthought, makes this governance considerably easier to maintain.

Measuring what matters

The metrics that define success in a hybrid contact centre need to evolve beyond traditional operational efficiency indicators. Cost per contact and average handle time remain relevant, but they tell an incomplete story when AI is handling a significant share of interactions.

Customer satisfaction must be measured across both channels, with the ability to compare AI and human performance on equivalent interaction types. This reveals where AI is meeting or exceeding expectations and where it is falling short. First-contact resolution rates, tracked separately for AI and human agents and then blended, provide a clearer picture of operational effectiveness than aggregate numbers that obscure the contribution of each.

Employee engagement deserves equal attention. If the promise of intelligent automation is that human agents will do more meaningful work, then engagement and satisfaction scores should reflect that promise being fulfilled. A decline in agent engagement after AI deployment is a signal that the redistribution of work is not functioning as intended, perhaps because agents feel surveilled rather than supported, or because the complex interactions routed to them are overwhelming rather than stimulating.

Escalation quality and handoff success rates merit tracking as well. When an AI agent transfers a call to a human, how often does the customer report having to repeat information? How frequently do human agents note that the AI's context summary was inaccurate or incomplete? These metrics directly measure the quality of collaboration between the two workforces and highlight areas for improvement.
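A handoff success metric of this kind is straightforward to define once the two signals are logged. The log field names below are assumptions for illustration; the definition of "success" (no repeated information and an accurate context summary) follows the paragraph above.

```python
def handoff_success_rate(handoffs):
    """Share of AI-to-human transfers where the customer did not have to repeat
    information and the receiving agent rated the AI's context summary as accurate.

    Each record is assumed to look like (illustrative schema):
    {"customer_repeated_info": False, "summary_accurate": True}
    """
    if not handoffs:
        return 0.0
    clean = sum(
        1 for h in handoffs
        if not h["customer_repeated_info"] and h["summary_accurate"]
    )
    return clean / len(handoffs)
```

Tracked over time, a falling rate is an early signal that the AI's context summaries are drifting out of step with what human agents actually need.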

Finally, leaders should track the pace and effectiveness of their own adaptation. How quickly is the organisation incorporating lessons from AI performance data into process improvements? How responsive is the leadership team to emerging ethical concerns? How well are training programmes keeping pace with the shifting demands on human agents? These are not numbers that appear on a standard dashboard, but they may be the most consequential indicators of long-term success.

The temptation is to reduce measurement to a single return-on-investment calculation. That approach misses the point. Intelligent automation changes the character of work across the entire operation. A comprehensive measurement framework captures efficiency gains alongside the human factors that determine whether those gains are sustainable over time.


The contact centre industry is at an inflection point. The technologies available today are capable enough to handle a meaningful share of customer interactions with quality that meets or exceeds human benchmarks for routine tasks. The organisations that thrive will be those whose leaders treat this moment not as a technology procurement exercise, but as a fundamental rethinking of how work is organised, how people are developed, and how success is defined.

Strategies that were effective six months ago may already need revision. The velocity of progress in conversational AI means that assumptions about what machines can and cannot do require regular reassessment. Leaders who build adaptive, learning organisations that continuously evaluate their approach and adjust course will be better positioned than those who seek a fixed end state.

AI is not coming for your team. It is coming for the parts of the work that prevent your team from doing what they do best. The organisations that recognise this distinction, and act on it with clarity and conviction, will define the next chapter of customer service.

Lead the shift

See how CallD.AI helps contact centre leaders build hybrid teams where humans and AI agents deliver together.