Building Leadership Capability for an AI-Enabled Organisation
Helping leaders and professionals use AI with confidence, judgment, and responsibility.
AI is entering organisations faster than many people expected. Systems are introduced, workflows change, and expectations shift – often before employees fully understand what is happening or why.
When change moves faster than explanation and involvement, trust begins to erode. People hesitate, resist, or disengage – not because they reject technology, but because they feel excluded from decisions that affect their work.
This article explores why trust is the critical human challenge of AI adoption and what leaders can do to rebuild it through clarity, dialogue, and human‑centred leadership as automation reshapes everyday work.
I spend a lot of time with people who do the hard work in the “gray area” of financial recovery services (complex asset recovery, fraud investigations, and challenging debt situations). Their job is to navigate every nuance of each case with empathy, not to follow a rigid script. This was supposed to be work that was safe from automation. Yet AI has entered it at a speed that employees didn’t plan for and certainly didn’t choose. The systems arrive fast. The explanations come later, if at all. And in that vacuum, people are left confused. They hesitate, question, and worry about their position.
Resistance is the natural result of change racing ahead of understanding. When rollouts fail to give employees enough context and control, people feel unsteady and push back. I know because I’ve seen this pattern play out often. According to Business Insider, New York City public schools banned ChatGPT in January 2023, then reversed course by May after teachers demanded guidance rather than prohibition (“New York City’s Public Schools Reverse Their Ban on ChatGPT—Admitting It Had Been ‘Knee-Jerk Fear’”). The episode showed how quickly the technology had moved ahead of the dialogue. Trust broke down and was repaired only when leaders slowed down to teach, set boundaries, and invite questions. It also proved that when leaders answer that trust gap with more software and fewer conversations, the gap only widens.
WHY THE REAL CHALLENGE IS HUMAN
An article in the May-June 2024 issue of Harvard Business Review reports that hyperautomation is now listed as a primary technology goal by 80% of organizations, which raises the stakes for clarity and participation at all levels (“For Success with AI, Bring Everyone on Board”). The hard part is not the technology but getting people to use it with confidence. AI changes who makes decisions, how those decisions are explained, and where accountability lives. Without trust, people work around the tool, second-guess outputs, or delay choices. That’s why your edge isn’t the algorithm you deploy but the trust you earn while using it.
Treat AI as a people challenge first. Lead with disciplined change management, real emotional intelligence, and clear communication about purpose, roles, and guardrails. Tell people why this change matters now, what will and won’t change in their day, where human judgment still applies, how their input will shape updates, and how to raise concerns without penalty.
Some programs turn the corner when leaders recognize that success depends on people. Take Starbucks, which, according to Reuters in April 2025 (“Starbucks to Beef Up Store Staffing, Go Slow on Automation Rollout”), paused the broader rollout of its Siren/Siren Craft equipment and shifted investment toward staffing and store experience—keeping selective deployments while emphasizing people over machines. The technology didn’t disappear; trust and performance improved as teams felt supported and standards became clearer.
Simply put, be a clear and caring leader so that gains in speed don’t come at the cost of energy and pride in the job. What will help you more than the code you send out is the trust you build by showing people your work, setting limits, and keeping them in the loop on choices that affect them.
Emotional intelligence is your ability to notice and manage your emotions, read the room, and choose words and actions that build trust. For AI to work in that environment, it must be designed around those same human skills. This doesn’t mean removing judgment; it means giving people better signals and more time to apply their judgment with care.
You can start by being clear about roles. Let AI handle the sorting and presentation of relevant data while your teams decide, explain, and support. Train managers to coach tone and clarity with emotional awareness in mind. Provide everyone with simple rules for when to pause automation and ask for a human review. Be open about what data the system learns from and where people can question an output without penalty. When these expectations are clear, AI will strengthen empathy instead of dulling it.
I observed this in practice when we partnered with Elephants Don’t Forget to deploy Clever Nelly—an adaptive AI microlearning platform—across our global workforce. The program replaces sporadic, generic courses with short, daily practice matched to each person’s gaps. It reinforces compliance and builds soft skills such as respectful language, careful listening, and clear next steps. The rhythm is light but steady, so knowledge stays current and people can apply judgment when pressure is high. Over time, teams were better prepared for difficult conversations and handled them with accuracy and care.
TURNING AI INTO A HUMAN LEARNING PARTNER
An April 2024 Harvard Business Review article states that almost 60% of employees want upskilling, and 57% are already seeking training outside work because what they get internally is not enough (“Corporate Learning Is Boring—But It Doesn’t Have to Be”). Start there. Close the gap with learning that is timely, personal, and tied to real tasks. AI helps when it delivers quick, specific feedback that builds self-awareness alongside technical skill. A strong system focuses on relevance and real outcomes, not volume.
Use AI as a learning partner. Deliver short, adaptive sessions at the moment of need, with prompts that meet each person where they are. Let the tool surface the few facts that matter and suggest a next step while people decide and explain. Follow important decisions with a brief check-in to turn experience into judgment. Give managers a clear view of patterns so coaching happens early, and offer a simple way to pause automation and ask for a review. Designed this way, AI builds skill and confidence without adding pressure, and people feel supported as the work and the tools evolve.
FROM RESISTANCE TO A RESILIENT WORKFORCE
People become worried about AI first when they fear losing their jobs or their voices. These worries are valid, so they need to shape how we teach and introduce the change. But if you want the quickest way to drain energy from a rollout, keep the people who do the work out of the room where the rules are set. Involving them, and showing them how their ideas shape the plan, is the fastest way to win back their enthusiasm.
Set the table with two habits that hold. Create regular office hours where developers, risk leaders, and frontline teams meet to review a small set of cases. Keep the group small enough that everyone can speak, and rotate voices so knowledge moves across locations. Then publish a short note on what was learned and what will change. People accept trade-offs when they can see the exchange.
I think about Walmart, the global retailer that introduced AI-driven inventory tooling in 2023 (“Decking the Halls with Data: How Walmart’s AI-Powered Inventory System Brightens the Holidays,” Walmart Global Tech). The first version landed hard. Store teams felt managed by a model that didn’t understand local traffic or seasonal quirks, and shrinkage and stockouts told the story on the floor. In response, leadership paused, built a feedback loop with district managers and associates, and changed the parameters that were driving poor decisions. They also set predictable schedules that balanced the tool’s suggestions with the human realities of running a store. The same technology produced steadier outcomes because the people closest to the work shaped how it operated.
The same principle applies in service environments. If you pilot a decision tool for case handling, invite a small group of agents to run real scenarios and narrate what they see. Capture where the guidance helped and where it caused confusion. Adjust the prompts and the thresholds, then try again.
A few tight cycles like this build a system that respects local context instead of flattening it. Adoption follows because the tool feels like it belongs to the team.
Resilience grows through repetitions like these. A new hire finds the right phrasing for a delicate moment because real-time guidance offered a useful nudge they could accept or ignore. A seasoned pro spots a subtle policy change during a brief daily check rather than an hour-long lecture. A manager coaches the behaviors that the analytics reveal and gives credit for the judgment that closed the loop. These small wins add up. The climate shifts from guarded to engaged. Performance follows because people feel equipped and respected, which is the ground where initiative takes root.
DESIGNING THE FUTURE OF WORK WITH HUMANITY AT THE CORE
As AI moves to everyday practice, the test is simple: Does the system make work more human, not less? Technology should stay in the service of purpose, and cultures that last will be built on trust and steady ethics, not on features alone.
Innovation holds when people can see how decisions are made and where judgment sits. Explain what the system learns from and how it arrives at a recommendation. Give employees a path to pause automation and ask for a review. Keep a regular check on bias, and fix what you find in daylight. When intent and limits are visible, confidence grows and adoption follows.

To know whether this approach is working, widen the scoreboard. Keep tracking output, and add measures that tell you how people are coping. Psychological safety and belonging should rise as adoption grows, not fall, and upskilling should be visible and proactive. Use these signals to steer pace, coaching, and investment so progress is real for the people doing the job.
There is a leadership habit that binds these pieces. Tell the story of the work as it is, not as you wish it to be. Share where the system fell short and what you changed. Invite a fresh round of feedback and be specific about what will happen next. This rhythm builds credibility. It also keeps the build close to reality, which is where the value is. This is the path forward. Combine data with judgment so that tools lift human strengths. Let progress be measured by results and the way people are treated. Organizations that hold to that balance will move faster with fewer missteps, because trust reduces friction and learning compounds.
WRITTEN BY
Curtis Vincent is chief human resource officer at Phillips & Cohen Associates Ltd.
MCE Recommends
At MCE, AI is not a technology programme. It is a management and leadership capability.