
AI isn’t stumbling because of bad algorithms. It’s stumbling because people don’t trust it.
Just as companies are betting big on AI to drive growth, trust in the technology is eroding. Our study shows that 71% of organizations are still hesitant to trust autonomous agents in enterprise environments.
For a tool hailed as the next productivity engine, it’s a crisis of confidence hiding in plain sight.
Vice President of Analytics and AI at Capgemini Invent.
Lack of trust on the front line
For AI to generate real, scalable ROI, it cannot be left alone in an innovation lab. It must be integrated into the daily decisions and workflows that enable a business to thrive. But the people who do this daily work are, ironically, among the least convinced.
The Harvard Business Review recently found that employee use of employer-provided AI tools fell 15% between February and July of this year. When AI seems opaque or untested, workers tend to avoid it. Worse, they are turning instead to their own “ghost” AI tools.
A Capgemini study shows that 63% of software professionals currently using generative AI are doing so with unauthorized tools or in an ungoverned manner. This introduces security threats, compliance gaps, and inconsistent results, ultimately slowing successful adoption.
When trust erodes on the front lines, organizations cannot move beyond the experimentation phase, no matter how advanced their models are.
Excessive confidence without safeguards
The trust issue cuts both ways. There is also too much confidence, or rather confidence in all the wrong places. IDC reports that around a third of UK businesses say they “completely trust” AI.
Yet many of these same organizations do not have basic safeguards in place: governance, data controls, risk frameworks or ethical oversight. In other words, they trust technology more than their own infrastructure.
This is a risky imbalance. Organizations that overestimate their AI maturity jump into experimentation without mitigating bias or planning for compliance. And in a regulatory environment shaped by frameworks such as the EU AI Act, the cost of misjudgment can be severe: fines of up to 7% of global revenue for high-risk misuse.
Solve the trust problem at its source
Traditional AI has always needed governance. But generative AI, with its creativity, unpredictability and risk of hallucination, requires an intentional approach more than ever. Building trust requires a holistic approach combining governance, culture, training, and intentional collaboration between humans and AI.
Governance cannot be treated as an afterthought, bolted on once deployment is complete. It should shape the design from day one. Organizations must establish clear frameworks for model lifecycle management, data provenance, risk assessment, explainability, human oversight, and continuous monitoring and quality assurance.
Although the list may seem long, strong governance should be seen as a competitive advantage and not an obstacle. Done right, it accelerates innovation by making scaling safe, predictable, and reliable.
Building AI that reflects human values
We can’t expect people to trust what they don’t understand – nor should they be forced to.
Confidence comes from clarity. It thrives when employees understand how AI works, why it recommends certain outcomes, and how it aligns with the organization’s values. This is why governance must go beyond technical considerations. Human ethics must be built into the AI stack as tightly as performance metrics.
When people recognize their own principles (like fairness and transparency) reflected in AI behavior, adoption becomes a natural step rather than a leap of faith.
Give employees skills and confidence
AI is most effective when people know how to use it. Comprehensive training, new role definitions (such as AI supervisors and people-in-the-loop specialists), and a skills-based approach help employees feel empowered rather than displaced.
To achieve this, human-AI collaboration must be intentionally designed. Decision-making structures, escalation routes, task handoffs: these are all intricacies that need to be mapped out.
While autonomous agents can drive end-to-end processes, humans remain responsible for providing direction, maintaining guardrails, and ensuring positive outcomes.
From pilots to business ROI
The path to AI ROI is through scale, and scale only happens when the foundation is strong.
Today, these foundations are often lacking. Many organizations are stuck in pilot mode, running isolated experiments without the data architecture, governance, or change management needed to scale.
A recent Microsoft study finds that the AI leaders achieving 3x the ROI of laggards are distinguished by a consistent, organization-wide strategy.
Leaders must accept that developing such a strategy takes time. Trust is not built overnight. A step-by-step roadmap aligned with your people’s values will always be more effective than rushing to deploy the latest model this month.
A future built on trust
AI will not reach its enterprise potential through technical innovation alone. This requires cultural transformation, governance innovation and a renewed commitment to creating systems that people actually want to work with.
Trust is the foundation of scalable ROI. Ignore it and your AI strategy becomes a ticking time bomb. But when organizations invest in the right safeguards, skills and values, AI becomes what it was always meant to be: a trusted partner in building the future.
This article was produced as part of TechRadarPro’s Expert Insights channel, where we feature the best and brightest minds in today’s technology industry. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you would like to contribute, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro