Insurance executives are about to spend billions on AI systems that their employees don’t know how to use. Accenture’s new survey of 218 insurance C-suite leaders reveals 90% plan to increase AI investment in 2026, while simultaneously, employee confidence in using AI has dropped 10 percentage points since last summer. The gap between boardroom enthusiasm and workforce readiness is widening dangerously—and it could waste those billions.
The disconnect is stark: 85% of insurance executives view AI as a revenue growth engine, but only 40% of employees say their training has prepared them for AI-enhanced roles. Nearly a third of companies are rebuilding entire processes around AI, yet fewer than 10% are redesigning employee job descriptions to match. The result: workers feel unprepared, anxious about job security, and increasingly reluctant to adopt the very tools their leadership is betting the company’s future on.
This matters because insurance might be the industry where AI adoption succeeds or fails most visibly. Claims processing, underwriting, risk assessment, and fraud detection are all prime candidates for AI transformation—but only if the people operating these systems trust them and know how to intervene when AI outputs are wrong. And those outputs are wrong surprisingly often: 54% of employees report that low-quality or misleading AI results are actively undermining productivity rather than improving it.
⚡ WireUnwired • Fast Take
- 90% of insurance executives plan to increase AI spending in 2026, but employee confidence in using AI has dropped 10 points since last summer
- Only 40% of workers say training prepared them for AI roles; fewer than 10% of companies are redesigning jobs to match
- 54% report low-quality AI outputs reducing productivity instead of improving it
- Job security concerns rising: 48% feel secure, down from 59% six months ago

The Enthusiasm Gap: Leadership vs. Reality
Accenture’s “Pulse of Change” survey, covering 3,650 C-suite leaders across 20 industries and 20 countries, reveals insurance leading the AI investment charge. Among the 218 insurance executives surveyed, 34% are already deploying AI agents across multiple functions—not pilots or experiments, but operational production systems handling real business processes.
The bullish sentiment extends beyond current deployments. Even if the much-discussed “AI bubble” bursts, 47% of insurance executives say they’d increase AI spending further, and 37% would step up hiring. Only 6% would cut investments by more than 20%. Khalid Lahraoui, who leads Accenture’s insurance practice, summarizes the mood: “Insurance leaders are confident in AI’s capacity to drive growth, and as such, they are decisively increasing investments, despite ROI uncertainty.”
That phrase—“despite ROI uncertainty”—is telling. The confidence isn’t based on proven returns but on competitive necessity. Insurance companies see AI as existential: adopt aggressively or risk being left behind by competitors who move faster. The problem is that speed without execution capability wastes money and creates operational chaos.
Where AI Is Actually Failing
The 54% of employees reporting that AI outputs undermine productivity points to a specific failure mode: systems deployed before the underlying data infrastructure is ready. As Accenture notes, 35% of insurance leaders acknowledge that progress depends on “getting core data strategies and digital abilities right”—which means 65% apparently haven’t internalized this lesson yet.
Poor data quality creates a vicious cycle. AI systems trained on incomplete, inconsistent, or outdated data produce unreliable results. Employees learn not to trust those results and develop workarounds—manually checking AI recommendations, reverting to old processes, or simply ignoring AI suggestions. This defeats the entire purpose of AI investment while creating the appearance of adoption in leadership dashboards.
The training gap compounds the problem. Only 40% of employees feel adequately prepared for AI-enhanced roles, and just 20% believe they have any say in how AI affects their work. When workers feel AI is being imposed on them without their input or adequate preparation, resistance is predictable. The 15-percentage-point drop in employees independently trying AI tools (from 54% to 39% over six months) suggests this resistance is growing, not shrinking.

The Job Security Crisis
Job security concerns are spiking at precisely the moment companies need employee buy-in for AI adoption. The share of insurance workers who feel secure in their roles dropped from 59% to 48% in just six months—an 11-point collapse. Meanwhile, 59% of workers believe young professionals face harder job markets due to automation and AI.
This creates a toxic dynamic where companies ask employees to enthusiastically adopt tools that workers fear will eliminate their jobs. The rational response for individual employees is to avoid fully engaging with AI systems, learn just enough to check compliance boxes, and quietly start looking for exit options. This is exactly what the survey data shows happening.
The confidence gap extends to organizational preparedness. While 67% of executives feel ready for technological disruption, only 38% of employees share that confidence. For economic disruption, the gap widens: 43% of leaders feel prepared versus just 29% of workers. Leadership optimism isn’t translating into workforce confidence—it’s creating skepticism about whether executives understand the actual implementation challenges.
What Insurance Companies Should Actually Do
Accenture’s data points to clear priorities that most companies are ignoring. Only 24% have implemented continuous AI learning programs, and just 5% are adjusting job roles to accommodate AI integration. These numbers need to flip: training and role redesign should be happening before or alongside technology deployment, not as afterthoughts.
Specific actions that would address the survey’s findings:
Redesign roles before deploying systems. If AI agents handle routine claims processing, underwriters’ jobs should evolve to focus on complex cases and exception handling. Rewrite job descriptions, adjust performance metrics, and retrain people for the new reality—don’t just layer AI onto existing roles and hope workers figure it out.
Fix data quality first. The 54% reporting bad AI outputs suggests many companies are deploying AI on top of messy legacy data. Pause new AI rollouts until data governance improves. It’s cheaper to delay deployment than to burn employee trust on systems that give wrong answers.
Give employees genuine input. The 20% who feel they have a say in AI implementation likely overlap heavily with the 39% still experimenting with tools on their own. Involve frontline workers in selecting AI systems, designing workflows, and defining success metrics. People support what they help create.
Communicate the job security reality honestly. If AI will eliminate certain roles, say so and offer transition paths. If it won’t, prove it with specifics—which roles are safe, what new opportunities emerge, how skills transfer. The current ambiguity maximizes anxiety while minimizing trust.
Insurance executives are right that AI represents a competitive imperative. But technology alone doesn’t create competitive advantage—executed technology does. And execution requires a workforce that understands, trusts, and actively engages with AI systems. Right now, the survey data shows insurance companies failing this test while doubling down on the investment that won’t pay off without fixing the people problem first.
FAQ
Q: Why are insurance employees’ AI usage rates dropping if companies are investing more?
A: Increased deployment doesn’t equal increased voluntary adoption. When AI tools produce unreliable outputs (as 54% report), employees learn to distrust them and develop workarounds. Combined with inadequate training (only 40% feel prepared) and job security fears (down 11 points in six months), workers are rationally pulling back from AI tools that feel imposed rather than helpful. The drop from 54% to 39% trying AI independently suggests growing resistance, not growing enthusiasm.
Q: What does it mean that fewer than 10% of companies are redesigning jobs for AI?
A: Companies are adding AI to existing roles without rethinking what those roles should become. For example, an underwriter might get an “AI assistant” but still have the same performance metrics, responsibilities, and workflows—except now they’re also supposed to supervise AI outputs. This creates more work, not less, and confusion about what success looks like. Effective AI adoption requires redefining roles around what humans do best after AI handles routine tasks.
Q: Should insurance companies slow down AI investment based on this data?
A: Not necessarily slow it, but redirect it. The problem isn’t spending on AI—it’s the ratio of technology spending to workforce-readiness spending. If 90% of the budget goes to AI systems and 10% to training, role redesign, and change management, that’s backwards. The data suggests companies should maintain or increase total AI budgets but shift 30–40% toward workforce preparation, data quality, and organizational change. Technology without capable users is just expensive shelfware.
For insights on enterprise AI implementation and workforce transformation, join our WhatsApp community where 2,000+ business leaders discuss real-world technology adoption.