Half of all U.S. employees now use AI at work. That's Gallup's Q1 2026 number, up from 21% just two years ago. The growth curve has been steep and steady - four consecutive quarters of increases.

But here's what that headline misses: a huge portion of those users are invisible to their employers.

They're already using it. You just don't know.

The data on this is striking. Microsoft and LinkedIn's 2025 Work Trend Index found that 75% of knowledge workers use AI at work, but 78% of them are bringing their own tools. Gartner tracked unauthorized AI tool usage jumping from 41% in 2023 to 68% in 2025. A Salesforce survey tells the same story: more than half of AI users at work describe their usage as "unapproved."

These aren't rogue actors. They're your best people. They're saving time (90% say so), focusing on higher-value work (85%), and being more creative (84%). They're solving real problems with tools they found on their own because nobody gave them a sanctioned alternative.

And 53% of them worry that admitting it will make them look replaceable.


The double-edged sword

Hidden AI use creates real value and real risk at the same time.

On the upside, these early adopters are discovering genuine productivity gains. They're automating tedious work, accessing capabilities that used to require a specialist, and rethinking how they deliver value to customers. Some of them are months ahead of their peers.

On the downside, they're operating without guardrails. Nearly half have uploaded sensitive data into public AI tools. There's no visibility into what's being shared, no governance around accuracy, and no way to capture or scale the best practices they're developing. The average cost of a shadow AI data breach is now $4.2 million.

This is the paradox organizations face right now: the people creating the most value with AI are also creating the most unmanaged risk.

Why "wait and see" is failing

A lot of leaders are going slow. Waiting for clarity. Waiting for permission. Waiting for IT to figure it out.

Meanwhile, their people aren't waiting at all.

Gallup's data shows AI usage hitting all-time highs quarter after quarter. Deloitte's State of AI in the Enterprise report found that high-performing organizations invested early in AI talent and infrastructure and now have dozens of AI applications in production, while laggards remain stuck in pilots. The gap between leaders and followers is widening, and it's getting harder to close.

Waiting for clarity is not a strategy. It's a way to fall behind while feeling responsible.

The IT training reflex

When organizations do act, many default to treating AI as a technology problem. IT gets the budget. A technical training vendor gets the contract. Employees sit through sessions on prompt engineering or data science fundamentals.

This makes less sense every quarter. Modern AI tools are fundamentally human-centered: natural language interfaces and powerful inference capabilities are rapidly removing the need for technical knowledge. You don't need to understand how a large language model works to use one well, any more than you need to understand TCP/IP to send an email.

The real capability gaps are human ones. Can your people evaluate AI output critically? Can they spot when the AI is confidently wrong? Do they understand what to delegate and what to own? Can your leaders think about AI's impact on their business model, their competitive position, their talent strategy?

Those aren't technical training problems. And they won't be solved by IT alone.

What actually works

Two things move the needle.

Sponsored, grassroots experimentation. Instead of top-down rollouts, give people permission and structure to experiment with AI in their actual work. Bring the hidden users into the daylight. Create psychological safety around AI use so people share what they're learning instead of hiding it. This approach is faster, stickier, and more practical than any classroom program, because it starts with real problems and real workflows.

Leadership development around AI judgment and strategy. Leaders at every level need to understand AI fundamentals right now, and they need to think like futurists alongside their day jobs. What does AI mean for how their team creates value? Where are the biggest opportunities for augmentation? What decisions need human judgment and what can be safely delegated? These are strategic questions that require ongoing development, not a one-time briefing.

HR and L&D's role

Here's what I'd say to every HR and L&D leader reading this: you have a critical role to play in AI adoption, and you need to operate in partnership with IT and the business.

Most HR and L&D leaders I talk to know this. But many are hesitant to assert themselves, especially when IT has already staked a claim on "AI training." Don't be. The technical skills piece is a small and shrinking part of the picture. The human side - adoption, judgment, culture change, leadership development - is where the real work is. And that's your wheelhouse.

Go in with confidence. The need is urgent, the value proposition is clear, and your people are already ahead of you.


Jim Perry is co-founder of Harness Intelligence, where he designs hands-on AI learning experiences for organizations navigating the human side of AI adoption.


Sources:

  • Gallup, Workplace AI Tracking, Q1 2026
  • Microsoft & LinkedIn, Work Trend Index, 2025
  • Gartner, Unauthorized AI Tool Usage Survey, 2025
  • Deloitte, State of AI in the Enterprise, 2026
  • BlackFog, Shadow AI Research, 2025
  • WalkMe/SAP, Shadow AI and Training Gaps Survey, 2025
