How Do I Know If My Team Is Actually Using AI?
Last updated: May 14, 2026
Most teams have a split: one or two people using AI heavily, the majority not using it at all, and everyone performing the same answer when asked. This split doesn't show up in your dashboards. It shows up in output quality, turnaround time, and the occasional comment that reveals one person has figured something out that no one else knows. The problem isn't dishonesty. It's that no one has made it safe to admit either position — using AI or not knowing how.
Why Is This Harder Than It Looks?
Asking your team if they're using AI is a loaded question. The person using it heavily worries you'll take credit for their efficiency gains, or worse, use those gains as a reason to cut headcount. The person not using it worries you'll see them as behind. Both groups give you the same non-answer: "Yes, I'm exploring it."
This is the same dynamic that makes performance reviews unreliable. People don't lie outright. They just tell you what they think you want to hear.
There's also a tool visibility problem. The most common AI use in organizations happens in consumer tools — ChatGPT, Claude, Perplexity — that leave no trace in your project management system. Someone can be running half their first drafts through an AI assistant and your Jira board will show exactly what it always showed.
What Actually Works
Watch outputs, not declarations. Look at the work product, not the survey response. Unusually fast first drafts. Summaries that are cleaner than the raw material would suggest. Documentation that appears where it didn't before. These are signals.
Make it safe to be at either end of the spectrum. Run a team conversation where you go first: share something you tried with AI that worked and something that didn't. When the person with authority in the room admits to a failed experiment, the psychological dynamic shifts. People stop performing competence they don't have.
Build structured sharing into the workflow. A 15-minute weekly slot where one person shares an AI workflow they actually use — specific tool, specific task, before and after. This surfaces who's using what without requiring anyone to self-report under pressure.
Separate audit from adoption. If your goal is to know what's happening, use signals-based observation. If your goal is to increase adoption, build the environment where it's safe to try. Conflating these two goals produces neither outcome.
The Thing People Miss
The reason employees hide AI use is the same reason executives hide not knowing about AI. There's no safe space to be a beginner, and no safe space to be ahead of the curve. Both positions feel like exposure.
Fix the environment before you audit behavior. An environment where it's safe to say "I tried this and it didn't work" will surface more honest information than any adoption survey. And it will drive more real adoption than any mandate.
The signal to look for isn't tool usage. It's whether your team talks about AI the way they talk about other tools in their workflow — practically, specifically, and without defensiveness. That's when you know adoption is real.
What This Looks Like in Practice
CoCreate's discovery calls at the start of any engagement consistently surface this split. Before any tool deployment, we talk to individual contributors one-on-one — not in a group, not with managers present. The picture that emerges is almost always the same: one or two people who have built personal workflows they haven't shared because they're not sure how sharing them will be received, and the rest of the team waiting for someone to make it official.
The intervention isn't a mandate. It's making the unofficial official. Structured sharing, peer cohorts, and a facilitator who creates space for both ends of the spectrum. That's what moves the middle.
The discovery cadence referenced here is part of how CoCreate approaches enterprise adoption work.
If your team is giving you the non-answer and you're not sure what's actually happening, let's talk.