How Do You Use AI Without Putting Client Data at Risk?

Last updated: May 14, 2026

The concern is legitimate and the answer is practical. The risk isn't using AI. The risk is using the wrong type of AI access for the sensitivity of the data involved. Consumer AI tools — ChatGPT, Claude.ai, Gemini — process inputs in ways that may contribute to model training. Enterprise agreements and API access operate under different terms with different data handling commitments. Understanding which tier applies to which type of work is the skill that makes this manageable.

Why Is This Harder Than It Looks?

Most employees in regulated industries arrive at the same practical question: "I want to use AI to analyze this data, but I'm not sure what I'm allowed to input." The honest answer is that the line isn't as clear as most compliance guidance implies, and the actual risk profile of different approaches varies significantly.

There are two separate concerns that often get conflated. The first is data privacy: who can see your inputs, and are those inputs used to train future AI models? The second is data security: can your inputs be accessed by unauthorized parties? Consumer tools raise the first concern; enterprise tools with proper agreements address both.

The confusion is compounded by how fast the landscape moves. Data handling policies change. Enterprise agreements evolve. The AI tool a team decided was compliant six months ago may have updated its terms. Without a clear internal policy and someone who owns it, employees default to either avoiding AI entirely or using it without thinking about the question at all.

What Actually Works

Understand the three tiers of AI access.

Tier 1 — Consumer tools (free ChatGPT, free Claude.ai plans, consumer Gemini): Inputs may be reviewed by the provider and may be used in model training. Do not input client names, deal terms, proprietary financial data, personally identifying information, or anything covered by your NDA or data handling agreements. These tools are appropriate for general research, drafting work using hypothetical scenarios, and learning.

Tier 2 — Enterprise subscriptions (ChatGPT Enterprise, Claude for Enterprise, Microsoft 365 Copilot, Google Workspace AI features): These operate under enterprise data agreements that explicitly prohibit using customer data for model training. Your inputs are not used to improve the model. This is the appropriate tier for work involving internal data, client names, and business-sensitive information — provided you've read and understood the specific terms for your agreement.

Tier 3 — API access with your own data handling: Organizations that use AI through direct API access can control the full data pipeline. Inputs go to the model and nowhere else. This requires technical setup but provides the strongest data isolation. This is the approach for regulated industries processing genuinely sensitive data.
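As a rough illustration of what Tier 3 looks like in practice, the sketch below calls a model through a provider's API directly from your own environment, using the OpenAI Python SDK as one example. The model name, the environment-variable setup, and the summarization prompt are illustrative assumptions; confirm your own provider's API data-handling terms before treating this as a compliance boundary.

```python
# Minimal Tier 3 sketch: direct API access from your own environment.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# environment variable; the model name and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads the API key from OPENAI_API_KEY

def summarize(document_text: str) -> str:
    """Send only the text you choose to send; the rest of the pipeline stays in your control."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system", "content": "Summarize this document for an internal audience."},
            {"role": "user", "content": document_text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarize("Quarterly pipeline review: ..."))
```

The point isn't the specific SDK; it's that you decide exactly what leaves your environment, and the API's data-handling terms, rather than a consumer product's terms, govern what happens to it afterward.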

For Microsoft 365 Copilot specifically: If your organization has an M365 Copilot deployment, data stays within your Microsoft tenant. Copilot can access your emails, documents, Teams conversations, and SharePoint with the permissions of the signed-in user. It does not send that data to OpenAI for training. The meaningful risk is internal: Copilot surfaces data based on existing permissions, so if your permissions architecture is misconfigured, Copilot may surface documents users shouldn't see. This is an access control problem, not an AI problem — but it surfaces faster with Copilot.

The practical working rule: If you wouldn't email it to a general contractor who has signed a standard NDA, don't put it in a consumer AI tool. If you're using an enterprise agreement, check the specific terms for your subscription. When in doubt, anonymize: replace client names with "Company A," replace specific figures with representative ranges, describe the situation without the identifying details. You can often get useful AI output without inputting the sensitive data directly.
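Where anonymization is the right move, it can be done mechanically before anything reaches the tool. The helper below is a hypothetical sketch: the alias mapping, the dollar-figure pattern, and the example sentence are invented for illustration, and a real version would be driven by your own client list and data classes.

```python
import re

# Hypothetical alias map — a real one would come from your own client records.
ALIASES = {
    "Acme Capital": "Company A",
    "Jane Smith": "Person 1",
}

def anonymize(text: str) -> str:
    """Swap known identifiers for neutral placeholders and coarsen exact dollar
    figures before the text is pasted into any external AI tool."""
    for name, alias in ALIASES.items():
        text = text.replace(name, alias)

    def bucket(match: re.Match) -> str:
        # Replace an exact figure like "$4,250,000" with its order of magnitude.
        digits = len(match.group(1).replace(",", "").split(".")[0])
        return f"a {digits}-figure amount"

    return re.sub(r"\$([\d,]+(?:\.\d+)?)", bucket, text)

print(anonymize("Acme Capital is raising $4,250,000 led by Jane Smith."))
# -> "Company A is raising a 7-figure amount led by Person 1."
```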

The Thing People Miss

Most organizations don't have a data classification policy that maps to AI tools, which means employees are making individual judgment calls that vary widely across the team. Someone in compliance is being extremely cautious. Someone in sales is putting client data into a free ChatGPT account.

The solution isn't to clamp down. It's to establish a clear, practical policy that gives employees a decision framework. Three tiers, clear examples of what belongs where, and a named person to ask when it's ambiguous. That policy needs to be one page, not a legal document. The goal is to enable confident AI use, not to create another compliance maze that everyone works around.
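One way to keep that policy practical is to express the framework as data a team can look up, or wire into internal tooling. The sketch below is hypothetical: the category names, examples, and tier assignments are placeholders rather than recommendations, and your own policy owner would set them.

```python
# Hypothetical one-page policy expressed as data; every category, example,
# and tier assignment here is a placeholder to be set by your policy owner.
POLICY = {
    "public": {
        "examples": ["published research", "hypothetical scenarios"],
        "allowed_tiers": ["consumer", "enterprise", "api"],
    },
    "internal": {
        "examples": ["process docs", "anonymized deal summaries"],
        "allowed_tiers": ["enterprise", "api"],
    },
    "client-identifying": {
        "examples": ["client names", "deal terms", "PII"],
        "allowed_tiers": ["api"],  # or enterprise, depending on your agreements
    },
}

def allowed(data_class: str, tier: str) -> bool:
    """True if the policy permits putting this class of data into this tier of tool."""
    return tier in POLICY.get(data_class, {}).get("allowed_tiers", [])

assert allowed("internal", "enterprise")
assert not allowed("client-identifying", "consumer")
```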

What This Looks Like in Practice

This is exactly the constraint CoCreate navigated with a major venture banking firm using M365 Copilot. The team's AI work was constrained to tools that stayed within the Microsoft data boundary. That constraint shaped which workflows were appropriate for AI deployment and which weren't.

The frameworks built for that team — which data types go where, which tool tiers are appropriate for which task categories — are applicable across financial services and any regulated industry navigating the same question. The answer is almost always: more is possible than people assume, but it requires knowing the specific rules for your specific tools.

Navigating vendor boundaries and workflow constraints with compliance in mind is work CoCreate handles as part of its consulting services.

If your team is stuck because of data concerns and you need to figure out what's actually possible, let's talk.