Employees Are Already Using AI. Your Policies Need to Catch Up.

Recent litigation out of New York has surfaced a legal development that should concern every organization, regardless of size or industry. A federal judge ruled that documents created using a consumer AI platform were not protected by attorney–client privilege.

The underlying case offers a clear illustration of how easily AI use can create unintended exposure. After investigators seized a defendant’s devices, his counsel claimed privilege over several AI-generated documents. These were described as summaries the defendant created using a consumer AI tool to organize his thoughts before speaking with his attorney. However, because he prepared them on his own, without direction from counsel, and because they were created inside a commercial AI system with no confidentiality obligations, the court found they were neither private nor privileged. The judge also rejected arguments under the work product doctrine, noting that materials produced independently by a nonlawyer using an internet service are not protected simply because they were later provided to counsel.

While this ruling directly affected legal strategy documents, the implications reach far beyond law firms.

It highlights a broader reality that many organizations have yet to address: employees are using generative AI tools every day, whether or not those tools are officially sanctioned, and most companies have not updated their acceptable use policies to reflect this new landscape.

The “Shadow AI” Problem Is Already Here

Most organizations assume that if they haven’t formally rolled out AI, or if they have restricted its use, their employees simply won’t use it. In practice, the opposite is happening. Employees turn to AI tools because they genuinely help them work faster: drafting emails, refining presentations, troubleshooting problems, analyzing data, or generating content. Where restrictions exist, employees find ways around them, installing prohibited software on a personal computer at home or creating personal subscriptions to AI platforms when the company declines to provide one.

But with consumer AI tools, every piece of information pasted into a chat window is a potential disclosure. That includes:

  • Financial models
  • Sales forecasts
  • Client or customer lists
  • Proprietary workflows
  • Sensitive internal communications
  • Intellectual property
  • Early-stage product ideas

In other words, exactly the material employees regularly feed into AI to accelerate their work.
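To make that exposure concrete, here is a minimal sketch of the kind of pre-submission screening an organization might place in front of a chat window. Everything here is illustrative: the `screen_prompt` function, the `SENSITIVE_PATTERNS` table, and the patterns themselves are assumptions made for the sketch, and a real deployment would rely on dedicated data loss prevention (DLP) tooling rather than a handful of regular expressions.

```python
import re

# Illustrative patterns only; a production guardrail would use a
# proper DLP engine, not a short list of regexes.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal marker": re.compile(r"\b(confidential|proprietary|internal only)\b", re.I),
}

def screen_prompt(text: str) -> list[str]:
    """Return the names of sensitive patterns found in a prompt.

    An empty list means nothing was flagged; a non-empty list means
    the prompt should be blocked or routed for review before it is
    sent to any external AI service.
    """
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

if __name__ == "__main__":
    draft = "Summarize the Q3 forecast. Contact jane.doe@acme.com. Internal only."
    findings = screen_prompt(draft)
    if findings:
        print("Blocked - prompt contains:", ", ".join(findings))
    else:
        print("Prompt cleared for the approved AI tool.")
```

Even a guardrail this simple shifts the default from “paste anything” to “flag first,” which is the posture the policies discussed below are meant to formalize.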

When Consumer AI Becomes a Legal Risk

If courts continue to hold that entering information into consumer AI platforms is equivalent to sharing it with a third party, organizations may face serious downstream consequences:

  • Breach of confidentiality or NDAs
  • Loss of trade secret protections
  • Exposure of proprietary or regulated data
  • Compromised compliance with industry standards
  • Permanent imprinting of sensitive data in third-party systems

Researchers have found that removing data once it has entered an AI system is extremely difficult, if not practically impossible. Even if providers state that they do not train their models on user inputs, metadata and logs may still persist well beyond a user’s expectation of deletion.

The Missing Pieces: Updated Acceptable Use Policies and Training

Despite the obvious risks, many companies have not revisited their acceptable use policies, information governance guidelines, or employee handbooks to incorporate generative AI. As a result, employees lack clear direction on what is appropriate and what is not.

Without guidance, the default behavior is experimentation. And experimentation with unapproved tools is exactly what exposes organizations to the kinds of risks highlighted in the recent court decision.

Updating acceptable use policies is not simply a compliance exercise; it is a practical necessity. Organizations must clearly define:

  • Which AI tools are approved
  • What types of data may (and may not) be input
  • Required privacy, security, and confidentiality safeguards
  • Processes for evaluating new AI tools
  • Expectations for transparency and documentation
  • Differences between consumer-grade and enterprise-grade platforms

This clarity reduces risk, enables safer adoption, and builds the foundation for responsible innovation.
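One way to give those definitions teeth is to encode them in machine-checkable form. The sketch below assumes a simple tiered data classification and two hypothetical tools (“enterprise-assistant” and “consumer-chatbot”); it illustrates the policy-as-code structure, not any particular vendor’s API.

```python
from enum import Enum

class DataClass(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    RESTRICTED = 4

# Hypothetical policy table: each approved tool is mapped to the most
# sensitive data classification it may receive. Tool names and tiers
# are placeholders for whatever an organization actually sanctions.
APPROVED_TOOLS = {
    "enterprise-assistant": DataClass.CONFIDENTIAL,  # contractual safeguards in place
    "consumer-chatbot": DataClass.PUBLIC,            # no confidentiality obligations
}

def is_use_permitted(tool: str, data: DataClass) -> bool:
    """Allow a tool/data pairing only if the tool is approved and the
    data does not exceed the tool's permitted classification."""
    ceiling = APPROVED_TOOLS.get(tool)
    return ceiling is not None and data.value <= ceiling.value

# Pasting a client list (CONFIDENTIAL) into a consumer tool fails;
# an unapproved tool fails even for public data.
assert is_use_permitted("enterprise-assistant", DataClass.CONFIDENTIAL)
assert not is_use_permitted("consumer-chatbot", DataClass.CONFIDENTIAL)
assert not is_use_permitted("unapproved-tool", DataClass.PUBLIC)
```

The design choice that matters is the ceiling: every approved tool carries an explicit maximum data classification, so an unapproved tool, or sensitive data in a consumer-grade tool, fails closed by default.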

Employees Aren’t Waiting. Policies Can’t Either.

The most important takeaway is this: employee use of AI is not a future scenario; it is already happening. Restrictive policies alone will not prevent usage, and a lack of policy only guarantees confusion and risk.

Organizations must assume AI is already part of their operational fabric and respond accordingly. Responsible, secure, and well-defined AI usage is not just an IT or legal priority. It is a leadership priority.

The real question for every executive team today is not whether employees are using AI, but whether those employees have been given the guidance they need to use it safely.

If you haven’t reviewed or revised your acceptable use policy to account for generative AI, now is the moment. The risk isn’t hypothetical anymore.