Think Before You Chat: AI Conversations May Be Discoverable

As artificial intelligence (AI) technology becomes more advanced and accessible, employers continue to struggle to adopt the beneficial aspects of the technology while mitigating the risks of improper use. While navigating these challenges, it is critical to understand that using AI for legal issues requires an extra degree of caution. When legal questions or relevant facts are shared with AI, that information may not be protected by the attorney-client privilege, which ordinarily shields information provided to or received from an attorney when obtaining legal advice. Asking an AI bot for legal advice is not the same as asking an attorney; in fact, it’s the same as asking your (non-attorney) neighbor. And once the privilege is gone, those communications may become available to the opposing side in any litigation.

That is exactly what happened in a criminal case in New York, U.S. v. Heppner. There, the defendant was facing fraud charges arising from his actions as a corporate executive. After meeting with his attorney, the defendant used Claude, a generative AI service owned by Anthropic, to generate a defense strategy that he later shared with his attorney. The resulting documents were discovered by authorities in a search of the defendant’s house. While defense counsel asserted that the documents were protected by the attorney-client privilege and the work product doctrine, the court strongly disagreed.

In a matter of first impression, the trial judge determined that the attorney-client privilege did not protect information put into or generated by AI. To be privileged, the communication must be between a client and their attorney, be kept confidential, and occur in order to obtain legal advice. When a client shares information about a legal matter with a non-lawyer, like an AI company, the privilege does not apply. Even when the privilege applies, such as to communications with an attorney, sharing the privileged information with AI has the same result as sharing it with a neighbor; the privilege is waived. 

Employers can avoid this outcome by instructing employees not to use AI to obtain legal advice or second opinions. Employers should consider developing written AI policies that define appropriate AI use, including which applications are permitted, when, and for what type of work. It may be wise to require advance supervisor or client approval. Employee education on AI use should cover the risks of bias, hallucinations, and intellectual property infringement appearing in AI responses, as well as data protection policies.

AI’s presence in the workplace will continue to grow. It is becoming increasingly important for employers and employees to know which conversations should be reserved for human ears only. When legal concerns arise, resist the urge to test out new tech; instead, contact an attorney. The attorneys at Nemeth Bonnette Brouwer PC are available to answer questions regarding AI policies or any other labor and employment-related questions you might have. 
