
Your AI Chatbot Is Not Your Lawyer: What United States v. Heppner Means for Your Business

Robert Brake
April 7, 2026

Key Takeaways

  • In United States v. Heppner (S.D.N.Y. Feb. 17, 2026), a federal judge ruled that a criminal defendant's conversations with the AI platform Claude were not protected by attorney-client privilege or the work product doctrine.
  • The court found three reasons: Claude is not an attorney, the communications were shared with a third-party platform whose privacy policy permits disclosure to the government, and the documents were not prepared at the direction of counsel.
  • Sharing sensitive legal information with a public AI tool — ChatGPT, Claude, Gemini, Copilot — can constitute a waiver of privilege, making those conversations discoverable by opposing counsel or federal investigators.
  • Enterprise AI tools with contractual confidentiality terms present a different risk profile than free consumer tools, but no court has definitively ruled that their use preserves privilege either.
  • The safest rule: if the conversation would be protected when you have it with your attorney, have it with your attorney — not a chatbot.

What Happened in United States v. Heppner?

On October 28, 2025, a federal grand jury in the Southern District of New York indicted Bradley Heppner on fraud and false-statement charges arising from an alleged scheme that defrauded investors of more than $150 million. When FBI agents executed a search warrant at his home, they seized his electronic devices. Among the files they found were approximately thirty-one documents — written exchanges between Heppner and Claude, the generative AI assistant developed by Anthropic.

According to Heppner's counsel, after Heppner received a grand jury subpoena and understood he was the target of a federal investigation, he used Claude to prepare for his legal defense. Without being directed by his attorney to do so, he input information he had learned from counsel into Claude, generated reports outlining potential defense strategies, and then shared Claude's outputs with his lawyers. Those outputs, his counsel stated, influenced the legal strategy going forward. Heppner argued that the AI documents should be protected by attorney-client privilege and the work product doctrine. The government disagreed.

On February 6, 2026, the government moved for a ruling that the documents were not privileged. After oral argument on February 10, Judge Jed Rakoff of the Southern District of New York granted the government's motion. On February 17, 2026, he issued his written opinion — the first federal ruling of its kind in the country, addressing what he called "a question of first impression nationwide."

Three Reasons the Court Said No

Judge Rakoff's analysis rested on the settled elements of attorney-client privilege. To be protected, a communication must be: (1) between a client and an attorney; (2) intended to be and kept confidential; and (3) made for the purpose of obtaining legal advice. The court found that Heppner's AI documents failed at least two of these three requirements, and possibly all three.

Reason 1: Claude Is Not an Attorney

"Because Claude is not an attorney," Judge Rakoff wrote, "that alone disposes of Heppner's claim of privilege." The attorney-client privilege protects communications between a client and a licensed attorney. An AI platform, however sophisticated, is not a licensed attorney and cannot provide legal advice in the protected sense. The court acknowledged the Kovel doctrine — which allows privilege to extend to non-attorney agents acting under an attorney's direction, such as accountants or interpreters — but found it inapplicable here because Heppner used Claude entirely on his own initiative, without any direction from counsel.

Reason 2: No Reasonable Expectation of Confidentiality

Even if the first element could somehow be satisfied, the court found that Heppner had no reasonable expectation that his Claude conversations would remain confidential. When you use a public AI platform, you are not communicating in a sealed room with your attorney. You are sharing information with a corporation — Anthropic, in this case — whose privacy policy explicitly states that it collects both user inputs and Claude's outputs, uses that data to train its models, and reserves the right to disclose that data to third parties, including governmental regulatory authorities. By agreeing to those terms and using the platform, Heppner voluntarily shared his information with a third party. That voluntary disclosure destroyed any claim of confidentiality.

Reason 3: Not Prepared for the Purpose of Obtaining Legal Advice

The court also found that the AI documents were not prepared "for the purpose of obtaining legal advice" in the required sense. Heppner's counsel argued that he used Claude for the "express purpose of talking to counsel" — meaning he intended to share the outputs with his lawyers. But the court reframed the question: the issue was whether Heppner intended to obtain legal advice from Claude. When the government asked Claude during the proceedings whether it could provide legal advice, Claude responded that it could not. An AI that cannot provide legal advice cannot be the vehicle through which legal advice is sought. The fact that Heppner later shared the outputs with his actual attorneys did not retroactively create privilege.

The Privacy Policy Problem: What AI Platforms Actually Do with Your Data

The confidentiality finding in Heppner is the most practically important part of the ruling for anyone who uses AI tools at work. The court's reasoning did not depend on any unusual facts about Heppner's situation. It depended on the standard privacy policies that govern every major public AI platform.

| Platform | Data Collection | Training Use | Government Disclosure |
| --- | --- | --- | --- |
| Claude (Anthropic) | Inputs and outputs collected | Used to train models (opt-out available) | Disclosed to government authorities per policy |
| ChatGPT (OpenAI) | Conversations stored | Used to improve models (opt-out available) | Disclosed in response to legal process |
| Gemini (Google) | Conversations reviewed by human reviewers | Used to improve products | Subject to Google's standard legal process policy |
| Copilot (Microsoft, consumer) | Prompts and responses collected | Used to improve Microsoft products | Subject to Microsoft's standard legal process policy |

The common thread across all of these platforms is that your inputs are not private in the way a conversation with your attorney is private. You are sharing information with a company. That company has its own legal obligations, its own data retention policies, and its own terms under which it will respond to government requests. The Heppner court made clear that using a platform under these terms eliminates any reasonable expectation of confidentiality — the same expectation that is the foundation of attorney-client privilege.

Consumer AI vs. Enterprise AI: Does the Tool Matter?

Judge Rakoff's ruling was carefully limited to the facts before him: a public, non-enterprise AI platform used without attorney direction. The opinion explicitly did not address whether enterprise-grade AI tools with stronger contractual confidentiality protections should be treated differently.

This distinction matters in practice. Microsoft Copilot for Microsoft 365, for example, operates under Microsoft's enterprise data processing agreement, which includes contractual commitments that customer data will not be used to train models and will be handled under stricter confidentiality terms. Similar enterprise agreements exist for Claude for Enterprise and ChatGPT Enterprise. These platforms present a materially different risk profile than the free consumer versions.

However, no court has yet ruled that enterprise AI use is privileged. The Heppner court's silence on the question is not an endorsement. Until courts address this directly, the safest assumption is that even enterprise AI tools do not create attorney-client privilege on their own — because the first element of privilege (communication between a client and an attorney) still cannot be satisfied by any AI tool, regardless of its data handling practices.

What This Means for Small Businesses and Individuals

Most small business owners and individuals are not facing federal fraud charges. But the Heppner ruling has implications that extend well beyond criminal defense. The same privilege principles apply in civil litigation, regulatory investigations, employment disputes, contract negotiations, and tax matters. Any time you use a public AI tool to think through a legal problem, draft a document related to a dispute, or explore a legal strategy, you are potentially creating discoverable evidence.

Consider some common scenarios. An employee uses ChatGPT to draft a response to a cease-and-desist letter before consulting an attorney. A business owner uses Claude to research whether a competitor's actions might constitute tortious interference. A manager uses Copilot to draft talking points for a difficult termination meeting. In each case, if litigation follows, opposing counsel may be able to obtain those AI conversations through discovery. The conversations are not protected. They are evidence.

The practical implication is not that you should stop using AI tools — they are genuinely useful for research, drafting, and analysis. The implication is that you should be thoughtful about what you put into them. Sensitive facts about a legal dispute, details about your legal strategy, admissions that could be used against you in litigation — these belong in a conversation with your attorney, not in a prompt to a chatbot.

What Is Still Unsettled: The Gilbarco Counterpoint

The legal landscape here is genuinely unsettled, and it is worth acknowledging the counterpoint. On the same day that Judge Rakoff issued his oral ruling in Heppner, a federal court in the Eastern District of Michigan reached a different conclusion in Gilbarco, Inc. v. Ewing. In that civil case, the court held that a pro se plaintiff's ChatGPT-generated materials were protected by the work product doctrine, reasoning that "ChatGPT (and other generative AI programs) are tools, not persons" and that they represent "a litigant's internal mental impressions reformatted through software."

The two cases are distinguishable — Gilbarco involved a pro se litigant (someone representing themselves without an attorney) and focused on work product rather than attorney-client privilege — but the split illustrates that courts are still working out how to apply century-old privilege doctrines to technology that did not exist when they were developed. The Harvard Law Review's analysis of Heppner noted that the court's reasoning "veers toward categorically excluding a client's use of generative AI from attorney-client privilege" and that a more fact-intensive analysis might reach different conclusions in some circumstances.

What this means for you: do not assume that a future court will protect your AI conversations. The law is moving, but it is not moving in a direction that makes AI conversations safer to have. Until there is clear, settled law on this question, treat your AI conversations as discoverable.

Practical Rules for Using AI Without Destroying Privilege

The following guidelines reflect the practical lessons that emerge from Heppner and the broader body of legal commentary on AI and privilege.

| Scenario | Risk Level | Recommended Approach |
| --- | --- | --- |
| Researching general legal concepts | Low | AI is fine for general research. Do not include specific facts about your situation. |
| Drafting documents related to a dispute | Medium | Do this with your attorney, or at minimum have your attorney review and direct the process before using AI. |
| Discussing legal strategy or defense theories | High | Do not do this with a public AI tool under any circumstances. Use your attorney. |
| Inputting facts about an ongoing investigation | High | Do not do this. Those facts are now potentially discoverable and could be disclosed to investigators. |
| AI used at attorney's direction, on enterprise platform | Lower (not zero) | Stronger case for protection, but still not settled law. Document that counsel directed the use. |

The single most important rule is also the simplest: if the conversation would be protected when you have it with your attorney, have it with your attorney. Attorney-client privilege exists precisely because the legal system recognizes that people need to be able to speak candidly with their lawyers without fear that those conversations will be used against them. That protection does not transfer to a chatbot, no matter how sophisticated the chatbot is.

The Bottom Line

The Heppner ruling is a first-of-its-kind decision, and it will not be the last. Courts across the country are beginning to grapple with how privilege doctrine applies to AI tools, and the outcomes are not uniform. What is uniform is the underlying principle: privilege depends on confidentiality, and confidentiality depends on who you are sharing information with. When you share information with a public AI platform, you are sharing it with a corporation, under that corporation's terms, subject to that corporation's legal obligations. That is not a confidential communication. It is a record.

For small businesses in Westchester County, the practical takeaway is straightforward. Use AI tools for what they are genuinely good at — research, drafting, summarizing, brainstorming. But keep sensitive legal matters, dispute-related facts, and anything you would not want a federal judge to read out loud in a courtroom out of your AI conversations. Those conversations belong with your attorney.

If you have questions about how your business's technology practices — including your use of AI tools — might create legal or cybersecurity exposure, a cybersecurity and IT assessment can help identify where your data is going and who has access to it. The Heppner case is a reminder that the line between a productivity tool and a liability is thinner than most people realize.

Written by Robert Brake

Robert Brake is a computer technician and IT consultant with over 30 years of experience, currently serving small businesses and home users in Westchester County, NY. He is the founder of Metro North Computer Consulting.
