
The boundaries of privilege in human-machine conversations


By Jamie Tomlinson & Alistair Cooper


Published 13 March 2026

Overview

As generative artificial intelligence ("GenAI") tools gradually become ubiquitous in our professional and personal lives, parties to legal proceedings are increasingly asking GenAI chatbots for assistance with their cases: Is there case law to support my position? Are we allowed to do X? Draft my submission for Y.

We have previously written about the dangers of lawyers relying on GenAI output, which is liable to fabricate or misanalyse information (here).1 A concern overlooked until recently is legal professional privilege: can communications with GenAI chatbots be withheld from production in a disclosure context? The default position is that a party to litigation must, as part of disclosure, produce all materials relevant to its case, except those protected by privilege. Recent rulings shed some light on the current position, and ought to give litigants pause for thought before relying too heavily on GenAI for legal counsel or to build a case.

 

Privilege

A brief reminder of the applicable legal tests is a necessary starting point. The two main forms of legal professional privilege under English law are:

  1. Legal advice privilege ("LAP"), which protects confidential communications between a client and their lawyer, created for the dominant purpose of giving or receiving legal advice.
  2. Litigation privilege ("LP"), which offers broader protection, applying to confidential communications between a lawyer and their client, or between either of them and a third party, for the dominant purpose of litigation which either exists or is in reasonable contemplation.

Both strands represent a fundamental tenet of a functioning legal system: clients are entitled to professional advice and private counsel, whatever the issues involved in that advice, without fear of their position being prejudiced in future. Are the rules which protect this right fit for purpose in the world of large language models ("LLMs")?

In United States v Heppner [2026]2, a Southern District of New York Judge ruled that privilege rules in that jurisdiction did not protect the defendant's conversations with Anthropic's chatbot, Claude. The defendant had sought to withhold production of relevant AI-generated material by invoking "attorney-client" privilege (which does not precisely mirror the English model, but which is comparable, as explained in more detail below). The New York Court stopped short of finding that any use of GenAI tools by a litigant automatically waives privilege in AI output, but was clear that AI is not a "lawyer" for these (or any) purposes, and that use of consumer-grade, public AI tools (including free-to-use ChatGPT and Claude) presented a particular risk around loss of confidentiality.

 

An English perspective

Heppner is not legally binding on litigants in England and Wales, but does give an indication of how English Courts may approach these issues. Working through the core elements of disclosure and privilege helps to illustrate the current position:

 

What is a "document"?

Parties to litigation are under duties to preserve, search for and disclose relevant documents. Under long-standing disclosure rules, a document is defined broadly as "anything in which information of any description is recorded". Practice Direction 57AD, which governs civil disclosure in the Business and Property Courts, offers a more specific definition, and one which is better equipped to deal with the modern data landscape:

"A "document" may take any form including but not limited to paper or electronic; it may be held by computer or on portable devices such as memory sticks or mobile phones or within databases; it includes e-mail and other electronic communications such as text messages, webmail, social media and voicemail, audio or visual recordings."

Chatbots retain a record of all historic conversations with users, much like an internet browser maintains a browsing history. That record will generally be stored within the application and/or on the provider's servers; corporate accountholders may keep a central log of all employees' AI queries. Exchanges with chatbots are, undoubtedly, "documents" within the scope of civil disclosure rules.

Parties to English litigation bear a duty of "reasonable search" for potentially relevant documents. We expect to see historic AI data forming a significant part of disclosure exercises in due course. It is also worth noting that the duty to preserve documents will require parties to suspend any standard deletion policies in relation to potentially relevant AI records. That duty applies from the date on which litigation is contemplated (being the same date from which a party may be tempted to use AI to build a case strategy).

 

What is a 'lawyer'?

This issue has been the subject of much debate and judicial scrutiny over the years, particularly around the status of in-house counsel.3 Unsurprisingly, the historic debate exclusively focussed on human actors. A modern analysis requires a more open mind: could a machine be a lawyer?

Freely available GenAI tools are now perceived to be capable of certain tasks which were once carried out only by lawyers. Parties in need of legal assistance are increasingly relying on these tools in place of or as a check on human lawyers. As tempting as that approach may be, it is generally ill-advised, given the nature of the technology.

LLMs rely on vast neural networks to produce probability-based, stochastic output, the accuracy of which cannot be guaranteed without sufficient validation. Chatbots demonstrate inherent bias, hallucinate, recommend inapposite legal precedent from foreign jurisdictions (or, in some cases, out of thin air), or otherwise provide oversimplified answers to complex queries. What's more, open AI models are typically trained on huge volumes of decidedly non-legal data: much of the common law, and its respected academic commentary, is shielded from AI data-scraping by paywalls.

The technology will improve – perhaps on an as yet incomprehensible scale – but it is not yet a reliable source of legal knowledge, however convincing a response might look to the untrained eye. It follows that a chatbot does not qualify as a "lawyer" for LAP purposes. For that reason alone, LAP cannot subsist in communications with a chatbot (such that, in a litigation context, relevant exchanges would fall to be produced). Such a finding by an English Court would mirror the Judge's decision in Heppner: "Because Claude is not an attorney, that alone disposes of Heppner's claim of privilege."

 

What is 'legal advice'?

Can someone – or something – that isn't a lawyer give "legal" advice? What is the specific quality of advice which makes it "legal"? It is arguable that advice given by a lawyer is, by definition, "legal advice", to the exclusion of all else. The law of privilege insists on a more specific definition, where the nature of advice depends on its subject matter, and "dominant purpose" must be established for privilege to apply. A lawyer's advice which is, for example, predominantly commercial in nature, will not attract privilege.

English Courts are likely to approach this issue more straightforwardly: because chatbots are not lawyers, their output cannot be legal advice. We have already seen nods in this direction by English Courts, including in Ayinde (on which, see our previous article here), in which the Court found:

"Freely available generative artificial intelligence tools, trained on a large language model such as ChatGPT, are not capable of conducting reliable legal research. Such tools can produce apparently coherent and plausible responses to prompts, but those coherent and plausible responses may turn out to be entirely incorrect. The responses may make confident assertions that are simply untrue. They may cite sources that do not exist. They may purport to quote passages from a genuine source that do not appear in that source." 4

Whether AI output was created for the dominant purpose of legal advice is therefore an irrelevant consideration. Open AI models generally carry disclaimers stating that they cannot provide legal advice. Whether such a disclaimer offers any legal protection may be decided shortly: earlier this month, a US insurance company sued OpenAI for allegedly giving legal assistance to an insured, who then, emboldened by ChatGPT's convincing legalese, tried to reopen a settled claim.

 

Is a GenAI chatbot a 'third party' for the purpose of litigation privilege?

LP overlaps considerably with LAP in practice, but is broader in scope. It protects communications after the point at which a dispute is reasonably contemplated, and not just between a lawyer and a client, but between clients and third parties. Litigants are therefore able to speak frankly about the merits of their case internally, with experts, accountants, etc., without fear of that confidence being breached in the litigation. But if a client simply wants to bounce ideas off an AI chatbot, stress-test case theories or draft instructions to human lawyers, is the same protection afforded?

English Courts may, in our view, conclude that a chatbot is an appropriate "third party" for these purposes. Parties using modern technology to research or otherwise improve their legal case is not a new phenomenon. Assuming that any confidentiality in the underlying information is maintained by a client, there is no reason why an AI prompt should be treated any differently to an internet search for these purposes. While a Court is unlikely to dwell for too long on this issue, the next question will need careful consideration.

 

Are communications between a client and an AI chatbot "confidential"?

A critical component of privilege is that the relevant communication must be confidential. Disclosure of information in a way which is inconsistent with treating it as confidential will waive any privilege that may have subsisted in it. In other words, privilege cannot be extended indefinitely, and English Courts generally take a dim view of parties who seek to claim privilege over information which has been shared outside a closed network.

Some AI platforms, such as those developed in partnership with law firms, are closed systems which offer high levels of data security. Other (typically free-to-use) AI tools are "open" models, in that they are publicly accessible and capable of learning, evolution and modification. Confidential information plugged into an open model may be provided to other users in future or appear elsewhere on the internet in some transposed form. It is also used to train the open model to deliver better output in future. Users of free-to-use ChatGPT sign up to the following OpenAI terms of use:

"We may use [your] Content to provide, maintain, develop and improve our Services, comply with applicable law, enforce our terms and policies, and keep our Service safe… If you do not want us to use your Content to train our models, you can opt out…"

The Court in Heppner considered Anthropic's similar terms in finding that the defendant could have no "reasonable expectation of confidentiality in his communications" with Claude. An English Court – the Upper Tribunal of the Immigration and Asylum Chamber – recently adopted the same approach in a Hamid ruling5:

"We also observe that to put client letters and decision letters from the Home Office into an open source AI tool, such as ChatGPT, is to place this information on the internet in the public domain, and thus to breach client confidentiality and waive legal privilege..."

Confidential information uploaded to an open model therefore ceases to be confidential or privileged. That information need not have originated from the client – once-privileged advice by a human lawyer to a client would cease to be privileged if used by a client to design a prompt to an open AI model.

 

Conclusion

Any information which clients upload to an open AI model will cease to be confidential and will not be privileged. Even where confidentiality is maintained (i.e. a closed-system AI model is used), chatbots are not lawyers capable of providing legal advice, so prompts and outputs intended to replace or augment dialogue with a human lawyer will not attract LAP, either (unless the chatbot can be said to be a "third party" – as considered above – in which case LP may apply, though this is yet to be tested).

The recent rulings examined above remind us that legal professional privilege is not designed to protect all client-AI communications. Users of AI must therefore think very carefully before inputting any confidential, strategic or case-sensitive information into such a tool, or risk prejudicing themselves in current or future legal proceedings by doing so.

Unless and until the Courts act to broaden the scope of legal professional privilege to protect AI communications, opponents in civil proceedings may begin to use the current regulatory lacuna to their strategic advantage, for example by insisting that AI-generated material is included within the scope of disclosure searches. Clients should therefore be aware of the very real (i.e. not purely hypothetical) risk of sensitive material prepared for the purpose of litigation ending up in the hands of a counterparty. If ever in doubt, consult a (human) lawyer.

 

[1] For our overarching thoughts on the dangers of overreliance on AI in a legal context, see this article

[2] United States v Heppner, 1:25-cr-00503 (S.D.N.Y. Feb 17, 2026), ECF No. 27

[3] Following the House of Lords' decision in Three Rivers (No 6) [2004] QB 916, in-house counsels' communications are covered by LAP providing the other requirements are satisfied.

[4] R (on the application of Frederick Ayinde) v The London Borough of Haringey [2025] EWHC 1040

[5] UK v Secretary of State for the Home Department (AI hallucinations; supervision; Hamid) [2026] UKUT 00081 (IAC)
