By James Rhodes & Josh Hornsey | Published 29 April 2026
Generative AI ("genAI") is increasingly used at work, but employers should be cautious about its role in disciplinary decision‑making. In misconduct dismissals, reliance on genAI to assess guilt risks undermining the fairness of the dismissal.
The use of genAI tools such as Microsoft Copilot and ChatGPT is becoming commonplace across workplaces, including within HR and management teams. These tools can be attractive where employers are dealing with large volumes of material or increasingly lengthy employee submissions in defence of accusations of misconduct. However, when it comes to misconduct dismissals, the law continues to require decisions to be taken by people. Employers who allow genAI to influence the assessment of guilt risk falling foul of well‑established unfair dismissal principles, not least because the inputs and outputs of genAI are likely to be disclosable in tribunal proceedings.
The legal position
Disclosure
Employment tribunals adopt a very broad definition of a document, covering anything in which information is recorded as long as it is relevant to the issues in dispute. This is likely to include prompts given to a genAI tool, documents uploaded to it and genAI’s output. In an unfair dismissal claim, this material may be directly relevant to whether the employer genuinely believed the employee was guilty of misconduct.
Claimants are therefore increasingly likely to seek disclosure of AI‑related material. A tribunal may then compare the prompt, the AI’s response and the evidence relied on by the employer. Any inconsistencies may be used to challenge the employer’s credibility or the robustness of its decision‑making.
Unfair dismissal for misconduct
Employment tribunals apply the Burchell test where a claimant argues that their dismissal for misconduct was unfair. Two of the requirements for showing that such a dismissal was fair are that the employer genuinely believed the employee was guilty of the misconduct alleged, and that this belief was held on reasonable grounds.
This is where the use of genAI creates risk. The tools operate by responding to prompts using the material provided to them: on the basis of that input, they calculate the most probable next word in a sequence, and repeat this iteratively until a complete output is produced. In other words, they are statistical next‑word generators. They are effective at summarising, re‑ordering and reproducing information quickly, but they cannot form a belief, nor can they replace the reasoning process the law requires of a decision‑maker.
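To make that mechanism concrete, the sketch below reduces the "statistical next‑word generator" to a toy Python example. The words and probabilities are invented purely for illustration, and real genAI tools use vast neural models rather than a hand‑coded table, but the loop is the same in principle: pick the most probable next word, append it, and repeat until an end marker is reached. Nothing in the loop weighs evidence or forms a belief.

```python
# Toy illustration of iterative next-word generation.
# The words and probabilities below are invented for illustration only.
BIGRAM_PROBS = {
    "the":       {"employee": 0.6, "evidence": 0.4},
    "employee":  {"was": 0.7, "denied": 0.3},
    "evidence":  {"was": 1.0},
    "was":       {"dismissed": 0.8, "<end>": 0.2},
    "denied":    {"<end>": 1.0},
    "dismissed": {"<end>": 1.0},
}

def generate(start: str, max_words: int = 10) -> str:
    """Greedily append the most probable next word until <end>."""
    words = [start]
    for _ in range(max_words):
        candidates = BIGRAM_PROBS.get(words[-1])
        if not candidates:
            break
        # Choose the statistically most likely continuation -- no
        # reasoning, no assessment of evidence, just probability.
        next_word = max(candidates, key=candidates.get)
        if next_word == "<end>":
            break
        words.append(next_word)
    return " ".join(words)

print(generate("the"))  # -> "the employee was dismissed"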
If the decision‑maker uses genAI to assess whether misconduct is proven, or appears to rely on genAI output when reaching their conclusion, a tribunal may question whether the belief in guilt was genuinely held at all. Even where the decision‑maker says that they agreed with genAI’s conclusion, the tribunal will focus on what actually informed the decision at the time. Similarly, if a belief is based on an AI‑generated summary that omits context, misstates evidence or introduces unsupported assumptions, the employer may struggle to show that the belief was held on reasonable grounds. The risk is analogous to cases where HR has been found to have improperly influenced a disciplinary decision.
Using genAI responsibly
This does not mean that genAI has no place in disciplinary processes. The key is ensuring that genAI does not stray into decision‑making. The decision‑maker must personally review the evidence, reach their own conclusions on the balance of probabilities and decide which allegations are upheld and why. Only once that decision has been taken should genAI be used, and then only to assist with drafting outcome correspondence.
Where genAI is used to draft an outcome letter, the prompt should clearly reflect the decision already reached and explain the reasoning by reference to the evidence. For example, the prompt might record that allegation one is upheld because the contemporaneous records contradict the employee's account, and ask the tool to draft a letter communicating that decision. The decision‑maker must then carefully review and amend the output. Decision‑makers should proceed on the basis that any prompt or genAI output may later be disclosed and scrutinised in tribunal proceedings.
Employers should also consider whether their disciplinary and AI‑use policies adequately reflect these risks. Managers should be trained on the limits of genAI in disciplinary processes and reminded that they remain personally responsible for forming the belief in misconduct. GenAI should be treated as a drafting and efficiency tool, not as a substitute for judgement, and all AI‑generated material should be approached on the assumption that it may be disclosable.