At Zywave’s Cyber Risk Insights Conference 2026, Jonathan Hopkins joined a panel discussion on avoiding AI pitfalls, including a practical question that many risk managers, brokers and insurers are currently grappling with: are AI-related risks already covered under existing insurance policies?
As with many coverage questions, the answer is "it depends", and that answer reflects the current state of the insurance landscape.
Where AI‑related risks are covered in today's wordings, it is typically through silence rather than by conscious design. Many insurance policies, including cyber policies, pre-date this emerging risk and do not explicitly reference artificial intelligence. Any cover that exists is usually implicit, arising because AI‑related losses happen to fall within established insured perils without any specific exclusion.
Insurers and insureds may find the absence of any express reference to AI risk (whether by affirmative cover, exclusion or sub-limit) uncomfortable. As the market begins to respond with more deliberate and express approaches to AI-related risks, an opportunity for innovation also arises.
How AI losses are being framed in cyber claims
From a cyber insurance perspective, AI-related risks are often capable of falling within cover if the loss can be characterised as a recognised cyber peril. Examples include:
- AI‑assisted social engineering attacks leading to a security breach;
- The poisoning or manipulation of AI models resulting in system failure;
- AI tools exposing personal data through prompt misuse or training data leakage.
In these scenarios, the presence of AI is not the insured peril itself. Instead, claims can be framed as:
- A security breach;
- A system failure; or
- A privacy incident.
Cyber policies are generally seen as the appropriate home for these established events, even if AI played a critical role in triggering the loss.
Where the gaps begin to appear
Problems could arise where AI risks create commercial harms outside of traditional "cyber" events.
A good example is AI model drift. A drifting model may gradually produce inaccurate or harmful outputs, leading to bad decisions, regulatory exposure or reputational damage, but without any breach, hack or identifiable cyber incident. In those circumstances, it may be uncertain whether cover exists under typical cyber insurance policies, and such losses may not sit naturally within traditional non-cyber policies either.
The importance of definitions
Close attention needs to be paid to policy terminology. AI is often treated as a subset of software, and software often falls within definitions relating to computer systems. These are fundamental concepts in cyber insurance.
Seemingly small drafting changes to these definitions can have significant, and sometimes unintended, coverage consequences. As AI becomes more embedded in business operations, choices around these definitions will matter more.
So, is AI risk covered?
In summary, AI-related risks may be silently covered under existing insurance, with cyber insurance policies the most likely to respond. However, cover typically responds indirectly, inconsistently and with a level of uncertainty that both policyholders and insurers may wish to address through explicit policy wording.
As AI adoption accelerates, the challenge for the insurance market will be moving from silent cover towards clear, intentional and expressly defined responses to AI‑driven risks.
