Increasingly sophisticated deepfakes, including synthetic audio, video, images or text created using AI, continue to pose a significant threat to individuals and businesses, and increasingly to insurers assessing cyber, crime and financial lines exposures. These technologies are being used to impersonate individuals, allowing criminals to bypass digital Know Your Client verification and account login systems, and create misinformation. For the insurance sector, this evolving threat brings heightened exposure across multiple classes, along with opportunities to develop innovative cyber‑resilient frameworks, underwriting approaches and coverage models that better reflect the realities of AI‑enabled deception.
In recent years, there have been several high‑profile incidents involving the use of deepfakes to perpetrate fraud across Asia.
In Hong Kong in 2023, fraudsters used the deepfake feature of an AI face‑swap programme to open 54 fraudulent bank accounts and make up to 90 loan applications, resulting in losses of HKD 200,000. The incident highlighted how identity‑based controls can be compromised in ways that may increase exposure for crime and financial institutions policies.
In Indonesia a year later, a threat actor made 1,100 attempts to bypass a bank's digital KYC loan‑application process using a combination of AI‑powered face‑swapping and virtual‑camera tools that manipulated biometric data, contributing to losses of USD 138.5 million.
In Singapore in 2025, a multinational corporation’s CFO was persuaded to join a video call with threat actors posing as the company's CEO and other executives, who instructed him to transfer USD 499,000 from the company's bank account to accounts in Hong Kong. The fraud was detected only when a further USD 1.4 million was requested.
In China in 2023, a perpetrator used AI‑powered face‑swapping technology to impersonate a friend during a video call and request 4.3 million yuan (approximately USD 624,000). The deception became clear only when the real friend said he knew nothing of the request, underlining how deepfakes can target individuals as well as organisations.
Other countries in Asia, including Thailand and Malaysia, have experienced a marked increase in criminals using advanced deepfake technology, with the region reporting a 1,530 percent rise in deepfake incidents between 2022 and 2023. This has prompted governments and regulators across the region to review and improve their AI‑related governance frameworks, creating guidance that insurers should track closely for underwriting, claims and compliance purposes.
In September 2025, the Monetary Authority of Singapore (MAS) published the 'Cyber Risks Associated with Deepfakes' report for the financial sector. The paper identified three principal risk areas posed by deepfakes: defeating biometric authentication; enabling social‑engineering and impersonation scams; and facilitating misinformation and disinformation. These categories align closely with risks evaluated by insurers when assessing accumulation exposure, drafting policy wordings and advising insureds on resilience measures.
In December 2025, the Office of the Privacy Commissioner for Personal Data in Hong Kong published the 'Abuse of AI Deepfakes: Toolkit for Schools and Parents', highlighting the risks of deepfake technology and providing guidance on mitigation. Whilst the toolkit was aimed at schools and parents, the guidance is relevant to all organisations and individuals.
Similarly, Malaysia’s Artificial Intelligence (AI) Governance Bill, currently in the drafting stage, aims to establish a comprehensive, risk‑based governance framework for AI systems, clarifying the responsibilities and accountability of developers and deployers. Malaysia has also implemented broader initiatives to manage AI‑generated content, including measures to address deepfakes and potential labelling requirements.
China has introduced the Labelling Measures for Content Generated by Artificial Intelligence and a mandatory national standard, the Cybersecurity Technology Labelling Method for Content Generated by Artificial Intelligence (effective from 1 September 2025). These build on the Provisions on the Administration of Deep Synthesis of Internet-based Information Services (2023) and require service providers to explicitly label AI-generated content, and platforms to verify metadata, add warnings for confirmed or suspected AI content, and update metadata with platform information during distribution where necessary. In October 2025, China also passed amendments to its Cybersecurity Law, taking effect on 1 January 2026, further strengthening cybersecurity and AI governance obligations relevant to insurers monitoring client compliance.
Common themes in regional guidance include storing personal data on secure platforms with restricted access and multi‑factor authentication; encrypting biometric data and/or using certificate pinning; deploying facial biometric authentication systems capable of detecting manipulated videos through motion, texture and behavioural analysis, and thermal imaging; using cancellable biometrics to prevent data compromise and reuse; watermarking and fingerprinting documents to detect tampering; and using real‑time injection‑detection software to identify suspicious patterns during verification attempts.
Businesses are also encouraged to conduct regular simulation exercises to help employees identify deepfakes; establish incident‑response procedures, including a deepfake incident‑response plan with clear communication channels for notifying customers of incidents; run educational sessions and awareness campaigns; implement multi‑step authorisation for financial transactions; and stay informed of regulatory developments.
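As a minimal illustration of the document‑fingerprinting control mentioned above, the sketch below (a hypothetical example for illustration only, not drawn from any regulator's guidance) records a SHA‑256 hash when a document is issued and flags any later mismatch as possible tampering:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 fingerprint of a document's raw bytes."""
    return hashlib.sha256(data).hexdigest()

def is_tampered(data: bytes, recorded_fingerprint: str) -> bool:
    """Compare a document against the fingerprint recorded at issuance."""
    return fingerprint(data) != recorded_fingerprint

# Hypothetical document content, used only to demonstrate the check
original = b"Policy schedule v1"
recorded = fingerprint(original)

print(is_tampered(original, recorded))                # False: unchanged
print(is_tampered(b"Policy schedule v2", recorded))   # True: altered
```

In practice the recorded fingerprint would be stored separately from the document itself (for example in a secure register), since a hash held alongside a file can be replaced together with it.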
The consequences of deepfakes can be severe for individuals, businesses and insurers, spanning financial loss, operational disruption, regulatory exposure and reputational harm.
This article was originally published in Insurance Day.
