British Christmas adverts are more than marketing - they’re a cultural phenomenon. This year, a clear trend has emerged: some brands are embracing new technology and using AI to create festive campaigns, while those producing ‘homegrown’ ads are being praised for their authenticity. Some companies in the latter category even seem to have gone out of their way to capitalise on this debate and highlight their human touch, using traditional methods to make it unmistakably clear that their content is not AI-generated (despite likely using AI or similar technology elsewhere within their organisation).
This reaction underscores a wider concern: as AI becomes more embedded in creative processes, how do organisations and content creators maintain audience trust and brand authenticity whilst utilising new technologies? One possible answer lies in transparency: labelling AI-generated content as such.
Present position in the UK
Currently, there is no requirement under English law or regulation to label AI-generated content. However, calls for transparency are growing and some platforms are responding. Content creators on some social media apps are encouraged to apply a label identifying a video or photo as AI-generated where it has been wholly generated or significantly altered by AI. Whilst some platforms even claim to apply such labels automatically, automated detection is not robust, and labelling still depends largely on the honesty of the creator. In short, there is no legal requirement, so some simply aren’t doing it.
EU AI Act
Under the EU AI Act, certain AI-generated content (such as deepfakes and synthetic media that could mislead users) must be clearly labelled. This isn’t just theory: on 5 November 2025, the European Commission kicked off work on a Code of Practice for marking and labelling AI-generated content. While voluntary, the code is designed to help providers of generative AI systems meet their transparency obligations in a practical way.
This is a signal for organisations using AI: even if you’re not directly caught by the AI Act, aligning with these standards now is smart risk management and an important proactive step.
It's not just a legal issue
Waiting for a legal requirement could be costly. Here’s why proactive transparency matters:
- Reputational risk: Public attitudes towards AI are mixed. Clear disclosure, particularly during a culturally sensitive time such as Christmas, helps audiences understand creative choices and strengthens trust without detracting from the content’s message.
- Risk of errors: AI-generated content can occasionally include subtle mistakes that, if unnoticed, may go viral for the wrong reasons - potentially attaching negative connotations to the brand.
- Content control challenges: Once shared, content spreads fast (especially on fast-paced social media platforms). Removing or correcting it later is often difficult due to reposts, screenshots, and stitched videos. Even when possible, retractions and apologies can amplify reputational harm.
In short, labelling isn’t just about compliance - it’s about protecting credibility and staying ahead of the curve.
Things to consider
Before hitting “post” on AI-generated content, ask:
- Who is your audience? Some embrace innovation, whilst others celebrate authenticity. Read the room and post accordingly.
- Is the output error-free? AI can produce subtle mistakes that erode trust.
- What’s the content lifecycle? Once it’s out, it’s hard to pull back without fallout.
- How will you show transparency? A simple disclaimer or watermark can go a long way in building trust.
Even if you’re outside the EU’s regulatory net, adopting these practices signals responsibility and consideration. In a world where spotting an AI-generated Rudolph is harder than spotting the real one, transparency is always on-brand.
