What we’ve learned from the cheerleader deepfake video

Published 18 March 2021


When you hear about deepfakes, what first comes to mind? Many people imagine funny but ultimately innocent face-swapping videos depicting celebrities or politicians, shared on social media. Some may also be aware of how the technology is used as a means of image-based sexual abuse, or so-called revenge porn. But in reality, deepfakes can be used for an incredibly wide range of purposes – even as a means for a mother to get schoolgirls kicked off a cheerleading team.

Reports have recently surfaced of a woman in the United States using photographs of her daughter’s classmates to create a disturbing deepfake video. Raffaela Spone allegedly depicted the girls naked, drinking, and smoking, and then shared the video with the cheerleading coach. Spone hoped the girls would face disciplinary action and be kicked off the high school cheerleading team. It is understood that she hoped the move would benefit her own daughter, who was apparently unaware of the plan.

As our solicitor Kelsey Farish explained during her interview with Claudia-Liza Armah for Channel 5 News Tonight on 16 March 2021 (available on Twitter, here), this story underscores a few key points that we should all be aware of when it comes to manipulated, AI-generated videos like deepfakes.

Four things to remember in light of this cheerleader deepfake story:

  1. Firstly, the technology to create deepfakes is increasingly easy to use, and is becoming more widespread. Several apps are available to download on iOS and Android platforms, and in a matter of seconds, fairly realistic (albeit short) clips can be generated using only one selfie as a “source image”. More sophisticated videos can be created using software that is free to download online. For those who prefer to outsource the technicalities, hundreds of freelancers create deepfake videos for as little as $5 (approx. £4) on marketplaces like Fiverr.

  2. Secondly, deepfakes can be used to target private individuals – including children. When deepfake technology was first shared publicly in 2017, much was written about the potential threats it posed to democracy and national security. As Claudia-Liza said during the interview, “When [Channel 5] first looked at this we asked, ‘what if this got into dangerous hands of evil dictators or others bent on world domination?’ We didn’t even think about pushy mothers!” As we have seen, deepfakes can be used by anyone with a motive: this could include a colleague or (ex-)partner, or even a parent seeking to improve their child’s popularity. And, as with all forms of defamation or harassment, anyone can be the victim of an unwanted deepfake, irrespective of their celebrity status or public profile.

  3. Thirdly, it would be difficult to enforce an outright ban on deepfakes in practice. It would not be in anyone’s interest to make deepfakes “illegal”, as doing so would unfairly restrict the legitimate and beneficial uses of the technology. For example, deepfakes can be used to enable Alzheimer’s patients to engage with videos depicting their younger selves and loved ones. More obviously, many deepfakes are acceptable as political satire, or are otherwise used for creative dramatic works. The problem with deepfakes is not their method of creation, but rather the specific harm that a given deepfake could cause. To that point, a careful analysis of each video is required before requests for removal or lawsuits against the creator are justified.

  4. Finally, the current legal and practical implications of removing unwanted deepfakes are complicated. Although certain social media platforms like Facebook and Reddit have officially banned deepfakes, detecting them in the first instance is difficult. As such, a deepfake may be online for hours or even days before it is removed. It is also important to remember that just because a deepfake is offensive does not necessarily mean it is actionable as a criminal or civil offence. For example, a parody deepfake of a politician may be crude or distasteful, but the creator’s right to freedom of expression may still be protected. If that deepfake of a politician was disseminated as a means to manipulate an electoral campaign, on the other hand, it’s a different story…

DAC Beachcroft’s dedicated technology and media law team handles the full range of matters and cases related to digital media and artificial intelligence. The team, led by Tim Ryan, includes Kelsey Farish, one of Europe’s leading legal experts on deepfakes and synthetic media.