Self-policing of social media sites - a dream or a reality?

By DAC Beachcroft


Published 25 April 2017


In the instant world of social media, the hard-earned reputations of brands and individuals can be destroyed in minutes by a wholly fictitious post, a sustained campaign of online harassment, or, as in the recent case of Jack Monroe v Katie Hopkins [2017] EWHC 43 (QB), a mistake.

Social media platforms are coming under increasing criticism for failing to take adequate measures to police their own sites by preventing harassment and bullying and removing defamatory content. In our experience, it often takes a letter from a law firm before a social media platform takes the problem seriously and considers removing the offending content.

In February, Twitter announced a number of changes intended to reduce the amount of abuse visible on its platform. These include the permanent suspension of accounts used by abusers and preventing repeat offenders from creating new accounts. In practice, however, it is not clear how Twitter will prevent abusers from creating a new account using a separate IP address.

Furthermore, rather than censoring abusers, it appears the changes will have the effect of hiding the abuse from users.

Other changes of note are:

  • Creating a "safe search" feature which allows a user to remove tweets that contain potentially sensitive content and those from blocked or muted accounts so they are no longer visible to the user;
  • Collapsing abusive conversation chains so they are only visible to those users who actively seek them out.

It is important to note that offensive tweets will not be removed from the platform; rather, a user may "opt out" of seeing the tweet in question, along with similar tweets, through their profile settings. This means the offending material will still be visible to the public at large and that the risk of serious and irreparable damage to reputation remains. Moreover, restricting the tweets which appear on a user's feed through the use of search terms could exacerbate the problem: if individuals are not aware of a tweet, they are unable to take appropriate legal action before the tweet goes viral.

In its recent judgment in the Katie Hopkins defamation litigation, the High Court made it clear that the removal of a tweet does not necessarily protect the maker of the tweet from a claim in defamation. Whether such a claim will be successful will depend upon the extent to which the victim's reputation was damaged during the period the tweet was visible.

Whilst Twitter's Vice President of Engineering, Ed Ho, has stated that "making Twitter a safer place" is Twitter's "primary focus", it is unfortunately doubtful whether the changes will provide greater protection to individuals and businesses.

Unless and until the social media platforms take more decisive action to tackle the increase in online abuse, those affected will have to rely on traditional legal remedies, including obtaining an injunction to remove an offending post and commencing defamation proceedings.