As social media use has become ever more prevalent, it was inevitable that, without intervention, the darker side of the online environment would also grow in prominence. Social media hate crime is on the rise, and new guidance from the UK College of Policing states that it must be treated as a priority and handled by senior officers. Officers have been told that, even where no crime has been committed, they should consider visiting the accused at work, and that the matter should be recorded as a hate incident, which could, in turn, be visible on criminal record checks. This approach has reignited debate over the impact on freedom of speech and the use of police resources.
The guidance has faced judicial review from campaigners who argue that it could be actively harmful because it allows police to record false or malicious accusations. However, on 14 February 2020, Mr Justice Knowles in the High Court (Queen's Bench Division) found the guidance to be lawful and not in violation of Article 10 of the European Convention on Human Rights. He held that recording a non-crime hate incident based on an individual's speech is not an interference with their rights and that, even if it were, it is prescribed by law and pursues two of the legitimate aims set out in Article 10 (namely, the prevention of disorder or crime and the protection of the rights of others).
This guidance, and the move to treat hate crimes as priority incidents, comes amid controversy, as other crimes, such as burglary, have been downgraded because some forces and senior officers do not have time to investigate them fully. There are a number of further points of contention. Senior judges have criticised the police for labelling complainants as victims before any conviction, yet the term is used throughout the guidance even though the College acknowledges that in some cases a crime will not even have been reported. In addition, the guidance states that a victim does not have to justify or provide evidence of their belief that a hate crime has been committed, and officers should not directly challenge this perception. Accordingly, even if no crime has been committed, if the victim believes the action was motivated by hostility, it should be recorded and flagged as a non-crime hate incident.
For the first time, the guidance sets out that these non-crimes should be disclosed to a current or prospective employer under an enhanced criminal record check. Earlier this year, the police revealed they had recorded nearly 120,000 non-crime hate incidents, records which could affect people's ability to get jobs for up to six years. The College of Policing states that recording is necessary so that forces can analyse non-crimes and undertake preventative activity. However, a Freedom of Information request sent to every police force in the country returned no instances of police analysing non-crime hate incidents and using them as intelligence for actual crimes. This raises the intricate and important question of what should constitute a hate crime, and how it should be tackled.
This complex question has led to the proposal of legislation at both UK and European level. First, the UK government has confirmed that an Online Harms Bill will be introduced in 2021, following proposals first outlined in an April 2019 white paper and a subsequent initial consultation response. The Bill will set out a strict new regime requiring the removal of illegal content online, including terrorist material, child sexual abuse material, the promotion of suicide and cyberbullying. The new rules will apply worldwide to any platform that hosts online user interactions or user-generated content accessible by people in the UK. Social media platforms, dating apps, search engines, online marketplaces, peer-to-peer services, online forums and video games which allow online interaction will therefore be caught by the regulations. Further, confirming the earlier proposals, the legislation will place these companies under a statutory duty of care to protect users from illegal material and to implement measures to report and remove harmful content published on their platforms. Entities will be categorised into tiers according to the size of their online presence and the level of risk posed on the platform. Category 1 companies will have to comply with the most onerous obligations, while those in Category 2 will face less stringent requirements. The UK regulator, Ofcom, will be responsible for enforcing the rules and will have the power to impose fines for non-compliance of up to 10% of a company's annual global turnover or GBP 18 million (whichever is higher).
Similarly, at EU level, the proposed European Digital Services Act ("DSA") will apply to information society services provided in the EU, and in particular to intermediary services known as "mere conduit", caching and hosting services. In summary, the DSA (which is still at an early stage of development) is expected to centre on intermediary services (including internet access providers and domain name registrars), hosting services (such as cloud hosting), online platforms that bring together sellers and consumers (for example, online marketplaces, app stores and social networks) and very large platforms with the potential to reach more than 10% of the 450 million consumers in Europe. The DSA builds on the e-Commerce Directive ("ED") to address new challenges that have evolved since the ED's adoption 20 years ago; its aim is to overhaul, clarify and update many aspects of the ED to make the online world safer and more reliable. New obligations proposed in the DSA include, amongst others: more detailed procedures for effectively tackling and removing illegal content online, including a harmonised, legally binding and user-friendly European notice-and-takedown mechanism; a requirement to report annually on content moderation measures; a know-your-client obligation; and data-sharing obligations, for example to give regulators and outside groups greater access to internal data. As for sanctions, each EU Member State would have to designate a Digital Services Coordinator to enforce the DSA. Digital Services Coordinators in different territories can cooperate on matters covered by the DSA, and fines of up to 6% of global annual turnover can be levied for breaches.
Overall, these measures are a step in the right direction towards an online environment that is safer, more accountable and more inclusive, with the responsibilities of users, platforms and public authorities rebalanced to place fundamental human rights at the centre.