Increasingly, online participation is being threatened by manifestations of online violence, especially online violence against women. Such behaviours reflect the normalisation of inequality offline and are online reflections of offline patriarchal tendencies. This directly undermines the potential of the Internet, which should be a space for challenging the everyday normalisation of abuse and inequality but is increasingly becoming a tool for reinforcing inequality and silencing women online. This has become particularly evident through the phenomenon of online violence against women in politics (OVAWP).
As evidenced by the number of threats received by female politicians in the lead-up to the 2017 General Election, online gender-based abuse is experienced by women across the political spectrum. More recently, Jess Phillips, a Labour MP, received rape threats from the UKIP candidate Carl Benjamin, which are now being investigated by the police. In the same week, SNP MP Joanna Cherry received a high volume of offensive and threatening tweets and emails after questioning social media representatives about the lack of protection for women on their platforms.
The issue of threats against MPs has been gaining increasing attention, including consideration by the UK Parliament Joint Select Committee on Human Rights and the United Nations General Assembly, and calls by senior SNP figures for an end to online abuse. Last week, a petition was also started advocating a lifetime ban from standing for elected office for those who promote rape or violence. However, online threats against female MPs should be viewed primarily through the lens of equality of participation in public space, as well as gender equality more broadly. These are not isolated incidents; rather, they illustrate the prevalent nature of the misogyny and gender-based hostility encountered by women online every day.
Online sexual harassment can take various forms, including both image-based sexual abuse (IBSA), such as so-called "revenge porn", and non-image-based sexual abuse. Our research categorises forms of sexual abuse which are not image-based as text-based sexual abuse (TBSA). As we argue in our book, and in our expert evidence to the UN Special Rapporteur on Violence Against Women, online text-based sexual abuse remains absent from the current legislative framework.
There have been a number of high-profile sufferers of text-based online harassment, including Tom Daley, who was subjected to abuse on social media in relation to his sexuality, but for which no criminal proceedings were pursued. He is not, however, the only notable example; others include the online abuse and harassment directed at prominent women. The landmark examples are those of Caroline Criado-Perez and Stella Creasy MP, both of whom were subjected to significant levels of harassment, including rape and death threats sent via social media platforms. So too was Gina Miller, in the aftermath of her legal challenge to the Brexit referendum result.
A range of actors, including (but not limited to) the police and social media platforms, have a significant role to play in tackling online sexual harassment. However, because Article 15 of the e-Commerce Directive imposes no general monitoring requirements on platforms, the responsibility for reporting online abuse to the relevant policing authorities rests largely with the individuals affected by such abuse.
Despite the growing scale of online abuse on social media platforms, the police have made only very limited efforts to respond to such forms of harassment, to recognise the potential offence, or to take it seriously. The experience of Stella Creasy MP in reporting online abuse to the police further illustrates this point. When Creasy reported abuse in 2013, the police failed to act, leading her to describe them as "technologically illiterate", a polite way of accusing the police of being either incapable of acting or unwilling to act. The police's inaction in relation to reports of online abuse and harassment stands, however, in contrast with the recent steps taken by some police forces in England to recognise misogyny as a form of gender-based hate.
Providers and social media platforms have been engaged in discussions aimed at tackling forms of online abuse broadly, but these have, to date, proved ineffective and largely piecemeal when compared with the measures taken to tackle, for example, extremist content online. Nonetheless, social media platforms, especially Twitter and Facebook, have an important role to play in curtailing such forms of abuse and should assume greater responsibility for identifying and reporting incidents of online harassment, especially where sexual harassment is concerned. Recognition of this is perhaps becoming more evident with Twitter's calls for 'Health Check Proposals', but again, this is not a solution.
Initiatives such as Twitter's 'mute button' allow individual users to determine for themselves what to hide from their feed, including content which is potentially abusive, and potentially criminal. However, such measures merely mask the problem rather than tackling it effectively and, as such, allow online harassment to recur. This leaves the victims of online sexual harassment in a precarious position: they are likely to be exposed to further abuse while being left with no support from the social media platforms, despite some initiatives to raise awareness of such abusive behaviours online.
There is an urgent need to address this phenomenon, yet at present it falls outside the cybercrime framework, a point we made at the UN Commission on the Status of Women event in March when discussing effective protection from online violence against women in politics.
Reform, however, cannot stop here. There is more to be done, as we argue in our recent book: Online Misogyny as a Hate Crime: A Challenge for Legal Regulation?