As the managing director of a PR and social media agency, I do tend to spend a fair bit of my time on social, like most of us working in the industry. While social media has plenty of obvious benefits when it comes to staying in touch with friends, discussing current events, and sharing information, I find its value is somewhat tainted by the prevalence of trolls on the biggest platforms. In the past five years, online abuse figures have doubled. I think the issue of trolling has long since passed the point of being out of control, and it has led to these platforms becoming negative environments to work on and in.

I think it’s high time that social platforms were held more accountable for facilitating this horrible hate and abuse. Online trolling has gone on for far too long and has a lasting impact on the mental health and wellbeing of those affected, who certainly don’t deserve it.

What is online trolling?

Online trolling involves people using anonymous accounts to post insults and abuse on social media to provoke outrage or to upset. This abuse can range from nasty comments to something as extreme as death threats. Trolling is most frequently experienced by those in the public eye, but anyone who uses social can be a victim, and I have had a fair bit of nastiness directed at me myself. This online abuse can have profound effects on someone’s life offline. Victims are often left feeling unsafe, not knowing whether these trolls might be able to find them in real life. Frequently being sent insulting messages also has a negative impact on people’s self-esteem and can leave victims feeling depressed.

Online trolling has gained increased attention in the media over the past month or so due to a campaign by the Football Association to prevent online abuse. The campaign is a result of the masses of racist abuse that players across the UK have experienced on social media. The FA has supported calls for a collective social media boycott by teams and players, with the aim of encouraging social platforms to do more to combat trolling. For athletes this is a bold move, as many of their sponsorships rely on having an online presence, but surely they should be able to maintain one without being hounded by keyboard warriors?

I think the main reason trolling is so common is that those who do it know they can get away with it using anonymous accounts. There are ways for police to trace IP addresses, but there are simply so many trolls at this point that tracking them all would be impossible given the time and resources it would take. The anonymity offered by social also means people say extreme things they would never say to someone face-to-face, because they will not personally see their reaction or face any repercussions. In other words, it’s perfect for cowards who want to make themselves heard without any comeback, and that makes me sad.

How can social media platforms prevent trolling?

Since their launch, social sites including Facebook, Instagram and Twitter have relied on users self-regulating content by flagging any trolling and abuse they witness or experience. Teams of moderators then monitor these reports and decide what action should be taken, from removing abusive posts to banning users from the platform altogether.

However, this action doesn’t go far enough, as thousands of trolls remain free to continue spreading hate. Banned trolls can also easily sign up for a new account under a different anonymous persona to the one they used previously, which renders the bans almost null and void.

There have been calls for platforms to require users to show a form of identification to sign up for a social media account. I personally support this move, as taking away trolls’ anonymity will make them more accountable for what they choose to post. I don’t think posting racist and discriminatory hate comments would be as popular if people knew their family, friends and employer were going to read them. Requiring ID would make it much easier for trolls to be traced and prosecuted, allowing victims to get justice. It would also prevent young children from opening accounts without parental consent, so they can be better protected from harm.

What is preventing increased social media regulation?

I do understand that requiring identification would be a controversial policy for social media platforms to implement. Mark Zuckerberg has enough information about us as it is without having copies of everyone’s ID on file as well. The benefits, however, most likely outweigh the drawbacks, as it is probably the best suggestion I’ve heard for preventing the online abuse that affects many people’s day-to-day lives. There is the argument that social media regulation intrudes on our right to freedom of expression; however, we also have a right not to receive abuse – it is just bile.

Alternatively, there could be stricter punishments for those who break online abuse laws. The Online Harms Bill goes some way towards outlining steps that could be taken to prevent various forms of online abuse. Progress, however, has been far too slow, and it needs to be the social platforms themselves, not just the government, working to tackle online trolling.

With all of that in mind, I was delighted to see one step forward from Twitter this week, which said:

We began testing prompts last year that encouraged people to pause and reconsider a potentially harmful or offensive reply — such as insults, strong language, or hateful remarks — before Tweeting it. Once prompted, people had an opportunity to take a moment and make edits, delete, or send the reply as is.

In early tests, people were sometimes prompted unnecessarily because the algorithms powering the prompts struggled to capture the nuance in many conversations and often didn’t differentiate between potentially offensive language, sarcasm, and friendly banter. Throughout the experiment process, we analyzed results, collected feedback from the public, and worked to address our errors, including detection inconsistencies.

So we have an actual platform taking the first step forward, and this is exactly what all platforms should do – start using their AI systems for good to remove this terrible shite from their networks. The results from these tests have found:

If prompted, 34% of people revised their initial reply or decided to not send their reply at all.

After being prompted once, people composed, on average, 11% fewer offensive replies in the future.

If prompted, people were less likely to receive offensive and harmful replies back.

Ultimately, I hope that all of the social platforms will do more to prevent online abuse from taking place, as it’s time they were held more accountable for what their users are able to publish, and they can automate much of that work, as Twitter has shown. My nine-year-old son has recently started using TikTok, and I hope that he will have a more positive experience on social than the one footballers and other public figures have had over the past few years. Whether that truly happens, though, only time will tell.
