Fake news has been grabbing the headlines a lot recently, but is it really that new to us in the PR industry? Sure, it’s a clever way to make money from content that only appears to be genuine, but brands have been pushing sponsored content (or advertorials, as we call them in the business) since well before I started my PR career back in (coughs quietly) 1998. We just never coined the phrase “fake news” – we simply said “Sponsored Page” or “Advertisement Feature”, as that was far more civilised, but it amounted to the very same thing: content that was made to look real but wasn’t.

Jim Weber makes a brilliant point in his LinkedIn post: “Fake news is in fact just a natural extension of native advertising. “Branded content” is a piece of propaganda that an advertiser tries to convince readers is the truth by paying publishers a fee. So it was only a matter of time until scam artists reverse engineered this process by creating content people want to believe and making money off it via Google AdSense.”

The really worrying element here is that, according to the Interactive Advertising Bureau, fewer than half of readers can differentiate between this nasty fake news/native advertising and real news. Native advertising is just the online equivalent of an advertorial, where the word “Sponsored” appeared in the small print along the top of the article. I still see plenty of these sponsored articles in the national media today, but they are not as provocative as some of the horrible headlines darkening our screens recently.

LinkedIn and Facebook are two social channels that are apparently both attempting to kill off fake news on their networks, but they have taken very different approaches to it.

LinkedIn appears to be winning the battle by using real human editors (yes, you read that right – humans, not Terminator’s Skynet, are selecting the real content). LinkedIn’s Dan Roth has commented that its 25 human editors, scattered around the world, are tasked with “creating, cultivating and curating”. This approach from LinkedIn appears to be working, but it is in sharp contrast with Facebook, which got rid of all of its human editors back in August 2016, leading to a rather worrying spike in the prevalence of fake news. In fact, it was so bad that Facebook was forced to issue a statement after these fake stories made it out far and wide:

“Our goal is to enable Trending for as many people as possible, which would be hard to do if we relied solely on summarizing topics by hand. A more algorithmically driven process allows us to scale Trending to cover more topics and make it available to more people globally over time.”

However, according to our friends at The Guardian:

“The trending module was meant to have “learned” from the human editors’ curation decisions and was always meant to eventually reach full automation.”

Facebook’s Mark Zuckerberg added in his post:

“We’ve been working on this problem for a long time and we take this responsibility seriously.”

One such story, which was widely shared on Facebook after the shock US election, falsely claimed that Hollywood actor Denzel Washington had praised Mr Trump; worse still, the biggest story of all was of the Pope endorsing Mr Trump. If you want to see a full breakdown of shock headlines that weren’t true, try this list.

If you ask me, Facebook and the others should copy LinkedIn’s approach, rehire the humans (not bots) and use the new reporting mechanisms to highlight fake news more quickly – let us report it. Surely, if a story is reported, they can simply block the offending domains from being available or shareable on their network, and that would stop a lot of it? The social networks cannot be seen to be pushing one political view, but lies for clicks is not right.

Data is just data until a human interprets it anyway. A computer can measure the wrong thing, as can a human, but together, in a collaborative approach, we could stop this claptrap filtering through the Internet – and maybe we wouldn’t see so many shock political appointments, that is if we all still care about what is real and what is fake anymore.

Thankfully, in November Google announced it would do more to prevent fake news sites from making money through advertising. Very shortly after that, Facebook made a similar restriction on the use of its advertising network explicit.

If you are worried about fake news and you struggle to tell whether the site or news you are reading is fake, try using this tool from the TED Blog.

Do you think fake news is worrying? Or should we just allow it to be posted and shared in the free world?

About Chris Norton

Chris Norton is the founder of Prohibition and an award-winning communications consultant with more than twenty years’ experience. He was a lecturer at Leeds Beckett University and has had a varied PR career, having worked both in-house and in a number of large consultancies. He is an integrated PR and social media blogger and writes on a wide variety of blogs across a huge range of topics, from digital marketing and social media marketing right through to technology and crisis management.