Fourth Day’s Xanthe Vaughan Williams introduces the debate

Last night, I attended an excellent event hosted by Fourth Day at Somerset House, titled Truth, Trust and Downright Lies. Four panellists discussed how communications pros can address the rising threat of misinformation (the accidental spreading of inaccurate information) and disinformation (deliberately misleading content). Here’s my summary for those who couldn’t make it, plus some thoughts of my own on how PR professionals should tackle mis/disinformation around their brands.

The panel, expertly compered by Xanthe Vaughan Williams, included journalist Rob Waugh, PR consultant Katy Howell, researcher Max Templer from Think Insights and Antony Cousins from Meltwater.

Framing the mis/disinformation challenge

Rob kicked off proceedings by describing the occasion when he was offered comment from a ‘ghost influencer’: someone who, he discovered on further investigation, had little to no online presence. Some companies have attempted to use such ‘media experts’, who can get widely quoted if journalists do not do their due diligence. The aim is to gain ranking on search engines, whose algorithms reward brands that are mentioned as experts in their field. You can read more about Rob’s experience in his piece for the Press Gazette.

According to Rob, the challenge for journalists, especially younger, less experienced ones who are used to things moving at the speed of social media, is “where does AI stop, and people begin?”

Katy cited research that found fake news moves six times faster than fact. The challenge for brands, Katy argues, is that all too often they’re not set up, comms-wise (including internal comms), to deal with the speed of mis/disinformation; she cited a recent crisis she worked on and how the brand struggled to retain control of the narrative. The threat of mis/disinformation is “getting worse and we should be concerned,” she added.

Antony from Meltwater is a panellist on three events related to mis/disinformation this month alone, demonstrating how it has become a key focus for comms. He referenced a major unnamed brand for which he’d carried out an assessment: the ninth most-popular narrative online around that brand was a serious and false story. He also demonstrated how artificial intelligence (AI) can be used to combat the risk by identifying narratives and the key influencers behind them.

Finally, Max described mis/disinformation as a truth spectrum, ranging from outright fabrications to jokes to ambiguous content, with a corresponding spectrum of severity around those narratives. Max said there is probably plenty of fake news online around brands, but the challenge is to establish what is important and carries the most risk, and to focus on combatting that. The wider societal challenge is an erosion of trust in everything the media, the government, and institutions say.

What’s the solution to fighting mis/disinformation?

For Antony, brands need to anticipate the angles they might be attacked on (e.g. Diversity, Equity and Inclusion) and produce content that gets ahead of any mis/disinformation before it arises. Next, if brands know what they’re likely to be attacked on, they should get queries and monitoring set up (again, AI can help find the right keywords and trends here), then monitor each issue and identify who is behind it.

Max said there’s a consistent group of around 10% of people (skewed largely towards young males) who have shared false information online that they didn’t even believe. There’s an argument that government, brands, and social media sites should educate the public on how to spot mis/disinformation and how false narratives get amplified.

Katy added that most mis/disinformation is “born in dark social”, such as WhatsApp and Facebook Groups, and is often focussed on local areas. Monitoring is essential, and Katy says it’s astounding how many brands still fail to monitor their online mentions; often, by the time they engage crisis comms professionals, they have lost control of the narrative. Brands need a super-fast social media response strategy: they should anticipate the likely lines of attack, understand where their audience is active, and run a simulation. It’s no longer enough to put out a bland, generic statement, Katy argued. Instead, companies should put a face and personality to their response and train their spokespeople, which is “much more believable”. Finally, Katy highlighted the importance of internal comms: tell staff what’s happening!

Antony highlighted the rising prevalence of AI overviews in Google, which are sapping traffic from trusted news sites, as users no longer need to click through to read the full story. He said search may become ever more personalised with AI, but also that there is a role for traditional trusted media to stand out.

There was a question from the audience about teaching critical thinking in schools. The audience largely agreed, as did the panel, but the challenge is: who should deliver that? Government is already so far behind the curve, and Max highlighted that young people often aren’t the problem; during last November’s US election, most of the sharing of false information online was done by over-65s! Antony concluded that he hoped market forces would drive brands and publishers to reward truth over mis/disinformation.

My concluding thoughts

Mis/disinformation is a huge challenge, and not one confined solely to the online world; that’s just where a lot of confirmation bias is played out in public. Alas, false narratives have always been part of human nature and power play, whether it’s the Spanish Inquisition, the Salem witch trials, or the Brexit campaign.

There needs to be wider education around the topic, so we can create a media-literate public that understands how to challenge everything it hears, and news sources need to call out false information when they hear it rather than let it ride (there’s plenty of opportunity for that right now, and the BBC does have its Verify service).

I believe there’s an argument that governments should make social media companies contribute a percentage of their profits to fund online education around these topics, in exchange for operating in the market.

There are case studies of brands combatting misinformation well. Merseyside Police got ahead of the narrative recently by announcing that the driver in the Liverpool title celebration incident was a white male, along with his age. You could see the right-wing dog whistles blowing, almost desperate for the driver’s profile to fit their narrative and justify some outrage. No thought to the victims, you’ll note.

Takeaways for brands:

  • Anticipate threats, get your narrative/response to each scenario straight
  • Set up streamlined comms protocols and practise regularly (run scenarios)
  • Listen and monitor
  • Use humans to respond when required 

I also think there might be a return to trust in old school journalism – both in terms of the PR-journo dynamic providing trusted stories and experts, and the public knowing which media to trust. 

Also, finally, isn’t there a role for the law here? If a narrative is malicious and false, and you identify the source, isn’t this what defamation laws are for? I’m not an expert, but I’d be keen to hear from someone who knows more about that than me. One big case against a disinformant (is that a word?) could deter others.

It was a great event and very useful. Thank you to Fourth Day for arranging it, and great to see so many old faces (a couple of whom I worked with a quarter of a century ago!)

About the author