
Misinformation On Social Media Platforms

‘MISINFORMATION’ IS A WORD WE HAVE BEEN HEARING A LOT LATELY, ESPECIALLY ACROSS THE SOCIAL MEDIA LANDSCAPE. BUT WHAT DOES IT ACTUALLY MEAN, AND WHY SHOULD WE BE WARY OF IT?

Misinformation is defined as ‘false or inaccurate information, especially that which is deliberately intended to deceive.’ When spread via social platforms, it can be misleading and negatively influential, largely because of the sheer speed at which content travels across them.

In the age of social media, information spreads faster than ever, whether or not it is true, and the amount of content we are exposed to on a daily basis can be overwhelming. Whether it’s celebrity gossip, political propaganda, or medical misinformation, people’s opinions are often presented as “the facts” or “news” and then continuously shared as if they were.

This becomes even more problematic when people share articles having read only the headline, spreading information without actually knowing its content, its credibility, or its source. Some unreliable sources deliberately twist a story when writing its headline, a practice known as ‘clickbait’, making the story seem more interesting or implying things that are not true in order to encourage clicks and shares. This is done simply to increase website traffic, often for financial gain. When these false or manipulated headlines are taken to be true without fact-checking, their impact only grows.

Celebrities and influencers can also have a significant impact on the spread of misinformation: firstly, because of the huge number of followers they have, and secondly, because their status means they are often viewed as a trusted or reliable source, regardless of whether they have any credibility in the subject area. We’ve all seen plenty of influencers promoting dieting products or miracle lotions they know nothing about, which, depending on the product, could be quite harmful to consumers.

In the past few years this has been a particularly prominent issue, with misinformation about COVID-19 spreading widely. And now, with Russia’s invasion of Ukraine, combating false information has become even more pressing.

SO, WHAT HAVE SOCIAL MEDIA PLATFORMS BEEN DOING ABOUT IT?

Facebook & Instagram

Facebook previously shared an article mapping out the ways they are addressing false news, stating that they are focused on three key areas: ‘disrupting economic incentives’, ‘building new products’, and ‘helping people make more informed decisions’. They discuss how many false news sources are created for financial gain, building ad-heavy web pages and sharing links to them with captivating headlines to encourage clicks and shares. Further methods of tackling this include algorithms that improve ranking, easier reporting for Facebook users, and the involvement of third-party fact-checking organisations.

Instagram’s regulations and policies are generally in line with Facebook’s. However, on their help centre page, they outline the ways Instagram specifically works to reduce misinformation. This includes making it harder to find on Explore and hashtag pages, using technology to find duplicates of false information, labelling posts that have been identified as false information, and removing accounts and content that go against their guidelines.

In terms of the war in Ukraine, Facebook has been incrementally updating the steps they are taking across both Facebook and Instagram. These include restricting access to Russian state-controlled media in Ukraine and making the same accounts more difficult to come across globally; while this content is not removed entirely, the change makes it less prominent and less widely spread across these platforms. Alongside this, any posts containing links to these sites will be labelled with warnings about the content’s source. On Instagram Stories in particular, these links will carry the notice: ‘This story contains a link from a publisher Instagram believes may be partially or wholly under the editorial control of the Russian government.’ These posts will also not be recommended on the Explore or Reels pages and will be made harder to find in search.

Twitter

Twitter has shared an article about their policy and enforcement around misleading information, opening with: ‘You may not share synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm (“misleading media”).’ Any breach of these rules could lead to deletion of the tweet, labelling of the tweet, or locking of the account. Breaches of this policy include media that is ‘significantly and deceptively altered, manipulated, or fabricated’, media ‘shared in a deceptive manner or with false context’, and media ‘likely to result in widespread confusion on public issues, impact public safety, or cause serious harm.’

The ‘Twitter Safety’ account has been used to post updates on the measures being taken in regard to the war in Ukraine. Within the same thread, it states that they are ‘actively monitoring vulnerable high-profile accounts, including journalists, activists, and government officials and agencies to mitigate any attempts at a targeted takeover or manipulation’, meaning their focus is on accounts that could have a greater impact in terms of follower count, influence, and perceived credibility. Similar to Facebook and Instagram, Twitter has also introduced labelling of Russian state-affiliated media websites, with the simple message ‘! Stay Informed. This Tweet links to a Russia state-affiliated website.’ warning Twitter users to be wary of particular sources of content.

LinkedIn

On LinkedIn’s blog, they discuss what is being done on the platform to reduce the spread of misinformation, listing ‘fake profiles, misinformation, “deepfakes” or other deceptive manipulated content’ as content they do not allow. They explain that automated systems are in place to ‘stop millions of fake accounts from ever appearing on the platform’ and to ‘remove millions more once detected as fake.’ Human investigation teams also work to detect suspicious content, and users can report suspicious accounts or content on LinkedIn.

They have also introduced a dedicated news page sharing coverage of the war in Ukraine, researched and curated so that everything on the page is genuine and can serve as a reliable source of information for LinkedIn’s users. They describe it as consisting of ‘trusted sources of information you can rely on as the crisis in Ukraine develops’. Each LinkedIn article pulls together posts and articles from a wide range of trusted sources, meaning the information is detailed and well supported.

TikTok

With TikTok being such a fast-growing platform, and one made up entirely of video content, misinformation can be very difficult to regulate, especially as the platform is known for users expressing opinions as if they were facts. That said, TikTok does have some regulations in place, stating that they ‘remove misinformation that causes significant harm to individuals, our community, or the larger public regardless of intent.’ However, it isn’t stated exactly how they go about regulating this. In regard to the Russian invasion, updates have been posted on their Twitter account, @TikTokComms, stating that they are ‘working steadfastly to remove disinformation and suspend access to features like Livestream when there’s misuse’. They also say they have added digital literacy tips to the Discover page to help viewers judge the credibility of content for themselves. TikTok will also be labelling Russian state-controlled media.

YouTube

Again, with YouTube being a video-based platform, misinformation can be more difficult to regulate. Their policy specifically targets misinformation such as ‘promoting harmful remedies or treatments, certain types of technically manipulated content, or content interfering with democratic processes.’ YouTube has a warning and strike system in place for users who violate its misinformation policies: any video that breaks these policies is removed from YouTube, and three strikes result in the user’s channel being terminated.

WHAT CAN YOU DO ABOUT IT?

Despite the actions and precautions these platforms are taking to reduce the amount of misinformation spread across their channels, many cases will undoubtedly still go undetected. So what can you do about it on a more personal level?

The best thing you can do is fact-check before you share anything. Look at details like the source of the post, where the author got their information, and whether other sources support it. Rely on a diverse range of sources rather than just those that support your views. If something doesn’t seem credible, it probably isn’t. It’s always best practice to do a little further digging to make sure what you’re reading has some credibility before sharing it.

If you do come across content that you believe is spreading false information on social media, it helps to report it. Most platforms have options to flag or block posts; using these features helps the platforms identify misinformation more quickly and rank the reported posts lower in other users’ feeds, meaning fewer people are likely to see them. Blocking accounts that share this type of content also means you will no longer see posts from that source, limiting the amount of misinformation that makes it to your own feed.

Unfortunately, social media continues to be full of misinformation. With the huge volume of content shared daily across these platforms, it is practically impossible to detect every piece that makes it into our feeds, and it can be challenging to fact-check everything we come across. So it is important to take everything we read with a pinch of salt and not share it unless we have taken the time to do a little research.

Read in more detail what these platforms are doing:

Facebook: https://www.facebook.com/formedia/blog/working-to-stop-misinformation-and-false-news / https://about.fb.com/news/2022/02/metas-ongoing-efforts-regarding-russias-invasion-of-ukraine/ 

Instagram: https://help.instagram.com/1735798276553028 

Twitter: https://help.twitter.com/en/rules-and-policies/manipulated-media 

TikTok: https://www.tiktok.com/community-guidelines?lang=en / https://newsroom.tiktok.com/en-us/bringing-more-context-to-content-on-tiktok?utm_source=COMMSTWITTER&utm_medium=social&utm_campaign=030422

YouTube: https://support.google.com/youtube/answer/10834785?hl=en#zippy=%2Cmisattributed-content 
