By Tanushree Vaish

Social Media Algorithms and Their Impact on Fake News

The internet has made human life easier, but it has as many negative aspects as positive ones. Fake news has always existed, and its impact on society is just as corrosive. The internet has accelerated the spread of information, authentic and false alike, and misinformation at scale can shake a society to its roots. One of the most debated questions about the spread of fake news is whether it is driven by biased users or bad algorithms.


Let’s save the debate for another day!

For now, this article is about how social media algorithms and fake news are connected.


Behind every post, video, or story, there is an algorithm designed to keep you glued to your phone for hours on end. The longer you stay on your socials, the better the algorithms are doing their job. Right now, there is no mechanism in place to identify and classify content based on its context, so the algorithms are not going to save you from misinformation. Yet algorithms were meant to enhance user experience, not intensify existing societal and ideological differences.


Unlike in the physical world, misinformation and fake news spread rapidly on social media platforms. But that does not mean #socialmediaalgorithms aid the spread of misinformation. Information dissemination through word of mouth takes far longer. Although both mediums can do damage, social media is distinctly distressing: the privilege of ‘sharing knowledge’ and ‘enlightening people’ in just a click’s time does not exist offline. And again, unlike in the physical world, the burden of transmitting information is handed over to AI.


Yes, AI only amplifies users’ behaviour patterns. Yes, it is the user’s pre-existing ideology or notions that the algorithms take into account when deciding what they are fed online. Still, the lack of a ‘smart’ mechanism to deal with selective exposure speaks volumes about the collective failure of these platforms to provide a secure space.


Fake news affects every aspect of human life: social, political, emotional, and economic. A cleverly cooked-up story about a business can cause its stock price to plummet to unprecedented lows or even force it to shut down altogether. If the whole purpose of algorithms is to enhance user experience (businesses included, mind you) and make users’ online stay profitable, then the purpose of their existence becomes questionable.

[Figure: Facebook engagement of the top five fake election stories]
A 2016 Statista analysis revealed that fake US election stories drew 8.7 million Facebook engagements, compared with 7.3 million for stories from mainstream outlets. Remember Trump flooding Twitter with calls for justice? The former POTUS probably thought fake news had played a significant part in manipulating votes.

When a 2016 study produced figures that still sound relevant and significant today, can you imagine the damage done in recent times?


Besides hosting educational and informative content, the entire internet is a hub for fake news. It appears in many forms, including podcasts, videos, images, print news, blogs, digital news, and radio shows. There are even websites dedicated to spreading fake news, propaganda, misinformation, and hoaxes.


What are Google’s algorithms up to? What is Facebook doing about it? Is Twitter not acting against it? Let us break it down by platform. Twitter is a microblogging hegemon in the social media field. Although its top executives repeatedly assert that Twitter is a democratic and harmonious digital space that aims to curb the spread of fake news, the trends suggest otherwise.


Twitter is currently at loggerheads with the Indian government over its irresponsible handling of fake news and its unregulated algorithms. Both the Indian legislature and the judiciary have held Twitter accountable for recurring incidents that disturbed public order in the absence of a mechanism to curb fake news. In 2020, it was reported that 18,000 accounts were spreading fake news, while the social media giant was booked for making no attempt to establish the truth. These are just a few examples of how fake news on social media platforms snowballs into communal tension, political polarization, and other adverse effects on society.


While Facebook is undergoing legislative and national scrutiny in several countries, Twitter has already piloted a ‘false’ content button for reporting fake news. Experts consider this a step in the right direction. Although still in the pilot stage, the new feature can help the algorithms distinguish fake news from other content and could therefore be a significant step in optimizing them.


All these platforms claim to be democratic, safe, and healthy places for socializing, and these social media giants are the largest sources of infotainment. Yet their algorithms are failing to curb the spread of fake news. The algorithms are not the sole sinners, however: information overload also makes fake news easy to spread.

 

Can you expect a positive change? Before jumping to conclusions, let us look at the other side of the story. The debate surrounding social media algorithms and fake news has always been multidimensional because numerous factors are involved. Yes, algorithms play their part in transmitting fake news, but the think tanks behind them are trying hard to counter the damage done.


Algorithms are smart, but not smart enough to judge the credibility and authenticity of content. At the end of the day, human intervention is required to make the call on these issues. Since their inception, the platforms have gradually rolled out additional features to help algorithms identify ‘problematic’ content, and people as well.


For example, ‘report’ and ‘spam’ buttons are attempts in this direction. If users find something concerning, problematic, or invasive of their privacy, these buttons ensure they don’t see the same content in the future. They also help the platforms judge the appropriateness, relevance, credibility, and authenticity of information, which goes a long way toward limiting the visibility of that content or warning other users.
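As a rough sketch of how such report and spam signals might feed back into visibility, consider the following toy scoring function. Every name, weight, and threshold here is invented for illustration; real platforms do not publish their moderation formulas.

```python
# Hypothetical sketch: accumulated 'report' and 'spam' signals
# lower a post's visibility score; a score of 0.0 means hidden.
# All weights and thresholds are illustrative, not any platform's real values.

def visibility_score(base_score: float, reports: int, spam_flags: int,
                     report_weight: float = 0.1, spam_weight: float = 0.2,
                     hide_threshold: float = 0.2) -> float:
    """Down-weight a post as user reports accumulate."""
    penalty = reports * report_weight + spam_flags * spam_weight
    score = max(base_score - penalty, 0.0)
    # Below the threshold, the post is suppressed entirely.
    return 0.0 if score < hide_threshold else score

# A clean post keeps its score; a heavily reported one is hidden.
print(visibility_score(1.0, reports=0, spam_flags=0))  # 1.0
print(visibility_score(1.0, reports=5, spam_flags=3))  # 0.0
```

The point of the sketch is only that per-user feedback can act as a ranking penalty long before any human moderator reviews the content.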



How Do Algorithms Go Wrong?


Most laypeople know only that algorithms show them relevant content. Before engaging in a one-sided debate, let’s see how that actually works.



For example, suppose you click on the top search result, or on the one you think is most relevant to you. Your current preferences teach the algorithms about the relevance of future results. This phenomenon is known as relevance feedback.


This process helps search engines and their algorithms build a ranking pattern that weighs results by click-through rate. The higher the click-through rate, the higher the probability that similar results will appear in your search results or feed.
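The click-through-rate re-ranking described above can be sketched in a few lines. The data, titles, and update rule below are invented for demonstration; real ranking systems combine many more signals.

```python
# Toy illustration of relevance feedback: clicks raise an item's
# click-through rate (CTR), and results are re-ranked by CTR.
from dataclasses import dataclass

@dataclass
class Result:
    title: str
    impressions: int = 0
    clicks: int = 0

    @property
    def ctr(self) -> float:
        # Fraction of times the result was clicked when shown.
        return self.clicks / self.impressions if self.impressions else 0.0

def record_impression(r: Result, clicked: bool) -> None:
    r.impressions += 1
    if clicked:
        r.clicks += 1

def rank(results: list) -> list:
    # Higher CTR floats to the top of future rankings.
    return sorted(results, key=lambda r: r.ctr, reverse=True)

a = Result("Measured analysis")
b = Result("Sensational headline")
# Users click the sensational result more often...
for clicked in [True, True, True, False]:
    record_impression(b, clicked)
for clicked in [True, False, False, False]:
    record_impression(a, clicked)
# ...so it rises to the top of the ranking.
print([r.title for r in rank([a, b])])
# → ['Sensational headline', 'Measured analysis']
```

Nothing in this loop checks whether a result is true; only whether it gets clicked.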


A similar process is at work on social media platforms: the more you engage with one type of content, the more the algorithms are prompted to show you similar content.

There are two sides to this issue. One, how do algorithms evaluate information?


Two, how do users react to clickbait or sensational news?


Algorithms judge content by metrics, which in this case means user engagement. That metric is what lets algorithms serve you ‘the best’: they list the relevant items you are most likely to enjoy.


Initially, this process was created to cut through the clutter and provide you with the best information. However, fake news exploited the gaps and crept in silently. A 2018 study by The Conversation revealed that the more people search for a topic, the more Google pushes those results to the top of the search page. Following up on this observation, The Conversation designed a game called “Google or Not” to find out whether people’s thought patterns really prompt the algorithms to show fake news.

It turned out to be true.


More than half the time in this study, people picked sensational headlines or fake news over trustworthy information, confirming that human thought patterns lean toward headlines and content that arouse curiosity, which in turn influences the algorithms.
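That loop between a small human bias and engagement-driven ranking compounds on itself. The deterministic toy simulation below illustrates the dynamic; the click probabilities and the update rule are invented, not taken from any study.

```python
# Toy feedback-loop simulation: users click sensational items slightly
# more often, the ranker gives more exposure to whatever scores higher,
# and each click raises the score. The slight bias compounds over rounds.
# All numbers are illustrative.

def simulate(rounds: int = 50) -> dict:
    scores = {"sensational": 1.0, "trustworthy": 1.0}
    click_prob = {"sensational": 0.6, "trustworthy": 0.4}  # slight human bias
    for _ in range(rounds):
        total = sum(scores.values())
        for item in list(scores):
            exposure = scores[item] / total          # exposure ∝ current score
            scores[item] += exposure * click_prob[item]  # expected clicks reinforce
    return scores

scores = simulate()
# The initially equal items diverge; sensational content pulls ahead.
print(scores["sensational"] > scores["trustworthy"])  # True
```

Even though the bias per click is small, exposure-proportional ranking turns it into a widening gap, which is the shape of the problem the study above points at.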

 

The internet, in this digital era, is the primary hub of information. But it is just as much a hub of misinformation. No single social media platform can be blamed for spreading fake news; for that matter, the internet cannot be named the sole ‘sinner’ corroding civilized societies with misinformation. After all, these systems only assist people with what they ‘need’ or are ‘searching for.’ Algorithms, search engines, and the internet are in the business of serving relevance on the user’s plate; that is what they were created for. It is the user who clicks on the information. Yet the blame cannot be shifted onto humans for that reason alone.


The dance between human thought patterns and algorithms is what nurtures fake news online. Ad revenue is not the only source of income for search engines and social media platforms. Although the companies differ on this point, it is a fact that they track and sell user data. People are being exposed to misinformation because of their content consumption and online behavioral patterns.


  • Sensationalist stories form 95% of media headlines nowadays.

  • Media reports with negative news or statistics catch 30% more attention.

  • 26.7% of people exposed to negative news go on to develop anxiety issues.

  • 63% of kids aged 12–18 say that watching the news makes them feel bad.

  • 39% of Americans believe the media exaggerated the COVID-19 coverage.

  • A staggering 87% of the COVID-19 media coverage in 2020 was negative.


According to studies, negative news is 30% more likely to grab readers’ attention, and 95% of headlines are sensationalized to evoke curiosity. That is the state of online behavior on both the reader’s and the creator’s side. To reiterate: the fault does not lie entirely on either side; the whole phenomenon is a collective of behaviors and a ‘set of programs.’


There are inconsistencies on both sides. People should learn as much as possible about these processes. Besides improving their algorithms to curb fake news and reduce its ill effects, social media platforms and search engines should educate people about these procedures. Fact-checking mechanisms and user-feedback fields embedded under content can help take down false content quickly and thereby stop the spill-over effects of negative news.


Debates about regulatory mechanisms and self-regulation aside, it is essential for users to understand how these virtual models, or algorithms, work. Decrying the algorithms is not a solution if you want to break the vicious cycle of humans’ natural inclination toward sensationalism and the automated processes that surface relevant content. This is just the beginning of algorithmic hegemony, and it is bound to go a long way as digital trends rapidly ascend. Hence, awareness and discretion are advised on the consumer’s end, while constant updates to enhance quality of service are recommended on the creator’s side.
