By: Ribhav Gupta

The spread of misinformation and disinformation on social media platforms is made possible by several factors. Social media's information overload creates a chaotic, overwhelming environment in which users struggle to distinguish fact from fiction. This opens channels for bad actors to disseminate false information, disproportionately harming underprivileged populations. In past elections, such actors have deliberately spread false information about voting dates and polling places, intimidated or threatened voters at polling places, and used messaging that preys on common concerns among Black and Latino voters about the effectiveness of political processes.

Meanwhile, social media algorithms are designed to show users the content they are most likely to engage with. These algorithms draw on extensive data collection about users' online activity, including browsing habits, past purchases, location information, and more. Because users are repeatedly shown content that supports their existing political views and worldviews, confirmation bias is reinforced. The Stop the Steal movement following the 2020 U.S. presidential election and the January 6 insurrection were both driven by misinformation that propagated and solidified among specific groups in this way.
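The mechanism can be sketched in a few lines. This is a deliberately simplified, hypothetical model (the post IDs, topics, and affinity scores are made up): the feed simply promotes whatever a user's inferred interests predict they will engage with, which is how like-minded content keeps resurfacing.

```python
# Toy sketch of engagement-based feed ranking (all data hypothetical).
# Real platforms use far richer signals, but the core loop is the same:
# score each post by predicted engagement, then show the highest scorers.

def rank_feed(posts, user_topic_affinity):
    """Order posts by a crude engagement prediction: the user's
    inferred affinity for each post's topic."""
    return sorted(
        posts,
        key=lambda p: user_topic_affinity.get(p["topic"], 0.0),
        reverse=True,
    )

posts = [
    {"id": 1, "topic": "sports"},
    {"id": 2, "topic": "politics"},
    {"id": 3, "topic": "cooking"},
]

# Affinity scores stand in for what is inferred from browsing habits,
# purchases, location data, and so on.
affinity = {"politics": 0.9, "sports": 0.2, "cooking": 0.1}

feed = rank_feed(posts, affinity)
print([p["id"] for p in feed])  # the topic the user already favors rises to the top
```

Because the highest-affinity topic always wins, a user who engages with one political viewpoint is shown more of it, and the feedback loop described above begins.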

Microtargeting has also been used to distribute disinformation. It enables political organizations and individuals to deliver advertisements precisely to particular demographics using information gathered by social media platforms. In commercial contexts, microtargeting has drawn criticism for enabling discriminatory advertising and denying historically excluded communities access to opportunities in employment, housing, banking, and other services. Political microtargeting has come under similar criticism, in part because political ad purchases are not closely monitored.
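At its core, microtargeting is just filtering a profile database against advertiser-chosen attributes. The sketch below uses entirely invented profiles and criteria; it shows how collected data lets an ad reach a narrow demographic slice, for better or worse.

```python
# Hypothetical sketch of audience selection for a microtargeted ad.
# The profiles and attributes here are invented for illustration.

profiles = [
    {"user": "a", "age": 34, "zip": "30301", "interest": "faith"},
    {"user": "b", "age": 62, "zip": "30301", "interest": "gardening"},
    {"user": "c", "age": 35, "zip": "90210", "interest": "faith"},
]

def target(profiles, **criteria):
    """Return only the users matching every advertiser-chosen attribute."""
    return [
        p for p in profiles
        if all(p.get(k) == v for k, v in criteria.items())
    ]

# An advertiser narrows the audience to one ZIP code and one interest.
audience = target(profiles, zip="30301", interest="faith")
print([p["user"] for p in audience])  # -> ['a']
```

The same filter that sells gardening tools to user "b" can just as easily exclude a protected group from a housing ad, which is why the practice draws the criticism described above.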

Political campaigns have also used geofencing, a data collection technique that enables further microtargeting by tracking when people enter or are present in specific geographically defined areas. In 2020, CatholicVote used the technology at churches to target prospective Donald Trump supporters with tailored messages, gathering voters' religious affiliations without their knowledge or consent. Geofencing thus creates a fresh stream of data that algorithms and microtargeting tools can exploit.
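The underlying check is simple geometry: is a device's reported location within some radius of a defined point? Below is a minimal sketch using the haversine great-circle distance; the coordinates and 100-meter radius are invented for illustration.

```python
import math

# Minimal geofencing check (illustrative coordinates and radius).
# A device's reported location is tested against a circular fence.

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def in_geofence(device, fence_center, radius_m):
    """True if the device's (lat, lon) falls inside the fence."""
    return haversine_m(*device, *fence_center) <= radius_m

fence = (33.7490, -84.3880)  # hypothetical fence center
inside = in_geofence((33.7492, -84.3881), fence, 100)   # a few dozen meters away
outside = in_geofence((33.8000, -84.3880), fence, 100)  # several kilometers away
print(inside, outside)  # True False
```

Every device that trips the fence becomes a data point (who was here, and when), which is exactly the kind of record that was collected at churches without worshippers' knowledge.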

Automation and machine learning (ML) also amplify the threat of disinformation. The relevant technologies range from very basic automation, such as computer programs ("bots") that operate fake social media accounts by reproducing human-written text, to sophisticated software that uses ML techniques to create realistic-looking profile pictures for fake accounts or fake videos ("deepfakes") of politicians.
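The "very basic automation" end of that range requires almost no code. The sketch below (account handles and messages are placeholders, not real content) shows how a script can cycle a handful of human-written posts through a set of fake accounts, adding trivial variation to evade duplicate-content filters.

```python
import itertools
import random

# Illustrative sketch of basic bot automation: a pool of human-written
# messages is scheduled across fake accounts with minor variation.
# All handles and messages are placeholders.

human_written = [
    "Canned message one.",
    "Canned message two.",
]
fake_accounts = ["@acct_001", "@acct_002", "@acct_003"]
suffixes = ["", " !!", " Please share."]

random.seed(0)  # deterministic for the example
queue = []
# Repeat the message pool and rotate through the fake accounts.
for account, text in zip(itertools.cycle(fake_accounts), human_written * 3):
    queue.append((account, text + random.choice(suffixes)))

print(len(queue))  # 6 scheduled posts spread across 3 fake accounts
```

A few dozen lines like these can keep hundreds of accounts active around the clock, which is why even crude bots remain a meaningful amplifier for the ML-generated content at the sophisticated end of the spectrum.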

Possible Solutions

There need to be better accountability mechanisms for big tech companies:

There has been little oversight of how tech companies handle the intertwined problems of disinformation and privacy infringement. Over the years, scholars and civil rights organizations have repeatedly flagged instances where tech companies failed to remove misinformation or incitements to violence that violated the companies' own policies.

Unfettered access to customer data can be prevented via a federal privacy framework:

The absence of federal privacy regulation enables the sweeping data collection that allows microtargeting and algorithms to discriminate on the basis of protected characteristics. The recently introduced American Data Privacy and Protection Act is a step in the right direction for Congress in establishing much-needed privacy legislation. Most significantly, the bill prohibits the collection and use of data for discriminatory purposes. It also includes stronger privacy protections for children, organizational standards for data minimization, and a limited private right of action. Enacting this legislation would go a long way toward improving voter safeguards online.


