A few days ago, Abijeet Nath and Nilopal Das were killed when they stopped in a village in Assam state in India to ask for directions. They were attacked by a mob that suspected they were child abductors. Abijeet and Nilopal were the latest victims in a series of deaths in India tied to misleading videos spread on social media. The videos warned of child abductions in the area, but they had been doctored from safety videos or older, unrelated footage, with details changed to incite fear and suspicion.
In Brazil, WhatsApp is used by 120 million of its 200 million residents. Wired.com says that “WhatsApp is especially popular among middle and lower income individuals there, many of whom rely on it as their primary news consumption platform.” So earlier this year, when a yellow fever epidemic began, residents used WhatsApp to communicate with each other. By March, yellow fever was spreading fast, but misinformation about the vaccine was spreading faster. On social media, people were creating realistic-looking but false videos warning against the vaccine, circulating conspiracy theories about it, and exaggerating its side effects. As a result, people who should have been vaccinated were not, and yellow fever vaccination campaigns fell far behind their goals.
According to a recent Brookings article, 93% of Americans consumed at least some of their news online in 2017, with 35% of that news coming from social media, 20% from online searches, and 7% from family. Brookings also reports that during the 2016 presidential election in the United States, Facebook estimated that “126 million of its platform users saw articles and posts promulgated by Russian sources”. It is impossible to tell how much the U.S. election was affected by these articles and posts, but there is no doubt that they were out there.
All of these are examples of disinformation: false information intended to mislead its consumers. The stories began as deliberate lies and were then picked up and shared by people who may or may not have realized they were inaccurate. The examples above are very serious, but there are also more innocuous forms of disinformation that my friends and family spread (intentionally or not) on Facebook or other social media to poke fun at or insult a politician, a sports team, or a brand they don’t support. These show up as modified videos, modified pictures, or articles that look like news but aren’t. We are swimming in disinformation every day, and it is spreading as quickly as our use of social media.
So what can we as a society and an industry do to prevent or manage disinformation? The answer isn’t simple, but this is the first in a series of articles looking into the problem. Datagami was privileged to speak with Ash Bhat last week. Ash and his partner Rohan Phadte founded RoBhat Labs, a company focused on helping make the internet safe for everyone. In the next few disinformation articles, we’ll share more background on how they plan to do it!
The next post in this series is now available: Disinformation: What Can You Do About It?