Disinformation: An Ounce of Prevention

In the last article in our disinformation series, we focused on what we as individuals can do to prevent the spread of disinformation – false information that is intended to mislead its consumers. But relying on individuals has a few problems:

  • People who are spreading disinformation often know it may not be true, but they don’t mind as long as it furthers their personal agenda or aligns with their beliefs.
  • Emotion is a powerful driver – partisan or cultural identity often leads people to focus more on who is making a statement than on the accuracy of the statement itself. Creators of disinformation deliberately target audiences they believe hold these implicit biases.
  • Most people simply won’t take the time to validate an article before they share it. Clicking Like/Share/Forward/etc. is soooo easy.
  • Once you’ve read a fake or misleading article, it is hard to forget. Even if you decide it isn’t true, it may continue to color your perception – it is hard to undo what you’ve seen.

A better solution, then, would be to stop disinformation BEFORE it proliferates – helping distributors of content such as Facebook, Twitter, advertisers, news agencies, etc. identify disinformation so that they can prevent it from spreading in the first place. This comes with the obvious challenge of preventing disinformation while still upholding free speech.

Identification

To prevent disinformation, we must first identify it. We can identify fake creators, pictures, videos, and/or posts:

  • By Creator – if you can identify the creator of a post as a bot, or as someone who habitually creates disinformation, you can extend that judgment to all of their content.
  • By Picture – there are cryptographic hashing techniques for recognizing where an image has appeared before, and the anchor text surrounding those earlier appearances reveals the context in which the image originally appeared. When this is possible, it has a high degree of accuracy and can be used to determine whether an image has been Photoshopped or taken out of context.
  • By Video – if people are bad at detecting fake news, they’ll really struggle to “check” a video, especially when that visual medium pulls so directly on their emotions. But companies (including Robhat Labs) are working on this issue by using algorithms that break a video down into a series of images and analyze those frames to determine whether they have been modified (a minimal sketch of this frame-by-frame approach follows this list).
  • By Post – the textual content of a post can also be analyzed, but automatic detection is challenging because disinformation implies an intent to mislead. How can we determine whether the post your cousin published on Facebook is intentionally wrong or an honest misunderstanding of the facts? (A sketch of a simple text classifier also follows below.)
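
To make the picture and video ideas concrete, here is a minimal sketch in Python of frame-by-frame image checking. It uses perceptual hashing via the Pillow, imagehash, and opencv-python packages; the reference_hashes index, the Hamming-distance threshold, and the frame-sampling rate are all illustrative assumptions, and this is not Robhat Labs’ actual algorithm.

```python
# A minimal sketch of image/video checking via perceptual hashing.
# Assumes the third-party packages Pillow, imagehash, and opencv-python.
from PIL import Image
import imagehash
import cv2

HAMMING_THRESHOLD = 8  # illustrative cutoff; tune on real data


def looks_like_known_image(img: Image.Image, reference_hashes) -> bool:
    """Compare a perceptual hash against an index of known originals.

    A small Hamming distance to a known original whose surrounding context
    differs from the post's context suggests reuse or light manipulation."""
    h = imagehash.phash(img)
    return any(h - ref <= HAMMING_THRESHOLD for ref in reference_hashes)


def check_video(path: str, reference_hashes, every_nth: int = 30):
    """Break a video into sampled frames and run the image check on each."""
    matches = []
    cap = cv2.VideoCapture(path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % every_nth == 0:  # roughly one frame per second at 30 fps
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            if looks_like_known_image(Image.fromarray(rgb), reference_hashes):
                matches.append(index)
        index += 1
    cap.release()
    return matches
```

And here is a minimal sketch of the post-analysis idea: a simple text classifier built with scikit-learn. The two labeled examples are invented stand-ins for a real training corpus, and a model like this only scores surface patterns – it cannot establish the intent to mislead discussed above, so its output should feed a human or policy decision rather than act as a verdict.

```python
# A minimal sketch of a text-based classifier, assuming a labeled corpus
# (ordinary vs. misleading posts) that you would have to supply.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples; a real system needs thousands of them.
posts = [
    "BREAKING: miracle cure the government is hiding from you!!!",
    "The city council approved the new budget on Tuesday.",
]
labels = [1, 0]  # 1 = flagged as likely disinformation, 0 = ordinary

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(),
)
model.fit(posts, labels)

# Returns a probability, not a verdict: a human or policy layer decides
# whether to block, label, or ignore the post.
score = model.predict_proba(["Scientists don't want you to know this trick"])[0][1]
print(f"disinformation score: {score:.2f}")
```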

Prevention

To prevent disinformation, we don’t necessarily need to reach every user. Robhat Labs believes you can start by informing sites about the content they host – something of an “antivirus of disinformation”. Companies of all shapes and sizes can then use that information to decide what action to take – whether to block the content, label it as suspicious, further vet the poster, etc. (a hypothetical sketch of such a service follows).
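
As a purely hypothetical illustration of the “antivirus” model, here is a sketch in Python of how a site might call such a scanning service. The endpoint, request fields, and verdict format are all invented for this example – no real API is implied – and the thresholds mapping scores to actions would be each platform’s own policy choice.

```python
# A purely hypothetical sketch of how a site might consume an
# "antivirus of disinformation" service. The endpoint, payload, and
# verdict fields are invented for illustration; they are not a real API.
import requests

SCAN_ENDPOINT = "https://example.com/v1/scan"  # hypothetical


def moderate(post_id: str, text: str, author_id: str) -> str:
    """Send a post to the scanning service and map its verdict to an action."""
    resp = requests.post(
        SCAN_ENDPOINT,
        json={"post_id": post_id, "text": text, "author_id": author_id},
        timeout=5,
    )
    resp.raise_for_status()
    verdict = resp.json()  # e.g. {"score": 0.87, "reasons": ["known_bot_author"]}

    # Each platform sets its own thresholds and actions.
    if verdict["score"] > 0.9:
        return "block"
    if verdict["score"] > 0.6:
        return "label_as_suspicious"
    return "allow"
```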

Some industries have started working on their own solutions while others are just beginning. Here are a few examples:

  • Advertising – companies have an interest in making sure that ads for their products aren’t associated with disinformation. One application Robhat Labs is investigating is a service that lets advertising agencies identify disinformation; the agencies could then enhance their advertising services – in effect selling disinformation protection along with their ads.
  • Facebook – I recently tried to advertise one of our Datagami articles on Facebook to boost its reach. Facebook declined my ad because it determined the article had political content. The article was about diversity goals in business, which I didn’t think was very political. But there was a process for getting Datagami approved to post political content, so I followed the directions and got the ad approved. It was a somewhat annoying process, but I must say I was pleased to see the controls being put in place. The process essentially tried its best to verify that I was a real person with real content, physically living in the U.S., before it would let me post political content to Facebook users in the U.S.
  • WhatsApp – in my first disinformation article I mentioned the lynchings that happened in India as a result of disinformation spread on WhatsApp. The number of lynchings has increased since that article was written, which has put further pressure on WhatsApp to do something about its role in the killings. WhatsApp responded by limiting how many times messages can be forwarded in India, but it refused to support tracing of messages.
  • Government intervention – in June 2017, Germany’s parliament adopted a law that included a provision for fines of up to €50 million on sites like Facebook and YouTube if they fail to remove “obviously illegal” content, such as hate speech, defamation, and incitements to violence, within 24 hours. Other countries have decided against legislating misinformation. Poynter has a great site that outlines what countries around the world are doing in this area.

I think industry in general has been very slow to respond and adapt to disinformation – partially because it hasn’t been financially beneficial to do so. With increased pressure from the media, the public, and individual companies, I am hopeful that identification and prevention efforts will continue to grow and evolve!
