Fake news is a term coined to describe news stories that are completely made up. It differs from conspiracy theories, which are presented as theories and are usually (but not always) based on some truth, or on facts their proponents believe to be true. Fake news is written by people who know they are writing lies for some intended purpose: some do it simply for money, while others have political goals in mind. The problem is that their power is so great that many believe fake news was a major force behind the results of the 2016 US elections.
That power stems from the easy and exponentially expanding platform that Facebook and other social media companies provide. Fake news has a mind-dulling effect: the headlines are so controversial that some people hit the share button before even reading the article, a dangerous practice that accelerates the spread of misinformation.
This danger is even greater because of Facebook’s News Feed algorithm, which personalizes content to an individual’s preferences based on their clicks, likes, shares, and comments. Personally, I was never aware of this problem. I like double-checking articles against other sources, because when I read some incredibly big piece of news I want to know as much about it as I can. If it’s too sensational, I verify it with Snopes. For example, I found out through Snopes that videos shared by President Trump on his Twitter account, purporting to show Muslim British residents committing acts of terror, were all false: none of those incidents occurred on British soil. I have, in other situations, found fake news in my news feed, but I never realized it was this big of a problem. The algorithm learned to show me the videos and posts I like to read most, which seemed to have little overlap with the issues spread through fake news. My friends are also mostly educated college students, so fake news seemed less prevalent in my feed.
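The personalization described above can be thought of as ranking posts by a reader's past engagement. Here is a minimal sketch of that idea in Python; the specific weights and field names are my own assumptions for illustration, not Facebook's actual model:

```python
# Hypothetical sketch of engagement-based feed ranking. The weights are
# invented for illustration; the real News Feed model is far more complex.

def engagement_score(post):
    """Score a post by the reader's past interaction signals."""
    return (1.0 * post["clicks"]
            + 2.0 * post["likes"]
            + 3.0 * post["comments"]
            + 4.0 * post["shares"])  # shares weighted highest: they spread content

def rank_feed(posts):
    """Order the feed so the most-engaged-with content appears first."""
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"id": "a", "clicks": 10, "likes": 2, "comments": 0, "shares": 0},
    {"id": "b", "clicks": 1, "likes": 1, "comments": 3, "shares": 2},
]
print([p["id"] for p in rank_feed(posts)])  # post "b" ranks first: 20.0 vs 14.0
```

The point of the sketch is the feedback loop: whatever you engage with most floats to the top, so a feed like this naturally narrows toward content you already agree with.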
The question is: how responsible are social media companies for policing fake news? Part of the problem is that some of what has happened may be illegal under current legislation. There are laws that bar other countries from intervening in US elections, but Facebook’s open system allowed Russia to circumvent them. Furthermore, Facebook has not been reporting advertisements with political intent as political campaign spending, another violation. On top of this, fake news outlets with political aims can sidestep that requirement entirely by never stating their political motives. While the first two are clear violations that Facebook can readily fix, the third is not: Facebook would need to discern fake news from real news and to read the motives behind the advertising.
To address this, Facebook is trying to flag “disputed” media (using some heuristic) and outsource it to third parties that verify the content of the post; if they find the content to be false, a link is added to the post pointing to a page with evidence disproving it. Multiple organizations have signed on to this fact-verification process. This is a great idea, since it decentralizes fact checking, which might help reduce bias. It also makes the process less private, so to speak, because it is open to outside entities. My only worry is that this will lead to a “fact-checking war”, where the left and the right constantly try to disprove each other, further deepening our already rising distrust of the media.
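The flag-then-verify flow above can be sketched as a small pipeline. Everything here is an assumption for illustration — the `heuristic_disputed` threshold, the checker interface, and the `disputed_link` field are invented names, not Facebook's actual system:

```python
# Hypothetical sketch of the third-party fact-check flow: a heuristic flags
# a post as disputed, independent checkers review it, and if any checker
# finds it false, a link to their evidence is attached to the post.

def heuristic_disputed(post):
    """Crude stand-in for whatever signal flags a post as disputed."""
    return post.get("flags", 0) >= 10

def fact_check(post, checkers):
    """Ask independent checkers; attach evidence if one finds the post false."""
    for check in checkers:
        verdict, evidence_url = check(post)
        if verdict == "false":
            post["disputed_link"] = evidence_url  # shown alongside the post
            return post
    return post

def moderate(post, checkers):
    """Only posts the heuristic flags are sent out for verification."""
    if heuristic_disputed(post):
        return fact_check(post, checkers)
    return post
```

The design mirrors the decentralization argument: because `checkers` is a list of outside parties rather than one in-house reviewer, no single entity decides what counts as false.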
For now I will give them the benefit of the doubt and hope it works. While that is being set up, it would benefit all of us to try to break out of the echo chamber, the effect where social media “echoes” back our own beliefs through its individual filtering process. Try reaching out to other sources of media. Personally, I started listening to a daily news broadcast. It is not comprehensive, and it is only one more source, but it helps me escape the filter bubble.