Fake news is, as Time magazine reports, “false news stories, often of a sensational nature, created to be widely shared online for the purpose of generating ad revenue via web traffic or discrediting a public figure, political movement, company, etc.” In other words, it is news that draws conclusions from obviously false premises or facts. While I am a proponent of letting people of differing opinions speak their minds and twist base facts to fit whatever argument they want, making things up is unacceptable, and tech companies, as well as the government, have an obligation to filter blatantly false communications. Words have power. State sponsors like Russia have weaponized social media through agencies like the Internet Research Agency. The New York Times relates a frightening incident in which the agency fabricated a chemical spill. It is easy to extrapolate the power that the ability to fabricate any event gives a government or group.
Before going any further, though, I want to be clear that “blatantly false” covers a very, very narrow window of communication. It includes things like events that never took place, false statistics (misleading statistics are fine; stats is an interpretation game, after all), and posters presenting themselves as someone else. A good set of guidelines for identifying the blatantly false, judiciously applied, would not stifle free speech. Opponents would argue that this is a slippery slope to shutting down all free speech, but I think such a system would reap great benefits for society. It is not difficult to design checks and balances that would prevent the sides of an argument from trying to shut each other down in the name of fighting false news. A well-deployed system is crucial to the well-being of democracy, our communities, and national security.
The effects of fake news are already starting to wreak havoc on our society. Articles with titles like “Mark Zuckerberg is in denial about how Facebook is harming our politics” and “If you don’t have anything nice to say, SAY IT IN ALL CAPS” show that this is becoming a real discussion. As more people realize how they can use information warfare to their advantage, the problem will only get worse.
Google has taken what I think is a positive step to combat fake news by partnering with non-profit fact-checking agencies for its search results. A more technical solution might be raising the authentication barrier for setting up new social media accounts: making it harder for users to create multiple accounts can help stop trolls. Similarly, we might weigh the pros and cons of decreasing user anonymity on the web. Anonymity generally makes people feel safer about posting, both for good (an unpopular political opinion) and for ill (cyberbullying), and I think we need studies that quantify how much that shield enables dissenting opinions, and what it does to the quality of conversation, before I am comfortable reducing it in the name of stopping trolls. Gamergate and other cyberbullying episodes are strong arguments for less anonymity; on the other hand, we would not want to strip anonymity from someone speaking out against a brutal dictator. This is generally a lose-lose situation in my mind, one that requires deciding where we are willing to make sacrifices.
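To make the "authentication barrier" idea concrete, here is a minimal sketch of one way a platform could cap how many accounts can be tied to a single verified identifier, such as a phone number. Everything here (the function names, the limit of two accounts, the use of a phone number as the identifier) is my own hypothetical illustration, not any real platform's mechanism:

```python
import hashlib

# Hypothetical policy: at most this many accounts per verified identifier.
MAX_ACCOUNTS_PER_IDENTIFIER = 2

# Maps a hash of the identifier to the number of accounts created with it.
# Storing only a hash avoids keeping the raw phone number around.
_accounts_by_identifier: dict[str, int] = {}


def _key(identifier: str) -> str:
    """Hash the identifier so the raw value is never stored."""
    return hashlib.sha256(identifier.encode()).hexdigest()


def can_register(identifier: str) -> bool:
    """Return True if a new account may still be created for this identifier."""
    return _accounts_by_identifier.get(_key(identifier), 0) < MAX_ACCOUNTS_PER_IDENTIFIER


def register(identifier: str) -> bool:
    """Attempt to create an account; refuse once the cap is reached."""
    key = _key(identifier)
    if _accounts_by_identifier.get(key, 0) >= MAX_ACCOUNTS_PER_IDENTIFIER:
        return False  # too many accounts already tied to this identity
    _accounts_by_identifier[key] = _accounts_by_identifier.get(key, 0) + 1
    return True
```

Even a simple cap like this raises the cost of running a troll farm, since each batch of sock-puppet accounts now requires a fresh verified identifier; the obvious trade-off is that it also burdens legitimate users who want separate personal and professional accounts.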