Reading 10: On fake news and influence campaigns
I think it is reasonable and sensible for platform providers such as Facebook and Twitter to combat “Fake News” on their sites. Information that is demonstrably incorrect should be removed, particularly when it is being used to further certain agendas unfairly, or when it is causing real, demonstrable harm.
It’s a no-brainer that the spread of false information should be stopped. For that reason, I don’t mind that it’s technically a private company deciding what is and is not “fake news” – if something is demonstrably false, it can and should be removed.
Additionally, we need to have a discussion about just how much power these companies have, and what their platforms mean for public discourse. See the effect that WhatsApp is having in India; these are becoming more than just corporations. They so profoundly affect how we live our lives that we need to take a good, hard look at what to do about it. That is, we should recognize that they may have capabilities beyond those we afford typical companies, but also (emphatically) that they have additional social responsibilities. See, for example, how Russian agents were able to use Facebook to (credibly) affect the outcome of the 2016 US elections.
The reason I’m relatively nonchalant in saying “yeah, go ahead and remove fake news” is that I don’t think falsehood itself is the root of the problem, particularly with the election. While the influence campaigns did spread things that weren’t true, you can do much the same damage with true stories presented or framed in certain ways to certain people.
In fact, a component of the 2016 election shenanigans that may have been as harmful or more so is indistinguishable from typical use of the website at the surface level. On Facebook, people make posts, add friends, and read and share articles. In the 2016 influence campaign, there were actors who did all of these things, just with a political agenda in mind. Actions like these can never be prevented at a high level and must instead be detected with fine-grained analysis, which makes them extremely difficult to combat as a matter of course.
The issue, I think, is not that what people are saying is “fake” or completely impermissible. The problems arise when we track what people do so closely, and build up such huge dossiers of information on the general populace, that those dossiers can be leveraged to deliver very targeted messages to certain groups. When a firm can identify groups of people likely to vote for a certain political candidate and serve them ads specifically designed to discourage them from voting, we have a problem.
I wouldn’t go so far as to say that we live in a “post-fact” world, but it is certainly the case that it’s becoming far easier for people to be manipulated. As the public’s news consumption grows increasingly tied to social media, rather than coming from news outlets that people themselves seek out, it is becoming easier and easier to influence people into believing certain things or behaving in certain ways.
I believe that this is a threat to our democracy – not in an “overthrow of the government by an authoritarian regime” sense, but in the sense that elections could come to be won by those with the best analytics and campaign machinery, rather than by those with the best platform or the promise of being the most effective leader. In the age of social media and data analytics, the process of attaining political office is growing increasingly independent of having the qualities needed to hold that office effectively – a trend that can only lead to bad outcomes if we don’t do something to fix our political discourse.
And, I think, we can do something to alleviate it. Social media doesn’t appear to be going away any time soon, but the climate of this past presidential election cycle – with all its fearmongering, finger-pointing, and name-calling – lent itself uniquely to this sort of influence campaign. This kind of rhetoric will never go away entirely, but the more we move back toward reasoned, policy-centered discussion, the better off we’ll all be.