Blog Post 12: Censorship

Censorship is a difficult topic with a lot of gray area to consider. I believe that tech companies have a moral imperative to censor some types of content, while for others they have a moral imperative not to. The general rule I would propose is that tech companies have an obligation to filter out directly harmful messages. I’ll define “directly harmful” as encouraging physical violence against a non-aggressor, or as targeted, emotionally hurtful messages made without an attempted logical argument. A racial slur falls under this directly harmful category, as it doesn’t express an argumentative viewpoint and is meant to create a hostile environment for those it targets. Terrorist recruitment also falls under this category, as it encourages physical violence against groups of people who pose no physical threat.

An example of permissible speech that I think Twitter shouldn’t have shut down is that from the “alt-right movement … [who] are generally outspoken in their attacks on multiculturalism, globalisation and immigration”. As long as the alt-right isn’t pushing for physical violence against non-whites and is trying to convince with logic (even badly flawed logic), they are within their right to be heard. They should be allowed to write volumes arguing for the removal of all non-white citizens from the U.S., and no one should filter their speech. This facilitates conversation and doesn’t fall under the category of directly harmful. Presumably, it should lead to conversation against this viewpoint, and the public will hash out a prevailing view. Some people fear that even discussing something very evil (like turning America into a whites-only club) will give it credence and the ability to gain followers. I think this is the price we have to pay if we care about an open society with a free exchange of ideas. We’re going to have some terrible ideas on the shelves that get a lot of attention. They will even convince some people, maybe a lot of people. But having an open exchange of ideas means the good ones are out there too, and over time they should build a critical mass.

It is also important that tech companies are very transparent about what they are choosing to censor, since there is definitely some gray area around what is and is not acceptable to censor. One of the articles discusses a black woman who was censored on Facebook when posting about her standoff with police. Her 5-year-old child was eventually shot by the officers. Rashad Robinson, who is “the executive director of Color Of Change, an online organization focused on racial issues”, said of the case that “the lack of transparency is part of the problem”. When people are unaware of what they aren’t seeing, it creates an ignorance that we don’t want in our society. If tech companies choose to censor certain things, they should at least let us know what kind of information they are refusing to show us.

Being transparent about filtering content is part of what makes it morally permissible to operate in countries with heavy-handed censorship laws. “Lee Rowland, a senior staff attorney at the American Civil Liberties Union, says companies should generally submit to governments’ requests for censorship, if it means they can keep delivering their services. But when they take down content from their platform, Rowland says, the company must be transparent.” Knowing what you are ignorant of is better than not knowing you are missing anything at all. It at least allows people to understand that they may be missing parts of the puzzle when they are making decisions or forming opinions about certain topics.
