Reading 09

The internet is a powerful tool. It allows people to share their thoughts faster and more widely than ever before, which would be amazing if we lived in a utopian society where everybody got along. Unfortunately, reality has proven otherwise, and now that the internet has become such a large part of so many people’s lives, the question of where to draw the line between censorship and free speech must be considered because it has very practical consequences. The two directly oppose each other, but the line gets blurred when words turn into actions.

Free speech is not completely unbounded. The famous example used to demonstrate its limits is that you can’t yell “FIRE!” in a crowded theater when there is no fire. This rule was not made because doing so would be rude, or to teach Americans better movie etiquette and give other customers a better viewing experience, but because it would (or could) cause unwarranted danger. Deciding what else counts as dangerous speech is difficult because it can be somewhat subjective, which makes it a gray area. In China, the government views the ideas of independence and democracy as dangerous even though these same ideas are held up as ideals in the United States.

Censorship places limitations on free speech, and like free speech, it can be dangerous when misused or abused. Controlling which ideas are and aren’t allowed to be shared gives one group of people power over others, which is why censorship has to be treated carefully. I think a good standard to start with is that speech that directly incites violence against others should be censored online. Things are less clear, however, when violence occurs as the result of someone’s reaction to an online comment. You can’t always predict or control how others respond to something you say, and you certainly don’t control their behavior, so it’s harder to tell who is to blame when things like that happen.

Before going further, it is important to note that telling companies what they can and cannot censor is itself a form of censorship. You have to consider where a company’s free speech rights begin and end. I agree with Robert Epstein when he writes that “If Google were just another mom-and-pop shop with a sign saying ‘we reserve the right to refuse service to anyone,’ that would be one thing. But as the golden gateway to all knowledge, Google has rapidly become an essential in people’s lives – nearly as essential as air or water.” Because of their size and influence, it is all the more crucial that companies such as Google and Facebook censor where necessary and only where necessary – no more, no less.

I think the most obvious abuse of censorship is using it to suppress opposing views just because you disagree with them. Even this is not an absolute, because a case could be made for censoring an opposing view that is dangerous or harmful, but relativists will say that this is subjective, which complicates things. From my point of view, government dissent is not gravely dangerous in and of itself (as long as it does not directly call for violence against people or property), and it would be unethical to censor these opinions.

On the other hand, news and messages spread by terrorist groups should be censored because, by definition, terrorist groups aim to cause terror. The question then becomes how to decide whether a group is in fact a terrorist group or just a group with views that you disagree with. If they are hostile and promote violence justified by their ideology, that is a pretty good indicator that they are a terrorist group. If they just say things that make you uncomfortable, and many reasonable people aren’t concerned, then you should re-examine why it makes you so uncomfortable, because it probably has more to do with how you perceive things than with what was actually said. A similar approach could be taken when considering whether hateful and discriminatory comments should be censored. Whether something is offensive is determined by the recipient, not the writer, so while we have a decent idea of what might cause offense, we cannot control how others react, and we can’t guarantee whether any given reader will or won’t be offended by what we write. My initial thought would be to censor anything that calls for physical violence, but if we are to recognize mental health as equally legitimate as physical health, it follows that we should also censor comments that cause mental and psychological damage. This is harder to detect, so it would be harder to enforce.

A lot of the debate surrounding online censorship seems kind of petty. I don’t think it’s ethical for large, public companies to remove information that is not in line with their interests and political beliefs. However, I do think that sometimes people and organizations try to push the envelope and provoke the large companies into removing their content so that they can make a big deal about it and gain public support for the “wrongdoing” that the large company committed. I think both sides need to lay off a bit.

If political censorship is going to happen, I think the same rules should apply to everyone regardless of where they fall on the spectrum. Some extremist groups promote violence, and those should be censored. Groups that just promote peaceful discussion should not be censored simply because a company disagrees with their ideas. When it comes to discussion of illegal activities, I’m not sure what the right course of action is. My instinct is to say that it should be censored, because not doing so could be viewed as passive endorsement, and talking about illegal activities could make people more likely to engage in them. But a blanket ban on talk of illegal activities would also eliminate constructive conversations about the pros and cons of making an activity legal or illegal. Something more nuanced is needed when it comes to censoring discussion of illegal activities.

There is a lot of speculation about what will happen if we enact a law or reverse a ruling, but it’s just speculation. You don’t know for sure until you do it. Censorship is a powerful tool. I don’t know exactly what the answer is as to how to apply it, but I think it lies somewhere between the two extremes of no regulation and censoring everything. I don’t think that we’ll ever completely figure it out and get it perfect, but I think we should try our best to use censorship responsibly to work towards the common good.

Reading 08

Corporate personhood is the idea that although corporations are not natural persons, they are legal persons, and as such, they are legally afforded the rights that a natural person has where applicable. This is an interesting concept because, at the end of the day, corporations are just a collection of people (and money). Because of this, it’s inevitable that we have to consider what they should be allowed to do and what limitations can be placed on them. I like Kent Greenfield’s point that corporations should be more like people because it feels like a natural move to make. And if the contention surrounding corporate personhood is a result of questions about what corporations ought to do and what’s fair, the human aspect would help keep that in perspective, in contrast to the inherent profit-driven motivations of corporations themselves. The debates about corporations having freedom of speech and freedom of religion seem to be just another attempt to control what other people say and do, which really only becomes an issue when you disagree with what they’re saying and doing. If, at the end of the day, corporations are run by people, it’s hard for me to see how you can separate the two and ask (or legally force) people to do things that go against their personal beliefs, especially if their company is not public. It is troubling that the legal precedent for this idea was founded on a series of untruths that were never corrected, but that doesn’t necessarily mean the idea itself is a bad one.

In the case of IBM’s involvement in Nazi Germany, I think it was an unethical act on the company’s part. We spent the beginning of the semester discussing how technology developers are ethically responsible for considering the implications of their creations, but even if this wasn’t taken into account, IBM is culpable because they knew what their products were being used for, and they actively continued to support Nazi efforts by maintaining the machines and providing materials and means for so many people to be unjustly killed. Even though I’m willing to give the benefit of the doubt that maybe IBM didn’t know what it was getting into initially with the creation of the census system, the company had multiple chances to step away after that when things escalated.

From a Catholic perspective, corporations should definitely refrain from doing business with immoral or unethical organizations and people. As Catholics, we are supposed to avoid scandal. This means not only scandal in the sense of committing sins but also scandal as Father Mike Schmitz describes it: leading others to believe that something immoral or sinful is okay, even if you don’t actually believe it’s okay and never commit the sin yourself. Therefore, even if a company does not directly and actively commit the immoral and unethical acts itself, doing business with immoral or unethical organizations and people can lead to scandal, because others may believe that what those organizations are doing is okay since the company didn’t find anything wrong with it that would cause it to stop doing business with them.

Going back to the idea of corporate personhood, it would seem foolish to give corporations the same legal status as individual natural persons without also placing on corporations the limitations that are placed on individuals. As individuals, we are not allowed to kill or steal or do things of that nature, but the law doesn’t necessarily punish all unethical behavior, and it doesn’t require anyone to behave fully ethically. I don’t think it’s possible to expect or force corporations to have exactly the same ethical obligations and responsibilities as individuals because they are not the same by nature. However, it is important to put checks on corporations because they can have a lot of power and influence over many people, and if it were left up to the corporations to decide, they probably wouldn’t always make the ethical choice because it could result in backlash or the corporation’s demise. As much as the idealist in me would like everyone and all corporations to always make good decisions, the reality is that this likely wouldn’t happen on its own.

Reading 07

The story of the dad who got upset at Target for sending his daughter baby product coupons is the classic, go-to example of data collection crossing the line with regard to privacy. It’s an attention-grabbing story, but it reveals a hidden cost of the conveniences and helpful tools that are sometimes marketed as “free” because they have no monetary cost attached. Since there’s no such thing as a free lunch, it would be naive to think that all of these things are actually free with no strings attached. The service might be “free” but require in-app purchases to access the functionality that makes it useful, or “free” thanks to sponsors who take every opportunity to remind you through their ads that you got the product or service because of them, or “free” in exchange for consent to collect your data. While it sounds unfair to us as consumers, I think we only say that because we don’t like it. As long as companies are disclosing what information they are keeping on you, they have a right to collect your data in exchange for their service – to a reasonable extent. Some guidelines that I would agree with are those stated in the General Data Protection Regulation (GDPR): companies should have a reason for collecting data before they do it (as opposed to collecting it just in case it will be useful in the future), the data should be anonymized, and users should be told what data is being collected and what it will be used for. Even if most people don’t read the terms and conditions, the company can at least say that it did its part in notifying the user. Unfortunately, the GDPR only applies to data collected in the EU for now, but adopting a global standard would be nice.
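To make the anonymization guideline a bit more concrete, here is a minimal sketch of what pseudonymizing collected records before storage might look like. The record fields, the salt handling, and the collect_event helper are hypothetical illustrations of the general idea, not anything prescribed by the GDPR text or the readings, and real anonymization also has to guard against re-identification from the remaining fields.

```python
import hashlib
import os

# Hypothetical salt kept separate from the stored data; rotating or
# discarding it makes the pseudonyms harder to reverse.
SALT = os.urandom(16)

def pseudonymize(user_id: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256(SALT + user_id.encode("utf-8")).hexdigest()

def collect_event(user_id: str, page: str, purpose: str) -> dict:
    """Store only what is needed for a stated purpose, without the raw ID."""
    return {
        "user": pseudonymize(user_id),  # no raw identifier is kept
        "page": page,                   # the minimum needed for the purpose
        "purpose": purpose,             # recorded so the reason for collection is explicit
    }

if __name__ == "__main__":
    event = collect_event("alice@example.com", "/products/stroller", "ad personalization")
    print(event)
```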

As to whether it’s ethical for companies to track your data in exchange for services, I don’t see a problem with it if the services are frivolous or non-essential to life. This is the price they are charging you for their service, and if you’re not willing to pay it, then don’t use the service. Let me qualify this by acknowledging that it’s much easier said than done, and that it shouldn’t be taken to the extreme of implying that all services should force you to hand over your data in order to use them. I’m not sure where the balance should be. If the data is just being used to customize your ads, I don’t personally have a big problem with that. Some might argue that customized ads target people and prey on them, but I think it’s fair game to replace generic ads with custom ones. If you’re really worried that it will make you spend more money than you should, you can consider it an opportunity to exercise discipline and grow as a person. If you didn’t already guess, I don’t use ad blockers. Sometimes I like seeing ads for things that I actually like or receiving discounts on things that I would actually buy. Furthermore, I get that not everything can be provided for free, and ads are a way for businesses to provide their services at lower (or no) monetary cost to their users and for the advertising businesses to make more money. All I ask is that ads stay appropriate, especially on sites and apps that kids frequently visit.

Reading 06

Tradeoffs – it’s a word we hear all the time as engineers, and software engineering is certainly no exception. Privacy vs. security is a tough one because surveillance affects everyone even though it is meant to target only specific individuals, and there isn’t a single policy that benefits both sides. In Apple’s case with the FBI’s request to remove features so that it would be easier to gain access to a device, I’m not sure what the right thing to do is, but I do commend Apple for taking a firm stand and holding true to its values. They seem to have done the right thing in considering the implications of granting the request and the potential dangers of weakening their encryption standards. They are genuinely trying to avoid harm, which is an ethical responsibility they have as leaders in the tech industry. As Tim Cook stated in his letter to customers, they are not trying to protect criminals – that is just a side effect of protecting the privacy of their other customers. I’m not sure I entirely buy the whole “if we don’t encrypt, people who are up to no good will find someone else who does” reasoning, but I think there’s some truth to it. Those who carefully plan out their crimes will go to greater lengths to keep their plans hidden, so Apple’s back door may not be helpful; conversely, those who don’t try as hard will probably leave behind evidence in other accessible places, so the back door wouldn’t be needed.

When it comes to privacy vs. security, it’s hard to draw the line, as I mentioned before, because they seem mutually exclusive. Security precludes privacy because it requires that everyone and everything be accounted for. You can’t trust that a given person isn’t doing something evil unless you know what they are doing, period. Sure, most of us are doing mundane things like watching cat videos or going to a course website to view this week’s homework problems, but there’s no way to separate these people from the bad guys accurately and permanently so that we can safely ignore the data generated by the “normal” people. When I go to the doctor and they want to take a closer look at something, I usually let them, because even though it’s uncomfortable, I know it’s important for my health. As uncomfortable as it may be to have somebody following all that you do, at the end of the day, I do believe that if you have nothing to hide, you have nothing to fear. Unless, of course, you’re worried that the information the government has on you will be taken out of context or used to paint an inaccurate picture of what happened. I have no good response to this except that there have been cases where this occurred without access to our phones, so I’m not sure that giving the government this new data would make the situation that much worse. It shouldn’t be happening at all. That being said, I think the more relevant question is “Which do we value more?” Do we value everyone’s privacy at the expense of overlooking potential danger, or do we try to stop the bad guys at the expense of the good ones? I’d say that safety and security are more important than personal comfort or convenience.