Reading 14

Even as coding becomes more and more relevant to society, I've never thought of it as the new literacy. The analogy makes sense to me, but I'm a little skeptical of how much good knowing just a little about coding can do for a person (although I guess we learned last week that you need to know about coding/software to drive a tractor these days). I think it's a good idea to at least offer a computer science course in high school to increase exposure and help students begin to discern what they want to pursue in the future. My school was going to offer AP Computer Science, which I had signed up for, but it ended up being cancelled; one other computer science course was offered in its place, but I didn't take it. The exposure would have helped me feel less intimidated coming to college after I decided to major in computer science, because even though the fundamentals/intro courses at Notre Dame are designed so that people with no experience can still succeed, I felt behind compared to my peers since I hadn't done anything computer science related before coming here.

Coding is being called the new literacy because some people believe it is almost as important and prevalent as reading and writing in today's world. In elementary, middle, and high school, I was required to take English, science, social studies, and math because these were considered core subjects. Some supporters of required CS classes argue that it is a core skill students need, while many of this week's articles suggested it is important because the demand for computer scientists is outgrowing the supply. I find this argument interesting because when I took history and science classes, I didn't feel I was taking them to prepare for a career as a historian or scientist. I was under the impression that they were meant to make me intellectually well-rounded and to introduce me to the different methods and approaches that those fields highlight best. In that sense, I think the study of computer science and its thought process offers something new that is missing from the other subjects. Skeptics of exposing everyone to programming at a younger age argue that it's not worth the time and money because some people just aren't good at it, or that we shouldn't give people the impression that everyone can be successful at computer science. I think that impression came from society, and I don't think offering computer science courses perpetuates it: nobody thinks that everyone who takes an English class will become a best-selling author, or that everyone who's ever taken a science class will discover a new particle.

I do agree with people who raise the concern that if we are going to introduce these computing classes, they have to be done right, with a good curriculum. Schools will also have to establish standards to measure whether students are progressing after each grade, fulfill staffing requirements for the different grade levels, and figure out how to incorporate it into their existing framework or schedule. I think that in elementary school, kids should learn binary as a unit in math, and computational thinking should be introduced with very basic examples, not in a separate class but as an addition to existing classwork where appropriate (see the sketch below for the kind of exercise I mean). In high school, there should be optional electives for those who want to learn more about computer science, including introductory programming and logic courses as well as more advanced computational thinking or current-topics studies of how computers are used to solve problems and where technology is headed. This big-picture thinking and understanding of why can help motivate students to learn how to make it happen. It can also help them recognize problems that are well suited to computational solutions, understand the field's limitations, and imagine other possible applications. As much as the world needs people to make things happen, there is also a need for people to come up with the ideas.
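
Here is that sketch: a minimal Python example, with names I made up purely for illustration, converting a number to binary by repeated division by 2 – the sort of exercise that could slot in right next to long division in an existing math unit.

```python
# Hypothetical classroom exercise: build a binary numeral the same way
# long division is taught, by repeatedly dividing by 2 and keeping remainders.
def to_binary(n: int) -> str:
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))  # the remainder is the next (lowest) bit
        n //= 2
    return "".join(reversed(bits))

print(to_binary(13))  # prints "1101", since 13 = 8 + 4 + 1
```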

I do think generally anyone can learn to program, and it wouldn’t hurt for everyone to know a little bit about it. Even if there are some people who can’t program well, they can at least learn to understand very basic code to interact with technology that they come across in day-to-day life. It feels kind of petty to me to try to keep coding exclusive to certain groups of people. I don’t think anyone should be forced to study computer science in depth, but I don’t see why people take issue with encouraging students to have a basic understanding of computer science approaches to problem solving, even if they’re not interested in or good at coding. I think the goal of these efforts to bring computer science to the youth needs to be clarified, and I think it should aim to give students the beginnings of a foundation in computing, not to try to make every student into the next Mark Zuckerberg.

Reading 13

According to WIPO, “A patent is an exclusive right granted for an invention – a product or process that provides a new way of doing something, or that offers a new technical solution to a problem. A patent provides patent owners with protection for their inventions.” Basically, it’s a way to claim ownership over an invention and control who can and cannot use it, and under what circumstances, so that you can profit from your hard work. That’s the idea, at least. Patents acknowledge the value of ideas and intellectual property by providing legal protection against use of a patented invention without the owner’s permission. They enforce a “give credit where credit is due” system and are financially beneficial to inventors. Patents were intended to encourage innovation because if you came up with a really good idea, nobody could steal it from you. Sometimes, however, patents are counterproductive to this goal because they prevent others from making use of the invention in their own ideas.

I used to just assume that patents were good because I’ve never known life without them. I think they are useful in some cases and make less sense in others, but I don’t know where I’d draw the line. They seem to act as protection for the small companies/individuals and a club for the big ones. If I were to come up with a really cool idea, I wouldn’t want some big company to make millions off it while I don’t see a dime just because they had the money and connections to implement the idea that I came up with. That being said, I don’t think they promote innovation as much as they hinder it because people are curious and driven, and I think they would still try to invent new things even without the guarantee of patent protection. I think people are more deterred by the risk of being sued for the work they’re doing than the deprivation of recognition or money that would occur if their idea was stolen.

Although I agree that the idea of patents should extend to software, I don’t think the way it’s currently being done is necessarily the best way to do it. I would consider software to be intellectual property, but the software developer community doesn’t seem to care as much about claims of ownership as legal and corporate entities do. In fact, developers seem to think patents discourage innovation, because the community emphasizes sharing and collaboration through open source projects. Until better patent guidelines for software are written, I don’t think software should be patented. One of the article authors was very passionate about this, though I’m not sure that kind of intensity is the most effective way to persuade someone. Software development is still relatively young and growing quickly, so although I don’t think it should grow unregulated, I don’t think it should be constrained this early on by patents and legal issues. I also don’t know how effective it would be to invalidate all existing patents now, because the tech giants have used them to get where they are, and they’re probably too powerful at this point to feel any consequences of a patent-free software world.

One group that would definitely be hurt by the invalidation of patents is patent trolls: companies that make their money solely by buying patents and suing people and businesses that are potentially in violation of them. Patent trolls exploit the system and expose its weaknesses. Giving the benefit of the doubt, the intent behind patents is to provide protection, but patent trolls use them as clubs to punish others unnecessarily, and this hinders innovation. It seems like a sketchy business model, and while it’s legally permissible, I wouldn’t say it’s ethical. Furthermore, trolls contribute nothing to the world if all they do is hold patents and sue others, so they should find more honest and productive work.

Reading 12

Self-driving cars were not originally developed because people were lazy and didn’t want to drive themselves, although I can’t say I particularly enjoy driving, so I wouldn’t mind having the convenience of travelling by car without having to drive myself. On a more serious note, self-driving cars are often viewed as a good thing because, if done properly, they would ideally be safer than human-operated cars. Some people are just bad drivers, while others get tired on long-distance trips and have to make many stops to stay alert enough to drive safely. Computers, in theory, don’t get tired and are more methodical and precise than humans. Since driving consists of making decisions based on an established set of rules, it seems like a problem suited to computers. Some are less optimistic and enthusiastic, however, because it won’t be easy to program a car to perfectly navigate the roads and handle every possible situation that may arise. Some skeptics think that autonomous vehicles are more dangerous because of the grave consequences of an improperly programmed system. It’s a difficult ethical question whether they are worth developing, because it’s inevitable that people will be harmed and die in the process. I think it’s too early to say whether they will make our roads safer.

Decisions have to be made regarding how to deal with situations where loss of life is unavoidable, but I’m not sure what the ethically correct decision biases would be. I think it’s important to keep mechanical features that would allow humans to manually regain control in the event that something goes wrong or human instinct is better (safer) than the computer system. There should always be a way to “pull the plug” on a system. If you don’t completely hand over control to the vehicle, however, then the human “driver” could potentially be held liable when an accident occurs. If there is no way for a human to take control of the vehicle, then the company that made the vehicle should be held liable. I don’t know how realistic this is though because bearing 100% of the blame would be a huge risk and make companies less likely to sell the product and assume the liability. Just as it’s hard to write one law that applies to every possible scenario, liability should be determined on a case-by-case basis.

While the dangers of self-driving cars are important to consider, there are also benefits that could come from autonomous vehicles. With the rise of Uber and Lyft came an increase in autonomy, because people could get anywhere they needed to go without owning or having access to a car. This made riders more independent, and it gave many average people a way to make extra money. Autonomous vehicles could increase independence further by allowing those who can’t drive to get around in areas with poor public transportation. The caveat is that all those jobs created by ridesharing services would start to fade away. The articles for this week mention that the economy will shift and rebalance itself by creating jobs in other areas, such as building and maintaining the vehicles, as it has with past technological advancements. I can see how that would work, but I’m concerned that the new jobs will require a different skill level, and it will be difficult for those who lose their jobs as drivers to adjust and find new work. The idealist in me wants to believe that everyone would get more education and work hard to adjust, but realistically, it’s hard to see that happening. The Catholic Social Teaching of the Dignity of Work and the Rights of Workers calls us to help those whose jobs are displaced to find new work or gain the skills needed to survive in the new economic environment. I don’t know what the government’s role would be (or even whether it should be involved) in the economic side of things, but I do think it should regulate the safety of self-driving cars in terms of standards for production, testing, and safety features.

Right now, I don’t think I would want a self-driving car, but I’m not sure whether that will change in the future. It’s still hard for me to wrap my mind around fully autonomous vehicles, but we’re already on the way with lane detection, self-parking, and similar features. The transition toward fully autonomous vehicles will be a gradual one, which will help everyone adjust to and accept it. If we can create proper self-driving cars, that would be great, and I wouldn’t have to worry about driving myself. In the meantime, I should try to drive more and get more practice before I move out on my own and am responsible for finding my own transportation.

Reading 11

Artificial intelligence is a buzzword that many people use, and we all have a general idea of what it is, but we don’t often take the time to define it. Even the articles assigned for this week didn’t formally or explicitly define AI. They did provide some descriptions and try to characterize the field, however, and the best one I came across described AI as programming computers to perform tasks that seem intuitive to humans. It is about extending what we can do with computers by having them solve more “complex” problems that appear to require understanding or intelligence, although “Rise of the Machines” points out that what makes these problems harder for computers is that we haven’t found (or there doesn’t exist) a formal set of rules to apply to them, so “tasks that are hard for humans are easy for computers, and vice versa.” I had never thought of AI this way before, but it makes a lot of sense. If we are trying to mimic human intuition (because that is all we know) and re-create it in machines, we should consider where the boundary is, if any, between human and artificial intelligence.

Growing up, I always heard that there were different kinds of smart. Some people are book smart while others are street smart, and there’s this thing called emotional intelligence. The emotional side of things seems difficult if not impossible to code into a computer, but maybe I’ll be proven wrong in the future.

So far, much of the hype in AI surrounds game-playing algorithms. While AlphaGo, Deep Blue, Watson, and the like are all interesting examples of recent advances, they have each been optimized to do one thing only. They may be able to “learn” more than a traditional program, but they can’t generalize that knowledge beyond the scope of the problem they were designed to solve. It’s also hard for me to accept these as proof of the viability of artificial intelligence because they aren’t really making their own decisions. In these game-playing algorithms, some bias has to be introduced in the heuristics used to make decisions or break ties, so I’m not entirely convinced that the algorithms are “thinking” on their own. However, in the direction we’re headed with neural networks, we understand less and less about how they make their decisions, and they’ve been able to detect patterns that humans can’t even see or comprehend. This is more what I expect when I think of artificial intelligence, but they’re still just attempts to mimic human intelligence.
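
To make that concrete, here is a minimal sketch of minimax over a tiny, made-up game tree. The tree, the leaf scores, and the tie-breaking rule are all hypothetical, but they show how human choices get baked into what looks like the algorithm “deciding” on its own.

```python
# Minimax over a toy game tree. Leaves hold heuristic scores that a human
# chose; higher is better for the maximizing player.
tree = {"root": ["A", "B"], "A": ["A1", "A2"], "B": ["B1", "B2"]}
scores = {"A1": 3, "A2": 5, "B1": 3, "B2": 7}

def minimax(node: str, maximizing: bool) -> int:
    if node in scores:  # leaf: apply the human-written heuristic
        return scores[node]
    values = [minimax(child, not maximizing) for child in tree[node]]
    return max(values) if maximizing else min(values)

def best_move(node: str) -> str:
    # max() keeps the first child on ties, so "prefer A over B" is a bias
    # the programmer introduced, not something the program figured out.
    return max(tree[node], key=lambda child: minimax(child, maximizing=False))

print(best_move("root"))  # A and B both evaluate to 3; child order breaks the tie
```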

The Turing Test and Searle’s Chinese Room are interesting thought experiments that raise philosophical questions. I don’t think the Turing Test is a good enough indicator of intelligence because these chatbots are, once again, only programmed to do one thing. While it’s cool for computers to have seemingly intelligent conversations, that doesn’t capture the full breadth of human intelligence. I think there’s something special and unique about humans that goes beyond their “intelligence,” and it’s important to acknowledge that difference. I like the Chinese Room argument because it exposes how AI really works, but another part of me believes that, in the strict sense of “intelligence,” it doesn’t matter that computers don’t actually understand: they can behave as though they understand, and that’s enough to convey understanding.

As I mentioned before, humans have a unique quality to them, and I’m not worried that it will ever be replicated in a machine. In addition, computer programs are usually made to optimize something, and I don’t think humans work that way, or at the very least, I don’t think we could ever identify and codify the underlying “something” that we are all supposedly optimizing. For example, there are people with developmental disabilities who lack “intelligence” in the strict sense of the term and may never be considered a challenge to Watson, but they are just as worthy of love and just as valued in their humanity as Ken Jennings, which is something we would never and should never say about computers. My only concern is that we will try to incorporate so much artificial intelligence into our daily lives that we lose a significant amount of human-to-human interaction. As humans, we are made for community, and without it, we would be worse off. As far as I know, we are still in control, and we can remain in control of artificial intelligence by limiting how much of it we let into our lives.

I don’t know if a computing system could ever be considered to have a mind; I don’t know exactly how I would define a mind. I’m taking my second philosophy course next semester, on Minds, Brains, and Persons, so I’ll hopefully have a better understanding of it after that. Something that should be considered, though, is that there is a wide variety of ideas and values among humans, and I don’t see how that would be implemented in artificial intelligence. If artificial intelligence systems are meant to represent the best of the field, every “instance” would be the same, and there would be no differences between them to promote growth and development. To introduce differences, you could train them on different sets of data, but that would upset people because it’s unfair, so it doesn’t seem like a feasible route. Computing systems also can’t develop their own morality because their goal is whatever they are programmed to have as an end goal. How they make decisions is based, at some level, on heuristics provided by humans, so their decision making is biased. Any values an artificial intelligence system possesses would be indirectly passed down from a real person, and I don’t think the world is ready to decide which value or moral system is best to give the machine. There are already arguments about bias and censorship in assistants like Alexa on devices such as the Echo Dot. I don’t know what the answer is, but I think it’s an important consideration because any system that is going to make decisions must have a motivation, end goal, or set of rules to inform those decisions.

Reading 10

Trolling is basically online heckling. People who troll on the internet usually do it to provoke others and cause disagreement, and often it isn’t even prompted by the post itself. It seems more likely that the person leaving inappropriate or unnecessary comments was already agitated and turned to the internet to take out their frustration on someone else in a misguided attempt to feel better. I don’t think that transfers their anger or pain; instead, the pain is spread and everyone is worse off. Trolling can be a form of cyberbullying, which is generally when someone uses the internet as a means to bully someone else. Traditional bullying is done face to face, while cyberbullying distances the bully from the victim, making it easier for the bully to avoid consequences and act without thinking. Bullying can have serious consequences, it should be taken seriously, and we should try our best to prevent it. While I don’t think technology companies should be obligated to prevent cyberbullying by blocking posts based on their content, I do think companies should do their best to respond to bullying incidents. Preventing cyberbullying outright would be hard because it is very contextual and subjective, so detecting every instance automatically is unrealistic. I like the idea of pop-up boxes that warn users and make them take a half second to think before they post something, and I think it’s worth trying because it would probably help more than hurt the situation (I sketch the idea below). Companies should also respond seriously to stalking allegations, because stalking is illegal and presents a real threat of danger.
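
A minimal, hypothetical version of that pop-up check might look like the following. The word list and prompt are invented for illustration, and a real system would need far more contextual judgment than keyword matching.

```python
# Hypothetical "think before you post" check: crude keyword matching plus a
# confirmation prompt. Real moderation needs context, not just keywords.
WATCHLIST = {"idiot", "loser", "pathetic"}  # invented examples

def confirm_before_posting(text: str) -> bool:
    flagged = WATCHLIST & set(text.lower().split())
    if not flagged:
        return True  # nothing flagged, so post immediately
    answer = input(f"Your post contains {sorted(flagged)}. Post anyway? [y/N] ")
    return answer.strip().lower() == "y"

if confirm_before_posting("what a pathetic take"):
    print("Posted.")
else:
    print("Post cancelled.")
```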

A common complaint from victims of trolling is that trolls hide behind anonymity. This makes sense to me, and before doing this week’s readings, I had never heard an argument against it. Even if people are just as likely to post mean things online under their real names as anonymously, the cloak of anonymity still shields people from taking responsibility for what they say, and some will be deterred when you take that option away. These are the people tech companies should be more concerned about, because companies actually have the power to effect change there by not allowing anonymity. People who would have posted mean things under their real names anyway will do so no matter what, and the only way to change that is to tackle the problem at its source and convince those individuals not to be mean, which is outside the scope of a tech company’s ethical obligations.

But just like some of the other issues we’ve discussed in class, I don’t think there’s a solution that will please everyone. You can’t extend the protection of anonymity only to some people and ensure that those protected are the “good guys” without giving it to everyone, and you can’t deprive the “bad guys” of anonymity while extending it to everyone else without inconveniencing both sides. The GamerGate controversy confused me, especially after reading “The Future of the Culture Wars Is Here, and It’s Gamergate.” It all seems very petty, which is unfortunate because it can and has escalated quickly into real-world consequences. I don’t think this type of behavior is best countered via the tech platforms on which it occurs. Instead, I think it would be more effective to go to the source and figure out why people are trying to create discord and stir up trouble. Again, this is not the responsibility of the tech companies, but it is something we could all do a little of: connecting with others and helping them not feel the need to post something harmful. This may be a naive view of the world, but it’s the best way forward I can see.

I do think cyberbullying is as much of a problem as traditional bullying. We can’t protect or shield children from everything, but the other extreme of making them deal with it entirely on their own isn’t appropriate either. I think support for both victims and bullies would help: it could discourage victims from continuing the cycle by bullying others, and it could discourage bullies from continuing because they’ll feel better about their own lives and won’t feel the need to tear others down. I view trolling as less of a problem because it’s often impersonal and easier to ignore. Most people who use the internet are aware that trolling is something they’ll have to put up with, and just as the internet’s distance makes it easier for trolls to post, victims can usually walk away by turning off their device (to a certain extent, at least). Trolls are usually not trying to do anything productive; they troll to get a reaction out of people, so the best way to take their power away seems to be to ignore them and not react.

I don’t know enough about real-name policies to say how effective they are, and I think more research needs to be done in this area. One difficulty is that it would be hard to completely isolate the effect of using a real name versus being anonymous or pseudonymous from the other outside factors that influence people’s online behavior. I don’t have a problem with services that require me to use my real name, although, of course, I prefer not to when I have the choice; if I hand my name over, they can link it to data about me elsewhere, and that’s often just unnecessary.

Anonymity on the internet, like the internet itself, can be both a blessing and a curse; their effects depend on how a person chooses to use them. It appears to me that the capacity for good and the capacity for harm via the internet and anonymity are connected: as one grows, so does the other, and as one shrinks, the other does too. Therefore, trolling and cyberbullying don’t seem to be issues that can be properly addressed through technology, since technology is just the medium through which they are manifested. Instead, we need to focus on changing people.

Reading 09

The internet is a powerful tool. It allows people to share their thoughts faster and more widely than ever before, which would be amazing if we lived in a utopian society where everybody got along. Unfortunately, reality has proven otherwise, and now that the internet has become such a large part of so many people’s lives, the question of where to draw the line between censorship and free speech must be considered, because it has very practical consequences and implications. The two directly oppose each other, but the line gets blurred when words turn into actions.

Free speech is not completely unbounded or unqualified. The famous restriction used to demonstrate its limits is that you can’t yell “FIRE!” in a crowded theater when there is no fire. This rule was not made because doing so would be rude, or to teach Americans better movie etiquette, but because it would (or could) cause unwarranted danger. Deciding what else falls under the category of dangerous speech is difficult because it can be somewhat subjective, which makes it a gray area. In China, the government views the ideas of independence and democracy as dangerous, even though these same ideas are held up as ideals in the United States.

Censorship places limitations on free speech, and like free speech, it can be dangerous when misused and abused. Controlling which ideas are and aren’t allowed to be shared gives one group of people power over others, which is why censorship has to be treated carefully. I think a good starting standard is that speech directly inciting violence against others should be censored online. Things are less clear, however, when violence occurs as the result of someone’s reaction to an online comment: you can’t always predict or control how others respond to something you say, and you certainly don’t control their behavior, so it’s harder to tell who is to blame when that happens.

It is important to note that telling companies what they can and cannot censor is itself a form of censorship; you have to consider where a company’s own free speech rights begin and end. I agree with Robert Epstein when he writes that “If Google were just another mom-and-pop shop with a sign saying ‘we reserve the right to refuse service to anyone,’ that would be one thing. But as the golden gateway to all knowledge, Google has rapidly become an essential in people’s lives – nearly as essential as air or water.” Due to their size and influence, it is all the more crucial that companies such as Google and Facebook censor where necessary and only where necessary – no more, no less.

I think the most obvious abuse of censorship is using it to suppress opposing views just because you disagree with them. Even this is not absolute, because a case could be made for censoring an opposing view that is genuinely dangerous or harmful, but relativists will say that this is subjective, which complicates things. From my point of view, government dissent is not gravely dangerous in and of itself (as long as it does not directly call for violence against people or property), and it would be unethical to censor those opinions.

On the other hand, news and messages spread by terrorist groups should be censored because, by definition, terrorist groups aim to cause terror. The question then becomes how to decide whether a group is in fact a terrorist group or just a group whose views you disagree with. If a group is hostile and promotes violence justified by its ideology, that is a pretty good indicator that it is a terrorist group. If it just says things that make you uncomfortable, and many reasonable people aren’t concerned, then you should re-examine why it makes you so uncomfortable, because that probably has more to do with how you perceive things than with what was actually said. A similar approach could be taken when considering whether hateful and discriminatory comments should be censored. Whether something is offensive is determined by the recipient, not the writer, so while we have a decent idea of what might cause offense, we cannot control how others react, and we can’t guarantee that what we write won’t offend someone. My initial thought would be to censor anything that calls for physical violence, but if we recognize mental health as being as legitimate as physical health, it follows that we should also censor comments that cause mental and psychological damage. This is harder to detect, so it would be harder to enforce.

A lot of the debate surrounding online censorship seems kind of petty. I don’t think it’s ethical for large, public companies to remove information that is not in line with their interests and political beliefs. However, I do think that sometimes people and organizations try to push the envelope and provoke the large companies into removing their content so that they can make a big deal about it and gain public support for the “wrongdoing” that the large company committed. I think both sides need to lay off a bit.

If political censorship is going to happen, the same rules should apply to everyone regardless of where they fall on the spectrum. Some extremist groups promote violence, and those should be censored. Groups that just promote peaceful discussion of ideas should not be censored simply because a company disagrees with them. When it comes to discussion of illegal activities, I’m not sure what the right course of action is. My instinct is that such discussion should be censored, because not doing so could be viewed as passive endorsement, and talking about illegal activities could make people more likely to engage in them; but a blanket ban would also eliminate constructive conversations about the pros and cons of making an activity legal or illegal. A more nuanced approach is needed for the censorship of discussions of illegal activities.

There is a lot of speculation about what will happen if we enact a law or reverse a ruling, but it’s just speculation. You don’t know for sure until you do it. Censorship is a powerful tool. I don’t know exactly what the answer is as to how to apply it, but I think it lies somewhere between the two extremes of no regulation and censoring everything. I don’t think that we’ll ever completely figure it out and get it perfect, but I think we should try our best to use censorship responsibly to work towards the common good.

Reading 08

Corporate personhood is the idea that although corporations are not natural persons, they are legal persons, and as such they are afforded the rights of a natural person where applicable. This is an interesting concept because, at the end of the day, corporations are just collections of people (and money). Because of this, it’s inevitable that we have to consider what they should be allowed to do and what limitations can be placed on them. I like Kent Greenfield’s point that corporations should be more like people, because it feels like a natural move to make. And if the contention surrounding corporate personhood comes from questions of what corporations ought to do and what’s fair, the human aspect would help keep that in perspective, in contrast to the inherently profit-driven motivations of corporations themselves. The debates about corporations having freedom of speech and freedom of religion seem like just another attempt to control what other people say and do, which really only becomes an issue when you disagree with what they’re saying and doing. If, at the end of the day, corporations are run by people, it’s hard for me to see how you can separate the two and ask (or legally force) people to do things that go against their personal beliefs, especially if their company is not public. It is troubling that the legal precedent for this idea was founded on a series of falsehoods that were never corrected, but that doesn’t necessarily mean the idea itself is a bad one.

In the case of IBM’s involvement in Nazi Germany, I think it was an unethical act on the company’s part. We spent the beginning of the semester discussing how technology developers are ethically responsible for considering the implications of their creations, but even if this wasn’t taken into account, IBM is culpable because they knew what their products were being used for, and they actively continued to support Nazi efforts by maintaining the machines and providing materials and means for so many people to be unjustly killed. Even though I’m willing to give the benefit of the doubt that maybe IBM didn’t know what it was getting into initially with the creation of the census system, the company had multiple chances to step away after that when things escalated.

From a Catholic perspective, corporations should definitely refrain from doing business with immoral or unethical organizations and people. As Catholics, we are supposed to avoid scandal: not only scandal in the sense of committing sins, but also scandal in the sense of leading others to believe that something immoral is okay, even if you don’t actually believe it’s okay and never committed the sin yourself, as Father Mike Schmitz describes it. Therefore, even if a company does not directly and actively commit immoral and unethical acts itself, doing business with immoral or unethical organizations and people can lead to scandal, because others may believe that what those partners are doing is okay since you didn’t find anything wrong enough with it to stop doing business with them.

Going back to the idea of corporate personhood, it would seem foolish to give corporations the same legal status as individual natural persons without also placing on corporations the limitations that are placed on individuals. As individuals, we are not allowed to kill or steal or do things of that nature, but the law doesn’t punish all unethical behavior, and it doesn’t require anyone to behave fully ethically. I don’t think we can expect or force corporations to have exactly the same ethical obligations and responsibilities as individuals, because they are not the same by nature. However, it is important to put checks on corporations because they can have a lot of power and influence over many people, and if it were left up to the corporations to decide, they probably wouldn’t always make the ethical choice, because it could result in backlash or the corporation’s death. As much as the idealist in me would like everyone and every corporation to always make good decisions, the reality is that this likely wouldn’t happen on its own.

Reading 07

The story of the dad who got upset at Target for sending his daughter baby product coupons is the classic, go-to example of data collection crossing the line with regard to privacy. It’s an attention-grabbing story, but it reveals a hidden cost of the conveniences and helpful tools that are marketed as “free” because they carry no monetary cost. Since there’s no such thing as a free lunch, it would be naive to think these things are actually free with no strings attached, whether it’s a “free” app that requires in-app purchases to unlock the features that make it useful, a product that’s “free” thanks to sponsors who take every opportunity to remind you of that through their ads, or a service that’s “free” in exchange for consent to collect your data. While that sounds unfair to us as consumers, I think we only say so because we don’t like it. As long as companies disclose what information they keep on you, they have a right to collect your data in exchange for their service – to a reasonable extent. Some guidelines I would agree with are those stated in the General Data Protection Regulation: companies should have a reason for collecting data before they collect it (as opposed to collecting it in case it becomes useful later), the data should be anonymized, and users should be told what data is being collected and what it will be used for. Even if most people don’t read the terms and conditions, the company can at least say it did its part in notifying the user. Unfortunately, the GDPR only applies to data collected in the EU for now, but adopting a global standard would be nice.
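
To make those guidelines a bit more concrete, here is a minimal sketch of pseudonymization in the spirit of the GDPR: drop the fields the stated purpose doesn’t need and replace the direct identifier with a salted hash. (The record fields and salt are hypothetical, and real anonymization is harder than this; combinations of innocuous fields can still re-identify people.)

```python
import hashlib

SALT = b"rotate-me-and-keep-me-secret"  # hypothetical secret salt

def pseudonymize(record: dict) -> dict:
    # Replace the direct identifier with a stable pseudonym and keep only
    # the fields needed for the stated purpose (data minimization).
    pseudonym = hashlib.sha256(SALT + record["email"].encode()).hexdigest()[:12]
    return {"user": pseudonym, "page": record["page"]}  # name/email dropped

raw = {"email": "jane@example.com", "name": "Jane Doe", "page": "/checkout"}
print(pseudonymize(raw))
```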

As to whether it’s ethical for companies to track your data in exchange for services, I don’t see a problem with it if the services are frivolous or non-essential to life. This is the price they charge for their service, and if you’re not willing to pay it, then don’t use it. Let me qualify that by acknowledging that it’s much easier said than done, and that it shouldn’t be taken to the extreme of implying that all services should force you to hand over your data. I’m not sure where the balance should be. If the data is just being used to customize your ads, I don’t personally have a big problem with that. Some might argue that customized ads target people and prey on them, but I think it’s fair game to replace generic ads with custom ones. If you’re really worried that they will make you spend more money than you should, you can treat them as an opportunity to exercise discipline and grow as a person. If you didn’t already guess, I don’t use ad blockers. Sometimes I like seeing ads for things I actually like or receiving discounts on things I would actually buy. Furthermore, I get that not everything can be provided for free, and ads are a way for businesses to offer their services at lower (or no) monetary cost to their users and for the advertising businesses to make more money. All I ask is that ads stay appropriate, especially on sites and apps that kids frequently visit.

Reading 06

Tradeoffs – it’s a word we hear all the time as engineers, and software engineering is certainly no exception. Privacy vs. security is a tough one because it affects everyone even though it is meant to target only specific individuals, and there isn’t a single policy that benefits both sides. In Apple’s case, with the FBI’s request to remove features in order to make it easier to gain access to a device, I’m not sure what the right thing to do is, but I do commend Apple for taking a firm stand and holding true to its values. Apple seems to have done the right thing in considering the implications of granting the request and the potential dangers of weakening its encryption standards. The company is genuinely trying to avoid harm, which is an ethical responsibility it bears as a leader in the tech industry. As Tim Cook stated in his letter to customers, Apple is not trying to protect criminals – that is just a side effect of protecting the privacy of its other customers. I’m not sure I entirely buy the whole “if we don’t encrypt, people who are up to no good will find someone else who does” reasoning, but I think there’s some truth to it. Those who carefully plan their crimes will go to greater lengths to keep their plans hidden, so a back door from Apple may not be helpful; conversely, those who don’t try as hard will probably leave evidence in other accessible places, so the back door wouldn’t be needed.
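
For a sense of why the requested changes mattered technically: as I understand the case, the features at issue were the retry limits and delays that prevent someone from simply guessing the passcode. Here is a toy Python sketch of how quickly a 4-digit PIN falls without those protections; the hashing scheme and PIN are made up for the demo, and real devices also entangle the passcode with a hardware key, which this ignores.

```python
# Toy model: a 4-digit PIN has only 10,000 possibilities, so without retry
# limits or escalating delays, brute force succeeds almost instantly.
# (Real devices use slow, hardware-entangled key derivation, not one SHA-256.)
import hashlib

def hash_pin(pin: str, salt: bytes = b"demo-salt") -> bytes:
    return hashlib.sha256(salt + pin.encode()).digest()

stored = hash_pin("4821")  # the "device's" secret PIN, made up for the demo

for guess in range(10_000):
    candidate = f"{guess:04d}"
    if hash_pin(candidate) == stored:
        print(f"Cracked after {guess + 1} attempts: {candidate}")
        break
```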

When it comes to privacy vs. security, it’s hard to draw the line, as I mentioned before, because they seem mutually exclusive. Security precludes privacy because it requires that everyone and everything be accounted for. You can’t trust that a given person isn’t doing something evil unless you know what they are doing, period. Sure, the majority of us are doing mundane things like watching cat videos or going to a course website to view this week’s homework problems, but there’s no way to separate these people from the bad guys accurately and permanently so that we can safely ignore the data generated by the “normal” people. When I go to the doctor and he or she wants to take a closer look at something, I usually allow it because, even though it’s uncomfortable, I know it’s important for my health. As uncomfortable as it may be to have somebody following all that you do, at the end of the day I do believe that if you have nothing to hide, you have nothing to fear. Unless, of course, you’re worried that the information the government has on you will be taken out of context or used to paint an inaccurate picture of what happened. I have no good response to this except that there have been cases where this occurred without access to our phones, so I’m not sure that giving the government this new data would make the situation that much worse; it shouldn’t be happening at all. That being said, I think the more relevant question is: which do we value more? Do we value everyone’s privacy at the expense of overlooking potential danger, or do we try to stop the bad guys at the expense of the good ones? I’d say that safety and security are more important than personal comfort or convenience.

Reading 05

As engineers, we get to work on some pretty cool stuff, and we can help improve other people’s lives using our knowledge and skills. This is great, but it is also a huge responsibility; in some cases, it’s a matter of life and death. One such case is the Challenger disaster of 1986. The physical cause of the disaster was a rubber O-ring that failed in the cold temperatures, but the true cause was pride, apathy, and the failure to stand up for what is right under pressure. This is an interesting case because the truth didn’t come out until after people had died. Although Roger Boisjoly is held up as a hero for blowing the whistle on NASA’s failure to heed his warnings prior to the launch, Vivian Weil notes that “once the decision to launch had been made, he and the other engineers in Utah fell into line, as expected, and accepted the decision” (Whistleblowing: What Have We Learned Since the Challenger?). Even though he may have acted more ethically than his superiors, he still fell short. The scariest thing is that there were so many opportunities to stop the launch. They had put in the work to identify the issue, and they continued anyway, fully knowing that lives were at stake. All it would have taken was one person along the line to postpone the launch. Everyone involved shares the blame, whether that is the management that disregarded the warning or Boisjoly himself, who went along with the decision and accepted it. It’s hard to speak out against authority, but it’s so important to do so, especially when it could save a life.

I don’t think it was unethical of Boisjoly to share information about what went wrong with the public. He didn’t go straight to the public via some platform like WikiLeaks; he exposed the errors in a proper manner when summoned to testify before the Rogers Commission. Morton Thiokol probably only retaliated against him because they knew they had made a mistake and were embarrassed that others had been made aware of it. It also reflected poorly on NASA, as the client, for proceeding despite the risks. This is where pride came into play: the two organizations didn’t want to admit that they’d messed up and take responsibility for the deaths of people who had trusted them. Even though he faced backlash from his employer and others in the industry, it’s pretty safe to say that Boisjoly made the right move in exposing the poor decisions leading up to the Challenger launch. It’s unfortunate that he was punished rather than rewarded for being honest, but it was ethically the right decision, and I think the personal peace of knowing he did the right thing outweighed the negative consequences he faced. It would have been worse to stay in a company or profession that didn’t take safety concerns seriously and to live with the guilt of knowing he hadn’t done everything possible to ensure other people’s safety. As awful as the Challenger incident was, we can’t go back in time and bring those people back to life. The only way to honor them is to move forward with a greater care for human life and to keep it at the center of every decision made.