Reading 13

According to WIPO, “A patent is an exclusive right granted for an invention – a product or process that provides a new way of doing something, or that offers a new technical solution to a problem. A patent provides patent owners
with protection for their inventions.” Basically, it’s a way to claim ownership over an invention and control who can and cannot use it, and under what circumstances, so that you can profit from your hard work. That’s the idea, at least. Patents acknowledge the value of ideas and intellectual property by providing legal protection against using a patented invention without permission from its owner. They enforce a “give credit where credit is due” system and are financially beneficial to inventors. Patents were intended to encourage innovation because if you came up with a really good idea, nobody could steal it from you. Sometimes, however, patents are counterproductive to this goal because they prevent others from building on the invention in their own work.

I used to just assume that patents were good because I’ve never known life without them. I think they are useful in some cases and make less sense in others, but I don’t know where I’d draw the line. They seem to act as protection for small companies and individuals and as a club for the big ones. If I were to come up with a really cool idea, I wouldn’t want some big company to make millions off it while I don’t see a dime just because they had the money and connections to implement the idea that I came up with. That being said, I don’t think patents promote innovation as much as they hinder it because people are curious and driven, and I think they would still try to invent new things even without the guarantee of patent protection. I think people are more deterred by the risk of being sued for the work they’re doing than by the loss of recognition or money they would suffer if their idea were stolen.

Although I agree that the idea of patents should extend to software, I don’t think the current approach is necessarily the best one. I would consider software to be intellectual property, but the software developer community doesn’t seem to care as much about claims to ownership of software as legal and corporate entities do. In fact, developers seem to think patents discourage innovation because their culture emphasizes sharing and collaboration through open source projects. Until better patent guidelines for software are written, I think software should not be patented. One of the article authors was very passionate about this, and I don’t know if that is the most effective way to persuade someone. Software development is still relatively young and growing quickly, so although I don’t think it should grow unregulated, I don’t think it should be constrained this early on with patents and legal issues. I also don’t know how effective it would be to invalidate all patents now because tech giants have used them to get to where they are, and they’re probably too powerful at this point to feel any consequences of a patent-free software world.

One group that would definitely be hurt by the invalidation of patents is the patent trolls. These are companies that make their money solely by buying patents and suing people or businesses that are potentially in violation of those patents. Patent trolls exploit the system and expose its weaknesses. Giving the benefit of the doubt, the intent behind patents is to provide protection, but patent trolls use them as clubs to punish others unnecessarily, and this hinders innovation. It seems like a sketchy business model, but it’s legally permissible and legitimate, though I wouldn’t say it’s ethical. Furthermore, patent trolls contribute nothing to the world if they solely hold patents and sue others, so they should find more honest and productive work.

Reading 12

Self-driving cars were not originally developed because people were lazy and didn’t want to drive themselves, although I can’t say I particularly enjoy driving, so I wouldn’t mind not having to drive myself places while still having the convenience of travelling in a car. On a serious note, self-driving cars are often viewed as a good thing because, ideally, if done properly, they would be safer than human-operated cars. Some people are just bad drivers, while others get tired on long-distance trips and have to make many stops to make sure they are alert enough to drive safely. Computers, in theory, don’t get tired and are more methodical and precise than humans. Since driving consists of making decisions based on an established set of rules, it seems like a problem that is suited to computers. Some are less optimistic and enthusiastic, however, because it won’t be easy to program a car to perfectly navigate the roads and handle every possible situation that may arise. Some skeptics think that autonomous vehicles are more dangerous because of the grave consequences of an improperly programmed system. It’s a difficult ethical question whether they are worth developing because it’s inevitable that people will be harmed and die in the process. I think it’s too early to say whether they will make our roads safer.

Decisions have to be made regarding how to deal with situations where loss of life is unavoidable, but I’m not sure what the ethically correct decision biases would be. I think it’s important to keep mechanical features that would allow humans to manually regain control in the event that something goes wrong or human instinct is better (safer) than the computer system. There should always be a way to “pull the plug” on a system. If you don’t completely hand over control to the vehicle, however, then the human “driver” could potentially be held liable when an accident occurs. If there is no way for a human to take control of the vehicle, then the company that made the vehicle should be held liable. I don’t know how realistic this is though because bearing 100% of the blame would be a huge risk and make companies less likely to sell the product and assume the liability. Just as it’s hard to write one law that applies to every possible scenario, liability should be determined on a case-by-case basis.

While the dangers of self-driving cars are important to consider, there are also benefits that could come from autonomous vehicles. With the rise of Uber and Lyft came an increase in autonomy for people because they could get anywhere they needed to go without owning or having access to a car. This made riders more independent, and it provided many average people a way of making extra money. Autonomous vehicles could increase independence further by allowing those who are unable to drive to get around in areas with poor public transportation. The caveat is that all those jobs created by ridesharing services would start to fade away. The articles for this week mention that the economy will shift and balance itself by creating jobs in other areas that have to do with creating and maintaining the vehicles, as it has in the past with other technological advancements. I can see how that would work, but I’m concerned that the new jobs will require a different skill level, and it will be difficult for those who lose their jobs as drivers to adjust and find new work. The idealist in me wants to believe that everyone would get more education and work hard to adjust, but realistically, it’s hard to see that happening. The Catholic Social Teaching of The Dignity of Work and the Rights of Workers calls us to help those whose jobs are displaced find new work or gain the skills needed to survive in the new economic environment. I don’t know what the government’s role would be (or even if it should be involved) in the economic side of things, but I do think it should regulate the safety of self-driving cars in terms of standards for production, testing, and safety features.

Right now, I don’t think I would want a self-driving car, but I’m not sure if that would change in the future. It’s still hard for me to wrap my mind around fully autonomous vehicles, but we’re already on the way with the lane detection features, self-parking, and similar features. The transition towards fully autonomous vehicles will be a gradual one, which will help everyone adjust to it and accept it. If we can create proper self-driving cars, then that would be great, and I wouldn’t have to worry about driving myself. In the meantime, I think I should try to drive more and get more practice before I move out on my own and am responsible for finding my own transportation.

Reading 11

Artificial intelligence is a buzzword that many people use, and we all have a general idea of what it is, but we don’t often take the time to define it. Even the articles that were assigned for this week didn’t formally or explicitly define AI. They did provide some descriptions and try to characterize the field of AI, however, and I think the best one that I came across described AI as programming computers to be able to perform tasks that seem intuitive to humans. It is about extending what we can do with computers by having them solve more “complex” problems that appear to require understanding or intelligence to solve, although Rise of the machines points out that what makes these problems harder for computers is that we haven’t found (or there doesn’t exist) a formal set of rules to apply to the problem, so “tasks that are hard for humans are easy for computers, and vice versa.” I’ve never thought of AI this way before, but it makes a lot of sense. If we are trying to mimic human intuition (because that is all we know) and re-create that in machines, we should consider where the boundary is, if any, between human and artificial intelligence.

Growing up, I always heard that there were different kinds of smart. Some people are book smart while others are street smart, and there’s this thing called emotional intelligence. The emotional side of things seems difficult if not impossible to code into a computer, but maybe I’ll be proven wrong in the future.

So far, most of the hype in AI surrounds game-playing algorithms. While AlphaGo, Deep Blue, Watson, and the like are all interesting examples of advances that have been made, they have each been optimized to do one thing only. They may be able to “learn” more than a traditional program, but they can’t generalize that knowledge beyond the scope of the problem they were designed to solve. It’s also hard for me to accept these as proof of the viability of artificial intelligence because they aren’t really making their own decisions. In the case of these game-playing algorithms, some bias has to be introduced in the heuristics used to make decisions or break ties, so I’m not entirely convinced that the algorithms are “thinking” on their own. However, in the direction that we’re headed with neural networks, we understand less and less about how they make their decisions, and they’ve been able to detect patterns that humans can’t even see or comprehend. This is more what I expect when I think of artificial intelligence, but they’re still just attempts to mimic human intelligence.

The Turing Test and Searle’s Chinese Room are interesting thought experiments that raise philosophical questions. I don’t think the Turing Test is a good enough indicator of intelligence because these chatbots are once again only programmed to do one thing. While it’s cool for computers to be able to have seemingly intelligent conversations, that doesn’t fully capture the breadth of human intelligence. I think there’s something special and unique about humans that goes beyond their “intelligence,” and I think it’s important to acknowledge this difference. I like the Chinese Room argument because it exposes how AI really works, but another part of me believes that in the strict sense of “intelligence,” it doesn’t matter that the computers don’t actually understand because they can behave as though they do, and that’s enough to convey understanding.

Like I mentioned before, humans have a unique quality to them, and I’m not worried that it will ever be replicated in a machine. In addition, computer programs are often made to optimize something, and I don’t think that humans do that, or at the very least, I don’t think we would ever figure out a way to identify and codify the underlying “something” that we are all trying to optimize. For example, there are people with developmental disabilities who lack “intelligence” in the strict sense of the term and who may never be considered a challenge to Watson, but they are just as worthy of love and just as valued in their humanity as Ken Jennings, which is something we would never and should never say about computers. My only concern is that we will try to incorporate so much artificial intelligence into our daily lives that we lose a significant amount of human-to-human interaction. As humans, we are made for community, and without it, we would be worse off. As far as I know, we are still in control, and we can remain in control of artificial intelligence by limiting how much of it we let into our lives.

I don’t know if a computing system could ever be considered to have a mind. I don’t know exactly how I would define a mind. I’m taking my second philosophy course next semester, on Minds, Brains, and Persons, so I’ll hopefully have a better understanding of it after that. Something that should be considered, though, is that there is a wide variety of ideas and values among humans, and I don’t see how that would be implemented in artificial intelligence. If artificial intelligence systems are to represent the best of the field, every “instance” would be the same, and there would be no difference between them to promote growth and development. To introduce differences, you could train them on different sets of data, but that would upset people because it’s unfair, so it’s not a feasible route. Computing systems also can’t develop their own morality because they have as a goal whatever they are programmed to have as an end goal. How they make decisions is based, at some level, on heuristics that were provided by humans, so their decision making is biased. Any values that an artificial intelligence system possesses would be indirectly passed down from a real person, but I don’t think the world is ready to decide which value or moral system is best to give the machine. There are already arguments about bias and censorship in assistants like Alexa on devices such as the Echo Dot that upset people. I don’t know what the answer is, but I think it’s an important consideration to make because any system that is going to make decisions must have a motivation, end goal, or set of rules to inform those decisions.

Reading 10

Trolling is basically online heckling. People who troll on the internet usually do it to provoke someone else and cause disagreement. Often it isn’t even prompted by the original post; it seems more likely that the person leaving inappropriate or unnecessary comments was already agitated and turned to the internet to take out their frustration on someone else in a misguided attempt to feel better. I don’t think this transfers their anger or pain; instead, the pain is spread and everyone is worse off. Trolling can be a form of cyberbullying, which is when someone uses the internet as a medium for bullying someone else. Regular bullying is done face to face, while cyberbullying distances the bully from the victim, making it easier for the bully to avoid consequences and act without thinking. Bullying can have serious consequences, and it should be taken seriously. We should try our best to prevent it. While I don’t think technology companies should be obligated to prevent cyberbullying by blocking posts based on their content, I do think companies should do their best to respond to bullying incidents. It would be hard to prevent cyberbullying because it is very contextual and subjective, so it would be difficult to detect all instances automatically. I like the idea of pop-up boxes that warn users and make them take a half second to think before they post something, and I think it’s worth trying because it would probably help more than hurt the situation. Companies should also respond seriously to stalking allegations because stalking is illegal and presents a threat of danger.

A common thing that victims of trolling say is that trolls hide behind anonymity. This makes sense to me, and before doing the readings for this week, I had never heard an argument against this. Even if people are just as likely to post mean things online under their real name as they are anonymously, the cloak of anonymity still shields people from taking responsibility for what they say, and some people will be deterred when you take that option away from them. These are the people that tech companies should be more concerned about because they actually have the power to effect change by not allowing for anonymity. People who would have posted mean things anyway under their real name will do so no matter what, and the only way to change that is to tackle the problem at its source and convince the individuals not to be mean, which is out of the scope of the ethical obligations of a tech company.

But just like some of the other issues we’ve discussed in class, I don’t think there’s a solution that will please everyone. You can’t extend the protection of anonymity only to the “good guys” without giving it to everyone, and you can’t deprive the “bad guys” of anonymity while preserving it for everyone else without inconveniencing both sides. The GamerGate controversy confused me, especially after reading The Future of The Culture Wars Is Here, And It’s Gamergate. It all seems very petty, which is unfortunate because it can escalate, and has escalated, quickly into real-life consequences in the physical world. I don’t think this type of behavior is best countered via the tech platforms on which it occurs. Instead, I think it would be more effective to go to the source and try to figure out why people are trying to create discord and stir up trouble. Again, this is not the responsibility of the tech companies, but it’s something we could all do a little bit about by connecting with others and helping them not feel the need to post something harmful. This may be a naive view of the world, but it’s the best way I can see to move forward.

I do think cyberbullying is as much of a problem as traditional bullying. I don’t think we can protect or shield children from everything, but I don’t think the other extreme of making them deal with it on their own is appropriate either. I think support for both victims and bullies would help because it could discourage victims from continuing the cycle by bullying others, and it could discourage bullies from continuing to bully because they’ll feel better about their own lives and won’t feel the need to tear others down. I view trolling as less of a problem because it’s often impersonal and easier to ignore. Most people are aware that if they choose to be on the internet, trolling is just something they’ll have to put up with, and just as it’s easier for trolls to post because they’re separated from the victim by the internet, victims can usually walk away, at least to some extent, by turning off their device. Trolls are usually not trying to do anything productive and just troll to get a reaction out of people, so the best way to take their power away is to ignore them and not react.

I don’t know enough about real name policies to know how effective they are, and I think more research would need to be done in this area. A difficulty, however, is that it would be hard to completely isolate the effect of using a real name versus being anonymous or pseudonymous from the other outside factors that influence people’s online behavior. I don’t have a problem with services that require me to use my real name, although, of course, I prefer not to when I have the choice, because once I hand my name over, they can link it to data about me elsewhere, and that’s often just unnecessary.

Anonymity on the internet as well as the internet itself can be both a blessing and a curse. Their effects are based on how a person chooses to use them. It appears to me that the capacity for good and the capacity for harm via the internet and anonymity are connected. As one grows, so does the other; as one shrinks, the other does too. Therefore, trolling and cyberbullying don’t seem to be issues that can be properly addressed via technology since it’s just a medium through which they are manifested. Instead, we need to focus on changing people.