Reading 14: A math major and an art major walk into a bar…

I do not think coding is the new literacy. I think coding is an incredibly useful skill to have, and I think computational thinking is important to learn, but I do not see it being a necessary subject in all schools, at least in the academic setup we have currently. If computer science becomes a mandatory class, I think we will have to rethink our entire education system and what we view as “mandatory.” I do not think that computational thinking is more important than creative thinking, and we live in a world right now where funding for creative classes is being cut. I do, however, believe that computer science classes, or other classes that promote computational thinking, should be made more accessible to all students, and that we should do a better job of selling them as desirable classes to take.

In my high school, most people did not even know we offered a computer science class. The class was taken by a few guys who were already fairly familiar with computers, in a hidden computer lab, and it did not count for anything more than an elective. With all of the mandatory classes I had to take to graduate, along with the classes I was “encouraged” to take in order to get into the colleges I wanted, I had no time for an extra elective that would not count towards a specific requirement. I understand that all schools and states have different requirements and different numbers of classes you can take, but in my case there simply would not have been time for me to take that class. By allowing the class to count towards a requirement, or by making it a more welcoming environment for all students (regardless of gender, race, clique, or familiarity with the topic), we can at least make computer science a more accessible skill to learn for those who may be interested.

One way computer science could become more accessible or attractive is to include it as part of the already required computer classes in elementary school. I was required to take a computer class from K-8. We learned how to type, how to use Word, PowerPoint, and Excel, and how to google efficiently, and we spent a lot of time in Microsoft Paint and “Kid Pix.” Now kids are exposed to more technology, and it may be worth teaching them some basic computational thinking or coding techniques at that young age. Then the kids can decide for themselves if that is a skill they would like to enhance and continue taking classes in, just like they do with art, music, gym, etc. I think computational thinking is important to learn young, like creative thinking, but not everyone is meant to love to code, just as not everyone loves playing an instrument or creating artistic masterpieces.

I have a math mind; that's why I majored in it. I think about things logically, and I developed computational thinking in the advanced math and science classes I took. So in my head, it feels like everyone could learn to program. But then I think about my sister, who has an incredibly artistic mind. She hated math and loved art. That is why she pursued an art degree. I think if someone tried to teach her how to code she could catch on to certain things, because she is a very smart person, but I don't think she would enjoy it the same way I do. She and I have gone to painting classes together, and I get frustrated at times because it comes so naturally to her, while mine looks only slightly better than the work of the elementary schoolers she teaches. I think everyone has their own skills and passions, and they should be free to explore them. This is something that is wrong with our educational system as a whole right now. Everyone can learn math, they can learn to read, they can learn to draw, and they can learn to code. But not everyone is good at all those things, and not everyone is passionate about all those things.

Reading 12: We’re Cruisin.

I can see the motivation for building self-driving cars. It is all about innovation. Everyone wants to be the first person to create “the next big thing.” Although many companies are working on this problem, and some have even put cars on the road, no one has perfected it yet. Driving a car can be a very dangerous thing. The first person to perfect a vehicle that can drive itself and avoid the risk of human error will be very rich. One thing I find interesting is that we do not yet know if self-driving cars will actually be safer; we just know that human drivers are known to cause accidents and that self-driving cars “might” eliminate or decrease the number of dangerous accidents.

Aside from the possibility of being safer, another pro of self-driving cars is that they will allow the “driver” to become a passenger, giving them more time to do other things. Imagine having an hour commute to and from work every day. That means a typical 8-hour work day turns into a 10-hour day. Now imagine being able to work while in your car on the way to the office. If you could start working on your commute, and keep working on your way back home, you could cut your once 10-hour day back to an 8-hour day and still get all of your work done. Another pro would be that other self-driving cars could communicate with your car. Imagine knowing exactly what the car in front of you is going to do! That could help you determine which lane you want to be in to make your commute faster. Other pros include eliminating “driver distractions” and drunk driving, allowing higher speed limits, and improving the flow of heavy traffic.

Important cons of self-driving cars include cost, whether they will actually increase safety, determining who is actually at fault in case of an accident, and whether the sensors and cameras will actually be effective on all roads in all conditions. People also fear losing the ability to actually drive a car. What if something malfunctions and the car must default to manual control, and the driver does not actually know how to drive? Also, some people genuinely enjoy driving cars. Probably the biggest concern with self-driving cars is the “social dilemma.” Who is at fault, and what should the car do in a life-and-death situation? Many people use the trolley problem to discuss the morality of autonomous cars. Another way to think about this: if your car is driving next to a cliff and it has the option to drive off the cliff or run into a child, what should it do? What if there is also a child in your car? How do we value one person's life over another's? As human drivers, our natural reaction in an accident is often to save ourselves. Often we do not have time to consider the value of the other person's life over our own before a collision occurs. But what does that mean for a computer? And who is liable when this occurs?

I honestly do not have a great answer to this problem. Every human is different, and every human is going to have a different opinion. It is impossible to make a “perfect” self-driving car that reflects the moral behavior of every human, because we will not all agree on the answer to the trolley problem. We also will not all agree on who is liable. The software engineers are trying to “mimic” human reactions. They are just doing their jobs, and they are not the computer itself, so how can they be liable? The driver isn't really in control, so how can they be liable? In the most recent case, where a self-driving Uber killed a pedestrian in Arizona, police are putting the blame on the victim, saying the situation would have been difficult to avoid even if the car were not autonomous. But what if the pedestrian had instead been another self-driving car? Then who could they throw the blame on?

Personally, I am not interested in owning a totally self-driving car at this time. I think some of the autonomous features are cool, like the car being able to parallel park on its own, or braking for you to avoid an accident, or even making sure your car stays in its lane. But I am not ready to give total control of my vehicle to a computer. In my opinion, cars are very dangerous things. I am a tiny human in total control of something that weighs multiple tons, and I take that very seriously. For now, I feel safer being in control of the car than trusting a computer to be in control. Maybe someday the technology will advance enough, and there will be enough proof that giving control to a computer is actually safer, but for now I do not have enough evidence to trust a computer to totally drive my car.

Reading 11: “Elementary, My Dear Watson”

Artificial intelligence is the study of how to make computers intelligent, or rather able to think, learn, decide, understand, etc., the way humans do. Strong AI would totally simulate the way a human thinks. Weak AI is aimed more at doing one particular thing the way a human might. AI is similar to human intelligence in that we are trying to simulate exactly that: human intelligence. What makes it different is right in the name: artificial. This “intelligence” is not natural. We as humans can tell a computer what to do and how to make decisions. We can tell it where to look to learn new things. We can inform it how to interact with other humans. We can even make it look like a human. But none of it is natural or really real. In many cases of AI that exist today, the machine is doing what the developer tells it to do and learning what the developer tells it to learn.

Things like AlphaGo, AlphaZero, and Deep Blue seem to be weak AI, if AI at all. When “teaching” a computer how to play a game, you are teaching it how to analyze data and choose a “move” based on that data's outcomes. A machine will perform better the more games it plays, just like a human will perform better the more games they play; however, the way they learn is very different. A machine does not require sleep or food. It has one job, and that is to play the game 24 hours a day, 7 days a week, and learn everything it can. It can also easily check the outcomes of certain moves from past experience and remember the winning percentages of those moves. This last part is something a human does not do when playing a game. When a human plays a game, they have to go with their instincts. They have to consider their options and trust their gut that they are making the right decision. This could mean they choose a safe move over a risky move, or vice versa. A human can take things into account that a computer may not consider, like knowing an opponent they have played many times will make safer choices, or just feeling lucky that day. There are certain instincts and signals that a human can pick up on that even math cannot predict.
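To make that concrete, here is a minimal sketch of the kind of bookkeeping I mean: play a lot of games, record the winning percentage of each move, and then pick the move with the best record. This is a toy illustration, not how AlphaGo or Deep Blue actually work (they use far more sophisticated search and evaluation); the three-move “game,” the win probabilities, and simulate_game are all made up for the example.

```python
import random
from collections import defaultdict

# Toy "game": three possible opening moves, each with a hidden true win
# probability. A real engine would play out full games of Go or chess,
# but the bookkeeping is the same: count wins and plays per move.
TRUE_WIN_PROB = {1: 0.4, 2: 0.7, 3: 0.5}  # hypothetical values

wins = defaultdict(int)
plays = defaultdict(int)

def simulate_game(move):
    """Stand-in for playing out a full game after choosing `move`."""
    return random.random() < TRUE_WIN_PROB[move]

# "Practice" phase: the machine plays thousands of games, never sleeping,
# and records the outcome of every move it tries.
for _ in range(10_000):
    move = random.choice(list(TRUE_WIN_PROB))
    plays[move] += 1
    if simulate_game(move):
        wins[move] += 1

# Decision phase: look up the remembered winning percentage of each move
# and pick the best one; no gut feeling involved.
def win_rate(move):
    return wins[move] / plays[move] if plays[move] else 0.0

for m in sorted(TRUE_WIN_PROB):
    print(f"move {m}: won {win_rate(m):.1%} of {plays[m]} games")
print("machine picks move", max(TRUE_WIN_PROB, key=win_rate))
```

With enough simulated games the recorded percentages converge to the hidden ones and the machine reliably picks move 2, which is exactly the “remember everything that ever worked” behavior a human player cannot replicate at that scale.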

Watson is a little different because it is more open-ended than just playing a game. Watson was created to be able to answer questions. This seems a bit more complicated, because questions on Jeopardy! can cover a wide variety of topics, and now Watson is mainly used in the medical field. Watson has been trained to do many different things. This seems like a more human-like situation. Reading about some of the errors that Watson made in its rounds of Jeopardy! just reminds me that Watson is not human. It made mistakes that a human would not make, like answering “1920's” after another contestant had already guessed “the 20's.” There are certain common-sense things I do not think we can teach computers.

I am a fan of the Chinese Room counterargument to the Turing Test. I agree that a machine that is merely “simulating the ability to speak Chinese” is not really understanding Chinese, but rather outputting what a developer's algorithm thinks it should output. Can we really say a machine is intelligent like a human if it cannot fully understand what it is saying? One good counterargument to the Chinese Room is that the definition of artificial is “made or produced by human beings rather than occurring naturally, typically as a copy of something natural; insincere.” This would mean that artificial intelligence is indeed simulating human intelligence, not necessarily having human intelligence. However, this is not the way that many people perceive AI.

I am more concerned about the intentions of those who create AI than I am about AI itself. The computer responds in the way the developer has told it to respond. The people behind the AI need to be thoughtful about what exactly they are telling the computer to do. For this reason, I find it hard to believe that a computing system can be thought to have a mind of its own. A human can learn things from other people and be taught how to do things, but we have the ability to make our own decisions and judge if something is right or wrong, or if we want to do it at all. We can decide if we want to learn the things we are being taught. A computer does not have these capabilities (at least for now…).