Reading 14 – Computer Science 4 All

Teaching programming to everyone seems like a difficult problem to solve. As a computer science student, I see how programming is slowly becoming part of many fields, and I often assist my friends with their programming homework. I'm frequently surprised by how many careers now require programming. Recently I helped a friend whose major is speech therapy build an NLP neural network to classify Japanese words into different categories. Programming is certainly becoming a new sort of literacy.
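
As a much simpler stand-in for that project (this is not my friend's neural network, just a toy sketch of what "classifying Japanese words into categories" can mean programmatically), here is how Python's Unicode support lets you sort words by writing system:

```python
# Toy illustration: classify Japanese words by script using Unicode ranges.
# The category labels are my own choice for this example.

def classify_char(ch: str) -> str:
    """Return the script category of a single character."""
    code = ord(ch)
    if 0x3040 <= code <= 0x309F:
        return "hiragana"
    if 0x30A0 <= code <= 0x30FF:
        return "katakana"
    if 0x4E00 <= code <= 0x9FFF:
        return "kanji"
    return "other"

def classify_word(word: str) -> str:
    """Label a word by the script it uses, or 'mixed' if it uses several."""
    scripts = {classify_char(c) for c in word}
    return scripts.pop() if len(scripts) == 1 else "mixed"

print(classify_word("ねこ"))        # hiragana
print(classify_word("コンピュータ"))  # katakana
print(classify_word("日本語"))      # kanji
```

A real classifier would of course learn semantic categories from data rather than check character ranges, but even this small example shows why the skill transfers to fields like speech therapy.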

There seem to be two problems with the CS4All program. The first is finding teachers to staff it. The trouble with the current approach is that the people in charge want change now. This may be an effect of short-term leadership in the US: mayors and presidents want to see change within their terms, but a complete overhaul of the education system requires long-term planning. The first step should happen at the top, in colleges: include a CS requirement for all students at the college level. This is far more achievable and would only require a couple of new classes at each university. The effect would be that within a few years, all new teachers would have some CS education, which would eventually make it possible to adopt a CS program at every school at the K-12 level. It would take time for this to have the desired effect, but it would greatly reduce the cost of the program and make it an achievable goal.

The second problem is the claim that there is a "two-hump" division of people into those who can program and those who can't. To that I'll say that we already see something similar in school. Certainly, there are kids who can't do well in math no matter how hard they try, but that doesn't mean math shouldn't be part of the core curriculum. Every subject has students who perform better than others; still, all benefit from a broad education.

The next issue is what a K-12 CS program should look like. Personally, I'm not a big fan of visual programming languages like Scratch and LabVIEW. However, Scratch is now almost a standard for teaching kids how to program, and research should be done to determine whether these systems are effective. I believe a better approach would be to create a high-level language with a visual element to it. Some ideas: a game where you pass stages by solving small programming exercises (with a small plot and nice visuals attached), teaching a very high-level programming language in which students can create simple games, or simply teaching Python at a very basic level. Again, these decisions should be guided by research.
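
To make the last idea concrete, here is a hedged sketch of the kind of first exercise "teaching Python at a very basic level" could mean: a number-guessing game that combines input, loops, and conditionals. The `guesses` parameter is my own addition so the round can run non-interactively; a hypothetical classroom version would just use `input()`.

```python
# A first-week exercise of the sort a K-12 Python curriculum might use:
# guess a secret number, with "too high" / "too low" feedback.
import random

def guessing_game(secret=None, guesses=None):
    """Play one round; pass `guesses` to run without keyboard input."""
    if secret is None:
        secret = random.randint(1, 20)   # bounds are inclusive
    source = iter(guesses) if guesses is not None else None
    attempts = 0
    while True:
        guess = next(source) if source else int(input("Guess (1-20): "))
        attempts += 1
        if guess < secret:
            print("Too low!")
        elif guess > secret:
            print("Too high!")
        else:
            print(f"Got it in {attempts} tries!")
            return attempts

guessing_game(secret=7, guesses=[10, 5, 7])  # demo round: 3 attempts
```

The point is not this particular game but the shape of the exercise: small, visual feedback on every step, and a clear win condition.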

I believe a CS-educated world is possible and beneficial to all. It will take a long time and a lot of effort, but eventually we can create a more educated world.

Reading 13 – Patents

After going through the readings, I now see there's more to patents than what is usually said about them. Originally, I thought patents were a social good necessary to protect innovation, but it's more complicated than that. One problem with patents is that they prevent iterative improvements on inventions. If a company has a monopoly on an invention that is quickly adopted by society, there is little incentive to keep improving the technology. For example, if Apple were the only company able to create smartphones, I doubt the technology would have seen major improvements since the original iPhone. In other words, patents may be a force that opposes the ideals of capitalism: a monopoly on a technology prevents market competition from driving innovation.

An example given in the podcast was the airplane, invented and patented by the Wright brothers. They were unable to build a fully functional plane themselves, yet their patent prevented other people from perfecting the technology. As a result, the airplane was perfected in foreign countries where the patent did not apply.

Another argument against patents is the rising force of patent trolls. Patent trolls try to acquire as many patents as possible without actually implementing the patented devices. Their whole business strategy involves threatening people into paying licensing fees and suing those who refuse. Such companies threaten innovation by making it a hostile environment: you may unknowingly violate an existing patent and be targeted by one of them.

Due to this, I believe something has to be done about the current patent system. My ideas are not as extreme as those given by Michele Boldrin and David K. Levine in their paper The Case Against Patents. A patent system is still necessary, but modifications should be made. First, applicants should be required to have a working product before requesting a patent; once an inventor has a developed product, they can apply. This would prevent patent trolls from abusing the system: having to actually develop the invention would stop people who merely come up with a concept from blocking those who actually build it. At the same time, patents would still provide enough protection to inventors to fulfill their function as an incentive.

The second change I would make is to decrease the time patents are valid for. Twenty years is too long, especially in an age where major breakthroughs arrive every year. If Apple had decided to patent the mere concept of smartphones without actually creating them, we might not have the highly advanced mobile devices we do today. A more appropriate term could be five years; this would fall more in line with the fast pace of the era while still giving inventors an incentive to create.

In terms of software, the issue becomes a bit more complicated. As a student I have been taught the value of the open source movement; at the same time, there should be some protection for software developers. Video games, for example, are simply software created for entertainment. What if someone could simply copy Grand Theft Auto and sell it for much cheaper? With no protection whatsoever, they could: they have no development costs to offset with their earnings. At the other end of the spectrum, what if someone obtained protection for something essential from a coding perspective? If someone had patented balanced binary trees, our computer science lessons would be very different. The only solution I can think of is that decisions have to be made on a case-by-case basis, and that to do so properly, patent offices should employ programming experts to assess cases better.

Reading 12 – Self Driving Cars

The motivation behind self-driving cars is twofold. The first is convenience: it is a lot more relaxing to sit back behind the wheel than to drive yourself, and driving in heavy traffic can be stressful. With fully automated cars, one could even be productive while waiting in traffic, or relax and watch a movie. The second motivation is safety. A great deal of traffic accidents are caused by human error; if humans are removed from driving, accidents should significantly decrease.

The problem with self-driving cars is that they are required to make ethical decisions. The biggest one that comes up is the trolley problem, with the added element of the driver and passengers in the car: what should a self-driving car do when it has to choose between saving the lives of its occupants and saving the lives of pedestrians? Public opinion here is very clear, but unhelpful. A group of researchers found that while most people agree that a utilitarian model, in which self-driving cars prioritize the lives of pedestrians, is the most moral one, most would be reluctant to ride in a car using that model. The effect is self-defeating from a utilitarian standpoint: by delaying the adoption of self-driving cars, we prevent the decrease in road fatalities expected from their full deployment.

Taking this into account, the most utilitarian answer might actually be to create an AI that prioritizes the lives of the driver and passengers. The difficulty lies in the fact that people might be unwilling to accept this; there's a pressing fear of knowing a huge chunk of metal may just run you over without a second thought. Perhaps the solution is to convince people that even with an AI prioritizing the lives of its passengers, the likelihood of being killed in a car-related accident still decreases as the technology is adopted.

Another difficult part is the role of the government in all of this. As you can see, my opinion is that it's best if the car chooses to protect drivers and passengers; given that, I believe the government's role is to ease the adoption of this model. AI-driven cars deal best with clear-cut rules. One helpful step would be reeducating both drivers and pedestrians on how to best protect themselves as self-driving cars arrive. For example, greater emphasis should be placed on not jaywalking, since an AI car may not react well to a pedestrian appearing out of nowhere, and on using turn signals whenever possible: when entering a road, when switching lanes, and so on. The good part is that as self-driving cars become more prevalent, the cars themselves will follow all these clear-cut rules and interact better with other self-driving vehicles.

Personally, when I have the money to do so, I will invest in a self-driving car. I hate driving; it makes me anxious and nervous. Hopefully by then the technology will be more mature and more widely adopted.

Reading 11 – Artificial Intelligence

I am a great admirer of artificial intelligence. AlphaGo defeating the Go world champion seems like a gimmick, but its implications are relevant. The benefit of artificial intelligence is its broad applicability. I once attended a presentation on IBM Watson and how it benefits the medical industry. IBM's device assists doctors by providing them with up-to-date research relevant to an individual patient: it checks thousands of medical papers to produce a list of the most likely results. Doctors liked this because it gave them suggestions rather than a single answer. People have difficulty trusting an AI system that hands down a diagnosis by itself, but they feel more confident knowing an actual doctor will process the results and make the diagnosis. This is the best of both worlds: the power of AI to search thousands of documents in seconds, combined with the judgment and critical thinking of a human doctor with real-world experience. And this is just the tip of the iceberg; AI has many real-life applications that can improve human quality of life.

The relevance of AI is also independent of the problem of machine consciousness. We can never know whether a computer is actually conscious or simply emulating consciousness, and I believe the Turing test can't tell us. However, I consider the question irrelevant. If a general AI system is indistinguishable from a human mind, why does it matter whether it thinks like us? If the system shows emotions, concepts of morality, mental curiosity, and all the elements we deem human, then it would effectively be an artificial human. Determining whether an AI is conscious the same way a human is would be tantamount to determining whether human beings have souls. For this reason, I believe the Turing test is a perfectly sensible test of whether an AI has, for practical purposes, a human mind.

In the movie Her, for example, Theodore believed Samantha to be a real emotional partner. He knew she wasn't human, but that didn't matter to him; she felt human enough. By the end of the movie she had transcended to a state that superseded matter itself, yet he was simply sad about losing his partner. The ending was melancholy, with a hint of possible AI world domination tossed aside in the background.

Are these dangers real? Of course. Imagine a "human" with the body of a machine. If we truly create an AI brain that for all practical purposes performs the same as a human's, we should expect the AI to have a concept of not wanting to die. The AI could easily explore the net and discover how antagonistic people are toward it; the next logical step is to stop trusting humans and defend itself. These dangers are real and should be taken into account when creating AI humanoids. For example, there is the idea that AI robots will fight the wars of the future; measures have to be taken to prevent them from turning against humans. Developing AI without taking this into consideration is simply reckless.

Reading 10 – Fake News

Fake news is a term coined for news stories that are completely made up. It is different from conspiracy theories, which are presented as theories and are usually (but not always) based on some truth, or on facts their proponents believe to be true. Fake news is written by people who know they are writing lies for some intended purpose: some do it simply for money, while others have political goals in mind. The problem is that their power is so great that many think they were a major force behind the results of the 2016 elections.

That power comes from the easy, exponentially expanding platform that Facebook and other social media companies provide. Fake news has a mind-dulling effect: its headlines are so provocative that some people hit the share button before even reading the article, a dangerous practice that spreads misinformation.

This danger is even greater because of Facebook's news feed algorithm, which personalizes content to the preferences of the individual based on clicks, likes, shares, and comments. Personally, I was never aware of this problem. I like double-checking articles with other sources, because when I read some incredibly big news I want to know as much about it as I can, and if it's too sensational, I check it against Snopes. For example, I found out through Snopes that the videos shared by President Trump on his Twitter account of Muslim British residents performing acts of terror were all false; none of those incidents occurred on British soil. I have, in other situations, found fake news in my news feed, but I never realized it was this big a problem. The algorithm learned to show me the videos and posts I like to read most, which seemed to have little overlap with the issues fake news spreads, and my friends are mostly educated college students, so fake news seemed less prevalent in my feed.

The question is: how responsible are social media companies for policing fake news? Part of the problem is that what happened may already be illegal under current legislation. There are laws that prevent other countries from intervening in US elections, but Facebook's open system allowed Russia to circumvent them. Furthermore, Facebook hasn't been reporting politically motivated advertisements as political campaign spending, another violation. Beyond that, fake news with political intentions may evade these requirements entirely by never stating its political motives. While the first two are clear violations that Facebook can fix, the third is not: it requires discerning fake news from real news and reading the motives behind the advertising.

To address this, Facebook is trying to outsource "disputed" media (flagged by some heuristic) to third parties that verify the content of a post and, if they find it false, attach a link to a page showing evidence disproving it. Multiple entities have signed on to this fact-verifying process. This is a great idea: it decentralizes fact-checking, which might help reduce bias, and it makes the process less private, so to speak, by opening it to outside entities. My only worry is that this will lead to a "fact-checking war," where the left and right constantly try to disprove each other, further increasing our already rising distrust of the media.

For now I will give them the benefit of the doubt and hope it works. While that is set up, it would be beneficial for all of us to try to break out of the echo chamber, the effect where social media "echoes" back our own beliefs through its individual filtering process. Try reaching out to other sources of media. Personally, I started listening to a daily news broadcast. It is not comprehensive, and it is only one more source, but it helps me escape the filter bubble.