Reading13: Intellectual Property

I have to admit that coming into these readings, copyright law was something I knew little about.  I knew copyrights were weaker than patents and lasted longer, but I wasn’t sure exactly what they entailed.  Regardless, the idea of copyright presents an interesting tension for me as someone who supports innovation and progress.  I recognize the importance of protecting intellectual property as an incentive to innovate but also realize that collaboration is important and that much progress is made by people building off others’ work, so a delicate balance must be struck between strict and loose IP protections.

One of the most interesting debates related to copyright surrounds the practice of reverse engineering, or the reproduction of a product based on an examination of its inner workings.  I was surprised to learn through these readings that “the fair use doctrine allows users to make unauthorized copies in certain circumstances.”  In other words, people can reverse engineer products without permission, assuming they follow certain guidelines.  In general, legal precedent allows for reverse engineering as long as there are no license agreements preventing it and it is done strictly by studying and fiddling with the original product in an effort to produce a copy independently.  Copying actual restricted instructions would not be allowed under fair use (see Atari Games Corp. v. Nintendo of America, Inc.).

Ethically, I think reverse engineering is permissible as long as the intended purpose is not to explicitly rip off the original seller.  In other words, if you are reverse engineering to improve your individual experience or to add a positive contribution (something new), I believe you are acting ethically.  This has become a bigger issue lately, with companies like John Deere using the DMCA to argue that “consumers do not own the software underpinning the products they buy,” essentially meaning that “farmers receive an implied lease for the life of [a John Deere] to operate the vehicle.”  I think this is quite a stretch.  The DMCA was constructed to prevent theft of IP, not to prevent people from modifying their property to use as they see fit, and I cannot see why fiddling with the stereo of a John Deere to allow for more audio options represents a theft of IP.  Luckily, earlier this year, the US Copyright Office clarified that such use is permissible (and that it is even permissible to hire someone to alter such internal hardware/software for you).

However, I do not believe that the DMCA is a totally misguided law.  As mentioned, it was designed to protect intellectual property, and it is generally effective at doing so.  For instance, it permits companies to use DRM (digital rights management), or access control technologies that restrict the use of proprietary hardware and copyrighted works, to protect IP.  I think this is a perfectly ethical way to prevent piracy and the illegal copying of IP, and circumventing DRM to copy such works for distribution is certainly not moral.  Though it would probably be illegal, I have no problem with people ripping a CD to get audio they already own in a format they can personally use better, but impermissibly distributing such ripped audio is just blatant theft.

Reading11: Intelligence

Artificial intelligence is probably one of the most exciting and promising fields right now due to the extraordinary amount of success researchers have achieved in the past decade.  It seems that every few weeks, news of another breakthrough comes out, and people are increasingly interacting with AI technology in their daily lives.  However, defining what artificial intelligence actually means is not obvious.  As a first pass, I would roughly define artificial intelligence as anything a computer can do that seems to require human intellect and/or intuition (for instance, recognizing that an image is of a dog and not a cat).  However, I do agree with John McCarthy’s caution that “once [something] works, no one calls it AI anymore.”  Though there are many exciting AI applications on the horizon, we must not keep treating AI as something only of the future because, as I have noted, AI is all around us.

There is one thing, though, that was previously considered AI that I do not think qualifies.  Early “AI” game algorithms focused on brute-force Monte Carlo simulations to pick an optimal outcome, and such an approach does not seem intelligent to me.  I think for something to be considered AI, it has to do some sort of learning.  Given this restriction, I am hesitant to label IBM Deep Blue as a true AI system because, although it is impressive, it does not actually learn anything.  In contrast, newer systems like AlphaGo and Watson learn from experience to improve, qualifying them as intelligent systems.  Many ideas used in these systems have been successfully adopted across the AI industry, making them more than just gimmicks but exciting steps forward in artificial intelligence.
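To make concrete what I mean by brute force, here is a minimal sketch of pure Monte Carlo move selection applied to a toy one-pile Nim game (take 1–3 stones per turn; whoever takes the last stone wins).  The game and function names are my own illustration, not taken from Deep Blue or any system in the readings:

```python
import random

def random_playout(pile, to_move_is_me):
    """Finish a one-pile Nim game (take 1-3 stones; taking the last stone
    wins) with uniformly random moves; return True if 'I' end up winning."""
    while pile > 0:
        take = random.randint(1, min(3, pile))
        pile -= take
        if pile == 0:
            return to_move_is_me  # the player who just moved took the last stone
        to_move_is_me = not to_move_is_me

def monte_carlo_move(pile, n_playouts=2000):
    """Pick the move whose random playouts win most often -- pure brute
    force, with nothing learned or carried over between calls."""
    best_move, best_rate = None, -1.0
    for move in range(1, min(3, pile) + 1):
        remaining = pile - move
        if remaining == 0:
            rate = 1.0  # taking the last stone wins immediately
        else:
            wins = sum(random_playout(remaining, to_move_is_me=False)
                       for _ in range(n_playouts))
            rate = wins / n_playouts
        if rate > best_rate:
            best_move, best_rate = move, rate
    return best_move
```

The key point is that nothing here improves between calls: every decision is recomputed from scratch by sheer simulation, which is exactly why I hesitate to call this learning.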

Though I think I have a working understanding of what AI is, I am extremely conflicted when debating its potential equivalence to human intelligence.  I have qualms with both the Turing Test as a valid measure of intelligence and the Chinese Room as a sound counterargument.  I like the idea from the Chinese Room thought experiment that humans have a sort of working understanding of their surroundings that a machine can never have, but I think the Chinese Room is an overly simplified case because it assumes that there is always a fixed function being followed and discounts a mechanized system’s ability to learn from experience.  Therefore, my view is that the Turing Test (including its equivalents for different tasks) is a valid measure of a machine’s ability to perform at or above human level on a certain task, but passing it does not necessarily mean that the intelligence on display is of the same sort as human intelligence.

In this vein, I do not think that a machine can ever be thought of as possessing actual morality.  Though it may have to make decisions that seem like moral decisions, these decisions will be made to optimize some result or fulfill some rule, not to follow innately understood morals.  Because I hold this view, I am not that worried about the prospect of AI taking over.  While I agree that we should be careful not to build AI systems with the potential to independently reach an “optimized state” that is harmful to us, I don’t think we are anywhere near a world where machines will consciously and malevolently take over.

Reading10: Fake News

I believe there is little doubt that discord spawned by fake and misleading news on social media is one of the great issues of our day.  Before the advent of the Internet and social media platforms, people generally got their news from trusted providers like newspapers, magazines, and TV stations.  While some of these media undoubtedly have certain partisan slants, they generally do not spread blatantly false stories (and tend to admit errors when they are found to have done so).  With social media, news no longer has to come directly from these trusted providers in the established media – it can come from anyone.  This has led to people increasingly viewing content shared and “liked” by their friends, which is problematic for two reasons.  First, people are more likely to believe things their friends share than things shared by a stranger (particularly people less educated on how news works).  This makes it easier for a questionable news story to gain credibility when something shared by someone’s friend’s friend’s friend eventually reaches them.  Second, people tend to share somewhat similar views with their friends, so the spreading of content through different social media circles can create echo chambers where people only hear what they want to hear, which increases discord by accentuating extreme positions at the expense of moderation.

Personally, I am acutely aware of these issues and have taken concrete steps to ensure I do not succumb to fake news or a political echo chamber.  When I was in high school, my Facebook feed definitely looked quite one-sided and contained some content of questionable validity, but since then, I have actively tried to shift this by unfollowing accounts that are egregiously one-sided, following news sources known to have different political leanings, and unfollowing any accounts that share content I deem likely fabricated.  This has greatly improved the quality of content on my feed, but I still recognize that I could be stuck in an echo chamber, so I make a point to read news from as many different sources as possible, even turning to international news to give me as balanced a perspective as possible.  Being properly informed is a huge priority for me, and taking these steps has certainly helped me achieve that end and is my way of avoiding entry into a “post-fact” world.

However, I know that not everyone shares the same caution – particularly those who are not as well informed or educated.  This can lead to massive problems, from swaying elections to inspiring killings.  I agree with Kathleen Jamieson that “Russian trolls helped elect” President Trump through a coordinated campaign to spread fake news.  Even if Trump would have won without the foreign help, the possibility that an adversary could use social media to attempt to sway a US election is extremely troubling.  Even scarier is the way social media has promoted the genocide of the Rohingya Muslims in Myanmar, where a horrifying amount of hate speech promoting their slaughter has spread throughout Facebook.  In cases like these, I believe social media companies have the responsibility to filter out offensive, false, and dangerous content.  However, I am definitely somewhat uncomfortable with giving them too much of this power because, just as the Russians were able to use social media to achieve political ends, so could the technocrats leading these companies.  Therefore, I think a new law is needed regulating the standards for removal of objectionable content.  With such a law, the standards for unacceptable content would be set, and it would become illegal to filter out legitimate views.  This would help social media companies achieve their goal of connecting the world without giving them too much power to influence events to their liking.