Reading13: A Patent Is a Terrible Thing to Waste

A patent is “an exclusive right granted for an invention”. Essentially, it is a way to guarantee ownership of an invention, so that it can only be used by the patent holder, or by other parties at the patent holder’s discretion. This allows the patent holder to disclose or release more detailed information about the invention (ostensibly to further human knowledge) while still retaining the benefits of being its inventor.

I think patents are, in principle, necessary and proper – it makes sense to afford protections to those who invented something so that they can reap the benefits. Doing so encourages innovation: people or groups are willing to expend significant resources to develop or create something new because they can stand reasonably assured that they will benefit from it, should it be a success.

With traditional physical artifacts or inventions, patent litigation is relatively straightforward. Another artifact infringes on the patent if it is clearly very similar or even the same, or borrows designs or mechanisms from an existing patent. But, as we’ve seen so many times before, rules that seem clear and have worked so far get considerably muddier when applied to the world of software and computer science.

The reason software patents are somewhat controversial is that the “software” umbrella covers a staggeringly wide array of things. Software can be as simple as the implementation of a mathematical formula or algorithm – as was the case in Gottschalk v. Benson, where the software in question converted binary-coded decimal numbers into pure binary. I think the court was correct to rule that it was an abstract mathematical idea and therefore could not be patented.
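To see just how close to bare arithmetic that is, here is a minimal sketch of that kind of conversion – my own illustrative reconstruction, not the patent’s actual claimed method (which described a shift-and-add procedure for a register machine):

    def bcd_to_binary(bcd_digits):
        """Convert a sequence of decimal digits (stored 4 bits each in BCD) to an int."""
        value = 0
        for digit in bcd_digits:
            value = value * 10 + digit  # repeated multiply-and-add: pure arithmetic
        return value

    assert bcd_to_binary([1, 9, 7, 2]) == 1972

Patenting something this close to positional notation itself is, I think, the heart of the court’s objection.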

It seems clear to me that software that close to theory and mathematics should not be patentable. On the other end of the spectrum lie large software suites or products. It makes sense to me that these should be patentable – considerable work was spent to engineer and create them, and their use should be protected, just as it is for physical products.

In the middle of the spectrum, it gets muddier. I don’t know if there’s a good way to draw clear lines in the sand defining what should and should not be patentable. Software, and the ways it is used, change quickly; it would be almost impossible to codify an exhaustive set of rules governing software patents. At least in the judgments presented in the readings, I honestly think the courts have done a decent job of ruling on a case-by-case basis and explaining their reasoning as they, too, explore how traditional patent and IP rules apply to software.

The area of the IP system most in need of overhaul is the litigation of patents, specifically with regard to patent trolls. It is a clear symptom of a broken system that there are “companies” with no source of income aside from patent litigation. Such companies contribute nothing; they benefit at the expense of others using rules that were meant to protect inventors, not endanger them.

Something that may help combat the issue is the introduction of another requirement for patent litigation: proof of “stake” – the litigant must show that it is using the patent, or at least doing work that closely relates to it. This would prevent patent trolls from buying patents simply for the purpose of litigation – unless they were also doing work in that area and began using the patent, they couldn’t litigate frivolously.

The argument against such a rule is that it would negatively impact individuals rather than companies. For example, if a person invents something but does not plan to move forward with it commercially, he or she could be at risk of effectively losing the patent (by not having proof of “stake”). The proper outcome would be for this person to be able to continue to hold the patent and earn royalties from its use.

Maybe the solution is for this requirement of “stake” not to kick in for the first few years of a patent – long enough for a private citizen who isn’t planning on using it to determine its worth and sell it to someone who will. It would prevent someone from inventing something and then keeping the patent long-term without using it… but is that a bad thing? Shouldn’t our IP system bias towards those who are using patents to create things and contribute to society? If doing so prevents patent trolls from extorting onerous sums of money from fledgling businesses (and large, if less burdensome, amounts from big ones), I’m all for making patents “use it or lose it”.

Reading12: We should be scrutinizing policy, not the tech

Self-driving cars would be very convenient. Wouldn’t it be great to hop into an autonomous car and surf the web, play a game, read a book, or do any number of other things instead of staring at the road on your way to your destination? More to the point, there is a lot of money to be made in providing this service, which may be the most honest explanation for why it’s happening. Ultimately, though, this is all beside the point. Should we make autonomous cars?

The discussion around whether or not we should develop or allow autonomous cars ultimately focuses on safety. Over 94 percent of the tens of thousands of annual road fatalities are caused by driver error. Could autonomous vehicles alleviate most, if not all, of these? After all, they shouldn’t get tired, or distracted, or angry at other drivers.

I tend to agree – beyond thinking that autonomous cars would be extremely convenient and very cool, I think they could make the roads much safer. I do think we can reach the point where autonomous cars are considerably safer than human drivers, which would make widespread adoption a great improvement to the safety of our roads.

However, we’re certainly not there yet. I wouldn’t feel comfortable riding in a fully autonomous car at anything approaching highway speed – not because I don’t think an autonomous car can be safe, but because I’m not convinced they are safe yet.

There is a great deal more work to do on autonomous cars. How far should we go with them? The crash in Tempe, Arizona, in which an Uber autonomous car struck and killed a pedestrian, calls a lot of this into question. Should we test on public roads? Should we stop this altogether? Should a computer be allowed to make what can become life-and-death decisions?

I think that in light of the Tempe accident, we should be questioning Uber’s prudence in its design and testing decisions rather than the algorithmic capacity of the car.

I want to preface this by saying that I am not trying to rationalize away the loss of life. Any human loss of life is a tragedy. But we should not be focusing on the capability of the autonomous car. It was dark, and the woman was crossing the road at a place where pedestrians would not be expected. But – and these next two facts are, I think, the crucial ones – a) the vehicle’s system attempted to initiate emergency braking prior to the impact, and b) Uber had disabled this capability in favor of a smoother ride.

The vehicle attempted to initiate emergency braking 1.3 seconds before impact. Travelling at 39 miles per hour, it would cover about 74 feet in that time. I can’t say with confidence that the vehicle could stop fully in that distance (the published stopping distance from 70 mph to stationary is ~185–190 ft for that car), but it would almost certainly have been a much less than fatal crash. The autonomous system correctly detected that it needed to perform an emergency stop. And, with a pedestrian appearing out of the darkness, would we expect a human to have done better? In the video, the woman appears suddenly out of the dark – 1.3 seconds is better than I’d expect of most human drivers.
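The back-of-the-envelope numbers, for anyone who wants to check them (the v²-scaled braking estimate is my own rough figure, not a measured one – it ignores brake actuation delay and road conditions):

    MPH_TO_FPS = 5280 / 3600                  # 1 mph = ~1.47 ft/s

    speed_fps = 39 * MPH_TO_FPS               # 57.2 ft/s
    dist_covered = 1.3 * speed_fps            # ~74.4 ft covered in 1.3 s at constant speed

    # Braking distance scales roughly with speed squared, so scaling the
    # ~185 ft 70-to-0 mph figure down to 39 mph:
    est_stop_from_39 = 185 * (39 / 70) ** 2   # ~57 ft

    print(round(dist_covered, 1), round(est_stop_from_39, 1))  # 74.4 57.4

If that scaling is even roughly right, a braking start 74 feet out leaves real room to shed most of the car’s speed before impact.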

If the autonomous system had been allowed to carry out the stop, the woman likely would not have died. But Uber “had disabled the Volvo’s emergency braking mechanism, hoping to avoid a herky-jerky ride.” The company said it was the responsibility of the human operator to intervene – the same human operator whose responsibility it is to be relaying data and working on other things during the ride.

This is grossly irresponsible. I think it’s obvious that you can’t rely on a human driver to take over for an otherwise autonomous vehicle in emergency situations. If a human is not actively driving, even the best-intentioned will get distracted, sleepy, or simply not have the focus for the necessary split-second reactions. So why disable the autonomous emergency brake? Even if there were a full-time emergency observer, why disable it? Another layer of redundancy could never hurt, and I don’t buy that avoiding a “herky-jerky” ride is reason enough to do so.

I don’t know how this could be regulated, but companies like Uber need to put more focus on safety – for everyone involved. Things like disabling emergency braking should be unthinkable. More robust safety features and continued effort on these cars will make the road safer for everyone, pedestrians and drivers alike. Further, I don’t think there’s as much of a trolley-problem concern as many like to posit.

These “trolley problems” almost never happen on the road. And, if they do, they are most likely the result of previous irresponsible driving that, in theory, an autonomous car would avoid. If you have to choose between running into another car at highway speed or ramming pedestrians, couldn’t that have been avoided by following less closely, or not speeding? There are systems – speed limits, road signage, etc. – designed to make the roads safe and avoid these sorts of dangerous situations. To me, the more compelling challenge to autonomous cars is that they rely on infrastructure that may not always be there: road markings or signage may be absent, damaged, or obscured by weather. At least so far, computers have a hard time improvising.

I’ll cut this off here – I have more thoughts on the trolley problem/ethics of autonomous cars (mostly about why people are focusing on the wrong thing) – but this is getting long. I can spell it out more in the in-class discussion.

In summary, I believe that autonomous cars are a promising possibility to make travel more convenient and safer for everyone; we’re just not there yet. We need to be more responsible in our testing and think long and hard about how we’re rolling out the technology, but we shouldn’t let poor safety decisions lead us to give up on this technology.

Reading11: More like a calculator than a brain

I believe that what we call “artificial intelligence” (and what I would argue is more aptly called “machine learning”), the field currently seeing an enormous boom, is fundamentally different from what we would truly call “intelligence”. Worries that we could unleash a monster smarter than us that ultimately leads to our destruction are unfounded as a response to the current work being done, and could very well be hindering progress.

I like to quip to my friends that machine learning is just “statistics that have gotten out of hand”. And, essentially, that’s what it is: a rigorously defined architecture of nodes with weighted edges, with those weights and the connections between them being fine-tuned and adjusted depending on the type of network and its desired inputs and outputs.
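To make that concrete, here is a toy sketch (mine, with made-up sizes and random weights) of what “nodes with weighted edges” boils down to – a couple of weighted sums pushed through a nonlinearity:

    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)  # weighted edges into 4 hidden nodes
    W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)  # weighted edges into 1 output node

    def forward(x):
        h = np.tanh(W1 @ x + b1)   # weighted sum of the inputs, squashed
        return W2 @ h + b2         # weighted sum of the hidden nodes

    print(forward(np.array([0.5, -1.0, 2.0])))

There is no cognition hiding in there; “learning” is just the process of adjusting W1 and W2.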

That’s all AI is. It’s simply another tool that we are learning to use. It is a tool that can be very good at recognizing patterns, grouping things, or extracting meaning from complicated or chaotic data in a way that’s not easy for us to follow – which is the crux of the issue.

With typical computer programs, a programmer writes each instruction. Every action is defined and laid out beforehand; we can trace the execution and figure out exactly how and why things happened. With machine learning, this isn’t exactly the case. Programmers set up the network and provide the data, the goal function, etc., but then a dizzying amount of math happens, and a trained network is the result.
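A compressed caricature of that process (using plain linear regression rather than a deep network, to keep it short): the “dizzying amount of math” is, at bottom, nudging weights downhill on a loss function, over and over.

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(100, 3))              # training data
    y = X @ np.array([2.0, -1.0, 0.5])         # targets (toy ground truth)
    w = np.zeros(3)                            # weights to be fine-tuned

    for step in range(1000):
        grad = 2 * X.T @ (X @ w - y) / len(X)  # gradient of the mean squared error
        w -= 0.1 * grad                        # gradient-descent update

    print(w)  # converges toward [2.0, -1.0, 0.5]

Nothing in the loop records why a weight ended up where it did; only the final numbers remain.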

This “black box” is what scares people. The fact that there aren’t lines of code to trace to show why decisions were made is worrying. Consider AlphaGo, Deep Blue, Watson, etc. They decisively beat the best players in the world at things we consider very difficult. That doesn’t mean they’re “intelligent”, or understand what they are doing or why. It just means that they’re finely tuned systems meant to output Go moves, chess moves, or trivia answers.

The article in ND magazine had a truly baffling quote: “Couldn’t you just turn it off? Not necessarily. Shutting down an AI would prevent it from achieving its goals and, with its superior intellect, it could disable that option.” Consider Google’s more recent Go-playing program, AlphaZero. It is programmed to take game states in certain formats, decide on the optimal next game state, and output a move (as well as learning from it, etc.). It is NOT programmed to “BECOME BEST GO PLAYER” or anything that would make bucking its handlers and taking over the world (so as to never lose at Go, paperclip-maximizer style) a possibility. (Never mind that there are many other issues with this: a computer can’t just “decide” not to turn off – “superior intellect” can’t defeat an unplugged power supply. “Escaping to the internet” and other clichés are equally nonsensical, and any further discussion on this will just become an increasingly silly rant.)
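To underline the point, here is a caricature of that kind of program’s shape – not AlphaZero’s actual architecture (which pairs a deep network with Monte Carlo tree search), just the interface the argument rests on: a function from game state to move, with no goals anywhere to defend.

    from typing import List, Tuple

    def choose_move(state: List[List[int]],
                    legal_moves: List[Tuple[int, int]]) -> Tuple[int, int]:
        """Score each legal move with the trained evaluator; return the best one."""
        def network_score(move: Tuple[int, int]) -> int:
            # Stand-in for the real policy/value network.
            return hash((tuple(map(tuple, state)), move)) % 100
        return max(legal_moves, key=network_score)

    board = [[0] * 19 for _ in range(19)]  # toy empty Go board
    print(choose_move(board, [(3, 3), (15, 15), (9, 9)]))

Stop calling the function and nothing happens; there is no process in there with a stake in continuing to run.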

Even though I firmly believe that the machine learning we are doing now is just making tools, completely separate from “intelligence”, I can’t say definitively that there is any reason why a truly intelligent computer couldn’t exist. Though saying that likely comes more from ignorance of the human brain than from any educated opinion – maybe there’s some aspect of the brain that would preclude this, but simply simulating a human brain seems to me like it could produce at least a facsimile of a consciousness. Would that really be a consciousness, or just a simulated one, à la the Chinese Room? Would that even make a difference? I truly don’t know. That’s not to say I think it’s feasible, or could happen even in the next century. It’s just to say that we don’t, and cannot, know what the future holds.

Overall, the fear over artificial intelligence or, as Elon Musk put it, humanity being “just the biological boot loader for digital superintelligence”, is unfounded as it relates to machine learning today. Maybe that could happen down the road, but as it is now, the AIs are something completely separate, more akin to a calculator or abacus than a brain.

Reading10: On fake news and influence campaigns

I think it is the reasonable and sensible conclusion for platform providers such as Facebook and Twitter to combat “fake news” on their sites. Information that is demonstrably incorrect should be removed, particularly when it is being used to further certain agendas unfairly, or when it is causing real, demonstrable harm.

It’s a no-brainer that the spread of false information should be stopped. For that reason, I don’t mind that it’s technically a private company deciding what is and is not “fake news” – if it’s demonstrably false, it can and should be removed.

Additionally, we need to have a discussion about just how much power these companies have, and what their platforms mean for public discourse. See the effect that WhatsApp is having in India; these are becoming more than just corporations. They so profoundly affect how we live our lives that we need to take a good, hard look at what to do about it. That is, we should recognize that they may warrant different treatment than typical companies, but also (emphatically) that they carry additional social responsibilities. See, for example, how Russian agents were able to use Facebook to (credibly) affect the outcome of the 2016 US elections.

The reason I’m relatively nonchalant about saying “yeah, go ahead and remove fake news” is that I don’t think fake news is the root of the problem, particularly with the election. While the fabricated stories did say things that weren’t true, you can do much the same damage with true stories presented or framed in certain ways to certain people.

In fact, a component of the 2016 election shenanigans that may have been just as harmful, or more so, is indistinguishable from typical use of the website at the surface level. On Facebook, people make posts, add friends, and read and share articles. In the 2016 influence campaign, there were actors who did all of these things, just with a political agenda in mind. Actions like these can never be prevented at a high level and must be detected with fine-grained analysis, which makes them, as a matter of course, extremely difficult to combat.

The issue, I think, is not that what people are saying is “fake”, or completely impermissible. The problems arise when we track what people do so closely, and build up such huge dossiers of information on the general populace, that the data can be leveraged to deliver very targeted messages to certain groups. When a firm can identify groups of people likely to vote for a certain political candidate and serve them specifically targeted ads to discourage them from voting, we have a problem.

I wouldn’t go so far as to say that we live in a “post-fact” world, but it is certainly becoming far easier for people to be manipulated. As the public’s news consumption grows more tied to social media than to news outlets that people seek out themselves, it is becoming easier and easier to influence people into believing certain things or behaving in certain ways.

I believe that this is a threat to our democracy – not in an “overthrow of the government to an authoritarian society” sense, but in the sense that elections could be won by those with the best analytics and campaign machinery rather than those with the best platform or the promise to be the most effective leader. In the age of social media and data analytics, the process of attaining political office is growing increasingly independent of having the qualities to hold that office effectively – a trend that can only lead somewhere bad if we don’t do something to fix our political discourse.

And, I think, we can do something to alleviate it. Social media doesn’t appear to be going away any time soon, but the climate of this past presidential election cycle – with all its fearmongering, finger-pointing, and name-calling – lent itself uniquely to this sort of influencing campaign. This sort of rhetoric will never go away, but the more we try to move back towards a reasoned, policy-centered discussion, the better off we’ll all be.