Reading14: Technological literacy in a digital world

It seems like everything these days is online, uses a device, has a companion app, etc. Computing is growing increasingly intertwined with every part of our lives, and people have begun arguing, and lobbying, that everyone should be introduced to computing. Most of the arguments I’ve seen revolve around the growing ubiquity of programming – one of the articles likened the current state to an earlier era when writing was a skill known only to scribes, before it became the norm.

I honestly haven’t seen many arguments against teaching everyone about computing – the one article we had to read presenting that view is my only exposure to the opinion. Even that article takes issue with part of the presentation, not the idea at its core: it argues that it’s somewhat dishonest to promise coding as a golden ticket to social mobility, that making a career of it takes more than just learning to code, and that it’s not that simple. I suppose I don’t disagree.

Where I fall on the matter: I think we should definitely do more to expose everyone to what one of the articles calls “computational thinking”. I could easily see this kind of material slotting in alongside the regular curriculum for students. Later on, those who are interested could take actual programming courses and dive deeper.

Even if they never program, I think exposure to these ideas would be useful for everyone. In a world becoming increasingly computerized and networked, it’s important for everyone to have a basic understanding of what’s happening. People use smartphones, apps, and websites without any idea of what’s happening to their personal information, data, and plans. These days, knowing nothing about computers and how the digital world operates opens you up to exploitation.

I’ve long thought that computer science is much more universal than everyone seems to believe. I have friends who throw up their hands at any problem that seems the least bit “techy” and ask me (the resident “computer person”) to fix it. Exposing people to these “computational thinking” concepts early on can help show them the bridge between the things CS students study and the real world, even when nothing digital is involved. Things like sorting large groups of items, or planning and optimizing logistics, are closely tied to the more mathematical side of what we study.
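
To make that concrete with one example of my own (not from the readings): the way you might sort a big stack of exams – split it into piles, sort each pile, then merge the sorted piles together – is exactly merge sort. A minimal sketch:

```python
def merge_sort(items):
    """Sort a list the way you'd sort a stack of papers: split it in
    half, sort each half, then merge the two sorted halves."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    # Merge: repeatedly take the smaller front item of the two piles.
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([42, 7, 19, 3, 88, 1]))  # [1, 3, 7, 19, 42, 88]
```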

I think that everyone can learn to program (it’s not a mystical in-born art), but it’s a bit much to say that everyone should. I would, however, say that everyone should have some exposure to code, and some understanding of what’s happening beneath the hood when they use online services or social media. That is to say, things like knowledge that they don’t own their data, how their web traffic may be tracked, etc, as opposed to being able to name the layers of the OSI model. If the job of school is to prepare students for the world, we should recognize that at this point some of the preparation must concern the digital ecosystems we live in.

Reading13: A Patent Is a Terrible Thing to Waste

A patent is “an exclusive right granted for an invention”. Essentially, it is a way to guarantee ownership of an invention, so that it can only be used by the patent holder, or by other parties at the patent holder’s discretion. This allows the patent holder to disclose or release more detailed information about the invention (ostensibly to further human knowledge) while still retaining the benefit from being its inventor.

I think patents, on principle, are necessary and proper – it makes sense to afford protections to those who invented something so that they can reap the benefits. Doing so encourages innovation: people or groups are willing to expend significant resources to develop or create something new, because they can stand reasonably assured that they can benefit from it, should it be a success.

With traditional physical artifacts or inventions, patent litigation is relatively straightforward: another artifact infringes on the patent if it is clearly very similar or identical, or borrows designs from an existing patent. But, as we’ve seen so many times before, rules that seem clear and have worked so far get considerably muddier when applied to the world of software and computer science.

The reason it’s somewhat controversial is that the “software” umbrella covers a staggeringly wide array of things. Software can be as simple as the implementation of a mathematical formula or algorithm – as was the case in Gottschalk v. Benson, where the software in question converted numbers from one format (binary-coded decimal) to another (pure binary). I think the court was correct to rule that this was an abstract mathematical idea and therefore could not be patented.
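
To give a sense of just how close to pure math that is, the conversion at issue amounts to something like the following (my own minimal sketch, not the method actually claimed in the patent):

```python
def bcd_to_int(bcd_digits):
    """Convert a binary-coded-decimal number (one decimal digit per
    4-bit group, given here as a list of digits) into a plain integer."""
    value = 0
    for digit in bcd_digits:
        assert 0 <= digit <= 9, "each BCD group encodes one decimal digit"
        value = value * 10 + digit  # shift one decimal place, add the digit
    return value

print(bin(bcd_to_int([4, 2])))  # 0b101010 – the pure-binary form of 42
```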

It seems clear to me that software that close to theory and mathematics should not be patentable. On the other end of the spectrum lie large software suites and products. It makes sense to me that these should be patentable – considerable work was spent to engineer and create them, and their use should be protected, just as with physical products.

In the middle of the spectrum, it gets muddier. I don’t know if there’s a good way to draw clear lines in the sand defining what should and should not be patentable. Software, and how it is used, changes quickly; it would be almost impossible to codify an exhaustive set of rules for software patents. At least in the judgments presented in the readings, I honestly think the courts have done a decent job of ruling on a case-by-case basis and explaining their reasoning as they, too, explore how traditional patent and IP rules apply to software.

The area of the IP system most in need of overhaul is the litigation of patents, specifically the existence of patent trolls. It is a clear symptom of a broken system that there are “companies” with no source of income aside from patent litigation. Such companies contribute nothing, benefiting at the expense of others by using rules that were meant to protect inventors, not endanger them.

Something that may help combat the issue is an additional requirement for patent litigation: proof of “stake” – litigants would have to show that they are using the patent, or at least doing work that closely relates to it. This would prevent patent trolls from buying patents simply for the purpose of litigation – unless they were also doing work in that area and began using the patent, they couldn’t litigate frivolously.

The argument against such a rule is that it would negatively impact individuals rather than companies. For example, a person who invents something but does not plan to commercialize it could be at risk of effectively losing the patent (by having no proof of “stake”). The proper outcome would be for this person to be able to continue holding the patent and earn royalties off its use.

Maybe the solution is for this requirement of “stake” not to kick in for the first few years of a patent – long enough for a private citizen who isn’t planning on using it to determine its worth and sell it to someone who will. It would prevent someone from inventing something and then keeping the patent long-term without using it… but is that a bad thing? Shouldn’t our IP system be biased towards those who use patents to create things and contribute to society? If doing so prevents patent trolls from extorting onerous sums of money from fledgling businesses (and large, if less burdensome, amounts from big ones), I’m all for making patents “use it or lose it”.

Reading12: We should be scrutinizing policy, not the tech

Self-driving cars would be very convenient. Wouldn’t it be great to hop into an autonomous car and surf the web, play a game, read a book – anything other than staring at the road on the way to your destination? More to the point, there is a lot of money to be made in providing this service, which is probably the most truthful explanation of why it’s happening. Ultimately, though, this is all beside the point. Should we make autonomous cars?

The discussion around whether or not we should develop or allow autonomous cars ultimately focuses on safety. Over 94 percent of the tens of thousands of annual road fatalities are caused by driver error. Could autonomous vehicles alleviate most, if not all, of these? After all, they shouldn’t get tired, or distracted, or angry at other drivers.

I tend to agree – beyond thinking that autonomous cars would be extremely convenient and very cool, I think they could certainly make the roads much safer. I do think we can reach the point where autonomous cars are considerably safer than human drivers, at which point widespread adoption would be a great improvement to the safety of our roads.

However, we’re certainly not there yet. I wouldn’t feel comfortable riding in a fully autonomous car at anywhere approaching highway speed; not because I don’t think that an autonomous car can be safe, but because I’m not convinced they are completely safe yet.

There is a great deal more work to do on autonomous cars. How far should we go with them? The crash in Tempe, Arizona, in which an Uber autonomous car struck and killed a pedestrian calls a lot of this into question. Should we test on public roads? Should we stop this altogether? Should a computer be allowed to make what can become life-and-death decisions?

I think that in light of the Tempe accident, we should be questioning the prudence of Uber’s decisions about design and testing rather than the algorithmic capacity of the car.

I want to preface this by saying that I am not trying to rationalize away the loss of life. Any loss of human life is a tragedy. But the capability of the autonomous car is not where we should be looking. It was dark, and the woman was crossing the road at a place where pedestrians would not be expected. But – and these next two facts are, I think, the crucial ones – a) the vehicle’s system attempted to initiate an emergency brake prior to impact, and b) Uber had disabled this capability in favor of a smoother ride.

The vehicle attempted to initiate an emergency brake 1.3 seconds before impact. Travelling at 39 miles per hour, it covered roughly 74 feet in that time. I can’t say with confidence that the vehicle could stop fully in that distance (a full stop from 70 mph takes roughly 185–190 feet for that car), but the crash would almost certainly have been far less than fatal. The autonomous system correctly detected that it needed to perform an emergency stop. And, with a pedestrian appearing out of the darkness at night, would we expect a human to have done better? Having watched the video, the woman appeared suddenly out of the dark – 1.3 seconds is better than I’d expect from most human drivers.
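
For what it’s worth, the arithmetic behind that figure is a quick back-of-the-envelope check using the 39 mph speed and 1.3-second window reported for the crash:

```python
MPH_TO_FPS = 5280 / 3600   # 1 mph is about 1.47 feet per second

speed_mph = 39.0   # reported vehicle speed at impact
warning_s = 1.3    # seconds between the detected need to brake and impact

distance_ft = speed_mph * MPH_TO_FPS * warning_s
print(f"{distance_ft:.1f} ft")  # ~74.4 ft of travel before impact
```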

If the autonomous system had been allowed to carry out the stop, the woman likely would not have died. But Uber “had disabled the Volvo’s emergency braking mechanism, hoping to avoid a herky-jerky ride,” citing that it was the responsibility of the human operator to intervene – the same human operator whose responsibilities also included relaying data and working on other things during the ride.

This is grossly irresponsible. I think it’s obvious that you can’t rely on a human driver to take over for an otherwise autonomous vehicle in emergency situations. If a human is not actively driving, even the best-intentioned will get distracted, sleepy, or simply lack the focus for the necessary split-second reactions. So why disable the autonomous emergency brake? Even if there were a full-time emergency observer, why disable it? Another layer of redundancy could never hurt, and I don’t buy that avoiding a “herky-jerky” ride is reason enough.

I don’t know how this could be regulated, but companies like Uber need to focus more on safety – for everyone involved, things like disabling emergency braking should be unthinkable. More robust safety features and continued effort on these cars will make the road safer for everyone, pedestrians and drivers alike. Further, I don’t think there’s as much of a trolley-problem concern as many like to posit.

These “trolley problems” almost never happen on the road, and when they do, they are most likely the result of prior irresponsible driving that, in theory, an autonomous car should avoid. If you have to choose between running into another car at highway speed or ramming pedestrians, couldn’t that have been avoided by following less closely, or not speeding? There are systems – speed limits, road signage, etc. – designed to make the roads safe and avoid these sorts of dangerous situations. To me, the more compelling challenge for autonomous cars is that they rely on infrastructure that may not always be there: road markings or signage that may be absent, damaged, or obscured by weather. At least so far, computers have a hard time improvising.

I’ll cut this off here – I have more thoughts on the trolley problem/ethics of autonomous cars (mostly about why people are focusing on the wrong thing) – but this is getting long. I can spell it out more in the in-class discussion.

In summary, I believe that autonomous cars are a promising possibility to make travel more convenient and safer for everyone; we’re just not there yet. We need to be more responsible in our testing and think long and hard about how we’re rolling out the technology, but we shouldn’t let poor safety decisions lead us to give up on this technology.

Reading11: More like a calculator than a brain

I believe that what we call “artificial intelligence” (more aptly called “machine learning”, I would argue), the field currently seeing an enormous boom, is fundamentally different from what we would truly call “intelligence”. Worries that we could unleash a monster smarter than us that ultimately leads to our destruction are unfounded as a response to the current work being done, and could well be hindering progress.

I like to quip to my friends that machine learning is just “statistics that have gotten out of hand”. And, essentially, that’s what it is: a rigorously defined architecture of nodes with weighted edges, with those weights and the connections between them being fine-tuned and adjusted depending on the type of network and its desired inputs and outputs.

That’s all AI is. It’s simply another tool that we are learning to use. It is a tool that can be very good at recognizing patterns, grouping things, or extracting meaning from complicated or chaotic data in a way that’s not easy for us to follow – which is the crux of the issue.

With typical computer programs, a programmer writes each instruction. Every action is defined and laid out beforehand; we can trace the execution and figure out exactly how and why things happened. With machine learning, this isn’t exactly the case. Programmers set up the network and provide the data, the objective function, etc., but then a dizzying amount of math happens, and a trained network is the result.
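
A minimal sketch of that workflow, with a toy “model” of one weight instead of a real network (my own example – the “learning” here is just gradient descent on squared error):

```python
# Toy training loop: learn w so that y ≈ w * x. A real network does the
# same thing across millions of weights at once.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (x, y) pairs with y ≈ 2x

w = 0.0     # start from an arbitrary weight
lr = 0.01   # learning rate
for _ in range(1000):
    # Gradient of mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # nudge the weight downhill on the error surface

print(round(w, 2))  # ~2.04: a "trained" parameter, no hand-written rule
```

Nobody typed in a rule saying “multiply by two”; the value fell out of the optimization, which is exactly why the result of a large network is so hard to trace.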

This “black box” is what scares people. The fact that there are no lines of code to trace to show why a decision was made is worrying. Consider AlphaGo, Deep Blue, Watson, etc. They decisively beat the best players in the world at things we consider very difficult. That doesn’t mean they’re “intelligent”, or understand what they are doing or why. It just means they’re finely tuned systems built to output Go moves, chess moves, or trivia answers.

The article in ND Magazine had a truly baffling quote: “Couldn’t you just turn it off? Not necessarily. Shutting down an AI would prevent it from achieving its goals and, with its superior intellect, it could disable that option.” Consider Google’s more recent Go-playing program, AlphaZero. It is programmed to take game states in a certain format, decide on the optimal next game state, and output a move (learning from the results as it goes). It is NOT programmed to “BECOME THE BEST GO PLAYER” or anything else that would make bucking its handlers and taking over the world (so as to never lose at Go, paperclip-maximizer style) a possibility. (Never mind the many other problems here: a computer can’t just “decide” not to turn off – “superior intellect” can’t defeat an unplugged power supply – and “escaping to the internet” and other such clichés are equally nonsensical; any further discussion would just become an increasingly silly rant.)

Even though I firmly believe that the machine learning we are doing now just produces tools, completely separate from “intelligence”, I can’t say definitively that a truly intelligent computer couldn’t exist. Saying so likely comes more from ignorance of the human brain than from any educated opinion – maybe there’s some aspect of the brain that would preclude it, but simply simulating a human brain seems like it could produce at least a facsimile of consciousness. Would that really be a consciousness, or just a simulated one, à la the Chinese Room? Would that even make a difference? I truly don’t know. That’s not to say I think it’s feasible, or could happen even in the next century. It’s just to say that we don’t, and cannot, know what the future holds.

Overall, the fear over artificial intelligence or, as Elon Musk put it, humanity being “just the biological boot loader for digital superintelligence”, is unfounded as it relates to machine learning today. Maybe that could happen down the road, but as it is now, the AIs are something completely separate, more akin to a calculator or abacus than a brain.

Reading10: On fake news and influence campaigns

I think it is the reasonable and sensible conclusion for platform providers such as Facebook and Twitter to combat “fake news” on their sites. Information that is demonstrably incorrect should be removed, particularly when it is being used to further certain agendas unfairly, or when it is causing real, demonstrable harm.

It’s a no-brainer that the spread of false information should be stopped. In this way, I don’t mind the fact that it’s technically a private company deciding what is and is not “fake news” – if it’s demonstrably false, it can and should be removed.

Additionally, we need to have a discussion about simply how much power these companies have, and what their platforms mean for the public discourse. See the effect that WhatsApp is having in India; these are becoming more than just corporations. They so profoundly affect how we live our lives, and we need to take a good, hard look at what to do about it. That is, to recognize that they may have some different capabilities than we afford typical companies, but also (emphatically) that they have additional social responsibilities. See, for example, how Russian agents were able to use Facebook to (credibly) affect the outcome of the 2016 US elections.

The reason I’m relatively nonchalant in saying “yeah, go ahead and remove fake news” is that I don’t think fake news is the root of the problem, particularly with the election. While the fake stories did say things that weren’t true, you can do much the same damage with true stories presented or framed in certain ways to certain people.

In fact, a component of the 2016 election shenanigans that may have been just as harmful, or more so, is indistinguishable from typical use of the website at the surface level. On Facebook, people make posts, add friends, and read and share articles. In the 2016 influence campaign, there were actors doing all of these things, just with a political agenda in mind. Actions like these can never be prevented at a high level and must instead be detected with fine-grained analysis, which as a matter of course makes them extremely difficult to combat.

The issue, I think, is not that what people are saying is “fake”, or completely impermissible. The problems arise when we track what people do so closely, and build up such huge dossiers on the general populace, that the data can be leveraged to deliver narrowly targeted messages to specific groups. When a firm can identify people likely to vote for a certain political candidate and serve them ads specifically crafted to discourage them from voting, we have a problem.

I wouldn’t go so far as to say that we live in a “post-fact” world, but it is certainly the case that it’s becoming far easier for people to be manipulated. As the public’s news consumption is growing tied to social media rather than coming from news outlets that they themselves seek out, it is becoming easier and easier to influence people into believing certain things or behaving in certain ways.

I believe that this is a threat to our democracy – not in an “overthrow of the government, authoritarian society” sense, but in the sense that elections could come to be won by whoever has the best analytics and campaign machinery, rather than by whoever has the best platform or the promise of being the most effective leader. In the age of social media and data analytics, the process of attaining political office is growing increasingly independent of having the qualities to hold that office effectively – a trend that can only lead to bad outcomes if we don’t do something to fix our political discourse.

And, I think, we can do something to alleviate it. Social media doesn’t appear to be going away any time soon, but the climate of this past presidential election cycle – with all its fearmongering, finger-pointing, and name-calling – lent itself uniquely to this sort of influencing campaign. This sort of rhetoric will never go away, but the more we try to move back towards a reasoned, policy-centered discussion, the better off we’ll all be.

Reading09: Net Neutrality, or Why Thinking About Our Internet Situation Just Makes Me Mad

Net neutrality is, in short, the principle that Internet Service Providers (ISPs) – Comcast, AT&T, Time Warner, etc. – cannot throttle or block particular content on any grounds except legality. In other words, they have to act as infrastructure and infrastructure alone: whom the user communicates with, and what they say, cannot be grounds for their web traffic to be handled any differently.

The arguments for net neutrality mostly focus on protecting the customers (both individuals and other companies) from predatory practices by ISPs. A free and open internet is in everyone’s best interest. Without net neutrality, ISPs would be free to do things such as block certain sites based on their content, throttle or block entirely services of competitors, and divide the internet into fast and slow lanes based on who’s paying.

The arguments against net neutrality say that the government is overreaching with these regulations and stifling free-market competition. Removing companies’ ability to offer different tiers or to differentiate traffic stifles the growth of the ISP industry; certain services could benefit from paying for guaranteed fast access, and companies are disincentivized from innovating to provide a better product.

Like many people on the tech-ier side of things, I am staunchly for net neutrality. A lot of the arguments for repealing the net neutrality regulations amount to “The ISPs won’t do these profit-grabbing things they would now be able to do! In fact, we don’t even want to! We just… don’t want that regulation!” I’m pretty cynical on this subject as a whole – to me, net neutrality is in the best interest of everyone but the ISPs (who stand to make increased profits without it), and its repeal is a symptom of large corporate donors having too strong a voice in American politics.

Questions of implementation and enforcement I’m less able to speak to – beyond having regulations and federal oversight from the FTC/FCC to punish infractions, I don’t know the specifics of what can be done to enforce this. If certain services load just a little faster on a certain ISP, how are we to tell if that’s deliberate, or just a quirk of the fluctuating speeds and unreliable connection that we’ve all-too-often come to accept as normal from our ISPs?

But, above all, I am not at all confident in the free market fixing our internet woes. Time and time again, the ISPs have shown that they are not interested in continually improving their service or iterating on their business for the benefit of the consumer. The internet sector as a whole has historically been very money-driven and bad for the consumer. Staggering amounts of money have disappeared into ISPs and telecom companies with nothing visible to show for it and no clear answers about where it went (see the roughly $200 billion granted to cable companies to build out fiber optic lines across the US that just… disappeared).

The internet situation in the US is profoundly broken. ISPs already charge twice (billing users on both ends, be they people or corporations) for any interaction that occurs over the cables they own, and they want the freedom to differentiate the service they provide in order to get more money out of people. In what other sector do the people who own the infrastructure have such radical control over everyone else involved? We need to reassess, and it’s hard to do so when huge corporations have such strong voices (read: such deep pockets).

Reading08: Corporations are by the few, for the few

Corporate personhood is the doctrine by which corporations are afforded some of the same rights and freedoms that people are granted (and, too infrequently, the same responsibilities). The most high-profile example is the ability of corporations to participate in the political process through lobbying and donations, a freedom granted under the free-speech protections of the First Amendment.

Most aspects of corporate personhood are freedoms under the 1st, 4th, 5th (though corporations have no privilege against self-incrimination), 6th, and 8th Amendments to the U.S. Constitution. Criminal and civil legal actions can be taken against a corporation as a whole, as well. The legal ramifications are pretty straightforward – certain actions by its members can be held against a corporation as a whole, and penalties (usually fines) can be levied against it.

The social and ethical ramifications are more complicated, and I think are where our current system is lacking. In short, I believe that a problem arises because we give corporations the ability to do things like participate in our political process and engage with communities, but the current view and structure of corporations holds that making as much money as possible (i.e. maximizing shareholder profit) is the goal. I will discuss this more after the specific case study.

I chose to read about the Sony BMG rootkit. It seems like a no-brainer to me that installing a rootkit without users’ knowledge was unethical. Beyond installing software on users’ systems unbeknownst to them, with no clear way to uninstall it (bad enough already), the rootkit left users vulnerable to all manner of attacks – any malicious program simply needed to put “$sys$” at the start of its name, and it would be hidden from the user.
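
Conceptually, the cloaking behaved like a blanket filename filter. The toy sketch below is my own illustration (the real rootkit hooked Windows system calls, not anything this simple), but it shows why such a filter is dangerous:

```python
import os

HIDDEN_PREFIX = "$sys$"  # the prefix the rootkit cloaked from listings

def visible_listing(path):
    """What the user 'sees': the real directory minus cloaked names."""
    return [name for name in os.listdir(path)
            if not name.startswith(HIDDEN_PREFIX)]

# The flaw: the filter doesn't care whose files these are. Any malware
# that names itself, say, "$sys$evil.exe" inherits the cloak for free.
```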

I’m largely against DRM – if people want to pirate, they’re going to pirate; no company can make its content completely locked down. But I don’t think that DRM in and of itself is unethical. I think it’s ineffective, unnecessary, and often puts too much power in the hands of the distributors rather than those who produce the media, but that’s beside the point. Co-opting customers’ systems without their knowledge is unethical – doubly so when it’s done irresponsibly, in a way that opens even careful customers up to opportunistic attacks.

I think that if corporations are afforded the same rights as individual persons, they should also be expected to, at the very least, have similar ethical and moral obligations and responsibilities. There will obviously be differences because it’s a corporation, but it can’t be like it is now – where profit matters above all else.

Just like people, corporations need to act as citizens. Currently, they largely try to make money above all else, to others’ detriment – they lobby for political change out of self-interest rather than for policies that better society, they pollute the environment except where fines make it uneconomical, and so on. Corporations have a lot of power, much more than any one person. If we’re going to let them into our political process, we also need to rethink how we approach them. When they’re understood to be large machines controlled by C-level executives, it’s no wonder that their effects benefit the few rather than the many.

I don’t have a good answer for how to fix things, but the current system is not set up to approach the good of everyone. It would be a step forward if, as one reading suggested, lower-level workers had more say in the running and policy of their corporation. If employees contributed more than just labor to their company – their views and opinions in discourse, too – it would be at the very least a step in the right direction.

Reading07: Data gathering isn’t always good… but it’s not always bad, either

I’m honestly not sure how to judge the use of data gathering/mining in order to sell customers goods and services. There is definitely a point that goes too far, but there’s also a degree that is logical and necessary for a company to function well. How do we draw a line between these two? When does a company go too far?

It seems fitting and necessary for a company to aggregate data on its customers that is directly pertinent to its business. I wouldn’t begrudge a supermarket for tracking my purchases and using my purchasing habits to send me coupons or alert me to deals I’m interested in. That seems reasonable and logical.

However, I’m less sure about the practice of seeking out or buying additional data on consumers, even if it’s pursuant to the goals of the company – for example, when Target sought out extra data on its customers to determine who was pregnant, in order to send them targeted advertising. This is clearly in line with their “greedy strategy”: they’re doing their best to maximize sales and profits and to capture the spending habits of additional people. But, and I think I’m far from alone in this, something feels a bit off. This corporate giant purchasing information about me that’s not really relevant, in order to sell me things? That doesn’t feel quite right.
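
From the press accounts, that kind of prediction boils down to scoring shoppers against a basket of telltale purchases. The sketch below is purely illustrative – the products, weights, and threshold are invented for the example, not Target’s actual model:

```python
# Hypothetical purchase-pattern scoring, loosely modeled on press
# accounts of Target's "pregnancy prediction" score.
SIGNAL_WEIGHTS = {
    "unscented lotion": 0.3,
    "prenatal vitamins": 0.9,
    "cotton balls (large bag)": 0.2,
}

def pregnancy_score(purchases):
    """Sum the weights of any signal products in a purchase history."""
    return sum(SIGNAL_WEIGHTS.get(item, 0.0) for item in purchases)

history = ["milk", "unscented lotion", "prenatal vitamins"]
if pregnancy_score(history) > 1.0:   # invented threshold
    print("send baby-gear coupons")
```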

My initial thoughts are that the issue arises when the company seeks out and purchases additional data on people. Data gathered in the course of doing business is, in my opinion, pretty uncontroversially “fair game”. But should companies be able to purchase additional, seemingly unrelated information about us?

To say that data brokers should be abolished and are across-the-board unethical is a bit naïve. On one hand, when there is money to be made from doing this (and there clearly is), it’s not as if it’s going to stop easily. On the other, though… aren’t we deriving benefit from this?

If a pregnant woman is preparing to have a child, and receives coupons and deals from Target, isn’t she drawing benefit? Maybe money is tight, and they really make a difference. Is it so wrong for Target to benefit from this knowledge, if they’re providing a service? We’ve all benefited from the growing interconnectedness of different parts of our lives, often in ways that we don’t even think about.

To me, a large part of the concern is what happens to personal data and how it’s handled. Hacks and data breaches happen, and the more companies that have our personal data – the more it’s been spread around – the more opportunities there are for it to get out to people whose intentions are less above-board, who might seek to benefit from our loss. If there were a guarantee that the information was completely secure, would we care as much? What if Target had, after figuring out who was pregnant and acting on it, immediately deleted all the data it had gathered? Why does that feel less objectionable?

Data gathering and analysis is never going to go away. There are definitely degrees that are too much, and there should be an obligation to protect the personal information that is gathered. However, we do at times derive very real benefit from these practices. Maybe this is just the reality of the increasingly connected world we live in.

Reading06: On Encryption and Privacy

The tradeoff between privacy and security is a difficult subject. At a glance, it seems like a no-brainer that we as citizens should have a reasonable expectation that our personal lives and data are relatively secure, free from prying eyes. At the same time, though, isn’t it a good thing to be able to catch terrorists? To prevent crimes and attacks before they happen?

Progress in privacy and encryption is a double-edged sword; every protection and safeguard provided to general consumers is necessarily also made accessible to those with hostile intent. Should we hold back on encryption and privacy so that criminals aren’t protected?

I think the answer to that is no. Companies such as Apple should continue to strive for increased security and privacy for their users. It is more than fine for them to cooperate in retrieving data from criminals’ phones to help stop further attacks. However, I think they correctly drew the line at creating tools that could be used to unlock anyone’s phone.

Proponents of security sometimes use the phrase “If you’ve got nothing to hide, you’ve got nothing to fear.” I would like to counter that with an even more famous quote:

“First they came for the socialists, and I did not speak out—
Because I was not a socialist.

Then they came for the trade unionists, and I did not speak out—
Because I was not a trade unionist.

Then they came for the Jews, and I did not speak out—
Because I was not a Jew.

Then they came for me—and there was no one left to speak for me.”

– Martin Niemöller

Obviously, at first glance, comparing iPhone privacy to the Holocaust seems a little drastic. But I think the quote is relevant for what it says about small injustices and about protections against an ill-intentioned government. In a perfect world where the American government was run by angels, I suppose it would be true that those with nothing to hide have nothing to fear. But people are fallible. Bad people can get too much power, and even worse things can happen. Even barring institutional evil or anything nearly as dire as genocide, there are still a number of reasons to err on the side of privacy.

In general, the fewer parties that have your data, the better off you are as a consumer. I think many of us have made a kind of grudging peace with the fact that our data gets vacuumed up no matter what we do, but the more encryption and privacy we have, the better. Even if the people collecting it could more or less be trusted, we are then relying on their data protection: if they get hacked, our data is exposed to who knows what kind of actors. And, by simple probability, the more people and services that have our data, the more possible failure points there are for it to end up somewhere it doesn’t belong.

In short, I don’t think there’s very much merit to the thinking that “if you’ve got nothing to hide, you’ve got nothing to fear.” More privacy, encryption, and security for consumers’ data is almost always for the better.

Reading05: Not made to be broken, but sometimes you have to try

On January 28, 1986, the Space Shuttle Challenger broke apart just over a minute after launch, killing all seven crew members. The mechanical failure was in the O-ring seals, which were known to perform poorly in cold conditions; the temperature that morning was considerably colder than at any previous launch, well below temperatures at which numerous issues with the O-rings had already been observed.

While the O-rings may have been the mechanical cause for the disaster, the root cause is just as much the overlooking of these issues and the decision to go ahead with the launch. NASA was under pressure, from the public and from the government, to produce tangible and demonstrable results. Roger Boisjoly, a Space Shuttle engineer, described a meeting prior to launch as “a meeting where the determination was to launch, and it was up to us to prove beyond a shadow of a doubt that it was not safe to do so,” elaborating that “this is in total reverse to what the position usually is in a preflight conversation or a flight readiness review. It is usually exactly opposite that.”

Engineers and others involved in the program raised concerns, but for a number of reasons the decision was made to proceed with the launch. Chief among those raising concerns was the aforementioned engineer, Roger Boisjoly. After repeatedly being met with frustration, he eventually brought his concerns to the public in an attempt to raise awareness of the risks surrounding the launch.

Even though the issues he was concerned about were shown to be more than credible, this whistleblowing led to Boisjoly being ostracized by colleagues and isolated by managers who, in his words, “made life a living hell on a day-to-day basis”. Boisjoly himself said that it “destroyed my career, my life, everything else.”

It’s clear that Boisjoly’s actions were punished. But we must ask the question – were his actions ethical? Should he have done this?

There were defined avenues for Boisjoly to raise concerns at work, and he did his best to use them. It was only after these proved fruitless that he turned to the public – breaking explicit and implicit rules of professional conduct in the process.

I would argue, however, that he was justified in doing so. All of his attempts were met with frustration, and he had good reason to believe that if these issues were not addressed, they could (and did) lead to the death of several people. There were rules in place, but he broke them in an attempt to prevent a catastrophe from happening.

Rules are rules for a reason; they govern how we should act and what is permissible. But there is a point where you have to consider breaking them, and where that point lies depends on the severity of what might happen, and to whom. If the faulty O-rings had merely stood to lose his company money, even a substantial amount, Boisjoly would not have had cause to go public. Because the repercussions were extremely severe and would fall on people who had no part in the decision, he was right to try whatever he could to raise his concerns.

Was his employer right to retaliate against him? On one hand, he did break protocol by raising concerns to the public, something that is by and large frowned upon (for good reason), and they would certainly have recourse to penalize such actions. However, for the same reason that he was justified in breaking normal protocol, this was a time when discretion perhaps should have been used and the normal penalties not applied.

In sum, rules are rules for a reason. But even when rules are just and proper, there are times when it is ethical to break them. One such time was here, when breaking the rules was the only option Boisjoly had left to try to prevent a catastrophe that ultimately took seven lives.