Coding for Everyone

One of the best opportunities I’ve had as a Computer Science student at Notre Dame has been working with South Bend Code School to teach members of the community–at any technical level–how to code. I believe coding is the new literacy and should be taught in some capacity at all levels of education. One particular line from the documentary CODE: Debugging the Gender Gap stands out to me when considering my beliefs on this issue. Specifically, I remember one interviewee saying, “in the same way that everyone should know a little bit about law and economics, everyone should know a little bit about programming.” As technology has pervaded culture so significantly in the past few decades, so has the need grown for more citizens to understand the theory behind the software they use.

Working with South Bend Code School, I had the chance to see grade-school students who never considered themselves “coders” publish their first website–and giddily push the limits of the printed-out assignment. “Can I add an image now?” one girl asked. “Sure!” I said, and she was already searching Google for the HTML code to include an image of her family on her new website. In my view, so many people just haven’t been exposed to coding and so don’t realize their own abilities, or how approachable and powerful it becomes when you devote a little energy to it.

Perhaps this enthusiasm is exactly what Brian Drayton targets in his pendulum manifesto against teaching code. I found the articles against the coding literacy movement myopic and unreasonable. Drayton’s comparison to the Logo education movement of the 1980s seems to miss the point–he addresses specific failures of that movement without acknowledging lessons learned or the different approaches institutions are taking today. He even conjectures that coding is a field that either can’t be taught well to the masses or is too new for us to know how to teach it. That does not mean we shouldn’t try!

My counterargument is for him to consider any basic subject in school–and ask it to meet the same standards he sets for coding. Sure, some students are better at math, some are better at language, and some perhaps will be better at coding. That doesn’t mean Kindergarten classes are specialized in a particular field! As for teaching methodology, I can think of plenty of courses that require hands-on work–I’m sure we can apply some of those lessons to programming education.

I think coding education should be required for K-12 but take a different approach than a singular class, course, or subject. It should be integrated into the other courses through assignments, projects, and tests. The application to STEM classes is obvious–code math problems and puzzles, code stoichiometric equations, etc. But imagine writing a program to help you remember basic grammar rules (“i before e except after c”), decode sentence structure, or find trends in Shakespeare’s word choice. There are coding applications in every subject–and that’s just the point. Code has transformed society in every area; it should do the same in education.
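As a toy illustration of the kind of cross-subject exercise I have in mind, a grammar lesson could double as a first programming lesson. The rule and the word list below are just examples of my own, sketched in Python:

```python
# Toy spell-check for the grammar rule "i before e, except after c".
# Flags words containing "ei" without a preceding "c", or "ie" right after "c".
def breaks_rule(word):
    word = word.lower()
    for i in range(len(word) - 1):
        pair = word[i:i + 2]
        if pair == "ei" and (i == 0 or word[i - 1] != "c"):
            return True  # "ei" with no "c" before it
        if pair == "ie" and i > 0 and word[i - 1] == "c":
            return True  # "ie" directly after a "c"
    return False

for w in ["believe", "receive", "science", "weird"]:
    status = "breaks" if breaks_rule(w) else "follows"
    print(f"{w} {status} the rule")
```

Of course, English being English, even “weird” and “science” break the rule–which is itself a nice discussion starter for the class.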


During my senior year of high school, I came upon an interesting app for my Mac that did something amazing: through a beautiful user interface, it let me select which shows and seasons I wanted to track, and it would automatically download a high-quality version of each episode as it was released. It was seamless, clean, and best of all–free! Of course I had an inkling of concern that what I was doing might not be terribly ethical or legal, but alas, the process was too easy, and I didn’t have Netflix or the like.

Fast-forward to the spring of my freshman year, when I got an email from the Office of Community Standards requesting a conduct meeting regarding allegations by Home Box Office (HBO) that I was using nd-secure to illegally download episodes of their show, The Newsroom. Well…yes, yes I did do that (which was crazy because I didn’t even watch the show!). I did not, in fact, have the meeting, but I did have a conversation with OIT, who requested I remove all copies of the TV show from my computer and immediately stop any other illegal downloads. It was a necessary slap on the wrist, but the threat of further repercussions from OIT (no internet access for me!) and the scary legal language from HBO caused me to think hard about whether it was worth it to “enjoy” this copyrighted content at the price of lots of legal hoops and moral quandaries.

My conclusions? First, it’s not worth it. Further, it’s not right.

I made a commitment to begin purchasing the content (apps, songs, movies, and shows) I wanted to enjoy. Companies had already made it so easy for me to enjoy this content if only I paid for it. I didn’t have to leave my laptop running all night torrenting files with strange extensions, then convert them later so I could sync them with my iPad. I could tap a button and it would download in HD to my iPhone–and stay synced across every device. So it wasn’t worth the hassle, but it wasn’t worth the risk, either: I could actually enjoy my content, knowing my version of Photoshop wasn’t bootlegged or susceptible to weird issues, and that no company was chasing after me. (One time, the company I started was hit with a preemptive legal demand over a Google image we had used for a photo on our website, asking for $500+ in payment–I emailed them nicely that I was sorry and took it down. No charges pressed.)

But beyond the convenience factor of “not living life like a fugitive” or spending precious time trying to save a few pennies, I reasoned that pirating stuff was just wrong. I am a creator and a developer–and I’ve seen some of my stuff copied and reproduced without my permission. Sad! Thus, in developing recent websites for clients, I’ve paid special attention to licensing information for the themes I use–ensuring not to cross any bounds. It’s a bit tedious, but it’s the right thing to do. Today, I buy all my songs, movies, TV shows, and apps–and I feel great doing so. We should be promoting the fair payment of quality work.

Self-Driving Cars

A recent report by RethinkX, an independent think-tank that studies broad market disruptions from technology, projected that 95% of miles driven in 2030 will be driven by autonomous electric vehicles. Their comprehensive study examined the social, environmental, economic, and geopolitical impacts of such a shift, producing the following diagram to help explain some of the motivations for developing and building self-driving cars:

Source: RethinkX

There are several hot-button issues at play as this technology advances and its impacts (especially the negative ones) become more and more inevitable. Specifically: is it moral for an autonomous system to make decisions about life and death? Is it moral to displace so many jobs via automation? Is it moral to increase wealth inequality beyond current levels? Such questions directly relate to the shift we will see in the next 5-10 years.

Last week, I had the chance to speak with Democratic Congressman John Delaney, the head of the Artificial Intelligence Caucus in the US House of Representatives, at the 8th annual Naval Academy Science and Engineering Conference in Annapolis, Maryland. Representative Delaney was bullish on autonomous cars and the enormous safety and productivity benefits of a society transported by this technology. He was asked several tough questions last Monday. On job loss, he argued we would be “betting against history” to say that the market wouldn’t balance itself out, noting that innovation has significantly reduced the poverty rate in the last century. However, he said that the key for America is to invest in more education–K-12 alone is no longer viable. He suggested pre-K through college or technical training to get more Americans prepared for the future. Finally, Congressman Delaney mentioned that life has tradeoffs. He said:

“If the positives outweigh the negatives, there will still be negatives! You just have to deal with them.”

Thus, I think he viewed our autonomous-driving future as inevitable and good, with only minor consequences. I tend to agree with this view but don’t have the solution to the negative impacts…does anyone?

This summer, I saw Google’s Waymo cars driving about daily in Mountain View and got so excited! I love disruptions in technology because it means I happen to be living at a time when history is being made. Specifically, I feel like our society is optimizing the human experience. Innovation usually means more money, more time, more lives saved–for everyone. RethinkX’s widely-publicized study found that self-driving cars will increase mobility among low-income families and many people restricted by today’s model. In addition, they found:

“Savings on transportation costs will result in a permanent boost in annual disposable income for U.S. households, totaling $1 trillion by 2030. Consumer spending is by far the largest driver of the economy, comprising about 71% of total GDP and driving business and job growth throughout the economy.”

In the US alone, more than 30,000 auto accident deaths last year can be attributed to human error. Autonomous cars will bring that number down significantly–and have already displayed positive results. In the same way that seatbelts were legally mandated inconveniences to protect safety, I think we’ll find ourselves in a world where human driving is illegal in some places to promote the safety self-driving cars bring.

Self-driving cars will surely be disruptive, but so was every leap of progress in human history. Humans have an incredible imagination and will only continue to innovate as we afford ourselves more resources. I can’t wait to take my first self-driven ride!


This weekend, I had the opportunity to attend the 8th Naval Academy Science and Engineering Conference in Annapolis, MD, which was focused this year on two themes: Artificial Intelligence and Space Exploration. The conference featured lectures, panel discussions, and breakout groups on various issues regarding these themes. My specific group focused on the ethics of automation (ha!) and gave a presentation at the end about what the US government and industry need to do in the next five years to ensure the automation age is ushered in responsibly.

Before we compiled our thoughts, it was important to get some background information and ask panelists tough questions to better understand the issues at play. One of the speakers at the conference was Democratic Congressman John Delaney, head of the new bipartisan Artificial Intelligence Caucus in Congress. He was certainly concerned that automation would temporarily disrupt industries such as transportation but seemed confident new jobs would arise to fill the void. Someone asked about Universal Basic Income (UBI), and Congressman Delaney was somewhat dismissive. “It’s too early to begin talking about that yet,” he said.

My specific question regarded the loss of tax revenue associated with outsourcing taxable, human tasks to machines. The thought is, if McDonald’s has the choice between a $35k/year employee and a $35k kiosk, it’s preferable for them to replace the human worker, even if the machine costs more initially. The kiosk is never late to work, never misses an order, and cannot be rude to customers. However, this decision actually has huge consequences for tax revenue. While the employee’s salary would have been taxed, the machine’s cost for completing the same task would not be! Ultimately, this leads to a loss of tax revenue for the federal government unless we decide to start taxing robots–a strange concept today, but perhaps our only option tomorrow.
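To put rough numbers on that, here is a back-of-the-envelope sketch in Python. The $35k salary comes from the example above, but both tax rates are purely illustrative assumptions, not actual IRS figures:

```python
# Back-of-the-envelope estimate of tax revenue lost when a salaried
# worker is replaced by a kiosk whose cost is not taxed as wages.
# NOTE: both rates below are illustrative assumptions, not real IRS rates.
SALARY = 35_000
PAYROLL_TAX_RATE = 0.153  # assumed combined employer + employee payroll rate
INCOME_TAX_RATE = 0.12    # assumed effective income tax rate

def lost_revenue_per_worker(salary=SALARY):
    """Wage-based tax revenue that disappears when the job is automated."""
    return salary * (PAYROLL_TAX_RATE + INCOME_TAX_RATE)

per_worker = lost_revenue_per_worker()
print(f"Lost tax revenue per replaced worker: ${per_worker:,.0f}/year")
print(f"Across 100,000 replaced workers: ${per_worker * 100_000:,.0f}/year")
```

Under these assumed rates, each replaced worker costs the government roughly $9,500 a year–which is exactly why the “robot tax” idea keeps coming up.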

I was disappointed that my question wasn’t directly addressed, because I see this as a complex issue. Congressman Delaney seemed to echo again that new jobs would be created in the process and that those would also get taxed. Ultimately, I suppose, without much data it is difficult to make accurate predictions about which jobs will be lost, how much tax revenue is lost (or gained), and when this is all going to happen. Some people in my breakout group argued that the economic surplus of automation would be enough to fund Social Security for decades and pay down the national debt! There are also likely subtleties that are not quite clear now. For instance, a report published this summer noted that the automation of transportation would actually put more money into the pockets of everyday Americans as fewer people bought cars and more used transportation as a service. Additionally, it’s hypothesized that advertising opportunities in self-driving ride-sharing services might bring the cost down further–i.e., McDonald’s subsidizes half of your ride to the movies in exchange for you watching a 4-minute ad during the ride.

I know it’s not satisfactory to put it this way, but it’s difficult to know exactly what impact automation will have. I think, in the long term, it will lead to more lives saved, more money in everyone’s pockets, and less stress–but we’re a long way from that point.

Filter Bubbles

I agree with Edward Snowden that too many people in our generation rely on Facebook as their sole or primary source of news, for several reasons. First, Facebook is a media company and thus controls what content meets viewers’ eyes. I think the corporate mission to “increase engagement” stands in stark contrast to an ethical obligation to present relevant stories that both stimulate intellectual curiosity and promote civil discussion. In other words, it makes more sense to keep people on the site reading articles they agree with (true or not) than to provide thought-provoking pieces that span the spectrum.

I realize this is a bold claim to make. People might argue birds of a feather flock together or that Facebook is simply broadcasting the sentiments of its users. However, I think Facebook is primarily a media company, not a megaphone. Further, I think all media companies need to be held to a high standard of being fair to both sides of an issue. Consider that 170 million people in North America use Facebook every day. This company then has the power to influence that many people in an election–no company should have that unchecked power!

Facebook is not, as it often portrays itself, a simple megaphone for the press, broadcasting articles chronologically in our newsfeeds without weight or intent. It absolutely controls when we view articles and which articles we view, and those algorithms more often than not curate content from particular sources, creating echo chambers. Consider the findings from the authors of a 2017 study on Facebook’s News Feed:

“The authors saw that active Facebook users were more likely to interact with a limited number of news sources. Additionally, the more active a community was, the more self-segregated and polarized it was.”

I see little benefit in allowing hyper-polarization online, but that seems to be exactly what Facebook promotes. What we need instead is balance. In the UK, the BBC is required to give both sides of an issue equal time, and still remains a top-trusted news source for international news. Why can’t Facebook employ similar tactics? It might force readers to be critical about what they read or talk about their differences with a little more understanding.

Understandably, such an argument is based upon a utopian view of the world in which people everywhere understand and agree with each other and little tribalism exists. It doesn’t seem practical. However, such conversations do exist on a particular subreddit: r/cmv, short for “change my view.” Here, people have frank discussions, with the caveat of following a few rules.

The rules of CMV are simple: you have to provide a well-reasoned argument, be open to changing your view, respond within 3 hours, and… not be hostile toward one another. Chenhao Tan at the University of Washington was fascinated by this community and studied its effectiveness. He found that if you want to change someone’s mind on Change My View, you can reply back and forth up to about three times before your chances of changing their mind begin to decrease. After that, it makes sense to agree to disagree.

I think it’s important to have these conversations, but it’s impossible to do so in an echo chamber like Facebook, which can indirectly control your opinion. Either Facebook needs to tweak its algorithms to promote equal treatment of many views–even ones differing from your own–or our generation needs to find a more thought-provoking source of media. Information is vital to our democracy; it doesn’t make sense for a single company to be in charge of dispersing it.

Privacy + Cloud Computing

  • What trade-offs are you making when using the cloud? Have you consciously evaluated these trade-offs? What is your justification?

When using the cloud, you are giving up some control of your data. For instance, you likely do not know the physical location of your data–it is abstracted away so much that you may know which “zone” of the world it resides in but not much else. When using your own server, you likely have physical access to the hard drive and can ensure its physical security is under your control. Cloud companies make claims about how secure your data is and how little downtime there is–but there are still certain risks associated with that. For instance, in managing my own server, I can be sure I am in complete control of uptime, downtime, and maintenance, whereas I may not be able to do anything when an outage occurs on another service.

I have consciously evaluated these trade-offs and think using cloud services makes much more sense than personally setting up and managing my own compute environments. I am afforded so much more power at a lower cost and have the ability to expand easily using load balancers. I know GCP and AWS hire engineers full time to keep my data secure and my services running 99.999% of the time.

  • Is it ever worth it to manage your own private cloud services? Do you envision a future where you may use your own services rather than third party ones?

It is not worth it for me to manage my own private cloud services. I build lots of websites for people using WordPress and rely on services large companies can offer that I just cannot do myself. For instance, GCP and AWS have vendors who offer one click set-up for WordPress sites, DDoS protection, and automatic scaling as the websites need to scale up or down.

I cannot see myself using my own services rather than third party ones in the future. I think the use case has been well-tested by many Fortune 500 companies, and the cost benefits and security assurance outweigh the benefits of setting up and using my own services. Consider that for this project, we set up GitLab via GCP…imagine if we had physically built our own server and installed every necessary component from the ground up! The point is, there is convenience in abstraction, and I simply do not have the time, will, or reason to be constantly reinventing the wheel.

  • Do you have the moral standing to complain about encroachment on your privacy when you consciously give away your information to third party services?

These services operate through contracts, so Google or Amazon legally obligate themselves to certain standards. I believe when those standards are breached, I am justified in complaining. It’s important for these services, however, to display transparency in communicating with consumers so that each side understands the expectations. If I consciously give my information (and money) to Amazon or Google, I also have to expect privacy and security from them in return.

Free Speech

There are a couple different issues at play regarding technology, censorship, and free speech, so we should be clear about our distinctions between the rights citizens have, the rights companies have, and the rights governments have. I bring this up to help explain that government censorship is not the same as Facebook taking down a post–these entities reside in different domains of authority and purpose.

Looking at the spectrum of political beliefs, it is clear that one extremity endorses uncensored and unfettered free speech while the other favors suppressed and censored speech. Importantly, the reality, baked into place through decades of Supreme Court cases and litigious battles, is somewhere in the middle. The standard for unprotected speech today is speech “directed to inciting or producing imminent lawless action.” Note that this actually doesn’t mean yelling “fire” in a crowded theater is illegal, but it still shows that speech can be legally restricted.

Of course, much of this has to be decided on a case-by-case basis. Our government is structured to promote truth not through censorship or jailing a person for speaking, but by promoting the overwhelming counterbalance of more speech. Former Supreme Court Justice Oliver Wendell Holmes summed up this view of the Constitution in one of his opinions:

“The ultimate good desired is better reached by free trade in ideas — that the best test of truth is the power of the thought to get itself accepted in the competition of the market, and that truth is the only ground upon which their wishes safely can be carried out.”

In other words, community action is paramount. It is up to the community of people rather than a single, opaque “censorship unit” to determine what is acceptable speech or not. There are platforms that allow uncensored, unfettered, (and anonymous) free speech–and there’s a reason a lot of people stay away from them: they tend to turn into echo chambers of vitriolic hate speech.

Thus, what is a company like Facebook to do to retain its users, protect their rights to free speech, but also protect users from violent speech? Facebook needs some form of speech suppression to function. Unlike the government, which only restricts speech that promotes imminent lawless action, Facebook can restrict speech that is mean, false, or obscene if it wants to. What is the ethical approach?

I think Facebook should do three things to ensure people are happy and safe on their platform. First, continue to primarily police its feeds through the community–empower people to self-report posts that violate community standards. Second, Facebook should offer significant transparency with their post-removal guidelines. For instance, terrorist activity needs to be acutely defined so that there is little confusion about policy. Finally, Facebook should actually encourage people to unfollow someone who upsets them. These steps offer transparency and allow the public to make their own decisions about speech.

I think tech companies were right in restricting access to the Daily Stormer because they set out policies that were clearly violated.

Corporate Personhood

Corporate Personhood basically says that corporations should be treated, for most legal intents and purposes, as a person. Thus, corporations have the right to free speech and are protected by many of the amendments in the Bill of Rights. However, there are subtle differences of course. Corporations clearly cannot marry, cannot vote, and have no Fifth Amendment privilege against self-incrimination. The ramifications of this principle mean that corporations can sue or be sued (thank goodness!), make and enforce contracts, buy, sell, and hold property, etc. It also means that corporations cannot go to jail, raising some legal eyebrows.

A particular Google antitrust case comes to mind regarding the moral obligations of a corporation. In June of this year, the European Commission fined Google nearly $3 billion for having “abused its market dominance as a search engine by giving an illegal advantage to another Google product, its comparison shopping service.” While I do not want solely Google+ reviews showing up in my Google searches, I don’t believe Google is wrong to promote its own products on its service.

Google is unquestionably the market leader in search engines and several other online areas, so it’s understandable how monopolistic and antitrust issues arise. In publishing only Google+ reviews for, say, a hotel search, I don’t think Google was keeping the consumer’s best interests directly in mind. Competitive analysis shows that many more reviews existed on services like Yelp–as a consumer, I would want the best data to show up in my search results, and that often means the source with the most data.

However, I don’t think Google can be held to this standard of giving every company equal playing time, for the simple fact that Google needs to generate revenue. Ironically, in order to continue to bring the best experience to consumers, Google relies on advertising revenue and the continued growth of its own services. In other words, to put consumers first, one might say it has to put itself first. Ben Thompson expertly underscores this point:

“I agree that Google has a monopoly in search, but as the Commission itself notes that is not a crime; the reality of this ruling, though, is that making any money off that monopoly apparently is. And, by extension, those that blindly support this decision are agreeing that products that succeed by being better for users ought not be able to make money.”

In other words, Google is big but not necessarily “bad.” In fact, I would argue they are looking out for the best interest of the consumer–after all, consumers freely choose to use the service or not.

Companies aren’t quite afforded the same rights as individual persons and they aren’t individual persons, so it’s illogical to equate personal morality with corporate morality. Luckily, our laws and courts understand this and account for the nuances of corporate actions. Our laws thus reinforce the ethical obligations by which corporations are expected to abide. So yes, ethics apply to corporations, but not the same ones as for people…they’re not people, after all.

The Cloud

I build lots of websites for people. In fact, I’m working on a couple of them right now and host them on Amazon Web Services because it’s inexpensive, reliable, and scalable. AWS has let me get a website up and running in a couple hours and also let me seamlessly scale certain websites as traffic grew. For instance, I built a website for a family friend who published a book that began to get traction in bookstores and on Amazon. As the book surged in popularity–causing higher traffic–the server was able to scale up to better-performing hardware that could accept and handle more concurrent requests–seamlessly.

After my experience with AWS and Google Cloud Platform, I look at the alternatives to the public cloud and shudder. I don’t have time to set up an entire compute environment! I don’t have space to run a personal server nor the time to ensure it’s running healthy nor the ability to troubleshoot when something bad happens. All of that is taken care of in the public cloud.

“A company like Amazon has so many engineers focused on these services—so many people watching for potential problems. It has already spent a decade building this thing.” – Cade Metz

Sometimes I just have a small bit of code that I need to run quickly on a distributed system, like when I used GCP to render a 4K iPhone animation I made for a class or watched my friend run a massive neural network seamlessly through a remote connection. The cloud is empowering users to do and make things like never before.

This summer, I worked on the Google Cloud Platform team and saw firsthand the intricacy of the service. I was fascinated that we contractually obligated ourselves to 99.999% global uptime on some products and constantly planned for failure in order to prevent it. During training, one speaker asked, “Have any of you ever experienced Google being down?” As I racked my brain to remember a time, I looked around to find that none of the 400 interns in the room had raised a hand. The truth is, companies like Snapchat, Spotify, and Uber use Google Cloud Platform because they trust the uptime guarantees and understand the remarkable benefits of using the Cloud. How else would Pokemon Go have scaled so fast if not for GCP global load balancers? Banks, too, entrust sensitive data to the platform because they know Google will keep it secure. Even Google runs some of its services on Google Cloud Platform.
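For a sense of what a 99.999% (“five nines”) guarantee actually promises, the downtime budget it implies is simple arithmetic–here sketched in Python:

```python
# Downtime budget implied by an availability target, in minutes per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes (ignoring leap years)

def downtime_minutes_per_year(availability):
    return (1 - availability) * MINUTES_PER_YEAR

for target in [0.99, 0.999, 0.9999, 0.99999]:
    budget = downtime_minutes_per_year(target)
    print(f"{target:.5%} uptime allows {budget:8.2f} min of downtime/year")
```

Five nines works out to just over five minutes of downtime per year–hence all the planning for failure.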

What then to make of the rare cases of outages, data loss, and privacy issues? I say, thank goodness these things have happened! Incidents are always an opportunity for improvement. As Mr. Slosser mentions, not even Google or Amazon strive for perfection.

“[An outage] is an expected part of the process of innovation, and an occurrence that both development and SRE teams manage rather than fear.” – Mr. Slosser, Google

I can attest that post-mortems are part of the culture and actually help contribute to that five-9’s uptime percentage. Paradoxically, companies do have users’ interests in mind first, since users choose whether or not to pay for the service. Goldman Sachs is not going to shell out millions transitioning to the Cloud if outages and data loss are imminent. To the detractors, I say the overwhelming odds are that using these services will benefit companies and people more than not–and that’s only backed up by the success of these services.


Trade-offs are real and boldly highlighted in this class. In this case, the tradeoff between security and privacy prevails. Before I respond to the specific reflection questions, I want to emphasize a certain point made by President Obama in June of 2013 concerning NSA surveillance. He said:

“I think it’s important to recognize that you can’t have 100 percent security and also then have 100 percent privacy and zero inconvenience,” — President Obama, June 2013.

This is just another way of saying trade-offs exist. I don’t mean to oversimplify the argument or to seem not to take a stance–there are certainly extremes when it comes to overreaches in privacy or a total absence of security. However, I think it’s important to note that our society is largely the way it is because of hundreds of years of tug-of-war between values. You can’t, as the question implies, employ the logical fallacy of false dichotomy: either Apple is ethically responsible for protecting consumer privacy or it is ethically responsible for helping prevent extremist activity. Isn’t there grey area? Doesn’t this require a more complex response?

In Thomas Hobbes’ view, the relationship between individuals and the government or individuals and corporations is defined by a social contract where people make concessions in exchange for a service or social good. For instance, I agree to pay taxes if the government agrees to keep my neighborhood safe and respond in cases of emergency. Additionally, I give my email and location information to a service to find me better food recommendations. That’s not 100% privacy, it’s likely not 100% security, and it may have both convenient and inconvenient factors. The question then has to do with relative importance between values–which understandably shifts from person to person.

In Brave New World, Aldous Huxley poses a question: is it better to be happy or free? A relevant corollary might be: is it better to be safe or have privacy? In my view, privacy is a fundamental human right that, if not protected, forces us into unhealthy conformity. In the late 18th century, the English philosopher Jeremy Bentham designed a prison that ensured inmate conformity using no extra weapons or locks. Known as the Panopticon, it utilized psychology to control people by instituting a single watchman with a view of all inmates–only, the inmates could never know if they were being watched. This gave the impression that one was constantly being watched and subsequently led to obedience and conformity. Such a concept became the new societal weapon of control in western civilization. No longer was brute force necessary to keep crowds of people at bay–the illusion of surveillance is enough.

Our behavior changes when we think we’re being watched, and not because we’re doing anything wrong. Why do some people sing only in the shower? Why might someone password-protect a diary? Perhaps people need that freedom of privacy to accomplish great things. By requesting a backdoor to the iPhone, the FBI was teetering on the edge of a slippery slope with our privacy–how can we be certain the software won’t be used for more than it was originally intended? In short, we can’t. History has shown that there’s no such thing as just, as the question states, “a little more privacy.” There is more benefit to society than detriment to it by safeguarding our freedom to privacy.