Writing 09: Intellectual Property, the overly protective parent of today’s software industry

The last time I watched Shark Tank, I remember the five hosts shooting down an excellent pitch because the entrepreneur failed in one, and only one, regard: he didn’t have a patent for his invention. To these celebrity investors, patents are worth their weight in gold, and the bold entrepreneur from that week’s episode must have had some nerve to pitch his idea without first securing a patent.

Generally, the pitches on this show don’t deal with software concepts, but it made me think about the true benefits of patents, trademarks, and other kinds of intellectual property. For many industries, I’m sure a patent is (and will remain) a prerequisite for bringing new ideas to market – but in the software industry, I don’t think this is always the case. Software and technology evolve rapidly, perhaps more so than any other industry (in part because of a lack of barriers to entry, such as professional certifications and legal oversight), and many new ideas are simply iterations of a recent invention (consider Apple’s “revolutionary” graphical user interface on its first Mac computers – an idea taken directly from Xerox).

With this rapid iteration of software, then, intellectual property can inhibit innovation in one of two ways. First, the lengthy patent process introduces an obstacle to pushing code to production (if a company wishes to patent a new operating system design, such as the Xerox GUI that Apple adopted in the 1980s, it must invest time and money into the patent process and await a final determination from the patent office). Second, and perhaps even more problematic, a clever legal team could acquire intellectual property rights to a similar (but previously unpatented) idea and block the release of a competitor’s product. In this situation, the “patent troll” uses intellectual property not to protect new ideas but as a method of blocking iterative development.

As an alternative, the software development industry could embrace non-proprietary software distribution to a greater extent – either through free software licenses or open-source development. In fact, I think the prevalence and widespread adoption of open-source software in recent years suggests proprietary licenses are not, and do not have to be, the only solution – rather, society can and will continue to innovate without the incentives and protections provided by intellectual property. While proponents of proprietary technology might point to incidents like the Heartbleed bug (in which several vulnerable OpenSSL releases allowed attackers to intercept sensitive web traffic) as evidence for the necessity of intellectual property, proprietary technologies suffer from the same sorts of issues – patents, trademarks, and other types of intellectual property don’t ensure products are any more stable than open-source projects.

The software industry is drastically different from many other industries given its reliance on iterative development in a largely unregulated environment; in such a rapidly changing industry, I think patents and other forms of intellectual property actually do more harm than good. We’re seeing companies embrace open-source development to a greater extent than ever before (Microsoft’s purchase of GitHub several years back is a great example), perhaps recognizing that other revenue streams exist in a successful software company – revenue streams that actively encourage innovation not by stifling the competition with intellectual property rights, but by drawing on a larger network of ideas through open-source development to continuously innovate and iterate. Here’s to hoping this trend continues.

Writing 08: Embrace – don’t fear – artificial intelligence

Artificial intelligence has emerged as a potent yet hotly debated technology in recent years due to both its capabilities and its potential to redefine how many industries operate. But like any new technology, artificial intelligence lends itself to controversy – many argue its usage and development should be regulated or even banned due to its high potential for misuse and bias, and for its economic impact on manual industries, where automation can shrink the labor force and eliminate jobs.

In my mind, it’s easiest to argue against banning artificial intelligence on the basis that it eliminates jobs through automation – not because the claim is untrue, but because I think the argument itself is a bit of a red herring. Throughout history, and especially during each industrial revolution, society has embraced new technologies to improve efficiency and automation. The rise of artificial intelligence is no different. Many industries, particularly those that rely on manual, repetitive efforts, will undergo transformations and specific positions will be eliminated – a trend already evident in manufacturing and retail, and developing in the trucking industry. But with this transformation come new jobs that demand new skills, and our education systems will adapt to promote these new, highly desired skills (we already see this happening as some states begin to require computer science in public high schools). And so, while I agree advances in AI will cut jobs and possibly eliminate entire industries, they will also open new doors and create demand for new, technical skills. This is how society enacts progress, and I believe the rise of AI will be no different, economically, than the industrial revolutions we have experienced in the past.

A stronger argument against artificial intelligence comes from its ability to infringe upon a concept often thought to be reserved exclusively for humankind: intelligence. These arguments often reference AlphaGo and Deep Blue, two AI systems developed to defeat humans in Go and chess, games that require logic and intelligence. One of this week’s readings suggests AlphaGo’s 2016 victory over a South Korean expert is particularly influential because it demonstrates “a way of bottling something very much like intuitive sense” (“Is AlphaGo really such a big deal?”, Quanta Magazine). I would argue, however, that AlphaGo doesn’t truly mimic human thought and reasoning but rather takes advantage of a computer’s ability to compute probabilities – which boils down to processing large amounts of data from past games with computational power no human can hope to match. The neural network described in the article may be much more complex than IBM’s Deep Blue system, but it’s still a mathematical model based on data and patterns – it can’t make cognitive decisions without this massive network or weigh the pros and cons of an unfamiliar situation in the same manner as a human.

Could an AI system ever pass the Turing Test undisputed and be thus considered equal in mind to a person? It’s possible, but I believe those days are far off. While computers are excellent at processing enormous amounts of data more quickly than any human, humans possess certain levels of emotional intelligence that are far beyond any computer’s capability to replicate in this era. It goes without saying that society must carefully consider the potential dangers of artificial intelligence, but these risks come with any new technology – and fear of the unknown is an insufficient reason to outright ban artificial intelligence and related technologies.

Writing 07: The dangers of online censorship and repealing net neutrality

Information has been shared through many avenues over the course of history, changing dramatically in each new era as access to technology improves. In the 21st century, information of nearly every kind is shared through the internet, which creates a new host of challenges with regard to access, regulation, and censorship. Throughout American history, at least, access to information has long been treated as a given right – the First Amendment alone guarantees freedom of speech and of the press, among other privileges. The basic right to access information via the internet isn’t a new or even particularly contentious concept, but the ability to access information in a split second, across international borders, makes it difficult to distinguish between reasonable regulation of hate speech and outright censorship. With these challenges in mind, I believe censorship must be avoided at all costs, while conceding that extreme cases that promote hate speech or inspire violence are among those where regulation is necessary.

In some countries – perhaps most notably, China – access to information via the internet can be limited according to a government’s political and social agenda (Business Insider, “Tech companies censoring content for China”), a practice that suppresses diverse viewpoints and thus progressive social growth. It’s one thing to limit information with the potential to encourage violence or terrorism (which I agree should be suppressed on the grounds that hate speech infringes on basic human rights, such as life and personal safety), but a much finer line should be drawn around information whose purpose is to criticize or spur political change. Suppressing web access to this kind of critical information, a common practice of the Chinese government and other regulatory bodies, is dangerous in the sense that these bodies cannot as easily be held accountable by their constituents.

The dangers I see in the FCC’s recent decision to repeal net neutrality relate strongly to this issue; while proponents argue repealing net neutrality encourages innovation and creates opportunities for new business models, it gives companies – particularly internet service providers – greater opportunity to censor information and promote a specific political or social agenda. This, in my mind, is a danger that immensely outweighs any potential benefits to digital innovation (for instance, the New York Times article “Court upholds net neutrality repeal,” which notes the Trump administration views the repeal “as a victory for consumers,” fails to consider the associated dangers arising from deregulation). Consider a hypothetical internet service provider whose leaders want to promote a specific political candidate in the next general election; in a net-neutral environment, this ISP cannot limit access to a media site that publishes negative articles about the candidate. While various media sites may choose to promote or undermine this candidate on their own, internet users have free and equal access to opinions on both sides – which is not necessarily the case in the absence of net neutrality regulations (ISPs could begin charging more to access sites with opposing political views, for example). The result is greater power placed in the hands of internet providers, who are (at least in the United States) private companies with little accountability to the general public, creating greater opportunity for discriminatory access to information.

Access to information has long been a right, and though the internet has inspired new regulatory challenges, access to the internet should largely remain free and equal. Both technology companies (ISPs and independent sites) and governments have a responsibility to regulate extreme cases (including hate speech and misinformation), and further regulations to prevent censorship – like net neutrality – only serve to promote this goal of free and equal access to information.

Writing 06: If corporations are people, let’s start acting like it

Businesses operate to earn money. That’s no secret – Microsoft, Google, and Facebook all operate under different business models that sell products and services in return for money. No one faults them for trying to do so, but issues arise when these same corporations – who are granted many of the same legal freedoms as individuals – are not held to the same ethical or moral standards as individuals.

I don’t disagree that corporations deserve many of the same legal freedoms as individuals, for a strong argument can be made that these corporations are indeed – legally speaking – “people.” Any given corporation is, intrinsically, operated by groups of individuals, and it can influence the press and the economy just as any individual person can. A Consumerist article detailing some of these specific legal rights, including rights under the First, Fourth, Fifth (limited), Sixth, and Eighth Amendments (“How Corporations Got the Same Rights As People”), makes an excellent point about the double standard that arises from granting corporations these legal rights while failing to hold them accountable for potentially immoral decisions. And there are countless cases – from the Enron scandal of the late 1990s and early 2000s to Microsoft’s antitrust lawsuits and, more recently, McKinsey’s consulting arrangements with foreign, anti-democratic groups – in which corporations engage in business operations that are, at best, morally grey or, at worst, morally bankrupt.

Consider IBM’s controversial involvement with the Nazi regime in the 1930s and 1940s, in which a strong business relationship between the Nazis and IBM – evidenced by IBM’s CEO regularly dining with Adolf Hitler (Mic.com, “The Hidden Nazi History”) – created a profitable opportunity at the expense of millions of Jews and other victims of the regime. The strongest penalty levied upon IBM came strictly in the court of public opinion, and as a result, to this day corporations must decide for themselves whether the moral consequences of a business decision are worth sacrificing their bottom line. Unfortunately, lacking regulation and precedent, many corporations choose poorly.

The ultimate question, of course, is what can be done. Corporations may be people in a legal sense, but they lack a heart and mind of their own and instead rely on the direction of their leaders. So why, then, should we not hold these leaders accountable for immoral actions taken on behalf of their corporations? Various governmental bodies already have power over corporations, both in the U.S. and internationally: the SEC can levy fines against public companies for fraudulent financial reporting practices and remove CEOs when deemed necessary, and the EU has taken stronger measures to enforce antitrust laws against Google and other international tech giants in recent years. These measures are a step in the right direction, but more can be done to hold individuals accountable as well. Immoral individual actions – theft, sexual abuse, fraud – are already punishable with prison time and heavy fines, at least ideally in proportion to their consequences (that is, an individual’s tax fraud is punished with greater fines and a longer jail sentence if the fraudulent behavior involves larger sums of money or a longer period of time). So why should corporations, run by individuals, be held to any different standard?

Writing 05: Big Brother is Watching

I have read few novels that conjure fear like George Orwell’s 1984, in which a totalitarian government exerts complete control over its citizens’ public and private lives. And for good reason! Orwell’s depiction of a futuristic dystopia revolves around a world where security and unquestioned authority are prioritized over privacy and personal freedoms. Orwell’s novel is meant to scare us; it’s meant to warn existing democracies of the dangers associated with an overly powerful government. But 1984 doesn’t warn us of the consequences of the other extreme, where privacy is the only true virtue and there exists no provision for national security. And make no mistake: security and privacy must be held in balance for any society to function.

High-profile cases like Apple’s fight against the FBI following the 2015 San Bernardino shooting highlight the tension between two ideals that can seem contradictory: security and privacy. While the FBI’s motivation for its iOS “backdoor” request was valid, the request represented a short-term priority at the expense of long-term consequences, and Apple was correct to refuse it.

In any society – but especially in a democracy – individual privacy must be protected to an extent. However, any society presupposes some social structure and a governing body whose wide-ranging responsibilities include security. From this broad perspective, whistleblowers like Edward Snowden fundamentally undermine social stability despite their arguably moral motivations, and while individual privacy is and must remain a priority, it cannot come at the expense of a government’s ability to protect and provide for its citizens.

If Apple vs. the FBI is an example of when to prioritize individual privacy over national security, Snowden’s case is perhaps an example to the contrary, where his leaks had damning consequences for national security – far exceeding the benefits of revealing the extent of the government’s surveillance programs. By exposing highly classified information about how and why the NSA monitors electronic communications, Snowden compromised these very systems and therefore undermined significant counterterrorism efforts. The morally controversial nature of these programs is certainly worth discussing, but Snowden alone made the choice to reveal information with which he was entrusted, and national security cannot be ensured if insiders act alone to undermine classified efforts. While critics might suggest my positions on Apple vs. the FBI and Snowden vs. the NSA are contradictory, I believe they’re entirely consistent: in both cases, the correct course of action minimizes long-term consequences. Refusing to create an iOS backdoor prioritized long-term privacy over short-term national security, and in a similar manner, I view Snowden’s actions as detrimental to long-term national security efforts.

The public’s prerogative to prevent such abuses of power comes through the traditional strengths of a democracy: that is, citizens wield the power to elect and replace their public representatives if they believe their government has overstepped its own responsibilities to defend national security. Thus, the success of a democracy relies on these ideals of privacy and security being held in balance, and concerns like those expressed in Orwell’s 1984 are alleviated as long as a democratic electoral process exists to check any potential abuses of power.

Writing 04: Are Whistleblowers Heroes – or Villains?

Given recent political events – especially last week’s anonymous whistleblower complaint exposing a phone call between our president and Ukraine – it seems pretty relevant to consider the moral and ethical concerns of whistleblowing, or refusing to do so.

There are a lot of people who look at the examples set by Chelsea Manning and Edward Snowden and (rightfully) see their actions as breaches of trust and even potentially as national security concerns. This week’s NPR article, which mentions Manning released over “750,000 classified documents that contained military and diplomatic dispatches,” doesn’t take a position on the morality of her actions – but it’s not hard to make the case that releasing military documents is indeed a national security concern; with this information, our enemies can better predict the movements and capabilities of our armed forces, potentially putting service members at risk. No one told Manning to release this information, but she recognized these documents contained examples of our government and military abusing their powers and possibly even committing war crimes. And so, judging that the government’s cover-ups were a greater moral concern than exposing highly sensitive information about our military, Manning chose to release massive amounts of classified information. Many people vilify her for doing so; personally, even though I recognize how Manning’s actions helped keep our government accountable, I struggle to look past the national security consequences that arose from her breach of trust.

While Manning’s case is a bit trickier because her actions had such significant, immediate consequences, it’s worth considering high-profile engineering disasters like the Challenger explosion as examples where an engineer with inside knowledge – a potential whistleblower – could have made internal concerns public and possibly prevented tragedy. And the related concern here isn’t whether exposing highly classified information is an acceptable trade-off for exposing secret and immoral actions, but whether there’s actually a moral responsibility to expose information when the potential benefits are much less obvious. This week’s Vice article on the Challenger disaster notes how NASA engineers and managers were aware of potential O-ring problems “as early as 1977,” approximately nine years before the fatal Challenger explosion, and mentions that one engineer, Allan McDonald, refused to sign launch papers as he normally would. In this situation, I can’t help but wonder – what if McDonald had made his concerns public? He had direct knowledge of the situation, backed up by years of scientific tests and data, and simply exposing the O-ring erosion concerns to government officials or the news media would have significantly increased pressure on NASA to delay the launch. The political repercussions of either of these drastic actions would have been massive and likely detrimental to McDonald’s career. But they almost certainly would have saved seven lives and millions of dollars.

Ultimately, I don’t think there’s an easy answer. Whether a whistleblower is a hero or a villain is highly circumstantial – if McDonald had gone public with his concerns before the Challenger launch, very few would have recognized that he averted a potential disaster, and he quite possibly would have been vilified by NASA for exposing sensitive information. But at the very least, whistleblowers deserve complete protection while a related investigation is under way; should an investigation determine the whistleblower exposed a significant concern that could threaten lives, they should be legally protected so as not to discourage others in similar situations. In this sense, I view whistleblowing as a natural protection against corruption and immoral actions, and I believe potential whistleblowers do have a measure of responsibility to check the immoral actions of an organization.

Writing 03: Diversity and equality aren’t always mutually attainable (and that’s okay)

I’ll start by playing devil’s advocate for a bit, but first I’d like to make a couple of points clear:

  • I think diversity is critically important – especially in a field like computer science where creativity is such a valuable asset.
  • This isn’t restricted to a moral issue or an economic issue – it’s both. Giving preference to any one individual or group purely because of demographics is discriminatory, but beyond that, it’s also a poor economic decision. Businesses grow and innovate faster with a diverse set of ideas from which to draw.

Last fall was a pretty high-pressure recruiting season; I’d just finished a software development internship the previous summer but knew I didn’t want to return, so I started applying elsewhere. There was one company in particular I was interested in – I won’t call them out here, but they recruit heavily on campus – and after attending their info sessions, visiting their networking events, and talking with current employees, I was disappointed to hear they wouldn’t even interview me. I was even more surprised when one of their recruiters gave the following explanation: “We were looking for more diversity.”

My immediate thought: that’s not fair. I was put at a disadvantage, if not outright eliminated from consideration, because this company thought my demographics were already too heavily represented. Further reflection helped me recognize that – while I still don’t think it was a fair situation – underrepresented groups are playing against much worse odds. Many of my peers in computer science who are female, black, or otherwise underrepresented struggle to find role models within the industry because too few exist in the first place. They’ve overcome obstacles I never had to face, just to be in the same position. I eventually received other opportunities, so I wasn’t too torn up over the decision, though I was certainly surprised. But the experience taught me a lot about diversity, equality, and tolerance – and it helped me recognize some of the unfair advantages I’ve benefited from in the past.

This is where it’s critical to acknowledge that diversity and equality often cannot be achieved simultaneously. Consider the following thought experiment: women are disproportionately underrepresented in computer science. Company A believes it will ultimately possess a competitive advantage with a more diverse, innovative set of ideas, so it encourages more women to apply for its open software engineering positions and aims to eventually employ an even ratio of men and women. While Company A’s ultimate goal is both diversity and equality, hiring women at a faster rate to improve gender diversity inevitably puts men at a disadvantage compared to the advantage they previously possessed. In a perfect scenario, Company A eventually achieves a balanced gender ratio among its employees, at which point it can return to simply hiring the best possible candidate – a truly meritocratic and equitable system.

None of this is to say we can’t (or shouldn’t) work to improve both diversity and equality, however, and this is where the concept of tolerance is highly relevant. Consider any field with a highly visible lack of diversity (computer science is a great example). The over-represented group must acknowledge and tolerate that there might be situations where they are at a competitive disadvantage due to diversity improvement efforts (while also recognizing the many situations where they, too, have had significant advantages due to their demographics). Likewise, the underrepresented group should acknowledge and tolerate some opportunities might be made available to them, specifically, in efforts to improve diversity.

Many controversial situations related to diversity in the tech industry seem to arise from a lack of tolerance. A 2017 internal memo that circulated around Google is a prime example: the male author, who claims “biological differences” account for the gender gap in technology and leadership positions (Gizmodo article from Reading 04), appears to lack tolerance for some of the struggles – or successes – of his female peers at Google. In my own aforementioned story, it’s not unreasonable to claim my demographics – a white, suburban male – put me at a competitive disadvantage in the eyes of a company aiming to strengthen its diversity. But that’s okay – because these same demographics have given me numerous advantages I certainly didn’t deserve to get to where I am. Promoting diversity in STEM-related fields, particularly in computing and technology, often comes at the expense of equality, and once more: that’s okay. In a highly paradoxical manner, sacrificing equality for a traditionally well-represented demographic is the only way to achieve diversity and thus a true equality of people and ideas.

Writing 02: Is meritocratic hiring a realistic goal?

I feel like I’ve experienced a somewhat unusual career discernment and hiring process through the last three years. When I began studying computer engineering, I wanted to pursue a career in software development and so I learned object-oriented programming, bought a copy of “Cracking the Coding Interview,” and completed mock interviews with upperclassmen and recent graduates. I was thrilled to receive a software development internship after my sophomore year – but that summer, I realized my favorite part of the experience was communicating technical results to our team and learning how to integrate technology into everyday business operations. When I began my junior year, then, I pivoted my career aspirations toward technical consulting.

Notre Dame’s computer science and engineering program does an uncommonly good job of preparing students for careers in technical consulting, largely due to an emphasis on communication skills and professionalism; unlike computer science students at many other schools, I haven’t taken a significant number of electives in any one area and thus haven’t specialized in robotics, or artificial intelligence, or cybersecurity – but I have taken classes in presentational speaking, entrepreneurship, philosophy, and economics. This sort of broad, business-focused background lends itself well to the consulting industry and differentiates Notre Dame’s students by teaching us to analyze and communicate technical results to a greater extent than students from many other universities. In this target industry – technical consulting – I believe potential employers greatly value these additional skills; I would not have felt prepared for case interviews and the general hiring process with a background strictly focused on technology and programming.

Though I was happy to pursue my interest in tech consulting, I had to learn to navigate a completely different hiring process. Whereas technology companies swear by the coding interview, my interviews with EY, PwC, and Deloitte included both behavioral and case interviews. The case interviews, in particular, were a foreign challenge – but over time I realized they provide a more accurate representation of a candidate’s ability than the coding interviews I’d practiced in years prior. Coding interviews often require the candidate to arrive at one specific solution (e.g., use a particular data structure to solve the problem with optimal time complexity). One of this week’s readings, “Hiring is Broken,” mentions the “artificial” nature of these coding challenges and argues the method is broken because it relies on “flashes of inspiration” to solve an arbitrary – and often unrealistic – coding challenge. I couldn’t agree more. By contrast, case interviews generally have no right answer: the interviewer wants to test your business logic, your thought process, and your ability to balance conflicting priorities. In this way, I think the case interview – a key component of the hiring process for consultants – solves a significant problem in the software development hiring process.

This isn’t to say consultants are always hired in a predictable or meritocratic manner, however. Over the past year, I’ve realized that the technical consulting industry relies heavily – perhaps too heavily – on networking and connections as a part of its hiring process. At some firms, it’s difficult to even receive an interview without a referral or inside connection. At one consulting firm I considered last year, I went through four rounds of interviews before ultimately not receiving an offer, while a classmate (whose dad is a partner in the firm) received an offer after a single case interview (I’ve benefited from connections, too, having recently received an interview with a consulting firm largely due to the influence of a friend’s brother – and I’ll be the first to acknowledge that neither of these situations is even remotely meritocratic).

Every industry struggles with some element of its hiring process; the “Hiring is Broken” article does an excellent job highlighting many of the hiring struggles within the software industry but offers no true solution for this or any other industry. There will always be trade-offs between conflicting hiring priorities, so we should acknowledge there is no such thing as a truly meritocratic hiring process; rather, employers should consider their own internal priorities and employ several different hiring methods throughout the process – and even more importantly, candidates shouldn’t hesitate to view a rejection as a sign of a broken process rather than a final verdict on their abilities.

Writing 01: Stereotypes, Privilege, and Identity

Computer science students, I feel, are characterized more often than students of nearly any other major. In the eyes of business kids and lab science students, computer science kids are the nerds who work for Google and Microsoft. We’re the overwhelmingly male math geeks who started coding at age 12. In fact, one of the most common “computer science” stereotypes I come across relates to social competence; that is, it seems to be a commonly held belief that computer science students are antisocial or socially awkward in some manner. After many experiences working with computer science students both from Notre Dame and from universities across the country, I’ve always been disappointed to hear this stereotype perpetuated. As with any field of study, the label perhaps applies to some individuals, but certainly not all – and I would argue not even the majority. At Notre Dame, the sheer number of computer science students who pursue careers related to business technology – like technical consulting – suggests this stereotype is unfounded, yet it continues to persist among both students and employers.

There’s another stereotype – or rather a characteristic – of both computer science students and Notre Dame students that I’d argue is more accurate: both groups are highly inquisitive by nature. For computer science students, this inquisitiveness aligns with the “hacker” mentality discussed in Paul Graham’s essay – I don’t think most Notre Dame students, or most computer science students in general, learn primarily through their formal classwork, but rather through side projects, participation in extracurriculars, and their peers (Paul Graham, “Hackers and Painters”). I embrace this aspect of the stereotypical “hacker” because it suggests I enjoy using my skills to create, whether that involves creating software or something else entirely.

While in this sense I reject some common stereotypes related to computing and technology and accept others, the larger question of my own identity as a computer science student at Notre Dame must account for privilege and the circumstances surrounding my Notre Dame experience. Attending Notre Dame is itself an enormous privilege; if I fail to recognize that fact, it becomes much more difficult to identify and reject a number of potentially detrimental stereotypes toward myself and others. I’m privileged to attend a school where major employers recruit on campus and one that invests heavily to attract excellent professors. In this regard, I, along with many of my classmates, have advantages that may not have been offered to my soon-to-be peers in the workplace; with this in mind, I should treat my first job as an opportunity to learn from my peers and never assume I know all there is to know on any given subject. I don’t think these privileges should be forsaken (that is, I believe it would be shortsighted to transfer schools just to avoid the notion of privilege), but rather openly acknowledged and used to benefit those without similar opportunities (one example might be mentoring low-income, high-achieving students who have the ability to excel in an excellent computer science program but are unfamiliar with the financial resources that would allow them to attend schools like Notre Dame). I also hope to practice humility and acknowledge that the diversity of my peers (and their experiences) will only strengthen our collaborative efforts. Above all, I hope my identity as a Notre Dame Computer Engineering graduate reflects my willingness and ability to learn from others – and whether it does depends heavily on my ability to judge myself and others accurately, including my ability to acknowledge my own weaknesses and areas of privilege.

Writing 00: Ethical Frameworks in Computing and Technology

In the past, I have often relied on a form of the “Utilitarianism” framework because it seemed the most logically sound (that is, if I can measure the utility of an action and all of its possible consequences, the correct moral choice is simply determined by an equation: benefits minus detriments). However, this framework is potentially dangerous in that there is no way to objectively measure utility, and the subjective nature of this measurement means utilitarianism can be used to justify immoral actions quite easily. This isn’t to say the other frameworks don’t present their own unique issues – they do. While I agree with the concept of a “Divine Command” theory, I don’t necessarily agree that any religious body, such as the Catholic Church, correctly translates biblical texts into moral guidelines on every occasion – even within Christianity, consider how many distinct denominations have formed as a result of contradictory interpretations of the Bible.

With those concerns in mind, I now adhere most strongly to the “Social Contract” framework, though it too is imperfect. This week’s first article on moral frameworks, which traces this approach back to “The Law Code of Hammurabi” in ancient times, differentiates this framework most strongly by emphasizing how moral guidelines are chosen – and specifically chosen by free individuals “in an initial situation of equality” (“A Framework for Making Ethical Decisions”, brown.edu). This distinction regarding choice is key, for it assumes these choices will, at times, be imperfect; however, it importantly assumes these choices are made in a free environment and reflect the collective knowledge of an entire society. Regardless of whether moral law is indeed dictated by a deity, and independent of any subjective measure of utility, a social contract system – government – provides the opportunity to discern the best possible moral code as a free and collective society.

Regarding computer science and technology, I believe a similar social contract system is equally applicable and advantageous. A strong example comes from the Association for Computing Machinery and its Code of Ethics discussed in class: the Code represents an agreement, a social contract, between the organization and its members to “avoid harm” and “be honest and trustworthy,” among other promises (Association for Computing Machinery: Code of Ethics and Professional Conduct). There is no guarantee these promises represent objectively moral behaviors, since an immoral individual could gain membership and seek to undermine the organization by promoting a different set of guidelines; however, the collective size and influence of the ACM dilutes any potential negative impact that one individual could have. (This framework does seem to fail when there is no strong consensus on an issue: if a significant minority of society disagrees with a moral guideline, it seems illogical that this entire group should be bound to the agreements made by a slightly larger majority.)

As a computer engineering student at Notre Dame, I believe I have developed a strong set of technical skills in several narrow areas, including back-end development and machine learning – and this applies to the vast majority of our class as well. We’ve learned very concrete technical skills through classes, labs, and internships, but at this point in our careers, we’ve largely been told only how to do something – how to correctly apply these technical skills in interviews, on exams, and so on. However, it’s worth considering the nature – and dangers – of these technologies, because ethical questions arise from any and all of these tools. Databases give us the power to collect, process, and distribute massive amounts of information instantly across the world, but database breaches occur frequently and often release damaging information to malicious users. These situations present ethical questions about the extent to which database developers are responsible for the security of their systems, as well as their responsibility in the event of a breach. Machine learning presents similar challenges: we can develop algorithms to predict, with increasing accuracy, the stock market, a baseball game, and the next item you’ll put in your online shopping cart, but these same algorithms can unfairly discriminate against people of color if their developers carry implicit, unexamined biases against a particular demographic. As we continue to develop these technologies, it is tempting to immediately release the latest cutting-edge software tool – but these tools can easily produce unintended, harmful consequences if they are designed without first considering a clear framework for ethics and responsibility.