Reading 04: Diversity in Technology

The lack of diversity in technology is a real and severe problem. The NCWIT article makes a compelling argument that the lack of gender diversity specifically is not a result of biological differences and that discussing it as such gives undue legitimacy to the argument. Any perceived difference in the ability of men and women or of people of different races is systemic and cultural – not a result of biology. Last week an article from the BBC described the slower pace of business in Rome compared to that of American cities, and a classmate shared an experience with this phenomenon. No one is discussing how Italians are biologically wired to work less than Americans, and if someone were to make this argument, he or she would rightfully be dismissed as racist. The real obstacles to women and minorities in technology are those described in this week’s articles: a relative lack of resources and encouragement from a young age, differing societal pressures and expectations, toxic work and learning cultures, and people like James Damore at Google making excuses for it all.

The lack of diversity in technology is a problem. I can understand why someone might think it is okay for women or minorities to disproportionately pursue other careers if that is what they are passionate about, but today we are all stakeholders in the tech industry. Software is eating the world – and almost all of it is being developed by a bunch of white guys on the west coast. Everything from our means of transportation to our personal relationships is touched and shaped by services such as Uber and Facebook. If the teams behind these products do not reflect the diverse makeup of their users, then the products will inevitably, at best, have blind spots for those users’ experiences and, at worst, perpetuate the toxic and exclusive cultures from which they come.

I don’t know what we can do to help fix this problem. One place for companies to start would be to continue pursuing programs that encourage diversity and to fire employees for harassment – something Uber failed to do, as described in Susan Fowler’s blog post. The structural and societal challenges are harder to address. I hope solutions to these will come, but if they are coming, they are a long way off. It’s going to take a lot more women and minorities in positions of power and a fundamental shift in culture the world over to fix these problems, and that is not going to happen overnight.

One thing we cannot do is give undue attention to the people who seek to perpetuate the white-male-dominated culture. It may seem counter-intuitive, but a tolerant culture cannot tolerate intolerance. I think Google made the right move by firing James Damore. To have done anything else would have been to implicitly support his message. I don’t doubt that his heart might have been in the right place, but by appealing to gender stereotypes and justifying those stereotypes with biology, he had a part in reinforcing an unwelcoming culture that should have no place anywhere. If we are going to have open, honest discussions about diversity in technology, they need to be rooted in a basic understanding that everyone deserves respect and owes respect to everyone else.

Reading 03: Work, Life, Balance

I, just like everyone else, want it all. There is nothing I would like more than a fulfilling job that allows me the time and means to live my life outside the office and eventually (a long while from now) have children and spend time with them. This is what work-life balance means to me – work happens at work, life happens everywhere else. Problems occur when work seeps into life and ruins both. As mentioned in several of this week’s articles, when we work non-stop we become less productive and less happy. When work takes the place of life, we burn out and can’t do either as well as we otherwise could.

I have dealt with minor cases of burnout throughout my university career. It is often necessary to work practically non-stop to keep up with classes and extracurriculars. Whenever I start to feel that the workload is too much, I tend to take more breaks. I’ve found running, naps, and meditation to be the best activities for getting myself back to a mental state where I can give my whole energy to a problem. I am especially a fan of sleeping. Not only is my mind clearer when I am well rested, but I also feel happier and healthier in general. Of course, regular exercise also helps with this, and meditation is like a nap in how it can refresh you by clearing your head for a bit. If we all could make time to take more intentional, quality breaks, we would be more productive and feel better.

Companies can promote balance and, I would think, boost productivity by encouraging regular breaks from work and by providing facilities and resources to take intentional, quality breaks. It’s a Silicon Valley stereotype to have the ping-pong tables, free food, and comfortable break rooms, but I think this is one place where the tech companies are getting it right. Where they seem to be lacking is in providing the proper means and encouragement to take breaks and work a healthy number of hours during the week. I don’t know if companies are ethically obligated to make sure their employees aren’t spending their whole lives at the office (unless those employees do not have the mobility to find another job that would not require this), but as Amanda Ruggeri suggests in The Compelling Case for Working a Lot Less, I imagine they would get a greater degree of productivity from their employees and possibly a greater degree of loyalty to the company. I think one of the best things companies can do to prevent burnout or overworking is to have executives and managers who set the tone by working a healthy number of hours when possible, thereby encouraging their coworkers to do the same.

I want a perfect work-life balance. I don’t know if such a thing exists, and if it does, I doubt I’ll find it anytime soon. I do, however, hope that a strong work ethic tempered by a healthy perspective will help me operate at maximum productivity and still have the energy to live when I leave the office for the night.

Reading 02: Moving On

I am very excited about the possibility of switching jobs every few years. Sure, stability is nice, but the idea that within the next couple of years I might choose a single company and work there for the next forty to fifty years of my life is terrifying. This might just be because I am by nature extremely indecisive, but I also think there are a number of cases for switching jobs presented in this week’s articles that I find very appealing. The most important for me, touched on in Vivian Giang’s You Should Plan On Switching Jobs Every Three Years For The Rest Of Your Life, is the idea of being able to learn new things. I love learning, and the idea of working in a job where I have nothing to learn seems extremely dull. Besides this, these articles and others in recent news suggest that upward mobility within a company is increasingly rare in modern industry. When I am a few years into my career, I want to be able to move into positions that allow me to have greater direction over my work so I may continue to work on projects that I find interesting and worthwhile. For these reasons, as well as the evidence showing that employees who switch jobs make more money and are better able to recover from layoffs, I do see myself moving from job to job if I start to find myself stagnating in a certain role.

I do not believe in company loyalty. It would be great if employees could expect loyalty from their employers and vice versa, but at the end of the day a company exists to make money – not to take care of its employees. That is not to say there are no companies out there which are loyal to their employees and earn loyalty in return. I would love to work for such a company, but I do not believe that is how most companies operate. I certainly do not think artificially mandating this loyalty through non-compete clauses is acceptable. As explained in Conor Dougherty’s How Noncompete Clauses Keep Workers Locked In, these agreements shackle employees to their employers by barring them from working at any other company in their field of work. Though I can see legitimate reasons for this (protecting trade secrets, for example), it is a clearly abusive practice that could only be enforced fairly in the most specific of circumstances. The non-disclosure agreement is a much fairer way to protect proprietary secrets. By limiting the scope of the information and what can be done with it, the NDA seems to do a better job of protecting the rights of the employee as well as the company’s information.

Is job-hopping ethical? I think so. As long as someone is honest with their employer, gives their best performance while employed, and does not go on to harm that company, they have fulfilled their end of the contract and should be free to move on to a new opportunity. Ultimately, it should be the responsibility of the company to create a culture that hires, retains, and promotes its best employees.

Reading 01: Computer Engineering, Arts, and Sciences

Is computer science an art, engineering, or science discipline? I can see the case for all three, but in my opinion it is best described as either art or engineering. I do not see computer science as a science at all. Science is about studying naturally occurring phenomena in the universe. There is nothing natural about computers. As Jeff Atwood writes in Bridges, Software Engineering, and God, “God didn’t invent x86”. Atwood uses this in his case against software engineering being a form of engineering, but I think it is much more relevant to the case against computer science being a science. Science, even more so than engineering, is about studying “God’s rules”. Even the social sciences are about studying naturally occurring phenomena in human behavior rather than “whatever some random bunch of guys thought was a good idea in the early 1980’s”.

I do think, however, that computer science/software engineering can be considered an engineering discipline in certain contexts. Atwood seems to argue that this is not true because the field is young, but every discipline has to start somewhere. I am sure that humanity’s first bridges were much less reliable than today’s and subject to much less oversight and regulation. Ian Bogost argues in Programmers: Stop Calling Yourselves Engineers that software engineering is not engineering because it is not subject to the same regulation and certification systems as other engineering disciplines. This is not a fair argument because it is not the regulation that makes engineering engineering. The medical profession is also subject to a great deal of regulation and certification, but we do not call our doctors engineers. Instead, engineering at its heart seems to be about creating quality and lasting solutions to technical problems. This sounds a lot like software engineering to me. Bogost does make a good point that software engineering does not have the same reputation for reliability that other engineering professions have. I think this is certainly a place where we can learn from the other fields. A greater degree of oversight and standardized methods would be highly beneficial to applications that exist for the public good, affect a large number of users, or handle personal information. There are, however, many applications that do not require this sort of oversight. A video game or takeout app is a far cry from a public utility. No one is going to suffer when their Netflix app crashes.

Computer science is best described as an art. This is because the process of software development is much like the process of creating art (when it is not like the process of engineering). Paul Graham succinctly describes this relationship when he compares coding and sketching in his article Hackers and Painters. As with creating art, there is often not a set procedure for creating software. It is more about putting something on the paper (or in the text editor, as the case may be) and building up the final work. Like artists, coders are able to work alone and have a great deal of authorship over their work. They are also not necessarily bound to a certain institution to do their work. It is, of course, very common for coders to work on large projects with large teams for large companies, but I believe these cases are where computer science is best considered engineering and should be regulated like more mature engineering professions, as Bogost argues.

Reading 00: Right and Wrong

The ethical frameworks that align most closely with my decision-making are the Consequentialist and Virtue frameworks discussed in the Brown University article. Generally, when determining whether an action is right or wrong, I consider first the potential consequences of the action (the Consequentialist framework) and then what action I would take if I were the sort of person I would like to be (the Virtue framework). The Duty framework is not something that comes as naturally to me as the others do, but I can certainly see how considering a rigid set of morally right actions may be useful in more ambiguous situations. All that being said, I cannot say that I often spend much time considering the morality of my professional actions. I do spend slightly more time considering the morality of my personal actions – especially the morality of not acting. There are clearly many problems in the world: poverty, crime, climate change, inequality, and so on, and I am often troubled by the fact that I rarely do anything that brings good into the world, let alone addresses the evil. With this in mind, I believe that not actively doing good is “impermissible” as defined in the Brown article.

I do believe that there is a universal right and wrong, and as a Catholic I believe this is determined by the will of God. The problem with this is, of course, that the will of God is very much open to interpretation, so I tend to consider actions right when they respect human life and dignity first, the well-being of the earth and all its other creatures second, and everything else (material possessions, for example) third. Actions and refusals to act are wrong, in my opinion, when they place something of lesser inherent value above something of greater value. In this case, it is both the intention of the action and its consequences that make the action wrong. For example, I would consider an act like pollution to be wrong because it values monetary profit over the well-being of the environment (and consequently human well-being).

I believe this method of determining right and wrong is very much applicable to the field of computing. There are many computer applications that do not adequately respect the rights and value of the human person – instead, they prey on human insecurities and weaknesses for profit. I am sure most of these programs were not maliciously designed, as Samuel C. Florman writes in Engineering Ethics: The Conversation without End, but rather designed ignorantly, without proper consideration of the consequences of the product. I would argue that this, much like not actively doing good, is wrong, and, as Florman argues, is perhaps a more important concern than malicious engineering. The solution to this problem seems to be thoughtful engineering that utilizes the Consequentialist framework in particular. If we spend enough time considering what negative impacts our work might have, we may be able to prevent them through thoughtful design.

Reading 00: About Me

My name is Jeff Klouda. I’m from Lisle, Illinois (“the Arboretum Village”), and I am a senior studying Computer Engineering and Visual Communication Design at Notre Dame. I was drawn to Computer Engineering because I have always been interested in technology and math and Visual Communication Design because I have always loved working with visual arts. These two majors are each, at their core, about making, and that is why I chose to pursue both – because I am passionate about making beautiful, useful artifacts and applications. I have absolutely no idea what I want to do professionally after graduation (which will be in 2020 because the Reilly Dual Degree Program takes five years to complete), but I would love to be in a creative role in which I can utilize my skills as both an engineer and a designer.

When I am not working on classwork I enjoy playing the guitar, listening to music, playing board games and video games, sketching, hiking, running, and reading. I also enjoy spending time keeping up-to-date with the latest in technology and graphic design and learning new tools and techniques. I am particularly interested in the fields of computer graphics, robotics, animation, and typography. I’ve spent the last three summers working for Elite Electronic Engineering in Downers Grove, IL. Elite is an electrical engineering test lab, and as an intern I’ve worked in their environmental, EMC, and wireless departments.

In my opinion, the most pressing ethical and moral issues currently facing computer scientists and engineers are those concerning privacy, attention economies, and artificial intelligence. These are all issues that have gotten a great deal of attention in popular media, and I hope we will be able to spend some time discussing each of them in class this semester.

Everyone should have a right to privacy, even when using the internet. This is obviously more complicated when we have to consider the problems of public safety, but the current practices of mass surveillance, widespread buying and selling of personal data, and absurdly targeted advertising are completely unacceptable. I hope we get the chance to discuss the value of a person’s privacy and methods of collecting and using data that respect that value.

Jonathon Harris addresses the problem of attention economies, as well as the related issue of software engineers becoming social engineers, in his Modern Medicine article. Products designed to keep users engaged through addictive features exploit those users by wasting their time and potentially harming their well-being, all for the sake of selling their attention or personal data. I would very much like to have a discussion about creating technology that respects the well-being of its users.

Finally, artificial intelligence has, in my opinion, the potential to be one of the most dangerous fields in computing. I am not so concerned about killer robots and the like (yet), but I believe we should think very seriously about what types of decisions machines should be making and how we can guarantee those decisions are sound and fair, especially as AI becomes more prevalent in fields such as healthcare, weapon development, and transportation.