Reading 01: A Danger of Hacker Culture

What is a hacker? The answer to this question has changed quite a bit in recent years. Initially, a hacker was a reclusive computer expert, likely socially inept, who could do incredible things but had a one-track mind of sorts – hacking was more or less all they did. Recently, it’s been flipped on its head: a hacker is someone skilled in the areas that can change the world, someone who’s not afraid to solve the problems they see, regardless of what the establishment thinks! Programming is a superpower!

While this looks to some like a more positive view of “hackers” (the result of those “hackers” starting to be the ones with power), I think this can be damaging in its own way. That’s always the risk when a trait like this becomes something you are rather than something that you do. Nobody says “I hack.” People say “I’m a hacker!” It’s not simply a question of skill, attitude, or any one thing. It becomes a question of how one lives one’s life.

Computer science, and the tech industry as a whole, still has many social problems. Chief among them is the issue of diversity. There has also been a recent push for imposter syndrome to be recognized and combatted. Given that this is the case, why, why are we all celebrating this image of a hacker? We’re pushing the idea that if you don’t check all these boxes, don’t write network APIs for fun, you’re not a hacker! You don’t belong! This kind of gatekeeping can only be harmful to the industry.

A startling example comes from “Hackers and Painters.” When discussing hiring practices, Paul Graham writes that “When we interviewed programmers, the main thing we cared about was what kind of software they wrote in their spare time. You can’t do anything really well unless you love it, and if you love to hack you’ll inevitably be working on projects of your own.”

What?! No! Is work/life balance a thing of the past? How does this look to an outsider?

Consider someone with an interest in the tech industry trying to figure out if it’s for them. “Hmm,” they might think, “is this for me? Am I the right fit for this kind of work?” Then, they read this quote. “Guess not, I want to raise a family!” How do people not see how destructive this could be to the culture of programming?

We should dial it back just a bit. Yes, I won’t disagree that some of the truly “disruptive” (buzzword alert!) new projects and software are more likely to come from those who spend all of their free time coding away. But we should push back against the notion that this is the only way to be a computer scientist. We should be free to pursue other things in our free time – have a hobby! Play an instrument! Raise a family! – without feeling that it makes us somehow less in the eyes of the tech world.

Reading 00: Utilitarianism, Mostly

It’s hard to explicitly articulate how exactly I decide if something is right or wrong – by nature, a lot of it will simply be the product of experience, and more nuanced than I can easily write – but I’ll do my best. In short, I think my views have a large utilitarian component, with the focus or motivation being on the society or common good that results. In terms of the reading, I’d say I lean towards the Consequentialist theories, with a mixture of the Utilitarian and Common Good approaches.

A quick disclaimer: I don’t have any formal experience with ethics, so this is neither going to be a tidy idea that sits neatly in the ethics box, nor an expression of hard and fast rules that I would stick to without qualifications. That said, I’ll do my best to express myself.

The short way that I would express my thinking is “What would happen if everyone behaved this way?” Now, this sounds like the categorical imperative: “Act only according to that maxim by which you can at the same time will that it should become a universal law.” However, I don’t mean to make statements about the universality of rules or actions like the categorical imperative does – I think that’s reductive, and any reasonable system should have more nuance.

I’m more concerned with the motivations behind these actions. I’ve always found emergent behavior from simple rules very interesting – for example, things like linked lists or cellular automata exhibit very interesting properties from a set of relatively small constraints. In the same way, I think that in even small actions, the way we give and take can add up to have large impacts on the larger communities we’re a part of. Instead of “should everyone take this action or follow this rule?”, my thinking is more along the lines of “should everyone weigh their own needs against those of others in this way?”
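As an aside, the cellular-automaton example can be made concrete in just a few lines. This is a minimal sketch (not part of any course material – the specific rule and grid size are just illustrative choices) of a one-dimensional automaton, Wolfram’s Rule 110, where each cell’s next state depends only on itself and its two neighbors, yet complex global patterns emerge:

```python
# Minimal sketch of a 1D cellular automaton (Wolfram's Rule 110):
# a tiny local rule that produces surprisingly complex global behavior.

RULE = 110  # the rule number's binary digits encode the entire update table


def step(cells):
    """Apply the rule once. Each cell's next state depends only on itself
    and its two neighbors (wrapping around at the edges)."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]


# Start from a single live cell and watch structure emerge over time.
cells = [0] * 31
cells[15] = 1
for _ in range(10):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

Each generation is computed from nothing more than an eight-entry lookup table, which is the point of the analogy: small, local choices compounding into large-scale structure.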

This is where my thinking more resembles utilitarianism. I agree with much of utilitarianism – I think that if everyone strives to have a net positive impact on the world, to leave people better off than they found them, it could make a profound difference. However, as the saying goes, “Don’t set yourself on fire to keep others warm.” People have different needs and abilities, and at times being willing to accept help or an “unfair” distribution in your favor can be as important as offering help or giving up something of yours for others’ benefit.

This way of thinking can trivially decide between right and wrong in a lot of cases, simply by taking an idea to its extreme. Is murder wrong? Well, if everyone murdered people, that certainly wouldn’t work. It’s the same with just about any crime, or other actions that are more or less unanimously agreed to be bad. The times when I lean on this thinking are more questions of what would be the best course of action (rather than black-and-white questions of right and wrong), particularly questions of common resources and their allocation.

One possible weakness of this way of thinking is situations like the prisoner’s dilemma, in which the outcome that’s optimal for everyone is not the optimal strategy for individual actors – that is, situations where someone trying to think of the group could be “taken advantage of” by people who don’t have the same intentions.
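The tension in the prisoner’s dilemma can be shown with a toy payoff table. This is just an illustration using the classic textbook numbers (years in prison, so lower is better); the names and values are my own choices, not from any reading:

```python
# Toy payoff matrix for the prisoner's dilemma.
# Entries are (my sentence, their sentence) in years; lower is better.
PAYOFFS = {
    ("cooperate", "cooperate"): (1, 1),
    ("cooperate", "defect"):    (3, 0),
    ("defect",    "cooperate"): (0, 3),
    ("defect",    "defect"):    (2, 2),
}


def best_response(their_move):
    """The move that minimizes my own sentence, given the other's move."""
    return min(["cooperate", "defect"],
               key=lambda mine: PAYOFFS[(mine, their_move)][0])


# Defecting is the best response to either choice (a "dominant strategy"),
# even though mutual cooperation beats mutual defection for both players.
print(best_response("cooperate"))  # defect
print(best_response("defect"))     # defect
```

The group-minded move (cooperate) loses to a self-interested opponent in every single round, which is exactly the “taken advantage of” risk described above.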

Obviously you have to consider the chances of this and be careful not to expose yourself to huge risk (irresponsible altruism can be naïve), but by and large I think one should still try for the group outcome anyway. Someone has to take the first step if anything is to change for the better.

Introduction

Hello! I’m Jacob Beiter, and this is my blog for CSE 40175 – Ethics and Professional Development. This is where I’ll be putting my responses to the course material, which will hopefully be at least somewhat interesting and thought-out. We’ll see!

A bit about me – I’m a senior studying Computer Science, hailing originally from Charlottesville, Virginia, living in Keenan Hall, very involved in the band, etc etc (get that Notre Dame introduction out of the way).

I first got involved with Computer Science because I found that it was both fun and something I was pretty good at. As I’ve gone through college this has been refined somewhat, but at the core it’s still because the kind of problem-solving via decomposition and precise description/understanding of problems that computer science focuses on is right up my alley.

In that vein, I’m still a generalist within CS, and haven’t given a ton of thought to what it means to be a computer scientist, or the different ways that I can or should apply the skills I’m learning here. That’s what I’m hoping to get out of this class – I’m hoping to think a little more broadly, and spend time considering and discussing the kinds of issues that a responsible member of the computer science community needs to be aware of.