Reading 00: Utilitarianism, Mostly

It’s hard to articulate explicitly how I decide whether something is right or wrong – by nature, a lot of it is the product of experience and more nuanced than I can easily write down – but I’ll do my best. In short, my views have a large utilitarian component, with the focus on the common good of the resulting society. In terms of the reading, I’d say I lean towards the Consequentialist theories, with a mixture of the Utilitarian and Common Good approaches.

A quick disclaimer: I don’t have any formal experience with ethics, so this is neither going to be a tidy idea that sits neatly in the ethics box, nor an expression of hard and fast rules that I would stick to without qualifications. That said, I’ll do my best to express myself.

The short way that I would express my thinking is “What would happen if everyone behaved this way?” Now, this sounds like the categorical imperative: “Act only according to that maxim by which you can at the same time will that it should become a universal law.” However, I don’t mean to make statements about the universality of rules or actions like the categorical imperative does – I think that’s reductive, and any reasonable system should have more nuance.

I’m more concerned with the motivations behind these actions. I’ve always found emergent behavior from simple rules fascinating – for example, structures like linked lists or cellular automata exhibit rich properties from a relatively small set of constraints. In the same way, I think that even in small actions, the way we give and take can add up to large impacts on the communities we’re part of. Instead of “should everyone take this action or follow this rule?”, my thinking is more along the lines of “should everyone weigh their own needs against those of others in this way?”
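The cellular automaton analogy can be made concrete with a short sketch. The example below is purely illustrative (it uses Wolfram’s elementary Rule 30 as the assumed rule; any of the 256 elementary rules would do): a one-line update rule, applied cell by cell, produces a complex global pattern from a single live cell.

```python
# Elementary cellular automaton: each cell's next state depends only on
# itself and its two neighbors -- a tiny rule set that nonetheless
# produces complex, hard-to-predict global patterns.
RULE = 30  # assumed rule number for illustration

def step(cells):
    """Apply the rule once to a row of 0/1 cells (edges wrap around)."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single live cell and watch structure emerge.
row = [0] * 31
row[15] = 1
for _ in range(15):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

The point of the sketch is only that the rule itself is trivial to state, while the pattern it generates is not – the same asymmetry between simple individual behavior and large collective outcomes.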

This is where my thinking most resembles utilitarianism. I agree with much of it – I think that if everyone strives to have a net positive impact on the world, to leave people better off than they found them, it could make a profound difference. However, as the saying goes, “Don’t set yourself on fire to keep others warm.” People have different needs and abilities, and at times being willing to accept help, or an “unfair” distribution in your favor, can be as important as offering help or giving something up for others’ benefit.

This way of thinking can trivially decide between right and wrong in a lot of cases, simply by taking an idea to its extreme. Is murder wrong? Well, if everyone murdered people, society certainly couldn’t function. The same goes for just about any crime, or any other action that is more or less unanimously agreed to be bad. The times when I actually lean on this thinking are the less black-and-white questions of what the best course would be, particularly questions of common resources and their allocation.

One possible weakness of this way of thinking is situations like the prisoner’s dilemma, in which the outcome that’s best for everyone collectively is not the optimal strategy for any single actor – that is, situations where someone trying to act for the group can be “taken advantage of” by people who don’t share the same intentions.
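That tension can be shown directly with the standard payoff matrix. The sketch below is just an illustration using the textbook payoff values (the numbers and the `best_reply` helper are assumptions, not anything from the reading):

```python
# Classic prisoner's dilemma payoffs (higher is better for the player).
PAYOFF = {  # (my move, their move) -> my payoff
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I cooperate, they defect (the "sucker's payoff")
    ("D", "C"): 5,  # temptation: I defect on a cooperator
    ("D", "D"): 1,  # mutual defection
}

def best_reply(their_move):
    """What a purely self-interested actor plays, given the other's move."""
    return max("CD", key=lambda my: PAYOFF[(my, their_move)])

def total(a, b):
    """Combined payoff when the players move a and b."""
    return PAYOFF[(a, b)] + PAYOFF[(b, a)]

# Defection is the better reply no matter what the other player does...
assert best_reply("C") == "D" and best_reply("D") == "D"
# ...yet mutual defection leaves both players worse off than cooperation.
print(total("D", "D"), "<", total("C", "C"))  # 2 < 6
```

The self-interested best reply is always to defect, even though both players end up worse off than if they had cooperated – which is exactly the gap between individual and group optimality described above.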

Obviously you have to weigh the odds of this and be careful not to expose yourself to serious risk (irresponsible altruism can be naïve), but by and large I think one should still aim for the group outcome anyway. Someone has to take the first step if anything is going to change for the better.