Discounting Preferential Pains

One problem in hedonistic utilitarianism is how to treat mental pain – or distress, as I would prefer to call it. The fact that some actions might cause distress sometimes leads people to make dubious interpretations of what hedonistic policies might look like. For example, if the mere thought that homosexual intercourse is taking place causes mental distress for a substantial number of people, then wouldn’t the hedonist have to say that homosexual intercourse must be forbidden? (This may not be the most interesting example to bring forward, since most people who argue against homosexuality do not do so on utilitarian grounds, but anyway…)

To answer this we must first put the distress of those who are being denied the opportunity to engage in sexual activities into the balance. Even so, it may still be the case that the distress of a lot of straight people outweighs the distress of a relatively small minority of sexually frustrated people. Would the hedonist have to concede that, in this case, a ban on homosexual intercourse is still a good policy?

I would say no, since it is still the case that many (probably most) of those who feel such grave distress at the mere thought of homosexual intercourse feel it only because they have previously formed (or have been taught) the preference that homosexuality is wrong. On the other hand, most people who see absolutely nothing immoral about homosexuality do not feel much mental distress when imagining homosexual activities, or at least not so much distress that they feel warranted in complaining about it.

In other words, if grave cases of mental distress are caused by certain preferences (or ideas), and if the distress would go away if the preference were to go away, then it seems unreasonable to count this distress in the summation of pleasures and pains. I might have a preference not to be tortured, but the pain of the torture would not go away if I somehow managed to talk myself out of this preference. I might have a preference for eating ice cream, but the pleasure of eating ice cream would (in most cases) not go away if I got rid of this preference (otherwise, imagine how easy it would be to lose weight, stop smoking, etc.).
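To put the discounting rule schematically (this is nothing more than a rough sketch of my own notation, not an established formula): for each pain $p_i$ in the hedonic balance, let $p_i'$ be the pain that would remain if the preference generating it were abandoned, and count only that residue:

$$W = \sum_i p_i' \quad\text{rather than}\quad W = \sum_i p_i.$$

For torture, $p' \approx p$, so the pain counts in full; for distress at the mere thought of homosexual intercourse, $p' \approx 0$, so it is (almost) entirely discounted.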

But the pain of thinking about homosexuality will (at least in most cases) be considerably mitigated if the preference against it disappears. We might say the same thing about the pains and pleasures pertaining to revenge. If we get the (rather primitive) idea out of our heads that all wrongs must be avenged by inflicting pain on the offender that exceeds the pain inflicted on the victim, then we would probably not feel so much pain when someone who has done us wrong does not get the punishment he or she “deserves”.

It should also be added that we are often fooled by our preferential pleasures when it comes to planning our own lives. We often think that we will be happier if we only take this or that journey, buy new clothes or furniture, pursue a certain career, win another medal, have another child, etc. Often we turn out to be mistaken. Our preferences do not always increase our pleasure (or reduce our pain). Of course, it may be the case that we get a certain amount of joy (mental pleasure) from the mere act of planning these future events, and it would seem strange to discount this joy just because it is built out of as yet imaginary things. If it all stopped at planning, no harm would be done; but if we are fooled by these preferential joys we will eventually try to make the things themselves happen, and often find out that we are no happier than before. We would, in other words, find out ex post facto that we could have used our energy on other things that would, perhaps, have had a greater chance of increasing our pleasure.


Mozi and the Dangers of Narrow Utilitarianism

In the minds of some (perhaps most) people, “utilitarianism” has a special meaning. It often refers to an extremely “materialistic” view, which aims at the maximization of wealth. This is not surprising, since utilitarianism has long been associated with economics (though less so nowadays). In the early 19th century, those who wrote on economics (or “political economy”, as it was usually called) were often adherents of hedonistic utilitarianism of the Benthamite kind. Later in the same century, however, the goal of maximizing pleasure was regarded by many economists as too imprecise to be used in scientific discussions. So pleasure was generally replaced by “utility”, and by the maximization of utility was usually meant the highest possible satisfaction of subjective preferences. Moreover, the easiest way to “measure” such satisfaction of preferences is by measuring income, wealth, and the like. It is hard to be scientific if one wants to measure levels of mental “satisfaction”.

It is, of course, a shame that many associate utilitarianism with this “economistic” way of thinking, since it is rather barren if regarded as a moral view (and if it is not intended as a moral view, then why should one make policy recommendations on the basis of it?). Nevertheless, it is, in fact, a very old view, and if we look back in the history of ideas, we find one extreme adherent of it in the Chinese philosopher Mozi (or Mo Tzu), who lived around 400 B.C. Mozi put forward a principle of “universal love” (not really emotional love, but rather “concern”) which is not all that different from the utilitarian/hedonist idea of impartiality when it comes to the maximization of happiness, and he criticized the Confucian view that parents and relatives should always be one’s first concern.

However, it seems that Mozi’s prime concern was economic production, and he criticized Chinese rulers who squandered resources by, for instance, waging war. He also criticized traditions that seemed very unproductive, like expensive funerals and long periods of mourning, during which one was expected to do no work.

Now, it is, of course, rather uncontroversial to criticize war and the destruction and costs that it brings along. Somewhat more controversial, perhaps, is the critique of superstitious practices that drain resources, but even people who subscribe to such beliefs would probably agree that such things cannot be allowed to swallow too much of one’s material resources.

But there are other things in Mozi’s doctrine which appear more controversial. To quote a scholar: “To attain the end of a rich, numerous, orderly, peaceful, and literally ‘blessed’ population, Mo Tzu was willing to sacrifice very nearly everything else. Clothing should keep out the cold in winter and the heat in summer but should not be attractive. Food should be nourishing but not well-seasoned. Houses should keep out the cold and heat, the rain and thieves, but should have no useless decoration.” Mozi went so far as to condemn music, “which used men’s time and wealth in the making and playing of instruments, yet created nothing tangible”.

Mozi’s utilitarianism is, thus, extreme in its maximization of “utility” – in the form of the production of material, “useful” things and the production of new people – and in its rejection of what is of interest to the hedonistic utilitarian, namely, pleasure. Now, it is probably the case that no utilitarian economist adheres to such an anti-hedonistic view. Nevertheless, their political recommendations have not always been all that different. Politics is often regarded as a wealth-making machine, whereas pleasure is something that should be left to individuals and not concern politicians. Just enable people to make money, and the rest will take care of itself.

The problem with this is that the way we make money affects the possibilities of leading a more pleasurable life. And it is not easy for an individual to plan these things by her- or himself, when the system itself is geared towards a specific way of making money. It is, for instance, hard to find a career that will give you a reasonable balance between leisure and money. The logic of the system demands that you either work hard for a lot of money, or work very little for little money (although many also work hard for little money, of course, just as a few lucky people work little for a lot of money). It is hard to find a career which requires a medium amount of work for a medium amount of money, even though that would probably produce more pleasure than the present way of making a living.

In conclusion, utilitarian politics must from the beginning regard people as interested in pleasure and promote it through politics, rather than decide that one thing (for instance, income) should serve as a proxy for happiness.

Using Slippery Slope Arguments in Good Faith

A slippery slope argument goes like this: Action A leads to consequence k. Consequence k will, in turn, make consequence l happen, and l will make consequence m happen. While k is not an obviously bad consequence, l is somewhat bad, and m is potentially catastrophic. Thus, action A should not be done, because it will eventually lead to consequence m.

Sometimes slippery slope arguments are simply dismissed as fallacies, but it is obvious that they are not inherently fallacious. However, the strength of the argument is very dependent on the strength of the empirical evidence that the consequences one invokes will actually occur. Such evidence is often lacking, and that is why slippery slopes are often dismissed.
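One rough way to see why the evidential burden is so heavy (a schematic sketch, assuming for simplicity that each step depends only on the one before it): the probability of ending up at the catastrophic consequence shrinks multiplicatively along the chain,

$$P(m \mid A) = P(k \mid A)\cdot P(l \mid k)\cdot P(m \mid l).$$

Even if each step is quite likely – say, a probability of 0.8 – the chance of actually reaching m is only about 0.51, and longer or shakier chains fare far worse. The argument thus stands or falls with the evidence for every single link.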

As a consequentialist I am very open to slippery slope arguments. If I, as a hedonist, make an argument about an action or a policy, and if someone can credibly claim that the action or policy in question will probably have bad consequences down the line – even though the immediate consequences may appear good – then I would be perfectly willing to drop my argument.

Nevertheless, I am somewhat reluctant to take slippery slopes seriously when they are raised against me by people who are not consequentialists themselves; because even if I could counter with better evidence that removes the slippery slope, my antagonists would, presumably, not change their minds anyway, since their arguments do not depend on consequences at all. A parallel would be a scientific discussion in which a pseudoscientist who rejects naturalism argues that the scientist’s naturalistic evidence is wrongly interpreted. However, if the pseudoscientist rejects naturalistic evidence altogether, we would feel disinclined to take him or her seriously in the first place.

I do understand, however, why non-consequentialists are eager to use slippery slope arguments, because it is mostly a win-win situation for them. If the argument is convincing, the consequentialist will have to drop her position. If the argument is unconvincing, the non-consequentialist can simply fall back on her non-consequentialist principles and disregard the empirical evidence. For the consequentialist, however, it is a lose-lose situation, since the non-consequentialist does not have to endorse the consequentialist’s position even if the slippery slope is successfully disproved.

In short, please bring up slippery slope arguments if you want, but do so in good faith. Be prepared to change your own views if the slippery slope does not pan out the way you thought in the first place. Otherwise, it is just like pseudoscientists (and similar folk) who argue against real scientists by invoking empirical evidence, while they themselves refuse to state clearly which empirical evidence might falsify their own position.

A Reasonable Doctrine of Personal Responsibility

I do not believe in the existence of free will in any deep sense. Everything we observe in nature happens because of some prior cause. A puddle of water cannot decide to freeze; it is, rather, caused to freeze when reaching a certain temperature. Human beings are products of nature as well – simply a collection of atoms (note that there is no valuation implied in this use of the word ‘simply’) – although our brains are infinitely more complex than a puddle of water, which means that most of our actions are infinitely more unpredictable than the ‘actions’ of most other objects in nature. So even a determinist (i.e., someone who does not believe in the existence of free will in the abstract) should concede that, in practice, there is not much difference between a belief in free will and a belief in determinism.

Some would, however, claim that it makes a great moral difference whether you believe in free will or determinism. On this account, we cannot punish or reward anyone if they are not acting out of free will. As a utilitarian hedonist I must disagree with this statement. If the consequences of punishing or not punishing someone were the same, then it would not matter what we did. But the consequences are (almost) never the same. Even if the actions of a ‘rapist’ were determined by prior processes in the brain (i.e., they were not the result of ‘free will’), it still makes a difference whether we punish him or not. Among the most important consequences are that the streets will be safer with the rapist behind bars, and that other potential rapists will be deterred from raping. The last point is especially important; it indicates that our actions (even if they, in turn, are not the result of free will) can create new determining factors for the behavior of other people.

Thus, I view the question of punishment, blame, reward, and praise as wholly separate from the question of determinism or free will. Even though we could, at least in principle, describe everything that caused someone’s actions, we can still discuss how much this person should be blamed or praised, or how much personal responsibility we should ascribe to them. A general rule might be that the more we can actually assess the determining factors of someone’s behavior, the less responsibility we can ascribe. For example, if I were to watch someone get pushed into the street in front of a car, I could easily predict that this person would get involved in an accident, and so I could not ascribe responsibility to the person being pushed for the consequences of this accident. I could, however, ascribe a heavy responsibility to the pusher, because I could not predict her actions at all (unless, perhaps, there was another pusher behind her).

What about the responsibility for poverty or unemployment, which might be important to assess in order to decide who ‘deserves’ assistance from the state? Here we must apply sliding scales, since it would be highly unusual to find someone who is not responsible at all for their situation, as well as someone who is completely responsible for their situation. Again, the test would consist in predictability. We might predict that someone with a certain disability can be expected to have a hard time finding a job, which means that responsibility for this unemployment cannot be total. On the other hand, we might be able to find a not insignificant number of people who have succeeded in finding employment in spite of this disability, which would mean that our ability to predict decreases (at least ceteris paribus), while the level of personal responsibility increases.

While certain disabilities would decrease the level of responsibility for unemployment drastically, there are other things which would increase it drastically. An example of this would be laziness. While it is certainly true that laziness, like other personal traits, is endowed to us by nature, it is still the case that we can find many examples of lazy people who are able to control or overcome their laziness in order to get a job. In other words, it is very hard to predict whether someone who exhibits lazy tendencies early in life will succeed in finding employment or not. Thus, when it comes to lazy people, their laziness does not remove very much personal responsibility. Furthermore, while the lazy person might claim that he is lazy for certain reasons (that he cannot remove), we can create other reasons for him to counteract this laziness by, for instance, nagging at him or threatening to withdraw unemployment benefits. This does not exclude the possibility that we might find rare cases of extreme laziness that cannot be overcome by any means, but in such cases we would probably not call it ‘laziness’ anymore, but refer to it as a mental disorder.

So my suggestion is that the level of predictability regarding people’s actions should determine the level of personal responsibility we ascribe. Right now I am just throwing this out as an idea which, in the end, might prove to be untenable. In any case, the theory seems to fall in line with many of our standard practices in deciding whether we should accept someone’s excuses or not. If you are late to a meeting and your excuse is that you had to finish playing a game on your phone, people would probably not accept it, because they could point to so many examples of people who like to play games on their phone but still manage to get to meetings on time. If your excuse is that your train was late because of a terrorist attack, we would accept it, because it would be very hard to find examples of people who would make it to work on time under those circumstances. It would, in other words, be easy to predict that a person stuck on a train during a terrorist attack would not make it to work, while it would be extremely difficult to predict whether someone caught up in a game on her phone will manage to be on time or not.
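To put the suggestion schematically (again, a rough sketch in my own notation, which may well prove untenable): the responsibility R we ascribe for an action a could vary inversely with how predictable the action was, given the determining factors we can actually observe,

$$R(a) \propto 1 - P(a \mid \text{observable determining factors}).$$

The pushed pedestrian’s accident was predictable with near certainty once the push occurred, so her responsibility approaches zero; the push itself was nearly impossible to predict beforehand, so the pusher’s responsibility is close to maximal.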

I guess the obsessive phone gamer would be able to say that once she started to play the game it was very easy to predict that she would not make it to the meeting on time, given that we know how long the game usually lasts. Thus, we should not ascribe much responsibility to her. Here one could probably reply that even though it was predictable that she would be late once the game had started, it was very hard to predict that she would pick up the phone and start to play in the first place. Thus she had a very large responsibility for starting to play. And since she, presumably, knew of the risk that she would be late if she started playing, the responsibility for the tardiness remains high.

In the end, however, it must be added that praise or blame should only partly be determined by level of personal responsibility. The most important thing is always the consequences of punishing or not punishing. If we see many good consequences of not reprimanding the phone gamer in any way, in spite of high levels of personal responsibility, then we should not mete out any blame or punishment. Conversely, the consequences might be such that it is beneficial to punish people in spite of low levels of responsibility (indeed, we do sometimes lock up people who are dangerous to other people, even though they are very mentally ill and not responsible at all for their actions).

Ethical Hiring and Firing

It struck me one day when I was passing a parking inspector on the street that that kind of job could not be very hard to learn, i.e., most ‘normal’ people could probably do it quite easily. Yet it is probably the case that people who actually do jobs like that – jobs that most people can do with a small amount of on-the-job training – often have more work experience or education than is needed (though it could also be the case that they simply have the right social connections). There are other jobs that require some sort of education, but usually not a three- or four-year university degree. Many civil service jobs, for instance, ask for degrees in political science, law, sociology, etc., even though the job description has very little to do with the contents of such degrees.

In other words, many people are over-skilled or over-educated relative to their present job. On the other hand, many people remain unemployed because they cannot get the experience or education that would put them at the top of the hiring employer’s list. There might be a measure of ‘hidden’ discrimination in this. We usually frown upon nepotism or favoritism, because we want to ensure equality of opportunity and meritocracy. Thus, one might want to rely on objective criteria (years of education, years of work experience, etc.) when hiring. But if these objective criteria are not very relevant for the job in question, then this appears to be nothing but a form of discrimination against those who have not been able to connect to the right kind of social networks (and we all know that social networks are of increasing importance when it comes to landing a job these days) or have been forced to stay away from work life for other reasons. And it is probably the case that this kind of discrimination hits people even harder than the ‘classic’ kinds of discrimination: race, ethnicity, sexual preference, age, religion, etc.

Now, if we want to maximize happiness in society, it seems that employers should relax their demands for education or experience when those criteria are not very relevant for the job in question. Empirical research has confirmed that whereas people are able to adapt to many unfortunate circumstances in life (for instance, becoming disabled), unemployment seems to be an exception. After the shock of becoming unemployed has receded, quality of life increases somewhat again, but usually not back up to the pre-unemployment level. Combating long-term unemployment should thus be important for a hedonistic utilitarian. One way of doing this could be – at least for some vacancies – to consciously hire people who may have less experience or education than the top applicants. In other words, hiring people with adequate qualifications rather than people with excellent qualifications (which are unnecessary for doing the job in question anyway).

In times of high unemployment, employers might receive hundreds of applications when they announce a vacancy, and it is hard to imagine that the person who actually gets the job is better at doing it than everyone who ‘ranks’ from, let’s say, place 2 to 20 on the list. On the contrary, we all know from experience that a not insignificant number of the people we encounter in everyday life are more or less incompetent at their jobs. Thus, in many cases the policy suggested here would not only be more just, but also more efficient for society. But even if it turns out to reduce efficiency, we must remember that it is always possible to sacrifice some efficiency if it means greater justice (i.e., overall happiness).

You can read more on this topic in my article ‘Ethics in Hiring: Nepotism, Meritocracy, or Utilitarian Compassion’, in the latest issue of Philosophy for Business.

The (Un)Importance of Intentions in Ethics

It is sometimes discussed whether people’s intentions are relevant or not when it comes to comparing seemingly identical consequences. Many people seem to think intentions and motives are highly relevant, for instance when it comes to assessing collateral damage in warfare; Hitler killing 100,000 civilians is simply not the same as Roosevelt or Churchill killing 100,000 civilians.

As a consequentialist, however, I find it hard to attribute any intrinsic value to intentions. And if we contemplate the morality of an action that is, so to speak, a one-shot activity, the intentions become basically irrelevant. We might, for instance, imagine someone who out of temporary desperation attempts a burglary and tries (but fails) to pick a lock, which wakes up the person in the apartment, who would otherwise have died because of an undetected gas leak. The consequences of this action would be the same as if the mailman had noisily put mail through the mail slot, waking the person inside. The mailman acted on perfectly legitimate intentions, while the burglar, presumably, did not. But if we assume that the burglar got so nervous from this attempt that he decided never to try anything illegal again, then the intentions seem rather irrelevant when it comes to assessing the consequences. The failed burglar never actually does anything bad (or at least nothing illegal) in his whole life, and he saves one person from dying; and we could say the exact same thing about the mailman.

In reality, however, intentions are usually valuable when it comes to predicting the future behavior of a person. After all, a burglar will usually commit more than one burglary in his life (perhaps we wouldn’t call him a “burglar” if he only did it once…). If we fail to blame someone who by mere chance does something good while intending to do harm, we increase the chance that she (or someone else) will try to do something harmful in the future. By the same token, blaming someone for accidentally doing harm while aiming to do good would perhaps make people reluctant to attempt to do good in the future. Intentions are, in other words, almost always important because of their connection to actual consequences.

Intentions in themselves, however, should be irrelevant. We could even imagine cases where it would be good to have what is normally regarded as wicked intentions, i.e., where bad intentions are actively utilized to get good consequences (as opposed to the gas leak example above, where the good consequences were accidental). For instance, there are rare cases when it would be justified to torture someone in order to save lives, and in such cases one may have to employ a torturer who doesn’t care about saving those lives, but who enjoys torturing people. This would be someone who does (overall) good for what is ordinarily regarded as the wrong reasons, but we would not blame him for it.

One can also imagine someone who has the best of intentions but mostly does harm instead. A general who works for the UN to stop genocides and other gross human rights violations, but who uses his troops and resources in a thoroughly incompetent way, leading to many unnecessary deaths, should probably be blamed for his actions, even though his intentions were extremely humanitarian. Good intentions cannot, in other words, always trump bad consequences, just as bad intentions cannot always trump good consequences.

The Standing of Utilitarianism

[This is a translation of a post previously published on my Swedish blog, April 2016]

Recently I have been trying to assess the standing of utilitarianism in normative research and in ethical discourse in society in general. This is not a very easy task. I, myself, am a political scientist (“political theorist”) and, thus, not connected to the same institutional framework as the philosophers, even though the research that I do and that philosophers do very often overlap.

Within the subdiscipline of political theory it seems fairly obvious that utilitarianism is as good as dead, and that it has been in that state for a long time. Many textbooks in political theory discuss utilitarianism very briefly and repeat the same objections to it that have been raised for decades (and which can be refuted quite easily by utilitarians). When one turns to the most common academic journals in political theory, it is also easy to notice how rarely anyone argues from a utilitarian perspective.

Thus, when it comes to political theory I believe I can plausibly conclude that utilitarianism is rather unpopular. When it comes to fields like “pure” moral philosophy, applied ethics, etc., my conclusions are less clear. Some (qualified) people seem to think that utilitarianism still has a strong position in these disciplines (see, e.g., “utilitarianism” in the Routledge Encyclopedia of Philosophy). Moreover, I recently read a book (by Anne Maclean) which claimed – and lamented – that bioethics (i.e., ethical research concerning moral choices in light of modern technology, medicine, etc.) is virtually dominated by utilitarianism (or at least that this was the case in the 1990s, when Maclean’s book appeared).

Now, if it is the case that utilitarianism still has a strong standing within moral philosophy (unlike in political theory), then one has to say that professional philosophers have done a bad job of spreading this doctrine to people outside academia. My impression is that when a utilitarian philosopher gets the chance to speak in mainstream media (in Sweden it is usually Torbjörn Tännsjö, internationally it is often Peter Singer), they seem to encounter at least as much criticism as assent.

Perhaps it is really the case that although utilitarianism has, during certain periods, been somewhat popular among normative researchers, it has never been very popular among ordinary people. But if this is true, the reason is probably not that people have found another moral theory that is more coherent than utilitarianism; it is, rather, that people in general do not care very much about endorsing a coherent or “logically” satisfying moral theory. They mix elements from utilitarianism (most people seem to care about consequences at least in some cases) with other ideas as they see fit. And this way of “philosophizing” has spread to academia. The academics who reject utilitarianism often seem to do so not because they have found a theory that better satisfies the demands for argumentative stringency which one should be able to make of a “scientist”.

Anyway, it is a shame that the rejection of the moral theory that I believe has the fewest theoretical problems – hedonistic utilitarianism – is often based on rather loose objections: counterarguments which are quite easy to respond to, and (what is probably more important) which are made from moral perspectives that would hardly survive the same kind of scrutiny that utilitarianism is usually subjected to. Other moral theories (at least those that are in fashion at the moment) receive, in other words, a milder treatment when it comes to the number of objections one is expected to be able to handle in order to claim that one has a defensible theory. A Rawlsian perspective, for instance, is usually considered less problematic than a utilitarian perspective, even though the writings of Rawls can be demolished quite well by most philosophy undergraduates.