Using Slippery Slope Arguments in Good Faith

A slippery slope argument goes like this: Action A leads to consequence k. Consequence k will, in turn, make consequence l happen, and l will make consequence m happen. While k is not an obviously bad consequence, l is somewhat bad, and m is potentially catastrophic. Thus, action A should not be done, because it will eventually lead to consequence m.

Slippery slope arguments are sometimes dismissed out of hand as fallacies, but they are not inherently fallacious. The strength of such an argument, however, depends heavily on the strength of the empirical evidence that the invoked consequences will actually occur. Such evidence is often lacking, and that is why slippery slopes are so often dismissed.

As a consequentialist I am very open to slippery slope arguments. If I, as a hedonist, make an argument about an action or a policy, and if someone can credibly claim that the action or policy in question will probably have bad consequences down the line – even though the immediate consequences may appear good – then I would be perfectly willing to drop my argument.

Nevertheless, I am somewhat reluctant to take slippery slopes seriously when they are raised against me by people who are not consequentialists themselves, because even if I could counter the slippery slope with better evidence, my antagonists would presumably not change their minds anyway, since their arguments do not depend on consequences at all. A parallel would be a scientific discussion in which a pseudoscientist who rejects naturalism argues that the scientist’s naturalistic evidence is wrongly interpreted. If the pseudoscientist rejects naturalistic evidence altogether, we would feel disinclined to take him or her seriously in the first place.

I do understand, however, why non-consequentialists are eager to use slippery slope arguments, because it is mostly a win-win situation for them. If the argument is convincing, the consequentialist will have to drop her position. If the argument is unconvincing, the non-consequentialist can just fall back on her non-consequentialist principles and disregard the empirical evidence. For the consequentialist, it is, however, a lose-lose situation, since the non-consequentialist does not have to endorse the consequentialist’s position even if the slippery slope is successfully disproved.

In short, please bring up slippery slope arguments if you want, but do so in good faith. Be prepared to change your own views if the slippery slope does not pan out the way you thought it would. Otherwise, you are just like pseudoscientists (and similar folk) who argue against real scientists by invoking empirical evidence, while they themselves refuse to state clearly which empirical evidence might falsify their own position.

A Reasonable Doctrine of Personal Responsibility

I do not believe in the existence of free will in any deep sense. Everything we observe in nature happens because of some prior cause. A puddle of water cannot decide to freeze; it is, rather, caused to freeze when reaching a certain temperature. Human beings are products of nature as well – simply a collection of atoms (note that no valuation is implied in this use of the word ‘simply’), although our brains are infinitely more complex than a puddle of water, which means that most of our actions are infinitely more unpredictable than the ‘actions’ of most other objects in nature. So even a determinist (i.e., someone who does not believe in the existence of free will in the abstract) should concede that, in practice, there is not much difference between a belief in free will and a belief in determinism.

Some would, however, claim that it makes a great moral difference whether you believe in free will or determinism. On this account, we cannot punish or reward anyone if they are not acting out of free will. As a utilitarian hedonist I must disagree with this statement. If the consequences of punishing or not punishing someone were the same, then it would not matter what we did. But the consequences are (almost) never the same. Even if the actions of a ‘rapist’ were determined by prior processes in the brain (i.e., they were not the result of ‘free will’), it still makes a difference whether we punish him or not. Among the most important consequences are that the streets will be safer with the rapist behind bars, and that other potential rapists will be deterred from raping. The last point is especially important; it indicates that our actions (even if they, in turn, are not the result of free will) can create new determining factors for the behavior of other people.

Thus, I view the question of punishment, blame, reward, and praise as wholly separate from the question of determinism or free will. Even though we could, at least in principle, describe everything that caused someone’s actions, we can still discuss how much this person should be blamed or praised, or how much personal responsibility we should ascribe to them. A general rule might be that the more we can actually assess the determining factors of someone’s behavior, the less responsibility we can ascribe. For example, if I were to watch someone get pushed into the street in front of a car, I could easily predict that this person would get involved in an accident, and so I could not ascribe responsibility to the person being pushed for the consequences of this accident. I could, however, ascribe a heavy responsibility to the pusher, because I could not predict her actions at all (unless, perhaps, there was another pusher behind her).

What about the responsibility for poverty or unemployment, which might be important to assess in order to decide who ‘deserves’ assistance from the state? Here we must apply sliding scales, since it would be highly unusual to find someone who is not responsible at all for their situation, as well as someone who is completely responsible for their situation. Again, the test would consist in predictability. We might predict that someone with a certain disability can be expected to have a hard time finding a job, which means that responsibility for this unemployment cannot be total. On the other hand, we might be able to find a not insignificant number of people who have succeeded in finding employment in spite of this disability, which would mean that our ability to predict will decrease (at least ceteris paribus), while the level of personal responsibility will increase.

While certain disabilities would decrease the level of responsibility for unemployment drastically, there are other things which would increase it drastically. An example of this would be laziness. While it is certainly true that laziness, like other personal traits, is given to us by nature, it is still the case that we can find many examples of lazy people who are able to control or overcome their laziness in order to get a job. In other words, it is very hard to predict whether someone who exhibits lazy tendencies early in life will succeed in finding employment or not. Thus, when it comes to lazy people, their laziness does not remove much personal responsibility. Furthermore, while the lazy person might claim that he is lazy for certain reasons (that he cannot remove), we can create other reasons for him to counteract this laziness by, for instance, nagging at him or threatening to remove unemployment benefits. This does not exclude the possibility that we might find rare cases of extreme laziness that cannot be overcome by any means, but in such cases we would probably not call it ‘laziness’ anymore, but refer to it as a mental disorder.

So my suggestion is that the level of predictability regarding people’s actions should determine the level of personal responsibility we ascribe. Right now I am just throwing this out as an idea which, in the end, might prove to be untenable. In any case, the theory seems to fall in line with many of our standard practices in deciding whether we should accept someone’s excuses or not. If you are late to a meeting and your excuse is that you had to finish playing a game on your phone, people would probably not accept it, because they could point to so many examples of people who like to play games on their phones but still manage to get to meetings on time. If your excuse is that your train was delayed because of a terrorist attack, people would accept it, because it would be very hard to find examples of people who would make it to work on time under those circumstances. It would, in other words, be easy to predict that a person stuck on a train during a terrorist attack would not make it to work, while it would be extremely difficult to predict whether someone caught up in a game on her phone will make it on time or not.

I guess the obsessive phone gamer would be able to say that once she started to play the game, it was very easy to predict that she would not make it to the meeting on time, given that we know how long the game usually lasts. Thus, we should not ascribe much responsibility to her. Here one could probably reply that even though it was predictable that she would be late once the game had started, it was very hard to predict that she would pick up the phone and start to play in the first place. Thus she bore a very large responsibility for starting to play. And since she, presumably, knew of the risk that she would be late if she started playing, her responsibility for the tardiness remains high.

In the end, however, it must be added that praise or blame should only partly be determined by level of personal responsibility. The most important thing is always the consequences of punishing or not punishing. If we see many good consequences of not reprimanding the phone gamer in any way, in spite of high levels of personal responsibility, then we should not mete out any blame or punishment. Conversely, the consequences might be such that it is beneficial to punish people in spite of low levels of responsibility (indeed, we do sometimes lock up people who are dangerous to other people, even though they are very mentally ill and not responsible at all for their actions).

What is “Leftism” and How Can it Be Defended?

Recently I have been doing some writing on the concept “leftism” (which will probably be available soon as a small book). It is, of course, hard to define exactly what leftism is, but most people probably have a rough idea about what it means. My own method has been to approach the term as an ideal type, i.e., a sort of loose definition which enumerates characteristics normally associated with the phenomenon in question. When talking about ideal types, it is important to note that not all characteristics need to be found in all individual cases. But it is hard to say “objectively” how many characteristics one must observe, or how strongly they must be manifested. Again, an ideal type is not a precise definition.

For the purposes of my own investigations – and my defense of leftist politics – I assumed that the most important characteristic of leftism is a defense of relatively high levels of economic redistribution. It would be hard to call someone a leftist if he or she did not want higher levels of economic redistribution (mostly from rich to poor, of course) than, for instance, conservatives or libertarians, no matter how many of the other criteria for leftism they fulfill. On the other hand, it would, perhaps, be hard to call someone a “pure” leftist if she accepts high redistribution, but rejects all other characteristics of leftism.

The other characteristics or criteria that I include in my ideal type of leftism are support for: (radical) feminism, anti-discrimination laws and affirmative action, immigration and (some sort of) multiculturalism, bottom-up globalization and special rights for workers, and participatory and majoritarian democracy. One could probably also add environmentalist concerns (and a particular view of the state’s role to meet those concerns) to this list, but for different reasons I will not include it in my book.

I defend leftist politics by appealing to my hedonist ethics. I believe that, for instance, economic redistribution from the rich to the poor, a kind of feminism that makes us see beyond culturally specific gender roles, or a state that does not privilege any particular ethnicity or lifestyle can be defended on the grounds that it maximizes pleasure and (perhaps more importantly) minimizes pain.

I do not, in other words, appeal to Marxism in any way to defend leftism; and I believe that extensive ownership of the means of production by the state (or similar collective entities) would be mistaken. I believe leftists have relied too much on Marxism and other related theories to defend their positions. Hedonism is a more straightforward idea, and it does not suffer from the many weaknesses that can be found in most other theories used to defend leftism. Of course, some may claim that the hedonist defense of leftism is not left enough. In that case, I would only answer that that would be a problem for such “left-leftism”, not for hedonism – unless, of course, the critics in question are prepared to make a philosophically rigorous argument against hedonism itself (and better arguments for another moral theory).

My Ideology: Anti-Egoism

I usually discuss my ethics and my political philosophy. But to discuss one’s political ideology is to move some steps toward practical politics, leaving a few of the foundational steps behind for the sake of simplicity. After all, it is rare that a “conservative”, a “liberal”, a “nationalist” or a “socialist” is called upon to explain the moral foundations of their political views, and that is why we call these views ideologies rather than moral or political philosophies (but they can, of course, be propagated as foundational philosophies, provided that enough theoretical effort is made). And in real life (outside of the ivory towers of academia or think tanks, that is), we usually see no problem with the propagation of a “truncated” ideology rather than a complete political philosophy. In real life there is, alas, not enough time for the latter, nor enough philosophical know-how among most people to understand the philosophies in question.

So after this apology for the use of ideologies, I simply want to state what my ideology is. Put briefly, I probably adhere to what one might call anti-egoism. My view is that you simply don’t have a right to take for yourself more than is necessary for a decent life. By this I mean that you have a right to procure the means to uphold your own life (which, of course, also means keeping some savings for unforeseen events) and the lives of people who are dependent on you. I also grant the right to procure means for a few pleasures and interests that are necessary for a psychologically and culturally decent life. But wanting to go beyond this I would call egoism, at least in a world where many people lack the means to live this kind of decent life.

Now this anti-egoistic position does not by itself entail any specific economic system. In a very libertarian society the anti-egoistic ideal could flourish if most people subscribed to it. In a world where many people do not subscribe to it, political means are necessary to correct people’s egoism – means which might have different characteristics, depending on the circumstances. Often, however, the problem seems to be one of “financial” egoism, which often necessitates various redistributive measures through the tax system, along with other “leftist” tools. In this day and age the “environmentalist” toolkit seems to be highly appropriate as well.

I am perfectly ready to concede that some people will be happy to be egoists and claim that you basically have a “right” to anything that you can take for yourself. In real life, such conflicts are unavoidable. The resolution of these conflicts can only be left to democratic decision. I suspect, however, that few people would argue that egoism is an “ultimate” good. Some people would, no doubt, argue that egoism is bad, but that it is unavoidable. There might, for instance, be innovative and creative egoists who refuse to render any services to society unless their egoistic cravings are satisfied. I am prepared to concede that to some degree these cravings might have to be satisfied. But to what degree is an empirical matter. In any case, it is all too easy to start out with the bad-but-unavoidable sentiment and slip into a good-because-unavoidable sentiment. As long as one avoids that slippage, I believe the dilemma of unavoidable egoists is fairly manageable. At least as long as the egoists are not the majority.

So, that was the simple story of my political ideology. Less egoism in the world, that’s it. Not a complicated message, one might think. Of course, I am always ready to defend this ideology on a more philosophical level, but in – what I have called – real life such defenses are rarely called for; and I am beginning to think I am wasting my time with philosophical details.

Can We Afford to Help Refugees?

[This is a translation of an old post from my Swedish blog]

For a few (mainly among the Sweden Democrats, I presume), the refugee question may be about protecting Swedish “culture” or “identity”, but for most people it mainly seems to be a question about costs: can we afford to help a lot of refugees or not? For that reason, I would like to contribute a few theoretical comments about what it might mean to say that one can or cannot afford something (discussion of our actual national accounts I leave to others).

When is it possible to say that one person definitely cannot afford to help another person? There are possibly saint-like people who would dispute the following answer, but it seems reasonable to assume that someone who does not have food, clothes, and shelter for oneself cannot afford to help someone else (it is, rather, precisely those people who need help from others). It is rather pointless to share one’s last piece of bread with someone else so that both get too little food and perish.

But the further one progresses beyond the satisfaction of those basic needs, the less meaningful it becomes to claim that one “cannot afford” certain things. Of course, one could always use the expression in a technical sense – Bill Gates might be able to afford to buy 100 luxurious houses, but he “can’t afford” to buy 101 – but for a well-to-do person it is more apt to talk about priorities. I think most of us would claim that a parent who says he cannot afford a winter coat for his child, although he has just bought a new hi-fi system for himself, can actually afford the coat, but has prioritized something else.

There is no doubt that Sweden can afford to help more refugees than the numbers that have already been helped. The question is if we want to prioritize it. Many Swedish households seem to have a lot of money to spend on travels, home improvement and interior design, electronic gadgets, restaurant meals etc. This is obviously something they prioritize above helping strangers in need.

Why, then, do some people still claim that we cannot afford to receive refugees? Apparently the affordability answer is simply a pretext. There must be some other real reason. The main candidates for such a reason, I believe, are: (i) a moral conviction that we do not have any economic responsibility towards strangers, (ii) the nationalist argument mentioned above, or (iii) pure unreflective egoism. It would, thus, be good if those who claim that “we can’t afford it” openly stated which of these viewpoints they actually endorse.

(Then there is of course the question of how one is supposed to help people if one actually wants to prioritize this – through taxes or through individual actions. That question is too complex to discuss here and now.)

The (Un)Importance of Intentions in Ethics

It is sometimes discussed whether people’s intentions are relevant when it comes to comparing seemingly identical consequences. Many people seem to think intentions and motives are highly relevant, for instance when it comes to assessing collateral damage in warfare; Hitler killing 100,000 civilians is simply not the same as Roosevelt or Churchill killing 100,000 civilians.

As a consequentialist, however, I find it hard to attribute any intrinsic value to intentions. And if we contemplate the morality of an action that is, so to speak, a one-shot activity, intentions become basically irrelevant. We might, for instance, imagine someone who, out of temporary desperation, attempts a burglary and tries (but fails) to pick a lock, which wakes up the person in the apartment, who would otherwise have died because of an undetected gas leak. The consequences of this action would be the same as if the mailman had pushed mail through the mail slot loudly, waking up the person inside. The mailman acted on perfectly legitimate intentions, while the burglar, presumably, did not. But if we assume that the burglar got so nervous from this attempt that he decided never to try anything illegal again, then the intentions seem rather irrelevant when it comes to assessing the consequences. The failed burglar never actually does anything bad (or at least nothing illegal) in his whole life, and he saves one person from dying; and we could say exactly the same thing about the mailman.

In reality, however, intentions are usually valuable when it comes to predicting the future behavior of a person. After all, a burglar will usually commit more than one burglary in his life (perhaps we wouldn’t call him a “burglar” if he did it only once…). If we fail to blame someone who by mere chance does something good while intending to do harm, we increase the chance that she (or someone else) will try to do something harmful in the future. By the same token, blaming someone for accidentally doing harm when aiming to do good would perhaps make people reluctant to attempt to do good in the future. Intentions are, in other words, almost always important because of their connection to actual consequences.

Intentions in themselves, however, should be irrelevant. We could even imagine cases where it would be good to have what are normally regarded as wicked intentions, i.e., where bad intentions are actively utilized to produce good consequences (as opposed to the gas leak example above, where the good consequences were accidental). For instance, there are rare cases when it would be justified to torture someone in order to save lives, and in such cases one may have to employ a torturer who doesn’t care about saving those lives, but who enjoys torturing people. This would be someone who does (overall) good for what is ordinarily regarded as the wrong reasons, but we would not blame him for it.

One can also imagine someone who has the best of intentions but mostly does harm instead. A general who works for the UN to stop genocides and other gross human rights violations, but who uses his troops and resources in a thoroughly incompetent way, leading to many unnecessary deaths, should probably be blamed for his actions, even though his intentions were extremely humanitarian. Good intentions cannot, in other words, always trump bad consequences, just as bad intentions cannot always trump good consequences.

The Meaning of Anti-Traditionalism

Yesterday was fettisdagen – translatable as Shrove Tuesday – in Sweden. This day is traditionally the first day on which one is allowed to eat a semla (plural: semlor), although nowadays there is, of course, no official ban on eating one before Shrove Tuesday (going further back, fettisdagen was the only day on which you could eat semlor). Even though semlor have been available in grocery stores for several weeks before this day, some people still prefer to wait until Shrove Tuesday before they eat one.

This highlights a broader question about traditions. Is it really rational to honor a tradition if there is no reason to do so other than the fact that it is a tradition? Why, in other words, wait until fettisdagen to eat a semla when there is no rational reason to wait? If you answer no to the first question, you can probably be classified as an anti-traditionalist.

Personally, I am an anti-traditionalist in that sense. I see no reason to honor traditions that have no reason behind them. This does not mean that the reason must be very elaborate and foolproof. There might not be any “rational” reason why we should celebrate birthdays, but one might suggest – rightly or wrongly – that it is valuable to have some day on which every person deserves some positive attention. It does not have to be the birthday, but since this tradition is established we might as well continue to honor it, rather than change it to some other day.

But there are other traditions where it is hard to find a good reason for keeping them going. Circumcision among atheist Jews might be one example. If one does not believe that one must, literally, do it for God’s sake, then some other reason for keeping the tradition going is needed. Now some might claim that it is done for medical reasons, but I rarely hear Jews invoke that reason (and it would be a strange coincidence if peoples who have traditionally practiced circumcision uniformly praised the medical effects, while others uniformly did not). The most common reason seems to be that it is a tradition among their people. The reason for continuing the tradition is, in other words, just the tradition itself.

Now imagine if I were to start a chess club. I would say to the potential members that to be a member of this club one must not only be interested in chess, but one must also chop off one’s left little finger. If people were to ask the reason for this, I would say because this is one of the two distinguishing marks of our association; we are the people who like chess and are missing one little finger, and that’s that. But I assume potential members would ask what the point is of chopping off the little finger. Chess in itself is, at least for potential members of this club, enjoyable or interesting, so it seems like a worthy cause to gather around. But chopping off little fingers only seems arbitrary and meaningless. The club would lose nothing of its substantial enjoyment value if the finger-chopping practice were discontinued.

Why does the traditionalist, then, want to uphold seemingly meaningless or arbitrary traditions? There are probably two main reasons. The first is that they believe that there are reasons behind most “meaningless” traditions that we cannot see. Traditions evolve for some reason, and to presume that we can discern the exact reasons behind this evolution is a sort of intellectual or rationalist hubris. The second reason is that they believe that having distinct groups in society is valuable and that it is good (or even necessary) for people to belong to groups. Exactly what traditions or distinguishing traits these groups have is of less importance than the fact that one must belong to some group in order to be a fully functioning human being.

The evolution thesis is not altogether unreasonable. If there is no obvious harm in keeping a tradition then perhaps we should not question it too hard. But when there is obvious harm in a tradition (for instance the harm of waiting for the semla or the harm of not having a foreskin or a clitoris), these harms should be weighed against the benefits of the tradition; and if no obvious benefits can be found then the tradition should be discontinued. What counts as benefits or harms is, of course, a topic for further dispute, but it is better that these disputes are had in the open rather than silenced because of the shibboleth of tradition.

When it comes to the second reason, it seems to be mostly a question for social psychologists. All I can say is that the view that people need to belong to (arbitrary) groups in that way seems very pessimistic about common people’s rational faculties and ability to embrace a truly individualist lifestyle. Perhaps this pessimism is warranted, but I find it hard to embrace it, since I am such a “rational” person myself. But that only means that you should not take me for an authority when it comes to social psychology.