The Meaning of Anti-Traditionalism

Yesterday was fettisdagen – roughly translatable as Shrove Tuesday – in Sweden. This day is traditionally the first day on which it is allowed to eat a semla (plural: semlor), although nowadays there is, of course, no official ban on eating one before Shrove Tuesday (going further back, fettisdagen was the only day on which you could eat semlor). Even though semlor have been available in grocery stores for several weeks before this day, some people still prefer to wait until Shrove Tuesday before they eat one.

This highlights a broader question about traditions. Is it really rational to honor a tradition if there is no reason to do so other than the fact that it is a tradition? Why, in other words, wait until fettisdagen to eat a semla when there is no rational reason to wait? If you answer no to the first question, you can probably be classified as an anti-traditionalist.

Personally, I am an anti-traditionalist in that sense. I see no reason to honor traditions that have no reason behind them. This does not mean that the reason must be very elaborate and foolproof. There might not be any “rational” reason why we should celebrate birthdays, but one might suggest – rightly or wrongly – that it is valuable to have some day on which every person deserves some positive attention. It does not have to be the birthday, but since this tradition is established we might as well continue to honor it, rather than change it to some other day.

But there are other traditions where it is hard to find a good reason for keeping them going. Circumcision among atheist Jews might be one example. If one does not believe that one must, literally, do it for God’s sake, then some other reason for keeping the tradition going is needed. Now some might claim that it is done for medical reasons, but I rarely hear Jews invoke that reason (and it would be a strange coincidence if the peoples who have traditionally practiced circumcision uniformly praised the medical effects, while others uniformly did not). The most common reason seems to be that it is a tradition among their people. The reason for continuing the tradition is, in other words, just the tradition itself.

Now imagine if I were to start a chess club. I would say to the potential members that to be a member of this club one must not only be interested in chess, but also chop off one’s left little finger. If people were to ask the reason for this, I would say that this is one of the two distinguishing marks of our association: we are the people who like chess and are missing one little finger, and that’s that. But I assume potential members would ask what the point is of chopping off the little finger. Chess in itself is, at least for potential members of this club, enjoyable or interesting, so it seems like a worthy cause to gather around. But chopping off little fingers seems merely arbitrary and meaningless. The club would lose nothing of its substantial enjoyment value if the finger-chopping practice were discontinued.

Why does the traditionalist, then, want to uphold seemingly meaningless or arbitrary traditions? There are probably two main reasons. The first is that they believe that there are reasons behind most “meaningless” traditions that we cannot see. Traditions evolve for some reason, and to presume that we can discern the exact reasons behind this evolution is a sort of intellectual or rationalist hubris. The second reason is that they believe that having distinct groups in society is valuable and that it is good (or even necessary) for people to belong to groups. Exactly what traditions or distinguishing traits these groups have is of less importance than the fact that one must belong to some group in order to be a fully functioning human being.

The evolution thesis is not altogether unreasonable. If there is no obvious harm in keeping a tradition then perhaps we should not question it too hard. But when there is obvious harm in a tradition (for instance the harm of waiting for the semla or the harm of not having a foreskin or a clitoris), these harms should be weighed against the benefits of the tradition; and if no obvious benefits can be found then the tradition should be discontinued. What counts as benefits or harms is, of course, a topic for further dispute, but it is better that these disputes are had in the open rather than silenced because of the shibboleth of tradition.

When it comes to the second reason, it seems to be mostly a question for social psychologists. All I can say is that the view that people need to belong to (arbitrary) groups in that way seems very pessimistic about common people’s rational faculties and ability to embrace a truly individualist lifestyle. Perhaps this pessimism is warranted, but I find it hard to embrace it, since I am such a “rational” person myself. But that only means that you should not take me for an authority when it comes to social psychology.

Searching for the Best Referencing System

All scientific or scholarly works contain references (perhaps this is true by definition). As you probably know, there are different referencing systems to choose from, and different journals and book publishers use different systems. University students are often free to choose which system to use as long as they use it correctly and consistently, and people seldom argue about which systems are better than others. I maintain, however, that some systems are actually better than others. Here I will discuss the pros and cons of the systems I mostly come across (mainly in philosophy and the social sciences), going from worst to best.

Endnotes with references at the end of the chapter: This is the most annoying system, since you have to find the end of the chapter before you can look up the reference, and then you have to keep, for instance, a finger at that place while you’re reading if you want to look up more references. It is even more annoying when you’re reading electronic books or articles (unless they contain hyperlinks to the references).

Endnotes with references at the end of the book (or article): Quite annoying, but not as annoying as the above-mentioned system. It is generally less cumbersome to go back to the end of the book than to the end of the chapter. Personally I think endnotes should be abolished altogether, at least for works directed at scholars and researchers (who I assume are actually interested in the references).

Footnote, Oxford style: The reference is found at the bottom of the page. The first time the work is mentioned, everything about the reference is written out: author, title, year, publisher. This is helpful if you immediately want to know what work is being referenced. However, the next time the work is referenced only “op. cit.” is written out (even if it has been a hundred pages since the last reference), which means that you either have to remember what work was being referenced or go back and find the first reference to the work in question. It is a workable system if you have very few references (or many references mentioned only once), but otherwise it is quite annoying.

Footnote, Harvard style: Under the Harvard style of referencing, only the last name of the author, the publication year and the page number are written out (like this: Smith 1975: 45). To get full information about the reference (most importantly the title of the book or article) one must go to the reference list at the back of the book (or sometimes the end of the chapter). The advantage of this system is that you immediately get the name of the author being referenced (which you do not get with endnotes). The main disadvantage is that even if you know the author, you still don’t know exactly which work is referenced, and a specialized scholar often wants to know that.

Parenthesis, Harvard style: In the Harvard system the references can be put in a footnote or in a parenthesis directly in the text (the latter can probably be called the more “pure” Harvard system). Which of the two one prefers is largely a matter of taste or aesthetics. Personally I think parentheses are better, but some people think they disturb the flow of the text. Moreover, I often find many short footnotes at the bottom of the page aesthetically displeasing, while others do not care about that at all. An additional advantage of the parenthesis style is that your eyes don’t have to wander down to the bottom of the page and then up again. Your reading flow can be maintained without interruption (unless, again, you believe parentheses disturb your flow even more).

Footnote with name of work: This is similar to Harvard-style footnotes (I don’t think it would work well with parentheses), but instead of the publication year you write out the name of the work (probably omitting any subtitle). This is the system I prefer, although I seldom use it, because no academic journal uses it (at least no journal I can think of right now). I really can’t see any disadvantages with this system: you immediately know what work is being referenced – i.e., you don’t have to go back and forth in the book. You only have to go to the actual reference list if you want to retrieve the referenced work for yourself and need, for instance, the name and volume of the journal, or if you have a very specialized interest regarding, for instance, what edition or translation is being used.

In conclusion, people often think that it’s no big deal which referencing system you use, but my contention is that the choice makes a real difference for the reader’s experience. You should not annoy your readers more than you have to.

Ceterum censeo that block quotes should be abolished.

The Non-Marxist Case against Capitalism

Let us define capitalism as a system where most of the means of production are privately owned, where the freedom to use those means as one wants is relatively large and where the levels of taxes and state economic redistribution are relatively low. As we all know this system has always been criticized from the left, often from a Marxist perspective, where the main point of accusation is that workers are exploited, i.e., they don’t get the full value of their contribution to production.

Nowadays, however, the left seems to have a hard time arguing against capitalism, and bringing up Marx (or other, more obscure theoreticians) surely does not help. There is no mass movement among workers in developed countries today that opposes capitalism, and it is difficult to rally them under the banner of “socialism” when the examples of the Soviet Union, Cuba and other communist states are thrown back in one’s face.

In light of this I have to ask myself: why not bring up a simple moral – and explicitly non-Marxist – case against capitalism, a case that virtually anyone can understand in five minutes? Namely: if capitalists did their moral duty, i.e., used their money to help other people, there would be no problem with capitalism. But the fact that a few superrich people own most of the wealth in the world tells us that they are not doing their moral duty. To be substantially richer than one’s fellow human beings is in itself a moral failing, and the quickest way to remedy this is simply to tax them and spread the wealth around (and in some cases socialize the means of production).

Some might protest: if rich people are taxed heavily, then people won’t go into business at all. That, however, simply reinforces the point that they are failing morally, being egoists who are only ready to exert themselves if doing so can make them a lot richer than everybody else. A moral person should be willing to work for the benefit of other people (including people one will never meet). The question is, then, how the state should deal with people whose morals differ radically from those of the majority; but that is a question that must be handled by all social systems. Admitting that one may have to cater to a few business-savvy egoists is not to concede that capitalism is good.

Personally, though, I don’t think anything disastrous would happen if, for instance, top marginal tax rates were raised to levels that were common a few decades ago (for instance, above 90% in the US during both Republican and Democratic rule). People will still create new businesses (and we should encourage people to do so cooperatively) if it allows them to live somewhat more lavishly than the “ordinary” worker – but still very modestly compared to how CEOs of large companies live now. Perhaps a first step would be to go back to the time when the CEO made “only” 20 times more than the worker (as in the 1960s in the US), rather than 300 times more, as today.

Adaptive Preferences and the Happy Pauper

The two main forms of utilitarianism are preference utilitarianism and hedonistic utilitarianism (but there are other forms). According to hedonism, pleasure should be maximized, while according to preferentialism (as I will call it here) it is the satisfaction of preferences (or “utility”) that should be maximized, i.e., people should get what they wish for to the highest degree. The “crudest” form of preferentialism simply takes people’s actual wishes as given and asks how they can be satisfied, while other versions qualify the theory by, for instance, demanding that the preferences must be “rational” or the like in order to count. An “economistic” version simply assumes that people want as much income and/or resources as possible.

A problem for preferentialism – especially in its cruder versions – is adaptive preferences. People who claim to have certain wishes or desires may have adapted them to the fact that there is not much in life they can really get. Or they may have adapted them to certain social expectations placed on them. For instance, a poor and uneducated woman might claim that she really doesn’t want much for herself and that she doesn’t deserve as much as her husband, because a life without patriarchal structures is almost unthinkable to her. Should we, then, say that the way she “chooses” to live reflects her true preferences?

Here we can easily see the difference between hedonism and preferentialism. People may, and often do, want things that will not make them happier, and sometimes they even want things that will make them unhappier (of course, a hedonist could also want such things, but then the reason would be to increase happiness for someone else). As a hedonist I cannot see the good in simply giving people what they want if there is no connection to actual pleasure and pain. Now it is often the case that people – especially people who have few choices in life – want things that will actually make them happier, but we all know from experience that there are many exceptions to this rule.

Serene Khader discusses adaptive preferences in her book Adaptive Preferences and Women’s Empowerment. Like myself, she rejects preferentialism but does not endorse hedonism. Instead she proposes another consequentialist theory, which aims to maximize “basic flourishing”. Certainly, this theory has some affinities with hedonism, since hedonists (especially in a political context) are also interested in providing people with the things that Khader includes in basic flourishing: basically the things that poor people (and especially poor women) lack in order to achieve basic well-being (the book is mainly about development in poor countries).

Why, then, does Khader reject hedonism? Although the criticism of preferentialism is thorough, the criticism of hedonism is handled in a few lines. She admits that a hedonist can view a preference for, e.g., staying malnourished as an adaptive preference, since it is more pleasurable to be nourished than malnourished, but she claims that “many preferences we intuitively classify as adaptive may not produce psychological suffering” (p. 50). Khader’s examples of these intuitive cases are not, however, very enlightening. The only actual example against hedonism (as opposed to the many examples against preferentialism) is a poor and oppressed (through discrimination) worker who “gains immense subjective pleasure from the small mercies in her life” (ibid.).

Now I would say that if we can imagine such an unlikely person, who is immensely happy despite being poor and oppressed – and if we have good reasons to think that this person would be unhappier if her social and economic conditions improved – then we do not have any obligation to spend resources to help that person. But, again, such people are probably so rare that we do not really have to take account of them in political discussions. If people are living in wretched conditions but still claim to be as happy as they could be, we can simply assume that they are victims of adaptive preferences, unless we get clear evidence in individual cases that this is not so.

Problems with Happiness Research

Personally I believe that, as a hedonist, one should use happiness research with caution. For one thing, happiness research usually does not measure pleasure but rather self-reported life satisfaction. Of course, one can assume that the one thing has something to do with the other, but it would be too simple to make them out to be basically the same thing. Therefore I believe happiness research can be used to some degree when, for instance, arguing for certain policies, but one must also use other tools, such as introspection and anecdotal observation. Hedonistic politics can never be based on scientific precision (is there any kind of applied ethics that can be?); and that – by the way – is one reason why it is so important that policies are decided democratically.

A paper that discusses problems with self-reported happiness appeared in the last 2016 issue of the Journal of Happiness Studies. The authors, Ponocny, Weismayer, Stross, and Dressler, observe that there is something strange about the fact that most research seems to report that most people are mostly happy, i.e., generally very satisfied with their lives. The problem with that kind of research (which is often used by those who want happiness levels to be given more weight alongside traditional policy measures, such as GDP) is that it is usually left to the respondents themselves to decide what, for instance, life satisfaction means. Some studies have shown that people tend to interpret this in ways that weaken the correlation between reports of subjective life satisfaction and actual happiness (or the average pleasure level of one’s life). For example, people tend to downgrade the importance of negative experiences (even though those experiences felt very negative when they occurred), and their assessments might also be distorted by social context (what does it mean to be “happy” in your culture?).

The authors of the article “Are Most People Happy? Exploring the Meaning of Subjective Well-Being Ratings” base their study on 500 interviews in Austria (random and snowball sampling), reaching approximate representativeness of the Austrian population regarding age and education, but not gender (62.1% were female). Respondents were interviewed by psychology graduates about good and bad things in their lives. After the interview they rated their life satisfaction on a typical 0–10 scale. Thus, the researchers had accounts of “narrated well-being” (NWB) that they could compare with standard self-reported life satisfaction.

The result of this research is that the usually very high assessments of life satisfaction are not strongly correlated with the NWB accounts; i.e., the match between the self-rating and the “external” rating (based on NWB) is not very good. One interesting finding is that “people who express only small emotions or who are categorized as ‘small emotions or close-lipped’ have a strong tendency to rate themselves as ’10,’ which gives rise to the suspicion that, for some respondents, positive self-rating might express defensive response behavior rather than true bliss”. In many cases rather negative NWB ratings are combined with very positive self-ratings, while there are only rare cases where it is the other way around.

A concrete example of the disparity between NWB and self-rating is a woman who rates her life satisfaction at 10 but still complains a lot about how stressful her life is, with a professional career and children to raise, even claiming that sometimes she just can’t take it anymore. Another example, with a self-rating of 9, “reveals verbal compliance to obviously burdensome circumstances”. Regarding a question about restrictions in life she says: “Time pressure, I repeat myself […] It will get better. And still everything works. It does not knock me out. It is ok as it is. I just hope I continue to have the strength and health to keep going like that. And, all in all, it is fine. It is fine.”

The authors observe that “[e]ven respondents with self-ratings of ’10’ often report substantial psychological burden, including financial restrictions, health problems, unemployment, alcoholism, discrimination, death or life-threatening diseases of close relatives, and sadness.” Thus, even though some people “have essential restrictions of their hedonic status (as narrated by themselves)”, they still assess their lives very positively.

Thus, it seems obvious that happiness research cannot be relied on completely when, for instance, arguing for or against certain policies. As a hedonistic utilitarian one must make sure that we are talking about people’s actual experiences of pleasure and pain, and not only their self-reports of life satisfaction and the like. Of course, sometimes self-reported life satisfaction statistics are all we have, and then we must accord them some relevance. On the other hand, there are contexts in which we can’t trust such figures at all, for instance when there are strong social pressures to appear satisfied with one’s life in spite of an obvious lack of hedonic pleasures.

I think one of the main problems today is that many people are simply reluctant to take a break from their ordinary lives and consider how their lives could be changed to be more pleasurable. They simply continue living in the way that is accepted as normal in their social environment. Often it is also probably the case that one is reluctant to appear ungrateful, for instance if one is living off the crumbs from other people’s tables. If you start complaining too heavily then the supply of crumbs might stop.

Moral Obligations to Future Generations

The question of moral obligations to future generations is sometimes approached in a backwards manner. In those instances it is simply assumed that we must have some moral obligations to future generations, and if a moral theory leads us to think that we do not, then there must be something wrong with the theory in question. This is, as I said, a backwards way of doing ethics, since the question about future generations is all about application. To apply a theory we must first know what theory we endorse, not the other way around.

Now some people reject utilitarianism since – they claim – utilitarianism cannot account for our moral obligations to future generations. I think that is a bad reason to reject utilitarianism. If one wants to reject it, one should argue against the theory itself and not make it stand or fall with one particular application. But destroying utilitarianism as a theory does not, of course, establish that we do have duties to future generations, because such duties would have to be defended as an application of a different theory, not simply assumed.

But is it actually true that utilitarianism does not entail obligations to future generations? My answer would be no, at least not when it comes to the type of hedonistic utilitarianism that I endorse – although, as we will see, the obligations take an indirect form. It is true that this kind of hedonism sees no value in maximizing the pleasure of people not yet born. I believe in maximizing the pleasure of people who live right now. This is, thus, a kind of average utilitarianism, and it differs from “total” hedonism in that we are not required to make more and more babies just to raise the sum of happiness (provided that the new babies will have lives that are more on the pleasure than on the pain side of the spectrum – lives “barely worth living”, as some would say). By the same token we do not have to care about the potential pleasures of those unborn people.
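
A schematic way to state the difference (my own illustrative formalization, not drawn from any particular source): let $u_i$ be the lifetime pleasure of person $i$ and let $n$ be the number of people who actually live. Total hedonism ranks outcomes by the sum

$$U_{\text{total}} = \sum_{i=1}^{n} u_i,$$

while the average view ranks them by

$$U_{\text{avg}} = \frac{1}{n} \sum_{i=1}^{n} u_i.$$

Adding a new person with a small but positive $u_i$ (a life “barely worth living”) always raises $U_{\text{total}}$, but it lowers $U_{\text{avg}}$ whenever that $u_i$ falls below the existing average – which is why the average view does not tell us to keep making more and more babies.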

In theory, the average view might advise us simply to use up all the world’s resources to enhance the pleasure of the people living right now. And if this were the last generation living on earth, that would surely be the right conclusion. But as soon as a new baby is born, its pleasure must matter to us. If we can expect that this baby will live to the age of eighty, then we must at least make sure that her or his quality of life does not diminish sharply when our own generation is gone. Right, you might say, but do we only have to care about one generation after us? I would reply that this is no small thing. Since new babies are born all the time, the horizon of obligation keeps moving forward at the same rate. Whatever point in time we pick, it seems that we must always make sure that the world will be a hospitable place to live 80–90 years from now.

So while it is wrong to claim that hedonism implies obligations to future generations (if by this we mean unborn generations), it still implies an obligation to provide a good future for newly born generations – which, as argued above, is a constantly moving target. And it does imply robust protection of our environment, at least until we all perceive that the planet is doomed and we deliberately stop procreating (or move to another planet). This should, however, not be a decisive reason for an environmentalist to endorse hedonism. As I see it, environmentalism is applied ethics (unless we are talking about “deep ecology” and the like), which means that one should first decide which ethical theory is most reasonable, and then let the theory lead one to whichever conclusion about the environment the original premises warrant. It seems kind of intellectually dishonest to start with the conclusion and then construct the premises.

Philosophical Hubris and Meta-Aggression

It is perfectly natural that people disagree about philosophical issues. We all know that it is virtually impossible to reach total agreement when it comes to, for instance, moral questions. And I, for one, believe that there is no “truth” about ethical questions anyway (a position known as non-cognitivism). This non-agreement and pluralism of values is something that must be accepted as a fundamental fact, especially in modern societies. These conflicts must, thus, be solved by some procedure which produces winners and losers in the struggle of values (this is the case in every social context, regardless of whether a state exists or not). I think the fairest procedure is a majority vote, since if there is no objective truth about which values are correct, what other reasonable way could there be to determine collective decisions?

Nevertheless, some people would not accept this. They believe that some values are objectively better than others, so it would be unfair if every value were able to compete on equal terms in a majority decision. This attitude I have named meta-aggression (for the article reference, see the Author page on this blog). It is fairly well established, at least in libertarian circles, what “aggression” is: to invade someone’s personal sphere by physically harming them or taking their possessions. The standard form of libertarianism is all about rejecting aggression (unless it is performed as retaliation for previous aggression).

This non-aggression principle is, however, only one of several possible values. Some people believe that aggression (as defined by libertarians) is sometimes justified, for instance taxing people for purposes to which they have not consented. Now those who endorse the non-aggression principle would claim that the second group is engaging in unjustified aggression if they proceed with the taxation. The second group may, however, claim that the libertarians are unjustified in resisting their aggression. It is this kind of resistance – if it is defended on the (false) grounds that the non-aggression principle is “true”, while the opponents’ principles are “false” – that I would like to call meta-aggression.

It can also be called a sort of philosophical hubris, i.e., using a controversial (and, to my mind, false) metaethical position to claim some kind of political privilege. Of course, this kind of hubris is often expressed in less sophisticated ways than through explicit metaethical argument. One of the most insidious ways is to argue as if there were some self-evident “default” position in ethics; the burden of proof is then on those who want to argue for something other than this default position. Another way in which this hubris can be expressed is through ridicule or scorn (rather than dispassionate argumentation) directed at alternative positions. A third way is the silent treatment, i.e., simply not discussing other ethical positions, because they are presumed not worth mentioning.

Of course, this kind of hubris is not always something dangerous or especially deplorable. But when it is used in the political arena (and turned into meta-aggression), it becomes dangerous and deplorable, because it is often directed against the political procedure that I find the fairest, namely majority rule.