The (Un)Importance of Intentions in Ethics

It is sometimes debated whether people’s intentions are relevant when it comes to comparing seemingly identical consequences. Many people seem to think intentions and motives are highly relevant, for instance when it comes to assessing collateral damage in warfare; Hitler killing 100,000 civilians is simply not the same as Roosevelt or Churchill killing 100,000 civilians.

For a consequentialist it is, however, hard to attribute any intrinsic value to intentions. And if we contemplate the morality of an action that is, so to speak, a one-shot activity, intentions become basically irrelevant. We might, for instance, imagine someone who out of temporary desperation attempts a burglary and tries (but fails) to pick a lock, which wakes up the person in the apartment, who would otherwise have died because of an undetected gas leak. The consequences of this action would be the same as if the mailman had put mail through the mail slot in a loud fashion, waking up the person inside. The mailman acted on perfectly legitimate intentions, while the burglar, presumably, did not. But if we assume that the burglar got so nervous from this attempt that he decided never to try anything illegal again, then the intentions seem rather irrelevant when it comes to assessing the consequences. The failed burglar never actually does anything bad (or at least nothing illegal) in his whole life, and he saves one person from dying; and we could say the exact same thing about the mailman.

In reality, however, intentions are usually valuable when it comes to predicting the future behavior of a person. After all, a burglar will usually commit more than one burglary in his life (perhaps we wouldn’t call him a “burglar” if he only did it once…). If we fail to blame someone who by mere chance does something good while intending to do harm, we increase the chance that she (or someone else) will try to do something harmful in the future. By the same token, blaming someone for accidentally doing harm when aiming to do good would perhaps make people reluctant to attempt to do good in the future. Intentions are, in other words, almost always important because of their connection to actual consequences.

Intentions in themselves, however, should be irrelevant. We could even imagine cases where it would be good to have what is normally regarded as wicked intentions, i.e., where bad intentions are actively utilized to get good consequences (as opposed to the gas leak example above, where the good consequences were accidental). For instance, there are rare cases when it would be justified to torture someone in order to save lives, and in such cases one may have to employ a torturer who doesn’t care about saving those lives, but who enjoys torturing people. This would be someone who does (overall) good for what is ordinarily regarded as the wrong reasons, but we would not blame him for it.

One can also imagine someone who has the best of intentions but mostly does harm instead. A general who works for the UN to stop genocides and other gross human rights violations, but who uses his troops and resources in a thoroughly incompetent way, leading to many unnecessary deaths, should probably be blamed for his actions, even though his intentions were extremely humanitarian. Good intentions cannot, in other words, always trump bad consequences, just as bad intentions cannot always trump good consequences.

An Attempt to Define Feminism

Feminism is rarely defined precisely. Even introductory works on feminism are usually reluctant to provide any definition that one can immediately assess in terms of assent or dissent. This is a shame, since so many people nowadays seem eager to reject feminism without saying exactly what they are rejecting. And there are also those who claim to be feminists themselves (often “liberal” feminists), but who reject other kinds of feminism as wrongheaded. However, some within the latter group should perhaps not call themselves feminists at all if the term feminism is to have a relevant meaning.

The definition I would like to propose is the following: A feminist believes that women are currently unequal to men in one or more normatively relevant ways (and this inequality should be remedied). The assertion put in parentheses can probably be removed in most cases, since it can be tacitly assumed that if one believes a normatively relevant inequality (as opposed to a normatively irrelevant one) exists, then it should, by virtue of moral “logic”, be remedied (at least if that is practically possible).
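For those who like it compact, the definition can also be put semi-formally. The notation below is mine and purely illustrative; nothing in the argument hangs on it:

\[
\mathrm{Feminist}(x) \;\equiv\; \mathrm{Bel}_x\,\exists d\,\bigl[\,\mathrm{NormRelevant}(d) \wedge \mathrm{UnequalNow}(\mathit{women},\mathit{men},d)\,\bigr]
\]

The parenthetical clause then corresponds to the further belief $\mathrm{Bel}_x\,\mathrm{Ought}(\mathrm{remedy}(d))$, which, as just noted, follows more or less automatically once $d$ is granted to be normatively relevant.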

So, the definition talks about women being currently unequal to men. This would perhaps exclude people who believe that feminism is (only) about granting women and men equal political and civil rights. At least in many countries, this struggle seems to have been fought and won some time ago (though in some countries the fight still goes on). In a “Western” setting this kind of feminism – if we are to call it feminism at all – seems rather irrelevant, and when feminists of this kind call themselves feminists they should be careful to always label themselves liberal feminists, as well as to acknowledge that there are other legitimate forms of feminism.

Even if “liberal feminism” does not necessarily have to be excluded from being a kind of feminism, there are other kinds of merely “hypothetical” feminisms that should be excluded from the label. One could, for instance, imagine a sort of “libertarian feminist” who believes that if a society were to exist where men’s rights of self-ownership are respected while women’s are not, then one should be a feminist and remedy this inequality in rights. If we allow for these kinds of hypothetical inequalities, then feminism as a concept would lose all interesting meaning (since almost anyone could be a feminist under some circumstances).

In any case, I believe the proviso about current inequalities adds something important to a meaningful definition of feminism, since feminism is usually perceived as an “ideology” that wants to rectify injustices that actually exist right now. In the same way, we would hardly call someone an “egalitarian” in any meaningful sense today merely for being against slavery, even though he or she is definitely against the kind of inequality that slavery entails.

The other part of the definition concerns normatively relevant inequalities. This captures the fact that there are inequalities we all regard as irrelevant when it comes to moral evaluation, or to assessing, for instance, economic outcomes. But there are, of course, different ideas about exactly what should be regarded as normatively relevant. This is where distinctions between different sorts of feminisms come in.

The distinction between normatively relevant and irrelevant inequalities also makes it easy to pinpoint exactly why some people end up on one side of a debate rather than the other. The debate about the gender wage gap, for instance, seems to be all about relevant and irrelevant inequalities. Both sides can agree on the fact that women’s lifetime earnings are on average significantly lower than men’s; but the non-feminist might claim that this inequality is morally irrelevant, since it reflects women’s life choices (i.e., inequalities based on choices that were not physically forced upon you are not morally, or politically, problematic), whereas the feminist might claim that this is not only a result of choices but also of discrimination, or that all choices are to some degree a result of different forms of indoctrination, which might be normatively relevant when assessing outcomes.

The definition proposed here facilitates the categorization of different kinds of feminism, since most moral and political philosophies can be distinguished in terms of (in)equalities. As mentioned above, a liberal feminist would only see inequalities in political and civil rights as normatively relevant. A socialist feminist would perhaps see inequalities in possibilities to lead an unalienated life free from economic exploitation as normatively relevant. A utilitarian feminist would see inequalities in consideration when calculating levels of happiness and unhappiness as normatively relevant. A virtue feminist would see inequalities in possibilities to exercise moral virtues as normatively relevant (and these virtues might be different for men and women). An existentialist feminist would see inequalities in possibilities to lead an authentic life as normatively relevant. (Needless to say, these are just a few examples, not an exhaustive list.)

Again, the definition is that a feminist believes that women are currently unequal to men in one or more normatively relevant ways (and this inequality should be remedied). An advantage of this definition is that it dispenses with controversial concepts like “patriarchy” and “oppression”. Instead it moves the discussion to a more basic normative level, where the differences between viewpoints become clearer and easier to understand. The definition also clearly separates questions about facts, which we can (hopefully) agree on, from questions about values (i.e., whether the facts in question are normatively relevant or not). It is better that people fight about whether some existing inequality is good or bad than about whether the inequality in fact exists. (And it is better that they fight about the goodness or badness of currently existing inequalities, rather than those that have already been remedied.)

The definition also has the advantage that it separates the question of methods from the normative (foundational) question. One might, for example, be a “socialist” feminist because one believes a socialist system will equalize the possibilities to lead authentic lives. In this case, it is more helpful to classify this person as an existentialist feminist rather than a socialist feminist, since socialism is just a method (and whether it is the best method to reach the existentialist utopia can be discussed regardless of whether one agrees with either socialism or existentialist feminism). In the same way, one could be a utilitarian feminist who believes that liberalism will lead to an equal consideration of women and men in the felicific calculus – but one would not become a liberal feminist because of this.

The Standing of Utilitarianism

[This is a translation of a post previously published on my Swedish blog, April 2016]

Recently I have been trying to assess the standing of utilitarianism in normative research and in ethical discourse in society in general. This is not a very easy task. I am myself a political scientist (“political theorist”) and thus not connected to the same institutional framework as the philosophers, even though the research I do and the research philosophers do very often overlap.

Within the subdiscipline of political theory it seems fairly obvious that utilitarianism is as good as dead, and that it has been in that state for a long time. Many textbooks in political theory discuss utilitarianism very briefly and repeat the same objections to it that have been raised for decades (and which utilitarians can refute quite easily). When one turns to the most common academic journals in political theory, it is also easy to notice how rarely anyone argues from a utilitarian perspective.

Thus, when it comes to political theory I believe I can plausibly conclude that utilitarianism is rather unpopular. When it comes to fields like “pure” moral philosophy, applied ethics, etc., the conclusion becomes less clear. Some (qualified) people seem to think that utilitarianism still has a strong position in these disciplines (see, e.g., “utilitarianism” in the Routledge Encyclopedia of Philosophy). Moreover, I recently read a book (by Anne Maclean) which claimed – and lamented – that bioethics (i.e., ethical research concerning moral choices in light of modern technology, medicine, etc.) is virtually dominated by utilitarianism (or at least that this was the case in the 1990s, when Maclean’s book appeared).

Now, if it is the case that utilitarianism still has a strong standing within moral philosophy (unlike in political theory), then one has to say that professional philosophers have done a bad job of spreading this doctrine to people outside academia. My impression is that when a utilitarian philosopher gets the chance to speak in mainstream media (in Sweden it is usually Torbjörn Tännsjö, internationally it is often Peter Singer), they seem to encounter at least as much criticism as assent.

Perhaps it is really the case that although utilitarianism has, during certain periods, been somewhat popular among normative researchers, it has never been very popular among ordinary people. But if this is true, the reason is probably not that people have found another moral theory that is more coherent than utilitarianism; it is, rather, that people in general do not care very much about endorsing a coherent or “logically” satisfying moral theory. They mix elements from utilitarianism (most people seem to care about consequences at least in some cases) with other ideas as they see fit. And this way of “philosophizing” has spread to academia. Academics who reject utilitarianism often seem to do so not because they have found a theory that better satisfies the demands for argumentative stringency one should be able to make of a “scientist”.

Anyway, it is a shame that the rejection of the moral theory that I believe has the fewest theoretical problems – hedonistic utilitarianism – is often based on rather loose objections: counterarguments which are quite easy to respond to, and (more importantly) which are made from moral perspectives that would hardly survive the same kind of scrutiny that utilitarianism is usually subjected to. Other moral theories (at least those that are in fashion at the moment) receive, in other words, a milder treatment when it comes to the number of objections one is expected to be able to handle in order to claim that one has a defensible theory. A Rawlsian perspective, for instance, is usually considered less problematic than a utilitarian perspective, even though the writings of Rawls can be demolished quite well by most philosophy undergraduates.

Pragmatism and Ethics

Pragmatism (at least in its “classical” form), as I perceive it, is based on the claim that theories are made for the sake of human action (no other species makes theories). The pragmatist does not assume that any substantial theory is better than another, but whenever a theory is considered it should undergo what one might call the pragmatic test: the theory must have some verifiable practical consequences if it is to count as a meaningful theory. By the same token, a disagreement between two theories, the consequences of which do not differ in practice, is not a meaningful disagreement. William James – one of the main figures in pragmatism – writes: “Whenever a dispute is serious, we ought to be able to show some practical difference that must follow from one side or the other’s being right” (Pragmatism, p. 45f).

Pragmatism is, in other words, directed towards action and power, and theories appear as instruments of action or as tools to change the world (or perhaps to keep it unchanged, if that is one’s wish). In essence, it seems to be an instrumentalist philosophy of science, i.e. a theory where prediction appears to be the ultimate touchstone. Of course, we can always predict things using “hunches” or “common sense”, but as a philosophy of science pragmatism concerns theories, i.e. more generalized statements. “The pragmatist,” to quote James again, “clings to facts and concreteness, observes truth at its work in particular cases, and generalizes”. Humans have always observed that things can be ordered into “kinds”, and “when we have once directly verified our ideas about one specimen of a kind, we consider ourselves free to apply them to other specimens without verification” (Pragmatism, p. 68, 208). Theories are Denkmittel (as James sometimes says) that help us to “better foresee the course of our experiences, communicate with one another, and steer our lives by rule” (The Meaning of Truth, p. 62f).

Although pragmatism may look a lot like classic empiricism, there is the difference that on the latter view “the truth of a proposition is a function of how it originates in experience” (for instance, by sense impression), while pragmatism is only interested in what will obtain in the future. “In short, all beliefs are virtual predictions (hypotheses) about experience and, regardless of how they originate, their truth is a function of whether what they virtually predict, if true, will obtain in the future” (Robert Almeder, “A Definition of Pragmatism” [1986], p. 81).

It would appear that pragmatism does not have much to do with ethics. The only things James says about ethics in the book Pragmatism are the following: “‘The true,’ to put it very briefly, is only the expedient in the way of our thinking, just as ‘the right’ is only the expedient in the way of behaving”, plus a remark about the “sentimentalist fallacy”, i.e. “to shed tears over abstract justice and generosity, beauty, etc., and never to know these qualities when you meet them in the street, because the circumstances make them vulgar” (Pragmatism, p. 222, 229). There are, however, a few things one should be able to conclude about ethics from a pragmatist perspective.

Firstly, one cannot – on purely pragmatist grounds – assume or exclude any substantial normative propositions a priori, since pragmatism is a method for sorting out workable and unworkable theories – it is not itself a substantial theory about the world or about how we should behave (although it may exclude some substantial theories as unworkable, as discussed below).

Secondly, pragmatism is, again, concerned with theories, so a pragmatist ethical theory cannot be about singular intuitions or the like. Pragmatist ethics must involve some general propositions.

Thirdly, pragmatist ethics must be workable, in the sense that they can actually guide behavior. This excludes theories that give no guidance, too indeterminate guidance, or contradictory guidance. Probably we can also exclude theories that are too utopian, given present historical circumstances (such theories are “workable” in theory, but not in practice), although the early pragmatist (and fascist) Papini – and to a certain extent James himself – might have disagreed about that.

Fourthly, recommendations for action in particular cases must be deduced from the general theory in a clear way, since if ad hoc considerations are added in certain particular cases we do not have a workable theory. The same is, of course, true of scientific theories. If I claim that astrology is verified because I have derived successful predictions from it, I must be able to show that these predictions are actually deduced from the general astrological theory and that certain ad hoc assumptions – more compatible with traditional science – have not been mixed in along the way and are doing all the actual predictive work.

These four points seem to follow quite clearly from pragmatism. An additional fifth point could possibly be added, although it is probably more controversial, namely, that even though pragmatism does not assume any substantial starting point for ethical reasoning, it seems to be in line with the general tone of pragmatism that the starting point cannot be wholly arbitrary. There should be some connection to things that appear “significant” to human activity and life projects.

There should, in other words, be some kind of intuition about what is good for human beings (perhaps including other sentient beings) lurking in the background, rather than some arbitrary statement. A theory that begins with the fundamental principle that one should maximize the number of hats in the world (rather than, for instance, freedom, happiness or creative achievement) might be “logically” workable, but it is hard to imagine how someone could have such an intuition about goodness.

This theory of ethical reasoning seems to be in line with classical pragmatism, and I believe it is a rather sound standard by which to judge ethical theories. It is a shame that some people who call themselves pragmatists these days have other ideas about what pragmatism entails for ethics. They often assume that pragmatism is an inherently “progressive”, “radical” or “contextualizing” theory when it comes to moral and political philosophy (and the main reason for this is the damaging influence of John Dewey). Some substantial theories of that kind are probably compatible with the pragmatist outlook, but so are other kinds of theories. One has to remember – something which James often underscores – that pragmatism is a method, not a specific doctrine about anything. Just as the only important thing for a scientific theory is that it is useful, or workable, the same should be true for ethical theories. They should give us clear and non-contradictory advice on how to act – advice that is deduced from some more general (and non-arbitrary) propositions. Otherwise an ethical outlook is not workable as a theory.

How Demanding is Utilitarianism?

Utilitarianism (at least some forms of it) is often criticized for being too demanding on the individual. It seems to entail, for instance, that if you have a surplus of welfare you should compensate those who have a deficit until you are equal in welfare. In reality this, of course, often means transfers of money or other material resources, because it is usually assumed that the more money you have, the less extra welfare you get from additional money, while those who have very little money gain more welfare for every unit of money that is transferred to them. (I think this seems intuitively plausible to most people. Imagine that you live on the streets and manage to obtain about €10 a day. An additional €20 a day would make a huge difference to your quality of life, while an addition of €20 to a salary of €180 a day would not be a similar improvement in welfare.)
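To make the diminishing-marginal-utility point concrete, here is a minimal worked example. The logarithmic utility function is purely an illustrative assumption on my part – nothing in the argument depends on this particular functional form:

% Illustrative assumption only: utility of money is logarithmic.
\[
  u(m) = \ln m
\]
\[
  \underbrace{u(30) - u(10)}_{\text{street dweller}} = \ln 3 \approx 1.10,
  \qquad
  \underbrace{u(200) - u(180)}_{\text{salary earner}} = \ln\tfrac{200}{180} \approx 0.11
\]

On this (assumed) utility function, the same €20 buys roughly ten times as much welfare for the person on the street as for the salaried worker – which is exactly the intuition behind redistributive transfers.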

But how demanding is utilitarianism, or hedonism, really? That rich people have a duty to give to the poor seems very reasonable on that theory, but do those who are slightly above the median income face heavy obligations as well? Before discussing the real world, let’s imagine the following scenario:

You are walking through the desert. You believe that you will find water, food and shelter within three days. You are carrying three water bottles, each containing enough water to keep you alive for one day. You know that it is likely that you will meet other people wandering lost in the desert; you also know that some of them carry about the same amount of water as you, but some have less or no water at all. If you meet someone who has less water than you, are you obliged to give that person some of your water, until you have an equal amount?

In a world of “perfect compliance” with hedonism, this might be so, because then you could be sure that if you yourself start to run out of water, you can count on someone else to provide for you. But if you are not sure whether the second person you meet will behave morally toward you, then it would be rather foolish to give a lot of your water to the first person. And you might not be sure that the first person is a moral person either. Perhaps he or she won’t reciprocate in the same manner on a later occasion when the tables have turned. An altruist should, in other words, be careful when helping an egoist, even if the egoist happens to be poorer than the altruist.

The point of the story is that in a world of less than complete compliance with the altruism that hedonism entails, it is not reasonable that everyone should, for instance, give away all of their monetary surplus above the median wage each month in order to equalize resources (and, presumably, happiness). For one thing, it is reasonable that everyone should be able to build up some savings for rainy days, especially if one has an irregular income. Again, you need money for emergencies, since you can’t always count on your neighbors to help you (of course, a robust welfare state – keeping in mind that welfare states are getting less and less robust these days – can cover some emergencies, like suddenly being prevented from working on account of illness). It is also reasonable that you have some money left for your own pleasures – the world would not be a very enjoyable place if we all had to live like monks as long as there are people worse off than ourselves, and some pleasure and relaxation is probably needed to keep up the psychological motivation to be altruistic.

Thus, I would not say that hedonistic utilitarianism is extremely demanding (as, for instance, Liam Murphy claims in his book Moral Demands in Nonideal Theory). But it is, at least, moderately demanding. It does not demand that we give until all are equal or that the “ordinary worker” renounce all pleasures and comforts in life; but it does demand that once your life is starting to get settled and comfortable, you should not expect any more material improvement for yourself. Indulging yourself with, for instance, more cars, more trips abroad, designer clothing or furniture, fancy jewelry, swimming pools, dining in fine restaurants, cosmetic surgery, et cetera would simply be immoral. (And if you renounce a lucrative career – as a surgeon, lawyer or engineer, for instance – just because you would have to give away a lot of your money, that would, of course, also be immoral.)

Two points should be added: 1. These demands for material redistribution do not entail that redistribution must take place in a haphazard and unorganized fashion. Preferably it would mostly be done through the tax system, which means that your prime obligation might be to vote for parties that are committed to effective redistributive policies (although this does not mean that you are completely off the hook when it comes to voluntary charity). 2. We should keep in mind that the more people live up to this standard of altruism, the less we would all have to sacrifice. If most people in affluent countries were to live in accordance with it, then most of the world’s (material) problems would be solved long before we would have to live like monks.

The Meaning of Anti-Traditionalism

Yesterday was fettisdagen – translatable as Shrove Tuesday – in Sweden. This day is traditionally the first day on which one is allowed to eat a semla (plural: semlor), although nowadays there is, of course, no official ban on eating one before Shrove Tuesday (going further back, fettisdagen was the only day on which you could eat semlor). Even though semlor have been available in grocery stores for several weeks before this day, some people still prefer to wait until Shrove Tuesday before they eat one.

This highlights a broader question about traditions. Is it really rational to honor a tradition if there is no reason to do so other than the fact that it is a tradition? Why, in other words, wait until fettisdagen to eat a semla when there is no rational reason to wait? If you answer no to the first question, you can probably be classified as an anti-traditionalist.

Personally, I am an anti-traditionalist in that sense. I see no reason to honor traditions that have no reason behind them. This does not mean that the reason must be very elaborate and foolproof. There might not be any “rational” reason why we should celebrate birthdays, but one might suggest – rightly or wrongly – that it is valuable to have some day on which every person deserves some positive attention. It does not have to be the birthday, but since this tradition is established we might as well continue to honor it, rather than change it to some other day.

But there are other traditions where it is hard to find a good reason for keeping them going. Circumcision among atheist Jews might be one example. If one does not believe that one must, literally, do it for God’s sake, then some other reason for keeping the tradition going is needed. Some might claim that it is done for medical reasons, but I rarely hear Jews invoke that reason (and it would be a strange coincidence if peoples who have traditionally practiced circumcision uniformly praised the medical effects, while others uniformly did not). The most common reason given seems to be that it is a tradition among their people. The reason for continuing the tradition is, in other words, just the tradition itself.

Now imagine if I were to start a chess club. I would tell potential members that to be a member of this club one must not only be interested in chess, but must also chop off one’s left little finger. If people were to ask the reason for this, I would say that this is one of the two distinguishing marks of our association: we are the people who like chess and are missing one little finger, and that’s that. But I assume potential members would ask what the point is of chopping off the little finger. Chess in itself is, at least for potential members of this club, enjoyable or interesting, so it seems like a worthy cause to gather around. But chopping off little fingers seems merely arbitrary and meaningless. The club would lose nothing of its substantial enjoyment value if the finger-chopping practice were discontinued.

Why does the traditionalist, then, want to uphold seemingly meaningless or arbitrary traditions? There are probably two main reasons. The first is that they believe that there are reasons behind most “meaningless” traditions that we cannot see. Traditions evolve for some reason, and to presume that we can discern the exact reasons behind this evolution is a sort of intellectual or rationalist hubris. The second reason is that they believe that having distinct groups in society is valuable and that it is good (or even necessary) for people to belong to groups. Exactly what traditions or distinguishing traits these groups have is of less importance than the fact that one must belong to some group in order to be a fully functioning human being.

The evolution thesis is not altogether unreasonable. If there is no obvious harm in keeping a tradition then perhaps we should not question it too hard. But when there is obvious harm in a tradition (for instance the harm of waiting for the semla or the harm of not having a foreskin or a clitoris), these harms should be weighed against the benefits of the tradition; and if no obvious benefits can be found then the tradition should be discontinued. What counts as benefits or harms is, of course, a topic for further dispute, but it is better that these disputes are conducted in the open than silenced by appeal to the shibboleth of tradition.

When it comes to the second reason, it seems to be mostly a question for social psychologists. All I can say is that the view that people need to belong to (arbitrary) groups in that way seems very pessimistic about common people’s rational faculties and ability to embrace a truly individualist lifestyle. Perhaps this pessimism is warranted, but I find it hard to embrace it, since I am such a “rational” person myself. But that only means that you should not take me for an authority when it comes to social psychology.

Searching for the Best Referencing System

All scientific or scholarly works contain references (perhaps this is true by definition). As you probably know, there are different referencing systems to choose from, and different journals and book publishers use different systems. University students are often free to choose which system to use as long as they use it correctly and consistently, and people seldom argue about the relative merits of the systems. I maintain, however, that some systems are demonstrably better than others. Here I will discuss the pros and cons of the systems I most often come across (mainly in philosophy and the social sciences), going from worst to best.

Endnotes with references at the end of the chapter: This is the most annoying system, since you have to find the end of the chapter before you can look up the reference, and then you have to keep, for instance, a finger at that place while reading if you want to look up more references. It is even more annoying when you’re reading electronic books or articles (unless they contain hyperlinks to the references).

Endnotes with references at the end of the book (or article): Quite annoying, but not as annoying as the above-mentioned system. It is generally less cumbersome to go to the end of the book than to the end of the chapter. Personally I think endnotes should be abolished altogether, at least in works directed at scholars and researchers (who, I assume, are actually interested in the references).

Footnote, Oxford style: The reference is found at the bottom of the page. The first time a work is mentioned, everything about the reference is written out: author, title, year, publisher. This is helpful if you immediately want to know what work is being referenced. However, the next time the work is referenced only “op. cit.” is written out (even if it has been a hundred pages since the last reference), which means that you either have to remember what work was being referenced or go back and find the first reference to the work in question. It is a workable system if you have very few references (or many references mentioned only once), but otherwise it is quite annoying.

Footnote, Harvard style: Under the Harvard style of referencing, only the last name of the author, the year of publication and the page number are written out (like this: Smith 1975: 45). To get full information about the reference (most importantly the title of the book or article) one must go to the reference list at the back of the book (or sometimes the end of the chapter). The advantage of this system is that you immediately get the name of the author being referenced (which you do not get with endnotes). The main disadvantage is that even if you know the author, you still don’t know exactly which work is referenced, and a specialized scholar often wants to know that.

Parenthesis, Harvard style: In the Harvard system the references can be put in a footnote or in a parenthesis directly in the text (the latter can probably be called the more “pure” Harvard system). It is largely a matter of taste or aesthetics which one prefers. Personally I think parentheses are better, but some people think they disturb the flow of the text. Moreover, I often find many short footnotes at the bottom of the page aesthetically displeasing, while others do not care about that at all. An additional advantage of the parenthesis style is that your eyes don’t have to wander down to the bottom of the page and back up again. Your reading flow can be maintained without interruption (unless, again, you believe parentheses disturb your flow even more).

Footnote with name of work: This is similar to Harvard style footnotes (I don’t think it would work well with parentheses), but instead of the year of publication you write out the name of the work (probably omitting any subtitle). This is the system I prefer, although I seldom use it, because no academic journal uses it (at least none I can think of right now). I really can’t see any disadvantages with this system. You immediately know what work is being referenced – i.e., you don’t have to go back and forth in the book. You only have to go to the actual reference list if you want to retrieve the referenced work yourself and need, for instance, the name and volume of the journal, or if you have a very specialized interest regarding, for instance, which edition or translation is being used.
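For what it is worth, something close to my preferred system can be approximated in LaTeX. The following is only a minimal sketch assuming the biblatex package and its built-in author–title citation style; the bibliography file and entry key are my own hypothetical examples:

% Minimal sketch: biblatex's authortitle style prints author and
% (short) title in the footnote, rather than author and year.
\documentclass{article}
\usepackage[style=authortitle]{biblatex}
\addbibresource{refs.bib} % hypothetical bibliography file

\begin{document}
Pragmatism concerns theories, not hunches.\footcite[45]{james1907}
\printbibliography
\end{document}

With an entry keyed james1907 in refs.bib, the footnote would read something like “James, Pragmatism, p. 45” – which is exactly the information I argue a reader wants at a glance.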

In conclusion, people often think that it’s no big deal which referencing system you use; but my contention is that the choice makes a real difference to the reading experience. You should not annoy your readers more than you have to.

Ceterum censeo that block quotes should be abolished.