Pragmatism and Ethics

Pragmatism (at least in its “classical” form), as I perceive it, is based on the claim that theories are made for the sake of human action (no other species makes theories). The pragmatist does not assume that any substantial theory is better than another, but whenever a theory is considered it should undergo what one might call the pragmatic test: the theory must have some verifiable practical consequences if it is to count as a meaningful theory. By the same token, a disagreement between two theories, the consequences of which do not differ in practice, is not a meaningful disagreement. William James – one of the main figures in pragmatism – writes: “Whenever a dispute is serious, we ought to be able to show some practical difference that must follow from one side or the other’s being right” (Pragmatism, p. 45f).

Pragmatism is, in other words, directed towards action and power, and theories appear as instruments of action or as tools to change the world (or perhaps to keep it unchanged, if that is one’s wish). In essence, it seems to be an instrumentalist philosophy of science, i.e. a view on which prediction is the ultimate test of a theory. Of course, we can always predict things using “hunches” or “common sense”, but as a philosophy of science pragmatism concerns theories, i.e. more generalized statements. “The pragmatist,” to quote James again, “clings to facts and concreteness, observes truth at its work in particular cases, and generalizes”. Humans have always observed that things can be ordered into “kinds”, and “when we have once directly verified our ideas about one specimen of a kind, we consider ourselves free to apply them to other specimens without verification” (Pragmatism, p. 68, 208). Theories are Denkmittel (as James sometimes says) that help us to “better foresee the course of our experiences, communicate with one another, and steer our lives by rule” (The Meaning of Truth, p. 62f).

Although pragmatism may look a lot like classical empiricism, there is the difference that on the latter view “the truth of a proposition is a function of how it originates in experience” (for instance, by sense impression), while pragmatism is only interested in what will obtain in the future. “In short, all beliefs are virtual predictions (hypotheses) about experience and, regardless of how they originate, their truth is a function of whether what they virtually predict, if true, will obtain in the future” (Robert Almeder, “A Definition of Pragmatism” [1986], p. 81).

It would appear that pragmatism does not have much to do with ethics. The only things James says about ethics in the book Pragmatism are the following: “‘The true,’ to put it very briefly, is only the expedient in the way of our thinking, just as ‘the right’ is only the expedient in the way of behaving”, plus a remark about the “sentimentalist fallacy”, i.e. “to shed tears over abstract justice and generosity, beauty, etc., and never to know these qualities when you meet them in the street, because the circumstances make them vulgar” (Pragmatism, p. 222, 229). There are, however, a few things one should be able to conclude about ethics from a pragmatist perspective.

Firstly, one cannot – on purely pragmatist grounds – assume or exclude any substantial normative propositions a priori, since pragmatism is a method for sorting out workable and unworkable theories – it is not itself a substantial theory about the world or about how we should behave (although it may exclude some substantial theories as unworkable, as discussed below).

Secondly, pragmatism is, again, concerned with theories, so a pragmatist ethical theory cannot be about singular intuitions or the like. Pragmatist ethics must involve some general propositions.

Thirdly, pragmatist ethics must be workable, in the sense that they can actually guide behavior. This excludes theories that give no guidance, guidance that is too indeterminate, or contradictory guidance. Probably we can also exclude theories that are too utopian, given present historical circumstances (such theories are “workable” in theory, but not in practice), although the early pragmatist (and fascist) Papini – and to a certain extent James himself – might have disagreed about that.

Fourthly, recommendations for action in particular cases must be deduced from the general theory in a clear way, since if ad hoc considerations are added in certain particular cases we do not have a workable theory. The same is, of course, true of scientific theories. If I claim that astrology is verified because I have derived successful predictions from it, I must be able to show that these predictions are actually deduced from the general astrological theory and that certain ad hoc assumptions – more compatible with traditional science – have not been mixed in along the way and are doing all the actual predictive work.

These four points seem to follow quite clearly from pragmatism. An additional fifth point could possibly be added, although it is probably more controversial, namely, that even though pragmatism does not assume any substantial starting point for ethical reasoning, it seems to be in line with the general tone of pragmatism that the starting point cannot be wholly arbitrary. There should be some connection to things that appear “significant” to human activity and life projects.

There should, in other words, be some kind of intuition about what is good for human beings (perhaps including other sentient beings) lurking in the background, rather than some arbitrary statement. A theory that begins with the fundamental principle that one should maximize the number of hats in the world (rather than, for instance, freedom, happiness or creative achievement) might be “logically” workable, but it is hard to imagine how someone could have such an intuition about goodness.

This theory of ethical reasoning seems to be in line with classical pragmatism, and I believe it is a rather sound standard to judge ethical theories. It is a shame that some people who call themselves pragmatists these days have other ideas about what pragmatism entails for ethics. They often assume that pragmatism is an inherently “progressive”, “radical” or “contextualizing” theory when it comes to moral and political philosophy (and the main reason for this is the damaging influence of John Dewey). Some substantial theories of that kind are probably compatible with the pragmatist outlook, but so are other kinds of theories. One has to remember – something which James often underscores – that pragmatism is a method, not a specific doctrine about anything. Just as the only important thing for a scientific theory is that it is useful, or workable, the same should be true for ethical theories. They should give us clear and non-contradictory advice on how to act – advice that is deduced from some more general (and non-arbitrary) propositions. Otherwise an ethical outlook is not workable as a theory.

How Demanding is Utilitarianism?

Utilitarianism (at least some forms of it) is often criticized for being too demanding for the individual. It seems to entail, for instance, that if you have a surplus of welfare you should compensate those who have a deficit until you are equal in welfare. In reality this, of course, often means a transfer of money or other physical resources, because it is usually assumed that the more money you have, the less extra welfare you get from more money, while those who have very little money get more welfare for every unit of money that is transferred to them. (I think this seems intuitively plausible to most people. Imagine that you live on the streets and manage to obtain about € 10 a day. An additional € 20 a day would make a huge difference to your quality of life, while an addition of € 20 to a salary of € 180 a day would not be a similar improvement in welfare.)
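To make the arithmetic behind this intuition explicit, here is a minimal illustration, assuming a logarithmic utility function – one common way of modeling diminishing marginal welfare, and an assumption of mine rather than anything the argument depends on:

```latex
% Illustration only: welfare from daily income m modeled as u(m) = ln(m).
% The logarithm is an assumed, stylized choice; any concave function
% would make the same qualitative point.
\[
  \underbrace{\ln(10 + 20) - \ln(10)}_{\text{street dweller}} \approx 1.10 ,
  \qquad
  \underbrace{\ln(180 + 20) - \ln(180)}_{\text{salaried worker}} \approx 0.11 .
\]
% The same extra 20 euros produces roughly ten times the welfare gain
% for the person who starts from 10 euros a day.
```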

But how demanding is utilitarianism, or hedonism, really? That rich people have a duty to give to the poor seems to follow straightforwardly from that theory, but do those who are slightly above the median income face heavy obligations as well? Before discussing the real world, let’s imagine the following scenario:

You are walking through the desert. You believe that you will find water, food and shelter within three days. You are carrying three water bottles, each containing enough water to keep you alive for one day. You know that it is likely that you will meet other people wandering lost in the desert; you also know that some of them carry about the same amount of water as you, but some have less or no water at all. If you meet someone who has less water than you, are you obliged to give that person some of your water, until you have an equal amount?

In a world of “perfect compliance” with hedonism, this might be so, because then you would be sure that if you yourself start to run out of water, you can count on someone else to provide for you. But if you are not sure whether the second person you meet will behave morally towards you, then it would be rather foolish to give a lot of your water to the first person. And you might not be sure that the first person is a moral person either. Perhaps he or she won’t reciprocate in the same manner on a later occasion when the tables have turned. An altruist should, in other words, be careful when helping an egoist, even if the egoist happens to be poorer than the altruist.

The point of the story is that in a world of less than complete compliance with the altruism that hedonism entails, it is not reasonable that everyone should, for instance, give away all of their monetary surplus above the median wage each month in order to equalize resources (and, presumably, happiness). For one thing, it is reasonable that everyone should be able to pile up some savings for rainy days, especially if one has an irregular income. Again, you need money for emergencies, since you can’t always count on your neighbors to help you (of course, a robust welfare state – keeping in mind that welfare states are getting less and less robust these days – can cover some emergencies, like being suddenly prevented from working on account of illness). It is also reasonable that you have some money left for your own pleasures – the world would not be a very enjoyable place if we all had to live like monks as long as there are people who are worse off than ourselves, and some pleasure and relaxation is probably needed to keep up the psychological motivation to be altruistic.

Thus, I would not say that hedonistic utilitarianism is extremely demanding (as, for instance, Liam Murphy claims in his book Moral Demands in Nonideal Theory). But it is, at least, moderately demanding. It does not demand that we give until all are equal or that the “ordinary worker” should renounce all pleasures and comforts in life; but it does demand that when your life is starting to get settled and comfortable, you should not expect any more material improvement for yourself. Indulging yourself with, for instance, more cars, more trips abroad, designer clothing or furniture, fancy jewelry, swimming pools, dining in fine restaurants, cosmetic surgery et cetera would simply be immoral. (And if you renounce a lucrative career – as a surgeon, lawyer or engineer, for instance – just because you would have to give away a lot of your money, that would, of course, also be immoral.)

Two points should be added: 1. These demands for material redistribution do not entail that redistribution must take place in a haphazard and unorganized fashion. Preferably it would mostly be done through the tax system, which means that your prime obligation might be to vote for parties that are committed to effective redistributive policies (although this does not mean that you are completely off the hook when it comes to voluntary charity). 2. We should keep in mind that the more people live up to this standard of altruism, the less we would all have to sacrifice. If most people in affluent countries were to live in accordance with it, then most of the world’s (material) problems would be solved long before we would have to live like monks.

The Meaning of Anti-Traditionalism

Yesterday was fettisdagen – translatable as Shrove Tuesday – in Sweden. This day is traditionally the first day on which it is allowed to eat a semla (plural: semlor), although nowadays there is, of course, no official ban on eating one before Shrove Tuesday (going further back, fettisdagen was the only day on which you could eat semlor). Even though semlor have been available in grocery stores for several weeks before this day, some people still prefer to wait until Shrove Tuesday before they eat one.

This highlights a broader question about traditions. Is it really rational to honor a tradition if there is no reason to do it other than the fact that it is a tradition? Why, in other words, wait until fettisdagen to eat a semla when there is no rational reason to wait? If you answer no to the first question you can probably be classified as an anti-traditionalist.

Personally, I am an anti-traditionalist in that sense. I see no reason to honor traditions that have no reason behind them. This does not mean that the reason must be very elaborate and foolproof. There might not be any “rational” reason why we should celebrate birthdays, but one might suggest – rightly or wrongly – that it is valuable to have some day on which every person deserves some positive attention. It does not have to be the birthday, but since this tradition is established we might as well continue to honor it, rather than change it to some other day.

But there are other traditions where it is hard to find a good reason for keeping them going. Circumcision among atheist Jews might be one example. If one does not believe that one must, literally, do it for God’s sake, then some other reason for keeping the tradition going is needed. Now some might claim that it is for medical reasons, but I rarely hear Jews invoke that reason (and it would be a strange coincidence if peoples who have traditionally practiced circumcision uniformly praised the medical effects, while others uniformly did not). The most common reason seems to be that it is a tradition among their people. The reason for continuing the tradition is, in other words, just the tradition itself.

Now imagine that I were to start a chess club. I would say to the potential members that to be a member of this club one must not only be interested in chess, but must also chop off one’s left little finger. If people were to ask the reason for this, I would answer that this is one of the two distinguishing marks of our association; we are the people who like chess and are missing one little finger, and that’s that. But I assume potential members would ask what the point is of chopping off the little finger. Chess in itself is, at least for potential members of this club, enjoyable or interesting, so it seems like a worthy cause to gather around. But chopping off little fingers seems merely arbitrary and meaningless. The club would lose nothing of its substantial enjoyment value if the finger-chopping practice were discontinued.

Why does the traditionalist, then, want to uphold seemingly meaningless or arbitrary traditions? There are probably two main reasons. The first is that they believe that there are reasons behind most “meaningless” traditions that we cannot see. Traditions evolve for some reason, and to presume that we can discern the exact reasons behind this evolution is a sort of intellectual or rationalist hubris. The second reason is that they believe that having distinct groups in society is valuable and that it is good (or even necessary) for people to belong to groups. Exactly what traditions or distinguishing traits these groups have is of less importance than the fact that one must belong to some group in order to be a fully functioning human being.

The evolution thesis is not altogether unreasonable. If there is no obvious harm in keeping a tradition then perhaps we should not question it too hard. But when there is obvious harm in a tradition (for instance the harm of waiting for the semla or the harm of not having a foreskin or a clitoris), these harms should be weighed against the benefits of the tradition; and if no obvious benefits can be found then the tradition should be discontinued. What counts as benefits or harms is, of course, a topic for further dispute, but it is better that these disputes are had in the open rather than silenced because of the shibboleth of tradition.

When it comes to the second reason, it seems to be mostly a question for social psychologists. All I can say is that the view that people need to belong to (arbitrary) groups in that way seems very pessimistic about common people’s rational faculties and ability to embrace a truly individualist lifestyle. Perhaps this pessimism is warranted, but I find it hard to embrace it, since I am such a “rational” person myself. But that only means that you should not take me for an authority when it comes to social psychology.

Searching for the Best Referencing System

All scientific or scholarly works contain references (perhaps this is true by definition). As you probably know, there are different referencing systems to choose from, and different journals and book publishers use different systems. University students are often free to choose which system to use as long as they use it correctly and consistently, and people seldom argue about which systems are better than others. I maintain, however, that some systems are actually better than others. Here I will discuss the pros and cons of the systems I most often come across (mainly in philosophy and the social sciences), going from worst to best.

Endnotes with references at the end of the chapter: This is the most annoying system, since you have to find the end of the chapter before you can look up the reference, and then you have to keep, for instance, a finger at that place when you’re reading if you want to look up more references. It is even more annoying when you’re reading electronic books or articles (unless they contain hyperlinks to the references).

Endnotes with references at the end of the book (or article): Quite annoying, but not as annoying as the above-mentioned system. It is generally less cumbersome to go back to the end of the book than to the end of the chapter. Personally I think endnotes should be abolished altogether, at least for works directed at scholars and researchers (who I assume are actually interested in the references).

Footnote, Oxford style: The reference is found at the bottom of the page. The first time the work is mentioned, everything about the reference is written out: author, title, year, publisher. This is helpful if you immediately want to know what work is being referenced. However, the next time the work is referenced only “op. cit.” is written out (even if it has been a hundred pages since the last reference), which means that you either have to remember what work was being referenced or go back and find the first reference to the work in question. It is a workable system if you have very few references (or many references mentioned only once), but otherwise it is quite annoying.

Footnote, Harvard style: Under the Harvard style of referencing, only the last name of the author, the printing year and the page number are written out (like this: Smith 1975: 45). To get full information about the reference (most importantly the title of the book or article) one must go to the reference list at the back of the book (or sometimes the end of the chapter). The advantage of this system is that you immediately get the name of the author being referenced (which you do not get with endnotes). The main disadvantage is that even if you know the author, you still don’t know exactly which work is referenced, and a specialized scholar often wants to know that.

Parenthesis, Harvard style: In the Harvard system the references can be put in a footnote or in a parenthesis directly in the text (the latter can probably be called the more “pure” Harvard system). It is largely a matter of taste or aesthetics which one prefers. Personally I think parentheses are better, but some people think they disturb the flow of the text. Moreover, I often find many short footnotes at the bottom of the page aesthetically displeasing, while others do not care about that at all. An additional advantage of the parenthesis style is that your eyes don’t have to wander down to the bottom of the page and then up again. Your reading flow can be maintained without interruption (unless, again, you believe parentheses disturb your flow even more).

Footnote with name of work: This is similar to Harvard-style footnotes (I don’t think it would work well with parentheses), but instead of the printing year you write out the name of the work, probably omitting any subtitle (like this: Smith, The Wealth of Nations: 45). This is the system I prefer, although I seldom use it, because no academic journal uses it (at least no journal I can think of right now). I really can’t see any disadvantages with this system. You immediately know what work is being referenced – i.e., you don’t have to go back and forth in the book. You only have to go to the actual reference list if you want to retrieve the referenced work yourself and need, for instance, the name and volume of the journal, or if you have a very specialized interest regarding, for instance, which edition or translation is being used.

In conclusion, people often think that it’s no big deal which referencing system you use; but my contention is that the choice makes an actual difference to the reading experience. You should not annoy your readers more than you have to.

Ceterum censeo that block quotes should be abolished.

The Non-Marxist Case against Capitalism

Let us define capitalism as a system where most of the means of production are privately owned, where the freedom to use those means as one wants is relatively large and where the levels of taxes and state economic redistribution are relatively low. As we all know this system has always been criticized from the left, often from a Marxist perspective, where the main point of accusation is that workers are exploited, i.e., they don’t get the full value of their contribution to production.

Nowadays, however, the left seems to have a hard time arguing against capitalism, and bringing up Marx (or other more obscure theoreticians) surely does not help. There is no mass movement among workers in developed countries today that opposes capitalism, and it is difficult to rally them under the banner of “socialism” when the examples of the Soviet Union, Cuba and other communist states are thrown back in one’s face.

In light of this I have to ask myself: why not bring up a simple moral – and explicitly non-Marxist – case against capitalism, a case that virtually anyone can understand in five minutes? Namely: if capitalists did their moral duty, i.e., used their money to help other people, there would be no problem with capitalism. But the fact that a few superrich people own most of the wealth in the world tells us that they are not doing their moral duty. To be substantially richer than one’s fellow human beings is in itself a moral failing, and the quickest way to remedy this is simply to tax them and spread the wealth around (and in some cases socialize the means of production).

Some might protest: if rich people are taxed heavily, then people won’t go into business at all. That, however, simply reinforces the point that they are failing morally, being egoists who are only ready to exert themselves if it makes them a lot richer than everybody else. A moral person should be willing to work for the benefit of other people (including people one will never meet). The question is, then, how the state should deal with people whose morals differ radically from those of the majority; but that is a question that must be handled by all social systems. Claiming that one may have to cater to a few business-savvy egoists is not to concede that capitalism is good.

Personally, though, I don’t think anything disastrous would happen if, for instance, top marginal tax rates were raised to levels that were common a few decades ago (for instance, above 90% in the US under both Republican and Democratic rule). People will still create new businesses (and we should encourage people to do so cooperatively) if it allows them to live somewhat more lavishly than the “ordinary” worker, but still very modestly compared to how CEOs of large companies live now. Perhaps a first step would be to go back to the time when the CEO made “only” 20 times more than the worker (as in the US in the 1960s), rather than 300 times more, as today.

Adaptive Preferences and the Happy Pauper

The two main forms of utilitarianism are preference utilitarianism and hedonistic utilitarianism (but there are other forms). According to hedonism, pleasure should be maximized, while according to preferentialism (as I will call it here) it is the satisfaction of preferences (or “utility”) that should be maximized, i.e., people should get what they wish for to the highest degree. The “crudest” form of preferentialism simply takes people’s actual wishes as given and asks how they can be satisfied, while other versions qualify the theory by, for instance, demanding that the preferences must be “rational” or the like to count. An “economistic” version simply assumes that people want as much income and/or resources as possible.

A problem for preferentialism – especially in its cruder versions – is adaptive preferences. People who claim to have certain wishes or desires may have adapted them to the fact that there is not much in life they can really get. Or they may have adapted them to certain social expectations placed on them. For instance, a poor and uneducated woman might claim that she really doesn’t want much for herself and that she doesn’t deserve as much as her husband, because to her a life without patriarchal structures is almost unthinkable. Should we, then, say that the way she “chooses” to live reflects her true preferences?

We can easily see the difference between hedonism and preferentialism. People may, and often do, want things that will not make them happier, and sometimes they even want things that will make them unhappier (of course, a hedonist could also want such things, but then the reason would be to increase happiness for someone else). Of course, as a hedonist I cannot see the good in simply giving people what they want if there is no connection to actual pleasure and pain. Now it is often the case that people – especially people who have few choices in life – want things that will actually make them happier, but we all know from experience that there are many exceptions to this rule.

Serene Khader has discussed adaptive preferences in the book Adaptive Preferences and Women’s Empowerment. Like me, she rejects preferentialism but does not endorse hedonism. Instead she offers another consequentialist theory, one that aims to maximize “basic flourishing”. Certainly, this theory has some affinities with hedonism, since hedonists (especially in a political context) are also interested in providing people with the things that Khader includes in basic flourishing: basically the things that poor people (and especially poor women) lack in order to achieve basic well-being (the book is mainly about development in poor countries).

Why, then, does Khader reject hedonism? Although her criticism of preferentialism is thorough, the criticism of hedonism is handled in a few lines. She admits that a hedonist can view a preference for, e.g., staying malnourished as an adaptive preference, since it is more pleasurable to be nourished than malnourished, but she claims that “many preferences we intuitively classify as adaptive may not produce psychological suffering” (p. 50). Khader’s examples of these intuitive cases are not, however, very enlightening. The only actual example against hedonism (as opposed to the many examples against preferentialism) is a poor and oppressed (through discrimination) worker who “gains immense subjective pleasure from the small mercies in her life” (ibid.).

Now I would say that if we can imagine such an unlikely person, who is immensely happy despite being poor and oppressed – and if we have good reasons to think that this person would be unhappier if her social and economic conditions improved – then we do not have any obligation to spend resources on helping that person. But, again, such people are probably so rare that we do not really have to take account of them in political discussions. If people are living in wretched conditions but still claim to be as happy as they could be, we can simply assume that they are victims of adaptive preferences, unless we get clear evidence in individual cases that this is not so.

Problems with Happiness Research

Personally I believe that, as a hedonist, one should use happiness research with caution. For one thing, happiness research usually does not measure pleasure but rather self-reported life satisfaction. Of course, one can assume that the one thing has something to do with the other, but it would be too simple to make them out to be basically the same thing. Therefore I believe happiness research can be used to some degree when, for instance, arguing for certain policies, but one must also use other tools, such as introspection and anecdotal observation. Hedonistic politics can never be based on scientific precision (is there any kind of applied ethics that can be?); and that – by the way – is one reason why it is so important that policies are decided democratically.

A paper that discusses problems with self-reported happiness appeared in the last 2016 issue of the Journal of Happiness Studies. The authors, Ponocny, Weismayer, Stross, and Dressler, observe that there is something strange about the fact that most research seems to report that most people are mostly happy, i.e. generally very satisfied with their lives. The problem with that kind of research (which is often used by those who want happiness levels to be given more weight alongside traditional policy measures, such as GDP) is that it is usually left to the respondents themselves to decide what, for instance, life satisfaction means. Some studies have shown that people tend to interpret this in ways that weaken the correlation between reports of subjective life satisfaction and actual happiness (or the average pleasure level of one’s life). For example, people tend to downgrade the importance of negative experiences (even though those experiences felt very negative when they occurred), and their assessments might also be distorted by social contexts (what does it mean to be “happy” in your culture?).

The authors of the article “Are Most People Happy? Exploring the Meaning of Subjective Well-Being Ratings” base their study on 500 interviews in Austria (random and snowball sampling), reaching approximate representativeness of the Austrian population regarding age and education, but not gender (62.1 % were female). Respondents were interviewed by psychology graduates about good and bad things in their lives. After the interview they responded to a typical life satisfaction assessment on a scale from 0 to 10. Thus, the researchers had accounts of “narrated well-being” (NWB) that they could compare with standard self-reported life satisfaction.
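To make this kind of comparison concrete, here is a minimal sketch in Python of how one might set NWB-based external ratings against self-ratings. The numbers, the coding of NWB onto a 0–10 scale and the use of a plain Pearson correlation are my own illustrative assumptions; this is not the authors’ data or their actual analysis.

```python
# Illustrative sketch only: invented data, not the study's.
# External ratings coded from narrated well-being (NWB) are compared
# with the respondents' own 0-10 life satisfaction self-ratings.
from statistics import mean

nwb_coded = [3, 4, 2, 5, 6, 3, 4, 2, 5, 3]        # hypothetical coder ratings
self_rating = [9, 10, 8, 9, 10, 7, 10, 9, 8, 10]  # hypothetical self-ratings

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# With these invented numbers the self-ratings cluster near the top of
# the scale while the coded NWB scores are much lower and only weakly
# correlated with them, which is the pattern the paper describes.
print(f"mean self-rating: {mean(self_rating):.1f}")
print(f"mean NWB rating:  {mean(nwb_coded):.1f}")
print(f"correlation:      {pearson(nwb_coded, self_rating):.2f}")
```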

The result of this research is that the usually very high assessments of life satisfaction are not strongly correlated with the NWB accounts; i.e., the match between the self-rating and the “external” rating (based on NWB) is not very good. One interesting finding is that “people who express only small emotions or who are categorized as ‘small emotions or close-lipped’ have a strong tendency to rate themselves as ’10,’ which gives rise to the suspicion that, for some respondents, positive self-rating might express defensive response behavior rather than true bliss”. In many cases, rather negative NWB ratings are combined with very positive self-ratings, but there are rare cases where it is the other way around.

A concrete example of the disparity between NWB and self-rating is a woman who rates her life satisfaction at 10 but still complains a lot about how stressful her life is, with a professional career and children to raise, even claiming that sometimes she just can’t take it anymore. Another example, with a self-rating of 9, “reveals verbal compliance to obviously burdensome circumstances”. Regarding a question about restrictions in life she says: “Time pressure, I repeat myself […] It will get better. And still everything works. It does not knock me out. It is ok as it is. I just hope I continue to have the strength and health to keep going like that. And, all in all, it is fine. It is fine.”

The authors observe that “[e]ven respondents with self-ratings of ’10’ often report substantial psychological burden, including financial restrictions, health problems, unemployment, alcoholism, discrimination, death or life-threatening diseases of close relatives, and sadness.” Thus, even though some people “have essential restrictions of their hedonic status (as narrated by themselves)”, they still assess their lives very positively.

Thus, it seems obvious that happiness research cannot be relied on completely when, for instance, arguing for or against certain policies. As a hedonistic utilitarian one must make sure that we are talking about people’s actual experiences of pleasure and pain, and not only their self-reports of life satisfaction and the like. Of course, sometimes self-reported life satisfaction statistics are all we have, and then we must accord them some relevance. But there are also contexts in which we can’t trust such figures at all, for instance when there are strong social pressures to appear satisfied with one’s life in spite of an obvious lack of hedonic pleasures.

I think one of the main problems today is that many people are simply reluctant to take a break from their ordinary lives and consider how their lives could be changed to be more pleasurable. They simply continue living in the way that is accepted as normal in their social environment. Often it is also probably the case that one is reluctant to appear ungrateful, for instance if one is living off the crumbs from other people’s tables. If you start complaining too heavily then the supply of crumbs might stop.