Reflections…, part 3

So what is “science,” then? Personally, I want to view it as an extension of the activities in which we all take part in everyday life. What we all want is success in what we do, and I cannot see why science should have any other goal. Thus, science should be practically fruitful to merit its license.

This is what, in the context of philosophy of science, is usually called instrumentalism – a view which is often subject to very little discussion (if discussed at all) in the literature, probably because it has never been very popular. The most famous proponent of instrumentalism is probably the economist Milton Friedman, who claimed that unrealistic assumptions in economics (and, one would assume, also in other disciplines) are not a problem as long as we can, with the help of those assumptions, develop hypotheses that can be tested by correct predictions.

Now, it may happen that Friedman’s own theories were not that strong when it came to providing correct predictions; but I still think there is something in instrumentalism as a philosophy of science. What separates instrumentalism from “regular” science, i.e., science directed at finding causal links? A significant affinity seems to exist, but it is probably true to say that instrumentalism is more “tolerant” than, for instance, logical positivism. It does not deem “metaphysical” or other unclear statements to be illegitimate as long as they are fruitful when it comes to producing empirically verifiable hypotheses. And according to instrumentalism, laying bare causal mechanisms clearly is not as important for a hypothesis to count as “legitimate” science. If we can find strong correlations between variables, then that might very well help us to handle reality better, even if we fail to see the “underlying” mechanism absolutely clearly.

One could also claim that instrumentalism is more tolerant than Popper’s principle of falsification. Making predictions in the social sciences (and sometimes also in the natural sciences) is not about finding “laws” which are always valid. The hypotheses we work with pertain to probabilities, which means that one can always find cases that do not behave as the hypothesis predicts. One might, perhaps, ask whether such probabilistic predictions are of any use. Some philosophers of science claim that instrumentalist science cannot really contribute much, since it cannot improve a lot upon the “folk psychology” and intuitive understanding of the world which we all possess. Personally, however, I think that it is possible (and it happens all the time) to conceive of (and verify) probabilistic hypotheses that run counter to what we intuitively think is true and that improve our folk psychology. Nevertheless, there is some affinity between instrumentalism and falsificationism, if one by the latter means that repeated falsifications must take place in order to reject a hypothesis (if one, in other words, can show that the probabilistic predictions made do not seem at all to be in line with the observed frequency).
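
To make the last point concrete, here is a minimal sketch (all numbers are invented purely for illustration) of how an observed frequency can tell against a probabilistic hypothesis. Suppose a hypothesis claims that some intervention succeeds 80 percent of the time, and we observe only 55 successes in 100 independent trials; a simple binomial tail sum shows how surprising that outcome would be if the hypothesis were true:

from math import comb

# Invented numbers for illustration: the hypothesis claims a success
# rate of 0.8; we observe 55 successes in 100 independent trials.
p, n, k = 0.8, 100, 55

# One-sided tail: the probability of seeing k or fewer successes
# if the hypothesized rate p were true.
p_value = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))
print(f"P(X <= {k} | p = {p}) = {p_value:.1e}")

The tail probability here comes out astronomically small, which is just the kind of systematic mismatch between predicted probability and observed frequency that, on the view sketched above, licenses rejecting the hypothesis.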

The clearest difference between instrumentalism and other philosophies of science can be found when it is compared to the sort of social (and humanistic) science which is not at all interested in formulating verifiable hypotheses. To “interpret” or “understand” the world cannot be viewed as science, according to the instrumentalist, unless by “understanding” we mean, for instance, the understanding of observable consequences with the help of predictive hypotheses, or the like. By the same token, studies which only aim to describe the world cannot be counted as fully scientific, even if descriptive studies are usually quite valuable (to say the least) preparations for the scientific activity of formulating hypotheses. It is, however, important that descriptive studies are made in a way that is fruitful for continued scientific use.

The hardest things to defend as “scientific” in the court of instrumentalism are probably very abstract theorizing, or different kinds of case studies, which experience has shown to be pragmatically unfruitful. This does not mean that such studies cannot be valuable for other reasons (they might, for example, be pleasurable to conduct or to read), but in these cases, the label “science” may not be appropriate to apply.

Lastly, however, we should add the following point: if science is only supposed to serve our pragmatic goals, who chooses which goals are relevant? My thinking is that this is ultimately decided by those who pay for the research, and in this case we are hopefully talking about democratic institutions which have enough sense to understand that science thrives when there are many approaches and perspectives that work side by side. But “science” that decade after decade produces results without any pragmatic relevance at all should not be paid for by the public purse. (If there are private actors prepared to continue the research one can, of course, turn to them.)

Reflections…, part 2

For some people social science will appear as something fundamentally different from natural science. It is, after all, not possible to detect any underlying “laws” in society in the same way that natural laws can be detected. And the principles we can actually uncover usually do not require much scientific effort; “folk psychology” is often enough.

The most extreme view ought to be the rejection of all pretensions to explain any causal links. Instead, the focus is, so to speak, on making social phenomena understandable, on grasping and interpreting them. In its purest form the result might be a “thick” description of something, in the way that an anthropologist might strive to understand and describe a certain culture, without any pretensions to find law-like generalities in human behavior. This view, focused on understanding, is most prevalent in case studies, and interviews are often used as primary sources.

History, too, is often distinguished by a lack of pretensions to uncover general hypotheses about human action. There is, however, a difference in that not all historians regard their discipline as a “science” in its strict meaning; and sometimes history is classified as an art, rather than a science, although its “weight” or “value” is usually not diminished by this. One could say the same about the academic study of literature, a discipline which extremely rarely attempts to describe or explain any human generalities.

Still, it is interesting that in some places – I am mostly thinking about Sweden here – the latter discipline is called “literature science” (litteraturvetenskap). This might easily lead one into some linguistic quagmires, where it is difficult to find out whether all the things that are labeled “science” really have much in common. It is possible that all that remains is an “institutional” definition: what people do at universities is science – even if the activity itself is marked by vast differences in methods and aims between different departments.

So how should one find one’s bearings in all this? Should one simply refrain from any attempts to define what science really is (or ought to be), or should one try to find some common criteria that must be met in order for something to be called a science – something like the positivist idea about general causal hypotheses? The question is, perhaps, most burning when a lot of science is financed through the public purse. We do, after all, want our tax money to go to “legitimate” science and our universities to retain a decent ranking compared to universities in other countries. Think, for instance, about the debate that raged some years ago when Lund University was about to appoint a professor (for an endowed chair) in parapsychology, or the constant debates surrounding a discipline like gender studies (or “gender science”, as it is called in Sweden).

For my part, I think that some basic criteria (developed further in part 3) should be fulfilled in order for something to qualify as a science; and one consequence of this might be that some research programs at Swedish universities should be discontinued. At the same time, I do not think that all separate studies that are produced at universities need to contain strictly scientific hypotheses; but it should still be clear in which way these “unscientific” studies are smaller pieces of a larger picture which, in turn, is a scientific endeavor.

Reflections on Philosophy of Science, pt. 1

[The following is a translation of an old post from my Swedish blog]

Abstract philosophical questions are not something that only people with philosophy degrees are occupied with. As a social scientist one must sometimes reflect on philosophical issues as well. Firstly, it might be good to think about what separates “science” from other human activities, and secondly, about the difference between natural science and social science (and maybe the humanities are also worthy of some attention).

The definition of science is probably not something that normal people lose any sleep over; but active scientists do probably need – at least from time to time – to assess whether a particular study should be regarded as scientific or not. In general, it is probably the case that a certain degree of empiricism has been prevalent since the birth of modern science (in the 16th and 17th centuries). In other words, hypotheses are scientific if they can be confirmed by observation, often in combination with mathematical/logical reasoning. Things that cannot be observed by our senses (e.g., mythological creatures) are not included in the domain of science.

It seems that this way of doing scientific studies has mainly developed so as to make controlled experiments the gold standard of scientific rigor. We might, for instance, put some rats in a laboratory, control for as many environmental variables as possible (their food, their sensory impressions, and so on), and then observe what effects we get by systematically changing one of the variables (in cases where we cannot control for the variables we must settle for, for instance, historical data). In the end, what most scientists are interested in is finding causal connections, i.e., what causes what. Do rats feel happy when eating sugar? Can apples cure cancer? Do children who have not been breastfed become violent adults?
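
As a minimal illustration of the logic of such an experiment (everything here is invented: the rats, the scores, the effect), one can think of the comparison in the following way:

import random

random.seed(1)  # fixed seed so the toy example is reproducible

# Hypothetical setup: 20 rats, half get sugar (the treatment), half do
# not (the control); all other variables are assumed to be held fixed.
# The 'happiness' scores are pure invention, for illustration only.
control = [random.gauss(5.0, 1.0) for _ in range(10)]
treatment = [random.gauss(6.0, 1.0) for _ in range(10)]

def mean(xs):
    return sum(xs) / len(xs)

print(f"control mean:   {mean(control):.2f}")
print(f"treatment mean: {mean(treatment):.2f}")
print(f"estimated effect of sugar: {mean(treatment) - mean(control):.2f}")

The difference in means is then read as the effect of the one variable we changed, precisely because everything else was held constant.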

This view of science was probably propagated most succinctly by the so-called logical positivists (who started ravaging around the 1920s, influenced by the early philosophy of Wittgenstein, among others). They believed that all statements that are not about empirically verifiable things or about purely logical deductions can be ruled out as unscientific – or as “metaphysical nonsense”, if one wants a sharper formulation. The task of scientists should, thus, be to formulate hypotheses that could be verified by observation (a program which, by the way, was criticized by Karl Popper, who thought that one should try to falsify hypotheses instead of verifying them, since even if a hypothesis has been verified many times, a single contradictory case might still destroy it).

The standard view is probably that “pure” logical positivism is not something that all too many scientists (or philosophers) subscribe to, although the science being done from day to day still seems to move in the same precinct. The task is to attempt to show certain causal connections through empirical observation and to verify/falsify the hypotheses regarding these causal connections. Biologists or medical researchers who are interested in other things will probably be the subject of some skepticism among their colleagues.

For the logical positivists there was one, and only one, scientific method. Regardless of what we are studying – be it nature or society – we should use this method if we want to be scientific. If this positivist view of what is “scientific” is something that many natural scientists can, more or less, agree to, then one might ask: is it something that social scientists should agree to? Do social scientists think that one should apply the term “scientific” in the same way, regardless of the object of study, or does “scientific” mean something else when one is engaged in the study of human societies and cultures?

Using Slippery Slope Arguments in Good Faith

A slippery slope argument goes like this: Action A leads to consequence k. Consequence k will, in turn, make consequence l happen, and l will make consequence m happen. While k is not an obviously bad consequence, l is somewhat bad, and m is potentially catastrophic. Thus, action A should not be done, because it will eventually lead to consequence m.

Sometimes slippery slope arguments are simply dismissed as fallacies, but it is obvious that they are not inherently fallacious. However, the strength of the argument is very dependent on the strength of the empirical evidence that the consequences one invokes will actually occur. Such evidence is often lacking, and that is why slippery slopes are often dismissed.
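
A toy calculation (with invented probabilities) shows why the evidence matters so much. Even if each individual link in the chain A → k → l → m is fairly plausible – say each holds with probability 0.7, and the links are independent – the probability that A actually ends in m is only 0.7 × 0.7 × 0.7 ≈ 0.34. A chain of individually plausible steps can thus make for a rather improbable endpoint, which is why each link needs evidence of its own.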

As a consequentialist I am very open to slippery slope arguments. If I, as a hedonist, make an argument about an action or a policy, and if someone can credibly claim that the action or policy in question will probably have bad consequences down the line – even though the immediate consequences may appear good – then I would be perfectly willing to drop my argument.

Nevertheless, I am somewhat reluctant to take slippery slopes seriously if they are raised against me by people who are not consequentialists themselves; because even if I could counter with better evidence that removes the slippery slope, my antagonists would, presumably, not change their minds anyway, since their arguments do not depend on consequences at all. A parallel would be a scientific discussion where a pseudoscientist who rejects naturalism argues that the scientist’s naturalistic evidence is wrongly interpreted. However, if the pseudoscientist rejects naturalistic evidence altogether, we would feel disinclined to take him or her seriously in the first place.

I do understand, however, why non-consequentialists are eager to use slippery slope arguments, because it is mostly a win-win situation for them. If the argument is convincing, the consequentialist will have to drop her position. If the argument is unconvincing, the non-consequentialist can just fall back on her non-consequentialist principles and disregard the empirical evidence. For the consequentialist, however, it is a lose-lose situation, since the non-consequentialist does not have to endorse the consequentialist’s position even if the slippery slope is successfully disproved.

In short, please bring up slippery slope arguments if you want, but do so in good faith. Be prepared to change your own views if the slippery slope does not pan out the way you thought in the first place. Otherwise, it is just like pseudoscientists (and similar folk) who argue against real scientists by invoking empirical evidence, while they themselves refuse to state clearly which empirical evidence might falsify their own position.

A Reasonable Doctrine of Personal Responsibility

I do not believe in the existence of free will in any deep sense. Everything we observe in nature happens because of some prior cause. A puddle of water cannot decide to freeze; it is, rather, caused to freeze when reaching a certain temperature. Human beings are products of nature as well – simply a collection of atoms (note that there is no valuation implied in this use of the word ‘simply’), although our brains are infinitely more complex than a puddle of water, which means that most of our actions are infinitely more unpredictable than the ‘actions’ of most other objects in nature. So even a determinist (i.e., someone who does not believe in the existence of free will in the abstract) should concede that, in practice, there is not much difference between a belief in free will and determinism.

Some would, however, claim that it makes a great moral difference whether you believe in free will or determinism. On this account, we cannot punish or reward anyone if they are not acting out of free will. As a utilitarian hedonist I must disagree with this statement. If the consequences of punishing or not punishing someone were the same, then it would not matter what we did. But the consequences are (almost) never the same. Even if the actions of a ‘rapist’ were determined by prior processes in the brain (i.e., they were not the result of ‘free will’), it still makes a difference whether we punish him or not. Among the most important consequences are that the streets will be safer with the rapist behind bars, and that other potential rapists will be deterred from raping. The last point is especially important; it indicates that our actions (even if they, in turn, are not the result of free will) can create new determining factors for the behavior of other people.

Thus, I view the question of punishment, blame, reward, and praise as wholly separate from the question of determinism or free will. Even though we could, at least in principle, describe everything that caused someone’s actions, we can still discuss how much this person should be blamed or praised, or how much personal responsibility we should ascribe to them. A general rule might be that the more we can actually assess the determining factors of someone’s behavior, the less responsibility we can ascribe. For example, if I were to watch someone get pushed into the street in front of a car, I could easily predict that this person would get involved in an accident, and so I could not ascribe responsibility to the person being pushed for the consequences of this accident. I could, however, ascribe a heavy responsibility to the pusher, because I could not predict her actions at all (unless, perhaps, there was another pusher behind her).

What about the responsibility for poverty or unemployment, which might be important to assess in order to decide who ‘deserves’ assistance from the state? Here we must apply sliding scales, since it would be highly unusual to find someone who is not responsible at all for their situation, as well as someone who is completely responsible for their situation. Again, the test would consist in predictability. We might predict that someone with a certain disability can be expected to have a hard time finding a job, which means that responsibility for this unemployment cannot be total. On the other hand, we might be able to find a not insignificant number of people who have succeeded in finding employment in spite of this disability, which would mean that our ability to predict decreases (at least ceteris paribus), while the level of personal responsibility increases.

While certain disabilities would decrease the level of responsibility for unemployment drastically, there are other things which would increase it drastically. An example of this would be laziness. While it is certainly true that laziness, as well as other personal traits, is endowed to us by nature, it is still the case that we can find many examples of lazy people who are able to control or overcome their laziness in order to get a job. In other words, it is very hard to predict whether someone who exhibits lazy tendencies early in life will succeed in finding employment or not. Thus, when it comes to lazy people, their laziness does not remove very much personal responsibility. Furthermore, while the lazy person might claim that he is lazy for certain reasons (that he cannot remove), we can create other reasons for him to counteract this laziness by, for instance, nagging at him or threatening to remove unemployment benefits. This does not exclude the possibility that we might find rare cases of extreme laziness that cannot be overcome by any means, but in such cases we would probably not call it ‘laziness’ anymore, but refer to it as a mental disorder.

So my suggestion is that the level of predictability regarding people’s actions should determine the level of personal responsibility we ascribe. Right now I am just throwing this out as an idea which, in the end, might prove to be untenable. In any case, the theory seems to fall in line with many of our standard practices in deciding whether we should accept someone’s excuses or not. If you are late to a meeting and your excuse is that you had to finish playing a game on your phone, people would probably not accept it, because they could point to so many examples of people who like to play games on their phone but still manage to get to meetings on time. If your excuse is that your train was delayed because of a terrorist attack, we would accept it, because it would be very hard to find examples of people who would make it to work on time under those circumstances. It would, in other words, be easy to predict that a person stuck on a train during a terrorist attack would not make it to work, while it would be extremely difficult to predict whether someone caught up in a game on her phone will manage to arrive on time or not.
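
As a toy formalization of the suggestion (my own shorthand, nothing more): if P(a | c) stands for how confidently action a could have been predicted from the circumstances c, then the responsibility we ascribe might vary roughly with 1 − P(a | c). The pushed pedestrian’s stumble is predictable to the point of certainty, so responsibility approaches zero; the push itself could hardly be predicted at all, so responsibility approaches its maximum.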

I guess the obsessive phone gamer would be able to say that once she had started to play the game, it would be very easy to predict that she would not make it to the meeting in time, given that we know how long the game usually lasts. Thus, we should not ascribe much responsibility to her. Here one could reply that even though it was predictable that she would be late once the game had started, it was very hard to predict that she would pick up the phone and start to play in the first place. Thus she bore a very large responsibility for starting to play. And since she, presumably, knew of the risk that she would be late if she started playing, the responsibility for the tardiness remains high.

In the end, however, it must be added that praise or blame should only partly be determined by the level of personal responsibility. The most important thing is always the consequences of punishing or not punishing. If we see many good consequences of not reprimanding the phone gamer in any way, in spite of her high level of personal responsibility, then we should not mete out any blame or punishment. Conversely, the consequences might be such that it is beneficial to punish people in spite of low levels of responsibility (indeed, we do sometimes lock up people who are dangerous to others, even though they are very mentally ill and not responsible at all for their actions).