Deflating Bayes’ Theorem

It’s a trivial arithmetical fact that “x percent of y equals y percent of x”. But it can be useful to bear it in mind. For example, I find it harder to figure out “8% of 50” than “50% of 8” — the latter is obviously 4, because 50% of any quantity is simply one half of that quantity, and I can tell that half of 8 is 4 “without even thinking about it”.

The algebra of “x percent of y equals y percent of x” is completely straightforward:

(x/100) × y = (x × y)/100 = (y/100) × x

Percentages are proportions, and in many situations we can figure out proportions of cardinal numbers (i.e. how many members are in a given set) even when we don’t know the cardinal numbers themselves. For example, I don’t know how many Uruguayans there are, but I do know that about half of them must be female, because Uruguayans are human, and about half of all humans are female.

Proportions are expressed by fractions, and the algebra of fractions can be useful in many other less-than-obvious ways. For example, for any quantity z and any non-zero values of x, y and a:

(z/x) × (x/a) = (z/y) × (y/a)

(Both sides are simply equal to z/a.)

If we divide both sides of this equation by x/a, we get:

z/x = [(z/y) × (y/a)] ÷ (x/a)

Let us call this last equation Φ. Please note that Φ is true for any value of z and all non-zero values of a, x and y, no matter what quantities they stand for. For example, suppose Zeke has spoken to his rival Xavier, but they have yet to meet in person. Zeke wonders: when they do meet, will he be the taller of the two? If so, how much taller? He knows where he stands height-wise with Yuri, because they shared a house when they were students. He has a photograph of Yuri standing next to Angela, and has seen Angela standing next to Xavier in Zoom meetings, and he has noted their height ratios. Even if he does not know any of their actual heights, this is enough information to estimate how much taller (or shorter) than Xavier he will turn out to be. The usefulness of equation Φ isn’t limited to special quantities, such as those that are no greater than one, or to positive values only, or even to real numbers. But it really comes into its own when it is applied to sets which intersect. (In what follows, I use the words ‘set’ and ‘class’ interchangeably.)
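
For concreteness, here is a minimal sketch of Zeke’s calculation in Python; the height ratios are invented purely for illustration.

```python
# Zeke's estimate via equation Φ, using invented height ratios (purely hypothetical).
z_over_y = 1.05   # Zeke is about 5% taller than Yuri (from their student days)
y_over_a = 0.95   # Yuri is about 5% shorter than Angela (from the photograph)
x_over_a = 1.10   # Xavier is about 10% taller than Angela (from the Zoom meetings)

# Equation Φ: z/x = [(z/y) × (y/a)] ÷ (x/a)
z_over_x = (z_over_y * y_over_a) / x_over_a

print(round(z_over_x, 3))   # 0.907 -- Zeke can expect to be roughly 9% shorter than Xavier
```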

Suppose that, within a larger “universal” set A (with cardinal number a), a set X (with cardinal number x) and a set Y (with cardinal number y) have intersection Z (with cardinal number z):

[Venn diagram: the universal set A containing two overlapping circles X and Y, whose overlap is the intersection Z.]

As an aid to the imagination, suppose A represents the class of all animals on earth. (“A is for animals” — to make things a bit easier to follow, I’ll use memorable combinations of letter-names and sets, even if they get a bit silly.) If we were able to count each animal individually, we would arrive at the (very large) number a. And suppose X represents the class of animals that lay eggs. (Mnemonic: the letter X sounds like “eggs”.) If we could count these individually, we would again arrive at another large number x, although it would be a considerably smaller number than a. Now suppose Y represents the class of snakes. (Mnemonic: the letter Y looks like a forked tongue.) The number y would also be large, but again, it would be smaller than a. Some snakes give birth to live young, and other snakes lay eggs, so the sets X and Y intersect, making the set Z. (Mnemonic: the letter Z looks like the numeral 2, here standing for the set of animals that belong to both X and Y.) Each of the sets A, X, Y and Z is non-empty, so each of the numbers a, x, y and z is non-zero.

In real life, counting of the above sort is practically impossible, of course. But often we are able to estimate proportions of large numbers like these, just as a moment ago I was able to estimate what proportion of Uruguayans are female. And with equation Φ, proportions rather than cardinal numbers are all we need to know.

Equation Φ allows us to calculate z/x (the proportion of egg-layers that are snakes) from other proportions that we already know, namely z/y (the proportion of snakes that are egg-layers), y/a (the proportion of animals that are snakes), and x/a (the proportion of animals that are egg-layers).
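
To make the recipe concrete, here is a minimal sketch in Python. The cardinal numbers are invented purely for illustration; the point is that Φ recovers z/x from the three known proportions, and the raw counts appear only so that the answer can be checked directly.

```python
# Invented cardinal numbers, for illustration only -- not real zoological counts.
a = 1_000_000   # all animals (A)
x = 700_000     # egg-layers (X)
y = 4_000       # snakes (Y)
z = 2_800       # egg-laying snakes (Z, the intersection of X and Y)

# The three proportions we are assumed to know already.
z_over_y = z / y    # proportion of snakes that are egg-layers
y_over_a = y / a    # proportion of animals that are snakes
x_over_a = x / a    # proportion of animals that are egg-layers

# Equation Φ: z/x = [(z/y) × (y/a)] ÷ (x/a)
z_over_x = (z_over_y * y_over_a) / x_over_a

print(round(z_over_x, 6))   # 0.004 -- the proportion of egg-layers that are snakes
print(round(z / x, 6))      # 0.004 -- the same figure, computed directly from the counts
```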

There’s a field of study in which we routinely deal with proportions rather than with cardinal numbers, namely, statistics. In statistics, equation Φ is widely known as Bayes’ Theorem, although in the present form it hardly deserves a title that suggests the need for a “proof”, as it’s such a modest bit of algebra. A brief digression into statistical probability is all we need to re-write Φ in terms more familiar to students of statistics.

One of my main contentions is that we can (and should) understand all numerical estimates of “probability” in terms of proportions like the ones we’ve just been discussing. For example, consider the claim that “the probability of throwing heads when tossing a fair coin is one half”. All this means is that if we were to toss a fair coin an indefinitely large number of times, the proportion of heads that result would approach one half. Note that the actual number of tosses (i.e. the cardinal number of the set of tosses) is not specified — the more the merrier. What matters is the proportion as a limiting value.
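
A toy simulation illustrates the idea; the exact figures will differ from run to run, and nothing hangs on them.

```python
import random

# Toss a simulated fair coin ever more times and watch the running proportion
# of heads settle towards one half.
for n in (100, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"{n:>9} tosses: proportion of heads = {heads / n:.4f}")
```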

In other words, numerical probabilities should be understood as relative frequencies among pluralities of things or events. So understood, the probability of a specific sort of occurrence (such as getting “heads” when tossing this coin) is the relative frequency of that sort of occurrence in a larger reference class of occurrences (such as tosses of this coin). Since the former is a plurality of things that are relevantly similar to one another in a specified way, I’ll call it the specific class.

Unhappily, most talk of probability has the superficial appearance of referring to a single or isolated event. We often speak of “the probability of tossing a heads” — the indefinite article suggesting a singular event — in order to identify a specific sort of event, i.e. a plurality, the specific class. This can lead us astray. In fact I think this quirk of language is partially responsible for an unfortunate tendency to treat numerical probabilities as expressing something about beliefs — they supposedly “measure credibility”, or something of that sort.

Another quirk of language comes into play too: the reference class is often left implicit. I think these last two façons de parler are important sources of philosophical error, and I will return to them. For now, please note that it is salutary to explicitly identify the classes involved — and they always are involved, even if one of them “goes without saying”.

How does this work with our illustrative example? Let’s re-phrase one of the numerical proportions we’ve been discussing in terms of probabilities. Take the proportion of snakes that lay eggs, say. This is the ratio of the cardinal number of the intersection Z (= the set of animals that are both snakes and egg-layers) to the cardinal number of the larger set Y (= the set of animals that are snakes) in the Venn diagram above. In other words, it’s the fraction z/y. This quantity corresponds to “the probability of a snake being an egg-layer”. That might sound like a rather odd way of putting it, as we have to reach for artificial situations (such as that of Noah, wondering whether this or that snake is an egg-layer as they randomly slither aboard the Ark). However artificial such situations may sound, the containment of sets involved here is the very same as that of situations where we find it quite natural to talk of probability — “the chances of getting heads when tossing a coin”, “the probability of a child being female”, “the likelihood of an Irish person having red hair”, and so on. All such claims depend on a specific class (of coin landings with the heads face up, of births of female children, of people having red hair, etc.) and a reference class (of coin-tosses, of human births, of Irish people chosen at random, etc.). Any numerical proportion so ascribed is a fraction whose numerator is the cardinal number of the relevant specific class, and whose denominator is the cardinal number of the relevant reference class.

I mentioned above that although a reference class is always present in the background of any claim about numerical probability, it’s often left implicit. This happens when the reference class is the universal class, given the context. (Quite often, several universal classes can work just as well for a given context.) For example, a claim about “the probability of an animal being a snake” might be re-phrased as an apparently simpler claim about “the probability of being a snake”, as long as the context makes it clear we are only talking about animals. In such a context, y/a, the numerical proportion of animals that are snakes, corresponds to “the probability of an animal being a snake”, which, using a customary notation, can be written as P(Y). Likewise, x/a, the numerical proportion of animals that are egg-layers, corresponds to “the probability of being an egg-layer”, which can be written as P(X).

In general, a fraction like s/r — expressing a ratio of cardinal numbers of a specific class S and a reference class R — can be understood as “the proportion of members of class R that are also members of class S”. The numerator s cannot be greater than the denominator r, because S is a subset of R. In terms of probability, it expresses “the probability of members of R also being members of S”.

The reference class of “the probability of a snake being an egg-layer” is not the universal class A of animals in general, but rather Y, the set of snakes, so it has to be made explicit. One way of putting this is to say that “the probability of a snake being an egg-layer” is a conditional probability: the proportion in question expresses the probability of an animal’s being an egg-layer given the prior condition that it is a snake. I don’t much like this way of putting things, since it may suggest — wrongly — that some probabilities do not have a reference class at all.

Given our current schema of letters, and the customary notation for so-called conditional probability, z/x — the numerical proportion of egg-layers that are also snakes — corresponds to “the probability of an egg-layer being a snake” or in other words “the probability of an animal’s being a snake given the prior condition that it is an egg-layer”. In customary notation, this is written P(Y | X). Likewise, z/y — the numerical proportion of snakes that are also egg-layers — corresponds to “the probability of a snake being an egg-layer” or in other words “the probability of an animal’s being an egg-layer given the prior condition that it is a snake”. It is written P(X | Y).

So we have x/a = P(X), y/a = P(Y), z/x = P(Y | X), and z/y = P(X | Y). Substituting these into Φ:

P(Y | X) = [P(X | Y) × P(Y)] ÷ P(X)

This is the version of Bayes’ Theorem that is most familiar to students of statistics. There’s no doubt that this formula is very useful to statisticians, and to anyone who has to think about numerical probabilities. But more has been claimed for it — much more than is warranted, or so I will argue in future posts.
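
Before moving on, a quick sanity check: with the invented counts from the earlier sketch, the formula gives the same answer as the plain proportion z/x, a reminder that the probability notation adds nothing to the underlying arithmetic.

```python
# Reusing the invented counts from the earlier sketch (hypothetical numbers only).
a, x, y, z = 1_000_000, 700_000, 4_000, 2_800

P_X = x / a           # probability of an animal being an egg-layer
P_Y = y / a           # probability of an animal being a snake
P_X_given_Y = z / y   # probability of a snake being an egg-layer, P(X | Y)

# Bayes' Theorem: P(Y | X) = [P(X | Y) × P(Y)] ÷ P(X)
P_Y_given_X = P_X_given_Y * P_Y / P_X

print(round(P_Y_given_X, 6))   # 0.004
print(round(z / x, 6))         # 0.004 -- identical to the proportion z/x
```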

Conclusion

What is the relevance of all this? I hope I have convinced you that there is nothing mysterious or magical about Bayes’ Theorem — it can be derived from some rather elementary algebra, which can be applied to any sort of quantity, real or complex, positive or negative, as well as to the cardinal numbers of sets, as long as three of the four relevant quantities involved are non-zero.

The idea that Bayes’ Theorem captures the essence of rationality, or anything like that, is inspired by bad, old-fashioned epistemology. According to the tradition of Plato and Descartes, knowledge is justified true belief, and the main epistemic challenge we face is to seek and achieve justification. In Descartes’ time, the ideal of justification was certainty — i.e. “total” justification. Nowadays, with the development of an arithmetical treatment of probability, the ideal instead is supposed to involve partial but numerically measurable justification, often referred to in terms of “degrees of belief” — i.e. supposed measures of credibility or assurance. According to this very common way of thinking, probability is understood epistemically, as “how much we are entitled to believe” that a claim is true, rather than non-epistemically, as I have characterized it, in terms of relative frequencies. Inquiry is typified as a decision-procedure in which “what we go on to believe” is decided by “what we believe already”.

The way of thinking that I oppose takes belief to be one-dimensional (as it must be if it is to be “measured” on any sort of numerical scale) and inquiry to be one-directional.

Quine remarked that modal logic was “conceived in sin” (the sin of confusing use and mention). I think the application of Bayes’ Theorem to measures of credibility is similarly conceived in sin: the sin of confusing the subject matter of beliefs (what they are about) with the manner in which beliefs are held (the “degree” of their strength or credibility). Almost invariably, its defenders choose as “typical examples of belief” those that occur in gaming. To put it in the starkest terms, they treat the belief that “one in six throws of a pair of dice result in doubles” as a belief held to a degree of one sixth that “any given throw will result in doubles”. As I mentioned above, our linguistic habit of using singular terms to refer to pluralities aids and abets us in this sin.

In a succession of posts to follow this one, I will defend an alternative understanding of knowledge and belief, which I call contextual reliabilism, in which justification — or at least justification as traditionally understood — plays a very minor role. It rejects numerical measures of how much anything ought to be believed, and indeed of how much anything actually is believed. In simple terms, my view is that at any given time a given agent either does believe something or does not believe it, and that what is or is not believed is either true or false. A belief might be more or less well-entrenched in an agent’s belief system, but that is a highly contextual matter which resists numerical treatment. I will argue that beliefs are not one-dimensional as they would have to be to be measured on a scale, but “multi-dimensional”: each belief is embedded in the believer’s system as a node in a web (to use Quine’s famous metaphor). And inquiry is not a one-directional march forwards, but rather a chaotic dance between our theories and the various aspects of reality they purport to describe.

The sacrifice of the innocent

[This paper was originally delivered as an informal address to the Icelandic Philosophical Society, September 11, 1994]

Is it ever morally right to sacrifice an innocent person for the greater good of some other people? Our immediate reaction might be to say No. But I will argue that in some circumstances, which are not especially unusual, it is morally right. In the most ordinary sense of the word, sacrifice is often morally obligatory.

Cases in which an innocent person is sacrificed are philosophically interesting, but academic philosophers don’t like to discuss them unless they can be used as obvious examples of moral wrongdoing. The usual way in which the sacrifice of an innocent person enters philosophical discussion is as follows. We “test” our moral theories by examining what they tell us about how we ought to behave. If a moral theory tells us to sacrifice an innocent person in a situation where it is obviously morally wrong, then we must abandon that moral theory and look for a new one.

That is fine as far as it goes, but it doesn’t go far enough: it only tells half the story. Below, I will give a few examples of situations in which the sacrifice of the innocent is not obviously morally wrong. Regrettable and tragic though these cases of sacrifice certainly are, they are actually morally right. Any such case must be regarded as the “test” of a moral theory, just as much as the more familiar “tests” in which sacrifice is morally wrong. For a moral theory to be acceptable, it mustn’t just tell us not to sacrifice innocent people when it is obviously morally wrong to do so — it must also tell us that we should sacrifice them when it is morally right to do so. So a moral theory can fail such a “test” in at least two distinct ways. And when a moral theory is mute — where it offers no guidance at all on awkward matters of life and death — we must say that it fails in yet a third way. We look to it for help, but it doesn’t give us any.

Most of the moral theories currently in circulation fail in this third way. It seems that their proponents would rather keep silent than admit that we are sometimes morally obliged to do very awkward, painful, or unpopular things. That partly explains the enduring popularity of such theories: they are pleasantly undemanding. Academic philosophers usually overlook this sort of failure because of a narcissistic assumption that morally good behaviour means never having to “dirty one’s hands” with a cold or heartless-looking act.

However, I believe there is a moral theory that doesn’t fail in any of the ways just described. And it has numerous other virtues as well. It is called “preference utilitarianism”, and one of my main objectives is to explain and to defend it.

What is a Moral Theory?

I have been using the expression ‘moral theory’ over and over again. What do I mean by a moral theory? — When we make reasoned moral judgements, we implicitly appeal to our basic moral principles. A moral principle works a bit like the definition of a word — a definition of the sort that can be used to make decisions about whether the word in question properly applies. For example, suppose we come across a quince for the first time, and wonder whether it is a citrus fruit. The issue might be settled by appealing to an explicit definition of ‘citrus fruit’, which tells us that citrus fruits have segments. But a quince doesn’t have segments, so the term ‘citrus fruit’ doesn’t properly apply to it. In its modest way, this is a sort of discovery, and it was facilitated by an explicit definition of a word.

In effect, a moral principle is an explicit “definition” of morally right action, and we can use it to judge whether any particular act falls under that heading. So if we are contemplating doing something, and wonder whether it is morally right or wrong, an appeal to a moral principle might decide the matter. The question whether a quince is a citrus fruit was decided in a similar way by the definition of ‘citrus fruit’. But just as a definition must be rejected if it fails to agree with the way a word is properly used, a moral principle must be rejected if it fails to agree with our most strongly held moral intuitions.

Divine Commands Theory

An example may help here. Many people — perhaps the majority of the world’s population — base their moral judgements on (what they understand as) God’s commands. In other words, they accept the following principle:

(DC) An action is morally right if and only if God commands it.

To use this principle to make moral judgements, whoever accepts it would have to have some additional beliefs. For example, a Jew or a Christian would normally believe that God exists, that God is for the most part benevolent, that the Ten Commandments are an expression of God’s commands, etc. And he would probably have some further views about why we ought to do what God commands, about the nature of morality, and so on. These additional beliefs and assumptions are the “packaging” that has to be accepted along with the central principle (i.e. DC, above). Its finer details may differ slightly from one individual to the next — for example, one person may think that God is all-good but not all-powerful, while another thinks he is all-powerful but not all-good — but the central principle (DC) is the shared core.

Every central principle — moral or otherwise — goes along with some extra “packaging” like that. In other words, every such principle is embedded in a theory. This is perhaps clearest in the case of scientific law. No scientific law can ever be adopted or applied on its own. It can only be used in conjunction with some further laws and assumptions about the world. For example, take Newton’s Law of Gravity (often called the “Universal Law of Gravitation”). No one could apply this law on its own to make predictions about falling apples, orbiting planets, etc. Newton himself applied it in conjunction with his three other laws of motion, some further “Newtonian” assumptions about the nature of space and time, and some mathematical techniques for applying them to the real world. Together with his central Law of Gravity, these further laws and assumptions formed an integrated “package” — which we may as well call Newton’s “theory” of gravity.

It is because moral principles come in “packages” like scientific laws that I have been using the word ‘theory’ to refer to any such “package” as a whole. But we mustn’t take the analogy between moral and scientific theories too far. Scientific laws (like Newton’s law of gravity) are hypotheses that purport to describe the world. That is, they are attempts to guess at the way things actually are. It is reasonable to suppose that a scientific hypothesis is either true or false, depending on whether it describes things accurately or otherwise. But many philosophers doubt that moral principles are true or false in that straightforward way. On the face of it, a moral principle seems to prescribe human behaviour like an order (e.g. “Shut the door!”) rather than describing the way things actually are like a declarative sentence (e.g. “The door is shut.”). The appropriateness or otherwise of an order (i.e. a prescription) seems to have a different flavour from that of the truth or falsity of an ordinary sentence (i.e. a description).

The “package” I am trying to sell — preference utilitarianism — contains a central moral principle, and some packaging, as with every theory. But the packaging contains some quite radical revisions to our familiar ways of thinking about right and wrong. It also contains some radical revisions to our traditional ways of thinking about the mind. Much of the rest of this essay is concerned with explaining and defending these revisions against the parochialism and reactionary attitudes of our moral traditions.

Let us return to the “test” of a moral theory. The moral theory just described (whose core principle is DC, above) is called the “divine commands” theory. I chose it as an example because I think it can be used to show how we reject a moral theory if it tells us to do something that our moral intuitions tell us is obviously wrong. How might this happen?

Consider the Old Testament story of Abraham and Isaac. Many people feel uneasy about the “divine commands” theory when they think about that story. The theory says that whatever God commands is morally right, so when God commanded Abraham to sacrifice his son Isaac, the theory says that that particular act must be morally right. But, many people feel, an action of that kind couldn’t conceivably be morally right: so the theory must be wrong. The intuition that it is wrong for a father to sacrifice his son seems to be more powerful, or more trustworthy, than the opposing intuition that morality is a matter of following God’s commands.

Perhaps that explains why philosophers have never taken the “divine commands” theory very seriously. Quite apart from the story of Abraham and Isaac, which illustrates how arbitrary God’s commands might be, most philosophers think that there must be some independent way of telling which human actions are right or wrong. (If there weren’t, how could we tell with any confidence that they were God’s commands rather than Satan’s?)

So here we leave the divine commands theory. The important point is this. A proposed “sacrifice of the innocent” was felt to be so morally repugnant that it led to the rejection of the theory that proposed it.

As it happens, the very same pattern recurs, but this time with a theory that philosophers do take seriously: classical utilitarianism.

Classical Utilitarianism

The next moral theory I want to consider is very simple, in the sense that it can be learned in a minute or two. Yet it is also very powerful, in the sense that it enables us to make moral judgements about almost any human action at all. These are both highly attractive features of any theory.

And it has other attractive features as well. One such feature is that it explains how a person might be motivated to do the morally right thing. Utilitarianism’s most famous modern proponents, J.S. Mill and Jeremy Bentham, were justly very proud of this feature of their theory.

Here is how it works.

Look at any individual human action, taking account of the circumstances in which it is performed. Try to estimate the likely consequences of the action. Some actions lead to an overall increase in the amount of pleasure (or a decrease in the amount of suffering) for all those affected: these actions are morally right. Other actions lead to an overall decrease in the amount of pleasure (or an increase in the amount of suffering) for those affected: these actions are morally wrong.

Sometimes every possible course of action, including that of not acting at all, leads to a decrease in pleasure or an increase in suffering. In such a case, the morally right action is the “least of a number of evils”.

Since nothing matters except an action’s consequences, the morality of any action is just the same thing as its utility, or “usefulness” in bringing about the specified end. For classical utilitarianism, a kind of hedonism, the end is pleasure, or the elimination of suffering. It is aimed at pleasure in general, where everyone affected counts for the same. (The agent doesn’t count for more or less than anyone else, so this is not “selfish” hedonism.)

Classical utilitarianism’s basic moral principle, its “definition” of morally right action, would go like this:

(CU) An action is morally right if and only if it tends to maximize pleasure.

Why would a person adopt such a principle? Or, in other words, what is the motivation for acting morally? We all enjoy our own pleasure, and hate our own pain. It is a short step from here to saying that we always want to get as much pleasure as we possibly can, and to avoid pain whenever we can. Furthermore, many of us have “fellow feelings” for other creatures. Those among us who have these feelings will want to try to maximize pleasure, no matter whose pleasure it is. In other words, they will want to do the morally right thing.

Not everyone feels that way, of course, but perhaps the right sort of influences in childhood might help bring more people to feel that way in the future.

Let us now apply the theory to real life. Classical utilitarianism would say that the following sorts of action are almost always morally right:

· giving to charity

· being kind to animals

· maintaining a reasonably equitable distribution of goods

· giving individuals in society “private space” in which to pursue activities that they enjoy

· any sexual practice, as long as it is done in private between consenting adults

· voluntary euthanasia

So far, we have compiled a list that most modern liberals would be happy to endorse.

However, despite all these agreeable features, there is something terribly wrong with classical utilitarianism: like divine commands theory, it seems to tell us to do things that are obviously morally wrong. It says that, under certain circumstances, we ought to “sacrifice the one for the many”.

For example, imagine a friendless, homeless down and out, who suffers the mental anguish of his lot and, let’s say, the physical pain of an illness. He is a burden to society and he suffers a great deal. However, he has a healthy liver, kidneys, heart, and other internal organs. Consider the action of killing him painlessly, without warning, in order to use his body parts for transplanting, to relieve the suffering of a number of others. Since this action would lead to a decrease in the net amount of suffering of those affected, and to an overall increase in pleasure, classical utilitarianism would have to say that such an action is morally right.

I don’t know of anyone who would be prepared to accept this conclusion.

This problem has led most present day philosophers to reject utilitarianism, and to adopt a very different sort of moral theory, usually one that takes morality to be essentially a matter of following abstract rules and regulations.

Now I do not think that morality has much to do with following rules and regulations. Utilitarianism does not rely on such things, so I would like to see it somehow getting over this problem.

But we mustn’t underestimate our difficulty here. Classical utilitarianism doesn’t just tell us to sacrifice people’s lives. It also seems to demand the sacrifice of freedom and happiness, as long as the numbers of those adversely affected are relatively small. For example, imagine a slave society in which a small number of slaves serve a large number of slave owners, thereby bringing them great amounts of pleasure. If, on balance, the outcome of this arrangement is more pleasure and less suffering than any other arrangement, then classical utilitarianism seems to say that it is morally right.

I take it that this result is completely unacceptable to everyone. It would certainly not have been welcomed by J.S. Mill or Jeremy Bentham, both vigorous opponents of slavery.

However, I think a very simple solution to the problem is already available. We should not reject utilitarianism, but modify it. The modification is very basic, but it can be made cleanly. The modified theory is even simpler and more powerful than its predecessor. It explains the motivation for moral action better than its predecessor. And it meshes with the best philosophical understanding of the mind that we have so far.

To the best of my knowledge, the modification was first explicitly spelled out in 1979 by Peter Singer in his book Practical Ethics. But the main idea can be found in the writings of J.S. Mill, and there are some intriguing hints in the writings of Edmund Burke. Singer, however, is the one who gives the theory a name: preference utilitarianism.

Preference Utilitarianism

Singer’s moral theory is just like classical utilitarianism in most respects. Actions are still to be judged solely on their likely consequences. The morality of an action is still equated with its utility, that is, with its “usefulness” in helping to bring about some specified end.

The difference between the two theories is this. Classical utilitarianism was hedonistic: it aimed at pleasure. Preference utilitarianism, by contrast, aims at something rather different, namely, the satisfaction of desires.

So the modification is a simple alteration to the theory’s basic moral principle. All we have to do is change our earlier “definition” of a morally right action as follows:

(PU) An action is morally right if and only if it tends to maximize the satisfaction of desires.

By “desire”, I mean any state of the mind that causes us to move towards some more or less determinate goal. It might be anything from the mildest inclination to the most unshakable determination. It might be accompanied by powerful feelings of physical lust or emotional longing, or by a purely intellectual sense of duty, or anything in between. It might even be something completely unconscious, in the sense that the agent might not know that his own behaviour is in fact directed towards some goal or other. I take ‘desire’ and ‘want’ to be roughly synonymous, but we use many words to express desires.

The most important thing to notice about desires, so understood, is that they come in various strengths. This helps to explain why we have so many words for desire. A very weak desire might be a mere “inclination”, whereas a very strong desire might be a “need”.

If we aim to maximize the satisfaction of desires, following preference utilitarianism, we must take account of how strong they are, as well as taking account of how many people have them. To find out the morally right thing to do, we have to calculate which of various courses of action is likely to lead to the greatest overall satisfaction of desires. Sometimes this means satisfying one big desire and thwarting a number of lesser desires. But if the desires in question are of roughly equal strength, the numbers of those who have them will matter more.

“Preference” utilitarianism is so called because the strength of our desires is revealed in our preferences. In general, an agent has a stronger desire for A than for B if he prefers courses of action that bring A closer to those that bring B closer.

Let’s take a few examples to illustrate this idea.

1. Adolf Hitler once remarked that he would rather have two or three of his own teeth extracted than go through another meeting with General Franco. Even if he was not being completely honest, this remark says quite a lot about Hitler’s mind, by revealing the nature and strength of some of his desires.

2. In the 1960s, an American cigarette manufacturer adopted the following advertising slogan: “I’d walk a mile for a Camel”. Even committed smokers would probably only walk a mile for a packet of twenty Camels. So the slogan suggested that Camel cigarettes are twenty times more desirable than they really are. The link between preference and strength of desire enabled the slogan to make this suggestion.

3. Here’s a common enough scenario: a man drives home from work, bringing his weekly pay packet back to his wife and children. On the way, he passes various sources of pleasure and objects of desire — prostitutes, a bar, a restaurant, bookshops, etc. If he passes them by, it is because his desires for transient physical or intellectual pleasure are weaker than his desire for the security of his family.

Examples like these suggest a method for measuring the strength of a person’s desires. Examine his behaviour, and see which desires override which.

We could adopt a very similar method for measuring the strength of materials, by seeing which can cut through which. For example, steel is stronger than wood, because it is always able to cut through it, but wood is stronger than butter, for the same reason, and butter is stronger than air.

Analogously, in a normal person, the desire to stay alive is stronger than the desire to have a new car, and the desire to have a new car is stronger than the desire to buy today’s newspaper, and the desire to buy the newspaper is stronger than the desire to stay indoors all day.

We can apply this method of measuring the strength of desires to animals as well as to humans. How can we tell when a horse’s desire to relieve its thirst is stronger than its desire to stay away from a fire? When it decides to run through the blazing stable in order to reach the water.

I would claim that preference utilitarianism is even simpler and more powerful than classical utilitarianism, and that it explains moral motivation better:

The modified theory is simpler because the concept of desire, as I have defined it, is simpler than the concept of pleasure.

The modified theory is more powerful because it can be applied to a wider range of situations: desire is more readily observable than pleasure. (I just described how to measure the strength of an animal’s desire: but how could we possibly measure quantities of an animal’s pleasure?)

Finally, the modified theory gives us a cleaner account of moral motivation. The previous version relied on a dubious claim that we always want to do things that bring us pleasure. But sometimes we defer pleasure, or do masochistic things. The new version relies on the trivial truth that we always want to do what we desire to do. If we have the sort of “fellow feeling” that makes us see the goals of others as desirable, then we have a reason to do the morally right thing.

This brings me to one of the central points of my talk. Some desires are so much stronger than others that we must regard them as being of a different order of magnitude. Just as a steel knife will cut through any amount of butter, no matter how great the quantity, the normal person’s desire to remain alive is stronger than other normal people’s desires to buy today’s newspaper, even when added together. There is no multitude of people, however large, whose collective desire to buy today’s paper could be greater than the normal individual’s desire to remain alive.

If desires can be of different orders of magnitude as I have just suggested, but this fact is not widely recognized, then certain questions, which seem perfectly meaningful, may yet fail to have a determinate answer. For example: How much money would a normal person accept in exchange for the life of one of his children? There is no determinate answer to this question. In most people, the desire for the safety of one’s children is immeasurably greater than the desire to be rich.

Desires for one’s own life and liberty, and for the life and liberty of one’s children, are typically immeasurably stronger than desires for creature comforts.

We can apply this idea to our earlier problem of the sacrifice of the friendless down and out as follows. If he is not suicidal, we may suppose that the strength of his desire to live is much greater than the combined strength of the desires of all those who would benefit from his death. Even if there were a hundred of them, or a thousand of them, his desire would be greater than all of theirs added together, because it is of a different order of magnitude. His desire is to continue living, while their desires are merely for greater physical comfort.

But much more important than his desire to live is his desire to retain his autonomy, to be the one who decides whether he lives or dies and, if he is to die, when. We would have to take account of this desire even if he were suicidal. Indeed, many a suicide attempt might be understood as an assertion of autonomy, that is, as an act of deciding the hour and place and character of one’s death for oneself. If this is true, it suggests that the desire to be autonomous, to control one’s own life, is often even stronger than the desire to live.

This desire for autonomy is perhaps our most distinctively human characteristic. It is immeasurably stronger than most of the desires it might be set against in the sort of utilitarian calculations we have been considering.

How would our modified theory treat the case of the down and out? Preference utilitarianism says it is morally right to maximize the satisfaction of desires. Given their relative strengths, it is morally right to make sure that the down and out’s desire is satisfied, while the transplant recipients’ desires are thwarted. So, unlike classical utilitarianism, our new theory does not recommend the sacrifice of the innocent in this case.

Similar considerations apply to the example of the slave society. The desires of slaves to be free would always outweigh the desires of slave owners to keep slaves, however tiny the number of slaves or however large the number of slave owners. That is because, in general, the desire to be free and autonomous is immeasurably stronger than the desire to have other people do work for you. So preference utilitarianism says that slave societies are morally wrong.

If human beings were very different, and we did not have this overriding desire to retain control of our own lives, then actions which take a person’s control away would not be so terribly wrong. But taking us as we actually are, preference utilitarianism does say that such actions are terribly wrong. This is why crimes like rape and kidnapping are in the same league, morally, as murder.

Conclusion

In most circumstances, preference utilitarianism agrees with common sense and tells us that it is morally wrong to sacrifice innocent people. The examples that prove so destructive to classical utilitarianism do not pose a threat to preference utilitarianism.

However, it has to be said that preference utilitarianism does tell us that sometimes we are morally obliged to sacrifice innocent people. I have chosen four examples to illustrate this point.

1. In William Styron’s novel Sophie’s Choice, the eponymous heroine is a prisoner in a Nazi concentration camp. The crux of the story occurs when she is forced to choose between her two children. That is, she must decide which of the two is to be taken away to be killed in the gas chambers, and which is to be allowed to escape with her. In making her choice, she thereby saves one of her children; but her choice is no less the sacrifice of an innocent person for that.

Preference utilitarianism would say that her action was not immoral, since either option would thwart roughly equally strong desires, whereas refusing to make a choice at all would have thwarted even stronger desires. To make a choice was the lesser of two evils, and so it was morally obligatory.

Of course, the actions of those who forced her to choose were extremely morally wrong, as they would inevitably lead to the thwarting of the strongest desires of at least two people.

This story comes from a novel, but I would be surprised if situations like this imaginary one had not occurred during the Second World War.

2. In 1989, a fire broke out in the reactor room of the Soviet nuclear submarine K-278 Komsomolets while it was at sea. Horrific scenes followed. Fire and radiation spread quickly to many other parts of the submarine. In order to save themselves, crew members in Compartment 10 closed the bulkhead door to Compartment 9, thereby condemning its occupants to death. If they had not closed the door when they did, perhaps a few more men might have made it into Compartment 10, but the delay would almost certainly have meant the death of everyone in Compartment 10 as well. The action of closing the door was the sacrifice of innocent people, or so it seemed to those who did it.

But again, preference utilitarianism would say that the men in Compartment 10 did the morally right thing. Everyone involved had strong desires to continue living. The closing of the bulkhead door thwarted many of these desires. But leaving it open would have thwarted all of them.

3. Every year, hundreds of people are killed in road accidents in Ireland. These innocent people are “sacrificed”, in a sense, because although we know that roughly this number of innocent people will die, and that their deaths could be prevented by simply banning all road traffic, we choose the course of action that leads to their deaths. We do so for no more noble reason than that we want to be able to make everyday journeys quickly and easily.

Preference utilitarianism would defend this decision as well. The crucial factor is not that the risks are very slight, but that they are undertaken willingly. Suppose driving were much more dangerous than it actually is, so that getting into a car was practically an act of suicide. Preference utilitarianism would still say that it is morally right to allow people to drive, as long as they are doing it willingly.

Since present arrangements allow drivers to do what they want to do, and alternative arrangements would prevent them from doing what they want to do, preference utilitarianism says that the arrangements should stay as they are.

4. My final example is too complicated to go into in much detail. I will outline the main ideas. It has to do with our arrangements for protecting ourselves from criminals, and how we protect innocent people from those very arrangements.

Any system of criminal justice, if it is to successfully put criminals into prison, runs some risk, even if very slight, of putting innocent people into prison as well. As the years pass, and thousands upon thousands of cases are processed, even the very best system we can conceive of will inevitably “sacrifice” some innocent people.

The only way to be absolutely sure that no innocent people are sacrificed would be to have no system of criminal justice at all. That would lead to an explosion of crime, including murder, which would be another, worse kind of sacrifice of innocent people.

So the only real option available is one of various possible systems along a spectrum ranging from the very mild to the very severe. A system is “mild” if it puts away a small proportion of criminals. This sort of system would sacrifice a correspondingly small number of innocents, but it would permit a correspondingly large number of innocents to fall victim to crime.

A system is “severe” if it puts away a large proportion of criminals. This sort of system would sacrifice a correspondingly larger number of innocents than the mild system, but it would permit a correspondingly smaller number of innocents to fall victim to crime.

How severe should a system of criminal justice be? Preference utilitarianism can help us to answer this question. It depends greatly on circumstances. Among these would be: how strongly we want to avoid becoming victims of serious crime; how strongly we want to avoid being imprisoned for crimes we did not commit; how much we mind being imprisoned for crimes we did commit; and so on.

These desires can reasonably enter the calculations because their relative strengths can be compared: they are of the same order of magnitude because they all have to do with life and liberty.

It should be possible, in principle anyway, to estimate the degree of severity of the system that corresponds to the greatest satisfaction of desires. Preference utilitarianism says we ought to maintain a system with just that degree of severity.

In extreme circumstances, where there is a lot of crime, the system may need to be very severe indeed, and for certain kinds of crime we may even be obliged to suspend “normal” procedures such as trial by jury. So be it. As long as our arrangements maximize the satisfaction of desires, where these have been given due weighting according to their strength, we are doing what we ought to do.

[The following was written in the early 1990s — JB 2007]

A debate is taking place in Ireland at the moment over whether those thought to be involved in terrorist crimes should be interned without trial. A normal trial by jury is not possible for many terrorist offences, because both witnesses and members of the jury are liable to be intimidated, or worse. For terrorist-type criminal offences, judicial decisions to imprison people are already taken by the so-called “Special Criminal Court”, in which three judges work in consultation with high ranking police officers. This is a less-than-ideal alternative to “normal” criminal law courts with sworn witnesses and juries, and the rest of the apparatus of criminal justice that ideally ought to be in place.

Many of those who oppose internment point out that an inevitable consequence would be a greater number of innocents sacrificed by the system. But, as we have seen, such arrangements may be the least of a number of evils, and so might be morally right. The question whether we should or should not fall back on these less-than-ideal arrangements cannot be answered with the simple sweeping statement that it is always wrong to imprison people without a full jury trial. We have to take account of unusual circumstances.

The preference utilitarian would thus agree with the Irish philosopher Edmund Burke, who held that “government is a contrivance of human wisdom to provide for human wants”. He would also agree with Burke’s claim that “circumstances are what render every civil and political scheme beneficial or noxious to mankind”. Given any particular set of circumstances, we may want to deny the necessity of extreme measures. But to deny the necessity of sacrificing innocent people under any circumstances would be hypocrisy. Every system of criminal justice entails the sacrifice of the innocent.

What are emotions?

Some mental states are selected for (by Darwinian natural or sexual selection) because they cause specific kinds of behavior. I think an emotion is such a mental state combined with a perception on the part of the agent that they are in that state.

The perception aspect of it is essential, I think. We may talk in shorthand of “angry wasps”, but wasps only behave as if they’re angry, unlike dogs, say. To genuinely have an emotion of anger, agents must perceive themselves as being in that state, so that they have a distinctive experience of it, and can reflect on it a bit, maybe even giving some thought to “giving it free rein” or “reining it in”.

To perceive our own mental states, we need to recognize them for what they are. This involves identifying them as belonging to this or that category. How do we learn to do this? First, we learn to recognize the mental state in question by the distinctive behavior it causes in others. Then we learn to “spot the signs” or inner feelings in ourselves — urges, if you like — that accompany our own propensity to behave in similar distinct ways.

So how “private” are emotions? Natural selection creates and shapes emotions for the behavior they cause: in other words they have a public purpose. And furthermore, we initially identify and categorize emotions with reference to public criteria. This makes emotions a much more public matter than has traditionally been supposed. Like the identification and categorization of private experiences of color, the identification, categorization, and therefore understanding of emotions depends on shared activities. Yes, with effort we can sometimes keep an emotion under wraps, but this is the exception rather than the rule. And we usually only try to do so when we think it’s to our social advantage to do so, social advantage being another thing we learn to recognize through swimming in a social sea.

Obviously, emotions have both public and private aspects, and because it can be advantageous to have a particular emotion or to seem to others to have it, they are liable to be faked. (Keeping emotions under wraps as mentioned above is one sort of fakery.) In other words, in some respects they work like signals. Rather as butterfly wings can exhibit fake eyes, some expressions of emotion aren’t quite what they seem. There can be social advantages in faking contrition, for example, or in faking emotional attachment.

Here things get a bit murky, so what follows is tentative. In general, the possibility of fakery can result in a sort of arms race in which potential dupes get better and better at spotting fakery, while fakers get better and better at producing it. Usually, “costs” increase — ideally to the point where fakers are no longer prepared to pay. But close to that threshold, both fakers and non-fakers pay a lot. Furthermore, there is a blurred line between fake and genuine emotions, because emotions are not literally true or false. Both fake and genuine expressions of emotion — and thus emotions themselves — can become a more or less deliberate “extravagance”. One person might declare their love by threatening to play the piano in public ad infinitum; another might proclaim their grief by self-harming. Are genuine or faked emotions behind such activities? It seems to me the question should be how appropriate such emotions and expressions of emotion are in the circumstances. The appropriateness in question is not moral so much as “theatrical”. Is this agent “playing to the gallery” or “laying it on with a trowel”? James Joyce expressed this idea brilliantly when he characterized sentimentality as “unearned emotion”.

Where does responsibility lie for rape?

Trains of thought like the following are very common nowadays:

  • When a woman is raped, it is wholly the rapist’s fault.
  • When a man rapes a woman, it is not in any respect the fault of the woman.
  • Women do not in fact give any sort of provocation to men to rape.
  • Women needn’t give any thought to their own supposedly “provocative” behavior.
  • Supposedly “provocative” behavior on the part of women has no causal effect on would-be rapists.

But wait. In real life, rapists are like terrorists. We may condemn their violence using words like ‘indiscriminate’, but in fact it is not wholly indiscriminate. They do choose to target some types of people in preference to others. Heterosexual rapists tend to target women rather than men to rape. Palestinian terrorists tend to target Jews rather than Arabs to stab with knives. And so on.

Rapists and terrorists are both culpably responsible for their acts of violence, but those they target do play some causal role in being chosen as a target, if only because they can be classified as a member of this or that group, or because they inadvertently provide a (wholly spurious) pretext for the rapist or terrorist to target them.

Why is this? — The cause of an event (such as a rape or a stabbing) consists of all of the conditions that together are sufficient for the event to occur. Whether we like it or not, to be identifiable as a member of this or that group is one such condition. Outrageous as it would be to recommend that Jews disguise themselves as Arabs to reduce the risk of being targeted by knife-wielding criminals, such a precaution might indeed slightly reduce their risk of being stabbed. Over the course of history, several women have chosen traditional men’s roles in life, and in doing so many disguised themselves as men. This was partly to avoid the unwanted attentions of men, and no doubt their efforts met with some success. In both of these examples, we might say that would-be victims are to some extent causally responsible for avoiding victimhood, although of course neither would be in any way culpably responsible if they became victims.

There is no such thing as a woman “asking for” or “inviting” rape, as rape is by definition non-consensual. But there are situations in which a woman can inadvertently provide a spurious pretext for rape. Remember, rapists are perverts, their minds are twisted, and pretexts for action are their chosen currency. And no matter how twisted a mind may be, causes are “brute causes” — that is to say, they are often immune to reason or appeals to justification. And causal responsibility is distinct from culpable responsibility. The failure to distinguish the two is a classic case of ambiguity in language leading us astray. “Philosophical problems arise when language goes on holiday”, Wittgenstein remarked, and the word ‘responsibility’ is a seasoned traveler between the contexts of blame and causation.

It’s also a case of confusion of “is” and “ought”. During the course of the train of thought sketched above, the idea that “men ought not to rape” (in such-and-such circumstances) subtly shaded into the idea that “men do not rape” (in such-and-such circumstances). That’s more than a philosophical error: it’s dangerous. To deny the reality of risk on the grounds that being subject to such risk is unjust is to expose oneself to greater risk.

What do equality campaigners want?

Most “equality campaigners” are confused as to what sort of equality they want. They’re unclear as to whether they’re looking for equal consideration of interests, equal opportunity, equal treatment, or factual equality.

By neglecting to reflect on what “equality” means, they tend to talk as if they value the most obvious form of equality, i.e. factual equality. Typically, they call for a reduction of de facto differences between people, as if factual equality were valuable in itself.

But a couple of reasonably well-aimed questions can reveal that they aren’t really attached to that at all. For example: Is it a good thing for children born with a disfiguring cleft palate to receive cosmetic treatment? — Of course it is. Would it be a good thing for people of exceptional beauty to be deliberately disfigured, in order to reduce the inequality of physical attractiveness that causes such widespread distress? — Of course not.

Anyone who sincerely regarded factual equality as something valuable in itself would answer Yes to that second question, since the act it describes would effectively reduce inequality. No one in their right mind would though, because no one in their right mind genuinely values factual equality per se. They’re opposed to privation, and resent people who have been luckier in life than they have been themselves.

I share their opposition to privation, and indeed I share their resentment. But I’m not going to dress the latter up so it looks like a principled attachment to “equality”.

Respecting agency

Preference utilitarianism differs from traditional utilitarianism in that it doesn’t enjoin us to maximize any sort of commodity such as pleasure or happiness. The rather murky concept of “utility” has no place in preference utilitarianism.

Instead, as Peter Singer lucidly put it, preference utilitarianism is the “minimum moral position”. To act morally, an agent should simply respect other agents’ preferences in the same way as he trivially respects his own preferences. To put it another way, preference utilitarianism enjoins us to “respect agency in general”.

Whatever we do, we do it because our beliefs and desires cause our action. Respecting agency means giving due deference to the beliefs and desires that cause an agent to act.  At the moment of acting, given the constraints of circumstances, limited options, limited time, and so on, what we actually do is what we most want to do, given what we believe. If our wants (desires) were aimed at different goals or had different strengths, or if our beliefs differed in content or degree of entrenchment, we might act differently — but we would still do what we prefer to do. As long as we are genuinely acting, our preferences always “win”.

But let’s take care to consider “scope” here. If I hand over my wallet to a mugger rather than risk death, within the narrow context of being mugged I do what I “prefer”. But considered from the wider perspective of me going about my business, I would prefer not to be mugged at all. So obviously, although we always do what we prefer in a trivial sense looked at from the narrowest perspective, we don’t always manage to do what we want in a larger sense. In other words, our preferences are often thwarted; we are often unfree. Preference utilitarianism says that to act morally, we should as far as possible prevent the thwarting of other agents’ preferences, considered from the widest perspective, ideally taking account of the entirety of the agent’s beliefs and desires.

Like other forms of genuine utilitarianism, preference utilitarianism considers each act individually, in its particular circumstances, rather than promoting a general prescription or rule. However, a fairly close approximation to such a rule is the so-called Golden Rule expressed in Matthew 7:12:

all things whatsoever ye would that men should do
to you, do ye even so to them

In other words, treat others as you would like to be treated yourself. Respect the agency of others as you respect your own agency.

But what if an agent acts with false beliefs? Wouldn’t we want others to prevent us walking out in front of an approaching bus? Or to whip out of our reach the glass of acid that we mistakenly think is a cool beer?

Indeed we would, and we should do the same for others, as long as — and this is absolutely vital — we do not overrule their overall agency in so doing. By “overruling their agency” I mean neglecting to give their beliefs and desires due deference as their own beliefs and desires, and not, please note, as true beliefs or as acceptable desires. When I grab someone to prevent them walking out in front of a bus, I assume that they have a whole bunch of other mental states, such as a desire not to be run over by a bus, a desire to reach the other side of the road uninjured, a belief that what they want can be found on the other side of the road, and so on. By overruling their belief that there is no immediate danger in stepping out into the road, I respect the many other beliefs and desires that cause them to cross the road in the first place. Overall, I respect their agency more by overruling one belief while respecting the larger whole of the rest of their beliefs and desires.

Much the same goes for keeping a glass of acid out of the reach of a thirsty beer-drinker. It is reasonably safe to assume that we respect his overall beliefs and desires more by thwarting his narrower desire to drink the contents of this particular glass.

I’d like to emphasise, again, that these infringements are justified by respect for the agent’s overall agency considered as a larger whole, and not because they are caused by false beliefs.

As with the example of getting mugged, above, much depends on “scope” here. Within the narrow context of stepping out onto the road, or of drinking the contents of this glass, it might look like prevention means agency is disrespected. But from the wider perspective of the agent’s overall beliefs and desires, we respect agency more — we give due deference to the agent’s own beliefs and desires  — by preventing this or that particular act.

In these spur-of-the-moment cases of preventing action, we rely on our psychological assumptions being correct. We assume that most people don’t want to be hit by a bus, don’t want to drink acid, and so on. We assume they believe buses and acid can kill. As with any assumption, we might conceivably be wrong, but in any case, we can easily check afterwards: we simply bring the bus or the acid to the agent’s attention. In most cases, the agent will thank us for keeping an eye out for his safety. But it might turn out that he has unusual beliefs or desires. He might doubt the presence of the bus, or strenuously resist the idea that the glass contains anything dangerous. He might be practicing his daredevil skills, or hoping to take his daily dose of vitamin C. Or he might be suicidal. In such cases, I think the preference utilitarian should respect overall agency. He should try to consider the agent’s entire system of beliefs and desires, and respect them as much as possible. If the agent persists in his beliefs and insists on his course of action, we should respect that. It might entail letting him go ahead and kill himself — or allowing him to take great risks in performing some sort of experiment in living.

As an aid to thinking about such scenarios, we can use the Golden Rule above as an approximation, and ask: What would we want others to prevent us doing ourselves? I think most of us would want others to pull us out of the way of a bus, or to grab the acid before we can drink it. We’d thank them for it. But we would not want them to overrule our overall agency, nor would we thank them for doing so. No one has the moral or intellectual superiority to legitimately thwart an act simply because they think it’s caused by an improper desire or a false belief.

Small-P protestant revolutions

Can you see a shared pattern in Brexit, the English Civil War, the Reformation, and similar uprisings of ordinary people against longer-established authorities with their widely-respected experts? I think I can, and I see all of them as “protestant revolutions” — protestant with a small P, as their opponents are not always Catholic with a capital C, but catholic in the broader sense of being more mainstream (i.e. more “universal”), more traditional, and more jealously protected by hierarchical power structures.

Perhaps I recognize the pattern quickly because I feel I have been engaged in my own protestant revolution for decades now against bad science. It’s a strictly peaceful, intellectual revolution, but there is real anger on both sides. I’ll try to explain why I feel a bit like a low-ranking partisan in such a revolution.

I’m a scientific realist. In graduate school, I put a lot of effort into defending scientific realism against the criticism of most of my teachers and every single one of my fellow graduate students. In doing so, my realism was tempered and mitigated somewhat, but remained essentially intact. I grew to appreciate the centrality of the hypothetico-deductive “method” to genuine science, and became increasingly aware of the ubiquity of pseudo-sciences that eschew it.

For decades now, “the authorities” (such as Sir Paul Nurse of the Royal Society, almost all governments, most academics and mainstream media, politically correct conformists in every walk of life) have been telling us that we must all believe “the science” — whatever the authorities deem “the science” to be.

But I think we have good reasons not to do so. First, none of these authorities ever seem to express the slightest interest in the hypothetico-deductive method that I regard as essential to genuine science. Second, the history of science teaches us that science has always had bad branches, and there are no doubt bad branches right now: there is no monolithic body of reliable opinion that deserves to be called “the science”. Third, genuine science does not ask us to accept the word of an authority. Fourth, observation plays a crucial role in science, and observation is what anyone with working sense organs can do: to dismiss non-expert opinion is to allow ideology to overrule observation. Fifth, and from a moral perspective most important of all, we are entitled to believe whatever we like. Martin Luther put it in terms of “conscience”. Elizabeth I put it by saying she “would not open windows into men’s souls”. No one has a moral entitlement to insist that anyone else must believe anything. It’s simply morally wrong to so insist.

So, like a cut-rate Martin Luther, I simply cannot believe what the authorities are insisting I should believe. “Here I stand. I can do no other.” And I’m not the only one.

It’s no coincidence that supporters of Brexit tend to be so-called “climate deniers”. Skepticism about a body of opinion supported by authoritarianism rather than observation is exactly the sort of thing that characterizes protestant revolutions. The development of the internet has worked much like printing in the original protestant revolution. Blogs and social media are replacing academic journals that no individual can afford and in which no honest writer would expect his work to be widely read. This is analogous to vernacular bibles coming to be seen as more valuable than authoritative interpretation of the Latin bible by clerics.

What strikes me as sad, or maybe just funny, about present-day “counter-revolutionaries” is that they seem not to understand why anyone in their right mind would reject expert opinion. They assume it’s completely obvious that expert opinion is better than non-expert opinion, and that only a madman or an utter fool would think otherwise. But a philosophical mistake underlies this assumption. There are two kinds of expertise, which we might call that of the “texpert” and that of the “prexpert”. A texpert (T for ‘theory’ or ‘text’) is someone familiar with a body of theory or the writings of a theoretician (such as Marx, Freud, Keynes or Hayek, say). A prexpert (PR for ‘practice’) is someone who has a demonstrable practical skill. The latter is something we rightly all admire. We wisely consult prexperts and frequently hand decision-making powers to them. The practical expertise of a prexpert might not touch on actual opinion (beliefs or claims that purport to be true) at all. But a texpert is just someone with a theory, usually one whose esotericism makes its epistemic status doubtful. If we confuse these two types of expertise, we are liable to unwisely hand decision-making powers to the wrong sort of person.

Alas, many texperts familiar with a theory T of subject-matter X flatter themselves with the thought that they “know a lot about X” instead of having a mere “familiarity with T”. Let’s not be taken in by such flattery!

 

POSTSCRIPT

On further reflection, I should add that a defining characteristic of “catholic” ways of thinking (as described here) is the assumption that “transcendent” matters are to be decided by “earthly powers”. It is this assumption that sets more puritan ways of thinking against them. To take a few examples, it’s often thought that questions of morality (of taking military action, say) or justice (of proposed legislation, say) or credibility (of a scientific theory, say) are to be decided by such earthly powers as reside in committees: a vote at the United Nations, a decision by the European Court of Human Rights, an act of some branch or other of the EU, or consensus among qualified scientists. Or indeed a decree by the Pope.

Puritans rankle at that assumption. Questions of morality, justice, truth — of what to believe — are matters of conscience, they respond, or at least matters an individual must judge for himself, because they lie beyond the competence of a committee. Such questions are transcendent in the sense that their answers are to be discovered rather than decided (i.e. created by a decision), and we are all in the same boat: we are all fallible, and committees are multiply fallible. As we nowadays put it, they are subject to groupthink.

As an example of puritanical thinking, consider my own insistence (touched on above) that genuine science follows the pattern of the hypothetico-deductive “method”. I am outraged by the claim that we should accept a theory as scientific or as worthy of belief because “97% of scientists say so”. No doubt my own puritanism is as distasteful to people who make that claim as their invocation of earthly powers is to me.

As an example of catholic thinking (as here understood), consider ex-President of Ireland Mary Robinson. As far as I am aware, her opinions are always unwaveringly mainstream, underwritten by the supposed authority of some administrative body or other, and approved by current “powers that be” from academia to the Council of the European Union. She seems to suppose that UN resolutions are the highest appeal on moral questions, and that consensus among insiders is the authoritative last word on scientific questions. She revels in positions she occupies in the hierarchy, and constantly reminds us of various honours she has received (most recently, the “city of Chicago’s highest honor, the Medal of Merit”).

As you have probably guessed, the puritan in me finds all that very unseemly. But I should add that many sincere Catholics (capital C, i.e. members of the Roman Catholic religion) also disapprove. The Catholic Church is no longer the earthly power it once was, and its position on abortion and same-sex marriage differs from that of current catholic thinking (small c, as understood here). So this is a useful reminder that the two words are not used in the same way and refer to categories that only loosely overlap.

Why do males die younger than females?

I have a hypothesis that explains why in many (most?) species, males have a shorter life expectancy than females. My apologies if this has been thought of before, or if it’s already well-known. It’s quite likely that I’m re-inventing the wheel here, that I’ve come across the current explanation before somewhere, and have simply forgotten. I have a keen interest in evolutionary theory, but I’m not a biologist.

The hypothesis is this: males are subject to more exploitation by parasites than females, because in general parasites “want” their host species to thrive, and surplus males are the most expendable members of the host species. Over the course of a lifetime, this greater exploitation takes its toll.

In non-monogamous species, males are useful for fertilizing the eggs of the females, but not much else. In effect, after donating sperm most of them are redundant. They use up the food supply that could otherwise swell numbers of individual members of the species, and hence safeguard the species itself. In non-monogamous species, too many males are “bad for the species”. Drone bees consume as much nectar as honey-producing females. Male elephant seals consume far more fish than their smaller female counterparts, and few of them even get to donate sperm.

Farmers — in effect, human parasites of animals used as food — know all this, and so they usually kill males apart from the few needed to fertilize females. In doing so, they strengthen the species they parasitize, in the sense of increasing their numbers and assuring their future. Through domestication, the humble jungle fowl of Asian forests has become the mighty chicken, found in huge numbers all over the world. Much the same applies to cattle and sheep, which now occupy much of the earth’s surface.

Most parasites (such as microbes) are brainless, but through the process of natural selection they adopt “strategies” which can promote their numbers. In most cases, these strategies ensure that their host species do well enough to function reliably as hosts. The parasites aren’t actually thinking as human farmers think, of course, but over many generations they stumble upon similar strategies, which become established as the parasites that benefit from them proliferate.

With sex ratios, the “interests” of species and genes conflict. What’s “good for the species” is a much larger proportion of females than males, at least in non-monogamous species. But what’s “good for the genes” is a roughly equal number of males and females (as explained by Fisher’s Principle). The fact that in most species the ratio of males to females is indeed 1:1 makes a compelling case for a gene-centered understanding of evolution (a la Richard Dawkins’ Selfish Gene), and against group selectionism.
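
Fisher’s Principle can be illustrated with a toy simulation. The following is a minimal sketch in Python — my own illustration, with all the parameters (population size, starting bias, mutation size) chosen arbitrarily, not drawn from any biological source. Each individual carries a heritable probability of producing a son, and the population starts heavily female-biased. Because every offspring has exactly one father and one mother, members of the rarer sex are chosen as parents disproportionately often, so genes biased toward producing the rarer sex spread until the bias disappears — the gene-centred logic sketched above, with no appeal to what is “good for the species”.

```python
import random

def simulate_fisher(generations=200, pop_size=2000):
    # Each individual is a pair (sex, p), where p is a heritable
    # probability of producing a son. Start heavily female-biased,
    # with the trait itself biased toward producing daughters.
    pop = [("M" if random.random() < 0.2 else "F", random.uniform(0.1, 0.3))
           for _ in range(pop_size)]
    for _ in range(generations):
        males = [ind for ind in pop if ind[0] == "M"]
        females = [ind for ind in pop if ind[0] == "F"]
        if not males or not females:
            break
        offspring = []
        for _ in range(pop_size):
            # Every offspring has one mother and one father, so when males
            # are rare each male is picked far more often than each female.
            mother = random.choice(females)
            father = random.choice(males)
            # Child inherits the mid-parent trait value, plus a little mutation.
            p = 0.5 * (mother[1] + father[1]) + random.gauss(0, 0.01)
            p = min(max(p, 0.0), 1.0)
            sex = "M" if random.random() < p else "F"
            offspring.append((sex, p))
        pop = offspring
    mean_p = sum(ind[1] for ind in pop) / len(pop)
    male_fraction = sum(1 for ind in pop if ind[0] == "M") / len(pop)
    return mean_p, male_fraction

if __name__ == "__main__":
    print(simulate_fisher())  # both numbers end up close to 0.5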

This hypothesis (I hesitate to call it “my” hypothesis) should be easy enough to test, as it entails that there should be a greater difference in male–female life expectancy in non-monogamous species than in monogamous species. It also entails that many of the diseases we associate with early male mortality (such as coronary heart disease, possibly suicide) may in fact be partially caused by infection by microbes.
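
If one had comparative data, the first of those entailments could be checked with nothing fancier than a one-sided permutation test on per-species life-expectancy gaps, grouped by mating system. The sketch below uses made-up numbers purely to show the shape of such a test (a real analysis would also need to control for phylogenetic relatedness, since closely related species are not independent data points):

```python
import random

# Illustrative, made-up gaps (female minus male life expectancy, in years).
# Real values would have to come from comparative life-history databases.
non_monogamous = [4.1, 6.3, 2.8, 5.5, 3.9, 7.2]
monogamous = [0.4, 1.1, -0.3, 0.9, 1.6, 0.2]

def permutation_p_value(a, b, trials=100_000):
    """One-sided test: how often does randomly relabelling the species
    produce a difference in mean gaps at least as large as the observed one?"""
    observed = sum(a) / len(a) - sum(b) / len(b)
    pooled = a + b
    count = 0
    for _ in range(trials):
        random.shuffle(pooled)
        new_a, new_b = pooled[:len(a)], pooled[len(a):]
        if sum(new_a) / len(new_a) - sum(new_b) / len(new_b) >= observed:
            count += 1
    return count / trials

print(permutation_p_value(non_monogamous, monogamous))
```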

The tyranny of conditioning

From a very early age, I detested learning by rote. My refusal to engage in this soul-destroying activity led me to my first brush with criminality, when I tried to cheat while reciting the seven times table.

I’m not the only one who has a deep distaste for learning by rote. What is rather surprising, perhaps, is that some people seem to have a genuine liking for it. Witness the eagerness with which so many take to learning foreign languages, with new vocabularies, irregular verbs, unpredictable genders of words for inanimate objects, and so on — all of which require tedious repetition and absorption.

It seems to me that there is a telling difference here of temperament — between those who assume education is essentially a matter of acquiring good habits of thought, and those who assume education is essentially a matter of getting a better understanding. Both types of people embrace education as a good thing, as a vital aspect of personal growth, but the former expect and even welcome an onerous period of habit-formation to achieve it. The latter embrace a sort of intellectual “principle of least action”: whatever is sufficient to explain is “all we need to know”. That attitude can look downright lazy to fastidious habit-formers.

The difference in temperament extends far beyond education. Here I’ll just touch on how it emerges in attitudes to mental illness, and in politics.

People who assume that education is a matter of acquiring good mental habits tend to think that mental health issues — from mild neuroses to out-and-out illness — are to be overcome by means of conditioning. For example, a phobia of spiders is supposedly overcome by coming into ever-closer contact with them — letting them crawl over one’s hands and so on. At the end of the conditioning process, the patient has “got used to the idea” — in other words, he has changed his habits.

Now I am no Freudian, as I think his understanding of the mind was badly mistaken in many respects. Yet I think he was importantly right, both factually and morally, in thinking that the way to better mental health was not through conditioning, as above, but through self-understanding. Let us put aside the details of such self-understanding, such as whether it really involves uncovering unconscious desires or phantasies. The important thing is that therapy is aimed at enlarging one’s understanding of oneself, rather than at achieving greater “self-control” through the acquisition of new habits. Rather than trying to instil such habits, a Freudian therapist would encourage exploration and experiment, with its attendant risks.

This difference of temperament can also be seen in political thought, where a deep division exists between Rousseau and Hobbes (almost everyone has an affinity for one or the other of them). Rousseau thought that by the time humans reach adulthood, they have been corrupted by the bad conditioning of modern society, and the solution to this problem is counter-conditioning. We must acquire new “habits of the heart” (as Tocqueville called it, re-phrasing Rousseau). Unlike Rousseau, Hobbes thought humans were born selfish, and there’s no way to change that; but by understanding ourselves better we will agree to an imperfect compromise in which the most important freedoms are safeguarded.

I think it’s pretty obvious that the current enthusiasm for minimum alcohol pricing, for special taxes on fat and sugar, and the rest of it, comes from the “conditioning” side of the divide. If people eat or drink too much, the idea goes, they should be re-educated by acquiring new habits. By putting unhealthy foods and alcohol that little bit further from reach, new habits will take root.

Let us pass over the fact that the attempt to instil new habits involves coercion, and that such coercion is discriminatory, because only a select few are poor enough to be affected by modest price increases. Let us pass over the issue of legislators making laws that they themselves are not subject to. The question remains: Is conditioning — enforced learning by rote — the right way to achieve personal growth?

My temperament says No. As alcohol prices have fallen in Ireland, Irish people have cut down on alcohol. I think this is because affordability enables younger people to learn how to drink, to educate themselves through exposure, experiment, and increased self-understanding rather than through the forced acquisition of a habit.

What is this “positive” concept of freedom?

Isaiah Berlin famously distinguished a “negative” and a “positive” concept of freedom. The negative concept is straightforward, but what can be made of the positive concept? Too often, attempts to distinguish them rely on a superficial linguistic difference between the terms ‘freedom from’ and ‘freedom to’. For example, it might be said that negative freedom is freedom from external constraints, whereas positive freedom “represents freedom to do things on one’s own volition.” [Taken from here.]

But that simply won’t do. Agents only ever do things because they want to do them. In other words, any genuine act (rather than a mere twitch, or a frogmarch) is done as the result of the agent’s own volition, and it can only be done when the act is not hindered by external constraints. And that amounts to a “negative” re-formulation of what was intended to capture the essence of “positive” freedom.

Perhaps what’s meant is something like this. If a person is forced to do something under duress — at gunpoint, say — then although he does it because he (briefly) “wants” to do it while the gun is aimed at his head, he can hardly be said to do it on his own volition. A mugger threatens him with death, and he’d prefer to live despite handing over his money than die holding on to it. This is not a free act, surely?

Well, of course the mugging victim is not free. But his lack of freedom can be characterized in an entirely negative way. Although he wanted to hand over his money while held at gunpoint, and that narrowly-circumscribed act in isolation could be described as “free” (no policeman suddenly turned up to prevent him doing so), that is to consider events within far too narrow a context. He had a much stronger, longer-term want not to be mugged. That want — considered in the larger context — was thwarted by his actually being mugged. The mugger was an external constraint that prevented the victim from doing what he wanted to do. So the victim was not free to go about his business unmolested, and therefore he was not free — for entirely “negative” reasons.

Notice that the word ‘free’ applies both to agents and to acts, and furthermore, acts have to be considered within contexts of varying scope. This invites confusion, as the word’s meaning can slide almost imperceptibly between them. (I started the last paragraph with a subtle shift of my own by giving an answer about an agent to a question about an act.)

When the negative concept of freedom seems to suggest that a man being mugged is “free” to give money to his mugger, some are drawn to the idea that we need a more robust concept of freedom than this negative one. And here thoughts usually turn to autonomy — to the idea of self-rule, of being the author of one’s own acts. It sounds silly or sinister to say that the mugging victim was “free” to hand over his money, because he lacks autonomy. The next obvious step is to embrace a concept of freedom that links it with autonomy.

The concept of autonomy is quite similar to that of power, specifically inner strength. To achieve something, we don’t just need an absence of external obstacles; we also need the wherewithal to act — the “muscle”, if you like, for movement to occur.

I think we need to proceed carefully here. To act at all, we need power — quite literally we need muscle to lift a finger, and in an extended sense we need various mental abilities. To act successfully — which goes beyond merely acting — we need an absence of obstacles that would prevent our acts achieving their goals. We should observe and respect this distinction. Greater power tends to bring with it greater freedom, but power and freedom remain distinct concepts, as one is a prerequisite of action, while the other is a prerequisite of success.

Hobbes memorably said that a man “fastened to his bed by sickness” did not lack freedom but power. In saying this, Hobbes exhibited a remarkable degree of political sensitivity. We have a legitimate prima facie claim against other agents who put limits on our freedom, but no such claim against mere circumstances (rather than agents) that make us internally weak. (Of course what one historical era counts as weakness can later be regarded as the effect of human agency.)

It seems to me that we need both a concept of power, and a distinct concept of freedom. But as far as freedom is concerned, the negative concept is all anyone needs. Most attempts to define the positive concept are in fact just alternative ways of defining the negative concept. “Freedom from” and “freedom to” are inter-definable, the two definitions in effect pointing to figure and ground that share lines of demarcation. Freedom to do X is just the same thing as an ability to do X thanks to the lack of external constraints that prevent one doing X. To be an autonomous agent is to have both the power to act, and the freedom to act, the latter understood negatively.

Yet, there is a positive concept of freedom. I know this because Rousseau used it, Marx used it, and it is presupposed in almost every ringing patriotic declaration of national freedom. This concept of freedom is expressed in rather mystical-sounding claims that to be free one must partake in the “general will”; that one can be “forced to be free”; that one must beware of “false consciousness”; that one’s “true self” must take control over one’s merely “empirical self”; that being free means embracing the “destiny of the nation”; or whatever.

As I see it, the essential difference between positive and negative freedom is this: having positive freedom means more than simply being able to get what you want — it means wanting the right things, usually understood in some implicitly moral sense. The various goods that are thought to empower those who have positive freedom — such as education, “strength of will”, etc. — are things that many people do not in fact strive for. But according to those who understand freedom positively, they ought to strive for them, for their own empowerment.

Implicit “oughts” are an essential ingredient in the positive concept. In Rousseau’s terms, being free means partaking in the “general will”, in other words not simply pursuing goals one already happens to have, but adopting larger goals as one’s own. Only with that essential extra ingredient can people be “forced to be free” or considered unfree if their “empirical selves” fall short of the “self-realisation” enjoyed by their “true selves” (in Berlin’s terminology). Ideas such as “false consciousness” only make sense against the background assumption that being free means wanting the right things as well as being able to achieve them.

These mystical-sounding appeals aren’t exactly to autonomy per se, but to something like the additional power that agents would acquire if they adopted goals — equality, justice, truth, whatever — shared by members of a group.

This is a deeply illiberal understanding of freedom, and I think a confusion of power and freedom lies at the root of the positive concept.