Sentience and preference utilitarianism

There was a brief discussion on Twitter yesterday about whether we should grant “human rights” to non-sentient robots. My reaction: “Why give a damn about non-sentient agents? They can’t feel anything, so who cares if harm befalls them?”

This idea that “morally, the only thing that matters is sentience” was famously expressed by Jeremy Bentham:

a full-grown horse or dog is beyond comparison a more rational, as well as a more conversable animal, than an infant of a day, or a week, or even a month, old. But suppose the case were otherwise, what would it avail? the question is not, Can they reason? nor, Can they talk? but, Can they suffer?

Despite my confidence that non-sentient agents do not matter morally, I admit that sentience might seem to pose a special problem for me as a preference utilitarian. The dissolution of this problem adds detail to my moral theory, and explains why we call it ‘preference’ rather than ‘desire’ utilitarianism.

A preference utilitarian differs from the traditional hedonistic type of utilitarian (such as Bentham) in that his basic good is not a particular sort of experience such as pleasure or relief from pain, or happiness understood as a feeling, but the satisfaction of desires. His “greatest good” is not the “greatest happiness of the greatest number” but the maximisation of the satisfaction of desires.

Now it’s important to see that the satisfaction of desires here is not the having of a “satisfying experience”, but the satisfying of objective conditions — and the agent might be wholly unaware that those conditions have in fact been satisfied. A desire is satisfied when the desired state of affairs is actually realised, whether or not the agent has any idea that the state of affairs is realised. Like a man becoming an uncle by virtue of a birth he knows nothing about, or a belief being true, a desire’s being satisfied is a matter of the world’s being arranged in the right way — something typically external to the mind of the agent.

For example, most people want their spouses to be faithful. They don’t want the mere experience of their spouse being faithful, but the actual objective fact of their spouse being faithful. This desire is not for the spouse to “keep up appearances” by telling convincing lies about their infidelities — there mustn’t be any infidelities to tell lies about.

Here’s why sentience might seem like a problem for preference utilitarianism: unless a desire is a desire to have a particular sort of experience, which it typically isn’t, the experience of a desire being satisfied is like a by-product of its actually being satisfied. So a “robotic” agent who doesn’t have any conscious experiences at all — but still has desires which can be satisfied or thwarted — would seem to make moral demands on preference utilitarians like myself. That conflicts with the intuition expressed above that only sentient agents matter morally.

The problem is dissolved, I think, when we remind ourselves that genuine desires (and beliefs, for that matter) only exist where pluralities of them together form a “system”. In moral deliberation, the utilitarian weighs desires thwarted against desires satisfied in an imaginary balance. Obviously, strong desires count for more than weak desires. When desires come into conflict with one another in the mind of a single agent, the strongest desire is the agent’s preference. Only desires in a system of several desires competing for the agent’s “attention through action” can count as preferences.
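The idea that a preference only exists relative to a system of rival desires can be put in miniature. The following sketch is entirely my own illustration (the function, goals and strength numbers are hypothetical, not part of any worked-out theory): a lone desire can be satisfied or thwarted, but only a desire that wins a competition counts as a preference.

```python
# A toy model of preference: the desire that wins the competition for
# "attention through action" among several rivals of varying strength.

def preference(desires):
    """Return the strongest desire in a system of competing desires.

    A single desire in isolation cannot be a preference -- preference
    requires a system of rivals for one desire to take precedence over.
    """
    if len(desires) < 2:
        raise ValueError("a lone desire cannot be a preference; "
                         "a system of competing desires is required")
    # Competing desires are (goal, strength) pairs; the strongest wins.
    return max(desires, key=lambda d: d[1])

agent = [("finish the report", 0.6), ("take a nap", 0.4)]
print(preference(agent))  # ('finish the report', 0.6)
```

The point of the `ValueError` branch is the philosophical one: the notion of "taking precedence" gets no grip until there is more than one desire in the system.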

So a system is required for one desire to take precedence over another, as it must if it’s a preference. And a preference to pursue one goal rather than another involves the weighing up of the relative merits of competing goals, the level of time-management needed to defer the less urgent goal, and so on… In short, it requires reflection and choice. This is “second-level representation” — i.e. meta-level representation of primary representational states — of the very sort that makes for consciousness. We need reflection to decide between competing desires (and for that matter, we need epistemic beliefs to guide our choices of first-level beliefs about the world — in other words, a sense of which among rival hypotheses is the more plausible). Second-level representations like these amount to awareness of our own states, including awareness of such states as physical injury. In other words, the experience of pain. It’s a matter of degree, but the richer the awareness, the greater the sentience. So genuine desire and sentience are linked in a crucial way, even though any particular desire and the conscious experience of its satisfaction might not be.

To better understand why “genuine” desires are part of a system, we might contrast them with more rudimentary goal-directed states of ultra-simple agents such as thermostats, or slightly more sophisticated but still “robotic” agents such as cruise missiles.

Thermostats and cruise missiles each have a rudimentary desire-like state, because their behaviour is consistently directed towards a single recognisable goal. And they have rudimentary belief-like states because they co-vary in a reliable way with their surroundings, co-variation which helps them achieve their goal. In both cases, they might be said to “bear information” (non-semantic information, reliable co-variation) about the world. A clever physicist (a “bi-metallurgist”?) would be able to work out what temperature a thermostat “wants” the room to stay at, and what temperature it “thinks” the room is currently at. A clever computer scientist would be able to reverse-engineer a cruise missile to reveal what its target is, the character of the terrain it is designed to fly over, its assumed current location, and so on. We could go further and adopt the intentional stance, assigning mental content to these agents. In effect, that would be to drop the cautionary quotation-marks around the words ‘wants’ and ‘thinks’. We might regard ourselves as referring literally to its desires and beliefs. But we would not be able to take the next step and talk about preferences. For preferences, we need various goals of varying strengths, and we need something like consciousness to make decisions between them. In other words, we need sentience, at least to some degree.
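The bi-metallurgist’s reverse engineering can be made concrete with a toy model. The class and names below are my own invention, a sketch under the assumptions above: the thermostat’s single goal-directed state (its setpoint) is what it “wants”, and its reliably co-varying state (its reading) is what it “thinks”.

```python
# A sketch of the intentional stance toward an ultra-simple agent.
# Its rudimentary "desire" and "belief" can simply be read off.

class Thermostat:
    def __init__(self, setpoint):
        self.setpoint = setpoint   # what it "wants": room at this temperature
        self.reading = None        # what it "thinks" the room currently is

    def sense(self, room_temp):
        # Reliable co-variation with the surroundings: it "bears
        # information" about the room in a non-semantic sense.
        self.reading = room_temp

    def act(self):
        # Behaviour consistently directed at a single recognisable goal.
        if self.reading < self.setpoint:
            return "heat on"
        return "heat off"

t = Thermostat(setpoint=21.0)
t.sense(18.5)
print(t.act())                  # heat on
print(t.setpoint, t.reading)    # 21.0 18.5 -- "desire" and "belief" exposed
```

Note what the model lacks: there is only one goal, so nothing in the system could ever take precedence over anything else. That is exactly why such an agent has desire-like states but no preferences.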

Does Biology Have Laws?

[This blog post was prompted by this Scitable discussion. Unfortunately comments were closed before I could contribute.]

Laws are bits of language that describe regularities in nature. If the laws are true, the regularities are real. Laws are general claims, but they are more than accidental generalisations such as “everyone in this room is over five feet tall”. Laws are more like hyper-generalisations in that they don’t just describe what has actually been the case so far — they describe what would be the case, even if the states of affairs that would make them true have not yet come to pass.

There aren’t any laws about the heights of people who happen to be in a room together, but we’d be moving in that direction if we arranged some sort of screening mechanism that only allowed admittance to that room on the basis of height. Genuinely scientific laws rely on such mechanisms when they describe such things as the electric charge of fermions in an atomic nucleus.

Many fundamental laws of physics like Pauli’s Exclusion Principle do not admit of exceptions. Exceptionless laws like that are quite common in physics and chemistry. What about biology?

The question whether there are laws in biology is too often understood as asking whether there are exceptionless laws in biology. I’d guess there probably aren’t any such laws, because the categories of biology (species, etc.) are not like the categories of physics.

But it does not follow that biology has no laws. The salient feature of laws is not that they admit of no exceptions but that the links they express (between categories, concepts, etc.) are non-accidental.

Examples: animals with high male parental investment tend to be monogamous; mammalian mothers tend to be protective of their young. The biological functions of parental investment and pair-bonding are linked; and so are the functions of producing milk and caring for young.

Those links entitle us to draw inferences: if we hear that animals of species X exhibit high male parental investment, we can guess that they are monogamous, although there is always the possibility that we are dealing with an exception. If we hear that Y is a female mammal, we can guess that she is protective of her young, even though there is always the possibility that this particular individual’s behaviour is “aberrant”.

I hope it’s clear that biology does critically rely on and describe non-accidental links between categories — links that entitle us to make inferences between claims containing the corresponding concepts. It is that warrant to infer that makes for genuine scientific laws, not their exceptionlessness.

Biological laws have exceptions because many biological categories are “functional” (as exemplified above). In describing, explaining, predicting (etc.) things biologically, we adopt what Dennett calls the “design stance”. We assume that things have functions (purposes, goals, tasks, etc.) and that they perform those functions more or less well “as they were designed” to. “Working properly” shades into “less-than-optimal performance”, which in turn shades into out-and-out “malfunction”. Thus biological categories have fuzzy edges: grey areas where exceptions occur.

(Warning: of course nothing in biology is literally designed by a designer. The main point of evolutionary theory is to show how no such design is required. Talk of design, purposes, goals etc. in biology is just shorthand for past contribution to survival and reproduction.)


Self-control

When people talk about “self-control”, what do they mean? On the face of it, a “self” and something else that “controls” that self sound like two separate agents. But each of us is in reality just a single agent. What is going on? I think some buried philosophical assumptions and mistakes lurk here.

[Edit: I see no real difference between a core part of the “self” controlling unruly peripheral parts, versus its being controlled by them. The main idea of self-control is that the “self” is “divided against itself”, or at least divided into more than one part that can be treated as an agent in its own right.]

When we say someone should control himself, we mean first and foremost that he has conflicting desires. Then we go further, and give one of those desires a superior status as being “more genuinely his own” than the other one. His “gaining control of himself” is then a matter of the desire that is “more genuinely his own” resulting in action, overruling the desire that is “less genuinely his own”.

Now it seems to me that this decision to regard one of the conflicting desires as “more genuinely his own” is not taken with reference to what the agent himself most strongly desires, but instead with reference to what is considered more laudable — in other words, with reference to what society at large approves of. This might be anything regarded as valuable — such as good health, prudence in financial matters, scientific rigour, religious piety, whatever. You can see the difference in terms of “is” and “ought”: what the agent most strongly desires is a factual matter to be decided by considering his own choices, whereas what is laudable is a matter of value decided by the likes and dislikes of society at large.

It’s important to see that the factual matter is a completely trivial one — whatever the agent actually ends up doing is what he wanted to do most in the first place. What makes one desire stronger than another is simply that it “wins” any conflict between them by issuing in action. [Edit: So if we look at what an agent most strongly desires, there is no question of one part of himself controlling any other part of himself. He will have to compromise with other agents, of course, and that may involve agents controlling each other to some extent, but that is an everyday fact of life.]

So I would argue that the word ‘self-control’ is to this extent inappropriate: whatever “control” may be involved is not really “self-control” so much as “control by society”. Now please don’t get me wrong here: I don’t mean to say that that sort of “control” involves actual coercion by society. But it does involve guidance from outside the self — with the agent’s tacit approval, of course. He takes his lead from what society approves of rather than from himself in isolation.

Some will protest that self-control usually involves pursuing longer-term goals and deferring immediate gratification. If longer-term goals are more “genuinely an agent’s own” than mere passing whims, perhaps longer-term goals are more rationally entitled to direct conduct. Perhaps longer-term goals represent an agent’s character more faithfully than whims, so that the latter can be considered “out of character”, and thus a suitable subject for the “self” to exercise “control” over.

I think that’s a red herring. Spontaneity, impulsiveness, even capriciousness are aspects of an agent’s “true” character just as much as stolidity or lack of imagination. Rational action involves the pursuit of all sorts of goals, with an eye both to how desirable this or that goal may be, as well as to how confident one may be that this or that course of action will achieve it. If someone chooses to pursue this shorter-term goal rather than that longer-term goal, say, it simply indicates that on balance he prefers this to that, and/or he has more confidence in achieving it. So there’s nothing intrinsically more “rational” about the pursuit of longer-term goals.

That isn’t the only red herring. We tend to discount pursuits that seem to undermine an agent’s integrity or harm him as being less “genuinely the agent’s own” (I’m thinking of activities such as smoking and drinking). But what counts as “harm” here? Inasmuch as he is able to pursue something he really wants, he is not harmed — and inasmuch as he is prevented from pursuing what he really wants, he is harmed. If we regard something an agent freely pursues as undermining his integrity or as harmful to him, once again we are appealing to values of society at large rather than values of the agent in isolation. And once again, we’re not talking about “self-control” here so much as “control by society” — or, as I said above, at least “guidance by society”.

So far, no harm done. An agent is still doing what he wants to do, even when what he wants to do is determined by the likes and dislikes of other agents than himself. But I think our understanding has taken a sinister turn. We are using misleading words, and in doing so we are turning a blind eye to a possible source of genuine coercion. By treating something that lies outside the agent as if it were the agent’s own, we slide inexorably towards thoughts such as that “society can help a person to control himself”. There are monsters about.

[Edit: One such monster is Rousseau’s idea that people must be “forced to be free”. That slogan expresses the most insidious and dishonest form of paternalism, which goes beyond simply forcing people to do what they don’t want to do “for their own good”. The greasier version — embraced by anyone who appeals to “false consciousness” or the like — involves pretending they do in fact want it by virtue of the fact that it’s for their own good.

The idea that an agent can “really” want something although superficially seeming not to want it is at the heart of the “positive” concept of freedom. As Isaiah Berlin noted, it involves the self’s being divided into two — the “empirical” self and the “real” self — and obviously so too does the idea of self-control.]

Formal versus informal implication

I want to compare and contrast two sorts of implication — and I want to suggest that our understanding of beliefs and logic is badly affected when we confuse them, as we often do. In the hope of making things a little clearer, I propose to distinguish carefully between three things that share the same content: the belief that P (a mental state), the sentence ‘P’ that expresses that same content, and the fact that P — something “in the world”, which of course only exists if P is true.

For illustration, if ‘P’ is the sentence ‘Snow is white’, the corresponding belief is the belief that snow is white, and the corresponding fact is the fact of snow’s being white — a very simple sort of fact that might be represented by a simple Venn diagram.

Such a diagram is intended as no more than a reminder that although the belief that P is a mental state which is true or false, and the sentence ‘P’ is a linguistic utterance which is true or false in the same circumstances, the fact that P is those circumstances themselves — something that is neither true nor false. Now it may sound strange to say that a fact isn’t true — facts are “true by definition”, aren’t they? Well, a fact is what makes a true sentence or true belief true, so wherever there’s a fact there’s a truth. In a loose colloquial sense we might refer to truths as facts. But in the current philosophical sense, a fact is strictly a state of affairs corresponding to a truth.

So understood, facts cannot imply anything, being themselves neither true nor false. But their linguistic or mental counterparts can, and this is what I want to examine here. It seems to me that confusion between facts, sentences and beliefs has generated much misunderstanding about the nature of thought itself. I hope to disentangle a little of this confusion here, and in doing so I hope to persuade you that formal logic is much less useful than is widely supposed as a tool of critical thinking.

Although facts can’t imply one another, linguistic sentences often do. For example, what are we to make of the claim that P implies Q?

If it is true, it describes a fact of some sort of lawlike connection — formal, causal, categorical, or whatever — between two possible facts: that P and that Q. I say “possible” facts because the implication can hold even when the individual sentences P and Q it connects are not true. What matters is the connection between the sentences rather than their truth-values. For that reason, material conditionals of elementary logic (whose truth-value depends simply on the truth-values of what they connect) don’t capture this sort of implication. The conditionals we use for that purpose have to be understood as counterfactual conditionals, or as having some sort of subjunctive mood, so that they can be true or false regardless of the truth or falsity of their component parts.

Just as the sentence ‘P’ can both describe a purported fact and stand for the belief that P, the claim that P implies Q can both describe a purported fact and stand for a belief. The nature of this fact and of this belief has seemed a bit of a mystery, to me at any rate in the past. I now think that mystery is largely the product of confusion between formal and informal implication. Apologies if this is no mystery to you.

Formal implication

As a model of implication, most of us take the case we are most familiar with: implication in formal logic, where the premises of a valid deductive argument imply the conclusion. When I say the implication here is formal, I mean that the work is done by language, and thought follows. That is, relations between sentences guide the formation of beliefs.

When conditionals that express such implications are true, they are true by virtue of the fact that one sentence can indeed be derived from another sentence via rules of inference that enable the derivation.

Deriving one sentence from another is a bit like building a structure out of Lego bricks. In this analogy, our rule of inference might be “every new brick must engage at least half of the interlocking pins of the bricks underneath”. When we begin, we might have no clear idea whether a given point in space can be reached given our starting-point. But once we do reach it (if we do), we can believe that it is legitimately reachable, given that starting-point and the rules of inference. Or at least, we can “accept” it as true, because we “accept” the rules of inference simply by using them.
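The Lego analogy can be made concrete with a toy derivation system. The sketch below is entirely my own illustration (a “rule of inference” here is just a premise/conclusion pair, nothing like a real proof system): we discover whether a sentence is “legitimately reachable” from a starting-point only by actually building the derivation.

```python
# A toy model of formal implication as derivability: Q follows from P
# when the rules of inference let us build a path from P to Q, like
# stacking Lego bricks according to the building rule.

def derivable(start, goal, rules):
    """Forward-chain: repeatedly apply rules (premise, conclusion)
    until the goal is derived or nothing new can be added."""
    accepted = {start}          # we "accept" the starting sentence
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            # Using a rule just is accepting it: no further warrant.
            if premise in accepted and conclusion not in accepted:
                accepted.add(conclusion)
                changed = True
    return goal in accepted

rules = [("P", "R"), ("R", "Q")]
print(derivable("P", "Q", rules))  # True: Q is reachable from P
print(derivable("Q", "P", rules))  # False: the rules give no route back
```

Notice that before running the search we may have no clear idea whether the goal is reachable, which matches the point above: with formal implication, the work is done by the language-manipulation itself, and belief (or “acceptance”) follows.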

With formal implication, the fact that corresponds to a true claim that P implies Q is a “linguistic” fact, embodied by the actual derivability of Q from P. The belief that corresponds to a claim that P implies Q (or sort-of belief, if all we do is “accept” it as true) is about derivability in language.

Informal implication

With formal implication, the work is done by language and thought follows. But with informal implication it’s the other way around: the work is done by thought and language follows. Actually, if thought is working as it should, this one-thing-following-another goes deeper, all the way to facts. The world has some lawlike features, and the thoughts of animals reflect them — in other words, animals have true beliefs about lawlike facts. Later, we human animals try to express those thoughts using language. Here real-world relations guide the formation of beliefs, which in turn guide the formation of sentences.

These sentences can be misleadingly ambiguous. A sentence like ‘P implies Q’ can be read in three distinct ways. It can say something about the lawlike connections in the world, i.e. about how the facts that P and that Q are related; or it can say something about the way the sentences P and Q are related; or it can say something about how the beliefs that P and that Q are related. This ambiguity is compounded by the fact that a sort of meta-level “conditional” corresponds to each of these types of relation, and the situation is made still worse by our inclination to take formal implication as our model of implication in general.

It seems to me that the way to avoid getting lost here is to constantly remind ourselves that the primary link is between things in the world where lawlike connections exist: “what goes up must come down”, “if it has feathers, it’s a bird”, etc. Thought captures these lawlike connections by forming beliefs that stand or fall together in a systematic way. If the states of affairs that P and that Q are connected in a lawlike way, a mind captures that meta-level fact by being disposed to adopt the belief that Q whenever it adopts the belief that P, and to abandon the belief that P whenever it abandons the belief that Q. Given the larger belief system to which the pair may or may not belong, they’re “stuck together” like the ends of a Band-Aid:

The system as a whole has the property that whenever the belief that P gets added to it, the belief that Q gets added too, and whenever the belief that Q gets stripped away from the system, the belief that P gets stripped away too, like a Band-Aid whose adhesive parts are put on or taken off (in reverse order).

If we can be said to have a “conditional belief” corresponding to this sort of implication, it amounts to little more than the belief that a lawlike connection exists between the two states of affairs. This meta-level “conditional belief” is embodied in the way the beliefs that P and that Q stand together or fall together in the system. Even if such a belief is false — as it would be if there were in fact no lawlike connection — that distinctive linkage of the two beliefs in the system is all it amounts to. When we come to capture it in language, we may use arrows or similar symbols to indicate a non-symmetrical linkage of P and Q, but let’s be careful not to think of such informal links as perfectly mirroring formal links.
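The stand-or-fall-together disposition can itself be sketched as a toy model (the class and example beliefs are my own hypothetical illustration). There is no inner mechanism bridging the two beliefs: adopting one simply drags the other in, and abandoning the second simply strips the first away.

```python
# A toy model of informal implication: a "conditional belief" is nothing
# over and above a linkage in the system -- beliefs that stand or fall
# together, like the two adhesive ends of a Band-Aid.

class BeliefSystem:
    def __init__(self):
        self.beliefs = set()
        self.links = []            # (p, q): p and q stand or fall together

    def link(self, p, q):
        # The non-symmetrical "conditional belief" is just this entry.
        self.links.append((p, q))

    def adopt(self, b):
        self.beliefs.add(b)
        for p, q in self.links:
            if p == b and q not in self.beliefs:
                self.adopt(q)      # adopting P brings Q with it

    def abandon(self, b):
        self.beliefs.discard(b)
        for p, q in self.links:
            if q == b and p in self.beliefs:
                self.abandon(p)    # losing Q strips P away too

mind = BeliefSystem()
mind.link("it has feathers", "it's a bird")
mind.adopt("it has feathers")
print(mind.beliefs)                # both beliefs now in the system
mind.abandon("it's a bird")
print(mind.beliefs)                # both gone
```

Note that the `link` entry does no inferential “work” of its own, which is the point of the analogy: the linkage in the system is all the conditional belief amounts to.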

I hope you agree that the Band-Aid analogy goes too far in that it contains one unnecessary detail that ought to be omitted from our understanding of informal implication. That detail is the “bridge” between the adhesive parts, with its supposed “hidden mechanism” enabling an inference from P to Q. I think we are inclined to imagine such a mechanism exists because we are so used to taking formal implication as our model, and we have a tendency to assume something akin to interlocking Lego bricks is needed to “bridge the gap” between the belief that P and the belief that Q. A better analogy perhaps would be a Band-Aid with the non-adhesive part removed:

What does it all mean?

The assumption that formal and informal implication are closely parallel misleads us about the nature of thought. It promotes the idea that thinking is a matter of “cogwheels and logic” rather than many direct acts of recognition by a richly-interconnected belief system, often of quite abstract things and states of affairs.

People who praise or actively promote logic as an aid to critical thinking routinely assume that beliefs work like discrete sentences in formal implication. That is, they assume beliefs have clear contents with logical consequences which are waiting to be explored. Well, as I’ve said several times now, in formal implication, language does guide thought. Beliefs correspond to sentences which are discrete because of their distinct form. One sentence leads to another thanks to the rules of inference, and beliefs follow their linguistic counterparts. The beliefs that are so led are themselves discrete because they are so closely associated with discrete sentences. Their contents determine the inferential connections between them.

But most beliefs aren’t like that at all. Their content isn’t determined by prior association with discrete sentences whose form precisely determines their content. Rather, their content is attributed via interpretation, which is an ongoing affair and, well, a matter of interpretation. That interpretation involves “working our way into the system as a whole”, taking account of the inferences an agent draws and attributing whichever mental content best reflects his inferential behaviour. If someone behaves as if he is committed to lawlike connections in the real world, we attribute beliefs whose contents are appropriate to commitment to those lawlike connections. Here, inferential connections between beliefs determine their content rather than vice versa.

As far as I can see, this limits the usefulness and scope of logic. It’s useful in the academic study of logic, obviously, but outside of that field, only the most elementary applications are of much use, even in formal disciplines like computer science and mathematics. I agree that it’s useful to be aware of informal fallacies and to try to avoid them. But beyond that, the power of logic has been over-inflated by the assumption that beliefs are like “slips of paper in the head with sentences written on them”, and the assumption that thinking proceeds by drawing out their consequences — by examining what they formally imply.

We are not culpable for “wrong opinions”

When we act, our bodily movements are caused by mental states. These mental states consist of a desire to achieve a particular goal, and some relevant beliefs which help us “steer a course through the world” towards achieving the goal.

It all means a human agent is a bit like a sophisticated version of a cruise missile, which is programmed to reach a target, and to do something (usually explode) when it gets there. It steers a course towards its target by comparing the terrain it flies over with its onboard computer map.

Although both the map and the targeting are necessary for it to reach its goal, the map is “neutral” in the sense that it only contains information about the outside world. It is compatible with the missile hitting any other target within the mapped area, and with its doing good things like delivering medicine or food aid when it reaches its target (not just doing something bad like exploding).

If the “act” of a cruise missile is to be praised or condemned, we judge what it is programmed to do, and where. We do not judge its map, whose greater or lesser accuracy simply results in greater or lesser efficiency in fulfilling the aim of the programming.

It should be the same with human agents. If we praise or condemn what they do, it should be with reference to the good or evil they intend to do, or are willing to do, and to whom. We should suspend judgement of an agent’s beliefs when we judge his actions, as beliefs are “neutral” with respect to the good or evil of what they help to achieve, just like the cruise missile’s onboard map. Like the accuracy of the missile’s map, the truth or falsity of an agent’s beliefs affects his success or efficiency in achieving goals, but the beliefs do not set any goals. A belief can be true or false, but it can’t be good or bad. The worst an opinion can be is false, rather than “aimed at an evil goal”.
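The neutrality of the map can be put in miniature. The sketch below is my own hypothetical illustration: the belief-like steering component contains only information about the world and sets no goal of its own, so the identical component serves a benign mission and a destructive one equally well.

```python
# A toy model of the neutral "map": beliefs steer, goals and actions
# are set elsewhere. Praise or blame attaches to the mission, not the map.

def steer(position, target):
    """Belief-like component: compares location with the goal.
    It sets no goal of its own and performs no action."""
    dx = target[0] - position[0]
    dy = target[1] - position[1]
    return (dx, dy)  # direction of travel, whatever the mission

def run_mission(start, target, on_arrival):
    heading = steer(start, target)
    arrived = (start[0] + heading[0], start[1] + heading[1])
    return on_arrival(arrived)   # good or evil enters only here

# The identical map/steering serves opposite moral purposes:
print(run_mission((0, 0), (3, 4), lambda p: f"deliver aid at {p}"))
print(run_mission((0, 0), (3, 4), lambda p: f"explode at {p}"))
```

Greater accuracy in `steer` makes either mission more efficient; it makes neither of them better or worse. That is the sense in which an agent’s beliefs, like the map, cannot themselves be good or evil.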

Despite the neutral role of beliefs, some people blame others for having the “wrong” opinions, or in other words for not believing what they “should believe”. For example, many Muslims think “apostasy” should be punished by death. Many Westerners think “denialism” should be ostracised or worse.

Those are remarkably similar views, and both are primitive, in the worst sense of the word. They belong to a backward state of society. They are inspired by confused understandings of agency, and we should reject them. If someone has false beliefs, he has either had bad luck (by being exposed to unreliable sources of knowledge) or he is epistemically ill-equipped. In neither case is he culpable.

Freedom trumps power

Imagine an über-homophobe. He doesn’t just hate homosexuals and avoid homosexual activity himself — the very idea of other people engaging in homosexual acts makes him sick with repulsion and fury.

He may not describe his attitudes in terms of hate. He may prefer to express them as a sort of “love”, perhaps as a virtuous reverence for heterosexuality. “My heart is with heterosexuality”, he may say.

Whether or not we accept his euphemistic spin on it, to say he has “strong feelings” is to understate the case. He has a super-strong urge to prevent homosexuals “doing whatever they do”. The reality of everyday homosexual acts routinely sends him into a towering rage, or reduces him to bouts of uncontrollable weeping. He is “offended” to a degree that’s “off the scale of offence”.

Question: Should homosexuals curb their sexual activity to spare this unfortunate man’s feelings? Should efforts be made to prevent him taking such immeasurably deep offence?

Answer: Of course not. Not by an inch. Not by the tiniest fraction of a millimetre. An adult’s freedom to engage in sexual acts with other consenting adults trumps anyone else’s urge to prevent him engaging in such acts.

However pathetically our über-homophobe may try to paint himself as the “victim” of other people’s “offensiveness”, the unalterable fact is that he wants power over others rather than freedom from others. His complaint amounts to an illegitimate claim to control their behaviour.

Freedom (and the legal rights that protect it) is more important than any ability to direct other people’s behaviour. Freedom trumps power: the choices people make for themselves always count for more than “feelings” and urges others may have to overrule those choices. “Feelings” and “offence” may be important between members of a family, but they count for nothing in the political sphere.

To pander to this unfortunate fellow’s aversion would certainly harm those whose freedoms it restricts. But it would probably harm him as well. Homosexuality isn’t going to go away, and he may as well just get used to that fact. Sooner or later he is bound to run into it, to his further chagrin. It may well be salutary — like immunisation — to deliberately offend him.

The same applies to other forms of giving and taking “offence” and “hurting people’s feelings”. In particular, it applies to Muslim “offence” taken at cartoons. Personally, I suspect it’s mostly faked: I’d guess many Muslims don’t give a rat’s ass about “insults to the Prophet”, and are simply itching for confrontation with Western people and Western values. But even if their “feelings” are entirely genuine, they still don’t count. No one’s “feelings” count when we’re talking about freedom.

The significance of desire

Most of us have an under-inflated concept of desire, and an over-inflated concept of belief. We happily accept that beliefs are fairly detailed representational states — so that taken together they prompt the metaphor of an “inner world”. But we tend to think of desires as much vaguer or thinner on detail than beliefs, and perhaps not even as representational states at all. Why is this way of thinking so common? — Here are a few suggestions:

First, we tend to specify desires with reference to objects rather than states of affairs. For example, we say “I’d like some chocolate” rather than “I have a desire to be eating chocolate”, or “I need some WD-40” instead of “I want my door hinges to be lubricated with WD-40”. Being human, we can safely assume that other humans have broadly similar goals to our own, so it’s often linguistically redundant to explicitly specify these goals as states of affairs. This can give the mistaken impression that desires do not represent states of affairs at all. In other words, it leads us to overlook the fact that desires represent the same sorts of things as make beliefs true or false.

Second, in general the states of affairs desires are aimed at are not yet realised. When we believe something, or at any rate when we believe something about the past or present, if our belief is true then the state of affairs that makes it true is a “fact”, with much attendant “detail”. When we desire something, on the other hand, the state of affairs that would satisfy it is not yet a fact. So for the time being it’s a “mere idea”, something more like Pegasus than a real horse grazing in a real field at this very moment. Any attendant “detail” is more obviously “imaginary”. We probably err on the side of assuming our beliefs are more detailed than they really are, as if they inherit some of the detail of the fact that makes them true, but with desires, we err in the opposite direction.

Third, in the Western philosophical tradition from Plato through Descartes (and in other traditions too), we tend to think of mental states as conscious experiences rather than as functional representational states that direct the behaviour of agents. This is changing, of course, with the continuing influence of American pragmatism and of the later Wittgenstein, as well as with the growth of functionalism in the philosophy of mind. But it is still very common to assume that a desire is a mere “feeling” or emotion rather than an essential part of the mechanism of action. This assumption is promoted still further by the possibility of wishing (and expressing wishes) for states of affairs that as agents we can play no part in bringing about (such as “I wish it would snow!”). It all suggests that desire is something rather touchy-feely and causally unserious. Worse, it can suggest that the real “purpose” of desire is nothing more than the having of a further sort of conscious experience — pleasure, or whatever.

We must reject this assumption that desire is a “feeling” (although of course specific desires are usually accompanied by distinctive feelings). Rather, a desire is a causally efficacious and typically fairly detailed representational mental state aimed at bringing about a real state of affairs external to the mind. Desires are complementary to beliefs, which are also representational mental states. Instead of bringing about real states of affairs external to the mind via behaviour, beliefs are typically brought about by these states of affairs, often via observation. Although there is something to the claim that desires are less detailed than beliefs, I think we should take Hume’s lead in giving desires priority: a desire (or “passion” as Hume put it) is the mainspring of any act. Whenever we act, our behaviour is aimed at achieving a goal; desire is the mental state that establishes such a goal, and beliefs (or “reason”) can do no more than help us steer a course towards achieving it. Hence “reason is the slave of the passions”.

Although we do not literally have an “inner world” of belief in our minds, together our beliefs form a sort of “map of the world” — the world as we take it to be. But that’s only half the story. Together our desires form a sort of “blueprint for the world” — the world as we would like it to become. The “map” and the “blueprint” contain the two essential components of the causation of all acts.

The traditional under-inflated way of thinking about desire tends to ignore the “blueprint” and puts far too much emphasis on the “map” — it imbues it with more detail than is really there, and it gives it causal powers that it simply doesn’t have. This often emerges in the assumption that specific sorts of belief are associated with specific sorts of acts.

A classic age-old example is the thought that belief in God causes people to behave in more “moral, God-fearing” ways. But of course such belief can only cause the valued sort of behaviour in conjunction with specific desires — to do what God wants, to avoid punishment, and so on.

Nowadays, much effort is expended on promoting beliefs such as “all races are exactly alike in respect of ability” and “there are no grey areas in rape”. The hope is that simply having such beliefs will discourage racist or sexist behaviour. But as we have just seen, behaviour of any sort is caused not only by our “map” of beliefs, but crucially — and more saliently, because desires are classified according to their goals — by our “blueprint” of desires as well.

The “attenuated” understanding of desire has a couple of really nasty side-effects. One is a blurring of the distinction between beliefs and desires, and the thought that desires can be “implanted” in an agent’s mind in the same way as many beliefs can: via observation. So if we watch violence on television, we will want to be violent ourselves. If we see ads on TV, we will want what they advertise. And so on. This gives rise to the sort of puritanism that discourages or even forbids the expression of “unhelpful” ideas. Traditional religious puritanism frowned on the expression of atheistic or agnostic views, and kept Hume out of a proper academic job. No doubt there are many lesser yet still talented people who are nowadays excluded from academic jobs for having beliefs that are currently regarded as “unhelpful”.

The side-effect that really makes me queasy is not the exclusion of talent from the groves of academe and the media, but the active promotion of falsity for the sake of our general moral betterment. For example, although I don’t think there are any significant differences between races as far as abilities are concerned, the claim that there are none at all is statistically vanishingly unlikely. If there are differences between individuals — and there are — there are bound to be differences between groups of individuals. Yet we are enjoined never to utter the forbidden words of that obvious truth. This is sick-making, and anyone who cares about truth should speak out against its deliberate suppression.

Leaving a trail of destruction

Some people who are terminally ill or in constant pain kill themselves to end their suffering. I think that’s a perfectly reasonable and decent thing to do.

But most suicides — especially among physically healthy people — are not like that at all. I think they’re motivated instead by the urge to “leave a trail of destruction in one’s wake”. This destruction takes the form of a slow train-wreck of blame and shame on the part of those who are left behind. Suicide prompts inevitable questions and invites a particular sort of interpretation: “What drove him to it?” — “It must have been his ____ [fill in blank here with name of supposed oppressor]. — How horribly they must have treated him! Shame on them!”

Self-harm is usually a passive-aggressive activity. It’s manipulative. In a disguised way it’s intended to cause more harm to peripherally “blameworthy” people than to the immediate “victim”.

Suicide is the ultimate in self-harm, and so the ultimate in passive aggression. It exploits our taboos as expressed in phrases like “we mustn’t speak ill of the dead”. Because it is verboten to utter bad thoughts about the dead person, yet something undoubtedly bad has taken place, there is a “finger of blame”, but it cannot be pointed at the “victim”. We are inclined to be inventive, and re-direct our condemnation towards “those who victimised the victim” (who are usually imaginary).

This is the sly thinking of hunger strikers, suicide bombers, and those who exploit children by forcing them to become “suicide bombers by proxy”.

Of course many suicidal people are depressed. And depressed people deserve sympathy rather than condemnation. True. But depressed people are ill, and illness is better treated with honesty than deception. Depressed people are often angry. Angry people are often aggressive, and sometimes do violent things. These things are no less violent for being done by depressed people. We fail to understand suicide if we treat those who kill themselves with unquestioning, saccharine reverence. And quite apart from failing to understand them, we foster an atmosphere in which further potential suicides are more likely, because their intended effect is more clearly guaranteed.

I’ll say that again: if we treat people who kill themselves with too much reverence and respect, we encourage further suicidal behaviour. This probably helps to explain why suicides often break out like “epidemics” in close-knit rural communities.

Instead of wringing our hands, beatifying the dead, and apportioning blame to the living, I suggest that we reserve our sympathies for the living and if necessary adopt a gallows humour or even mockery for the dead. Don’t worry about hurting their feelings: they can’t feel a thing.

Are we lucky to be alive?

Most things of value in life depend on luck. But what is it, exactly, to be lucky?

I think an agent is lucky when he wants something (i.e. he has a goal) and then passes through a sort of “trial” in which getting what he wants is statistically unlikely, or at least not guaranteed. If he passes the trial and gets what he wants, he’s lucky.

For example, suppose six people play a game of pure chance (to keep this example simple). In the long run, over repeated plays, each player will win about one sixth of the time. Assuming a player’s goal is to win, winning is lucky. A single win is lucky, and repeated wins are lucky: in the long run, winning more than one sixth of the time is lucky. Because the relevant sense of probability here is statistical, we have to imagine repeated events of a similar sort, and what proportion of them would achieve the goal.
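The statistical sense of probability at work here can be sketched as a quick simulation (the function names, the round count, and the random seed are my own illustrative choices, not anything from the text):

```python
import random

def win_proportion(player, rounds=60000, num_players=6, seed=0):
    """Proportion of rounds a given player wins in a game of pure chance.

    Each round, every one of num_players is equally likely to win, so in
    the long run each player's win proportion converges on 1/num_players.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    wins = sum(1 for _ in range(rounds) if rng.randrange(num_players) == player)
    return wins / rounds

# Over many repeated plays, player 0 wins about one sixth of the time;
# winning any single round, or winning more often than 1/6, counts as lucky.
p = win_proportion(player=0)
```

The reference class here is the set of repeated plays, and “lucky” is cashed out as achieving the goal (winning) more often than the long-run proportion would predict.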

Three observations can be made here. First, luck depends on having a specific goal and a clear reference class. The reference class consists of repeated events of a similar sort, a relevant proportion of which achieve the goal. It is often implicit — in the present example, it consists of plays of the game. Suppose we keep that reference class, but change the goal. Suppose a player just wants to have fun rather than win. If he has fun in two thirds of the games he plays, he’s more often lucky than unlucky, because a higher proportion of the same class of events count as successful given the agent’s specific goal. Being lucky can become so routine that we’re less inclined to call it good luck, and focus instead on the less usual case of being unlucky. But the basic idea is the same.

Second, an agent can’t be lucky if there is no possibility of his being unlucky. If some members of a class of events are lucky, then some other members must count as “unlucky”, or at least as “less lucky”.

Third, luck applies to events that are more or less beyond our control. Lucky or unlucky events happen to agents, rather than being done by agents.

If we’re lucky, we’ll inherit good genes from long-lived parents. If we’re lucky, we’ll be engaged in projects in life which go well for us, so that we advance towards our goals. If we’re lucky, our lovers will be faithful and honest. These examples of good luck can only happen to genuine agents who have goals — real goals that are the objects of genuine desires. They only count as cases of good luck because things might have been different — there are other cases of the same sorts of events that count as bad luck. And alas, we don’t have much control over them.

For most of the course of a normal life, it would be remarkably bad luck to die while asleep. So we’re not much inclined to call it “good luck” when we simply wake up in the morning as usual. But I think it’s salutary to think that way. In all human life there is an attrition rate. Nowadays, most of us in the West live in unusually safe circumstances (low infant mortality, good health, peace, prosperity) in which we are liable to forget that “in the midst of life we are in death”. An awareness of our own mortality need not be morbid, nor even pessimistic. It can help us get our priorities right. And it serves to remind us that even routine things depend on luck, however secure they may seem.

One sort of event often assumed to be “lucky” is the emergence of my self, starting with conception in the womb. The thought goes something like this: “so many different combinations of sperm and egg might have met at the crucial moment, with different DNA, in which case someone else would exist rather than me — how very lucky I am to exist, when it might so easily have been different!”

But I think that is a mistaken thought. Furthermore, I think it contributes to bigger philosophical problems concerning personal identity, consciousness, and even bad science.

At the moment of conception, the future agent who is being conceived is not yet an agent. Even if we think of the zygote formed at conception as a “potential person”, no merely potential X is a real X, so again no agent actually exists. And where there is no agent, there is no goal of staying alive. Where there is no such goal, there is no proportion of “successful” events in which the goal is achieved. So luck as understood here isn’t involved. There were countless other possible outcomes, but the actual outcome was not “unlikely” in the sense that an amazing coincidence occurred. It’s a bit like being allocated a car registration number — it’s “one in a billion”, but it’s not anything to be surprised about unless you bet beforehand that you would be allocated that very number.

Yet a widespread sense of perplexity persists, and I think it reveals something significant. It shows how much difficulty we have identifying our selves with physical objects (i.e. functioning brains). Despite near-universal agreement that Descartes’ “immaterial substance” is a fantasy, we are fixed in our ways, and we retain a habit of supposing that my self (i.e. my mind) existed before the formation of the physical object (i.e. my brain), and was lucky not to have “missed the boat”. We think of ourselves as “atomic” — i.e. as incapable of being subdivided into smaller parts, and as having an all-or-nothing existence that can’t emerge gradually from something more inchoate. Such presuppositions are “buried”, and are brought to light by the current sense of having been lucky.

The same sense of perplexity surrounds the so-called “hard problem of consciousness”. We find it relatively easy to imagine how some other agent — even an intelligent robot — might do all of the things that conscious persons do, yet we find it hard to accept that “I happen to be one of those things, doing what those things do” (as we point to a functioning brain). This is not a problem for science — it’s a distinctly philosophical problem of personal identity. The deficit is not one of knowledge so much as of the imagination. We find it hard to imagine that we are one and the same thing as a physical brain, wondering how “it came to be itself rather than something else”. If that isn’t a downright mistaken activity, it’s at least playful, like a cat chasing its own tail, imagining a part of itself belongs to something else.

The vague idea that “atomic” human selves are “queued up waiting to be conceived” also contributes to bad science. For example, attitudes to the extinction of our own species reveal that we treat non-birth as something like being “deprived” of birth, which is comparable to death. But this is a mistake. All individuals inevitably die, and all species inevitably come to an end, but these are entirely different. The supposition that they are similar misinforms much current thinking on ecology and catastrophism about climate change.

We must consider what single-sex marriage commits us to

Every week I seem to say something on Twitter that is almost universally misunderstood. Last week I said that there was nothing of value in equality per se, which many took to mean I was a right-wing lunatic.

This week I said that if we commit ourselves to allowing single-sex marriage, consistency demands that we also commit ourselves to a wider range of other sorts of marriage, sorts that we have hitherto disallowed. For example, we might allow some incestuous marriages.

Cue moralistic outrage. “You’re equating homosexuality and incest!” — “Slippery slope arguments are fallacious!” — “You’re a dirty homophobe for opposing single-sex marriage!” And so on.

First, I’m not “equating” homosexuality and incest at all. They’re obviously completely different. Most homosexual acts are morally neutral, whereas most incestuous acts are morally wrong. But both are routinely observed in the sexual behaviour of many species. Although they are “minority” activities, they occur often enough to be described as biologically “normal”.

Second, many slippery slope “arguments” (if they count as arguments at all) are not “fallacious” (if that’s the appropriate word). We often do have reason to believe that small initial changes portend much larger changes to come. A hundred years ago, opponents of universal suffrage argued that women should not be allowed to vote, because that would open the floodgates to all sorts of social changes. And they were right. It did lead to all sorts of social changes, most of which most of us warmly welcome.

But in any case I’m not worried at all about any slippery slope, nor am I warning of any such thing. Incestuous sex will always be a minority activity, and genuine, consensual incestuous love so uncommon that very few will ever want to seal their relationship by marrying each other. There are no “floodgates” about to open here.

Third, I am not opposed to single-sex marriage. (Nor would I be a “homophobe” if I were.) Rather, I’m trying to draw attention to some other commitments we inevitably take on if we are consistently committed to single-sex marriage.

Single-sex marriage is justified by a principle. That principle goes something like this: “if two consenting adults want their relationship to be recognised and sealed by law as marriage, the rest of society should not prevent them doing so”. If we deny consenting adults the legal right to marry, we are guilty of discrimination of a morally wrong sort. And it’s quite seriously wrong, I would argue, because the desire to marry — to marry the person one considers the love of one’s life — is a central part of human life and human flourishing.

Avoiding discrimination means “turning a blind eye” to differences, at least in law. We deliberately allow our commitment to a moral principle to override any personal distaste we may feel for people who are different in the way we are now deciding to treat as irrelevant.

By allowing people of the same sex to marry, we choose to override any distaste we may feel for homosexuality. (There must be some who feel such distaste, as we are told homophobia is so common.) We choose to treat their incapacity to procreate as irrelevant. We do the same for older people, or people who are barren for other reasons. We allow people who carry genetic diseases to marry, even though we know that if they were to procreate, their children may suffer serious disability. Our commitment to the above principle — a humane and decent principle guided by respect for erotic love — leads us to treat biologically ill-starred conditions as legally irrelevant. And a good thing too.

One such “ill-starred” condition is exemplified by Siegmund and Sieglinde in Wagner’s opera Die Walküre. As brother and sister who were separated when very young, they don’t recognise each other when they meet again as adults. But their instant affinity quickly grows into full human love. This love is not diminished by the discovery that they are siblings.

That sort of situation is common in mythology, scripture, and art. Incest is probably more common in such stories than homosexuality. However much we may disapprove of it, incestuous love must surely occur in real life, especially with the recently increased fluidity of families, greater frequency of separations in childhood, larger numbers of step-parents and half-siblings, and so on.

It seems to me that denying siblings the right to marry is an anachronism, or at least it will become an anachronism as soon as we allow homosexuals to marry, as I think we should. It conflicts with the basic principle that we commit ourselves to by allowing single-sex marriage.

Of course it is appalling that some parents rape their children. Of course the legal right to marry should be strictly limited to consenting adults. Of course consent cannot be given by an adult who is mentally ill or the traumatised victim of abuse. These things go without saying.

But as we consider the question of single-sex marriage, we should consider the broader possibilities that our guiding principle opens, and the wider commitments we are obliged to take on. It doesn’t matter that very few siblings or half-siblings will ever want to marry. The fact that some of them will is enough. We are obliged to consider the possibility, and what our response should be.

What I have learned in the past week is that the quality of debate over single-sex marriage is wretched. Well-meaning but unintelligent journalists pour politically correct syrup over real issues, and chicken out of robust debate with anyone who doesn’t accept their relentlessly and predictably orthodox views. I have no distaste for homosexuality myself, but I’m growing increasingly impatient with a “gay lobby” whose idea of debate is cheap victim-stancing or aggressive accusations of homophobia.