Why do males die younger than females?

I have a hypothesis that explains why in many (most?) species, males have a shorter life expectancy than females. My apologies if this has been thought of before, or if it’s already well-known. It’s quite likely that I’m re-inventing the wheel here, that I’ve come across the current explanation before somewhere, and have simply forgotten. I have a keen interest in evolutionary theory, but I’m not a biologist.

The hypothesis is this: males are subject to more exploitation by parasites than females, because in general parasites “want” their host species to thrive. Over the course of a lifetime, this greater exploitation takes its toll.

In non-monogamous species, males are useful for fertilizing the eggs of the females, but not much else. In effect, after donating sperm most of them are redundant. They use up food that could otherwise swell the species’ numbers, and hence safeguard the species itself. In non-monogamous species, too many males are “bad for the species”. Drone bees consume as much nectar as the honey-producing females. Male elephant seals consume far more fish than their smaller female counterparts, and few of them even get to donate sperm.

Farmers — in effect, human parasites of animals used as food — know all this, and so they usually kill males apart from the few needed to fertilize females. In doing so, they strengthen the species they parasitize, in the sense of increasing their numbers and assuring their future. Through domestication, the humble jungle fowl of Asian forests has become the mighty chicken, found in huge numbers all over the world. Much the same applies to cattle and sheep, which now occupy much of the earth’s surface.

Most parasites (such as microbes) are brainless, but through the process of natural selection they adopt “strategies” which can promote their numbers. In most cases, these strategies ensure that their host species do well enough to function reliably as hosts. The parasites aren’t actually thinking as human farmers think, of course, but over many generations they stumble upon similar strategies, which become established as the parasites that benefit from them proliferate.

With sex ratios, the “interests” of species and genes conflict. What’s “good for the species” is a much larger proportion of females than males, at least in non-monogamous species. But what’s “good for the genes” is a roughly equal number of males and females (as explained by Fisher’s Principle). The fact that in most species the ratio of males to females is indeed 1:1 makes a compelling case for a gene-centered understanding of evolution (à la Richard Dawkins’ Selfish Gene), and against group selectionism.
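Fisher’s logic can be made concrete with a few lines of arithmetic. The following is a sketch of my own (not a full population-genetics model): since every offspring has exactly one father and one mother, the two sexes earn equal total reproductive success, so the rarer sex does better per head — and genes biasing parents toward the rarer sex spread until the ratio is 1:1.

```python
# Illustrative sketch of Fisher's Principle. Every offspring has exactly
# one mother and one father, so total paternity equals total maternity;
# per individual, the rarer sex therefore does better.

def expected_offspring_per_parent(males, females, offspring=1000):
    """Average number of offspring credited to each male and each female."""
    per_male = offspring / males      # total paternity shared among the males
    per_female = offspring / females  # total maternity shared among the females
    return per_male, per_female

# In a female-biased population, the average male fathers more offspring,
# so a gene that makes parents produce more sons spreads.
print(expected_offspring_per_parent(males=200, females=800))  # (5.0, 1.25)

# Only at 1:1 are the payoffs equal, so neither a son-biasing nor a
# daughter-biasing gene can invade: the equilibrium sex ratio.
print(expected_offspring_per_parent(males=500, females=500))  # (2.0, 2.0)
```

The numbers chosen (1000 offspring, populations of 200/800 and 500/500) are arbitrary; the equal-payoff equilibrium at 1:1 holds for any totals.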

This hypothesis (I hesitate to call it “my” hypothesis) should be easy enough to test, as it entails that there should be a greater difference in male–female life expectancy in non-monogamous species than in monogamous species. It also entails that many of the diseases we associate with early male mortality (such as coronary heart disease, possibly suicide) may in fact be partially caused by infection by microbes.

The tyranny of conditioning

From a very early age, I detested learning by rote. My refusal to engage in this soul-destroying activity led to my first brush with criminality: I tried to cheat while reciting the seven times table.

I’m not the only one who has a deep distaste for learning by rote. What is rather surprising, perhaps, is that some people seem to have a genuine liking for it. Witness the eagerness with which so many take to learning foreign languages, with new vocabularies, irregular verbs, unpredictable genders of words for inanimate objects, and so on — all of which require tedious repetition and absorption.

It seems to me that there is a telling difference here of temperament — between those who assume education is essentially a matter of acquiring good habits of thought, and those who assume education is essentially a matter of getting a better understanding. Both types of people embrace education as a good thing, as a vital aspect of personal growth, but the former expect and even welcome an onerous period of habit-formation to achieve it. The latter embrace a sort of intellectual “principle of least action”: whatever is sufficient to explain is “all we need to know”. That attitude can look downright lazy to fastidious habit-formers.

The difference in temperament extends far beyond education. Here I’ll just touch on how it emerges in attitudes to mental illness, and in politics.

People who assume that education is a matter of acquiring good mental habits tend to think that mental health issues — from mild neuroses to out-and-out illness — are to be overcome by means of conditioning. For example, a phobia of spiders is supposedly overcome by coming into ever-closer contact with them — letting them crawl over one’s hands and so on. At the end of the conditioning process, the patient has “got used to the idea” — in other words, he has changed his habits.

Now I am no Freudian, as I think his understanding of the mind was badly mistaken in many respects. Yet I think he was importantly right, both factually and morally, in thinking that the way to better mental health was not through conditioning, as above, but through self-understanding. Let us put aside the details of such self-understanding, such as whether it really involves uncovering unconscious desires or phantasies. The important thing is that therapy is aimed at enlarging one’s understanding of oneself, rather than at achieving greater “self-control” through the acquisition of new habits. Rather than trying to instil such habits, a Freudian therapist would encourage exploration and experiment, with its attendant risks.

This difference of temperament can also be seen in political thought, where a deep division exists between Rousseau and Hobbes (almost everyone has an affinity for one or the other of them). Rousseau thought that by the time humans reach adulthood, they have been corrupted by the bad conditioning of modern society, and the solution to this problem is counter-conditioning. We must acquire new “habits of the heart” (as Tocqueville called them, re-phrasing Rousseau). Unlike Rousseau, Hobbes thought humans were born selfish, and there’s no way to change that; but by understanding ourselves better we will agree to an imperfect compromise in which the most important freedoms are safeguarded.

I think it’s pretty obvious that the current enthusiasm for minimum alcohol pricing, for special taxes on fat and sugar, and the rest of it, comes from the “conditioning” side of the divide. If people eat or drink too much, the idea goes, they should be re-educated by acquiring new habits. By putting unhealthy foods and alcohol that little bit further from reach, new habits will take root.

Let us pass over the fact that the attempt to instil new habits involves coercion, and that such coercion is discriminatory, because only a select few are poor enough to be affected by modest price increases. Let us pass over the issue of legislators making laws that they themselves are not subject to. The question remains: Is conditioning — enforced learning by rote — the right way to achieve personal growth?

My temperament says No. As alcohol prices have fallen in Ireland, Irish people have cut down on alcohol. I think this is because affordability enables younger people to learn how to drink, to educate themselves through exposure, experiment, and increased self-understanding rather than through the forced acquisition of a habit.

What is this “positive” concept of freedom?

Isaiah Berlin famously distinguished a “negative” and a “positive” concept of freedom. The negative concept is straightforward, but what can be made of the positive concept? Too often, attempts to distinguish them rely on a superficial linguistic difference between the terms ‘freedom from’ and ‘freedom to’. For example, it might be said that negative freedom is freedom from external constraints, whereas positive freedom “represents freedom to do things on one’s own volition.”

But that simply won’t do. Agents only ever do things because they want to do them. In other words, any genuine act (rather than a mere twitch, or a frogmarch) is done as the result of the agent’s own volition, and it can only be done when the act is not hindered by external constraints. And that amounts to a “negative” re-formulation of what was intended to capture the essence of “positive” freedom.

Perhaps what’s meant is something like this. If a person is forced to do something under duress — at gunpoint, say — then although he does it because he (briefly) “wants” to do it while the gun is aimed at his head, he can hardly be said to do it on his own volition. A mugger threatens him with death, and he’d prefer to live despite handing over his money than die holding on to it. This is not a free act, surely?

Well, of course the mugging victim is not free. But his lack of freedom can be characterized in an entirely negative way. Although he wanted to hand over his money while held at gunpoint, and that narrowly-circumscribed act in isolation could be described as “free” (no policeman suddenly turned up to prevent him doing so), that is to consider events within far too narrow a context. He had a much stronger, longer-term want not to be mugged. That want — considered in the larger context — was thwarted by his actually being mugged. The mugger was an external constraint that prevented the victim from doing what he wanted to do. So the victim was not free to go about his business unmolested, and therefore he was not free — for entirely “negative” reasons.

Notice that the word ‘free’ applies both to agents and to acts, and furthermore, acts have to be considered within contexts of varying scope. This invites confusion, as the word’s meaning can slide almost imperceptibly between them. (I started the last paragraph with a subtle shift of my own by giving an answer about an agent to a question about an act.)

When the negative concept of freedom seems to suggest that a man being mugged is “free” to give money to his mugger, some are drawn to the idea that we need a more robust concept of freedom than this negative one. And here thoughts usually turn to autonomy — to the idea of self-rule, of being the author of one’s own acts. It sounds silly or sinister to say that the mugging victim was “free” to hand over his money, because he lacks autonomy. The next obvious step is to embrace a concept of freedom that links it with autonomy.

The concept of autonomy is quite similar to that of power, specifically inner strength. To achieve something, we don’t just need an absence of external obstacles, we also need the wherewithal to act — the “muscle”, if you like, for movement to occur.

I think we need to proceed carefully here. To act at all, we need power — quite literally we need muscle to lift a finger, and in an extended sense we need various mental abilities. To act successfully — which goes beyond merely acting — we need an absence of obstacles that would prevent our acts achieving their goals. We should observe and respect this distinction. Greater power tends to bring with it greater freedom, but power and freedom remain distinct concepts, as one is a prerequisite of action, while the other is a prerequisite of success.

Hobbes memorably said that a man “fastened to his bed by sickness” did not lack freedom but power. In saying this, Hobbes exhibited a remarkable degree of political sensitivity. We have a legitimate prima facie claim against other agents who put limits on our freedom, but no such claim against mere circumstances (rather than agents) that make us internally weak. (Of course what one historical era counts as weakness can later be regarded as the effect of human agency.)

It seems to me that we need both a concept of power, and a distinct concept of freedom. But as far as freedom is concerned, the negative concept is all anyone needs. Most attempts to define the positive concept are in fact just alternative ways of defining the negative concept. “Freedom from” and “freedom to” are inter-definable, the two definitions in effect pointing to figure and ground that share lines of demarcation. Freedom to do X is just the same thing as an ability to do X thanks to the lack of external constraints that prevent one doing X. To be an autonomous agent is to have both the power to act, and the freedom to act, the latter understood negatively.

Yet, there is a positive concept of freedom. I know this because Rousseau used it, Marx used it, and it is presupposed in almost every ringing patriotic declaration of national freedom. This concept of freedom is expressed in rather mystical-sounding claims that to be free one must partake in the “general will”; that one can be “forced to be free”; that one must beware of “false consciousness”; that one’s “true self” must take control over one’s merely “empirical self”; that being free means embracing the “destiny of the nation”; or whatever.

As I see it, the essential difference between positive and negative freedom is this: having positive freedom means more than simply being able to get what you want — it means wanting the right things, usually understood in some implicitly moral sense. The various goods that are thought to empower those who have positive freedom — such as education, “strength of will”, etc. — are things that many people do not in fact strive for. But according to those who understand freedom positively, they ought to strive for them, for their own empowerment.

Implicit “oughts” are an essential ingredient in the positive concept. In Rousseau’s terms, being free means partaking in the “general will”, in other words not simply pursuing goals one already happens to have, but adopting larger goals as one’s own. Only with that essential extra ingredient can people be “forced to be free” or considered unfree if their “empirical selves” fall short of the “self-realisation” enjoyed by their “true selves” (in Berlin’s terminology). Ideas such as “false consciousness” only make sense against the background assumption that being free means wanting the right things as well as being able to achieve them.

These mystical-sounding appeals aren’t exactly to autonomy per se, but to something like the additional power that agents would acquire if they adopted goals — equality, justice, truth, whatever — shared by members of a group.

This is a deeply illiberal understanding of freedom, and I think a confusion of power and freedom lies at the root of the positive concept.

Have we had enough of experts?

Michael Gove’s remark that “the people of this country have had enough of experts” has become the most quoted incomplete quotation since “there’s no such thing as society”.

A more complete version goes like this: “the people of this country have had enough of experts saying that they know what is best.”

That extra bit is important, because it shows that Gove was not referring to people with knowhow, i.e. people with practical skills, but to people who claim to know that something or other is the case (or know that something should be pursued as a goal). That’s a vital difference.

We all accept that some of us have practical skills or abilities that others don’t have. Pilots are better at landing planes than non-pilots. By some miracle, I was able to put new tyres on my bike yesterday. And so on. No one means to disparage this strictly practical sort of “expertise”.

But when we move on from knowhow to claims to know that something is the case, things are quite different. The main difference between them is that a claim is true or false, but a practical skill is neither true nor false. It’s just “there” in an agent’s repertoire. It isn’t tested in the same way as claims such as scientific hypotheses are tested, but it is “put to the test” in the sense that we can quite easily judge how well someone is driving a bus, playing a violin, fixing the plumbing, or whatever. We can see practical expertise with our own eyes, especially its results, and so we can fairly reliably check whether someone has it.

The difference is especially sharp with claims in areas that are highly specialised, speculative, tentative, exploratory, theoretical, unusual, technical, complicated, abstract, arcane, etc. (henceforth I’ll just say “specialised”). With specialised claims, unless we are specialists ourselves, the best most of us can do is take someone else’s word for it, usually that of a supposed authority. Typically, such an authority will be someone with similar qualifications to the person making the claim. To find out whether a theologian’s specialised theological claim can be trusted, it seems we have to ask another theologian.

I hope it’s obvious how problematic this “non-independent checking” of expertise is bound to be. I’ll leave it as an exercise whether “peer review” fits this pattern.

Taking the word of an authority as a guide to truth is so antithetical to the scientific enterprise that one of science’s most highly respected bodies — the Royal Society — adopted as its motto an explicit warning not to do it: nullius in verba.

But even if we are lucky and have enough specialised training of our own not to have to take anyone else’s word for it, specialised claims are still “long shots” in an epistemic sense. I’ll try to explain. We can’t have absolute certainty about any sort of factual claim, but we can have more confidence in our beliefs about everyday things than we can in our beliefs about non-everyday things. For example, we can tell whether it’s raining or not just by looking out of the window. There’s a direct link (via light and the eyes) between the rain falling from the sky and our mental state of believing it’s raining. So direct is this link that the beliefs it sustains are formed in a reliable way: usually, if it is raining, we believe it’s raining; and if it’s not raining, we believe it isn’t raining. Forming beliefs about everyday matters like these is as reliable as “pushing a thumbtack into a noticeboard right in front of us”. But then forming beliefs about specialised matters is as unreliable as “shooting an arrow at a distant target”: it’s riskier — we’re more likely to “miss”, i.e. to get it wrong.

Science is one of the most valuable of human enterprises because of its ability to reveal the hidden structures of reality. But in doing so the claims it makes are like arrows shot at distant targets. These shots at distant targets are often revelatory, but they’re less certain than the more obvious truths of more pedestrian pursuits. The history of science bears this out: every branch is a string of theories once accepted as true, but later shown to be false. We have to accept that much of what is currently accepted in science is also bound to be exposed as false in the future. And what applies to science, where testing is de rigueur, applies a fortiori to specialised disciplines where there is less testing, such as philosophy and economics.

Language often plays tricks on us, especially when a single word refers to more than one thing. Words like ‘expertise’ and ‘expert’ are ambiguous in just that way. They can apply to practical skills in the hands of evidently capable agents, or to claims made by specialists using distinctly unreliable opinion-forming methods, which always includes soaking up the current orthodoxy of their peers. Let us cherish and respect the former, but treat the latter with due scepticism.

It’s especially important to be on our guard against this ambiguity when a single person seems to be in possession of both sorts of “expertise”. For example, a good doctor can exhibit the first sort of expertise by routinely diagnosing and curing illnesses. But one and the same doctor is also likely to have specialised opinions about (say) preventative medicine. The first is admirable. The second may sound impressive coming from someone we already recognise as a “good doctor”, but it’s really far less trustworthy than it sounds. Yet we refer to both as “expertise”, and I think we are inclined to trust both despite their very different epistemic status. We admire the person for the first sort of expertise, but then exaggerate his skill in the second.

Modern medicine has recently come to realise that its own advice on saturated fats — so confidently drummed into the ignorant masses for decades — is probably mistaken. This is absolutely typical of specialised opinion. There are abundant examples of specialised opinions coming to grief in much the same way in other disciplines.

The confusion of the two senses of the word ‘expert’ is so insidious that many people can’t resist the lure of expert opinion. They think it’s laughable or ridiculous to be more sceptical about it than about everyday opinion. When you point out to them that the opinion of an expert on almost any matter conflicts with the opinion of some other expert on exactly the same matter, they typically appeal to the majority: if most of the experts agree, they say, then the rest of us should take that as authoritative. But that hardly settles things, as any such opinion currently held by the majority of experts was at a previous time the opinion of a minority of experts, and going back still further, before the idea occurred to anyone, it was the opinion of no experts at all.

A show of hands is not a reliable way of serving truth — the question of God’s existence is not to be settled by calling for a vote in a roomful of theologians. Nor is the question of whether to stay in the EU settled by a vote among economists.

On a given topic in a given area of specialisation, most ordinary people simply won’t have any opinion at all. For example, I don’t have an opinion about quantitative easing. We might admire the diligence of anyone who does have an opinion about it, but we mustn’t allow ourselves to assume that his opinion is true. Very often specialised opinion is simply the less common alternative to having no opinion at all.

The revelatory power of science doesn’t depend on how confident we can be in the claims it makes, but when we make rational political decisions, confidence really does matter. That’s why cautious conservatives (small C) tend to be uneasy about specialist opinion in politics. Edmund Burke, “father of modern conservatism”, singled out philosophers and economists as being exactly the wrong sort of people to entrust with critical political decisions. Better decisions are more likely to be made by ordinary people from various walks of life, who have picked up practical skills through everyday living and working. It may not sound as impressive as mighty romantic schemes for future utopias, but a nation can suffer worse fates than becoming “a nation of shopkeepers”.

So far, I’ve taken “knowledge of what is best” to mean knowledge of factual matters, so that the experts Gove thinks we’ve “had enough of” presume to tell others about “is”s rather than “ought”s. A more obvious alternative is to take it to mean “knowledge of what is valuable”.

Here I follow Hume in taking a very simple approach. What is valuable is just what agents regard as valuable, what they treat as having value, what they choose, what they strive for in action, and so on. In a word, what is best for anyone is simply what they prefer. But over their own preferences, each individual is “sovereign”, as JS Mill put it. No individual’s preference can be gainsaid by any other individual. To do so would be a sort of usurpation. For example, homosexuals prefer to have sex with people of the same sex. No expert could conceivably overrule that preference, because homosexual desires, being desires, are neither true nor false. This is a humane liberal insight as well as a Humean point of logic: you can’t derive an “ought” from an “is”. “The heart wants what the heart wants”, in other words, and no expert can do anything about that, however big-headed an expert he may be.

In a liberal democracy, voting has to be understood as an expression of preference rather than the utterance of an opinion. What a voter says he prefers when he casts his vote can’t be gainsaid by an expert telling him that he doesn’t want “the right thing” enough.

Ah well… it’s all academic now. On the most superficial level, Gove was evidently right: the referendum result confirmed that UK voters had indeed had enough of experts telling them to vote Remain, and the majority voted Leave instead. I would have voted Remain had I lived in the UK, but as the result was becoming clear, I changed my mind because I’m a democrat. I recommend other Remain voters do so too.


Is usage of the word ‘terrorist’ racist?

A terrorist is a person who deliberately targets non-combatants of some group seen as “the enemy”. The aim is not to kill as many of them as possible, but rather to instil fear in others who belong to the hated group. Terrorists hope the fear they can generate in other members of the hated group will make them modify their political behaviour, in effect changing their way of thinking about a political issue. The aim of the violence is pour encourager les autres in the hope of bringing a political goal closer.

A bomb planted in a pub may “pointlessly” kill 10 innocent drinkers, but its real purpose is to bring 1000 useful idiots round to the terrorists’ way of thinking. The idea is to get them to have thoughts like these: “The people who did this must be very angry; their anger must be the result of having a serious grievance; intellectuals like us must do what we can to peacefully redress that grievance.” And so on.

Because the goal is political, and because the violence is aimed at changing the thinking of quite large numbers of people, we also normally think of terrorists as being organised, at least to the extent of belonging to a recognised group who share a political goal.

It’s often said that “one person’s terrorist is another person’s freedom fighter”, and there’s some truth to that. The creation of new states often involves acts of great brutality and the more or less deliberate targeting of civilians. But it’s also often said that we tend to classify people who have dark skin as “terrorists”, and exonerate (if that’s the word) light-skinned people as merely “mentally disturbed”. I think that’s unfair, and that most ordinary users of English do in fact use the word ‘terrorist’ reasonably consistently.

Killing is serious. Most of us never kill anyone, certainly not on purpose. Few of us think anyone is guilty of any fault so bad that they “deserve to die”. To kill people known to be innocent of such faults is a very disturbed thing to do. It requires the suspension of the everyday judgement that individuals are to blame for what they themselves do, and its replacement with the “assumed blame” of simply belonging to a group. Because such groups often have an identifiable ethnicity, terrorism is akin to extreme forms of racism, such as Ku Klux Klan lynchings of black people simply because they are black. I think we have to accept that people who engage in indiscriminate violence like this, who dress up in identity-hiding costumes intended to frighten, and all the rest of it, must indeed be mentally disturbed.

Of course the converse is not true. But the connection between mental disturbance and terrorism is firm enough for us to confuse them. Do we systematically apply labels in such a way that we are guilty of the very racism I’ve just suggested terrorism amounts to? — I don’t think so. In recent decades, most sensible people unhesitatingly described the Provisional IRA as “terrorists”, despite the pallor of their skin. Those that didn’t so describe them were not motivated by a racist urge to exonerate white skin, but by political sympathy for the Provisional IRA. In many cases that sympathy was the intended result of IRA violence. The same applies to the UDA and other equally white organisations we unambiguously label “terrorist”.

I think what really matters here is degree of organisation. We are unlikely to call a lone gunman who goes apeshit in a gay club a “terrorist” if he does not belong to an organised gay-killing group, no matter what the colour of his skin may be. We are unlikely to call a lone attacker who kills an MP a “terrorist” either, for similar reasons. But a group of people who are sufficiently organised to plan an attack in advance, to coordinate things among themselves, to arrange transportation, weaponry, and so on: these are surely “terrorists”. And we classify them as such because of their methods, political aims and degree of organisation rather than because of the colour of their skin.

If the word ‘terrorist’ is nowadays more commonly applied to dark-skinned people than before, that is probably because in recent decades fewer terrorists have been descended from people of Northern latitudes. It was not always so.

The urge to blame people targeted by terrorists (by accusing them of racism) instead of terrorists themselves is of course one of the intended results of terrorism.

The most amazing sporting event of all time?

Today’s news sources are talking about Leicester City’s winning the Premier League as a sort of miracle. The bookies’ initially-offered odds of “5000 to 1” have morphed into a supposedly scientific/mathematical measure of probability — we are being told that Leicester City had “a slim chance of only 1 in 5000” of winning the Premier League. Yet amazingly, they did win it! We are given to believe that “1 in 5000” is a numerical measure of how surprised we should be at the fact that they did in fact win.

That is ridiculous. The Premier League consists of 20 teams, chosen specifically for their ability to beat other teams. Suppose instead of the Premier League on its own, we imagine a much larger competitive free-for-all containing the Premier League, plus the First Division below them, plus the Second Division below them, and so on, till we have 5000 teams altogether playing against each other.

If we knew nothing whatsoever about any given team, in that situation we might assign a “probability of only 1 in 5000” that it would win. In other words, if we picked a team randomly from the 5000, and did so repeatedly, then in the long run we would pick the winning team about once in every 5000 attempts to do so.

But now suppose we are told something about a given team: that it is in the top 20. That should make us raise our numerical assessment of its chances of winning the free-for-all. If we were further told that a team in the top 20 never loses to a team in the bottom 4980, we would very significantly raise our estimate of its chances of winning the free-for-all. It would be something similar to playing the Monty Hall game, except that instead of one out of three available doors being ruled out, 4980 out of 5000 available doors are ruled out.
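The Monty Hall comparison can be checked with a quick simulation. This is an illustrative sketch of my own, generalised to n doors, assuming the host opens every door except the contestant’s pick and one other (always leaving the prize in play when the first pick was wrong):

```python
import random

# Monte Carlo check of the Monty Hall effect: ruling out known-losing
# doors concentrates the probability on the doors that remain.

def monty_hall(n_doors=3, trials=100_000, switch=True):
    wins = 0
    for _ in range(trials):
        prize = random.randrange(n_doors)
        pick = random.randrange(n_doors)
        if switch:
            # Host opens all doors except the pick and one other; if the
            # first pick was wrong, the remaining door must hide the prize,
            # so switching wins exactly when pick != prize.
            wins += (pick != prize)
        else:
            wins += (pick == prize)
    return wins / trials

print(monty_hall(3, switch=True))     # ~2/3
print(monty_hall(5000, switch=True))  # ~4999/5000
```

With 3 doors, switching wins about 2/3 of the time; with 5000 doors and 4998 of them ruled out, it wins almost always — which is the sense in which eliminating the bottom 4980 teams transforms the “1 in 5000” figure.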

But that, in effect, is what limiting the free-for-all to only the Premier League does. It means that if we know nothing at all about a team, the repeated act of picking one out randomly in the hope of choosing the winner would be successful much more often than 1 in 5000 times.

To lower the “chances of winning” in the face of further knowledge about a given team is to introduce capricious, subjective factors that cannot be relied on to make statistical judgements of relative frequency. They involve unrepeatable events or events that are not statistically lawlike, and so cannot be reliably extrapolated from. All we can do is guess about credibility here.

Casinos make money reliably because the behaviour of dice, cards, rotating cylinders etc. is statistically lawlike. For example, we know that in the long run about one sixth of rolls of pairs of dice will be doubles. But the behaviour of football teams in the Premier League is not at all lawlike. Bookies have to use numbers in their line of work, but let no one think these numbers correspond to measures of anything real or significant.
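The dice claim is easy to verify by simulation — a toy sketch, assuming fair six-sided dice:

```python
import random

# Checking that about one sixth of rolls of a pair of dice are doubles:
# whatever die A shows, die B matches it with probability 1/6.

def doubles_frequency(trials=120_000):
    doubles = sum(random.randrange(6) == random.randrange(6)
                  for _ in range(trials))
    return doubles / trials

print(doubles_frequency())  # close to 1/6 ≈ 0.1667
```

The point is that this frequency is statistically lawlike: rerun the simulation and the answer barely moves. There is no analogous experiment one could rerun on a football season.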

I suggest that we should sharply distinguish statistical relative frequency and subjective judgements of credibility. Numbers measure the former, but their presence is a will o’ the wisp when we are dealing with the latter.

Antisemitism is a special sort of personal failing

“Labour left in denial over antisemitism” say the headlines. And it is uncanny how people can be blind to something so glaringly obvious to the rest of us. I think the apparent blindness of political extremists to their own antisemitism is systematic: it exists for philosophical reasons that are worth noting.

Political extremists tend to understand justice as a matter of “groups getting what they deserve”. For example, German Fascists thought “Aryans” deserve their “destiny as the master race” or some such hokum, while of course Jews deserve death. According to this way of thinking, individuals do not matter. What matters is the group an individual belongs to — usually an accident of birth. If one’s forebears mistreated others, one inherits their guilt. If one’s forebears were mistreated, one inherits their entitlement to better things.

Extremists on the left have a much more pleasant way of expressing essentially the same view about collective guilt and the supposed entitlements of groups: they put it in terms of social justice. “We are on the side of the oppressed”, they claim. Of course this also puts them on the side opposed to “the oppressors”. Such side-taking has many forms, but as a rule it is regarded as acceptable to “punch upwards” and unacceptable to “punch downwards”. For example, imitating someone’s accent for satire or pure mirth is considered fair game, as long as the target is “up” — a toff, say, or someone with a pretentious way of speaking. But imitating Ken Livingstone’s “common” accent would be considered completely out of order. It doesn’t matter if the toff is now impoverished or if the “common” man is now a rich and powerful political figure — what matters is birth. Those of working class pedigree are uniquely free of sin, cleansed by their own victimhood at the hands of an “elite”.

These ways of thinking are perfectly suited to antisemites with their standard-issue antisemitic tropes such as that Jews are rich, cunning, bank-controlling, behind-the-scenes international political puppet masters, that Mossad is behind every terrorist outrage including those that benefit Israel’s enemies, and all the ludicrous rest of it.

Antisemites are duly attracted to the extreme left, at least in the UK and Ireland, and their presence there in turn prompts accusations of antisemitism. Yet when the accused sincerely ask themselves whether they are antisemites, their thoughts go like this: “Antisemitism is a form of racism, and ‘racism’ means ‘oppressing people regarded as inferior’; but I entertain no such thoughts, especially towards Jews, so I’m innocent of the charge. I remain a committed anti-racist on the side of the oppressed.”

That train of thought is gruesomely self-congratulatory, but I think we should acknowledge that there is little willingness to do harm or to exploit others in it. Antisemitism is a special sort of racism. It’s more insidious than other forms of racism, because those in its grip find no malice in themselves. That makes antisemitism a special sort of personal failing, one where culpability lies not in malice but in lack of reflection. It’s a philosophical failing, of people who have not taken the injunction to “know thyself” seriously enough. Western antisemites from the Christian tradition are people who have absorbed much of what is worst about Christianity, yet purged themselves of too little of it. This applies in particular to the doctrine of original sin, which says that blame is inherited. It also applies to ethics in which moral rightness is understood simply as a matter of “meaning well”, of keeping one’s nose clean, of acting out of virtue rather than vice, of avoiding malice.

I think we should treat hatred of Jews among Muslims of the Middle East as something different from Western antisemitism, although the former owes much to the latter. I don’t mean to single out for blame anyone who is historically, scientifically or culturally illiterate, who is unable to think beyond the narrow confines of an inadequate education. But I do mean to blame people who are culturally equipped to reflect on the failings of their own Christian tradition, who are morally obliged to do so, yet who have neglected to do so. In the West, we all must ask ourselves how one of the greatest societies in the world, a politically sophisticated democracy whose people created the greatest art of mankind, somehow managed to create hell on earth.

“If we don’t learn the lessons these pictures teach, night will fall”

Self-determination versus nationalism

Self-determination is a wonderful thing, but nationalism is a terrible thing. The difference between them is this. Self-determination is guided by a principle: if a piece of territory is in dispute, then its sovereignty should be settled by asking the people who live there. It is not something to be settled by asking people who do not live there.

For example, according to the principle of self-determination, the question of the sovereignty of the Falkland Islands is very clear. The overwhelming majority of the people who live in the disputed territory want it to remain British. The fact that there may be many more living in Argentina who would prefer “Las Malvinas” to be part of Argentina is irrelevant. Or at least it’s irrelevant according to the principle of self-determination, because they live outside the disputed territory.

Unlike self-determination, nationalism is not guided by principle. Instead, it takes its direction from an ideal of what is best for an identifiable group. This differs from one group of people to the next (and can differ between individuals who have different ideals for the same identifiable group of people). So although strict compliance with the principle of self-determination cannot generate conflict, rival nationalisms can come into conflict, and often do. Over the course of history, perhaps more people have died in disputes over territory than in any other sort of conflict. We are a tribal species, and nationalism is tribalism on the largest scale.

Self-determination is democratic. All that counts is what the majority of a group of individuals (i.e. the people who live in the disputed territory) actually prefer. But nationalism tends to count not what people do as a matter of fact prefer, but what they would prefer in an ideal world, or should prefer for the ideal of nationhood to be realised.

The question of the sovereignty of the Falkland Islands is about as clear as it gets, at least according to the principle of self-determination. Similar questions about Wales, Northern Ireland and Scotland (i.e. should they remain part of the United Kingdom?) are somewhat less clear, because the respective majorities are slimmer. In such cases, principled attachment to self-determination often merges into unprincipled nationalism. The discussion tends to shift imperceptibly from what people do in fact prefer, to what the right sort of people prefer, or to what ordinary people would prefer if they were less ordinary by being better educated, or to what people should prefer according to the ideal of what is best for the group.

Very often, this appeal to an ideal trades on some idea of ethnic purity. The right sort of Scotsman is a “true” Scotsman; the better-educated Welshman speaks the Welsh language; authentic Irishmen enjoy traditional Irish music and play Gaelic games; and so on. The ones who don’t are supposedly remiss in some vaguely “moral” way, and it’s assumed that they should be guided by the ideal. Nationalism nearly always enthusiastically promotes a nation’s language, art, and distinct ways of life—and despises the other language, the other art, and the other, less authentic, more corrupted, ethnically “impure” ways of life.

So even though a clear majority of the people who live in a disputed territory may prefer the status quo, it very often happens that a nationalistic movement blurs matters by making an issue of authenticity and ethnic purity, appealing to ideals instead of actual majority preference. By blurring the issue, and by appealing to tribal sentiments, nationalism can give it the appearance of being a “live issue”, even when the principle of self-determination can settle it easily and unambiguously. Typically, the slimmer the majority, the greater the potential for nationalist blurring of the issue.

I think it’s wonderful for a language to work as a means of communication, but terrible for a language to become an expression of authenticity or a symbol of ethnic purity. I’d say much the same about art, and other forms of culture and ways of life. Apart from the danger of political conflict, there is the damage done to language, art and other forms of culture by turning them into vehicles of struggle. Language, art and other forms of culture are enriched by intermingling rather than insulation; they are improved by a broader rather than a narrower range of influences.

Looking back 100 years to the Irish Rising of 1916, I find very little to like in its leaders. They overruled what most Irish people actually wanted at the time, and instead appealed to a nationalistic ideal of what they should have preferred. I admire the bravery of the 1916 leaders, but I don’t like what they did with it.

Why holists distrust expert opinion

The “default” way of thinking about evidence is often called foundationalism. Foundationalists think that most of our everyday beliefs about the world are justified by “resting on a foundation” of privileged or more certain beliefs—typically, beliefs about conscious experience, raw feels, or “sense data”. In science, foundationalists typically suppose that a theory in a specialised field is a sort of edifice that is justified by resting on the carefully collected observational “data” of that specific field. This idea is partly inspired by mathematics, in which theorems really do rest on (i.e. are derivable from and implied by) axioms. The question is, should we take mathematics as our model of empirical knowledge?

Opposed to foundationalism is holism. Holists think that everyday beliefs are justified by belonging to a larger belief system. Individual beliefs do not stand or fall on their own, but meet the evidence as a whole, and it’s the way that whole “hangs together” that justifies the entire system. In science, holists typically suppose that theories consist of hypotheses, which are justified by meshing smoothly with other hypotheses, often from disparate fields. This is a matter of how much a theory explains, how reliably it predicts unforeseen observable events, how “virtuous” it seems when we consider its conservatism, modesty, simplicity, generality, fecundity, and so on. This is nearly always an intuitive matter of giving conflicting virtues various weightings, guided by little better than “how it feels” pragmatically.

For example, a holist would judge Freud’s theory by asking how much it seems to explain—how well it meshes with evolutionary theory, with other philosophical ideas about agency, with what ordinary people can see for themselves of undercurrents and tensions in family life, with the various insights that art can give us about ourselves, and much else besides.

A telling difference between foundationalists and holists is in their respective attitudes to specialist or “expert opinion” (by which I don’t mean the pragmatic know-how of a mechanic, but rather narrow theoretical claims made in advanced disciplines). The foundationalist tends to trust expert opinions, because he sees them as the product of skilled minds’ rare ability to trace specialised claims back to their specialised foundations, rather as an actuary can draw specific conclusions about a company’s finances from its specific account books.

The holist tends to distrust expert opinions. He will remind us that we can more reliably form opinions about the simple, familiar, observable, concrete and everyday than we can about the complicated, unfamiliar, unobservable, abstract or unusual. Most importantly, the holist is aware that claims made in specialised disciplines are typically hypotheses rather than the conclusions of arguments. No “data” implies them. If anything, it’s the other way round: hypotheses imply as-yet unseen events that observation can later confirm or deny. To the holist, the broad experience of a reasonably well-educated layman is better than the specialised training of an expert.

Holism has been around for well over a century. It has some well-known academic proponents such as Quine and Davidson. Yet foundationalism remains the default position among academics. Most of them despise hypotheses—mere “guessing”, as many would put it—and encourage their students to “provide arguments” instead of explaining why this or that hypothesis explains or predicts things better than its rivals. I think this is a tragedy.

What are ‘qualia’?

The word ‘qualia’ seems to be entering everyday usage. (It’s a plural — the singular is ‘quale’.) A quale is a distinctive sort of conscious experience, such as the subjective experience of blue (i.e. what we consciously experience when we are actually looking at a clear cloudless sky, or dreaming about swimming in the Aegean, etc.). How might qualia be explained from the perspective of evolutionary theory?

The really mysterious thing about qualia is this. The nerve endings send “signals” to the brain via the sensory neurons, like messages along telephone wires, and the brain reacts appropriately by sending “signals” back along the motor neurons to the muscles. Although there is an obvious need for the nerves to work like telephone wires, there doesn’t seem to be any obvious need for conscious experience to enter the picture at all. And yet, the life of a conscious creature is a riot of subjective experiences — distinctive colors, various subjective feelings such as hunger and pain, and so on. Why?

Here’s a very quick answer:

All living creatures are programmed to seek goals such as food, reproduction, safety, etc. Having an internal “map” of the outside world helps animals to achieve these goals. This internal map is a belief system. It works like the onboard computer map in a cruise missile, which looks at the terrain below and guides it towards its target. Of course, a cruise missile has just one goal and a very limited sort of map, but the basic idea is the same.

Programming a cruise missile is no doubt complicated, but maintaining a belief system is even more complicated: it calls for a lot of self-regulation. A belief system needs to perceive situations in the outside world, naturally, but it must also make choices, delay the achievement of some goals in favor of others, discard some beliefs when other beliefs are more likely to be true, and so on.

All of that entails having a higher-level “map”. This is more than just a “map” of the outside world like a cruise missile — it’s a “map” for perceiving one’s own internal states, and one’s overall position in the world. For example, discarding one belief in favor of another belief involves having second-level beliefs about which first-level beliefs are more likely to be true than others.

We are now in a position to ask: what is consciousness? Answer: consciousness is constantly updated knowledge of our own states — and it mostly consists of higher-level states like the ones just mentioned.

For example, consider reaction to injury. A creature that has no such higher-level states (and is therefore not conscious) might have a simple reflex that makes it recoil when injured. But a creature with a higher-level “map” of its own states can choose between carrying on regardless, if the injury is not too serious, and stopping to nurse the wound, if it is serious enough. The seriousness of the injury depends on the circumstances. If the creature is running away from a predator, it should keep running at all costs. If the worst it faces is going without a meal, it should stop and rest. Unless it is in danger of starving to death, in which case it shouldn’t.
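The injury example can be caricatured as code. The following toy sketch is entirely my own invention (all names, numbers and thresholds are hypothetical), intended only to show the shape of the two-level architecture: first-level states indicate events in the body, and a second-level process weighs them against each other and against context:

```python
from dataclasses import dataclass

@dataclass
class FirstLevelState:
    source: str        # where the signal comes from, e.g. "finger"
    kind: str          # e.g. "injury", "hunger"
    insistence: float  # how loudly it demands attention, from 0 to 1

def attend(states, fleeing_predator=False):
    """A second-level decision: which first-level state to act on, if any."""
    if fleeing_predator:
        return None  # keep running at all costs: override everything
    # Otherwise act on the most insistent signal, if it is serious enough.
    most = max(states, key=lambda s: s.insistence, default=None)
    return most if most and most.insistence > 0.5 else None

states = [
    FirstLevelState("finger", "injury", 0.7),
    FirstLevelState("stomach", "hunger", 0.3),
]
print(attend(states).source)                  # "finger": stop and nurse the wound
print(attend(states, fleeing_predator=True))  # None: carry on regardless
```

Notice that the decision lives entirely at the second level: no first-level state decides anything on its own, it merely makes its “case” through its insistence.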

The decision-making capacity of these second-level states is a bit like the decision-making capacity of a political assembly. Each of the members wants what’s best for his own constituents, but the decisions of the whole are taken in the interests of the whole. This is achieved when the representations made by each member have their own distinctive character and degree of insistence.

For example, having a distinctive sort of pain is normally a matter of having an injury together with an internal state that indicates the severity and location of that injury, so that the “assembly” of second-level states (i.e. consciousness) can make an informed decision about whether or not to override the signal.

As a first pass, above, I said that consciousness is constantly updated knowledge of our own states. Now I will fine-tune that by saying that consciousness consists of second-level representations of first-level representations of states of the world outside our heads.

For example, suppose I burn my finger. That is a state of the world (my body) outside my head. The injured part of my finger sends a signal to my brain, which then forms a state that normally co-occurs with that sort of injury, and so works as an “indicator” of its presence. This is a first-level representation of such an injury. So far, consciousness has nothing to do with the process. But now my brain has to take account of my overall state, and make decisions based on the various indications that are available to it. Doing that involves perceiving various internal states — such as the first-level representation of the injury to the finger — and weighing them up in terms of their urgency, type, and so on. That involves forming a second-level representation of first-level representations. In a sense, each of the first-level representations has to “make a case” for itself by having distinctive qualities that demand more or less attention, this or that type of attention, and so on.

In order to be represented appropriately at the second level, a first-level representation has to be distinctive. That is why it “feels like something or other”. What these states feel like is a product of how they are physically realized, whether they are welcome or unwelcome, what sorts of decisions have to be made given their occurrence, and so on.

For example, the first-level states that occur with injury are realized in different ways depending on which part of the body is injured. Almost all of them are unpleasant, because almost all injury is unwelcome. Most of them are “insistent” because most of them require some sort of action, taken sooner rather than later.

Or again, consider the first-level states that typically occur with the presence of an (objectively) blue object. Blue objects are unusual in nature, so the first-level states that accompany them are very distinctive: they arouse curiosity, and so on. Mostly these states are pleasant, because most blue objects are safe, and some are valuable in some way. The second-level state that accompanies the perception of a blue object (i.e. the “experience of blue”) is not an especially “insistent” sort of state, because action is rarely needed in response to the presence of blue objects.

I hope it’s reasonably clear that having “qualia” is a “functional” business that can add significantly to reproductive success.