Altruism. Acts by individuals which benefit others at cost to themselves. Being nice. Cooperation. Why would anyone engage in such things? More specifically, why would any animal evolve to behave in this way? That is, forgetting for a moment the cultural/social factors specific to humanity that might be part of the explanation for our tendency towards altruism, how does it evolve in the genetic sense?

There is a real puzzle here. A basic tenet of evolutionary theory is that individuals behave so as to maximise the propagation of their genetic material. This is the "Selfish Gene" theory, which is generally accepted (except that a recent survey had 45% of US citizens believing in creationist "science" - but, noding for the ages, we can safely (hopefully) say that evolution is generally accepted in whatever year you are now reading this).

This is enough to explain directly some examples of apparent altruism, such as that observed in social insects where many individuals regularly sacrifice themselves for the good of their hive-mates. The point here is that although this is altruistic from the perspective of the individual creature, from the perspective of its genetic material it is plain good sense. Members of, say, a bee-hive are all so closely related that from a genetic point of view it is right to view the hive as a single being, and hence its members have evolved to act accordingly.
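As an aside, the standard way of making this "genetic good sense" precise is Hamilton's rule, which isn't named above but underlies the point: an act of self-sacrifice is favoured when r times b exceeds c, where r is the relatedness between actor and beneficiary, b the benefit conferred and c the cost paid. A tiny sketch, with made-up benefit and cost figures:

```python
# Hamilton's rule (not stated in the writeup above, but the standard
# formalisation of this point): helping kin is favoured when r * b > c.
# The benefit and cost figures below are made up for illustration.

def favoured_by_kin_selection(r, b, c):
    # r: genetic relatedness, b: benefit to the recipient, c: cost to the actor
    return r * b > c

# Full-sister workers in a haplodiploid hive are related by r = 0.75,
# so even quite costly help can pay off genetically...
print(favoured_by_kin_selection(r=0.75, b=2.0, c=1.0))   # True
# ...whereas the same sacrifice for a complete stranger (r = 0) does not.
print(favoured_by_kin_selection(r=0.0, b=2.0, c=1.0))    # False
```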

However, we don't only see altruism occurring within related groups. Individual animals are observed acting in ways which appear to harm their own reproductive fitness in order to benefit other, unrelated individuals. Why? Well, a number of answers have been put forward to this question, including group selection in various forms, memetic explanations for language-capable creatures (i.e. humans), and game-theoretical explanations. This last, game theory, provides a most satisfactory answer to much of what we're asking, so let's look at that.

Many questions of altruism versus selfishness can be viewed through the model of the Prisoner's Dilemma. In its simplest form, it works like this. We have two players, A and B. Each of them has two possible moves, either cooperate (C) or defect (D). They each choose a move independently of the other, and then receive their payoffs according to the moves of both. If both cooperate, they both get a decent payoff. If one cooperates and the other defects, the defector gets a very good payoff and the cooperator a very poor one, and if both defect they each get a pretty poor payoff. These payoffs represent, say, food or warmth or mates - generally anything which enhances reproductive fitness. So taking the two players together, the best outcome is for both to cooperate. However, consider yourself as player A. You'll think like this: "Suppose B cooperates. Then if I defect I'll get more than if I cooperate. And suppose she defects. Then, again, I'll get more if I defect. So I'll defect." But player B will think likewise, and you'll both defect, which is a shame, since you'd both have done so much better if you'd both cooperated. Cooperation is what's known as a "dominated" strategy - i.e. an irrational one, in the specific sense of "instrumental rationality" in which game theorists use the term.
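To make the payoffs concrete, here is a minimal sketch of the one-shot game in Python. The numbers (3, 5, 0, 1) are the conventional textbook values rather than anything specified above - all that matters is their ordering:

```python
# A minimal sketch of the one-shot Prisoner's Dilemma described above.
# The payoff values are the usual textbook ones, chosen for illustration;
# any numbers with temptation > reward > punishment > sucker would do.

PAYOFF = {                 # (A's move, B's move) -> (A's payoff, B's payoff)
    ("C", "C"): (3, 3),    # both cooperate: a decent payoff each
    ("C", "D"): (0, 5),    # A cooperates, B defects: A does very badly
    ("D", "C"): (5, 0),    # A defects, B cooperates: A does very well
    ("D", "D"): (1, 1),    # both defect: pretty poor for each
}

# Player A's reasoning: whatever B does, defecting pays A more, which is
# exactly what it means for cooperation to be a dominated strategy.
for b_move in ("C", "D"):
    if_cooperate = PAYOFF[("C", b_move)][0]
    if_defect = PAYOFF[("D", b_move)][0]
    print(f"If B plays {b_move}: A gets {if_cooperate} by cooperating, "
          f"{if_defect} by defecting")
```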

So this can be seen as the defining problem of altruism. Why should I be altruistic? OK, sure, it'll help others out, and true, it would be great if everyone was nice like that, but that doesn't change the fact that I'll do better if I'm not. This is the selfish way of looking at things. So why doesn't that argument always win out? In humans the answer is often simply that we are not selfish, for cultural reasons. In some senses it could be said that cooperative societies are themselves built on defusing this argument - on making it seem reasonable to their members to go against it and act for others before themselves. However, we know that when we're talking about evolution, selfishness rules absolutely, so there must be another answer.

Well, one might have been found by Robert Axelrod. In the early 1980s he published work on the iterated Prisoner's Dilemma - most famously "The Evolution of Cooperation", written with W. D. Hamilton and later expanded into a book of the same name - which showed that it is possible for selfish agents to cooperate, if the interaction is repeated. In the iterated Prisoner's Dilemma, the above game is played again and again with the same partners, with each player able to remember whom they've played against and how those partners have played in the past. Hence we have the possibility of building up trust and reputation, and it is this which allows cooperation to become a stable strategy. Axelrod's experiment, which has since been repeated and its results confirmed by himself and others, consisted of asking people to submit computer programmes to a competition - an iterated Prisoner's Dilemma tournament. The entries were varied and included some very complex programmes, but the one which won was simple and elegant: Tit for Tat - cooperate on the first game with a new partner, and from then on do whatever they did in the previous game. So this strategy punishes defectors, but is quick to forgive.
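For the curious, here is a rough toy version of such a tournament. The payoffs are the same illustrative numbers as in the sketch above, and the four strategies are simple stand-ins rather than Axelrod's actual entries; in this little field, Tit for Tat should finish at or near the top of the table:

```python
# A toy Axelrod-style round-robin. Payoffs and strategies are illustrative
# stand-ins, not the actual entries from Axelrod's tournaments.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(history):
    # Cooperate first, then copy whatever the opponent did last game.
    return "C" if not history else history[-1][1]

def always_defect(history):
    return "D"

def always_cooperate(history):
    return "C"

def grudger(history):
    # Cooperate until the opponent defects even once, then defect forever.
    return "D" if any(theirs == "D" for _, theirs in history) else "C"

def play(strategy_a, strategy_b, rounds=200):
    history_a, history_b = [], []        # entries are (my move, their move)
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(history_a), strategy_b(history_b)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        history_a.append((move_a, move_b))
        history_b.append((move_b, move_a))
    return score_a, score_b

strategies = {"Tit for Tat": tit_for_tat, "Always Defect": always_defect,
              "Always Cooperate": always_cooperate, "Grudger": grudger}

names = list(strategies)
totals = {name: 0 for name in names}
for i, name_a in enumerate(names):
    for name_b in names[i:]:             # every pairing once, including self-play
        a, b = play(strategies[name_a], strategies[name_b])
        totals[name_a] += a
        if name_b != name_a:
            totals[name_b] += b

for name, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{name:17s} {total}")
```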

It can be shown that in an evolutionary context, this or a similar strategy is what will tend in general to evolve, and what will result is a group of cooperators. This group is stable in the sense that it can't be infiltrated by defectors - they will themselves be defected against and will soon die off. Hence we have what we wanted - a coherent explanation for the evolution of altruism, in this limited form.
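The stability claim can be checked on the back of an envelope with the same toy payoffs. Suppose that 98% of a population plays Tit for Tat, 2% always defects, and every pairing lasts 200 rounds - all of these numbers are invented purely for illustration:

```python
# A back-of-the-envelope check that rare defectors cannot invade a
# population of Tit for Tat players. Payoffs, round count and the
# population split are all illustrative assumptions.

ROUNDS = 200
R, T, S, P = 3, 5, 0, 1              # reward, temptation, sucker, punishment

# Per-match totals, easy to work out by hand for these two strategies:
tft_vs_tft   = R * ROUNDS            # mutual cooperation throughout
tft_vs_alld  = S + P * (ROUNDS - 1)  # exploited once, then mutual defection
alld_vs_tft  = T + P * (ROUNDS - 1)  # one free lunch, then mutual defection
alld_vs_alld = P * ROUNDS            # mutual defection throughout

p = 0.98                             # share of the population playing Tit for Tat
tft_average  = p * tft_vs_tft  + (1 - p) * tft_vs_alld
alld_average = p * alld_vs_tft + (1 - p) * alld_vs_alld

print("Average score, Tit for Tat:  ", tft_average)    # about 592
print("Average score, Always Defect:", alld_average)   # about 204
# As long as interactions repeat, the rare defector earns far less than
# the resident cooperators, so defection cannot gain a foothold.
```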

So what does this teach us about human altruism? Well, I would say it is this "limited form", and the form that those limitations take, that can teach us most. It seems that what passes for altruism in humanity is often of a similarly limited nature, and that it is very hard (though not, note, impossible or even all that rare) for us to overcome these limitations. We are often willing to sacrifice ourselves for others only when they are part of our family (a basic biological imperative, as discussed above, reified into a cultural norm), when we feel we "owe them one" because they've been nice to us previously (the cooperative tit for a cooperative tat), or when we don't know them but expect to interact with them again in the future (a pre-emptive cooperation inviting cooperation in return, like the initial C in Tit for Tat as outlined above).

So does this mean that other altruism, without these essential if often hidden and/or instinctive selfish motives behind it, is possible? Well, I would say yes. But that's the subject of another node - namely this one.

Altruism is one of the last (and most deeply embedded) thorns in the side of evolutionary theory, but these recent developments in game theory have finally given us something to grab onto. And we need to do that, because in principle things are very simple: if we humans evolved, then so did our minds, and if our minds evolved, then so did their behaviours – including altruistic tendencies.

There are problems, though. Altruism isn’t just a case of ‘tit for tat’. We aren’t nice just to family members, previous co-operators or possible future allies. We’re also nice to people we don’t know and people we’ll never meet. We donate money to international charities, we volunteer our time to help society’s less fortunate, and we help old ladies who drop their shopping in the street. It strains plausibility to say that we always do these things in the hope of a return favour, a kind of ‘just in case’ strategy where the principle would be ‘always help everyone in case you need to pull in a favour in return’. That’s a decidedly non-optimal strategy, where the net expenditure of effort (tit) is far greater than the net profit when it occasionally pays off (tat). So that kind of behaviour can’t just be explained as indirect selfish rationality, be it conscious or sub-conscious. The Prisoner’s Dilemma findings are helpful as far as they go, but what a game-theoretic explanation glosses over is the fact that altruistic behaviour can be attributed to that apparently mysterious phenomenon, the conscience.

Still, mysterious as it may be, our consciences are just another aspect of our mental behaviour, so there must be some evolutionary explanation for their existence. One recent suggestion, proposed most eloquently by Daniel Dennett, was initially developed when considering the problem of so-called ‘free riders’ in the tragedy of the commons, a larger-scale version of the Prisoner’s Dilemma. In game theory terms, a free rider is an agent who draws benefits from a co-operative society without contributing. In a one-to-one situation, free riding can easily be discouraged by a tit-for-tat strategy, as alfimp points out. But in a larger-scale society, where contributions and benefits are pooled, free riders can be incredibly difficult to shake off.

Imagine a situation where a society evolves as Robert Axelrod describes. Co-operative agents interact with each other, all contributing resources and drawing on the common good. Now imagine a rogue free rider, an agent who accepts a favour (you scratch my back) and later refuses to return it. The problem is that free riding is always going to be beneficial to the individual at a cost to society. How can well-behaved co-operative agents avoid being shafted?
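A toy public-goods game makes the free rider's advantage explicit. The numbers below - a group of ten, a contribution of 10, a pot multiplier of 1.6 - are arbitrary illustrative choices, not anything from the writeups above:

```python
# A toy public-goods game: everyone draws an equal share of a common pot,
# but only co-operators pay into it. Group size, contribution and the
# multiplier are made-up numbers for illustration.

GROUP_SIZE = 10
CONTRIBUTION = 10      # what each co-operator puts into the common pot
MULTIPLIER = 1.6       # the pot grows before being shared out equally

def payoffs(num_free_riders):
    contributors = GROUP_SIZE - num_free_riders
    pot = contributors * CONTRIBUTION * MULTIPLIER
    share = pot / GROUP_SIZE                 # everyone draws an equal share
    return share - CONTRIBUTION, share       # (contributor net, free rider net)

for n in (0, 1, 5, 9):
    contributor_net, free_rider_net = payoffs(n)
    line = f"{n} free rider(s): each contributor nets {contributor_net:.1f}"
    if n:
        line += f", each free rider nets {free_rider_net:.1f}"
    print(line)
# A free rider always does better than a contributor in the same group,
# yet the more free riders there are, the worse off everybody becomes.
```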

Over many generations, the obvious solution is for co-operators to evolve the ability to spot potential free riders in advance and refuse to enter into reciprocal arrangements with them. Then, of course, the canonical free rider response is to evolve a more convincing disguise, fooling co-operators into co-operating after all. Before you know it, you have one of those all-too-common evolutionary arms races, with ever-more-sophisticated disguises and ever-more-sophisticated detectors. This may be how some societies have evolved, but it seems a far cry from the genuine altruistic conscience which we feel we have.

Now here’s the clever part. In this evolutionary arms race, how best might an agent convince his comrades that he really is a genuine co-operator, not a free rider in disguise? Answer: by actually making himself a real-life, genuine co-operator, by erecting psychological barriers to breaking his own promises, and by advertising this fact to everyone else. In other words, a good solution is for organisms to evolve things that everyone knows will force them to be co-operators – and to make it obvious that they’ve evolved these things. And we ought to expect evolution to find good solutions. So evolution will produce organisms who are sincerely moral and who wear their hearts on their sleeves; in short, evolution will give rise to the phenomenon of conscience.

This theory, combined with Axelrod’s, seems to cover all the angles. It explains how a blind and fundamentally selfish process can come up with the genuinely non-cynical form of altruism that we observe in our consciences.

And here’s something to think on. If all this is true, and altruism (read: morality) has simply evolved as an optimal solution to a game-theoretic problem, what then for ethics? Are right and wrong just illusions fobbed off on us by our genes so that they can survive and reproduce in a society of self-interested agents? This is a meta-ethical question that straddles the boundary between biology and philosophy.

A short, food for thought node. Downvote away if you think this belongs somewhere else.


I'd like to draw a finer distinction between various kinds of behaviors:

1. Cooperation: benefits you and benefits others

2. Altruism: hurts you and benefits others

3. Selfishness / Competition: benefits you and hurts others

4. Stupidity: hurts you and hurts others

Even if it may not be a good idea to encourage everyone to fall into category 2, if you can devise more ways for people to fall into category 1, then you will have a more successful society. The fact is category 3 often just ends up being category 4 in the long run: the Israel / Palestine conflict is a prime example.
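For what it's worth, the four categories boil down to the signs of an act's effect on the actor and on everyone else, something like this rough sketch:

```python
# A minimal sketch of the four-way classification above, keyed purely on
# whether an act helps or hurts the actor and the others affected by it.

def classify(effect_on_self, effect_on_others):
    if effect_on_self >= 0 and effect_on_others >= 0:
        return "1. cooperation"
    if effect_on_self < 0 and effect_on_others >= 0:
        return "2. altruism"
    if effect_on_self >= 0:
        return "3. selfishness / competition"
    return "4. stupidity"

print(classify(+1, +1))   # 1. cooperation
print(classify(-1, +1))   # 2. altruism
print(classify(+1, -1))   # 3. selfishness / competition
print(classify(-1, -1))   # 4. stupidity
```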
