Altruism. Acts by individuals which benefit others at cost to themselves. Being nice. Cooperation. Why would anyone engage in such things? More specifically, why would any animal evolve to behave in this way? That is, forgetting for a moment the cultural/social factors specific to humanity that might be part of the explanation for our tendency towards altruism, how does it evolve in the genetic sense?
There is a real puzzle here. A basic tenet of evolutionary theory is that organisms behave as if to maximise the propagation of their genetic material. This is the "Selfish Gene" theory, which is generally accepted (except that a recent survey had 45% of US citizens believing in creationist "science". But, noding for the ages, we can safely (hopefully) say that evolution is generally accepted in whatever year you are now reading this).
This is enough to explain directly some examples of apparent altruism, such as that observed in social insects, where many individuals regularly sacrifice themselves for the good of their hive-mates. The point here is that although this is altruistic from the perspective of the individual creature, from the perspective of its genetic material it is plain good sense. Members of, say, a bee-hive are all so closely related that from a genetic point of view it is right to view the hive as a single being, and hence its members have evolved to act accordingly.
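The standard way of making this precise is Hamilton's rule: an altruistic act is favoured by selection when r × b > c, where c is the reproductive cost to the actor, b the benefit to the recipient, and r the genetic relatedness between the two. Worker bees, thanks to their odd haplodiploid genetics, share on average three quarters of their genes with their sisters - an unusually high r, which is why even suicidal sacrifices can make genetic sense.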
However, we don't only see altruism occurring within related groups. Individual animals are observed acting in ways which appear to cause overall harm to their genetic material's reproductive fitness in order to benefit other, unrelated individuals. Why? Well, a number of answers have been put forward to this question, including group selection in various forms, the memetic explanation for language-capable creatures (i.e. humans), and game-theoretic explanations. This last, game theory, provides a most satisfactory answer to much of what we're asking, so let's look at that.
Many questions of altruism versus selfishness can be viewed through the model of the Prisoner's Dilemma. In its simplest form, it works like this. We have two players, A and B. Each of them has two possible moves, either cooperate (C) or defect (D). They each choose a move independently of the other, and then receive their payoffs according to the moves of both. If both cooperate, they both get a decent payoff. If one cooperates and the other defects, the defector gets a very good payoff and the cooperator a very poor one, and if both defect they each get a pretty poor payoff. These payoffs represent, say, food or warmth or mates, or generally anything which enhances reproductive fitness. So taking the two players together, the optimum scenario is when both cooperate. However, consider yourself as player A. You'll think like this. "Suppose B cooperates. Then if I defect I'll get more than if I cooperate. And suppose she defects. Then, again, I'll get more if I defect. So I'll defect". But player B will think likewise, and you'll both defect, which is a shame, since you'd both have done so much better if you'd both cooperated. Cooperation is what's known as a "dominated" strategy, i.e. an irrational one in the specific sense of "instrumental rationality" in which game theorists use the term.
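To make the dominance argument concrete, here is a minimal sketch in Python. The payoff numbers (5, 3, 1, 0) are the conventional illustrative values, my choice rather than anything specified above; any payoffs ranked the same way give the same dilemma.

    # One-shot Prisoner's Dilemma. Entries are (A's payoff, B's payoff).
    PAYOFF = {
        ("C", "C"): (3, 3),  # both cooperate: a decent payoff each
        ("C", "D"): (0, 5),  # cooperator does very poorly, defector very well
        ("D", "C"): (5, 0),
        ("D", "D"): (1, 1),  # both defect: a pretty poor payoff each
    }

    # Player A's reasoning from the text: whatever B plays, D beats C,
    # so cooperation is a dominated strategy.
    for b in ("C", "D"):
        assert PAYOFF[("D", b)][0] > PAYOFF[("C", b)][0]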
So this can be seen as the defining problem of altruism. Why should I be altruistic? OK, sure, it'll help others out, and true, it would be great if everyone was nice like that, but that doesn't alter the fact that I'll do better if I'm not. This is the selfish way of looking at things. So why doesn't that argument always follow through? In humans the answer is often simply that we are not selfish, for cultural reasons. In some senses it could be said that cooperative societies themselves are based on the derationalisation of this argument, in making it seem reasonable to their members to go against the above and act for others before themselves. However, we know that when we're talking about evolution, selfishness rules absolutely, so there must be another answer.
Well, one might have been found by Robert Axelrod, in work from the early 1980s culminating in his book "The Evolution of Cooperation", which showed that it is possible for selfish agents to cooperate - if the interaction is repeated. In the iterated Prisoner's Dilemma, the above game is played again and again with the same partners, with each player able to remember those they've played against and how they've played in the past. Hence we have the possibility of the build-up of trust and of reputation, and it is this which allows cooperation to become a stable strategy. Axelrod's experiment, which has since been repeated and the results confirmed by himself and others, consisted of asking people to submit computer programmes into a competition - an iterated Prisoner's Dilemma tournament. The programmes were various, and included some which were very complex, but the one which won was simple and elegant: Tit for Tat (submitted by Anatol Rapoport) - cooperate on the first game with a new partner, and from then on do what they did last game. So this strategy punishes defectors, but is quick to forgive.
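As a rough sketch of how one pairing in such a tournament plays out (the strategy and payoff logic follow the description above; the function names and the round count of 20 are my own illustrative choices):

    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def tit_for_tat(their_moves):
        # Cooperate on the first game, then copy the partner's last move.
        return "C" if not their_moves else their_moves[-1]

    def always_defect(their_moves):
        return "D"

    def play(strat_a, strat_b, rounds=20):
        # Iterate the dilemma; each strategy sees only the other's history.
        moves_a, moves_b = [], []
        score_a = score_b = 0
        for _ in range(rounds):
            a, b = strat_a(moves_b), strat_b(moves_a)
            moves_a.append(a)
            moves_b.append(b)
            pa, pb = PAYOFF[(a, b)]
            score_a += pa
            score_b += pb
        return score_a, score_b

    print(play(tit_for_tat, tit_for_tat))    # (60, 60): trust pays
    print(play(tit_for_tat, always_defect))  # (19, 24): exploited once, then punished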
It can be shown that in an evolutionary context, this or a similar strategy is what will tend in general to evolve, and what will result is a group of cooperators. This group is stable in the sense that it can't be infiltrated by defectors - they will themselves be defected against and will soon die off. Hence we have what we wanted - a coherent explanation for the evolution of altruism, in this limited form.
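Continuing the sketch above, a toy round-robin - far cruder than Axelrod's actual ecological analysis - shows the effect: drop a couple of defectors into a population of Tit for Tat players and they fare badly.

    # Hypothetical population: 18 Tit for Tat players, 2 would-be invaders.
    population = [tit_for_tat] * 18 + [always_defect] * 2
    scores = [0] * len(population)
    for i in range(len(population)):
        for j in range(i + 1, len(population)):
            si, sj = play(population[i], population[j])
            scores[i] += si
            scores[j] += sj

    # Defectors profit only on the first move against each TFT player,
    # then earn the mutual-defection payoff ever after.
    print(sum(scores[:18]) / 18)  # TFT average:      1058.0
    print(sum(scores[18:]) / 2)   # defector average:  452.0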
So what does this teach us about human altruism? Well, I would say it is this "limited form", and the form that those limitations take, that can teach us most. It seems that what passes for altruism in humanity is often of a similarly limited nature, and that it is very hard (though not, note, impossible or even that rare) for us to overcome these limitations. We are often willing to sacrifice ourselves for others only when they are part of our family (a basic biological imperative, as discussed above, reified into a cultural norm), when we feel we "owe them one" because they've been nice to us previously (the cooperative tit for a cooperative tat), or when we don't know them but expect to interact with them again in the future (a pre-emptive cooperation inviting cooperation in return, as with the initial C in Tit for Tat as outlined above).
So does this mean that other altruism, without these essential if often hidden and/or instinctive selfish motives behind it, is possible? Well, I would say yes. But that's the subject of another node - namely this one.