
Alex Liebman liebman@fas.harvard.edu

Summary for Robert Axelrod The Evolution of Cooperation, Chapters 1-4

“A-tisket a-tasket, I go TIT FOR TAT
with anybody who’s talkin’ this shit, that shit”
-- Eminem

Abstract: The basic question is: “Under what conditions will cooperation emerge in a
world of egoists without central authority?” (3). Axelrod’s approach is to ask: what
strategies will be the most effective in non-zero sum games where there is nonetheless a
unilateral incentive to defect (i.e., a prisoner’s dilemma)? Based on competitions between
computer programs, Axelrod argues that cooperative strategies outperform non-
cooperative strategies; specifically, TIT FOR TAT (explained below) is the best strategy.
When combined with evolutionary models, this leads to cooperative equilibria which
cannot be “invaded” by actors with non-cooperative strategies. The final chapter presents
an example from soldiers’ behavior in WWI to show how cooperation can evolve even
without a central authority or explicit coordination.

Chapter One: The Problem of Cooperation

In a variety of strategic situations, all actors have a dominant strategy not to
cooperate, but mutual gains could be realized with cooperation. Axelrod thus considers
the basic Prisoner’s Dilemma as representative of a broad set of real-life strategic settings.
In a single-shot game, the only equilibrium outcome is when both players defect.
However, what if the game were to be repeated by the same two players? This is the key
to the entire argument: “what makes it possible for cooperation to emerge is the fact that
the players might meet again” (12). The question is: assume the game will be played
some number of rounds; what is the best strategy to gain the most points? (You get 3 for
CC, 5 for DC, 0 for CD, and 1 for DD). This leads to Proposition One: If the probability
of playing a second round (the discount parameter) is sufficiently high, it is impossible to
say what the best strategy is without considering your opponent’s strategy. (If the
discount factor is low enough, then the best strategy is always D no matter what). Here is
the key insight of the whole book: in repeated games, the best strategy is TIT FOR TAT.
TIT FOR TAT means you cooperate on the first round and thereafter play whatever your
opponent played in the previous round. In two successive computer tournaments against
other decision rules, TIT FOR TAT won both times. Axelrod argues this is because TIT
FOR TAT has four key
characteristics: 1) “avoids unnecessary conflict by cooperating as long as the other player
does,” 2) will retaliate if provoked, 3) forgives quickly after retaliating, 4) its behavior is
very clear – others can quickly adapt to its pattern (20).
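The payoffs and the TIT FOR TAT rule can be made concrete in a short simulation. This is a minimal sketch of my own (the function names and structure are not from the book), using the scores given above: 3 for CC, 5 for DC, 0 for CD, 1 for DD.

```python
PAYOFF = {  # (my move, opponent's move) -> my points per round
    ("C", "C"): 3, ("D", "C"): 5,
    ("C", "D"): 0, ("D", "D"): 1,
}

def tit_for_tat(my_history, opp_history):
    """Cooperate on the first round, then copy the opponent's last move."""
    return "C" if not opp_history else opp_history[-1]

def all_d(my_history, opp_history):
    """Defect every round."""
    return "D"

def play(rule_a, rule_b, rounds):
    """Play an iterated Prisoner's Dilemma; return both total scores."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = rule_a(hist_a, hist_b)
        b = rule_b(hist_b, hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

# TIT FOR TAT loses only the first round to ALL D, then both defect:
print(play(tit_for_tat, all_d, 10))        # (9, 14)
print(play(tit_for_tat, tit_for_tat, 10))  # (30, 30)
```

Note how the numbers echo Proposition One: against ALL D, TIT FOR TAT ends up slightly behind, but two TIT FOR TATs earn 30 each over ten rounds, so the best strategy really does depend on who you are facing.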

Chapter Two: The Success of TIT FOR TAT in Computer Tournaments

To see what kind of strategy would work best in an iterated Prisoner’s Dilemma,
Axelrod solicited computer decision rules from experts in a variety of fields:
psychologists, economists, game theorists, computer scientists, etc. All decision rules had
to play against all other rules, including themselves, and against an entry whose rule was
random. Some decision rules were extremely complicated, with complex models of the

opponent’s behavior which would then allow the program to update its expectations;
others were simpler. The tournaments yielded several conclusions: 1) “nice”
strategies (defined as not being the first to defect) did the best. 2) programs which
minimized “echo effects” did best. Programs in which one defection set off long strings
of recriminations and counter recriminations (usually because the decision rule was not
adequately transparent) performed poorly. 3) “It pays to be nice, but also to be
retaliatory” (46). Programs that don’t retaliate are taken advantage of. TIT FOR TAT
combined these two properties well, and won both tournaments.
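The round-robin structure can be sketched as follows. This is my own illustrative code, not Axelrod's, and the tiny field of entries is hypothetical: in a field this small and hostile, TIT FOR TAT need not finish first, which is exactly why the robustness question in the next paragraph matters (his tournaments had dozens of diverse entries).

```python
import random

PAYOFF = {("C", "C"): 3, ("D", "C"): 5, ("C", "D"): 0, ("D", "D"): 1}

def tit_for_tat(mine, theirs):
    return "C" if not theirs else theirs[-1]

def all_d(mine, theirs):
    return "D"

def all_c(mine, theirs):
    return "C"

def random_rule(mine, theirs):
    return random.choice("CD")

def match(rule_a, rule_b, rounds):
    """One iterated game between two decision rules."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = rule_a(hist_a, hist_b), rule_b(hist_b, hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

def tournament(rules, rounds=200):
    """Round-robin: every rule plays every rule, including itself."""
    totals = {name: 0 for name in rules}
    names = list(rules)
    for i, name_a in enumerate(names):
        for name_b in names[i:]:
            sa, sb = match(rules[name_a], rules[name_b], rounds)
            totals[name_a] += sa
            if name_b != name_a:  # count a self-play score only once
                totals[name_b] += sb
    return totals

random.seed(0)
entries = {"TIT FOR TAT": tit_for_tat, "ALL D": all_d,
           "ALL C": all_c, "RANDOM": random_rule}
print(tournament(entries))
```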

Is TIT FOR TAT really the best program, or did it do well only because of the
nature of the competition (i.e., the strategic environment)? Axelrod argues that TIT FOR
TAT is robust, and would do very well in a variety of strategic contexts. He tests this by
running an “evolutionary” tournament; those strategies which do poorly are either
eliminated or decrease in number; those that do well increase in number and so come to
dominate more of the game. (This is a reasonable assumption, because losing strategies
are usually abandoned, or the people who tried them are fired.) In evolutionary games,
TIT FOR TAT (and other strategies which are nice, retaliatory, forgiving, and clear)
come to dominate.
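This ecological dynamic can be sketched with simple replicator arithmetic. The sketch below is mine, not Axelrod's: each generation, a strategy's population share grows in proportion to its average score against the current mix. The per-round scores in the table are long-run averages under the payoffs above (3 CC, 5 DC, 0 CD, 1 DD), with TIT FOR TAT vs ALL D approximated as 1 point per round for both.

```python
# Approximate long-run per-round scores: SCORES[row][col] is what the
# row strategy averages against the column strategy.
SCORES = {
    "TFT":   {"TFT": 3.0, "ALL D": 1.0, "ALL C": 3.0},
    "ALL D": {"TFT": 1.0, "ALL D": 1.0, "ALL C": 5.0},
    "ALL C": {"TFT": 3.0, "ALL D": 0.0, "ALL C": 3.0},
}

def step(shares):
    """One generation: rescale each share by its relative fitness."""
    fitness = {s: sum(shares[t] * SCORES[s][t] for t in shares)
               for s in shares}
    mean = sum(shares[s] * fitness[s] for s in shares)
    return {s: shares[s] * fitness[s] / mean for s in shares}

shares = {"TFT": 1 / 3, "ALL D": 1 / 3, "ALL C": 1 / 3}
for _ in range(100):
    shares = step(shares)

# ALL D thrives at first by exploiting ALL C, then collapses once its
# prey is scarce and it faces mostly retaliating TIT FOR TATs.
print(shares)
```

The mechanism is the one Axelrod describes: exploitative strategies prosper only while there are exploitable strategies left to prey on, so their success is self-undermining.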

Chapter Three: The Chronology of Cooperation

Axelrod here further explores the question of evolution: suppose TIT FOR TAT
achieved total dominance. Could a different strategy take advantage of this fact and win
more points for itself? If not, then TIT FOR TAT cannot be “invaded” and is
“collectively stable.” Axelrod then lays out a series of six propositions, which are really
conditions under which a given strategy will be collectively stable:
Proposition Two: If the probability of entering a second (third, fourth…n+1) round is
large enough, TIT FOR TAT is collectively stable. (This just means that if the chance of
a second round being played is very low, then TIT FOR TAT could be knocked out by a
program which always defects, claiming 5 points on the first round.)
Proposition Three: Any strategy which cooperates on the first round needs the
probability of a next round to be sufficiently high to be collectively stable. (Just
generalizes proposition two).
Proposition Four: For a nice strategy to be collectively stable, it must immediately defect
if the other strategy does.
Proposition Five: ALL D (defect every time) is always collectively stable IF only one
TIT FOR TAT can enter the game at a time. If a cluster of TIT FOR TATs enter at once,
however, then they can take over because the points they get playing against each other
(let’s say 30 for a 10 round game) is so much greater than the 10 points the ALL Ds get
when playing each other.
Proposition Six: Nice strategies which are “maximally discriminating” – never cooperate
again if the other keeps defecting – can invade an ALL D world with the smallest
possible cluster. This is because they lose the least points – just one round’s worth –
compared with strategies which are too forgiving and might lose two rounds’ worth.

Proposition Seven: Nice strategies which are collectively stable from an individual
invasion can also not be invaded by a cluster. Hence, nice strategies are more robust than
mean ones.
The conclusion, then, is that “cooperation can emerge even in a world of unconditional
defection…from small clusters of discriminating individuals, as long as these individuals
have even a small proportion of their interactions with each other” (68).
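The cluster arithmetic behind Proposition Five can be worked out directly. The scores (30 for a TIT FOR TAT pair, 10 for an ALL D pair, 9 vs 14 when they meet) come from the 10-round example above; the resulting threshold p > 1/21 is my own arithmetic, not a figure from the book, and it ignores the rare bonus natives get from meeting newcomers.

```python
def tft_average(p):
    """Average 10-round score of a TIT FOR TAT newcomer that has a
    fraction p of its interactions with fellow cluster members and
    1 - p with ALL D natives (30 vs a clustermate, 9 vs a native)."""
    return p * 30 + (1 - p) * 9

ALL_D_AVERAGE = 10  # natives almost always meet other ALL Ds

# Invasion condition: 30p + 9(1-p) > 10  =>  21p > 1  =>  p > 1/21
for p in (0.01, 0.05, 0.10):
    print(p, tft_average(p), tft_average(p) > ALL_D_AVERAGE)
```

On these numbers, a cluster whose members have even about 5 percent of their interactions with each other already outscores the surrounding world of unconditional defectors, which is the quantitative point behind the quoted conclusion.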

Chapter Four: The Live-and-Let-Live System in Trench Warfare in World War I

Axelrod illustrates his conclusions with an example from trench warfare during
WWI. On the Western front, especially in the early parts of the war, troops spontaneously
engaged in a “live and let live” system. That is, they would deliberately not hit enemy
troops and, in return, the enemy would deliberately not hit them. While at any given
moment the best strategy was to shoot to kill, the difference was that, unlike in other wars,
there was almost no mobility – the same troops would face each other for weeks or
months on end. The character of the game therefore switched from a single-shot to an
iterated prisoner’s dilemma. Deliberately misfiring weapons (often in identical patterns
every night) satisfied high command’s desire to see fighting while signaling to the enemy
that they were cooperating. Axelrod argues that one element of TIT FOR TAT in real life
not picked up by the computer program is the emergence of ethics and ritual. Both sides
saw it as a moral duty not to break the agreement; if one side did, the other side usually
responded with indignation and a desire for revenge. TIT FOR TAT is not only a good
strategy, but also has intuitive moral appeal.

One criticism: I think this is an excellent book. It is clear and easy to understand.
However, his initial assumption that the international system can be modeled as a
prisoner’s dilemma is doing a huge amount of work, in the sense that it assumes that the
world is not zero-sum. This is a stance that some realists, and others, I imagine, would
disagree with. What if the strategic environment actually more closely resembles a game
of “chicken”? What would an iterated game of “chicken” look like, and how would TIT
FOR TAT do? I wonder if others have explored this question.
