A simple strategy in the game-theoretic treatment of the iterated Prisoner's Dilemma. Essentially, in round i+1, player A does to player B whatever player B did to A in round i.

Famously the most successful strategy in Robert Axelrod's Iterated Prisoner's Dilemma tournaments. Following a tit-for-tat strategy entails doing to your partner whatever your partner did to you in the previous turn -- cooperating if s/he cooperated, and defecting if s/he defected. (In other words, if your partner was just nice to you, be nice back -- if your partner harmed you, strike back, but only once!)

Some memeticists propose that tit-for-tat strategies are the source of human ideas of fairness, justice, and, ultimately, morality. This is not, they say, because people consciously plan to carry out such strategies in their daily lives, but simply because tit-for-tat is about the best you can do in a repeated game of this kind. (Note that the Prisoner's Dilemma is not, strictly speaking, a zero-sum game -- and it has been argued {and quite well} that human life isn't one either.)

"Tit-for-tat" is one of the most successful strategies for resolving an extended Prisoner's Dilemma in a game-theoretic setting. Submitted by mathematical psychologist Anatol Rapoport to the computer tournaments run by political scientist Robert Axelrod in the 1980s, the strategy was analyzed at length in Axelrod's The Evolution of Cooperation. Under the standard Prisoner's Dilemma scenario, the optimal strategy for both players is to "defect" in all circumstances; by playing this dominant strategy, an individual player can ensure a better outcome for himself regardless of the other player's actions. If both players realize this (and they are assumed to be rational decision makers), they will both defect and thus produce a Pareto-inefficient outcome -- there is potential for both players to be better off, but they will never arrive at that outcome because of rational, self-interested behavior.
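The dominance argument above can be checked directly. Here is a minimal sketch in Python using the conventional illustrative payoffs (temptation 5, reward 3, punishment 1, sucker's payoff 0); the specific numbers are an assumption, not canonical to any source mentioned here:

```python
# One-shot Prisoner's Dilemma with conventional illustrative payoffs.
# Tuples are (player 1's payoff, player 2's payoff).
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation: the "reward"
    ("C", "D"): (0, 5),  # sucker's payoff vs. the "temptation"
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection: the "punishment"
}

def best_response(opponent_move):
    """The move maximizing player 1's payoff against a fixed opponent move."""
    return max("CD", key=lambda m: PAYOFFS[(m, opponent_move)][0])

# Defection is dominant: it is the best response to either opponent move.
print(best_response("C"))  # -> D
print(best_response("D"))  # -> D
# Yet (D, D) pays (1, 1), worse for both than the (3, 3) of mutual cooperation
# -- the Pareto-inefficient outcome described above.
```

Any payoff values with temptation > reward > punishment > sucker's payoff produce the same dominance result.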

An iterated Prisoner's Dilemma is an extension of the standard problem. Over a period of time, if the payoffs for the players' actions remain the same and the game is repeated an indefinite number of times, what is the best strategy for the individual to play, given the other player's future choices and past actions? Generally, the tit-for-tat strategy consists of mimicking the opponent's action from the previous round: if your opponent defected in the last round, you defect, but if he cooperated, you cooperate as well. Provided the structure of the game does not change throughout the iteration, the initial round thus determines the outcome of the rest of the game. When two players utilize tit-for-tat, mutual cooperation in the first round results in mutual cooperation for the rest of the game, while mutual defection results in defection throughout.
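This lock-in effect is easy to simulate. A sketch, again assuming the conventional illustrative payoffs (5/3/1/0):

```python
# Two tit-for-tat players in an iterated Prisoner's Dilemma.
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history, first_move="C"):
    """Open with first_move, then echo the opponent's previous move."""
    return first_move if not opponent_history else opponent_history[-1]

def play(rounds, a_first="C", b_first="C"):
    history_a, history_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = tit_for_tat(history_b, a_first)
        move_b = tit_for_tat(history_a, b_first)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        history_a.append(move_a)
        history_b.append(move_b)
        score_a += pay_a
        score_b += pay_b
    return "".join(history_a), "".join(history_b), score_a, score_b

# Mutual cooperation in round one locks in cooperation for the whole game:
print(play(5))            # ('CCCCC', 'CCCCC', 15, 15)
# Mutual defection in round one locks in defection instead:
print(play(5, "D", "D"))  # ('DDDDD', 'DDDDD', 5, 5)
```

Since each player's move is a pure function of the opponent's last move, the first round fully determines every subsequent round, exactly as described above.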

The outcomes covered above aren't particularly exciting, but the fun begins once you tweak the scenario a little. Suppose one player cooperates while the other defects, and both employ tit-for-tat after the first round. The players will then alternate between cooperation and defection, trading off who gets the "cheater's reward" each round. Another variation could be putting a definite end to the game after a set number of rounds. If players realize that the game has a definite end, they will both defect on the very last round to maximize their payoffs. Since they cannot punish each other for defection once the game ends, and since they know that the last round will result in mutual defection, the only rational course is to defect on the second-to-last round as well; this reasoning unravels all the way back to the first round, so the end result is defection throughout the entire game -- a case where tit-for-tat holds but produces suboptimal payoffs for the players.
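The alternating pattern from the mismatched opening can be sketched as follows (payoffs 5/3/1/0 are again an illustrative assumption):

```python
# Mismatched opening: A cooperates, B defects, then both play tit-for-tat.
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

moves_a, moves_b = ["C"], ["D"]     # round 1: the mismatched opening
for _ in range(5):                  # five more rounds of pure tit-for-tat
    moves_a.append(moves_b[-1])     # A echoes B's previous move
    moves_b.append(moves_a[-2])     # B echoes A's previous move

score_a = sum(PAYOFFS[(a, b)][0] for a, b in zip(moves_a, moves_b))
score_b = sum(PAYOFFS[(a, b)][1] for a, b in zip(moves_a, moves_b))
print("".join(moves_a))  # CDCDCD -- the players swap roles every round
print("".join(moves_b))  # DCDCDC
print(score_a, score_b)  # 15 15
```

Each player averages 2.5 points per round here, worse than the 3 per round of sustained mutual cooperation -- the "cheater's reward" trade-off leaves both players behind.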

One of the implications of the tit-for-tat strategy is that, because the initial round is so important in determining the eventual outcome of the iterated Prisoner's Dilemma, making sure the players cooperate in the first round results in better payoffs for everyone. This is the rationale behind peace talks between warring states, in which negotiations attempt to change the payoffs of the initial round so that mutual cooperation becomes the only viable Nash equilibrium. After the first round, players employing tit-for-tat will thus continue to cooperate indefinitely and live peacefully, even if the game payoffs revert to those of the standard iterated Prisoner's Dilemma. Another real-world example of an iterated game can be found in cartels, which are generally unstable in the long run; this could be because businesses can project into the future for indications of when the "game" will end, such as falling stock prices of competitors, changes in the supply and demand of oil, etc. The iterated Prisoner's Dilemma and tit-for-tat can also be used to frame the world's inability to act on global warming and climate change: since governments are unable (or unwilling) to implement meaningful changes to the initial round of the "global warming game," countries continue to defect, as there is no incentive for mutual cooperation (i.e., the collective, simultaneous reduction of greenhouse gas emissions).

Something to think about the next time you get into a tiff with the wife about cleaning duties around the house.


Brought to you by Axelrod's above referenced book, my game theory intensive courses, and a lot of free time at work.
