Trembling Hands

Our last point above opens the way to a philosophical puzzle, one of several that still preoccupy those concerned with the logical foundations of game theory. It can be raised with respect to any number of examples, but we will borrow an elegant one from C. Bicchieri (1993). Consider the following game:


Figure 11

The NE outcome here is at the single leftmost node descending from node 8. To see this, backward induct again. At node 10, I would play L for a payoff of 3, giving II a payoff of 1. II can do better than this by playing L at node 9, giving I a payoff of 0. I can do better than this by playing L at node 8; so that is what I does, and the game terminates without II getting to move. A puzzle is then raised by Bicchieri (along with other authors, including Binmore (1987) and Pettit and Sugden (1989)) by way of the following reasoning. Player I plays L at node 8 because she knows that Player II is economically rational, and so would, at node 9, play L because Player II knows that Player I is economically rational and so would, at node 10, play L. But now we have the following paradox: Player I must suppose that Player II, at node 9, would predict Player I's economically rational play at node 10 despite having arrived at a node (9) that could only be reached if Player I is not economically rational! If Player I is not economically rational then Player II is not justified in predicting that Player I will not play R at node 10, in which case it is not clear that Player II shouldn't play R at 9; and if Player II plays R at 9, then Player I is guaranteed a better payoff than she gets if she plays L at node 8. Both players use backward induction to solve the game; backward induction requires that Player I know that Player II knows that Player I is economically rational; but Player II can solve the game only by using a backward induction argument that takes as a premise the economic irrationality of Player I. This is the paradox of backward induction.
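
Since Figure 11 is not reproduced here, the following sketch of backward induction (Zermelo's algorithm) uses hypothetical terminal payoffs chosen only to be consistent with the reasoning just described; the node labels 8, 9 and 10, the payoff pair (3, 1) for L at node 10, and Player I's payoff of 0 after L at node 9 come from the text, while every other number is an assumption.

```python
# Backward induction (Zermelo's algorithm) on a game tree shaped like the
# one in Figure 11. Payoffs marked "hypothetical" are NOT from the text;
# they are chosen only so that the reasoning above goes through.

# A decision node is (player, label, {action: subtree}); a terminal node
# is a plain (payoff_I, payoff_II) pair.
game = ("I", "node 8", {
    "L": (1, 5),                      # hypothetical terminal payoffs
    "R": ("II", "node 9", {
        "L": (0, 2),                  # I's payoff 0 is from the text; II's 2 is hypothetical
        "R": ("I", "node 10", {
            "L": (3, 1),              # payoffs stated in the text
            "R": (2, 4),              # hypothetical terminal payoffs
        }),
    }),
})

def backward_induct(node):
    """Return (payoffs, plan), where plan maps node labels to the action
    chosen there by backward induction."""
    if len(node) == 2:                # terminal node: just return the payoffs
        return node, {}
    player, label, moves = node
    idx = 0 if player == "I" else 1   # which payoff the mover maximizes
    best_action, best_payoffs, plan = None, None, {}
    for action, subtree in moves.items():
        payoffs, subplan = backward_induct(subtree)
        plan.update(subplan)
        if best_payoffs is None or payoffs[idx] > best_payoffs[idx]:
            best_action, best_payoffs = action, payoffs
    plan[label] = best_action
    return best_payoffs, plan

payoffs, plan = backward_induct(game)
print(plan)      # {'node 10': 'L', 'node 9': 'L', 'node 8': 'L'}
print(payoffs)   # (1, 5): Player I plays L at node 8 and II never moves
```

Run as written, the plan it prints has every mover choosing L at every node, which is exactly the backward-induction outcome described above.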

A standard way around this paradox in the literature is to invoke the so-called ‘trembling hand’ due to Selten (1975). The idea here is that a decision and its consequent act may ‘come apart’ with some nonzero probability, however small. That is, a player might intend to take an action but then slip up in the execution and send the game down some other path instead. If there is even a remote possibility that a player may make a mistake—that her ‘hand may tremble’—then no contradiction is introduced by a player's using a backward induction argument that requires the hypothetical assumption that another player has taken a path that an economically rational player could not choose. In our example, Player II could reason about what to do at node 9 conditional on the assumption that Player I chose L at node 8 but then slipped.
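
A minimal numerical sketch of this idea, reusing the hypothetical payoffs from the backward-induction example above: if each intended move is executed correctly with probability 1 - eps, then conditional on finding herself at node 9, Player II can explain the event as a tremble by Player I at node 8 and still predict rational (though possibly trembling) play at node 10.

```python
# Trembling-hand reasoning at node 9, with the hypothetical payoffs used
# in the backward-induction sketch above. eps is the small probability
# that an intended move is mis-executed.
eps = 0.01

# Conditional on reaching node 9 (explained as a tremble by Player I at
# node 8), Player II compares her two continuations:
payoff_II_if_L = 2.0                          # hypothetical terminal payoff at node 9
payoff_II_if_R = (1 - eps) * 1 + eps * 4      # node 10: I intends L (II gets 1),
                                              # but may tremble to R (II gets 4)

print(payoff_II_if_L, payoff_II_if_R)         # 2.0 vs 1.03: II still plays L
# The conditional prediction is coherent for any eps > 0, so no
# contradiction arises from II reasoning about a node that "should not"
# have been reached by rational play.
```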

Gintis (2009) points out that the apparent paradox does not arise merely from our supposing that both players are economically rational. It rests crucially on the additional premise that each player must know, and reason on the basis of knowing, that the other player is economically rational. This is the premise with which each player's conjectures about what would happen off the equilibrium path of play are inconsistent. A player has reason to consider out-of-equilibrium possibilities if she either believes that her opponent is rational but his hand may tremble, or attaches some nonzero probability to the possibility that he is not economically rational, or attaches some doubt to her conjecture about his utility function. As Gintis also stresses, this issue with solving extensive-form games for subgame-perfect equilibrium by Zermelo's algorithm generalizes: a player has no reason to play even a Nash equilibrium strategy unless she expects other players to also play Nash equilibrium strategies. We will return to this issue in Section 6 below.

The paradox of backward induction, like the puzzles raised by equilibrium refinement, is mainly a problem for those who view game theory as contributing to a normative theory of rationality (specifically, as contributing to that larger theory, the theory of strategic rationality). The non-psychological game theorist can give a different sort of account of apparently “irrational” play and the prudence it encourages. This involves appeal to the empirical fact that actual agents, including people, must learn the equilibrium strategies of games they play, at least whenever the games are at all complicated. Research shows that even a game as simple as the Prisoner's Dilemma requires learning by people (Ledyard 1995, Sally 1995, Camerer 2003, p. 265). What it means to say that people must learn equilibrium strategies is that we must be a bit more sophisticated than was indicated earlier in constructing utility functions from behavior in application of Revealed Preference Theory. Instead of constructing utility functions on the basis of single episodes, we must do so on the basis of observed runs of behavior once it has stabilized, signifying maturity of learning for the subjects in question and the game in question. Once again, the Prisoner's Dilemma makes a good example. People encounter few one-shot Prisoner's Dilemmas in everyday life, but they encounter many repeated PDs with non-strangers. As a result, when set into what is intended to be a one-shot PD in the experimental laboratory, people tend to initially play as if the game were a single round of a repeated PD. The repeated PD has many Nash equilibria that involve cooperation rather than defection. Thus experimental subjects tend to cooperate at first in these circumstances, but learn after some number of rounds to defect. The experimenter cannot infer that she has successfully induced a one-shot PD with her experimental setup until she sees this behavior stabilize.
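
The following toy simulation is not a model from the text; it is a sketch, under assumed PD payoffs and an assumed Roth-Erev-style reinforcement rule, of the two points just made: simulated subjects who arrive with repeated-PD habits begin by cooperating, the one-shot setting gradually reinforces defection, and the analyst reads behavior as revealing preferences only once the run has stopped drifting.

```python
import random

# Toy illustration only: assumed one-shot PD payoffs for the row player
# (temptation 5 > reward 3 > punishment 1 > sucker 0) and an assumed
# reinforcement rule with gradual forgetting.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

random.seed(0)
n_subjects, n_rounds, recency = 100, 200, 0.9

# Initial propensities favour cooperation: the habit carried over from
# everyday repeated PDs with non-strangers.
prop = [{"C": 5.0, "D": 1.0} for _ in range(n_subjects)]

coop_rate = []
for _ in range(n_rounds):
    # Each subject chooses in proportion to her current propensities.
    moves = ["C" if random.random() < p["C"] / (p["C"] + p["D"]) else "D"
             for p in prop]
    # Random one-shot pairings; reinforce the action each subject took.
    order = list(range(n_subjects))
    random.shuffle(order)
    for k in range(0, n_subjects, 2):
        i, j = order[k], order[k + 1]
        for who, other in ((i, j), (j, i)):
            for a in ("C", "D"):
                prop[who][a] *= recency               # forget old reinforcement
            prop[who][moves[who]] += PAYOFF[(moves[who], moves[other])]
    coop_rate.append(moves.count("C") / n_subjects)

# Behaviour counts as stabilized (and so as revealing preferences in this
# one-shot game) only once the cooperation rate has stopped drifting.
drift = max(coop_rate[-20:]) - min(coop_rate[-20:])
print(f"round 1 cooperation: {coop_rate[0]:.2f}, "
      f"final: {coop_rate[-1]:.2f}, drift over last 20 rounds: {drift:.2f}")
```

Because defection earns more than cooperation against any mix of opponents, it accumulates reinforcement faster and the simulated cooperation rate declines across rounds; the drift check reflects the point that only the stabilized tail of the run is evidence about the subjects' preferences in the one-shot game.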

If players of games realize that other players may need to learn game structures and equilibria from experience, this gives them reason to take account of what happens off the equilibrium paths of extensive-form games. Of course, if a player fears that other players have not learned equilibrium, this may well remove her incentive to play an equilibrium strategy herself. This raises a set of deep problems about social learning (Fudenberg and Levine 1998). How do ignorant players learn to play equilibria if sophisticated players don't show them, because the sophisticated aren't incentivized to play equilibrium strategies until the ignorant have learned? The crucial answer in the case of applications of game theory to interactions among people is that young people are socialized by growing up in networks of institutions, including cultural norms. Most complex games that people play are already in progress among people who were socialized before them, that is, among people who have already learned game structures and equilibria (Ross 2008). Novices must then only copy those whose play appears to be expected and understood by others. Institutions and norms are rich with reminders, including homilies and easily remembered rules of thumb, to help people remember what they are doing (Clark 1997).

As noted in Section 2.7 above, when observed behavior does not stabilize around equilibria in a game, and there is no evidence that learning is still in process, the analyst should infer that she has incorrectly modeled the situation she is studying. Chances are that she has mis-specified players' utility functions, the strategies available to the players, or the information that is available to them. Given the complexity of many of the situations that social scientists study, we should not be surprised that mis-specification of models happens frequently. Applied game theorists must do lots of learning, just like their subjects.

Thus the paradox of backward induction is only apparent. Unless players have experienced play at equilibrium with one another in the past, even if they are all economically rational and all believe this about one another, we should predict that they will attach some positive probability to the conjecture that understanding of game structures among some players is imperfect. This then explains why economically rational agents may often play as if they believe in trembling hands.

Learning of equilibria may take various forms for different agents and for games of differing levels of complexity and risk. Incorporating it into game-theoretic models of interactions thus introduces an extensive new set of technicalities. For the most fully developed general theory, the reader is referred to Fudenberg and Levine (1998).

