On three occasions the world perishes: when men die [famine or epidemic]; when wars multiply; when verbal contracts dissolve.
— Senchus Mor (Ancient Laws of Ireland)
Protect my empire from the army of the enemy, bad harvest and fraud.
— Darius praying to Ahuramazda (inscription at Persepolis)
As detailed in a previous article (“The Tri-functional Ideology”), some ancient civilizations had come to recognize three main social functions: cosmic and social sovereignty; physical (usually violent) force; and fertility/fecundity. Each function was represented by its own God.
The first function is of particular interest here. It was the basis of government and religion (considered to be inseparable) and defined the contractual relations and exchanges not only among men but also between men and Gods. In those civilizations, the God of the first function (take, for instance, Mithra, the Indo-Iranian God of “Contract and Friendship”, or Tyr, the Nordic God of “Contract”) was usually very powerful. It is argued here that the God of Contract was in fact the image of an unconscious and archetypal ingredient of the human mind.
Through the ages, evolution has biased man's reasoning ability toward favoring long-term social contracts. Man therefore has an unconscious tendency to view all social exchanges in a long-term context. This archetype-God is, on average, strong enough not to need the help of such painful supplements as fear and guilt, which constrain natural instincts and are detrimental to the mental health of the individual and to the stability of society. Indeed, some recent scientific results support the long-term sustainability of an altruistic approach to social exchange. They show that altruistic behavior has a natural basis and does not have to be the result of man-made moral/legal systems.
To understand those studies, let us first introduce the “Prisoner's Dilemma” game. Assume that the game has two players and that the players can choose between two moves, either “cooperate” or “defect”. The idea is that each player gains when both cooperate, but if only one of them cooperates, the other one, who defects, will gain more. If both defect, both lose but not as much as the “cheated” cooperator whose cooperation is not returned. The game and its different outcomes can be summarized by the following table, where hypothetical “points” are given as an example of how the differences in outcome might be quantified.
Outcome for Actor A:

A's action     B's action     Outcome for A
cooperate      cooperate      Good (+1)
cooperate      defect         Very bad (-2)
defect         cooperate      Very Good (+2)
defect         defect         Bad (0)

Table: Prisoner's Dilemma game outcomes for Actor A (in words, and in “points”) depending on the combination of A's action and B's action (a similar scheme applies to the outcomes for Actor B)
Such a distribution of losses and gains seems natural for many social interactions, since the cooperator whose action is not returned loses resources to the defector, without either of them being able to collect the additional gain coming from the synergy of their cooperation. The gain for mutual cooperation (+1 point in the example) is kept smaller than the gain for one-sided defection (+2 points in the example), so that there is always a temptation to defect.
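The payoff structure can be written down directly. Below is a minimal sketch in Python; the move names, function name, and dictionary layout are illustrative choices, and the point values are the example ones used in the text:

```python
# Payoff to a player for each (own move, other player's move) pair,
# using the example point values from the table above.
PAYOFF = {
    ("cooperate", "cooperate"): 1,   # mutual cooperation: good for both
    ("cooperate", "defect"):   -2,   # cheated cooperator: very bad
    ("defect",    "cooperate"): 2,   # one-sided defection: very good
    ("defect",    "defect"):    0,   # mutual defection: nothing gained
}

def payoffs(move_a, move_b):
    """Return the (A, B) point outcomes for one round of the game."""
    return PAYOFF[(move_a, move_b)], PAYOFF[(move_b, move_a)]

print(payoffs("cooperate", "defect"))  # -> (-2, 2)
```

Keying the table by (own move, other move) makes the asymmetry visible: the same pair of moves yields opposite outcomes for the cooperator and the defector.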
The game's name refers to a famous hypothetical situation: two criminals, having committed a crime together, are arrested. In order to obtain confessions, the police isolates the prisoners from each other and visits each of them, offering the following deal: the one who agrees to give evidence against the other will be freed. If neither accepts the offer (that is, if they cooperate with each other), both of them will receive a small punishment because of the police's lack of proof. If one of them confesses (that is, if one of them defects), he gains more, since he is freed; the one who remained silent, on the other hand, receives a harsh punishment because of the evidence against him and the fact that he refused to talk. If both betray, both are punished, but less harshly, since they agreed to talk. The dilemma results from the fact that each prisoner has a choice between only two options, but cannot make a good decision without knowing what the other one will do.
The particularity of the Prisoner's Dilemma is that if both decision-makers were purely rational, they would never cooperate. Rational decision-making requires that you make the decision which is best whatever the other actor chooses. Suppose the other one defects: then it is also rational for you to defect, since you won't gain anything, but if you cooperate you will lose 2 points. Suppose the other one cooperates: then you will gain anyway, but you will gain more if you defect, so here too the rational choice is to defect. The problem is that if both actors are rational, both will decide to defect, and neither of them will gain anything! However, if both would “irrationally” decide to cooperate, both would gain 1 point.
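This dominance argument can be checked mechanically. A small Python sketch (the dictionary layout is an illustrative choice; the point values are the example ones from the text) verifies that defection is the better reply to either move, and yet mutual defection pays less than mutual cooperation:

```python
# Example point values from the text: payoff for (own move, other's move).
PAYOFF = {
    ("cooperate", "cooperate"): 1,
    ("cooperate", "defect"):   -2,
    ("defect",    "cooperate"): 2,
    ("defect",    "defect"):    0,
}

# Whatever the other player does, you score strictly more by defecting...
for other_move in ("cooperate", "defect"):
    assert PAYOFF[("defect", other_move)] > PAYOFF[("cooperate", other_move)]

# ...and yet mutual defection (0 each) pays less than mutual cooperation (+1 each).
assert PAYOFF[("defect", "defect")] < PAYOFF[("cooperate", "cooperate")]
print("defection dominates, but mutual defection is collectively worse")
```

The two assertions are exactly the dilemma: individual rationality points one way, the collective outcome the other.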
Through numerical simulation of the repeated Prisoner's Dilemma game, R. Axelrod has come to the conclusion that the cooperative “Tit-for-Tat” strategy (put simply: begin by cooperating, then simply copy the opponent's last move: if he cooperates, cooperate; if he defects, defect; but if the opponent returns to cooperating, do the same) is superior to any other strategy, even in a predominantly selfish environment: in the long term, it is worthwhile to take the risk of cooperating and to profit from those players who trust you.
The winning strategy can be described in the following terms:
– It is better to be generous than greedy: begin each game by offering to cooperate (it doesn't pay to start off taking advantage of other players)
– It is better to forgive quickly and try to re-establish cooperation immediately after a defection: if an opponent tries to take advantage of you, but changes their ways, you shouldn't hold a grudge; grudges are self-destructive
– It is necessary to be reactive, so as not to encourage betrayal: make it clear that you won't stand for betrayal
– It is useless to try to be tricky; clarity of action is the best guarantee of stable cooperation
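The strategy described above can be sketched in a few lines. This is a minimal illustrative simulation, not Axelrod's actual tournament code; the strategy functions, the round count, and the point values (the example ones from the text) are all assumptions made here for demonstration:

```python
# Payoff for (own move, other's move), "C" = cooperate, "D" = defect,
# using the example point values from the text.
PAYOFF = {("C", "C"): 1, ("C", "D"): -2, ("D", "C"): 2, ("D", "D"): 0}

def tit_for_tat(history_self, history_other):
    # Begin by cooperating; afterwards copy the opponent's last move.
    return history_other[-1] if history_other else "C"

def always_defect(history_self, history_other):
    # A purely selfish baseline strategy.
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Play a repeated game and return the two total scores."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # both cooperate every round: (10, 10)
print(play(tit_for_tat, always_defect))  # exploited once, then mutual defection: (-2, 2)
```

Against itself, Tit-for-Tat locks into cooperation and both players prosper; against a pure defector it loses one round, then withholds cooperation, capping its loss — exactly the generous-but-reactive behavior the list above describes.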
This result shows clearly that, even in the context of classical Darwinian evolution theory, intelligent individuals will come to cooperate while pursuing perfectly selfish objectives. Axelrod's conclusion may then be restated:
In a competitive environment, without superior authority, cooperation is the most appropriate strategy of survival.
Some more recent empirical studies (see, for instance, The Economist article cited in the notes) show that human reasoning is basically context-dependent. The human mind is not a general-purpose reasoning machine. Presented in two different contexts, the same logical problem (a Wason selection task of the type “if p, then q”) is solved by 75% of people in the first case and only 25% in the second. The conclusion is that people are good at spotting cheats (the situation presented in the first case) and at enforcing social contracts. More generally, the brain is now increasingly viewed as a bundle of job-specific mechanisms shaped by evolution rather than as a logical machine. Logic is merely a codification of these elementary mental subroutines.
Altruistic behavior of the kind “You scratch my back, I'll scratch yours”, which is the basis of social life, is entirely dependent on man's ability to keep close account of who owes what to whom or, in other words, on his ability to enforce contracts. That probably explains the evolutionary development of this particular mental ability. Some anthropologists think that this feature of the human mind was especially well adapted to early hunter societies: a hunter may return empty-handed for days and then suddenly catch more than he can eat.
This article suggests that the ancient tri-functional civilizations had reached a high level of social perfection. By acknowledging, institutionalizing and internalizing the importance of fair cooperation and the respect for contractual commitments, they had produced a stable and natural way of regulating social exchange. The moralistic/individualistic laws of our monotheistic era constitute an unnatural and occasionally destructive substitute.
Afshin Afshari was born in Iran and spent the first 16 years of his life there. In 1979, he went to France where he finished high school and completed undergraduate and graduate studies in Science and Management. He has worked in R&D and technology management in France, Germany, USA, and Canada. He currently lives in Montreal. His main hobby is the study of Iranian history and religions (pre- and post-Islamic).
The original word is Draugha, which means “lie” as well as disregard for laws.
G. Dumezil (1992), “Mythes et Dieux des Indo-Europeens”, Flammarion [in French].
 Despite his predominance, the first God co-existed and interacted with the other Gods: the contributions of the second and third functions were recognized and celebrated. At some point in time (in Iran, with the advent of Zoroastrian monotheism), the men in charge of imposing the high morality of the first function (usually priests) started to reject and demonize the other functions (martial activities and orgy-like cults of fecundity).
 Robert Axelrod (1984), “The Evolution of Cooperation”, Basic Books. See also Axelrod's homepage.
In fact, the conclusion holds even for non-intelligent beings. For example, among microscopic entities, the strategy may well be programmed as a reflex and result from elementary physical and chemical reactions. The main condition for the emergence of the cooperative strategy is that the game be repeated long enough. It should be noted, however, that in order to play a repeated Prisoner's Dilemma game, the players must possess some kind of recognition/identification ability. This enables them to play several games simultaneously and enhances the evolutionary process of elimination of uncooperative elements. It can be said, therefore, that complexity and intelligence favor cooperation.
The Economist (July 4, 1992), “A Critique of Pure Reason”, pp. 81–82.