Strategic interaction in trend-driven dynamics

Paolo Dai Pra∗, Elena Sartori† and Marco Tolotti‡

March 2013

Abstract

We propose a discrete-time stochastic dynamics for a system of many interacting agents. At each time step agents aim at maximizing their individual payoff, which depends on their action, on the global trend of the system and on a random noise; frictions are also taken into account. The equilibria of the resulting sequence of games give rise to a stochastic evolution. In the limit of infinitely many agents, a law of large numbers is obtained; the limit dynamics consist of an implicit dynamical system, possibly multiple valued. For a special model, we determine the phase diagram for the long-time behavior of these limit dynamics and we show the existence of a phase where a locally stable fixed point coexists with a locally stable periodic orbit.

Keywords: Mean-field interaction, multi-agent models, phase transition, strategic games.

1 Introduction

In recent years, modeling interactions in social sciences has become a crucial field of research. In particular, the increasing awareness that systemic risk may lead to large losses in financial markets has revealed that the effects of contagion were typically underestimated, stimulating the search for models that could allow reliable predictions. It would be impossible to faithfully account for the several lines of research that have dealt with this problem; we rather limit ourselves to a specific class of models, namely those with a mean-field interaction. In these models the interaction is "all to all", i.e., it is not subject to any geometric constraint; this is usually unrealistic in physical systems, where the interaction forces have short range, but may be reasonable in some social systems, where the fast spread of information prevails over "local" interactions.
From the mathematical point of view, the study of diffusions and spin systems with mean-field interaction dates back to [8], [10]; many theoretical issues, such as stability, metastability and criticality, are still under investigation (see, for example, [2]). For applications to social sciences, the reader can refer to [1], [3], [6] or [9]. Most of the cited models are formulated in terms of Markov dynamics, where the parameters (the drift for diffusions or the spin-flip rate for spin systems) are deterministic functions of the current state; the mean-field assumption in the interaction corresponds to the fact that these functions are invariant under permutations of the components of the system.

The aim of this paper is to propose a different updating mechanism for the dynamics. In order to motivate our model, we first consider a very simple version of the "traditional" mean-field dynamics. Consider a system σ(t) = (σ_1(t), σ_2(t), ..., σ_N(t)) of N spins σ_i(t) ∈ {−1, 1}, evolving in discrete time t = 0, 1, 2, .... We consider Markov dynamics where, at any time t ≥ 1, spins are simultaneously and independently updated. Models with sequential updating, i.e., where at most one spin flips at a given time, could also be considered, as could models in continuous time. Many basic properties do not really depend on this choice. Here it is convenient to choose the simultaneous, or parallel, updating, for the modifications of the model that we propose later.

∗ Department of Mathematics, University of Padova, 63, Via Trieste, I-35121 Padova, Italy; [email protected]
† Department of Management, Ca' Foscari University of Venice, S. Giobbe - Cannaregio 873, I-30121 Venice, Italy; [email protected]
‡ Department of Management, Ca' Foscari University of Venice, S. Giobbe - Cannaregio 873, I-30121 Venice, Italy; [email protected]
Simultaneous, independent updating and permutation invariance strongly limit the choice of the transition probabilities, which we assume to be of the following form:

P(σ(t+1) = σ | σ(t) = ζ) = ∏_{i=1}^{N} P(σ_i(t+1) = σ_i | σ(t) = ζ),   (1.1)

and

P(σ_i(t+1) = σ_i | σ(t) = ζ) = exp[βσ_i f(m_N(ζ))] / (2 cosh[βσ_i f(m_N(ζ))]),   (1.2)

where

m_N(ζ) := (1/N) ∑_{i=1}^{N} ζ_i,

β > 0 and f is a C¹ function with strictly positive first derivative. Setting m_N(t) := m_N(σ(t)), it is easy to check that {m_N(t) : t ≥ 0} is itself a Markov process and its dynamics are nearly deterministic for N large: if m_N(0) converges to m(0) in probability, then m_N(t) converges to m(t) in probability for every t > 0, where m(t) solves

m(t+1) = tanh(βf(m(t))).   (1.3)

In the infinite-volume limit (N → +∞), the long-time behavior of the system can easily be studied: there exists a critical value β_c > 0 such that (1.3) has a unique, globally stable equilibrium for β < β_c, while for β > β_c multiple locally stable equilibria emerge.

At the microscopic level (N < +∞) these dynamics can be formulated in terms of a payoff maximization (see [5]). Suppose σ_i(t) is the action of the i-th agent at time t: she can decide whether to be part (σ_i(t) = 1) or not to be part (σ_i(t) = −1) of a project (an investment, a political party or any other binary choice). The agents act strategically, playing in a simultaneous way: at time t+1 each of them chooses the action σ_i(t+1) in order to maximize her own (random) payoff, given by

U_i(σ_i(t+1); σ(t)) := σ_i(t+1) [f(m_N(t)) + ε_i(t)],   (1.4)

where {ε_i(t) : i = 1, ..., N, t ≥ 0} is a family of i.i.d. random variables with distribution

η(x) := P(ε ≤ x) = 1 / (1 + e^{−2βx}),   β > 0.   (1.5)

Note that the payoff U_i for the i-th agent depends on:

• her action σ_i(t+1);
• all actions σ_j(t) at the previous time t, through the participation rate m_N(t);
• a local random noise ε_i(t).
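The limit recursion (1.3) is easy to explore numerically. Below is a minimal sketch in Python; the choice f(m) = m is ours, purely for illustration (the text only requires f increasing), and with it the critical value is β_c = 1:

```python
import math

def iterate_mean_field(beta, f, m0, steps=200):
    """Iterate the limit recursion m(t+1) = tanh(beta * f(m(t))), eq. (1.3)."""
    m = m0
    for _ in range(steps):
        m = math.tanh(beta * f(m))
    return m

# Illustration with the hypothetical choice f(m) = m, for which beta_c = 1:
f = lambda m: m
m_sub = iterate_mean_field(beta=0.5, f=f, m0=0.5)    # beta < beta_c: m(t) -> 0
m_super = iterate_mean_field(beta=2.0, f=f, m0=0.5)  # beta > beta_c: nonzero equilibrium
```

For β > β_c the iterates settle on a nonzero solution of m = tanh(βm), illustrating the multiple locally stable equilibria mentioned above.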
Given σ(t), the chosen action will be σ_i(t+1) = 1 if and only if f(m_N(t)) + ε_i(t) > 0, which occurs with probability as in (1.2). By the independence of the noises ε_i, this produces the stochastic evolution (1.1)-(1.2), which is therefore formulated as a sequence of static stochastic optimizations.

The purpose of this paper is to modify this optimization mechanism, in which each agent independently chooses her action on the basis of the previous participation rate. The modification acts at two levels:

1. we generalize the payoff by adding a term depending on the trend, i.e., on the variation of the participation rate, and a friction, or transaction cost; this may be quite appropriate in many applications;

2. agents choose their action simultaneously but not independently, on the basis of their forecast of the future value of the participation rate.

More precisely, the payoff of the i-th agent is

U_i(σ_i(t+1), σ_{−i}(t+1); σ(t)) = σ_i(t+1) [f(m_N(t+1)) + g(m_N(t+1) − m_N(t)) + μσ_i(t) + ε_i(t)],   (1.6)

where g is a given continuous, increasing function, which favors "trend imitation", μ > 0 represents the friction and σ_{−i}(t+1) := (σ_j(t+1))_{j≠i}. Note that, unlike in (1.4), the payoff of the i-th agent depends on the simultaneous actions of the other players. It is therefore nontrivial to define the optimal actions. We follow, in this respect, a standard game-theoretical approach. Suppose the actions σ(t) are given.

• The local noise ε_i(t) is observable by the i-th agent, but not by the others. Thus σ_i(t+1) must be σ(ε_i(t))-measurable, i.e., a measurable function of ε_i(t).

• Agents know the common distribution of the noise. Accordingly, they aim at a Nash equilibrium, i.e., a vector of actions σ(t+1) satisfying the following property: for each i = 1, 2, . . .
, N, σ_i(t+1) maximizes over {−1, 1} the function

s_i ↦ Ū_i(s_i, σ_{−i}(t+1); σ(t)) := E_{−i}[U_i(s_i, σ_{−i}(t+1); σ(t))],   (1.7)

where E_{−i} denotes the expectation with respect to the joint distribution of ε_{−i}(t) := (ε_j(t))_{j≠i}. Note that this expectation is justified by the fact that σ_{−i}(t+1) is a function of ε_{−i}(t), which is not observed by the i-th agent.

The stochastic dynamics (σ(t))_{t≥0} we study in this paper result from the sequence of these stochastic games. Note that they are not a priori well defined: there may be no Nash equilibrium at all, or multiple Nash equilibria may appear. In Section 2 we establish the existence of at least one Nash equilibrium, which gives rise to possibly multiple-valued dynamics. Section 3 is devoted to the study of the dynamics in the limit of infinitely many agents, based on a law of large numbers. In the limit, the dynamics are solutions of a deterministic implicit recursion, though possibly multiple valued. In Section 4 we study the long-time behavior of the dynamics with infinitely many players, for the special case in which f ≡ 0 and some minimal regularity of the function g is guaranteed. In Section 5 we develop in detail the baseline case, where g is linear. Depending on the values of the parameters of the model, the limit dynamics can have a locally stable fixed point, a locally stable periodic orbit of period 2, or coexistence of the two. We also perform some numerical simulations for a large, but finite, number of agents; not surprisingly, in the coexistence region of the parameters we see a random regime-switching behavior, in which periods of stable participation rates are separated by periods of fast fluctuations.

2 Existence of Nash equilibria

In what follows, we make the following assumptions on the model.

(a) The functions f : [−1, 1] → R, g : [−2, 2] → R are either identically zero or of class C¹, with strictly positive first derivatives.
(b) The random variables {ε_i(t) : i = 1, 2, ..., N, t ≥ 0} are independent with common distribution η; moreover, η is a continuous distribution (i.e., with no point mass). We write η(x) rather than η((−∞, x]).

Assume that σ(t) is given and consider the stochastic game defined in (1.5), (1.6) and (1.7).

Proposition 2.1. At least one Nash equilibrium exists.

Proof. Assume σ(t+1) is a Nash equilibrium, corresponding to the noise vector ε(t). By definition of Nash equilibrium, σ_i(t+1) = 1 if and only if

E_{−i}[f(m_N(t+1)) + g(m_N(t+1) − m_N(t))] + μσ_i(t) + ε_i(t) > 0.   (2.1)

In principle, it could be that σ_i(t+1) = 1 even though the above inequality is an equality; this event, however, has probability zero, since ε_i(t) has a continuous distribution, and can be ignored. Since inequality (2.1) persists for increasing values of ε_i(t), σ_i(t+1) must be an increasing function of ε_i(t) as well; hence, it can be identified with the threshold value ξ_i(t) ∈ R̄ := R ∪ {−∞, +∞}, defined by

σ_i(t+1) = 2·1_{(ξ_i(t),+∞)}(ε_i(t)) − 1,   (2.2)

where 1_S denotes the indicator function of a set S. Thus, the set of actions for this game can be identified with R̄^N, which is compact and convex. Now, consider a vector ξ ∈ R̄^N and fix an index i. Suppose that the actions σ_j = 2·1_{(ξ_j,+∞)}(ε_j(t)) − 1 are given for j ≠ i and that player i aims at maximizing her own utility

E_{−i}[U_i(s_i, σ_{−i}(t+1); σ(t))]
= E_{−i}[ s_i { f( s_i/N + (1/N) ∑_{j≠i} σ_j ) + g( s_i/N + (1/N) ∑_{j≠i} σ_j − m_N(t) ) + μσ_i(t) + ε_i(t) } ],

whose maximum is attained at σ_i* = 2·1_{(ξ_i*,+∞)}(ε_i(t)) − 1, with

ξ_i* = −μσ_i(t) − E_{−i}[ f( 1/N + (1/N) ∑_{j≠i} σ_j ) + g( 1/N + (1/N) ∑_{j≠i} σ_j − m_N(t) ) ].   (2.3)

If, in this last formula, we write σ_j = 2·1_{(ξ_j,+∞)}(ε_j(t)) − 1, one sees that ξ_i* is a function ξ_i*(ξ) of the vector ξ (actually of ξ_{−i}). Given two vectors ξ and ξ′, one sees from (2.3) that

|ξ_i*(ξ) − ξ_i*(ξ′)| ≤ c P( ∪_{j=1}^{N} { min(ξ_j, ξ_j′) ≤ ε_j ≤ max(ξ_j, ξ_j′) } ),

where c > 0 is a suitable constant.
By continuity of the distribution η, the probability on the right-hand side goes to zero as ξ′ → ξ. Thus the map ξ ↦ ξ*, which in game theory is called the best-response map, is continuous. Being defined on a compact and convex set, by a standard fixed-point argument the best-response map has a fixed point. It is easily checked that the fixed points of the best-response map are exactly the Nash equilibria. □

Equations (1.5), (1.6) and (1.7) define a sequence of stochastic games that have to be solved recursively. Proposition 2.1 shows that a Nash equilibrium exists, but does not guarantee uniqueness. We assume that a specific Nash equilibrium is selected via a non-anticipative criterion: more precisely, for every t ≥ 0, the vector of actions σ(t) is stochastically independent of the random variables {ε_i(t) : i = 1, ..., N}.

3 Dynamics with infinitely many agents

We consider a probability space (Ω, F, P) on which a family of i.i.d. random variables {ε_i(t) : i ≥ 1, t ≥ 0} is defined, and denote by η the common distribution function. Note that on this space we can simultaneously define the above stochastic games with any number of players. To emphasize the dependence on the number of agents, we denote by σ^(N)(t) one Nash equilibrium at time t and by ξ^(N)(t) the associated thresholds. We assume the initial condition σ^(N)(0) is given and deterministic.

Theorem 3.1. The sequence of stochastic processes (m_N(t))_{t≥0} is tight; moreover, any weak limit (m(t))_{t≥0} obeys with probability one the implicit recursion

m(t+1) = [1 + m(t)] η(f(m(t+1)) + g(m(t+1) − m(t)) + μ)
       + [1 − m(t)] η(f(m(t+1)) + g(m(t+1) − m(t)) − μ) − 1.   (3.1)

Proof. Tightness follows from the fact that m_N(t) takes values in the compact set [−1, 1]. Then consider a convergent subsequence, which we still denote by (m_N), and fix t ≥ 0. By definition of Nash equilibrium, setting H = 2·1_{(0,+∞)} − 1, the following identity holds for i = 1, 2, . . .
, N:

σ_i^(N)(t+1) = H( E_{−i}[f(m_N(t+1)) + g(m_N(t+1) − m_N(t))] + μσ_i^(N)(t) + ε_i(t) ),

yielding

m_N(t+1) = (1/N) ∑_{i=1}^{N} H( E_{−i}[f(m_N(t+1)) + g(m_N(t+1) − m_N(t))] + μσ_i^(N)(t) + ε_i(t) ).   (3.2)

We denote by ξ_i^(N)(t) ∈ R̄ the threshold such that

σ_i^(N)(t+1) = 2·1_{(ξ_i^(N)(t),+∞)}(ε_i(t)) − 1.

By possibly considering a further subsequence, we can assume that the sequence of empirical measures

ρ_N := (1/N) ∑_{i=1}^{N} δ_{(ξ_i^(N)(t), σ_i^(N)(t))}

admits a joint weak limit. Note that ρ_N is a sequence of probability measures on R̄ × {−1, 1}. The space of these probabilities, provided with the weak topology, is metrizable as a compact Polish space. By the Skorohod Theorem (see e.g. Theorem 2.2.2 in [4]), the sequence ρ_N can be realized on a probability space (Ω_1, F_1, P_1) in such a way that ρ_N → ρ almost surely. We denote by ρ_N^ξ (resp. ρ_N^σ) the marginal of ρ_N on R̄ (resp. {−1, 1}). With probability one, ρ_N is the sum of N point masses of weight 1/N, which we still denote by (ξ_i^(N), σ_i^(N))_{i=1}^{N}. Now, let (Ω_2, F_2, P_2) be a probability space supporting a sequence (ε_i(t))_{i≥1} of i.i.d. random variables with distribution η. We set Ω := Ω_1 × Ω_2, provided with the product σ-field F_1 ⊗ F_2 and the product measure P := P_1 ⊗ P_2. We will denote by E, E_1, E_2 the corresponding expectations, while E_2^i will denote the E_2-expectation conditioned on ε_i(t). In this way, all objects appearing in (3.2) have been realized on Ω without modifying their joint distribution. In particular,

m_N(t+1) = (1/N) ∑_{i=1}^{N} φ(ξ_i^(N), ε_i),

with φ(ξ, ε) = 2·1_{(ξ,+∞)}(ε) − 1. Set

φ̄(ξ) := ∫ φ(ξ, ε) η(dε) = 2η((ξ, +∞)) − 1.

Note that, since we have assumed η to be a continuous distribution, φ̄ is continuous. Thus, the sequence ∫ φ̄ dρ_N^ξ converges almost surely.
Moreover, by independence of the ε_i(t),

E[ ( m_N(t+1) − ∫ φ̄ dρ_N^ξ )² ]
= (1/N²) ∑_{i,j} E_1 { E_2 [ ( φ(ξ_i^(N), ε_i) − φ̄(ξ_i^(N)) ) ( φ(ξ_j^(N), ε_j) − φ̄(ξ_j^(N)) ) ] }
= (1/N²) ∑_{i} E_1 E_2 [ ( φ(ξ_i^(N), ε_i) − φ̄(ξ_i^(N)) )² ] ≤ C/N   (3.3)

for a suitable constant C > 0. This shows that m_N(t+1) − ∫ φ̄ dρ_N^ξ converges to zero in L² and, therefore, almost surely along a subsequence. Thus, along a suitable subsequence, m_N(t+1) converges almost surely to a random variable m(t+1) which, being also the almost sure limit of ∫ φ̄ dρ_N^ξ, is F_1-measurable.

Now we aim at taking the almost sure limit in (3.2). For this purpose, let {H^δ : δ > 0} be a family of bounded Lipschitz functions such that H = inf_δ H^δ.

• By continuity of H^δ, f, g and the fact that m(t+1) is F_1-measurable, if the limit of

(1/N) ∑_{i=1}^{N} H^δ( [f(m(t+1)) + g(m(t+1) − m(t))] + μσ_i^(N)(t) + ε_i(t) )   (3.4)

exists almost surely, where m(t) is the almost sure limit of m_N(t) = ∫ σ dρ_N^σ, then the following sequence has the same almost sure limit:

(1/N) ∑_{i=1}^{N} H^δ( E_{−i}[f(m_N(t+1)) + g(m_N(t+1) − m_N(t))] + μσ_i^(N)(t) + ε_i(t) ).

• By repeating the argument in (3.3), the almost sure limit of (3.4) coincides with that of

(1/N) ∑_{i=1}^{N} ∫ H^δ( [f(m(t+1)) + g(m(t+1) − m(t))] + μσ_i^(N)(t) + ε ) η(dε)
= ∫∫ H^δ( [f(x) + g(x − y)] + μσ + ε ) η(dε) ρ_N^σ(dσ) |_{x=m(t+1), y=m(t)},   (3.5)

provided (3.5) converges almost surely; but this is guaranteed by the fact that ρ_N^σ converges almost surely, namely to

[(1 + m(t))/2] δ_{1} + [(1 − m(t))/2] δ_{−1}.

Summing all up,

(1/N) ∑_{i=1}^{N} H^δ( E_{−i}[f(m_N(t+1)) + g(m_N(t+1) − m_N(t))] + μσ_i^(N)(t) + ε_i(t) )

converges almost surely (along a subsequence) to

[(1 + m(t))/2] ∫ H^δ( [f(m(t+1)) + g(m(t+1) − m(t))] + μ + ε ) η(dε)
+ [(1 − m(t))/2] ∫ H^δ( [f(m(t+1)) + g(m(t+1) − m(t))] − μ + ε ) η(dε),

which is, therefore, an upper bound for m(t+1), the limit of the left-hand side of (3.2).
Taking the infimum over δ > 0, we get

m(t+1) ≤ [(1 + m(t))/2] ∫ H( [f(m(t+1)) + g(m(t+1) − m(t))] + μ + ε ) η(dε)
       + [(1 − m(t))/2] ∫ H( [f(m(t+1)) + g(m(t+1) − m(t))] − μ + ε ) η(dε)   (3.6)
= [1 + m(t)] η(f(m(t+1)) + g(m(t+1) − m(t)) + μ) + [1 − m(t)] η(f(m(t+1)) + g(m(t+1) − m(t)) − μ) − 1.

A similar argument can be repeated with H^δ Lipschitz, with sup_δ H^δ = H^−, where H^− = H − 2·1_{{0}} is the left-continuous modification of H. Since η is a continuous distribution, we have, for every ρ ∈ R,

∫ H(ρ + μ + ε) η(dε) = ∫ H^−(ρ + μ + ε) η(dε),

so that we obtain a lower bound for lim_{N→+∞} m_N(t+1), which matches the upper bound in (3.6). □

Remark 3.2. Suppose the following further conditions are verified:

• there exists the limit lim_{N→+∞} m_N(0) =: m(0) ∈ [−1, 1];

• the implicit recursion (3.1), with initial condition m(0), is single valued up to time T ∈ N ∪ {+∞}.

Under these conditions, by Theorem 3.1, the sequence (m_N(t))_{t≤T} converges weakly to the deterministic, unique solution of this recursion. By a Borel-Cantelli type argument, the result of Theorem 3.1 can be refined in this case, showing that the limit holds indeed in the almost sure sense.

4 Steady states of the limit evolution equation

Equation (3.1) describes the behavior of the system with utility function (1.6) in the limit of infinitely many players. We are interested in detecting the t-stationary solution(s) of this equation and in studying its (their) stability properties. We refer to (3.1) as the limit evolution equation of our system. Notice that (3.1) is an implicit equation of the form G(x, y) = 0, where

G(x, y) := y − (1 + x) η(f(y) + g(y − x) + μ) − (1 − x) η(f(y) + g(y − x) − μ) + 1.   (4.1)

In this section we consider the problem of existence and local stability for steady states of two types: fixed points, i.e., those x ∈ [−1, 1] for which G(x, x) = 0, and 2-cycles, i.e., those pairs (x, y) ∈ [−1, 1]² with x ≠ y, G(x, y) = G(y, x) = 0.
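A step of the implicit recursion (3.1) can be solved numerically. The following Python sketch assumes the logistic η of (1.5), f ≡ 0 and the linear g(z) = kz studied in Section 5, and uses bisection; when the recursion is multiple valued, bisection returns just one of the possible roots:

```python
import math

def eta(x, beta):
    """Logistic noise distribution (1.5)."""
    return 1.0 / (1.0 + math.exp(-2.0 * beta * x))

def rhs(y, x, beta, k, mu):
    """Right-hand side of (3.1) with f = 0 and linear g(z) = k*z."""
    return ((1 + x) * eta(k * (y - x) + mu, beta)
            + (1 - x) * eta(k * (y - x) - mu, beta) - 1)

def next_m(x, beta, k, mu):
    """Solve the implicit step G(x, y) = 0 for y = m(t+1) by bisection.

    Since rhs takes values in [-1, 1], y - rhs(y) changes sign on [-1, 1]."""
    lo, hi = -1.0, 1.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mid - rhs(mid, x, beta, k, mu) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

Iterating next_m then produces a trajectory of the limit evolution equation for the chosen (illustrative) parameter values.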
These steady states emerge as attractors in the numerical implementations we illustrate in Section 5. Although we cannot rule out other types of steady states or attractors, these are the only ones we see in simulations.

We now show how to describe both fixed points and 2-cycles as fixed points of a suitable map. We introduce a set of minimal assumptions which guarantee differentiability of the maps we consider below. It is plausible that these assumptions can be weakened at the cost of some further technicalities.

Assumption 4.1.
A.1 The distribution function η of the random terms, as introduced in (1.5), is absolutely continuous, with a continuous, strictly positive density.
A.2 The map g : (−2, 2) → R is of class C¹, odd, with g′(z) > 0 for every z ∈ (−2, 2).
A.3 f ≡ 0.

Note that, in principle, x and y may vary in [−1, 1], so that g should be defined on [−2, 2]. However, it follows from (3.1) that y ∈ (−1, 1) whenever x ∈ (−1, 1). In particular, fixed points and 2-cycles are in (−1, 1). We now define an auxiliary function θ : R → R that is used in the sequel:

θ(R) := [ η(R + μ) + η(R − μ) − 1 − g^{−1}(R) (η(R + μ) − η(R − μ)) ] / [ 1 − (η(R + μ) − η(R − μ)) ],   (4.2)

where g^{−1} : Im(g) → (−2, 2) is the inverse of g. The map g^{−1} is well defined thanks to Assumption A.2. Moreover, θ is well defined too, since, by Assumption A.1, η has a strictly positive derivative, hence η(R + μ) − η(R − μ) < 1 for every R ∈ R.

Theorem 4.2. On the open set D := {R ∈ R : θ(R) ∈ (−1, 1)}, define the map

ψ(R) := g(2θ(R)),   (4.3)

where θ is as defined in equation (4.2). Then, under Assumption 4.1, the set of fixed points and 2-cycles for (4.1) is characterized as the set of fixed points of the map ψ. In particular,

i) x* ∈ (−1, 1) is a fixed point for G if and only if x* = 0. This fixed point corresponds to R* = 0, which is always a fixed point for ψ.

ii) G admits a 2-cycle (x*, y*) if and only if there exists R* ∈ D \ {0} such that ψ(R*) = R*.
In this case y* = θ(R*) and x* = −θ(R*).

Proof. Define R = g(y − x). A fixed point x for (3.1) (rewritten as in (4.1) with f ≡ 0) is such that

x = x [η(R + μ) − η(R − μ)] + η(R + μ) + η(R − μ) − 1,
R = g(x − x) = 0.   (4.4)

Setting R = 0 in the first equation of (4.4), we get x = 0. This proves that x = 0 is the only fixed point for (3.1) and, since R = 0, it also corresponds to the zero fixed point of the map ψ (note that ψ(0) = 0).

We are now left with point (ii). Note that the equation G(x, y) = 0, with G as in (4.1) and f ≡ 0, can be rewritten as

y = x [η(R + μ) − η(R − μ)] + η(R + μ) + η(R − μ) − 1,
R = g(y − x).   (4.5)

Solving the second equation in (4.5) for x and inserting the solution x = y − g^{−1}(R) in the first, we obtain the following alternative formulation for the implicit limit dynamics:

y = θ(R),
R = g(y − x).   (4.6)

Suppose there is a 2-cycle (x*, y*). Then, by (4.6), there exists R* such that

y* = θ(R*),   R* = g(y* − x*),   (4.7)

and

x* = θ(−R*) = −θ(R*),   −R* = g(x* − y*),   (4.8)

which immediately give R* = ψ(R*). Vice versa, assume R* = ψ(R*), with θ(R*) ∈ (−1, 1). Setting y* := θ(R*), x* := −θ(R*), the triple (x*, y*, R*) solves (4.7) and (4.8), so (x*, y*) is a 2-cycle. □

Theorem 4.2 shows that the only steady-state equilibrium corresponds to half of the players choosing σ = 1. Moreover, it provides conditions for the existence of a 2-cycle. We now discuss the local (linear) stability of the fixed point and of the 2-cycle.

Proposition 4.3.
i) The fixed point x* = 0 is linearly stable if and only if ψ′(0) < 1.
ii) Let R* ∈ D \ {0} be such that ψ(R*) = R*. Assume ψ′(R*) ≠ 2. Then the 2-step dynamics are well defined in a neighborhood of x* and the 2-cycle (x*, y*) is locally linearly stable if and only if ψ′(R*) < 1.

Proof. We start by proving point (ii).
Given a 2-cycle (x*, y*), corresponding to the fixed point R* of ψ, consider a small perturbation of equations (4.7) and (4.8):

y_ε = θ(R_ε),   R_ε = g(y_ε − x* − ε),   (4.9)

and

x_ε = θ(Q_ε),   Q_ε = g(x_ε − y_ε).   (4.10)

Computing, by implicit differentiation, x′_0 := (d/dε) x_ε |_{ε=0}, we easily obtain

x′_0 = ( θ′(R*) g′(y* − x*) / (1 − θ′(R*) g′(y* − x*)) )².

Note now that ψ′(R*) = 2 g′(2θ(R*)) θ′(R*) = 2 θ′(R*) g′(y* − x*), giving

x′_0 = ( ψ′(R*) / (2 − ψ′(R*)) )².

This shows both that the implicit map is locally well defined if ψ′(R*) ≠ 2 and that local linear stability, i.e., |x′_0| < 1, is equivalent to ψ′(R*) < 1. The proof of (i) can be derived similarly, perturbing only (4.7) in a neighborhood of x* = 0. □

The attractors of the dynamics we have described are a fixed point and a 2-cycle. In the next section, we explore, by means of an example and of some numerical simulations, the stability of the fixed point and the existence and stability of the 2-cycle. In particular, we discuss the possibility of coexistence of a linearly stable 2-cycle with the stable fixed point.

5 A significant example: the linear model

We now specify the function g, defined in (1.6), as follows:

g(y − x) = k (y − x),   (5.1)

where k > 0 is a given constant. Note that, R being linear in (y − x), it trivially satisfies A.2 of Assumption 4.1. We also specify the distribution of the noisy component in the utility, assuming a logistic distribution as already exemplified in (1.5). The choice of logistic error terms is rather common in many models of evolving social systems (see, for instance, [5] and [11]). Note that the logistic distribution satisfies A.1 in Assumption 4.1. The parameter β measures the impact of the random component in the decision process.

We now characterize the attractors for the limit evolution equation (3.1) under the new specifications. Note that in this context g(z) = kz, so the map ψ, as defined in (4.3), reads

ψ(R) = −2R [η(R + μ) − η(R − μ)] / (1 − [η(R + μ) − η(R − μ)])
     + 2k [η(R + μ) + η(R − μ) − 1] / (1 − [η(R + μ) − η(R − μ)]).   (5.2)

Before stating the main result of this section, we prove a technical lemma, which will be used in Proposition 5.2 to deal with possibly neutrally stable 2-cycles.

Lemma 5.1. Define K as the set of k ∈ R_+ such that there exists R ∈ R for which ψ(R) = R and ψ′(R) = 1. Then K is locally finite, i.e., its intersection with any bounded set is finite.

Proof. Note that the function ψ in (5.2) is of the form ψ(R) = A(R) + kB(R), so that ψ′(R) = A′(R) + kB′(R). Now, the identities ψ(R) = R and ψ′(R) = 1 imply

k = (R − A(R)) / B(R) = (1 − A′(R)) / B′(R).   (5.3)

By direct inspection, using the explicit logistic form of η, it is easily seen that A and B are bounded functions and that B is bounded away from zero. Thus, if we restrict to values of k in a bounded interval (0, M], with M > 0, we see that the first equation of (5.3) may be satisfied only for R in a bounded interval (−a_M, a_M). Moreover, the second equation in (5.3) is an equality between two real analytic functions which are not identically equal, so it may admit only a finite number of solutions in (−a_M, a_M). Plugging these solutions into the first equation of (5.3), we obtain only a finite number of possible values of k. We have therefore shown that K ∩ (0, M] is finite for every M > 0. □

Proposition 5.2. Define k_u^c = (1/(4β)) (1 + e^{2βμ}) and let K be the locally finite set defined in Lemma 5.1.

i) The fixed point x* = 0 is linearly stable if and only if k ∈ (0, k_u^c).

ii) A 2-cycle exists if and only if k > k_l^c, where k_l^c is a critical value satisfying k_l^c ≤ k_u^c. Moreover, for every k ∈ (k_l^c, +∞) \ K, there exists a linearly stable 2-cycle.

iii) If μ = 0, then k_l^c = k_u^c; therefore, no coexistence of stable fixed point and 2-cycle is possible.
If, instead, μ > (1/(2β)) log 2, then k_l^c < k_u^c; in this case, coexistence holds for k ∈ (k_l^c, k_u^c).

Proof. We study the fixed point and the 2-cycles by means of Proposition 4.3. By a simple calculation, we find that

ψ′(0) = (2kη′(μ) − 2η(μ) + 1) / (1 − η(μ)).

Hence

ψ′(0) < 1 ⟺ k < η(μ)/(2η′(μ)) ⟺ k < (1/(4β)) (1 + e^{2βμ}),   (5.4)

which completes the analysis of the linear stability of the fixed point.

For the existence and stability of 2-cycles, we begin by observing that, in this model, R = g(y − x) takes values in (−2k, 2k). If R ∈ (−2k, 2k) is such that ψ(R) = R, then, using the notation of Proposition 4.3,

2θ(R) = R/k ∈ (−2, 2),

or, equivalently, θ(R) ∈ (−1, 1). Thus R ∈ D, the domain of the map ψ. Therefore, taking also into account that ψ is an odd function, we conclude that a 2-cycle exists if and only if ψ has a fixed point in (0, 2k). Since η(2k − μ) ∈ [0, 1) and η(2k + μ) − η(2k − μ) ∈ [0, 1), we have

ψ(2k) = 2k [η(2k + μ) + η(2k − μ) − 1 − 2(η(2k + μ) − η(2k − μ))] / (1 − [η(2k + μ) − η(2k − μ)])
      = 2k ( 1 − 2(1 − η(2k − μ)) / (1 − [η(2k + μ) − η(2k − μ)]) ) < 2k.

By continuity we obtain the following facts.

Fact 1. A fixed point R* ∈ (0, 2k) for ψ (i.e., a 2-cycle) exists if and only if there is R ∈ (0, 2k) for which ψ(R) ≥ R.

To discuss the linear stability of 2-cycles, it is convenient to establish a refinement of Fact 1.

Fact 2. Suppose there is R ∈ (0, 2k) for which ψ(R) > R and that k ∉ K. Then a linearly stable 2-cycle exists.

Fact 2 is proved as follows. If there exists R ∈ (0, 2k) for which ψ(R) > R, then, since ψ(2k) < 2k, the graph of ψ must cross the graph of the identity at a point R̄ for which ψ′(R̄) ≤ 1. The case ψ′(R̄) = 1 is ruled out, since k ∉ K.

Having proved Facts 1 and 2, we are now ready to analyze the existence and stability of 2-cycles. To begin with, we remark that, for μ = 0, ψ(R) = 2k(2η(R) − 1) = 2k tanh(βR), which is strictly concave for R > 0.
By concavity, a strictly positive fixed point exists if and only if ψ′(0) > 1, i.e., exactly when the fixed point x* = 0 is unstable. In this case, the stable fixed point cannot coexist with 2-cycles. When μ > 0, the situation is more complex, but can be clarified by the following simple statements.

Fact 3. 2-cycles persist as k increases. Indeed, for given k > 0 and R > 0, suppose ψ(R) ≥ R > 0. Looking at (5.2), it can be seen that, necessarily,

[η(R + μ) + η(R − μ) − 1] / (1 − [η(R + μ) − η(R − μ)]) > 0,

and so ψ(R) increases in k. Thus, the inequality ψ(R) ≥ R persists as k increases.

Fact 4. If ψ′(0) > 1, or ψ′(0) = 1, ψ″(0) = 0 and ψ‴(0) > 0, then a 2-cycle exists. Indeed, a Taylor expansion of ψ around 0 shows that ψ(R) > R for R > 0 small enough.

By some straightforward computations, we get ψ″(0) = 0 and

ψ‴(0) = ( 2k [η‴(μ)(1 − η(μ)) + 3η′(μ)η″(μ)] − 3η″(μ) ) / (1 − η(μ))².

Now set k = k_u^c := η(μ)/(2η′(μ)) = (1/(4β)) (1 + e^{2βμ}), which implies ψ′(0) = 1, and

ψ‴(0)|_{k=k_u^c} > 0 ⟺ η(μ)η‴(μ) − 3η′(μ)η″(μ) > 0,

meaning that

( 8β³ e^{4βμ} / (e^{2βμ} + 1)⁵ ) ( e^{4βμ} − e^{2βμ} − 2 ) > 0 ⟺ e^{2βμ} > 2.

Then ψ‴(0) > 0 whenever 2βμ > log 2. In this case, for k = k_u^c, there exists R > 0 with ψ(R) > R. By continuity, the existence of such an R is preserved by a small decrease of k. In other words, 2-cycles exist for all k > k_l^c, with k_l^c < k_u^c: in this case 2-cycles may coexist with the stable fixed point. □

Remark 5.3. In Proposition 5.2, we show that μ > (1/(2β)) log 2 is a sufficient condition for the coexistence of the fixed point and a locally stable 2-cycle. Our proof does not cover the case 0 < μ ≤ (1/(2β)) log 2. Numerical simulations suggest that there is no coexistence in this range, so we conjecture that k_l^c < k_u^c if and only if μ > (1/(2β)) log 2.

Relying on Proposition 5.2, we now briefly analyze how the picture of the stationary regime changes depending on the parameters of the model. In particular, we are able to discuss the coexistence of stable fixed point and 2-cycles.
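The characterization of Theorem 4.2 and the criteria of Proposition 5.2 lend themselves to a numerical check. The Python sketch below (parameter values are ours, chosen for illustration) evaluates ψ from (5.2) and the threshold k_u^c, and scans (0, 2k) for a nonzero fixed point of ψ, i.e., a 2-cycle:

```python
import math

def eta(x, beta):
    """Logistic noise distribution (1.5)."""
    return 1.0 / (1.0 + math.exp(-2.0 * beta * x))

def psi(R, beta, k, mu):
    """The map (5.2), for linear g(z) = k*z and logistic eta."""
    delta = eta(R + mu, beta) - eta(R - mu, beta)
    s = eta(R + mu, beta) + eta(R - mu, beta) - 1
    return -2 * R * delta / (1 - delta) + 2 * k * s / (1 - delta)

def k_uc(beta, mu):
    """Critical coupling of Proposition 5.2 i): the fixed point is stable iff k < k_uc."""
    return (1 + math.exp(2 * beta * mu)) / (4 * beta)

def find_two_cycle(beta, k, mu, grid=4000):
    """Scan (0, 2k) for a fixed point of psi (a 2-cycle, by Theorem 4.2 ii).

    A crude grid search refined by bisection; returns R* or None."""
    f = lambda R: psi(R, beta, k, mu) - R
    prev = f(1e-6)
    for i in range(1, grid + 1):
        R = 2 * k * i / grid
        cur = f(R)
        if prev > 0 >= cur:  # downward crossing of the identity: psi'(R*) <= 1
            lo, hi = 2 * k * (i - 1) / grid, R
            for _ in range(80):
                mid = 0.5 * (lo + hi)
                (lo, hi) = (mid, hi) if f(mid) > 0 else (lo, mid)
            return 0.5 * (lo + hi)
        prev = cur
    return None
```

For instance, with β = 1 and μ = 0.5 one gets k_u^c ≈ 0.93, and for k above this value the scan returns a nonzero R*, i.e., a 2-cycle.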
For μ = 0, no coexistence of the stable fixed point and a stable 2-cycle is possible: for k < k_u^c = 1/(2β), the fixed point is linearly stable; for k > k_u^c, the stability of the fixed point is lost and a linearly stable 2-cycle arises. Introducing frictions, coexistence of the two attractors becomes possible. In this case the two possibly different thresholds k_l^c and k_u^c, with k_l^c ≤ k_u^c = (1/(4β)) (1 + e^{2βμ}), separate the stability regions of the two attractors. Eventually, for μ > (1/(2β)) log 2, the stable fixed point and a stable 2-cycle coexist. In Figure 1, we plot k_l^c and k_u^c as functions of β, for μ = 0 (Panel A) and for μ = 0.5 (Panel B). Note that, for μ = 0, k_l^c = k_u^c. In Panel B, the curve β ↦ k_l^c(β) has been obtained numerically. Since k_u^c → +∞ as β → +∞, the introduction of a friction term strongly stabilizes the fixed point locally at low noise, without necessarily losing the stability of a 2-cycle. In the coexistence region, the "typical" behavior is the following: an unstable 2-cycle separates the domains of attraction of the fixed point and of the stable 2-cycle. As μ increases, the domain of attraction of the fixed point grows and the oscillation |y − x| at the stable 2-cycle shrinks. This picture is well supported by numerical evidence.

We conclude this section by discussing the role of β. If we let β → 0⁺, i.e., the error dominates, only the fixed-point region survives. This is due to the fact that, when β → 0⁺, the agents completely randomize their choice. Hence, in the asymptotic model with infinitely many agents, half of them choose σ = 1 and half σ = −1; the equilibrium participation rate is, therefore, always equal to 0. When β increases, the impact of the noise component shrinks and the agents are more prone to extreme behaviors: the trend-dependent term in the utility prevails over the noise.

5.1 Simulations: the dynamics with a finite number of players

As already noticed, coexistence is one of the most significant results of this model.
In particular, it has important consequences at the level of the finite-dimensional system. We perform some agent-based simulations in order to capture this aspect. More in detail, we simulate a large, but finite, population of N agents. At each time step, we let the N agents play (sequentially) their best response to the (fixed) actions of the other agents. We let the algorithm run until a fixed profile σ is reached. In doing this, we are numerically identifying a Nash equilibrium as a strategy profile σ that is a fixed point of the best-response map. In the case of multiple Nash equilibria, in any simulation the algorithm identifies (randomly) one among them.

In the case of coexistence, the finite-dimensional system exhibits a regime-switching phenomenon. Suppose N is large. The finite system tends to stay close to the infinite one; in particular, it gets attracted by one of the attractors, determined by the initial condition. After some time, a large random fluctuation occurs in the finite system, leading the aggregate variable mN to fall into the basin of attraction of a different attractor, where the system will stay until the next large fluctuation. The waiting time to the next large fluctuation has a distribution close to the exponential one, with a mean that grows exponentially in N. Despite the exponential growth, this regime switching is clearly visible even for N of the order of thousands. In the context of statistical mechanics models, this phenomenon is often called metastability and it is well understood for some simple models (see, e.g., [2]). In Figure 2 we plot two evolutions of mN in the coexistence region; the two evolutions differ in the initial condition, which in the infinite system would lead to different steady states.
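The simulation loop just described (sequential best responses iterated until a fixed profile is reached) can be sketched as follows. The payoff used here is a hypothetical pure-coordination payoff, chosen only to make the sketch self-contained; the paper's actual payoff also involves the trend, the noise and the friction terms, which are not reproduced here.

```python
import random

def best_response_profile(N, payoff, max_sweeps=1000, seed=0):
    """Sweep the agents sequentially, each playing a best response in {-1, +1}
    to the current actions of the others, until nobody wants to deviate.
    The returned profile is a fixed point of the best-response map,
    i.e., a Nash equilibrium of the stage game."""
    rng = random.Random(seed)
    sigma = [rng.choice([-1, 1]) for _ in range(N)]
    for _ in range(max_sweeps):
        changed = False
        for i in range(N):
            best = max((-1, 1), key=lambda a: payoff(i, a, sigma))
            if best != sigma[i]:
                sigma[i] = best
                changed = True
        if not changed:
            return sigma  # no profitable unilateral deviation left
    return sigma

def coordination_payoff(i, a, sigma):
    """Hypothetical illustrative payoff (NOT the paper's): agent i simply
    wants to align with the mean action of the other agents."""
    m_others = (sum(sigma) - sigma[i]) / (len(sigma) - 1)
    return a * m_others

sigma = best_response_profile(50, coordination_payoff)
print(sum(sigma) / len(sigma))
```

With this coordination payoff the sweep quickly settles on one of the two consensus profiles; with a trend-and-friction payoff, the same loop would realize one stage-game equilibrium per time step, as in the simulations of this section.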
6 Concluding remarks

We propose a mean-field dynamic model, where the dynamics of the system are trend-driven, in the sense that the rates of transition depend on the variation of the aggregate variable (the trend), thus introducing an endogenous relation between the states of the system at two subsequent dates. This feature induces natural and non-trivial dynamics that have been formally studied. Thinking of the system as a social network, we are modeling agents facing bounded rationality: when deciding their action, they rely on random utilities (see [5]) characterized by a noisy component. Differently from the majority of the literature in this field, we let the agents update their opinion in a parallel way, i.e., their action is the consequence of a game whose payoffs depend on the expectations about the behavior of the population.

Notice that, owing to the assumption of agents' simultaneous updating with a trend-driven component, at equilibrium the limiting dynamics converge either to a fixed point or to a 2-cycle. Moreover, because of the strategic behavior of the agents, the two limiting attractors coexist for some values of the parameters; this seems to be a novelty in probabilistic models that describe social interactions and, possibly, contagion. See, for instance, [3], [6] or [7], which are based on models relying on agents who update their choices sequentially without any trend component: there the stable attractors can only be fixed points.

Acknowledgments

Special thanks go to Fulvio Fontini for the fruitful discussions. The authors also thank Roberto Casarin, Gustav Feichtinger, Marco LiCalzi, Antonio Nicolò and Paolo Pellizzari. The authors acknowledge the financial support of the Research Grant of the Ministero dell'Istruzione, dell'Università e della Ricerca: PRIN 2008, Probability and Finance, and PRIN 2009, Complex Stochastic Models and their Applications in Physics and Social Sciences. We are responsible for all remaining errors.
References

[1] Barucci, E., Tolotti, M.: Social interaction and conformism in a random utility model. J. Econ. Dyn. Control 36(12), 1855-1866 (2012)

[2] Bianchi, A., Bovier, A., Ioffe, D.: Sharp asymptotics for metastability in the random field Curie-Weiss model. Electron. J. Probab. 14(53), 1541-1603 (2009)

[3] Blume, L., Durlauf, S.: Equilibrium concepts for social interaction models. Intern. Game Theory Rev. 5(3), 193-209 (2003)

[4] Borkar, V.S.: Probability Theory: An Advanced Course. Springer-Verlag, New York (1995)

[5] Brock, W., Durlauf, S.: Discrete choice with social interactions. Rev. Econ. Stud. 68(2), 235-260 (2001)

[6] Collet, F., Dai Pra, P., Sartori, E.: A simple mean field model for social interactions: dynamics, fluctuations, criticality. J. Stat. Phys. 139(5), 820-858 (2010)

[7] Dai Pra, P., Runggaldier, W.J., Sartori, E., Tolotti, M.: Large portfolio losses: a dynamic contagion model. Ann. Appl. Probab. 19(1), 347-394 (2009)

[8] Dawson, D.A., Gärtner, J.: Large deviations for the McKean-Vlasov limit for weakly interacting diffusions. Stochastics 20(4), 247-308 (1987)

[9] Garnier, J., Papanicolaou, G., Yang, T.-W.: Large deviations for a mean field model of systemic risk. arXiv:1204.3536 [q-fin.RM] (2012)

[10] Gärtner, J.: On the McKean-Vlasov limit for interacting diffusions. Math. Nachr. 137(1), 197-248 (1988)

[11] Nadal, J.P., Phan, D., Gordon, M.B., Vannimenus, J.: Multiple equilibria in a monopoly market with heterogeneous agents and externalities. Quant. Finance 5(6), 557-568 (2005)

[Figure 1: two panels, "Panel A - Phase Diagram - No frictions (µ = 0)" and "Panel B - Phase Diagram - Frictions (µ > 0)", plotting the thresholds k_u^c (and, in Panel B, k_l^c) against β.]

Figure 1: Phase diagram of the parameters of the model. In Panel A we put µ = 0, in Panel B µ = 0.5. In both panels we denote by 1 the region where only the fixed point x∗ = 0 is stable. In region 2, we have coexistence of the stable fixed point and of a stable 2-cycle.
In region 3, only the 2-cycle is stable.

[Figure 2: two panels, "Participation rate (k = 1.8, β = 1, µ = 0.5, m0 = 0.6)" and "Participation rate (k = 1.8, β = 1, µ = 0.5, m0 = 0.2)", plotting the participation rate against time.]

Figure 2: Asymptotic regime (2-cycle above, fixed point below) for the optimal participation rate (red dotted line) and finite-dimensional simulation with N = 500 agents (blue continuous line). Starting points are m0 = 0.6 (above) and m0 = 0.2 (below).