NEW APPS READ THIS ADDENDUM!!
Moderator: Littlabit
Forum rules
Remember to read the CN Rules and Policies first!
Henceforth, all new applications must also answer one of the three questions below:
Consider a linear Cournot duopoly with the true inverse demand function P = a − Q and zero marginal costs, where P is price, a > 0, and Q = q1 + q2 is the total supply of a good. Now imagine that each firm i ∈ N = {1, 2} perceives the inverse demand function as P = a + bi − Q, where bi ∈ R is the bias in i's perception. Let G(b1, b2) be the game in which b1 and b2 are common knowledge.
(a) Show that G(b1, b2) has a unique rationalisable strategy profile. Compute the true payoffs u1(b1, b2) and u2(b1, b2) at the rationalisable strategy profile, i.e., the payoffs computed using the true inverse demand P = a − Q.
(b) Consider the meta game Γ = (N, R, R, u1, u2), where the strategies are choices of b1 and b2, and u1 and u2 are as in (a). Show that Γ is supermodular in a proper order, has a unique Nash equilibrium b*, and that b*_i > 0 for each i ∈ N. Show that the replicator dynamics for Γ (using the true payoffs) converges to b*.
(c) Now consider an evolutionary learning process in which the agents not only develop their perceptions (i.e., b1 and b2) but also learn how to play the game G(b1, b2) given those perceptions. Assume that the learning process is a "two-tiered" replicator dynamics in which they learn how to play G(b1, b2) infinitely faster than they change their perceptions; i.e., given any perception pair (b1, b2), the play converges to the limit of the dynamics for fixed (b1, b2) before they change their perceptions. What is the limit of this "two-tiered" replicator dynamics?
Regards,
Neibruoma Meztress
66 Enchanter
a) 42
b) 42
c) 42
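More seriously, for anyone who wants to check parts (a) and (b): below is a minimal sympy sketch. It is my own verification under the setup stated above, not an official answer key; the closed forms in the comments follow from the standard Cournot best-response computation.

[code]
# Sanity check for parts (a) and (b) of the question above.
# Assumption: each firm maximises its *perceived* profit q_i*(a + b_i - q_i - q_j),
# while true payoffs are evaluated at the true price a - Q.
import sympy as sp

a, b1, b2, q1, q2 = sp.symbols('a b1 b2 q1 q2', real=True)

# Part (a): best responses under the perceived demands.
br1 = sp.solve(sp.diff(q1 * (a + b1 - q1 - q2), q1), q1)[0]  # (a + b1 - q2)/2
br2 = sp.solve(sp.diff(q2 * (a + b2 - q1 - q2), q2), q2)[0]  # (a + b2 - q1)/2
sol = sp.solve([sp.Eq(q1, br1), sp.Eq(q2, br2)], [q1, q2])
print(sp.simplify(sol[q1]))  # q1 = (a + 2*b1 - b2)/3 -- unique intersection

# True payoffs use the true inverse demand P = a - Q:
u1 = sp.expand(sol[q1] * (a - sol[q1] - sol[q2]))  # (a + 2b1 - b2)(a - b1 - b2)/9
u2 = sp.expand(sol[q2] * (a - sol[q1] - sol[q2]))

# Part (b): first-order conditions of the meta game in (b1, b2).
print(sp.solve([sp.diff(u1, b1), sp.diff(u2, b2)], [b1, b2]))  # {b1: a/5, b2: a/5}
print(sp.diff(u1, b1, b2))  # -1/9: supermodular once one player's order is reversed
[/code]

If that is right, the slow tier of the dynamics in (c) should settle at b* = (a/5, a/5), with the fast tier then playing the unique rationalisable profile of G(a/5, a/5), i.e. q1 = q2 = 2a/5.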
Kamidari, Barbarian Shaman
Nicia, Wood Elf Warrior
Staaby Taaby, Vah Shir Rogue
Celestial Navigators (http://www.celestialnavigators.org), Vazaelle (http://www.vazaelle.com)