Contracting with Imperfect Commitment and the Revelation Principle: The Single Agent Case

Helmut Bester & Roland Strausz

(As told by

Daniel Brown & Justin Tumlinson

30 January 2006)


Model Vocabulary I

T, set of agent types

  • t ∈ T

π, the probability distribution of the agent’s type

  • πt, the unconditional probability that the agent’s type is t

M, message set from which the agent may select

  • m ∈ M

  • principal selects the set M

X, set of decisions to which the principal can contractually commit

  • x(·) ∈ X

  • (M, x(·)), contract or mechanism

  • x(m), agent can enforce x by sending message m

Y, set of decisions to which the principal cannot contractually commit himself

  • y(·) ∈ Y

  • y(m) means the message received by the principal determines the principal’s actions through model properties (e.g. optimality conditions), not that a message m commits the principal to a specific action per se

F(x(m)), correspondence restricting feasible choices in Y, given x(·) and m

  • y(m) ∈ F(x(m))


Model Vocabulary II

Vt(x(m), y(m)), payoff of the principal if the agent is type t and sends m

Ut(x(m), y(m)), payoff of the agent if the agent is type t and sends m

Q, set of strategies from which a type-t agent chooses a strategy, qt ∈ Q

  • qt(m), the probability that a type-t agent will send message m

  • E.g. if |M| = 6, then qt = (0, ½, 0, 0, ⅛, ⅜) means a type-t agent sends messages 2, 5 and 6 with probabilities ½, ⅛ and ⅜ respectively (a mixed strategy); qt(1) = 0, qt(2) = ½, …, qt(5) = ⅛, qt(6) = ⅜

p, the vector of the principal’s posterior beliefs about the agent’s type

  • pt(m), the belief of the principal that the agent is type t, given m

V(q(·), y(·), x(·)|M) = E[payoff for principal]

    = ∑t∈T πt [∫M qt(m) Vt(x(m), y(m)) dm]

Ut(qt(·), y(·), x(·)|M) = E[payoff for type-t agent]

    = ∫M qt(m) Ut(x(m), y(m)) dm
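With a finite message set the integrals above become sums; a minimal sketch of both expected payoffs (all payoff numbers below are illustrative, not from the paper):

```python
# Expected payoffs from the slide, with M finite so the integrals over M
# become sums over messages. All payoff numbers below are illustrative.

def expected_V(pi, q, Vt):
    """V(q,y,x|M) = sum_t pi_t * sum_m q_t(m) * V_t(x(m), y(m))."""
    return sum(pi[t] * sum(q[t][m] * Vt[t][m] for m in range(len(Vt[t])))
               for t in range(len(pi)))

def expected_U(t, q, Ut):
    """U_t(q,y,x|M) = sum_m q_t(m) * U_t(x(m), y(m))."""
    return sum(q[t][m] * Ut[t][m] for m in range(len(Ut[t])))

pi = [0.5, 0.5]                    # priors pi_t
q = [[1.0, 0.0], [0.25, 0.75]]     # strategies q_t(m)
Vt = [[4.0, 1.0], [2.0, 3.0]]      # V_t(x(m), y(m)) as a table
print(expected_V(pi, q, Vt))       # 0.5*4 + 0.5*(0.25*2 + 0.75*3) = 3.375
```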


Model Timing

  • Mechanism (M, x()) “induces” the principal-agent game

    • Assumption: principal chooses M and x(), but X, Y, T, F(), V() & U() are exogenous

  • Agent privately observes his type, t

  • Agent chooses (mixed) strategy qt

  • Message m sent according to strategy qt

  • Principal updates beliefs, p, on agent’s type

  • Principal chooses decision y ∈ F(x(m))

  • Payoffs Vt(x(m),y(m)) & Ut(x(m),y(m)) realized


PBE Conditions

  • Principal decides optimally given his beliefs

    • ∀ m ∈ M, ∀ y′ ∈ F(x(m)): ∑t∈T pt(m) Vt(x(m), y(m)) ≥ ∑t∈T pt(m) Vt(x(m), y′)

  • Agent anticipates y(m) & chooses a payoff-maximizing qt

    • ∫M qt(m) Ut(x(m), y(m)) dm ≥ ∫M qt′(m) Ut(x(m), y(m)) dm, ∀ qt′ ∈ Q

  • Principal’s posterior beliefs must be consistent with Bayes’ Rule

    • pt(m) = qt(m) πt / ∑t′∈T qt′(m) πt′
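The Bayesian-consistency condition is straightforward to state as code; a small sketch with illustrative numbers (two types, two messages — the names and figures are ours, not the paper's):

```python
# Bayes' Rule from the PBE conditions:
# p_t(m) = q_t(m)*pi_t / sum_t' q_t'(m)*pi_t'.
# Priors and strategies below are illustrative, not from the paper.

def posterior(pi, q, m):
    denom = sum(q[t][m] * pi[t] for t in range(len(pi)))
    if denom == 0:
        return None  # m sent with probability zero: Bayes' Rule is silent
    return [q[t][m] * pi[t] / denom for t in range(len(pi))]

pi = [0.5, 0.5]
q = [[1.0, 0.0],    # type 1 always sends m1
     [0.25, 0.75]]  # type 2 mixes
print(posterior(pi, q, 0))  # type 1 more likely after m1: [0.8, 0.2]
```

Off-path messages (denominator zero) are exactly where the principal's beliefs are unrestricted, which is what gives the principal the flexibility discussed later in the deck.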


Incentive Conditions

  • (q,p,y,x|M) is incentive feasible if (q,p,y) is a PBE given the mechanism (M, x(·))

  • (q,p,y,x|M) is incentive efficient if it is incentive feasible AND there exists no incentive feasible (q′,p′,y′,x′|M) such that

    • V(q′,y′,x′|M) > V(q,y,x|M) and

    • Ut(qt′,y′,x′|M) ≥ Ut(qt,y,x|M) ∀ t ∈ T

  • (q,p,y,x|M) and (q′,p′,y′,x′|M) are payoff equivalent if

    • V(q′,y′,x′|M) = V(q,y,x|M) and

    • Ut(qt′,y′,x′|M) = Ut(qt,y,x|M) ∀ t ∈ T

  • Agent’s individual-rationality constraint:

    • Ut(qt,y,x|M) ≥ Ut0

    • I.e. the principal must guarantee that the agent can obtain his reservation payoff, Ut0, after the agent learns his type


Direct Mechanisms

    • (M,x()) is direct if M = T

    • Revelation Principle: Assume all decisions contractible. (q,x | M) incentive feasible  a direct mechanism, d = (T,x’), and incentive feasible (q’,x’ | T)  (q,x | M) and (q’,x’ | M) are payoff-equivalent. Moreover, qt(t) = 1  t  T (i.e. the agent’s strategy is always truth-telling).

    • Contracting problem reduces to

      Maximize T t qt (t) Vt(x(t))

      Subject to

      Ut(x(t))  Ut(x(t’))  t’T incentive compatibility

      Ut(x(t))  Ut0  t’T individual rationality

    • But if the principal cannot commit to the entire allocation (x,y), the Revelation Principle is no longer applicable


Example

    • In the imperfect-commitment setting, a mechanism (M, x(·)) may support outcomes that are not possible under a direct mechanism.

    • Example (from the paper): 2 types of agent, M = {m1, m2, m3}. The principal chooses the agent’s speed s and wage w. He can commit to w, but not to s. Payoffs satisfy the single-crossing property:

      • U1 = w − s²/5, U2 = w − s²/6, V1 = 10s − s² − w, V2 = 10s − s²/4 − w. Let s(m1) = 5, s(m2) = 10, s(m3) = 20 and w(m1) = 5, w(m2) = 20, w(m3) = 70

      • It is possible to construct the principal’s beliefs p so that they support s(·)

      • An optimal strategy for the agent is then mixed: q1 = (¾, ¼, 0), q2 = (0, ½, ½)

      • Precisely constructed so that p and q are consistent (Bayes’ Rule)
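The deck does not state the prior here; assuming equal priors π = (½, ½), one can verify numerically that the posteriors induced by these mixed strategies make each of the three announced speeds sequentially optimal for the principal:

```python
# Check of the example, ASSUMING equal priors pi = (1/2, 1/2) (not stated
# on the slide). Posteriors follow from Bayes' Rule; the principal's best
# speed maximizes p1*V1 + (1-p1)*V2 in s, where V1 = 10s - s^2 - w and
# V2 = 10s - s^2/4 - w (w drops out of the first-order condition):
# 10 - s*(2*p1 + (1-p1)/2) = 0  =>  s* = 20 / (1 + 3*p1).

pi = (0.5, 0.5)
q = ((0.75, 0.25, 0.0),   # q1 over (m1, m2, m3)
     (0.0,  0.50, 0.5))   # q2

def p1(m):
    """Posterior probability of type 1 after message m (Bayes' Rule)."""
    num = q[0][m] * pi[0]
    return num / (num + q[1][m] * pi[1])

def best_speed(post1):
    return 20 / (1 + 3 * post1)

print([round(best_speed(p1(m)), 6) for m in range(3)])  # [5.0, 10.0, 20.0]
```

Under this assumption the beliefs are p(m1) = (1, 0), p(m2) = (⅓, ⅔), p(m3) = (0, 1), and they support exactly the announced speeds 5, 10 and 20.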


Example (Cont.)

    • So (q,p,s,w|M) is incentive feasible (though, as it turns out, not incentive efficient).

    • Each of the three chosen speeds s is implemented with positive probability. This is not possible under a direct mechanism T = {t1, t2}!

      • Why? Since the principal’s payoffs are strictly concave in s, at most 2 different speeds could be supported in a PBE.

      • Conjecture: for every m ∈ M, does there exist a distinct y ∈ Y that is supported in a PBE?

      • Note that the choice of w placed no restrictions on the choice of s.


Some Comments

    • There is a connection between |M| and the supportable y (s in the example). This is partly due to the generality of the PBE conditions: there are many equilibria.

      • The principal has flexibility: M can be any metric space. He can also choose beliefs that will support a choice of y and x(M).

      • If we focus on incentive efficiency rather than incentive feasibility, we get tighter restrictions.

    • In the example, there is redundancy in the message space. Consider a message’s effect on s: the mixture ⅓ m3 + ⅔ m1 yields s = 10 in expectation, the same s as sending m2

      Messages are a means to distinguish types. The ideal situation for the principal: choose M such that each mi ∈ M is associated with a type ti


Proposition 1 Implications

    • Support of q’ contains at most |T| messages

    • p and q’ are consistent with Bayesian Updating

    • Since replacing q with q’ does not change principal’s belief, y() remains optimal

    • Principal’s expected payoff unchanged

    • q’ is optimal strategy for agent

      • Same expected payoff for agent using q or q’

      • Agent indifferent between all messages selected with positive probability


A New Revelation Principle

    • Linear independence of the q′(mh) allows us to apply the marriage theorem: there exists a one-to-one mapping from M′ into T.

    • Proposition 2: If (q,p,y,x|M) is incentive efficient, then there exists a direct mechanism d = (T, x*) and an incentive feasible (q*,p*,y*,x*|T) such that (q*,p*,y*,x*|T) and (q,p,y,x|M) are payoff equivalent. Moreover, qt*(t) > 0 for all t ∈ T.

    • Differences compared to the Revelation Principle? This one applies only to incentive efficient allocations (rather than incentive feasible ones), and the agent does not reveal his type with certainty; rather, revealing his true type is an optimal strategy which he chooses with positive probability.


Optimal Contracts

    • With Proposition 2, we can formulate the principal’s problem as a standard programming problem with z = (x, y).

    • Maximize ∑t∈T ∑t′∈T πt qt(t′) Vt(z(t′)) subject to

      • Ut(z(t)) ≥ Ut(z(t′)) ∀ t, t′ ∈ T (incentive compatibility)

      • Ut(z(t)) ≥ Ut0 ∀ t ∈ T (individual rationality)

      • [Ut(z(t)) − Ut(z(t′))] qt(t′) = 0 ∀ t, t′ ∈ T

      • y(t) ∈ argmax y∈F(x(t)) ∑t′∈T pt′(t) Vt′(x(t), y) ∀ t ∈ T (sequential optimality)

      • pt(t′) = qt(t′) πt / ∑t′′∈T qt′′(t′) πt′′ ∀ t, t′ ∈ T (Bayesian consistency)


Some Comments

    • Constraint 3: If it is optimal for the principal to induce the agent to misrepresent his type, the agent must be made indifferent between reporting the different types.

      • It is possible that in the solution there is a positive probability that an agent misrepresents his type.

    • Finding the optimal contract becomes a computational problem

      • The difficulty lies in determining which constraints are binding at the optimum.

        Apply the program to the example: Constraint 4 implies the principal chooses s(ti) to maximize p1(ti) V1(w, s) + (1 − p1(ti)) V2(w, s)

        Implies: s(ti) = 20 / (1 + 3 p1(ti))

        Constraint 3 is binding for t1, which implies q2(t1) = 0, q2(t2) = 1 and

        U1(w(t2), s(t2)) = U1(w(t1), s(t1)) = 0

        Check that the other constraints are satisfied: use constraint 5 to determine beliefs, then solve the principal’s maximization problem.

        Solution: s(t1)=5, s(t2)=10, w(t1)=5, w(t2)=20
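This reported optimum can be checked directly against the constraints, assuming equal priors and a reservation payoff Ut0 = 0 (neither is stated in the deck):

```python
# Verify the reported optimum s = (5, 10), w = (5, 20), ASSUMING equal
# priors and U0 = 0. Type 1 must be indifferent between reports
# (constraint 3), type 2 must prefer truth-telling (constraint 1), and
# each speed must satisfy s = 20/(1 + 3*p1) (constraint 4), with
# p1(t2) = 1/3 generated by type 1 mixing 50/50 over reports.

U1 = lambda w, s: w - s**2 / 5
U2 = lambda w, s: w - s**2 / 6

s, w = (5, 10), (5, 20)

# Constraint 3 (indifference) and individual rationality for type 1:
assert U1(w[0], s[0]) == U1(w[1], s[1]) == 0

# Incentive compatibility: type 2 strictly prefers reporting t2:
assert U2(w[1], s[1]) > U2(w[0], s[0])

# Sequential optimality: p1(t1) = 1 and p1(t2) = 1/3 give s = 20/(1+3*p1):
assert s[0] == 20 / (1 + 3 * 1)
assert s[1] == 20 / (1 + 3 * (1 / 3))
print("all constraints check out")
```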


Applications

    • Multi-stage contracting

      • The contract binds for the first period, then gets renegotiated in the next period. In the 2nd period the agent may insist on the original contract, so the first-period contract x imposes a restriction on the 2nd-period choice y (the assumption y(m) ∈ F(x(m)) can be used).

      • With more periods, at each period the principal can commit to the decision in that period, but not to those in future periods. Future periods are discounted.

      • Use dynamic programming techniques to solve.

      • In the solution, agents gradually reveal information about their type.


Backup


Circumventing the Problem (Technicals)

    • Direct mechanisms may not support some outcomes that indirect mechanisms do

    • Focus on incentive efficient allocations

    • Lemma 1: Let (q,p,y,x|M) be incentive efficient. Then ∃ λ ∈ ℝ|T| such that, for all m in the support of q, ∑t∈T pt(m) Vt(x(m), y(m)) = ∑t∈T pt(m) λt / πt

    • Proposition 1: Let (q,p,y,x|M) be incentive efficient. Then ∃ an incentive feasible (q′,p,y,x|M) and an M′ ⊆ M with |M′| ≤ |T| and ∑t∈T ∑m∈M′ πt qt′(m) = 1 such that (q,p,y,x|M) and (q′,p,y,x|M) are payoff equivalent. Moreover, the vectors q′(m) = [q1′(m), …, qt′(m), …, q|T|′(m)] are linearly independent ∀ m ∈ M′.


Constructing Payoff-Equivalent q′

    • Assume (q,p,y,x|M) is incentive efficient. Lemma 1 is a first-order condition implied by this assumption.

    • Proposition 1 gives us q′ such that (q′,p,y,x|M) is payoff equivalent to (q,p,y,x|M), the support M′ of q′ contains at most |T| elements, and the set of vectors {q′(mh)}, h = 1, …, |M′|, is linearly independent.

      • It rescales q in a clever way (getting rid of redundancies in M) so that the optimality condition and Bayes’ Rule are satisfied and the principal’s beliefs don’t change (which then implies the same choice of y)

      • Why is linear independence important? We have almost gotten to the point where each element of M is associated with a specific type (think of changing to an orthogonal basis)

        - This implies the principal can solve the contracting problem with a message set of dimension |T|


Proposition 1 Technical Intuition

    • |M| > |T| ⇒ the vectors q(m) = [q1(m), …, q|T|(m)] are linearly dependent

    • From Bayes’ Rule: pt(m) = qt(m) πt / ∑t′∈T qt′(m) πt′, so pt(m) q̄(m) = qt(m) πt, where q̄(m) = Prob{message = m} = ∑t′∈T qt′(m) πt′. Summing over messages: ∑m∈M pt(m) q̄(m) = ∑m∈M qt(m) πt = πt. Stacking over types: ∑m∈M p(m) q̄(m) = π

    • Thus π ∈ P = conv({[p1(m), …, p|T|(m)] : m ∈ M})

    • dim(P) ≤ |T| − 1 because ∑t∈T pt(m) = 1

    • Carathéodory’s Theorem: given a set S, for any point p in conv(S) there is a subset S′ ⊆ S with p ∈ conv(S′), |S′| ≤ dim(S) + 1, and the points of S′ affinely independent

    • Carathéodory’s Theorem ⇒ ∑m∈M p(m) α(m) = π has a non-negative solution with at most |T| scalars α(m) > 0

    • Define qt′(m) = α(m) qt(m) / q̄(m). Then ∑m∈M qt′(m) = ∑m∈M α(m) qt(m) / q̄(m) = ∑m∈M α(m) pt(m) / πt = πt / πt = 1 (i.e. it is a valid strategy)
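The construction above can be sketched on the deck's three-message example (equal priors assumed, as elsewhere in these notes): find non-negative weights α supported on at most |T| = 2 messages with ∑m α(m) p(m) = π, then rescale q.

```python
# Sketch of Proposition 1's construction on the deck's example, ASSUMING
# equal priors. We find non-negative weights alpha with
# sum_m alpha(m) * p(m) = pi supported on at most |T| = 2 messages
# (Caratheodory), then rescale: q'_t(m) = alpha(m) * q_t(m) / qbar(m).
from itertools import combinations

pi = [0.5, 0.5]
q = [[0.75, 0.25, 0.0],   # q1 from the example slide
     [0.0,  0.50, 0.5]]   # q2
M = range(3)

qbar = [sum(q[t][m] * pi[t] for t in range(2)) for m in M]      # Prob{m}
p = [[q[t][m] * pi[t] / qbar[m] for t in range(2)] for m in M]  # posteriors

# Search 2-message subsets for non-negative alpha solving the 2x2 system
# alpha_i * p[i] + alpha_j * p[j] = pi (Cramer's rule).
alpha = {}
for i, j in combinations(M, 2):
    det = p[i][0] * p[j][1] - p[j][0] * p[i][1]
    if abs(det) < 1e-12:
        continue  # these two posteriors are not linearly independent
    ai = (pi[0] * p[j][1] - pi[1] * p[j][0]) / det
    aj = (p[i][0] * pi[1] - p[i][1] * pi[0]) / det
    if ai >= 0 and aj >= 0:
        alpha = {i: ai, j: aj}
        break

# Rescaled strategies: each row sums to 1 (a valid strategy) and the
# posteriors on the surviving messages are unchanged.
qp = [[alpha.get(m, 0) * q[t][m] / qbar[m] for m in M] for t in range(2)]
print(qp)  # support shrinks to two messages
```

Under the equal-priors assumption this yields q1′ = (½, ½, 0) and q2′ = (0, 1, 0), i.e. the support collapses onto {m1, m2}, matching (up to relabeling M′ → T) the strategies that reappear in the deck's optimal-contract example.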

