
European Economic Review 33 (1989) 807-823.

North-Holland

ADVERSE SELECTION AND MORAL HAZARD WITH RISK NEUTRAL AGENTS*

Roger GUESNERIE
EHESS, Paris, France

Pierre PICARD
Université de Paris X et CEPREMAP, Paris, France

Patrick REY
INSEE, Paris, France

Received March 1987, final version received September 1988

This paper surveys some recent developments of the literature on adverse selection and moral
hazard in agency problems. It is concerned with the case where both aspects coexist and both
agents are income risk-neutral. It shows that in most cases, the moral hazard aspect does not
entail welfare losses compared to the pure adverse selection case. It moreover analyses different
ways (using a single contract or a family of contracts) of deriving the optimal contracts from the
optimal pure adverse selection solution.

1. Introduction

Consider a standard agency problem. Incentives theory stresses two main classes of difficulties for the design of binding contracts: adverse selection and moral hazard. In another terminology, the relationship between the principal and the agent involves hidden knowledge (asymmetries in knowledge or information existing when the contract is signed determine adverse selection phenomena) and hidden actions (unobservability or non-verifiability of some of the contractants' actions gives rise to moral hazard).

*A first version of this paper was written to be presented at the invited session 'Incentive Theory' of the first meeting of the European Economic Association held in Vienna (August 1986). The purpose of such sessions is to present an overview of recent results obtained in areas of active research. We are grateful to two anonymous referees and one editor of this journal for their constructive criticisms.

0014-2921/89/$3.50 © 1989, Elsevier Science Publishers B.V. (North-Holland)



This paper considers a contracting problem in which both hidden knowledge and hidden actions coexist.
The two pure cases are today reasonably well understood: a synthesis of the literature for the canonical adverse selection problem (with one-sided information) and for the canonical moral hazard problem can be found respectively in Guesnerie and Laffont (1984) and Grossman and Hart (1983). However the design of optimal contracts in a more realistic setting involving both hidden knowledge and hidden actions is not as fully understood. This paper analyses this latter problem under the simplifying assumption that both contractants are risk neutral vis-à-vis income.
In the absence of hidden knowledge, it is well known that under income risk neutrality a franchise contract solves the incentives problems: moral hazard is not effective in this case. Recent contributions have focused attention on the case where hidden knowledge is present in the principal-agent relationship while the income risk-neutrality hypothesis is maintained [Laffont and Tirole (1986), Caillaud, Guesnerie and Rey (1986), Melumad and Reichelstein (1986), Picard (1987), McAfee and McMillan (1987); see also Baron and Holmstrom (1980) for a previous attempt]. The purpose of the present paper is to review in an integrated framework the main ideas which emerged from this research area. Precise statements are provided but proofs are omitted (the interested reader should refer to the papers quoted in reference). The main message, mainly borrowed from Caillaud, Guesnerie and Rey (1986), Melumad and Reichelstein (1986) and Picard (1987), is that, under risk-neutrality, the optimal solution for problems mixing adverse selection and moral hazard does not entail welfare losses when compared to the optimal pure adverse selection contract in which actions (but not characteristics) are fully observable. In particular, there are different ways to derive the optimal contract of the mixed problem from the optimal pure adverse selection solution.
The paper proceeds as follows. Section 2 describes a simple principal-agent model with asymmetric information; it recalls the classical characterization of incentive-compatible contracts when the agent's action choice is observable. We then consider the possibility of inducing the agent to take the same action with the same expected amount of transfer from the principal when the action choice is no longer perfectly observed. If this problem can be solved we say that the perfect observation incentive compatible contract can be implemented under noisy observation. Two different implementation procedures are considered: implementation via a single reward schedule and implementation via a family of reward schedules. Section 3 shows that families of simple (linear or quadratic) schedules can successfully solve the implementation problem. Implementation via a single schedule is examined in section 4. Sufficient conditions for implementation, bearing on the original (perfect observation) contract and the nature of observational disturbances, are exhibited. Also, approximate implementation is defined and shown to be generally possible. Section 5 analyzes a problem with multidimensional action where only one of the action variables is affected by noise. Attention is focused on a class of contracts which have been proposed, as realistic, in the literature. Sufficient conditions which guarantee implementation via such schedules are however restrictive. Lastly, section 6 gathers some concluding remarks.

2. The model
2.1. Preferences and basic assumptions
We consider a standard principal-agent model with one-sided asymmetric information. The characteristics of the agent are associated with a parameter vector θ, unknown to the principal; we assume θ ∈ Θ, where Θ is a connected subset of ℝ^K. The agent's actions are described by a non-verifiable vector l, l ∈ ℝ^L.

However it will be assumed that a vector l̂, correlated with l, is publicly observed (and is verifiable). More precisely we assume here that the noise is additive, i.e. l̂ = l − ε, where ε is a random vector of zero mean, the distribution of which is associated with a continuous density function ν(ε).¹

Both utility functions are quasi-linear in t, the amount of money transferred to the agent: i.e. both are risk-neutral vis-à-vis income. The utility functions of the principal and the agent are respectively written as W(l, θ) − t and V(l, θ) + t. There are two possible interpretations of the function W. Either the principal's utility depends upon the agent's actions l, or it depends upon the results l̂. In this latter case, W should be defined by W(l, θ) = E_ε w(l − ε, θ), where w defines the principal's welfare as a function of results. Accordingly the statements that will be presented can be interpreted in two different ways; in the following our comments stick to the first interpretation, i.e. to the case where the utility function W(l, θ) is the intrinsic adverse selection utility function. We leave it to the reader to comment on the results with the other interpretation in mind.²

2.2. The case where l is observable


Several definitions can now be presented.

¹The fact that ε is uncorrelated with θ is more important than additivity. Correlation between ε and θ, which opens new possibilities for self-selection, is likely to be helpful to the principal.
²However risk neutrality vis-à-vis the actions, obtained for example when W(l, θ) = b(θ)·l, will reconcile the two interpretations.

Definition 1. A direct incentive compatible mechanism (DICM) is a couple of functions l̄: θ ∈ Θ → l̄(θ) ∈ ℝ^L and t̄: θ ∈ Θ → t̄(θ) ∈ ℝ such that

θ ∈ argmax_{θ′ ∈ Θ} { V(l̄(θ′), θ) + t̄(θ′) }  for all θ in Θ.  (1)

According to the revelation principle there is no loss of generality, when l is fully observable, in identifying the set of DICMs with the set of feasible contracts.
The next proposition states the taxation principle [see Hammond (1979)
or Guesnerie (1981)] according to which a DICM can be associated with a
(non-linear) transfer schedule.

Proposition 1. (l̄, t̄): θ ∈ Θ → ℝ^{L+1} is a DICM if and only if there exists a mapping φ: l ∈ ℝ^L → ℝ such that for all θ in Θ

l̄(θ) ∈ argmax_l { V(l, θ) + φ(l) },  (2)

t̄(θ) = φ(l̄(θ)).

In the following, such a mapping φ will be called a (l̄, t̄)-associated schedule.

Note that for a given DICM there may exist many (an infinity of) associated schedules, since there are usually many ways to complete the function φ outside the set l̄(Θ). In the following, our assumptions will bear on the associated schedule rather than directly on the DICM.

2.3. The problem when only l̂ = l − ε is observable

The question raised in this paper can now be stated: when and how is it the case that a pure adverse selection contract (with observable l) can be implemented under noisy observation of l? Two different notions of implementability can be defined.

2.3.1. Implementation via a family of reward schedules


Definition 2. The DICM (l̄, t̄) is implementable via the family of reward schedules Λ: θ ∈ Θ, l̂ ∈ ℝ^L → Λ(θ, l̂) ∈ ℝ if and only if for all θ in Θ

(θ, l̄(θ)) ∈ argmax_{(θ′, l)} { V(l, θ) + E_ε Λ(θ′, l − ε) },  (3)

t̄(θ) = E_ε Λ(θ, l̄(θ) − ε),  (4)

where E_ε Λ(θ, l − ε) ≡ ∫ Λ(θ, l − ε) ν(ε) dε.

Here, the principal offers a menu of (non-linear) compensation functions which are indexed by the characteristics parameter θ. Note that there is no loss of generality (from the revelation principle) in indexing the contracts by θ. The agent's choice can be viewed as a two-step procedure: in a first step, he announces the value of his parameter (equivalently, he picks a schedule in the menu offered by the principal) and, in a second step, he selects his preferred action. The DICM (l̄, t̄) is thus said to be implementable via the family Λ if three conditions are fulfilled. First, the agent announces his true parameter (again refer to the revelation principle). Secondly, the agent of type θ selects the action l̄(θ), and lastly his expected compensation equals t̄(θ). Note that the implementability of an optimal DICM implies that no welfare loss results from imperfect observability of actions.

2.3.2. Implementability via a single reward schedule


Definition 3. The DICM (l̄, t̄) is implementable via the single reward schedule ψ: ℝ^L → ℝ if there exists φ, a schedule associated with (l̄, t̄), such that

φ(l) = E_ε ψ(l − ε)  ∀ l ∈ ℝ^L,  (5)

or equivalently

φ(l) = ∫ ψ(l − ε) ν(ε) dε  ∀ l ∈ ℝ^L.

For the sake of brevity we will sometimes say 'ψ implements φ' instead of 'ψ implements the DICM with which φ is associated'.

Condition (5) means that there exists a reward scheme ψ (as a function of observations) which gives the same expected reward to the agent (as a function of his action) as the schedule φ associated with (l̄, t̄). Consequently the risk-neutral agent will take the same actions when faced with ψ and noisy observations as when faced with φ and perfect observations. In still other words, the introduction of noise into the original adverse selection problem does not decrease expected social welfare even if one uses a single schedule. Note finally that eq. (5) is a particular case of the central equation of Melumad and Reichelstein (1986), where the distribution of ε does not depend on the action l and the characteristic θ.

3. Implementation via a family of reward schedules


This section gathers results which can be found in Picard (1987). First consider the case where the support Ω of the random vector ε is bounded. There is then a positive probability of detecting unambiguously that the agent of type θ has picked an action different from l̄(θ). Consequently, using penalties which are high enough, no welfare loss will result from the imperfect observability of actions. Formally, we have:

Proposition 2.³ If Ω is bounded, any DICM (l̄, t̄) is implementable via a family of discontinuous schedules Λ₁ defined by

Λ₁(θ, l̂) = t̄(θ)  if l̄(θ) − l̂ ∈ Ω,
Λ₁(θ, l̂) = −A  otherwise,  (6)

where A is a large enough parameter.
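As a minimal numerical sketch (the uniform noise and all parameter values are illustrative assumptions, not taken from the paper), one can check that the schedule Λ₁ of eq. (6) makes any detectable deviation from l̄(θ) unprofitable once A is large:

```python
import numpy as np

rng = np.random.default_rng(0)

a = 0.5                      # noise support: Omega = [-a, a] (illustrative)
t_bar, l_bar = 1.0, 2.0      # transfer and action prescribed for some type theta
A = 50.0                     # penalty level

def expected_reward(l, n=200_000):
    """Monte Carlo estimate of E_eps Lambda_1(theta, l - eps) from eq. (6)."""
    eps = rng.uniform(-a, a, n)
    l_hat = l - eps
    inside = np.abs(l_bar - l_hat) <= a      # l_bar - l_hat in Omega
    return np.mean(np.where(inside, t_bar, -A))

# At the prescribed action the agent is never penalized...
assert expected_reward(l_bar) == t_bar
# ...while any detectable deviation is heavily punished when A is large.
for dev in [0.2, 0.5, 1.0]:
    assert expected_reward(l_bar + dev) < t_bar
```

As A grows, the expected reward from any deviation detected with positive probability tends to minus infinity, which is the force behind Proposition 2.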

In that case, the principal is supposed to know exactly the set Ω in order to define the function Λ₁. As we shall see now in the one-dimensional case, linear or quadratic transfer schedules are less informationally demanding.

Assume L = 1, V is differentiable and ∂V/∂l < 0. Consider a DICM (l̄, t̄) and a corresponding associated schedule φ(l). Assume that φ is differentiable. Since V(l, θ) + φ(l) is maximum at l = l̄(θ), we have (dφ/dl)(l̄(θ)) = −(∂V/∂l)(l̄(θ), θ) > 0, so that the function φ is necessarily increasing over l̄(Θ).

Let us first assume that φ is convex over l̄(Θ). Let Λ₂(θ, l̂) be the linear approximation of the function φ at point (l̄(θ), t̄(θ)), that is

Λ₂(θ, l̂) = t̄(θ) − (∂V/∂l)(l̄(θ), θ)(l̂ − l̄(θ)).  (7)

Since E_ε Λ₂(θ, l − ε) = Λ₂(θ, l), one easily checks that conditions (3) and (4) are satisfied for the family Λ₂. Clearly the argument is still valid for L > 1.

Proposition 3. Any DICM (l̄, t̄) associated with a convex differentiable⁴ schedule φ(l) is implementable via a family of linear schedules Λ₂ with

Λ₂(θ, l̂) = t̄(θ) − (∂V/∂l)(l̄(θ), θ)(l̂ − l̄(θ)).

Proposition 3 is illustrated in fig. 1 for L = 1. We let ū(θ) ≡ V(l̄(θ), θ) + t̄(θ).
³It is well known since Mirrlees (1975) that when the random noise has no compact support but its likelihood ratio has adequate properties, high penalty-low probability schedules can implement the first best contract in a pure moral hazard problem. Proposition 2 could be extended in a similar direction. However these schedules, which are infrequent in practice, are subject to severe theoretical limitations: for example, enforcement problems [see Caillaud, Guesnerie and Rey (1986)], incomplete knowledge of the probability distribution of the noise, non-robustness to incomplete individual knowledge of one's characteristics [see Macho (1987)]. From now on we shall not consider such schedules any longer.
⁴Note that if the agent's utility function is differentiable, a convex associated schedule (if any) can be chosen differentiable.

Fig. 1. (ū(θ) ≡ V(l̄(θ), θ) + t̄(θ).)

The locus V(l, θ) + t = ū(θ) is thus the type-θ agent's indifference curve associated with (l̄(θ), t̄(θ)).

Remarks. When the assumptions of Proposition 3 are satisfied, the family of linear schedules is a universal one, in the sense that it allows implementation of the DICM for any distribution of the noise. When L = 1, θ is a one-dimensional parameter and Θ is an interval, assuming ∂²V/∂l∂θ ≤ 0, it can be shown that a differentiable contract (l̄, t̄) is a DICM if and only if⁵

(dt̄/dθ)(θ) = −(∂V/∂l)(l̄(θ), θ)(dl̄/dθ)(θ) and (dl̄/dθ)(θ) ≤ 0  (8)

for all θ in Θ. In this case, the associated schedule φ(l) is convex if

[(∂²V/∂l²)(l̄(θ), θ)(dl̄/dθ)(θ) + (∂²V/∂l∂θ)(l̄(θ), θ)](dl̄/dθ)(θ) ≤ 0

for all θ in Θ, which provides a condition for a DICM to be implementable via the family Λ₂.

Furthermore, in this one-dimensional case, interpreting l as the agent's

⁵See Guesnerie and Laffont (1984). Here we restrict ourselves to continuous, piecewise differentiable functions l̄ and t̄.

Fig. 2.

performance (∂V/∂l < 0) and θ as a cost parameter (∂V/∂θ < 0), the family Λ₂ can be interpreted as a bonus-penalty reward scheme,⁶ including a fixed fee t̄(θ), which decreases with the cost parameter θ, and a variable transfer −(∂V/∂l)(l̄(θ), θ)(l̂ − l̄(θ)). This variable transfer depends on the difference between observed and expected actions (l̂ − l̄(θ)), and the proportionality coefficient −(∂V/∂l)(l̄(θ), θ) is higher for low cost agents than for high cost agents.
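The argument rests on the fact that expectation passes through a linear schedule unchanged: E_ε Λ₂(θ, l − ε) = Λ₂(θ, l) for any zero-mean ε. A short sketch with an illustrative utility V(l, θ) = −θl²/2 (a hypothetical choice, not one from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative primitives: V(l, theta) = -theta * l**2 / 2, so dV/dl = -theta*l,
# and the slope of Lambda_2 at l_bar is -dV/dl(l_bar, theta) = theta * l_bar.
theta, l_bar, t_bar = 2.0, 1.5, 1.0

def lambda2(l_hat):
    """Linear schedule of eq. (7): t_bar - dV/dl(l_bar, theta) * (l_hat - l_bar)."""
    slope = theta * l_bar
    return t_bar + slope * (l_hat - l_bar)

# Zero-mean noise of any shape leaves the expected reward unchanged:
for eps in [rng.normal(0, 0.3, 500_000), rng.uniform(-1, 1, 500_000)]:
    l = 2.0                                   # any action the agent considers
    assert abs(lambda2(l - eps).mean() - lambda2(l)) < 1e-2
```

Since the expected reward as a function of the action coincides with the perfect-observation reward, the agent's choice is unaffected by the noise, which is exactly why conditions (3) and (4) hold.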
Consider now the case where the function φ(l) may not be convex and L = 1. Let Λ̃₃ be defined as

Λ̃₃(θ, l̂) = −γ(l̂ − l̄(θ))² + Λ₂(θ, l̂),  (9)

with γ > 0.⁷ Functions Λ̃₃(θ, ·) and φ(·) are tangent at point l̄(θ) and we have Λ̃₃(θ, l) ≤ φ(l) for all l if the parameter γ is high enough. In this case, we have

(θ, l̄(θ)) ∈ argmax_{(θ′, l)} { V(l, θ) + Λ̃₃(θ′, l) }  for all θ in Θ.

Let Λ₃(θ, l̂) = Λ̃₃(θ, l̂) + γσ², with σ² = var(ε). We have E_ε Λ₃(θ, l − ε) = Λ̃₃(θ, l) and conditions (3)-(4) are satisfied for the family Λ₃.

⁶See Laffont and Tirole (1986) for an application to the control of regulated firms.
⁷γ could a priori depend upon the characteristic θ. This extension is however useless when the set of possible characteristics is compact, as it suffices then to choose γ = max_θ γ(θ).

Fig. 3. (l̄(θ) = l̄(θ′).)

Extending the result to the case L > 1, we have

Proposition 4. Any DICM (l̄, t̄) associated with a differentiable schedule φ(l) is implementable via a family of quadratic schedules Λ₃ with

Λ₃(θ, l̂) = −γ Σ_{h=1}^{L} [(l̂_h − l̄_h(θ))² − σ_h²] − Σ_{h=1}^{L} (l̂_h − l̄_h(θ))(∂V/∂l_h)(l̄(θ), θ) + t̄(θ),  (10)

where γ is a non-negative parameter and ε = (ε_h), σ_h² = var(ε_h), h = 1, ..., L.

Remarks. (1) It can be observed that only the variances σ_h² are required to define the optimal family Λ₃: this result will contrast sharply with implementability via a single transfer schedule, examined in the next section, where knowledge of the whole probability distribution of ε will be needed.
(2) Differentiability of the associated schedule φ has been assumed in Propositions 3 and 4. Non-differentiability would correspond to a bunching phenomenon for the DICM (l̄, t̄) (as in fig. 3, where agents θ and θ′ choose the same l). In that case, approximating φ with an increasing differentiable function would allow us to implement a DICM which is close to (l̄, t̄).
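The variance correction behind Λ₃ can be verified directly: for zero-mean noise, E_ε[(l − ε − l̄(θ))²] = (l − l̄(θ))² + σ², so adding γσ² exactly cancels the expected loss on the quadratic term. A numerical sketch with illustrative parameters (none taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

gamma, l_bar, slope, t_bar = 3.0, 1.0, 2.0, 0.5   # illustrative parameters
sigma = 0.4
eps = rng.normal(0.0, sigma, 1_000_000)           # zero-mean noise, var = sigma^2

def lambda3_tilde(l):
    """tilde-Lambda_3 of eq. (9): quadratic penalty plus the linear part Lambda_2."""
    return -gamma * (l - l_bar) ** 2 + slope * (l - l_bar) + t_bar

def lambda3(l_hat):
    """Lambda_3 = tilde-Lambda_3 plus the variance correction gamma * sigma^2."""
    return lambda3_tilde(l_hat) + gamma * sigma ** 2

# E_eps Lambda_3(l - eps) equals tilde-Lambda_3(l) for any action l.
for l in [0.0, 1.0, 2.5]:
    assert abs(lambda3(l - eps).mean() - lambda3_tilde(l)) < 1e-2
```

Only var(ε) enters the correction, which is the content of Remark (1) above.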

4. Implementation via a single reward schedule


We now turn to the analysis of implementation via a single reward
schedule (see Definition 3 above). Most results presented in this section are
borrowed from Caillaud, Guesnerie and Rey (1986). The more general framework of Melumad and Reichelstein (1986), as well as some of their results, is also evoked.
The analysis of a simple case, where the disturbance is uniformly
distributed over a compact set, first provides some intuition; we then stress
some conditions which warrant implementability in the one-dimensional case
(L= 1). We conclude with some remarks about the multidimensional case
and approximate solutions.

4.1. An elementary example

Let us suppose L = 1 and consider the implementation of a DICM associated with a differentiable schedule φ: ℝ → ℝ, the support of which is included in some interval [α, β].
Let us assume that the random disturbance ε is uniformly distributed over a compact interval [−a, +a]. Remember that in this case, it is possible to implement any DICM by a family of discontinuous schedules (see Proposition 2). As we will see, it is also possible to construct a single non-linear schedule ψ which, given the disturbance ε, implements φ: such a schedule ψ must satisfy the following condition, which directly derives from (5),

φ(l) = (1/2a) ∫_{l−a}^{l+a} ψ(u) du,  (11)

which gives

φ′(l) = (1/2a) [ψ(l + a) − ψ(l − a)].  (12)

Eq. (12) enables us to construct point by point a particular solution: take ψ₀ equal to zero over ]−∞, α + a[ and then, for any l ∈ [α + a, +∞[, define ψ₀ in order for it to satisfy (12):

ψ₀(l) = ψ₀(l − 2a) + 2a φ′(l − a),  (13)

so that

ψ₀(l) = 2a Σ_{k≥0} φ′(l − (2k + 1)a).  (14)

ψ₀ is a particular solution of (12) and any solution can then be obtained as the sum of this solution ψ₀ and a periodical function P of period 2a, such that ∫_{−a}^{+a} P(l) dl = 0.
The above construction deserves some remarks:

(i) This solution makes sense only if φ is differentiable. In any case the schedule ψ is less regular (differentiable) than the original one.
(ii) The constructive approach uses the rough nature of the disturbance (the discontinuity of its density at ε = −a, +a), which permits the point-by-point determination of the solution ψ. It can be generalized to any step-density function.
(iii) There exists an infinity of solutions. Actually, given a DICM defined over some compact set Θ, there exists a priori an infinity of associated schedules φ verifying φ = 0 outside a sufficiently large interval of ℝ, which themselves can be implemented by an infinity of non-linear schedules. Thus, in a sense, the problem of implementing a DICM defined over the compact set Θ has a double infinity of solutions.
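The point-by-point construction of eqs. (13)-(14) is easy to carry out numerically. The sketch below uses a hypothetical hump-shaped schedule φ supported on [1, 3] (an illustrative choice, not from the paper) and checks that averaging the constructed ψ₀ against the uniform noise recovers φ, i.e. that eq. (11) holds:

```python
import numpy as np

a = 0.3                                    # noise: uniform on [-a, +a]
alpha, beta = 1.0, 3.0                     # support of phi (illustrative)

def phi(l):
    """An illustrative C^1 schedule supported on [alpha, beta] = [1, 3]."""
    x = np.clip(1.0 - (l - 2.0) ** 2, 0.0, None)
    return x ** 2

def dphi(l):
    x = 1.0 - (l - 2.0) ** 2
    return np.where(np.abs(l - 2.0) < 1.0, -4.0 * (l - 2.0) * x, 0.0)

def psi0(l):
    """Particular solution of eq. (14): psi0(l) = 2a * sum_k phi'(l - (2k+1)a)."""
    l = np.asarray(l, dtype=float)
    total = np.zeros_like(l)
    k = 0
    while True:
        arg = l - (2 * k + 1) * a
        if np.all(arg < alpha):            # phi' vanishes below alpha: finite sum
            break
        total += dphi(arg)
        k += 1
    return 2 * a * total

# Check eq. (11): (1/2a) * integral of psi0 over [l-a, l+a] reproduces phi(l).
u = np.linspace(-1.0, 6.0, 70_001)
psi_u = psi0(u)
du = u[1] - u[0]
for l in [0.5, 1.5, 2.0, 2.7, 3.5]:
    mask = (u >= l - a) & (u <= l + a)
    expected = psi_u[mask].sum() * du / (2 * a)
    assert abs(expected - phi(l)) < 1e-2
```

Note that ψ₀ is nonzero on part of [β, β + 2a] even though φ vanishes there; only its local average against the noise matters, as remark (i) above suggests.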

4.2. The one-dimensional case: General results


The previous example shows that 'regularity' of the schedule φ is a favourable factor for its implementation. We gather here some results which to some extent confirm this first intuition [see Caillaud, Guesnerie and Rey (1986)].

Proposition 5. A schedule φ can be implemented via a single reward schedule as soon as one of the following conditions is satisfied:
(i) The schedule φ is a polynomial of degree n and the disturbance has moments of order up to n.
(ii) Both the schedule φ and the disturbance distribution have compact support.
(iii) The schedule φ and the disturbance distribution ν admit Fourier transforms (φ̂ and ν̂) such that φ̂/ν̂ exists and admits a reciprocal Fourier transform.

Note that condition (iii) is fulfilled, for example, if the disturbance has a normal distribution and the schedule φ can be analytically extended over a compact set and is smaller than an exponential, or if ν is of the form ν(ε) = (α/2)e^{−α|ε|} and the schedule has derivatives up to the fourth order, belonging to L².
Let us now comment briefly on the results and proofs of the proposition. In case (i), the polynomial schedule may actually be implemented by another polynomial ψ of the same degree n, the coefficients of which depend upon the coefficients of φ and the n first moments of ν. In case (ii), the proof derives from the analysis of the previous example, using step-density functions and a limit argument. In the last case (iii), the solution is directly given by ψ = (φ̂/ν̂)ˇ, where ˇ designates the reciprocal Fourier transform.

Proposition 5 emphasizes the crucial role of the regularity of φ. Intuitively, note that a differentiable reward schedule φ tends to implement itself when the disturbance becomes extremely weak; on the contrary, if there is a kink in the initial reward function, then even with small risks the kink will be smoothed, and the choices of the corresponding agents will change. The problem is of course even more drastic if the schedule φ is discontinuous.
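Case (i) can be made concrete: matching the coefficients of E_ε ψ(l − ε) with those of φ yields a triangular linear system in the coefficients of ψ, involving only the first n moments of ν. A sketch under illustrative assumptions (a cubic φ and centred normal noise, both hypothetical choices):

```python
import numpy as np
from math import comb

rng = np.random.default_rng(3)

def implementing_poly(c, noise_moments):
    """Coefficients b of psi (same degree) with E_eps psi(l - eps) = sum_m c[m] l^m.
    noise_moments[k] = E[eps^k]; solves the triangular system of case (i)."""
    n = len(c) - 1
    M = np.zeros((n + 1, n + 1))
    for j in range(n + 1):
        for m in range(j + 1):
            # E[(l - eps)^j] = sum_m C(j, m) l^m E[(-eps)^(j-m)]
            M[m, j] = comb(j, m) * (-1) ** (j - m) * noise_moments[j - m]
    return np.linalg.solve(M, np.asarray(c, dtype=float))

# phi(l) = 1 + 2l + 3l^2 + 0.5l^3 (illustrative), Gaussian noise, sigma = 0.7.
c = [1.0, 2.0, 3.0, 0.5]
sigma = 0.7
moments = [1.0, 0.0, sigma**2, 0.0]        # E[eps^k] for a centred normal
b = implementing_poly(c, moments)

eps = rng.normal(0.0, sigma, 2_000_000)
for l in [-1.0, 0.0, 2.0]:
    phi_l = sum(ck * l**k for k, ck in enumerate(c))
    assert abs(np.polyval(b[::-1], l - eps).mean() - phi_l) < 2e-2
```

The system is upper triangular with unit diagonal, so ψ always exists and has the same degree as φ, as stated in the proposition.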

4.3. Further remarks


Let us come to the approximability problem: when the associated schedule is not differentiable, it can generally be approximated by differentiable ones, in such a way that most agents' decisions are only slightly affected. Is it therefore possible to approximately implement the schedule φ? The answer is positive if the disturbance has a bounded support; if it is not the case, then the issue depends on whether the set of a priori available actions (which until now has been supposed to be ℝ^L) is compact or not: on a compact set a continuous reward schedule can be approximated as closely as desired by a polynomial and conclusion (i) of Proposition 5 applies. However if the set is not compact the reasoning breaks down.
Lastly, let us note that most of the qualitative results of this section can be
generalized to the multidimensional case. In particular:

Proposition 6. Suppose either that:
(i) The schedule φ is a polynomial of degree n in l ∈ ℝ^L, and the disturbance distribution ν has moments of order up to n.
(ii) The schedule φ has derivatives of order greater than three, which belong to L², and the disturbance is an L-dimensional normal distribution.
Then the schedule φ is implementable via a single reward schedule.

Melumad and Reichelstein (1986) considered the more general case where the noise is not additive (i.e. the probability distribution of ε is conditioned by l). Eq. (5) is then replaced by

φ(l) = ∫ ψ(l̂) dF(l̂ | l),  (5′)

where F(l̂ | l) denotes the conditional probability distribution of l̂.

Melumad and Reichelstein characterize the set of probability distributions F(· | ·) such that there are approximate solutions, in the sense that for any associated schedule φ, there exist functions ψ and φ̃, with φ̃ uniformly close to φ, such that

φ̃(l) = ∫ ψ(l̂) dF(l̂ | l).

In particular they show that an approximate solution exists for normal, log-normal, Beta and Gamma distributions, if l shifts the distribution in the sense of first order stochastic dominance.
The next section analyses more carefully the multidimensional case where
the disturbance only affects one of the coordinates.

5. Implementation when only one variable is affected by noise

In this section [which relies mainly on Caillaud, Guesnerie and Rey (1986)] we assume that l has two components. The second one, l₂, is perfectly observed and the first one, l₁, is noisy: l̂₁ = l₁ − ε. We also restrict attention to the case where θ is a one-dimensional parameter and we assume l̄₂(θ) ≠ l̄₂(θ′) if θ ≠ θ′.
In that case, implementing a DICM through a single reward schedule ψ(l̂₁, l₂) amounts to offering a menu of one-variable reward schedules ψ_{l₂}(·) = ψ(·, l₂). It should first be noticed that this way of implementing a DICM differs from implementation via a family of reward schedules as studied in section 3. In the present section, the agent's utility does depend on the index of the selected reward schedule (i.e. l₂), which is an action variable rather than an abstract announcement. Even for DICMs involving a one-to-one correspondence between θ and l₂, the problem considered here introduces an additional complexity as well as additional freedom in defining reward schedules. Furthermore, indexing reward schedules by an observable action rather than by a type parameter allows a closer approximation to the way contracts are often formulated in the real world. An example of such an incentive mechanism is developed by Laffont and Tirole (1986) for the regulatory control of firms: assuming that the level of production is perfectly observable and that the observation of the unit cost is noisy, Laffont and Tirole consider (linear) cost-sharing rules, indexed by the produced quantity.
As in this latter example, it is tempting to focus on reward schedules ψ(l̂₁, l₂) which correspond to interpretable functions ψ_{l₂}(·). By analogy with our analysis in section 3, given a DICM (l̄, t̄), we can define knife-edged reward schedules ψ₁, ruled (or truncated ruled) reward schedules ψ₂ and quadratic-in-section reward schedules ψ₃ as follows (A and γ are positive parameters):

Knife-edged reward schedule

• if ∃ θ ∈ Θ such that l₂ = l̄₂(θ) and l̄₁(θ) − l̂₁ ∈ Ω, then ψ₁(l̂₁, l₂) = t̄(θ);  (15)
• otherwise, ψ₁(l̂₁, l₂) = −A.

Ruled reward schedule and truncated ruled schedule

• if ∃ θ ∈ Θ such that l₂ = l̄₂(θ), then
ψ₂(l̂₁, l₂) = t̄(θ) − (l̂₁ − l̄₁(θ))(∂V/∂l₁)(l̄(θ), θ);  (16)
• otherwise, ψ₂(l̂₁, l₂) = −A.

For truncated ruled reward schedules the definition is the same, but the linear relationship is restricted to l̂₁ belonging to a neighbourhood of l̄₁(θ).

Quadratic-in-section reward schedule

• if ∃ θ ∈ Θ such that l₂ = l̄₂(θ), then
ψ₃(l̂₁, l₂) = t̄(θ) − (l̂₁ − l̄₁(θ))(∂V/∂l₁)(l̄(θ), θ) − γ(l̂₁ − l̄₁(θ))²;  (17)
• otherwise, ψ₃(l̂₁, l₂) = −A.

The above formulae define reward schedules based on the observable variables l̂₁ and l₂. A crucial question is whether these schedules, when considered in the framework of the non-noisy adverse selection problem, are associated schedules in the sense defined in Proposition 1.

We show that (i) if the answer to the previous question is positive, then the schedules under consideration have strong implementability properties: in particular they provide reward schedules robust to the distribution of the garbling noise. We will then (ii) consider cases where the answer is indeed positive.
(i) An argument similar to that of Proposition 2 would allow us to show that the knife-edged reward schedule ψ₁ implements the DICM whenever the noise ε has a bounded support. Secondly, one checks immediately that the ruled reward schedule ψ₂ implements the DICM (for any distribution of the random noise) if and only if it is an associated schedule. Likewise, if there exists an associated schedule which is quadratic with respect to l₁, then there exists a quadratic-in-section reward schedule ψ₃ which implements the DICM in the presence of noisy observation: this schedule is simply obtained by translating the quadratic associated schedule in a way which depends only on the variance of the random noise. Lastly, if a truncated ruled schedule is an associated schedule, then it implements the DICM for any small noise with bounded support [see Caillaud, Guesnerie and Rey (1986) for details].
These results are summarized in the following proposition.

Proposition 7. Consider a DICM (l̄, t̄):
(1) whenever the noise ε has bounded support, there exists a knife-edged schedule which implements the DICM. Furthermore, if there exists an associated schedule which is:
(2) a ruled schedule: then this ruled schedule implements the DICM, whatever the random noise; it is a universal schedule.
(3) a quadratic-in-section schedule: then there exists a quadratic-in-section schedule, derived from the original one, which implements the DICM; this schedule only depends on the variance of the noise.
(4) a truncated ruled schedule: this schedule then implements the DICM for any small noise ε with bounded support.
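Point (3) can be checked numerically: when the associated schedule is quadratic in l₁, translating it upward by γ·var(ε) restores the perfect-observation expected reward for every action, whatever the distribution of ε. The parameters below are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

gamma, slope, t_bar = 2.0, 1.5, 1.0        # illustrative parameters
l1_bar = 0.8                                # prescribed first action for some type

def assoc(l1):
    """A quadratic-in-section associated schedule (section at l2 = l2_bar(theta))."""
    return t_bar - slope * (l1 - l1_bar) - gamma * (l1 - l1_bar) ** 2

# Only l1 is observed with noise; any zero-mean distribution works.
for eps in [rng.normal(0, 0.25, 1_000_000), rng.uniform(-0.6, 0.6, 1_000_000)]:
    var = eps.var()
    def psi3(l1_hat):
        return assoc(l1_hat) + gamma * var  # translation by gamma * var(eps)
    for l1 in [0.0, 0.8, 2.0]:
        # expected reward matches the perfect-observation schedule
        assert abs(psi3(l1 - eps).mean() - assoc(l1)) < 1e-2
```

Only the variance of ε enters the translation, exactly as stated in Proposition 7(3).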

(ii) The fundamental question is thus the following: under which conditions are the above-defined functions associated schedules?

A first answer is obtained when the agent's preferences do not depend on l₂ [examples can be found in Picard (1987)]. In that case, the following proposition is nothing but a reinterpretation of Propositions 3 and 4.

Proposition 8. If the preferences of the agent are independent of l₂, there exists at least one associated schedule which depends only on l₁; then
(1) if the projection of the associated schedule upon (l₁, t) is differentiable and convex, then the ruled reward schedule ψ₂ is an associated schedule. It thus implements the DICM.
(2) if the projection is differentiable, then there exist associated schedules which are quadratic with respect to l₁; the DICM can thus be implemented through a reward schedule ψ₃.

The ruled schedule can thus only be used to implement a DICM which satisfies very special properties: see Laffont and Tirole (1986) for an example which illustrates the above proposition. Conversely, implementation through quadratic-in-section schedules is much more general.

One could think that the truncated ruled schedule is a good candidate for the implementation of a DICM when the noise associated with the observation is small, since it would allow one to focus on local conditions. However, the next proposition gives a sufficient condition which suggests that implementation through truncated ruled schedules may be limited by rather severe conditions.

Proposition 9. Consider a random variable ε with a small compact support. Assume that the agent's utility function is C². Consider a DICM (l̄, t̄) where l̄, t̄ are C¹ functions on Θ. If for every θ ∈ Θ, argmax_s { t̄(s) + V(l̄(s), θ) } = {θ} and the following inequalities hold:
(1) (∂₁₁V)(·)[(∂₂V)(·)(dl̄₂/dθ) + (∂₁V)(·)(dl̄₁/dθ)] > (∂₁V)²(·), where (·) = (l̄(θ), θ),
(2) (∂₁₁V)(·)(dl̄₁/dθ) + (∂₁₂V)(·)(dl̄₂/dθ) ≠ 0,
then the ruled schedule can be constructed in such a way that it is an associated schedule. Thus, from Proposition 7, if the support of ε is small enough, the DICM can be implemented via a truncated ruled schedule.

Conditions 1 and 2 of Proposition 9 ensure that all θ-agents' indifference curves, for θ close to some θ₀, will locally remain above the ruled schedule (this implies that the section, in the plane l₂ = l̄₂(θ₀), of the lower envelope of the θ-agents' indifference surfaces, is locally convex, and thus above its tangent at point (l̄₁(θ₀), t̄(θ₀))). Indeed these conditions are rather restrictive and emphasize the fact that (truncated) ruled schedules are efficient only in quite peculiar situations (as for example the situation analysed in Laffont and Tirole). Finally, note that when these conditions are not fulfilled, a DICM can nevertheless be implemented, in general, via quadratic-in-section schedules.

6. Concluding remarks
Hidden actions and hidden knowledge coexist in many principal-agent
relationships. However, under a risk-neutrality assumption, the results pre-
sented in this paper show that imperfect observability of actions usually does
not prevent the implementation of a pure adverse selection contract. This
contract can be implemented either via a family of transfer schedules or by
means of a single schedule, but informational requirements are usually
stronger in this latter case.
As shown in the literature on moral hazard, the design of optimal contracts under risk-sharing involves some compromise between risk-aversion and ex-post efficiency. Understanding the logic of optimal incentive contracts when risk-sharing and moral hazard interfere with adverse selection deserves further research.

References
Baron, D. and B. Holmstrom, 1980, The investment banking contract for new issues under asymmetric information: Delegation and the incentive problem, Journal of Finance 35, 1115-1138.
Caillaud, B., R. Guesnerie and P. Rey, 1986, Noisy observation in adverse selection models,
INSEE Discussion paper no. 8802.
Guesnerie, R., 1981, On taxation and incentives: Further reflections on the limits to redistribution, to appear in: Contributions to the theory of taxation, Chap. 1 (Cambridge University Press).
Guesnerie, R. and J.J. Laffont, 1984, A complete solution to a class of principal-agent problems with an application to the control of a self-managed firm, Journal of Public Economics 25, 329-369.
Grossman, S. and O. Hart, 1983, An analysis of the principal-agent problem, Econometrica 51, 7-46.
Hammond, P., 1979, Straightforward individual incentive compatibility in large economies,
Review of Economic Studies 46, 263-282.

Laffont, J.J. and J. Tirole, 1986, Using cost observation to regulate firms, Journal of Political Economy 94, 614-641.
McAfee, R.P. and J. McMillan, 1987, Competition for agency contracts, Rand Journal of Economics 18, 287-307.
Macho, I., 1987, Essai sur la théorie des contrats et des organisations, Thesis in preparation, EHESS, Paris.
Melumad, N. and S. Reichelstein, 1986, Value of communication in agencies, Berkeley
Discussion paper no. 818.
Mirrlees, J.A., 1975, Notes on welfare economics, information and uncertainty, in: M.S. Balch, D.L. McFadden and S.Y. Wu, eds., Essays on economic behavior under uncertainty, 243-257.
Picard, P., 1987, On the design of incentive schemes under moral hazard and adverse selection, Journal of Public Economics 33, 305-331.

