
Notes on social choice and mechanism design

Econ 8104, Spring 2009, Kim Sau Chung


Joseph Steinberg
May 10, 2009
Contents
1 Social choice
  1.1 Social welfare functions
    1.1.1 Social welfare functions with 2 alternatives
    1.1.2 Social welfare functions with at least 3 alternatives
    1.1.3 Single-peaked preferences
  1.2 Social choice functions
2 Bargaining problems
  2.1 Definitions
  2.2 Nash bargaining
  2.3 Other bargaining solutions
    2.3.1 Egalitarian solution
    2.3.2 Utilitarian solution
    2.3.3 Kalai-Smorodinsky solution
  2.4 Bargaining with more than 2 people
3 Mechanism design
  3.1 King Solomon's dilemma
  3.2 Preliminaries
  3.3 Implementation in dominant-strategy equilibrium
  3.4 Quasi-linear preferences and auctions
    3.4.1 Optimal auction design
    3.4.2 Revenue equivalence
    3.4.3 Auctions with two (or more) items
    3.4.4 Vickrey-Clarke-Groves mechanism
    3.4.5 Zero-sum transfers and implementation
  3.5 Bayesian Nash equilibrium implementation
    3.5.1 Zero-sum transfers in BN-implementation (expected externality mechanism)
1 Social choice
1.1 Social welfare functions
Definition 1.1. Let $X$ denote the set of alternatives, i.e., social choices. Let $I$ denote the set of citizens or voters. $\mathcal{R}$ is the set of all rational (complete and transitive) preference relations over $X$, and $\mathcal{P} \subset \mathcal{R}$ is the set of all strict rational preferences. We will use $\mathcal{R}^I$ to denote the $I$-dimensional product space of rational preferences. A vector $\succsim \in \mathcal{R}^I$ is a profile of preferences, one for each voter in $I$.

Definition 1.2. A social welfare function is a map $F : \mathcal{R}^I \to \mathcal{R}$. In other words, for any profile of rational preferences $\succsim \in \mathcal{R}^I$, $F(\succsim)$ is a rational preference relation on $X$. We will write $x \, F(\succsim) \, y$ or $x \succsim_s y$ when we want to say that society prefers $x$ to $y$ under the social preference induced by $F$.
1.1.1 Social welfare functions with 2 alternatives
When $\#X = 2$, we can express any preference $\succsim_i \in \mathcal{R}$ as
$$\alpha_i = \begin{cases} 1 & \text{if } x \succ_i y \\ 0 & \text{if } x \sim_i y \\ -1 & \text{if } y \succ_i x \end{cases}$$
Then we can write $\mathcal{R}$ as $\{-1, 0, 1\}$ and $\mathcal{R}^I$ as $\{-1, 0, 1\}^I$, and a social welfare function $F$ is now a map of the form $F : \{-1, 0, 1\}^I \to \{-1, 0, 1\}$, i.e., $F(\alpha_1, \ldots, \alpha_I) = \alpha_s$.
There are three main properties that we would like F to have when #X = 2.
Definition 1.3 (Anonymity/symmetry). $F$ is anonymous or symmetric if $F(\alpha_1, \ldots, \alpha_I)$ depends only on $N(1) = \#\{i : \alpha_i = 1\}$ and $N(-1) = \#\{i : \alpha_i = -1\}$. In other words, for any onto function $\pi : I \to I$, $F(\alpha_1, \ldots, \alpha_I) = F(\alpha_{\pi(1)}, \ldots, \alpha_{\pi(I)})$.

Definition 1.4 (Neutrality). Given $\alpha = (\alpha_1, \ldots, \alpha_I)$, $F$ is neutral if $F(-\alpha) = -F(\alpha)$.

Definition 1.5 (Positive responsiveness). $F$ is positively responsive if $F(\alpha) \geq 0$, $\alpha' \geq \alpha$, and $\alpha' \neq \alpha$ together imply $F(\alpha') = 1$.
Below are three examples of social welfare functions (when $\#X = 2$), each of which satisfies exactly two of the three properties above (MWG exercise 21.B.2).
Example 1.1 (Not symmetric). Define $F : \{-1,0,1\}^I \to \{-1,0,1\}$ as $F(\alpha) = \mathrm{sign}\big(\sum_{i\in I} i\,\alpha_i\big)$ (the weighted sum of each voter's preference, with each voter's weight equal to his index). Then for any $\alpha \in \{-1,0,1\}^I$,
$$F(-\alpha) = \mathrm{sign}\Big(\sum_{i\in I} i(-\alpha_i)\Big) = -\mathrm{sign}\Big(\sum_{i\in I} i\alpha_i\Big) = -F(\alpha).$$
Thus $F$ is neutral. For any $\alpha$ such that $F(\alpha) \geq 0$, it must be that $\sum_{i\in I} i\alpha_i \geq 0$. Then for any $\alpha' \geq \alpha$ with $\alpha' \neq \alpha$, we must have $\sum_{i\in I} i\alpha'_i > 0$, so $\mathrm{sign}\big(\sum_{i\in I} i\alpha'_i\big) = F(\alpha') = 1$. Thus $F$ is positively responsive. However, $F$ is not symmetric. To see this, let $\alpha = (1, -1, 0, \ldots, 0)$. Then
$$F(\alpha) = \mathrm{sign}\big(1 + 2(-1) + 0 + \cdots + 0\big) = \mathrm{sign}(-1) = -1.$$
Now let $\alpha' = (-1, 1, 0, \ldots, 0)$, i.e., just switch the preferences of the first two voters. Now we have
$$F(\alpha') = \mathrm{sign}\big(-1 + 2(1) + 0 + \cdots + 0\big) = \mathrm{sign}(1) = 1.$$
Since $F(\alpha) \neq F(\alpha')$, $F$ is not symmetric.
Example 1.2 (Not neutral). Let $F(\alpha) = 1$ for all $\alpha \in \{-1,0,1\}^I$. Then for any $\alpha = (\alpha_1, \ldots, \alpha_I)$ and any onto function $\pi : I \to I$, $F(\alpha_{\pi(1)}, \ldots, \alpha_{\pi(I)}) = F(\alpha) = 1$. Thus $F$ is symmetric. For any $\alpha$ such that $F(\alpha) \geq 0$ (in this case the set of such $\alpha$ is all of $\{-1,0,1\}^I$, by the way) and any $\alpha' \geq \alpha$ with $\alpha' \neq \alpha$, $F(\alpha') = 1$. Thus $F$ is positively responsive. However, $F$ is not neutral since for any $\alpha$, $F(-\alpha) = F(\alpha) = 1$.
Example 1.3 (Not positively responsive). Let $F(\alpha) = 0$ for all $\alpha \in \{-1,0,1\}^I$. Then for any $\alpha = (\alpha_1, \ldots, \alpha_I)$ and any onto function $\pi : I \to I$, $F(\alpha_{\pi(1)}, \ldots, \alpha_{\pi(I)}) = F(\alpha) = 0$. Thus $F$ is symmetric. For any $\alpha \in \{-1,0,1\}^I$, $F(-\alpha) = -F(\alpha) = 0$, so $F$ is neutral. However, $F$ is not positively responsive, since for any $\alpha$ such that $F(\alpha) \geq 0$ and any $\alpha' \geq \alpha$ with $\alpha' \neq \alpha$, $F(\alpha') = 0 \neq 1$.
In this special case, majority voting induces a rational social preference over the two alternatives. The next theorem takes this further by showing that any social welfare function that satisfies the three properties above is the majority voting function.

Theorem 1.1 (May's theorem). $F$ satisfies symmetry, neutrality, and positive responsiveness if and only if $F$ is the majority voting social welfare function.
Proof. The proof that majority voting satisfies symmetry, neutrality, and positive responsiveness is straightforward. We can express the majority voting social welfare function as
$$F(\alpha) = \mathrm{sign}\Big(\sum_{i\in I} \alpha_i\Big).$$
Let $\alpha'$ be a rearrangement of $\alpha$, i.e., $\alpha' = (\alpha_{\pi(1)}, \ldots, \alpha_{\pi(I)})$ for some onto function $\pi : I \to I$. Clearly, $F(\alpha) = F(\alpha')$, so $F$ is symmetric. $F$ is neutral since
$$F(-\alpha) = \mathrm{sign}\Big(\sum_{i\in I}(-\alpha_i)\Big) = -\mathrm{sign}\Big(\sum_{i\in I}\alpha_i\Big) = -F(\alpha).$$
Finally, if $F(\alpha) \geq 0$, then $\#\{i : \alpha_i = 1\} \geq \#\{i : \alpha_i = -1\}$. Then for any $\alpha' \geq \alpha$ with $\alpha' \neq \alpha$,
$$\#\{i : \alpha'_i = 1\} - \#\{i : \alpha'_i = -1\} > \#\{i : \alpha_i = 1\} - \#\{i : \alpha_i = -1\} \geq 0,$$
so $F(\alpha') = 1$ and $F$ is positively responsive.

To prove that any $F$ satisfying symmetry, neutrality, and positive responsiveness is the majority voting social welfare function, it is helpful to define the following shorthand:
$$N_\alpha(1) = \#\{i : \alpha_i = 1\}, \qquad N_\alpha(-1) = \#\{i : \alpha_i = -1\}.$$

Step 1: ($N_\alpha(1) = N_\alpha(-1) \Rightarrow F(\alpha) = 0$.) Take $\alpha$ so that $N_\alpha(1) = N_\alpha(-1)$ and let $\alpha' = -\alpha$. Then $N_{\alpha'}(1) = N_\alpha(-1) = N_\alpha(1) = N_{\alpha'}(-1)$, so $\alpha'$ is a rearrangement of $\alpha$. By symmetry we must have $F(\alpha') = F(\alpha)$, and by neutrality we must have $F(\alpha') = -F(\alpha)$. Thus $F(\alpha) = -F(\alpha)$, so $F(\alpha) = 0$.

Step 2: ($N_\alpha(1) > N_\alpha(-1) \Rightarrow F(\alpha) = 1$.) Take $\alpha$ such that $N_\alpha(1) > N_\alpha(-1)$. Let $k = N_\alpha(1)$ and $\ell = N_\alpha(-1)$, so $k > \ell$. WLOG (by symmetry) let $\alpha$ be of the form
$$\alpha = (\underbrace{1, \ldots, 1}_{k}, \underbrace{-1, \ldots, -1}_{\ell}, \underbrace{0, \ldots, 0}_{I-k-\ell}).$$
Construct $\alpha'$ as
$$\alpha' = (\underbrace{1, \ldots, 1}_{\ell}, \underbrace{0, \ldots, 0}_{k-\ell}, \underbrace{-1, \ldots, -1}_{\ell}, \underbrace{0, \ldots, 0}_{I-k-\ell}).$$
Then $N_{\alpha'}(1) = \ell = N_{\alpha'}(-1)$. By step 1, $F(\alpha') = 0$. Clearly, $\alpha \geq \alpha'$ and $\alpha \neq \alpha'$, so by positive responsiveness, $F(\alpha) = 1$.

Step 3: ($N_\alpha(1) < N_\alpha(-1) \Rightarrow F(\alpha) = -1$.) Take such an $\alpha$ and let $\alpha' = -\alpha$. By step 2, $F(\alpha') = 1$, and by neutrality, $F(\alpha) = -F(\alpha') = -1$. Thus we have shown that any $F$ that satisfies symmetry, neutrality, and positive responsiveness is the majority voting social welfare function.
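Since the domain $\{-1,0,1\}^I$ is finite, the axioms can also be checked mechanically for a small society. The following Python sketch is my own illustration (not part of the original notes); it verifies by brute force that the sign-of-the-sum rule satisfies all three of May's axioms when $I = 3$.

    from itertools import product, permutations

    def sign(n):
        return (n > 0) - (n < 0)

    I = 3
    majority = lambda alpha: sign(sum(alpha))

    for alpha in product([-1, 0, 1], repeat=I):
        # symmetry: invariant under any relabeling of voters
        assert all(majority(tuple(alpha[i] for i in p)) == majority(alpha)
                   for p in permutations(range(I)))
        # neutrality: swapping the roles of x and y flips the social preference
        assert majority(tuple(-a for a in alpha)) == -majority(alpha)
        # positive responsiveness: if F(alpha) >= 0 and alpha' > alpha then F(alpha') = 1
        if majority(alpha) >= 0:
            for alpha_p in product([-1, 0, 1], repeat=I):
                if all(a2 >= a1 for a1, a2 in zip(alpha, alpha_p)) and alpha_p != alpha:
                    assert majority(alpha_p) == 1
    print("majority rule passes all three axioms for I =", I)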
1.1.2 Social welfare functions with at least 3 alternatives
In this case, pairwise majority voting no longer induces a rational social preference. The following example
shows why.
Example 1.4 (Condorcet paradox). For simplicity, consider a society with three voters and three alternatives $a$, $b$, and $c$. Let their preferences over the three alternatives be as follows (best to worst):

$v_1$: $a, b, c$
$v_2$: $b, c, a$
$v_3$: $c, a, b$

Suppose the society's preference is given by pairwise majority voting, i.e., if more people prefer $a$ to $b$ than $b$ to $a$, then society prefers $a$ to $b$. We can see that $a \succ_1 b$ and $a \succ_3 b$, so $a \succ_s b$. Similarly, $b \succ_1 c$ and $b \succ_2 c$, so $b \succ_s c$. But we also have $c \succ_2 a$ and $c \succ_3 a$, so $c \succ_s a$. This implies that
$$a \succ_s b \succ_s c \succ_s a \succ_s b \succ_s c \succ_s a \ldots$$
Thus society's preference is nontransitive. This is known as the Condorcet paradox or a Condorcet cycle.
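The cycle can be verified directly. Here is a minimal Python sketch (my own, not from the notes) that tabulates the pairwise majority relation for the three-voter profile above.

    from itertools import combinations

    # Each voter's ranking, best first (the profile from Example 1.4).
    rankings = {1: ['a', 'b', 'c'], 2: ['b', 'c', 'a'], 3: ['c', 'a', 'b']}

    def majority_prefers(x, y):
        """Return True if a strict majority ranks x above y."""
        votes = sum(1 if r.index(x) < r.index(y) else -1 for r in rankings.values())
        return votes > 0

    for x, y in combinations('abc', 2):
        if majority_prefers(x, y):
            print(f"{x} >_s {y}")
        elif majority_prefers(y, x):
            print(f"{y} >_s {x}")
    # Prints: a >_s b, c >_s a, b >_s c -- a cycle, so the relation is not transitive.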
The Borda rule is a social welfare function that is similar to majority voting but always induces a rational
preference. Under this rule, each voter ranks alternatives (1 being the best, #X being the worst) and the
social preference is given by the order induced by summing these rankings. Unfortunately, the Borda rule
has a rather undesirable property.
Example 1.5. Consider a society with 2 voters and 3 alternatives $x$, $y$, and $z$. Construct the profile $\succsim$ as follows (best to worst):

$v_1$: $x, z, y$
$v_2$: $y, x, z$

Then $x$ gets 3 points and $y$ gets 4 points, so the society prefers $x$ to $y$. This seems all well and good. But consider the preference profile $\succsim'$ defined below:

$v_1$: $x, y, z$
$v_2$: $y, z, x$

Now $x$ has 4 points and $y$ has 3 points, so the society now prefers $y$ to $x$. This is undesirable because the placement of $x$ relative to $y$ has not changed for either voter. In other words, the society's preference between $x$ and $y$ depends on the placement of the third alternative $z$.
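The rank reversal is easy to reproduce numerically. A short Python sketch (my own illustration, not from the notes) computing the Borda scores for the two profiles above:

    # Borda scores for the two profiles in Example 1.5 (rank 1 = best; lower total wins).
    def borda_scores(rankings):
        alts = rankings[0]
        return {x: sum(r.index(x) + 1 for r in rankings) for x in alts}

    profile_1 = [['x', 'z', 'y'], ['y', 'x', 'z']]   # voter 1, voter 2
    profile_2 = [['x', 'y', 'z'], ['y', 'z', 'x']]   # only z's position changes

    print(borda_scores(profile_1))  # {'x': 3, 'z': 5, 'y': 4}: society prefers x to y
    print(borda_scores(profile_2))  # {'x': 4, 'y': 3, 'z': 5}: society now prefers y to x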
There are two main properties that we want social welfare functions to have:
Definition 1.6 (Pareto efficiency). $F$ is Pareto efficient or Paretian if for all $x, y \in X$, $x \succ_i y$ for all $i \in I$ implies $x \succ_s y$. Note this is stated only for strict preference between alternatives.

Definition 1.7 (Independence of irrelevant alternatives). $F$ satisfies the independence of irrelevant alternatives axiom if the social preference between $x$ and $y$ induced by $F$ depends only on individual preferences between $x$ and $y$. In mathematical notation, $F$ satisfies IIA if, given $\succsim, \succsim' \in \mathcal{R}^I$,
$$x \succsim_i y \iff x \succsim'_i y, \quad \forall i$$
implies that
$$x \, F(\succsim) \, y \iff x \, F(\succsim') \, y.$$
Lemma 1.1. Let $F$ be a SWF satisfying PE and IIA and let $\succsim \in \mathcal{R}^I$. If $x$ is extreme for everyone (for all $i \in I$, either $x \succ_i y$ for all $y \in X$ or $y \succ_i x$ for all $y \in X$), then it is also extreme for the society's preference $F(\succsim)$.

Proof. Suppose not. Then there exist $x \in X$ such that $x$ is extreme for everyone and $a, b \in X$ such that $a \succsim_s x \succsim_s b$. WLOG the profile looks as follows: some voters $1, \ldots, i$ rank $x$ first with $a$ and $b$ somewhere below, the remaining voters $i+1, \ldots, I$ rank $x$ last with $a$ and $b$ somewhere above, and society ranks $a$ above $x$ above $b$.

Construct $\succsim'$ by moving $b$ just above $a$ for every voter who ranked $a$ above $b$, keeping everything else the same. Then all voters prefer $b$ to $a$, all voters' preferences between $a$ and $x$ are unchanged, and all voters' preferences between $x$ and $b$ are unchanged. By IIA, $a \succsim'_s x$ and $x \succsim'_s b$. By PE, $b \succ'_s a$. But this implies that $a \succsim'_s x \succsim'_s b \succ'_s a$. This preference is nontransitive, so $F(\succsim') \notin \mathcal{R}$, which is a contradiction. Therefore $x$ must be extreme for the society.
Theorem 1.2 (Arrow impossibility theorem). If $\#X > 2$, any social welfare function that satisfies PE and IIA is a dictatorial rule, i.e., there exists $d \in I$ such that for all $x, y \in X$, $x \succ_d y \Rightarrow x \succ_s y$.
Proof. Step 1: Identify an issue dictator, i.e., for each $x \in X$, find $d(x) \in I$ so that for all $a, b \neq x$, $a \succ_{d(x)} b \Rightarrow a \succ_s b$.

Start by constructing $\succsim$ so that $x$ is the worst alternative for all $i \in I$. By PE, $x$ is the worst alternative for the society, i.e., $y \succ_s x$ for all $y \neq x$. Now change $\succsim_1$ to $\succsim'_1$ so that $x$ is the best alternative for voter 1. Do the same for voter 2, voter 3, and so on, so that $x$ is the best alternative for all voters under the resulting profile $\succsim'$. By PE, $x$ is then the best alternative for the society. By the lemma above, $x$ is an extreme alternative for the society at each iteration of the process $(\succsim'_1, \ldots, \succsim'_i, \succsim_{i+1}, \ldots, \succsim_I)$.

So we know that $x$ is the worst alternative at profile $\succsim$, the best alternative at profile $\succsim'$, and an extreme alternative (either best or worst) at each intermediate profile. Therefore there must be a point at which $x$ switches from worst to best for the society, i.e., there exists $d(x) \in I$ such that $x$ is the worst alternative for society at profile
$$\succsim^0 = (\succsim'_1, \ldots, \succsim'_{d(x)-1}, \succsim_{d(x)}, \succsim_{d(x)+1}, \ldots, \succsim_I),$$
but $x$ is the best alternative for society at profile
$$\succsim^1 = (\succsim'_1, \ldots, \succsim'_{d(x)-1}, \succsim'_{d(x)}, \succsim_{d(x)+1}, \ldots, \succsim_I).$$

At profile $\succsim^0$, for every $a \in X$ with $a \neq x$, we have $x \succ^0_i a$ for $i = 1, \ldots, d(x)-1$ and $a \succ^0_j x$ for $j = d(x), \ldots, I$. Further, since $x$ is the worst alternative for society at this profile, $a \succ^0_s x$. Schematically: voters $1, \ldots, d(x)-1$ rank $x$ first and $a$ below; voters $d(x), \ldots, I$ rank $a$ above and $x$ last; society ranks $a$ above $x$.

Similarly, at profile $\succsim^1$, for every $b \neq x$, $x \succ^1_i b$ for $i = 1, \ldots, d(x)$ and $b \succ^1_j x$ for $j = d(x)+1, \ldots, I$. Since $x$ is the best alternative for society at this profile, $x \succ^1_s b$. Schematically: voters $1, \ldots, d(x)$ rank $x$ first and $b$ below; voters $d(x)+1, \ldots, I$ rank $b$ above and $x$ last; society ranks $x$ above $b$.

Now change the preferences of voter $d(x)$ to $\succsim''_{d(x)}$ so that $a \succ''_{d(x)} x$ and $x \succ''_{d(x)} b$, and take the profile
$$\succsim^2 = (\succsim'_1, \ldots, \succsim'_{d(x)-1}, \succsim''_{d(x)}, \succsim_{d(x)+1}, \ldots, \succsim_I).$$
Comparing with the outcome at $\succsim^0$ and using IIA (individual preferences between $a$ and $x$ are the same in the two profiles), it must be that $a \succ^2_s x$. Comparing with the outcome at $\succsim^1$ and using IIA again, $x \succ^2_s b$. Then by transitivity, $a \succ^2_s b$. Since the individual rankings of $a$ versus $b$ for voters other than $d(x)$ were arbitrary, IIA implies that whenever $d(x)$ strictly prefers $a$ to $b$, so does society. Thus voter $d(x)$ is an issue dictator.
Step 2: Show that $d(x) = d(y)$ for all $x, y \in X$, i.e., the issue dictator is the same for every alternative $x \in X$.

Pick $x, y, z \in X$. Suppose for contradiction that $d(x) \neq d(y)$. Construct a preference profile $\succsim$ so that:
(i) $x \succ_i y$ for all $i \in I$;
(ii) $y \succ_{d(x)} z$;
(iii) $z \succ_{d(y)} x$.
For instance, let voters $1, \ldots, d(x)$ rank $x, y, z$ in that order and voters $d(y), \ldots, I$ rank $z, x, y$ in that order.

By (i) and PE, $x \succ_s y$. By (ii) and step 1, $y \succ_s z$. By (iii) and step 1, $z \succ_s x$. Then we have $x \succ_s y$, $y \succ_s z$, and $z \succ_s x$. This violates transitivity, which contradicts the fact that $F$ maps into $\mathcal{R}$. Thus it must be that $d(x) = d(y)$ for all $x, y \in X$. Therefore there exists $d \in I$ such that for all $x, y \in X$, if $x \succ_d y$, then $x \succ_s y$, i.e., $F$ is dictatorial.
Below are some examples of social welfare functions, each of which drops one of the conditions (or background assumptions) of Arrow's theorem while satisfying all of the others (MWG exercise 21.C.2).
Example 1.6 ($\#X < 3$). Let $F$ be the pairwise majority voting social welfare function. The domain is clearly $\mathcal{R}^I$, which is the same as $\{-1, 0, 1\}^I$ in this case. With two alternatives, transitivity is irrelevant, so $F$ always produces a rational social preference, i.e., $F(\succsim) \in \mathcal{R}$. Having only two alternatives also makes it trivial that $F$ has pairwise independence. If all agents strictly prefer $x$ to $y$, then by majority vote the society strictly prefers $x$ as well, so $F$ is Pareto efficient. Last, symmetry of the majority voting social welfare function means that $F(\succsim') = F(\succsim)$ for any rearrangement $\succsim'$ of $\succsim$, so no agent can impose his strict preference for all profiles in $\mathcal{R}^I$ (no dictatorship).
Example 1.7 (Domain restriction). Let the domain of $F$ be
$$\mathcal{A}^I = \{(\succsim_i)_{i\in I} : \succsim_i = \succsim_j \ \forall i, j\} \ \cup \ \{(\succsim_i)_{i\in I} : \exists\, i \in I \text{ with } \succsim_i = \succsim^*\} \ \subset \ \mathcal{R}^I,$$
where $\succsim^*$ is some fixed binary relation in $\mathcal{R}$. Define $F : \mathcal{A}^I \to \mathcal{R}$ as
$$F(\succsim) = \begin{cases} \succsim_1 = \cdots = \succsim_I & \text{if } \succsim_i = \succsim_j \ \forall i, j \\ \succsim^* & \text{if } \exists\, i \in I \text{ with } \succsim_i = \succsim^* \end{cases}$$
Obviously the number of alternatives can be at least 3. Since $F(\succsim) \in \mathcal{R}$ for all $\succsim \in \mathcal{A}^I$, $F$ is rational. Since $F$ is equal to the preference(s) of individual agent(s), it satisfies pairwise independence and Pareto efficiency. Given the definitions of $F$ and its domain, $F$ treats agents symmetrically, so it satisfies no dictatorship.

In the next section, we will see that strict single-peaked preferences and majority voting also work when there is an odd number of voters.
Example 1.8 (Irrational social preference relation). Let $F$ be the pairwise majority voting social welfare function. It clearly satisfies the 5 conditions other than social rationality.

Example 1.9 (Violates independence of irrelevant alternatives). Let $F$ be the Borda rule social welfare function. While $F$ does not satisfy pairwise independence, it does satisfy the other 5 properties.
Example 1.10 (Not Pareto efficient). Let $F((\succsim_i)_{i\in I}) = \succsim^* \in \mathcal{R}$ for all $(\succsim_i)_{i\in I} \in \mathcal{R}^I$, where $\succsim^*$ is a fixed preference relation. $F$ is not Pareto efficient, but it satisfies the other properties.

Example 1.11 (Dictatorial). Let $F$ be a dictatorial rule. Consistent with Arrow's impossibility theorem, such an $F$ satisfies the other 5 properties.
1.1.3 Single-peaked preferences
Arrow's impossibility theorem is a fairly negative result since it tells us that we cannot get the properties we would like (PE and IIA) out of a non-dictatorial social welfare function defined on $\mathcal{R}$. However, we will see that if we restrict the domain in a way that is not entirely unreasonable for many situations, the outlook improves significantly.
Definition 1.8. A linear order on $X$ is a binary relation $\geq$ that is rational (complete and transitive) and anti-symmetric ($y \geq x$ and $x \geq y$ imply $x = y$).

Note that MWG has a different definition of linear order (reflexive, transitive, and total: for $x \neq y$, either $x \geq y$ or $y \geq x$, but not both). To see that the two definitions are equivalent, first suppose that the binary relation $\geq$ satisfies MWG's definition. Let $x, y \in X$ with $x \neq y$. Then by totality, exactly one of the following is true:
$$x \geq y \ \& \ \neg(y \geq x), \qquad y \geq x \ \& \ \neg(x \geq y),$$
where $\neg$ denotes negation. Either way, the statement "$x \geq y$ and $y \geq x$" is false. Now let $x, y \in X$ with $x = y$. Then by reflexivity, $x \geq y$ and $y \geq x$. Thus $\geq$ is anti-symmetric. Totality and reflexivity imply completeness, and $\geq$ is transitive, so $\geq$ is rational and satisfies our definition.

Now suppose that $\geq$ satisfies our definition. Let $x, y \in X$ with $x \neq y$. Then by anti-symmetry, $\neg(x \geq y \ \& \ y \geq x)$. Using completeness, at least one of $x \geq y$ and $y \geq x$ must be true, so exactly one of them holds. Thus $\geq$ is total. Completeness also gives reflexivity, so $\geq$ satisfies MWG's definition. Therefore the two definitions are equivalent.
Definition 1.9. Preference profile $\succsim$ is single-peaked with respect to the linear order $\geq$ if for all $i \in I$ there exists $\bar{x}_i \in X$ such that $z < y \leq \bar{x}_i \Rightarrow y \succ_i z$ and $\bar{x}_i \leq y < z \Rightarrow y \succ_i z$. We will use $\mathcal{R}^I_{sp}$ to denote the set of preference profiles that are single-peaked with respect to some exogenously given linear order $\geq$. $\mathcal{P}^I_{sp}$ will denote the subset of strict single-peaked preference profiles.
Theorem 1.3. For all $\succsim \in \mathcal{R}^I_{sp}$, there exists a Condorcet winner (not necessarily unique), i.e., $x \in X$ such that for all $y \neq x$, $\#\{i : x \succ_i y\} \geq \#\{j : y \succ_j x\}$.
Proof. There exists a median voter $m$ such that
$$\#\{i : \bar{x}_i \geq \bar{x}_m\} \geq I/2 \quad \text{and} \quad \#\{j : \bar{x}_m \geq \bar{x}_j\} \geq I/2.$$
Take $y < \bar{x}_m$. Then for all $i \in I$ with $\bar{x}_i \geq \bar{x}_m$, we have $y < \bar{x}_m \leq \bar{x}_i$, so by single-peakedness $\bar{x}_m \succ_i y$. Then
$$\#\{i : \bar{x}_m \succ_i y\} \geq \#\{i : \bar{x}_i \geq \bar{x}_m\} \geq I/2.$$
Thus by majority vote, $\bar{x}_m \succsim_s y$. Similar logic applies for $y > \bar{x}_m$. Thus $\bar{x}_m$ is a Condorcet winner.
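The median-of-the-peaks logic is easy to check numerically. The sketch below is my own illustration (not from the notes): it uses a hypothetical set of peaks and symmetric single-peaked utilities, and confirms by brute force that the median peak is a Condorcet winner.

    import statistics

    # Alternatives on a line; each voter's utility is single-peaked around his peak.
    alternatives = list(range(11))            # X = {0, 1, ..., 10}
    peaks = [2, 3, 3, 7, 9]                   # one (hypothetical) peak per voter, I = 5 odd
    pref = lambda i, x: -abs(x - peaks[i])    # single-peaked (symmetric) utility

    def is_condorcet_winner(x):
        for y in alternatives:
            if y == x:
                continue
            wins = sum(1 for i in range(len(peaks)) if pref(i, x) > pref(i, y))
            losses = sum(1 for i in range(len(peaks)) if pref(i, y) > pref(i, x))
            if wins < losses:
                return False
        return True

    median_peak = statistics.median(peaks)
    print(median_peak, is_condorcet_winner(median_peak))   # 3 True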
Note that this theorem does not imply that the social preference induced by majority voting is necessarily rational. Take a society with 2 voters and 3 alternatives $a$, $b$, $c$ such that $a < b < c$. Let voter 1's preferences be $c \succ_1 b \succ_1 a$, i.e., $\bar{x}_1 = c$. Let voter 2's preferences be $b \succ_2 a$, $b \succ_2 c$, and $a \succ_2 c$, i.e., $\bar{x}_2 = b$. By majority voting, $b \sim_s c$, $c \sim_s a$, but $b \succ_s a$, so the society's preference is nontransitive. This suggests that another line of inquiry might be to expand the range of our social welfare functions. However, the next theorem gives us a much stronger result.
Theorem 1.4. For all $\succsim \in \mathcal{P}^I_{sp}$, when $I$ is odd, pairwise majority voting is a well-defined social welfare function, i.e., $F : \mathcal{P}^I_{sp} \to \mathcal{R}$.
Proof. Suppose $a \succsim_s b \succsim_s c$. We want to show that society's preference is transitive, i.e., $a \succsim_s c$. Since $I$ is odd and all preference profiles in the domain are strict, we in fact have $a \succ_s b$ and $b \succ_s c$. Restrict attention to the subset $\{a, b, c\} \subset X$. Any $\succsim \in \mathcal{P}^I_{sp}$ remains single-peaked over $\{a, b, c\}$, and since all individual preferences are strict, no one is indifferent between any elements of $\{a, b, c\}$. By theorem 1.3, there exists a Condorcet winner among $\{a, b, c\}$. It cannot be $b$ since $a \succ_s b$, and it cannot be $c$ since $b \succ_s c$. Thus $a$ is the Condorcet winner, i.e., $a \succsim_s c$.
Example 1.12. Pairwise majority voting is NOT the only social welfare function defined on $\mathcal{P}^I_{sp}$ that satisfies IIA and PE. Let $I = 5$ and consider a weighted majority voting social welfare function $F : \mathcal{P}^I_{sp} \to \mathcal{R}$ in which agents 1 and 2 each have 2 votes and agents 3-5 each have one vote. In other words, $x \succ_s y \iff \sum_{i=1}^5 w_i \alpha_i > 0$, where
$$w_i = \begin{cases} 2 & i \in \{1, 2\} \\ 1 & i \in \{3, 4, 5\} \end{cases} \qquad \text{and} \qquad \alpha_i = \begin{cases} 1 & x \succ_i y \\ -1 & y \succ_i x \end{cases}$$
We can view this social welfare function as ordinary majority voting with 7 agents (the original 5 agents plus 1 extra copy each of agents 1 and 2), all having strict, single-peaked preferences. By a theorem proved in class, $F$ gives rise to a well-defined social preference that satisfies PE and IIA.
Example 1.13 (Application). This application of the theory of social welfare functions on the single-peaked domain comes from MWG exercise 21.D.10.

Suppose there is an odd number $I$ of agents, each with wealth $w_i > 0$ and an increasing utility function over wealth levels. The mean wealth is $\bar{w}$ and the median is $w^m$. A difference between the mean and median implies that wealth is distributed unequally, so as the median and mean grow farther apart, the distribution of wealth becomes more unequal.
Consider a proportional tax rate $t \in [0, 1]$ applied to all agents. The set of alternatives is $X = [0, 1]$ (the possible levels of the tax). Tax receipts are redistributed uniformly, so after-tax wealth for agent $i$ is $a_i = (1-t)w_i + t\bar{w}$. Clearly $[0, 1]$ has a linear order. Each agent votes to maximize his own after-tax utility
$$u_i(a_i) = u_i[(1-t)w_i + t\bar{w}].$$
We can see that if $w_i > \bar{w}$, $a_i$ is strictly decreasing in $t$, so $u_i(a_i)$ is also strictly decreasing in $t$. Thus agent $i$ prefers $t = 0$ over all other alternatives, and if $0 \leq t < t'$, then $t \succ_i t'$. Thus agent $i$'s preferences are single-peaked with peak $\bar{t}_i = 0$.

If $w_i < \bar{w}$, $a_i$ is strictly increasing in $t$, so $u_i(a_i)$ is also strictly increasing in $t$. Thus in this case agent $i$ prefers $t = 1$ over all other alternatives, and if $1 \geq t > t'$, then $t \succ_i t'$. Thus agent $i$'s preferences are single-peaked with peak $\bar{t}_i = 1$.
Thus every agent has a peak at either 1 or 0. We know from the theorem above that since all agents have single-peaked preferences there will be a Condorcet winner. If $w^m > \bar{w}$, then more than half of the agents prefer $t = 0$ to all other alternatives, so the Condorcet winner is $t^c = 0$. If $w^m < \bar{w}$, then more than half of the agents prefer $t = 1$ to all other alternatives, so the Condorcet winner is $t^c = 1$.
Now suppose that taxation has a deadweight loss, so that a tax rate of $t$ decreases agent $i$'s pretax wealth to $w_i(t) = (1-t)w_i$. Then average tax receipts are $t(1-t)\bar{w}$ and after-tax wealth for agent $i$ is
$$a_i = (1-t)^2 w_i + t(1-t)\bar{w}.$$
Note that $a_i$ is a quadratic function of $t$, so all agents have single-peaked preferences. The FOC is
$$-2(1-t)w_i + (1-2t)\bar{w} = 0$$
and the SOC is
$$2w_i - 2\bar{w} < 0.$$
Thus for all agents with $w_i > \bar{w}$ we have a corner solution, and such an agent has peak $\bar{t}_i = 0$. For agents with $w_i < \bar{w}$, the FOC is necessary and sufficient. It yields
$$\bar{t}_i = \frac{w_i - \tfrac{1}{2}\bar{w}}{w_i - \bar{w}}.$$
Thus for agents with $w_i \in [0, \tfrac{1}{2}\bar{w}]$, $\bar{t}_i \in [0, 1/2]$. Since $t$ cannot be negative, agents with $w_i \in (\tfrac{1}{2}\bar{w}, \bar{w}]$ have peak $\bar{t}_i = 0$.

Since all agents have peaks in $[0, 1/2]$, we have shown that $t^c \in [0, 1/2]$. Further, if $w^m < \tfrac{1}{2}\bar{w}$ then $t^c \in (0, 1/2]$, and if $w^m > \tfrac{1}{2}\bar{w}$ then $t^c = 0$. This result differs from the previous case because the deadweight loss makes the uniform tax redistribution a quadratic function of $t$ with maximum at $t = 1/2$, so no one wants $t$ higher than $1/2$ (even an agent with zero initial wealth).
Finally, suppose that the deadweight loss changes pretax wealth to $w_i(t) = (1 - t^2)w_i$. Now after-tax wealth is
$$a_i = (1-t)(1-t^2)w_i + t(1-t^2)\bar{w} = (1 - t - t^2 + t^3)w_i + (t - t^3)\bar{w}.$$
The FOC is
$$(-1 - 2t + 3t^2)w_i + (1 - 3t^2)\bar{w} = 0,$$
which clearly can have more than one solution. Thus the agents' preferences are not necessarily single-peaked.
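The peak formula in the first deadweight-loss case can be checked numerically. This is my own sketch (not from the notes), using a hypothetical list of wealth levels; it compares the grid-search peak of $a_i(t) = (1-t)^2 w_i + t(1-t)\bar{w}$ with the closed-form expression above.

    # Numerical check of the deadweight-loss case w_i(t) = (1 - t) w_i from Example 1.13.
    wealths = [0.2, 0.4, 0.8, 1.5, 2.1]           # hypothetical wealth levels
    w_bar = sum(wealths) / len(wealths)           # mean wealth = 1.0

    def peak(w_i, grid=2001):
        ts = [k / (grid - 1) for k in range(grid)]
        return max(ts, key=lambda t: (1 - t) ** 2 * w_i + t * (1 - t) * w_bar)

    for w_i in wealths:
        formula = max(0.0, (w_i - 0.5 * w_bar) / (w_i - w_bar)) if w_i < w_bar else 0.0
        print(f"w_i = {w_i}: grid peak = {peak(w_i):.3f}, formula = {formula:.3f}")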
1.2 Social choice functions
Definition 1.10. A social choice function is a map $f : A \to X$, where $A \subseteq \mathcal{R}^I$.

Rather than producing a binary relation for society, a social choice function simply indicates which alternative the society chooses. We have two properties that are desirable for social choice functions, analogous to PE and IIA for social welfare functions.

Definition 1.11 (Weakly Paretian). $f$ is weakly Paretian if $x \succ_i y$ for all $i \in I$ implies $y \neq f(\succsim)$.
Definition 1.12 (Maskin monotonicity). Let $L(x, \succsim) = \{y \in X : x \succsim y\}$ (the lower contour set of $x$). $f$ is Maskin monotone if
$$\big[x = f(\succsim) \ \text{and} \ L(x, \succsim_i) \subseteq L(x, \succsim'_i) \ \forall i \in I\big] \ \Rightarrow \ x = f(\succsim').$$

We also have another kind of Pareto condition:

Definition 1.13. $f$ is Paretian if $x \succ_i y$ for all $i \in I$ and all $y \neq x$ implies $x = f(\succsim)$.

It may seem strange at first, but P is actually weaker than WP.

Proposition 1.1. If $f$ is MM, then P $\Rightarrow$ WP.

Proof. To be completed.
Definition 1.14. If $A = \mathcal{R}^I$ or $\mathcal{P}^I$, the SCF $f : A \to X$ is called strategy-proof if for all $\succsim_{-i}$, $\succsim_i$, $\succsim'_i$,
$$f(\succsim_{-i}, \succsim_i) \ \succsim_i \ f(\succsim_{-i}, \succsim'_i).$$
Theorem 1.5. If $f : \mathcal{P}^I \to X$ is strategy-proof then it is MM.
Proof. Suppose for contradiction that $f$ is not MM. Then there exist $\succsim, \succsim' \in \mathcal{P}^I$ such that $x = f(\succsim)$, $x \neq y = f(\succsim')$, and $L(x, \succsim_i) \subseteq L(x, \succsim'_i)$ for all $i \in I$. Changing one agent's preference at a time from $\succsim$ to $\succsim'$, there must exist $i \in I$ such that
$$f(\succsim'_1, \ldots, \succsim'_{i-1}, \succsim'_i, \succsim_{i+1}, \ldots, \succsim_I) = z \neq x$$
and
$$f(\succsim'_1, \ldots, \succsim'_{i-1}, \succsim_i, \succsim_{i+1}, \ldots, \succsim_I) = x.$$
If $x \succ'_i z$, then $i$ will lie (report $\succsim_i$) when his true preference is $\succsim'_i$. If $z \succ'_i x$, then $L(x, \succsim_i) \subseteq L(x, \succsim'_i)$ implies that $z \succ_i x$, so $i$ will lie (report $\succsim'_i$) when his true preference is $\succsim_i$. Either way, this contradicts the fact that $f$ is strategy-proof.
Definition 1.15. For a finite subset $X' \subseteq X$, $\succsim'$ takes $X'$ to the top from $\succsim$ if
(i) for all $x \in X'$ and $y \notin X'$, $x \succ'_i y$ for all $i$;
(ii) for all $x, y \in X'$, $x \succsim_i y \iff x \succsim'_i y$.
The following theorem provides the analogue of Arrow's impossibility theorem for social choice functions.

Theorem 1.6. If $\#X \geq 3$ and $A = \mathcal{R}^I$ or $A = \mathcal{P}^I$, then any SCF that satisfies WP and MM must be dictatorial, i.e., there exists $d \in I$ such that for all $\succsim \in A$, $f(\succsim) \in \{x \in X : x \succsim_d y \ \forall y \in X\}$.
Proof. Step 1: Prove that if $\succsim'$ and $\succsim''$ both take $\{x, y\}$ to the top from $\succsim$, then $f(\succsim') = f(\succsim'')$. By WP, $f(\succsim') \in \{x, y\}$. WLOG let $f(\succsim') = x$. By construction, $L(x, \succsim'_i) = L(x, \succsim''_i)$ for all $i \in I$. Then by MM, $x = f(\succsim'')$.

Step 2: Construct $F : \mathcal{R}^I \to \mathcal{P}$. Define $F$ so that $x \, F(\succsim) \, y$ if and only if $f(\succsim') = x$ for every $\succsim'$ that takes $\{x, y\}$ to the top from $\succsim$ (by step 1, this holds for one such $\succsim'$ if and only if it holds for all of them). Clearly, $F(\succsim)$ is a complete binary relation. Also note that $x \, F(\succsim) \, y$ implies $\neg\, y \, F(\succsim) \, x$, i.e., $F(\succsim)$ is a strict preference relation. Thus we can interchangeably write $F(\succsim)$ and $F_p(\succsim)$.

To see that $F(\succsim)$ is transitive, suppose that $x \, F(\succsim) \, y$ and $y \, F(\succsim) \, z$. Let $\succsim'$ take $\{x, y, z\}$ to the top from $\succsim$. Now construct three more preference profiles as follows:
$\succsim^L$ takes $\{x, y\}$ to the top from $\succsim'$;
$\succsim^M$ takes $\{x, z\}$ to the top from $\succsim'$;
$\succsim^R$ takes $\{y, z\}$ to the top from $\succsim'$.
Note that these three profiles also take their respective sets to the top from $\succsim$. Since $x \, F(\succsim) \, y$ and $\succsim^L$ takes $\{x, y\}$ to the top from $\succsim$, by definition of $F$ it must be that $x = f(\succsim^L)$. Similarly, $y = f(\succsim^R)$.

Suppose that $f(\succsim') = y$. Then by MM, $f(\succsim^L) = y$ as well. This is a contradiction, so $f(\succsim') \neq y$. Similarly, $f(\succsim') \neq z$ by the same argument applied to $\succsim^R$. Since $f(\succsim') \in \{x, y, z\}$ by WP, it must be that $f(\succsim') = x$. By MM, $f(\succsim^M) = x$. Therefore $x \, F(\succsim) \, z$, since $\succsim^M$ takes $\{x, z\}$ to the top from $\succsim$.

So we have shown that $F(\succsim)$ is complete, transitive, and strict. Thus it is a strict, rational preference relation, i.e., $F : \mathcal{R}^I \to \mathcal{P}$.

Step 3: Show that $F$ rationalizes $f$, i.e., for all $\succsim \in \mathcal{R}^I$, $f(\succsim) \, F(\succsim) \, y$ for all $y \neq f(\succsim)$. Suppose not, so that there exists $y \in X$ with $y \, F(\succsim) \, x = f(\succsim)$. Then $f(\succsim') = y$ for any $\succsim'$ that takes $\{x, y\}$ to the top from $\succsim$. But $L(x, \succsim_i) \subseteq L(x, \succsim'_i)$ for all $i$, so MM requires $f(\succsim') = x$, a contradiction. Therefore $F$ rationalizes $f$.

Step 4: Show that $F$ is PE, i.e., $x \succ_i y$ for all $i$ implies $x \, F_p(\succsim) \, y$. If $x \succ_i y$ for all $i$, then for every $\succsim'$ that takes $\{x, y\}$ to the top from $\succsim$, $f(\succsim') = x$ by WP. Thus $x \, F(\succsim) \, y$, and since $F$ is always strict, $x \, F_p(\succsim) \, y$.

Step 5: Prove $F$ is IIA. Take $\succsim$ and $\succsim''$ that agree on the pair $\{x, y\}$, i.e., $x \succsim_i y \iff x \succsim''_i y$ for all $i \in I$. Either $x \, F(\succsim) \, y$ or $y \, F(\succsim) \, x$; WLOG say $x \, F(\succsim) \, y$. Let $\succsim'$ take $\{x, y\}$ to the top from $\succsim$. Then $f(\succsim') = x$. Notice that $\succsim'$ also takes $\{x, y\}$ to the top from $\succsim''$, because $\succsim$ and $\succsim''$ agree on $\{x, y\}$. Then by definition of $F$, $x \, F(\succsim'') \, y$. Thus $F(\succsim)$ and $F(\succsim'')$ agree on $\{x, y\}$, so $F$ is IIA.

Step 6: By Arrow's impossibility theorem, $F$ is dictatorial, i.e., there exists $d \in I$ such that $x \succ_d y \Rightarrow x \, F(\succsim) \, y$.

Step 7: Since $F$ rationalizes $f$, $f$ is also dictatorial.
2 Bargaining problems
2.1 Definitions
Definition 2.1. A bargaining problem is a pair $(U, u^*)$ where $U$ is a closed and convex subset of $\mathbb{R}^2$ and $u^* \in U$ is the threat point, such that $U_{u^*} := \{u \in U : u \geq u^*\}$ is compact. A solution is a function $f$ that assigns to each bargaining problem $(U, u^*)$ a point $f(U, u^*) \in U$.
There are several axioms that are desirable in the solution to a bargaining problem.
Definition 2.2. A function $a : \mathbb{R} \to \mathbb{R}$ is an affine transformation if $a(x) = \alpha x + \beta$ for some $\alpha > 0$. A function $A : \mathbb{R}^2 \to \mathbb{R}^2$ is affine if $A(x, y) = (a_1(x), a_2(y))$ where $a_1$ and $a_2$ are both affine transformations.

Definition 2.3 (IA). $f$ is independent of affine transformations (IA) if for any affine transformation $A : \mathbb{R}^2 \to \mathbb{R}^2$, $f(A(U), A(u^*)) = A(f(U, u^*))$. Note that if $f$ is IA then it suffices to consider only normalized problems where $u^* = 0$; a normalized bargaining problem is fully characterized by $U$, and a solution is written $f(U)$. Note that IA combines the properties IUU and IUO in MWG.
Definition 2.4 (Symmetric). Let $U$ be symmetric ($(a, b) \in U \iff (b, a) \in U$) and $u^*_1 = u^*_2$. Then $f$ is symmetric (S) if $\bar{u} = f(U, u^*)$ is symmetric as well, i.e., $\bar{u}_1 = \bar{u}_2$.

Definition 2.5 ((W)PE). $f$ is (weakly) Pareto efficient (WP or P) if there is no $y \in U$ such that $y_i > f_i(U, u^*)$ for all $i$.

Definition 2.6 (IIA). For any 2 normalized problems $U_1, U_2$: if $U_1 \subseteq U_2$ and $f(U_2) \in U_1$, then $f(U_1) = f(U_2)$.
2.2 Nash bargaining
Definition 2.7. The Nash bargaining solution is
$$f^n(U, u^*) = \operatorname*{argmax}_{u \in U,\ u \geq u^*} \ (u_1 - u^*_1)(u_2 - u^*_2).$$
Note that we can write this equivalently as
$$f^n(U, u^*) = \operatorname*{argmax}_{u \in U,\ u \geq u^*} \ \sum_{i \in I} \ln(u_i - u^*_i).$$
The normalized versions of these are
$$f^n(U) = \operatorname*{argmax}_{u \in U,\ u \geq 0} \ u_1 u_2 = \operatorname*{argmax}_{u \in U,\ u \geq 0} \ \sum_{i \in I} \ln(u_i).$$
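For intuition, here is a small numeric sketch (my own, not from the notes). On the hypothetical normalized problem $U = \{u \geq 0 : u_1 + 2u_2 \leq 2\}$, maximizing the Nash product along the Pareto frontier gives $(1, \tfrac12)$.

    # Minimal numeric sketch: the normalized Nash solution on
    # U = {u >= 0 : u_1 + 2 u_2 <= 2} maximizes u_1 * u_2 on the frontier.
    def nash_solution(frontier, grid=100001):
        # frontier(u1) gives the largest feasible u2 for a given u1
        best = max((k * 2.0 / (grid - 1) for k in range(grid)),
                   key=lambda u1: u1 * frontier(u1))
        return best, frontier(best)

    u1, u2 = nash_solution(lambda u1: (2.0 - u1) / 2.0)
    print(round(u1, 3), round(u2, 3))   # (1.0, 0.5)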
Theorem 2.1. The Nash bargaining solution $f^n(\cdot)$ is the only solution that satisfies IA, S, PE, and IIA.
Proof. First we show that $f^n$ satisfies the listed properties. To see that $f^n$ is IA, take a bargaining problem $(U, u^*)$ and let $A$ be an affine transformation of the form
$$A(x, y) = (\alpha_1 x + \beta_1, \ \alpha_2 y + \beta_2).$$
Take $(U', u^{*\prime}) = (A(U), A(u^*))$ and let $\bar{u} = f^n(U, u^*)$. Since the objective function of the Nash solution is strictly quasi-concave, it has a unique maximizer, so
$$(\bar{u}_1 - u^*_1)(\bar{u}_2 - u^*_2) > (u_1 - u^*_1)(u_2 - u^*_2) \quad \forall u \in U,\ u \neq \bar{u}.$$
This implies that
$$(\alpha_1 \bar{u}_1 + \beta_1 - \alpha_1 u^*_1 - \beta_1)(\alpha_2 \bar{u}_2 + \beta_2 - \alpha_2 u^*_2 - \beta_2) > (\alpha_1 u_1 + \beta_1 - \alpha_1 u^*_1 - \beta_1)(\alpha_2 u_2 + \beta_2 - \alpha_2 u^*_2 - \beta_2) \quad \forall u \in U,$$
since both sides are just the original inequality multiplied by $\alpha_1 \alpha_2 > 0$. Writing $\bar{u}' = A(\bar{u})$, this says
$$(\bar{u}'_1 - u^{*\prime}_1)(\bar{u}'_2 - u^{*\prime}_2) > (u'_1 - u^{*\prime}_1)(u'_2 - u^{*\prime}_2) \quad \forall u' \in U',\ u' \neq \bar{u}'.$$
Thus $A(\bar{u}) = f^n(U', u^{*\prime})$, i.e., $f^n(A(U), A(u^*)) = A(f^n(U, u^*))$. Therefore $f^n$ is IA.

Now it suffices to consider normalized bargaining problems to verify the rest of the properties. Suppose $f^n$ is not PE. Then there exists $u \in U$ with $u_i > f^n_i(U)$ for both $i$. But then $u_1 u_2 > f^n_1(U) f^n_2(U)$ (since the objective function of $f^n$ is strictly increasing), which is a contradiction, so $f^n$ is PE.

Suppose $f^n$ is not symmetric. Then there is a symmetric $U$ such that $f^n_1(U) \neq f^n_2(U)$. Let $(a, b) = f^n(U)$. Again, since the objective function of the Nash bargaining solution is strictly quasi-concave, it has a unique maximizer on every convex set, so $ab > u_1 u_2$ for every other $u \in U$. But $(b, a) \in U$ since $U$ is symmetric, and $ab = ba$. This is a contradiction, so $f^n$ must be symmetric. Finally, $f^n$ is IIA: if $U_1 \subseteq U_2$ and $f^n(U_2) \in U_1$, then the unique maximizer of the Nash product on $U_2$ lies in $U_1$ and hence also maximizes it there, so $f^n(U_1) = f^n(U_2)$.
Next, we want to show that any bargaining solution $f$ that is IA, PE, symmetric, and IIA is equal to $f^n$. Suppose that $f$ satisfies those properties. Again, since $f$ is IA we only need to consider normalized bargaining problems. Given an arbitrary normalized $U$, let $\bar{u} = f^n(U)$ and consider
$$U' = \Big\{u \in \mathbb{R}^2 : \sum_{i} \frac{u_i}{\bar{u}_i} \leq 2\Big\} \quad \text{and} \quad U'' = \Big\{u \in \mathbb{R}^2 : \sum_{i} u_i \leq 2\Big\}.$$
Note that $U'$ is the half-space below the hyperplane that is tangent to $U$ at $\bar{u}$, so $U \subseteq U'$. Further, $U' = A(U'')$ where $A(x, y) = (\bar{u}_1 x, \bar{u}_2 y)$. Since $U''$ is symmetric and $f$ is symmetric and PE, $f(U'') = (1, 1)$. Since $f$ is IA, $f(U') = A(f(U'')) = (\bar{u}_1, \bar{u}_2)$. Finally, since $U \subseteq U'$ and $f(U') = \bar{u} \in U$, IIA implies that $f(U) = (\bar{u}_1, \bar{u}_2) = f^n(U)$. This completes the proof.
At first glance, it might seem like this is a counterexample to Arrow's impossibility theorem, since we can think of each $u \in U$ as representing the utility to each agent from some alternative $x \in X$. However, notice that we have placed several restrictions on $U$ (closed and convex). Further, the Nash bargaining solution depends on the threat point as well as preferences, so it is not Maskin monotone. Finally, the Nash solution implicitly requires the agents' utilities to be of von Neumann-Morgenstern form (linear in probabilities), so it only works for a subset of $\mathcal{R}$.
2.3 Other bargaining solutions
2.3.1 Egalitarian solution
Definition 2.8. The egalitarian solution is
$$f^e(U) = \operatorname*{argmax}_{u \in U} \ \min(u_1, \ldots, u_I).$$
It is roughly analogous to maximizing a Leontief utility function.

Proposition 2.1. $f^e$ does not satisfy IA.

Proof. Obvious from the fact that $f^e$ involves comparing utilities across agents.

2.3.2 Utilitarian solution

Definition 2.9. The utilitarian solution is
$$f^u(U) = \operatorname*{argmax}_{u \in U} \ \sum_{i \in I} u_i.$$

Proposition 2.2. $f^u$ also does not satisfy IA.

Proof. Obvious, for the same reason as $f^e$.
2.3.3 Kalai-Smorodinsky solution
Definition 2.10. Throughout this subsection we will use the following definitions:
$$b_1(U) = \max\{x : (x, y) \in U, \ (x, y) \geq 0\},$$
$$b_2(U) = \max\{y : (x, y) \in U, \ (x, y) \geq 0\},$$
$$g(x \mid U) = \max\{y : (\tilde{x}, y) \in U, \ \tilde{x} \geq x\}.$$
We will refer to bargaining problems with $b_1(U) = b_2(U)$ as normalized problems.

Definition 2.11 ((Partial) Monotonicity). Suppose $b_1(U) = b_1(U')$. $f$ is (partially) monotone if $g(\cdot \mid U) \geq g(\cdot \mid U')$ implies $f_2(U) \geq f_2(U')$.

Definition 2.12. For normalized bargaining problems, the Kalai-Smorodinsky bargaining solution is
$$f^k(U) = t^*\,(b_1(U), b_2(U)), \quad \text{where} \quad t^* = \max\{t : t(b_1(U), b_2(U)) \in U\}.$$
In other words, the K-S solution picks the largest (in the vector ordering) pair of utilities on the ray from the origin through $(b_1(U), b_2(U))$ that is still in $U$.
Proposition 2.3. The K-S solution satisfies IA, P, S, and M, but not IIA.

Proof. Let $L$ be a linear rescaling defined as $L(x, y) = (\lambda_1 x, \lambda_2 y)$, and let $U' = L(U)$ (for normalized problems the threat point is the origin, so this is the relevant affine transformation). Clearly, $b_1(U') = \lambda_1 b_1(U)$ and $b_2(U') = \lambda_2 b_2(U)$. Then
$$t^{*\prime} = \max\{t' : t'(b_1(U'), b_2(U')) \in U'\} = \max\{t' : t'(\lambda_1 b_1(U), \lambda_2 b_2(U)) \in L(U)\} = \max\{t' : t'(b_1(U), b_2(U)) \in U\} = t^*.$$
Thus $f^k(U') = L(f^k(U))$, so $f^k$ satisfies IA.

Take an arbitrary $U$ and let $\bar{u} = f^k(U)$. Suppose $f^k$ does not satisfy P. Then there exists $u' \in U$ such that $u'_i > \bar{u}_i$ for both $i$. Let $\tilde{t} = \min_{i=1,2} u'_i / b_i(U)$. Then $\tilde{t} > t^*$ and $\tilde{t}(b_1(U), b_2(U)) \leq u'$, so $\tilde{t}(b_1(U), b_2(U)) \in U$. This contradicts the definition of $t^*$, so $f^k$ satisfies P.

If $U$ is symmetric, then $b_1(U) = b_2(U)$, so $f^k_1(U) = t^* b_1(U) = t^* b_2(U) = f^k_2(U)$. Thus $f^k$ satisfies S. $f^k$ satisfies monotonicity by construction.

Finally, take $U = \{u \geq 0 : u_1 + u_2 \leq 2\}$ and $U' = \{u \geq 0 : u_1 + u_2 \leq 2, \ u_2 \leq 1\}$. Then $U' \subseteq U$. By symmetry, $f^k(U) = (1, 1) \in U'$. Clearly, $b_1(U') = 2$ and $b_2(U') = 1$, so $f^k_1(U')/f^k_2(U') = 2$ and hence $f^k(U') \neq (1, 1)$. Thus $f^k$ does not satisfy IIA.
2.4 Bargaining with more than 2 people
Here we will implicitly assume that all agents are risk neutral by letting their utilities equal the amount of money they receive. Because there are more than 2 players, we have to consider what each coalition of players could achieve without the other players. Let $N = \{1, \ldots, n\}$ be the set of players. We will use $S \subseteq N$ to denote a coalition (note that $N$ is the grand coalition). Let $\mathcal{N}$ be the set of all $2^n$ possible subsets of $N$. We will use a function $v : \mathcal{N} \to \mathbb{R}$ to denote the value of a coalition. In other words, $v(S)$ is the amount of money the coalition can generate by cooperating, and its members can distribute this amount among themselves as they wish. We will assume that
(i) $v(\emptyset) = 0$;
(ii) $v(S) + v(T) \leq v(S \cup T)$ for disjoint $S$, $T$ (superadditivity).
Let $V = \{v : \mathcal{N} \to \mathbb{R} \ \text{satisfying (i) and (ii) above}\}$ be the set of possible coalition valuations. Note that given $v(\emptyset) = 0$, we can also think of any $v \in V$ as an element of $\mathbb{R}^{2^n - 1}$.

Definition 2.13. A bargaining solution in this context is a function $f : V \to \mathbb{R}^n$, i.e., $f(v) = (f_1(v), \ldots, f_n(v))$.
Definition 2.14 (Pareto efficient). A bargaining solution $f$ is Pareto efficient if $\sum_{i \in N} f_i(v) = v(N)$ for all $v \in V$.

Definition 2.15 (Symmetry). Let $\pi : N \to N$ be a permutation, and define the game $\pi v$ by
$$\pi v(\pi(S)) = v(S), \quad \forall S \in \mathcal{N}.$$
We say that $f$ is symmetric if
$$f_{\pi(i)}(\pi v) = f_i(v), \quad \forall i, v, \pi.$$
In other words, relabeling the players does not affect their payoffs.

Definition 2.16. A player $i$ is a dummy in game $v$ if for all $S \in \mathcal{N}$, $v(S \cup \{i\}) = v(S)$.

Definition 2.17 (Dummy axiom). $f$ satisfies the dummy axiom if $f_i(v) = 0$ for all $i$ such that $i$ is a dummy in game $v$.

Definition 2.18 (Linearity). $f$ is linear if for all $p \in (0, 1)$ and for all $v, w \in V$,
$$f(pv + (1-p)w) = pf(v) + (1-p)f(w).$$
Note that this is weaker than requiring that $f(v) + f(w) = f(v + w)$ for all $v, w \in V$ (additivity).

Definition 2.19. For all $S \in \mathcal{N}$ and $i$, let $m(S, i) = v(S \cup \{i\}) - v(S)$, i.e., the marginal contribution of $i$ to coalition $S$. For all permutations $\pi : N \to N$, let $S(\pi, i) = \{j : \pi(j) < \pi(i)\}$. Note that there are $n!$ different permutations. The Shapley value solution $Sh(\cdot)$ is defined as
$$Sh_i(v) = \frac{1}{n!} \sum_{\pi} m\big(S(\pi, i), i\big). \tag{*}$$
Theorem 2.2 (Shapley). $Sh(v)$ is the only bargaining solution that satisfies PE, symmetry, dummy, and linearity.

Proof. First, we have to show that $Sh(v)$ satisfies the four properties above. To see that $Sh(v)$ satisfies PE, note that for each permutation $\pi$ the marginal contributions telescope: if $i_1, \ldots, i_n$ is the order of the players under $\pi$, then
$$m(\emptyset, i_1) + m(\{i_1\}, i_2) + \cdots + m(N \setminus \{i_n\}, i_n) = [v(\{i_1\}) - v(\emptyset)] + [v(\{i_1, i_2\}) - v(\{i_1\})] + \cdots + [v(N) - v(N \setminus \{i_n\})] = v(N).$$
Then
$$\sum_{i \in N} Sh_i(v) = \frac{1}{n!} \sum_{\pi} v(N) = v(N).$$
By definition, $Sh(v)$ trivially satisfies symmetry, since permuting the original order of the players just rearranges the terms in the summation in (*). The dummy axiom is trivial as well, since if $i$ is a dummy, then $m(S, i) = 0$ for all $S \in \mathcal{N}$. Finally, $m(S, i) = v(S \cup \{i\}) - v(S)$ is linear in $v$, so $Sh_i(v)$ is linear in $v$ for all $i$, i.e., $Sh(v)$ is linear.
Next, we need to show that any $f$ satisfying the 4 axioms above is equal to $Sh(v)$. Let $f$ be an arbitrary bargaining solution that satisfies PE, symmetry, the dummy axiom, and linearity.

Step 1: We want to show that the linearity axiom implies that $f$ is a linear map from $\mathbb{R}^{2^n - 1}$ to $\mathbb{R}^n$. Note that if $v = 0 = (0, 0, \ldots, 0)$, then by PE, $\sum_i f_i(v) = v(N) = 0$, and by symmetry, $f_i(v) = 0$ for each $i$. This implies that $f(0) = 0$. We want to show that for all $\alpha, \beta > 0$ and all $v, w \in V$, $f(\alpha v + \beta w) = \alpha f(v) + \beta f(w)$.

First we show that $f(\alpha v) = \alpha f(v)$. If $\alpha \leq 1$ the proof is immediate from the linearity axiom using the zero vector: $f(\alpha v) = f(\alpha v + (1-\alpha)0) = \alpha f(v) + (1-\alpha)f(0) = \alpha f(v)$. If $\alpha > 1$, then by the linearity axiom,
$$f(v) = f\Big(\tfrac{1}{\alpha}(\alpha v) + \big(1 - \tfrac{1}{\alpha}\big)0\Big) = \tfrac{1}{\alpha} f(\alpha v).$$
Multiplying both sides by $\alpha$ gives the desired result. Next,
$$f(\alpha v + \beta w) = f\Big((\alpha + \beta)\Big(\tfrac{\alpha}{\alpha+\beta} v + \tfrac{\beta}{\alpha+\beta} w\Big)\Big) = (\alpha + \beta)\, f\Big(\tfrac{\alpha}{\alpha+\beta} v + \tfrac{\beta}{\alpha+\beta} w\Big) = \alpha f(v) + \beta f(w),$$
where the second equality comes from the previous result and the third from the linearity axiom. Therefore $f$ is linear.
Step 2: Let $\emptyset \neq R \subseteq N$. Define the $R$-unanimity game $v_R$ as
$$v_R(S) = \begin{cases} 1 & R \subseteq S \\ 0 & \text{otherwise} \end{cases} \qquad \forall S \in \mathcal{N}.$$
Note that any $i \notin R$ is a dummy in game $v_R$. Then since $f$ satisfies the dummy axiom, $f_i(v_R) = 0$ for all $i \notin R$. Since $f$ satisfies symmetry and PE, $f_j(v_R) = \frac{1}{|R|}$ for all $j \in R$. It is important to note that $Sh$ takes exactly the same values on these $R$-unanimity games, since it also has these properties.
Step 3: There are $2^n - 1$ nonempty coalitions, so there are $2^n - 1$ different $R$-unanimity games. We can think of any $v \in V$ as a vector in $\mathbb{R}^{2^n - 1}$, so we can think of $f$ as a map from $\mathbb{R}^{2^n - 1}$ to $\mathbb{R}^n$. We claim that the $\{v_R\}_R$ are linearly independent, i.e., $\sum_R \lambda_R v_R = 0$ implies $\lambda_R = 0$ for all $R$. Suppose not. Then $\sum_R \lambda_R v_R = 0$ with $\lambda_R \neq 0$ for some $R$. Pick $S$ such that $\lambda_S \neq 0$ and $S$ is a minimal coalition among those $R$ with $\lambda_R \neq 0$, i.e., $R \subsetneq S \Rightarrow \lambda_R = 0$. Then by definition of $v_R$,
$$0 = \sum_R \lambda_R v_R(S) = \sum_{R \subseteq S} \lambda_R v_R(S).$$
But since $S$ is minimal, any $R$ with $R \subsetneq S$ has $\lambda_R = 0$. This implies that
$$0 = \lambda_S v_S(S) = \lambda_S \neq 0,$$
which is a contradiction. Therefore the $\{v_R\}_R$ are linearly independent. This implies that $\{v_R\}_R$ is a basis of $\mathbb{R}^{2^n - 1}$, i.e., any $v \in \mathbb{R}^{2^n - 1}$ can be represented as a linear combination of the $\{v_R\}_R$.
Step 4: Since $f$ is linear and $f(0) = 0$, $f$ is uniquely determined by its behavior on the basis $\{v_R\}_R$. Since $f(v_R) = Sh(v_R)$ for all $v_R$, this implies that $f(v) = Sh(v)$ for all $v \in V$.
Note that if $v$ is superadditive, then $i \notin S$ implies that $v(\{i\}) + v(S) \leq v(S \cup \{i\})$. Since $m(S, i) = v(S \cup \{i\}) - v(S)$, this means that $m(S, i) \geq v(\{i\})$. Averaging as in the RHS of (*), we have $Sh_i(v) \geq v(\{i\})$. Thus $Sh(v)$ is individually rational.

Does this generalize to multi-person coalitions, i.e., for $S \in \mathcal{N}$, is it true that $\sum_{i \in S} Sh_i(v) \geq v(S)$? The following counterexample shows that the answer is no.
Example 2.1. Suppose that $n = 3$. Let player 1 have a left-handed glove (L) and let players 2 and 3 each have a right-handed glove (R). This game is characterized by
$$v(S) = \begin{cases} 1 & S \text{ contains at least one L and at least one R} \\ 0 & \text{otherwise} \end{cases}$$
Then we have:
$v(\{i\}) = 0$ for all $i$;
$v(\{2, 3\}) = 0$;
$v(\{1, 2\}) = v(\{1, 3\}) = v(\{1, 2, 3\}) = 1$.
There are 6 permutations of $N$. Player 1's marginal contribution is 0 in the 2 orderings in which he comes first and 1 in the other 4, so
$$Sh_1(v) = \frac{1}{6}(2 \cdot 0 + 4 \cdot 1) = \frac{2}{3}.$$
Then by symmetry and PE, $Sh_2(v) = Sh_3(v) = \frac{1}{6}$. Thus we have
$$Sh_1(v) + Sh_2(v) = \frac{2}{3} + \frac{1}{6} = \frac{5}{6} < v(\{1, 2\}) = 1.$$
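The Shapley values in the glove game can be verified by brute force over all $3!$ orderings. A short Python sketch (my own, not part of the notes), with players indexed 0, 1, 2:

    from itertools import permutations

    def shapley(n, v):
        """Shapley value by direct enumeration of all n! orderings; v maps frozensets to reals."""
        sh = [0.0] * n
        orders = list(permutations(range(n)))
        for order in orders:
            coalition = set()
            for i in order:
                sh[i] += v(frozenset(coalition | {i})) - v(frozenset(coalition))
                coalition.add(i)
        return [x / len(orders) for x in sh]

    # Glove game from Example 2.1: player 0 holds L, players 1 and 2 hold R.
    v = lambda S: 1.0 if 0 in S and (1 in S or 2 in S) else 0.0
    print(shapley(3, v))   # [0.666..., 0.1666..., 0.1666...]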
3 Mechanism design
3.1 King Solomon's dilemma
This is a fun, lengthy example from Kim Sau's class that introduces the concept of mechanism design. In a story from the Bible, King Solomon is presented with the problem of determining which of two women is the true mother of a baby. He proposes to split the baby in half and give one half to each woman. This macabre proposal prompts one of the women to scream while the other remains silent. King Solomon reasons that the true mother would never stand by while her baby was murdered, and thus gives the baby to the woman who screamed.

Call the two women Anna and Beth. Then $I = \{\text{Anna}, \text{Beth}\}$ is the set of players. Let $\Theta = \{\alpha, \beta\}$ be the set of possible states of the world, where $\alpha$ represents the state in which Anna is the true mother and $\beta$ the state in which Beth is the true mother. Let $X = \{A, B, C\}$ represent the set of social choices, where $A$ allocates the baby to Anna, $B$ allocates the baby to Beth, and $C$ cuts the baby in half. Suppose that each woman prefers to get the baby herself over all other alternatives, but the true mother prefers to have the baby allocated to the false mother rather than cut in half, while the false mother prefers to see it cut in half rather than see the true mother get it. In other words, if $\alpha$ is the state of the world, the women's preferences are
Anna: $A \succ B \succ C$
Beth: $B \succ C \succ A$
and if the state is $\beta$, their preferences are
Anna: $A \succ C \succ B$
Beth: $B \succ A \succ C$

In a social choice sense, King Solomon's problem is to find a social choice function $f^* : \Theta \to X$ such that $f^*(\alpha) = A$ and $f^*(\beta) = B$. Thus he introduces the mechanism $\Gamma = (M, g)$, where $M = M_A \times M_B$ is an action space and $g$ is a map from $M$ to $X$ that determines which alternative in $X$ is chosen based on the actions of the players. $\Gamma$ is a sort of meta-game that induces two normal-form games corresponding to the states $\alpha$ and $\beta$. Let $NE(\Gamma, \theta)$ denote the set of Nash equilibria of the game induced by $\Gamma$ when the state of the world is $\theta$. We say that $\Gamma$ Nash-implements $f^*$ if $g(NE(\Gamma, \alpha)) = f^*(\alpha)$ and $g(NE(\Gamma, \beta)) = f^*(\beta)$.
Figure 1 shows the mechanism $\Gamma$ (rows are Anna's actions, columns are Beth's):

                   Beth: scream   Beth: nothing
    Anna: scream        C               A
    Anna: nothing       B               C

Figure 1: The mechanism $\Gamma$

Assigning each woman a utility of 4 to her best outcome, 3 to her middle outcome, and 2 to her worst, the induced games are:

                   Beth: scream   Beth: nothing
    Anna: scream       2, 3            4, 2
    Anna: nothing      3, 4            2, 3

Figure 2: The game induced by $\Gamma$ when the state is $\alpha$

                   Beth: scream   Beth: nothing
    Anna: scream       3, 2            4, 3
    Anna: nothing      2, 4            3, 2

Figure 3: The game induced by $\Gamma$ when the state is $\beta$

In the game $\Gamma_\alpha$, screaming strictly dominates doing nothing for the fake mother Beth, and the true mother Anna's best response when Beth screams is to do nothing. Thus $NE(\Gamma, \alpha) = \{(\text{nothing}, \text{scream})\}$, so $g(NE(\Gamma, \alpha)) = B \neq f^*(\alpha)$. In other words, when Anna is the true mother, the mechanism designed by King Solomon ends up allocating the baby to the fake mother Beth.

Similarly, in the game $\Gamma_\beta$, screaming strictly dominates doing nothing for the fake mother Anna, and doing nothing is a best response for the true mother Beth. Thus $NE(\Gamma, \beta) = \{(\text{scream}, \text{nothing})\}$ and $g(NE(\Gamma, \beta)) = A \neq f^*(\beta)$, i.e., the fake mother Anna gets the baby. This means that King Solomon's mechanism does exactly the opposite of what he intended. In the parlance of mechanism design, $\Gamma$ does not Nash-implement $f^*$.
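The equilibrium analysis of the two induced 2x2 games can be double-checked by enumeration. A minimal Python sketch (my own illustration, not from the notes), using the payoff numbers from Figures 2 and 3:

    # Pure-strategy Nash equilibria of the two games induced by King Solomon's mechanism.
    ACTIONS = ['scream', 'nothing']

    def pure_nash(payoffs):
        """payoffs[(anna, beth)] = (u_Anna, u_Beth); return all pure NE."""
        eq = []
        for a in ACTIONS:
            for b in ACTIONS:
                best_a = all(payoffs[(a, b)][0] >= payoffs[(a2, b)][0] for a2 in ACTIONS)
                best_b = all(payoffs[(a, b)][1] >= payoffs[(a, b2)][1] for b2 in ACTIONS)
                if best_a and best_b:
                    eq.append((a, b))
        return eq

    game_alpha = {('scream', 'scream'): (2, 3), ('scream', 'nothing'): (4, 2),
                  ('nothing', 'scream'): (3, 4), ('nothing', 'nothing'): (2, 3)}
    game_beta  = {('scream', 'scream'): (3, 2), ('scream', 'nothing'): (4, 3),
                  ('nothing', 'scream'): (2, 4), ('nothing', 'nothing'): (3, 2)}
    print(pure_nash(game_alpha))  # [('nothing', 'scream')] -> outcome B, the wrong mother
    print(pure_nash(game_beta))   # [('scream', 'nothing')] -> outcome A, again the wrong mother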
3.2 Preliminaries
As usual, we will let $I = \{1, \ldots, n\}$ and $X$ denote the sets of players and social choices respectively. For each $i \in I$, let $\Theta_i$ denote the set of player $i$'s possible types. Then $\Theta = \prod_{i \in I} \Theta_i$ is the set of possible type profiles, i.e., $\theta = (\theta_1, \ldots, \theta_n)$ is a type profile. Let $\mu$ denote the probability distribution on $\Theta$.

Definition 3.1. A mechanism is $\Gamma = (M, g)$ where $M = \prod_{i \in I} M_i$ is the set of action profiles and $g : M \to X$ determines the outcome. The quadruple $(M, g; \Theta, \mu)$ induces an incomplete information game.
Preferences in this context can be thought of as maps $\succsim_i : \Theta_i \to \mathcal{R}$ (or utility functions $u_i : X \times \Theta_i \to \mathbb{R}$). It might also be that preferences or utilities depend on the types of the other players as well, i.e., $\succsim_i$ maps from $\Theta$ rather than just $\Theta_i$ ($u_i$ maps from $X \times \Theta$ rather than just $X \times \Theta_i$). This scenario is called an interdependent-value setting, while the first scenario is called a private-value setting.
Definition 3.2. A strategy for player $i$ is a mapping $\sigma_i : \Theta_i \to M_i$.

Definition 3.3. Similar to the previous section, a social choice function is a map $f : \Theta \to X$.
Definition 3.4. The mechanism $\Gamma = (M, g)$ implements SCF $f$ in Z-equilibrium if there exists a Z-equilibrium $\sigma^*$ such that for all $\theta$ with $\mu(\theta) > 0$,
$$g\big((\sigma^*_i(\theta_i))_{i \in I}\big) = f(\theta),$$
where Z might be dominant-strategy, Bayesian Nash, etc. Note that $\sigma^*_i(\theta_i)$ is an action $m_i \in M_i$, so we really have $g(m_1, \ldots, m_n) = f(\theta)$ where $m_i = \sigma^*_i(\theta_i)$ for all $i \in I$.

$\Gamma$ strongly implements $f$ in Z-equilibrium if there exists a Z-equilibrium, and for all Z-equilibria $\sigma^*$ and all $\theta$ with $\mu(\theta) > 0$, $g((\sigma^*_i(\theta_i))_{i \in I}) = f(\theta)$.
3.3 Implementation in dominant-strategy equilibrium
Definition 3.5. $\Gamma$ implements $f$ in dominant-strategy equilibrium if there exists $\sigma^*$ such that $\sigma^*$ is a d.s. equilibrium, i.e., for all $i \in I$, $\theta_i \in \Theta_i$, $m_i \in M_i$, $\theta_{-i} \in \Theta_{-i}$, and $\sigma_{-i}$,
$$g\big(\sigma^*_i(\theta_i), \sigma_{-i}(\theta_{-i})\big) \ \succsim_i(\theta_i) \ g\big(m_i, \sigma_{-i}(\theta_{-i})\big),$$
and implementation holds: for all $\theta$, $g(\sigma^*(\theta)) = f(\theta)$.
Given a social choice function f, we want to know whether or not it is d.s.-implementable.
Definition 3.6. Given SCF $f$, the direct mechanism is $\Gamma^{\text{direct}} = (\Theta, f)$, i.e., $M_i = \Theta_i$ and $g = f$. Note that all other mechanisms are indirect.
Theorem 3.1 (Revelation principle for d.s. equilibrium). If there exists a mechanism that implements $f$ in d.s. equilibrium, then $f$ can be implemented in d.s. equilibrium with the direct mechanism, with truth-telling as the dominant strategy (i.e., $f$ is strategy-proof).
Proof. Let $\Gamma = (M, g)$ be a mechanism that implements $f$ in d.s. equilibrium, and let $\sigma^*$ be the d.s. equilibrium such that $g(\sigma^*(\theta)) = f(\theta)$ for all $\theta$. Then by definition of d.s. equilibrium, for all $i \in I$, $\theta_i \in \Theta_i$, $m_i \in M_i$, $\theta_{-i}$, and $\sigma_{-i}$,
$$g\big(\sigma^*_i(\theta_i), \sigma_{-i}(\theta_{-i})\big) \ \succsim_i(\theta_i) \ g\big(m_i, \sigma_{-i}(\theta_{-i})\big).$$
Since this holds for every possible strategy of the other players, we can substitute $\sigma^*_{-i}$ for $\sigma_{-i}$. Similarly, since $\sigma^*_i(\theta'_i) \in M_i$ for every $\theta'_i \in \Theta_i$, we can substitute $\sigma^*_i(\theta'_i)$ for $m_i$. Thus we have, for all $i \in I$, $\theta_i, \theta'_i \in \Theta_i$, and $\theta_{-i} \in \Theta_{-i}$,
$$g\big(\sigma^*_i(\theta_i), \sigma^*_{-i}(\theta_{-i})\big) \ \succsim_i(\theta_i) \ g\big(\sigma^*_i(\theta'_i), \sigma^*_{-i}(\theta_{-i})\big).$$
By the definition of $\sigma^*$, this implies that for all $i \in I$, $\theta_i, \theta'_i \in \Theta_i$, and $\theta_{-i} \in \Theta_{-i}$,
$$f(\theta) \ \succsim_i(\theta_i) \ f(\theta'_i, \theta_{-i}).$$
Thus $f$ is strategy-proof, so $f$ can be implemented by the direct mechanism with truth-telling as the d.s. equilibrium.
Note that there is no revelation principle for strong/full implementation. The following example illustrates this point.
Example 3.1. Let $I = \{1, 2\}$, $X = \{a, b, c, d\}$, $\Theta_1 = \{\theta_1, \theta'_1\}$, and $\Theta_2 = \{\theta_2\}$. Let the players' preferences be as follows:
$\theta_1$: $a \sim b \succ c \succ d$
$\theta'_1$: $a \sim b \succ d \succ c$
$\theta_2$: $a \sim b \succ c \succ d$
Let $\Gamma$ be defined by the outcome matrix in the following figure (rows are player 1's actions, columns player 2's):

            $m_2$   $m'_2$
    $m_1$     a       c
    $m'_1$    b       d

Figure 4: Indirect mechanism $\Gamma$

Player 2's dominant strategy is to always play $m_2$. However, when player 1's type is $\theta_1$, his dominant strategy is $m_1$, while when his type is $\theta'_1$, his dominant strategy is $m'_1$. Thus the d.s. equilibrium is $\sigma^*$ defined by $\sigma^*(\theta_1, \theta_2) = (m_1, m_2)$ and $\sigma^*(\theta'_1, \theta_2) = (m'_1, m_2)$.

Let the SCF $f$ be defined as $f(\theta_1, \theta_2) = a$ and $f(\theta'_1, \theta_2) = b$. Then clearly $\Gamma$ fully implements $f$. The direct mechanism in this context is

    $\theta_1 \mapsto a$
    $\theta'_1 \mapsto b$

Figure 5: Direct mechanism

Note that truth-telling is a dominant strategy for player 1 (player 2 has only one type, so we do not need to think about him). Thus the direct mechanism implements $f$ with truth-telling as the d.s. equilibrium. However, since $a \sim_1(\theta_1) b$ and $a \sim_1(\theta'_1) b$, there is no strict incentive for player 1 to tell the truth in the direct mechanism either. So there is another d.s. equilibrium of the direct mechanism in which player 1 lies, and its outcome differs from $f$. Thus the direct mechanism does not fully implement $f$.
Theorem 3.2 (Gibbard-Satterthwaite). If $\#X \geq 3$, $f$ is onto, $\{\succsim_i(\theta_i) : \theta_i \in \Theta_i\} = \mathcal{P}$ for each $i$, and $f$ is implementable in d.s. equilibrium, then $f$ is dictatorial, i.e., there exists $i \in I$ such that for all $\theta \in \Theta$,
$$f(\theta) \in \{x \in X : x \succsim_i(\theta_i) \, y \ \forall y \in X\}.$$

Proof. Recall theorem 1.6: if $\#X \geq 3$, $f$ is weakly Paretian, and $f$ is MM, then $f$ is dictatorial. By the revelation principle, since $f$ is implementable in d.s. equilibrium, $f$ is strategy-proof. By theorem 1.5, since the image of each $\succsim_i$ is $\mathcal{P}$, $f$ is MM.

All we need to do now is show that $f$ is weakly Paretian. Suppose not. Then there exists $\theta$ such that $x \succ_i(\theta_i) y$ for all $i \in I$ but $f(\theta) = y$. Pick $\theta'$ such that $f(\theta') = x$ (note that we can do this since $f$ is onto). Construct $\theta''$ such that for all $i \in I$, $x \succ_i(\theta''_i) y \succ_i(\theta''_i) z$ for all $z \in X \setminus \{x, y\}$. By MM, $f(\theta'') = y$ if we go from $\theta$ to $\theta''$. But MM also implies that $f(\theta'') = x$ if we go from $\theta'$ to $\theta''$. This is a contradiction, so it must be that $f$ is weakly Paretian.

Since $\#X \geq 3$, $f$ is weakly Paretian, and $f$ is MM, theorem 1.6 implies that $f$ is dictatorial. QED.
Note that if the image of $f$ has only 2 elements $x$ and $y$, majority voting satisfies all the required properties and is not dictatorial. If we restrict the second assumption of the theorem so that the image of $\succsim_i$ is the set of single-peaked preferences, then the median-voter SCF works when $I$ is odd; we can generalize to even $I$ by adding dummy voters.
3.4 Quasi-linear preferences and auctions
Let $X = K \times \mathbb{R}^I$, where $K$ is some finite set of public choices and each element of $\mathbb{R}^I = \mathbb{R} \times \cdots \times \mathbb{R}$ is a vector of transfers. Then we can think of a social choice as $x = (k, t_1, \ldots, t_I)$, where $t_i$ is a (monetary) transfer to player $i$. We will assume private-value preferences that are quasi-linear, i.e.,
$$u_i(x, \theta_i) = v_i(k, \theta_i) + t_i,$$
where $v_i$ is player $i$'s valuation of option $k$ given type $\theta_i$.
Definition 3.7. $k(\cdot)$ is implementable (in d.s. equilibrium) if there exists $t(\cdot)$ such that $f(\theta) := (k(\theta), t(\theta))$ is implementable (in the usual sense of the last subsection). We say that $t(\cdot)$ implements $k(\cdot)$.

In an auction setting, we can think of $K$ as
$$K = \Big\{(y_1, \ldots, y_n) : y_i \in \{0, 1\} \ \forall i, \ \sum_{i \in I} y_i = 1\Big\}.$$
In other words, $K$ represents the set of possible allocations of a single object among the players. We will usually think of $\Theta_i$ as an interval $[0, \bar{\theta}_i]$, and preferences will be specified even further:
$$v_i(k, \theta_i) = v_i(y_i, \theta_i) = \theta_i y_i.$$
Thus a player receives utility of $\theta_i$ if he is allocated the object and zero utility otherwise. The first thing we want to know is: what kind of allocation functions are implementable?
Example 3.2. Suppose that $\#I = 2$ and $k(\cdot)$ is such that for some $\theta_2$ and $\theta_1 < \theta'_1$, $y_1(\theta_1, \theta_2) = 1$ and $y_1(\theta'_1, \theta_2) = 0$. Suppose that $k(\cdot)$ is implementable. Then there exists $t(\cdot)$ such that
$$\theta_1 + t_1(\theta_1, \theta_2) \geq 0 + t_1(\theta'_1, \theta_2)$$
and
$$0 + t_1(\theta'_1, \theta_2) \geq \theta'_1 + t_1(\theta_1, \theta_2).$$
Adding these inequalities together, we get $\theta_1 \geq \theta'_1$, which is a contradiction.

This leads us to the observation that $k(\cdot)$ is implementable only if for each $i \in I$ and each $\theta_{-i} \in \Theta_{-i}$, there exists $\theta^*_i$ such that
$$y_i(\theta_i, \theta_{-i}) = \begin{cases} 1 & \theta_i > \theta^*_i \\ 0 & \theta_i < \theta^*_i \end{cases}$$
We call this type of allocation function a monotone allocation function.
This naturally leads one to ask if all monotone allocation functions are implementable.
Theorem 3.3 (Taxation principle). Suppose $t(\cdot)$ implements $k(\cdot)$. If $k(\theta_i, \theta_{-i}) = k(\theta'_i, \theta_{-i})$, then $t_i(\theta_i, \theta_{-i}) = t_i(\theta'_i, \theta_{-i})$. Hence $t_i$ can be written as
$$t_i(\theta_i, \theta_{-i}) = \tau_i\big(k(\theta_i, \theta_{-i}), \theta_{-i}\big).$$

Proof. Suppose not. Then player $i$ will lie at either $\theta_i$ or $\theta'_i$: since the allocation is the same either way, he will simply report whichever type gives him the higher transfer.
Combined with the monotonicity condition, this implies that transfers must take the following form:
$$t_i(\theta_i, \theta_{-i}) = \begin{cases} \bar{\tau} & \theta_i < \theta^*_i \\ \underline{\tau} & \theta_i > \theta^*_i \end{cases}$$
Further, it must be that $\bar{\tau} > \underline{\tau}$; otherwise player $i$ will lie when his type is less than $\theta^*_i$. Even more specifically, $\theta^*_i = \bar{\tau} - \underline{\tau}$, since player $i$ must be indifferent between getting the object and not getting it at $\theta_i = \theta^*_i$ (otherwise he will lie).

To summarize: suppose $k(\cdot)$ is monotone and suppose $t(\cdot)$ implements $k(\cdot)$. Then $t(\cdot)$ takes the following form: for all $i \in I$ and $\theta_{-i} \in \Theta_{-i}$,
$$t_i(\theta_i, \theta_{-i}) = \begin{cases} \bar{\tau} & \theta_i < \theta^*_i \\ \bar{\tau} - \theta^*_i & \theta_i > \theta^*_i \end{cases}$$
where $\theta^*_i$ is a function of $\theta_{-i}$, i.e.,
$$\theta^*_i = \theta^*_i(\theta_{-i}) = \inf\{\theta_i \in \Theta_i : y_i(\theta_i, \theta_{-i}) = 1\}.$$
Theorem 3.4. $k(\cdot)$ is implementable if and only if it is monotone. If it is monotone, then it can be implemented as above.
Example 3.3. Let $k^*(\theta) = (y^*_1(\theta), \ldots, y^*_n(\theta))$, where
$$y^*_i(\theta) = \begin{cases} 0 & \theta_i < \max_{j \neq i} \theta_j \\ 1 & \theta_i > \max_{j \neq i} \theta_j \end{cases}$$
Note that this describes more than one $k^*$, since ties can be broken in many different ways. We can implement this allocation function using the following transfers:
$$t_i(\theta_i, \theta_{-i}) = \begin{cases} 0 & \theta_i < \max_{j \neq i} \theta_j \\ -\max_{j \neq i} \theta_j & \theta_i > \max_{j \neq i} \theta_j \end{cases}$$
This is the second-price auction.

Note that the second-price auction also satisfies the extra requirement of individual rationality:
$$\theta_i y_i(\theta) + t_i(\theta) \geq 0 \quad \forall i \in I, \ \theta \in \Theta.$$
This ensures that all players are willing to participate.
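As a sanity check, the second-price auction can be simulated as a direct mechanism and truthfulness verified by brute force on a small grid. This Python sketch is my own illustration (not from the notes); the grid of types is hypothetical.

    # Second-price auction as a direct mechanism: report types, highest report wins,
    # winner pays the highest competing report. Brute-force check that truth-telling
    # is a dominant strategy on a small grid of types.
    def second_price(reports):
        winner = max(range(len(reports)), key=lambda i: reports[i])
        price = max(r for i, r in enumerate(reports) if i != winner)
        return winner, price

    def utility(i, theta_i, reports):
        winner, price = second_price(reports)
        return theta_i - price if winner == i else 0.0

    grid = [0.0, 0.25, 0.5, 0.75, 1.0]
    for theta_1 in grid:
        for theta_2 in grid:                       # opponent's report, whatever it is
            truthful = utility(0, theta_1, [theta_1, theta_2])
            assert all(utility(0, theta_1, [lie, theta_2]) <= truthful + 1e-12 for lie in grid)
    print("truthful bidding is a dominant strategy on the grid")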
3.4.1 Optimal auction design
Which auction design leads to the highest revenue for the auctioneer? The auctioneer's problem is
$$\max_{k(\cdot),\, t(\cdot)} \ E\Big[-\sum_{i \in I} t_i(\theta)\Big]$$
subject to
(i) $f(\theta)$ is strategy-proof: for all $i \in I$, $\theta$, and $\theta'_i \in \Theta_i$,
$$v_i(k(\theta), \theta_i) + t_i(\theta) \geq v_i(k(\theta'_i, \theta_{-i}), \theta_i) + t_i(\theta'_i, \theta_{-i})$$
(the dominant-strategy incentive compatibility constraint);
(ii) individual rationality: for all $i \in I$ and $\theta$,
$$v_i(k(\theta), \theta_i) + t_i(\theta) \geq 0.$$
Given what we already know, this problem collapses to
$$\max_{k(\cdot)} \ E\Big[-\sum_{i \in I} t_i(\theta)\Big]$$
subject to
(i) $k(\cdot)$ is monotone;
(ii) transfers are given by
$$t_i(\theta_i, \theta_{-i}) = \begin{cases} 0 & \theta_i < \theta^*_i(\theta_{-i}) \\ -\theta^*_i(\theta_{-i}) & \theta_i > \theta^*_i(\theta_{-i}) \end{cases}$$
where $\theta^*_i(\theta_{-i}) = \inf\{\theta_i \in \Theta_i : y_i(\theta_i, \theta_{-i}) = 1\}$.
3.4.2 Revenue equivalence
For simplicity, consider the 1-agent case, where $\Theta = \Theta_i$ and $k : \Theta \to \{0, 1\}$. We say that $k(\cdot)$ satisfies revenue equivalence if whenever both $(k, t)$ and $(k, \tilde{t})$ are incentive compatible, there exists $c \in \mathbb{R}$ such that $t(\theta) = \tilde{t}(\theta) + c$ for all $\theta \in \Theta$.

An important underlying assumption that guarantees revenue equivalence is connectedness of $\Theta$; in particular, the sets $\{\theta : y(\theta) = 0\}$ and $\{\theta : y(\theta) = 1\}$ must touch (their closures intersect). If this is not the case, we can find transfer schemes $t$ and $\tilde{t}$ that both implement $k$ but do not differ by a constant.

Example 3.4. Let $\Theta = [0, 1] \cup [2, 3]$ and let $k(\cdot)$ be defined by $y(\theta) = 0$ for $\theta \in [0, 1]$ and $y(\theta) = 1$ for $\theta \in [2, 3]$. Then the transfer schemes
$$t(\theta) = \begin{cases} 0 & \theta \in [0, 1] \\ -1.5 & \theta \in [2, 3] \end{cases} \qquad \text{and} \qquad \tilde{t}(\theta) = \begin{cases} 0 & \theta \in [0, 1] \\ -1.7 & \theta \in [2, 3] \end{cases}$$
both implement $k$. Thus $k$ is not revenue equivalent.
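The incentive-compatibility of both schemes is easy to confirm on a grid of types. This Python sketch is my own check (not from the notes); the grid is hypothetical.

    # IC check for Example 3.4: with the disconnected type space [0,1] and [2,3], two transfer
    # schemes that do not differ by a constant both implement the same allocation rule.
    y = lambda th: 0 if th <= 1 else 1

    def incentive_compatible(t, types):
        return all(th * y(th) + t(th) >= th * y(rep) + t(rep) for th in types for rep in types)

    types = [k / 10 for k in range(11)] + [2 + k / 10 for k in range(11)]
    t1 = lambda th: 0.0 if th <= 1 else -1.5
    t2 = lambda th: 0.0 if th <= 1 else -1.7
    print(incentive_compatible(t1, types), incentive_compatible(t2, types))   # True True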
3.4.3 Auctions with two (or more) items
What if there are two different objects, A and B? Then there are $\#K = (\#I + 1)(\#I)$ different ways to allocate the two objects among the agents (assuming each agent can have at most one object), i.e., $k \in K$ can be written as $k = (k_A, k_B)$, where $k_A$ identifies who receives A and $k_B$ who receives B.

An efficient allocation $k^*$ solves
$$\max_{k \in K} \ v_{k_A}(A, \theta_{k_A}) + v_{k_B}(B, \theta_{k_B}).$$
Given $\theta_{-i}$, player $i$ receives a transfer of $\tau_\emptyset$, $\tau_A$, or $\tau_B$ depending on whether he is allocated nothing, A, or B. Thus instead of a single cutoff $\theta^*_i$, there are cutoffs $\theta^*_{iA}$ and $\theta^*_{iB}$, where $\theta^*_{iA} = \tau_\emptyset - \tau_A$ and $\theta^*_{iB} = \tau_\emptyset - \tau_B$. Further, $\theta^*_{iB}$ solves
$$\underbrace{\max_{k \in K:\ k_A \neq i \neq k_B} \ \sum_{j \neq i} v_j(k, \theta_j)}_{V_{-i}(\emptyset)} + 0 \ = \ \underbrace{\max_{k \in K:\ k_B = i} \ \sum_{j \neq i} v_j(k, \theta_j)}_{V_{-i}(B)} + \theta^*_{iB}.$$
In other words, $\theta^*_{iA} = V_{-i}(\emptyset) - V_{-i}(A)$ and $\theta^*_{iB} = V_{-i}(\emptyset) - V_{-i}(B)$. We can think of $V_{-i}(\emptyset)$ as the total utility that the rest of the bidders receive if player $i$ takes nothing, and $V_{-i}(B)$ as the total utility that the other players get if player $i$ takes object B. Thus $V_{-i}(\emptyset) \geq V_{-i}(B)$.

Note that $\tau_\emptyset$ can be a function of $\theta_{-i}$; the process above only pins down the differences $\tau_\emptyset - \tau_A$ and $\tau_\emptyset - \tau_B$. We also have to check that this transfer payment structure together with $k^*$ is strategy-proof.
3.4.4 Vickrey-Clarke-Groves mechanism
As usual, let K = 1, . . . , #K and let =
1
. . .
I
, where each
i
is a connected subset of R
#K
. An
ecient allocation is
k

argmax
kK

iI
v
i
(k,
i
).
The planner can implement k

in the following way:


Ask each i to tell me your
i
.
Calculate k

().
22
Give each player a base salary of
i
(
i
).
If

k is chosen, give back
max
kK

j=i
v
j
(k,
j
)

j=i
v
j
(

k,
j
).
This is called the Vickrey-Clarke-Groves mechanism.
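Below is a minimal Python sketch of the mechanism (illustrative code; the names are not from the notes), taking the base salary $h_i(\theta_{-i}) = 0$ so that each player's transfer is simply minus the externality he imposes on the others. Applied to a single object it reproduces the second-price auction.

def vcg(v, K, theta):
    """v(i, k, theta_i): player i's value for outcome k given type theta_i.
    Returns an efficient outcome k* and transfers
    t_i = sum_{j != i} v_j(k*, theta_j) - max_k sum_{j != i} v_j(k, theta_j)."""
    players = list(theta)

    def welfare(k, exclude=None):
        return sum(v(j, k, theta[j]) for j in players if j != exclude)

    k_star = max(K, key=welfare)
    transfers = {}
    for i in players:
        best_without_i = max(welfare(k, exclude=i) for k in K)
        transfers[i] = welfare(k_star, exclude=i) - best_without_i   # always <= 0
    return k_star, transfers

# Single-object auction: the outcome k names the winner.
theta = {1: 0.3, 2: 0.9, 3: 0.5}
print(vcg(lambda i, k, th: th if k == i else 0.0, list(theta), theta))
# -> (2, {1: 0.0, 2: -0.5, 3: 0.0}), i.e., the second-price auction.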
To see that this transfer scheme indeed implements $k^*$, note that given any $\theta_{-i}$, if player $i$ tells the truth, he gets $v_i(k^*(\theta_i, \theta_{-i}), \theta_i) + t_i(\theta_i, \theta_{-i})$. If he lies and reports $\theta'_i$, he gets $v_i(k^*(\theta'_i, \theta_{-i}), \theta_i) + t_i(\theta'_i, \theta_{-i})$. Thus we have
\[
\text{truth} = v_i(k^*(\theta_i, \theta_{-i}), \theta_i) + h_i(\theta_{-i})
- \Big[ \max_{k \in K} \sum_{j \neq i} v_j(k, \theta_j) - \sum_{j \neq i} v_j(k^*(\theta_i, \theta_{-i}), \theta_j) \Big]
\]
and
\[
\text{lie} = v_i(k^*(\theta'_i, \theta_{-i}), \theta_i) + h_i(\theta_{-i})
- \Big[ \max_{k \in K} \sum_{j \neq i} v_j(k, \theta_j) - \sum_{j \neq i} v_j(k^*(\theta'_i, \theta_{-i}), \theta_j) \Big].
\]
Thus the difference in player $i$'s utility from telling the truth rather than lying is
\[
\text{truth} - \text{lie} = \sum_{j \in I} v_j(k^*(\theta_i, \theta_{-i}), \theta_j) - \sum_{j \in I} v_j(k^*(\theta'_i, \theta_{-i}), \theta_j) \geq 0
\]
by definition of $k^*$.
Theorem 3.5. Suppose that for all $i \in I$, $\Theta_i$ is a connected subset of $\mathbb{R}^{\#K}$. Then if $k^*$ is an efficient allocation rule, the transfer function $t(\theta)$ implements $k^*$ in d.s. equilibrium if and only if $t(\theta)$ is a Groves scheme.
Corollary. If $\Theta_i = \mathbb{R}^{\#K}$ for all $i$ and all the other assumptions of the previous theorem hold, then $t(\theta)$ implements $k^*$ in d.s. equilibrium if and only if $t(\theta)$ is a Groves scheme.
Proof of Corollary. $\mathbb{R}^{\#K}$ is a connected subset of itself.
3.4.5 Zero-sum transfers and implementation
Suppose there is a single object that we want to allocate among two bidders and we want to keep the transfers zero-sum, i.e., $t_1(\theta) = -t_2(\theta)$. For simplicity, let $\Theta_1 = \Theta_2 = [0, 1]$. Suppose that a zero-sum transfer scheme $t(\theta)$ implements some monotone allocation function $k$.
Let $\theta_i < \theta'_i$ for $i = 1, 2$. Construct three type profiles $\theta$, $\theta'$, and $\theta''$ as follows:
$\theta = (\theta_1, \theta_2)$,
$\theta' = (\theta'_1, \theta_2)$,
$\theta'' = (\theta'_1, \theta'_2)$.
By the taxation principle, $t_1(\theta) = t_1(\theta')$. Similarly, the taxation principle implies that $t_2(\theta') = t_2(\theta'')$, and hence, since the transfers are zero-sum, $t_1(\theta') = t_1(\theta'')$. This means that $t_1(\theta) = t_1(\theta') = t_1(\theta'')$. Thus there are two numbers, say $a$ and $b$, such that player 1 gets a transfer of $a$ when $y_1 = 0$ and $b$ when $y_1 = 1$, regardless of $\theta$. In other words, $a$ and $b$ are the same for all $\theta_2$. This contradicts the fact that $k$ must be monotone in order to be implementable.
For a more concrete example, let player 1 get the object if $\theta_1 > \theta_2$ and vice versa. Then the threshold price faced by player 2 is $\theta_1$ at the profile $\theta$ and $\theta'_1$ at the profile $\theta'$. But by the argument above the same constant must equal both thresholds, so $\theta_1$ and $\theta'_1$ would have to coincide. This contradicts the fact that $\theta_1 < \theta'_1$.
3.5 Bayesian Nash equilibrium implementation
Again, a mechanism is $\Gamma = (M, g)$. Let $G = (\Gamma, \Theta, \pi)$ denote the incomplete information game associated with $\Gamma$.
Definition 3.8. A Bayesian Nash equilibrium of $G$ is a strategy profile $\sigma^* = (\sigma^*_i)_{i \in I}$ (where $\sigma^*_i : \Theta_i \to M_i$) such that $\forall i \in I$, $\forall \theta_i \in \Theta_i$, $\forall m_i \in M_i$,
\[
E_{\theta_{-i}}\big[ u_i(g(\sigma^*_i(\theta_i), \sigma^*_{-i}(\theta_{-i})), \theta_i) \mid \theta_i \big]
\geq
E_{\theta_{-i}}\big[ u_i(g(m_i, \sigma^*_{-i}(\theta_{-i})), \theta_i) \mid \theta_i \big].
\]
We will use BNE(G) to denote the set of Bayesian Nash equilibria of G.
Definition 3.9. $\Gamma$ implements social choice function $f : \Theta \to X$ in Bayesian Nash equilibrium if BNE$(G) \neq \emptyset$ and $\exists \sigma^* \in$ BNE$(G)$ (where $G = (\Gamma, \Theta, \pi)$) such that $\forall \theta \in \operatorname{support}(\pi)$,
\[
g(\sigma^*(\theta)) = f(\theta).
\]
As in the d.s. equilibrium section, $\Gamma$ strictly/fully implements $f$ if the above condition holds for all $\sigma^* \in$ BNE$(G)$ rather than just one of them.
Theorem 3.6 (Revelation principle for BN implementation). If $f$ is BN-implementable, then $f$ is BN-implementable by the direct mechanism $\Gamma_{\text{direct}} = (\Theta, f)$ with truth-telling as a BNE.
Proof. Suppose $\Gamma$ BN-implements $f$. Let $\sigma^*$ be the BNE such that $g(\sigma^*(\theta)) = f(\theta)$ for all $\theta \in \operatorname{support}(\pi)$. By definition of a BNE, $\forall i \in I$, $\forall \theta_i$, $\forall m_i \in M_i$,
\[
E_{\theta_{-i}}\big[ u_i(g(\sigma^*_i(\theta_i), \sigma^*_{-i}(\theta_{-i})), \theta_i) \mid \theta_i \big]
\geq
E_{\theta_{-i}}\big[ u_i(g(m_i, \sigma^*_{-i}(\theta_{-i})), \theta_i) \mid \theta_i \big].
\]
Since $g(\sigma^*(\theta)) = f(\theta)$ for all $\theta \in \operatorname{support}(\pi)$, we have $\forall i \in I$, $\forall \theta_i$, $\forall \theta'_i \in \Theta_i$,
\[
E_{\theta_{-i}}\big[ u_i(f(\theta_i, \theta_{-i}), \theta_i) \mid \theta_i \big]
\geq
E_{\theta_{-i}}\big[ u_i(f(\theta'_i, \theta_{-i}), \theta_i) \mid \theta_i \big].
\]
Thus truth-telling is a BNE in the direct game, i.e., $f$ is Bayesian incentive compatible.
Definition 3.10. If $f$ is strategy-proof, we say that $f$ is d.s. incentive compatible. If $f$ is BN-implementable, we say that $f$ is Bayesian incentive compatible. Note that d.s. IC implies BN IC, but not the other way around. In terms of set notation, BN IC $\supseteq$ d.s. IC.
Because BN implementation is a weaker condition, it seems reasonable that a wider variety of social choice functions could be BN-implemented. Exactly which SCFs can be BN-implemented depends on the assumptions about the probability distribution $\pi$. If $\pi$ is a product measure (i.e., the $\theta_i$ are independently distributed), then the kinds of SCFs that can be BN-implemented are pretty similar to the ones that can be d.s. implemented (in this case there is a similar monotonicity condition that we will see shortly). However, if $\pi$ is not a product measure, then under weak regularity conditions virtually all SCFs can be implemented. In this case, the transfer function that implements the SCF will often take a complex structure and be very sensitive to parameters.
In a mathematical sense product measures are rare (non-generic), and in an economic sense it is often unrealistic to assume that $\pi$ is a product measure. But for better or worse, in most of the mechanism design literature it is assumed that $\pi$ is a product measure. Thus we will also make this assumption throughout the rest of this section.
Definition 3.11. Let $\pi$ be a product measure and suppose that $t(\theta)$ implements $k(\theta) = (y_1(\theta), \dots, y_n(\theta))$. For each $i \in I$, define $\bar{t}_i(\theta_i)$ and $\bar{y}_i(\theta_i)$ as
\[
\bar{t}_i(\theta_i) := E_{\theta_{-i}}[t_i(\theta_i, \theta_{-i})]
\]
and
\[
\bar{y}_i(\theta_i) := E_{\theta_{-i}}[y_i(\theta_i, \theta_{-i})].
\]
These definitions allow us to write the BN incentive compatibility condition as
\[
\bar{y}_i(\theta_i)\,\theta_i + \bar{t}_i(\theta_i) \geq \bar{y}_i(\theta'_i)\,\theta_i + \bar{t}_i(\theta'_i), \qquad \forall \theta_i, \theta'_i \in \Theta_i.
\]
Note that we don't have a separate IC condition for each $\theta_{-i}$ as we did in the d.s. implementation case. Instead, we just have one constraint given the averages $\bar{y}$ and $\bar{t}$ implied by $\pi$.
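As an illustration of these interim averages (an illustrative sketch: two bidders, the efficient allocation, second-price transfers, and types i.i.d. uniform on $[0,1]$), we have $\bar{y}_i(\theta_i) = \theta_i$ and $\bar{t}_i(\theta_i) = -\theta_i^2/2$, which the following Monte Carlo check confirms.

import numpy as np

rng = np.random.default_rng(0)
theta_j = rng.uniform(0, 1, 200_000)          # draws of the opponent's type

def interim(theta_i):
    wins = theta_j < theta_i                  # i wins when his type is higher
    y_bar = wins.mean()                       # approximately theta_i
    t_bar = -(theta_j * wins).mean()          # approximately -theta_i**2 / 2
    return y_bar, t_bar

for th in (0.25, 0.5, 0.9):
    print(th, interim(th))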
Theorem 3.7. $k(\theta)$ is BN-implementable only if it is monotone in the sense that for each $i \in I$, $\bar{y}_i(\theta_i)$ is weakly increasing in $\theta_i$.
Proof. Suppose not. Then $\exists i \in I$ and $\theta_i, \theta'_i \in \Theta_i$ such that $\theta_i < \theta'_i$ but $\bar{y}_i(\theta_i) > \bar{y}_i(\theta'_i)$. Let $t(\theta)$ be a transfer function that BN-implements $k(\theta)$. If player $i$'s type is $\theta_i$, then incentive compatibility implies that
\[
\bar{y}_i(\theta_i)\,\theta_i + \bar{t}_i(\theta_i) \geq \bar{y}_i(\theta'_i)\,\theta_i + \bar{t}_i(\theta'_i).
\]
If his type is $\theta'_i$, then IC implies that
\[
\bar{y}_i(\theta'_i)\,\theta'_i + \bar{t}_i(\theta'_i) \geq \bar{y}_i(\theta_i)\,\theta'_i + \bar{t}_i(\theta_i).
\]
Adding both inequalities together and cancelling the transfers yields
\[
\bar{y}_i(\theta'_i)(\theta'_i - \theta_i) \geq \bar{y}_i(\theta_i)(\theta'_i - \theta_i).
\]
Then $\theta'_i - \theta_i > 0$ cancels out as well, implying that
\[
\bar{y}_i(\theta'_i) \geq \bar{y}_i(\theta_i).
\]
This is a contradiction, so it must be that $k(\theta)$ is monotone.
This leads us to the following question: are all monotonic $k(\theta)$ BN-implementable? In the last section, the fact that $K$ was finite implied that $y_i$ took only finitely many values. Here, we are dealing with $\bar{y}_i$, which is a probability. Thus the benefit of dealing with a finite $K$ is gone, i.e., we will now consider the most general case:
\[
K = \Big\{ k = (y_1, \dots, y_n) \in [0, 1]^n : \sum_{i \in I} y_i \leq 1 \Big\}.
\]
Given type $\theta_i$ and any report $m \in M_i$, player $i$ gets expected utility of
\[
p(m)\,\theta_i + t(m),
\]
which is a linear function of $\theta_i$. Thus player $i$'s optimal strategy is
\[
m^* = \operatorname{argmax}\{\, p(m)\,\theta_i + t(m) : m \in M_i \,\}.
\]
Since $M_i = \Theta_i$ in the direct mechanism, this is equivalent to
\[
\theta^*_i = \operatorname{argmax}\{\, p(\theta'_i)\,\theta_i + t(\theta'_i) : \theta'_i \in \Theta_i \,\}.
\]
In other words, player $i$ will claim to have whichever type puts him on the upper envelope of the set of lines given above. Call this upper envelope $U_i(\theta_i)$.
Since $U_i(\theta_i)$ is the upper envelope of a set of linear functions, $U_i(\theta_i)$ is convex and continuous. This implies that it is differentiable almost everywhere, and $U_i(\theta_i)$ is equal to the integral of its own derivative, i.e.,
\[
U_i(\theta_i) = U_i(x) + \int_x^{\theta_i} U'_i(y)\, dy.
\]
Note that $U'_i(y) = p(m)$ for some $m \in M_i$.
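As a small illustration of the envelope (the menu of probability-transfer pairs below is hypothetical): with finitely many reports, $U_i$ is the pointwise maximum of the lines $\theta_i \mapsto p(m)\theta_i + t(m)$, hence convex and piecewise linear, and its slope at $\theta_i$ is the winning probability of the report that is optimal there.

import numpy as np

menu = [(0.0, 0.0), (0.5, -0.2), (1.0, -0.6)]   # (p(m), t(m)) pairs for three reports

def U(theta_i):
    return max(p * theta_i + t for p, t in menu)

print([round(U(th), 3) for th in np.linspace(0, 1, 5)])
# [0.0, 0.0, 0.05, 0.175, 0.4]: a convex, piecewise-linear upper envelope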
If $k(\theta)$ is incentive compatible, then $U'_i(\theta_i) = p(\theta_i) = \bar{y}_i(\theta_i)$, the probability of getting the object by reporting type $\theta_i$. Thus
\[
U_i(\theta_i) = U_i(x) + \int_x^{\theta_i} \bar{y}_i(z)\, dz.
\]
Assuming $\Theta_i = [\underline{\theta}_i, \overline{\theta}_i]$, we can also write this as
\[
U_i(\theta_i) = \bar{y}_i(\theta_i)\,\theta_i + \bar{t}_i(\theta_i) = U_i(\underline{\theta}_i) + \int_{\underline{\theta}_i}^{\theta_i} \bar{y}_i(z)\, dz.
\]
Rearranging, we have
\[
\bar{t}_i(\theta_i) = \bar{t}_i(\underline{\theta}_i) - \big[\bar{y}_i(\theta_i)\,\theta_i - \bar{y}_i(\underline{\theta}_i)\,\underline{\theta}_i\big] + \int_{\underline{\theta}_i}^{\theta_i} \bar{y}_i(x)\, dx.
\]
Note that if $\underline{\theta}_i = 0$, this simplifies to
\[
\bar{t}_i(\theta_i) = \bar{t}_i(\underline{\theta}_i) - \bar{y}_i(\theta_i)\,\theta_i + \int_0^{\theta_i} \bar{y}_i(x)\, dx.
\]
Thus if $t(\theta)$ implements $k(\theta)$, $\bar{t}_i(\theta_i)$ must take this form.
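As a check against the uniform example above (where $\bar{t}_i(0) = 0$ and $\bar{y}_i(x) = x$), the formula gives $\bar{t}_i(\theta_i) = -\theta_i \cdot \theta_i + \int_0^{\theta_i} x\, dx = -\theta_i^2/2$, which is exactly the interim expected second-price payment computed earlier.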
Let's check that this $\bar{t}_i(\theta_i)$ works. If $\theta_i < \theta'_i$, then we need to make sure that player $i$ will not lie regardless of whether his true type is $\theta_i$ or $\theta'_i$. Note that
\[
U_i(\theta'_i) - U_i(\theta_i) = \int_{\theta_i}^{\theta'_i} \bar{y}_i(x)\, dx.
\]
This implies that
\[
\bar{y}_i(\theta_i)(\theta'_i - \theta_i) \leq U_i(\theta'_i) - U_i(\theta_i) \leq \bar{y}_i(\theta'_i)(\theta'_i - \theta_i).
\]
Using the first inequality, we get
\[
U_i(\theta'_i) \geq U_i(\theta_i) + \bar{y}_i(\theta_i)(\theta'_i - \theta_i).
\]
Using the definition of $\bar{t}_i(\theta_i)$ from above, we get
\[
U_i(\theta'_i) \geq \bar{t}_i(\theta_i) + \bar{y}_i(\theta_i)\,\theta'_i.
\]
Thus player $i$ is better off telling the truth when his true type is $\theta'_i$.
Using the second inequality, we get
\[
U_i(\theta_i) \geq U_i(\theta'_i) + \bar{y}_i(\theta'_i)(\theta_i - \theta'_i).
\]
Using the definition of $\bar{t}_i(\theta_i)$ from above, we get
\[
U_i(\theta_i) \geq \bar{t}_i(\theta'_i) + \bar{y}_i(\theta'_i)\,\theta_i.
\]
Thus player $i$ is also better off telling the truth when his true type is $\theta_i$. Therefore the $\bar{t}_i(\theta_i)$ defined above does indeed BN-implement $k(\theta)$.
To summarize, $y_i(\theta)$ is BN-implementable if and only if it is monotone (in the BN sense). Moreover, if $\Theta_i$ is a connected subset of $\mathbb{R}$, then any $\bar{t}_i(\theta_i)$ that BN-implements $y_i(\theta)$ must take the form
\[
\bar{t}_i(\theta_i) = \bar{t}_i(\underline{\theta}_i) - \bar{y}_i(\theta_i)\,\theta_i + \int_0^{\theta_i} \bar{y}_i(x)\, dx.
\]
3.5.1 Zero-sum transfers in BN-implementation (expected externality mechanism)
In the d.s. implementation section, we found that it was impossible to allocate an object between two bidders using zero-sum transfers in a strategy-proof way. In the BN-implementation context, this is no longer the case. Start with a normal second-price auction and modify it so that the winner pays a fixed amount to the other player equal to the average he would have paid in the normal second-price auction.
In other words, define
\[
\xi_i(\theta_i) = E_{\theta_{-i}}[t_i(\theta_i, \theta_{-i})], \qquad i = 1, 2.
\]
Then define
\[
\tilde{t}_i(\theta_i, \theta_{-i}) = \xi_i(\theta_i) - \xi_{-i}(\theta_{-i}), \qquad i = 1, 2.
\]
This shuts down the incentive to lie and implements the efficient $k^*(\theta)$ in BN equilibrium with a balanced budget (transfers stay within the system).
Note that in a more general context with more than two players, we can define
\[
\tilde{t}_i(\theta_i, \theta_{-i}) = \xi_i(\theta_i) - \sum_{j \neq i} \frac{1}{n-1}\, \xi_j(\theta_j).
\]
This will implement the efficient allocation with zero-sum transfers.
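The two-bidder case can be sketched as follows (illustrative code; the uniform-type assumption is an added example, not from the notes): each bidder's transfer depends on his own report only through the interim average $\xi_i$, so Bayesian incentive compatibility is preserved, and the two transfers cancel by construction.

import numpy as np

rng = np.random.default_rng(1)
draws = np.random.default_rng(1).uniform(0, 1, 200_000)   # averages over the opponent's type

def xi(theta_i):
    # xi_i(theta_i) = E[t_i(theta_i, theta_j)] for the second-price auction
    # with types i.i.d. uniform on [0, 1]; approximately -theta_i**2 / 2.
    return -(draws * (draws < theta_i)).mean()

def balanced_transfers(theta_1, theta_2):
    t1 = xi(theta_1) - xi(theta_2)
    t2 = xi(theta_2) - xi(theta_1)
    return t1, t2                              # t1 + t2 == 0: budget balance

print(balanced_transfers(0.8, 0.3))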