Você está na página 1de 16

Rank Preservation and Reversal: The Ideal and the

Distributive Modes in the Analytic Hierarchy Process


Thomas L. Saaty
University of Pittsburgh
Pittsburgh, PA 15260
saaty@katz.pitt.edu

The legitimacy of rank reversal, and the questionable legitimacy of the assumption of
independence of alternatives when they depend on each other according to both
number and quality, though not according to function.
Abstract
Multicriteria decision-making methods that can only rate alternatives one at a time
often give undependable rankings. Lacking a measurement scale with a uniformly
applied unit, the alternatives of a decision can only be evaluated by relating them to
one another, using knowledge about other alternatives to determine which is better
and by how much, thereby deriving a relative scale of priorities for them with respect
to an intangible criterion. This kind of dependence in thinking and judging is
inevitable, yet rating alternatives assumes their independence. With respect to
intangibles, alternatives are never independent in judgment, even when compared with
an ideal derived from knowledge about many alternatives in or outside the collection
under consideration. Because of this inevitable dependence in subjective judgments,
our natural ability to make paired comparisons using the distributive mode of the AHP
is essential. Examples where measurements are available for validation are given. For
the simplistic normative requirement of always preserving the rank of the old
alternatives when new ones are added, unless criteria are added or judgments are
changed, the relative ideal mode is used; but the findings indicate that, because of
dependence in judgment, it must be used with caution and under special circumstances.
1. Introduction
The fundamental problem of decision-making is the ranking of alternatives of a
decision on one or on several criteria and then synthesizing the several rankings into a
single overall rank. There are two ways to rank alternatives with respect to a criterion
or more specifically, an attribute. The first (known as the ideal or absolute
approach) is to compare the alternatives one by one (rate them) with respect to an
imagined ideal (absolute) alternative for each criterion. This ideal may differ among
different people and also differ in how ideal it is for each criterion involved. The other
way (known as the distributive approach) is to compare the alternatives with each
other with respect to their relative dominance in possessing a criterion such as weight
or height or political acumen. In the first case the alternatives are assumed to be
independent of each other whether according to their functions or according to how
good or bad each one may be. The second approach does not need to make such an
assumption. The criteria themselves must also be ranked in a meaningful way with
respect to a higher goal. Such a ranking cannot be done by rating the criteria; it is
always done more meaningfully through comparisons with one another. One is led to
ask why, except for convenience, the alternatives should not always be compared
with each other instead of being rated one at a time.
It is clear that if the ranking is ordinal on each criterion, in general one cannot
synthesize the different rankings into a meaningful overall rank. It is also clear that if
the ranking on each criterion is made according to different scales of measurement on
instruments, like yards and pounds and the like, it is generally not meaningful to
perform arithmetic operations on them to obtain a single overall rank. Care needs to
be taken to do things correctly. A final observation is that when ranking alternatives
one at a time with respect to an ideal and assuming independence, one implicitly
imposes the condition that adding or deleting an alternative from the collection has no
influence on the ranks of the remaining ones. This is a normative condition that is
useful in practice but begs the question of dependencies and eventually makes people
uncomfortable. Emergency cases of a dangerous disease are admitted to a hospital
above all other emergencies until there is no more room, and further cases are then
deferred or rejected. Room cannot be included as a criterion, because the ranking of
the alternatives on it makes them dependent on how many other alternatives there are,
violating the assumption of independence.
Before the Analytic Hierarchy Process arrived on the scene, people could only rate
the alternatives of a decision one at a time. This technical limitation has led to many
wrong beliefs about what should and should not be, because there was limited
understanding of what really happens in practice. Most of the objections and
paradoxes have come from researchers, as well as from outsiders who have no vested
interest in the techniques themselves but in how much one can depend on the reliability
of the assumptions people make in pressing their favorite technique on practitioners.
One of the false beliefs that arose out of rating is that alternatives must be independent.
Because of that, a widespread belief has developed out of thinking that there is always
some measurement scale applied to the alternatives one at a time, and that
independence implies that adding or deleting alternatives should not change the rank
order of the alternatives one has unless the criteria or the judgments are changed.
But adding alternatives can change what we think of the old alternatives, as shown by
many multicriteria examples, and that happens without adding criteria or changing
judgments, mainly because how good we think an alternative is often depends on our
experience with many other alternatives. Adding new alternatives changes their
number, and number can change the perceived worth of alternatives. Using the
number of alternatives as a criterion in every decision problem violates
independence. Thus, in general, how we evaluate an alternative depends on how
desirable or undesirable the other alternatives with which it is compared are, and on
how many of them there are. If one has a technique that rates alternatives one at a
time, and if the alternatives are not independent, as they never are in our minds, we
would naively insist with all our ability that the alternatives are independent, because
what else can we do to rank them? If we have measurements of the alternatives with
respect to the criteria, the meaning of these measurements again depends on the
purpose for which the alternatives are used and need not be linear in the
measurements, as a middle value may be preferred to a higher or lower value. Our
preference for an alternative would depend not only on the measurements but also on
what other alternatives are available.
Our story here is about the AHP and its way of dealing with ranking both through
paired comparisons and through ratings, with independence enforced, or with
dependence and feedback as with the Analytic Network Process (ANP) that
specifically deals with dependence [2,3]. The idea of what is independent of what
and what we do when we think about alternatives when we have no measurements to
apply is an interesting and complicated problem. How the AHP/ANP deals with it
should be of interest to anyone who wants to know about decision making and its
practices.
To define an ideal element requires knowledge about many elements and the best one
among them, and to know the best one involves knowledge of all other existing and
potential alternatives. Because potential alternatives are never fully known, indirect
comparison with an ideal is only approximate, is subject to change, and involves
dependence on other elements in the collection being considered or outside that
collection. What was considered ideal previously may not be the ideal now, because
new alternatives change the qualifications of the ideal. Both kinds of dependence
have to do with our cognitive ability to prioritize criteria and alternatives in a
decision. It appears then that, cognitively, independence is a convenient assumption
often adopted to make ranking easier to deal with (particularly for a large number of
alternatives) even when the alternatives are not intrinsically independent. Out of
habit, it is easier for people to pretend or assume that adding new alternatives
should have no effect on the ranking of the old ones, when in fact it should and
actually does in practice, and the alternatives need comparisons among themselves.
The alternatives of a decision can also depend on each other physically, according to
function, which is another form of dependence that cannot simply be assumed away
as we do with dependence in the judgment process. So far, results in the AHP/ANP
have often been given for decisions with both judgmental and physical dependence
(the distributive mode) and without either, by assuming dependence away (the ideal
mode). The latter is always used to preserve rank by technically ignoring judgmental
dependence.
The problem of rank preservation and reversal does not arise with a single criterion
and consistent judgments. However, it does occur with consistent judgments when
multicriteria are involved. Our purpose here is to explain the need for the ideal mode
that preserves rank for convenience and the distributive method that more realistically
allows the rank of the old alternatives to reverse when new alternatives are added or
old ones deleted. The ideal mode can be used to preserve rank in both comparisons
(by idealizing the first time only and comparing alternatives with the ideal allowing
them to fall above it in value) and in ratings, by developing a complete system of
intensities of several orders of magnitude and reducing their intensities to ideal form.
Paired comparisons are always more accurate and reliable than ratings, but ratings are
useful when a large number of alternatives is involved (although the alternatives can
also be compared through a process of clustering) and a protocol is invoked to always
preserve the ranks derived from them. Often after ratings, one compares a few of
the top alternatives to refine their ranking. When do we use the distributive mode and
when the ideal mode? Before answering this question we need to look at the idea of
dependence of the importance of the criteria on the measurements of the alternatives
when we have scales of measurement for the criteria.
2. The Case of Dependence of the Priorities of the Criteria on the Alternatives
As a rule, when there is any kind of dependence in a decision, rank reversal can take
place. When alternatives are dependent among themselves anything can happen to
their ranks when new alternatives are added or old ones deleted. When alternatives
are compared in pairs they become dependent on each other as to how high or low
their relative priorities are and hence rank reversals can occur. When alternatives are
rated one at a time there should be no rank reversals unless criteria or judgments are
changed. But as we said above, that is a questionable practice because it is known that
when for example copies of the same alternative are added, the value of that
alternative can go down (or up). If the universe becomes gradually full of gold, gold
would have a smaller and smaller value because of its abundance and rating gold
alternatives one at a time does not make it possible to take that occurrence into
consideration. Putting number of alternatives or uniqueness as criteria makes the
alternatives dependent on each other and rating them by comparison with respect to
an ideal ignores the effect of number. It is often expedient to preserve rank for
humanitarian reasons and for convenience as in hospital and college admissions, not
because it is the scientifically right thing to do. New alternatives, by their very own
qualities, can bring to light new information that enlightens people's understanding
about ranking the old alternatives with respect to the same set of criteria. To always
preserve rank is an act of will that can produce wrong rankings which contradict
common sense. Luce and Raiffa, in their book [1], first state the condition that "The
addition of new acts to a decision problem under uncertainty never changes old,
originally non-optimal acts into optimal ones," and then conclude that "The all-or-none
feature of the last form may seem a bit too stringent ... a severe criticism is that
it yields unreasonable results." In the field of queuing theory, when there are many
actions to take for example in a battle that change the need for resources and soldiers,
preemptive priorities are used to downgrade a message X from decoding service to
await its turn when a higher priority message A arrives. Message X may even be
delayed until after an earlier message B that is waiting if B is now known to be more
appropriate than X to follow A. In fact X may become obsolete. No new criteria need
be introduced by A except for its presence as a higher priority at that time that now
makes X unimportant, but it was important at the time before A arrived. Even if new
criteria appear after one alternative is added, such criteria would be exhausted after a
few alternatives are successively added.
In practice the criteria we develop for judging alternatives are acquired from
experience with a large number of alternatives. They do not appear in our minds
without reasons that link them to the alternatives we consider. The question is how
circuitous the path is that separates the two. It is only for convenience in thinking that
one assumes criteria to be independent from alternatives.
There are two cases of dependence to consider. The first is when the priorities of the
criteria depend on the priorities of the alternatives, such as the case where the
importance of the alternatives is given in the form of measurements on the same scale
for all the criteria. Although priorities are used to interpret the importance of
measurements from a scale with an arbitrary unit applied uniformly to generate all
measurements on that scale, there are times when one may wish to make the priorities
coincide with these measurements as a special case. The following example illustrates
that normalization is essential in converting absolute numbers to relative ones that
have the semblance of priorities. In Table 1 below if we add the values 2 and 3 (both
measured in dollars for example) for alternative A for the two criteria C1 and C2 to
obtain the total of 5 and similarly the values 6 and 4 for alternative B to obtain the
total of 10, and then normalize these final values, we obtain the relative values of 5/15
and 10/15. If we then simply normalize the column of values under each criterion and
sum these values for each of A and B over the criteria we do not get the desired
outcomes of 5/15 and 10/15. However, we do obtain the desired relative values if the
criteria are assigned relative weights equal to the sum of the measurement under them
to the total measurement under both criteria. These values are used to weight the
normalized values of the alternatives under them before the sum is taken for both
criteria.

Table 1 Absolute and Normalized Composition with Dollar Measurement on Two Criteria

                     Absolute Sums                         Relative Sums
            C1    C2    Sum    Normalized        C1       C2      Weighted sum
Weights                                         8/15     7/15
A            2     3      5      5/15           2/8      3/7         5/15
B            6     4     10     10/15           6/8      4/7        10/15
Total        8     7     15       1              1        1           1

We note from this example that when relative values are used for the alternatives, the
criteria themselves, none of which as an attribute depends on any particular
alternative, must be assigned weights that do depend on the measurements of the
alternatives. These weights are then used in a weighting and adding process to obtain
the correct final relative outcome. This is a useful observation when we use a process
of deriving relative values for the alternatives, as we do in the AHP, for then in
comparing the alternatives and the criteria one may be thought of as estimating these
relative values and using them in normalized form to make the overall synthesis as we
just did. Even though we used the distributive mode here, there can be no rank
reversal in this case when new alternatives are added, because making the
measurements relative changes the weights of the criteria and simply reproduces the
final outcome obtained with absolute numbers. If the weights of the criteria are not
changed accordingly, one would not get the desired relative values if an alternative C
is added, and rank reversal can occur because normalization would produce different
relative numerical values for the alternatives under each criterion. This is the case
when the weights of the criteria are fixed once and for all for a decision problem no
matter how many alternatives it has. Yet it is possible that for a given number of
alternatives one has the right estimate for the weights of the criteria, but not if new
alternatives are introduced or old ones deleted.
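This computation can be checked directly. The sketch below (added for illustration, not part of the original paper; the values for the added alternative C are invented) reproduces the relative outcome 5/15 and 10/15, and shows that when an alternative is added the criteria weights must be recomputed from the new column totals; if the old weights 8/15 and 7/15 are kept fixed, the weighted sums no longer agree with the true relative totals.

```python
# Distributive synthesis for alternatives measured on the same (dollar) scale.
# Rows are alternatives, columns are criteria C1 and C2.

def distributive_synthesis(measurements):
    """Normalize each criterion column, weight each criterion by its share of the
    grand total, and add. Returns the relative priorities of the alternatives."""
    n_criteria = len(measurements[0])
    col_totals = [sum(row[j] for row in measurements) for j in range(n_criteria)]
    grand_total = sum(col_totals)
    criteria_weights = [t / grand_total for t in col_totals]   # structural dependence
    return [
        sum(criteria_weights[j] * row[j] / col_totals[j] for j in range(n_criteria))
        for row in measurements
    ]

table1 = [[2, 3],   # A
          [6, 4]]   # B
print(distributive_synthesis(table1))          # [0.333..., 0.666...] = 5/15, 10/15

# Add a hypothetical alternative C (values invented for demonstration only).
table1_with_C = table1 + [[4, 5]]
print(distributive_synthesis(table1_with_C))   # 5/24, 10/24, 9/24 -- still the true relative totals

# If instead the old criteria weights 8/15 and 7/15 are kept fixed after C is added,
# the weighted sums no longer reproduce the true relative totals.
old_weights = [8 / 15, 7 / 15]
cols = [sum(r[j] for r in table1_with_C) for j in range(2)]
fixed = [sum(old_weights[j] * r[j] / cols[j] for j in range(2)) for r in table1_with_C]
print(fixed)   # differs from [5/24, 10/24, 9/24]
```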
Theorem: Synthesis with normalization is necessary to derive priorities in relative
form for multiple alternatives with respect to several criteria with the same scale of
measurement.
Proof: Assume that the measurement of alternative i with respect to criterion j is $m_{ij}$.
With additive synthesis we must determine how to pass from the weighted sum
$\sum_{j=1}^{n} m_{ij} x_j$ to the relation $\sum_{j=1}^{n} m_{ij} \big/ \sum_{i=1}^{m}\sum_{j=1}^{n} m_{ij}$,
which is normalized on the right to produce relative readings. This relation becomes an
identity if we replace $m_{ij}$ by $m_{ij} \big/ \sum_{i=1}^{m} m_{ij}$ and $x_j$ by
$\sum_{i=1}^{m} m_{ij} \big/ \sum_{i=1}^{m}\sum_{j=1}^{n} m_{ij}$.
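As a worked check (added here for clarity), substituting the Table 1 numbers into this identity gives

```latex
x_1=\frac{2+6}{15}=\frac{8}{15},\qquad x_2=\frac{3+4}{15}=\frac{7}{15},\qquad
\text{A: }\ \frac{8}{15}\cdot\frac{2}{8}+\frac{7}{15}\cdot\frac{3}{7}=\frac{5}{15},\qquad
\text{B: }\ \frac{8}{15}\cdot\frac{6}{8}+\frac{7}{15}\cdot\frac{4}{7}=\frac{10}{15}.
```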

Lemma: Idealization can give the wrong outcome for the alternatives in the case of
several criteria with the same scale of measurement.
Examples given below for predicting relative values for the alternatives confirm this
observation. In other words, knowledge of the alternatives influences the relative
importance of the criteria used to compare them and aggregate their weights. The
question of rank reversal on adding alternatives has no relevance here as the weights
of the criteria may need to be changed in the presence of new alternatives. Note that if
we have a very large number of alternatives in the case of similarly measured tangible
criteria, and add a new alternative, the weights of the criteria can remain nearly the
same, despite normalization. This indicates that fixing the weights of the criteria need
not automatically cause a shift from the distributive to the ideal mode. The ideal mode
for the alternatives would not give the right final answer in the case of tangibles. Is it
then guaranteed to give good answers when intangible criteria are added? We believe
not. Rank preservation is something people wish to enforce and is not the natural
state.
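A small sketch of the lemma using the Table 1 data (added for illustration; the equal criteria weights are an assumption standing in for weights judged independently of the alternatives): idealizing each column instead of normalizing it no longer reproduces the true relative totals 1/3 and 2/3.

```python
# Ideal-mode synthesis: divide each column by its largest entry, then weight and add.
table1 = [[2, 3],   # A
          [6, 4]]   # B
weights = [0.5, 0.5]          # assumed criteria weights, fixed independently of the data

col_max = [max(row[j] for row in table1) for j in range(2)]
ideal = [sum(weights[j] * row[j] / col_max[j] for j in range(2)) for row in table1]
total = sum(ideal)
print([v / total for v in ideal])   # about [0.351, 0.649], not the true 1/3 and 2/3
```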
Theorem: If the alternatives are measured on the same scale for all the criteria, then to
obtain the relative values of the alternatives over all the criteria, with the alternatives
given in relative form for each criterion, it is sufficient to assign each criterion a
priority equal to the ratio of the sum of the measurements of the alternatives under it
to the total sum of the measurements of the alternatives under all the criteria, to
weight the relative value of each alternative by the priority of its criterion, and to sum
over the criteria to obtain the relative outcome for the alternatives.
Proof: Let $x_{ij}$ be the measurement of alternative i (i = 1, ..., m) with respect to
criterion j (j = 1, ..., n), on the same scale for all the criteria. Then
$\sum_{j=1}^{n} x_{ij} \big/ \sum_{i=1}^{m}\sum_{j=1}^{n} x_{ij}$ gives the relative
overall value of alternative i. This value coincides with
$\sum_{j=1}^{n}\Big[\big(x_{ij}\big/\sum_{i=1}^{m} x_{ij}\big)\big(\sum_{i=1}^{m} x_{ij}\big/\sum_{i=1}^{m}\sum_{j=1}^{n} x_{ij}\big)\Big]$.

We call the foregoing type of dependence structural dependence of the priorities of
the criteria on the alternatives, because the criteria depend on the values of the
alternatives and not on the alternatives themselves.
When there are different measurements for the criteria, one normalizes the
alternatives but trades off a unit of one criterion against a unit of the other in making
the paired comparisons to derive priorities for them in relative form. Here again
which criterion is more important derives from the quality of the alternatives it
represents. Extending the synthesis used when there is a single scale of measurement
for all the criteria, with normalization, there is no good reason to change the weighting
and adding process by giving the priorities of the alternatives a different form. However,
in this case adding new alternatives can cause rank reversal, because the process no
longer reproduces the overall relative outcome given by the sums of values of the alternatives.
Any simple example of rank reversal (see below) serves to justify this proposition and
suggests that measurement on any scale is arbitrary particularly when applied to more
than one criterion. It essentially means that the worth of a dollar in buying a car is not
the same as its worth in repairing a car even though we use the same paper dollar to
pay for them. These values must be traded off through comparisons to determine their
relative importance by using various criteria to establish the worth of a dollar under
different conditions. This observation is well-known in economics and does not need
detailed elaboration. Thus normalization is needed when criteria are attributes of the
alternatives. It can lead to rank reversal that is legitimate. It is not unreasonable to
think that when three alternatives are ranked and one of them is removed, the other
two may no longer have the same rank as before.
The second case of dependence that also requires normalization is when the criteria
depend on the alternatives functionally as in the Analytic Network Process. For
example, in ranking cars according to size and engine power, given a car one answers
the question for that car as to whether its size or its engine power is more dominant
and how much more dominant it is than the other. In that case all the measurements
are given in relative terms as in the foregoing measurement example because again
the priorities of the criteria are derived from those of the alternatives.
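Throughout, such relative priorities are derived from reciprocal pairwise comparison matrices by means of the principal eigenvector. A minimal sketch of that step (the judgment values in the matrix are invented for illustration):

```python
# Deriving priorities from a reciprocal pairwise comparison matrix by the principal
# eigenvector (power iteration), the standard AHP prioritization step.
def priorities(matrix, iters=100):
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [v / s for v in w]
    return w

# Hypothetical 1-9 scale judgments comparing three elements pairwise.
A = [[1,     3,   5],
     [1 / 3, 1,   2],
     [1 / 5, 1 / 2, 1]]
print([round(v, 3) for v in priorities(A)])   # approximately [0.648, 0.230, 0.122]
```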

3. The Case of the Independence of the Priorities of the Criteria from the
Alternatives
It is only when a criterion is not immediately related to the alternatives as an attribute,
but is a condition imposed on the decision that the alternatives must satisfy, that
normalization is no longer a valid way of relative measurement. One must then idealize
the priorities of the alternatives (by dividing by the largest value among them for each
criterion) and compare new arrivals with the ideal for each criterion, allowing them to
fall above that ideal. In this case there should be no rank reversal, because the
alternatives are not connected by a common attribute through which they could
influence each other's ranking. This same idea can be used artificially to rate
alternatives one at a time by comparing them with respect to an ideal for each criterion.
The priorities of the criteria are determined independently of the alternatives, as for
example in a health problem where the importance of anemia is determined from blood
measurements, the importance of exercise is determined from muscular strength, and a
diet is sought that helps improve anemia and makes exercise more beneficial; the
priorities of the criteria depend on the state of health and not directly on the foods to
eat. The foods are selected to serve the criteria. In that case the best food on the list
must receive the full value of the criterion and must not depend on how many kinds of
foods are considered, which would make its relative value smaller the more
alternatives are considered under each criterion. The best food is assigned the
maximum value of one. It becomes the ideal. In this case adding alternatives should
have no effect on the ranks of the existing alternatives if each new alternative is
compared with the ideal and allowed to receive a value greater than one if it is better
than the ideal.
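The mechanics just described can be sketched as follows (an illustrative sketch only; the food names and scores are hypothetical). The original alternatives are idealized once per criterion; a later arrival is rated against that ideal and may exceed one, and the priorities, and hence the ranks, of the old alternatives are unaffected.

```python
# Ideal mode when criteria are conditions imposed on the decision.
criteria_weights = {"anemia": 0.6, "exercise": 0.4}      # assumed, set independently of the foods

# Hypothetical scores of the original foods under each criterion.
foods = {"lentils": {"anemia": 0.9, "exercise": 0.5},
         "spinach": {"anemia": 0.6, "exercise": 0.8}}

# Idealize once: the best food under each criterion receives 1.
ideal = {c: max(f[c] for f in foods.values()) for c in criteria_weights}

def overall(scores):
    return sum(criteria_weights[c] * scores[c] / ideal[c] for c in criteria_weights)

print({name: round(overall(s), 3) for name, s in foods.items()})

# A new arrival is compared with the existing ideal and may score above 1;
# the priorities (and hence the ranks) of the old foods do not change.
foods["liver"] = {"anemia": 1.2, "exercise": 0.4}        # hypothetical, better than the ideal on anemia
print(round(overall(foods["liver"]), 3))
```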
In the ANP the control criteria used to determine different types of influence are
independent of what alternatives are considered. However, within each control
criterion there is a network of influences with respect to that criterion in which often
the criteria depend on the alternatives. In that case we have both types of synthesis,
the first in distributive form that is then converted to ideal form. Because of
dependence within the network, the priorities of the alternatives must be given in
relative form. To synthesize the outcome for several control criteria the relative values
of the distributive mode are converted to ideal values for each control criterion before
their weighted sum is taken over the control criteria.
The example below (Table 2) illustrates a real occurrence in the world of marketing
where rank reversals naturally happen that cannot be explained by rating alternatives
one at a time without introducing criteria or changing judgments. A phantom
alternative A3 is publicized in the media to bring about rank reversal between A1 and
A2 with the ideal mode. We begin with A1 dominating A2. Introducing A3, we obtain
a reversal in which A2 ranks above A1, once with A3 falling between A1 and A2 and
once with A3 ranking last of the three. This is the case of a phantom alternative (a car)
A3 that is more expensive, and thus less desirable, but has the best quality. People
bought A1 because it is cheaper, but A2 is a much better car on quality. Knowing that
a (considerably) more expensive car A3, which also has slightly better quality than A2,
will be on the market makes people shift their preference to A2 over A1, without
anything happening that causes them to change the relative importance of the criteria:
efficiency and cost. Car A3 is called a phantom because it is never made; it is proposed
in advertising in a way that induces people to change their overall choice, although
their preferences remain the same as before.
Table 2. Example of Rank Reversal with the Ideal Mode

Two alternatives, criteria weights 0.5 (Efficiency) and 0.5 (Cost):
Alternatives   Efficiency   Cost    Composition   Normalized
A1               0.6         1         0.8          0.533     A1 > A2
A2               1           0.4       0.7          0.467
Total            1.6         1.4       1.5

Phantom A3 introduced, criteria weights unchanged:
Alternatives   Efficiency   Cost    Composition   Normalized
A1               0.3         1         0.65         0.322
A2               0.99        0.4       0.695        0.344     A2 > A3 > A1
A3               1           0.35      0.675        0.334
Total            2.29        1.75      2.02

Two alternatives, criteria weights 0.45 (Efficiency) and 0.55 (Cost):
Alternatives   Efficiency   Cost    Composition   Normalized
A1               0.6         1         0.82         0.550     A1 > A2
A2               1           0.4       0.67         0.450
Total                                  1.49

Phantom A3 introduced, criteria weights 0.45 and 0.55:
Alternatives   Efficiency   Cost    Composition   Normalized
A1               0.1         1         0.595        0.327
A2               0.99        0.4       0.6655       0.366     A2 > A1 > A3
A3               1           0.2       0.56         0.308
Total                                  1.8205
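The first two panels of Table 2 can be reproduced in a few lines (a sketch added for illustration; the scores are taken directly from the table):

```python
# Ideal-mode composition for Table 2: weighted sum of idealized scores, then normalization.
def compose(scores, weights=(0.5, 0.5)):
    comp = [weights[0] * eff + weights[1] * cost for eff, cost in scores]
    total = sum(comp)
    return [round(v / total, 3) for v in comp]

# Before the phantom: A1 = (efficiency 0.6, cost 1), A2 = (1, 0.4)
print(compose([(0.6, 1.0), (1.0, 0.4)]))                 # [0.533, 0.467]  -> A1 > A2

# After publicizing the phantom A3: scores as in the second panel of Table 2
print(compose([(0.3, 1.0), (0.99, 0.4), (1.0, 0.35)]))   # [0.322, 0.344, 0.334] -> A2 > A3 > A1
```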

4. When to use the Distributive and the Ideal Modes?


We have seen that the distributive mode is used when the criteria depend on the
alternatives structurally or functionally. If in hierarchies the criteria are assumed to be
independent of the alternatives, why does one use the distributive mode? It is because
there are problems that are treated as decisions with structural dependence in which
one has the means to compare the outcome with actual measurements. In that case one
assumes that the relative values derived for the alternatives using the principal
eigenvector are approximations to normalized values of actual measurements with
respect to the criterion, and that the priorities of the criteria are themselves
approximations to the relative priorities one would obtain had the alternatives been
measured on some scale under each one. This is particularly useful when the
underlying numbers are counts, like how many people vote for presidential candidates
on several criteria, so that a homogeneous scale of measurement is used for all the
criteria, or when expert judgment is used. There are numerous examples (see later)
which validate this use of the distributive mode in hierarchic decision making. In
addition, there are serious difficulties were one to use the ideal mode instead. Here is
an illustration of the need to use the distributive mode in a hierarchic context. We
begin with some comments about using the ideal form without deriving the ideal
through pairwise comparisons.
A simple example of rank reversal is the 1992 presidential election, when the entry of
Ross Perot into the race took votes away from Bush. The prediction as to who would
win the race prior to Perot's entry is shown in Figure 1. After Perot entered the race,
Bush lost to Clinton, as shown in Figure 2 below, because Perot took votes from Bush.
[Figure 1 shows the election hierarchy before Perot's entry. Criteria and their priorities: Economy (.493), Health (.299), Foreign Affairs (.058), Image (.098), Abortion (.051). Local priorities of Clinton under these criteria: .327, .600, .229, .627, .550; of Bush: .673, .400, .771, .373, .450. Overall: Clinton (.44), Bush (.56).]

Figure 1 Presidential Election with Standings for Bush and Clinton before Perot
[Figure 2 shows the same hierarchy with Perot added; the criteria weights are unchanged. Local priorities of Clinton: .327, .600, .229, .627, .550; of Bush: .473, .300, .623, .323, .340; of Perot: .200, .100, .148, .050, .110. Overall: Clinton (.44), Bush (.37), Perot (.19).]

Figure 2 Presidential Race with Three Candidates; Prediction Close to Actual Result
But if, instead of comparing them pairwise as shown in Figure 2, we were to rate them
one at a time, adding Perot would not change the rank order of Bush and Clinton, and
Bush would be predicted to be the winner, contrary to what happened. Rating one at a
time forces rank preservation; Perot would have no effect, and Bush would wrongly be
predicted to win, as shown in Figure 3. To force rank preservation by using ratings,
start with the situation shown in Figure 1 and idealize by dividing by the larger priority
of the two candidates, Bush and Clinton, under each criterion. Bush would receive a
larger overall priority than Clinton. We then assign Perot his proportionate value from
the second figure with respect to the ideal, as in Figure 3. This has no effect on the
ranking of Bush and Clinton, so the outcome in Figure 3 shows that Bush should be the
winner. In effect, pairwise comparisons take into consideration the relative number of
people voting for the candidates under each criterion separately, and then weight and
combine these relative numbers.
[Figure 3 shows the idealized hierarchy: under each criterion the Clinton and Bush priorities of Figure 1 are divided by the larger of the two, and Perot is assigned his proportionate value against that ideal (.423, .166, .238, .080, .200). Overall: Clinton (.66), Bush (.89), Perot (.43), so Bush wrongly comes out ahead.]

Figure 3 Forcing Rank Preservation by Idealizing Gives Wrong Results
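A sketch of the two syntheses, using the priorities recoverable from Figures 1-3 (added for illustration; some entries of the published figures are hard to read, so the sketch reproduces the qualitative outcome rather than the exact published totals):

```python
# Criteria weights and local priorities from Figure 1 (two candidates).
w = [0.493, 0.299, 0.058, 0.098, 0.051]     # Economy, Health, Foreign Affairs, Image, Abortion
clinton = [0.327, 0.600, 0.229, 0.627, 0.550]
bush = [1 - c for c in clinton]

def synth(p):
    return sum(wi * pi for wi, pi in zip(w, p))

print(round(synth(clinton), 2), round(synth(bush), 2))   # 0.44, 0.56 -> Bush ahead before Perot

# Forcing rank preservation: idealize Bush and Clinton under each criterion,
# then rate Perot proportionately against that ideal (values from Figure 3).
ideal = [max(c, b) for c, b in zip(clinton, bush)]
clinton_i = [c / m for c, m in zip(clinton, ideal)]
bush_i = [b / m for b, m in zip(bush, ideal)]
perot_i = [0.423, 0.166, 0.238, 0.080, 0.200]

for name, p in [("Clinton", clinton_i), ("Bush", bush_i), ("Perot", perot_i)]:
    print(name, round(synth(p), 2))
# Bush still comes out ahead of Clinton: the wrong prediction that Figure 3 illustrates.
```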


5. Hierarchic Validation Examples of the Distributive Mode
To make good applications one needs expert knowledge of the subject, a structure that
represents the pertinent issues, and a little time to do justice to the subject. In this part we
give three hierarchic examples that gave results close to what the values actually were.
All the works were published in refereed journals.
World Chess Championship Outcome Validation: the Karpov-Korchnoi Match
The following criteria (Table 3) and hierarchy (Figure 4) were used to predict the
outcome of world chess championship matches, using judgments of ten grandmasters in
the then Soviet Union and the United States who responded to questionnaires mailed to
them. The predicted outcome, which included the number of games played, drawn, and
won by each player, either was exactly as the match turned out later or was close enough
to predict the winner. The outcome of this exercise was officially notarized before the
match took place. The notarized statement was later mailed to the editor of the Journal of
Behavioral Sciences along with the paper (Saaty and Vargas, 1991a). The prediction was
that Karpov would win by 6 games to 5 over Korchnoi, which he did.

Table 3 Definitions of Chess Factors


T (1)  Calculation (Q): The ability of a player to evaluate different alternatives or strategies in light of prevailing situations.
B (2)  Ego (E): The image a player has of himself as to his general abilities and qualifications and his desire to win.
T (3)  Experience (EX): A composite of the versatility of opponents faced before, the strength of the tournaments participated in, and the time of exposure to a rich variety of chess players.
B (4)  Gamesmanship (G): The capability of a player to influence his opponent's game by destroying his concentration and self-confidence.
T (5)  Good Health (GH): Physical and mental strength to withstand pressure and provide endurance.
B (6)  Good Nerves and Will to Win (GN): The attitude of steadfastness that ensures a player's healthy perspective while the going gets tough. He keeps in mind that the situation involves two people and that if he holds out the tide may go in his favor.
T (7)  Imagination (IM): Ability to perceive and improvise good tactics and strategies.
T (8)  Intuition (IN): Ability to guess the opponent's intentions.
T (9)  Game Aggressiveness (GA): The ability to exploit the opponent's weaknesses and mistakes to one's advantage. Occasionally referred to as "killer instinct."
T (10) Long Range Planning (LRP): The ability of a player to foresee the outcome of a certain move, set up desired situations that are more favorable, and work to alter the outcome.
T (11) Memory (M): Ability to remember previous games.
B (12) Personality (P): Manners and emotional strength, and their effects on the opponent in playing the game and on the player in keeping his wits.
T (13) Preparation (PR): Study and review of previous games and ideas.
T (14) Quickness (Q): The ability of a player to see clearly the heart of a complex problem.
T (15) Relative Youth (RY): The vigor, aggressiveness, and daring to try new ideas and situations, a quality usually attributed to young age.
T (16) Seconds (S): The ability of other experts to help one analyze strategies between games.
B (17) Stamina (ST): Physical and psychological ability of a player to endure fatigue and pressure.
T (18) Technique (T): Ability to use and respond to different openings, improvise middle game tactics, and steer the game to familiar ground to one's advantage.

Figure 4 Criteria and Players in Chess Competition


Monetary Exchange Rate: the Dollar versus the Yen
In 1987 three economists at the University of Pittsburgh, Professors A. Blair, R.
Nachtmann, and J. Olson, worked with T. Saaty on predicting the yen/dollar exchange
rate (Figure 5). The predicted value was fairly close to the average value for a
considerable number of months after that.

[Figure 5 shows the hierarchy for the value of the yen/dollar exchange rate in 90 days. The main factors and their priorities: Relative Interest Rate (.423), Forward Exchange Rate Bias (.023), Official Exchange Market Intervention (.164), Relative Degree of Confidence in the US Economy (.103), Size/Direction of US Current Account Balance (.035), and Past Behavior of Exchange Rates (.252), each elaborated by subfactors such as Federal Reserve and Bank of Japan monetary policy, the size of the federal deficit, the forward rate premium/discount and differential, relative inflation, real growth, political stability, and anticipated changes. The bottom level gives the probable impact of the fourth-level factors on the exchange rate:

  Yen/$ in 90 days      Outcome              Probability
  119.99 and below      Sharp Decline          .1330
  119.99-134.11         Moderate Decline       .2940
  134.11-148.23         No Change              .2640
  148.23-162.35         Moderate Increase      .2280
  162.35 and above      Sharp Increase         .0820

Expected value: 139.90 yen/$ (in the late 1980s).]


Figure 5 The Dollar versus the Yen Values in the Late 1980s
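The expected value reported at the bottom of Figure 5 is the probability-weighted sum of representative values for the five intervals. The sketch below is added for illustration; the representative values, especially those for the two open-ended intervals, are assumptions made here, which is why the result is only close to the reported 139.90.

```python
# Probability-weighted expected yen/dollar rate from the bottom level of Figure 5.
outcomes = [
    ("sharp decline",     112.93, 0.133),   # representative value for "119.99 and below" (assumed)
    ("moderate decline",  127.05, 0.294),   # midpoint of 119.99-134.11
    ("no change",         141.17, 0.264),   # midpoint of 134.11-148.23
    ("moderate increase", 155.29, 0.228),   # midpoint of 148.23-162.35
    ("sharp increase",    169.41, 0.082),   # representative value for "162.35 and above" (assumed)
]
expected = sum(value * prob for _, value, prob in outcomes)
print(round(expected, 2))   # roughly 139, close to the 139.90 yen/$ reported in the paper
```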
Number of Children in Rural Indian Families
In a hierarchy whose goal is the optimal family size in India (Saaty and Wong, 1983) there
were four major criteria: Culture (with subcriteria Religion, Women's Status, Manhood),
Economic factors (with subcriteria Cost of Child Rearing, Old Age Security, Labor,
Economic Improvement, Prestige and Strength), Demographic factors (with subcriteria
Short Life Expectancy, High Infant Mortality), and the Availability and Acceptance of
Contraception (with subcriteria High, Medium, and Low Levels of Availability and
Acceptance of Contraception). At the bottom, three alternatives were considered:
Families with 3 or Fewer Children, Families with 4 to 7 Children, and Families with 8 or
More Children. The outcome of this example, for reasons explained in the research paper,
had two projections of 5.6 and 6.5 children per family (due to regional differences). The
actual values obtained from the literature after the study was done were 6.8 births per
woman in 1972 and 5.6 in 1978.


Predicting the Outcome of the 1996 Super Bowl (Saaty and Turner, 1995-1996)
My brilliant student David Turner was very interested in and knowledgeable about
football, and he and I worked early in December 1995, at the very start of the playoffs, to
predict who would go to the 1996 Super Bowl, who would win, and who would lose. We
used a combination of two hierarchies, one for benefits and one for costs. We predicted
that Dallas would win and that our own city's team, Pittsburgh, would lose, which was the
correct outcome. On many occasions students tried to make predictions with simplified
structures and inevitably ended up making wrong ones. Of course it is never guaranteed
that a prediction of an event that is susceptible to hazards and accidents will come out
right, as we also mention and give a reference for later on.
6. Conclusion
A habit acquired from only knowing how to rate things one at a time, rather than compare
them in pairs, is to assume they are independent because the technique requires it, not
because they really are independent when we think about how good they are relative to
each other in order to judge them. How would one ever know what an average apple is if
one had never seen any apple before? Rating things one at a time misses important factual
information about how the mind learns to relate things so it can decide what is good and
what is not, and what is preferred and what is not.
Prioritization and ranking depend on judgment. Judgment depends on thinking and
thinking depends on previous knowledge and experience. Knowledge about criteria and
alternatives of a decision requires that they be related within a framework so that one can
think about and judge what is important and what is not. For things to be related they
must have some kind of mental interdependence. There are two kinds of interdependence:
one is in the mind of the judge or decision maker, and the other is in the real world, as
when the input of one industry depends on the output of another or a baby depends on its
mother for survival. The only cases where the criteria of a decision do not depend on the
alternatives of that decision are when they are imposed on them rather than being
attributes of them.
being attributes of them. For example, anemia is a characteristic of red blood and is not
an intrinsic property of the foods one eats like height or weight. It can be considered in
most cases as independent of the foods eaten. The importance of criteria that are
attributes of alternatives depends on the alternatives and needs to be taken into
consideration in ranking them. In all cases where there is dependence whether criteria on
alternatives or of alternatives on alternatives, one must use the distributive mode. Only
when the criteria are imposed conditions or when one wishes to force independence for
convenience to satisfy habits, long entrenched, primitive in conception for various
seemingly egalitarian reasons, and because it takes time to really rank carefully a very
large number of alternatives, does one impose rank preservation by rating things one at a
time and use the ideal mode. What we have here is not a conjecture but a factual
observation drawn from the study of many examples. What evidence do we have against
the blindly accepted assumption of independence? Real-life examples that contradict it.
For example, in many practical situations an attribute that is very important when there
are few alternatives can become less important when all the alternatives have that
attribute. In judging graduate students at a university, knowing how to read and write is
not as important as it is in judging undergraduates. The distributive mode is used in all
cases of dependence, whether mental or physical: when several criteria are measured on
the same scale, and when the criteria are attributes of the alternatives and hence derive
their importance from their abundance or scarcity in the alternatives.
Use the ideal mode only when: 1) the criteria are not attributes of the alternatives but are
used to determine which alternative qualifies, and how well it qualifies, in satisfying an
extrinsic condition, or have an influence on other alternatives subject to these conditions;
or 2) it is convenient to obtain a rough ranking of the alternatives by rating them one at a
time, thereby artificially enforcing the independence of these alternatives.
It is agreed that both the ideal and the distributive modes are needed in decision making,
and that any process that simply rates alternatives with respect to an ideal ignores the
structural dependence that is critical to converting measurements on several criteria to
relative form. Generally, many decision problems carry an underlying assumption that
measurements on scales may be involved. In these problems it is necessary to use the
distributive mode despite the assumption of functional independence of the criteria from
the alternatives. Using the ideal mode there can lead to meaningless rankings. However,
it is important to use the ideal mode when the weights of the criteria are determined in a
way that is completely independent of the alternatives. This is the case, for example, in
the ANP: the distributive mode is used to synthesize the supermatrix when there is
feedback from the alternatives to other elements in the network, and the resulting
priorities are then put in ideal form with respect to each control criterion under which the
comparisons of influence are made, because the importance of the control criteria in no
way depends on the alternatives of the particular decision. The resulting ideal priorities
are weighted by the importance of the control criteria and then summed to obtain the
overall answer for each of the four BOCR merits (benefits, opportunities, costs, risks).
These are then combined into a single overall answer after the top alternative in each is
used to represent that merit in ratings with respect to strategic criteria, to obtain the
priorities for the merits, which are then used to get the overall answer using the formulas
described in the previous section.
References
1. Luce, R. D., and H. Raiffa, Games and Decisions, Wiley, New York, 1957.
2. Saaty, T.L., "Rank Generation, Preservation, and Reversal in the Analytic Hierarchy
Decision Process," Journal of the Decision Sciences Institute, Vol. 18, No. 2, Spring 1987.
3. Saaty, T.L., "Rank from Comparisons and from Ratings in the Analytic
Hierarchy/Network Processes," European Journal of Operational Research, Vol. 168,
2006, pp. 557-570.
4. Saaty, T.L., "Response to the Response to the Response," Journal of the Operational
Research Society, Vol. 42, No. 10, pp. 909-924.