
Decision Trees

Another approach to analyzing risk situations with expected values makes use of decision trees. These can
be helpful when there is a sequence of decisions to be made, as would be the case when the choice of
one action, in combination with a particular state of nature, yields a given outcome that in turn presents
a number of other decision options. In such situations the decision-matrix approach cannot conveniently capture the sequence.
Decision trees schematically represent the sequence of decisions and states of nature, and any
subsequent decisions and states of nature. This is done by means of a network, or graph-node, format, which will be
illustrated with some examples. Probabilities of the states of nature and payoffs are also depicted.
Decisions are represented by decision nodes, or points, drawn as small boxes from which emanate branches
representing the available choices or decisions. For Example 13-9, the production choices would be
represented by a decision node with five branches, one for each production decision.

The possibility that different states of nature can occur is represented by a chance node. This is a
circle with branches emanating from it, each branch corresponding to a different state of nature. The
probabilities can be indicated along the branches or with the state-of-nature labels.
The most common approach is to begin with the first decision node at the left, laying out the states of
nature for the first decision to its right, and proceeding with subsequent decisions and states of nature
further to the right. This gives rise to a treelike structure, hence the name. Payoffs are generally
indicated at the extremities of the branches. When the payoffs are listed at the extremities, they
represent net amounts, based on all the investments, costs, and revenues specified by the entire
sequence of decisions and states of nature leading to that payoff. Furthermore, since decision trees
typically deal with time horizons of several years or more, the payoffs should also be computed using the
concepts developed in this book. In other words, they should be net present values (or equivalent
annual worths) computed on an after-tax basis and adjusted for inflation, where applicable.
A decision tree for Example 13-9 would look like Figure 13-3.
The computation of expected values in a decision tree is often referred to as folding back.
Although Example 13-9 can be folded back, it is a single-stage situation, and so does not provide a
representative framework for the generalized decision-tree problem. Therefore, another example will
be used.
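Before turning to that example, a minimal sketch in Python may help make the node conventions concrete. The decision labels, probabilities, and payoffs below are hypothetical placeholders rather than the data of Example 13-9; the sketch simply shows a decision node whose branches each lead to a chance node, with the expected value of each branch computed as a probability-weighted sum of payoffs.

# Minimal sketch: a single-stage decision followed by chance outcomes.
# Each decision branch leads to a chance node, represented here as a list
# of (probability, payoff) pairs. Payoffs would normally be after-tax net
# present values. All numbers below are hypothetical.
decisions = {
    "produce_A": [(0.3, 120.0), (0.5, 80.0), (0.2, 10.0)],
    "produce_B": [(0.3, 90.0), (0.5, 85.0), (0.2, 60.0)],
}

def expected_value(chance_node):
    # Expected value of a chance node: sum of probability * payoff.
    return sum(p * payoff for p, payoff in chance_node)

# "Folding back" a single stage: evaluate each chance node, then pick the
# decision branch with the highest expected value.
best = max(decisions, key=lambda d: expected_value(decisions[d]))
for name, node in decisions.items():
    print(name, "expected value =", round(expected_value(node), 2))
print("choose:", best)   # with these numbers, produce_B (81.5 vs 78.0)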

Example 13-10
A small automobile manufacturer will have sufficient resources for the product design, testing, and
production design and planning for only one new car at a time over the next six years. The process
will take about three years for each new car introduced. Thus, it can introduce two new cars over the
next six years. It is felt that both a new compact (C) and a subcompact (S) should be introduced to satisfy
the changing market. The decision being faced is whether to introduce the compact first, followed by
the subcompact, or vice versa. A study is done in which a decision tree is used to depict the choices
faced, the states of nature thought to be applicable, the probabilities of those states of nature, and the
payoffs associated with the decision-state-of-nature combinations. The analysis of the decision
tree (Figure 13-4) involves folding it back, or determining which decision yields the highest expected
value of the payoffs.
In Figure 13-4, both the decision nodes and chance nodes are numbered for identification
purposes. Node 1 represents the initial decision, whether to introduce a compact or a subcompact first.
Node 2 represents the states of nature possible after three years if a compact is initially introduced: sales
may be above target (T+), on target (T), or below target (T-).
The estimated probabilities are indicated along each branch. Although nodes 4, 5, 7, and 8 are
treated as decision points, the fact that only one branch emanates from each indicates no real choice
at those points. For example, node 4 shows that, having introduced a compact first, and having
indications that sales will exceed targets, the firm would go ahead with development of the
subcompact. On the other hand, nodes 6 and 9 show that, with indications that sales will be below
target for the configuration chosen initially, there are two choices that would be entertained. For
instance, node 6 shows that, having first introduced a compact with initial sales below target, the firm
could choose between planning a subcompact or a newly designed compact car.
The states of nature for nodes 10 to 17 are structured like those for nodes 2 and 3. However,
in this example, the probabilities and payoffs depend on the previous conditions and their ordering. In
other words, P(T+) at node 11 is not the same as at node 2; it is a conditional probability thought to be applicable to this
situation. (In other situations the probabilities of the states of nature at the branch extremities may be
the same; see Figure 13-5 in Example 13-11.) Also, the payoff of 100 corresponding to branch 1-2-4-
10 is not the same as the payoff of branch 1-3-7-14. In this case, it was felt that a compact, being more
profitable than a subcompact at a given target level of sales, would provide a greater payoff if it
preceded the introduction of the subcompact. Recall that each payoff represents a discounted cash
flow, either a present worth or a periodic worth, computed on an after-tax basis, and possibly adjusted
for the anticipated effects of inflation.
Given the decision tree, the next task is to compute the expected values of the decisions at
node 1 by folding back. Thus, the expected value of node 10 would be given by
E(10) = 100(.3) + 90(.5) + 50(.2) = 85
where E(10) represents the indicated expected value. The reader should verify the expected values for
nodes 11 to 17. The expected values appear in the boxes adjacent to the nodes. Folding back amounts
to assigning the expected value of 85 to decision node 4; similarly, the values for nodes 5, 7, and 8 may
be filled in. As for node 6, it is a decision point with two choices, or branches. One has an expected
value of 25; the other, 20.5. A decision-maker wishing to maximize the expected value of payoffs
would choose the decision branch with the higher expected value. He would then assign its expected value to
the decision node. Given the data here, this is equivalent to saying that, having first introduced a
compact, and having experienced below-target initial sales, one would design another compact rather
than a subcompact. This is emphasized in the diagram by crossing out the decision branch with the
lower expected value(s). The reader should verify this procedure by examining node 9.
Once this has been done, the folding back may continue to the preceding stage. The expected
value of node 2 is found from
E(2) = 85(.2) + 66.25(.5) + 25(.3) = 57.625
Similarly, E(3) is 61.8. Thus, since the branch corresponding to producing the subcompact first has the
higher expected value, it would be chosen by an expected-value maximizer.
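The folding-back procedure is naturally expressed as a recursion: a chance node is replaced by the probability-weighted sum of its sub-branches, and a decision node by the maximum over its sub-branches, working from the branch tips back toward node 1. The Python sketch below reproduces the two computations shown above for nodes 10 and 2 of Example 13-10; the node representation itself is an assumption of the sketch, not the book's notation.

# Folding back a decision tree, sketched recursively. A terminal node is
# just a payoff (a number). A chance node holds (probability, subtree)
# pairs; a decision node holds (label, subtree) pairs.
def fold_back(node):
    if isinstance(node, (int, float)):      # terminal payoff
        return node
    kind, branches = node
    if kind == "chance":                    # probability-weighted sum
        return sum(p * fold_back(sub) for p, sub in branches)
    if kind == "decision":                  # pick the best branch
        return max(fold_back(sub) for _, sub in branches)
    raise ValueError("unknown node kind: " + str(kind))

# Chance node 10, with the probabilities and payoffs quoted in the text.
node_10 = ("chance", [(0.3, 100), (0.5, 90), (0.2, 50)])
print(fold_back(node_10))                   # 85.0, matching E(10)

# Node 2 folds the already-computed values of nodes 4, 5, and 6.
node_2 = ("chance", [(0.2, 85), (0.5, 66.25), (0.3, 25)])
print(fold_back(node_2))                    # 57.625, matching E(2)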

Since the decision tree uses the expected-value criterion, results can sometimes be unexpected.
Here, for example, although the compact car is thought to be potentially more profitable, the
probability of its achieving satisfactory market acceptance offsets that feature. In other words,
although the subcompact may not contribute as much profit under the most desirable market
conditions, those conditions are such that the subcompact, having a greater chance of satisfactory
market acceptance, is the sensible first choice.
Note that the decision tree does not purport to map out the entire sequence of decisions at the
outset. If the decision-maker is an expected-value maximizer, the tree would be used for the initial
decision only. Then, after time has passed, the situation would need to be re-evaluated on the basis of
the information then available. In this example, this means that no decision for the second three-year
period will be made now. The situation will be re-examined after the subcompact has been introduced, after a
sense of the market at that time has been obtained, and after the costs and revenues of the various alternatives
then available have been ascertained.
Some readers may be skeptical about the practicality of this tool. However, where large
investments are to be made and the pitfalls are consequently large, industrial decision-makers try to
bring analytical methods to bear on the problems faced in order to gain the greatest possible insight.
The industrial use of decision trees has been reported in the literature (see the references for this chapter), and
computer programs have been written to perform the folding back. (However, note that the
computation of expected values is the least time-consuming aspect of the whole process. Studying the
alternatives, identifying the states of nature, and estimating the probabilities and payoffs under the stated
assumptions take the vast bulk of the analyst's time.)
Even where decision trees are not practical, owing to the difficulties of estimating
probabilities or payoffs, it is possible that the schematic representation of decisions, states of nature,
and ensuing decisions and states of nature may be helpful in clarifying the possibilities and
identifying potential pitfalls.

Bayesian Analysis
Bayes' Rule permits the computation of conditional probabilities based on other probabilities. This can
sometimes be useful in the context of a decision-tree analysis and will be illustrated later in this section.
Before this can be done, however, it will be necessary to introduce Bayes' Rule. As with the other
material in this chapter, the approach will be informal and will draw on illustrative examples to
develop the concepts.
Bayes' Rule may be thought of as an extension of the rule of multiplication. From Equation
13-4,
P(A and B) = P(A)P(B/A)
One may also write it as
P(A and B) = P(B)P(A/B)
(It was not also stated this way earlier in the chapter because the illustrative example of drawing
marbles from an urn used events defined sequentially. The reader should verify for himself that the
flipped condition, (A/B), makes no sense in the context of the urn example.)
Permitting P(A and B) to be written both ways allows the statement
P(A)P(B/A) = P(B)P(A/B)
Suppose that one wishes to compute P(A/B). One can easily obtain that from the statement above by
dividing both sides by P(B):

P(A/B) = P(A)P(B/A) / P(B)                                        (13-7)
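As a quick numerical illustration of Equation 13-7, with hypothetical probabilities rather than values taken from the text:

# Hypothetical illustration of Equation 13-7: P(A/B) = P(A)P(B/A) / P(B).
p_A = 0.40            # P(A), assumed
p_B_given_A = 0.75    # P(B/A), assumed
p_B = 0.50            # P(B), assumed
p_A_given_B = p_A * p_B_given_A / p_B
print(p_A_given_B)    # 0.6, i.e. P(A/B) = 0.40 * 0.75 / 0.50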

Finally, the denominator, P(B), can be segmented, as the following illustration will suggest.
Suppose a manufacturer produces three models of a given product, X, Y, and Z. Quality-control
inspections classify the output as defective, D, or as nondefective, N. All three models are
produced simultaneously in the plant. Six hundred items are sampled for a quality-control study.
Although the items have not yet been inspected, suppose that the distributions of defective and good
items for the different models are as specified in Table 13-4.
Designate the event of randomly drawing a defective item as D, and of a good item as N.
Designate the random selection of models X, Y, or Z with those letters. Conditional events and
probabilities may also be designated. For example, P(D/X) represents the conditional probability of
drawing a defective, given that it is of model type X. From the table it is easy to visualize that P(D/X)
would be 3/200. In order to see what is meant by segmenting the probabilities, first look at how the
overall probability of drawing a defective item, P(D), can be built up from the individual models.
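A minimal Python sketch of this segmentation is given below. Only P(D/X) = 3/200 and the sample of 600 items come from the text; the split of items among the three models and the defect counts for Y and Z are hypothetical stand-ins for Table 13-4.

# Hypothetical stand-in for Table 13-4: (items sampled, defectives) per model.
# Only model X's figures (3 defective out of 200) come from the text;
# the counts for Y and Z are assumed for illustration.
table = {
    "X": (200, 3),
    "Y": (200, 6),   # assumed
    "Z": (200, 9),   # assumed
}
total_items = sum(n for n, _ in table.values())   # 600 in the study

# Segmenting the denominator: P(D) = sum over models of P(model) * P(D/model).
p_D = sum((n / total_items) * (d / n) for n, d in table.values())

# Bayes' Rule (Equation 13-7): P(X/D) = P(X) * P(D/X) / P(D).
p_X = table["X"][0] / total_items
p_D_given_X = table["X"][1] / table["X"][0]
p_X_given_D = p_X * p_D_given_X / p_D

print("P(D)   =", round(p_D, 4))          # 18/600 = 0.03 with these assumed counts
print("P(X/D) =", round(p_X_given_D, 4))  # 3/18, about 0.1667, with these assumed counts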
