\[
\text{Risk Score} =
\begin{cases}
1 & \text{if } U_2(\text{Rating}) = U_2(\text{Severe}) \\
w_1\,U_1(\text{Rating}) + w_2\,U_2(\text{Rating}) + w_3\,U_3(\text{Rating}) & \text{otherwise}
\end{cases}
\tag{Equation 1}
\]
In equation 1, U_1 is the risk's Probability (table 2) with importance weight w_1, U_2 is the Impact Score* with importance weight w_2, and U_3 is the equivalent numerical value for the risk's Timeframe (table 6) with importance weight w_3. Furthermore, the weights sum to unity; that is, w_1 + w_2 + w_3 = 1.
* Impact Score may also be defined as a weighted average of the three impact areas. However, the IS upgrade project decided to minimize its use of weighting at this level of the analysis.
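As a minimal sketch, equation 1 can be computed as follows. The function name and the example weights are illustrative assumptions, not the IS upgrade project's actual values; the only constraints from the text are that the inputs lie on [0, 1], the weights sum to unity, and a Severe impact (U_2 = 1) forces the score to its maximum.

```python
def risk_score(u1, u2, u3, w1=0.25, w2=0.50, w3=0.25):
    """Risk Score per equation 1.

    u1: Probability utility, u2: Impact Score, u3: Timeframe utility,
    each on [0, 1]. Weights are illustrative and must sum to unity.
    """
    assert abs((w1 + w2 + w3) - 1.0) < 1e-9, "weights must sum to unity"
    if u2 == 1.0:  # Severe impact: score defaults to its maximum of one
        return 1.0
    return w1 * u1 + w2 * u2 + w3 * u3
```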
Definition: If the impact of a risk to a project on Cost, Schedule, or Technical Performance is such that it would cause project termination, then it is rated Severe and the Impact Score defaults to its maximum numerical value of one.
Definition: Risk Score is defined to be equal to one if the Impact Score is equal to one (refer to
equation 1).
Definition: The overall Rating for Impact and Risk Priority is defined as follows:

The overall Rating of an identified project risk is rated Severe (in the project's RAW) if the Score for that risk is equal to one.

The overall Rating of an identified project risk is rated High (in the project's RAW) if the Score for that risk is greater than or equal to 0.65 and less than one.

The overall Rating of an identified project risk is rated Moderate (in the project's RAW) if the Score for that risk is greater than or equal to 0.35 and less than 0.65.

The overall Rating of an identified project risk is rated Low (in the project's RAW) if the Score for that risk is greater than or equal to zero and less than 0.35.
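The four rating bands above translate directly into a threshold lookup. This is a sketch of that mapping (the function name is an assumption for illustration):

```python
def overall_rating(score):
    """Map a Score on [0, 1] to its overall Rating per the
    definitions above: Severe at exactly one, then High, Moderate,
    and Low bands at the 0.65 and 0.35 thresholds."""
    if score == 1.0:
        return "Severe"
    if score >= 0.65:
        return "High"
    if score >= 0.35:
        return "Moderate"
    return "Low"
```

Applied to the sample scores in table 8, 0.736 maps to High and 0.474 to Moderate, matching the worksheet.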
Table 8 summarizes the results of the scoring and risk ranking process
described above.
Risk Analysis Worksheet (RAW), Prioritize Risks Segment
IS Upgrade (ISU)

Risk ID  | Impact Score | Impact Rating | Risk Score | Risk Priority Rating | Risk Priority Ranking
ISU.001  | 0.627        | Moderate      | 0.736      | High                 | 2nd
ISU.002  | 1.000        | Severe        | 1.000      | Severe               | 1st
ISU.003  | 0.643        | Moderate      | 0.474      | Moderate             | 3rd

Table 8. Risk Analysis Worksheet: Prioritize Risk Segment (Sample)
Risk Score is used to rank a risk's priority relative to the other identified risks. The risk with the highest Risk Score is ranked first in priority, the risk with the next highest Risk Score is ranked second in priority, and so forth. The closer a Risk Score is to one, the higher the priority; the closer it is to zero, the lower the priority.
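Using the sample scores from table 8, the ranking step amounts to a descending sort on Risk Score:

```python
# Risk Scores for the three sample risks in table 8.
risks = {"ISU.001": 0.736, "ISU.002": 1.000, "ISU.003": 0.474}

# Rank highest score first, per the priority rule above.
ranking = sorted(risks, key=risks.get, reverse=True)
# ranking == ["ISU.002", "ISU.001", "ISU.003"]  (1st, 2nd, 3rd)
```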
Determining Consequence Score
Lastly, another measure is defined for identifying the criticality of managing potential risks. On the IS upgrade project, this measure is referred to as the Consequence Score. Defined by equation 2, a risk's Consequence Score is a function of the Impact Score and the Timeframe. The Consequence Score is computed independently of the value of a risk's probability. Because of this, the potential consequence of a risk event will be visible to management whether the risk event is assessed to have a high or a low occurrence probability.
\[
\text{Consequence Score} =
\begin{cases}
1 & \text{if } U_2(\text{Rating}) = U_2(\text{Severe}) \\
\dfrac{w_2}{w_2 + w_3}\,U_2(\text{Rating}) + \dfrac{w_3}{w_2 + w_3}\,U_3(\text{Rating}) & \text{otherwise}
\end{cases}
\tag{Equation 2}
\]
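A minimal sketch of equation 2, under the same illustrative weights as before: the Impact and Timeframe weights are renormalized so they sum to one, which is what makes the measure independent of the Probability term.

```python
def consequence_score(u2, u3, w2=0.50, w3=0.25):
    """Consequence Score per equation 2: a renormalized weighted
    average of Impact (u2) and Timeframe (u3), independent of
    probability. Weights are illustrative, not the project's values."""
    if u2 == 1.0:  # Severe impact: defaults to the maximum of one
        return 1.0
    total = w2 + w3
    return (w2 / total) * u2 + (w3 / total) * u3
```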
Displaying Risks
Figure 2 presents two key output displays designed for the IS upgrade project. The left-most plot in figure 2 illustrates a risk event's consequence to the IS upgrade project versus its probability of occurrence. When the Consequence Score is plotted against Probability, the potential negative impacts of a risk can be viewed separately from its occurrence probability. The right-most plot in figure 2 illustrates a risk event's Risk Score versus its probability of occurrence, where the Risk Score is given by equation 1.
Figure 2. Risk Status Displays: Risk and Consequence versus Probability
When the plots in figure 2 are viewed jointly, management has a
mechanism to visualize which risks are potentially the most important to
manage. For instance, the Risk Score for risk event C is 8.5 percent
larger than the Risk Score for risk event A. However, the Consequence
Score for risk event C is 24 percent larger than the Consequence Score
for risk event A. Although, in this example, Impact was weighted twice as important to the project as Risk Probability, it is clearly more critical in this case for management to place its attention on risk event C than on risk event A. These plots will reveal characteristics about each risk that
might be hidden from management. In particular, the Consequence
Score for each risk will be an important discriminator for management
decisions when the risk scores computed by equation 1 are close or even
tied.
On the IS upgrade project, each point in figure 2 is color-coded. The color depicts the status of the least complete risk management action in the set of actions described in that risk's management plan. These colors
correspond to an action status color scheme in table 9. Color-coding the
points in figure 2 allows management to focus their attention on the high
priority risks that also have the least complete set of risk mitigation
actions.
Color Interpretation
Blue Risk management action is completed and successful.
Green Risk management action is on schedule and will likely be completed
successfully.
Yellow Risk management action may not be completed on schedule.
Red Risk mitigation action is not started or deemed unsuccessful.
Table 9. Reporting the Status of Risk Management Actions
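The "least complete action" rule above can be sketched as a small lookup. The status names here are hypothetical labels invented for illustration; only the four colors and their interpretations come from table 9.

```python
# Hypothetical action statuses, ordered least to most complete,
# mapped to the table 9 color scheme.
ORDER = ["not_started", "behind_schedule", "on_schedule", "completed"]
STATUS_COLOR = {
    "not_started": "Red",        # not started or deemed unsuccessful
    "behind_schedule": "Yellow", # may not be completed on schedule
    "on_schedule": "Green",      # likely to be completed successfully
    "completed": "Blue",         # completed and successful
}

def risk_color(action_statuses):
    """Color for a risk's point in figure 2: the color of the
    least complete action in its risk management plan."""
    least_complete = min(action_statuses, key=ORDER.index)
    return STATUS_COLOR[least_complete]
```

For example, a risk whose plan contains completed, on-schedule, and behind-schedule actions would plot Yellow, since the behind-schedule action is the least complete.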
4.0 Considerations and Recommended Practices
This paper briefly described the key elements of a risk management
process recently implemented on a large scale, multi-billion dollar,
information system upgrade. One of the most important aspects of the
risk identification process is the proper description of each risk in terms
of the risk statement formalism described herein. Following this
formalism has enabled IS upgrade team members, stakeholders,
management, and decision-makers to clearly see the source and the
nature of the risk. This has improved the team's ability to properly
analyze the risk, distinguish whether the event is truly a risk (and not an
issue), make reasoned assessments of occurrence probabilities, and
select appropriate risk management strategies.
The risk analysis process has benefited from the careful construction of
impact tables defined to identify those areas of the project that are
potentially negatively affected by risk. Establishing a set of anecdotal
definitions for what is meant by a risk that potentially has a severe
impact to the project, or a low impact, enables all process participants to
have a common language and a common basis for risk evaluation and
interpretation.
The use of the scoring models, described herein, provides the IS upgrade
project a consistent and traceable basis for generating a most-to-least
critical risk ranking as a function of multiple evaluation criteria. A
unique design feature of the Risk Score and the Consequence Score
models is the automatic identification of risk events whose potential
impacts are so severe that, irrespective of their occurrence probabilities,
importance weightings, or timeframes, they are considered show-
stoppers and must be flagged to management at all times. The reader is
directed to references 1 and 4 for a technical discussion on the use of
these types of models in project risk management.
Lightly addressed in this paper, but considered a key element of the IS upgrade project's risk management process, are its mechanisms for identifying risk dependencies. On the IS upgrade project, it is critically
important to identify risks that correlate, or link, to other aspects of the
project. Identifying these linkages is done early and continuously in the
process. The specific mechanism employed by the project is a risk
correlation matrix. This matrix is used by project management to
closely monitor risks that are flagged (identified) as having the greatest
number of linkages, or dependencies, across the entire scope of the
project.
Acknowledgements
The information presented in this paper evolved from the efforts of a
multidisciplinary MITRE team. The author is grateful to MITRE staff
Edward E. Goodnight, Charlene J. McMahon, Norm A. Bram, Susan A.
Powers, Meredith LaDuke, Mike C. Swelfer, and Susan H.
Kapsokavadis for their insights and contributions to the design,
development, and implementation of the risk management processes
presented herein.
References
1. Garvey, Paul R.; Risk Management, Encyclopedia of Electrical and
Electronics Engineering, John Wiley & Sons, Inc., 1999;
www.interscience.wiley.com:83/eeee/.
2. Garvey, Paul R.; Probability Methods for Cost Uncertainty Analysis:
A Systems Engineering Perspective, Marcel Dekker, Inc., 270
Madison Avenue, New York, NY, 10016-0602, January, 2000; ISBN:
0824789660; www.dekker.com/e/p.pl/8966-0.
3. Garvey, Paul R., Phair, Douglas J., Wilson, John A.; An Information
Technology Architecture for Risk Assessment and Management,
IEEE Software, May, 1997;
www.computer.org/software/so1997/s3toc.htm.
4. Cho, C. C., Garvey, Paul R., Giallombardo, Robert J.; RiskNav: A
Decision Aid for Prioritizing, Displaying, and Tracking Program
Risk, Military Operations Research, V3, N2, 1997; www.mors.org.
5. Garvey, Paul R., Lansdowne, Zachary F.; Risk Matrix: An Approach
for Identifying, Assessing, and Ranking Program Risks, Air Force
Journal of Logistics, Volume XXII, Number 1, June 1998;
www.mitre.org/resources/centers/sepo/risk/risk_matrix.html.