• Cost estimation:
• Models
1. Static single variable models
2. Static multivariate models
• COCOMO(Constructive cost model)
1. Basic model
2. Intermediate model
3. Detailed COCOMO Model
• Putnam resource allocation model
• Trade-off b/w time and cost
• Development sub cycle
• Software risk management
• What is risk? Typical s/w risks
• Risk management activities
• Risk identification
• Risk projection
• Risk management activity
s/w project planning
• After finalization of the SRS, the next step is to estimate the cost and development time of the project.
• Sometimes, the customer may want to know the cost and development time even prior to
finalization of the SRS.
POOR PLANNING
• Results in s/w failure: delivered s/w was late, unreliable, cost several times the original
estimate, and performed poorly.
• Project planning must incorporate major issues like size and cost estimation,
scheduling, project monitoring and reviews, and risk management.
• s/w planning begins before technical work starts, continues as the s/w evolves from concept
to reality, and culminates when the s/w is retired.
Ex of poor planning:
Suppose we want to renovate a home. After getting several quotations, most of which are around 2.5 lacs, we pick
the builder who offers to do the job in 2 months for 2.0 lacs. We sign an agreement and the builder starts work.
But after about a month, the builder explains that because of some problems the job will take an extra month and
cost an additional 0.5 lacs — bringing it to 2.5 lacs, in line with the other quotations.
1. Size estimation
It's a difficult area of project planning.
Other engineering disciplines have the advantage that a bridge or a road can be
seen or touched — they are concrete. s/w is abstract, so it is difficult to
gauge the size of the system.
SIZE METRICS: 2 units to measure size
1. LOC
2. Function count
1. LOC
• A simple metric that can be counted (simply a count of the number of lines).
• LOC includes declarations and executable statements but excludes comments and blank lines.
• Comments — to include or not?
• There is a fundamental reason for including comments in the count: the quality of comments affects
maintenance cost.
• But inclusion of comments and blank lines in the count may encourage developers to introduce
many such lines during project development in order to create an illusion of high productivity.
DISADV:
• Measuring a system by the no. of LOC is rather like measuring a building by the no. of bricks involved in
its construction. Buildings are described in terms of facilities, no. and size of rooms, and their total
area in sq. feet or meters.
• LOC is language dependent.
• A major problem with the LOC measure is that it is not consistent, as some lines are more difficult to
code than others.
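As a small illustration of the counting rule above (declarations and executable statements count; comments and blank lines do not), here is a minimal sketch in Python. The `#`-comment convention is an assumption chosen only for the example:

```python
def count_loc(source: str) -> int:
    """Count LOC, excluding blank lines and comment-only lines.
    Assumes '#' starts a comment, as in Python — a convention
    chosen only for this sketch."""
    loc = 0
    for line in source.splitlines():
        stripped = line.strip()
        # Blank lines and pure-comment lines are excluded;
        # a code line with a trailing comment still counts.
        if stripped and not stripped.startswith("#"):
            loc += 1
    return loc
```

For example, a file with two code lines, two comment-only lines, and two blank lines yields a count of 2.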
2. Function Count
• Alan Albrecht, while working for IBM, recognized the problem of size measurement in the
1970s and developed a technique (called Function Point Analysis) that appeared to
be a solution to the size measurement problem.
• When dealing with customers, manufacturers talk in terms of functions available (e.g. digital
tuning — function-count based), not in terms of components (e.g. integrated circuits — LOC based).
• So FPA (function point analysis) is a solution to the size measurement problem.
• It measures functionality from the user's point of view, i.e. on the basis of what the user requests and
receives in return.
• It deals with the functionality being delivered, not with LOC.
• Measuring size this way has the advantage that size measured with FPA is independent of the technology used
to deliver the functions.
• Ex: 2 identical counting systems, one written in a 4GL and the other in assembler, would have the same
function count.
Function points
• A productivity measure, empirically justified.
• Motivation: define and measure the amount of value (or
functionality) produced per unit time.
• Principle: characterize the complexity of an application by its function point count.
• The size of a project varies with its function points.
Ch. 8
The principle of Albrecht’s function point analysis(FPA) is that a system
is decomposed into 5 functional units.
• Inputs : information entering the system
• Outputs : information leaving the system
• Inquiries : requests for instant access to information
• Internal logical files : information held within system
• External interface files : information held by other systems that is
used by the system being analyzed.
5 functional units are divided in two categories:
(i) Data function types
1. ILF
2. EIF
• Internal Logical Files (ILF): User-identifiable group of logically related data or
control information maintained within the system.
• External Interface files (EIF): User identifiable group of logically related data or
control information referenced by system, but maintained within another
system.
(ii) Transactional function types
1. EI
2. EO
3. EQ
• External Input (EI): An EI processes data or control information that comes from outside the system.
• The EI is an elementary process, which is the smallest unit of activity that is meaningful to the end user
in the business — e.g. items provided by the user that describe distinct application-oriented data (such as file names
and menu selections).
• External Output (EO): An elementary process that sends derived data or control information outside the
system boundary (e.g. reports, messages).
• External Inquiry (EQ): An elementary input–output combination that retrieves data from the system
without updating internal files (e.g. a simple query).
Weights of the 5 functional units (average complexity):

Item                    Weight
Number of inputs          4
Number of outputs         5
Number of inquiries       4
Number of files          10
Number of interfaces      7
Special features
• The function point approach is independent of the language, tools, or methodologies
used for implementation; i.e. it does not take into consideration programming
languages, DBMS, processing hardware, or any other database technology.
• Function points can be estimated from a requirement specification or design
specification, thus making it possible to estimate development effort in the early
phases of development.
• Function points are directly linked to the statement of requirements; any change
of requirements can easily be followed by a re-estimate.
• Function points are based on the system user's external view of the system, so
non-technical users of the software system have a better understanding of what
function points are measuring.
Counting Function Points
• The 5 functional units are ranked acc. to their complexity:
1. LOW
2. AVERAGE
3. HIGH
Organizations that use FP methods develop criteria to determine whether a particular
entry is low, avg. or high.
After classifying each of the 5 functional units, UFP (unadjusted function points) are
calculated using predefined weights for each function type as given in the table.
• The weighting factors are identified for all functional units and multiplied with
the functional units accordingly.
• Unadjusted Function Point (UFP):

UFP = Σ(i=1..5) Σ(j=1..3) Zij · Wij

where i indexes the 5 functional units (rows, counts Zij) and j indexes the 3
complexity ranks low/avg/high (columns, weights Wij).
The final number of function points is arrived at by multiplying UFP by an adjustment factor determined by considering 14
aspects of processing complexity given in the following table:
• FP = UFP × CAF
• where CAF is the complexity adjustment factor, equal to [0.65 + 0.01 × ΣFi].
• The Fi (i = 1 to 14) are the degrees of influence, based on responses to the
questions noted in the following table:
Technical Complexity Factors:
1. Data Communication
2. Distributed Data Processing
3. Performance Criteria
4. Heavily Utilized Hardware
5. High Transaction Rates
6. Online Data Entry
7. Online Updating
8. End-user Efficiency
9. Complex Computations
10. Reusability
11. Ease of Installation
12. Ease of Operation
13. Portability
14. Maintainability
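The CAF arithmetic above is easy to mechanize. A minimal sketch (function and variable names are my own):

```python
def caf(fi):
    """Complexity adjustment factor CAF = 0.65 + 0.01 * sum(Fi),
    where fi is the list of 14 degree-of-influence ratings (0-5)."""
    if len(fi) != 14:
        raise ValueError("expected 14 ratings")
    return 0.65 + 0.01 * sum(fi)

def function_points(ufp, fi):
    """FP = UFP * CAF."""
    return ufp * caf(fi)
```

With UFP = 52 and ratings summing to 46, this gives CAF = 1.11 and FP ≈ 58.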
Uses of FP
1. To monitor levels of productivity, for example, the no. of function points achieved per
work hour expended.
2. Software development cost estimation.
These metrics are controversial and are not universally accepted. There are
standards issued by the International Function Point User Group (IFPUG, covering
the Albrecht method) and the United Kingdom Function Point User Group (UKFPUG,
covering the MkII method). An ISO standard for the function point method is also
being developed.
• The FP method continues to be refined.
Example: SafeHome Functionality
[Context diagram: the user supplies Password, Zone Setting, Zone Inquiry and Test Sensor
inputs to the SafeHome system; the system reads Sensors and System Config Data and
interacts with the Monitoring & Response subsystem via Password and Alarm Alert.]
Example: SafeHome FP Calc
                                      weighting factor
measurement parameter       count   simple  avg.  complex
number of user inputs         3   ×   3      4      6     =  9
number of ext. interfaces     4   ×   5      7     10     = 20
(remaining parameters counted similarly)
count-total                                               = 52

complexity multiplier = [0.65 + 0.01 × ΣFi] = [0.65 + 0.46] = 1.11
function points = 52 × 1.11 ≈ 58
For the average case ΣFi = 14 × 3; here the ratings were:
• 4: significant data communication
• 5: critical performance
• 2: moderately reusable
• 0: no multiple installations
• rest of the factors = average = 3
Cost Estimation
For any new s/w project, it's necessary to know:
1. how much will it cost to develop? and
2. how much development time will it take?
These estimates are needed before development is initiated. How is this done?
In many cases estimates are made using past experience as the only guide.
But most projects differ from past ones, so past experience alone is not sufficient.
A number of estimation techniques have been developed, and they have the following attributes in common:
• Project scope must be established in advance
• Software metrics are used as a basis from which estimates are made
• The project is broken into small pieces which are estimated individually
PM = a · (KLOC)^b
Legend
• PM: person-months
• KLOC: thousands of lines of code
• a, b depend on the model
• b > 1 (non-linear growth)
Static, single variable model
• Methods using this model use an equation to estimate desired values such as time, effort (cost), etc.
• They all depend on the same variable used as predictor (say, size).
• The most common form of equation is:

C = a · L^b    ……… eq(1)

• C = cost (effort expressed in any unit of manpower, e.g. person-months)
• L = size, given in number of LOC.
• a, b are constants derived from the historical data of the organization.
• As a and b depend on the local development environment, these models are not transportable to different
organizations.
• The Software Engineering Laboratory (SEL) of the University of Maryland established the SEL model to estimate its own s/w
production.
• The model is a typical ex of a static, single variable model. SEL Model (L taken in KLOC):

E = 1.4 L^0.93 PM (effort)
DOC = 30.4 L^0.90 pages (documentation)
D = 4.6 L^0.26 months (duration)

Average manning M = E/D (avg. no. of persons required per month).
Walston-Felix (W-F) Model:
• IBM's Walston-Felix model gives the relationship b/w productivity (number of lines of source code per person-month)
and a productivity index I.
• The productivity index uses 29 variables that were found to be highly correlated with productivity:

I = Σ(i=1..29) Wi · Xi

• where Wi is the factor weight for the i-th variable and Xi ∈ {−1, 0, +1} depending on whether the variable
decreases, has no effect on, or increases productivity, respectively.
• The terms of the above eq. are then added up to give the productivity index.
• Effort and duration: E = 5.2 L^0.91 PM, D = 4.1 L^0.36 months (L in KLOC).
• Average manning M = E/D (avg. no. of persons required per month).
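A sketch of a static single-variable model in code, assuming the commonly quoted SEL constants (E = 1.4 L^0.93 PM, D = 4.6 L^0.26 months, with L in KLOC); any other model of this family differs only in the constants:

```python
def sel_effort(kloc):
    """SEL model effort: E = 1.4 * L**0.93 person-months (L in KLOC).
    Constants are the commonly quoted SEL values, assumed here."""
    return 1.4 * kloc ** 0.93

def sel_duration(kloc):
    """SEL model duration: D = 4.6 * L**0.26 months."""
    return 4.6 * kloc ** 0.26

def average_manning(kloc):
    """Average persons required per month: M = E / D."""
    return sel_effort(kloc) / sel_duration(kloc)
```

For a 32 KLOC product this predicts roughly 35 PM of effort over about 11 months, i.e. an average team of about 3 persons.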
• COCOMO is a hierarchy of s/w cost estimation models, which include basic, intermediate and detailed sub-models.
• It evolved from COCOMO to COCOMO II.
Acc. to Boehm, s/w cost estimation should be done through 3 stages:
1. Basic: compute effort and cost estimate from LOC alone.
2. Intermediate: compute effort and cost using a set of 15 cost drivers and LOC.
• Includes subjective assessments of product, h/w, personnel and project attributes.
3. Detailed: incorporates the intermediate version with an assessment of the cost drivers' impact on each step (analysis, design, etc.).
The detailed model provides a set of phase-sensitive effort multipliers for each cost driver.
• The COCOMO model predicts the effort and duration of a project based on inputs relating to the size of the resulting system
and a number of "cost drivers (both phase-sensitive effort multipliers and a 3-level product hierarchy)" that affect productivity.
• Any s/w development project can be classified into 1 of the following 3 categories:
1. Organic (corresponds to simple applications, e.g. data processing programs)
2. Semi-detached (corresponds to utilities, e.g. compiler, linker)
3. Embedded (corresponds to system programs, e.g. OS, real-time system programs;
system programs interact directly with h/w and typically involve meeting timing
constraints and concurrent processing).
To classify a product into these 3 categories, Boehm considered not only the
characteristics of the product but also those of the development team and development
environment.
1. Basic Model
• In organic mode, a small team of experienced developers develops s/w in a familiar environment.
• In-house, less complex developments.
• There is proper interaction among team members and they coordinate their work.
• The project deals with developing a well-understood application program, the development team is
reasonably small, and the team members are experienced in developing similar types of projects.
• The size of s/w developed in this mode ranges from small (a few KLOC) to medium (a few tens of KLOC).
• In the other 2 modes, size ranges from small to very large (a few hundreds of KLOC).
• Semi-detached mode is an intermediate mode b/w organic and embedded modes in terms of team size.
• It consists of a mixture of experienced and inexperienced staff.
• Team members may have limited experience on related systems and may be unfamiliar with some
aspects of the system being developed.
• In the embedded mode of s/w development, the problem to be solved is unique, and the project has tight
constraints which might be related to the target processor and its interface with associated h/w.
• The project environment is complex.
• Team members are highly skilled, but it's often hard to find experienced persons.
• The system is highly complex and innovative, requiring high reliability, with real-time issues.
• Cost and schedule are tightly controlled.
The Basic COCOMO model takes the form E = a(KLOC)^b PM, Tdev = c(E)^d months, and
gives an approximate estimate of project parameters:
1. Organic: E = 2.4(KLOC)^1.05 PM, Tdev = 2.5(E)^0.38 months
2. Semi-detached: E = 3.0(KLOC)^1.12 PM, Tdev = 2.5(E)^0.35 months
3. Embedded: E = 3.6(KLOC)^1.20 PM, Tdev = 2.5(E)^0.32 months
So, for large projects, the effort calculated for embedded mode is approx. 4 times the effort for organic mode,
and the effort for semi-detached mode approx. 2 times the effort of organic mode —
there are large differences in these values.
Development time is approx. the same for all 3 modes, so the selection of mode is very important:
since development time is about the same, the only varying parameter is the requirement of persons,
and every mode will have different manpower requirements.
For projects over 300 KLOC, embedded mode is the right choice.
Ex:
• Let the size of an organic type s/w product be estimated at 32,000 LOC.
• Let the average salary of a s/w engineer be Rs 15,000/month.
• Find the effort required to develop the s/w product and the development time.
Sol:
Effort E = 2.4 × (32)^1.05 ≈ 91 PM
Development time D = 2.5 × (91)^0.38 ≈ 14 months
Cost required to develop the product = E × salary = 91 × 15,000 = Rs 13,65,000
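The worked example can be checked with a short sketch of the Basic COCOMO equations. The effort constants are the standard ones listed above; the duration constants c = 2.5 and d = 0.38/0.35/0.32 are the usual published basic-model values, assumed here:

```python
# Basic COCOMO: E = a * KLOC**b (person-months), Tdev = c * E**d (months)
BASIC = {
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode="organic"):
    a, b, c, d = BASIC[mode]
    effort = a * kloc ** b   # person-months
    tdev = c * effort ** d   # months
    return effort, tdev
```

`basic_cocomo(32, "organic")` gives roughly 91 PM and 14 months, matching the solution above.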
Intermediate COCOMO model
• The basic model allows a quick and rough estimate, but lacks accuracy.
• Used for medium-sized projects; team size: medium.
Moreover, the Basic COCOMO model assumes that effort and development time are functions of product size alone.
However, many other project parameters besides product size affect effort as well as development time.
So, for accurate estimates of effort and time, the effect of all relevant parameters must be taken into account.
The Intermediate COCOMO model recognizes this and refines the initial estimate obtained using the Basic COCOMO expressions
with a set of 15 cost drivers based on various attributes of s/w development, like product reliability, db size, execution
and storage constraints.
Cost drivers are critical features that have a direct impact on the project.
• Boehm introduced this additional set of 15 predictors called cost drivers in the intermediate model to take account of the s/w
development environment.
• Cost drivers are used to adjust the nominal cost of a project to the actual project environment, hence increasing the accuracy of the estimate.
Cost drivers: 4 categories
1. Product attributes 2. Computer attributes
3. Personnel attributes 4. project attributes
Typical cost driver categories
1. Product
• Characteristics of product that are considered include inherent complexity of product, reliability requirements.
2. Computer
• Characteristics of the computer that are considered include execution speed required, and time, space or storage constraints.
3. Personnel
• Attributes of development personnel that are considered include experience level of personnel, programming
capability, analysis capability etc.
4. Project(Development environment)
Captures development facilities available to developers.
An important parameter that’s considered is sophistication of automation (CASE) tools used for s/w development.
• e.g., Are modern programming practices/sophisticated software tools being used?
The 15 cost drivers, by category:
• Product: RELY (required reliability), DATA (database size), CPLX (product complexity)
• Computer: TIME (execution time constraint), STOR (main storage constraint), VIRT (virtual machine volatility), TURN (turnaround time)
• Personnel: ACAP (analyst capability), AEXP (applications experience), PCAP (programmer capability), VEXP (virtual machine experience), LEXP (language experience)
• Project: MODP (modern programming practices), TOOL (use of s/w tools), SCED (required development schedule)
Cost drivers are critical features that have a direct impact on the project.
Each cost driver is rated for the given project environment.
The rating uses a scale — very low, low, nominal, high, very high, extra high — describing to
what extent the cost driver applies to the project being estimated.
Steps for intermediate level:
Step 1: Nominal effort estimation
• Determine the project's development mode (organic, semidetached, embedded)
• Estimate the size of the project
• The multiplying factors for all 15 cost drivers are multiplied to get the
EAF (Effort Adjustment Factor).
• Typical values for EAF range from 0.9 to 1.4.
• The Intermediate COCOMO eq. takes the form:

E = a(KLOC)^b × EAF PM

with coefficients a, b: organic a = 3.2, b = 1.05; semi-detached a = 3.0, b = 1.12; embedded a = 2.8, b = 1.20.
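A sketch of the intermediate computation. The coefficient values (organic 3.2/1.05, semidetached 3.0/1.12, embedded 2.8/1.20) are the usual published intermediate-COCOMO constants, assumed here; the individual driver multipliers would come from Boehm's rating tables:

```python
from math import prod

# Intermediate COCOMO effort coefficients (assumed standard values)
INTERMEDIATE = {
    "organic":      (3.2, 1.05),
    "semidetached": (3.0, 1.12),
    "embedded":     (2.8, 1.20),
}

def intermediate_effort(kloc, mode, multipliers):
    """E = a * KLOC**b * EAF, where EAF is the product of the
    15 cost-driver multipliers (a nominal rating maps to 1.0)."""
    if len(multipliers) != 15:
        raise ValueError("expected 15 cost-driver multipliers")
    a, b = INTERMEDIATE[mode]
    eaf = prod(multipliers)
    return a * kloc ** b * eaf
```

With all drivers nominal (EAF = 1.0) the estimate reduces to the nominal effort; a multiplier above 1 (e.g. stringent reliability) scales the estimate up.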
2. Product design
The 2nd phase of the COCOMO development cycle is concerned with
determination of the product architecture and specification of the subsystems.
This phase consumes 16% to 18% of the effort and 19% to 38% of the development time.
3. Programming
The 3rd phase is divided into 2 sub-phases: detailed design and code/unit test.
This phase consumes 48% to 68% of the effort and 24% to 64% of the development time.
4. Integration/Test
This phase occurs before delivery.
It mainly consists of putting the tested parts together and then testing the final product.
This phase consumes 16% to 34% of the effort and 18% to 34% of the development time.
• where S represents KLOC (thousands of LOC of the module).
In order to allocate effort and schedule components to each phase in the life cycle of a s/w development program:
• There are assumed to be 5 distinct life cycle phases, and
• the effort and schedule of each phase are assumed to be given as fixed fractions (μp, τp) of the overall effort and schedule:

Ep = μp × E,  Dp = τp × D

For the constant values, refer to the intermediate COCOMO table.
• The COCOMO model is the most thoroughly documented model currently available.
• Easy to use.
• s/w managers can learn a lot about productivity from the very clear presentation of cost drivers.
• Data gathered from previous projects may help determine the values of the model's constants (a, b, c, d).
• These values may vary from organisation to organisation.
Issues:
• The model ignores s/w safety and security issues.
• It also ignores many h/w and customer-related issues.
• It's silent about the involvement and responsiveness of the customer.
• It does not give proper importance to the s/w requirements and specification phase, which has been identified as the most
sensitive phase of the s/w development life cycle.
Staffing level estimation
• Once the effort required to develop s/w has been determined, the next step is to find the staffing requirement for the s/w
project.
• Putnam worked on this problem; he extended the work of Norden, who had earlier investigated the staffing pattern of
R&D-type h/w projects.
• Norden found that the staffing pattern can be approximated by the Rayleigh distribution curve, but these results were
not meant to model the staffing pattern of s/w development projects.
• Later, Putnam studied the staffing of s/w projects and found that s/w development has characteristics
very similar to the other R&D projects studied by Norden.
• Putnam suggested that the optimal staff build-up on a project should follow the Rayleigh curve. Only a small no. of
engineers is required at the beginning of the project to carry out planning and specification tasks.
As the project progresses and more detailed work is required, the no. of engineers reaches a peak. After
implementation and unit testing, the no. of project staff falls.
• A constant level of manpower throughout the project duration would lead to wastage of effort and increase the time and
effort required to develop the product. If a constant no. of engineers is used over all phases of the project, some phases
would be overstaffed and other phases understaffed, causing inefficient use of manpower and leading to
schedule slippage and an increase in cost.
Putnam resource allocation model
• Norden of IBM observed that the Rayleigh curve can be used as an approximate model for a range of h/w
development projects.
• Putnam then observed that the Rayleigh curve is a close representation of s/w subsystem development.
• 150 projects were studied by Norden and then by Putnam; both researchers observed the same
tendency for the manpower curve to rise, peak, and then exponentially trail off as a function of time.
• The Rayleigh curve represents manpower — effort per unit time, measured in persons per
unit time — as a function of time, expressed in PY/YR (person-years/year).
Peak manning:
The manpower curve is m(t) = 2Kat·e^(−at²), where K is the total project effort and a = 1/(2t_d²).
Setting the derivative to zero,

m′(t) = 2Ka·e^(−at²)·(1 − 2at²) = 0,

gives the peak at t = t_d, with peak manning m₀ = m(t_d) = K/(t_d·√e).
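A numerical sketch of the Rayleigh staffing curve, assuming the Norden/Putnam form m(t) = 2Kat·e^(−at²) with a = 1/(2t_d²):

```python
import math

def manpower(t, K, td):
    """Rayleigh curve m(t) = 2*K*a*t*exp(-a*t**2), a = 1/(2*td**2):
    effort per unit time (e.g. person-years/year) at time t.
    K: total project effort, td: time of peak manning (delivery)."""
    a = 1.0 / (2.0 * td ** 2)
    return 2.0 * K * a * t * math.exp(-a * t ** 2)

def peak_manning(K, td):
    """m0 = m(td) = K / (td * sqrt(e))."""
    return K / (td * math.sqrt(math.e))
```

The curve rises, peaks at t = t_d, and trails off — e.g. for K = 100 PY and t_d = 2 YR, peak manning is about 30 persons.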
Difficulty Metric
D = K/t_d² (Difficulty), and D ∝ m₀ (peak manning).
• This relationship shows that a project is more difficult to develop when the manpower
demand is high or when the time schedule is short (small t_d).
• Difficult projects tend to have a steeper demand for manpower at the
beginning for the same time scale.
• After studying about 50 army s/w projects, Putnam observed that for systems
that are easy to develop, D tended to be small, while for systems that are hard to
develop, D tended to be large.
• Consider ex 4.13 and calculate difficulty and manpower build-up.
Productivity vs Difficulty
• Average productivity P = S/E, where S is the LOC produced and E is the cumulative manpower used from t = 0 to t = t_d
(inception of the project to delivery time).
• Using nonlinear regression, Putnam determined from an analysis of the 50 s/w army projects that
productivity ∝ D^(−2/3), i.e. P ∝ (K/t_d²)^(−2/3).
• Rearranging gives the software equation S = C·K^(1/3)·t_d^(4/3), where the constant of proportionality is replaced by
the coefficient C, called the technology factor; it reflects the effects of the development environment and the program's level.
• It's easy to use the size, cost and development time of past projects to determine the value of C, and hence the value of C obtained
can be revised to model forthcoming projects.
Trade-off b/w time vs cost
• In s/w projects, time can't be freely exchanged against cost.
• In the Putnam model, the software equation gives, for fixed size S and technology factor C,
K = S³/(C³·t_d⁴): effort (and hence cost) varies inversely as the fourth power of development time,
so even a small compression of the schedule causes a very large increase in effort.
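Since Putnam's software equation implies K = S³/(C³·t_d⁴) for fixed S and C, the effort ratio for a schedule change depends only on the fourth power of the time ratio; a sketch illustrating the trade-off:

```python
def effort_multiplier(schedule_fraction):
    """Relative effort K2/K1 when td is compressed to the given
    fraction of the original schedule, with S and C held fixed:
    K2/K1 = (td1/td2)**4 = (1/fraction)**4."""
    return (1.0 / schedule_fraction) ** 4
```

Compressing the schedule by 10% (fraction 0.9) raises effort by about 52%; halving it raises effort 16-fold — which is why time can't be traded freely against cost.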
Software Risk Management
• We software developers are extreme optimists: we assume everything will go exactly as planned.
• The other view: it is not possible to predict what is going to happen.
• Software surprises are never good news — unexpected things happen and throw the project completely off track.
• Project planning is expected to quantify both the probability of failure and the consequences of failure,
and to describe what will be done to reduce risk.
Sources of risk in a software project:
• Vague requirements
• Users not sure of their needs
• Huge number of people
• Large number of resources
• Long time span
• Requirement changes
• What is risk? Tomorrow's problems are today's risks. Risk is an uncertainty.
"Risk is a problem that may cause some loss or threaten the success of the project, but which has not
happened yet."
• These potential problems might have an adverse effect on the cost, schedule or technical success of the project,
on quality, or on project team morale.
Risk management: the process of identifying, addressing and eliminating these problems before they can
damage the project.
• Current, real problems require prompt corrective action, while risks can be dealt with in many ways — we
might choose to avoid a risk entirely by changing the project approach or even cancelling the project.
• There are no magic solutions to any of these risk factors, so we need to rely on past experience and a
strong knowledge of contemporary s/w engg. and management practices to control these risks.
What is Risk?
• Risks are potential problems that may affect the successful completion of a software
project.
• Risks involve uncertainty and potential losses.
• Risk analysis and management are intended to help a software team understand
and manage uncertainty during the development process.
• Risk management begins long before technical work starts: risks are identified
and prioritized by importance.
• The team builds a plan to avoid risks if it can, or to minimize them if they turn into
problems.
Problem vs risk
• A problem is an event which has already occurred, but
• a risk is a potential problem — something unpredictable that may or may not occur.
Typical Software Risks
Capers Jones has identified the top five risk factors that threaten projects in different applications (accumulated
from previous projects).
1. Dependencies of the project on outside agencies or factors.
1. Dependencies of project on outside agencies or factors.
• Availability of trained, experienced persons
• Inter group dependencies
• Customer-Furnished items or information
• Internal & external subcontractor relationships
2. Requirement issues
Many projects face uncertainty around the product's requirements. This is tolerable in the early stages, but the threat to
success increases if such issues are not resolved as the project progresses.
If we don't control requirements-related risk factors, we might build either the wrong product or the right product badly.
Either situation results in unpleasant surprises and unhappy customers.
• Lack of clear product vision
• Unprioritized requirements
• Lack of agreement on product requirements
• New market with uncertain needs
• Rapidly changing requirements
• Inadequate Impact analysis of requirements changes
3. Management Issues
Project managers usually write the risk management plans, and most people do not wish to air their own weaknesses in
public — so management risks tend to be under-reported.
• Inadequate planning
• Inadequate visibility into actual project status
• Unclear project ownership and decision making
• Staff personality conflicts
• Unrealistic expectation
• Poor communication
4. Lack of knowledge
The rapid rate of change of technologies, and turnover of skilled staff, mean our project teams may not have the skills we need to be successful.
The key is to recognize risk areas early enough to take appropriate preventive actions, like training, hiring consultants, and bringing the right people
together on the project team.
• Inadequate training
• Poor understanding of methods, tools, and techniques
• Inadequate application domain experience
• New Technologies
• Ineffective, poorly documented or neglected processes
5. Other risk categories
Some of the critical areas are:
• Unavailability of adequate testing facilities
• Turnover of essential personnel
• Unachievable performance requirements
• Technical approaches that may not work
RM involves several important steps.
The risks we encounter in a project should be resolved so that we are able to deliver the desired product to the customer.
The art of managing risks effectively, so that a WIN-WIN situation and a friendly relationship are established b/w the team and
the customer, is called RM.
• We should assess the risks on a project so that we understand what may occur during the course of development or maintenance.
Risk Assessment
• The process of examining the project and identifying areas of potential risk. Its three activities:
1. Risk identification: produce a list of the project-specific risk items.
2. Risk analysis (RA): analyse them — understand the nature and kind of each risk and gather information about it.
Risk analysis involves examining how project outcomes might change with modification of the risk input variables.
• Questions:
• What is causing the risk?
• How much will it affect the project?
• Are the risks dependent?
• What is the probability that it will occur?
3. Risk prioritization (RP): assign priorities to each of them; prioritization focuses attention on the most severe risks.
Rank the risks according to management priorities, by risk category, rated by likelihood and possible cost or consequence.
• Risk exposure: the product of the probability of incurring a loss due to the risk and the potential magnitude of that loss.
This prioritization can be done in a quantitative way, by estimating probability and relative loss on a scale of 1 to 10.
The higher the exposure, the more aggressively the risk should be tackled.
Another way of handling risk is risk avoidance — do not do the risky things! We may avoid risks by not undertaking certain projects.
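Risk exposure gives a simple numeric ranking; a sketch with a made-up example register (the risk items and numbers are purely illustrative):

```python
def risk_exposure(probability, loss):
    """RE = probability of incurring the loss * magnitude of the loss."""
    return probability * loss

# Hypothetical risk register: (risk item, probability, relative loss on 1-10)
register = [
    ("key developer leaves", 0.3, 8),
    ("requirements churn",   0.6, 6),
    ("test lab unavailable", 0.1, 9),
]

# Sort by exposure, highest first, to decide what to tackle aggressively.
ranked = sorted(register, key=lambda r: risk_exposure(r[1], r[2]),
                reverse=True)
```

Here "requirements churn" (RE = 3.6) outranks the scarier-sounding but less likely items, so it is tackled first.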
Risk Control
• The process of managing risks to achieve the desired outcomes.
Risk Management Planning produces a plan for dealing with each significant risk, using strategies such as:
• Avoidance
• Protection
• Reduction
• Research
• Reserves
• Transfer
• It's useful to record the decisions in the plan, so that both customer and developer can review how problems are to be avoided, as
well as how they are to be handled when they arise.
• Monitor the project as development progresses, periodically re-evaluating the risks, their probability, and their likely impact.
• Risk resolution is the execution of the plans for dealing with each risk:
• risk resolving
• risk documentation
Risk management
• Uncertain requirements
• Unknown technology
• Infeasible design
• Cost and schedule uncertainty.
• To manage risks we need to establish a strong bond b/w customers and team
members.
• s/w metrics and tools can be developed to manage risks.
• Risk need not necessarily be negative — it can be viewed as an
opportunity to develop our projects in a better way.
Manpower buildup
• D is dependent upon K and t_d. The derivatives of D = K/t_d² relative to K and t_d are:

∂D/∂K = 1/t_d²,  ∂D/∂t_d = −2K/t_d³

• The quantity D₀ = K/t_d³ played an important role in explaining the observed staffing patterns.
Putnam also discovered that D₀ could vary slightly from one organization to another, depending on the average skill of the analysts,
developers and management involved. D₀ has a strong influence on the shape of the manpower distribution:
the larger D₀ is, the steeper the manpower distribution is, and the faster the necessary manpower build-up will be. So the quantity D₀ is called the
manpower build-up.
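The two Putnam quantities can be computed directly; a sketch using the definitions D = K/t_d² and D₀ = K/t_d³ (reconstructed here, as the slide text omits the formulas):

```python
def difficulty(K, td):
    """Putnam difficulty D = K / td**2 (proportional to peak manning).
    K: total effort (person-years), td: delivery time (years)."""
    return K / td ** 2

def manpower_buildup(K, td):
    """Manpower build-up D0 = K / td**3: larger values mean a
    steeper staffing curve and faster build-up."""
    return K / td ** 3
```

For K = 100 PY and t_d = 2 YR: D = 25 and D₀ = 12.5; stretching the schedule to 2.5 YR drops D₀ to 6.4 — a gentler build-up for the same total effort.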
• Ex: if modern programming practices are used, the initial estimate is
scaled downward by multiplication with a cost driver having a value less
than 1.
• If there are stringent reliability requirements on the s/w product, this
initial estimate is scaled upward.
• Boehm requires the project manager to rate these 15 different
parameters for a particular project on a scale of 1 to 3.
• Then, depending on these ratings, he suggests appropriate cost driver
values which should be multiplied with the initial estimate obtained using
Basic COCOMO.
Detailed COCOMO MODEL
• A major shortcoming of both the basic and intermediate COCOMO models is that
they consider the s/w product as a single homogeneous entity.
• But most large systems are made up of several smaller subsystems, and these subsystems may have widely different
characteristics.
• E.g. some subsystems may be considered organic type, some semi-detached and some embedded.
• Not only that — the inherent development complexity of the subsystems may differ.
• Also, for some subsystems the reliability requirements may be high, for some the development team might have no previous experience of
similar development, and so on.
• The complete COCOMO model considers these differences in the characteristics of the subsystems and estimates the effort and development
time as the sum of the estimates for the individual subsystems.
• The cost of each subsystem is estimated separately. This approach reduces the margin of error in the final estimate.