
Software project planning

• Cost estimation:
• Models
1. Static single variable models
2. Static multivariate models
• COCOMO(Constructive cost model)
1. Basic model
2. Intermediate model
3. Detailed COCOMO Model
• Putnam resource allocation model
• Trade-off b/w time and cost
• Development sub cycle
• Software risk management
• What is risk? Typical s/w risks
• Risk management activities
• Risk identification
• Risk projection
• Risk management activity
s/w project planning
• After finalization of the SRS, the next step is to estimate the cost and development time of the project.
• Sometimes, the customer may like to know the cost and development time even prior to
finalization of the SRS.

• Key issues during project planning:


1. Cost estimation
2. Development time
3. Project Scheduling
4. Risk analysis
5. Resources requirements
6. Quality management
Software Project Planning
In order to conduct a successful software project, we must understand:
• Scope of work to be done
• The risk to be incurred
• The resources required
• The task to be accomplished
• The cost to be expended
• The schedule to be followed

POOR PLANNING
• Results in s/w failure: delivered s/w is late and unreliable, costs several times the original
estimate, and has poor performance.
• Project planning must incorporate major issues like size and cost estimation,
scheduling, project monitoring, reviews and risk management.
• s/w planning begins before technical work starts, continues as the s/w evolves from concept
to reality, and culminates when the s/w is retired.
Ex of poor planning:

Suppose we want to renovate a home. After getting several quotations, most of which are around Rs 2.5 lac,
we pick the builder who offers to do the job in 2 months for Rs 2.0 lac. We sign an agreement and the builder starts work.

But after about a month, the builder explains that because of some problems the job will take an extra month and
cost an additional Rs 0.5 lac.

This creates several problems:


1. We badly need the space, and another month of delay is a real inconvenience.
2. We have already arranged for a loan and now need to arrange an additional amount.
3. Even if we get a lawyer and decide to fight in court, it may take several months before the case is decided.
4. More cost is required if we switch to a new builder in the middle of the job.
5. At this point, we have no option but to continue with the current builder.
However, we would neither use this builder again, nor would we recommend the builder to anyone else.
Activities during s/w project planning
The 1st activity is to estimate the size of the project.
Size is the key parameter for estimation of all other activities:

• It is an input to all costing models for estimation of the cost, development time and schedule of the project.

• Resource requirements are estimated on the basis of cost and development time.

• Project scheduling is useful for controlling and monitoring the project's progress; it depends on
resources and development time.
Software project planning

1. Size estimation
It is a difficult area of project planning.
Other engineering disciplines have the advantage that their products (a bridge, a road) can be
seen or touched; they are concrete. s/w is abstract, so it is difficult to
identify the size of the system.
SIZE METRICS: 2 units to measure size
1. LOC
2. Function count
1. LOC
• A simple metric: simply a count of the number of lines of code.
• LOC includes declarations and executable statements but excludes comments and blank lines.
• Should comments be included or not?
• There is a fundamental reason for including comments in a program: the quality of comments affects
maintenance cost.
• But inclusion of comments and blank lines in the count may encourage developers to introduce
many such lines during project development in order to create an illusion of high productivity.

DISADV:
• Measuring a system by number of LOC is rather like measuring a building by the number of bricks used in
its construction. Buildings are described in terms of facilities, the number and size of rooms, and their total
area in sq. feet or meters.
• LOC is language dependent.
• A major problem with the LOC measure is that it is not consistent, as some lines are more difficult to
code than others.
2. Function Count
• Alan Albrecht, while working for IBM, recognized the problem of size measurement in the
1970s and developed a technique called Function Point Analysis (FPA), which appeared to
be a solution to the size measurement problem.
• When dealing with customers, manufacturers talk in terms of the functions available (e.g. digital
tuning — function-count based), not in terms of components (e.g. integrated circuits — LOC based).
• So, FPA is a solution to the size measurement problem.
• It measures functionality from the user's point of view, i.e. on the basis of what the user requests and
receives in return.
• It deals with the functionality being delivered, not with LOC.
• Measuring size in this way has the advantage that size measured using FPA is independent of the technology used
to deliver the functions.
• Ex: 2 identical counting systems, one written in a 4GL and the other in assembler, would have the same
function count.
Function points
• A productivity measure, empirically justified
• Motivation: define and measure the amount of value (or
functionality) produced per unit time
• Principle: determine complexity of an application as its function point
• Size of project may vary depending upon function points

The principle of Albrecht’s function point analysis(FPA) is that a system
is decomposed into 5 functional units.
• Inputs : information entering the system
• Outputs : information leaving the system
• Inquiries : requests for instant access to information
• Internal logical files : information held within system
• External interface files : information held by other system that is
used by the system being analyzed.
5 functional units are divided in two categories:
(i) Data function types
1. ILF
2. EIF
• Internal Logical Files (ILF): a user-identifiable group of logically related data or
control information maintained within the system.
• External Interface Files (EIF): a user-identifiable group of logically related data or
control information referenced by the system, but maintained within another
system.
(ii) Transactional function types
1. EI
2. EO
3. EQ
• External Input (EI): An EI processes data or control information that comes from outside system.
• The EI is an elementary process, which is the smallest unit of activity that is meaningful to end user
in the business.
• those items provided by the user that describe distinct application-oriented data (such as file names
and menu selections)

• External Output (EO): An EO is an elementary process that generates data or
control information to be sent outside the system — items provided to the user
that present distinct application-oriented data (such as reports and messages).

• External Inquiry (EQ): An EQ is an elementary process made up of an
input-output combination that results in data retrieval.
Function point definition

• A weighted sum of 5 characteristic factors (shown here with the "average" weights):

Item                   Weight
Number of inputs          4
Number of outputs         5
Number of inquiries       4
Number of files          10
Number of interfaces      7
Special features
• Function point approach is independent of language, tools, or methodologies
used for implementation; i.e. they do not take into consideration programming
languages, dbms, processing hardware or any other db technology.
• Function points can be estimated from requirement specification or design
specification, thus making it possible to estimate development efforts in early
phases of development.
• Function points are directly linked to the statement of requirements; any change
of requirements can easily be followed by a re-estimate.
• Function points are based on the system user’s external view of the system,
non-technical users of software system have a better understanding of what
function points are measuring.
Counting Function Points
• Each of the 5 functional units is ranked according to its complexity:
1. LOW
2. AVERAGE
3. HIGH
Organizations that use FP methods develop criteria to determine whether a particular
entry is low, average or high.
After classifying each of the 5 functional units, the UFP (unadjusted function points) is
calculated using predefined weights for each function type, as given in the table.

• The weighting factors are identified for all functional units and multiplied with
the functional unit counts accordingly.
• Unadjusted Function Points (UFP):

UFP = Σ (i = 1 to 5) Σ (j = 1 to 3) Zij × Wij

where Zij is the count of functional unit i (row) at rank j (low, average, high; column)
and Wij is the corresponding weight.

The final number of function points is arrived at by multiplying the UFP by an adjustment factor determined by considering 14
aspects of processing complexity given in the following table:
• FP = UFP × CAF
• where CAF is the complexity adjustment factor, equal to [0.65 + 0.01 × ΣFi].
• The Fi (i = 1 to 14) are the degrees of influence and are based on responses to the
questions noted in the following table:
Technical Complexity Factors:

1. Data Communication
2. Distributed Data Processing
3. Performance Criteria
4. Heavily Utilized Hardware
5. High Transaction Rates
6. Online Data Entry
7. Online Updating
8. End-user Efficiency
9. Complex Computations
10. Reusability
11. Ease of Installation
12. Ease of Operation
13. Portability
14. Maintainability
Uses of FP
1. To monitor levels of productivity, for example, no. of function points achieved per
work hour expended.
2. Software development Cost estimation.

These metrics are controversial and are not universally accepted. There are
standards issued by the International Function Point Users Group (IFPUG, covering
the Albrecht method) and the United Kingdom Function Point Users Group (covering
the Mk II method). An ISO standard for the function point method is also
being developed.
• FP method continues to be refined.
Example: SafeHome Functionality

[Figure: SafeHome data-flow diagram. The User sends password, zone setting, zone inquiry,
sensor inquiry, panic button and (de)activate inputs to the SafeHome System; the system
returns messages, sensor status and (de)activate signals to the User; it tests and reads the
Sensors, stores System Config Data, and passes password, sensor and alarm-alert data to the
external Monitor and Response System.]
Example: SafeHome FP Calc

                                      weighting factor
measurement parameter       count   simple  avg.  complex    total
number of user inputs         3   ×    3      4      6     =    9
number of user outputs        2   ×    4      5      7     =    8
number of user inquiries      2   ×    3      4      6     =    6
number of files               1   ×    7     10     15     =    7
number of ext. interfaces     4   ×    5      7     10     =   20
count-total                                                    50

complexity multiplier: [0.65 + 0.01 × ΣFi] = [0.65 + 0.46] = 1.11
function points: 50 × 1.11 ≈ 56

Assumed degrees of influence (giving ΣFi = 46; the average rating for a factor is 3):
4: significant data communication
5: critical performance
2: moderately reusable
0: no multiple installations
remaining factors: average
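The calculation above can be sketched in code. This is an illustrative calculator, assuming the standard Albrecht weight table; the sample counts mirror the SafeHome-style example with every parameter rated simple.

```python
# Illustrative FP calculator using Albrecht's weight table
# (simple, average, complex) for the 5 functional units.
WEIGHTS = {
    "inputs":     (3, 4, 6),
    "outputs":    (4, 5, 7),
    "inquiries":  (3, 4, 6),
    "files":      (7, 10, 15),
    "interfaces": (5, 7, 10),
}

def function_points(counts, fi_total):
    """counts: {unit: (n_simple, n_avg, n_complex)};
    fi_total: sum of the 14 degree-of-influence ratings (0..5 each)."""
    ufp = sum(n * w
              for unit, ns in counts.items()
              for n, w in zip(ns, WEIGHTS[unit]))
    caf = 0.65 + 0.01 * fi_total      # complexity adjustment factor
    return ufp, ufp * caf

# SafeHome-style counts, all rated simple (illustrative)
counts = {"inputs": (3, 0, 0), "outputs": (2, 0, 0),
          "inquiries": (2, 0, 0), "files": (1, 0, 0),
          "interfaces": (4, 0, 0)}
ufp, fp = function_points(counts, fi_total=46)
print(ufp, round(fp))   # 50 56
```

With ΣFi = 46 the multiplier is 1.11, so 50 unadjusted points become about 56 function points.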
Cost Estimation
For any new s/w project, it is necessary to know:
1. how much will it cost to develop? and
2. how much development time will it take?

These estimates are needed before development is initiated. How is this done?
In many cases estimates are made using past experience as the only guide.
But most projects are different, so past experience alone is not sufficient.

A number of estimation techniques have been developed; they have the following attributes in common:
• Project scope must be established in advance
• Software metrics are used as a basis from which estimates are made
• The project is broken into small pieces which are estimated individually

To achieve reliable cost and schedule estimates, a number of options arise:


• Delay estimation until late in the project (not a practical option)
• Use simple decomposition techniques to generate project cost and schedule estimates
• Develop empirical models for estimation
• Acquire one or more automated estimation tools
Cost Estimation Models

• Concerned with the representation of the process to be estimated.

• In a static model, a unique variable (say, size) is taken as the key element for
calculating all others (say, cost, time). The form of the equation used is the same for all calculations.

• In a dynamic model, all variables are interdependent, and there is no basic
variable as in the static model.
Static single variable model:
• The model makes use of a single basic variable to calculate all others.
Static multivariable model:
• Several variables are needed to describe the s/w development process, and a selected
equation combines these variables to give an estimate of cost and time.
• Predictors are the variables, single or multiple, that are input to a model to predict the
behaviour of s/w development.
Generic formula for effort

PM = a · (KLOC)^b
Legend
• PM: person month
• KLOC: K lines of code
• a, b depend on the model
• b>1 (non-linear growth)

Static, single variable model
• Methods using this model use an equation to estimate desired values such as time and effort (cost).
• They all depend on the same variable used as predictor (say, size).
• The most common form of equation is:

C = a · L^b    ……………… eq (1)

• C = cost (effort expressed in any unit of manpower, e.g. person-months)
• L = size, given in number of LOC.
• a, b are constants derived from the historical data of the organization.
• Because a and b depend on the local development environment, these models are not transportable to different
organizations.
• The Software Engineering Laboratory (SEL) of the University of Maryland established the SEL model to estimate its own s/w
production.
• The model is a typical example of a static, single variable model (take L in KLOC).
Average manning: the average number of persons required per month.

• Productivity (w.r.t. EFFORT) = number of lines of source code (LOC or KLOC) produced per
person-year (or per person-month).
• PRODUCTIVITY = LOC (or KLOC) / EFFORT (PY or PM)
Static, multivariable model
• These models are based on eq (1) but also depend on several values representing various aspects of the s/w
development environment, for example: methods used, user participation, customer-oriented changes,
memory constraints, etc.
• A well-known example is the Walston-Felix (W-F) model, which provides a relationship between delivered
LOC (L, in thousands of lines) and effort E (in person-months):

E = 5.2 · L^0.91

and the duration of development (D, in months):

D = 4.1 · L^0.36

• The model also relates productivity (number of lines of source code per person-month) to a
productivity index, I.
• The productivity index uses 29 variables that were found to be highly correlated with productivity:

I = Σ (i = 1 to 29) Wi · Xi

• where Wi is the factor weight for the ith variable and Xi = (-1, 0, +1) depending on whether the variable
decreases, has no effect on, or increases productivity, respectively.
• The terms of the above equation are added up to give the productivity index.
• COCOMO is a hierarchy of s/w cost estimation models, which includes basic, intermediate and detailed sub-models.
• It has since evolved from COCOMO to COCOMO II.
According to Boehm, s/w cost estimation should be done through 3 stages:
1. Basic: computes effort and cost as a function of LOC.

2. Intermediate: computes effort and cost using a set of 15 cost drivers in addition to LOC.
• Includes subjective assessments of product, h/w, personnel and project attributes.

3. Detailed: incorporates the intermediate version with an assessment of the cost drivers' impact on each step (analysis, design, etc.).
The detailed model provides a set of phase-sensitive effort multipliers for each cost driver.
• The COCOMO model predicts the effort and duration of a project based on inputs relating to the size of the resulting system
and a number of cost drivers (both phase-sensitive effort multipliers and a 3-level product hierarchy) that affect productivity.
• Any s/w development project can be classified into one of the following 3 categories:
1. Organic (corresponds to simple applications, e.g. data processing programs)
2. Semi-detached (corresponds to utilities, e.g. compilers, linkers)
3. Embedded (corresponds to system programs, e.g. operating systems and real-time
system programs; system programs interact directly with the h/w and typically involve
meeting timing constraints and concurrent processing).
To classify a product into these 3 categories, Boehm considered not only the
characteristics of the product but also those of the development team and the development
environment.
1. Basic Model

• Used for relatively smaller projects. Team size: small


• Aims at establishing, in a quick and rough fashion, estimates for most small to medium sized s/w
projects.
• In Basic model, 3 modes of s/w development are considered in this model:

• In organic mode, small team of experienced developers develop s/w in familiar environment.
• In-house, less complex developments.
• There is proper interaction among team members and they coordinate their work.
• Project deals with developing a well understood application program, size of development team is
reasonably small and team members are experienced in developing similar type of projects.
• Size of s/w development in this mode ranges from small (a few KLOC) to medium (a few tens of KLOC).
• While in other 2 modes, size ranges from small to very large(few hundreds of KLOC).
• Semi detached mode is an intermediate mode b/w organic and embedded mode in terms of team size.
• It consist of mixture of experienced and inexperienced staff.
• Team members may have limited experience on related systems and may be unfamiliar
with some aspects of the system being developed.

• In embedded mode of s/w development, the problem to be solved is unique, and the project has tight
constraints which might be related to the target processor and its interface with associated h/w.
• The project environment is complex.
• Team members are highly skilled, but it is often hard to find experienced persons.
• Team members are familiar with the system under development, but the system is highly complex and innovative,
requiring high reliability, with real-time issues.
• Cost and schedule are tightly controlled.
The Basic COCOMO model gives an approximate estimate of project parameters and is given by the
following expressions:

EFFORT ESTIMATION (PERSON-MONTHS):    E = a · (KLOC)^b

DEVELOPMENT TIME ESTIMATION (MONTHS): D = c · (E)^d

where E is the total effort required to develop the s/w product, expressed in person-months;
KLOC is the estimated size of the s/w product, expressed in thousands of lines of code;
D is the estimated time to develop the s/w, in months; and
a, b, c, d are constants for each category of s/w product, given in the following table.
Boehm derived these expressions by examining historical data collected from large no. of actual projects.
8 equations: Basic COCOMO Model
• Estimation of development effort
For 3 classes of s/w products, formula for estimating EFFORT based on code size are given
below:

1. organic: E=2.4(KLOC)^1.05 PM
2. Semi-detached: E=3.0(KLOC)^1.12 PM
3. Embedded: E=3.6(KLOC)^1.20 PM

• Estimation of development time


For 3 classes of s/w products, formula for estimating DEVELOPMENT TIME based on code size
are given below:

1. organic: D=2.5(E)^0.38 Months


2. Semi-detached: D=2.5(E)^0.35 Months
3. Embedded: D=2.5(E)^0.32 Months
• With the basic model, the cost and development time of a s/w project can easily be
estimated once the size is estimated.
• The estimator must decide which mode is most appropriate.
• In total there are 8 equations: effort and development time for the 3 modes, plus average
manning and productivity.
So, the effort calculated for embedded mode is approx. 4 times the effort for organic mode,
and the effort calculated for semi-detached mode is about 2 times the effort for organic mode.
There are large differences in these values.

Development time, however, is approx. the same for all 3 modes, so selection of mode is very important.
As development time is approx. the same, the only varying parameter is the requirement of persons;
every mode has different manpower requirements.
For projects over 300 KLOC, embedded mode is the right choice.
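The mode comparison above can be checked numerically. A minimal sketch, using the Basic COCOMO coefficients from this section and an assumed size of 1000 KLOC:

```python
# Basic COCOMO effort and development time for the three modes,
# evaluated at an assumed size of 1000 KLOC (illustrative).
MODES = {  # mode: (a, b, c, d)
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

kloc = 1000
for mode, (a, b, c, d) in MODES.items():
    effort = a * kloc ** b        # person-months
    dev_time = c * effort ** d    # months
    print(f"{mode:13s} E = {effort:8.0f} PM, D = {dev_time:5.1f} months")
```

At this size the embedded/organic effort ratio comes out close to 4 and the semidetached/organic ratio close to 2, while the three development times are all in the mid-fifties of months, matching the observations above.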
Example:
• Let size of an organic type s/w product has been estimated to be 32000 LOC.
• Let average salary of s/w engineer be Rs 15000/month.
• Find effort required to develop s/w product and development time.

Sol:
Effort = 2.4(32 KLOC)^1.05 = 91 PM
Development time=2.5(91 Effort)^0.38 =14 months
Cost required to develop product= 14*15000 = Rs 2,10,000
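The worked example can be reproduced directly with the organic-mode equations:

```python
# Reproducing the worked example above: organic product of 32 KLOC.
effort = 2.4 * 32 ** 1.05        # person-months
dev_time = 2.5 * effort ** 0.38  # months
print(round(effort), round(dev_time))   # 91 14
```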
Intermediate COCOMO model
• The basic model allows a quick and rough estimate, but with a lack of accuracy.
• Used for medium sized projects. Team size: medium.
Moreover, the Basic COCOMO model assumes that effort and development time are functions of product size alone.
However, many other project parameters besides product size affect effort as well as development time.
So, for accurate estimates of effort and time, the effect of all relevant parameters must be taken into account.

The Intermediate COCOMO model recognizes this effect and refines the initial estimate obtained using the Basic COCOMO expressions
by using a set of 15 cost drivers based on various attributes of s/w development such as product reliability, database size, and execution
and storage constraints.
Cost drivers are critical features that have a direct impact on the project.
• Boehm introduced this additional set of 15 predictors, called cost drivers, in the intermediate model to take account of the s/w
development environment.
• Cost drivers are used to adjust the nominal cost of a project to the actual project environment, hence increasing the accuracy of the estimate.
Cost drivers: 4 categories
1. Product attributes 2. Computer attributes
3. Personnel attributes 4. project attributes
Typical cost driver categories
1. Product
• Characteristics of product that are considered include inherent complexity of product, reliability requirements.

2. Computer
• Characteristics of the computer that are considered include the required execution speed and time, space or storage constraints.

3. Personnel
• Attributes of development personnel that are considered include experience level of personnel, programming
capability, analysis capability etc.

4. Project(Development environment)
Captures development facilities available to developers.
An important parameter that’s considered is sophistication of automation (CASE) tools used for s/w development.
• e.g., Are modern programming practices/sophisticated software tools being used?
The 15 cost drivers, by category:
1. Product: RELY (required reliability), DATA (database size), CPLX (product complexity)
2. Computer: TIME (execution time constraint), STOR (storage constraint), VIRT (virtual machine volatility), TURN (turnaround time)
3. Personnel: ACAP (analyst capability), AEXP (applications experience), PCAP (programmer capability), VEXP (virtual machine experience), LEXP (language experience)
4. Project: MODP (modern programming practices), TOOL (use of s/w tools), SCED (required development schedule)

Cost drivers are critical features that have a direct impact on the project.
Each cost driver is rated for the given project environment.

The rating uses a scale — very low, low, nominal, high, very high, extra high — which describes to
what extent the cost driver applies to the project being estimated.
Steps for intermediate level:
steps:
Step 1: Nominal effort estimation
• Determine project’s development mode (organic, semidetached, embedded)
• Estimate size of project

Step 2: Determine effort multipliers

• There are 15 cost drivers within the model; each has a rating scale and a set of effort multipliers which modify the
step 1 estimate.

Step 3: Estimate development effort


• Compute estimated development effort=nominal effort* product of effort multipliers for 15 cost
driver attributes.
• Multiplying factors for all 15 cost drivers are multiplied to get
EAF(Effort Adjustment factor).
• Typical values for EAF range from 0.9 to 1.4.
• The Intermediate COCOMO equations take the form:

E = ai · (KLOC)^bi · EAF (person-months)
D = ci · (E)^di (months)

• The coefficients ai, bi, ci, di are given in the table (commonly quoted values):

mode           ai    bi    ci    di
organic        3.2  1.05  2.5  0.38
semidetached   3.0  1.12  2.5  0.35
embedded       2.8  1.20  2.5  0.32

• Example multipliers: staff of low capability but high experience (1.29, 0.95), versus
highly capable staff with little programming experience (0.82, 1.14).
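A minimal sketch of the adjustment step, using assumed intermediate organic coefficients (3.2, 1.05), a hypothetical 32 KLOC product, and the two staffing scenarios quoted above (all other drivers taken as nominal, 1.00):

```python
# Intermediate COCOMO sketch: nominal effort is scaled by the product of
# the cost-driver multipliers (EAF). Two hypothetical staffing scenarios.
kloc = 32
nominal = 3.2 * kloc ** 1.05   # assumed intermediate organic coefficients

eaf_a = 1.29 * 0.95   # low-capability but highly experienced staff
eaf_b = 0.82 * 1.14   # highly capable staff with little experience

print(round(nominal * eaf_a))   # adjusted effort, scenario A
print(round(nominal * eaf_b))   # adjusted effort, scenario B
```

Scenario A inflates the roughly 122 PM nominal effort to about 149 PM, while scenario B deflates it to about 114 PM, showing how the same nominal estimate moves with the project environment.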
Detailed COCOMO MODEL
• A large amount of work has been done by Boehm to capture all significant aspects of s/w development.
• Large sized projects. Cost drivers depend upon requirements, analysis, design, testing and maintenance.
• It offers means of processing all project characteristics to construct a software estimate.
• Team size: large
Detailed model introduces 2 more capabilities:
1. Phase-sensitive effort multipliers:
Some phases(design, programming, integration/test) are more affected than others by factors defined by cost
drivers. Detailed model provides set of phase sensitive effort multipliers for each cost driver.
This helps in determining manpower allocation for each phase of project.

2. Three-level product hierarchy:


3 products levels are defined.
1. Module
2. Subsystem
3. System levels
Ratings of the cost drivers are done at the appropriate level —
the level at which each is most susceptible to variation.
Development phases
s/w development is carried out in 4 successive phases:
1. Plans/Requirements
2. Product design
3. Programming
4. Integration/Test
DISTRIBUTION OF S/W LIFE CYCLE: There are 4 phases of s/w life cycle:
1. Plans/Requirements
• 1st phase of development cycle.
• Requirement is analysed, product plan is set up and full product specification is generated.
• This phase consumes 6% to 8% of effort and 10% to 40% of development time.
• These percentages depend not only on mode(organic, semi-detached or embedded), but also on size.

2. Product design
2nd phase of COCOMO development cycle is concerned with
determination of product architecture and specification of sub-system.
This phase consumes 16% to 18% of effort and 19% to 38% of development time.

3. Programming
3rd phase, divided into 2 sub-phases: detailed design and code/unit test.
This phase consumes 48% to 68% of effort and 24% to 64% of development time.

4. Integration/Test
This phase occurs before delivery.
This mainly consist of putting tested parts together and then testing the final product.
This phase consumes 16% to 34% of effort and 18% to 34% of development time.
• where S represents the size of the module in KLOC (thousands of LOC).

Multipliers have been developed that can be applied to:

• the total project effort, E, and
• the total project development time, D

in order to allocate effort and schedule components to each phase in the life cycle of a s/w development program.
• There are assumed to be 5 distinct life cycle phases, and
• the effort and schedule of each phase are assumed to be given as fractions of the overall effort and schedule.
For the constant values, refer to the intermediate COCOMO table.
• COCOMO model is most thoroughly documented model currently available.
• Easy to use.
• s/w managers can learn a lot about productivity, from very clear presentation of cost drivers.
• Data gathered from previous projects may help to determine value of constants of model(like: a,b,c,d).
• These values may vary from organisation to organisation

Issues:
• This model ignores s/w safety and security issues.
• Also ignores many h/w and customer related issues.
• It’s silent about involvement and responsiveness of customer.
• It does not give proper importance to the s/w requirements and specification phase, which has been identified as the most
sensitive phase of the s/w development life cycle.
Staffing level estimation
• Once effort required to develop s/w has been determined, next step is to find staffing requirement for s/w
project.

• Putnam worked on this problem; he extended the work of Norden, who had earlier investigated the staffing pattern of
R&D-type h/w projects.

• Norden found that the staffing pattern can be approximated by a Rayleigh distribution curve, but these results were
not meant to model the staffing pattern of s/w development projects.

• Later, Putnam studied the staffing of s/w projects and found that s/w development has characteristics
very similar to the other R&D projects studied by Norden.

• Putnam suggested that the optimal staff build-up on a project should follow the Rayleigh curve. Only a small number of
engineers are required at the beginning of a project to carry out planning and specification tasks.
As the project progresses and more detailed work is required, the number of engineers reaches a peak. After
implementation and unit testing, the number of project staff falls.

• Constant level of manpower through out project duration would lead to wastage of effort and increase time and
effort required to develop product. If constant no. of engineers are used over all phases of project, some phases
would be overstaffed and other phases would be understaffed causing inefficient use of manpower, leading to
schedule slippage and increase in cost.
Putnam resource allocation model
• Norden of IBM observed that RAYLEIGH curve can be used as an approximate model for range of h/w
development projects.
• Then, Putnam observed that
RAYLEIGH curve was close representation for s/w subsystem development.
• 150 projects were studied, first by Norden and then by Putnam; both researchers observed the same
tendency for the manpower curve to rise, peak, and then exponentially trail off as a function of time.

• The Rayleigh curve represents manpower — effort per unit time, measured in persons per
unit time — as a function of time, expressed in PY/YR (person-years per year):

m(t) = dy/dt = 2·K·a·t·e^(-a·t²)

• It is an indication of the number of engineers (staffing) at any particular time during
the duration of the project.

• K is the total project effort, the area under the curve (K tells the s/w development cost).

• Cumulative manpower y(t) is null at the start of the project and grows
monotonically towards the total effort K:

y(t) = K·[1 - e^(-a·t²)]
Integration step:

y(t) = 2Ka ∫₀ᵗ τ·e^(-aτ²) dτ

Put τ² = p ⇒ 2τ dτ = dp ⇒ τ dτ = dp/2. When τ = 0, p = 0; when τ = t, p = t². Then:

y = Ka ∫₀^(t²) e^(-ap) dp
  = Ka · [e^(-ap) / (-a)]₀^(t²)
  = -K·[e^(-a·t²) - 1]
  = K·[1 - e^(-a·t²)]
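The closed form above can be sanity-checked by numeric integration; an illustrative sketch with assumed values of K, a and the time horizon:

```python
# Numeric check that integrating the Rayleigh manpower curve
# m(t) = 2·K·a·t·e^(-a·t²) recovers y(t) = K·(1 - e^(-a·t²)).
import math

K, a, t_end = 100.0, 0.5, 2.0   # assumed: total effort, shape, horizon

def m(t):
    """Manpower (effort per unit time) at time t."""
    return 2 * K * a * t * math.exp(-a * t * t)

# trapezoidal integration of m over [0, t_end]
n = 100_000
h = t_end / n
area = sum((m(i * h) + m((i + 1) * h)) / 2 * h for i in range(n))

closed_form = K * (1 - math.exp(-a * t_end ** 2))
print(round(area, 3), round(closed_form, 3))   # both ≈ 86.466
```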
• The parameter a has dimensions of 1/(time²) and plays an important role in the determination of peak manpower.
• The larger the value of a, the earlier the peak time occurs and the steeper the personnel profile.
• By differentiating the manpower function with respect to time and setting the derivative to 0, the relationship
between the peak time td and a is found:

1 - 2at² = 0  ⇒  t² = 1/(2a);  putting t = td:  td = 1/√(2a),  i.e.  a = 1/(2·td²)

td is the peak development time (the time at which the curve attains its maximum
value) and is considered the time required to develop the s/w.

Estimate for development time:
The point td on the time scale corresponds very closely to the total
project development time.
Replacing t by td in the cumulative equation gives an estimate of the effort spent by development time:

y(td) = K·(1 - e^(-1/2)) ≈ 0.39·K
Double derivative step:

d[2Kat·e^(-at²)]/dt = 2Ka·[e^(-at²) + t·(-2at)·e^(-at²)]
                    = 2Ka·e^(-at²)·[1 - 2at²]
• The peak manning time is related to a: the larger the value of a, the earlier the peak time occurs
and the steeper the personnel profile.
• The number of people involved in the project at peak time is then easy to determine.

Summary: 4 equations of the Putnam model

1. Manpower (replacing a = 1/(2·td²) in the Rayleigh equation):
   m(t) = (K/td²)·t·e^(-t²/(2·td²))
   Peak manning: m0 = m(td) = K/(td·√e)

2. Average rate of software team build-up = m0/td

3. Estimate for development time (from the double derivative): td = 1/√(2a)

4. Cumulative manpower: y(t) = K·[1 - e^(-a·t²)]
Ex 4.12
A software project is planned to cost 95 PY over a period of 1 year and 9 months.
Calculate the peak manning and the average rate of s/w team build-up.

Sol: K = 95 PY, td = 1.75 years, e = 2.71828
Peak manning m0 = K/(td·√e) = 95/(1.75 × 1.6487) ≈ 33 persons
Average rate of team build-up = m0/td ≈ 33/1.75 ≈ 18.8 persons per year
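The example can be reproduced in code using the Putnam peak-manning relation m0 = K/(td·√e):

```python
# Putnam model: peak manning and average team build-up rate
# for K = 95 PY and td = 1.75 years (values from the example above).
import math

K, td = 95.0, 1.75
m0 = K / (td * math.sqrt(math.e))   # peak manning (persons)
buildup = m0 / td                   # average build-up (persons/year)

print(round(m0), round(buildup, 1))   # 33 18.8
```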


Difficulty Metric
• Differentiating the Norden/Rayleigh function with respect to time gives:

m'(t) = 2Ka·e^(-at²)·(1 - 2at²)

• At t = 0 the slope of the manpower curve is m'(0) = 2Ka = K/td², which Putnam defines as the
difficulty: D = K/td².
• This relationship shows that a project is more difficult to develop when the manpower
demand is high or when the time schedule is short (small td).

D small: easy project

D large: hard project

• Difficult projects will tend to have steeper demand for manpower at the
beginning for same time scale.
• After studying about 50 army s/w projects, Putnam observed that for systems
that are easy to develop, D tended to be small while for systems that are hard to
develop, D tended to be large.
Difficulty Metric
D (Difficulty) ∝ m0 (peak manning)
• Consider ex 4.13 and calculate difficulty and manpower build-up.
Productivity vs Difficulty

Productivity P = S/E, where S is the LOC produced and E is the cumulative manpower used from t = 0 to t = td (inception of the project to delivery time).

Using nonlinear regression on 50 army s/w projects, Putnam determined that productivity is
proportional to D^(-2/3), i.e. P ∝ (K/td²)^(-2/3). The constant of proportionality is replaced by a
coefficient C called the technology factor.

The technology factor C reflects the effect on productivity of various factors such as h/w constraints,
personnel experience levels, and the programming environment.

This leads to Putnam's software equation:

S = C · K^(1/3) · td^(4/3)

Putnam proposed values for C ranging from 610 to 57,314 (assuming K is measured in person-years
and td in years), depending on the assessment of the technology factor that applies to the project
under consideration. The above equation can be rearranged as:

K = S³ / (C³ · td⁴)

It is easy to use the size, cost and development time of past projects to determine the value of C, and
to use the revised value of C to model forthcoming projects.
Trade off b/w time vs cost
• In software projects, time cannot be freely exchanged against cost.
• Rearranging the software equation gives K = S^3/(C^3·td^4): compressing the development time td produces an increase in manpower cost.
• If the compression is excessive, the software development cost rises sharply and development becomes difficult (risk of becoming unmanageable).
• Putnam named this model SLIM (Software Life-cycle Management).
• The model is a combination of expertise and statistical computation, and can be used effectively for predictive purposes if we have a suitable algorithm to predict the value of C for a software project.
• Effort K varies inversely as the 4th power of development time: K = S^3/(C^3·td^4).
• Let the constant C = 5000 and the project size S = 500,000 LOC, so K = (S/C)^3/td^4 = 10^6/td^4.
• The table below shows how the required effort in person-years changes as the development time in years changes:

td (years) | K (person-years)
    5      |   1,600
    4      |   3,906
    3      |  12,346

• Reducing the development time from 5 years to 4 years would increase total effort and cost by a factor of about 2.4.
• Reducing it to 3 years would increase them by a factor of about 7.7.
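The factors in the table can be reproduced directly from K = S^3/(C^3·td^4); a quick check:

```python
def effort(S, C, td):
    """Putnam effort: K = S**3 / (C**3 * td**4), in person-years."""
    return S**3 / (C**3 * td**4)

S, C = 500_000, 5000        # figures from the example above
baseline = effort(S, C, 5)  # 1600 PY for a 5-year schedule
for td in (5, 4, 3):
    k = effort(S, C, td)
    print(f"td = {td} yr: K = {k:8.0f} PY (factor {k / baseline:.1f})")
```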
Development subcycles
• The project curve is represented by a Rayleigh function, which gives the manning level relative to time and reaches its peak at time td.
• The project curve is the sum of 2 curves:
1. Development curve
2. Test and validation curve
Both curves are sub-cycles of the project curve, and each can be modelled by a Rayleigh function.
Design (development) manning:
md(t) = 2·Kd·ad·t·e^(-ad·t^2)
Cumulative design manpower:
Ed(t) = Kd·(1 - e^(-ad·t^2))
Relation between the development time td and the development peak time tod:
td = √6 · tod
where Kd is the total manpower cost of the development sub-cycle and K is the total manpower cost of the generic cycle. The extra factor √6 appears when calculations are made at the sub-cycle level.
Software Risk Management
• We software developers are extreme optimists: we assume everything will go exactly as planned.
• The other view: it is not possible to predict what is going to happen.
• Software surprises are never good news; unexpected events can throw a project completely off track.
• Risk management is required to reduce this surprise factor.
• RM means dealing with a concern before it becomes a crisis.
• Most software development activities include RM as a key part of the planning process and expect the plan to highlight specific risk areas.
• Project planning is expected to quantify both the probability of failure and the consequences of failure, and to describe what will be done to reduce risk.
Software project
• Vague requirements
• Users not sure of their needs
• Huge number of people involved
• Large number of resources
• Long time span
• Requirement changes
• What is risk? Tomorrow's problems are today's risks. Risk is an uncertainty.
"Risk is a problem that may cause some loss or threaten the success of the project, but which has not happened yet."
• These potential problems might have an adverse effect on the cost, schedule or technical success of the project, on quality, or on project team morale.
Risk management: the process of identifying, addressing and eliminating these problems before they can damage the project.
• We need to differentiate risks, as potential problems, from the current problems of the project.
• For example: a staff shortage because we are unable to hire people with the right technical skills is a current problem, but the threat of our technical team being hired away by competitors is a risk.
• Current, real problems require prompt corrective action, while a risk can be dealt with in many ways; we might choose to avoid the risk entirely by changing the project approach, or even by cancelling the project.
• There are no magic solutions to any of these risk factors, so we need to rely on past experience and a strong knowledge of contemporary software engineering and management practices to control these risks.
What is Risk?
• Risks are potential problems that may affect successful completion of a software
project.
• Risks involve uncertainty and potential losses.
• Risk analysis and management are intended to help a software team understand
and manage uncertainty during development process.
• Risk management begins long before technical work starts; risks are identified and prioritized by importance.
• The team builds a plan to avoid risks if it can, or to minimize them if they turn into problems.

Problem vs risk
• A problem is an event that has already occurred.
• A risk is something unpredictable that may yet occur.
Typical Software Risks
Capers Jones has identified the top five risk factors that threaten projects in different application areas (accumulated from previous projects).
1. Dependencies of project on outside agencies or factors.
• Availability of trained, experienced persons
• Inter group dependencies
• Customer-Furnished items or information
• Internal & external subcontractor relationships

2. Requirement issues
Many projects face uncertainty around the product's requirements. This is tolerable in the early stages, but the threat to success increases if such issues are not resolved as the project progresses.
If we don't control requirements-related risk factors, we might build either the wrong product or the right product badly.
Either situation results in unpleasant surprises and unhappy customers.
• Lack of clear product vision
• Unprioritized requirements
• Lack of agreement on product requirements
• New market with uncertain needs
• Rapidly changing requirements
• Inadequate Impact analysis of requirements changes
3. Management Issues
Project managers usually write risk management plans, and most people do not wish to air their weaknesses in public.
• Inadequate planning
• Inadequate visibility into actual project status
• Unclear project ownership and decision making
• Staff personality conflicts
• Unrealistic expectations
• Poor communication

4. Lack of knowledge
The rapid rate of change of technologies, and the turnover of skilled staff, mean our project teams may not have the skills we need to be successful.
The key is to recognize risk areas early enough to take appropriate preventive actions, such as training, hiring consultants, and bringing the right people together on the project team.
• Inadequate training
• Poor understanding of methods, tools, and techniques
• Inadequate application domain experience
• New Technologies
• Ineffective, poorly documented or neglected processes
5. Other risk categories
Some of the critical areas are:
• Unavailability of adequate testing facilities
• Turnover of essential personnel
• Unachievable performance requirements
• Technical approaches that may not work
RM involves several important steps.
Risks we encounter in a project should be resolved so that we are able to deliver the desired product to the customer.

Risks must not affect the project in a big way.

The art of managing risks effectively, so that a win-win situation and a friendly relationship are established between the team and the customers, is called RM.
• We should assess the risks on a project so that we understand what may occur during the course of development or maintenance.
Risk Assessment
• The process of examining a project and identifying areas of potential risk.

Risk assessment consists of 3 activities:

1. RI: identify risks. Search for risks before they create a major problem.
Risk identification can be facilitated with the help of a checklist of common risk areas for software projects, or by examining the contents of an organizational database of previously identified risks.

2. RA: analyse them. Understand the nature and kind of each risk, and gather information about it.
Risk analysis involves examining how project outcomes might change with modification of risk input variables.
• Questions:
• What is causing the risk?
• How much will it affect the project?
• Are the risks dependent on each other?
• What is the probability that it will occur?

3. RP: assign priorities to each of them. Risk prioritization focuses attention on the most severe risks.
Rank the risks according to management priorities: by risk category, rated by likelihood and by possible cost or consequence.
• Risk exposure: the product of the probability of incurring a loss due to the risk and the potential magnitude of that loss.
This prioritization can be done in a quantitative way, by estimating the probability and the relative loss on a scale of 1 to 10.
The higher the exposure, the more aggressively the risk should be tackled.
Another way of handling risk is risk avoidance: do not do the risky things! We may avoid risks by not undertaking certain projects.
Risk Control
• The process of managing risks to achieve the desired outcomes.
Risk management planning produces a plan for dealing with each significant risk:
• Avoidance
• Protection
• Reduction
• Research
• Reserves
• Transfer

• It's useful to record the decisions in the plan, so that both customer and developer can review how problems are to be avoided, as well as how they are to be handled when they arise.
• Monitor the project as development progresses, periodically re-evaluating the risks, their probability and their likely impact.
• Risk resolution is the execution of the plans for dealing with each risk:
• Risk resolving
• Risk documentation
Risk management
• Uncertain requirements
• Unknown technology
• Infeasible design
• Cost and schedule uncertainty.

• To manage risks we need to establish strong bond b/w customers and team
members.
• s/w metrics and tools can be developed to manage risks.
• Risk need not necessarily be negative; it can be viewed as an opportunity to develop our projects in a better way.
Manpower buildup
• D depends on K and td. The partial derivatives of D = K/td^2 with respect to K and td are:
∂D/∂K = 1/td^2 and ∂D/∂td = -2K/td^3
• The quantity K/td^3 played an important role in explaining project behaviour; Putnam called it D0:
D0 = K/td^3
The value of D0 is related to the nature of the software developed in the following way:
D0 = 8: entirely new software with many interfaces and interactions with other systems
D0 = 15: new standalone systems
D0 = 27: software rebuilt from existing software
Putnam also discovered that D0 could vary slightly from one organization to another, depending on the average skill of the analysts, developers and management involved. D0 has a strong influence on the shape of the manpower distribution:
the larger D0 is, the steeper the manpower distribution and the faster the necessary manpower build-up. The quantity D0 is therefore called the manpower build-up.
• Ex: if modern programming practices are used, the initial estimate is scaled downward by multiplying it by a cost driver with a value less than 1.
• If there are stringent reliability requirements on the software product, the initial estimate is scaled upward.
• Boehm requires the project manager to rate these 15 different parameters for a particular project on a scale of 1 to 3.
• Then, depending on these ratings, he suggests appropriate cost-driver values to be multiplied with the initial estimate obtained using basic COCOMO.
Detailed COCOMO Model
• A major shortcoming of both the basic and the intermediate COCOMO models is that they consider a software product as a single homogeneous entity.
• But most large systems are made up of several smaller subsystems, and these subsystems may have widely different characteristics.
• E.g.: some subsystems may be considered organic, some semi-detached and some embedded.
• Moreover, the inherent development complexity of each subsystem may differ.
• For some subsystems the reliability requirements may be high, for some the development team might have no previous experience of similar development, and so on.
• The complete COCOMO model takes these differences in subsystem characteristics into account and estimates the effort and development time as the sum of the estimates for the individual subsystems.
• The cost of each subsystem is estimated separately. This approach reduces the margin of error in the final estimate.

The following development project can be considered an example application of detailed COCOMO.

• A distributed management information system (MIS) product for an organization having offices at several places across the country can have the following sub-components:
1. DB part (semi-detached)
2. GUI part (organic)
3. Communication part (embedded)
Of these, the communication part can be considered embedded software, the DB part semi-detached software, and the GUI part organic software.
The cost of these 3 components can be estimated separately and summed to give the overall cost of the system.
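A sketch of this subsystem-by-subsystem summation, using the standard basic COCOMO effort coefficients for each development mode (the KLOC figures are hypothetical; a full detailed-COCOMO estimate would also apply phase-wise cost drivers):

```python
# Basic COCOMO effort equation per mode: E = a * (KLOC)**b, in person-months
COEFFS = {
    "organic":       (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (3.6, 1.20),
}

# MIS sub-components with illustrative sizes in KLOC
subsystems = [
    ("DB part",            "semi-detached", 20),
    ("GUI part",           "organic",       10),
    ("Communication part", "embedded",      15),
]

total = 0.0
for name, mode, kloc in subsystems:
    a, b = COEFFS[mode]
    e = a * kloc**b
    total += e
    print(f"{name:20s} ({mode:13s}): {e:6.1f} PM")
print(f"{'Overall system':37s}: {total:6.1f} PM")
```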
