Page 1 of 61

BACHELOR OF SCIENCE INFORMATION TECHNOLOGY


ICS 2302: SOFTWARE ENGINEERING - COURSE OUTLINE

1. Software Specification: Definitions of SWE, Other Engineering Disciplines, Data &
Functional Specs, Data Gathering Methods
2. Software Development Methods: SW Process Activities, SW Process Development, SW
Process Improvement, Process Management, Framework for SW Process Change
3. Software Development Tools: SDLC; Development Models: Waterfall, Prototyping, Agile
Method, Rapid Application Development, Iterative & Spiral Models
4. Software Design: SW Construction, Design Concepts, Modularity, CASE Tools, Aspects of
Object-Oriented Programming
5. Software Project Management: Definition of Terms, Management Context, Process
Management, SW Measurements, Total Quality Management
6. Software Estimation: Concerns, Costs & Effort Estimation, Estimation Methods:
Empirical Models, SLIM, COCOMO
7. Software Metrics: Maintainability, Documentation, Complexity, Reliability, Availability,
Comprehensibility
8. Software Testing: Techniques, Debugging Environments, SW Verification & Validation, SW
Documentation, SW Configuration Management
9. Software Quality: Evaluation Problems, Software Standards, Software Maintenance,
Certification, Software Tool Support for Systems Engineering
10. Software Maintenance: Perfective, Adaptive, Preventive and Corrective Maintenance

ASSESSMENT: Assignments 2; CATs 3; Exam
DELIVERY: Lectures; Research / Readings; Group Discussions; Case Studies
LECTURER: Mr. Musika mmusika@yahoo.com 0722451338
COURSE TEXTS

1) Software Engineering; Addison Wesley
2) Information systems; Foulks Lynch ltd; 2001
3) Systems Theory, Analysis & Design; Riaga Alphonse; 2000
4) Software Quality Analysis & Guidelines for Success; Casper Jones, 1997
5) Internet

1. SOFTWARE ENGINEERING SPECIFICATIONS
1.1 SWE - DEFINITION
Software engineering is the application of a systematic, disciplined, quantifiable approach to the
development, operation, and maintenance of software; that is, the application of engineering to
software. (IEEE Computer Society)

Software engineering is the process of solving customers' problems by the systematic
development and evolution of large, high-quality software systems within cost, time, and other
constraints. (Lethbridge)

Software engineering can be described in terms of:
Analysis - breaking apart a problem into pieces
Synthesis - constructing a solution from available or new components
Methods & tools - that enable software projects to be built predictably within prescribed
schedules & budgets, meeting the customer's requirements of functionality & reliability.

Software engineering is concerned with the theories, methods and tools which are needed to
develop high quality, complex software in a cost effective way on a predictable schedule.

Software engineering: The disciplined application of engineering, scientific, and mathematical principles,
methods, and tools to the economical production of quality software.

Software is abstract and intangible. It is not constrained by materials, physical laws, or
manufacturing processes. There are no physical limitations on the potential of software.

A software product consists of developed programs and all associated documentation and
configuration data needed to make the programs operate correctly.
Software process: The set of activities, methods, and practices that are used in the production
and evolution of software.
Software process model: One specific embodiment of Software process architecture.
Software engineering process: The total set of software engineering activities needed to
transform a user's requirements into software.
Software process architecture: A framework within which project-specific software processes
are defined.
Levels of Software engineering process
1. Initial - depends on heroic efforts and skills of key individuals
2. Repeatable - success is predictable for projects in similar application domains.
3. Defined - documented software life cycle model is used for each project
4. Managed - data is collected and analyzed during the project to understand the software
life cycle model

5. Optimized - the measurement data used in a feedback mechanism to improve the
software life cycle model over the lifetime of the organization.
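The ordering of these five levels can be summarised in a small data structure. The sketch below is illustrative only; the level names come from the text, while the enum and the helper function are assumptions made for the example:

```python
from enum import IntEnum

class MaturityLevel(IntEnum):
    """The five software process maturity levels described above."""
    INITIAL = 1      # depends on heroic efforts of key individuals
    REPEATABLE = 2   # success predictable in similar application domains
    DEFINED = 3      # documented life cycle model used for each project
    MANAGED = 4      # data collected and analysed during the project
    OPTIMIZED = 5    # measurement data fed back to improve the model

def next_improvement_target(current: MaturityLevel) -> MaturityLevel:
    """Organisations move up one level at a time; level 5 is the ceiling."""
    return MaturityLevel(min(current + 1, MaturityLevel.OPTIMIZED))
```

An organisation assessed at the Initial level would therefore target the Repeatable level first, rather than jumping straight to Defined or Managed.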
The Nature of Software
Software is flexible- Software is an executable specification of a computation / application
Software is expressive- All computable functions may be expressed in software. Complex event
driven systems may be expressed in software.
Software is huge - An operating system may consist of millions of lines of code.
Software is complex - Software has little regularity or recognizable components found in other
complex systems and there are exponentially many paths through the code and changes in one
part of the code may have unintended consequences in other equally remote sections of the code.
Software is cheap - Manufacturing cost is zero, development cost is everything. Thus, the first
copy is the engineering prototype, the production prototype and the finished product.
Software is never finished - The changing requirements and ease of modification permits the
maintenance phase to dominate a software product's life cycle, i.e., the maintenance phase
consists of on going design and implementation cycles.
Software is easily modified - It is natural to use an iterative development process combining
requirements elicitation with design and implementation and use the emerging implementation to
uncover errors in design and in the requirements.
Software is communication - Communication with a machine but also communication between
the client, the software architect, the software engineer, and the coder. Software must be readable
in order to be evolvable.

But software engineering is radically different from other engineering disciplines, in that:
The end product is abstract, not a concrete object like a bridge
Costs are almost all human; materials are an ever-shrinking fraction
It is easy to fix bugs, but hard to test and validate
Software never wears out, but the hardware/OS platforms it runs on do
Variations in application domain are open-ended and may require extensive new non-
software, non-engineering knowledge for each project
Differences from traditional engineering disciplines:

Software development/engineering     The engineering method
Requirements elicitation             Formulate the problem
Requirements analysis                Analyze the problem
System design                        Search for solutions
Object design                        Decide on the appropriate solution
System implementation                Specify the solution

During analysis review, comparison of application domain model with client's reality may
result in changes to each.
During testing, the system is validated against the solution domain model, which might
change due to the introduction of new technologies.
During project management, managers compare their model of the development process
(schedule & budget) against reality (products and resources).

Software life cycle processes
Software life cycle: The period of time that begins when a software product is conceived and
ends when the software is no longer available for use. This cycle typically includes the
phases below.

Traditional software life cycle phases:
concept phase
software development life cycle
o requirements phase
o design phase
o implementation phase
o test phase
o installation and checkout phase
operation and maintenance phase
retirement phase
These phases may overlap and/or be performed iteratively.

Extreme programming phases:
Listening/Planning
Designing
Testing
Coding
These phases overlap and are performed iteratively.

Project phases for the development of any large system
1. Initial conception
2. Requirements analysis
3. Specification
4. Initial design
5. Verification and test of design
6. Redesign
7. Prototype manufacturing
8. Assembly and system-integration tests
9. Acceptance tests (validation of design)
10. Production (if several systems are required)
11. Field (operational) trial and debugging
12. Field maintenance
13. Design and installation of added features
14. Systems discard (death of system) or complete system redesign.
The cost of making changes rises in the later phases of this cycle. The reasons for the increases in cost are:
1. Testing becomes more complex and costly;
2. Documentation of changes becomes more widespread and costly;
3. Communication of problems and changes involves many people;
4. Repeating of previous tests (regression testing) becomes costly;
5. Once operation is begun, the development team is disbanded and reassigned.
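Reason 4 is commonly mitigated by automating the earlier tests so they can be re-run cheaply after every change. A minimal sketch, assuming a hypothetical `price_with_tax` function (not from the text) as the code under maintenance:

```python
def price_with_tax(net: float, rate: float = 0.16) -> float:
    """Hypothetical business function undergoing maintenance changes."""
    return round(net * (1 + rate), 2)

def run_regression_suite() -> bool:
    """Regression suite: re-run in full after every change to catch
    unintended side effects in previously working behaviour."""
    assert price_with_tax(100.0) == 116.0
    assert price_with_tax(0.0) == 0.0
    assert price_with_tax(100.0, rate=0.0) == 100.0
    return True
```

Because the suite is code, repeating the previous tests after each maintenance change costs machine time rather than analyst time.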


1.2 DATA GATHERING METHODS
Common methods are: Interviewing, Questionnaires, Observation, Repertory Grids, Joint
Application Design
1.2.1 INTERVIEWING
An interview is a systematic attempt to collect information from a person. Interviewing is the
most widely used technique in requirements engineering. Analysts interview future users of the
system individually to find out what the present system does and what changes are needed.
The information gathered during the interviews enables the analysts to design a new system that
will eliminate the shortcomings of the current one.
Interviewing is an important skill for systems analysts because success depends on an ability to
identify:
Work flows,
Factors that influence the operations of systems, and
The elements (documents, procedures, policies, etc.) that make up systems.
Without accurate and complete information:
the new system would probably not contain the necessary features to meet the needs of
the organization
Poorly performed interviews can affect the attitudes of the users and have a negative
effect on the entire project effort.
The interview process has five steps:
Preparing for the interview
Planning and scheduling the interview
Opening and closing the interview
Conducting the interview
Following up for clarification
Preparing for the Interview
Before undertaking an interview the analyst should have a good understanding of the
organization, its industry setting, and the project's scope and objectives. This involves reviewing:
organization reports
annual reports
long-range planning documents
statements of departmental goals
existing procedure manuals and
systems documentation

Analysts must understand common industry terms and be somewhat familiar with the business
problems of the industry. The following are errors commonly made by inexperienced analysts:
Sitting back in a chair with arms folded across the chest (This posture implies a lack of
openness to what is being said and may also indicate that the analyst is ill at ease.)
Looking at objects in the room or staring out the window instead of looking at the
interviewee. (Because this behavior suggests that the analyst would rather be somewhere
else doing other things, the interviewee will often cut the interview short.)
Taking excessive notes or visually reviewing notes. (An analyst who records rather than
listening may arouse interviewee concerns over what is being written.)
Sitting too far away or too close. (Sitting too far away often communicates that the
analyst is intimidated by the interviewee, while sitting too close may communicate an
inappropriate level of intimacy and make the interviewee uncomfortable.)
Acceptance cues should be used to convey understanding, not agreement.
Advantages:
Appropriate when the requirements engineer wants to explore an issue
Facilitates description of domain in a way that is easy for the interviewee
Goal is to establish rapport and to get a broad view
Imposes organization on the interview
Very goal-directed
Attempts to remove distortion arising from the interviewee's subjectivity
Allows better integration of material after the interview
Forces the interviewee to be systematic
The requirements engineer identifies gaps in the knowledge, which act as a basis for
questions
Purpose of session is clear to interviewee
Disadvantages:
Data acquired is often unrelated and difficult to integrate
Often exhibits lack of structure
Does not allow gathering of specific knowledge
Takes time and training to do well
Similar questions asked in future sessions may annoy interviewee
Needs more preparation by the requirements engineer
Needs to study background material extensively
1.2.2 JOINT APPLICATION DESIGN (JAD)
Joint Application Design (JAD) is a group interview conducted by an impartial leader. The
advantages of JAD are that the analysis phase of the life cycle is shortened and that the
specification document produced is better accepted by the users.

Requirements analysis should not start until there is a clear statement of the scope and objectives
of the project. Two approaches are common:
Interviewing users to build a physical model of the present system, abstracting this to a
logical model of the present system, and then incorporating desirable changes so as to
create the logical model of the future system.
Organizing a workshop with group methods such as JAD to create the logical model of
the new system directly, without the intermediate steps of the physical and logical models
of the present system.
The logical model of the new system becomes the requirements specification. Typically, it
contains:
An introduction containing the scope, goals, and objectives of the system.
Data flow diagrams depicting the workflow of the new system, including the clear
identification of business transactions.
A project data model (e.g. ER model).
Information needs: which types of questions will be asked of the system and which data
will support the answers.
Interfaces with other systems.
Sample report, screen, and form layouts, if required.
Operational information such as processing frequencies and schedules, response time
requirements, resource utilization constraints.
Measurable criteria for judging the implementation.
The final test of the system, the user acceptance test, should be conducted as a benchmark
against this document. Therefore, all those factors that will determine whether the users are
satisfied with the system should be documented and agreed upon. There may be a final step in
the requirements analysis--to have the users and top management sign off on the specification
document. There is the need to review the present system before designing a new one, because:
it gives the design team a good grasp of the business problems
it lets them understand the environment in which the new system will operate
it helps make sure everything needed is considered
it helps to relate the new design to the old one for ease of use
it enhances cooperation between project team and users

2. SOFTWARE DEVELOPMENT METHODS
SOFTWARE DEVELOPMENT ACTIVITIES
The software process is a set of activities and associated results which produce a software
product. The software process is the set of tools, methods, and practices we use to produce a
software product.
Requirements elicitation - Client and developers define the purpose of the system, delivering a
description of the system in terms of actors and use cases.
Requirements analysis - Developers transform use cases into an object model that completely
describes the system, delivering a model of the system that is correct, complete, consistent,
unambiguous, realistic, and verifiable.
System design - Developers define the design goals of the project and the decomposition of the
system into smaller subsystems that can be realized by individual teams, delivering a clear
description of strategies, subsystem decomposition, and deployment diagram.
Object design - Developers define custom objects (interfaces, off-the-shelf components, model
restructuring to attain design goals) to bridge the gap between the analysis model and the
hardware/software platform defined during system design, delivering a detailed object model
annotated with constraints and precise descriptions for each element.
Implementation - Developers translate the object model into source code, delivering a complete
set of source code files.
Software specification - software functionality and constraints
Software development - production of the software to meet the specifications.
Software validation - ensure software does what the customer wants.
Software evolution - ensure the software continues to meet the customer's needs.

The characteristics of an effective software process
It must be predictable.
Cost estimates and schedule commitments must be met with reasonable consistency, and the
resulting products should generally meet users' functional and quality expectations.
The objectives of software process management are to produce products according to plan
while simultaneously improving the organization's capability to produce better products.
The basic principles are those of statistical process control, which have been used successfully
in many fields.
A process is said to be stable or under statistical control if its future performance is predictable
within established statistical limits.

When a process is under statistical control, repeating the work in roughly the same way will
produce roughly the same result. The basic principle behind statistical control is measurement.
The numbers (measures) must properly represent the process being controlled, and they must
be sufficiently well defined and verified to provide a reliable basis for action. Also, the mere act
of measuring human processes changes them. Measurements are both expensive and
disruptive; overzealous measuring can degrade the process we are trying to improve.
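The idea of established statistical limits can be made concrete with a short computation. A sketch assuming the common three-sigma control limits; the defect counts are invented for illustration:

```python
from statistics import mean, pstdev

def control_limits(samples):
    """Return (lower, upper) three-sigma control limits derived
    from past performance of the process measure."""
    m, s = mean(samples), pstdev(samples)
    return m - 3 * s, m + 3 * s

def is_stable(samples, new_value):
    """A process is under statistical control if new observations
    fall within the limits established from past performance."""
    lower, upper = control_limits(samples)
    return lower <= new_value <= upper

# Defects found per build in past projects (illustrative numbers only)
history = [12, 15, 11, 14, 13, 12, 16, 13]
```

A new build with 13 defects falls inside the limits (the process is behaving predictably), while one with 40 defects signals that the process is out of control and needs investigation.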
Software Process Development
The entire software task must be treated as a process that can be controlled, measured, and
improved. A process is a set of tasks that, when properly performed, produces the desired result.
A fully effective software process must consider the relationships of all the required tasks, the
tools and methods used, and the skill, training, and motivation of the people involved. To
improve their software capabilities, organizations must take six steps:
Understand the current status of their development process.
Develop a vision of the desired process.
Establish a list of required process improvement actions in order of priority.
Produce a plan to accomplish the required actions.
Commit the resources to execute the plan.
Start over at step 1.
The framework addresses the six improvement steps by characterizing the software process into
one of five maturity levels. Software professionals and their managers can identify areas where
improvement actions will be most fruitful by establishing their organization's position in this
structure.
The Five Levels of Software Process Maturity

(1) Initial - Characteristics: Chaotic--unpredictable cost, schedule, and quality performance.
Needed Actions: Planning (size and cost estimates and schedules), performance tracking, change
control, commitment management, Quality Assurance.
(2) Repeatable- Characteristics: Intuitive--cost and quality highly variable, reasonable control
of schedules, informal and ad hoc process methods and procedures.
Needed Actions: Develop process standards and definitions, assign process resources, establish
methods (requirements, design, inspection, and test).
(3) Defined - Characteristics: Qualitative--reliable costs and schedules, improving but
unpredictable quality performance.

Needed Actions: Establish process measurements and quantitative quality goals, plans,
measurements, and tracking
(4) Managed - Characteristics: Quantitative--reasonable statistical control over product quality.
Needed Actions: Quantitative productivity plans and tracking, instrumented process
environment, economically justified technology investments
(5) Optimizing - Characteristics: Quantitative basis for continued capital investment in process
automation and improvement.
Needed Actions: Continued emphasis on process measurement and process methods for error
prevention.
Software Process Improvement
The software maturity framework has been found to reasonably represent the status and key
problem areas of many software organizations. As the practice of software evolves, the
framework provides a broader perspective on the improvement needs of software organizations.
Software process improvement actions are grouped as follows:

1. Organization - This deals with management leadership of software organizations typically
exercised through policies, resource allocation, communications, and training.
Policies state the basic principles of organizational behavior.
The organization structure establishes basic responsibilities and allocates resources.
Management oversight concerns management awareness of organizational performance.
Communication deals with the means to ensure available knowledge to support timely action.
Training ensures that the software professionals are aware of and capable of using the pertinent
standards, procedures, methods, and tools.

2. Project management - This deals with the normal activities of planning, tracking, project
control, and subcontracting.
Planning includes the preparation of plans and the operation of the planning system.
The tracking and review systems ensure that appropriate activities are tracked against the plan
and that deviations are reported to management.
Project control provides for control and protection of the critical elements of the software project
and its process. Subcontracting concerns the means used to ensure that subcontracted resources
perform in accordance with established policies, procedures, and standards.

3. Process management - With improving organizational maturity, a process infrastructure is
established to uniformly support and guide the projects' work. This involves process definition,
process execution, data gathering and analysis, and process control.
The process definition provides a standardized framework for task implementation, evaluation,
and improvement. Process execution defines the methods and techniques used to produce quality
products.
Analysis deals with the measurements made of software products and processes and the uses
made of this data. Process control concerns the establishment of mechanisms to assure the

performance of the defined process and process monitoring and adjustment where improvements
are needed.

4. Technology - This topic deals with technology insertion and environments. Technology
insertion covers the means to identify and install needed technology, while environments
include the tools and facilities that support the management and execution of the defined
process.
Process Management
The critical steps are:
1. Install a management review system. This involves senior management and ensures that
plans are produced, approved, and tracked in an orderly way.
2. Insist on a comprehensive development plan. This must include code size estimates,
resource estimates and a schedule.
3. Set up a Software Configuration Management (SCM) function. This is crucial to
maintaining control and must be in place and operational before completion of detailed
design. These functions should then be expanded to include requirements and design.
4. Ensure that a Software Quality Assurance (SQA) organization is established and
sufficiently staffed to review a reasonable sample of the work products. Until there is
evidence that the work is done according to plan, this is essentially a 100% review. With
successful experience the sampling percentage can be reduced.
5. Establish rate charts for tracking the plan. Typical milestones are: requirements
completed and approved, the operational concept reviewed and approved, high-level
design completed and reviewed, percent of modules with detailed design completed,
percent of modules through code and unit test. Similar rate charts can be established for
each phase of test.
MANAGING SYSTEM DEVELOPMENT PROJECT

Principal causes of information systems project failure.

Insufficient or improper user participation in the systems development process
Lack of management support
High levels of complexity and risk in the systems development process
Poor management of the implementation process
Some systems require extensive organizational change

Effective system project management focuses on 4Ps i.e.
People: recruiting, selection, performance management, training, compensation,
career development, organization and work design, and team/culture development
Product: the product objectives and scope should be established first, alternative solutions
considered, and constraints defined; once the product is defined, it is possible to
estimate cost and effectiveness and break the project down into manageable schedules
Process: the framework activities from which a comprehensive plan for system
development can be established

A number of framework activities, made up of tasks, milestones, and work products,
to which the project team adds quality assurance points.
Umbrella activities such as quality assurance, system configuration management,
and measurement overlay the process model.
Project: a planned and controlled undertaking to achieve a goal or attain a solution.
A project is a human activity that achieves a clear objective against a timescale.
Characteristics:
i) Specific objectives to be completed within certain specifications
ii) Has a defined time and task schedule
iii) Has a defined budget
iv) Requires resources such as people
v) There is no practice or rehearsal
vi) Change: a project brings about change

SOFTWARE PROCESS CHANGE
The six requirements for software process change are:
1. Involve management - Significant change requires new priorities, additional resources, and
consistent support. Senior managers will not provide such backing until they are convinced that
the improvement program makes sense.
2. Get technical support - This is best obtained through the technical opinion leaders (those
whose opinions are widely respected). When they perceive that a proposal addresses their key
concerns, they will generally convince the others. However, when the technical community is
directed to implement something they don't believe in, it is much more likely to fail.
3. Involve all management levels - While the senior managers provide the resources and the
technical professionals do the work, the middle managers make the daily decisions on what is
done. When they don't support the plan, their priorities will not be adjusted, and progress will be
painfully slow or nonexistent.
4. Establish an aggressive strategy and a conservative plan - While senior management will be
attracted by an aggressive strategy, the middle managers will insist on a plan that they know how
to implement. It is thus essential to be both aggressive and realistic. The strategy must be visible,
but the plan must provide frequent achievable steps toward the strategic goals.
5. Stay aware of the current situation - It is essential to stay in touch with current problems.
Issues change, and elegant solutions to last year's problems may no longer be pertinent. While
important changes take time, the plan must keep pace with current needs.
6. Keep progress visible - People easily become discouraged if they don't see frequent evidence
of progress. Advertise success, periodically reward the key contributors, and maintain
enthusiasm and excitement.


SOFTWARE DEVELOPMENT TOOLS
Any system development is represented by a model called the System Development Life Cycle
(SDLC), which contains five stages that flow from one to the next in order (hence the 'waterfall'
imagery). Alternative development models include the Waterfall model, Prototyping, Iterative
Development, and the Spiral model of the SDLC.
THE CLASSIC LIFE CYCLE OR WATERFALL MODEL
The following activities occur during the life cycle paradigm:
System Engineering. Top level customer requirements are identified, functional and
system interfaces are defined and the relation of this software to overall business function is
established.
Analysis. Detailed requirements necessary to define the function and performance of the
software are defined. The information domain for the system is analyzed to identify data
flow characteristics, key data objects and overall data structure.
Design. Detailed requirements are translated into a series of system representations that
depict how the software will be constructed.
The design encompasses a description of program structure, data structure and detailed
procedural descriptions of the software.
Code. Design must be translated into a machine executable form.
The coding step accomplishes this translation through the use of conventional
programming languages (e.g., C, Ada, Pascal) or fourth generation languages.
Testing. Testing is a multi-step activity that serves to verify that each software component
properly performs its required function and validates that the system as a whole meets
overall customer requirements.
Maintenance. Maintenance is the re-application of each of the preceding activities for
existing software. The re-application may be required to correct an error in the original
software, to adapt the software to changes in its external environment (e.g., new hardware,
operating system), or to provide enhancement to function or performance requested by the
customer.
This is the most widely used approach to software engineering. It leads to systematic, rational
software development, but like any generic model, the life cycle paradigm can be problematic
for the following reasons:
1. The rigid sequential flow of the model is rarely encountered in real life. Iteration can
occur causing the sequence of steps to become muddled.
2. It is often difficult for the customer to provide a detailed specification of what is required
early in the process. Yet this model requires a definite specification as a necessary
building block for subsequent steps.

3. Much time can pass before any operational elements of the system are available for
customer evaluation. If a major error in implementation is made, it may not be uncovered
until much later.
Do these potential problems mean that the life cycle paradigm should be avoided? Absolutely
not! They do mean, however, that the application of this software engineering paradigm must be
carefully managed to ensure successful results.
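The strict phase ordering of the model can be sketched as a pipeline in which each phase consumes the deliverable of the previous one. This is a toy illustration, not a real development process; the phase names follow the model above and the deliverables are placeholder strings:

```python
def waterfall(requirements: str) -> str:
    """Each phase runs to completion before the next begins; note there
    is no backward arrow, which is the model's main weakness."""
    phases = ["analysis", "design", "code", "test", "maintenance"]
    artifact = requirements
    for phase in phases:
        artifact = f"{phase}({artifact})"  # deliverable handed to next phase
    return artifact
```

The nesting of the result makes the model's problem visible: an error introduced during analysis is wrapped inside every later deliverable and may not surface until testing or maintenance.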
PROTOTYPING
Prototyping moves the developer and customer toward a "quick" implementation. Prototyping
begins with requirements gathering. Meetings between developer and customer are conducted to
determine overall system objectives and functional and performance requirements. The developer
then applies a set of tools to develop a quick design and build a working model (the "prototype")
of some element(s) of the system. The customer or user "test drives" the prototype, evaluating its
function and recommending changes to better meet customer needs. Iteration occurs as this process
is repeated, and an acceptable model is derived. The developer then moves to "productize" the
prototype by applying many of the steps described for the classic life cycle. In object-oriented
programming, with a library of reusable objects (data structures and associated procedures), the
software engineer can rapidly create prototypes and production programs.
The benefits of prototyping are:
1. A working model is provided to the customer/user early in the process, enabling early
assessment and bolstering confidence,
2. The developer gains experience and insight by building the model, thereby resulting in a
more solid implementation of "the real thing"
3. The prototype serves to clarify otherwise vague requirements, reducing ambiguity and
improving communication between developer and user.
But prototyping also has a set of inherent problems:
1. The user sees what appears to be a fully working system (in actuality, it is a partially
working model) and believes that the prototype (a model) can be easily transformed into
a production system. This is rarely the case. Yet many users have pressured developers
into releasing prototypes for production use that have been unreliable and, worse,
virtually unmaintainable.
2. The developer often makes technical compromises to build a "quick and dirty" model.
Sometimes these compromises are propagated into the production system, resulting in
implementation and maintenance problems.
3. Prototyping is applicable only to a limited class of problems. In general, a prototype is
valuable when heavy human-machine interaction occurs, when complex output is to be
produced or when new or untested algorithms are to be applied. It is far less beneficial for
large, batch-oriented processing or embedded process control applications.
ITERATIVE DEVELOPMENT
The problems with the Waterfall Model created a demand for a new method of developing
systems which could provide faster results, require less up-front information, and offer greater
flexibility. With Iterative Development, the project is divided into small parts. This allows the
development team to demonstrate results earlier in the process and obtain valuable feedback
from system users.
Often, each iteration is actually a mini-Waterfall process, with the feedback from one phase
providing vital information for the design of the next phase. In a variation of this model, the
software products produced at the end of each step (or series of steps) can go into production
immediately as incremental releases of functions that will be included in the developed system.
The process comprises the following steps:

Requirements Definition/Collection - Similar to the Conceptualization phase of the
Waterfall Model, but not as comprehensive; the information collected is usually limited
to a subset of the complete system requirements.

Design - Once the initial layer of requirements information is collected, or new
information is gathered, it is rapidly integrated into a new or existing design so that it may
be folded into the prototype.

Prototype Creation/Modification - The information from the design is rapidly rolled into
a prototype. This may mean the creation/modification of paper information, new coding,
or modifications to existing coding.

Assessment - The prototype is presented to the customer for review. Comments and
suggestions are collected from the customer.

Prototype Refinement - Information collected from the customer is digested and the
prototype is refined. The developer revises the prototype to make it more effective and
efficient.

System Implementation - In most cases, the system is rewritten once requirements are
understood. Sometimes, the iterative process eventually produces a working system that
can be the cornerstone for the fully functional system.

Problems/Challenges associated with the Iterative Model
While the Iterative Model addresses many of the problems associated with the Waterfall Model, it
does present new challenges.
The user community needs to be actively involved throughout the project. While this
involvement is a positive for the project, it is demanding on staff time and can
add delays to the project.
Communication and coordination skills take center stage in project development.
Informal requests for improvement after each phase may lead to confusion -- a controlled
mechanism for handling substantive requests needs to be developed.
The Iterative Model can lead to scope creep, since user feedback following each phase
may lead to increased customer demands. As users see the system develop, they may
realize the potential of other system capabilities which would enhance their work.
THE SPIRAL MODEL
The term spiral is used to describe the process that is followed as the development of the system
takes place. With each iteration around the spiral (beginning at the center and working outward),
progressively more complete versions of the system are built.
Risk assessment is included as a step in the development process as a means of evaluating each
version of the system to determine whether or not development should continue. If the customer
decides that any identified risks are too great, the project may be halted.
For example, if a substantial increase in cost or project completion time is identified during one
phase of risk assessment, the customer or the developer may decide that it does not make sense to
continue with the project, since the increased cost or lengthened timeframe may make continuation
of the project impractical or unfeasible.
The Spiral Model is made up of the following steps:

Project Objectives - Similar to the system conception phase of the Waterfall Model.
Objectives are determined, possible obstacles are identified and alternative approaches are
weighed.

Risk Assessment - Possible alternatives are examined by the developer, and associated
risks/problems are identified. Resolutions of the risks are evaluated and weighed in the
consideration of project continuation. Sometimes prototyping is used to clarify needs.

Engineering & Production - Detailed requirements are determined and the software
piece is developed.

Planning and Management - The customer is given an opportunity to analyze the results
of the version created in the Engineering step and to offer feedback to the developer.
Problems/Challenges associated with the Spiral Model
Due to the relative newness of the Spiral Model, it is difficult to assess its strengths and
weaknesses. However, the risk assessment component of the Spiral Model provides both
developers and customers with a measuring tool that earlier process models do not have.
The measurement of risk is a feature that occurs every day in real-life situations, but
unfortunately not as often in the system development industry.
The practical nature of this tool helps to make the Spiral Model a more realistic process
model than some of its predecessors.

REQUIREMENTS ENGINEERING
Requirements: the features that the system must have, or the constraints that it must satisfy, to be
accepted by the client - a model of the system that aims to be correct, complete, consistent, and
verifiable.
Requirements engineering (in the context of systems engineering) is concerned with the
acquisition, analysis, specification, validation, and management of software requirements.
Software requirements express the needs and constraints placed upon a software product
that contribute to the satisfaction of some real-world application; alternatively, the properties that
must be exhibited in order to solve some real-world problem.
Requirements engineering objectives: to elicit the information necessary
to discover how to partition the system, and
to identify which requirements should be allocated to which components.
Requirements elicitation: to systematically extract and inventory the requirements of the system
from a combination of human stakeholders, the system's environment, feasibility studies, market
analyses, business plans, analyses of competing products and domain knowledge.
Stakeholders include Users, Customers, Market analysts, Regulators, System developers
Required skills - Technical skills, Ability to acquire an understanding of application domain,
Interpersonal skills
Issues
For many systems:
Clients do not know what is technically feasible.
Clients may change their minds once they see the possibilities more clearly.
Knowledge may be tacit (clients may not be able to accurately describe what they do).
Discoveries made during the later phases may force retrofitting requirements.
There are many social, political, legal, financial, and/or psychological factors.
It will be impossible to say with certainty what the requirements are, let alone whether or
not they are met, until the system is actually in place and running.
CHARACTERISTICS OF REQUIREMENTS
Emergent - they gradually emerge from interactions between requirements engineers and the
client organization
Open - they are always subject to change because organization and their contexts continually
change
Local - requirements must be interpreted in the context of a particular organization at a particular
time
Contingent - because they are an evolving outcome of on-going processes and build on prior
interactions and documents.
Situated - because they can only be understood in relation to the particular, concrete situation in
which it actually occurs.
Vague- because they are only elaborated to the degree that it is useful to do so; the rest is left
grounded in tacit knowledge.
Embodied - because they are tied to bodies in particular physical situations, so that the particular
way that bodies are embedded in a situation may be essential to some interpretations.
Retrospective hypothesis: Only post hoc explanations for situated events can attain relative
stability and independence from context. Thus, it is impossible to completely formalize
requirements because they cannot be fully separated from their social context.
System requirements are a complex combination of requirements from different people at
different levels of an organization and from the environment in which the system must operate.
REQUIREMENTS ELICITATION
Requirements elicitation: to systematically extract and inventory the requirements of the system
from a combination of human stakeholders, the system's environment, feasibility studies, market
analyses, business plans, analyses of competing products and domain knowledge. Corresponds to
data collection
Activities
Identify stakeholders
Establish relationships between the development team and the customer
Identify actors, scenarios and use cases; refine use cases, the relationships among use cases,
the participating objects and the non-functional requirements

Resource requirements
Goals - the high-level objectives of the system. The value (relative priority) and cost of
goals must be assessed.
Domain knowledge
System stakeholders
The operational environment
The organizational environment: business processes conditioned by the structure, culture, and
internal politics.

Elicitation techniques
Users may have difficulty describing their tasks, may leave important information unstated, or
may be unwilling or unable to cooperate. Corresponds to the tools and methods for data
collection
Interviews between the requirements team and the client organization:
structured/unstructured interviews, scenarios, prototypes and paper mockups (hastily built
software that exhibits the key functionality of the target product), 4GLs and SQL, interpreted
languages (Smalltalk, Prolog, Lisp, Java, Unix shell, Hypertext, ...).

TECHNICAL DOCUMENTATION REVIEW
Requirements Analysis - the process of analyzing requirements to detect and resolve conflicts
between requirements, and to discover the bounds of the system and how it must interact with its
environment. During this process system requirements are elaborated into software requirements.
Techniques used include:
Market analysis
Competitive system assessment
Reverse engineering
Simulations
Benchmarking processes and systems
Other considerations include
User Stories - written by the customers describing things that the system needs to do for them.
Scenario - an example of system use in terms of a series of interactions between the user and the
system.
Use case- an abstraction that describes a class of scenarios.
Metrics - how rapidly the client's needs can be determined; rate of convergence - how frequently
the requirements change during the requirements phase.

Product and process parameters
Product parameters are requirements on the system, which include:
Functional requirements - capabilities which are domain-specific.
Non-functional requirements - also referred to as constraints or quality requirements; these are
emergent properties of the interoperation of system components and include performance,
maintainability, operational (installation, administration, modifiability), safety, reliability and
electromagnetic compatibility requirements.

Process parameters - are constraints on the development of the system where the requirements
must be stated clearly, unambiguously, appropriately and quantitatively; avoiding vague and
unverifiable requirements that depend on subjective judgment for interpretation.
Emergent requirements - depend on how all the system components inter-operate and are
dependent on the system architecture.
Quality requirements - should be measurable and prioritized.
Requirements analysis concerns classification, conceptual modeling, architectural design,
requirements allocation and negotiation.
Requirements specification concerns the definition of the software requirements specification
(SRS) document: its structure, quality and standards.
Requirements Validation involves Conducting of requirements reviews, Prototyping, Model
validation and Acceptance tests
The tasks involved in the requirements engineering process are iterative and consist of the
following:
Establish the aims of the project - User needs, Domain information, Standards
Repeat
1. Elicit the requirements
o Informal statement of requirements
2. Requirements analysis and negotiate trade-offs (budgetary, technical, regulatory, and
other)
o Agreed requirements
3. Requirements specification
o Draft requirements document
4. Requirements validation
o Requirements document and validation report
Until acceptance of requirements document and validation report
DELIVERABLES
Systems Requirements
(Also known as requirements definition document, user requirements document, or concept of
operations document - IEEE std 1362-1998) should have the following:-
Purpose - Defining high-level system requirements from the stakeholders' perspective;
and Serve as a basis for validating the system requirements.
Readership/Language - Representatives of the system stakeholders; written in terms of
the customer's domain.
Contents - Complete list of system requirements having Background information such
as: overall objectives for the system, description of the target environment, constraints
and non-functional requirements. The contents also include: conceptual models
illustrating the system context, usage scenarios, principal domain entities, data,
information and workflows.
Structure/Style - understandable and consistent.
Software Requirements
(The software requirements specification (SRS) - see IEEE Std 1362-1998 and IEEE Std 830-
1998 - should have the following:)
Purpose / content - detailed requirements derived from the system requirements that
may form the basis of a contract between developer and customer.
Readership/Language - readers have some knowledge of software engineering concepts
Structure style that minimizes effort needed to read and locate information



4. SOFTWARE DESIGN
SOFTWARE CONSTRUCTION
Software construction is a fundamental act of software engineering involving construction of
working meaningful software through a combination of coding, validation, and testing (unit
testing) by a programmer.
Design - Description of the internal structure and organization of a system with global solutions
expressed as a set of smaller solutions.
Construction process concerns finding a complete and executable solution to a problem
Tools - compilers, version control systems, debuggers, code generators, specialized editors, tools
for path and coverage analysis, test scaffolding and documentation tools
Distinction between design and construction:
1. Scale or size:
o very small projects are "construction sized" and neither need nor require an explicit
design phase
o very large projects may require an interactive relationship between design and
construction
2. As similar methods are used in design and construction, design is often as much
construction as it is design.
3. Design will always include some degree of guessing or approximations that turn out to be
wrong and will require corrective actions during construction.
Styles of construction
- Concern the use of linguistic construction methods, formal and informal, and aim mainly at a
reduction in complexity
- Complexity is usually reduced by means of:
Design patterns
Software templates
Functions, procedures, code blocks
Objects and data structures
Encapsulation and abstract data types
Component libraries and frameworks


DESIGN CONCEPTS
The following fundamental design concepts are essential to a maintainable system:
Modularity - helps to isolate functional elements of the system. One module may be debugged,
improved, or extended with minimal personnel interaction or system discontinuity.
Specification - The key to production success of any modular construct is a rigid specification of
the interfaces; the specification, as a side benefit, aids in the maintenance task by supplying the
documentation necessary to train, understand, and provide maintenance.
Generality - is essential to satisfy the requirement for extensibility. From this viewpoint,
specification should encompass from the innermost primitive functions outward to the
generalized functions such as a general file management system.
Availability - Software components (routines), to be widely applicable to different machines and
users should be available in families arranged according to precision, robustness, generality and
time-space performance. Software production in the large would be enormously helped by the
availability of spectra of high quality routines
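These concepts can be illustrated with a small sketch (a hypothetical module; Python is used here purely for illustration). The module exposes a rigidly specified interface while hiding its internal representation:

```python
# stack.py - a hypothetical module illustrating modularity and rigid
# interface specification: callers depend only on the documented
# operations, never on the internal representation.

class Stack:
    """LIFO stack.  Specified interface: push(item), pop() -> item,
    peek() -> item, is_empty() -> bool.  The internal list is private."""

    def __init__(self):
        self._items = []          # hidden representation (encapsulation)

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

    def peek(self):
        if not self._items:
            raise IndexError("peek at empty stack")
        return self._items[-1]

    def is_empty(self):
        return not self._items
```

Because the interface is fixed, the module can be debugged, improved or reimplemented (say, on a linked list) without touching any caller - exactly the maintenance benefit that modularity and specification promise.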
SOFTWARE DESIGN DOCUMENTATION
Need to have a design document providing for
Architectural design - the top-level structure and organization of the system is described
and the various components are identified (how the system is decomposed and organized into
components; the interfaces between these components must also be described).

Implementation design - describes each component in sufficient detail to allow for its coding.
The programmer - provides creativity and insight into how to solve new, difficult problems,
plus the ability to express those solutions with sufficient precision to be meaningful to the
computer.
The Computer - provides astonishing reliability, retention, and speed of performance.

The implementation - Code that fully and correctly processes data for its entire problem space,
anticipates and handles all plausible (and some implausible) classes of errors, runs efficiently,
and is structured to be resilient and easy-to-change over time.

Programming languages - these should be higher-level and domain-specific, with features such
as: functions and procedures, functional and logic programming, concurrent and real-time
programming, program generators, mathematical libraries and spreadsheets, and OOP (visual
programming and creation of user interfaces)

A programmer should anticipate diversity and prepare by having: complete and sufficient
method sets, OO methods, table-driven software, configuration files and internationalization,
naming and coding styles, reuse and repositories, self-describing software (plug and play),
parameterization, generics, objects and object classes, error handling, extensible frameworks,
visual configuration specification, and separation of GUI design from functionality implementation
Structuring for validation requires both modular design and structured programming, where
the programmer follows style guides for stepwise refinement that include: assertion-based
checks, state-machine logic, redundant systems with self-diagnosis and fail-safe methods, and
hot-spot analysis and performance tuning. Numerical analysis is also used, with complete and
sufficient design of OO class methods and dynamic validation of visual requests.
CRITERIA FOR DESIGN LANGUAGE
The criteria that are proposed for the choice of a formal design language are:
1. It should be easy to learn its basic use.
2. Formulations in the language should be suggestive and thought provoking.
3. It should allow meaningful manipulation of both expressions and program segments.
4. It should lead to the development of aesthetic criteria for judging the quality of a design.
5. It should be convenient for documentation.
6. It should be uncommitted to any particular technology or problem area.
7. It should be applicable in a consistent way to systems of any foreseeable complexity.
8. It should be executable on a machine
ASPECTS OF OO PROGRAMMING
Object-oriented analysis is a method of analysis that examines requirements from the
perspective of the classes and objects found in the vocabulary of the problem domain
Object-oriented design is a method of design encompassing the process of object-oriented
decomposition and a notation for depicting logical, physical, static and dynamic models of the
system under design
The key features of the OO approach, and of the languages that support it, are that they:
Provide modularity, encapsulation, specification, etc.,
Provide a design language close to human conceptualization,
Allow software manufacturing to be managed more effectively.
A language is object-oriented if and only if it satisfies the following requirements:
It supports objects that are data abstractions with an interface of named operations
Objects have an associated type [class]
Types [classes] may inherit attributes from super-types [super-classes]


The properties of an Object Oriented Programming Language are:
Abstraction - denotes the essential characteristics of an object that distinguish it from
other kinds of objects and thus provide crisply defined conceptual boundaries, relative to
the perspective of the viewer
Encapsulation - is the process of hiding all of the details of an object that do not contribute
to its essential characteristics
Modularity - is the property of a system that has been decomposed into a set of cohesive
and loosely coupled modules
Hierarchy - is a ranking or ordering of abstractions
Typing - is the enforcement of the class of an object, such that objects of different types
may not be interchanged, or they may be interchanged only in very restricted ways
Concurrency - is the property that distinguishes an active object from one that is not active
Persistence - is the property of an object through which its existence transcends time (i.e.
the object continues to exist after its creator ceases to exist) and/or space (i.e. the object's
location moves from the address space in which it was created)
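Several of these properties can be seen working together in a brief sketch (illustrative class names; Python used for illustration):

```python
from abc import ABC, abstractmethod

class Shape(ABC):                       # abstraction: only the essential interface
    @abstractmethod
    def area(self) -> float: ...

class Rectangle(Shape):                 # hierarchy: inherits from Shape
    def __init__(self, w: float, h: float):
        self._w, self._h = w, h         # encapsulation: details hidden behind area()

    def area(self) -> float:
        return self._w * self._h

class Square(Rectangle):                # a further level of the hierarchy
    def __init__(self, side: float):
        super().__init__(side, side)

# typing: every element is a Shape and is used only through the Shape interface
shapes = [Rectangle(2, 3), Square(4)]
total = sum(s.area() for s in shapes)   # polymorphic dispatch
print(total)                            # 6 + 16 = 22
```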
i) Object-oriented analysis, design and programming are being presented as major advances
in software engineering.
ii) Hardware is orders of magnitude more powerful and available memory far larger, but our
ability to make that hardware perform effectively, and to manage this on schedule and to
budget, is still very poor.
Reasons why we often fail to complete systems with large programs on schedule?
1. Inability to make realistic program design schedules and meet them. For the following
reasons:
o Underestimation of time to gather requirements and define system functions
o Underestimation of time to produce a workable (cost and time - wise) program design.
o Underestimation of time to test individual programs.
o Underestimation of time to integrate complete program into the system and complete
acceptance tests.
o Underestimation of time and effort needed to correct and retest program changes.
o Failure to provide time for restructuring program due to changes in requirements.
o Failure to keep documentation up-to-date.
2. Underestimation of system time required to perform complex functions.
3. Underestimation of program and data memory requirements.
4. Tendency to set an end date for job completion and then to try to meet the schedule by
bringing more manpower to the job and splitting the job into program design blocks in
advance of having defined the overall system plan well enough to define the individual
program blocks and their appropriate interfaces.


5. SOFTWARE PROJECT MANAGEMENT

DEFINITION OF TERMS
Software engineering management - the application of management activities, including
planning, coordinating, resource allocation, monitoring, controlling and reporting, to ensure that
the development of software is systematic, disciplined and measurable.

Rationale management - involves problem generation, the solutions considered, the criteria used
to evaluate the alternatives, and the decision-making process

Project management - oversight activities that ensure the delivery of a high-quality system on
time and within budget, including planning, budgeting, hiring and organizing developers into
teams

Software configuration management - establishes baselines, and monitors and controls changes
in work products and versions.

System Testing - concerns finding differences between the system and its models by executing
the system with sample input data sets, and includes:
o Unit testing - the object design is compared with each object and subsystem.
o Integration testing - subsystems are combined and compared with the system
design model.
o Acceptance testing - the accepted system is delivered to the users.
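A unit test at the smallest of these levels might look as follows (the discount function is a hypothetical unit under test; Python's built-in unittest module is used):

```python
import unittest

def discount(price: float, percent: float) -> float:
    """Hypothetical unit under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

class DiscountTest(unittest.TestCase):
    def test_normal_case(self):
        self.assertAlmostEqual(discount(200.0, 25), 150.0)

    def test_zero_discount(self):
        self.assertEqual(discount(80.0, 0), 80.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            discount(50.0, 120)

# Run the test case explicitly and report the outcome.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all tests passed:", result.wasSuccessful())
```

Each test exercises one behaviour of the unit in isolation, including the error path; integration and acceptance testing then build on units that have already passed such checks.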
Software Engineering Management Context
Clients often do not appreciate the complexity inherent in software engineering, particularly
the impact of changed client requirements
The software engineering process itself will generate the need for new or changed client
requirements
Software development is an iterative rather than linear process, thus need to maintain a balance
between creativity and discipline
Management must have an underlying theory concerning software products, which are intangible
and cannot be easily tested, as well as the degree of software complexity and the rapid pace of
change in the underlying technology
There is need to carry out management in the following areas:

Organizational Management - consideration of policy, personnel, communication, portfolio
and procurement management

Project Management - during the initiation and scope definition, planning, enactment, review,
evaluation and closure of the project

Software Engineering Measurement issues concerning goals, selection of software and its
development, Collection of data, Software measurement models and Organizational comparison

Policy management includes means of policy development, Policy dissemination and
enforcement, Development and deployment of standards

Personnel management - Hiring and retention, Training and motivation, mentoring for career
Development, enhancing Communication channels and media, Meeting procedures, Written,
Oral or Negotiation presentations

Portfolio management consideration of multiple clients and/or projects, Strategy development
and coordination, General investment management techniques, Project selection and construction

Procurement management involves Procurement planning and selection, Supplier and
contract coordination

Process Management
Contains the following aspects
Initiation and scope definition - covers the determination and negotiation of requirements,
feasibility analysis (technical, operational, financial, social/political), and the process for review
and revision of requirements
Planning - Process planning, Project planning, Determine deliverables, Effort, schedule and cost
estimation, Resource allocation, Risk management, Quality management, Plan management
Implementation - Enactment of plans, Implementation of measurement process, Monitor
process, Control process, Reporting
Review and evaluation - determining satisfaction of requirements, reviewing and evaluating
performance
Closure - determining closure and closure activities
PROJECT SCHEDULING

Estimation of the time and resources required to complete activities, and their organization into
a coherent sequence.

Involves separating the work (project) into separate activities and judging the time required to
complete these activities, some of which are carried out in parallel

Schedules must: -
Properly co-ordinate the parallel activities
Avoid situations where the whole project is delayed while waiting for a critical task to be finished


Schedules must include allowances (error allowances) for events that can cause delays in
completion, and must therefore be flexible

They must also estimate resources needed to complete each task (human effort, hardware,
software, finance (budget) etc)

NB: the key to estimation is to estimate as if nothing will go wrong, then increase the estimate to
cover anticipated problems, and then add a further contingency factor to cover unanticipated problems.
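The note above can be made concrete with a tiny calculation (all figures are illustrative assumptions):

```python
base_estimate = 40            # person-days, estimated as if nothing will go wrong
anticipated_problems = 0.25   # increase of 25% to cover anticipated problems
contingency = 0.10            # further 10% contingency factor for the unforeseen

estimate = base_estimate * (1 + anticipated_problems) * (1 + contingency)
print(round(estimate, 1))     # 55.0 person-days
```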

Project schedule is usually presented as a set of charts showing
Work breakdown
Activity dependency
Staff allocation

Such charts include: -
Activity bar charts
Activity network chart
Gantt charts (staff allocation Vs time chart)
Gantt Charts
A Gantt chart is a horizontal bar chart that illustrates a project task against a calendar.
The horizontal position of the bar shows the start and end of the activity, and the length of
the bar indicates its duration.
For the work in progress the actual dates are shown on the horizontal axis
A vertical line indicates a current or reporting date
To reduce complexity a Gantt chart for a large project can have a master chart displaying
the major activity groups (where each activity represents several task) and is followed by
individual Gantt charts that show the tasks assigned to team members.
The chart can be used to track and report progress as it presents a clear picture of project
status
It clearly shows overlapping tasks - tasks that can be performed at the same time
The bars can be shaded to clearly indicate percentage completion and project progress
Popular due to its simplicity: it is easy to read, learn, prepare and use
More effective than PERT/CPM charts when one is seeking to communicate schedules
They do not show activity dependencies. One cannot determine from the Gantt chart the
impact on the entire project caused by single activity that falls behind schedule.
The length of the bar only indicates the time span for completing an activity not the number
of people assigned or the person days required
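The essentials of a Gantt chart - bars positioned by start date, bar lengths showing duration, overlaps showing parallel work - can be sketched as a tiny text rendering (the task data is illustrative; Python used for illustration):

```python
# Each task: (name, start_week, duration_weeks) - illustrative data only.
tasks = [
    ("Requirements", 0, 3),
    ("Design",       2, 4),   # overlaps Requirements: parallel work
    ("Coding",       5, 6),
    ("Testing",      9, 4),
]

def gantt(tasks):
    """Return one text line per task, with a '#' bar on a weekly calendar."""
    total = max(start + dur for _, start, dur in tasks)
    width = max(len(name) for name, _, _ in tasks)
    lines = []
    for name, start, dur in tasks:
        bar = " " * start + "#" * dur + " " * (total - start - dur)
        lines.append(f"{name:<{width}} |{bar}|")
    return lines

for line in gantt(tasks):
    print(line)
```

Note what the rendering does and does not show: overlapping bars make parallel tasks obvious, but nothing indicates that Coding depends on Design - exactly the dependency information that a Gantt chart omits and PERT/CPM provides.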

PERT/CPM

Program Evaluation and Review Technique (PERT) (also Critical Path Method, CPM) is
a graphical network model that depicts a project's tasks and the relationships between those
tasks. The project is shown as a network diagram with the activities shown as vectors (arrows)
and events displayed as nodes.
Shows all individual activities and dependencies
It forms the basis for planning and provides management with the ability to plan for best
possible use of resources to achieve a given goal within time and cost limitations
It provides visibility and allows management to control unique programs as opposed to
repetitive situations
Helps management handle the uncertainties involved in programs by answering such
questions as how time delays in certain elements influence others as well as the project
completion. This provides management with a means for evaluating alternatives
It provides a basic structure for reporting information
Reveals interdependencies of activities
Facilitates "what if" exercises
It allows one to perform scheduling risk analysis
Allows a large amount of sophisticated data to be presented in a well organized diagram
from which both the contractor and the customer can make joint decisions
Allows one to evaluate the effect of changes in the program
More effective than Gantt charts when you want to study the relationships between tasks
Requires intensive labour and time
The complexity of the charts adds to implementation problems
Has more data requirements thus is expensive to maintain
Is utilized mainly in large and complex projects

Gantt Charts and PERT/CPM are not mutually exclusive techniques; project managers often
use both methods. Neither handles the scheduling of personnel or the allocation of resources

NETWORK ANALYSIS
Network analysis is a generic name for a family of related techniques developed to aid
management to plan and control projects. It provides planning and control information on time,
cost and resource aspects of a project. It is most suitable where the projects are complex or
large, or where restrictions exist.
The critical path method is applied by drawing a network, either an activity-on-arrow or an
activity-on-node network. In network analysis a project is broken down into its constituent
activities, which are then presented in diagrammatic form. In CPM one has to analyze the project,
draw the network, estimate the time and cost, locate the critical path, schedule the project,
monitor and control the progress of the project, and revise the plan.

Example: Draw a network and find the critical path for the following project.

Activity   Preceding Activity   Duration
A          -                    4
B          A                    2
C          B                    10
D          A                    2
E          D                    5
F          A                    2
G          F                    4
H          G                    3
J          C                    6
K          C, E                 6
L          H                    3
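The forward and backward passes of CPM for the example above can be sketched in a few lines (Python used for illustration; the activity data are copied from the table):

```python
# CPM forward/backward pass for the example network (activity data from the
# table above; durations in the same unspecified time units).
tasks = {
    "A": ([], 4), "B": (["A"], 2), "C": (["B"], 10), "D": (["A"], 2),
    "E": (["D"], 5), "F": (["A"], 2), "G": (["F"], 4), "H": (["G"], 3),
    "J": (["C"], 6), "K": (["C", "E"], 6), "L": (["H"], 3),
}

# Forward pass: earliest finish time of each activity. Iteration order is
# already topological here because predecessors are listed before successors.
earliest = {}
for name, (preds, dur) in tasks.items():
    earliest[name] = max((earliest[p] for p in preds), default=0) + dur

duration = max(earliest.values())            # overall project duration

# Backward pass: latest finish time of each activity.
latest = {name: duration for name in tasks}
for name in reversed(list(tasks)):
    preds, dur = tasks[name]
    for p in preds:
        latest[p] = min(latest[p], latest[name] - dur)

# Critical activities have zero float; together they form the critical path(s).
critical = [n for n in tasks if earliest[n] == latest[n]]
print(duration, critical)   # -> 22 ['A', 'B', 'C', 'J', 'K']
```

Here both A-B-C-J and A-B-C-K are critical paths, each totalling 22 time units; E, for example, has a float of 5 and is not critical.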


ADVANTAGES OF PROJECT MANAGEMENT TOOLS
Easier visualization of relationships - the produced network diagram shows how the different
tasks and activities relate to each other, making the project easier to understand intuitively,
improving planning and enhancing communication of the project plan details to the interested parties

More effective planning - CPM forces management to think the project through, for it requires
careful, detailed planning and high discipline, which justifies its use

Better focusing on problem areas - the technique enables the manager to pinpoint likely
bottlenecks and problem areas before they occur

Improved resource allocation - resources can be directed where most needed, thus reducing costs
and speeding up completion of the project; e.g. overtime can be eliminated or confined to those
tasks where it will do most good

Strong alternative options - management can simulate the effect of alternative courses of action,
gauge the effect of problems in carrying out particular tasks, and make contingency plans

Management by exception - CPM identifies those activities whose timely completion is critical to
the overall timetable and enables the leeway (float) on other activities to be calculated. This
enables management to focus attention on the important areas of the project

Improved project monitoring - by comparing the actual performance of each task with the
expected, the manager can immediately recognize when problems are occurring, identify their
causes and take appropriate action in time to rescue the project.
Importance of project scheduling
The project manager must know the duration of each activity, the order in which the
activities must be performed, the start and end times for each activity, and who will be
assigned each specific task.
Scheduling balances activity time estimates, sequences and personnel assignments to
achieve a workable schedule, which is essential for project success
The schedule provides:
Effective use of estimating processes
Ease in project control
Ease in time or cost revisions
Allows for better communication of project tasks and deadlines

Criteria for defining project success

Completed:
i) Within the allocated time
ii) Within the budgeted cost
iii) At the proper performance or specification level
iv) With acceptance by the customer/user
v) With minimum or mutually agreed upon scope changes
vi) Without disturbing the main work flow of the organization
vii) Without changing the corporate culture
viii) Within the required quality and standards, so that you can use the customer's name as a
reference

Potential benefits of project management
i) Identification of functional responsibilities to ensure that all activities are accounted
for regardless of personnel turnover
ii) Minimizing the need for continuous reporting
iii) Identification of time limits for scheduling
iv) Measurement of accomplishment against plans
v) Early identification of problems so that corrective action may follow
vi) Improved estimating capability for future planning
vii) Knowing when objectives cannot be met or will be exceeded


TOTAL QUALITY MANAGEMENT (TQM)
TQM is based on the work of Dr W. Edwards Deming, who is reputed to have transformed Japanese
products from shoddy to first in the world. He developed 14 points for management to transform
organizations.

Consider the following aspects, which can be implemented during TQM:
1. Create constancy of purpose toward improvement of product and service, with the aim to
become competitive and to stay in business and to provide jobs.
2. Adopt the new philosophy. We are in a new economic age. Western management must
awaken to the challenge, must learn their responsibilities, and take on leadership for
change.
3. Cease dependence on inspection to achieve quality. Eliminate the need for inspection on
a mass basis by building quality into the product in the first place.
4. End the practice of awarding business on the basis of price tag. Instead, minimize total
cost. Move toward a single supplier for any one item, with a long-term relationship of
loyalty and trust.

5. Improve constantly and forever the system of production and service, to improve quality
and productivity and thus constantly decrease costs.
6. Institute training on the job.
7. Institute leadership. The aim of leadership should be to help people and machines and
gadgets to do a better job. Leadership of management is in need of overhaul, as well as
leadership of production workers.
8. Drive out fear, so that everyone may work effectively for the company.
9. Break down barriers between departments. People in research, design, sales, and
production must work as a team to foresee problems in production and in use that may be
encountered with the product or service.
10. Eliminate slogans, exhortations, and targets for the work force that ask for zero defects
and new levels of productivity.
11. Eliminate work standards (quotas) on the factory floor. Substitute leadership. Eliminate
management by objective. Eliminate management by numbers and numerical goals.
Substitute leadership.
12. Remove barriers that rob the hourly worker of his right to pride of workmanship. The
responsibility of supervisors must be changed from stressing sheer numbers to quality.
Remove barriers that rob people in management and engineering of their right to pride of
workmanship. This means abolishment of the annual merit rating and management by
objective.
13. Encourage education and self-improvement for everyone.
14. Take action to accomplish the transformation.

6. SOFTWARE PROJECT ESTIMATION CONCERNS

COST AND EFFORT ESTIMATION
Software project managers are responsible for controlling project budgets, so they must be able
to estimate how much a software development is going to cost. The principal
components of project costs are:

Computer resources - These include storage space, processors, terminals, drives, servers,
output media, etc. Generally, sufficient allowance is given for data growth, which
involves work files and transaction documents

Project overheads - Overheads may include the many supporting activities that form an integral
part of the project e.g. assessing technical performance, commitment of personnel, project
control, system demonstration, project reviews etc
Also considered are Traveling, Training and Effort Costs (e.g. paying software engineers)

PROJECT ESTIMATION
System (software) cost and effort estimates can never be exact; too many variables (human,
technical, environmental, political) can affect system cost and the effort applied to development.
The four major tasks undertaken by the project manager when preparing estimates for a given
project are:
Assess the project
Identify all the activities
Evaluate all the net resource activities
Cost the resources
Project estimation strives to achieve reliable cost and effort estimates.

A number of options arise trying to achieve this: -

a) Delay estimation until late in the project (estimates done after the project)
b) Base estimates on similar projects that have already been completed
c) Use relating simple decomposition technique to generate project cost and effort
estimates
d) Use one or more empirical models for system cost and effort estimation

The 1st option is not practical; estimates must be provided upfront.
The 2nd option only works for similar projects.
The 3rd and 4th options should be used together to check on one another.

Decomposition techniques take a divide-and-conquer approach to project estimation. The project is
decomposed (divided) into major functions and related activities, and cost and effort are estimated
for each.

Empirical estimation models are based on experience (historical data) and take the form

d = f(vi)

where d = one of a number of estimated values (e.g. effort, cost, project duration, etc.)
and vi = selected independent parameters (e.g. estimated LOC or FP)
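As an illustration of building such a model from experience, the sketch below fits effort = a * KLOC^b by least squares on the log scale; the historical data points are invented for illustration, not taken from any real organization:

```python
import math

# Hypothetical historical data: (size in KLOC, measured effort in person-months).
history = [(10, 24), (25, 62), (50, 130), (120, 340), (200, 590)]

# Fit log(effort) = log(a) + b * log(KLOC) by closed-form least squares.
xs = [math.log(kloc) for kloc, _ in history]
ys = [math.log(effort) for _, effort in history]
n = len(history)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = math.exp(my - b * mx)

def estimate_effort(kloc):
    """Estimated effort (person-months) for a project of the given size."""
    return a * kloc ** b

print(round(estimate_effort(80)))   # effort estimate for a new 80 KLOC project
```

The exponent b that comes out of such a fit indicates whether effort grows linearly (b near 1) or faster than linearly with size for that organization's history.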



DECOMPOSITION TECHNIQUES - SOFTWARE/ SYSTEM SIZING

The accuracy of a system (software) project estimate is predicated on a number of things:
a) The degree to which the planner has properly estimated the size of the product to be built
b) The ability to translate the size estimate into human effort, calendar time and money
c) The degree to which the project plan reflects the abilities of the system development team
d) The stability of the product requirements and of the environment that supports the system
development effort

A project estimate is only as good as the estimate of the size of the work to be accomplished.
Size is a quantifiable outcome of the system/software project.

Two common size measures are Lines of Code (LOC) and Function Points (FP).




QUANTITATIVE MANAGEMENT AND ASSURANCE

It is the responsibility of quality managers to ensure that the required level of quality is achieved.

Definition: Quality management involves defining appropriate procedures and standards and
checking that all engineers (developers) follow them.

It depends on developing a quality culture.

System quality is multi-dimensional; the product should meet its specifications. However:
a) Specifications depend on customer needs and wants, as well as on the developer's
needs/requirements, which may not be included in the specification
b) Some qualities are difficult to measure
c) Some specifications are incomplete

"Quality is hard to define, impossible to measure and easy to recognize."
Definition: Quality is continually satisfying customer requirements (Smith 1987)
International Standards Organization (ISO): "The totality of features and characteristics of a
product or service that bear on the ability to satisfy specified or implied needs" (ISO 1986)

Garvin's view of quality (Garvin 1984) identifies five views of quality:
a) The transcendent view - quality is immeasurable but can be seen, sensed or felt and
appreciated, e.g. in art or music
b) Product-based view - quality is measured by the attributes/ingredients in a product
c) User-based view - quality is fitness for purpose, meeting needs as specified
d) Manufacturing-based view - quality is conformance to specification
e) Value-based view - the ability to provide the customer with the product/services they
want at a price they can afford.



SOFTWARE COST ESTIMATION
The dominant cost is the effort cost. This is the most difficult to estimate and control, and has the
most significant effect on overall costs. Software costing should be carried out objectively with
the aim of accurately predicting the cost to the contractor of developing the software. Software
cost estimation is a continuing activity which starts at the proposal stage and continues throughout
the lifetime of a project. Projects normally have a budget, and continual cost estimation is
necessary to ensure that spending is in line with the budget. Effort can be measured in staff-hours
or staff-months (formerly known as man-hours or man-months). Boehm (1981) discusses seven
techniques of software cost estimation:
(1) Algorithmic cost modeling - A model is developed using historical cost information which
relates some software metric (usually its size) to the project cost. An estimate is made of that
metric and the model predicts the effort required.



(2) Expert judgement - One or more experts on the software development techniques to be used
and on the application domain are consulted. They each estimate the project cost and the final
cost estimate is arrived at by consensus.
(3) Estimation by analogy - This technique is applicable when other projects in the same
application domain have been completed. The cost of a new project is estimated by analogy with
these completed projects.
(4) Parkinson's Law - Parkinson's Law states that work expands to fill the time available. In
software costing, this means that the cost is determined by available resources rather than by
objective assessment. If the software has to be delivered in 12 months and 5 people are available,
the effort required is estimated to be 60 person-months.
(5) Pricing to win - The software cost is estimated to be whatever the customer has available to
spend on the project. The estimated effort depends on the customer's budget and not on the
software functionality.
(6) Top-down estimation - A cost estimate is established by considering the overall
functionality of the product and how that functionality is provided by interacting sub-functions.
Cost estimates are made on the basis of the logical function rather than the components
implementing that function.
(7) Bottom-up estimation - The cost of each component is estimated. All these costs are added
to produce a final cost estimate.

ESTIMATION METHODS / TOOLS
Estimation may be the most difficult task in an entire software project. Almost all software cost
estimation is related to human resources. This is different from other engineering disciplines,
which have to deal with more physical resources.

Many estimation techniques use some estimate of size, such as
KLOC (Thousands of lines of code)
FP (Function points)
OP (Object points)

Estimation techniques for software projects are:
Using historical data
Decomposition techniques
Empirical models
A combination of any or all of the above

HISTORICAL DATA
Using historical data for estimation is self-explanatory: this method uses the track record of
previous projects to make estimates for the new project.
Main advantage: It is specific to that organization

Main disadvantages:
Continuous process improvement is sometimes hard to factor in
Some projects may be very different from previous ones
ALGORITHMIC COST MODELING
Costs are analyzed using mathematical formulae linking costs with metrics. The most commonly
used metric for cost estimation is the number of lines of source code in the finished system
(which of course is not known). Size estimation may involve estimation by analogy with other
projects, estimation by ranking the sizes of system components and using a known reference
component to estimate the component size or may simply be a question of engineering
judgement.
Code size estimates are uncertain because they depend on hardware and software choices, use of
a commercial database management system etc. An alternative to using code size as the
estimated product attribute is the use of `function- points', which are related to the functionality
of the software rather than to its size. Function points are computed by counting the following
software characteristics:
External inputs and outputs.
User interactions.
External interfaces.
Files used by the system.
Each of these is then individually assessed for complexity and given a weighting value which
varies from 3 (for simple external inputs) to 15 (for complex internal files). The function point
count is computed by multiplying each raw count by the estimated weight and summing all
values, then multiplying the sum by project complexity factors which consider the overall
complexity of the project according to a range of factors such as the degree of distributed
processing, the amount of reuse, the performance, and so on.
Function point counts can be used in conjunction with lines of code estimation techniques. The
number of function points is used to estimate the final code size.
Based on historical data analysis, the average number of lines of code in a particular language
required to implement a function point can be estimated (AVC). The estimated code size for a
new application is computed as follows: Code size = AVC x Number of function points
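The counting scheme above can be sketched as follows; the raw counts, complexity ratings and AVC value are invented for illustration, while the weights follow the commonly published simple/average/complex scale (from 3 for simple external inputs up to 15 for complex internal files):

```python
# Illustrative function-point computation. The counts, ratings total and AVC
# below are invented; the weight table is the commonly published scheme.
WEIGHTS = {
    "external_inputs":     {"simple": 3, "average": 4, "complex": 6},
    "external_outputs":    {"simple": 4, "average": 5, "complex": 7},
    "user_inquiries":      {"simple": 3, "average": 4, "complex": 6},
    "internal_files":      {"simple": 7, "average": 10, "complex": 15},
    "external_interfaces": {"simple": 5, "average": 7, "complex": 10},
}

# Hypothetical raw counts for a small system: (characteristic, complexity, count).
counts = [
    ("external_inputs", "average", 20),
    ("external_outputs", "simple", 12),
    ("user_inquiries", "average", 8),
    ("internal_files", "complex", 4),
    ("external_interfaces", "simple", 2),
]

unadjusted_fp = sum(WEIGHTS[kind][cx] * n for kind, cx, n in counts)

# Project complexity adjustment: 0.65 + 0.01 * (sum of 14 ratings, each 0-5).
ratings_total = 42                  # assumed sum of the 14 project ratings
fp = unadjusted_fp * (0.65 + 0.01 * ratings_total)

AVC = 53                            # assumed average LOC per FP for the language
code_size = AVC * fp                # early code size prediction, in LOC
print(round(fp), round(code_size))
```

With these invented figures the unadjusted count is 230 FP, adjusted to about 246 FP, giving an early size prediction of roughly 13,000 lines of code.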
The advantage of this approach is that the number of function points can often be estimated from
the requirements specification, so an early code size prediction can be made.

Levels of selected software languages relative to Assembler language

DECOMPOSITION TECHNIQUES
Decomposition techniques for estimation can be one of two types:

Problem-based - Use functional or object decomposition, estimate each object, and sum the
result. Estimates either use historical data or empirical techniques.
Process-based - Estimate effort and cost for each task of the process, and then sum the result.
Estimates use historical data
Main advantage: Easier to estimate smaller parts
Main disadvantage: More variables involved means more potential errors

EMPIRICAL MODELS
Empirical models for estimation use formulae of the form g = f(x)
Where g is the value to be estimated (cost, effort or project duration) and x is a parameter such as
KLOC, FP or OP. Most formulae involving KLOC consistently show that there is an almost
linear relationship between KLOC and estimated total effort.

Main advantage: Easy to use
Main disadvantage: Models are usually derived from a limited number of projects, but are used
in a more generalized fashion


SOFTWARE LIFE CYCLE MANAGEMENT (SLIM)
SLIM was developed for the US Army in 1978 to cover projects exceeding 70 KLOC. This model,
due to Putnam, assumes that the effort for a software development project is distributed similarly
to a collection of Rayleigh curves, one for each major development activity.

The model is based on empirical studies and relates the size (S), a technology factor (C), the total
life-cycle effort in person-years (K) and the development time (td); thus

S = C x K^(1/3) x td^(4/3)

The equation allows one to assess the effect of varying the delivery date on the total effort needed
to complete the project. Thus for a 10% decrease in elapsed time:

S = C x K^(1/3) x td^(4/3) = C x K'^(1/3) x (0.9 td)^(4/3)

i.e. K'/K = (1/0.9)^4 = 1.52, a 52% increase in total life-cycle effort.
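The 52% figure can be checked numerically: with S and C held constant, K scales as td^-4, so a 10% schedule compression multiplies effort by (1/0.9)^4:

```python
# Putnam/SLIM schedule-compression check: holding S = C * K**(1/3) * td**(4/3)
# constant implies K is proportional to td**-4.
def effort_ratio(schedule_factor):
    """Effort multiplier when the schedule is scaled by schedule_factor."""
    return schedule_factor ** -4

ratio = effort_ratio(0.9)
print(f"{ratio:.2f}")   # prints 1.52, i.e. a 52% increase in life-cycle effort
```

The fourth-power relationship is what makes SLIM estimates so sensitive to the chosen delivery date.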

Advantages of SLIM
Uses linear programming to consider development constraints on both cost and effort
Has fewer parameters needed to generate an estimate (over COCOMO)

Disadvantages of SLIM
The model has often been found insufficient in validation studies
Not suitable for small projects
Estimates are extremely sensitive to the technology factor

CONSTRUCTIVE COST MODEL
COCOMO is probably the most well-known and well-established empirical model. It was first
introduced by Barry Boehm in 1981. It has since evolved into a more comprehensive model
named COCOMO II. COCOMO II has a variety of formulae
For various stages of the project
For various size parameters (KLOC, FP, object points)
For various types of project teams


COCOMO is the most widely used model for effort and cost estimation and considers a wide variety
of factors. Projects fall into three categories (organic, semidetached and embedded), characterized
by their size. The basic model uses only source size; there is also an intermediate model which, as
well as size, uses 15 other cost drivers. Cost drivers for the COCOMO model:
Software reliability
Size of application database
Complexity
Analyst capability
Software engineering capability
Applications experience
Virtual machine experience
Programming language expertise
Performance requirements
Memory constraints
Volatility of the virtual machine environment
Turnaround time
Use of software tools
Application of software engineering methods
Required development schedule
Values are assigned by the manager.





The intermediate model is more accurate than the basic model.
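The basic model can be sketched with the 1981 textbook coefficients (effort in person-months from size, then development time from effort); the coefficient table below is Boehm's published basic-model set, not a locally calibrated one, and the 32 KLOC example project is invented:

```python
# Basic COCOMO (Boehm, 1981): effort = a * KLOC**b person-months,
# development time = c * effort**d months, per project category.
COEFFS = {
    #                a    b     c    d
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode):
    a, b, c, d = COEFFS[mode]
    effort = a * kloc ** b      # person-months
    time = c * effort ** d      # development time in months
    staff = effort / time       # average headcount implied by the estimate
    return effort, time, staff

effort, time, staff = basic_cocomo(32, "organic")
print(f"{effort:.0f} PM, {time:.1f} months, {staff:.1f} people")
```

The intermediate model would multiply this nominal effort by an adjustment factor formed from the 15 cost drivers listed above.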



7. SOFTWARE METRICS
From a survey of managers and technicians, the following were identified as factors affecting
software quality and productivity:
quality of external documentation
programming language
well-defined programming practices
availability of tools
programmer experience in data processing
programmer experience in the functional area
effect of project communication
independent modules for individual assignment
In an experiment, five programming teams were each given a different primary objective:
minimum internal memory
output clarity
program clarity
minimum source statements
minimum hours
When productivity was evaluated, each team ranked first in its primary objective. This shows that
programmers respond to a goal.
Software metrics are defined as the set of measures that are considered, and are to be
incorporated, for a quality software product. These software metrics include:

Software Maintainability - This accounts for the main programming cost in most installations, and is
affected by data structures, logical structure, documentation, diagnostic tools, and by personnel
attributes such as specialization, experience, training, intelligence and motivation. Maintenance
includes the cost of rewriting, testing, debugging and integrating new features. Methods for
improving maintainability are:
inspections
automated audits of comments
test path analysis programs
use of pseudo code documentation
dual maintenance of source code
modularity
Structured program logic flow.
Maintainability is "the ease with which changes can be made to satisfy new requirements or to
correct deficiencies" [Balci 1997]. Well designed software should be flexible enough to
accommodate future changes that will be needed as new requirements come to light. Since
maintenance accounts for nearly 70% of the cost of the software life cycle [Schach 1999], the
importance of this quality characteristic cannot be overemphasized. Quite often the programmer
responsible for writing a section of code is not the one who must maintain it. For this reason, the
quality of the software documentation significantly affects the maintainability of the software
product.
Software Correctness is "the degree with which software adheres to its specified requirements"
[Balci 1997]. At the start of the software life cycle, the requirements for the software are
determined and formalized in the requirements specification document. Well designed software
should meet all the stated requirements. While it might seem obvious that software should be
correct, the reality is that this characteristic is one of the hardest to assess. Because of the
tremendous complexity of software products, it is impossible to perform exhaustive execution-based
testing to ensure that no errors will occur when the software is run. Also, it is important to
remember that some products of the software life cycle such as the design specification cannot be
"executed" for testing. Instead, these products must be tested with various other techniques such
as formal proofs, inspections, and walkthroughs.
Software Reusability is "the ease with which software can be reused in developing other software"
[Balci 1997]. By reusing existing software, developers can create more complex software in a
shorter amount of time. Reuse is already a common technique employed in other engineering
disciplines. For example, when a house is constructed, the trusses which support the roof are
typically purchased preassembled. Unless a special design is needed, the architect will not bother
to design a new truss for the house. Instead, he or she will simply reuse an existing design that has
proven itself to be reliable. In much the same way, software can be designed to accommodate reuse
in many situations. A simple example of software reuse could be the development of an efficient
sorting routine that can be incorporated in many future applications.
Software Documentation - is one of the items said to lead to high maintenance costs. It
is not just the program listing with comments. A program librarian must be responsible for the
system documentation, but programmers are responsible for the technical writing. Other aids may
be text editors and a Source Code Control System (SCCS) tool for producing records. Some
companies insist that programmers dictate any tests or changes onto a tape every day.

Software Complexity - is a measure of the resources which must be expended in developing,
maintaining, or using a software product. A large share of the resources is used to find errors,
debug, and retest; thus, an associated measure of complexity is the number of software errors.
Items to consider under resources are:
time and memory space;
man-hours to supervise, comprehend, design, code, test, maintain and change software;
number of interfaces;
scope of support software, e.g. assemblers, interpreters, compilers, operating systems,
maintenance program editors, debuggers, and other utilities;
the amount of reused code modules;
travel expenses;
secretarial and technical publications support;
Overheads relating to support of personnel and system hardware.

Consideration of storage, complexity, and processing time may or may not be considered in the
conceptualization stage. For example, storage of a large database might be considered, whereas if
one already existed, it could be left until system-specification.
Measures of complexity are useful to:
1. Rank competitive designs and give a "distance" between rankings.
2. Rank difficulty of various modules in order to assign personnel.
3. Judge whether subdivision of a module is necessary.
4. Measure progress and quality during development.

Software Reliability - Reliability refers to issues related to the design of a product so that it
will operate well for a substantial length of time, and to a metric: the probability of operational
success of the software.
Probabilistic models can refer to deterministic events (e.g. a motor burns out) whose time of
occurrence cannot be predicted, or to random events. The probability space (the space of all
possible occurrences) must first be defined; e.g. in a probability model for program errors it is all
possible paths in a program. Then the rules for selection are specified, e.g. for each path,
combinations of initial conditions and input values. A software failure occurs when an execution
sequence containing an error is processed.

Software Reliability Theory is the application of probability theory to the modeling of failures
and the prediction of success probability. Thus software reliability is the probability that the
program performs successfully, according to specifications, for a given time period. This requires
precise statements/specifications of:
the host machine
the operating system and support software
the operating environment
the definition of success
details of hardware interfaces with the machine
details of ranges and rates of I/O data
The operational procedures.
Errors are found from a system failure, and may be: hardware, software, operator, or unresolved.
Time may be divided into:
operating time,
calendar time during operation,
calendar time during development,
man-hours of coding,
development, testing,
debugging,
Computer test times.

Software is repairable if it can be debugged and the errors corrected. This may not be possible
without inconveniencing the user, e.g. air-traffic control system.
Software Reliability is "the frequency and criticality of software failure, where failure is an
unacceptable effect or behavior occurring under permissible operating conditions" [Balci 1997].
The frequency of software failure is measured by the average time between failures. The
criticality of software failure is measured by the average time required for repair. Ideally,
software engineers want their products to fail as little as possible (i.e., demonstrate high
correctness) and be as easy as possible to fix (i.e., demonstrate good maintainability). For some
real-time systems such as air traffic control or heart monitors, reliability becomes the most
important software quality characteristic. However, it would be difficult to imagine a highly
reliable system that did not also demonstrate high correctness and good maintainability.
Software availability - is the probability that the program is performing successfully, according
to specifications, at a given point in time. Availability is defined as:
1. The ratio of systems up at some instant to the size of the population studied (no. of
systems).
2. The ratio of observed uptime to the sum of the uptime and downtime:
A = T (up) / (T (up) + T (down)) (of a single system).
These measurements are used to:
Quantify A and compare with other systems or goals.
Track A over time to see if it increases as errors are found.
Plan for repair personnel, facilities (e.g. test time) and alternative service.
If the system is still in the design and development phase then a third definition is used:
3. The ratio of the mean time to failure (uptime) to the sum of the mean time to failure and
the mean time to repair (downtime): A = MTTF / (MTTF + MTTR)
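A minimal sketch of the second and third definitions (the observed uptime ratio and the MTTF/MTTR form); the hour figures are invented for illustration:

```python
# Availability as uptime / (uptime + downtime). The same formula covers the
# observed-ratio definition and the design-phase MTTF/MTTR definition.
def availability(uptime, downtime):
    return uptime / (uptime + downtime)

# Observed over a month (720 h) of operation of a single system:
A_observed = availability(uptime=712.5, downtime=7.5)

# Predicted at design time from mean times: MTTF = 950 h, MTTR = 50 h.
A_predicted = availability(uptime=950.0, downtime=50.0)

print(f"{A_observed:.3f} {A_predicted:.3f}")
```

Tracking the observed figure over successive months shows whether availability improves as errors are found and corrected.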
Various hypotheses exist about program errors, and seem to be true, but no controlled tests have
been run to prove or disprove them:
1. Bugs per line constant. There are fewer errors per line in a high level language. Many
types of errors in machine code do not exist in HOL.
2. Memory shortage encourages bugs. Mainly due to programming "tricks" used to squeeze
code.
3. Heavy load causes errors to occur. Very difficult to document and test heavy loads.
4. Tuning reduces error occurrences rate. This involves removing errors for a class of input
data. If new inputs are needed, new errors could occur, and the system (hardware and
software) must be retuned.
Further hypotheses about errors:
1. The normalized number of errors is constant. Normalization is the total number of errors
divided by the number of machine language instruction.
2. The normalized error-removal rate is constant. These two hypotheses apply over similar
programs.
3. Bug characteristics remain unchanged as debugging proceeds. Those found in the first
few weeks are representative of the total bug population.
4. Independent debugging results in similar programs. When two independent debuggers
work on a large program, the evolution of the program is such that the difference between
their versions is negligible.
Many researchers have put forward models of reliability based on measures of the hardware, the
software, and the operator; and used them for prediction, comparative analysis, and development
control. Error reliability and availability models provide a quantitative measure of the goodness
of the software. There are still many unanswered questions.

Software Portability is "the ease with which software can be used on computer configurations
other than its current one" [Balci 1997]. Porting software to other computer configurations is
important for several reasons. First, "good software products can have a life of 15 years or more,
whereas hardware is frequently changed at least every 4 or 5 years. Thus good software can be
implemented, over its lifetime, on three or more different hardware configurations" [Schach 1999].
Second, porting software to a new computer configuration may be less expensive than developing
analogous software from scratch. Third, the sales of "shrink-wrapped software" can be increased
because a greater market for the software is available.

Software Efficiency is "the degree with which software fulfills its purpose without waste of
resources" [Balci 1997]. Efficiency is really a multifaceted quality characteristic and must be
assessed with respect to a particular resource such as execution time or storage space. One measure
of efficiency is the speed of a program's execution. Another measure is the amount of storage space
the program requires for execution. Often these two measures are inversely related, that is,
increasing the execution efficiency causes a decrease in the space efficiency. This relationship is
known as the space-time tradeoff. When it is not possible to design a software product with
efficiency in every aspect, the most important resources of the software are given priority.

Software Comprehensibility - This includes such issues as:
high and low-level comments
mnemonic variable names
complexity of control flow
general program type
Sheppard carried out experiments with professional programmers, varying the type of program
(engineering, statistical, non-numeric), the level of structure, and the level of mnemonic variable
names. He found that the least structured program was the most difficult to reconstruct (after 25
minutes of study) and the partially structured one the easiest. No differences were found for
mnemonic variable names or for the order in which the programs were presented.


The software metrics and the aspects of quality assurance above can be summarised as follows:

The authors define a hierarchical software characteristic tree in which an arrow indicates logical
implication. The lowest-level characteristics are combined into medium-level characteristics, and the
lowest-level ones are recommended as quantitative metrics; each is defined. Each metric was then
evaluated by its correlation with program quality, its potential benefits in terms of insights and decision
inputs for the developer and user, whether it is quantifiable, and the feasibility of automating its evaluation.
The list is more useful as a checklist for programmers than as a guide to program construction.

SOFTWARE TESTING
Defect testing - Testing programs to establish the presence of system defects. The goal is to
discover defects in programs. A successful defect test is one that causes a program to behave
in an anomalous way. Tests show the presence, not the absence, of defects.
Component testing - Testing of individual program components, usually the responsibility of
the component developer (except sometimes for critical systems). Tests are derived from the
developer's experience.
Integration testing - Testing of groups of components integrated to create a system or sub-
system. This is the responsibility of an independent testing team, and tests are based on a
system specification.
Black-box testing - An approach to testing in which the program is treated as a black box, so
the test cases are based on the system specification.
Test data - Inputs which have been devised to test the system.
Test cases - Inputs to test the system together with the predicted outputs from these inputs if
the system operates according to its specification.
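The distinction between test data and test cases can be sketched as follows. The `largest` function and its test values below are hypothetical, chosen for illustration: each test case pairs test data with the output predicted from the specification alone, with no knowledge of the implementation.

```python
# Hypothetical component under test: the specification says it returns
# the largest element of a non-empty sequence.
def largest(seq):
    result = seq[0]
    for x in seq[1:]:
        if x > result:
            result = x
    return result

# Test cases: (test data, predicted output) pairs derived only from
# the specification -- the black-box view of the component.
test_cases = [
    ([7], 7),            # single-value sequence
    ([3, 9, 1], 9),      # maximum in the middle
    ([-5, -2, -8], -2),  # all-negative sequence
]

for data, expected in test_cases:
    assert largest(data) == expected
```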

Testing guidelines
Test software with sequences which have only a single value
Use sequences of different sizes in different tests
Derive tests so that all the elements of the sequence are accessed
White-box testing - also referred to as structural testing; the derivation of test cases according
to the program structure, where knowledge of the program is used to identify additional test
cases, especially to exercise conditions and the elements of arrays
Path testing - The objective of path testing is to ensure that the set of test cases is such that each
path through the program is executed at least once. The starting point for path testing is a
program flow graph, in which nodes represent program decisions and arcs represent the flow of
control
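A minimal sketch of path testing, assuming a hypothetical `classify` function: its flow graph has two decision nodes, giving four paths, and one test case is chosen so that each path is executed at least once.

```python
def classify(x, limit):
    # Two decisions (A and B) give four paths through the flow graph.
    if x < 0:          # decision A
        x = -x
    if x > limit:      # decision B
        return "over"
    return "within"

# One test case per path: inputs chosen so every path executes once.
assert classify(5, 10) == "within"   # A false, B false
assert classify(15, 10) == "over"    # A false, B true
assert classify(-3, 10) == "within"  # A true,  B false
assert classify(-20, 10) == "over"   # A true,  B true
```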
Top-down testing - Starts with high-level system and integrate from the top-downwards
replacing individual components by stubs where appropriate
Bottom-up testing - Integrate individual components in the various levels until the complete
system is created.
NB. In practice, most of testing approaches involves a combination of these strategies
Interface testing - Takes place when modules or sub-systems are integrated to create larger
systems. Objectives are to detect faults due to interface errors or invalid assumptions about
interfaces. Particularly important for object-oriented development as objects are defined by their
interfaces
Interfaces types
Parameter interfaces - Data passed from one procedure to another
Shared memory interfaces - Block of memory is shared between procedures
Procedural interfaces - Sub-system encapsulates set of procedures to be called by other systems
Message passing interfaces - Sub-systems request services from other sub-systems
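An interface test for a parameter interface can be sketched as follows (the `average_speed` function and its values are hypothetical): the test probes assumptions a caller could get wrong, such as parameter order or units, rather than just checking that some value is returned.

```python
# Hypothetical parameter interface: average_speed(distance_m, time_s).
def average_speed(distance_m, time_s):
    return distance_m / time_s

# The test value is deliberately asymmetric: a caller that swapped the
# two parameters would compute 4 / 100 = 0.04, so the wrong assumption
# about the interface is detected immediately.
assert average_speed(100.0, 4.0) == 25.0
```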
Stress testing - Stress testing checks for unacceptable loss of service or data. It is particularly
relevant to distributed systems, which can exhibit severe degradation as a network becomes
overloaded. Stressing the system often causes defects to be revealed.
Object-oriented testing - The components to be tested are object classes that are instantiated as
objects; this is an extension of white-box testing.
Scenario-based testing - Identify scenarios from use-cases and supplement them with
interaction diagrams that show the objects involved in the scenario


SOFTWARE VERIFICATION AND VALIDATION
Verification and validation are concerned with assuring that a software system meets a user's
needs.
Validation: validation shows that the program meets the customer's needs; the software should
do what the user really requires. The designers are guided by the question of whether they are
building the right product.
Verification: verification shows conformance with the specification; the software should conform
to its functional specification. The designers are guided by the question of whether they are
building the product right.
Static and dynamic verification
Static verification (software inspections) is concerned with the analysis of static system
representations to discover problems within the software product, based on document and code
analysis.
Dynamic verification (software testing) is concerned with exercising and observing product
behaviour: the system is executed with test data and its operational behaviour is observed.

Program testing is done to reveal the presence of errors, NOT their absence. A successful test
is a test which discovers one or more errors. Testing is the only validation technique for
non-functional requirements.
Verification and validation should establish confidence that the software is fit for the purpose it is
designed for. This does NOT mean completely free of defects; rather, it must be good enough for
its intended use, which determines the degree of confidence needed. This depends on the system's
purpose, user expectations and the marketing environment:
Software function - the level of confidence depends on how critical the software is to an
organisation
User expectations - users may have low expectations of certain kinds of software
Marketing environment - getting a product to the market early may be more important than
finding all the defects in the program
SOFTWARE TESTING

Definition 1 - Testing is classified as a dynamic verification and validation activity.

Definition 2 - Testing involves actual execution of program code using representative test data
sets to exercise the program; the outputs are examined to detect any deviation from the expected
output.

Reviews can be applied to:
Requirement specifications
High level system designs
Detailed designs
Program code
User documentation
Operation of delivered system

Objectives of Testing
1. To demonstrate the operation of the software.
2. To detect errors in the software and thereby:
obtain a level of confidence,
produce a measure of quality.

THE TESTING PROCESS
Except for small programs, systems should not be tested as a single monolithic unit; testing
should proceed in stages, carried out incrementally in conjunction with system implementation.
The most widely used testing process consists of 5 stages:
(a) Unit testing
(b) Module testing
(c) Sub-system testing
(d) System testing
(e) Acceptance (alpha) testing.

(A) UNIT TESTING
Unit testing is where individual components are tested independently to ensure they operate
correctly.
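A minimal unit-testing sketch (the temperature-conversion component below is hypothetical): the component is exercised independently of any other part of the system, each test checking one expected behaviour.

```python
# Hypothetical unit under test.
def celsius_to_fahrenheit(c):
    return c * 9 / 5 + 32

# Unit tests: the component is tested in isolation from the rest of
# the system, against values known from its specification.
def test_freezing_point():
    assert celsius_to_fahrenheit(0) == 32

def test_boiling_point():
    assert celsius_to_fahrenheit(100) == 212

test_freezing_point()
test_boiling_point()
```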

(B) MODULE TESTING
A module is a collection of dependent components, e.g. an object class, an abstract data type or
a collection of procedures and functions.
Module testing is where related components (a module) are tested without the other system modules.

(C) SUB-SYSTEM TESTING
Sub-systems are integrated to make up a system.
Sub-system testing aims at finding errors arising from unanticipated interactions between
sub-systems and system components. It also aims at validating that the system meets its
functional and non-functional requirements.

(D) SYSTEM TESTING
The sub-systems are integrated to make up the entire system; testing checks that the complete
system meets its requirements and finds errors that only emerge when the whole system is
exercised.

(E) ACCEPTANCE TESTING (ALPHA TESTING)

Acceptance testing is also known as alpha testing; it is the last stage of the testing process.
In this case the system is tested with real data (from the client) and not with simulated test data.
Acceptance testing:
Reveals errors and omissions in the system requirements definition.
Tests whether the system meets the users' needs and whether the system performance is acceptable.
Acceptance testing is carried out until the users/clients agree that it is an acceptable
implementation of the system.

N/B 1: Beta testing
The beta testing approach is used for software that is to be marketed.
It involves delivering the software to a number of potential customers who agree to use it and
report problems to the developers.
After this feedback, the software is modified and released again, either for another round of beta
testing or for general use.

N/B 2:
The five stages of testing are based on incremental system integration, i.e.
(unit testing - module testing - sub-system testing - system testing - acceptance testing). Object-
oriented development is different, however, and the levels are less distinct:
operations and data are combined to form objects (the units);
integrated objects form classes (the equivalent of modules);
therefore class testing is sometimes called cluster testing.


TEST PLANNING
Test planning sets out standards for the testing process rather than describing product tests.
Test plans allow developers to get an overall picture of the system tests, and ensure that the
hardware, software and other resources required are available to the testing team.
Components of a test plan:
Testing process
This is a description of the major phases of the testing process.
Requirement traceability
This is a plan to test all requirements individually.
Testing schedule
This includes the overall testing schedule and resource allocation.
Test recording procedures
This is the systematic recording of test results.
Hardware and software requirements.
Here you set out the software tools required and hardware utilization.
Constraints
This involves anticipating difficulties or drawbacks affecting testing, e.g. staff shortages
should be anticipated here.

N/B - Test plan should be revised regularly.

TESTING STRATEGIES
This is the general approach to the testing process.
There are different strategies depending on the type of system to be tested and development process
used: -
Top-down testing
This involves testing from most abstract component downwards.
Bottom-up testing
This involves testing from fundamental components upwards.
Thread testing
This is testing for systems with multiple processes where the processing of transactions
threads through these processes.
Stress testing
This relies on stressing the system by going beyond the specified limits therefore
testing on how well it can cope with overload situations.
Back-to-back testing
This is used to test different versions of a system and compare their outputs.

N/B- Large systems are usually tested using a mixture of strategies.

Top-down testing
This tests the high levels of a system before testing its detailed components. The program is
represented as a single abstract component, with its sub-components represented by stubs.
Stubs have the same interface as the component they replace, but limited functionality.
After the top-level component (the system program) is tested, its sub-components (sub-systems)
are implemented and tested in the same way, and so on down to the bottom level (the units).
If top-down testing is used:
- Unnoticed (structural) errors may be detected early
- Validation is done early in the process.

Disadvantages of Using Top-down Testing
1. It is difficult to implement because:
Stubs are required to simulate the lower levels of the system, and for complex components it is
impractical to produce a stub that simulates them correctly.
It may require knowledge of internal representations (e.g. pointers).
2. Test output is difficult to observe. Some higher levels (e.g. classes) do not generate output and
therefore must be forced to do so, which means creating an artificial environment to generate
test results.
N/B: Top-down testing is therefore not well suited to object-oriented systems, although individual
classes may be tested this way.
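The role of a stub can be sketched as follows (all names below are hypothetical): the stub presents the same interface as the unimplemented lower-level component but returns a fixed, plausible result, so the higher-level component can be tested before the real component exists.

```python
# The real tax_rate component is complex and not yet implemented
# during top-down testing; this stub stands in for it.
def tax_rate_stub(income):
    # Same interface as the real component, limited functionality:
    # a fixed, plausible rate is returned for every input.
    return 0.2

def net_income(gross, rate_fn):
    # High-level component under test; the stub is injected in place
    # of the real lower-level component.
    return gross * (1 - rate_fn(gross))

assert net_income(1000, tax_rate_stub) == 800.0
```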


Bottom-up testing
This is the opposite of top-down testing: modules at the lower levels of the hierarchy are tested
first, working up to the final level.
The advantages of bottom-up testing are the disadvantages of top-down testing, and vice versa:
1. Architectural faults are unlikely to be discovered until much of the system has been tested.

2. It is appropriate for object oriented systems because individual objects can be tested using their
own test drivers, then integrated and collectively tested.
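In bottom-up testing, the test driver plays the opposite role to the stub: it exercises a low-level component before the higher levels that will eventually call it exist. A minimal hypothetical sketch:

```python
# Low-level component, already implemented.
def discount(price, percent):
    return price - price * percent / 100

def driver():
    # Test driver: calls the low-level component with known inputs
    # and checks the outputs, standing in for the absent upper levels.
    cases = [((200, 10), 180.0), ((50, 0), 50.0), ((80, 50), 40.0)]
    for (price, pct), expected in cases:
        assert discount(price, pct) == expected
    return "all driver checks passed"

print(driver())
```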


Thread testing (transaction-flow testing - Beizer, 1990)
This is for testing real-time systems.
It is an event-based approach where tests are based on the events which trigger system actions.
It may be used after objects have been individually tested and integrated into sub-systems.
The processing of each external event threads its way through the system processes or
objects, with processing carried out at each stage.
Thread testing involves identifying and executing each possible processing thread.
The system should be analysed to identify as many threads as possible.
After each thread has been tested with a single event, the processing of multiple events of the
same type should be tested without events of any other type (multiple-input thread testing).
After multiple-input thread testing, the system is tested for its reaction to more than one class of
simultaneous event, i.e. multiple thread testing.
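The stages above can be sketched under stated assumptions (the two-stage event pipeline below is hypothetical): a single event is threaded through the stages first, then multiple events of the same type.

```python
# Minimal event-driven system: each external event threads its way
# through two processing stages, validate then record.
def validate(event):
    return event if event.get("amount", 0) > 0 else None

def record(event, ledger):
    ledger.append(event["amount"])

def process(event, ledger):
    # One processing thread: external event -> validate -> record.
    checked = validate(event)
    if checked is not None:
        record(checked, ledger)

ledger = []
# Single-event thread test.
process({"amount": 10}, ledger)
assert ledger == [10]
# Multiple events of the same type (multiple-input thread testing).
for amount in (5, 7):
    process({"amount": amount}, ledger)
assert ledger == [10, 5, 7]
```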



SOFTWARE MAINTENANCE
Definition 1 - Maintenance is the process of changing a system after it has been delivered and is
in use.
Simple maintenance - correcting coding errors.
Extensive maintenance - correcting design errors.
Enhancement - correcting specification errors or accommodating new requirements.

Definition 2 - Maintenance is evolution, i.e. the process of changing a system to maintain its
ability to survive.
The maintenance stage of system development involves
a) correcting errors discovered after other stages of system development
b) improving implementation of the system units
c) enhancing system services as new requirements are perceived

Information is fed back to all previous development phases: errors and omissions in the original
software requirements are discovered, program and design errors are found, and the need for new
software functionality is identified.
TYPES OF SOFTWARE MAINTENANCE
The following are the different types of maintenance:

Corrective maintenance - This involves fixing discovered errors in the software (coding errors,
design errors, requirement errors). Once the software is implemented and in full operation, it is
examined to see whether it has met the objectives set out in the original specifications. Unforeseen
problems may need to be overcome, and this may involve returning to earlier stages of the system
development life cycle to take corrective action.

Adaptive maintenance - This is changing the software to operate in a different environment
(operating system, hardware); it does not radically change the software's functionality. After
running the software for some time, the original environment for which it was developed, e.g.
the operating system and peripherals, may change.
At this stage the software is modified to accommodate the changes that have occurred in its
external environment. This could even call for a repeat of the system development life cycle.

Perfective maintenance - Implementing new functional or non-functional system requirements,
generated by software customers as their organisation or business changes. Also, as the software
is used, the users will recognise additional functions that would provide benefits or enhance the
software if added to it.

Preventive maintenance - Making changes to the software to prevent possible future problems or
difficulties (collapse, slowdown, stalling, latent defects such as the Y2K problem).

The operation stage involves:
use of documentation to train users of the system and its resources
system configuration
repairs and maintenance
safety precautions
data control
training users to get help on the system.

Maintenance costs (fixing bugs) are usually higher than the original development costs because:
I. The program being maintained may be old and not consistent with modern software engineering
techniques. It may be unstructured, and optimised for efficiency rather than
understandability.
II. Changes made may introduce new faults, which trigger further change requests. This is
mainly because the complexity of the system may make it difficult to assess the effects of a change.
III. Changes made tend to degrade the system structure, making it harder to understand and to
make further changes (the program becomes less cohesive).
IV. The program's links to its associated documentation may be lost, making the documentation
an unreliable description of the code and creating the need for new documentation.

Factors affecting maintenance
Module independence - Use of design methods that allow easy change through concepts such as
functional independence or object classes (where one can be maintained independently)
Quality of documentation - A program is easier to understand when supported by clear and
concise documentation.
Programming language and style - Use of a high-level language and adoption of a consistent
style throughout the code.
Program validation and testing - Comprehensive validation of system design and program
testing will reduce corrective maintenance.
Configuration management - Ensures that all system documentation is kept consistent
throughout the various releases of the system (documentation of new editions).
Understanding of current system and staff availability - Original development staff may not
always be available. Undocumented code can be difficult to understand (team management).
Application domain - Clear and understood requirements.
Hardware stability - concerns whether the equipment is fault-tolerant.
Dependence of the program on its external environment.


SOFTWARE MAINTENANCE PROCESS

The maintenance process is triggered by:
(a) A set of change requests from users, management or customers.
(b) The cost and impact of the changes are assessed and, if acceptable,
(c) A new release is planned involving the maintenance elements (adaptive, corrective, perfective).

NB. The changes are implemented and validated, and a new version of the system is released.


SOFTWARE CONFIGURATION MANAGEMENT
A software configuration is the collection of items that comprise all the information produced as
part of the software process. The output of the software process is information, and it includes
computer programs, documentation, and data (both internal and external to the programs).

Software Configuration Management
Definition 1
This is the set of activities developed to manage change throughout the life cycle of the
software.
Changes are caused by the following:
New customer needs
New business / market conditions and rules
Budgetary or scheduling constraints
Reorganisation or restructuring of the business for growth

Definition 2
The process which controls the changes made to a system, and manages the different versions of
the evolving software product. It involves the development and application of procedures and
standards for managing an evolving system product. Procedures should be developed for building
systems and releasing them to customers.
Standards should be developed for recording and processing proposed system changes and for
identifying and storing different versions of the system.
Configuration managers (the CM team) are responsible for controlling software changes.
Controlled systems are called baselines; they are the starting point for controlled evolution.

Software may exist in different configurations (versions).
produced for different computers (hardware)
produced for different operating system
produced for different client-specific functions etc

Configuration managers are responsible for keeping track of the differences between software
versions and ensuring that new versions are derived in a controlled way.
They are also responsible for ensuring that new versions are released to the correct customers at
the appropriate time.

Configuration management and its associated documentation should be based on a set of
standards, which should be published in a configuration management handbook (or quality
handbook), e.g. IEEE Standard 828-1983, the standard for software configuration management plans.


Main configuration managements activities:
1. Configuration management planning (planning for product evolution)
2. Managing changes to the systems
3. Controlling versions and releases (of systems)
4. Building systems from other components

Benefits of effective configuration management

Better communication among staff
Better communication with the customer
Better technical intelligence
Reduced confusion for changes
Screening of frivolous changes
Provides a paper trail

Configuration Management Planning
Configuration management takes control of systems after they have been developed, so planning
the configuration management process must start during development.
The plan should be developed as part of the overall project planning process.
The plan should include:
(a) Definitions of what entities are to be managed and formal scheme for identifying these entities.
(b) Statement of configuration management team.
(c) Configuration management policies for change and version control / management.
(d) Description of the tools to be used in configuration management and the process to be used.
(e) Definition of the configuration database which will be used to record configuration
information. (Recording and retrieval of project information.)
(f) Description of management of external information
(g) Auditing procedures.

The configuration database is used to record all relevant information relating to configurations to:
a) Assist with assessing the impact of system changes
b) Provide management information about configuration management.
The configuration database defines/describes:
The customers who have taken delivery of a particular version
The hardware and operating system requirements needed to run a given version
The number of versions of the system made so far, and when they were made, etc
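One way such a configuration database record might be sketched (the fields and sample data below are hypothetical, not taken from any real CM tool):

```python
from dataclasses import dataclass, field

@dataclass
class VersionRecord:
    # One hypothetical configuration-database entry per released version.
    version: str
    date: str
    os_required: str
    hardware: str
    customers: list = field(default_factory=list)

db = [
    VersionRecord("2.1", "2003-05-10", "Windows NT", "x86", ["ACME Ltd"]),
    VersionRecord("2.2", "2004-01-22", "Windows XP", "x86", []),
]

# Query: which customers have taken delivery of version 2.1?
holders = [c for rec in db if rec.version == "2.1" for c in rec.customers]
assert holders == ["ACME Ltd"]
```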





CONFIGURATION MANAGEMENT TOOLS
Configuration Management (CM) is a procedural process, so it can be modelled and integrated
with a version management (VM) system.

Configuration management processes are standardised and involve applying pre-defined
procedures in order to manage large amounts of data.
Configuration Management tools
Form editor - to support processing the change request forms
Workflow system - to define who does what and to automate information transfer
Change database - that manages change proposals and is linked to a VM system
Version and release identification - the system assigns identifiers automatically when a new
version is submitted
Storage management - the system stores the differences between versions rather than the full
code of every version
Change history recording - records the reasons for version creation
Independent development - only one version at a time may be checked out for change, or
parallel working on different versions is supported
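The storage-management idea of keeping only the differences between versions can be sketched with Python's standard difflib module (the two code versions shown are hypothetical):

```python
import difflib

# Two hypothetical versions of a source file, as lists of lines.
old = ["def area(r):", "    return 3.14 * r * r", ""]
new = ["def area(r):", "    PI = 3.14159", "    return PI * r * r", ""]

# Store only the difference (delta) between versions, not the full
# text of each; the new version can be rebuilt from old + delta.
delta = list(difflib.unified_diff(old, new, lineterm=""))

# The delta records the added lines, which is all that needs keeping.
assert any(line.startswith("+    PI") for line in delta)
```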

CASE TOOLS FOR CONFIGURATION MANAGEMENT
Computer-Aided Software Engineering (CASE) tool support for CM is therefore essential; tools
are available ranging from stand-alone tools to integrated CM workbenches.
A CASE tool is a software package that supports the construction and maintenance of a logical
system specification model. CASE tools are:
Designed to support the rules and interactions of the models defined in a specific methodology
Also able to permit software prototyping and code generation
Aimed at automating the document production process by automating analysis and design
operations

ADVANTAGES OF CASE TOOLS
Make construction of the various analysis and design logical elements easy e.g. DFD,
ERM etc
Integration of separate elements allowing software to do additional tasks e.g. rechecking
and notifying on defined data and programs
Streamline the development of the analysis documentation allowing for use of graphics
and manipulation of the data dictionaries
Allow for easy maintenance of specifications which in turn will be more reliably updated
Enforce rigorous standards for all developers and projects making communication more
efficient
Check specifications for errors, omissions and inconsistencies
Provide everyone on the project team with easy access to the latest updates and project
specifications
Encourage iterative refinement, resulting in a higher quality system that better meets the
needs of the users

DISADVANTAGES
CASE products can be expensive
CASE technology is not yet fully evolved, so the software is often large and inflexible
Products may not provide a fully integrated development environment
There is usually a long learning period before the tools can be used effectively, i.e. no
immediate benefits are realised

Analysts must have a mastery of the structured analysis and design techniques if they are to
exploit CASE tools
Time and cost estimates may have to be inflated to allow for an extended learning period of
CASE tools

PROGRAM EVOLUTION DYNAMICS
Program evolution dynamics is the study of system change; Lehman's Laws (Lehman and Belady,
1985) describe system change.
The Laws are:
a) Law of continuing change - A program used in a real-world environment must change, or it
will become progressively less useful in that environment.
b) Law of increasing complexity - Program changes make its structure more complex, so extra
resources must be devoted to preserving and simplifying the structure.
c) Law of large program evolution - Program evolution is a self-regulating process.
d) Law of organisational stability - Over a program's lifetime, its rate of development is
approximately constant and independent of the resources devoted to system development.
e) Law of conservation of familiarity - Over the system's lifetime, the incremental change in
each release is approximately constant.



SOFTWARE ENGINEERING DOCUMENTATION
Documentation is a very important aid to maintenance engineers.
Definition - It includes all the documents describing the implementation of the system, from the
requirements specification to the final test plan.
The documents include:
Requirement documents and an associated rationale
System architecture documents
Design description
Program source code
Validation documents on how validation is done
Maintenance guide for possible /known problems.

Documentation should be
Clear and non-ambiguous
Structured and directive
Readable and presentable
Tool-assisted (case tools) in production (automation).

SYSTEM DOCUMENTATION
Items for documentation to be produced for a software product include:-
System Request - this is a written request that identifies deficiencies in the current system and
requests a change

Feasibility Report - this indicates the economic, legal, technical and operational feasibility of
the proposed project
Preliminary Investigation Report - this is a report to the management clearly specifying the
problems identified within the system; further action to be taken is also recommended
System Requirements Report - this specifies all the end-user and management requirements,
all the alternative plans, their costs, and the recommendations to the management
System Design Specification - it contains the designs for the inputs, outputs, program files and
procedures
User Manual - it guides the user in the implementation and installation of the information
system
Maintenance Report - a record of the maintenance tasks done
Software Code - this refers to the code written for the information system
Test Report - this should contain test details, e.g. sample test data and results
Tutorials - a brief demonstration and exercise to introduce the user to the working of the
software product

SOFTWARE DOCUMENTATION
The typical items included in the software documentation are
Introduction - shows the organization's principles, abstracts for the other sections and a
notation guide
Computer characteristics - a general description with particular attention to key attributes and
summarized features
Hardware interfaces - a concise description of the information received or transmitted by the
computer
Software functions - shows what the software must do to meet requirements, in various
situations and in response to various events
Timing constraints - how often and how fast each function must be performed
Accuracy constraints - how close output values must be to the ideal or expected values for
them to be acceptable
Response to undesired events - what the software must do in events such as a sensor going
down or invalid data being received
Program sub-sets - what the program should do if it cannot do everything
Fundamental assumptions - the characteristics of the program that will stay the same, no
matter what changes are made
Changes - the types of changes that have been made or are expected
Sources - an annotated list of documentation and personnel, indicating the types of questions
each can answer
Glossary - most documentation is fraught with acronyms and technical terms, so these should
be defined






9. SOFTWARE QUALITY EVALUATION

This concerns identifying key issues or measures that should show where a program is deficient.
Managers must decide on the relative importance of:
On-time delivery of the software product
Efficient use of resources e.g. processing units, memory, peripheral devices etc
Maintainable code issues e.g. comprehensibility, modifiability, portability etc

Problem areas cited in software production include:
1. User demands for enhancements, extensions
2. Quality of system documentation
3. Competing demands on maintenance personnel time
4. Quality of original programs
5. Meeting scheduled commitments
6. Lack of user understanding of system
7. Availability of maintenance program personnel
8. Adequacy of system design specifications
9. Turnover of maintenance personnel
10. Unrealistic user expectations
11. Processing time of system
12. Forecasting personnel requirements
13. Skills of maintenance personnel
14. Changes to hardware and software
15. Budgetary pressures
16. Adherence to programming standards in maintenance
17. Data integrity
18. Motivation of maintenance personnel
19. Application failures
20. Maintenance programming productivity
21. Hardware and software reliability
22. Storage requirements
23. Management support of system
24. Lack of user interest in system

Quality Assurance
A quality management system sets out:
- the relevant procedures and standards to be followed
- the Quality Assurance assessments to be carried out

Definition - Quality assurance comprises the controls which ensure that:
- the relevant procedures and standards are followed
- the relevant deliverables are produced


Software quality assurance techniques:

The quality and reliability of software can be improved by using
A standard development methodology
Software metrics
Thorough testing procedures
Allocating resources to put more emphasis on the analysis and design stages of systems
development.

The specification of standards to be applied during development enforces the quality of the
products. The specifications should include the Quality Assurance (QA) standards to be adopted,
which should be one of the recognised standards or the client's specified ones, covering quality
attributes such as the following:

Correctness - ensures the system operates correctly and provides the value to its user and performs
the required functions therefore defects must be fixed/ corrected

Maintainability - is the ease with which the system can be corrected if an error is encountered,
adapted if its environment changes, or enhanced if the user desires a change in requirements

Integrity - is the measure of the system ability to withstand attacks (accidental or intentional) to
its security in terms of data processing, program performance and documentation

Usability - is the measure of user friendliness of a system as measured in terms of physical and
intellectual skills required to learn the system, the time required to become moderately efficient in
using it, the net increase in productivity if used by moderately efficient user, and the general user
attitude towards the system.

System quality can be looked at in two ways:
Quality of design - the characteristics that the designers specify for an item/product: the grade
of the materials, tolerances and performance specifications
Quality of conformance - the degree to which the design specifications are followed during
development and construction (implementation)

QUALITY ASSURANCE
Since quality should be measurable, quality assurance needs to be put in place.
Quality Assurance consists of the auditing and reporting functions of management.
Quality Assurance must outline the standards to be adopted, i.e. either internationally recognized
standards or client-specified standards.
Quality Assurance must lay down the working procedures to be adopted during the project lifetime,
which include: -

Design and Program reviews
Program monitoring and reporting
Quality Assurance related procedure
Test procedure and Fault reporting
Delivery and Liaison mechanisms

Safety aspects and Resource usage

The Quality Assurance system should be managed independently of the development and production
departments, and clients should have the right to access the contractor's Quality Assurance System
and Plan.

Quality Assurance builds client confidence (increases acceptability) as well as the contractor's own
confidence, in knowing that they are building the right system and that it will be highly acceptable.

Testing and error correction assure that the system will perform as expected without defects or
collapse, and also ensure accuracy and reliability.
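As a minimal sketch of the testing idea above, a unit test compares a function's actual output against its expected output; a mismatch signals a defect to be corrected before delivery. The function and figures below are hypothetical.

```python
# Minimal sketch of unit testing: verify a function behaves as specified.
# The function, VAT rate and figures are hypothetical examples.

def compute_invoice_total(prices, vat_rate=0.16):
    """Sum item prices and add VAT, rounded to 2 decimal places."""
    subtotal = sum(prices)
    return round(subtotal * (1 + vat_rate), 2)

# Each assertion states an expected output; a failure reveals a defect.
assert compute_invoice_total([100.0, 50.0]) == 174.0
assert compute_invoice_total([]) == 0.0
print("all tests passed")
```

In practice such checks are collected into a test suite (e.g. with a unit-testing framework) and re-run after every change, so that error correction does not silently break behaviour that already worked.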
POOR QUALITY SYSTEM
- High cost of maintenance and correcting errors (unnecessary maintenance)
- Low productivity due to poor performance
- Unreliability in terms of functionality
- Risk of injury in safety-critical systems (e.g. robots)
- Loss of business due to errors
- Lack of confidence in the developers by clients.


PROFESSIONAL ISSUES IN SYSTEMS DEVELOPMENT

System development is a profession and belongs to the engineering discipline, which employs
scientific methods in solving problems and providing solutions to society.

A profession is an employment (not merely mechanical) that requires some degree of learning; a
calling or habitual employment. A profession is also the collective body of persons engaged in such a calling.

The main professional task in system development is the management of tasks, with the aim of
producing a system that meets user needs, on time and within budget.

The main concerns of management are therefore: - Planning, Progress monitoring and Quality
control
A number of tasks are carried out in an engineering organization; they are classified by their
function: -
Production - activities that directly contribute to creating the products and services the organization
sells
Quality management - activities necessary to ensure the quality of products/services is maintained
at the agreed level
Research and development - finding ways of creating/improving products and the production process
Sales and Marketing - selling products/services; involves activities such as advertising,
transporting, distribution etc.


INDIVIDUAL PROFESSIONAL RESPONSIBILITIES
Do not harm others - ethical behaviour is concerned both with helping clients satisfy their needs
and with not hurting them
Be competent - IT professionals must master the complex body of knowledge in their profession; a
challenging issue because IT is a dynamic and rapidly evolving field. Wrong advice to the client
can be costly
Maintain independence and avoid conflicts of interest - in exercising their professional
duties, they should be free from the influence, guidance or control of other parties e.g. vendors, and
thus avoid corruption and fraud
Match clients' expectations - it is unethical to misrepresent either your qualifications or your ability
to perform a certain job
Maintain fiduciary responsibility - IT professionals are to hold in trust information provided to them
Safeguard client and source privacy - ensure the privacy of all private and personal information and
do not leak it
Protect records - safeguard the records they generate and keep on business transactions with their
clients
Safeguard intellectual property - they are trustees of information and software and hence must
recognize that these are intellectual property that must be safeguarded
Provide quality information - the creator of information/products must disclose information
about the quality and even the source of information in a report or product record
Avoid selection bias - IT professionals routinely make selection decisions at various stages of
the information life cycle. They must avoid the bias of the prevailing point of view; selection is
related to censorship
Be a steward of a client's assets, energy and attention - provide information at the right time,
right place and at the right cost
Manage gate-keeping and censorship, and obtain informed consent
Obtain confidential information and keep client confidentiality
Abide by laws, contracts and license agreements; exercise professional judgement
