An Introduction to Software
Engineering
1.0 INTRODUCTION
We are all aware that software has, in its own way, become part and parcel of our daily life.
It has become the key element in the evolution of computer-based systems and products.
Today, software takes on a dual role. It is a product, and at the same time, the vehicle for
delivering a product. As a product, it delivers the computing potential embodied by computer hardware.
Whether it resides within a food processor or a washing machine or a cellular phone or operates inside a
computer, software is an information transformer producing, managing, acquiring, modifying, displaying
or transmitting information that can be as simple as a bit or as complex as an image. As a vehicle used to
deliver a product, software acts as the basis for the control of the computer (operating systems), the
communication of information (networks), and the creation and control of other programs (software tools
and environments). Today, software works both explicitly and behind the scenes in virtually all
aspects of our lives, making them more comfortable and effective. For this reason, software engineering is
more important than ever. Good software engineering practice must ensure that software makes a
positive contribution to our lives.
1.1 OBJECTIVES
At the end of this chapter you should be able to answer these questions:
• What is software? What are its important characteristics, components and applications?
• What is software engineering?
• Is it possible to have a generic view of software engineering?
1.2 SOFTWARE
Software is a set of instructions or computer programs that when executed provide desired function
and performance. It is both a process and a product. To gain an understanding of software, it is important
to examine the characteristics of software, which differ considerably from those of hardware.
exhibiting strengths and weaknesses, but all having a series of generic phases in common. In the
sub-sections that follow, let us see some of the important software process models.
[Figure: Waterfall-style phases. Detailed design, coding, and testing and integration are each followed by verification; outputs include the detailed design document, programs, and the test plan, test report and manuals.]
very attractive for the customers, since they get an actual feel of the system that they intend to use. This
model is also very well suited for complicated and large systems for which there is no manual process or
existing system that can be used to determine the requirements. The process model of this approach is
shown in figure 1.2.
[Figure 1.2: The prototyping model. Requirements analysis is followed by repeated prototype design, code and test cycles before the final design, code and test phases.]
The development of the prototype starts when the preliminary version of the requirements specification
document has been developed. At this stage, there is a reasonable understanding of the system and its
needs. After the prototype has been developed, the clients are permitted to use it.
Based on the clients' feedback, the developer incorporates the suggested changes and gives
the modified system back for the clients' use. This cycle is repeated until the clients have no further
modifications, at which stage the final requirements specification is ready for subsequent processes like
design, coding and testing.
The important characteristics of this model are:
• For prototyping to be feasible, the cost of the requirements analysis must be kept low
• The development approach followed is quick and dirty; the focus is on quicker development
rather than on quality
• Only minimal documentation is required, because it is a throwaway prototype
• This model is very useful in projects where requirements are not properly understood at the
beginning
• It is an excellent method for reducing some types of risks involved in a project
8 Chapter 1 - An Introduction to Software Engineering
[Figure: The incremental model. After system engineering, increments 1, 2 and 3 each pass through analysis, design, code and test, delivering the 1st, 2nd and 3rd increments over calendar time.]
1.5 SUMMARY
Software is a set of instructions or computer programs that when executed provide desired function
and performance. It is both a process and a product. Software engineering is a discipline that integrates
process, methods, and tools for the development of computer software. A number of process models for
software engineering have been proposed, each with its own strengths and weaknesses. You have learnt
some of the important aspects of software, and of the methods for engineering it, in this unit. You will
read more about the generic phases involved in the development of software in the subsequent chapters.
BSIT 44 Software Engineering 11
Answers
1. instructions or computer programs
2. product
3. discipline
4. requirements analysis
5. adaptive maintenance
6. linear order
7. quick and dirty
8. incremental
1.8 REFERENCES
1. Roger S. Pressman, Software Engineering: A Practitioner's Approach (Fourth Edition),
McGraw-Hill, 1997.
2. Pankaj Jalote, An Integrated Approach to Software Engineering (Second Edition),
Narosa Publishing House.
Chapter 2
System Analysis and Requirements Specification
2.0 INTRODUCTION
In the previous chapter, the stages of system development and some of the important process
models were discussed. Each process model of software development includes a set of
activities aimed at capturing requirements: understanding what the customers and users expect the
system to do. Thus, understanding the system is through what is known as requirements capturing and
analysis. In this chapter, we will see some of the ways of analyzing the problem and explore the
characteristics of requirements. We shall also see how to document the requirements for use by the
design and test teams.
2.1 OBJECTIVES
At the end of this chapter you should be able to
• It decomposes the problem into component parts. The simple act of writing down software
requirements in a well-designed format organizes information, places borders around the
problem, solidifies ideas, and helps break down the problem into its component parts in an
orderly fashion.
• It serves as an input to the design specification. As mentioned previously, the SRS serves as
the parent document to subsequent documents, such as the software design specification
and statement of work. Therefore, the SRS must contain sufficient detail in the functional
system requirements so that a design solution can be devised.
• It serves as a product validation check. The SRS also serves as the parent document for
testing and validation strategies that will be applied to the requirements for verification.
[Figure: The requirements-analysis process. Problem analysis and problem description are followed by prototyping and testing, documentation and validation.]
Informal approach
This approach has no defined methodology. Information about the system is obtained by interaction
with the client and end users, study of the existing system, etc. It uses conceptual modeling: the
problem and the system model are built in the minds of the analysts and directly translated into the SRS.
Once the initial draft of the SRS is ready, it may be used in further meetings.
Structured analysis
This method focuses on the functions performed in the problem domain and the data input and output
by these functions. It is a top-down refinement process that helps an analyst decide the type of
information to be obtained at various stages of analysis. The technique mainly depends on two
data representation methods: the data flow diagram (DFD) and the data dictionary.
Data Store
A repository of information. In the physical model, this represents a file, table, etc. In the logical model,
a data store is an object or entity.
Symbol: Two parallel lines (Yourdon notation), or an open-ended rectangle (G&S notation)
Data Flows
The directional movement of data to and from external entities, processes and data stores. In the
physical model, a flow into a data store means a write, update or delete; a flow out of a data store
means a read, query, display or select type of transaction.
Symbol: Solid line with arrow. Each data flow is identified with a descriptive name that represents the
information (data packet) on the data flow.
An example of a DFD for an employee payment system is shown in figure 2.2.
Data Dictionary
The data dictionary is a repository of various data flows defined in a DFD. It states the structure of
each data flow in a DFD. The components in the structure of a data flow may also be specified in the
data dictionary, as well as the structure of files.
The notations used to define the data structure are:
+ (plus) represents sequence or composition,
| (vertical bar) represents selection, i.e. one OR the other, and
* represents repetition, i.e. one or more occurrences.
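As an illustration, the flows of a payroll DFD might be defined with this notation (the entries below are hypothetical, not taken from the text):

```
weekly_pay_record = employee_id + employee_name + regular_pay + overtime_pay
payment_mode      = cheque | bank_transfer
employee_file     = {weekly_pay_record}*
```

Here a weekly pay record is a sequence of four components, a payment mode is one of two alternatives, and the employee file is a repetition of pay records.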
Prototyping
In this method of problem analysis, a partial system is constructed, to be used by the client and the
developers to gain a better understanding of the problem and the needs. There are two approaches to
this method: throwaway and evolutionary prototyping.
Throwaway prototyping
In this approach, the prototype is constructed with the idea that it will be discarded after the completion
of analysis, and the final system will be built from scratch.
Evolutionary prototyping
In this approach, the prototype is built with an idea that it will later be converted into the final system.
For prototyping to be feasible as a requirements-analysis technique, its cost must be kept low; the
cost of developing and running a prototype can be around 10% of the total development cost of the
software.
[Figure 2.2: DFD for an employee payment system. Processes get the employee record and rates from the weekly employee pay file and company records, compute regular pay, overtime pay, total pay and net pay, deduct taxes using the tax rates, and issue a paycheck to the worker.]
The following figure 2.3 gives the data dictionary associated with the DFD of an employee payment
system.
Modifiable: The logical, hierarchical structure of the SRS should facilitate any necessary
modifications (grouping related issues together and separating them from
unrelated issues makes the SRS easier to modify).
Ranked: Individual requirements of an SRS are hierarchically arranged according to
stability, security, perceived ease/difficulty of implementation, or another parameter
that helps in the design of that and subsequent documents.
Testable: An SRS must be stated in such a manner that unambiguous assessment criteria
(pass/fail or some quantitative measure) can be derived from the SRS itself.
Traceable: Each requirement in an SRS must be uniquely identified to a source (use case,
government requirement, industry standard, etc.).
Unambiguous: An SRS must contain requirements statements that can be interpreted in one way
only. This is another area that creates significant problems for SRS development
because of the use of natural language.
Valid: A valid SRS is one in which all parties and project participants can understand,
analyze, accept, or approve it. This is one of the main reasons SRSs are written
using natural language.
Verifiable: A verifiable SRS is consistent from one level of abstraction to another. Most
attributes of a specification are subjective, and a conclusive assessment of quality
requires a technical review by domain experts. Using indicators of strength and
weakness provides some evidence that preferred attributes are or are not present.
What makes an SRS good? How do we know when we've written a quality specification? The
most obvious answer is that a quality specification is one that fully addresses all the customer requirements
for a particular product or system. While many quality attributes of an SRS are subjective, we do need
indicators or measures that provide a sense of how strong or weak the language is in an SRS. A strong
SRS is one in which the requirements are tightly, unambiguously, and precisely defined in such a way that
leaves no other interpretation of, or meaning to, any individual requirement.
1. Interfaces
2. Functional Capabilities
3. Performance Levels
4. Data Structures/Elements
5. Safety
6. Reliability
7. Security/Privacy
8. Quality
9. Constraints and Limitations
But, how do these general topics translate into an SRS document? What, specifically, does an SRS
document include? How is it structured? And how do you get started?
An SRS document typically includes four ingredients, as given below:
1. A template
2. A method for identifying requirements and linking sources
3. Business operation rules
4. A traceability matrix
The first and biggest step in writing an SRS is to select an existing template that you can fine-tune for
your organizational needs (if you don't have one already). There's no standard specification template
for all projects in all industries, because the individual requirements that populate an SRS are unique not
only from company to company, but also from project to project within any one company. The key is to
select an existing template or specification to begin with, and then adapt it to meet one's needs.
Structured English
Natural languages have been widely used for specifying requirements, since they have the advantage
of being easily understood by both the client and the developer. The use of natural language, however,
has some problems associated with it. While requirements for smaller systems may be conveyed verbally,
written requirements become necessary as the system becomes more and more complex; yet natural-language
requirements tend to be imprecise and ambiguous. So analysts are making an effort
to move from natural languages towards formal languages for requirements specification. To reduce
these problems, natural language can also be used in a structured manner. If English is used, requirements
are broken into sections and paragraphs, and paragraphs further into subparagraphs.
Some insist on using words like "shall" and "should", and try to restrict the use of common vague phrases,
in order to improve precision and reduce ambiguity.
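A brief sketch of such structured requirements (the entries are hypothetical, written for the payroll example used earlier in this chapter):

```
3. Functional requirements
   3.1 Pay computation
       3.1.1 The system shall compute overtime pay at 1.5 times the regular
             rate for all hours worked above 40 per week.
       3.1.2 The system shall deduct taxes from the total pay to obtain the
             net pay.
```

Numbered subparagraphs and the consistent use of "shall" make each requirement individually identifiable and testable.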
Regular Expressions
Regular expressions can be used to specify the structure of symbol strings formally. They can be
considered a grammar for specifying the valid sequences in a language, and can be processed automatically.
Basic constructs such as atoms, composition, alternation and closure are used in regular expressions to
define many data streams.
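As a minimal sketch, these constructs map directly onto the regular-expression syntax of most programming languages. Here a hypothetical rule for valid employee IDs (the format is invented for illustration) is checked in Python:

```python
import re

# Hypothetical rule: an employee ID is two uppercase letters (atoms composed
# in sequence), a hyphen, then one or more digits (closure).
EMPLOYEE_ID = re.compile(r"^[A-Z][A-Z]-[0-9]+$")

def is_valid(candidate: str) -> bool:
    # match() anchors at the start of the string; the $ anchors the end
    return EMPLOYEE_ID.match(candidate) is not None
```

For example, is_valid("PL-2071") returns True, while is_valid("pl-2071") returns False.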
Decision Tables
A decision table is a formal, table-based notation that can be used to check qualities like completeness
and lack of ambiguity in a requirements specification. Decision tables are helpful in specifying complex
decision logic.
[Figure: Layout of a decision table. The top part lists conditions (C1, C2, C3) and the bottom part lists actions; each column pairs a combination of condition values (e.g. Y/N) with the actions to perform.]
The decision table has two parts. The top part specifies different conditions, and the bottom part
specifies different actions.
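The idea can be sketched in code: a mapping keyed by condition combinations makes completeness easy to check, since every combination must appear. The payroll conditions and actions below are hypothetical:

```python
# Conditions: C1 = worked more than 40 hours, C2 = is a contractor.
# Actions form the table's bottom part; each entry pairs one condition
# combination with exactly one action.
PAY_RULES = {
    (True,  False): "pay overtime rate",
    (True,  True):  "pay flat contract rate",
    (False, False): "pay regular rate",
    (False, True):  "pay flat contract rate",
}

def pay_action(over_40: bool, contractor: bool) -> str:
    return PAY_RULES[(over_40, contractor)]

# Completeness check: 2 conditions give 2**2 = 4 rules.
assert len(PAY_RULES) == 4
```

A missing combination would surface immediately as a KeyError, which is exactly the kind of incompleteness a decision table is meant to expose.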
24 Chapter 2 - System Analysis and Requirements Specification
2.6 SUMMARY
Requirements collection is crucial to the development of successful information systems. To achieve a
high level of IS quality, it is essential that the SRS be developed in a systematic and comprehensive way.
If this is done, the system will meet the users' needs and will lead to user satisfaction. If it is not done, the
software is likely not to meet the users' requirements, even if the software conforms to the specification
and has few defects. There is much more we could say about requirements and specifications; hopefully,
this information will help you get started when you are called upon, or step up, to help the development
team. Writing top-quality requirements specifications begins with a complete definition of customer
requirements. Coupled with a natural language that incorporates strength and weakness quality indicators,
not to mention the adoption of a good SRS template, technical communications professionals well trained
in requirements gathering, template design, and natural language use are in the best position to create and
add value to such critical project documentation. In the next chapter we will see how to transform the
analyzed requirements into a design.
Answers
1. requirements
2. Software Requirements Specification (SRS)
3. requirements definition and specification
4. data dictionary
5. rectangle
6. strong
7. Checklists
2.8 REFERENCES
1. Roger S. Pressman, Software Engineering: A Practitioner's Approach (Fourth Edition),
McGraw-Hill, 1997.
2. Pankaj Jalote, An Integrated Approach to Software Engineering (Second Edition),
Narosa Publishing House.
3. Ian Sommerville, Software Engineering (Fifth Edition), Addison-Wesley, 1996.
Chapter 3
System Design
3.0 INTRODUCTION
In the last chapter, we learned how to work with the customers to determine what they want out of
the proposed system. The outcome of the requirements analysis and specification phase is a system
requirements specification document. This serves two purposes: for the customer, it captures
their needs; for the designers, it explains the problem in technical terms. The next step in
development is to translate those desires into a solution: a design that will satisfy the customer's needs.
In this chapter we will see what to do and how to do it.
3.1 OBJECTIVES
At the end of this chapter you should be able to
• know the process of designing software
• describe the different types of design
• document the design specifications
• say what coupling is and name the different types of coupling
• say what cohesion is and mention the different types of cohesion
• appreciate the importance of various design notations
• give the main objective of transform analysis and transaction analysis
• give the necessary steps involved in transform analysis as well as transaction analysis
System design
In this step, the modules that are needed for the system are decided, along with their specifications
and how these modules need to be connected.
Detailed design
In this step, the internal design of the modules is decided, i.e., how the specifications of the modules
can be satisfied. Detailed design essentially expands the system design to contain a more
detailed description of the processing logic and data structures, so that the design is sufficiently complete
for coding.
A design can be object-oriented or function-oriented. In function-oriented design, the design consists
of module definitions, with each module supporting a functional abstraction. In object-oriented design, the
modules in the design represent data abstraction.
• The design should not reinvent the wheel: design time should be used for expressing new
ideas and integrating existing design patterns, rather than for reinvention.
• The design should minimize the intellectual distance between the software and the
problem as it exists in the real world: the structure of the software design should reflect the
structure of the problem domain.
• The design should exhibit uniformity and integration: a design is uniform if it appears that
one person developed the entire thing. To achieve this, the design rules, format, style,
etc. have to be defined for the design team before design work begins. If the interfaces are
well defined for the design components, then the design is said to be integrated.
• The design should be structured to accommodate change
• The design should be structured to degrade gently, even when aberrant data, events, or
operating conditions are encountered
• Design is not coding, and coding is not design: the design model has a higher level of abstraction
than the source code. Major decisions are made in the design phase, and only small decisions are
taken in the implementation phase.
• The design should be assessed for quality as it is being created, not after the fact: a number
of design concepts and design measures are available and can be used to assess the quality of
the software.
• The design should be reviewed to minimize conceptual errors: major semantic errors like
omissions, ambiguity and inconsistency have to be addressed by the designer before dealing
with the syntax of the design model.
When the design principles described above are properly applied, the software engineer creates a
design that exhibits both internal and external quality factors. External quality factors are properties
of the software, such as speed, reliability and correctness, that can be readily observed by the users.
Internal quality factors are those that lead to a high-quality design. To achieve the internal quality
factors, the designer must understand the basic design concepts.
3.4.1 Abstraction
Abstraction is a means of describing a program function at an appropriate level of detail. It deals with
problems at some level of generalization, without regard to irrelevant low-level details. At the highest level
of abstraction, a solution is stated in broad terms using the language of the problem environment. At the
lower levels of abstraction, more procedural detail is given.
There are three important levels of abstraction
• Procedural abstraction :
A procedural or functional abstraction is a named sequence of
instructions that has a specific and limited function.
• Data abstraction :
A data abstraction is a named collection of data that describes
abstract data types, objects, operations on objects by suppressing
the representation and manipulation details.
Consider the example sentence "open the door". Here the word "open" is an example of procedural
abstraction; it implies a long sequence of procedural steps: walk to the door, hold the door-knob,
turn the knob and pull the door, move away from the door. The word "door" is an example of data
abstraction; it has certain attributes, like dimensions, weight and door type, that describe the door.
• Control abstraction :
It implies a program control mechanism without specifying internal
details. That is, stating the desired effect without stating the exact
mechanism of control; examples include co-routines and exception handling.
Example: ON interrupt DO
save STACK_A and call Exception_handler_a;
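The door example above can also be sketched in code (a hypothetical Python class, used only to illustrate the first two levels of abstraction):

```python
class Door:
    """Data abstraction: a named collection of attributes and operations,
    with representation details suppressed behind methods."""

    def __init__(self, width_cm: int, weight_kg: float):
        self._width_cm = width_cm    # representation detail, kept private
        self._weight_kg = weight_kg
        self._is_open = False

    def open(self) -> None:
        # Procedural abstraction: "open" names a whole sequence of steps
        # (walk to the door, hold the knob, turn, pull) without exposing them.
        self._is_open = True

    def is_open(self) -> bool:
        return self._is_open
```

Callers work with the named operation and attributes only; the representation and manipulation details remain hidden.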
diagram (as shown in figure 3.1) is the most common structure followed, and represents a hierarchy of
modules. Terms like depth, width, fan-in and fan-out are usually used in describing and measuring
the program structure.
Depth refers to the number of levels of control.
Width refers to the overall span of control.
Fan-out is a measure of the number of modules that are directly controlled by another module.
Fan-in indicates how many modules directly control a given module.
The relationship between modules is said to be either super-ordinate or subordinate. A module that
controls another module is said to be super-ordinate to it. And a module controlled by another is said to be
subordinate to the controller.
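These measures can be computed directly from a module hierarchy. The sketch below uses a hypothetical call graph in Python, not the exact modules of figure 3.1:

```python
# Maps each module to the modules it directly controls (its subordinates).
CALLS = {
    "M": ["a", "b", "c"],
    "a": ["d", "e"],
    "b": ["e"],
    "c": ["e", "f"],
}

def fan_out(module: str) -> int:
    """Number of modules directly controlled by `module`."""
    return len(CALLS.get(module, []))

def fan_in(module: str) -> int:
    """Number of modules that directly control `module`."""
    return sum(module in subs for subs in CALLS.values())
```

Here fan_out("M") is 3, and fan_in("e") is 3 because a, b and c are all super-ordinate to e.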
[Figure 3.1: Program structure. A tree of modules rooted at M, illustrating depth (number of levels of control), width (overall span of control), fan-out (e.g. M directly controls a, b and c) and fan-in (several modules directly controlling the same subordinate).]
problem-related information. Classic data structures include scalar, sequential, linked-list, n-dimensional,
and hierarchical. Data structure, along with program structure, makes up the software architecture.
3.4.6 Modularity
A module is a named entity that:
1. Contains instructions, processing logic, and data structures.
2. Can be separately compiled and stored in a library.
3. Can be included in a program.
4. Can be used by invoking its name with some parameters.
5. Can use other modules.
Modularity derives from the architecture. Modularity is a logical partitioning of the software design
that allows complex software to be manageable for purposes of implementation and maintenance. The
logic of partitioning may be based on related functions, implementation considerations, data links, or other
criteria. Modularity does imply interface overhead related to information exchange between modules and
execution of modules. There are five important criteria for defining an effective modular system that
enable us to evaluate a design method:
1. Modular decomposability
If a design method provides a systematic way of decomposing a problem into sub-problems, it will
reduce the complexity of the overall problem, thereby achieving an effective modular solution.
2. Modular composability
If a design method enables existing design components to be assembled into a new system, it will
produce a modular solution for the problem.
3. Modular understandability
If a module can be understood as a single unit without referring to other modules, it will be easier to
build and easier to change.
4. Modular continuity
If small changes to the system requirements result in changes to individual modules, rather than
system-wide changes, the impact of the change will be minimal.
5. Modular protection
If an aberrant condition occurs within a module and its effects are constrained within that module, the
impact of the induced error will be minimal.
3.5.2 Cohesion
Cohesion is a measure of the relative functional strength of a module. It is an extension of the
information-hiding concept. A cohesive module performs a single task within a software procedure, requiring
little interaction with procedures that are performed in other parts of a program. Cohesion may be
represented at various levels, ranging from a low measure to a high one.
Strongest cohesion is most desirable (7), weakest cohesion (1) is least desirable.
1. Coincidental cohesion (no apparent relationship among module elements).
2. Logical cohesion (some inter-element relationships exist, e.g. several related functions,
math library)
3. Temporal cohesion (elements are usually bound through logic (2) and are executed at one
time, i.e. same invocation of the module, e.g. initialization module).
4. Communication cohesion (all elements are executed at one time and also refer to the
same data, e.g. I/O module).
5. Sequential cohesion (output of one element is input to the next; the module structure bears
close resemblance to the problem structure or procedure).
6. Functional cohesion (all elements relate to performance of a single function).
7. Information cohesion (complex data structure with all its functions/operators, concrete
realization of data abstraction, objects).
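The contrast between a strong and a weak level can be sketched with two hypothetical payroll modules (names invented for illustration):

```python
def compute_net_pay(gross: float, tax_rate: float) -> float:
    """Functional cohesion: every statement contributes to one function,
    computing the net pay."""
    return gross - gross * tax_rate

def misc_utilities(gross: float):
    """Coincidental cohesion: unrelated elements grouped by accident."""
    net = gross * 0.9        # a payroll fragment...
    banner = "=" * 20        # ...bundled with an unrelated formatting task
    return net, banner
```

The first module has one reason to change; the second has several, which is exactly what low cohesion costs in maintenance.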
3.5.3 Coupling
Coupling is a measure of the relative interdependence among modules. The strength of coupling depends
on interface complexity and on the type of connections and communication between modules. In software
design, we look for the lowest possible coupling. Simple connectivity among modules results in software that
is easier to understand and less prone to error propagation through the system. Shown below are the different
types of module coupling.
The strongest (1) is least desirable, the weakest (5) is most desirable.
1. Content coupling (cross modification of local data by other modules).
2. Common coupling (global data cross coupling).
3. Control coupling (control flag etc. module controls sequencing of processing in another
module).
4. Stamp coupling (selective sharing of global data items).
5. Data coupling (parameter lists are used to pass/protect data items).
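The two ends of the scale can be sketched with hypothetical payroll functions: the first communicates only through its parameter list (data coupling), while the second silently reads shared global data (common coupling):

```python
def overtime_pay(hours: float, rate: float) -> float:
    # Data coupling: everything the module needs arrives as parameters.
    return hours * rate * 1.5

TAX_RATE = 0.2  # global data shared across modules

def net_pay(gross: float) -> float:
    # Common coupling: depends on the global TAX_RATE, so any module that
    # changes it silently affects this one.
    return gross * (1 - TAX_RATE)
```

The first function can be understood and tested in isolation; the second cannot, which is why parameter-list communication is preferred.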
I Scope
A. System objectives
B. Major software requirements
C. Design constraints, limitations
II Data design
A. Data objects and resultant data structures
B. File and database structures
1. External file structure
a. logical structure
b. logical record description
c. access method
2. global data
3. file and data cross reference
III Architectural design
IV Interface design
A. Human-machine interface specification
V Procedural design
For each module:
A. Processing narratives
B. Interface description
D. Modules used
F. Comments/restrictions/limitations
VI Requirements Cross-reference
VII Test provisions
1. Testing guidelines
2. Integration strategy
3. Special considerations
VIII Notes
IX Appendices
Section I gives the overall scope (overview of system objective, interfaces, major software functions,
external data bases, major constraints) of the design effort. Much of the information is derived from the
SRS.
Section II gives the data design, describing the external file structures, internal data structures and a
cross reference that connects data objects to specific files.
Section III gives the architectural design, indicating how the program architecture has been derived
from analysis model.
Section IV describes the interface design, with emphasis on the human-machine interface specifications
and design rules.
Section V is about procedural design. Here, each module is described with an English-language
processing narrative, along with its interface description, internal data structures, and the comments
associated with each module.
Section VI contains a requirements cross-reference. Its purpose is to establish that all requirements
are satisfied by the software design, and to indicate which modules are critical to the implementation
of specific requirements.
Section VII is about verification, which includes testing guidelines, the integration strategy and special
considerations (physical constraints, high-speed constraints, memory management).
Sections VIII and IX contain notes and appendices.
Design Fundamentals
Three distinctive aspects of an information system are addressed during software design. Data
design is concerned with the organization, access methods, associativity, and processing alternatives of the
system's data. Architectural (preliminary) design defines the components, or modules, of the system and
the relationships that exist between them. Procedural (detailed) design uses the products of the data and
architectural design phases to describe the processing details of the system's module internals.
Software design methods attempt to help the designer in the following aspects:
• they assist in partitioning the software into smaller components and reducing complexity
• they help to identify and isolate data structures and functions
• they attempt to provide some measure of software quality.
7. A software design and programming language should support the specification and realization
of abstract data types: the implementation, and the corresponding design, of a sophisticated
data structure can be made very difficult if direct specification of the structure does not
exist.
So, well-designed data can lead to better program structure and modularity, and reduced procedural
complexity.
Transform flow
In the fundamental system model (the context-level DFD), information must enter and exit the software
in an external-world form. Data entered through a keyboard and information shown on a computer display
are examples of external-world information.
[Figure 3.4: Transform flow. Over time, external data is converted into an internal data representation, processed, and converted back to external form.]
The external data that enters the system must be converted into an internal form for processing. The
information that enters the system along paths that transform external data into an internal form is
known as incoming flow. The transition from external to internal data form occurs at the kernel of the
software. The incoming data moves through the transform center, and from there it moves out of the
software through paths called outgoing flow. This is shown in figure 3.4. The overall flow of data
occurs in a sequential manner. When a segment of a DFD shows these characteristics, we say transform
flow is present.
Transaction flow
Here the information flow in a system is characterized by a single data item, called a transaction, that
triggers other data flow along one of many paths, as shown in figure 3.5. Transaction flow
is characterized by data moving along an incoming path that converts external-world information into a
transaction. The center of information flow, from which many action paths emanate, is called the transaction
center. When the external information reaches the transaction center, the transaction is evaluated and,
based on its value, flow along one of the many action paths is initiated.
BSIT 44 Software Engineering 43
[Figure 3.5: Transaction flow: a transaction arriving at the transaction center triggers flow along one of several action paths]
4. Separate the transform center by specifying incoming and outgoing flow boundaries. An incoming
flow is a path along which information is converted from external form to internal form; an
outgoing flow is the reverse.
5. Perform the first level factoring - Factoring is the process of decomposing a module into
main and subordinate modules.
6. Perform the second level factoring
7. Refine the first iteration program structure using design heuristics for improved software
quality
Design issues
There are four design issues that need to be addressed while designing an interface.
· System response time
This is the time between the point at which the user requests some action and the point at which the
action is given out by the software. The system response time depends on two factors: length of the
response time, and variability, the deviation from the average response time.
· Command labeling
Earlier, the typed command was the most common mode of interaction between the user and the
system. Nowadays, even though users prefer window-oriented, point-and-pick interfaces, many
users still prefer command-oriented interaction. So, some design issues that correspond to the command
mode interaction need to be addressed:
- Will every menu option have a corresponding command?
- What form will commands take? (e.g. a typed key, function keys, or a control sequence
such as ^P or ^D)
- Can commands be customized? etc.
Design evaluation
The user interface prototype, once created after the design, has to be evaluated to check whether it
meets the user requirements. The design evaluation cycle is shown in figure 3.8.
The user interface evaluation cycle begins with the creation of a first level prototype soon after the
preliminary design is over. This prototype is evaluated by the user, and the comments about the interface
are passed on to the interface designer. The designer then studies the evaluation report and modifies
the design, thus creating the next level of prototype. The evaluation process continues until
no further modifications to the interface design are suggested.
[Figure 3.8: Interface design evaluation cycle: preliminary design, build prototype #1, user evaluation, evaluation is studied by the designer, build prototype #n, until the interface design is complete]
Flow Charts
Prior to the structured programming revolution, flowcharts were the predominant method of representing
program logic. Flowcharts are limited by a physical view of the system that is improperly applied before
overall logical requirements are understood. Figure 3.9 illustrates the three structured constructs : sequence,
repetition and selection.
[Figure 3.9: Flowchart representations of the structured constructs: sequence, selection (if-then-else and case), and repetition (do-while and repeat-until)]
Box diagram
It is another graphical tool which can be used to develop a procedural design.
It is also known as the N-S chart or Nassi-Shneiderman chart. The fundamental element of this tool is
a box. The graphical representations of the structured constructs are shown in figure 3.10.
[Figure 3.10: Box diagram (N-S chart) representations of sequence, selection (if-then-else and case), and repetition (do-while and repeat-until)]
Decision tables
They provide a notation that translates actions and conditions into a tabular form. The organization of
a decision table is given below:
Example: A limited-entry decision table (2^N entries, where N is the number of conditions). It has a list of
conditions and actions. The actions are based on combinations of conditions, and the occurrence of
any action depends on the decision rules.
Decision Rules
Rule numbers   R1 R2 R3 R4 R5 R6 R7 R8
Condition 1    Y  N  Y  N  N  Y  N  Y
Condition 2    Y  N  N  Y  N  N  Y  Y
Condition 3    Y  N  N  N  Y  Y  Y  N
Action 1       x x
Action 2       x x x
Action 3       x
Action 4       x x x
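A limited-entry decision table can be implemented directly as a lookup from a tuple of condition values to the actions it triggers. The rule-to-action mapping below is hypothetical (only a subset of rules is shown), purely to illustrate the mechanism:

```python
# Limited-entry decision table: 2**N possible rules for N conditions.
# The rule-to-action mapping below is made up for illustration only.
RULES = {
    (True,  True,  True):  ["action1"],
    (True,  False, False): ["action2"],
    (False, True,  False): ["action3"],
    (False, False, True):  ["action4"],
}


def decide(c1, c2, c3):
    """Look up the actions triggered by this combination of conditions."""
    return RULES.get((c1, c2, c3), [])


print(decide(True, True, True))   # ['action1']
```

A combination with no entry triggers no action, which makes missing rules easy to spot during review.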
Example
Function: the parameter is a string containing the value to be converted into an integer (single characters are enclosed in single quotation marks, e.g. 'i' is the character representation of the letter i)
Declare:
- the value to be passed back via the function
- a pointer to the string that is being converted
- the sign of the value being converted
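The declarations above can be fleshed out as a small conversion routine. This Python sketch is an assumption about the elided body of the example: it tracks the same three quantities (the value to pass back, an index "pointer" into the string, and the sign) and handles an optional leading sign followed by digit characters:

```python
def string_to_int(s):
    """Convert a character string such as '-123' to an integer."""
    value = 0          # value to be passed back via the function
    i = 0              # index into the string being converted (the "pointer")
    sign = 1           # sign of the value being converted
    if i < len(s) and s[i] in "+-":
        if s[i] == "-":
            sign = -1
        i += 1
    while i < len(s) and s[i].isdigit():
        # Shift previous digits left and add the new digit's value.
        value = value * 10 + (ord(s[i]) - ord("0"))
        i += 1
    return sign * value


print(string_to_int("-123"))   # -123
```
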
3.8 SUMMARY
In this chapter, we have looked at what it means to design a system. We have also seen that the software
design process involves four distinct but interrelated activities: data design, architectural design, interface
design and procedural design. When you build a system, you should keep in mind several important
characteristics such as modularity, levels of abstraction, coupling, cohesion and prototyping, and you
should be able to come out with a design document at the end of the design process. In the next chapter,
we will see how to translate the design into implementation.
I Answers:
1. data abstraction.
2. control abstraction
3. cohesion
4. data coupling
5. transaction mapping.
6. transform mapping
7. condition
8. program design language
3.10 REFERENCES
1. Roger S. Pressman, Software Engineering: A Practitioner's Approach, 4th edition, McGraw-Hill, 1997.
2. Pankaj Jalote, An Integrated Approach to Software Engineering, 2nd edition, Narosa Publishing House.
3. Ian Sommerville, Software Engineering, 5th edition, Addison-Wesley, 1996.
Chapter 4
Software Coding
4.0 INTRODUCTION
So far, we have understood the user's problem, in the form of requirements, and the way
of addressing it, in the form of a design that gives a high-level solution. Now, we must focus on
implementing the solution as software. That is, we must write the programs that implement the
design. Even though there are many ways to implement a design, and many languages and tools are
available, this chapter does not teach you how to program; rather, it explains some of the software
engineering practices that you need to follow when you write code.
4.1 OBJECTIVES
In this chapter, we will look at
• Standards of programming
• Guidelines for programming
• Appreciate the importance of documentation with respect to programming
• Types of documentation - Internal and external documentation
• Understand the use of programming tools
are generally involved, and a great level of cooperation and coordination is required. Thus, it is very
important for others to understand not only what you have written, but also why you have written it and
how it fits in with their work. For these reasons, one must know the organization's standards and procedures
before beginning to write code.
Standards and procedures can help you to
• Organize your thoughts and avoid mistakes. Some procedures involve methods of documenting
your code so it is clear and easy to follow.
• Translate designs to code. By structuring code according to standards, it is possible to maintain
the correspondence between design components and code components.
Control structures
Many of the control structures for a program component are given by the architecture and design of a
system, and the given design is translated into code. In some architectures, such as implicit
invocation and object-oriented designs, control is based on system states and changes in variables. In
procedural designs, control depends on the structure of the code itself. However, irrespective of the
design type, it is important that the program structure reflect the design's control structure.
Let us look at some of the guidelines that are applicable here:
• The code shall be written in such a way that it can be read easily from top-down
• The concept of modularity needs to be followed in order to hide implementation details
and to make the code more understandable and easy to test and maintain
• Use parameter names and comments to exhibit coupling among code components
Algorithms
The program design often specifies a class of algorithms to be used in coding. For example, the design
may tell the programmer to use the binary search technique. Even though a programmer has a lot of
flexibility in converting the algorithm to code, that flexibility is bounded by the constraints of the
implementation language and hardware.
Data structures
In writing programs, one should format and store data in such a manner that data management and
manipulation are straightforward. The program's design may specify some of the data structures to
be used in implementing functions. In general, data structures can influence the organization and flow
of a program. In some cases, they can even influence the choice of programming language. For example,
LISP is a language for list processing; it is designed to contain structures that make handling lists much
easier than in other languages. In general, the data structures must be very carefully considered
while deciding the language for implementation.
General guidelines
There are several strategies that are useful in preserving the design quality of a program:
• Localizing input and output. Those parts of a program that read input or generate output are
highly specialized and must reflect characteristics of the underlying hardware and software.
Because of this dependency, the program sections performing input and output functions are
sometimes difficult to test. Therefore, it is desirable to localize these sections in components
separate from the rest of the code.
• Pseudo-code can be used for transforming the design to code through a chosen programming
language. By adopting constructs and data representations without becoming involved
immediately in the specifics of each command, one can experiment and decide which
implementation is most desirable. In this way, code can be rearranged and restructured with
a minimum of rewriting.
• Revise the design and rewrite the code until one is completely satisfied with the result.
• Reuse code components where possible.
4.4 DOCUMENTATION
Documentation is an important activity in the implementation phase. The program documentation is a
set of written descriptions that explain to a reader what the programs do and how they do it. There are
two kinds of program documentation :
• Internal documentation
• External documentation
Internal documentation
Internal documentation is descriptive material written directly within the program, at a level appropriate
for a programmer. It contains information directed at someone who will be reading the source code of
the program: descriptions of data structures, algorithms and control flow.
Usually, this information is placed at the beginning of each component in a set of comments called the
header comment block. The header comment block acts as an introduction to the program. It records the
following information for each code component:
- what is the name of the component
- who wrote it
- where does the component fit in the general system design
- when was the component written and revised
- why the component exists
- how the component uses its data structures, algorithms and controls
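In a language such as Python, a header comment block covering those points might look like the following sketch (the component name, author and dates are, of course, made up):

```python
# ---------------------------------------------------------------
# Component : average  (hypothetical example component)
# Author    : A. Programmer
# Placement : utility layer of the statistics subsystem
# History   : written 2001-03-01, revised 2001-04-15
# Purpose   : exists to centralise the mean computation
# Method    : sums a list of numbers and divides by its length
# ---------------------------------------------------------------
def average(numbers):
    """Return the arithmetic mean of a non-empty list of numbers."""
    return sum(numbers) / len(numbers)


print(average([2, 4, 6]))   # 4.0
```
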
Program comments
Comments in a program are textual statements that are meant for program readers and are not
executed. Comments, if properly written and kept consistent with the code, can be invaluable during
maintenance. Comments enlighten readers as they move through the program, helping them to
understand how the intentions described in the header are implemented in the code. Providing comments
for the modules in a program is very useful, and the comments for a module are often called the prologue
for the module. It is desirable that the prologue contain the following information:
• Module functionality
• Parameters and their purpose
• Assumptions about the inputs, if any
• Global variables accessed and/or modified in the module
Comments have a place even in clearly structured and well-written code. Although code clarity and
structure minimize the need for other comments, additional comments are useful whenever helpful
information can be added to a component. It is very important that comments be updated to
reflect changes whenever the code is revised.
Documenting data
When a system handles many files of different types and purposes, along with flags and passed
parameters, it is very difficult for program readers to understand the way in which data is structured and
used. So, a data map is essential for documenting data.
External documentation
It is a part of the overall system documentation. It is intended to be read by people who may never
look at the actual code. It explains things more broadly than program comments do.
The external code documentation contains the following description :
• Problem description
It explains what problem is being addressed by the component, when the
component is invoked and why it is needed.
• Algorithm description
This addresses the choice of algorithms, once the purpose of the
component is clear. It explains each algorithm used by the component,
including formulae, boundary conditions etc.
• Data description
This describes the flow of data at the component level. Usually, data flow
diagrams along with data dictionaries are used in the form of references.
• User groups, depending on the nature of usage, their exposure and training levels of users
• Volume of information presented
• Document usage mode, whether it has to be an instructional mode, reference mode or both
Generally, the documentation set could consist of an installation manual, operations manual, procedure
manual, user manual, error message manual etc. However, the main idea here is to ensure that the users
have complete documentation on how to install, use and manage the software.
Any user document would generally consist of the following components:
1. Title page
2. Restrictions
3. Warranties
4. Table of contents
5. List of illustrations
6. Introduction
7. Body of document
8. Error conditions and recovery
9. Appendices
10. Bibliography
11. Glossary
12. Index
The introduction would generally give an idea of the intended users to whom the document is addressed,
how to use it, information on other related documents, conventions followed etc.
The body of the document gives the main contents of the document. This generally includes:
1. Scope
2. Prerequisites for usage
3. Preparatory instructions
4. Cautions and warnings
5. Description of each task giving
4.7 SUMMARY
In this chapter, we have looked at several guidelines for implementing programs. Certain points need
to be considered by the programmer while writing programs, such as the organizational standards and
guidelines to be followed, the concept of code reusability, the incorporation of a system-wide
error-handling strategy, and proper program documentation. We have also seen the importance of
programming tools, and addressed some of the important issues in implementing the design to produce
high quality software. In the subsequent chapters, we will discuss how to test the code and how to
make the software a quality product.
I Answers
1. algorithms
2. external
3. textual statements
4. browsing tools
5. executable code
4.9 REFERENCES
1. Shari Lawrence Pfleeger, Software Engineering: Theory and Practice, 2nd edition, Pearson Education, 2001.
2. Roger S. Pressman, Software Engineering: A Practitioner's Approach, 4th edition, McGraw-Hill, 1997.
3. Pankaj Jalote, An Integrated Approach to Software Engineering, 2nd edition, Narosa Publishing House.
4. Ian Sommerville, Software Engineering, 5th edition, Addison-Wesley, 1996.
Chapter 5
Software Testing
5.0 INTRODUCTION
Software testing is a critical element of software quality assurance and represents the ultimate
review of specification, design and coding. In the process of software development, errors can
occur at any stage and during any phase. The errors encountered during the earlier phases of the
software development life cycle (SDLC), such as the requirements and design phases, are likely to be
carried into the coding phase, in addition to the errors generated at the coding phase itself. This is
particularly true because, in the earlier phases, most of the verification techniques are manual and no
executable code exists. Because code is the only product that can be executed and whose actual behavior
can be observed, testing is the phase where errors remaining from all the phases must be detected.
Hence, testing plays a very critical role in quality assurance and in ensuring the reliability of software.
In this chapter, we discuss software testing fundamentals, techniques for software test case design and
different strategies for software testing.
5.1 OBJECTIVES
At the end of this chapter you should be able to
• Say what software testing is
• State testing objectives
• Understand the basic principles behind testing
• Differentiate white-box from black-box testing method
• Derive test cases using white-box or black-box methods
• Discuss unit testing, integration testing, validation testing and system testing and their
importance
The key to software testing is trying to find the modes of failure, something that requires exhaustively
testing the code on all possible inputs. For most programs, this is computationally infeasible. It is
commonplace to attempt to test as many of the syntactic features of the code as possible.
Once source code has been generated, software must be tested to uncover (and correct) as many
errors as possible before delivery to the customer. The goal is to design a series of test cases that have
a high likelihood of finding errors, but how? That's where software testing techniques enter the picture.
These techniques provide systematic guidance for designing tests that:
(1) exercise the internal logic of software components, and
(2) exercise the input and output domains of the program to uncover errors in program function,
behavior, and performance.
Techniques that try to exercise as much of the code as possible are called white-box software
testing techniques. Techniques that do not consider the code's structure when test cases are selected
are called black-box techniques.
Testing objectives
The following statements serve as objectives for testing:
1. Testing is a process of executing a program with the intent of finding errors.
2. A good test case is one that has a high probability of finding an as-yet undiscovered error.
3. A successful test is one that uncovers an as-yet undiscovered error.
Testing principles
Here are some basic principles that are applicable to software testing and that need to be followed
before applying methods to design test cases:
• All tests should be traceable to customer requirements
• Tests should be planned long before testing begins
• The Pareto principle (which implies that 80 percent of all errors uncovered during testing will
likely be traceable to 20 percent of all program modules) applies to software testing
• Testing should begin in the small and progress towards testing in the large, from module level
testing towards testing the entire system
• Exhaustive testing is not possible
• Tests should be conducted by an independent third party, so that testing will be more
effective
Testing function
- Testing will find errors in the software
- It will show that the software functions appear to be working according to the specifications
- It indicates software reliability and software quality as a whole
But testing cannot show the absence of defects; it can only show the presence of errors.
1. White-Box Testing
2. Black-Box Testing
Black Box Testing: It is also known as functional testing. Knowing the specified functions that the
product has been designed to perform, tests can be conducted to demonstrate that each function is fully
operational.
Examples: Boundary value analysis, equivalence partitioning
White Box Testing: It is also known as structural testing. Knowing the internal workings of the product,
tests can be conducted to ensure that all internal operations perform according to specification and that
all internal components have been adequately exercised.
Example: Basis path testing
[Figure 5.1: Flow graph notation for the structured constructs: sequence, if, and while]
- Uses flow graphs for representing the control flow or logical flow, as shown in figure 5.1
To illustrate the use of flow graphs, we will consider a procedural design representation expressed as
a flow chart, as shown in figure 5.2. Here, a flow chart is used to depict the program control
structure. Figure 5.3 maps the flowchart into a corresponding flow graph. Each circle, called a flow
graph node, represents one or more procedural statements. A sequence of procedure boxes and a decision
box can map into a single node. The arrows on the flow graph, called edges or links, represent flow of
control. Areas bounded by nodes and edges are called regions.
[Figure 5.2: Flow chart of the example procedural design, with numbered procedural statements]
Cyclomatic complexity
- is a software metric that provides a quantitative measure of the logical complexity of a program.
- When used in the basis path method, it gives a number called the cyclomatic number,
V(G) = E - N + 2. This number defines the number of independent paths in the basis set
of a program, and also gives an upper bound for the number of tests that must be conducted
to ensure that all statements have been executed at least once.
[Figure 5.3: Flow graph for the flow chart of figure 5.2, showing nodes (including merged nodes 2,3 and 4,5), edges, and regions]
Here, paths 1, 2, 3 and 4 form the basis set for the flow graph.
Cyclomatic complexity (CC) can be computed in 3 ways:
1. The number of regions of the flow graph corresponds to the cyclomatic complexity.
2. The cyclomatic complexity V(G) for a flow graph G is defined as V(G) = E - N + 2, where E is
the number of flow graph edges and N is the number of flow graph nodes.
3. The cyclomatic complexity V(G) for a flow graph G is defined as V(G) = P + 1, where P is
the number of predicate nodes contained in the flow graph.
Example:
1. CC = 4 (because there are 4 regions)
2. V(G) = E - N + 2 = 11 - 9 + 2 = 4
3. V(G) = P + 1 = 3 + 1 = 4
This V(G) gives the upper bound for the number of independent paths that form the basis set.
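The three formulas can be checked mechanically. This sketch uses the counts from the example above (E = 11 edges, N = 9 nodes, P = 3 predicate nodes, 4 regions) and confirms that all three computations agree:

```python
def cyclomatic_from_edges(e, n):
    """V(G) = E - N + 2, from edge and node counts."""
    return e - n + 2


def cyclomatic_from_predicates(p):
    """V(G) = P + 1, from the predicate node count."""
    return p + 1


edges, nodes, predicates, regions = 11, 9, 3, 4
assert cyclomatic_from_edges(edges, nodes) == 4
assert cyclomatic_from_predicates(predicates) == 4
assert regions == 4          # region count gives the same value
print(cyclomatic_from_edges(edges, nodes))   # 4
```
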
[Figure 5.4: Symbolic representation of a graph, with object nodes, directed and undirected links, and parallel links]
In order to accomplish these steps, a software engineer begins by creating a graph: a collection of
nodes that represent objects, links that represent relationships between objects, node weights that describe
the properties of a node, and link weights that describe some characteristic of a link. The symbolic
representation of a graph is shown in figure 5.4. Once nodes have been identified, links and link weights
should be established. Each relationship is studied separately, e.g. transitive, symmetric and
reflexive relations.
Equivalence partitioning
It is a black-box testing method. The input domain of a program is divided into classes
of data from which test cases can be derived. A test case handles each class of data and uncovers a
different class of errors. Test case design for equivalence partitioning is based on the evaluation of
equivalence classes for input conditions.
Equivalence class
- An equivalence class of objects is present if the set of objects is related by relationships such as
transitive, reflexive and symmetric relations.
- represents a set of valid or invalid states for input conditions. An input condition can be a specific
numeric value, a range of values, a set of related values, or a Boolean condition.
Command: an input condition that is a set containing the commands noted above.
Next, equivalence classes are derived using the guidelines for input conditions, for which test cases are
then developed and executed.
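For an input condition that is a range of values, the derivation yields one valid class (within the range) and two invalid classes (below and above it). This sketch assumes a hypothetical input range of 1 to 100 and picks one representative test value per class:

```python
def classify(value, low=1, high=100):
    """Return the equivalence class of a numeric input for the range [low, high]."""
    if value < low:
        return "invalid: below range"
    if value > high:
        return "invalid: above range"
    return "valid: within range"


# One representative test case per equivalence class.
for v in [0, 50, 101]:
    print(v, "->", classify(v))
```

Any other value in the same class would, by assumption, exercise the same path, so one representative per class is enough.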
• Testing is conducted by the developer of the software and an independent test group
• Testing and debugging are different activities, but debugging must be accommodated in any
testing strategy
1) First, the module interface is tested to ensure that information properly flows into and out of
the program unit under test.
2) The local data structures are examined to ensure that data stored temporarily maintains its
integrity during all steps in an algorithm's execution.
3) Boundary conditions are tested to ensure that the module operates properly at the boundaries
established to limit or restrict processing.
4) All independent paths (basis paths) through the control structure are exercised to ensure that
all statements in the module have been executed at least once.
5) Finally, all error handling paths are tested.
[Figure: Unit test environment: a driver passes test cases to the module to be tested (covering its interface, local data structures, boundary conditions, independent paths, and error handling paths), stubs replace its subordinate modules, and the results are reported]
In most applications, a driver is software, or a main program, that accepts test case data, passes it to
the module to be tested, and prints the relevant results.
A stub is a dummy subprogram that replaces the modules subordinate to the module to be
tested. It does minimal data manipulation, prints verification of entry, and returns.
Unit testing is advantageous when a module to be tested defines only one function: the number
of test cases is reduced, which in turn makes error handling easier. A module with high cohesion
simplifies unit testing.
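A driver and a stub might be sketched like this in Python (the module and function names are hypothetical): the stub stands in for a subordinate lookup module and returns a canned value, while the driver feeds test data to the module under test and prints the results.

```python
def tax_rate_stub(region):
    """Stub: replaces the real subordinate module with minimal behavior."""
    print("stub entered for region:", region)   # verification of entry
    return 0.10                                 # fixed canned value


def compute_price(base, region, rate_lookup):
    """Module under test; its subordinate is passed in so a stub can replace it."""
    return base * (1 + rate_lookup(region))


def driver():
    """Driver: accepts test case data, passes it to the module, prints results."""
    for base, region in [(100.0, "north"), (200.0, "south")]:
        print(region, "->", compute_price(base, region, tax_rate_stub))


driver()
```

Passing the subordinate in as a parameter is one design choice that makes substituting the stub trivial; in other designs the stub simply replaces the real module at link or import time.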
1. Incremental Integration : The program is constructed and tested in small segments where
errors are easier to isolate and correct. Interfaces are more likely to be tested completely
and a systematic test approach is applied.
2. Non-incremental integration: It uses a Big bang approach to construct the program.
All modules are combined in advance, and the entire program is tested as a whole. Errors
are more numerous and more difficult to isolate and correct.
[Figure 5.6: Program structure with top module m1; m2, m3 and m4 at the next level; m5, m6 and m7 below them; and m8 at the lowest level]
Breadth First Integration: Incorporates all modules directly subordinate at each level, moving
across the structure horizontally.
From figure 5.6, modules m2, m3 and m4 are integrated first, then the next control level (m5, m6 and
m7), and so on.
2. Bottom-up Integration:
The modules at the lowest levels in the program structure are constructed and tested first. Here, the
need for stubs is eliminated (see figure 5.7).
[Figure 5.7: Bottom-up integration: clusters 1, 2 and 3 are tested using drivers D1, D2 and D3, then integrated under modules ma, mb and mc]
Advantages:
- This method is very simple.
- There is a substantial decrease in the number of drivers if the top two levels of the program
structure are integrated top-down.
- No stubs are required.
- Test case design is easier.
Disadvantage: the program does not exist as an entity until the last module is added.
Regression Testing:
This is an activity that helps ensure that the changes which often take place do not introduce additional
errors. Such software changes occur every time a new module is added as part of integration testing:
new data paths are established, new input may occur and new control logic is invoked.
- This testing may be conducted manually by re-executing a subset of all test cases, or using
automated capture/playback tools (these tools enable the software engineer to capture test
cases and results for subsequent playback and comparison).
- It focuses on critical module functions.
The regression test suite contains three different classes of test cases:
1) A representative sample of tests that will exercise all software functions.
2) Additional tests that focus on software functions, that are likely to be affected by the change.
3) Tests that focus on the software components that have been changed.
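The three classes of regression test cases can be organised as plain assertion functions. This is a hypothetical sketch built around a toy `discount` function; the class each test belongs to is noted in its comment:

```python
def discount(price, percent):
    """Toy function under regression test: price reduced by a percentage."""
    return round(price * (1 - percent / 100.0), 2)


# 1) Representative sample exercising the basic software function.
def test_representative():
    assert discount(100.0, 10) == 90.0


# 2) Tests focused on functions likely to be affected by a change.
def test_boundaries():
    assert discount(100.0, 0) == 100.0
    assert discount(100.0, 100) == 0.0


# 3) Tests focused on the component that has actually been changed.
def test_rounding():
    assert discount(99.99, 5) == 94.99


# Re-run the whole suite after every change to the component.
for test in (test_representative, test_boundaries, test_rounding):
    test()
print("regression suite passed")
```
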
So, the selection of an integration strategy depends upon software characteristics and sometimes the
project schedule. In general, a combined approach called sandwich testing is used, which takes on
characteristics of both approaches.
3. Sandwich testing:
The best compromise between the two methods. It uses a top-down strategy for the upper levels of the
program structure, coupled with a bottom-up strategy for the subordinate levels.
Configuration Review:
- It is also termed an audit.
- It is an important element of the validation process.
- It is done to ensure that all elements of the software configuration have been properly
developed, are catalogued, and are detailed enough to support the maintenance phase of the
software life cycle.
Recovery Testing:
This is a system test that forces the software to fail in a variety of ways and verifies that recovery is
properly performed (i.e. a system failure must be corrected within a specified period of time). Recovery
can be automatic or manual. If automatic, re-initialization, checkpointing mechanisms, data recovery, and
restart are each evaluated for correctness. If recovery requires manual intervention, the mean time to
repair is evaluated to determine whether it is within acceptable limits.
Security Testing:
It attempts to verify that protection mechanisms built into a system will in fact protect it from improper
penetration (hackers may attempt to penetrate systems for sport, for revenge, or for illicit personal gain).
During this testing, the tester plays the role of the individual who desires to penetrate the system. The
system designer, therefore, has to make the cost of penetration greater than the value of the information
that would be obtained.
Stress Testing:
It is designed to confront programs with abnormal situations. This testing executes a system in a
manner that demands resources in abnormal quantity, frequency or volume.
A variation of stress testing is called sensitivity testing.
Performance Testing:
It is mainly used for testing real-time and embedded systems. It is designed to test the run-time
performance of software within the context of an integrated system. It is often combined with stress
testing.
5.4.5 Debugging
- Debugging occurs as a consequence of successful testing.
- When a test case uncovers an error, debugging, the process that results in the removal of the
error, occurs.
2. Backtracking:
- It is fairly common and can be used successfully in small programs.
- Beginning at the site where the symptom has been found, the source code is traced backward
(manually) until the site of the cause is found.
3. Cause elimination:
- It uses the concept of binary partitioning.
- A cause hypothesis is devised, and data related to the error occurrence are used to prove
or disprove the hypothesis.
Testing is the costliest activity in software development, so it has to be done efficiently. Testing
cannot be done all of a sudden: careful planning is required, and the plan has to be executed properly.
The testing process focuses on how testing proceeds for a particular project.
Structural testing (white-box testing) is best suited for testing a single module rather than an entire
program (for an entire program it is difficult to generate test cases that achieve the desired coverage).
It is good for detecting logic errors, computational errors and interface errors.
Functional testing (black-box testing) is best for testing the entire program or system. It is good for
detecting input errors and data handling errors.
Code Reviews:
- Cost effective method of detecting errors.
- Code is reviewed by a team of people in a formal manner. Here faults are found directly.
- Does not require test case planning, test case generation or test case execution.
Features to be tested:
This includes all the software features, and combinations of features, to be tested. These features are
the ones specified in the requirement or design documents, such as functional features, performance,
design constraints and attributes.
Test deliverables:
- List of test cases that were used
- Detailed results of testing
- Test summary report
- Test log
- Data about code coverage
In general, a test case specification report , test summary report and test log should be supplied as
deliverables.
Schedule
Specifies the amount of time and effort to be spent on different activities of testing and testing of
different units that have been identified.
Personnel allocation:
Identifies the persons responsible for performing different activities.
The testing process usually begins with a test plan which is the basic document guiding the entire
testing of the software. It specifies the levels of testing and the units that need to be tested. For each of
92 Chapter 5 - Software Testing
the different units, the test case specifications are first given and reviewed; then, during the test case
execution phase, the test cases are executed and various reports are produced for evaluating the testing.
The main outputs of the execution phase are the test log, the test summary report and the error report.
In the basic model, it is assumed that the failure intensity decreases linearly (at a constant rate) with
the number of failures experienced. See fig 5.8.
It is given by the equation: λ(U) = λ0 (1 − U/V0)
BSIT 44 Software Engineering 93
(Fig 5.8 plots failure intensity against execution time.)
where λ0 is the initial failure intensity at the start of execution, U is the expected number of
failures by a given time t, and V0 is the total number of failures that would occur in infinite time.
(A companion figure plots failure intensity against total failures.)
Reliability has two parameters, λ0 and V0, whose values are used to predict the reliability of the
given software.
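The model can be sketched in a few lines of code; the parameter values below are made up for illustration only.

```python
def failure_intensity(lam0, u, v0):
    """Basic execution-time model: failure intensity decreases linearly
    with the expected number of failures experienced so far (u)."""
    return lam0 * (1 - u / v0)

# Illustrative (made-up) parameters: lam0 = 10 failures per CPU-hour at the
# start of execution, v0 = 100 total failures in infinite time.
lam0, v0 = 10.0, 100.0
print(failure_intensity(lam0, 0, v0))    # 10.0 - intensity at the start
print(failure_intensity(lam0, 50, v0))   # 5.0  - halfway to V0
print(failure_intensity(lam0, 100, v0))  # 0.0  - all failures experienced
```

The linear decrease is the defining assumption of the basic model: each failure experienced (and the fault fixed behind it) reduces the intensity by the same amount, λ0/V0.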
5.8 SUMMARY
Software testing is an important phase in the software development life cycle. It represents the ultimate
review of specification, design and coding. The main objective of test case design is to derive a set of
tests that can uncover errors in the software. This can be achieved with the help of two test case
design approaches, namely white-box and black-box testing. The white-box method focuses on the program
control structure, whereas the black-box method focuses on finding errors in the functional requirements.
Apart from the test case design approaches, we also have different strategies for software testing.
Other concepts, like the art of debugging, the test plan specification and metrics related to software
testing, have also been discussed in this unit. Thoroughly tested software maintains quality. A quality
product is what everyone wants and appreciates. We shall look into quality and its related concepts in
the subsequent units.
5.9 ANSWERS
1. Black-Box Testing
2. cyclomatic complexity
3. transitive
4. non-incremental
5. test plan
6. performance testing
7. reliability
5.10 REFERENCES
1. Roger S. Pressman: Software Engineering - A Practitioner's Approach (Fourth Edition), McGraw-Hill, 1997.
2. Pankaj Jalote: An Integrated Approach to Software Engineering (Second Edition), Narosa Publishing House.
3. Edward Kit: Software Testing in the Real World (First Edition), Addison-Wesley, 2000.
Chapter 6
Software Project Management
6.0 INTRODUCTION
Software development is an umbrella activity with various phases involving several resources, the most
important being manpower and money. Software development is a lengthy process which may take months or
even years to complete. For a project to be completed successfully, the large workforce has to be properly
organized so that it contributes efficiently and effectively to the project. Proper management controls
are therefore essential for controlling development and ensuring quality. Project management includes
activities such as project planning, estimation, risk analysis, scheduling, Software Quality Assurance
and Software Configuration Management. In the context of a set of resources, planning involves
estimation - an attempt to determine how much money, how much effort, how many resources, and how much
time it will take to build a specific software-based system or product. In this chapter, we shall see the
major activities addressed by project management, such as cost estimation, project scheduling, staffing
and risk management. In the subsequent units, we shall discuss software configuration management and
quality assurance.
6.1 OBJECTIVES
At the end of this chapter you should be able to
• know the importance of managing people, process and project
• describe the different types of team structuring
• say what project planning is
• discuss planning with respect to resources
People
People are the backbone of software development. A software process or project is usually
populated with several people, who can be categorized as follows:
1. Senior managers, who define the business issues that often have a significant influence on the
project.
2. Project managers, who plan, motivate, organize, and control the practitioners who do software
work.
3. Practitioners, who deliver the technical skills that are necessary to engineer a product.
4. Customers, who specify the requirements for the software to be developed.
5. End users, who interact with the software once it is released for production.
Democratic decentralized
- It consists of ten or fewer programmers
- the goals of the group are set by consensus
- every member is involved in taking major decisions
- group leadership rotates among group members
- the structure results in many communication paths between people, as shown in figure 6.1
Controlled centralized
- it consists of a chief programmer, who is responsible for all the major technical decisions of
the project. He also does most of the design activity and allocates the coding work among the
members of the team
- under him, he has a backup programmer, a program librarian, and programmers
- the backup programmer helps the chief in making decisions and takes over the role of the chief
in the chief programmer's absence
- the program librarian is responsible for maintaining documents and other communication-related
work
- this team structure exhibits considerably less interpersonal interaction. The structure is
shown in figure 6.2
(Figure 6.2: Controlled centralized team structure - a chief programmer, with a backup programmer,
a librarian and programmers under him.)
Project
The problem or project under consideration needs to be well thought out at the beginning. A proper
understanding of the problem helps in preparing quantitative estimates and thus avoids several issues
100 Chapter 6 - Software Project Management
related to it. So, the scope of the problem must be established first, since determining scope is the
beginning activity of software project management.
The software scope must be unambiguous and understandable at both the management and the technical
levels. It must address the following:
- context
- information objectives
- function and performance
To make the problem easily manageable, it must be communicated from the customer to the developer and
partitioned (decomposed) into several parts, which are then allocated as work to the software team.
The decomposition approach can be seen from two view points:
• decomposition of the problem
• decomposition of the process
Software estimation is a form of problem solving. If the problem to be solved is too complex to be
considered in one piece (i.e., in terms of developing cost and effort estimates), then it is
decomposed into a set of smaller problems, which can be managed easily. Estimation makes use of one or
both of the above approaches to decomposition.
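As an illustration of decomposition-based estimation, one common technique (the three-point, or expected-value, estimate) sums per-subproblem size estimates and divides by an organizational productivity rate. The subproblem names, LOC figures and productivity value below are assumptions for illustration, not data from the text.

```python
# Sketch: decomposition-based effort estimation. Each subproblem gets an
# optimistic (o), most likely (m) and pessimistic (p) size estimate in LOC.
def expected(o, m, p):
    return (o + 4 * m + p) / 6.0   # beta-distribution weighted average

subproblems = {
    "user interface":    (1800, 2400, 2650),
    "database access":   (4100, 5200, 7400),
    "report generation": (1200, 1600, 2200),
}

total_loc = sum(expected(o, m, p) for o, m, p in subproblems.values())
productivity = 620                  # LOC per person-month (assumed historical average)
effort_pm = total_loc / productivity
print(round(total_loc), "LOC,", round(effort_pm, 1), "person-months")  # 9358 LOC, 15.1 person-months
```

The point of the decomposition is that each small estimate is easier to defend than a single guess for the whole product, and errors in individual subproblem estimates tend to partially cancel in the sum.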
Process
Deciding on a process model is very important for a project manager, since there are a number of
process models to choose from. The manager must select the most appropriate model for the project and
then define a preliminary plan based on the set of common process framework activities. Once the
preliminary plan is ready, process decomposition begins; that is, a complete plan showing the work
tasks required for the process framework activities is developed, as discussed in the project
scheduling activity.
People
Cost in a project is due to the requirements for software, hardware and human resources. The bulk of
the cost of software development is due to the human resources needed, and most cost estimation
procedures focus on this aspect. Most cost estimates are determined in terms of person-months (PM).
As the cost of the project depends on the nature and characteristics of the project, at any point, the
accuracy of the estimate will depend on the amount of reliable information we have about the final
product. When the project is being initiated, or during the feasibility study, we have only some idea
of the data the system will get and produce, and of the major functionality of the system. There is a
great deal of
uncertainty about the actual specifications of the system. As we specify the system more fully and
accurately, the uncertainties are reduced and more accurate cost estimates can be made. Despite the
limitations, cost estimation models have matured considerably and generally give fairly accurate estimates.
E = A + B × (ev)^C
where A, B, and C are empirically derived constants, E is the effort in person-months, and ev is the
estimation variable (either LOC or FP).
KLOC = thousands of lines of code, also called thousands of delivered source instructions (KDSI).
Table 2: Effort Equations for three development Modes in Basic COCOMO Model

Software project    Ab     Bb     Cb     Db
Organic             2.4    1.05   2.5    0.38
Semidetached        3.0    1.12   2.5    0.35
Embedded            3.6    1.20   2.5    0.32

(Effort is E = Ab × (KLOC)^Bb person-months; the coefficient values shown are the standard Basic
COCOMO constants.)
E = Ai × (LOC)^Bi × EAF
where E is the effort applied in person-months, LOC is the estimated number of delivered lines of code
for the project, EAF is the effort adjustment factor computed from the cost drivers, and Ai, Bi are
coefficients whose values are given in table 3.
Table 3 :
Software Project Ai Bi
Organic 3.2 1.05
Semidetached 3.0 1.12
Embedded 2.8 1.20
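A minimal sketch of the effort computation follows, using the Table 3 coefficients for the intermediate model; the basic-model coefficients used here are the commonly published ones and should be checked against Table 2. The 32-KLOC example size is illustrative.

```python
# COCOMO effort sketch. Size is in KLOC; results are in person-months.
BASIC = {"organic": (2.4, 1.05), "semidetached": (3.0, 1.12), "embedded": (3.6, 1.20)}
INTERMEDIATE = {"organic": (3.2, 1.05), "semidetached": (3.0, 1.12), "embedded": (2.8, 1.20)}

def basic_effort(kloc, mode):
    a, b = BASIC[mode]
    return a * kloc ** b

def intermediate_effort(kloc, mode, eaf=1.0):
    # EAF is the product of the ratings chosen for the 15 cost drivers.
    a, b = INTERMEDIATE[mode]
    return a * kloc ** b * eaf

print(round(basic_effort(32, "organic"), 1))              # ~91 person-months
print(round(intermediate_effort(32, "organic", 1.10), 1)) # EAF of 1.10 inflates effort by 10%
```

Notice how the exponent Bb > 1 makes effort grow faster than linearly with size: doubling the KLOC more than doubles the estimated effort, which is the diseconomy of scale the model encodes.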
There are many candidate factors to consider in developing a better model for estimating the cost of a
software project. There are two principles to reduce the large number of candidate factors to a relatively
manageable number of factors for practical cost estimation:
• General significance: this tends to eliminate factors which are significant only in a relatively
small fraction of specialized situations.
• Independence: this tends to eliminate factors which are strongly correlated with product size,
and to compress a number of factors which tend to be highly correlated on projects into a
single factor.
COCOMO model uses 15 cost drivers based on these principles. These cost drivers are grouped into
four categories: software product attributes, computer attributes, personnel attributes, and project attributes.
Product Attributes
Computer Attributes
TIME Execution Time Constraint
The degree of the execution time constraint imposed upon a software product.
Personnel Attributes
ACAP Analyst Capability
The level of capability of the analysts working on a software product.
Project Attributes
MODP Use of Modern Programming Practices
The degree to which modern programming practices (MPPs) are used in developing the software
product.
• A specification needs to be testable to the extent possible, so that one can define a clear pass/
fail test for determining whether or not the developed software satisfies the specification
Step 7: Follow-up
• Once a software project is started, it is essential to gather data on its actual costs and
progress and compare these to the estimates
Risk:
A chance or possibility of danger, loss, injury or other adverse consequences.
(A definition from the Oxford Dictionary.)
Risk Management:
Software risk management is an emerging discipline whose objectives are to identify,
address and eliminate software risk items before they become major sources of rework. [IEEE 89]
There are different categories of risks that can be considered. In the first categorization:
• Project risks, which threaten the project plan
• Technical risks, which threaten the quality and timeliness of the software to be built
• Business risks, which threaten the viability of the software to be built
In the second categorization:
• Known risks, which can be uncovered from careful evaluation of the project plan
• Predictable risks, which are extrapolated from past project experience
• Unpredictable risks, which cannot be known in advance and are difficult to find
Both generic and product-specific risks need to be identified systematically, for which a risk item
checklist can be used.
The risk item check list is given below:
• Product size
• Business impact
• Customer characteristics
• Process definition
• Development environment
• Technology to be built
• Staff size and experience
Here, the second column shows the category of risk encountered: PS is a product size risk, BU is a
business risk, and so on. The impact value 1 means catastrophic, 2 critical, 3 marginal and 4
negligible. The RMMM column contains a pointer to the Risk Mitigation, Monitoring and Management plan
developed for each risk.
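A small sketch of how such a risk table might be built and sorted; the risk entries, probabilities and cut-off value below are illustrative, not from the text.

```python
# Sketch: a risk table. Each entry is (description, category, probability %,
# impact), with impact on the scale in the text: 1 catastrophic, 2 critical,
# 3 marginal, 4 negligible. All entries are made up for illustration.
risks = [
    ("size estimate may be significantly low", "PS", 60, 2),
    ("larger number of users than planned",    "PS", 30, 3),
    ("funding will be lost",                   "BU", 40, 1),
    ("staff turnover will be high",            "ST", 60, 2),
]

# Sort by probability (descending), then by severity of impact, and keep only
# the risks above a management cut-off line for active monitoring.
risks.sort(key=lambda r: (-r[2], r[3]))
cutoff = 35  # percent; risks below this line are only re-examined periodically
monitored = [r for r in risks if r[2] >= cutoff]
for name, cat, prob, impact in monitored:
    print(f"{cat}: {name} ({prob}%, impact {impact})")
```

The cut-off line is the practical payoff of the table: only the risks above it consume RMMM effort, which keeps risk management affordable.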
Monitoring the factors associated with a risk may provide an indication of whether the risk is becoming
more or less likely. In addition to monitoring the risk factors, the project manager should also
monitor the effectiveness of the risk mitigation steps.
Risk management and contingency planning assume that mitigation efforts have failed and that the risk
has become a reality.
Risk aversion
It is very easy to acknowledge that large risks are present when undertaking software development
projects. It can therefore be tempting for software development managers to ignore risk management
approaches completely, since admitting to the risks may seem to imply that failure of the software
project is almost a certainty.
Let us discuss the problem of risk aversion by considering the following example, based on a recent
risk management paper [Boehm, DeMarco 97]. In this example a software project manager may be seen to
avoid risks and display a confident can-do attitude.
Suppose two more personnel are needed to complete the final testing of a software project so that it
may meet its deadline. This is a small risk, easily resolved by adding the required staff. In contrast,
a fatal risk might be that the date set for the release of the software was a hopeless and drastic
underestimate. Ignoring the small risk merely shows a flawed can-do attitude; ignoring the large risk
would be considered incompetent.
Poor Infrastructure
The second largest problem in software risk management is the lack of a good infrastructure in the
organization to support effective risk management. A crisis may occur within a project or be
precipitated by a customer, and handling it may divert the project manager from risk assessment
activity. Insufficient staffing, or a lack of appropriate decisions from senior management, may mean
that a systematic approach to risk assessment is not in place. Design flaws may occur in the project
and be missed. Another problem is that staffing resources may be diverted from elsewhere to deal with
a crisis in one project, causing difficulties when risks are reassessed. If problems had been
identified and risks managed from the start of the project, it is likely that many would have been
avoided because they were anticipated.
I Introduction
1. Scope and purpose of document
2. Overview of major risks
3. Responsibilities
a. Management
b. Technical staff
b. Monitoring
i. Factors to be monitored
ii. Monitoring approach
c. Management
i. Contingency plan
ii. Special considerations
V Summary
6.6.1 Scheduling
Software project scheduling is an activity that involves:
1. identification of a set of tasks
2. establishing the interdependencies among the tasks
3. identification of resources including people
4. estimation of effort associated with each task
5. creation of a task network, a graphical representation of the task flow for a project
6. identification of a time-line schedule
Scheduling drives the project control activity, which is based on the project work breakdown
structure. Project control is necessary in order to monitor the progress of activities against plans,
to ensure that goals are being approached and eventually achieved.
Compiler project
Figure 6.1 shows the work breakdown structure for compiler construction.
Most project control techniques are based on breaking the goal of the project into several
intermediate goals, each of which is decomposed further. This process is repeated until each goal is
small enough to be well understood.
Two general scheduling techniques are followed:
• Gantt charts
• PERT charts
Gantt charts
• A Gantt chart is a project control technique that can be used for several purposes like
scheduling, budgeting and resource planning.
• It is a bar chart in which each bar represents an activity, drawn against a time-line.
• The length of each bar is proportional to the length of time planned for the activity.
• Gantt charts help in scheduling project activities but not in identifying them.
• Simple and easy to understand
• used for small and medium sized projects
• they show the tasks and their duration clearly
• they do not show inter task dependencies
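A Gantt chart of this kind can be sketched as proportional text bars, one per activity; the activities, start weeks and durations below are illustrative.

```python
# Sketch: rendering a Gantt chart as text bars, one row per activity.
def bar(name, start, weeks, width=14):
    # The bar's length is proportional to the planned duration; its offset
    # shows the planned start. Dependencies are NOT shown (a Gantt limitation).
    return f"{name:<{width}}|{' ' * start}{'=' * weeks}"

activities = [
    ("Design",        0, 4),   # (name, start week, duration in weeks)
    ("Build scanner", 4, 6),
    ("Build parser",  4, 8),
    ("Integration",  12, 3),
]
for name, start, weeks in activities:
    print(bar(name, start, weeks))
```

The rendering makes the last bullet concrete: you can see that "Build scanner" and "Build parser" start in week 4, but nothing in the chart records that they depend on "Design" finishing first.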
PERT Chart
• PERT stands for Program Evaluation and Review Technique.
• A PERT chart is a network of boxes (or circles) and arrows.
• Each box represents an activity.
• Arrows indicate the dependencies of activities on one another.
• The activity at the head of an arrow cannot start until the activity at the tail of the arrow is
finished.
• Some boxes can be designated as milestones.
• A PERT chart can show starting and ending dates.
• PERT is suitable for larger projects.
• It is not as simple as a Gantt chart.
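The dependency rule above (an activity cannot start until every activity at the tail of an incoming arrow has finished) can be sketched as a longest-path computation over a small task network; the tasks, durations and arrows below are illustrative.

```python
# Sketch: earliest start times over a PERT-style task network.
durations = {"design": 4, "scanner": 6, "parser": 8, "codegen": 10, "integrate": 3}
depends_on = {
    "design": [],
    "scanner": ["design"],
    "parser": ["design"],
    "codegen": ["design"],
    "integrate": ["scanner", "parser", "codegen"],
}

earliest = {}
def earliest_start(task):
    # A task's earliest start is the latest finish among its predecessors
    # (a recursive longest-path computation through the network).
    if task not in earliest:
        earliest[task] = max(
            (earliest_start(d) + durations[d] for d in depends_on[task]),
            default=0,
        )
    return earliest[task]

finish = max(earliest_start(t) + durations[t] for t in durations)
print("project length:", finish, "weeks")   # project length: 17 weeks
```

Here "integrate" cannot start until week 14 because "codegen" is the longest of its three predecessors; design → codegen → integrate is the critical path, and shortening any other task would not shorten the project.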
(Figure: PERT chart for the compiler project - Start (Jan 1 '94) and Design (Jan 3 '94) are followed
by the parallel tasks Build scanner, Build parser, Build code generation and Write manual (each
starting Mar 7 '94), which feed into Integration & testing (Nov 14 '94) and Finish (Mar 13 '95).)
I Introduction
A. Scope and purpose of the document
B. Project objectives
II Project Estimates
A. Historical data used for estimates
B. Estimation techniques
C. Estimation of effort, cost, duration
IV Schedule
A. Project work break down structure
B. Task network
C. Time-line-chart
D. Resource chart
V Project Resources
A. Hardware and software
B. People
C. Special resources
VI Staff Organization
A. Team structure
B. Management reporting
VIII Appendices
6.7 SUMMARY
Software management is an umbrella activity within software engineering. It is influenced by the three
Ps: people, process and problem, and it is very important to manage all three. People are central to
developing and maintaining software projects, so in order to increase the efficiency of work, they
need to be organized into proper teams. The project management activity encompasses risk management,
estimating project costs, scheduling, tracking, and control.
6.8 ANSWERS
1. decomposition
2. Gantt chart
3. COnstructive COst MOdel
4. E = Ab × (KLOC)^Bb (person-months)
5. Program Evaluation and Review Technique
6. technical risks
6.9 REFERENCES
1. Roger S. Pressman: Software Engineering - A Practitioner's Approach (Fourth Edition), McGraw-Hill, 1997.
2. Pankaj Jalote: An Integrated Approach to Software Engineering (Second Edition), Narosa Publishing House.
3. Ian Sommerville: Software Engineering (Fifth Edition), Addison-Wesley, 1996.
Chapter 7
Software Quality Assurance
7.0 INTRODUCTION
One of the important factors driving any production discipline is quality. An important goal of
software engineering is to produce high quality software, and this goal is achieved through what is
known as software quality assurance. Software Quality Assurance (SQA) is an umbrella activity that is
applied at each step in the software development process. SQA encompasses activities such as: a quality
management approach; effective software engineering techniques; Formal Technical Reviews (FTRs) applied
throughout the software process; a multi-tiered testing strategy; control of software documentation and
of the changes made to it; a procedure to assure compliance with software development standards; and
measurement and reporting mechanisms. In this chapter, we focus on concepts related to quality, the
activities that need to be carried out for producing high quality software products, and the
international standards that need to be followed in software engineering.
7.1 OBJECTIVES
At the end of this chapter you should be able to
• Define terms like quality, quality control, quality assurance
• Discuss the impact of cost on quality
• State the important quality assurance activities
• Answer the question: what is the cost impact of software defects?
• Understand the defect amplification model
Quality
Quality refers to a measurable characteristic or attribute of an item, such as color or length;
measurable characteristics of a program include cyclomatic complexity, cohesion, lines of code, etc.
While considering these characteristics, two kinds of quality are generally encountered: quality of
design and quality of conformance.
Quality of design refers to the characteristics that designers specify for an item, such as the
grade of materials and performance specifications.
Quality of conformance is the degree to which the design specifications are followed during
manufacturing.
However, in software development, quality of design includes requirements, specifications, and design
of the system. Quality of conformance is an issue which mainly depends on implementation.
Cost of Quality
It includes all costs incurred in the pursuit of quality and in performing quality-related activities.
Quality costs may be divided into costs associated with prevention, appraisal, and failure.
Types of reviews
1. Informal review - informal reviews can be unscheduled and ad hoc, wherein issues are discussed
and clarifications obtained. These could also be called coffee-shop reviews.
2. Formal review - also called a Formal Technical Review (FTR)
- This is the most effective filter for software engineering processes
- This is an effective means of improving software quality
Software defects
A defect is a product anomaly. It is a quality problem that is discovered after the software has been
released to end users.
(Figure: a development step, showing defects and their detection.)
Objectives of the FTR
1. To detect the errors in functions, logic or implementation found in software.
2. To verify that the software under review meets its requirements.
3. To ensure that the software has been represented according to predefined standards.
4. To achieve software with consistent quality and on time.
5. To make projects more manageable.
6. It acts as a training ground, enabling junior engineers to observe different approaches to
software analysis, design and implementation.
7. It serves to promote backup and continuity.
8. FTR includes walkthroughs, inspections, round-robin reviews.
The SEI has associated key process areas (KPAs) with each maturity level. Each KPA is described
by identifying the following characteristics:
• Goals
• Commitments
• Abilities
• Activities
• Methods for monitoring implementation
• Methods for verifying implementation
The factor product revision deals with maintainability, flexibility and testability. These three factors
are shown in figure 7.2.
Correctness
The extent to which a program satisfies its specification and fulfills the customer's mission objectives.
Reliability
The extent to which a program can be expected to perform its intended function with required precision.
Efficiency
The amount of computing resources and code required by a program to perform its function.
Integrity
The extent to which access to software or data by unauthorized persons can be controlled.
Usability
The effort required to learn, operate, prepare input, and interpret output of a program.
Maintainability
The effort required to locate and fix an error in a program.
Flexibility
The effort required to modify an operational program.
Testability
The effort required to test a program to ensure that it performs its intended function.
Portability
The effort required to transfer the program from one hardware and/or software system environment
to another.
Reusability
The extent to which a program can be reused in other applications.
Interoperability
The effort required to couple one system to another.
132 Chapter 7 - Software Quality Assurance
Usually, these quality factors are associated with software quality metrics like auditability, accuracy,
completeness, conciseness, consistency, data commonality, error tolerance, execution efficiency,
expandability, etc.
(Figure 7.2: McCall's quality factors - product revision: maintainability, flexibility, testability;
product transition: portability, reusability, interoperability; product operations: correctness,
reliability, usability, integrity, efficiency.)
7.9 SUMMARY
Software quality assurance is an umbrella activity that is applied at each step in the software process.
The SQA activity includes procedures for effective application of software methods and tools, FTRs,
testing strategies, etc. Software review is an important SQA activity: it serves as a filter for the
software process, removing errors while they are relatively inexpensive to find and correct. Software
quality factors help to measure the quality of a software product, and software metrics provide a
quantitative way to assess the quality of internal product attributes.
7.10 ANSWERS
1. Quality assurance
2. Software reviews
3. walkthroughs
4. ISO 9001
5. Capability Maturity Model
6. Product operations
7. Integrity
7.11 REFERENCES
1. Roger S. Pressman: Software Engineering - A Practitioner's Approach (Fourth Edition), McGraw-Hill, 1997.
2. Pankaj Jalote: An Integrated Approach to Software Engineering (Second Edition), Narosa Publishing House.
Chapter 8
Software Configuration
Management
8.0 INTRODUCTION
We know that, throughout the software development process, the software consists of a collection of
items such as programs, data and documents that can easily change. It is also seen that during
development the requirements, design and code often get changed. This changeable nature of software,
and the fact that changes take place all through the process of software development, demands that the
changes be controlled. So Software Configuration Management (SCM), a discipline that systematically
controls the changes occurring during the software development process, is followed. The goal of SCM
is to maximize productivity by minimizing mistakes. SCM is an umbrella activity that is applied
throughout the software process. SCM activities are developed to identify change, control change,
ensure that change is properly implemented, and report change to others who may be interested in it.
In this chapter, we will discuss baselines, the SCM process, version control, change control and how
they influence software quality.
8.1 OBJECTIVES
At the end of this chapter, you should be able to :
• Define software configuration management
• Say what a baseline is
• Appreciate the importance of baselines in the software development process
• Identify the responsibilities of the SCM process
• Identify the importance of versions and the way they are controlled
8.2 BASELINES
Changes to software during the software process are quite common. This is because software passes
through the hands of different stakeholders, who may think differently at different times. Customers
demand changes in their requirements, developers may want a change in their technical approach,
managers may want to modify the project management approach, and so on. But changes left uncontrolled
are very difficult to manage. So, a software configuration management concept called a baseline is
used.
A baseline is a SCM concept that helps us to control change without seriously impeding justifiable
change. As per IEEE, a baseline is defined as :
A specification or product that has been formally reviewed and agreed upon, that thereafter serves as
the basis for further development, and that can be changed only through formal change control procedure.
Before an SCI becomes a baseline, changes can be made quickly and informally. However, once a
baseline is established, the changes that can be made become specific, and a formal procedure must be
applied to evaluate and verify each change.
A baseline is a milestone in the software development process that is marked by the delivery of one or
more Software Configuration Items (SCIs), and these SCIs are approved through Formal Technical
Reviews. Figure 8.1 shows the baselines.
(Figure 8.1: Baselines - each activity yields a baselined work product: system engineering yields the
system specification; requirements analysis yields the software requirements specification; software
design yields the design specification; coding yields the source code; testing precedes release of
the operational system.)
Software engineering activities produce one or more SCIs. After SCIs are reviewed and approved, they
are placed in a project database. When a member of the software engineering team wants to modify a
baselined SCI, it is copied from the project database into the engineer's private workspace. This
extracted SCI can be modified only by following SCM controls. Figure 8.2 shows the modification path
for a baselined SCI.
(Figure 8.2: Modification path for a baselined SCI - software engineering tasks produce SCIs;
approved SCIs pass through a formal technical review and are stored in the project database;
extracted SCIs are modified under SCM controls and checked back in.)
The following SCIs are the target for the configuration management technique and form a set of
baselines :
1. system specification
2. software project plan
3. software requirements specification
a. graphical analysis model
b. process specifications
c. prototypes
d. mathematical specifications
4. preliminary user manual
5. design specification
a. data design description
b. architectural design description
c. module design description
d. interface design description
6. source code listing
7. test specification
a. test plan and procedure
b. test cases and recorded results
8. operation and installation manuals
9. executable program
a. module executable code
b. linked modules
10. database description
a. schema and file structure
b. initial content
11. as-built user manual
12. maintenance documents
a. software problem reports
b. maintenance change orders
c. engineering change orders
13. standards and procedures for software engineering
140 Chapter 8 - Software Configuration Management
(Figure: configuration objects such as the data model, design specification, interface description
and PDL, linked by their relationships.)
A module interconnection language helps in describing the interdependencies among configuration
objects and enables any version of a system to be constructed automatically. Since objects evolve
throughout the software process, the changes to them need to be described. For this, a graph called
an evolution graph is used. A diagrammatic representation of an evolution graph is shown in figure
8.4.
(Figure 8.4: Evolution graph - OBJ 1.0 evolves through OBJ 1.1, OBJ 1.1.1 and OBJ 1.1.2, then OBJ
1.2 and onward, with later versions such as OBJ 2.0 and OBJ 2.1.)
Here, configuration object OBJ 1.0 undergoes revision and becomes OBJ 1.1; OBJ 1.1 then undergoes
minor changes, resulting in OBJ 1.1.1 and OBJ 1.1.2. A major update to OBJ 1.1 becomes OBJ 1.2, and
the object similarly evolves into OBJ 1.3, OBJ 1.4, etc.
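The revision/variation numbering in the evolution graph can be sketched as a small class; the `revise`/`vary` operations and their naming rules below are an illustrative model of the OBJ n.m convention, not a prescribed algorithm.

```python
# Sketch: modelling an evolution graph of configuration object versions.
class Version:
    def __init__(self, number):
        self.number = number          # e.g. "1.1"
        self.children = []            # versions derived from this one

    def revise(self):
        # Major revision: "1.1" -> "1.2" (last component incremented).
        parts = self.number.split(".")
        parts[-1] = str(int(parts[-1]) + 1)
        child = Version(".".join(parts))
        self.children.append(child)
        return child

    def vary(self):
        # Minor change: "1.1" -> "1.1.1", "1.1.2", ... (new level appended).
        child = Version(f"{self.number}.{len(self.children) + 1}")
        self.children.append(child)
        return child

obj = Version("1.0")
v11 = obj.revise()        # OBJ 1.1
v111 = v11.vary()         # OBJ 1.1.1
v112 = v11.vary()         # OBJ 1.1.2
v12 = v11.revise()        # OBJ 1.2
print(v11.number, v111.number, v112.number, v12.number)  # 1.1 1.1.1 1.1.2 1.2
```

The children lists make the graph explicit: following them from OBJ 1.0 reproduces exactly the branching structure the text describes.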
Version representation
There are two version representations :
1. Evolution graph
2. Object-pool representation
Evolution graph
This is shown in figure 8.5, where each node represents an aggregate object (i.e. a complete version
of the software). Each version of the software is a collection of SCIs and may be composed of
different variants. Consider an example: a version of a simple program is composed of five
components, where component 4 is used only when the software is implemented for color displays and
component 5 only when it is implemented for monochrome displays. This is shown in figure 8.5.
(Figure 8.5: Object-pool representation - a version object composed of component objects, with
components 4 and 5 as variants.)
- The user submits the change request for the current version
- The developer evaluates the change request in order to assess its technical merits, side
effects, overall impact on other configuration objects, etc.
- The developer then prepares a report called a change report and submits it to the change
control authority (CCA)
- A CCA is a person or group who makes a final decision based on the change report either to
deny the change request or grant the change request.
- Every time a change is approved an engineering change order ( ECO ) is generated. This
ECO describes what changes are to be made, the constraints imposed, criteria for review
and audit.
- The object to be changed is checked-out from project database and it is modified and
appropriate SQA activities are applied.
- The changed object is then checked-in to the project database and appropriate version
control mechanisms are used to create next version of software.
The check-in and check-out processes implement two important elements of change control:
1. access control
2. synchronization control
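The change-control steps listed above can be sketched as a single flow. This is a toy model under stated assumptions: the project database is a plain dictionary, the CCA decision is a callback, and the ECO is only noted in a comment; none of the names come from a real SCM tool.

```python
# Toy model of the change-control flow described above.
database = {"parser.c": {"version": 1, "contents": "old code"}}

def check_out(name):
    # The object to be changed leaves the project database (as a copy).
    return dict(database[name], name=name)

def check_in(obj):
    # Version control creates the next version of the object.
    obj["version"] += 1
    database[obj["name"]] = obj
    return obj

def process_change_request(name, new_contents, cca_approves):
    # The developer evaluates the request and prepares a change report.
    report = {"object": name, "impact": "assessed by developer"}
    if not cca_approves(report):
        return None                      # CCA denies the change request
    # Approval would generate an ECO describing the changes, constraints,
    # and criteria for review and audit (not modelled here).
    obj = check_out(name)
    obj["contents"] = new_contents       # modify; SQA activities applied
    return check_in(obj)

result = process_change_request("parser.c", "new code", lambda report: True)
print(result["version"])                 # prints 2
```

A denied request (the callback returning False) leaves the database untouched; an approved one produces the next version of the checked-in object.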
BSIT 44 Software Engineering 145
Access control: this governs which software engineers have the authority to access and modify a particular configuration object.
Synchronization control: this ensures that parallel changes made by two different people don't overwrite one another. For this purpose, locks are used in the database.
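Synchronization control via locks can be sketched as follows; this toy model refuses a second check-out while another engineer holds the lock (the engineer and file names are made up):

```python
# Toy model of synchronization control: one lock per configuration
# object prevents parallel changes from overwriting one another.
locks = {}   # configuration object name -> engineer holding its lock

def check_out(name, engineer):
    if name in locks:                    # another engineer holds the lock
        raise RuntimeError(name + " is locked by " + locks[name])
    locks[name] = engineer

def check_in(name, engineer):
    assert locks.get(name) == engineer   # only the lock holder may check in
    del locks[name]

check_out("parser.c", "alice")
try:
    check_out("parser.c", "bob")         # second check-out is refused
except RuntimeError as exc:
    print(exc)                           # prints: parser.c is locked by alice
check_in("parser.c", "alice")            # lock released; bob may now check out
```

Real SCM tools implement the same idea with database locks or file-system locks; the point is that the lock is held for the whole check-out/check-in interval.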
- A configuration status report (CSR) is generated on a regular basis, so that management and practitioners are kept informed about important changes.
- A CSR entry is made each time an SCI is assigned new or updated identification, a change is approved by the CCA, or a configuration audit is conducted.
- The output of CSR may be placed in an on-line database so that developers and maintainers can access the change information.
- CSR is an SCM task that answers questions such as:
1. What happened?
2. Who did it?
3. When did it happen?
4. What else will be affected?
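The four questions above suggest the shape of a CSR entry. A minimal sketch, assuming a simple in-memory log (the field names and sample events are illustrative):

```python
from datetime import date

# Sketch of configuration status reporting: one log entry is appended
# per triggering event (new/updated SCI identification, CCA approval,
# configuration audit).
csr_log = []

def record(what, who, affected):
    csr_log.append({
        "what": what,                          # what happened?
        "who": who,                            # who did it?
        "when": date.today().isoformat(),      # when did it happen?
        "affected": affected,                  # what else will be affected?
    })

record("change approved by CCA", "alice", ["parser.c", "user manual"])
record("configuration audit conducted", "SQA group", [])
print(len(csr_log))                            # prints 2
```

Placing such a log in an on-line database is what lets developers and maintainers look up the history of any change.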
8.4 SUMMARY
Software configuration management is an umbrella activity that is applied throughout the software process. SCM identifies, controls, audits, and reports modifications that occur during software development and after the release of the software. SCM can be viewed as a software quality assurance activity that is applied throughout the software process. Once a configuration object has been developed and reviewed, it becomes a baseline. A change to a baselined object results in the creation of a new version of that object, to which controls need to be applied. Change control is a procedural activity that ensures quality and consistency as changes are made to a configuration object. The software configuration audit is an SQA activity that helps to maintain quality even as changes are applied to the software. Information about the changes taking place is given out in the form of status reports.
148 Chapter 8 - Software Configuration Management
I. Answers
1. Software Configuration Items
2. Formal Technical Reviews
3. Software Quality Assurance
4. Module interconnection language
5. Object-pool representation
6. Engineering Change Order
7. Synchronization control
8. Status reporting
8.6 REFERENCES
1. Roger S. Pressman, Software Engineering: A Practitioner's Approach, Fourth Edition, McGraw-Hill, 1997.
2. Pankaj Jalote, An Integrated Approach to Software Engineering, Second Edition, Narosa Publishing House.
3. Richard Fairley, Software Engineering Concepts, McGraw-Hill Inc., 1985.
GLOSSARY
allocation The process of distributing requirements, resources, or other entities among the
components of a system or program.
analysis In system/software engineering, the process of studying a system by partitioning the
system into parts (functions or objects) and determining how the parts relate to each other to understand
the whole.
architectural design The process of defining a collection of hardware and software components
and their interfaces to establish the framework for the development of a computer system
audit An independent examination of a work product or set of work products to assess compliance
with specifications, standards, contractual agreements, or other criteria
code In software engineering, computer instructions and data definitions expressed in a programming
language or in a form output by an assembler, compiler, or other translator.
coding In software engineering, the process of expressing a computer program in a programming
language.
configuration auditing In configuration management, an independent examination of the configuration
status to compare with the physical configuration.
configuration control In configuration management, an element of configuration management
consisting of the evaluation, coordination, approval or disapproval, and implementation of changes to
configuration items after formal establishment of their configuration identification.
cost estimate The estimated cost to perform a stipulated task or acquire an item.
data design Design of a program's data, especially table design in database applications.
design The process of defining the architecture, components, interfaces, and other characteristics
of a system or component.
error The difference between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition. [Also used informally for the human action that produces an incorrect result.]
functional design The process of defining the working relationships among the components of
a system.
initiation and scope A non-specific term for work performed early in a software project that
includes high-level statements of requirements and rough estimates of project cost and schedule.
inspections A static analysis technique that relies on visual examination of development products
to detect errors, violations of development standards, and other problems.
metric A quantitative measure of the degree to which a system, component, or process possesses
a given attribute.
performance The degree to which a system or component accomplishes its designated functions
within given constraints, such as speed, accuracy, or memory usage.
performance analysis techniques and tools Techniques and tools that are used to measure and
evaluate the performance of a software system
process (1) A sequence of steps performed for a given purpose; for example, the software development process. (2) An executable unit managed by an operating system scheduler. (3) To perform operations on data.
process management (Software Engineering Process) The direction, control, and coordination of work performed to develop a product or perform a service.
product attributes Characteristics of a software product. Can refer either to general characteristics
such as reliability, maintainability, and usability or to specific features of a software product.
product measure A metric (e.g. measures per person-month) that can be used to measure the
characteristics of delivered documents and software.
project management A system of procedures, practices, technologies, and know-how that provides
the planning, organizing, staffing, directing, and controlling necessary to successfully manage an engineering
project.
project plan A document that describes the technical and management approach to be followed
for a project.
quality analysis A planned and systematic pattern of all actions necessary to provide adequate
confidence that an item or product conforms to established technical requirements.
quality attribute A feature or characteristic that affects an item's quality. In a hierarchy of quality attributes, higher-level attributes may be called quality factors, and lower-level attributes may be called quality attributes.
quality management (Software Engineering Management - Planning) That aspect of the overall
management function that determines and implements the quality policy.
requirement A condition or capability that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document.
requirements elicitation (software requirements) The process through which the customers
(buyers and/or users) and developer (contractor) of a software system discover, review, articulate, and
understand the requirements of the system.
requirements engineering A method of obtaining a precise, formal specification from the informal and often vague requirements expressed by a customer.
reviews A process or meeting during which a work product, or set of work products, is presented
to project personnel, managers, users, customers, or other interested parties for comment or approval.
risk management (Software Engineering Management) In system/software engineering, an umbrella title for the processes used to manage risk. It is an organized means of identifying and measuring risk (risk assessment) and of developing, selecting, and managing options (risk analysis) for resolving (risk handling) these risks. The primary goal of risk management is to identify and respond to potential problems with sufficient lead time to avoid a crisis situation.
schedule and cost estimates (Software Engineering Management - Planning) The management
activity of determining the probable cost of an activity or product and the time to complete the activity or
deliver the product.
software Computer programs, procedures, and possibly associated documentation and data pertaining to the operation of a computer system. Contrast with hardware.
test coverage (software testing) The degree to which a given test or set of tests addresses all
specified requirements for a given system or component.
test coverage of code (software testing) The amount of code actually executed during the test
process.
test design (software testing) Documentation specifying the details of the test approach for a
software feature or combination of software features and identifying the associated tests.
test documentation (software testing) Documentation describing plans for, or results of, the
testing of a system or component. Types include test case specification, test incident report, test log, test
plan, test procedure, test report.
test execution (software testing) Act of performing one or more test cases.
testing strategies (software testing) In software engineering, one of a number of approaches
used for testing software.
user interface An interface that enables information to be passed between a human user and
hardware or software components of a computer system.
V&V (verification and validation) The process of determining whether the requirements for a
system or component are complete and correct, the products of each development phase fulfill the
requirements or conditions imposed by the previous phase, and the final system or component complies
with specified requirements.
validation The process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements.
verification The process of evaluating a system or component to determine whether the products
of a given development phase satisfy the conditions imposed at the start of that phase.
version An initial release or re-release of a computer software configuration item, associated
with a complete compilation or recompilation of the computer software configuration item.
REFERENCE BOOKS
1. Roger S. Pressman, Software Engineering: A Practitioner's Approach, McGraw-Hill, 1997, ed. 4.
2. Edward Kit, Software Testing in the Real World, Addison-Wesley, 2000, ed. 1.
3. Ian Sommerville, Software Engineering, Addison-Wesley, 1996, ed. 5.
4. Pankaj Jalote, An Integrated Approach to Software Engineering, Narosa, 1997, ed. 2.
5. Shari Lawrence Pfleeger, Software Engineering: Theory and Practice, Pearson Education, 2001, ed. 2.
6. Richard Fairley, Software Engineering Concepts, McGraw-Hill Inc., 1985.
7. Barry W. Boehm, Software Engineering Economics, Prentice-Hall Inc., 1981.