
Chapter 1

Overview of Software Engineering & the Software Development Process

Q.1 Define software and explain its impact.


Software is a set of instructions (programs) that execute within a computer of any size and
architecture; documents that encompass hard-copy and virtual forms; and data that combine
numbers and text as well as representations of pictorial, video, and audio information.
Software's impact on society and culture continues to be profound. As its importance grows,
the software community continually attempts to develop technologies that will make it easier,
faster, and less expensive to build high-quality computer programs. Some of these technologies
are targeted at a specific application domain, such as web-site design and implementation; others
focus on a technology domain, such as object-oriented systems; and still others are broad-based,
such as operating systems like LINUX. However, a software technology has to produce useful
information. The technology encompasses a process, a set of methods, and an array of tools,
collectively called software engineering.

Q.2 Explain the Evolving Role of Software.


Nowadays, software plays a dual role: it is both a product and a vehicle for delivering a product.
As a product, it delivers the computing potential embodied by computer hardware or by a
network of computers accessible through local hardware. Whether it resides within a mobile
phone or operates inside a mainframe computer, software is an information transformer: it
produces, manages, acquires, modifies, displays, or transmits information.
The software:
delivers a product with useful information
transforms data so that it can be more useful in a local context
manages business information to enhance competitiveness
provides a gateway to worldwide networks such as the internet
The role of computer software has undergone significant change over a time span of little more
than 50 years.

Q.3 Explain the software evolution/laws of software evolution.

1. (1974) Continuing Change: E-type systems must be continually adapted, or they become
progressively less satisfactory.
2. (1974) Increasing Complexity: As an E-type system evolves, its complexity increases
unless work is done to maintain or reduce it.
3. (1974) Self Regulation: The E-type system evolution process is self-regulating, with the
distribution of product and process measures close to normal.
4. (1978) Conservation of Organizational Stability: The average effective global activity
rate in an evolving E-type system is invariant over the product's lifetime.
5. (1978) Conservation of Familiarity: As an E-type system evolves, all associated with it
(developers, sales personnel, and users, for example) must maintain mastery of its content and
behavior to achieve satisfactory evolution. Excessive growth diminishes that mastery;
hence the average incremental growth remains invariant as the system evolves.
6. (1991) Continuing Growth: The functional content of E-type systems must be
continually increased to maintain user satisfaction over their lifetime.
7. (1996) Declining Quality: The quality of E-type systems will appear to be declining
unless they are rigorously maintained and adapted to operational environment changes.
8. (1996) Feedback System (first stated 1974, formalized as a law in 1996): E-type evolution
processes constitute multi-level, multi-loop, multi-agent feedback systems and must be
treated as such to achieve significant improvement over any reasonable base.

Q.4 Define legacy software system and state the reasons for keeping one in use and the reasons it may fall out of use.

A legacy system is an old method, technology, computer system, or application program that continues to be used.

Organizations can have compelling reasons for keeping a legacy system, such as:

The system works satisfactorily, and the owner sees no reason for changing it.
The costs of redesigning or replacing the system are prohibitive because it is large,
monolithic, and/or complex.
Retraining on a new system would be costly in lost time and money, compared to the
anticipated appreciable benefits of replacing it (which may be zero).
The system requires near-constant availability, so it cannot be taken out of service, and
the cost of designing a new system with a similar availability level is high. Examples
include systems to handle customers' accounts in banks, computer reservation systems,
air traffic control, energy distribution (power grids), nuclear power plants, military
defense installations, and systems such as the TOPS database.
The way that the system works is not well understood. Such a situation can occur when
the designers of the system have left the organization and the system has either not been
fully documented or documentation has been lost.
The user expects that the system can easily be replaced when this becomes necessary.

Legacy systems are considered to be potentially problematic by many software engineers for
several reasons.

Legacy systems often run on obsolete (and usually slow) hardware, and spare parts for
such computers may become increasingly difficult to obtain.
If legacy software runs on only antiquated hardware, the cost of maintaining the system
may eventually outweigh the cost of replacing both the software and hardware unless
some form of emulation or backward compatibility allows the software to run on new
hardware.
These systems can be hard to maintain, improve, and expand because there is a general
lack of understanding of the system; the staff who were experts on it have retired or
forgotten what they knew about it, and staff who entered the field after it became
"legacy" never learned about it in the first place. This can be worsened by lack or loss of
documentation.

Legacy systems may have vulnerabilities in older operating systems or applications due
to lack of security patches being available or applied. There can also be production
configurations that cause security problems. These issues can put the legacy system at
risk of being compromised by attackers or knowledgeable insiders.
Integration with newer systems may also be difficult because new software may use
completely different technologies. Bridge hardware and software tend to become available
only for technologies that are popular at the same time; they are rarely developed for
technologies from different eras, because demand is small and there is no large-market
economy of scale to reward the effort, though some of this "glue" does get developed by
vendors and enthusiasts of particular legacy technologies.
Q.5 Define software and explain its characteristics.
Software is defined as:

Instructions that, when executed, provide the desired features and functions.
Data structures that enable the programs to manipulate information.
Documents that describe the operation and use of the programs.

Following are the characteristics of software:

Software is not manufactured in the classical sense, but it is developed or engineered:

Software development and hardware manufacture share some similarities: in both, high
quality is achieved through good design. But the manufacturing phase for hardware can
introduce quality problems that do not exist (or are easily corrected) for software, and software
costs are concentrated in engineering rather than in manufacturing. For these reasons it is said
that software is not manufactured but developed or engineered.

Software doesn't wear out:

Hardware can wear out, whereas software cannot. The failure rate of hardware follows a
"bathtub" curve, which plots failure rate against time. Early in the life of hardware, the failure
rate is relatively high; then defects are corrected and the failure rate drops to a steady-state level
for some period of time. As time passes, however, the failure rate rises again as hardware
components suffer the cumulative effects of dust, vibration, temperature extremes, and many
other environmental maladies: the hardware begins to wear out.

Software is not susceptible to the environmental maladies that cause hardware to wear out.
Its failure rate can be understood from an "idealized curve": the failure rate is high early in the
life of a program, but as errors are corrected the curve flattens. The clear implication is that
software can "deteriorate", but it does not "wear out".

The deterioration is explained by the actual curve: as soon as one error is corrected, the curve
encounters another spike, meaning another change has introduced new errors. Over time the
steady state does not remain steady, and the minimum failure rate begins to rise. Moreover,
when hardware fails it can be replaced by a spare part, but there are no spare parts for software.
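
These curves are qualitative, but a rough numeric sketch can make the shapes concrete. The Python fragment below models a hypothetical hardware "bathtub" curve and a software curve in which each maintenance change adds a spike and permanently raises the failure-rate floor. The functional forms and every constant are illustrative assumptions, not data from this text.

```python
# Illustrative failure-rate curves; all functional forms and constants
# are invented for this sketch.
import math

def hardware_failure_rate(t):
    """Bathtub curve: early defects decay, then wear-out dominates."""
    infant_mortality = 5.0 * math.exp(-t / 2.0)  # early defects, corrected over time
    wear_out = 0.02 * t ** 2                     # cumulative environmental wear
    return 0.5 + infant_mortality + wear_out

def software_failure_rate(t, change_times=(10, 20, 30)):
    """Idealized decay, plus a spike and a raised floor after each change."""
    rate = 0.2 + 5.0 * math.exp(-t / 3.0)        # idealized curve: flattens, never wears out
    for c in change_times:
        if t >= c:
            rate += 2.0 * math.exp(-(t - c)) + 0.3  # spike decays; floor rises for good
    return rate

for t in range(0, 41, 5):
    print(f"t={t:2d}  hardware={hardware_failure_rate(t):6.2f}"
          f"  software={software_failure_rate(t):6.2f}")
```

The point of the sketch is the shape: the hardware curve eventually rises because of physical wear, while the software curve rises only because maintenance changes keep introducing new defects.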

Software is flexible:

Software is said to be "flexible" because it can be developed to solve any type of problem,
and it can also be changed if the requirements change.

Reusability of components:

In engineering, reusability of components is a natural part of the process. An engineer does
not need to build everything from scratch: to manufacture a television quickly, for example, he
can take the picture tube from one spare TV and the cabinet from another, and assemble a
working set by joining the parts.

Software can be developed in a similar manner. A developer does not need to start writing
code from scratch but can use already developed software to build the new product; for
example, a graphical user interface can be assembled from components developed prior to the
new software. Instead of working out how to build such components, the developer simply
reuses the existing code.

Although the industry is moving toward component-based assembly, most software
continues to be custom built:
Consider the manner in which the control hardware for a computer-based product is
designed and built. The design engineer draws a simple schematic of the digital circuitry, does
some fundamental analysis to assure that proper function will be achieved, and then goes to the
shelf where catalogs of digital components exist.
Each integrated circuit (called an IC or a chip) has a part number, a defined and validated
function, a well-defined interface, and a standard set of integration guidelines. After each
component is selected, it can be ordered off the shelf. As an engineering discipline evolves, a
collection of standard design components is created. Standard screws and off-the-shelf integrated
circuits are standard components that are used by mechanical and electrical engineers to design
new systems. The reusable components have been created so that the engineer can concentrate
on the truly innovative elements of a design, that is, the parts of the design that represent
something new. In the hardware world, component reuse is a natural part of the engineering
process.
A software component should be designed and implemented so that it can be reused in
many different programs. In the 1960s, we built scientific subroutine libraries that were reusable
in a broad array of engineering and scientific applications. These subroutine libraries reused
well-defined algorithms in an effective manner but had a limited domain of application. Today,
the view of reuse has been extended to encompass not only algorithms but data structures as
well. Modern reusable components encapsulate both data and the processing applied to the data,
enabling the software engineer to create new applications from reusable parts.
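
As a small, concrete illustration of this idea, the sketch below builds a tiny "new application" from existing, validated standard-library components (Python's csv and statistics modules) instead of re-implementing parsing and math routines from scratch; the input data is invented for the example.

```python
# Component reuse: assemble a report from existing, validated components
# rather than writing parsing and statistics code from scratch.
import csv
import io
import statistics

# Hypothetical input, standing in for a measurement data file.
raw = "reading\n10.5\n11.2\n10.8\n11.0\n"

readings = [float(row["reading"]) for row in csv.DictReader(io.StringIO(raw))]

# The mean/stdev algorithms are reused, not re-derived or re-coded.
print(f"mean  = {statistics.mean(readings):.3f}")
print(f"stdev = {statistics.stdev(readings):.3f}")
```

Like the engineer's off-the-shelf integrated circuits, each reused module has a defined, validated function and a well-defined interface, so the developer can concentrate on the truly innovative parts of the design.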

Q.6 Define software engineering
Software engineering (SE) is the application of a systematic, disciplined, quantifiable approach
to the development, operation, and maintenance of software, and the study of these approaches;
that is, the application of engineering to software. It integrates significant mathematics,
computer science, and practices whose origins are in engineering. It is also defined as a
systematic approach to the analysis, design, assessment, implementation, testing, maintenance,
and reengineering of software. OR

Software engineering is the establishment and use of sound engineering principles in order to
obtain economically software that is reliable and works efficiently on real machines.
The IEEE has developed a more comprehensive definition when it states:
"Software engineering is the application of a systematic, disciplined, quantifiable approach to
the development, operation, and maintenance of software; that is, the application of engineering
to software."

Q.7 Explain the changing nature of software.


Software has become an integral part of most fields of human life: name a field, and you will
find software in use there. For convenience, software applications are grouped into eight areas,
as explained below.
(i) System software: Infrastructure software such as compilers, operating systems, editors, and
drivers comes under this category. Basically, system software is a collection of programs that
provide services to other programs.
(ii) Real time software: This software is used to monitor, control and analyze real world events
as they occur. An example may be software required for weather forecasting. Such software will
gather and process the status of temperature, humidity and other environmental parameters to
forecast the weather.
(iii) Embedded software: This type of software is placed in Read-Only-Memory (ROM) of
the product and controls the various functions of the product. The product could be an aircraft,
automobile, security system, signaling system, control unit of a power plant, etc. The embedded
software handles hardware components and is also termed intelligent software.
(iv) Business software: This is the largest application area. The software designed to process
business applications is called business software. Business software could be payroll, file
monitoring system, employee management, and account management. It may also be a data
warehousing tool which helps us to take decisions based on available data. Management
information system, enterprise resource planning (ERP) and such other software are popular
examples of business software.
(v) Personal computer software: The software used in personal computers is covered in this
category. Examples are word processors, computer graphics, multimedia and animation tools,
database management software, and computer games. This is a fast-growing area, and many big
organizations are concentrating their effort here due to the large customer base.

(vi) Artificial intelligence software: Artificial intelligence software makes use of non-numeric
algorithms to solve complex problems that are not amenable to computation or straightforward
analysis. Examples are expert systems, artificial neural networks, signal processing software, etc.
(vii) Web based software: The software related to web applications comes under this category.
Examples are CGI, HTML, Java, Perl, DHTML etc.
(viii) Engineering and scientific software: Scientific and engineering application software is
grouped in this category. Heavy computation is normally required to process data. Examples are
CAD/CAM packages, SPSS, MATLAB, Engineering Pro, circuit analyzers, etc.
Expectations from software are increasing in modern civilization, and software in any of
the above groups has a specialized role to play. Customers and development organizations desire
more features, which may not always be possible to provide. Another trend has emerged:
providing source code to customers and organizations so that they can make modifications for
their own needs. This trend is particularly visible in infrastructure software such as databases,
operating systems, and compilers. Software whose source code is available is known as open
source, and organizations can develop applications around such code. Some examples of open
source software are LINUX, MySQL, PHP, OpenOffice, and the Apache web server. Open
source software has risen to great prominence. These are programs whose licenses give users
the freedom to run the program for any purpose, to study and modify the program, and to
redistribute copies of either the original or modified program without paying royalties to the
original developers. Is open source software better than proprietary software? The answer is not
easy; both schools of thought are in the market. However, the popularity of much open source
software gives confidence to users, and it can help in developing small business applications at
low cost.

Q.8 Explain software myths in detail.

Many causes of a software affliction can be traced to a mythology that arose during the
early history of software development. Software myths propagated misinformation and
confusion, and they had a number of attributes that made them insidious. Today, most
knowledgeable professionals recognize myths for what they are: misleading attitudes that have
caused serious problems for managers and technical people alike. However, old attitudes and
habits are difficult to modify, and remnants of software myths are still believed.
Management myths. Managers with software responsibility, like managers in most disciplines,
are often under pressure to maintain budgets, keep schedules from slipping, and improve quality.
Like a drowning person who grasps at a straw, a software manager often grasps at belief in a
software myth if that belief will lessen the pressure.
Myth: We already have a book that's full of standards and procedures for building software,
won't that provide my people with everything they need to know?
Reality: The book of standards may very well exist, but is it used? Are software practitioners
aware of its existence? Does it reflect modern software engineering practice? Is it complete? Is it
streamlined to improve time to delivery while still maintaining a focus on quality? In many
cases, the answer to all of these questions is no.
Myth: My people have state-of-the-art software development tools; after all, we buy them
the newest computers.
Reality: It takes much more than the latest model mainframe, workstation, or PC to do high-
quality software development. Computer-aided software engineering (CASE) tools are more
important than hardware for achieving good quality and productivity, yet the majority of
software developers still do not use them effectively.
Myth: If we get behind schedule, we can add more programmers and catch up.
Reality: Software development is not a mechanistic process like manufacturing. In the words of
Brooks "adding people to a late software project makes it later." At first, this statement may
seem counterintuitive. However, as new people are added, people who were working must spend
time educating the newcomers, thereby reducing the amount of time spent on productive
development effort. People can be added but only in a planned and well-coordinated manner.
Myth: If I decide to outsource the software project to a third party, I can just relax and let that
firm build it.
Reality: If an organization does not understand how to manage and control software projects
internally, it will invariably struggle when it outsources software projects.
Customer myths. A customer who requests computer software may be a person at the next desk,
a technical group down the hall, the marketing/sales department, or an outside company that has
requested software under contract. In many cases, the customer believes myths about software
because software managers and practitioners do little to correct misinformation. Myths lead to
false expectations (by the customer) and ultimately, dissatisfaction with the developer.
Myth: A general statement of objectives is sufficient to begin writing programs; we can fill in
the details later.
Reality: A poor up-front definition is the major cause of failed software efforts. A formal and
detailed description of the information domain, function, behavior, performance, interfaces,
design constraints, and validation criteria is essential. These characteristics can be determined
only after thorough communication between customer and developer.
Myth: Project requirements continually change, but change can be easily accommodated because
software is flexible.
Reality: It is true that software requirements change, but the impact of change varies with the
time at which it is introduced. Figure 1.3 illustrates the impact of change. If serious attention is
given to up-front definition, early requests for change can be accommodated easily. The
customer can review requirements and recommend modifications with relatively little impact on
cost. When changes are requested during software design, the cost impact grows rapidly.
Resources have been committed and a design framework has been established. Change can cause
upheaval that requires additional resources and major design modification, that is, additional
cost. Changes in function, performance, interface, or other characteristics during implementation
(code and test) have a severe impact on cost. Change, when requested after software is in
production, can be over an order of magnitude more expensive than the same change requested
earlier.
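
To make "over an order of magnitude" concrete: the cost-of-change ratios commonly cited with
this figure are roughly 1x during definition, 1.5x to 6x during development, and 60x to 100x
after release (the exact multipliers vary by source and are illustrative, not from this text). On
those assumptions, a change costing $1,000 when requested during requirements definition
might cost $1,500 to $6,000 during design and coding, and $60,000 to $100,000 once the
software is in production.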
Practitioner's myths. Myths that are still believed by software practitioners have been fostered
by 50 years of programming culture. During the early days of software, programming was
viewed as an art form. Old ways and attitudes die hard.
Myth: Once we write the program and get it to work, our job is done.
Reality: Someone once said that "the sooner you begin 'writing code', the longer it'll take you to
get done." Industry data indicate that between 60 and 80 percent of all effort expended on
software will be expended after it is delivered to the customer for the first time.
Myth: Until I get the program "running" I have no way of assessing its quality.
Reality: One of the most effective software quality assurance mechanisms can be applied from
the inception of a project: the formal technical review. Software reviews are a "quality filter"
that have been found to be more effective than testing for finding certain classes of software
defects.
Myth: The only deliverable work product for a successful project is the working program.
Reality: A working program is only one part of a software configuration that includes many
elements. Documentation provides a foundation for successful engineering and, more important,
guidance for software support.
Myth: Software engineering will make us create voluminous and unnecessary documentation
and will invariably slow us down.
Reality: Software engineering is not about creating documents. It is about creating quality. Better
quality leads to reduced rework. And reduced rework results in faster delivery times. Many
software professionals recognize the fallacy of the myths just described. Regrettably, habitual
attitudes and methods foster poor management and technical practices, even when reality dictates
a better approach. Recognition of software realities is the first step toward formulation of
practical solutions for software engineering.

Q.9 Explain software engineering as layered technology approach.

(Layers, from top to bottom: tools, methods, process model, a quality focus.)

Software engineering is a layered technology. Most engineering approaches (including
software engineering) must rest on an organizational commitment to quality. The bedrock that
supports software engineering is the quality focus layer.
- Quality: A product should meet its specification. This is problematic for software systems,
because there is a tension between customer quality requirements (efficiency, reliability, etc.),
developer quality requirements (maintainability, reusability, etc.), and user quality requirements
(usability, efficiency, etc.). Note also:
Some quality requirements are difficult to specify in an unambiguous way.
Software specifications are usually incomplete and often inconsistent.
- Process: The foundation for software engineering is the process layer. The software
engineering process is the glue that holds the technology layers together and enables rational and
timely development of computer software. The process defines how work products are produced,
milestones are established, quality is ensured, and change is properly managed.
- Methods: Software engineering methods provide the technical how-to's for building software.
Methods encompass a broad array of tasks that include requirements analysis, design,
program construction, testing, and support.
- Tools: Software engineering tools provide automated or semi-automated support for the
process and the methods. When tools are integrated so that information created by one
tool can be used by another, a system for the support of software development, called computer-
aided software engineering (CASE), is established. CASE combines software, hardware, and a
software engineering database.
Q.10 Explain generic view of software engineering.

A generic view of software engineering is the analysis, design, construction,
verification, and management of technical entities. Regardless of the entity to be engineered, the
following questions must be asked and answered:
- What is the problem to be solved?
- What characteristics of the entity are used to solve the problem?
- How will the entity (and the solution) be realized?
- How will the entity be constructed?
- What approach will be used to uncover the errors that were made in the design and construction
of the entity?
- How will the entity be supported over the long term, when corrections, adaptations, and
enhancements are requested by users of the entity?

Software is engineered by applying three distinct phases that focus on definition, development,
and support.
- The definition phase focuses on what.
- The development phase focuses on how.
- The support phase focuses on change. Changes arise from corrections, from adaptation to a
changing environment, and from enhancements driven by changing customer requirements. The
support phase reapplies the steps of the definition and development phases. Four types of change
are encountered during the support phase: correction, adaptation, enhancement, and prevention.

Q.11 Explain generic process framework activities.

Each framework activity is populated by a set of software engineering actions. An action, e.g.
design, is a collection of related tasks that produce a major software engineering work product.

Communication: lots of communication and collaboration with the customer and other
stakeholders. (Encompasses requirements gathering.)

Planning: establishes a plan for the software engineering work that follows. It describes the
technical tasks, likely risks, required resources, work products, and a work schedule.

Modeling: encompasses the creation of models that allow the developer and customer to better
understand software requirements and the design that will achieve those requirements.

The modeling activity is composed of two software engineering actions:

Analysis: composed of work tasks (e.g. requirements gathering, elaboration, specification, and
validation) that lead to the creation of the analysis model and/or requirements specification.

Design: encompasses work tasks such as data design, architectural design, interface design, and
component-level design. (Leads to the creation of the design model and/or a design specification.)

Construction: code generation and testing.
Deployment: the software, partial or complete, is delivered to the customer, who evaluates it and
provides feedback.

For example:
Requirements gathering is an important software engineering action that occurs during the
communication activity. The goal is to understand what the various stakeholders want from the
software that is to be built.
For a small, simple project, the requirements gathering task set might be:
1. Make a list of stakeholders.
2. Invite the stakeholders to an informal meeting.
3. Ask each one to make a list of the features and functions required.
4. Discuss the requirements and build a final list.
5. Prioritize the requirements.
6. Note areas of uncertainty.

For a larger, more complex project, the task set might be:
1. Make a list of stakeholders.
2. Interview each stakeholder separately to determine overall wants and needs.
3. Build a preliminary list of functions and features based on stakeholder input.
4. Schedule a series of facilitated requirements gathering meetings.
5. Conduct the meetings.
6. Produce informal user scenarios as part of each meeting.
7. Refine the user scenarios based on feedback.
8. Build a revised list of requirements.
9. Use quality function deployment to prioritize the requirements.
10. Package the requirements so that they can be delivered incrementally.
11. Note the constraints that will be placed on the system.
12. Discuss methods for validating the system.
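
The point that one framework activity (here, communication) is populated by different task sets depending on project characteristics can be sketched as data. The structure and names below are purely illustrative assumptions, not part of any standard:

```python
# Illustrative only: one framework activity, two alternative task sets.
SMALL_PROJECT_TASKS = [
    "Make a list of stakeholders",
    "Invite the stakeholders to an informal meeting",
    "Ask each one to list the features and functions required",
    "Discuss the requirements and build a final list",
    "Prioritize the requirements",
    "Note areas of uncertainty",
]

LARGE_PROJECT_TASKS = [
    "Make a list of stakeholders",
    "Interview each stakeholder separately",
    "Build a preliminary list of functions and features",
    "Schedule facilitated requirements gathering meetings",
    # ... the remaining steps from the larger task set above
]

# A task-set catalog keyed by (activity, project complexity).
TASK_SETS = {
    ("communication", "small"): SMALL_PROJECT_TASKS,
    ("communication", "large"): LARGE_PROJECT_TASKS,
}

def task_set(activity, complexity):
    """Select the task set that populates an activity for a given project."""
    return TASK_SETS[(activity, complexity)]

for step, task in enumerate(task_set("communication", "small"), start=1):
    print(f"{step}. {task}")
```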

Q.12 Enlist and explain different umbrella activities.


The process framework is augmented by a number of umbrella activities. Typical ones are:
Software project tracking and control: allows the software team to assess progress against the
project plan and take the action necessary to maintain the schedule.
Risk management: assesses risks that may affect the outcome of the project or the quality of
the product.
Formal technical reviews: assess software engineering work products in an effort to uncover and
remove errors before they are propagated to the next action or activity.
Measurement: defines and collects process, project, and product measures that assist the team in
delivering software.
Software configuration management: manages the effects of change throughout the software
process.
Reusability management: defines criteria for work product reuse and establishes mechanisms to
achieve reusable components.
Work product preparation and production: encompasses the work activities required to create
work products such as documents, logs, forms, and lists.
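
As a small, hedged illustration of the measurement activity, the sketch below computes two widely used measures: defect density (a product measure) and review yield (a process measure). The formulas are standard, but the input figures are invented.

```python
# Illustrative measurement helpers; the input numbers are hypothetical.
def defect_density(defects_found, size_kloc):
    """Defects per thousand lines of code (KLOC): a simple product measure."""
    return defects_found / size_kloc

def review_yield(found_in_review, found_later):
    """Percentage of total defects caught by reviews: a simple process measure."""
    total = found_in_review + found_later
    return 100.0 * found_in_review / total

print(f"defect density = {defect_density(42, 12.5):.2f} defects/KLOC")
print(f"review yield   = {review_yield(30, 12):.1f} %")
```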

Q.13 Why do we Need Processes?

The answer, simply, is that faults which are hard to fix in the later phases of a project are easy to
fix in the earlier ones. It is therefore mandatory to control development through a well-defined
and systematic process; to control large developments, formally defined processes are needed.
Development processes are also required to provide visibility into projects. Visibility in turn aids
timely management control and mid-course corrections against expected errors and crises. It
helps developers weed out faults early, before they cause entire failures, and it avoids the
cascading of faults into later phases, where the accumulation of an increased number of faults
leads to failure. The adoption of a formal development process with well-defined, unambiguous,
and complete policies leads to correct requirements, which generate a correct product that meets
the customer's needs precisely and includes the necessary features; this, in turn, reduces
post-development cost.

Q.14 State the objectives of software process improvement.

As you design, implement, and adjust your software process improvement program, it's
important to keep these four primary objectives of process improvement in sight:
1. To understand the current state of software engineering and management practice in an
organization.
2. To select improvement areas where changes can yield the greatest long-term benefits.
3. To focus on adding value to the business, not on achieving someone's notion of "Process
Utopia".
4. To prosper by combining effective processes with skilled, motivated, and creative people.

Q.15 Explain the waterfall model/Linear Sequential Model.

This is the first model ever formalized, and other process models are based on this approach to
development. It suggests a systematic and sequential approach to the development of the
software. It begins by analyzing the system, progressing to the analysis of the software, design,
coding, testing and maintenance. It insists that a phase cannot begin unless the previous phase is
finished. Figure shows this type of software process model.

Q.16 State the problems of waterfall model/linear sequential model.

Problems with the waterfall model are:

Real projects rarely follow the sequential flow, and changes can cause confusion.
The model has difficulty accommodating requirements change.
The customer will not see a working version until the project is nearly complete.
Developers are often blocked unnecessarily due to previous tasks not being done.

Q.17 Explain the incremental model and its advantages.

The incremental model combines elements of the linear sequential model (applied repetitively)
with the iterative philosophy of prototyping. When an incremental model is used, the first
increment is often the core product. The subsequent increments add the supporting
functionality or the add-on features that the customer would like to see. More specifically, the
product is designed, implemented, and tested as a series of incremental builds until it is
finished.

Advantages

It is useful when staffing is unavailable for a complete implementation.
It can be implemented with fewer staff people.
If the core product is well received, additional staff can be added.
Customers can be involved at an early stage.
Each iteration delivers a functionally operational product, so customers get to see a working
version of the product at each stage.

Q.18 State advantages and disadvantages of incremental process model.

Advantages
Generates working software quickly and early during the software life cycle.
More flexible: less costly to change scope and requirements.
Easier to test and debug during a smaller iteration.
Easier to manage risk because risky pieces are identified and handled during its iteration.
Each iteration is an easily managed milestone.

Disadvantages
Each phase of iteration is rigid.
Problems may arise pertaining to system architecture because not all requirements are
gathered up front for the entire software life cycle.

Q.19 Explain advantages and disadvantages of RAD model.

Rapid Application Development (RAD) is a linear sequential software development process
model that emphasizes an extremely short development cycle.
- A high-speed adaptation of the linear sequential model
- Component-based construction
- Effective when requirements are well understood and project scope is constrained.
Advantages:
- Short development time
- Cost reduction due to software reuse and component-based construction
Problems:
- For large but scalable projects, RAD requires sufficient human resources.
- RAD requires developers and customers who are committed to the schedule.
- Constructed software is project-specific, and may not be well modularized.
- Its quality depends on the quality of existing components.
- Not appropriate for projects with high technical risk or new technologies.

Q.20 Explain CMMI model levels.

The Capability Maturity Model was originally developed as a tool for objectively assessing the
ability of government contractors' processes to perform a contracted software project. The model
is based on the process maturity framework.

Maturity model

A maturity model can be viewed as a set of structured levels that describe how well the
behaviors, practices, and processes of an organization can reliably and sustainably produce
required outcomes. A maturity model may provide, for example, a place to start, the benefit of a
community's prior experiences, a common language and a shared vision, a framework for
prioritizing actions, and a way to define what improvement means for your organization.

Levels

There are five levels defined along the continuum of the model and, according to the SEI,
"predictability, effectiveness, and control of an organization's software processes are believed to
improve as the organization moves up these five levels."

Level 1 - Initial (Chaotic)
It is characteristic of processes at this level that they are (typically) undocumented and in
a state of dynamic change, tending to be driven in an ad hoc, uncontrolled, and reactive
manner by users or events. This provides a chaotic or unstable environment for the
processes.
Level 2 - Repeatable
It is characteristic of processes at this level that some processes are repeatable, possibly
with consistent results. Process discipline is unlikely to be rigorous, but where it exists it
may help to ensure that existing processes are maintained during times of stress.
Level 3 - Defined
It is characteristic of processes at this level that there are sets of defined and documented
standard processes established and subject to some degree of improvement over time.
These standard processes are in place and used to establish consistency of process
performance across the organization.
Level 4 - Managed
It is characteristic of processes at this level that, using process metrics, management can
effectively control the process (e.g., for software development). In particular,
management can identify ways to adjust and adapt the process to particular projects
without measurable losses of quality or deviations from specifications. Process Capability
is established from this level.
Level 5 - Optimizing
It is a characteristic of processes at this level that the focus is on continually improving
process performance through both incremental and innovative technological
changes/improvements.

Q.21 Explain prototype model and its advantages and disadvantages.

It begins with requirements gathering. The developer and customer meet and define the
overall objectives of the software, identify whatever requirements are known, and identify the
areas which require further definition. In many instances the client has only a general view of
what is expected from the software product. In such a scenario, where detailed information
regarding the input to the system, the processing needs, and the output requirements is absent,
the prototyping model may be employed. This model reflects an attempt to increase the
flexibility of the development process by allowing the client to interact and experiment with a
working representation of the product.

Advantages

It could serve as the "first system".
The customer doesn't need to wait long, as in the linear model.
Feedback from customers is received periodically, and changes don't come as a last-minute
surprise.

Disadvantages

The customer could mistake the prototype for the working version.
The developer could also make implementation compromises, applying quick fixes to the
prototype and presenting it as a working version.
Often clients expect that a few minor changes to the prototype will suffice for their needs;
they fail to realize that no consideration was given to the overall quality of the software in the
rush to develop the prototype.

Q.22 State the advantages and disadvantages of prototype model.

Advantages:
- Easy and quick to identify customer requirements
- Customers can validate the prototype at the earlier stage and provide their inputs and feedback
- Good to deal with the following cases:
Customer cannot provide the detailed requirements
Very complicated system-user interactions
Use new technologies, hardware and algorithms
Develop new domain application systems
Problems:
- The customer may treat the prototype as the first version of the system.
- Developers usually attempt to develop the product based on the prototype.
- Developers often make implementation compromises in order to get a prototype working
quickly.
- Customers may be unaware that the prototype is not a finished product.

Q.23 Explain spiral model.

Spiral model: It was originally proposed by Boehm. It is an evolutionary software process model
that couples the iterative nature of prototyping with the controlled and systematic aspects of the
linear sequential model. It provides the potential for rapid development of incremental versions
of the software. An important feature of this model is that it includes risk analysis as one of its
framework activities; therefore, it requires risk assessment expertise. The figure shows an
example of a spiral model.

Q.24 State the advantages and disadvantages of spiral model.

The spiral model is an evolutionary software process model that couples the iterative nature of
prototyping with the controlled and systematic aspects of the linear sequential model, thereby
providing the potential for rapid development of incremental versions of the software. Using the
spiral model, the software is developed in a series of incremental releases. Unlike the
incremental model, where the first product is the core product, in the spiral model the early
iterations could result in a paper model or a prototype, with more complex functionality added
during later iterations.

Advantages of the spiral model

It is a realistic approach to development, because the software evolves as the process progresses.
In addition, the developer and the client better understand and react to risks at each evolutionary
level.
The model uses prototyping as a risk reduction mechanism and allows for the development of
prototypes at any stage of the evolutionary development.
It maintains a systematic stepwise approach, like the classic waterfall model, but incorporates it
into an iterative framework that more realistically reflects the real world.

Disadvantages of the spiral model

One should possess considerable risk-assessment expertise.
It has not been employed as much as proven models (e.g. the waterfall model) and hence may
prove difficult to sell to the client.

Q.25 Explain the Personal Software Process (PSP) Objectives and its framework activities.

Every developer uses some process to build computer software. The process may be ad hoc, may
change on a daily basis, and may not be efficient, effective, or even successful, but a process
does exist. The PSP achieves its objectives through a defined process, planning the work,
gathering data, and using these data to analyze and improve the process.

- PSP helps software engineers to understand and improve their performance by using a
disciplined, data-driven procedure.
- PSP emphasizes personal measurement of both the work product that is produced and the
resultant quality of the work product.
- PSP makes the practitioner responsible for project planning.
- PSP empowers the practitioner to control the quality of all software work products developed.
- PSP helps software engineers to improve their estimating and planning skills, make
commitments they can keep, manage the quality of their projects, and reduce the number of
defects in their work.
- PSP helps developers produce zero-defect, quality products on schedule.
- PSP emphasizes the need to record and analyze the types of errors you make, so strategies can
be developed to eliminate them.

Framework Activities Used During PSP are:
The PSP model defines five framework activities:
- Planning: Isolates requirements and, based on these, develops both size and resource estimates.
A defect estimate is also made.
- High Level Design: External specifications for each component to be constructed are developed
and a component design is created. Prototypes are built and all issues are recorded and tracked.
- High Level Design Review: Formal verification methods are applied to uncover errors in
design.
- Development: The component level design is refined and reviewed. Code is generated,
reviewed, compiled, and tested.
- Postmortem: The effectiveness of the process is determined using the measures and metrics
collected.
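
PSP planning leans on personal historical data; its PROBE method, for example, estimates size and effort using a simple linear regression over past projects. The sketch below is a minimal, hedged illustration of that idea: the historical data points are invented, and the code is a plain least-squares fit, not the official PROBE procedure.

```python
# Minimal sketch of regression-based, PSP-style effort estimation.
# Historical (estimated_size, actual_effort_hours) pairs are hypothetical.
history = [(120, 10.0), (250, 21.0), (90, 8.5), (300, 24.0), (180, 15.0)]

n = len(history)
sum_x = sum(x for x, _ in history)
sum_y = sum(y for _, y in history)
sum_xy = sum(x * y for x, y in history)
sum_x2 = sum(x * x for x, _ in history)

# Ordinary least-squares coefficients: effort = beta0 + beta1 * size
beta1 = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x ** 2)
beta0 = (sum_y - beta1 * sum_x) / n

planned_size = 200  # estimated size (e.g., LOC) of the new component
print(f"predicted effort: {beta0 + beta1 * planned_size:.1f} hours")
```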

Q.26 Explain team software process model.

Team Software Process (TSP) scripts define elements of the team process and the following
framework activities:

- LAUNCH
It reviews course objectives and describes the TSP structure and content. It assigns teams and
roles to students and describes the customer needs statement. It also establishes team and
individual goals.
- STRATEGY
It creates a conceptual design for the product, establishes the development strategy, and decides
what will be produced in each cycle. Strategy also makes initial size and effort estimates and
establishes a configuration management plan, a reuse plan, and a risk management plan.
- PLAN
It estimates the size of each artifact to be developed. Planning also identifies the tasks to be
performed, estimates the time to complete each task, assigns tasks to team members, makes a
weekly schedule for task completion, and makes a quality plan.
- REQUIREMENTS
This activity analyzes need statements and interviews the customers, specifies and inspects the
requirements, and develops a system test plan.
- DESIGN
It creates a high-level design, specifies the design, inspects the design, and develops an
integration test plan.
- IMPLEMENT
Implementation uses the PSP to implement modules/units: it creates detailed designs of the
modules/units, reviews the designs, translates the designs to code, reviews the code,
compiles and tests the modules/units, and analyzes their quality.
- TEST
Testing builds and integrates the system, conducts a system test, and produces user
documentation.
- POSTMORTEM
It conducts a postmortem analysis, writes a cycle report, and produces peer and team evaluations.

TSP makes use of a wide variety of scripts, forms, and standards that serve to guide team
members in their work. A script defines specific process activities and other, more detailed work
functions that are part of the team process. TSP recognizes that the best software teams are
self-directed: team members set project objectives, adapt the process to meet their needs, control
the schedule, and, through measurement and analysis of the metrics collected, work continually
to improve the team's approach to software engineering.
Q.27 Define process pattern and explain type of it.
A process pattern describes a collection of general techniques, actions, and/or tasks for
developing object-oriented software. Process patterns are the reusable building blocks from
which your organization will develop a tailored software process that meets its exact needs.

There are three types of process patterns. In order of increasing scale they are:
1. Task process patterns. This type of process pattern depicts the detailed steps to perform a
specific task, such as the Technical Review and Reuse First process patterns.
2. Stage process patterns. This type of process pattern depicts the steps, which are often
performed iteratively, of a single project stage. A project stage is a higher-level form of process
pattern, one that is often composed of several task process patterns. A stage process pattern is
presented for each project stage of a software process.
3. Phase process patterns. This type of process pattern depicts the interactions between the
stage process patterns for a single project phase, such as the Initiate and Delivery phases. A
phase process pattern is a collection of two or more stage process patterns. Project phases are
performed in a serial manner; this is true both of structured development and of object
development. Phase process patterns are performed in serial order and are made up of stage
process patterns which are performed iteratively.

Chapter 2

Software Engineering Requirements and Development of Analysis & Design Models

Q.1 Explain the essence of software engineering practice.

The essence of software engineering practice:

1. Understand the problem (communication and analysis).
2. Plan a solution (modeling and design).
3. Carry out the plan (code generation).
4. Examine the result for accuracy (testing and quality assurance).
This series of common-sense steps leads to the essential questions adapted below.

Understand the problem: Who are the stakeholders? What are the unknowns? Can the problem
be compartmentalized? Is it possible to represent smaller problems that may be easier to
understand? Can the problem be represented graphically?
Plan the solution: Have you seen similar problems before? Has a similar problem been solved?
Can subproblems be defined? Can you represent a solution in a manner that leads to effective
implementation? Can a design model be created?
Carry out the plan: Does the solution conform to the plan? Is each component part of the solution
provably correct?
Examine the result: Is it possible to test each component part of the solution? Does the solution
produce results that conform to the data, functions, features, and behavior that are required?

Q.2 Explain the seven core principles of software engineering.

The seven core principles: A principle is an important underlying law or assumption required in
a system of thought.
1. THE REASON IT ALL EXISTS (to provide value to its customers)
2. KISS (all design should be as simple as possible, but not simpler); no quick and dirty work
3. MAINTAIN THE VISION (a clear vision is essential)
4. WHAT YOU PRODUCE, OTHERS WILL CONSUME (someone else has to understand it)
5. BE OPEN TO THE FUTURE (never design yourself into a corner)
6. PLAN AHEAD FOR REUSE (long-term cost advantage)
7. THINK! (placing clear, complete thought before action almost always produces better results)

Q.3 State and Explain communication principles.

Communication principles (requirements elicitation):

Listen, and focus on the speaker's words.
Prepare before you communicate.
Someone should facilitate the activity.
Face-to-face communication is best.
Take notes and document decisions.
Strive for collaboration.
Stay focused; modularize the discussion.
If something is unclear, draw a picture.
Once you agree to something, move on.
Negotiation is not a contest or a game; it works best when both parties win.

Q.4 State and Explain planning principles.

Understand the project scope.
Involve the customer (and other stakeholders).
Recognize that planning is iterative.
Estimate based on what you know.
Consider risk.
Be realistic.
Adjust granularity as you plan.
Define how quality will be achieved.
Define how you'll accommodate changes.
Track what you've planned.

Q.5 Explain W5HH principles of planning.


Why is the system being developed? (business reason)
What will be done? (functionality)
When will it be accomplished? (timeline)
Who is responsible? (work assignment)
Where are they located (organizationally)?
How will the job be done technically and managerially?
How much of each resource is needed?

Q.6 State and Explain analysis modeling principles.

Represent the information domain (input, output, and internal data storage).
Represent software functions (the various features of the software).
Represent software behavior (as a consequence of external events).
Partition these representations.
Move from essence toward implementation.

Q.7 State and Explain design modeling principles.

Design must be traceable to the analysis model.
Always consider architecture.
Focus on the design of data.
Interfaces (both user and internal) must be designed.
Components should exhibit functional independence.
Components should be loosely coupled.
Design representations should be easily understood.
The design model should be developed iteratively.

Q.8 State and Explain construction principles.

Preparation principles: Before you write one line of code, be sure you:

1. Understand the problem you're trying to solve (see communication and modeling).
2. Understand basic design principles and concepts.
3. Pick a programming language that meets the needs of the software to be built and the
environment in which it will operate.
4. Select a programming environment that provides tools that will make your work easier.
5. Create a set of unit tests that will be applied once the component you code is
completed.

Coding principles: As you begin writing code, be sure you:

1. Constrain your algorithms by following structured programming practice.
2. Select data structures that will meet the needs of the design.
3. Understand the software architecture and create interfaces that are consistent with it.
4. Keep conditional logic as simple as possible.
5. Create nested loops in a way that makes them easily testable.
6. Select meaningful variable names and follow other local coding standards.
7. Write code that is self-documenting.
8. Create a visual layout (e.g., indentation and blank lines) that aids understanding.

Validation principles: After you've completed your first coding pass, be sure you:
1. Conduct a code walkthrough when appropriate.
2. Perform unit tests and correct errors you've uncovered.
3. Refactor the code.
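
A tiny illustration of several of these principles (meaningful names, simple conditional logic, self-documenting code, and unit tests created alongside the component) is sketched below; the shipping-fee business rule is invented for the example.

```python
# Illustrative component written per the coding principles above;
# the business rule (free shipping over a threshold) is hypothetical.
import unittest

FREE_SHIPPING_THRESHOLD = 50.00
FLAT_SHIPPING_FEE = 4.99

def shipping_fee(order_total):
    """Return the shipping fee for an order, keeping conditional logic simple."""
    if order_total < 0:
        raise ValueError("order_total must be non-negative")
    if order_total >= FREE_SHIPPING_THRESHOLD:
        return 0.0
    return FLAT_SHIPPING_FEE

class ShippingFeeTest(unittest.TestCase):
    """Unit tests prepared for the component (preparation principle 5)."""
    def test_small_order_pays_flat_fee(self):
        self.assertEqual(shipping_fee(10.00), FLAT_SHIPPING_FEE)

    def test_large_order_ships_free(self):
        self.assertEqual(shipping_fee(75.00), 0.0)

    def test_negative_total_is_rejected(self):
        with self.assertRaises(ValueError):
            shipping_fee(-1.0)

if __name__ == "__main__":
    unittest.main()
```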

Q.9 State and Explain testing principles.

1. All tests should be traceable to requirements.
2. Tests should be planned.
3. The Pareto principle applies to testing: 80 percent of the errors uncovered are likely to be
traceable to 20 percent of the components.
4. Testing begins "in the small" and moves toward "in the large".
5. Exhaustive testing is not possible.

Q.10 State and Explain deployment principle.

1. Manage customer expectations for each increment.
2. A complete delivery package should be assembled and tested.
3. A support regime should be established.
4. Instructional materials must be provided to end-users.
5. Buggy software should be fixed first and delivered later.

Q.11 What are the requirements engineering tasks?

Requirements engineering:
* Provides a solid approach for addressing challenges in a software project.
* Must be adapted to the needs of the process, the project, the product, and the people doing the
work.
* Begins during the communication activity and continues into the modeling activity.
* Helps software engineers to better understand the problem they will work to solve.

Requirements engineering tasks:


- Inception
* A task that defines the scope and nature of the problem to be solved.
* Software engineers ask context-free questions.
* Intent is to establish a basic understanding of the problem, the people who want a solution,
nature of the solution that is desired and the effectiveness of preliminary communication and
collaboration between the customer and the developer.

- Elicitation
Ask the customer, the users, and others about the objectives of the system, what is to be
accomplished, how the system or product fits into the needs of the business and, finally, how the
system or product is to be used on a day-to-day basis.
Why Elicitation is Difficult?
* Problems of scope.
* Problems of understanding.
* Problems of volatility.

- Elaboration
* Basic requirements (obtained from the customer during inception and elicitation) are refined
and modified.
* Focuses on developing a refined technical model of software functions, features and
constraints.
* Driven by the creation and refinement of user scenarios.
* End-result: analysis model that defines the informational, functional and behavioral domain of
the problem.

- Negotiation
* There's no winner and no loser in an effective negotiation.
* Customers usually ask for more than can be achieved.

* Some proposed requirements conflict with one another.
* The requirements engineer must reconcile these conflicts through a process of negotiation.

- Specification
* It can be a written document, a set of graphical models, a formal mathematical model, a
collection of usage scenarios, a prototype, or any combination of these.
* It is the final work product produced by the requirements engineer.
* It serves as the foundation for subsequent software engineering activities.
* It describes the function and performance of a computer-based system and the constraints that
will govern its development.

- Validation
* Work products produced are assessed for quality in this step.
* A task which examines the specification to ensure that all software requirements have been
stated unambiguously; that inconsistencies, omissions, and errors have been detected and
corrected; and that the work products conform to the standards established for the process, the
project, and the product.

Q.12 How will you initiate requirements engineering process?

The steps involved to initiate requirements engineering process are as below.

STEP 1: Identify stakeholders. A stakeholder is anyone who benefits in a direct or indirect way
from the system which is being developed. The business operations managers, product managers,
marketing people, internal and external customers, end-users, and others are the common people
to interview. It is important at this step to create a list of people who will contribute input as
requirements are elicited. The list of users will grow as more and more people get involved in
elicitation.

STEP 2: Recognize multiple viewpoints. It is important to remember that different stakeholders
will have different views of the system. Each would gain different benefits when the system is a
success; each would face different risks if the development fails. At this step, categorize all
stakeholder information and requirements, and identify requirements that are inconsistent or in
conflict with one another. The information should be organized in such a way that stakeholders
can decide on a consistent set of requirements for the system.

STEP 3: Work toward collaboration. The success of most projects relies on collaboration. To
achieve this, find areas within the requirements that are common to stakeholders. However,
the challenge here is addressing inconsistencies and conflicts. Collaboration does not mean that a
committee decides on the requirements of the system. In many cases, to resolve conflicts a

project champion, normally a business manager or senior technologist, decides which
requirements are included when the software is developed.

STEP 4: Ask the first questions. To define the scope and nature of the problem, questions are
asked to the customers and stakeholders. These questions may be categorized. As an example,
consider the following questions.

Stakeholder's or customer's motivation:
1. Who is behind the request for this work?
2. Why are they requesting such a work?
3. Who are the end-users of the system?
4. What are the benefits when the system has been developed successfully?
5. Are there any other ways of providing the solution to the problem? What are the alternatives?

Customer's and stakeholder's perception:
1. How can one characterize a "good" output of the software?
2. What are the problems that will be addressed by the software?
3. What is the business environment in which the system will be built?
4. Are there any special performance issues or constraints that will affect the way the solution is approached?

Effectiveness of the communication:
1. Are we asking the right people the right questions?
2. Are the answers they are providing "official"?
3. Are the questions relevant to the problem?
4. Am I asking too many questions?
5. Can anyone else provide additional information?
6. Is there anything else that I need to know?

Q.13 How will you elicit the requirements?

Elicitation is a task that helps the customer define what is required. However, this is not an easy
task. The problems encountered in elicitation are discussed below:

1. Problems of Scope. It is important that the boundaries of the system be clearly and properly
defined. It is important to avoid using too much technical detail because it may confuse rather
than clarify the system's objectives.

2. Problems of Understanding. It is sometimes very difficult for the customers or users to
completely define what they need. Sometimes they have a poor understanding of the
capabilities and limitations of their computing environment, or they don't have a full
understanding of the problem domain. They sometimes may even omit information, believing
that it is obvious.

3. Problems of Volatility. It is inevitable that requirements change over time. To help overcome
these problems, software engineers must approach the requirements-gathering activity in an
organized and systematic manner.

Collaborative Requirements Gathering: Unlike inception, where a Q&A (question and answer)
approach is used, elicitation makes use of a requirements elicitation format that combines the
elements of problem solving, elaboration, negotiation, and specification. It requires the
cooperation of a group of end-users and developers to elicit requirements. They work together to:
- identify the problem
- propose elements of the solution
- negotiate different approaches
- specify a preliminary set of solution requirements
Joint Application Development is one collaborative requirements gathering technique that is
popularly used to elicit requirements.

Quality Function Deployment: Quality Function Deployment is a technique that emphasizes an
understanding of what is valuable to the customer. These values are then deployed throughout
the engineering process. It identifies three types of requirements:

1. Normal Requirements. These requirements directly reflect the objectives and goals stated for a
product or system during meetings with the customer. If these requirements are present, the
customer is satisfied.

2. Expected Requirements. These requirements are implicit to the product or system and may be
so fundamental that the customer does not explicitly state them. The absence of these
requirements may cause significant dissatisfaction. Examples of expected requirements are
ease of human or machine interaction, overall operational correctness and reliability, and ease of
software installation.

3. Exciting Requirements. These requirements reflect features that go beyond the customer's
expectations and prove to be very satisfying when present.

In succeeding meetings with the team, value analysis is conducted to determine the relative
priority of requirements based on three deployments, namely, function deployment, information
deployment, and task deployment. Function deployment is used to determine the value of each
function that is required for the system. Information deployment identifies both the data objects
and events that the system must consume and produce; this is related to a function. Task
deployment examines the behavior of the product or system within the context of its environment.
From the value analysis, each requirement is categorized based on the three types of requirements.

Elicitation Work Product: The output of the elicitation task can vary depending on the size of the
system or product to be built. For most systems, the output or work products include:
- a statement of need and feasibility;
- a bounded statement of scope for the system or product;
- a list of customers, users, and other stakeholders who participated in requirements elicitation;
- a description of the system's technical environment;
- a prioritized list of requirements, preferably in terms of functions, objects and domain
constraints that apply to each.

Q.14 State the guidelines of negotiating requirements.

Below are some guidelines for negotiating with stakeholders:

1. Remember that negotiation is not competition. Everybody should compromise. At some level,
everybody should feel that their concerns have been addressed, or that they have achieved
something.
2. Have a strategy. Listen to what the parties want to achieve. Decide on how we are going to
make everything happen.
3. Listen effectively. Listening shows that you are concerned. Try not to formulate your response
or reaction while the other is speaking. You might get something that can help you negotiate
later on.
4. Focus on the other party's interests. Don't take hard positions if you want to avoid conflict.
5. Don't make it personal. Focus on the problem that needs to be solved.
6. Be creative. Don't be afraid to think out of the box.
7. Be ready to commit. Once an agreement has been reached, commit to the agreement and move
on to other matters.

Q.15 How will you validate requirements?

The questions listed below serve as a guideline for validating the work products of the
requirements engineering phase.

1. Is each requirement consistent with the overall objective for the system or product?
2. Have all requirements been specified at the proper level of abstraction? That is, do some
requirements provide a level of technical detail that is inappropriate at this stage?
3. Is the requirement really necessary, or does it represent an add-on feature that may not be
essential to the objective of the system?
4. Is each requirement bounded and clear?
5. Does each requirement have attribution? That is, is a source (generally, a specific individual)
noted for each requirement?
6. Do any of the requirements conflict with other requirements?
7. Is each requirement achievable in the technical environment that will house the system or
product?
8. Is each requirement testable, once implemented?
9. Does the requirements model properly reflect the information, function and behavior of the
system to be built?
10. Has the requirements model been "partitioned" in a way that exposes progressively more
detailed information about the system?
11. Have requirements patterns been used to simplify the requirements model? Have all patterns
been properly validated? Are all patterns consistent with customer requirements?

Q.16 State the analysis rules of thumb.


1. The model should focus on requirements that are visible within the problem or business
domain. The level of abstraction should be relatively high.
2. Each element of the analysis model should add to an overall understanding of software
requirements and provide insight into the information domain, function and behavior of the
system.
3. Delay consideration of infrastructure and other non-functional models until design.
4. Minimize coupling throughout the system.
5. Be certain that the analysis model provides value to all stakeholders.
6. Keep the model as simple as it can be.

Q.17 What is domain analysis? How to do it?

Software domain analysis is the identification, analysis, and specification of common
requirements from a specific application domain, typically for reuse on multiple projects within
that application domain. It can be done in the following way.

1. Define the domain to be investigated.
2. Collect a representative sample of applications in the domain.
3. Analyze each application in the sample.
4. Develop an analysis model for the objects.

Q.18 Explain input/output for domain analysis.

This model describes domain analysis as an activity that takes multiple sources of input,
produces many different kinds of output, and is heavily parameterized. For example, one
parameter is the development paradigm (e.g., SA, Jackson, OO). Raw domain knowledge from
any relevant source is taken as input. Participants in the process can be, among others, domain
experts and analysts. Outputs are (semi)formalized concepts, domain processes, standards,
logical architectures, etc. Subsequent activities produce generic design fragments, frameworks,
etc.

Q.19 What is the purpose of data flow diagram?


A data flow diagram:
- shows processes and the flow of data in and out of these processes;
- does not show control structures (loops, etc.);
- contains five graphic symbols (shown later);
- uses layers to decompose complex systems (shown later);
- can be used to show both logical and physical systems;
- was a quantum leap forward compared to the techniques of the time, i.e., monolithic
descriptions with globs of text;
- is still used today to document business and/or other processes.

Q.20 What is structured analysis? Explain primary steps.

Structured Analysis

The traditional approach to analysis is focused on cost/benefit and feasibility analysis, hardware
and software selection and personnel considerations. It focuses more on the physical system
rather than on the logical system.

Structured analysis is a newer set of techniques and graphical tools that allows the analyst to
develop a system specification that is easily understood by the individuals using the system. It
focuses more on the logical system. It is a way to focus on functions rather than physical
implementations. It encourages the use of graphical data flow diagrams (DFDs) wherever
possible to help communicate better with the user. It differentiates between logical and physical
systems. It removes the physical checkpoints and introduces their logical equivalents. It builds a
logical system with system characteristics and inter-relationships before moving to implementation.

The primary steps in Structured Analysis are:

1. Study the affected system and user areas, resulting in a physical data flow diagram (DFD).
2. Remove the physical checkpoints and replace them with logical equivalents, resulting in a
logical data flow diagram (DFD).
3. Model the new logical system.
4. Establish the man/machine interface. This step modifies the logical data flow diagram (DFD)
and considers the hardware needed to implement the system.
5. Quantify costs and benefits. This step cost-justifies the system, leading to the selection of
appropriate hardware.

Q.21 Explain Object oriented analysis and design approach.

Object-oriented analysis and design (OOAD) is a software engineering approach that models a
system as a group of interacting objects. Each object represents some entity of interest in the
system being modeled, and is characterized by its class, its state (data elements), and its
behavior. Various models can be created to show the static structure, dynamic behavior, and run-
time deployment of these collaborating objects. There are a number of different notations for
representing these models, such as the Unified Modeling Language (UML).

Object-oriented analysis (OOA) applies object-modeling techniques to analyze the functional
requirements for a system. Object-oriented design (OOD) elaborates the analysis models to
produce implementation specifications. OOA focuses on what the system does, OOD on how the
system does it.

Object-oriented systems

An object-oriented system is composed of objects. The behavior of the system results from the
collaboration of those objects. Collaboration between objects involves the objects sending
messages to each other. Sending a message differs from calling a function in that when a target
object receives a message, it decides on its own what function to carry out to service that
message. The same message may be implemented by many different functions, the one selected
depending on the state of the target object.

The implementation of "message sending" varies depending on the architecture of the system
being modeled, and the location of the objects being communicated with.
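To make the distinction concrete, here is a minimal, hypothetical Python sketch (the class and
method names are invented for illustration, not taken from any particular system). The same
message, draw, is serviced by a different function depending on the receiving object, and may
even depend on the receiver's state:

    class Circle:
        def draw(self):
            return "rendering a circle"

    class Document:
        def __init__(self, locked=False):
            self.locked = locked

        def draw(self):
            # The receiver decides how to service the message,
            # here depending on its own internal state.
            return "read-only preview" if self.locked else "editable view"

    # The sender issues the same message to different targets; each
    # target selects its own implementation (dynamic dispatch).
    for receiver in (Circle(), Document(locked=True)):
        print(receiver.draw())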

Object-oriented analysis

Object-oriented analysis (OOA) is the process of analyzing a task (also known as a problem
domain), to develop a conceptual model that can then be used to complete the task. A typical
OOA model would describe computer software that could be used to satisfy a set of customer-
defined requirements. During the analysis phase of problem-solving, the analyst might consider a
written requirements statement, a formal vision document, or interviews with stakeholders or
other interested parties. The task to be addressed might be divided into several subtasks (or
domains), each representing a different business, technological, or other areas of interest. Each
subtask would be analyzed separately. Implementation constraints (e.g., concurrency,
distribution, persistence, or how the system is to be built) are not considered during the analysis
phase; rather, they are addressed during object-oriented design (OOD).

The conceptual model that results from OOA will typically consist of a set of use cases, one or
more UML class diagrams, and a number of interaction diagrams. It may also include some kind
of user interface mock-up.

Object-oriented design

During object-oriented design (OOD), a developer applies implementation constraints to the
conceptual model produced in object-oriented analysis. Such constraints could include not only
constraints imposed by the chosen architecture but also any non-functional technological or
environmental constraints, such as transaction throughput, response time, run-time platform,
development environment, or those inherent in the programming language. Concepts in the
analysis model are mapped onto implementation classes and interfaces, resulting in a model of
the solution domain, i.e., a detailed description of how the system is to be built.

Q.22 What are the rules of drawing data flow diagram?

- At least one input or output data flow for an external entity.
- At least one input data flow and/or at least one output data flow for a process.
- Output data flows usually have different names than input data flows for a process.
- Data flows only in one direction.
- Every data flow connects to at least one process.
- At least one input data flow for a process.
- At least one output data flow for a process.
- Data from an external entity cannot move directly to another external entity; it must pass
through a process.
- At least one input data flow for a data store.
- At least one output data flow for a data store.
- Data from one data store cannot move directly to another data store; it must pass through a
process.

Q.23 What is data control flow model? Give guidelines to draw it.

Flow models focus on the flow of data objects as they are transformed by processing functions.
There are applications which are driven by events rather than data, produce control information
rather than reports, and process information with strong concern for time and performance. In
such situations, control flow diagrams come into the picture along with data flow modeling.

There are some guidelines to select potential events for a control flow diagram:
- all sensors that are read by the software are listed.
- all interrupt conditions are listed.
- all switches actuated by the operator are listed.
- all data conditions are listed.
- all control items are reviewed.
- all states that describe the behavior of the system are listed.
- all transitions between states are defined.
- all possible omissions should be kept in focus.

Q.24 Explain control flow specifications.


The control specification contains a state diagram, which is a sequential specification of
behavior. It also contains a program activation table, which is a combinatorial specification of
behavior. The control specification does not give any information about the inner workings of the
processes activated as a result of this behavior.

Q.25 Explain the objectives of analysis model and draw its structure.

The analysis model must achieve three primary objectives:


1. To describe what the customer requires.
2. To establish a basis for the creation of a software design.
3. To define a set of requirements that can be validated once the software is built. To accomplish
these objectives, the analysis model derived during structured analysis takes the form
illustrated in Figure 1.

At the core of the model lies the data dictionary: a repository that contains descriptions of all
data objects consumed or produced by the software.
Three different diagrams surround the core.
The entity relationship diagram (ERD) depicts relationships between data objects. The ERD is
the notation that is used to conduct the data modeling activity.

The attributes of each data object noted in the ERD can be described using a data object
description.
The data flow diagram (DFD) serves two purposes:
1. To provide an indication of how data are transformed as they move through the system.
2. To depict the functions (and subfunctions) that transform the data flow.

The DFD provides additional information that is used during the analysis of the information
domain and serves as a basis for the modeling of function.
A description of each function presented in the DFD is contained in a process specification
(PSPEC).
The state transition diagram (STD) indicates how the system behaves as a consequence of
external events. To accomplish this, the STD represents the various modes of behavior
(called states) of the system and the manner in which transitions are made from state to state.
The STD serves as the basis for behavioral modeling.

Additional information about the control aspects of the software is contained in the control
specification (CSPEC).

Q.26 Discuss data modeling concepts of analysis model.

Data Modeling
Data modeling methods make use of the entity relationship diagram. The ERD enables a software
engineer to identify data objects and their relationships using a graphical notation.
In the context of structured analysis, the ERD defines all data that are entered, stored, transformed,
and produced within an application.
The entity relationship diagram focuses solely on data (and therefore satisfies the first operational
analysis principle), representing a "data network" that exists for a given system. The ERD is
especially useful for applications in which data and the relationships that govern data are complex.
Unlike the data flow diagram, data modeling considers data independent of the processing that
transforms the data.
2.1 Data Objects, Attributes, and Relationships
The data model consists of three interrelated pieces of information: the data object, the attributes that
describe the data object, and the relationships that connect data objects to one another.
Data objects. A data object is a representation of almost any composite information that must be
understood by software. By composite information, we mean something that has a number of
different properties or attributes. Therefore, width (a single value) would not be a valid data object,
but dimensions (incorporating height, width, and depth) could be defined as an object.
A data object can be an external entity (e.g., anything that produces or consumes information), a
thing (e.g., a report or a display), an occurrence (e.g., a telephone call)

or event (e.g., an alarm), a role (e.g., salesperson), an organizational unit (e.g., accounting
department), a place (e.g., a warehouse), or a structure (e.g., a file).
Example:
A person or a car (Figure 2) can be viewed as a data object in the sense that either can be defined in
terms of a set of attributes. The data object description incorporates the data object and all of its
attributes.

Figure 2 Data objects, attributes and relationships


Data objects (represented in bold) are related to one another. For example, person can own car,
where the relationship "own" connotes a specific connection between person and car. The
relationships are always defined by the context of the problem that is being analyzed.
A data object encapsulates data only; there is no reference within a data object to operations that
act on the data (this distinction separates the data object from the class or object defined as part
of the object-oriented paradigm). Therefore, the data object can be represented as a table, as
shown in Figure 3. The headings in the table reflect attributes of the object. In this case, a car is
defined in terms of make, model, ID number, body type, color and owner. The body of the table
represents specific instances of the data object.
For example, a Chevy Corvette is an instance of the data object car.
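As a small, hypothetical sketch (the values are illustrative), the car data object can be written as a
record that carries attributes only, with no operations, which is exactly what distinguishes a data
object from a class in the object-oriented sense:

    from dataclasses import dataclass

    @dataclass
    class Car:
        # Attributes only -- a data object encapsulates no operations.
        make: str
        model: str
        id_number: str   # identifier attribute, the "key"
        body_type: str
        color: str
        owner: str

    # A specific instance of the data object car:
    corvette = Car("Chevy", "Corvette", "FY-0001", "coupe", "red", "A. Driver")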

Attributes. Attributes define the properties of a data object and take on one of three different
characteristics. They can be used to (1) name an instance of the data object, (2) describe the instance,
or (3) make reference to another instance in another table.
In addition, one or more of the attributes must be defined as an identifier; that is, the identifier
attribute becomes a "key" when we want to find an instance of the data object. In some cases, values
for the identifier(s) are unique, although this is not a requirement. Referring to the data object car, a
reasonable identifier might be the ID number. The set of attributes that is appropriate for a given data
object is determined through an understanding of the problem context.
Relationships. Data objects are connected to one another in different ways. Consider two data
objects, book and bookstore. These objects can be represented using the simple notation illustrated
in Figure 4a. A connection is established between book and bookstore because the two objects are
related. But what are the relationships? To determine the answer, we must understand the role of
books and bookstores within the context of the software to be built. We can define a set of
object/relationship pairs that define the relevant relationships. For example,
A bookstore orders books.
A bookstore displays books.
A bookstore stocks books.
A bookstore sells books.
A bookstore returns books.
The relationships orders, displays, stocks, sells, and returns define the relevant connections between
book and bookstore. The figure illustrates these object/relationship pairs graphically.

2.2 Cardinality and Modality
The elements of data modeling (data objects, attributes, and relationships) provide the basis for
understanding the information domain of a problem. However, additional information related to these
basic elements must also be understood.
We have defined a set of objects and represented the object/relationship pairs that bind them. But a
simple pair that states "object X relates to object Y" does not provide enough information for
software engineering purposes. We must understand how many occurrences of object X are related
to how many occurrences of object Y. This leads to a data modeling concept called cardinality.
Cardinality. The data model must be capable of representing the number of occurrences of objects in
a given relationship. Tillmann defines the cardinality of an object/relationship pair in the following
manner: cardinality is the specification of the number of occurrences of one [object] that can be
related to the number of occurrences of another [object]. Cardinality is usually expressed as simply
'one' or 'many.' Taking into consideration all combinations of 'one' and 'many,' two [objects] can
be related as:
- One-to-one (1:1). An occurrence of [object] 'A' can relate to one and only one occurrence of
[object] 'B,' and an occurrence of 'B' can relate to only one occurrence of 'A.'
- One-to-many (1:N). One occurrence of [object] 'A' can relate to one or many occurrences of
[object] 'B,' but an occurrence of 'B' can relate to only one occurrence of 'A.'
- Many-to-many (M:N). An occurrence of [object] 'A' can relate to one or more occurrences of 'B,'
while an occurrence of 'B' can relate to one or more occurrences of 'A.'
Cardinality defines the maximum number of objects that can participate in a relationship. It does
not, however, provide an indication of whether or not a particular data object must participate in the
relationship. To specify this information, the data model adds modality to the object/relationship pair.

Modality. The modality of a relationship is 0 if there is no explicit need for the relationship to occur
or the relationship is optional. The modality is 1 if an occurrence of the relationship is mandatory. To
illustrate, consider software that is used by a local telephone company to process requests for field
service. A customer indicates that there is a problem. If the problem is diagnosed as relatively simple,
a single repair action occurs. However, if the problem is complex, multiple repair actions may be
required. Figure 5 illustrates the relationship, cardinality, and modality between the data objects
customer and repair action.

Referring to the figure, a one-to-many cardinality relationship is established. That is, a single
customer can be provided with zero or many repair actions. The symbols on the relationship
connection closest to the data object rectangles indicate cardinality.
The vertical bar indicates one and the three-pronged fork indicates many.
Modality is indicated by the symbols that are further away from the data object rectangles. The
second vertical bar on the left indicates that there must be a customer for a repair action to occur. The
circle on the right indicates that there may be no repair action required for the type of problem
reported by the customer.
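A rough way to see cardinality and modality in code is sketched below (hypothetical Python; the
names mirror the telephone-company example above). The mandatory "one" side becomes a
required reference, while the optional "many" side becomes a possibly empty collection:

    from dataclasses import dataclass, field

    @dataclass
    class Customer:
        name: str
        # One-to-many, optional on this side: a customer may be
        # provided with zero or many repair actions.
        repair_actions: list = field(default_factory=list)

    @dataclass
    class RepairAction:
        description: str
        # Mandatory on this side: a repair action cannot exist
        # without the customer who reported the problem.
        customer: Customer

    c = Customer("J. Smith")                      # zero actions is allowed
    r = RepairAction("replace line filter", c)    # customer is required
    c.repair_actions.append(r)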
2.3 Entity/Relationship Diagrams
The object/relationship pair is the cornerstone of the data model. These pairs can be represented
graphically using the entity/relationship diagram. The ERD was originally proposed by Peter Chen
for the design of relational database systems and has been extended by others. A set of primary
components are identified for the ERD: data objects, attributes, relationships, and various type
indicators. The primary purpose of the ERD is to represent data objects and their relationships.
Rudimentary ERD notation has already been introduced in Section 2. Data objects are represented by
a labeled rectangle. Relationships are indicated with a labeled line connecting objects. In some
variations of the ERD, the connecting line contains a diamond that is labeled with the relationship.
Connections between data objects and relationships are established using a variety of special symbols
that indicate cardinality and modality.
The relationship between the data objects car and manufacturer would be represented as shown in
the figure. One manufacturer builds one or many cars. Given the context implied by the ERD, the
specification of the data object car would be radically different from the earlier specification. By
examining the symbols at the end of the connection line between objects, it can be seen that the
modality of both occurrences is mandatory (the vertical lines).

Q.27 Enlist the steps of analyzing object oriented model.
1. Basic user requirements must be communicated between the customer and the SW engineer
2. Classes must be identified (Attributes and methods are to be defined)
3. A class hierarchy is defined
4. Object-to-object relationships should be represented
5. Object behavior must be modeled
6. Tasks 1 through 5 are repeated until the model is complete.

Q. 28 Explain Scenario-Based Modeling analysis.

In the scenario-based model, use-cases are simply an aid to defining what exists outside the
system (actors) and what should be performed by the system (use-cases). You should consider
the questions below during analysis.

(1) What should we write about?

(2) How much should we write about it?

(3) How detailed should we make our description?

(4) How should we organize the description?

In this model following concepts are used.

Use-cases: a scenario that describes a thread of usage for a system.

Actors: represent the roles that people or devices play as the system functions.

Users: can play a number of different roles for a given scenario.

Q.29 Give steps in developing use case.

Developing a use case

1. What are the main tasks or functions that are performed by the actor?

2. What system information will the actor acquire, produce or change?

3. What information does the actor desire from the system?

Q.30 Show different symbols used in data flow diagram.

Q. 31 What are Data Flow Diagrams?

Data flow diagrams illustrate how data is processed by a system in terms of inputs and outputs.

Data flow diagrams are one of the three essential perspectives of the structured-systems analysis
and design method. The sponsor of a project and the end users will need to be briefed and
consulted throughout all stages of a system's evolution. With a data flow diagram, users are able
to visualize how the system will operate, what the system will accomplish, and how the system
will be implemented. The old system's dataflow diagrams can be drawn up and compared with
the new system's data flow diagrams to draw comparisons to implement a more efficient system.
Data flow diagrams can be used to provide the end user with a physical idea of where the data
they input ultimately has an effect upon the structure of the whole system from order to dispatch
to report. How any system is developed can be determined through a data flow diagram model.

Q. 32 Explain Context Diagram.

A context diagram is a top level (also known as Level 0) data flow diagram. It only contains one
process node (process 0) that generalizes the function of the entire system in relationship to
external entities.

Q.33 What is data dictionary?

The terms Data Dictionary and Data Repository indicate a more general software
utility than a catalogue. A catalogue is closely coupled with the DBMS software; it provides
the information stored in it to the user and the DBA, but it is mainly accessed by the various
software modules of the DBMS itself, such as the DDL and DML compilers, the query optimizer,
the transaction processor, report generators, and the constraint enforcer. On the other hand, a
data dictionary is a data structure that stores meta-data, i.e., data about data. The software
package for a stand-alone data dictionary or data repository may interact with the software
modules of the DBMS, but it is mainly used by the designers, users and administrators of a
computer system for information resource management. These systems are used to maintain
information on system hardware and software configuration, documentation, applications and
users, as well as other information relevant to system administration.
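As a small, hypothetical illustration of meta-data (not an actual catalogue or DBMS format), one
entry of a data dictionary can be held as a simple record describing a data item:

    # A toy data dictionary: data about data.
    data_dictionary = {
        "customer_phone": {
            "type": "string",
            "format": "NNN-NNN-NNNN",
            "description": "Contact number for the customer",
            "where_used": ["customer record", "repair action report"],
            "source": "customer intake form",
        }
    }

    # Designers, users and administrators query the dictionary:
    print(data_dictionary["customer_phone"]["where_used"])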

Q.34 Explain the purpose of behavioral model and state general steps of its analysis.

The behavioral model indicates how software will respond to external events or stimuli. To
create the model, the analyst must perform the following steps.

1. Evaluate all use cases to fully understand the sequence of interaction within the
system.
2. Identify events that drive the interaction sequence and understand how these
events relate to specific classes.
3. Create a sequence for each use case.
4. Build a state diagram for the system.
5. Review the behavioral model to verify accuracy and consistency.
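To ground step 4, the sketch below shows a minimal, hypothetical state machine in Python (the
states and events are invented for illustration). A state diagram is essentially a table of
(state, event) pairs mapped to next states:

    # (current state, external event) -> next state
    TRANSITIONS = {
        ("idle", "request_received"): "processing",
        ("processing", "work_done"): "idle",
        ("processing", "error"): "failed",
        ("failed", "reset"): "idle",
    }

    def next_state(state, event):
        # Events with no defined transition leave the state unchanged.
        return TRANSITIONS.get((state, event), state)

    state = "idle"
    for event in ["request_received", "error", "reset"]:
        state = next_state(state, event)
    print(state)  # -> "idle"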

Q. 35 What are quality guidelines for good design?

1. The design should have a recognizable architecture that has been created using known
architectural styles or patterns, that consists of good design components, and that can be created
in an evolutionary manner.

2. The design should be modular; that is, the software should be logically partitioned into subsystems and elements.

3. The design should consist of unique data representations, architecture, interfaces, and
components.

4. The design of the data structure should lead to the design of the appropriate classes that are
derived from known data patterns.

5. The design of the components should have independent functional characteristics.

6. The design should have interfaces that reduce the complexity of the links between components
and the environment.

7. The design should be derived using a repeatable method, driven by information obtained
during the requirements engineering phase.

8. The design should use a notation that conveys its meaning.

The above guidelines encourage good design through the application of fundamental design
principles, systematic methodology and thorough review.

Q.36 Explain design concepts in short.

1. Abstraction: When designing a modular system, many levels of abstraction are used. As
software engineers, we define different levels of abstraction as we design the blueprint of the
software. At the higher levels of abstraction, we state the solution using broad terms. As we
iterate to lower levels of abstraction, a more detailed description of the solution is defined. Two
types of abstraction are created, namely, data abstraction and procedural abstraction. Data
abstraction refers to a named collection of data that describes the information required by the
system. Procedural abstraction refers to a sequence of commands or instructions that has a
specific and limited function.

2. Modularity: Modularity is the characteristic of software that allows its development and
maintenance to be manageable. The software is decomposed into pieces called modules. These
are named and addressable components which, when linked and working together, satisfy a
requirement. Design is modularized so that we can easily develop a plan for software increments,
accommodate changes easily, test and debug effectively, and maintain the system with few side-
effects. In the object-oriented approach, modules are called classes. Modularity leads to
information hiding.

3. Information hiding: Information hiding means hiding the details (attributes and operations) of
the module or class from all others that have no need for such information. Modules and classes
communicate through interfaces, thus enforcing access constraints on data and procedural
details. This limits or controls the propagation of changes and errors when modifications are
made to the modules or classes. Modularity also encourages functional independence.
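A brief, hypothetical Python sketch of information hiding (the class and names are invented for
illustration): callers use the public interface only, while the internal representation stays hidden
and can change without rippling outward:

    class Account:
        def __init__(self):
            # Internal detail; the leading underscore marks it as
            # private by convention, hidden behind the interface.
            self._balance_cents = 0

        def deposit(self, amount):
            # The interface enforces the access constraint.
            if amount <= 0:
                raise ValueError("deposit must be positive")
            self._balance_cents += round(amount * 100)

        def balance(self):
            return self._balance_cents / 100

    acct = Account()
    acct.deposit(12.50)
    print(acct.balance())  # 12.5 -- callers never touch _balance_cents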

4. Functional Independence: It is the characteristic of a module or class to address a specific
function as defined by the requirements. It is achieved by defining modules that do a single
task or function and have just enough interaction with other modules. Good design uses two
important criteria: coupling and cohesion.

5. Coupling: It is the degree of interconnectedness between design objects, as represented by the
number of links an object has and by the degree of interaction it has with other objects. For
object-oriented design, two types of coupling are used.
1. Interaction coupling is the measure of the number of message types an object sends to another
object, and the number of parameters passed with these message types. Good interaction
coupling is kept to a minimum to avoid possible change ripples through the interface.
2. Inheritance coupling is the degree to which a subclass actually needs the features (attributes
and operations) it inherits from its base class. One should minimize the number of attributes and
operations that are unnecessarily inherited.

6. Cohesion: It is the measure to which an element (attribute, operation, or class within a
package) contributes to a single purpose. For object-oriented design, three types of cohesion are
used.
1. Operation cohesion is the degree to which an operation focuses on a single functional
requirement. Good design produces highly cohesive operations.
2. Class cohesion is the degree to which a class is focused on a single requirement.
3. Specialization cohesion addresses the semantic cohesion of inheritance: an inheritance
definition should reflect true inheritance rather than sharing of syntactic structure.
7. Refinement: Refinement is also known as the process of elaboration. Abstraction complements
refinement, as together they enable a software engineer to specify the behavior and data of a
class or module while suppressing low levels of detail. Refinement helps the software engineer
create a complete design model as the design evolves and uncover details as development
progresses.

8.Refactoring: Refactoring is a technique that simplifies the design of the component without
changing its function and behavior. It is a process of changing the software so that the external
behavior remains the same and the internal structures are improved. During refactoring, the
design model is checked for redundancy, unused design elements, inefficient or unnecessary
algorithms, poorly constructed or inappropriate data structures or any other design failures.
These are corrected to produce a better design.
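As a tiny, hypothetical example of refactoring, the two functions below compute the same result
(the external behavior is preserved) while the second removes the redundant branching (the
internal structure is improved):

    # Before refactoring: redundant structure, harder to read.
    def is_eligible_v1(age, is_member):
        if is_member == True:
            if age >= 18:
                return True
            else:
                return False
        else:
            return False

    # After refactoring: same external behavior, simpler structure.
    def is_eligible_v2(age, is_member):
        return is_member and age >= 18

    assert is_eligible_v1(20, True) == is_eligible_v2(20, True)
    assert is_eligible_v1(15, True) == is_eligible_v2(15, True)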

Q.37 Explain design models in short.

The Design Model: The work product of the design engineering phase is the design model, which
consists of the architectural design, data design, interface design and component-level design.

1. Architectural Design: This refers to the overall structure of the software. It includes the ways
in which it provides conceptual integrity for a system. It represents layers, subsystems and
components. It is modeled using the package diagram of UML.

2. Data Design: This refers to the design and organization of data. Entity classes that are defined
in the requirements engineering phase are refined to create the logical database design. Persistent
classes are developed to access data from a database server. It is modeled using the class
diagram.

3. Interface Design: This refers to the design of the interaction of the system with its
environment, particularly the human-interaction aspects. It includes the dialog and screen
designs. Report and form layouts are included. It uses the class diagram and state transition
diagrams.

4. Component-level Design: This refers to the design of the internal behavior of each class. Of
particular interest are the control classes. This is the most important design because the
functional requirements are represented by these classes. It uses the class diagrams and
component diagrams.

5. Deployment-level Design: This refers to the design of how the software will be deployed
for operational use. Software functionality, subsystems and components are distributed to the
physical environment that will support the software. The deployment diagram is used to
represent this model.

Q.38 How will you draw design model and explain different elements of it.

Software design is applied regardless of the software process model that is used. Beginning once
software requirements have been analyzed and specified, software design is the first of three
technical activities (design, code generation, and test) that are required to build and verify the
software. Each activity transforms information in a manner that ultimately results in validated
computer software.
Each of the elements of the analysis model provides information that is necessary to create the
four design models required for a complete specification of design. The flow of information
during software design is illustrated in Figure1.

Software requirements, manifested by the data, functional, and behavioral models, feed the
design task. Using one of a number of design methods, the design task produces a data design, an
architectural design, an interface design, and a component design.

The data design transforms the information domain model created during analysis into the data
structures that will be required to implement the software. The data objects and relationships defined
in the entity relationship diagram and the detailed data content depicted in the data dictionary provide
the basis for the data design activity.
Part of data design may occur in conjunction with the design of software architecture. More detailed
data design occurs as each software component is designed.
The architectural design defines the relationship between major structural elements of the software,
the design patterns that can be used to achieve the requirements that have been defined for the
system, and the constraints that affect the way in which architectural design patterns can be applied.
The architectural design representation, the framework of a computer-based system, can be

derived from the system specification, the analysis model, and the interaction of subsystems defined
within the analysis model.
The interface design describes how the software communicates within itself, with systems that
interoperate with it, and with humans who use it. An interface implies a flow of information (e.g.,
data and/or control) and a specific type of behavior. Therefore, data and control flow diagrams
provide much of the information required for interface design.
The component-level design transforms structural elements of the software architecture into a
procedural description of software components. Information obtained from the PSPEC, CSPEC, and
STD serves as the basis for component design.
During design we make decisions that will ultimately affect the success of software construction and,
as important, the ease with which software can be maintained.

Q.39 What is the importance of deign?

The importance of software design can be stated with a single word: quality.


Design is the place where quality is fostered in software engineering. Design provides us with
representations of software that can be assessed for quality. Design is the only way that we can
accurately translate a customer's requirements into a finished software product or system.
Software design serves as the foundation for all the software engineering and software support
steps that follow. Without design, we risk building an unstable system: one that will fail when
small changes are made; one that may be difficult to test; one whose quality cannot be assessed
until late in the software process, when time is short and many dollars have already been spent.

Q.40 What are characteristics of good design?

Three characteristics have been suggested that serve as a guide for the evaluation of a good
design:
1. The design must implement all of the explicit requirements contained in the analysis model,
and it must accommodate all of the implicit requirements desired by the customer.
2. The design must be a readable, understandable guide for those who generate code and for those
who test and subsequently support the software.
3. The design should provide a complete picture of the software, addressing the data, functional,
and behavioral domains from an implementation perspective.

Chapter 3 Testing strategies and methods

Software is tested to detect defects or faults before it is given to the end-users. It is a known
fact that it is very difficult to correct a fault once the software is in use. Software is tested to
demonstrate the existence of faults or defects, because the goal of software testing is to discover
them. In a sense, a test is successful only when a fault is detected. Fault identification is the
process of identifying the cause of a failure; there may be several. Fault correction and removal
is the process of making changes to the software and system to remove the fault. Software
testing encompasses a set of activities with the primary goal of discovering faults and defects.

Q.1 State the objectives of software testing.

It has the following objectives:
* To design test cases with a high probability of finding as-yet undiscovered bugs.
* To execute the program with the intent of finding bugs.

Q.2 What Is Software Testing?

Software testing is a process of verifying and validating that a software application or
program (1) meets the business and technical requirements that guided its design and
development, and (2) works as expected. Software testing also identifies important defects,
flaws, or errors in the application code that must be fixed.

Software testing has three main purposes: verification, validation, and defect finding. The
verification process confirms that the software meets its technical specifications. A
specification is a description of a function in terms of a measurable output value given a
specific input value under specific preconditions. A simple specification may be along the lines
of: "A SQL query retrieving data for a single account against the multi-month account-summary
table must return these eight fields <list> ordered by month within 3 seconds of submission."
The validation process confirms that the software meets the business requirements. A simple
example of a business requirement is: "After choosing a branch office name, information about
the branch's customer account managers will appear in a new window. The window will present
manager identification and summary information about each manager's customer base: <list of
data elements>." Other requirements provide details on how the data will be summarized,
formatted and displayed. A defect is a variance between the expected and actual result. The
defect's ultimate source may be traced to a fault introduced in the specification, design, or
development (coding) phases.

Q.3 What is the goal of Software Testing?


* Demonstrate That Faults Are Not Present
* Find Errors
* Ensure That All The Functionality Is Implemented
* Ensure The Customer Will Be Able To Get His Work Done

Q. 4 What is Verification and Validation?
* Verification: Are we doing the job right? The set of activities that ensure that software
correctly implements a specific function (i.e., the process of determining whether or not the
products of a given phase of the software development cycle fulfill the requirements established
during the previous phase). Examples: technical reviews, quality and configuration audits,
performance monitoring, simulation, feasibility study, documentation review, database review,
algorithm analysis, etc.
* Validation: Are we doing the right job? The set of activities that ensure that the software that
has been built is traceable to customer requirements (an attempt to find errors by executing the
program in a real environment). Examples: unit testing, system testing and installation testing, etc.

Q.5 What's a 'test case'?


A test case is a document that describes an input, action, or event and an expected response, to
determine if a feature of an application is working correctly. A test case should contain
particulars such as test case identifier, test case name, objective, test conditions/setup, input data
requirements, steps, and expected results.
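A minimal, hypothetical way to capture those particulars in a structured form (the field names
and values are illustrative, not a standard template):

    test_case = {
        "id": "TC-001",
        "name": "Login with valid credentials",
        "objective": "Verify a registered user can log in",
        "setup": "User 'demo' exists with password 'secret'",
        "input_data": {"username": "demo", "password": "secret"},
        "steps": [
            "Open the login page",
            "Enter the username and password",
            "Press the Login button",
        ],
        "expected_result": "User is redirected to the dashboard",
    }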

Q. 6 What is a software error ?


A mismatch between the program and its specification is an error in the program if and only if
the specification exists and is correct.

Q.7 What is a test plan?


A software project test plan is a document that describes the objectives, scope, approach, and
focus of a software testing effort. The process of preparing a test plan is a useful way to think
through the efforts needed to validate the acceptability of a software product.

Elements of test planning


* Establish objectives for each test phase
* Establish schedules for each test activity
* Determine the availability of tools, resources
* Establish the standards and procedures to be used for planning and conducting the tests and
reporting test results
* Set the criteria for test completion as well as for the success of each test

The Structured Approach to Testing

Test Planning
* Define what to test
* Identify Functions to be tested

* Test conditions
* Manual or Automated
* Prioritize to identify Most Important Tests
* Record Document References

Test Design
* Define how to test
* Identify Test Specifications
* Build detailed test scripts
* Quick Script generation
* Documents

Test Execution
* Define when to test
* Build test execution schedule
* Record test results

Q.8 What is Acceptance Testing?

Acceptance testing is making sure the software works correctly for its intended users in their
normal work environments.

Q.9 Explain the four types of tests under System Testing.


Recovery testing - checks the system's ability to recover from failures
Security testing - verifies that system protection mechanism prevent improper
penetration or data alteration
Stress testing - program is checked to see how well it deals with abnormal
resource demands (i.e., quantity, frequency, or volume)
Performance testing - designed to test the run-time performance of software,
especially real-time software

Q.10 How testing is carried out?

Software testing strategies integrate software test case design methods into a well-planned
series of steps that result in the successful implementation of the software. A strategy is a road
map that helps the end-users, software developers and quality assurance group conduct software
testing. It has the goals of ensuring that the software implements a specific function correctly
(verification), and that the software is built traceable to customer requirements
(validation). Software testing is performed by a variety of people. Software developers are
responsible for testing individual program units before they perform the integration of these
program units. However, they may have a vested interest in demonstrating that the code is error-
free. The quality assurance group may be tasked to perform the test. Their main goal is to
uncover as many errors as possible. They ask the software developers to correct any errors that
they have discovered. Finally, the software is tested as a whole system.

Q.11 What is Black box testing?

Black-box testing is a test design technique that focuses on testing the functional aspects of the
software, i.e., whether it complies with the functional requirements. Software engineers derive
sets of input conditions that will fully exercise all functional requirements of the software. It
defines a set of test cases that find incorrect or missing functions, errors in interfaces, errors in
data structures, errors in external database access, performance errors, and errors in
initialization and termination.

Q.12 Explain Graph-based Testing.

Graph-based testing is a black-box testing technique that uses objects that are modeled
in software and the relationships among these objects. Test cases can be derived by
understanding the dynamics of how these objects communicate and collaborate with one another.

Developing test cases using graph-based testing:

STEP 1. Create a graph of software objects and identify the relationships among these objects.
Using nodes and edges, create a graph of software objects. Nodes represent the software objects.
Properties can be used to describe the nodes and edges. For object-oriented software engineering,
the collaboration diagram is a good input for graph-based testing because you don't need to
create a new graph.

STEP 2. Traverse the graph to define test cases. For example, a derived test case is: Test Case 1:
the Find Athlete UI class sends a request to retrieve a list of athletes based on a search criteria.
The request is sent to the Find Athlete Record.
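A small, hypothetical sketch of the idea (the object names follow the athlete example above; the
graph encoding is invented for illustration): nodes are software objects, edges are collaborations,
and each traversed edge suggests a test case:

    # Nodes are software objects; an edge means "sends a request to".
    graph = {
        "FindAthleteUI": ["FindAthleteRecord"],
        "FindAthleteRecord": ["AthleteDatabase"],
        "AthleteDatabase": [],
    }

    def derive_test_cases(graph, start):
        # Walk the graph and emit one candidate test case per edge.
        cases, stack, seen = [], [start], set()
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            for target in graph[node]:
                cases.append(f"Test: {node} sends a request to {target}")
                stack.append(target)
        return cases

    for case in derive_test_cases(graph, "FindAthleteUI"):
        print(case)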

Q.13 What is Equivalence Testing?

Equivalence testing is a black-box testing technique that uses the input domain of the program.
It divides the input domain into sets of data from which test cases can be derived. Derived test
cases are used to uncover errors that reflect a class of errors, thus reducing the effort in testing
the software. It makes use of equivalence classes, which are sets of valid and invalid states that
an input may be in.

Guidelines in identifying equivalence classes:

1. Input condition is specified as a range of values: define one valid and two invalid equivalence
classes.

2. Input condition requires a specific value: define one valid and two invalid equivalence classes.

3. Input condition specifies a member of a set: define one valid and one invalid equivalence class.

4. Input condition is Boolean: define one valid and one invalid equivalence class.
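A worked example may help (hypothetical: suppose an input field accepts an integer age in the
range 18 to 65). Guideline 1 yields one valid and two invalid equivalence classes, so three
representative values cover the range:

    def accepts_age(age):
        # System under test: valid input is the range 18..65.
        return 18 <= age <= 65

    # One representative value per equivalence class.
    test_values = {
        40: True,    # valid class: inside the range
        10: False,   # invalid class: below the range
        70: False,   # invalid class: above the range
    }

    for value, expected in test_values.items():
        assert accepts_age(value) == expected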

Q. 14 What is Unit Testing?

Unit testing is the most basic level of testing. Its intention is to test the smaller building blocks of
a program. It is the process of executing each module to confirm that each performs its assigned
function. It involves testing the interface, local data structures, boundary conditions, independent
paths and error-handling paths. To test a module, a driver and stubs are used. A driver is a
program that accepts test case data, passes the data to the component to be tested, and prints
relevant results. A stub is a program that performs support activities such as data manipulation,
queries of the state of the component being tested, and printing verification of entry. If the driver
and stubs require a lot of effort to develop, unit testing may be delayed until integration testing.
To create effective unit tests, one needs to understand the behavior of the unit of software that
one is testing. This is usually done by decomposing the software requirements into simple
testable behaviors. It is important that software requirements can be translated into tests.
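A compact, hypothetical sketch using Python's built-in unittest module (the function and names
are invented): the test class plays the role of the driver, feeding test case data to the unit and
checking results, while a stub stands in for a subordinate component the unit depends on:

    import unittest

    def total_price(items, tax_rate_lookup):
        # Unit under test: sums prices and applies a region's tax rate.
        subtotal = sum(items)
        return round(subtotal * (1 + tax_rate_lookup("US-CA")), 2)

    def stub_tax_lookup(region):
        # Stub: replaces the real tax service with a canned answer.
        return 0.10

    class TotalPriceTest(unittest.TestCase):
        # The test method acts as the driver.
        def test_applies_tax(self):
            self.assertEqual(total_price([10.0, 5.0], stub_tax_lookup), 16.5)

    if __name__ == "__main__":
        unittest.main()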

Q.15 What is validation testing?

Validation testing starts after the culmination of integration testing. It consists of a series of
black-box test cases that demonstrate conformity with the requirements. At this point the
software is completely packaged as a system, and interface errors among software components
have been uncovered and corrected. Validation testing focuses on user-visible actions and
user-recognizable output. A validation criteria document contains all user-visible attributes of
the software and is the basis for validation testing. If the software exhibits these visible
attributes, then the software complies with the requirements.

Q.16 Explain Alpha & Beta Testing.

Alpha & Beta Testing is a series of acceptance tests to enable customer o rend-user to validate
all requirements. It can range from informal test drive to planned and systematically executed
tests. This test allows end-users to uncover errors and defects that only them can find. Alpha
testing is conducted at a controlled environment, normally, at the developers site. End-users are
asked to use the system as if it is being used naturally while the developers are recording errors
and usage problems. Beta testing, on the other hand, is conducted at one or more customer sites
and developers are not present. At this time, end-users are the ones recording all errors and usage
problems. They report them to the developer for fixes.

Q.17 Why Alpha and Beta Testing?

If a software application will be used by many users, then it is impossible for software developers
and testers to forecast how the customers will actually use the program. Customers may use odd
combinations of data regularly. Thus, most software product vendors use a process called alpha
and beta testing to uncover errors that only the end-users may find. Alpha and beta testing is
done by customers (end-users) rather than testing professionals.

A] Alpha Testing

- Alpha testing is performed by the customer at the developer's site.

- Alpha testing is conducted in a controlled environment, meaning testing is conducted in the
development environment in the presence of developers, testers and end users.

- The developer guides the users through the application and records defects and usage issues
while testing. This is also called the developer looking over the shoulder of the user.

B] Beta Testing

- We often hear the term Beta release/version; it is related to beta testing.

- Beta testing is performed by end users at the end user's site.

- Unlike alpha testing, the developers are not present, so beta testing is conducted in an
uncontrolled environment.

- The users record all issues and problems that occur during use of the application and report
these issues to the developer regularly.

- The software engineers take care of all issues reported during beta testing, make the necessary
modifications, and then prepare the product for final release to the entire customer base.

Q .18 Enlist the various black box testing.

Black box testing: Not based on any knowledge of internal design or code. Tests are based on
requirements and functionality.

The types of black box testing are given below.

Functional testing - covers how well the system executes its functions as defined by the end
user or the specification.

System testing- that is based on overall requirements specifications, covers all combined parts
of a system.

Sanity test - a narrow regression test that focuses on one or a few areas of functionality after
taking the latest build.

Smoke test - rough, shallow testing of the latest build to verify that its major functions work.

Regression testing - re-testing after modifications of the software or its environment.

Confirmation testing - is testing fixes to a set of defects.

Acceptance testing - final testing based on specifications of the customer.

Performance testing - To test the performance of project with or beyond the load. It covers
Load & stress Testing.

Load testing - testing an application under heavy loads, such as testing of a web site under a
range of loads to determine at what point the system's response time degrades or fails.

Stress testing - testing an application under unusually heavy loads, heavy repetition of certain
actions, or input of values beyond the specified limits.

Usability testing - testing for 'user-friendliness'.

Recovery testing - testing how well a system recovers from crashes, hardware failures, or other
disastrous problems.

Compatibility testing - testing how well software performs in a particular
hardware/software/operating system/network/etc. environment.

Exploratory testing - testing the software without any specification, test plan, or test cases; the
tester relies on his or her own experience.

ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers have
significant understanding of the software before testing it.

User acceptance testing - determining if software is satisfactory to an end-user or customer.

Comparison test - comparing software weaknesses and strengths to competing products.

Alpha testing - testing of an application when development is nearing completion; minor design
changes may still be made as a result of such testing.

Beta testing - testing when development and testing are essentially completed and final bugs
and problems need to be found before final release.

Shakeout test - an initial test that verifies a build or load has been successful and that all
software is accessible and functioning as expected.

Q.19 Enlist various white box testing.

White box testing: based on knowledge of the internal logic of an application's code. Tests are
based on coverage of code statements, branches, paths, and conditions.
The types of white box testing are given below.

Unit testing - tests particular functions or code modules. It is typically done by the programmer,
as it requires detailed knowledge of the internal program design and code.

Integration Testing - testing of combined parts of an application to determine if they function
together correctly. There are two types of integration testing:

a) Top down Integration Testing b) Bottom up Integration Testing

Top Down: In this approach testing is conducted from the main module to the sub modules. If a
sub module is not yet developed, a temporary program called a STUB takes its place.

Bottom Up: In this approach testing is conducted from the sub modules to the main module. If
the main module is not yet developed, a temporary program called a DRIVER takes its place.

Security testing - testing how well the system protects against unauthorized internal or
external access etc.

Code coverage - describes the degree to which the source code of a program has been tested. It
covers a number of coverage criteria:

1. Statement Coverage
2. Decision/Branch Coverage
3. Condition Coverage
4. Path Coverage
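
To make the first two criteria concrete, a hedged Python example: a single test with x = -5
executes every statement of the function below (full statement coverage) but exercises only the
True outcome of the decision, so branch coverage is incomplete until x = 5 is added.

def absolute(x):
    if x < 0:      # branch coverage requires both outcomes of this decision
        x = -x
    return x

assert absolute(-5) == 5   # every statement runs, but only the True branch
assert absolute(5) == 5    # now the False branch is exercised as well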

Q.20 Explain the v-model of software testing.

Software testing is too important to leave to the end of the project, and the V-Model of
testing incorporates testing into the entire software development life cycle. In a diagram of the
V-Model, the V proceeds down and then up, from left to right depicting the basic sequence of
development and testing activities. The model highlights the existence of different levels of
testing and depicts the way each relates to a different development phase. Like any model, the V-
Model has detractors and arguably has deficiencies and alternatives but it clearly illustrates that
testing can and should start at the very beginning of the project. In the requirements gathering
stage, the business requirements can be used to verify and validate the business case used to
justify the project. The business requirements are also used to guide the user acceptance testing.
The model
illustrates how each subsequent phase should verify and validate work done in the previous
phase, and how work done during development is used to guide the individual testing phases.
This interconnectedness lets us identify important errors, omissions, and other problems before
they can do serious harm.

Q. 21 Explain unit testing.

A series of stand-alone tests are conducted during Unit Testing. Each test examines an
individual component that is new or has been modified. A unit test is also called a module test
because it tests the individual units of code that comprise the application. Each test validates a
single module that, based on the technical design documents, was built to perform a certain task
with the expectation that it will behave in a specific way or produce specific results. Unit tests
focus on functionality and reliability, and the entry and exit criteria can be the same for each
module or specific to a particular module. Unit testing is done in a test environment prior to
system integration. If a defect is discovered during a unit test, the severity of the defect will
dictate whether or not it will be fixed before the module is approved.

Sample Entry and Exit Criteria for Unit Testing

Entry Criteria: Business requirements are at least 80% complete and have been approved to
date; the technical design has been finalized and approved; the development environment has
been established and is stable; code development for the module is complete.

Exit Criteria: Code has version control in place; no known major or critical defects prevent any
module from moving to system testing; a testing transition meeting has been held and the
developers have signed off; project manager approval has been received.

Q.22 What is system testing?

System Testing tests all components and modules that are new, changed, affected by a
change, or needed to form the complete application. The system test may require involvement of
other systems but this should be minimized as much as possible to reduce the risk of externally-
induced problems. Testing the interaction with other parts of the complete system comes in
Integration Testing. The emphasis in system testing is validating and verifying the functional
design specification and seeing how all the modules work together. The first system test is often
a smoke test. This is an informal, quick-and-dirty run through of the application's major functions
without bothering with details. The term comes from the hardware testing practice of turning on
a new piece of equipment for the first time and considering it a success if it doesn't start smoking
or burst into flame. System testing requires many test runs because it entails feature-by-feature
validation of behavior using a wide range of both normal and erroneous test inputs and data. The
Test Plan is critical here because it contains descriptions of the test cases, the sequence in which
the tests must be executed, and the documentation needed to be collected in each run. When an
error or defect is discovered, previously executed system tests must be rerun after the repair is
made, to make sure that the modifications didn't cause other problems.

Sample Entry and Exit Criteria for System Testing

Entry Criteria: Unit testing for each module has been completed and approved; each module is
under version control; an incident tracking plan has been approved; a system testing environment
has been established; the system testing schedule is approved and in place.

Exit Criteria: The application meets all documented business and functional requirements; no
known critical defects prevent moving to integration testing; all appropriate parties have
approved the completed tests; a testing transition meeting has been held and the developers have
signed off.

Q. 23 Explain integration testing.

Integration testing examines all the components and modules that are new, changed,
affected by a change, or needed to form a complete system. Where system testing tries to
minimize outside factors, integration testing requires involvement of other systems and interfaces
with other applications, including those owned by an outside vendor, external partners, or the
customer.

Sample Entry and Exit Criteria for Integration Testing

Entry Criteria: System testing has been completed and signed off; outstanding issues and defects
have been identified and documented; test scripts and the schedule are ready; the integration
testing environment is established.

Exit Criteria: All systems involved passed integration testing and meet the agreed-upon
functionality and performance requirements; outstanding defects have been identified,
documented, and presented to the business sponsor; stress, performance, and load tests have been
satisfactorily conducted; the implementation plan is in final draft stage; a testing transition
meeting has been held and everyone has signed off.

Integration testing has a number of sub-types of tests that may or may not be used, depending on
the application being tested or the expected usage patterns.

Q.24 What is Compatibility Testing?

Compatibility tests ensure that the application works with differently configured systems,
based on what the users have or may have. When testing a web interface, this means testing for
compatibility with different browsers and connection speeds.

Q.25 What is Performance Testing?

Performance tests are used to evaluate and understand the application's scalability.

Q.26 What is Stress Testing?

Stress Testing is performance testing at higher than normal simulated loads. Stressing runs the
system or application beyond the limits of its specified requirements to determine the load under
which it fails and how it fails.

Q.27 What is Load Testing?

Load tests are the opposite of stress tests. They test the capability of the application to function
properly under expected normal production conditions and measure the response times for
critical transactions or processes to determine if they are within limits specified in the business
requirements and design documents or that they meet Service Level Agreements.

Q.28 Explain Regression testing.

It is also known as validation testing and provides a consistent, repeatable validation of each
change to an application under development or being modified. Each time a defect is fixed, the
potential exists to inadvertently introduce new errors, problems, and defects. An element of
uncertainty is introduced about the ability of the application to repeat everything that went right
up to the point of failure. Regression testing is the selective retesting of an application or
system that has been modified, to ensure that no previously working components, functions, or
features fail as a result of the repairs. Regression testing is conducted in parallel with other tests
and can be viewed as a quality control tool to ensure that the newly modified code still complies
with its specified requirements and that unmodified code has not been affected by the change. It
is important to understand that regression testing doesn't test that a specific defect has been
fixed; it tests that the rest of the application up to the point of repair was not adversely affected
by the fix.

Q.29 Explain Risk Driven Testing.


What if there isn't enough time for thorough testing?
Use risk analysis to determine where testing should be focused. Since it's rarely possible to test
every possible aspect of an application, every possible combination of events, every dependency,
or everything that could go wrong, risk analysis is appropriate to most software development
projects. This requires judgment skills, common sense, and experience.

Considerations can include:


- Which functionality is most important to the project's intended purpose?
- Which functionality is most visible to the user?
- Which aspects of the application are most important to the customer?
- Which parts of the code are most complex, and thus most subject to errors?
- What do the developers think are the highest-risk aspects of the application?
- What kinds of tests could easily cover multiple functionality?
Whenever there's too much to do and not enough time to do it, we have to prioritize so that at
least the most important things get done, so prioritization has received a lot of attention. The
approach is called Risk Driven Testing. Here's how you do it: take the pieces of your system,
whatever unit you use (modules, functions, sections of the requirements), and rate each piece on
two variables, Impact and Likelihood.

Risk has two components: Impact and Likelihood.

Impact is what would happen if this piece somehow malfunctioned. Would it destroy the
customer database? Or would it just mean that the column headings in a report didn't quite line
up?

Likelihood is an estimate of how probable it is that this piece would fail. Together, Impact and
Likelihood determine the Risk for the piece.
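
A minimal sketch of this rating exercise in Python, assuming 1-5 scales for both variables and
taking risk as their product; the piece names and scores are purely illustrative:

# Each piece of the system is rated on Impact and Likelihood (1 = low, 5 = high).
pieces = [
    ("payment processing", 5, 3),
    ("report column headings", 1, 4),
    ("customer database update", 5, 2),
]

# Risk = Impact x Likelihood; test the riskiest pieces first.
for name, impact, likelihood in sorted(pieces, key=lambda p: p[1] * p[2], reverse=True):
    print(name, "risk =", impact * likelihood)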

Q.30 What is a software error?


A mismatch between the program and its specification is an error in the program if and only if
the specification exists and is correct.
Examples:
* The date on the report title is wrong.
* The system hangs if more than 20 users try to commit at the same time.
* The user interface is not standard across programs.

Q. 31 What are Categories of Software errors?


* User interface errors
* Functionality errors
* Performance errors
* Output errors
* Documentation errors

Q.32 What Do You Do When You Find a Bug?


If a bug is found,
* alert the developers that a bug exists
* show them how to reproduce the bug
* ensure that if the developer fixes the bug, it is fixed correctly and the fix didn't break
anything else
* keep management apprised of the outstanding bugs and correction trends
Bug Writing Tips
Ideally you should be able to write a bug report clearly enough for a developer to reproduce and
fix the problem, and for another QA engineer to verify the fix, without either having to come
back to you, the author, for more information.
To write a fully effective report you must:
* Explain how to reproduce the problem
* Analyze the error so you can describe it in a minimum number of steps
* Write a report that is complete and easy to understand
Q.33 What are Debugging methods?

Debugging (removal of a defect) occurs as a consequence of successful testing.


Some people are better at debugging than others.

Common approaches:
Brute force - memory dumps and run-time traces are examined for clues to error causes
Backtracking - source code is examined by looking backwards from the symptom to potential
causes of the error
Cause elimination - uses binary partitioning to reduce the number of potential locations where
the error can exist (a small sketch of this idea follows below)
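
A minimal sketch of cause elimination by binary partitioning in Python, assuming the failure is
deterministic and triggered by a single input record; find_culprit() and the fails() callback are
hypothetical names:

def find_culprit(records, fails):
    # fails(batch) runs the code under test and returns True if the
    # failure reproduces for that batch of input records.
    while len(records) > 1:
        mid = len(records) // 2
        # Keep whichever half still reproduces the failure.
        records = records[:mid] if fails(records[:mid]) else records[mid:]
    return records[0]  # the single record that triggers the defect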
Bug Removal Considerations
Is the cause of the bug reproduced in another part of the program?
What "next bug" might be introduced by the fix that is being proposed?
What could have been done to prevent this bug in the first place?

Q.34 Why is debugging so difficult? What are the characteristics of bugs which provide
some clues?

1. The symptom and the cause may be geographically remote. That is, the symptom may appear
in one part of a program, while the cause may actually be located at a site that is far removed.
Highly coupled program structures exacerbate this situation.
2. The symptom may disappear (temporarily) when another error is corrected.
3. The symptom may actually be caused by non errors (e.g., round-off inaccuracies).
4. The symptom may be caused by human error that is not easily traced.
5. The symptom may be a result of timing problems, rather than processing problems.
6. It may be difficult to accurately reproduce input conditions (e.g., a real-time application in
which input ordering is indeterminate).
7. The symptom may be intermittent. This is particularly common in embedded systems that
couple hardware and software inextricably.
8. The symptom may be due to causes that are distributed across a number of tasks running on
different processors.

Chapter 4 Project Management Spectrum

Q.1 Explain the software management spectrum.

The management of software development is heavily dependent on four factors: People, Product,
Process, and Project. Software development is a people-centric activity; hence, the success of a
project rests on the shoulders of the people involved in the development.

The People
Software development requires good managers: managers who can understand the psychology of
people and provide good leadership. A good manager cannot ensure the success
of the project, but can increase the probability of success.
proper selection, training, compensation, career development, work culture etc. Managers face
challenges. It requires mental toughness to endure inner pain. We need to plan for the best, be
prepared for the worst, expect surprises, but continue to move forward anyway. Hence, manager
selection is most crucial and critical. It is the responsibility of a manager to manage, motivate,
encourage, guide and control the people of his/her team.
The Product
What do we want to deliver to the customer? Obviously, a product; a solution to his/her
problems. Hence, objectives and scope of work should be defined clearly to understand the
requirements. Alternate solutions should be discussed; this may help the managers to select the
best approach within the constraints imposed by delivery deadlines, budgetary restrictions,
personnel availability, technical interfaces etc. Without well defined requirements, it may be
impossible to define reasonable estimates of the cost, development time and schedule for the
project.
The Process
The process is the way in which we produce software. It provides the framework from
which a comprehensive plan for software development can be established. If the process is weak,
the end product will undoubtedly suffer. There are many life cycle models and process
improvement models. Depending on the type of project, a suitable model is to be selected.
Nowadays the CMM (Capability Maturity Model) has become almost a standard process framework.
The process priority is after people and product, however, it plays very critical role for the
success of the project. A small number of framework activities are applicable to all software
projects, regardless of their size and complexity. A number of different task sets, tasks,
milestones, work products, and quality assurance points, enable the framework activities to be
adapted to the characteristics of the project and the requirements of the project team.
The Project
Proper planning is required to monitor the status of development and to control
complexity. Many projects are delivered late, with cost overruns of more than 100%. In order
to manage a successful project, we must understand what can go wrong and how to do it right.
We should define concrete requirements (although very difficult) and freeze these requirements.
Changes should not be incorporated to avoid software surprises. Software surprises are always
risky and we should minimize them. We should have a planning mechanism to give warning
before the occurrence of any surprise.

Q.2 Enlist the various stakeholders in the software management process.

1. Senior managers, who define the business issues that often have significant influence on the
project.

2. Project (technical) managers, who must plan, motivate, organize, and control the practitioners
who do software work.

3. Practitioners, who deliver the technical skills that are necessary to engineer a product or
application.

4. Customers, who specify the requirements for the software to be engineered.

5. End users, who interact with the software once it is released for production use.

Q.3 What do we look for when we select someone to lead a software project ?

The following model of leadership should be considered.

Motivation - the ability to encourage (by push or pull) technical people to produce to their
best ability.

Organization - the ability to mold existing processes (or invent new ones) that will enable the
initial concept to be translated into a final product.

Ideas or innovation - the ability to encourage people to create and feel creative even when they
must work within bounds established for a particular software product or application.

Q.4 What are the characteristics of good project manager?

The characteristics that define an effective project manager emphasize four key traits.

Problem solving - an effective software project manager can diagnose the technical and
organizational issues that are most relevant, systematically structure a solution or properly
motivate other practitioners to develop the solution, apply lessons learned from past projects to
new situations, and remain flexible enough to change direction if initial attempts at problem
solution are fruitless.

Managerial identity - a good project manager must take charge of the project. She must have the
confidence to assume control when necessary and the assurance to allow good technical people
to follow their instincts.

Achievement - to optimize the productivity of a project team, a manager must reward initiative
and accomplishment, and demonstrate through his own actions that controlled risk taking will
not be punished.

Page
60
Influence and team building - an effective project manager must be able to read people; she
must be able to understand verbal and nonverbal signals and react to the needs of the people
sending these signals. The manager must remain under control in high stress situations.

Q.5 Explain the four organizational paradigms for software engineering teams.

1. A closed paradigm structures a team along a traditional hierarchy of authority. Such teams can
work well when producing software that is quite similar to past efforts, but they will be less
likely to be innovative when working within the closed paradigm.

2. The random paradigm structures a team loosely and depends on individual initiative of the
team members. When innovation or technological breakthrough is required, teams following the
random paradigm will excel, but such teams may struggle when orderly performance is
required.

3. The open paradigm attempts to structure a team in a manner that achieves some of the controls
associated with the closed paradigm but also much of the innovation that occurs when using the
random paradigm. Work is performed collaboratively with heavy communication and consensus
based decision making. Open paradigm team structures are well suited to the solution of complex
problems, but may not perform as efficiently as other teams.
4. The synchronous paradigm relies on the natural compartmentalization of a problem and
organizes team members to work on pieces of the problems with little active communication
among themselves.

Q. 6 What is the intent of adaptable process model?

Software projects span many different types of organizations, a variety of different application
areas, and a wide range of technologies; it is likely that any process model developed for use on
these projects will have to be adapted to local circumstances.
The intent of the Adaptable Process Model is to:
1. Provide a common process framework for all projects;
2. Define generic framework activities that are applicable across all projects;
3. Define a process model that can be adapted to local requirements, standards, and
culture;
4. Define a process model that is appropriate regardless of the paradigm (linear sequential life
cycle model, prototyping, evolutionary model) that has been chosen for the process flow;
5. Provide guidance to project teams that will enable these teams to adapt the APM
intelligently to their environment.
The process model is adapted by considering two characteristics of the project:
(1) project type, and
(2) a set of adaptation criteria that defines the degree of rigor with which software engineering is
to be applied.

Q. 7 What type of projects could be developed?

Project type refers to the characteristics of the project. In this context, the following
project types are defined:
Concept Development Projects that are initiated to explore some new business concept or the
application of some new technology.
New Application Development Projects that are undertaken as a consequence of a specific
customer request.
Application Enhancement Projects that occur when existing software undergoes major
modifications to function, performance, or interfaces that are observable by the end-user.
Application Maintenance Projects that correct, adapt, or extend existing software in ways that
may not be immediately obvious to the end user.
Reengineering Projects that are undertaken with the intent of rebuilding an existing system in
whole or in part.
Web Application Development Projects that are undertaken when web sites and related
internet-based applications must be developed.

Q. 8 State the reasons of late delivery of projects.

Although there are many reasons why software is delivered late, most can be traced to one or
more of the following root causes:
An unrealistic deadline established by someone outside the software development group and
forced on managers and practitioners within the group.
Changing customer requirements that are not reflected in schedule changes.
An honest underestimate of the amount of effort and/or the number of resources that will be
required to do the job.
Predictable and/or unpredictable risks that were not considered when the project commenced.
Technical difficulties that could not have been foreseen in advance.
Human difficulties that could not have been foreseen in advance.
Miscommunication among project staff that results in delays.
A failure by project management to recognize that the project is falling behind schedule and a
lack of action to correct the problem.

Aggressive deadlines are a fact of life in the software business. Sometimes such deadlines are
demanded for legitimate reasons; the schedule must then be created in a manner that enables the
software team to meet the established delivery deadline.

Q. 9 Explain project scheduling principles.

Project Scheduling is the task that describes the software development process for a particular
project. It enumerates phases or stages of the projects, breaks each into discrete tasks or activities
to be done, portrays the interactions among these pieces of work and estimates the time that each
task or activity will take. It is a time-phased sequencing of activities subject to precedence
relationships, time constraints, and resource limitations to accomplish specific objectives. It is a
team process that gives the start point of formulating a work program. It is iterative process and
must be flexible to accommodate changes. There are certain basic principles used in project
scheduling. Some of them are enumerated below.

Compartmentalization. The project must be compartmentalized into a number of manageable
activities and tasks. To accomplish compartmentalization, both the product and the process are
decomposed.
Interdependency. The interdependency of each compartmentalized activity or task must
be determined. Some tasks must occur in sequence while others can occur in parallel.
Some activities cannot commence until the work product produced by another is available; other
activities can occur independently.
Time allocation. Each task to be scheduled must be allocated some number of work units. In
addition, each task must be assigned a start date and a completion date that are a function of the
interdependencies and whether work will be conducted on a full-time or part-time basis.
Effort validation. Every project has a defined number of staff members. As time allocation
occurs, the project manager must ensure that no more than the allocated number of people have
been scheduled at any given time. For example, consider a project that has three assigned staff
members. On a given day, seven concurrent tasks must be accomplished, each requiring 0.50
person-days of effort. More effort has been allocated than there are people to do the work (a
small sketch of this check follows the list).
Defined responsibilities. Every task that is scheduled should be assigned to a specific team
member.
Defined outcomes. Every task that is scheduled should have a defined outcome. For software
projects, the outcome is normally a work product or a part of a work product. Work products are
often combined in deliverables.
Defined milestones. Every task or group of tasks should be associated with a project milestone.
A milestone is accomplished when one or more work products has been reviewed for quality and
has been approved.

Each of these principles is applied as the project schedule evolves.
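
The effort validation check from the example above can be done mechanically; a minimal
Python sketch, with the staffing numbers taken from that example:

available_staff = 3            # person-days of effort available per day
tasks_today = 7                # concurrent tasks scheduled for the day
effort_per_task = 0.5          # person-days required by each task

scheduled = tasks_today * effort_per_task
print("scheduled:", scheduled, "available:", available_staff)
if scheduled > available_staff:
    # 3.5 > 3 here, so the day is over-allocated.
    print("over-allocated: reschedule tasks or adjust staffing")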

Q. 10 How Problem Decomposition is achieved?


Problem decomposition, sometimes called partitioning or problem elaboration, is an activity that
sits at the core of software requirements analysis. During the scoping activity no attempt is made
to fully decompose the problem. Rather, decomposition is applied in two major areas: (1) the
functionality that must be delivered and (2) the process that will be used to deliver it. Human
beings tend to apply a divide and conquer strategy when they are confronted with complex
problems. Stated simply, a complex problem is partitioned into smaller problems that are more
manageable. This is the strategy that applies as project planning begins. Software functions,
described in the statement of scope, are evaluated and refined to provide more detail prior to the
beginning of estimation. Because both cost and schedule estimates are functionally oriented,
some degree of decomposition is often useful.

As an example, consider a project that will build a new word-processing product. Among
the unique features of the product are continuous voice as well as keyboard input, extremely
sophisticated automatic copy edit features, page layout capability, automatic indexing and
table of contents, and others. The project manager must first establish a statement of scope that
bounds these features (as well as other more mundane functions such as editing, file
management, document production, and the like).
For example, will continuous voice input require that the product be trained by the
user? Specifically, what capabilities will the copy edit feature provide? Just how sophisticated
will the page layout capability be? As the statement of scope evolves, a first level of partitioning
naturally occurs. The project team learns that the marketing department has talked with potential
customers and found that the following functions should be part of automatic copy editing:
(1) Spell checking; (2) sentence grammar checking, (3) reference checking for large documents
(e.g., is a reference to a bibliography entry found in the list of entries in the bibliography?), and
(4) section and chapter reference validation for large documents. Each of these features
represents a sub function to be implemented in software. Each can be further refined if the
decomposition will make planning easier.

Q.11 How Process Decomposition is achieved?


A software team should have a significant degree of flexibility in choosing the software
engineering paradigm that is best for the project and the software engineering tasks that populate
the process model once it is chosen. A relatively small project that is similar to past efforts might
be best accomplished using the linear sequential approach. If very tight time constraints are
imposed and the problem can be heavily compartmentalized, the RAD model is probably the
right option. If the deadline is so tight that full functionality cannot reasonably be delivered, an
incremental strategy might be best. Similarly, projects with other characteristics (e.g., uncertain
requirements, breakthrough technology, difficult customers, significant reuse potential) will lead
to the selection of other process models.
Once the process model has been chosen, the common process framework (CPF) is
adapted to it. In every case, customer communication, planning, risk analysis, engineering,
construction and release, and customer evaluation can be fitted to the paradigm. It will work for
linear models, for iterative and incremental models, for evolutionary models, and even for
concurrent or component assembly models. The CPF is invariant and serves as the basis for all
software work performed by a software organization. But actual work tasks do vary. Process
decomposition commences when the project manager asks, "How do we accomplish this CPF
activity?" For example, a small, relatively simple project might require the following work tasks
for the customer communication activity:
1. Develop list of clarification issues.
2. Meet with customer to address clarification issues.
3. Jointly develop a statement of scope.
4. Review the statement of scope with all concerned.
5. Modify the statement of scope as required.
These events might occur over a period of less than 48 hours. They represent a process
decomposition that is appropriate for the small, relatively simple project.
Now, we consider a more complex project, which has a broader scope and more
significant business impact. Such a project might require the following work tasks for the
customer communication activity:

1. Review the customer request.
2. Plan and schedule a formal, facilitated meeting with the customer.
3. Conduct research to specify the proposed solution and existing approaches.
4. Prepare a working document and an agenda for the formal meeting.
5. Conduct the meeting.
6. Jointly develop mini-specs that reflect data, function, and behavioral features of the software.
7. Review each mini-spec for correctness, consistency, and lack of ambiguity.
8. Assemble the mini-specs into a scoping document.
9. Review the scoping document with all concerned.
10. Modify the scoping document as required.

Q. 12 Explain the Framework for PERT and CPM project planning techniques.

Essentially, there are six steps which are common to both the techniques. The procedure is listed
below:

I. Define the Project and all of its significant activities or tasks. The Project (made up of
several tasks) should have only a single start activity and a single finish activity.

II. Develop the relationships among the activities. Decide which activities must precede and
which must follow others.

III. Draw the "Network" connecting all the activities. Each Activity should have unique event
numbers. Dummy arrows are used where required to avoid giving the same numbering to
two activities.

IV. Assign time and/or cost estimates to each activity

V. Compute the longest time path through the network. This is called the critical path.

VI. Use the Network to help plan, schedule, monitor and control the project.

The key concept used by CPM/PERT is that a small set of activities, which make up the longest
path through the activity network, controls the entire project. If these "critical" activities can be
identified and assigned to responsible persons, management resources can be optimally used
by concentrating on the few activities which determine the fate of the entire project.

Non-critical activities can be re-planned and rescheduled, and resources for them can be reallocated
flexibly, without affecting the whole project.
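
A minimal Python sketch of step V, computing the longest (critical) path length through an
activity network by processing activities in topological order; the activity names and durations
are illustrative:

# Activity network: name -> (duration, predecessors), listed in topological
# order so every predecessor appears before the activities that depend on it.
activities = {
    "A": (3, []),
    "B": (2, ["A"]),
    "C": (4, ["A"]),
    "D": (1, ["B", "C"]),
}

earliest_finish = {}
for name, (duration, preds) in activities.items():
    start = max((earliest_finish[p] for p in preds), default=0)
    earliest_finish[name] = start + duration

# The project duration equals the longest path: A-C-D = 8 time units here.
print("critical path length:", max(earliest_finish.values()))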

Q. 13 What are the types of risks?


Project risks - threaten the project plan
Technical risks - threaten product quality and the timeliness of the schedule
Business risks - threaten the viability of the software to be built (market risks, strategic risks,
management risks, budget risks)

Known risks - predictable from careful evaluation of current project plan and those
extrapolated from past project experience
Unknown risks - some problems simply occur without warning

Q.14 What are the risk strategies?

There are two types of Risk Strategies i.e. Reactive Vs. Proactive.
Reactive risk strategies have been laughingly called the Indiana Jones school of risk
management. In the movies that carried his name, Indiana Jones, when faced with
overwhelming difficulty, would invariably say, "Don't worry, I'll think of something!" Never
worrying about problems until they happened, Indy would react in some heroic way. Sadly, the
average software project manager is not Indiana Jones and the members of the software project
team are not his trusty sidekicks. Yet, the majority of software teams rely solely on reactive risk
strategies.

A considerably more intelligent strategy for risk management is to be proactive. A
proactive strategy begins long before technical work is initiated. Potential risks are identified,
their probability and impact are assessed, and they are prioritized by importance. Then, the
software team establishes a plan for managing risk. The primary objective is to avoid risk, but
because not all risks can be avoided, the team works to develop a contingency plan that will
enable it to respond in a controlled and effective manner.

Q. 15 Enlist the two characteristics of Software Risks.


Risks have two characteristics:
Uncertainty - the event that characterizes the risk may or may not happen; i.e., there are
no 100% probable risks.
Loss - if the risk becomes a reality, unwanted consequences or losses will occur.

Q. 16 How will you identify the risks?


One method for identifying risks is to create a risk item checklist. The Checklist can be
used for risk identification and focuses on some subset of known and predictable risks in the
following generic subcategories.
Product size risks associated with the overall size of the software to be built or
modified
Business impact risk associated with constraints imposed by management or the
marketplace
Customer characteristics risks associated with the sophistication of the customer and
the developers ability to communicate with the customer in a timely manner.
Process definition- risks associated with the degree to which the software process has
been defined and is followed by the development organization.
Development environment risks associated with the availability and quality of the
tools to be used to build the product.

Technology to be built risks associated with the complexity of the system to be built
and the newness of the technology that is packaged by the system.
Staff size and experience risks associated with the overall technical and project
experience of the software engineers who will do the work.

Q. 17 What is Risk Mitigation, Monitoring and Management?

An effective strategy for risk management must consider three issues:


Risk avoidance.
Risk monitoring, and
Risk management and contingency planning
If a software team adopts a proactive approach to risk, avoidance is always the best
strategy. This is achieved by developing a plan for risk mitigation. For example, assume that
high staff turnover is noted as a project risk r1. Based on past history and management intuition,
the likelihood l1 of high turnover is estimated to be 0.70, and the impact x1 is projected to be
critical for project cost and schedule.
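
The likelihood and cost estimates can be combined into a single risk exposure figure,
RE = P x C (probability of the risk times the cost it would incur); a minimal sketch, where the
cost value is assumed purely for illustration:

probability = 0.70     # likelihood of high staff turnover, from the example
cost = 25000           # assumed cost (say, dollars) if the risk becomes real
risk_exposure = probability * cost
print("risk exposure:", risk_exposure)   # 17500.0; used to rank and budget risks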
To mitigate this risk, project management must develop a strategy for reducing turnover.
Among the possible steps to be taken are these:
Meet with current staff to determine causes of turnover (e.g., poor working conditions, low
pay, competitive job market).
Act to mitigate those causes that are under management control before the project starts.
Once the project commences, assume turnover will occur and develop techniques to ensure
continuity when people leave.
Organize project teams so that information about each development activity is widely
dispersed.
Define documentation standards and establish mechanisms to be sure that documents are
developed in a timely manner
Conduct peer reviews of all work so that more than one person is familiar with the work
Define a backup staff member for every critical technologist.

As the project proceeds, risk monitoring activities commence. The project manager
monitors factors that may provide an indication of whether the risk is becoming more or less
likely. In the case of high staff turnover, the following factors can be monitored:

General attitude of team members based on project pressures.


The degree to which the team has jelled.
Interpersonal relationships among team members.
Potential problems with compensation and benefits.
The availability of jobs within the company and outside it.

In addition to monitoring these factors, the project manager should monitor the effectiveness
of risk mitigation steps. This is one mechanism for ensuring continuity, should a critical
individual leave the project. The project manager should monitor documents carefully to ensure
that each can stand on its own and that each imparts information that would be necessary if a
newcomer were forced to join the software team somewhere in the middle of the project.

Q.18 Explain the RMMM PLAN.


A risk management strategy can be included in the software project plan or the risk
management steps can be organized into a separate Risk Mitigation, Monitoring and
Management Plan. The RMMM plan documents all work performed as part of risk analysis and
is used by the project manager as part of the overall project plan.
Once RMMM has been documented and the project has begun, risk mitigation and
monitoring steps commence.
Risk monitoring is a project tracking activity with three primary objectives: (1) to assess
whether predicted risks do, in fact, occur; (2) to ensure that risk aversion steps defined for the
risk are being properly applied; and (3) to collect information that can be used for future risk
analysis.
Risk management is viewed as a career path in the organization and those that practice it
are treated as professionals.
Risk analysis functions are given independence in the organization even though that may
make it hard to "control."
They use modern tools and are not disdainful of sophisticated and proven approaches.
They measure their effectiveness with metrics. Project decisions are made on a "risk-adjusted"
basis. Continuous improvement is achieved through regular repetition. They participate in
professional interchanges through conferences and journals, sharing what they have learned.

Q.19 What is Software Configuration Management?

It has the responsibility to control change. It identifies software configuration items and
the various versions of the software. It also audits the software configuration items to ensure
that they have been properly developed and that the reporting of changes is applied to the
configuration. Five tasks are done, namely: change identification, version control, change
control, configuration auditing, and reporting.

Change Identification is the process of identifying items that change throughout the software
development process. It specifically identifies software configuration items (SCIs) and uniquely
gives each an identifier and a name.

Version Control is the procedures and tools to manage the different version of configuration
objects that are created during the software development process. A version is a collection of
SCIs that can be used almost independently.

Change Control consists of human procedures and automated tools to provide a mechanism for
the control of change. It has the following subset of tasks: a change request is submitted and
evaluated to assess technical merit, potential side effects, and overall impact on other SCIs; a
change report is created to identify the final decision on the status and priority of the change; an
Engineering Change Order (ECO) is created that describes the change to be made, the
constraints, and the criteria for review and audit; the SCI is checked out for modification; it is
then checked in after the review and subject to version control.

Configuration Audit is the process of assessing an SCI for characteristics that are generally not
considered during a formal technical review. It answers the following questions: Was the change
specified in the ECO made? Were any additional modifications made? Did the formal technical
review assess the technical correctness of the work product? Were the software engineering
procedures followed? Were the SCM procedures followed? Have all appropriate SCIs been
updated?

Q. 20 What are the Features of Software Configuration Management?

Software Configuration Management is the process of tracking and controlling the software
changes. The basic features provided by any SCM tools are as follows:

Concurrency Management
Version Control
Synchronization

Let us go through all these features one by one.

Concurrency Management
When two or more tasks happen at the same time, it is known as concurrent operation. In the
context of SCM, concurrency means that the same file is being edited by multiple persons at the
same time. If concurrency is not managed properly with SCM tools, it may lead to very severe
problems.

Version Control
The second important feature provided by SCM tools is version control. An SCM tool archives
every change made to a file, so that it is possible to roll back to a previous version in case of
problems.
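
A minimal Python sketch of the archiving idea behind version control (purely illustrative; real
SCM tools store deltas and metadata far more efficiently):

class VersionedFile:
    def __init__(self):
        self.versions = []          # every checked-in change is archived

    def check_in(self, content):
        self.versions.append(content)
        return len(self.versions)   # the new version number

    def roll_back(self, version):
        # Restore an earlier version in case of problems.
        return self.versions[version - 1]

f = VersionedFile()
f.check_in("module v1")
f.check_in("module v2 with a bad change")
print(f.roll_back(1))               # recovers "module v1"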

Synchronization
This is another feature provided by SCM tools, where the user is allowed to check out one or
more files or an entire copy of the repository. The user then works on the required files and
checks the changes back into the repository; users can also update their local copy periodically
to stay current with the changes made by other team members. This is known as
synchronization.

Q. 21 Why do we need Software Configuration Management?

The main goal of SCM is to identify, control, maintain, and verify the versions of software
configuration items. An SCM solution tries to meet the following goals:
* Account for all the IT assets and configurations within the organization and its services
* Provide accurate information on configurations and their documentation to support all the other
Service Management processes
* Provide a sound basis for Incident Management, Problem Management, Change Management
and Release Management
* Verify the configuration records against the infrastructure and correct any exceptions

In addition, these are some of the common scenarios:


* Simultaneous Update: When two or more programmers work separately on the same program,
the last one to make the changes can easily destroy the other's work.
* Shared Code: Often, when a bug is fixed in code that is shared by several programmers, some
of them are not notified.
* Common Code: In large systems, when common program functions are modified, all the users
need to know. Without effective code management, there is no way to be sure of finding and
alerting every user.

Q. 22 State the benefits of Software Configuration Management.

Some benefits to the organization that are obtained through the use of Software Configuration
Management:

* Control: SCM offers the opportunity to review, approve, and incorporate changes to the
configuration item. It provides a standard method for traceable and controlled changes to the
baseline-version of a configuration item.
* Management: SCM provides a process of automatic identification and guidance of
configuration items in their whole life cycle.
* Cost Savings: Using SCM, cost savings can be realized throughout the whole life cycle of a
configuration. By actively defining, tracing, and auditing configuration items, failures can be
detected early, and corrective action can be taken. No confusion in task division can occur.
* Quality: Deliverables are checked continuously during the software development process. By
checking human work in an automated way, the level of compliance of deliverables with
predefined requirements can be maximized.

Q. 23 Explain Philosophy of clean room software engineering


The philosophy of clean room software engineering is to develop code increments that are right
the first time and to verify their correctness before testing, rather than relying on costly defect
removal processes. It involves the integrated use of software engineering modeling, program
verification, and statistical software quality assurance.
and design models are created using a box structure representation. A box encapsulates some
system component at a specific level of abstraction. Correctness verification is applied once the
box structure design is complete. Once correctness has been verified for each box structure,
statistical usage testing commences. This involves defining a set of usage scenarios and
determining the probability of use for each scenario. Random data is generated which conform to
the usage probabilities. The resulting error records are analyzed, and the reliability of the
software is determined for the software component.

Q. 24 What are distinguishing Characteristics of Clean room Techniques?


The clean room techniques: make extensive use of statistical quality control; verify the design
specification using mathematically based proofs of correctness; and rely heavily on statistical
use testing to uncover high-impact errors.
The mathematics used in formal software engineering methods relies heavily on set
theory and logic. In many safety critical or mission critical systems, failures can have a high cost.
Many safety critical systems can not be completely tested without endangering the lives of the
people they are designed to protect. Use of formal methods reduces the number of specification
errors dramatically, which means that the customer will encounter fewer errors when the product
is deployed.

Q. 25 Explain Clean Room Strategy.


The clean room approach makes use of a specialized version of the incremental software
model. A pipeline of software increments is developed by small independent software
engineering teams. As each increment is certified, it is integrated in the whole. Hence,
functionality of the system grows with time. The sequence of clean room tasks for each
increment is described below.

Overall system or product requirements are developed using the system engineering
methods discussed. Once functionality has been assigned to the software element of the system,
the pipeline of clean room increments is initiated.
The following tasks occur:
Increment planning. A project plan that adopts the incremental strategy is developed. The
functionality of each increment, its projected size, and a clean room development schedule are
created. Special care must be taken to ensure that certified increments will be integrated in a
timely manner.
Requirements gathering. Using techniques similar to those introduced, a more-detailed
description of customer-level requirements is developed.
Box structure specification. A specification method that makes use of box structures is used to
describe the functional specification. Conforming to the operational analysis principles
discussed, box structures isolate and separate the creative definition of behavior, data, and
procedures at each level of refinement.
Formal design. Using the box structure approach, clean room design is a natural and seamless
extension of specification. Although it is possible to make a clear distinction between the two
activities, specifications called black boxes are iteratively refined within an increment to become
analogous to architectural and component-level designs called state boxes and clear boxes,
respectively.
Correctness verification. The clean room team conducts a series of rigorous correctness
verification activities on the design and then the code. Verification begins with the highest-level
box structure and moves toward design detail and code. The first level of correctness verification
occurs by applying a set of correctness questions. If these do not demonstrate that the
specification is correct, more formal methods for verification are used.
Code generation, inspection, and verification. The box structure specifications, represented in
a specialized language, are translated into the appropriate programming language. Standard
walkthrough or inspection techniques are then used to ensure semantic conformance of the code
and box structures and syntactic correctness of the code. Then correctness verification is
conducted for the source code.
Statistical test planning. The projected usage of the software is analyzed and a suite of test
cases that exercise a probability distribution of usage are planned and designed.
Statistical use testing. Recalling that exhaustive testing of computer software is impossible, it is
always necessary to design a finite number of test cases. Statistical use techniques execute a
series of tests derived from a statistical sample of all possible program executions by all users
from a targeted population (a small sketch follows this list).
Certification. Once verification, inspection, and usage testing have been completed, the
increment is certified as ready for integration.
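
A minimal Python sketch of statistical use testing, drawing test runs from an assumed usage
probability distribution; the scenario names and probabilities are purely illustrative:

import random

# Usage scenarios and their estimated probabilities of use (summing to 1).
scenarios = {"open document": 0.50, "edit text": 0.35, "print": 0.15}

random.seed(7)   # make the sampled test sequence reproducible
names = list(scenarios)
weights = [scenarios[n] for n in names]

# Draw a statistical sample of executions conforming to the usage model.
for case in random.choices(names, weights=weights, k=10):
    print("run test scenario:", case)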

Q.26 What is component based software development?


Component-based software engineering (CBSE) has the potential advantage of delivering highly
reliable software products in a very short time. CBSE encourages the use of predictable
architectural patterns and standard software infrastructures that improve overall product quality.
CBSE encompasses two parallel engineering
activities, domain engineering and component based development. Domain engineering explores
the application domain with the specific intent of finding functional, behavioral, and data
components that are candidates for reuse and places them in reuse libraries. Component-based
development elicits requirements from the customer and selects an appropriate architectural style
to meet the objectives of the system to be built. The next steps are to select potential components
for reuse, qualify the components to be sure they fit the system architecture properly, adapt the
components if they must be modified to integrate them, and then integrate the components into
subsystems within the application. Custom components are engineered only when existing
components cannot be reused. Formal technical reviews and testing are applied to ensure the
quality of the analysis model and the design model. The resulting code is tested to uncover errors
in the newly developed software.

Q.27 What is Reverse Engineering?


The term reverse engineering has its origins in the hardware world. A company
disassembles a competitive hardware product in an effort to understand its competitor's design
and manufacturing "secrets." These secrets could be easily understood if the competitor's design
and manufacturing specifications were obtained. But these documents are proprietary and
unavailable to the company doing the reverse engineering. In essence, successful reverse
engineering derives one or more design and manufacturing specifications for a product by
examining actual specimens of the product.
Reverse engineering for software is quite similar. In most cases, however, the program to
be reverse engineered is not a competitor's. Rather, it is the company's own work. The "secrets"
to be understood are obscure because no specification was ever developed. Therefore, reverse
engineering for software is the process of analyzing a program in an effort to create a
representation of the program at a higher level of abstraction than source code. Reverse
engineering is a process of design recovery.
Reverse engineering tools extract data, architectural, and procedural design information
from an existing program.

Reverse engineering conjures an image of the "magic slot." We feed an unstructured,
undocumented source listing into the slot, and out the other end comes full documentation for the
computer program. Unfortunately, the magic slot doesn't exist.
Reverse engineering can extract design information from source code, but the abstraction
level, the completeness of the documentation, the degree to which tools and a human analyst
work together, and the directionality of the process are highly variable.
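As a toy illustration of the kind of extraction such tools perform, the sketch below recovers a
crude call graph (which function calls which) from a Python source listing using only the
standard ast module. It is a deliberately simplified example of design recovery, not a description
of any real reverse engineering product.

    import ast

    def extract_call_graph(source):
        """Recover a function -> called-functions map from raw source code.

        A toy form of design recovery: it lifts a structural view
        (who calls whom) out of an undocumented listing.
        """
        tree = ast.parse(source)
        graph = {}
        for node in ast.walk(tree):
            if isinstance(node, ast.FunctionDef):
                calls = {
                    n.func.id
                    for n in ast.walk(node)
                    if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)
                }
                graph[node.name] = sorted(calls)
        return graph

    legacy_listing = """
    def compute(x):
        return helper(x) + helper(x + 1)

    def helper(y):
        return y * 2
    """

    print(extract_call_graph(legacy_listing))
    # {'compute': ['helper'], 'helper': []}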

Q.28 Explain different levels in reverse engineering.

The abstraction level of a reverse engineering process, and of the tools used to effect it, refers to
the sophistication of the design information that can be extracted from source code. Ideally, the
abstraction level should be as high as possible. That is, the reverse engineering process should be
capable of deriving procedural design representations, program and data structure information,
data and control flow models, and entity-relationship models. As the abstraction level increases,
the software engineer is provided with information that allows easier understanding of the
program.

The completeness of a reverse engineering process refers to the level of detail that is provided at
an abstraction level. In most cases, the completeness decreases as the abstraction level increases.
For example, given a source code listing, it is relatively easy to develop a complete procedural
design representation. Simple data flow representations may also be derived, but it is far more
difficult to develop a complete set of data flow diagrams or entity-relationship models.

Completeness improves in direct proportion to the amount of analysis performed by the person
doing reverse engineering.

Interactivity refers to the degree to which the human is "integrated" with automated tools to
create an effective reverse engineering process. In most cases, as the abstraction level increases,
interactivity must increase or completeness will suffer.

If the directionality of the reverse engineering process is one-way, all information extracted from
the source code is provided to the software engineer, who can then use it during any maintenance
activity. If directionality is two-way, the information is fed to a reengineering tool that attempts
to restructure or regenerate the old program.
Before reverse engineering activities can commence, unstructured ("dirty") source code is
restructured so that it contains only structured programming constructs. This makes the
source code easier to read and provides the basis for all the subsequent reverse engineering
activities.
The core of reverse engineering is an activity called extract abstractions. The engineer
must evaluate the old program and, from the source code, extract a meaningful specification of
the processing that is performed, the user interface that is applied, and the program data
structures or database that is used.

Chapter 5 Software Quality Management & Estimation

Q.1 What are process assessment standards?

Common process guidelines are briefly examined below.

Capability Maturity Model Integration (CMMI). Formulated by the Software Engineering
Institute (SEI), it is a process meta-model based on a set of system and software engineering
capabilities that must exist within an organization as the organization reaches different levels of
capability and maturity in its development process.

ISO 9000:2000 for Software. It is a generic standard that applies to any organization that
wants to improve the overall quality of the products, systems or services that it provides.

Software Process Improvement and Capability Determination (SPICE). It is a standard
that defines a set of requirements for software process assessment. The intent of the standard is
to assist organizations in developing an objective evaluation of the efficacy of any defined
software process.

Q.2 What is quality?

Quality is the totality of characteristics of an entity that bear on its ability to satisfy stated and
implied needs. The characteristics or attributes must be measurable so that they can be compared
to known standards. Or

Quality is a characteristic or attribute of something. As an attribute of an item, quality refers
to measurable characteristics: things we are able to compare to known standards such as length,
color, electrical properties, and malleability. However, software, largely an intellectual entity, is
more challenging to characterize than physical objects. When we examine an item based on its
measurable characteristics, two kinds of quality may be encountered: quality of design and
quality of conformance.

Quality of design refers to the characteristics that designers specify for an item. The grade of
materials, tolerances, and performance specifications all contribute to the quality of design. As
higher-grade materials are used, tighter tolerances and greater levels of performance are
specified, the design quality of a product increases, if the product is manufactured according to
specifications.
Quality of conformance is the degree to which the design specifications are followed during
manufacturing. Again, the greater the degree of conformance, the higher is the level of quality of
conformance. In software development, quality of design encompasses requirements,
specifications, and the design of the system. Quality of conformance is an issue focused
primarily on implementation. If the implementation follows the design and the resulting system
meets its requirements and performance goals, conformance quality is high.

Q.3 What is Quality Control?

Quality control involves the series of inspections, reviews, and tests used throughout the
software process to ensure each work product meets the requirements placed upon it. Quality
control includes a feedback loop to the process that created the work product. The combination
of measurement and feedback allows us to tune the process when the work products created fail
to meet their specifications. This approach views quality control as part of the manufacturing
process. Quality control activities may be fully automated, entirely manual, or a combination of
automated tools and human interaction. A key concept of quality control is that all work products
have defined, measurable specifications to which we may compare the output of each process.
The feedback loop is essential to minimize the defects produced.
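As a toy illustration of this feedback idea, the sketch below compares one measured attribute of
a work product, defect density, against a defined specification. The metric and the threshold are
invented for illustration only.

    # Hypothetical specification: a module's defect density (defects per
    # KLOC) must not exceed an agreed threshold before it leaves review.
    SPEC_MAX_DEFECT_DENSITY = 2.0  # invented threshold for illustration

    def quality_control_check(defects_found, kloc):
        """Compare a measured work product against its specification."""
        density = defects_found / kloc
        if density > SPEC_MAX_DEFECT_DENSITY:
            # Feedback loop: the result goes back to the process that
            # produced the work product so the process can be tuned.
            print(f"FAIL: {density:.2f} defects/KLOC exceeds spec; rework needed")
            return False
        print(f"PASS: {density:.2f} defects/KLOC within spec")
        return True

    quality_control_check(defects_found=9, kloc=3.0)   # fails the spec
    quality_control_check(defects_found=3, kloc=3.0)   # passes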

Q.4 What is Quality Assurance?


Quality assurance consists of the auditing and reporting functions of management. The goal of
quality assurance is to provide management with the data necessary to be informed about product
quality, thereby gaining insight and confidence that product quality is meeting its goals. Of
course, if the data provided through quality assurance identify problems, it is management's
responsibility to address the problems and apply the necessary resources to resolve quality
issues.

Q. 5 What do you mean by Cost of Quality?


The cost of quality includes all costs incurred in the pursuit of quality or in performing quality-
related activities. Cost of quality studies are conducted to provide a baseline for the current cost
of quality, identify opportunities for reducing the cost of quality, and provide a normalized basis
of comparison. The basis of normalization is almost always dollars. Once we have normalized
quality costs on a dollar basis, we have the necessary data to evaluate where the opportunities lie
to improve our processes. Furthermore, we can evaluate the effect of changes in dollar-based
terms.
Quality costs may be divided into costs associated with prevention, appraisal, and failure.

Prevention costs include


quality planning
formal technical reviews
test equipment
training

Appraisal costs include activities to gain insight into product condition the first time through
each process. Examples of appraisal costs include
In-process and interprocess inspection
Equipment calibration and maintenance
testing

Failure costs are those that would disappear if no defects appeared before shipping a product to
customers. Failure costs may be subdivided into internal failure costs and external failure costs.
Internal failure costs are incurred when we detect a defect in our product prior to shipment.
Internal failure costs include
rework
repair
failure mode analysis
External failure costs are associated with defects found after the product has been shipped to the
customer. Examples of external failure costs are
complaint resolution
product return and replacement
help line support
warranty work
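A minimal sketch of dollar-based normalization follows; all the cost figures are invented for
illustration. Once the three categories are expressed in the same dollar basis, it becomes obvious
where improvement effort would pay off.

    # Illustrative (invented) quality costs for one release, in dollars.
    costs = {
        "prevention": {"quality planning": 8000, "reviews": 12000, "training": 5000},
        "appraisal":  {"inspection": 15000, "calibration": 3000, "testing": 30000},
        "failure":    {"rework": 25000, "repair": 10000, "warranty work": 7000},
    }

    total = sum(sum(items.values()) for items in costs.values())
    print(f"Total cost of quality: ${total:,}")
    for category, items in costs.items():
        subtotal = sum(items.values())
        # Normalizing on a dollar basis shows where improvement money should go.
        print(f"{category:>10}: ${subtotal:,} ({subtotal / total:.0%})")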

Q. 6 What is Software quality assurance?


Software quality assurance is composed of a variety of tasks associated with two different
constituencies: the software engineers who do technical work and an SQA group that has
responsibility for quality assurance planning, oversight, record keeping, analysis, and reporting.
Software engineers address quality by applying solid technical methods and measures,
conducting formal technical reviews, and performing well-planned software testing.

Q.7 What is SQA plan?

An SQA plan provides a road map for instituting software quality assurance on a project. The
plan is developed during project planning and is reviewed by all interested parties. Quality
assurance activities performed by the software engineering team and the SQA group are
governed by the plan. The plan identifies
evaluations to be performed
audits and reviews to be performed
standards that are applicable to the project
procedures for error reporting and tracking
documents to be produced by the SQA group
amount of feedback provided to the software project team

Q. 8 What are SQA activities?

The Software Engineering Institute recommends a set of SQA activities that address
quality assurance planning, oversight, record keeping, analysis, and reporting. These activities
are performed by an independent SQA group that:

Participates in the development of the project's software process description.


The software team selects a process for the work to be performed. The SQA group reviews the
process description for compliance with organizational policy, internal software standards,
externally imposed standards (e.g., ISO-9001), and other parts of the software project plan.
Reviews software engineering activities to verify compliance with the defined software
process. The SQA group identifies, documents, and tracks deviations from the process and
verifies that corrections have been made.
Audits designated software work products to verify compliance with those defined as part
of the software process. The SQA group reviews selected work products; identifies, documents,
and tracks deviations; verifies that corrections have been made; and periodically reports the
results of its work to the project manager.

Ensures that deviations in software work and work products are documented and handled
according to a documented procedure. Deviations may be encountered in the project plan,
process description, applicable standards, or technical work products.
Records any noncompliance and reports to senior management. Noncompliance items are
tracked until they are resolved.

In addition to these activities, the SQA group coordinates the control and management of
change and helps to collect and analyze software metrics.

Q. 9 State the objectives of Formal Technical Reviews.

A formal technical review is a software quality assurance activity performed by software


engineers (and others). The objectives of the FTR are
(1) to uncover errors in function, logic, or implementation for any representation of the software;
(2) to verify that the software under review meets its requirements;
(3) to ensure that the software has been represented according to predefined standards;
(4) to achieve software that is developed in a uniform manner; and
(5) to make projects more manageable.

Q. 10 What are Formal Technical Review Guidelines for improving quality?

Guidelines for the conduct of formal technical reviews must be established in advance,
distributed to all reviewers, agreed upon, and then followed. A review that is uncontrolled can
often be worse than no review at all. The following represents a minimum set of guidelines for
formal technical reviews:
1. Review the product, not the producer. An FTR involves people and egos. Conducted properly,
the FTR should leave all participants with a warm feeling of accomplishment. Conducted
improperly, the FTR can take on the aura of an inquisition. Errors should be pointed out gently;
the tone of the meeting should be loose and constructive; the intent should not be to embarrass or
belittle. The review leader should conduct the review meeting to ensure that the proper tone and
attitude are maintained and should immediately halt a review that has gotten out of control.
2. Set an agenda and maintain it. One of the key maladies of meetings of all types is drift. An
FTR must be kept on track and on schedule. The review leader is chartered with the
responsibility for maintaining the meeting schedule and should not be afraid to nudge people
when drift sets in.
3. Limit debate and rebuttal. When an issue is raised by a reviewer, there may not be universal
agreement on its impact. Rather than spending time debating the question, the issue should be
recorded for further discussion off-line.
4. Enunciate problem areas, but don't attempt to solve every problem noted. A review is not a
problem-solving session. The solution of a problem can often be accomplished by the producer
alone or with the help of only one other individual. Problem solving should be postponed until
after the review meeting.
5. Take written notes. It is sometimes a good idea for the recorder to make notes on a wall board,
so that wording and priorities can be assessed by other reviewers as information is recorded.
6. Limit the number of participants and insist upon advance preparation. Two heads are better
than one, but 14 are not necessarily better than 4. Keep the number of people involved to the
necessary minimum. However, all review team members must prepare in advance. Written
comments should be solicited by the review leader.
7. Develop a checklist for each product that is likely to be reviewed. A checklist helps the review
leader to structure the FTR meeting and helps each reviewer to focus on important issues.
Checklists should be developed for analysis, design, code, and even test documents.
8. Allocate resources and schedule time for FTRs. For reviews to be effective, they should be
scheduled as a task during the software engineering process. In addition, time should be
scheduled for the inevitable modifications that will occur as the result of an FTR.
9. Conduct meaningful training for all reviewers. To be effective all review participants should
receive some formal training. The training should stress both process-related issues and the
human psychological side of reviews. Freedman and Weinberg estimate a one-month learning
curve for every 20 people who are to participate effectively in reviews.
10. Review your early reviews. Debriefing can be beneficial in uncovering problems with the
review process itself. The very first product to be reviewed should be the review guidelines
themselves. Because many variables have an impact on a successful review, a software
organization should experiment to determine what approach works best in a local context. Porter
and his colleagues provide excellent guidance for this type of experimentation.

Q.11 Explain ISO 9001 quality standard.

ISO 9001 is the quality assurance standard that applies to software engineering. The standard
contains 20 requirements that must be present for an effective quality assurance system. Because
the ISO 9001 standard is applicable to all engineering disciplines, a special set of ISO guidelines
(ISO 9000-3) has been developed to help interpret the standard for use in the software process.
The 20 requirements delineated by ISO 9001 address the following topics:
1. Management responsibility
2. Quality system
3. Contract review
4. Design control
5. Document and data control
6. Purchasing
7. Control of customer supplied product
8. Product identification and traceability
9. Process control
10. Inspection and testing
11. Control of inspection, measuring, and test equipment
12. Inspection and test status
13. Control of nonconforming product
14. Corrective and preventive action
15. Handling, storage, packaging, preservation, and delivery
16. Control of quality records
17. Internal quality audits
18. Training
19. Servicing
20. Statistical techniques

Q.12 Explain six sigma quality standards.

Software Six Sigma is a strategy to achieve and sustain continuous improvement in the software
development process and in quality management. It uses data and statistical analysis to measure
and improve a company's performance by eliminating defects in manufacturing and
service-related processes.

ATTRIBUTES OF SIX SIGMA


- genuine metric data.
- accurate planning.
- real time analysis and decision support by the use of statistical tools.
- high quality product.
- software improvement costs and benefits.

STEPS IN SIX SIGMA METHODOLOGY

- Customer requirements and project goals are defined via well-defined methods.
- Quality performance is determined by measuring existing process and its output.
- Analyzing the defect metrics.
- Process improvement is done by eliminating the root causes of defects.
- Process control to ensure changes made in future will not introduce the cause of defects again.

These steps are referred to as the DMAIC (define, measure, analyze, improve and control)
method.

- Design the process to avoid the root causes of defects and to meet customer requirements.
- Verify that the process model will avoid defects and meet customer requirements.
This variation is called the DMADV (define, measure, analyze, design, and verify) method.

Or

Design for Six Sigma (DFSS) is a separate and emerging business-process management
methodology related to traditional Six Sigma. While the tools and order used in Six Sigma
require a process to be in place and functioning, DFSS has the objective of determining the needs
of customers and the business, and driving those needs into the product solution so created.
DFSS is relevant to the complex system/product synthesis phase, especially in the context of
unprecedented system development. It is process generation in contrast with process
improvement.

DMADV (Define, Measure, Analyze, Design, Verify) is sometimes referred to synonymously as
DFSS. The traditional DMAIC (Define, Measure, Analyze, Improve, Control) Six Sigma process,
as it is usually practiced, focuses on evolutionary and continuous improvement of manufacturing
or service processes, and usually occurs after initial system or product design and development
have been largely completed. DMAIC Six Sigma as practiced is usually consumed with solving
existing manufacturing or service process problems
and removal of the defects and variation associated with defects. On the other hand, DFSS (or
DMADV) strives to generate a new process where none existed, or where an existing process is
deemed to be inadequate and in need of replacement. DFSS aims to create a process with the end
in mind of optimally building the efficiencies of Six Sigma methodology into the process before
implementation; traditional Six Sigma seeks continuous improvement after a process already
exists.

Q.13 Explain McCall's metrics contributing to software quality factors.

McCall's metrics contributing to software quality factors:

Auditability: The ease with which conformance to standards can be checked
Accuracy: The precision of computations and control
Communication commonality: The degree to which standard interfaces, protocols and bandwidth are used
Completeness: The degree to which full implementation of the required function has been achieved
Conciseness: The compactness of the program in terms of lines of code
Consistency: The use of uniform design and documentation techniques throughout the software development project
Data commonality: The use of standard data structures and types throughout the program
Error tolerance: The damage that occurs when a program encounters an error
Execution efficiency: The run-time performance of the program
Expandability: The degree to which architectural, data or procedural design can be extended
Generality: The breadth of potential application of program components
Hardware independence: The degree to which the software is decoupled from the hardware on which it operates
Instrumentation: The degree to which the program monitors its own operations and identifies errors that do occur
Modularity: The functional independence of program components
Operability: The ease of operation of the program
Security: The availability of mechanisms that control or protect programs and data
Self-documentation: The degree to which the source code provides meaningful documentation
Simplicity: The degree to which a program can be understood without difficulty
Software system independence: The degree to which the program is independent of nonstandard programming language features, operating system characteristics and other environmental constraints
Traceability: The ability to trace a design representation or actual program component back to requirements
Training: The degree to which the software assists in enabling new users to apply the system

OR
1. Correctness: The extent to which a program satisfies its specifications.
2. Reliability: The extent to which a program can be expected to perform its intended function with required precision.
3. Efficiency: The amount of computing resources and code required by a program to perform its function.
4. Integrity: Extent to which access to software or data by unauthorized persons can be controlled.
5. Usability: The ease with which a user is able to learn and operate the system.
6. Maintainability: Effort required to locate, fix and test an error.
7. Flexibility: Effort required to modify an already operational program.
8. Testability: Effort required to test a program so that it performs its intended function.
9. Portability: Effort required to port an application from one system to another.
10. Reusability: Extent to which a program or sub-program can be reused in other applications.
11. Interoperability: Effort required to couple one system to another.

Q.14 Explain Hewlett Packard developed quality factors.

Functionality - is assessed by evaluating the features and capabilities of the delivered program and the overall security of the system.
Usability - is assessed by considering human factors, overall aesthetics, look and feel, and ease of learning.
Reliability - is assessed by measuring the frequency of failure, accuracy of output, the mean-time-to-failure (MTTF), and the ability to recover from failure.
Performance - is assessed by processing speed, response time, resource utilization, throughput and efficiency.
Supportability - is assessed by the ability to extend the program (extensibility), adaptability, serviceability and maintainability.

Q.15 Explain the ISO 9126 standard key quality attributes.

Functionality - the degree to which the software satisfies stated needs.
Reliability - the amount of time the software is up and running.
Usability - the degree to which the software is easy to use.
Efficiency - the degree to which the software makes optimum utilization of resources.
Maintainability - the ease with which the software can be modified.
Portability - the ease with which the software can be migrated from one environment to another.

Q. 16 What are the Observations on Estimation?

Estimation carries inherent risk and it is this risk that leads to uncertainty.
Project complexity has a strong effect on uncertainty that is inherent in planning.
Complexity, however, is a relative measure that is affected by familiarity with past effort. A
real-time application might be perceived as exceedingly complex to a software group that has
previously developed only batch applications. The same real-time application might be perceived
as run-of-the-mill for a software group that has been heavily involved in high-speed process
control. A number of quantitative software complexity measures have been proposed. Such
measures are applied at the design or code level and are therefore difficult to use during software
planning (before a design and code exist). However, other more subjective assessments of
complexity can be established early in the planning process.

Project size is another important factor that can affect the accuracy of estimates. As size
increases, the interdependency among various elements of the software grows rapidly. Problem
decomposition, an important approach to estimating, becomes more difficult because
decomposed elements may still be formidable.
The degree of structural uncertainty also has an effect on estimation risk. In this context,
structure refers to the degree to which requirements have been solidified, the ease with which
function can be compartmentalized, and the hierarchical nature of information that must be
processed.
The availability of historical information also determines estimation risk.
When comprehensive software metrics are available for past projects, estimates can be made
with greater assurance; schedules can be established to avoid past difficulties, and overall risk is
reduced.

To achieve reliable cost and effort estimates, a number of options arise.


1. Delay estimation until late in the project (obviously, we can achieve 100% accurate estimates
after the project is completed)
2. Base estimates on similar projects that have already been completed.
3. Use relatively simple decomposition techniques to generate project cost and effort estimates.
4. Use one or more empirical models for software cost and effort estimation.

Q.17 Explain Project Estimation - Decomposition techniques based on software sizing.

In the estimation of software cost and effort, many variables, including human, technical,
environmental, and political factors, can affect the final cost and the effort applied to software.
Estimation is therefore a complex process. Decomposing the problem and re-characterizing it
into a set of smaller problems is a useful approach to software project estimation.

The decomposition approach has two points of view:

- problem decomposition
- process decomposition

Before cost and effort are estimated, the size of the software must itself be estimated. The size of
the software to be built can be estimated using lines of code, which is a direct measure, or
through function points, which is an indirect measure. Estimating software size is a major
challenge; size is the quantifiable outcome of the software project.

Approaches to sizing problems are:

- Fuzzy logic sizing: The planner identifies the type of application, establishes its magnitude on a
qualitative scale, and then refines the magnitude within the original range.
- Function point sizing: Estimates of information domain characteristics are developed.
- Standard component sizing: The number of occurrences of each standard component
is estimated and project data is used to calculate the delivered size per standard component.
- Change sizing: Suppose existing software is modified in some way. In this approach, the
number and type of modifications are estimated.

Q.18 Explain COCOMO Model of estimation.

One very widely used algorithmic software cost model is the Constructive Cost Model
(COCOMO). The basic COCOMO model has a very simple form:

MAN-MONTHS = K1 * (Thousands of Delivered Source Instructions)^K2

Where K1 and K2 are two parameters dependent on the application and development
environment.
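For the basic model, Boehm published values of these parameters for three development modes
(organic, semi-detached, and embedded). The sketch below uses those commonly quoted
coefficients; the 32-KDSI project size is just an example input.

    # Basic COCOMO effort coefficients from Boehm (1981); effort is in
    # person-months, size in KDSI (thousands of delivered source instructions).
    COEFFICIENTS = {
        "organic":       (2.4, 1.05),
        "semi-detached": (3.0, 1.12),
        "embedded":      (3.6, 1.20),
    }

    def basic_cocomo_effort(kdsi, mode="organic"):
        """Effort = K1 * (KDSI) ** K2 for the chosen development mode."""
        k1, k2 = COEFFICIENTS[mode]
        return k1 * kdsi ** k2

    for mode in COEFFICIENTS:
        print(f"{mode:>13}: {basic_cocomo_effort(32, mode):.1f} person-months")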

Estimates from the basic COCOMO model can be made more accurate by taking into account
other factors concerning the required characteristics of the software to be developed, the
qualification and experience of the development team, and the software development
environment. Some of these factors are:

1. Complexity of the software
2. Required reliability
3. Size of data base
4. Required efficiency (memory and execution time)
5. Analyst and programmer capability
6. Experience of team in the application area
7. Experience of team with the programming language and computer
8. Use of tools and software engineering practices

Many of these factors affect the person months required by an order of magnitude or more.
COCOMO assumes that the system and software requirements have already been defined, and
that these requirements are stable. This is often not the case.

The COCOMO model is a regression model, based on the analysis of 63 selected projects. The
primary input is KDSI. The problems are:

1. In the early phases of the system life cycle, the size is estimated with great uncertainty, so an
accurate cost estimate cannot be arrived at.
2. The cost estimation equation is derived from the analysis of 63 selected projects, so it usually
has problems outside of its particular environment. For this reason, recalibration is necessary.

Q.19 What are the Non-algorithmic Methods of software cost estimation?

Analogy costing: This method requires one or more completed projects that are similar to the
new project and derives the estimation through reasoning by analogy using the actual costs of
previous projects. Estimation by analogy can be done either at the total project level or at
subsystem level. The total project level has the advantage that all cost components of the system
will be considered while the subsystem level has the advantage of providing a more detailed
assessment of the similarities and differences between the new project and the completed
projects. The strength of this method is that the estimate is based on actual project experience.
However, it is not clear to what extent the previous project is actually representative of the
constraints, environment and functions to be performed by the new system.

Expert judgment: This method involves consulting one or more experts. The experts provide
estimates using their own methods and experience. Expert-consensus mechanisms such as Delphi
technique or PERT will be used to resolve the inconsistencies in the estimates. The Delphi
technique works as follows:
1) The coordinator presents each expert with a specification and a form to record estimates.
2) Each expert fills in the form individually (without discussing with others) and is allowed
to ask the coordinator questions.
3) The coordinator prepares a summary of all estimates from the experts (including mean or
median) on a form requesting another iteration of the experts' estimates and the rationale
for the estimates.
4) Repeat steps 2)-3) as many rounds as appropriate.
A modification of the Delphi technique proposed by Boehm and Farquhar seems to be
more effective: Before the estimation, a group meeting involving the coordinator and experts is
arranged to discuss the estimation issues. In step 3), the experts do not need to give any rationale
for the estimates. Instead, after each round of estimation, the coordinator calls a meeting to have
experts discussing those points where their estimates varied widely.

Parkinson: Using Parkinson's principle work expands to fill the available volume, the cost is
determined (not estimated) by the available resources rather than based on an objective
assessment. If the software has to be delivered in 12 months and 5 people are available, the effort
is estimated to be 60 person-months. Although it sometimes gives good estimation, this method
is not recommended as it may provide very unrealistic estimates. Also, this method does not
promote good software engineering practice.

Price-to-win: The software cost is estimated to be the best price to win the project. The
estimation is based on the customer's budget instead of the software functionality. For example,
if a reasonable estimation for a project costs 100 person-months but the customer can only afford
60 person-months, it is common that the estimator is asked to modify the estimation to fit 60
person months effort in order to win the project. This is again not a good practice since it is very
likely to cause a serious delay in delivery or force the development team to work overtime.

Bottom-up: In this approach, each component of the software system is separately estimated and
the results aggregated to produce an estimate for the overall system. The requirement for this
approach is that an initial design must be in place that indicates how the system is decomposed
into different components.

Top-down: This approach is the opposite of the bottom-up method. An overall cost estimate for
the system is derived from global properties, using either algorithmic or non-algorithmic
methods. The total cost can then be split up among the various components. This approach is
more suitable for cost estimation at the early stage.

Q. 20 How Software sizing is considered while determining the cost of software?

The software size is the most important factor that affects the software cost. The line of
code and function point are the most popular metrics.

Line of Code: This is the number of lines of the delivered source code of the software, excluding
comments and blank lines; it is commonly known as LOC. Although LOC is
programming-language dependent, it is the most widely used software size metric. Most models
relate this measurement to the software cost. However, the exact LOC can only be obtained after
the project has completed. Estimating the code size of a program before it is actually built is
almost as hard as estimating the cost of the program.
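A minimal counter for physical LOC follows. It assumes whole-line comments introduced by a
single prefix, which is a simplification of what real code counters handle (block comments,
mixed code-and-comment lines, and so on).

    def count_loc(source, comment_prefix="#"):
        """Count delivered lines of code, excluding blank and comment lines."""
        count = 0
        for line in source.splitlines():
            stripped = line.strip()
            if stripped and not stripped.startswith(comment_prefix):
                count += 1
        return count

    sample = """
    # configuration loader
    import json

    def load(path):
        # read and parse
        with open(path) as f:
            return json.load(f)
    """
    print(count_loc(sample))  # 4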

A typical method for estimating the code size is to use experts' judgment together with a
technique called PERT. It involves experts' judgment of three possible code sizes: the lowest
possible size, the most likely size, and the highest possible size. PERT can also be used for
individual components to obtain an estimate of the software system by summing up the estimates
of all the components, as in the sketch below.
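A small sketch of the usual PERT (beta-distribution) weighting follows; the three-point
judgments for the two components are invented for illustration.

    def pert_size(optimistic, most_likely, pessimistic):
        """Standard PERT expected value and standard deviation."""
        expected = (optimistic + 4 * most_likely + pessimistic) / 6
        std_dev = (pessimistic - optimistic) / 6
        return expected, std_dev

    # Invented three-point expert judgments (in LOC) for two components.
    components = [(4200, 5000, 6800), (2000, 2600, 3500)]
    total = sum(pert_size(*c)[0] for c in components)
    print(f"Expected system size: {total:.0f} LOC")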

Function points: This is a measurement based on the functionality of the program and was first
introduced by Albrecht. The total number of function points depends on the counts of distinct
(in terms of format or processing logic) types in the following five classes:
1. User-input types: data or control user-input types
2. User-output types: output data types to the user that leaves the system
3. Inquiry types: interactive inputs requiring a response
4. Internal file types: files (logical groups of information) that are used and shared inside the
system
5. External file types: files that are passed or shared between the system and other systems

Each of these types is individually assigned one of three complexity levels of {1 = simple, 2 =
medium, 3 = complex} and given a weighting value that varies from 3 (for simple input) to 15
(for complex internal files).
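The sketch below computes an unadjusted function point total using the standard IFPUG weight
table implied by the range quoted above (3 for a simple input up to 15 for a complex internal
file); the counts themselves are invented for illustration.

    # Standard IFPUG weights per class: (simple, medium, complex).
    WEIGHTS = {
        "user_input":    (3, 4, 6),
        "user_output":   (4, 5, 7),
        "inquiry":       (3, 4, 6),
        "internal_file": (7, 10, 15),
        "external_file": (5, 7, 10),
    }

    def unadjusted_fp(counts):
        """counts maps class -> (n_simple, n_medium, n_complex)."""
        return sum(
            n * w
            for cls, ns in counts.items()
            for n, w in zip(ns, WEIGHTS[cls])
        )

    # Invented counts for a small application.
    counts = {
        "user_input":    (6, 2, 0),
        "user_output":   (4, 1, 1),
        "inquiry":       (3, 0, 0),
        "internal_file": (1, 1, 0),
        "external_file": (2, 0, 0),
    }
    print(unadjusted_fp(counts))  # 90 unadjusted function points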

Extensions of function point: Feature point extends the function points to include algorithms
as a new class. An algorithm is defined as the set of rules which must be completely expressed to
solve a significant computational problem. For example, a square root routine can be considered
as an algorithm. Each algorithm used is given a weight ranging from 1 (elementary) to
10 (sophisticated algorithms) and the feature point is the weighted sum of the algorithms plus the
function points. This measurement is especially useful for systems with few input/output and
high algorithmic complexity, such as mathematical software, discrete simulations, and military
applications.

Another extension of function points is full function point (FFP) for measuring real-time
applications, by also taking into consideration the control aspect of such applications. FFP
introduces two new control data function types and four new control transactional function types.
A detailed description of this measurement and its counting procedure can be found in the FFP
literature.

Object points: While feature point and FFP extend the function point, the object point measures
the size from a different dimension. This measurement is based on the number and complexity of
the following objects: screens, reports and 3GL components. Each of these objects is counted
and given a weight ranging from 1 (simple screen) to 10 (3GL component) and the object point is
the weighted sum of all these objects. This is a relatively new measurement and it has not been
very popular. But because it is easy to use at the early phase of the development cycle and also
measures software size reasonably well, this measurement has been used in major estimation
models such as COCOMO II for cost estimation.
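A small sketch of the object point computation follows, using the weights commonly quoted for
the COCOMO II application-composition model; the object counts are invented for illustration.

    # Object point weights (COCOMO II application-composition model);
    # complexity levels are (simple, medium, difficult).
    WEIGHTS = {
        "screen":        (1, 2, 3),
        "report":        (2, 5, 8),
        "3gl_component": (10, 10, 10),  # 3GL modules always weigh 10
    }

    def object_points(counts):
        """counts maps object kind -> (n_simple, n_medium, n_difficult)."""
        return sum(
            n * w
            for kind, ns in counts.items()
            for n, w in zip(ns, WEIGHTS[kind])
        )

    # Invented object inventory for a customization project.
    counts = {"screen": (5, 3, 1), "report": (2, 2, 0), "3gl_component": (1, 0, 0)}
    print(object_points(counts))  # (5+6+3) + (4+10) + 10 = 38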

Q.21 Explain various software cost deciding techniques.

There are many models for software estimation available and prevalent in the industry.
Researchers have been working on formal estimation techniques since the 1960s. Early work in
estimation was typically based on regression analysis or mathematical models from other
domains; work during the 1970s and 1980s derived models from historical data of various
software projects. Among the many estimation models, expert estimation, COCOMO, Function
Point Analysis and derivatives of function points such as Use Case Points and Object Points are
most commonly used. While Lines of Code (LOC) is the most commonly used size measure for
3GL programming and estimation of procedural languages, IFPUG FPA, originally invented by
Allan Albrecht at IBM, has been adopted by most in the industry as an alternative to LOC for
sizing the development and enhancement of business applications. FPA provides a measure of
functionality based on the end-user view of application software functionality. Some of the
commonly used estimation techniques are as follows:

Lines of Code (LOC): A formal method to measure size by counting the number of lines
of code. Source Lines of Code (SLOC) has two variants: physical SLOC and logical
SLOC. Since the two measures can vary significantly, care must be taken when
comparing results from two different projects, and clear guidelines must be laid down
for the organization.
IFPUG FPA: A formal method to measure the size of business applications. It introduces
a complexity factor for size, defined as a function of inputs, outputs, queries, external
interface files and internal logical files.
Mark II FPA: Proposed and developed by Charles Symons; useful for measuring size for
functionality in real-time systems where transactions have embedded data.

COSMIC Full Function Point (FFP): Proposed in 1999 and compliant with ISO/IEC 14143.
Applicable for estimating business applications that have data-rich processing, where
complexity is determined by the capability to handle large chunks of data, and real-time
applications, where functionality is expressed in terms of logic and algorithms.
Quick Function Point (QFP): Derived from FPA and uses expert judgment. Mostly
useful for arriving at a ballpark estimate for budgetary and marketing purposes or
where a go/no-go decision is required during the project selection process.
Object Points: Best suited for estimating customizations. Based on count of raw
objects, complexity of each object and weighted points.
COCOMO 2.0: Based on COCOMO 81, which was developed by Barry Boehm. The
model is motivated by software reuse, application generators, economies or
diseconomies of scale and process maturity, and helps estimate effort for sizes
calculated in terms of SLOC, FPA, Mark II FP or any other method.
Predictive Object Points: Tuned towards estimation of the object oriented software
projects. Calculated based on weighted methods per class, count of top level classes,
average number of children, and depth of inheritance.
Estimation by Analogy: Cost of project is computed by comparing the project to a
similar project in the same domain. The estimate is accurate if similar project data is
available.

