Notes On
System Analysis and Design
For Master of Computer Application
Semester-I
According to University of Mumbai Syllabus
By: Pravin Prabhakar Chalke
Pravin_chalke241088@yahoo.co.in
Index
1. Introduction
a. Systems & computer based systems, types of information system
b. System analysis & design
c. Role, task & Attribute of the system analyst
2. Approaches to system development
a. SDLC
b. Explanation of the phases
c. Different models their advantages and disadvantages
i. Waterfall approach
ii. Iterative approach
iii. Extreme programming
iv. RAD model
v. Unified process
vi. Evolutionary software process model
1. Incremental model
2. Spiral model
vii. Concurrent development model
3. Analysis: investigating system requirements
a. Activities of the analysis phase
b. Fact finding methods
i. Review existing reports, forms and procedure descriptions
ii. Conduct interviews
iii. Observe & document business processes
iv. Build prototypes
v. Questionnaires
vi. Conduct JAD sessions
c. Validate the requirements
i. Structured walkthroughs
4. Feasibility Analysis
a. Feasibility study and cost estimates
b. Cost benefit analysis
c. Identification of list of deliverables
5. Modeling system requirements
a. Data flow diagrams logical and physical
b. Structured English
c. Decision tables
d. Decision trees
e. Entity relationship diagram
6. Design
a. Design phase activities
Chapter 1- Introduction
System
• A system is a group of elements working together to achieve a common goal.
• These elements are related to each other in the work that they carry out.
• They communicate with each other in order to coordinate and control the delivery of the total work of the system.
Further, a system may itself be part of a larger system.
Characteristic of System
As per the definition of a system, we have the following characteristics:
• The system works to achieve a common goal, e.g., the goal of a traffic system is to provide safe and speedy use of the road to all road users equally and fairly.
• The system has several components working together, each contributing its respective part to meet the overall objective of the system. Only when they all work together is the system said to be working, e.g., a traffic system has the vehicles, drivers, signals, traffic police etc. as elements of the system.
• If any of the components is missing or not working as desired, it affects the performance of the system as a whole. Thus one can identify all the necessary tasks that all the components together are required to perform, so that the entire system meets its goal.
• The system components communicate with each other, and you will be able to identify what they communicate (DATA), when (EVENT), and what the purpose of the communication is, such as coordinating the work or controlling the work.
We speak of political systems and educational systems, of avionics systems and manufacturing
systems, of banking systems and subway systems. The word system by itself tells us little; we rely on the adjective that precedes it to understand the context in which the word is used. Webster's Dictionary
defines system in the following way:
1. a set or arrangement of things so related as to form a unity or organic whole;
2. a set of facts, principles, rules, etc., classified and arranged in an orderly form so as to
show a logical plan linking the various parts;
3. a method or plan of classification or arrangement;
4. an established way of doing something; method; procedure . . .
Five additional definitions are provided in the dictionary, yet no precise synonym is suggested. System is a special word. Borrowing from Webster's definition, we define a computer-based system as: a set or arrangement of elements that are organized to accomplish some predefined goal by processing information.
The goal may be to support some business function or to develop a product that can
be sold to generate business revenue. To accomplish the goal, a computer-based system
makes use of a variety of system elements:
• Software. Computer programs, data structures, and related documentation that serve to
effect the logical method, procedure, or control that is required.
• Hardware. Electronic devices that provide computing capability, the interconnectivity
devices (e.g., network switches, telecommunications devices) that enable the flow of
data, and electromechanical devices (e.g., sensors, motors, pumps) that provide external
world function.
• People. Users and operators of hardware and software.
• Database. A large, organized collection of information that is accessed via software.
• Documentation. Descriptive information (e.g., hardcopy manuals, on-line help files, Web
sites) that portrays the use and/or operation of the system.
• Procedures. The steps that define the specific use of each system element or the
procedural context in which the system resides.
At the next level in the hierarchy, a manufacturing cell is defined. The manufacturing
cell is a computer-based system that may have elements of its own (e.g., computers, mechanical
fixtures) and also integrates the macro elements that we have called numerical control
machine, robot, and data entry device.
To summarize, the manufacturing cell and its macro elements each are composed of
system elements with the generic labels: software, hardware, people, database, procedures,
and documentation. In some cases, macro elements may share a generic element. For example,
the robot and the NC machine both might be managed by a single operator (the people
element). In other cases, generic elements are exclusive to one system.
The role of the system engineer is to define the elements for a specific computer-based system in the context of the overall hierarchy of systems (macro elements). In the sections that follow, we examine the tasks that constitute computer system engineering.
Classification of System
There are various types of systems. To gain a good understanding of these systems, they can be categorized in many ways. Some of the categories are open or closed, physical or abstract, and natural or man-made information systems, which are explained next.
Physical or Abstract System
Physical systems are tangible entities that we can feel and touch. These may be static or dynamic in nature. For example, in a computer center, desks and chairs are the static parts, which assist in the working of the center; static parts don't change. Dynamic systems, by contrast, are constantly changing. Computer systems are dynamic systems: programs, data and applications can change according to the user's needs.
Abstract systems are conceptual. These are not physical entities. They may be formulas, representations or models of a real system.
Information System
In business we mainly deal with information systems, so we'll explore these systems further. We will be talking about the different types of information systems prevalent in the industry.
An information system deals with the data of an organization. The purposes of an information system are to process input, maintain data, handle queries, handle on-line transactions, and generate reports and other output. Such systems maintain huge databases, handle hundreds of queries, etc. The transformation of data into information is the primary function of an information system.
These types of systems depend upon computers for performing their objectives. A
computer based business system involves six interdependent elements. These are hardware
(machines), software, people (programmers, managers or users), procedures, data, and
information (processed data). All six elements interact to convert data into information. System
analysis relies heavily upon computers to solve problems. For these types of systems, the analyst should have a sound understanding of computer technologies.
In the following section, we explore the three most important information systems, namely the transaction processing system, the management information system and the decision support system, and examine how computers assist in maintaining information systems.
is doomed to failure. Because of its integrated components, modification of the system is quite difficult, and system development takes a fairly long time.
(b) Distributed Information System
There is a view that the development of an integrated information system involves several practical problems and is therefore not feasible. This view has been reinforced by the
failure of integrated systems in various large organisations. The concept of a distributed
information system has emerged as an alternative to the integrated information system. In the
distributed information system, there are information sub-systems that form islands of
information systems. The distributed information system aims at establishing relatively
independent sub-systems, which are, however, connected through communication interfaces.
Following are the advantages of the distributed information system:
• The processing equipment as well as the databases are dispersed, bringing them closer to the users.
• It does not involve huge initial investment as is required in an integrated system.
• It is more flexible and changes can be easily taken care of as per user’s requirements.
• The problem of data security and control can be handled more easily than in an integrated
system.
• There is no need for standby facilities, because equipment breakdowns are not as severe as in an integrated system.
The drawbacks of the distributed system are:
• It does not eliminate duplication of activities and redundancy in maintaining files.
• Coordination of activities becomes a problem.
• It needs more channels of communication than in an integrated system.
It is possible to consider several alternative approaches, which fall between the two
extremes - a completely integrated information system and a totally independent sub-system. It
is to be studied carefully what degree of integration is required for developing an information
system. It depends on how the management wants to manage the organisation, and the level of
diversity within the organisation.
Transaction processing systems provide speed and accuracy, and can be programmed to follow the routine functions of the organization.
Defining Requirements: The basic step for any system analyst is to understand the requirements of the users. This is achieved by various fact-finding techniques such as interviewing, observation, questionnaires etc. The information should be collected in such a way that it will be useful in developing a system that can provide additional features to the users beyond those desired.
Prioritizing Requirements: A number of users use the system in an organization. Each one has different requirements and retrieves different information. Due to certain limitations in computing capacity, it may not be possible to satisfy the needs of all the users. Even if the computing capacity is good enough, it is necessary to prioritize tasks and update them as per the changing requirements. Hence it is important to create a list of priorities according to users' requirements. The best way to overcome the above limitations is to have a common formal or informal discussion with the users of the system. This helps the system analyst to arrive at a better conclusion.
Gathering Facts, Data and Opinions of Users: After determining the necessary needs and collecting useful information, the analyst starts the development of the system with active cooperation from the users of the system. From time to time, the users update the analyst with the necessary information for developing the system. While developing the system, the analyst continuously consults the users and acquires their views and opinions.
Evaluation and Analysis: As the analyst maintains continuous contact with the users, he constantly changes and modifies the system to make it better and more user-friendly.
Solving Problems: The analyst must provide alternative solutions to the management and should make an in-depth study of the system to avoid future problems. The analyst should provide the management with some flexible alternatives, which will help the manager pick the system that provides the best solution.
Drawing Specifications: The analyst must draw up specifications which will be useful for the manager. The analyst should lay out specifications which can be easily understood by the manager, and they should be purely non-technical. The specifications must be detailed and well presented.
System Design: The logical design of the system is implemented, and the design must be modular.
Evaluating Systems: Evaluate the system after it has been used for some time. Plan the periodicity of evaluation and modify the system as needed.
4. Ability To Communicate
An analyst is also required to orally present his design to groups of users. Such oral presentations are often made to non-technical management personnel. He must thus be able to organize his thoughts and present them in a language easily understood by users. Good oral presentations and satisfactory replies to questions are essential to convince management of the usefulness of a computer-based information system.
5. An Analytical Mind
Analysts are required to find solutions to problems. A good analyst must be able to perceive the core of a problem and discard redundant data and information in the problem statement. Any practical solution is normally non-ideal and requires appropriate trade-offs. A good analyst would use appropriate analytical tools as necessary and use common sense.
6. Breadth Of Knowledge
A systems analyst has to work with persons performing various jobs in an organization. He may have to interact with accountants, sales persons, clerical staff, production supervisors, stores officers, purchase officers, directors etc. A systems analyst must understand how they do their jobs and design systems to enable them to do their jobs better. During his career he will be working with a variety of organizations such as hospitals, hotels, departmental stores, transport companies, educational institutions, etc. Thus a general broad-based education is very useful.
PHASES IN SDLC:
1. CONCEPTION (or selection): This phase initiates the process of examining whether an organisation should opt for computerisation. The 4 major areas investigated during this phase are:
• What are the problems?
• What are the end-objectives?
• What are the benefits?
• What are the areas to be covered?
The Proposal Form or the Project Request Form will contain details of these 4 areas.
• Ideas are conceived in the conception phase. Sources of ideas can be:
• Request from users
• Opportunities created by technological advances
• Ideas from previous systems studies.
• Outside sources.
During this phase, the Systems Analyst meets the user and gathers the following
information:
3. DETAILED ANALYSIS PHASE: The objective is to carry out a detailed analysis of every aspect
of the existing system and prepare systems specifications of the proposed system.
4. DESIGN PHASE: In this phase we try to find out how to achieve the automated solution.
5. DEVELOPMENT PHASE: The objective of this phase is to obtain an operational system, fully
documented. The main activities in this phase are:
a) Coding the programs - i.e. writing programs based on user specifications in the
design phase.
b) Test programs for errors - individually.
c) Debug the programs - identify errors and correct them.
d) Test the system as a whole unit.
e) Test with historical data.
f) Document the system. Documents are in the form of three manuals:
• User Manual for User Dept.
• Systems Manual for Analysts / Programmers
• Operations manual for Operators.
6. IMPLEMENTATION PHASE: In this phase, the old system is replaced by the new system. The activities required are: training the users and operators, and conversion from the old system to the new system.
7. EVALUATION PHASE: This phase allows us to assess the actual performance of the system. The purpose of this phase is:
a) To examine the efficiency of the system
b) Compare actual achievements with plans
c) Provide feedback to systems analyst
d) Assess the performance of the system in the following aspects:
• Actual development cost and operating cost
• Realised benefits
• Timings achieved
• User satisfaction
• Error rates
• Problem areas
• Ease of use
system level and progresses through analysis, design, coding, testing and maintenance. Figure 1.1 shows a diagrammatic representation of this model.
System engineering and analysis: Work on software development begins by establishing the
requirements for all elements of the system. System engineering and analysis involves gathering
of requirements at the system level, as well as basic top-level design and analysis. The
requirement gathering focuses especially on the software. The analyst must understand the
information domain of the software as well as the required function, performance and
interfacing. Requirements are documented and reviewed with the client.
Design: Software design is a multi-step process that focuses on data structures, software
architecture, procedural detail, and interface characterization. The design process translates
requirements into a representation of the software that can be assessed for quality before
coding begins. The design phase is also documented and becomes a part of the software
configuration.
Coding: The design must be translated into a machine-readable form. Coding performs this task.
If the design phase is dealt with in detail, the coding can be done mechanically.
Testing: Once code is generated, it has to be tested. Testing focuses on the logic as well as the function of the program, to ensure that the code is error free and that the output matches the requirement specifications.
Maintenance: Software undergoes change with time. Changes may occur on account of errors
encountered, to adapt to changes in the external environment or to enhance the functionality
and / or performance. Software maintenance reapplies each of the preceding life cycles to the
existing program.
The classic life cycle is one of the oldest models in use. However, there are a few associated
problems. Some of the disadvantages are given below.
Disadvantages:
• Real projects rarely follow the sequential flow that the model proposes. Iteration always
occurs and creates problems in the application of the model.
• It is difficult for the client to state all requirements explicitly. The classic life cycle
requires this and it is thus difficult to accommodate the natural uncertainty that occurs
at the beginning of any new project.
• A working version of the program is not available until late in the project time span. A major blunder may remain undetected until the working program is reviewed, which is potentially disastrous.
In spite of these problems the life-cycle method has an important place in software
engineering work. Some of the reasons are given below.
Advantages:
• The model provides a template into which methods for analysis, design, coding, testing
and maintenance can be placed.
• The steps of this model are very similar to the generic steps that are applicable to all
software engineering models.
• It is significantly preferable to a haphazard approach to software development.
Iterative Approach
The iterative enhancement life cycle model counters the third limitation of the waterfall
model and tries to combine the benefits of both prototyping and the waterfall model. The basic
idea is that the software should be developed in increments, where each increment adds some
functional capability to the system until the full system is implemented. At each step extensions
and design modifications can be made. An advantage of this approach is that it can result in
better testing, since testing each increment is likely to be easier than testing entire system like in
the waterfall model. Furthermore, as in prototyping, the increments provides feedback to the
client which is useful for determining the final requirements of the system.
In the first step of the iterative enhancement model, a simple initial implementation is done for a subset of the overall problem. This subset is one that contains some of the key aspects of the problem, is easy to understand and implement, and forms a useful and usable system. A project control list is created which contains, in order, all the tasks that must be performed to obtain the final implementation. This project control list gives an idea of how far the project is, at any given step, from the final system.
Each step consists of removing the next task from the list, designing the implementation for the selected task, coding and testing the implementation, performing an analysis of the partial system obtained after this step, and updating the list as a result of the analysis. These three phases are called the design phase, implementation phase and analysis phase. The process is iterated until the project control list is empty, at which time the final implementation of the system will be available. The process involved in the iterative enhancement model is shown in the figure below.
The project control list guides the iteration steps and keeps track of all tasks that must be done. The tasks in the list can include the redesign of defective components found during analysis. Each entry in the list is a task that should be performed in one step of the iterative enhancement process, and should be simple enough to be completely understood. Selecting tasks in this manner minimizes the chances of error and reduces redesign work.
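The project-control-list loop described above can be sketched as follows. This is a minimal illustration only: the task names and the rule inside analyse() are hypothetical, not part of any real project.

```python
# Minimal sketch of the iterative enhancement loop. The task names and
# the analyse() rule are illustrative assumptions, not real project data.
project_control_list = [
    "core file handling",   # a key, easy-to-implement subset comes first
    "basic editing",
    "search and replace",
]
completed = []

def analyse(task):
    """Analysis of the partial system may add tasks to the list, e.g. the
    redesign of a defective component found during analysis (hypothetical)."""
    return ["redesign of editing buffer"] if task == "basic editing" else []

while project_control_list:                      # iterate until the list is empty
    task = project_control_list.pop(0)           # design phase: select the next task
    completed.append(task)                       # implementation phase (stubbed out)
    project_control_list.extend(analyse(task))   # analysis phase: update the list

# When the loop ends, the project control list is empty and the
# final implementation of the system is available.
```

The length of the remaining list at any point is exactly the "how far from the final system" measure the text describes.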
Advantages
The advantages of the Iterative Model are:
• Faster coding, testing and design phases
• Facilitates the support for changes within the life cycle
Disadvantages
The disadvantages of the Iterative Model are:
• More time spent in review and analysis
• A lot of steps that need to be followed in this model
• Delay in one phase can have detrimental effect on the software as a whole
Extreme Programming
The first Extreme Programming project was started on March 6, 1996. Extreme Programming is one of several popular Agile processes. It has already proven to be very successful at many companies of all sizes and industries worldwide.
The most surprising aspect of Extreme Programming is its simple rules. Extreme Programming is a lot like a jigsaw puzzle: there are many small pieces, and individually the pieces make no sense, but when combined together a complete picture can be seen. The rules may seem awkward and perhaps even naive at first, but they are based on sound values and principles.
The rules set expectations between team members, but they are not the end goal in themselves. You will come to realize that these rules define an environment that promotes team collaboration and empowerment, which is the real goal. Once that is achieved, productive teamwork will continue even as the rules are changed to fit your company's specific needs.
This flow chart shows how Extreme Programming's rules work together. Customers enjoy
being partners in the software process, developers actively contribute regardless of experience
level, and managers concentrate on communication and relationships. Unproductive activities
have been trimmed to reduce costs and frustration of everyone involved.
Advantages
• Customer focus increases the chance that the software produced will actually meet the needs of the users.
• The focus on small, incremental releases decreases the risk on your project:
o by showing that your approach works and
o by putting functionality in the hands of your users, enabling them to provide
timely feedback regarding your work.
• Continuous testing and integration helps to increase the quality of your work
• XP is attractive to programmers who normally are unwilling to adopt a software process,
enabling your organization to manage its software efforts better.
Disadvantages
• Data modeling: The information flow defined, as a part of the business-modeling phase
is refined into a set of data objects that are needed to support the business. The
attributes of each object are identified and the relationships between these objects are
defined.
• Process modeling: The data objects defined in the previous phase are transformed to
achieve the information flow necessary to implement a business function. Processing
descriptions are created for data manipulation.
• Application generation: RAD assumes the use of fourth generation techniques. Rather
than using third generation languages, the RAD process works to reuse existing
programming components whenever possible or create reusable components. In all
In general, if a business function can be modularized in a way that enables each function
to be completed in less than three months, it is a candidate for RAD. Each major function can be
addressed by a separate RAD team and then integrated to form a whole.
Advantages:
• Modularized approach to development
• Creation and use of reusable components
• Drastic reduction in development time
Disadvantages:
• For large projects, sufficient human resources are needed to create the right number of
RAD teams.
• Not all types of applications are appropriate for RAD. If a system cannot be modularized,
building the necessary components for RAD will be difficult.
• Not appropriate when the technical risks are high. For example, when an application
makes heavy use of new technology or when the software requires a high degree of
interoperability with existing programs.
Incremental model
This model combines elements of the linear sequential model with the iterative philosophy of prototyping. The incremental model applies linear sequences in a staggered fashion as time progresses. Each linear sequence produces a deliverable increment of the software. For example, word-processing software may deliver basic file management, editing and document production functions in the first increment; more sophisticated editing and document production in the second increment; spelling and grammar checking in the third increment; advanced page layout in the fourth increment; and so on. The process flow for any increment can incorporate the prototyping model.
When an incremental model is used, the first increment is often a core product. Here, the basic requirements are met, but supplementary features remain undelivered. The client uses the core product, and as a result of this evaluation, a plan is developed for the next increment. The plan addresses improvement of the core features and the addition of supplementary features. This process is repeated following the delivery of each increment, until the complete product is produced. As opposed to prototyping, the incremental model focuses on the delivery of an operational product after every iteration.
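The staggered delivery above can be sketched with the word-processing example from the text. The feature names follow the example; everything else is an illustrative assumption.

```python
# Sketch of staggered incremental delivery, using the word-processing
# example. Each inner list is one increment's deliverable feature set.
increments = [
    ["basic file management", "editing", "document production"],  # core product
    ["sophisticated editing and document production"],
    ["spelling and grammar checking"],
    ["advanced page layout"],
]

product = []      # the operational product grows with each increment
deliveries = []   # what the client can use after each linear sequence
for increment in increments:
    # Each linear sequence produces a deliverable increment; the client's
    # evaluation of this delivery feeds the plan for the next increment.
    product = product + increment
    deliveries.append(list(product))
```

Unlike a prototype, every entry in `deliveries` is an operational product, not a throwaway model.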
Advantages:
• Particularly useful when staffing is inadequate for a complete implementation by the
business deadline.
• Early increments can be implemented with fewer people. If the core product is well
received, additional staff can be added to implement the next increment.
• Increments can be planned to manage technical risks. For example, the system may
require availability of some hardware that is under development. It may be possible to
plan early increments without the use of this hardware, thus enabling partial
functionality and avoiding unnecessary delay.
Disadvantages
• Each phase of an iteration is rigid, and the phases do not overlap each other.
• Problems may arise pertaining to system architecture because not all requirements are
gathered up front for the entire software life cycle.
Spiral model
The spiral model in software engineering has been designed to incorporate the best features of both the classic life cycle and the prototype models, while at the same time adding an element of risk analysis that is missing in those models. The model, represented in figure 1.3, defines four major activities, one for each of the four quadrants of the figure:
• Planning: Determination of objectives, alternatives and constraints.
• Risk analysis: Analysis of alternatives and identification or resolution of risks.
• Engineering: Development of the next level product.
• Customer evaluation: Assessment of the results of engineering.
An interesting aspect of the spiral model is the radial dimension as depicted in the figure.
With each successive iteration around the spiral, progressively more complete versions of the
software are built. During the first circuit around the spiral, objectives, alternatives and
constraints are defined and risks are identified and analyzed. If risk analysis indicates that there
is an uncertainty in the requirements, prototyping may be used in the engineering quadrant to
assist both the developer and the client.
The client now evaluates the engineering work and makes suggestions for improvement. At each loop around the spiral, the risk analysis results in a "go / no-go" decision. If the risks are too great, the project can be terminated.
In most cases, however, the spiral flow continues outward toward a more complete model of the system, and ultimately to the operational system itself. Every circuit around the spiral requires engineering that can be accomplished using the life cycle or the prototype models. It should be noted that the number of development activities increases as activities move away from the center of the spiral.
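The four quadrants and the go / no-go decision on each circuit can be sketched as a loop. The circuit names, risk values and threshold below are made-up assumptions purely for illustration.

```python
# Sketch of spiral-model circuits. On each circuit: planning, risk
# analysis (with a go / no-go decision), engineering, and customer
# evaluation. Versions and risk scores are hypothetical.
circuits = [
    {"version": "requirements prototype", "risk": 0.4},
    {"version": "first operational build", "risk": 0.2},
    {"version": "complete system",         "risk": 0.1},
]
RISK_LIMIT = 0.8   # beyond this, risks are too great: terminate (no-go)

history = []
for circuit in circuits:
    # Planning: determine objectives, alternatives and constraints.
    # Risk analysis: analyse alternatives, identify and resolve risks.
    if circuit["risk"] > RISK_LIMIT:
        history.append(("no-go", circuit["version"]))
        break                      # project terminated
    # Engineering: develop the next-level product (life cycle or prototype).
    # Customer evaluation: assess the results of engineering.
    history.append(("go", circuit["version"]))
```

Each pass through the loop is one circuit around the spiral, producing a progressively more complete version of the software.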
26
Pravin Prabhakar Chalke
pravin_chalke241088@yahoo.co.in
Like all other models, the spiral model too has a few associated problems, which are
discussed below.
Disadvantages:
• It may be difficult to convince clients that the evolutionary approach is controllable.
• It demands considerable risk assessment expertise and relies on this for success.
• If a major risk is not uncovered, problems will undoubtedly occur.
• The model is relatively new and has not been as widely used as the life cycle or the prototype models. It will take a few more years to determine the efficiency of this process with certainty.
This model, however, is one of the most realistic approaches available for software engineering. It also has a few advantages, which are discussed below.
Advantages:
• The evolutionary approach enables developers and clients to understand and react to risks at each evolutionary level.
• It uses prototyping as a risk reduction mechanism and allows the developer to use this
approach at any stage of the development.
• It uses the systematic approach suggested by the classic life cycle method but
incorporates it into an iterative framework that is more realistic.
• This model demands an evaluation of risks at all stages and should reduce risks before
they become problematic, if properly applied.
Prototype model
Often a customer has defined a set of objectives for software, but not identified the
detailed input, processing or output requirements. In other cases, the developer may be unsure
of the efficiency of an algorithm, the adaptability of the operating system or the form that the
human-machine interaction should take. In these situations, a prototyping approach may be the
best approach. Prototyping is a process that enables the developer to create a model of the
software that must be built. The sequence of events for the prototyping model is illustrated in
figure. Prototyping begins with requirements gathering. The developer and the client meet and
define the overall objectives for the software, identify the requirements, and outline areas
where further definition is required. In the next phase a quick design is created. This focuses on
those aspects of the software that are visible to the user (e.g. i/p approaches and o/p formats).
The quick design leads to the construction of the prototype. This prototype is evaluated by the
client / user and is used to refine requirements for the software to be developed. A process of
iteration occurs as the prototype is "tuned" to satisfy the needs of the client, while at the same
time enabling the developer to more clearly understand what needs to be done.
The prototyping model has a few associated problems. These are discussed below.
Disadvantages:
• The client sees what is apparently a working version of the software, unaware that in the
rush to develop a working model, software quality and long-term maintainability have not
been considered. When informed that the system must be rebuilt, most clients demand
that the existing application be fixed and made into a working product. Often software
developers are forced to relent.
Although problems may occur prototyping may be an effective model for software
engineering. Some of the advantages of this model are enumerated below.
Advantages:
• It is especially useful in situations where requirements are not clearly defined at the
beginning and are not fully understood by either the client or the developer.
• Prototyping is also helpful in situations where an application is built for the first time
with no precedents to be followed. In such circumstances, unforeseen eventualities may
occur which cannot be predicted and can only be dealt with when encountered.
Component-based development (CBD) model
Object oriented technologies provide the technical framework for a component-based
process model for software engineering. This model emphasizes the creation of classes that
encapsulate both data and the algorithms used to manipulate the data. The component-based
development (CBD) model incorporates many characteristics of the spiral model. It is
evolutionary in nature, thus demanding an iterative approach to software creation.
However, the model composes applications from pre-packaged software components called
classes. The engineering begins with the identification of candidate classes. This is done by
examining the data to be manipulated, and the algorithms that will be used to accomplish this
manipulation. Corresponding data and algorithms are packaged into a class. Classes created in
past applications are stored in a class library. Once candidate classes are identified the class
library is searched to see if a match exists. If it does, these classes are extracted from the library
and reused. If it does not exist, it is engineered using object-oriented techniques. The first
iteration of the application is then composed. Process flow moves to the spiral and will
ultimately re-enter the CBD during subsequent passes through the engineering activity.
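The lookup-then-reuse step can be sketched as follows. The class library is modeled as a plain dictionary and the class names and matching rule are invented for illustration:

```python
# Sketch of the CBD engineering step: search the class library for a
# candidate class; reuse it if found, otherwise engineer a new one and
# store it for future applications. All names here are hypothetical.

class_library = {
    "Invoice": "class Invoice: ...",   # classes stored from past projects
    "Customer": "class Customer: ...",
}

def engineer_class(name):
    """Stand-in for engineering a new class with OO techniques."""
    return f"class {name}: ..."

def obtain_class(name):
    if name in class_library:          # match exists: extract and reuse
        return class_library[name], "reused"
    new_class = engineer_class(name)   # no match: engineer from scratch
    class_library[name] = new_class    # add to the library for later reuse
    return new_class, "engineered"

# First iteration of an application composed from candidate classes:
for candidate in ["Customer", "Order"]:
    _, origin = obtain_class(candidate)
    print(candidate, origin)
```

Note how the library grows as a side effect of each pass, which is exactly the source of the reuse benefits claimed for the model.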
Advantages:
• The CBD model leads to software reuse, and reusability provides software engineers with
a number of measurable benefits.
• This model can lead to a 70% reduction in development cycle time and an 84% reduction
in project cost.
Disadvantages:
• The results mentioned above are inherently dependent on the robustness of the
component library.
Fourth generation techniques (4GT)
The term "fourth generation techniques" encompasses a broad array of software tools
that have one thing in common: each enables the software engineer to specify some
characteristic of the software at a higher level. The tool then automatically generates source
code based on the developer's specifications.
Currently, a software development environment that supports the 4GT model includes
some or all of the following tools: nonprocedural languages for database query, report
generation, data manipulation, screen interaction and definition, code generation, high-level
graphics capability, spreadsheet capability, automated generation of HTML, and so on. Initially,
many of these tools were available only for very specific application domains, but today 4GT
environments have been extended to address most software application categories.
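The central 4GT idea, specify what is wanted at a higher level and let the tool generate the code, can be illustrated with a toy report generator that turns a nonprocedural specification into SQL. The specification format and the generator are invented for this example; real 4GT tools are far richer:

```python
# Toy illustration of a 4GT-style tool: a declarative report
# specification (what is wanted) is translated into SQL (how to get it).
# The spec keys and the generator function are invented for this sketch.

def generate_sql(spec):
    cols = ", ".join(spec["columns"])
    sql = f"SELECT {cols} FROM {spec['table']}"
    if "filter" in spec:
        sql += f" WHERE {spec['filter']}"
    if "order_by" in spec:
        sql += f" ORDER BY {spec['order_by']}"
    return sql

report_spec = {                     # nonprocedural: no loops, no I/O code
    "table": "orders",
    "columns": ["customer", "total"],
    "filter": "total > 1000",
    "order_by": "total DESC",
}

print(generate_sql(report_spec))
```

The developer states only the desired result; all procedural detail is produced by the generator, which is the defining property of a fourth generation technique.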
Like all other models, 4GT begins with a requirements gathering phase. Ideally, the
customer would describe the requirements, which are directly translated into an operational
prototype. Practically, however, the client may be unsure of the requirements, may be
ambiguous in his specs or may be unable to specify information in a manner that a 4GT tool can
use. Thus, the client/developer dialog remains an essential part of the development process.
For small applications, it may be possible to move directly from the requirements
gathering phase to the implementation phase using a nonprocedural fourth generation
language. However for larger projects a design strategy is necessary. Otherwise, the same
difficulties are likely to arise as with conventional approaches. As with all other models, the 4GT
model has both merits and demerits.
These are enumerated below:
Advantages:
• Dramatic reduction in software development time.
• Improved productivity for software developers.
Disadvantages:
• Not much easier to use than conventional programming languages.
Concurrent process model
Figure provides a schematic representation of one activity within the concurrent process
model. The analysis activity may be in any one of the states noted at any given time.
Similarly, other activities (e.g., design or customer communication) can be represented in an
analogous manner. All activities exist concurrently but reside in different states. For example,
early in a project the customer communication activity (not shown in the figure) has completed
its first iteration and exists in the awaiting changes state. The analysis activity (which existed in
the none state while initial customer communication was completed) now makes a transition
into the under development state. If, however, the customer indicates that changes in
requirements must be made, the analysis activity moves from the under development state into
the awaiting changes state.
The concurrent process model defines a series of events that will trigger transitions from
state to state for each of the software engineering activities. For example, during early stages of
design, an inconsistency in the analysis model is uncovered. This generates the event analysis
model correction which will trigger the analysis activity from the done state into the awaiting
changes state.
The concurrent process model is often used as the paradigm for the development of
client/server applications. A client/server system is composed of a set of functional
components. When applied to client/server, the concurrent process model defines activities in
two dimensions: a system dimension and a component dimension. System level issues are
addressed using three activities: design, assembly, and use. The component dimension is
addressed with two activities: design and realization. Concurrency is achieved in two ways:
(1) system and component activities occur simultaneously and can be modeled using the state-
oriented approach described previously;
(2) a typical client/server application is implemented with many components, each of which can
be designed and realized concurrently.
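The state transitions described above can be sketched as a small event-driven state machine. The state and event names follow the text; the dictionary encoding is an illustrative choice:

```python
# Sketch of one activity (e.g. analysis) in the concurrent process model
# as an event-driven state machine. States and events follow the text;
# the transition-table encoding is an illustrative assumption.

TRANSITIONS = {
    ("none", "customer communication completed"): "under development",
    ("under development", "analysis completed"): "done",
    ("under development", "customer requests changes"): "awaiting changes",
    ("done", "analysis model correction"): "awaiting changes",
    ("awaiting changes", "changes incorporated"): "under development",
}

def fire(state, event):
    """Return the new state, or stay put if the event does not apply."""
    return TRANSITIONS.get((state, event), state)

state = "none"
for event in ["customer communication completed",
              "analysis completed",
              "analysis model correction"]:  # inconsistency found in design
    state = fire(state, event)
print(state)
```

Each software engineering activity would carry its own copy of such a machine, all existing concurrently in different states.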
V-Shaped Model
Just like the waterfall model, the V-shaped life cycle is a sequential path of execution of
processes: each phase must be completed before the next phase begins. Testing, however, is
emphasized more in this model than in the waterfall model. The testing procedures are
developed early in the life cycle, before any coding is done, during each of the phases preceding
implementation.
Requirements begin the life cycle model just like the waterfall model. Before
development is started, a system test plan is created. The test plan focuses on meeting the
functionality specified in the requirements gathering.
The high-level design phase focuses on system architecture and design. An integration
test plan is created in this phase as well, in order to test the ability of the pieces of the software
system to work together.
The low-level design phase is where the actual software components are designed, and
unit tests are created in this phase as well.
The implementation phase is, again, where all coding takes place. Once coding is
complete, the path of execution continues up the right side of the V where the test plans
developed earlier are now put to use.
Advantages
• Simple and easy to use.
• Each phase has specific deliverables.
• Higher chance of success over the waterfall model due to the development of test plans
early on during the life cycle.
• Works well for small projects where requirements are easily understood.
Disadvantages
Requirement Analysis
Both the developer and the customer take an active role in requirements analysis and
specification. The customer attempts to reformulate a sometimes unclear concept of software
function and performance into concrete detail. The developer acts as interrogator, consultant
and problem-solver.
• Requirement analysis is a software engineering task that bridges the gap between
system level software allocation and software design.
• It enables the system engineer to specify software function and performance, indicate
the software's interface with other system elements, and establish design constraints that
the software must meet.
• It allows the software engineer to refine the software allocation and build models of the
process, data and behavioral domains that will be treated by software.
• It provides the software designer with a representation of information and function that
can be translated into data, architectural and procedural design.
• It also provides the developer and the client with the means to assess quality once the
software is built.
The principles of requirement analysis call upon the analyst to systematically approach
the specification of the system to be developed. This means that the analysis has to be done
using the available information. Generally, all computer systems are looked upon as information
processing systems, since they process data input and produce a useful output.
The logical view of a system gives the overall feel of how the system operates. Any
system performs three generic functions: input, output and processing. The logical view focuses
on the problem-specific functions. This helps the analyst to identify the functional model of the
system. The functional model begins with a single context level model. Over a series of
iterations, more and more functional detail is provided, until all system functionality is
represented.
The physical view of the system focuses on the operations being performed on the data
that is either taken as input or generated as output. This view determines the actions to be
performed on the data under specific conditions. This helps the analyst to identify the behavioral
model of the system. The analyst can determine an observable mode of behavior that changes
only when some event occurs. Examples of such events are:
i) An internal clock indicating some specified time has passed.
ii) A mouse movement.
iii) An external time signal.
iii) Modeling
We create models to gain a better understanding of the actual entity to be built. The
software model must be capable of modeling the information that software transforms, the
functions that enable the transformation to occur and the behavior of the system during
transformation. Models created serve a number of important roles:
• The model aids the analyst in understanding the information, function and behavior
of the system, thus making the requirement analysis easier and more systematic.
• The model becomes the focal point for review and the key to determining the
completeness, consistency and accuracy of the specification.
• The model becomes the foundation for design, providing the designer with an
essential representation of software that can be mapped into an implementation
context.
iv) Specification
There is no doubt that the mode of specification has much to do with the quality of the
solution. The quality, timeliness and completeness of the software may be adversely
affected by incomplete or inconsistent specifications.
v) Review
Both the software developer and the client conduct a review of the software
requirements specification. Because the specification forms the foundation of the development
phase, extreme care is taken in conducting the review.
The review is first conducted at a macroscopic level. The reviewers attempt to ensure
that the specification is complete, consistent and accurate. In the next phase, the review is
conducted at a detailed level. Here, the concern is on the wording of the specification. The
developer attempts to uncover problems that may be hidden within the specification content.
computer procedures, and an outline of current problems. This background information should
be checked carefully before beginning the detailed fact-finding task. You should now be ready to
begin asking questions, and, as a result of your planning, these questions will be relevant,
focused and appropriate.
In carrying out your investigation you will be collecting information about the current
system, and, by recording the problems and requirements described by users of the current
system, building up a picture of the required system. The facts gathered from each part of the
client’s organisation will be concerned primarily with the current system and how it operates,
and will include some or all of the following: details of inputs to and outputs from the system;
how information is stored; volumes and frequencies of data; any trends that can be identified;
and specific problems, with examples if possible, that are experienced by users.
In order to collect this data and related information, a range of fact-finding methods can
be used. Those most commonly used include interviewing, questionnaires, observation,
searching records and document analysis. Each of these methods is described in this chapter,
but it is important to remember that they are not mutually exclusive. More than one method
may be used during a fact-finding session: for example, during a fact-finding interview a
questionnaire may be completed, records inspected and documents analysed.
Using this information, an assessment of the volatility of the information can be made,
and the usefulness of existing information can be questioned if it appears that some file data is
merely updated, often inaccurate or little used. All of the information collected by record
searching can be used to cross-check information given by users of the system. This doesn’t
imply that user opinion will be inaccurate, but discrepancies can be evaluated and the reasons
for them discussed.
Where there are a large number of documents, statistical sampling can be used. This will
involve sampling randomly or systematically to provide the required quantitative and qualitative
information. This can be perfectly satisfactory if the analyst understands the concepts behind
statistical sampling, but is a very hazardous exercise for the untrained. One particularly common
fallacy is to draw conclusions from a non-representative sample. Extra care should be taken in
estimating volumes and data field sizes from the evidence of a sample. For instance, a small
sample of cash receipts inspected during a mid-month slack period might indicate an average of
40 per day, with a maximum value of £1500 for any one item. Such a sample used
indiscriminately for system design might be disastrous if the real-life situation was that the
number of receipts ranged from 20 to 2000 per day depending upon the time in the month, and
that, exceptionally, cheques for over £100,000 were received. It is therefore recommended that
more than one sample is taken and the results compared to ensure consistency.
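The hazard described above, generalizing from one unrepresentative sample, can be made concrete with a small numeric sketch. The daily receipt counts are invented for illustration:

```python
# Illustration of why a single sample can mislead: daily receipt counts
# for a slack mid-month period versus a month-end peak (figures invented).

slack_week = [38, 41, 35, 44, 40]           # sample taken mid-month
peak_week  = [900, 1500, 2000, 1700, 1100]  # sample taken at month end

def average(xs):
    return sum(xs) / len(xs)

# A design sized from the slack sample alone would plan for roughly 40
# receipts a day; comparing a second sample exposes the true peak volume.
print(average(slack_week))
print(max(peak_week))
```

Taking the two samples together, rather than either alone, reveals the real operating range the system must be designed for.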
Conduct Interviews
Interviewing is the most commonly used, and normally the most useful, fact-finding
technique. Interviews are used to collect information from individuals face-to-face. There can be
several objectives in interviewing, such as finding out facts, verifying facts, clarifying facts,
generating enthusiasm, getting the end-user involved, identifying requirements, and gathering
ideas and opinions. However, using the interviewing technique requires good communication
skills for dealing effectively with people who have different values, priorities, opinions,
motivations, and personalities. As with other fact-finding techniques, interviewing is not always
the best method for all situations. The advantages and disadvantages of using interviewing as a
fact-finding technique are listed in Table.
There are two types of interview: unstructured and structured. Unstructured interviews
are conducted with only a general objective in mind and with few, if any, specific questions. The
interviewer counts on the interviewee to provide a framework and direction to the interview.
This type of interview frequently loses focus and, for this reason, it often does not work well for
database analysis and design.
In structured interviews, the interviewer has a specific set of questions to ask the
interviewee. Depending on the interviewee’s responses, the interviewer will direct additional
Observation
The analyst may also be involved in undertaking planned or conscious observations when
it is decided to use this technique for part of the study. This will involve watching an operation
for a period to see exactly what happens. Clearly, formal observation should only be used if
agreement is given and users are prepared to cooperate. Observation should be done openly, as
covert observation can undermine any trust and goodwill the analyst manages to build up. The
technique is particularly good for tracing bottlenecks and checking facts that have already been
noted.
A checklist, highlighting useful areas for observation, is shown as Figure.
Document Analysis
When investigating the data flowing through a system another useful technique is to
collect documents that show how this information is organised. Such documents might include
reports, forms, organisation charts or formal lists. In order to fully understand the purpose of a
document and its importance to the business, the analyst must ask questions about how, where,
why and when it is used. This is called document analysis, and is another fact-finding technique
available. It is particularly powerful when used in combination with one or more of the other
techniques.
Build Prototypes
Prototyping:
a) Prototyping is a technique for quickly building a functioning but incomplete model of the
information system.
b) A prototype is a small, representative or working model of the users' requirements or of a
proposed design for an information system.
c) The Development of the prototype is based on the fact that the requirements are seldom
fully known at the beginning of the project. The idea is to build a first simplified version
of the system and seek feedback from the people involved in order to then design a
better subsequent version. This process is repeated until the system meets the client's
conditions of acceptance.
d) Any given prototype may omit certain functions or features until such a time as the
prototype has sufficiently evolved into an acceptable implementation of requirements.
e) There are two types of prototyping models
• Throw-away prototyping - In this model, the prototype is discarded once it has served
the purpose of clarifying the system requirements.
• Evolutionary prototyping - In this model, the prototype evolves into the main system.
Steps in Prototyping:
a) Requirement analysis:
Identify the user's information and operating requirements.
b) Prototype creation or modification:
Develop a working prototype that focuses on only the most important functions, using a
basic database.
c) Customer evaluation:
Allow the customers to use the prototype and evaluate it. Gather their feedback, reviews,
comments and suggestions.
d) Prototype refinement:
Integrate the most important changes into the current prototype on the basis of
customer feedback.
e) System development and implementation:
Once the refined prototype satisfies all the client's conditions of acceptance, it is
transformed into the actual system.
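The steps above can be sketched as an iterative loop. The acceptance rule and feedback here are stand-ins; no real customer evaluation happens in this toy version:

```python
# Sketch of the prototyping cycle from the steps above. The acceptance
# rule is a stand-in for genuine customer evaluation of each version.

def customer_accepts(version):
    """Stand-in for customer evaluation: accept from version 3 onward."""
    return version >= 3

def build_prototype(version):
    return f"prototype v{version}"

def prototyping_cycle(max_rounds=10):
    version = 1
    prototype = build_prototype(version)   # b) create the prototype
    while not customer_accepts(version):   # c) customer evaluation
        version += 1                       # d) refine using feedback
        prototype = build_prototype(version)
        if version > max_rounds:           # guard against the
            break                          # never-ending-process trap
    return prototype                       # e) becomes the actual system

print(prototyping_cycle())
```

The `max_rounds` guard mirrors the disadvantage noted below: without a controlled limit, refinement can become a never-ending process.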
Advantages:
• Shorter development time.
• More accurate user requirements.
Disadvantages:
• An inappropriate operating system or programming language may be used simply
because it is available.
• The completed system may not contain all the features and final touches. For instance,
headings, titles and page numbers in the report may be missing.
• File organization may be temporary and record structures may be left incomplete.
• Processing and input controls may be missing, and documentation of the system may
have been avoided entirely.
• Development of the system may become a never-ending process, as changes will keep
happening.
• Adds to the cost and time of developing the system if left uncontrolled.
Application:
• This method is most useful for unique applications where developers have little
information or experience, or where the risk of error may be high.
• It is useful to test the feasibility of the system or to identify user requirements.
Questionnaires
The use of a questionnaire might seem an attractive idea, as data can be collected from
a lot of people without having to visit them all. Although this is true, the method should be used
with care, as it is difficult to design a questionnaire that is both simple and comprehensive. Also,
unless the questions are kept short and clear, they may be misunderstood by those questioned,
making the data collected unreliable.
• a classification section for collecting information that can later be used for analysing and
summarising the total data, such as age, sex, grade, job title, location;
• a data section made up of questions designed to elicit the specific information being
sought by the analyst.
These three sections can clearly be observed in the questionnaire in Figure: the heading
section is at the top (name and date), the next line down is for recording classification
information (job title, department and section), and the data section is designed to gather
information about the duties associated with a particular job.
The questionnaire designer aims to formulate questions in the data section that are not
open to misinterpretation, that are unbiased, and which are certain to obtain the exact data
required. This is not easy, but the principles of questioning described earlier in this chapter
provide a useful guide. A mixture of open and closed questions can be used, although it is useful
to restrict the scope of open questions so that the questionnaire focuses on the area under
investigation. To ensure that your questions are unambiguous, you will need to test them on a
sample of other people, and if the questionnaire is going to be the major fact-finding instrument
the whole document will have to be field tested and validated before its widespread use.
date by which the questionnaire should be returned. Even with this explanation, an expectation
that all the questionnaires will be completed and returned is not realistic. Some people object to
filling in forms, some will forget, and others delay completing them until they are eventually
lost! To conclude, then, a questionnaire can be a useful fact-finding instrument, but care must
be taken with the design. A poor design can mean that the form is difficult to complete, which
will result in poor-quality information being returned to the analyst.
Conduct JAD Sessions
The success of a JAD session depends on the presence of all key stakeholders and their
contributions and decisions.
Structured Walkthrough
Purpose:
1. The purpose of a structured walkthrough is to find areas where improvements can be
made in the system or the development process.
2. Structured walkthroughs are often employed to enhance quality and to provide guidance
to system analysts and programmers.
3. A walkthrough should be viewed by the programmers and analysts as an opportunity to
receive assistance, not as an obstacle to be avoided or merely tolerated.
4. The structured walkthrough can be used as a constructive and cost-effective
management tool after detailed investigation, following design, and during program
development.
REQUIREMENT REVIEW:
1. A requirement review is a structured walkthrough conducted to examine the requirement
specifications formulated by the analyst.
2. It is also called a specification review.
3. It aims at examining the functional activities and processes that the new system will
handle.
4. It includes documentation that participants read and study prior to the actual
walkthrough.
DESIGN REVIEW:
1. A design review focuses on design specifications for meeting previously identified system
requirements.
2. The purpose of this type of structured walkthrough is to determine whether the
proposed design will meet the requirements effectively and efficiently.
3. If the participants find discrepancies between the design and the requirements, they will
point them out and discuss them.
CODE REVIEW:
1. A code review is a structured walkthrough conducted to examine the program code
developed in a system, along with its documentation.
2. It is used for new systems and for systems under maintenance.
3. A code review does not deal with the entire software but rather with individual modules
or major components in a program.
4. When programs are reviewed, the participants also assess execution efficiency, the use
of standard data names and modules, and program errors.
Feasibility Study
A feasibility study is a preliminary study undertaken before the real work of a project
starts, to ascertain the likelihood of the project's success. It is an analysis of possible solutions to
a problem and a recommendation on the best solution to use. It involves evaluating how the
solution will fit into the corporation. For example, it can determine whether order processing
can be carried out more efficiently by a new system than by the previous one.
Feasibility studies are useful and valid for many kinds of projects. Evaluations of a new
business venture, both from new groups and from established businesses, are the most common,
but not the only, usage. Studies can help groups decide to expand existing services, build or remodel
facilities, change methods of operation, add new products, or even merge with another
business. A feasibility study assists decision makers whenever they need to consider alternative
development opportunities.
Feasibility studies permit planners to outline their ideas on paper before implementing
them. This can reveal errors in project design before their implementation negatively affects the
project. Applying the lessons gained from a feasibility study can significantly lower the project
costs. The study presents the risks and returns associated with the project so the prospective
members can evaluate them. There is no "magic number" or correct rate of return a project
needs to obtain before a group decides to proceed. The acceptable level of return and
appropriate risk rate will vary for individual members depending on their personal situation.
Cooperatives serve the needs and enhance the economic returns of their members, and not
outside investors, so the appropriate economic rate of return for a cooperative project may be
lower than those required by projects of investor-owned firms. Potential members should
evaluate the returns of a cooperative project to see how it would affect the returns of all of their
business operations.
The proposed project usually requires both risk capital from members and debt capital
from banks and other financiers to become operational. Lenders typically require an objective
evaluation of a project prior to investing. A feasibility study conducted by someone without a
vested interest in the project outcome can provide this assessment.
The feasibility study evaluates the project’s potential for success. The perceived
objectivity of the evaluation is an important factor in the credibility placed on the study by
potential investors and financiers. Also, the creation of the study requires a strong background
both in the financial and technical aspects of the project. For these reasons, outside consultants
conduct most studies.
Feasibility studies for a cooperative are similar to those for other businesses, with one
exception. Cooperative members use it to be successful in enhancing their personal businesses,
so a study conducted for a cooperative must address how the project will impact members as
individuals in addition to how it will affect the cooperative as a whole.
The feasibility study is conducted to assist the decision-makers in making the decision
that will be in the best interest of the school food service operation. The extensive research,
conducted in a non-biased manner, will provide data upon which to base a decision.
A feasibility study could be used to decide whether a new working system is needed, for
example because:
• The current system may no longer suit its purpose;
• Technological advancement may have rendered the current system redundant;
• The business is expanding and must cope with an extra workload;
• Customers are complaining about the speed and quality of work the business provides;
• Competitors are winning a larger market share through effective integration of a
computerized system.
Although few businesses would not benefit from a computerized system at all, the
process of carrying out this feasibility study makes the purchaser/client think carefully about
how it is going to be used.
After request clarification, the analyst proposes some solutions. For each solution, it is
then checked whether it is practical to implement that solution.
This is done through a feasibility study, which examines various aspects, such as whether
the solution is technically or economically feasible. Depending upon the aspect being examined,
feasibility can be categorized into four classes:
• Technical Feasibility
• Economic Feasibility
• Operational Feasibility
• Legal Feasibility
The outcome of the feasibility study should be very clear. It should address the following issues:
• Is there an alternate way to do the job in a better way?
• What is recommended?
• The team consists of analysts and user staff with enough collective expertise to devise a
solution to the problem.
5. Determine and evaluate the performance and cost-effectiveness of each candidate system.
• Each candidate system's performance is evaluated against the system performance
requirements set prior to the feasibility study.
• Whatever the criteria, there has to be as close a match as practicable, although trade-offs
are often necessary to select the best system.
• The cost encompasses both designing and installing the system.
• It includes user training, updating the physical facilities, and documentation.
• System performance criteria are evaluated against the cost of each system to determine
which system is likely to be the most cost-effective while also meeting the performance
requirements.
• Costs are most easily determined when the benefits of the system are tangible and
measurable.
• An additional factor to consider is the cost of the study, design, and development.
• In many respects the cost of the study phase is a "sunk cost": it has already been incurred and cannot be recovered.
• Including it in the project cost estimate is therefore optional.
8. Feasibility Report
• The feasibility report is a formal document for management use.
• The report should be brief enough and sufficiently non-technical to be understandable, yet
detailed enough to provide the basis for system design.
• The report contains the following sections:
o Cover letter – presents the general findings and the recommendations to be considered.
o Table of contents – specifies the location of the various parts of the report.
o Overview – a narrative explanation of the purpose and scope of the project, the
reason for undertaking the feasibility study, and the department(s) involved in or
affected by the candidate system.
o Detailed findings – outlines the methods used in the present system; the system's
effectiveness and efficiency as well as its operating costs are emphasized. This section
also describes the objectives and general procedures of the candidate system.
o Economic justification – details point-by-point cost comparisons and preliminary
cost estimates for the development and operation of the candidate system. A
return-on-investment (ROI) analysis of the project is also included.
o Recommendations & conclusions
o Appendix – documents all memos and data compiled during the investigation;
they are placed at the end of the report for reference.
Technical Feasibility
In technical feasibility the following issues are taken into consideration.
• Whether the required technology is available or not
• Whether the required resources are available
o Manpower- programmers, testers & debuggers
o Software and hardware
Operational Feasibility
Operational feasibility is mainly concerned with issues like whether the system will be
used if it is developed and implemented, and whether there will be resistance from users that
will affect the possible application benefits. The essential questions that help in testing the
operational feasibility of a system are the following.
• Will the proposed system really benefit the organization? Does the overall responsiveness
increase? Will accessibility of information be lost? Will the system affect the customers in
any considerable way?
Legal Feasibility
It includes study concerning contracts, liability, violations, and other legal traps
frequently unknown to the technical staff.
Market Feasibility
Another concern is market variability and its impact on the project. This area should not be
confused with economic feasibility. Market analysis is needed to assess the potential impact
of market demand, competitive activities, etc., and the "divertible" market share available.
Price-war activities by competitors, whether local, regional, national, or international, must also
be analyzed for early contingency funding and debt-service negotiations during the start-up,
ramp-up, and commercial start-up phases of the project.
Environmental Feasibility
Environmental issues are often a killer of projects, through long, drawn-out approval
processes and outright active opposition by those claiming environmental concerns. This is an
aspect worthy of real attention in the very early stages of a project. Concern must be shown,
and action must be taken, to address any and all environmental concerns raised or anticipated.
A well-known example was the attempt by Disney to build a theme park in Virginia. After a lot
of funds and effort, Disney could not overcome the local opposition to the environmental
impact that the project would have had on the historic Manassas battleground area.
Political Feasibility
A politically feasible project may be referred to as a "politically correct project." Political
considerations often dictate direction for a proposed project. This is particularly true for large
projects with national visibility that may have significant government inputs and political
implications. For example, political necessity may be a source of support for a project regardless
of the project's merits. On the other hand, worthy projects may face insurmountable opposition
simply because of political factors. Political feasibility analysis requires an evaluation of the
compatibility of project goals with the prevailing goals of the political system.
Safety Feasibility
Safety feasibility is another important aspect that should be considered in project
planning. Safety feasibility refers to an analysis of whether the project is capable of being
implemented and operated safely with minimal adverse effects on the environment.
Unfortunately, environmental impact assessment is often not adequately addressed in
complex projects. As an example, the North American Free Trade Agreement (NAFTA) between
the U.S., Canada, and Mexico was temporarily suspended in 1993 because of the legal
consideration of the potential environmental impacts of the projects to be undertaken under the
agreement.
Social Feasibility
Social feasibility addresses the influences that a proposed project may have on the social
system in the project environment. The ambient social structure may be such that certain
categories of workers may be in short supply or nonexistent. The effect of the project on the
social status of the project participants must be assessed to ensure compatibility. It should be
recognized that workers in certain industries may have certain status symbols within the society.
Cultural Feasibility
Cultural feasibility deals with the compatibility of the proposed project with the cultural
setup of the project environment. In labor-intensive projects, planned functions must be
integrated with the local cultural practices and beliefs. For example, religious beliefs may
influence what an individual is willing to do or not do.
Financial Feasibility
Financial feasibility should be distinguished from economic feasibility. Financial
feasibility involves the capability of the project organization to raise the appropriate funds
needed to implement the proposed project. Project financing can be a major obstacle in large
multi-party projects because of the level of capital required. Loan availability, credit worthiness,
equity, and loan schedule are important aspects of financial feasibility analysis.
Cost benefit analysis helps to give management a picture of the costs, benefits and risks.
It usually involves comparing alternate investments.
Cost benefit determines the benefits and savings that are expected from the system and
compares them with the expected costs.
The cost of an information system involves the development cost and the maintenance cost.
The development costs are a one-time investment, whereas maintenance costs are recurring.
The development cost is basically the cost incurred during the various stages of system
development.
Each phase of the life cycle has a cost. Some examples are:
• Personnel
• Equipment
• Supplies
• Overheads
• Consultants' fees
Sources of benefits:
Benefits usually come from two major sources: decreased costs or increased
revenues.
Cost savings come from greater efficiency in the operations of the company. Areas
to look for reduced costs include the following:
1. Reducing staff by automating manual functions or increasing efficiency.
2. Maintaining a constant staff with increasing volumes of work.
3. Decreasing operating expenses, such as shipping charges for emergency shipments.
4. Reducing error rates through automated editing or validation.
5. Reducing bad accounts or bad credit losses.
6. Collecting accounts receivable more quickly.
7. Reducing the cost of goods through volume discounts on purchases.
There are several cost factors/elements. These are hardware, personnel, facility,
operating, and supply costs.
In a broad sense the costs can be divided into two types
1. Development costs-
Development costs are incurred during the development of the system and are a one-time
investment, e.g.:
• Wages
• Equipment
2. Operating costs-
Operating costs recur throughout the working life of the system, e.g.:
• Wages
• Supplies
• Overheads
Another classification of the costs can be:
Hardware/software costs:
It includes the cost of purchasing or leasing computers and their peripherals. Software costs
cover the required software, whether purchased or developed.
Personnel costs:
This is the money spent on the people involved in the development of the system. These
expenditures include salaries and other benefits such as health insurance, conveyance allowance,
etc.
Facility costs:
These are the expenses incurred in preparing the physical site where the system will be
operational. They can include wiring, flooring, acoustics, lighting, and air conditioning.
Operating costs:
Operating costs are the expenses required for the day to day running of the system. This
includes the maintenance of the system. That can be in the form of maintaining the hardware or
application programs or money paid to professionals responsible for running or maintaining the
system.
Supply costs:
These are variable costs that vary in proportion to the amount of use of paper, ribbons,
disks, and the like. They should be estimated and included in the overall cost of the system.
Benefits
We can define benefit as
Profit or Benefit = Income – Costs
The system will provide some benefits also. Benefits can be tangible or intangible, direct
or indirect. In cost benefit analysis, the first task is to identify each benefit and assign a
monetary value to it.
The two main benefits are improved performance and minimized processing costs.
Costs and benefits can be further categorized as tangible or intangible. For example,
greater customer satisfaction and improved company status are intangible benefits, whereas
improved response time and error-free output, such as accurate reports, are tangible benefits.
Both tangible and intangible costs and benefits should be considered in the evaluation process.
Indirect costs result from the operations that are not directly associated with the system.
Insurance, maintenance, heat, light, air conditioning are all indirect costs.
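The arithmetic behind a first-cut cost benefit comparison can be sketched in a few lines of Python. All figures below are invented for illustration; in practice the tangible costs and benefits would come from the estimates discussed above:

```python
# Hypothetical figures for a candidate system (illustrative only).
development_cost = 120_000        # one-time: wages, equipment, facility
annual_operating_cost = 15_000    # recurring: wages, supplies, overheads
annual_tangible_benefit = 55_000  # cost savings plus increased revenue

years = 5  # evaluation horizon

total_cost = development_cost + annual_operating_cost * years
total_benefit = annual_tangible_benefit * years
net_benefit = total_benefit - total_cost

print(total_cost, total_benefit, net_benefit)  # 195000 275000 80000
```

Intangible costs and benefits cannot be plugged into such a calculation directly; they still have to be weighed alongside the tangible figures.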
5. Cash-flow analysis
6. Break-even analysis
Break-even Analysis:
Once we have determined the estimated cost and benefit of the system, it is also
essential to know when the benefits will be realized. For that, break-even analysis is
done.
Break-even is the point where the cost of the proposed system and that of the current
one are equal. The break-even method compares the costs of the current and candidate systems. In
developing any candidate system, the costs initially exceed those of the current system. This is
the investment period. When both costs are equal, the break-even point has been reached. Beyond
that point, the candidate system provides greater benefit than the old one. This is the return period.
Fig. 3.1 is a break-even chart comparing the costs of the current and candidate systems. The
attributes are processing cost and processing volume. Straight lines are used to show the
model's relationships in terms of the variable, fixed, and total costs of the two processing methods
and their economic benefits. Point B' is the break-even point; the area after B' is the return period,
and the area A'AB' is the investment area. From the chart it can be concluded that when
transactions number fewer than 70,000 the present system is economical, while at more than
70,000 transactions the candidate system is preferable.
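The break-even point can be computed directly from the fixed and variable costs of the two systems. A minimal Python sketch follows; the cost parameters are invented (not taken from Fig. 3.1 itself), chosen so that break-even falls at the 70,000 transactions mentioned above:

```python
# Hypothetical cost parameters (illustrative only).
fixed_current, variable_current = 10_000.0, 1.00      # current (manual) system
fixed_candidate, variable_candidate = 45_000.0, 0.50  # candidate (computerized) system

# Total cost of each system is fixed + variable * volume.
# Setting the two totals equal and solving for volume gives the break-even point.
break_even_volume = (fixed_candidate - fixed_current) / (variable_current - variable_candidate)

print(break_even_volume)  # 70000.0 transactions
```

Below that volume the current system is cheaper (the investment period); above it the candidate system wins (the return period).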
Cash-flow Analysis:
Some projects, such as those carried out by computer and word processing services,
produce revenues from an investment in computer systems. Cash-flow analysis keeps track of
accumulated costs and revenues on a regular basis.
Payback analysis:
The payback method is a common measure of the relative time value of a project. It determines
the time it takes for the accumulated benefits to equal the initial investment. It is easy to
calculate and allows two or more activities to be ranked. The payback period may be
computed by a formula combining the following factors:
A = Capital investment
B = Investment credit
C = Cost investment
D = Company's income tax
E = State and local taxes
F = Life of capital
G = Time to install the system
H = Benefits and savings
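The underlying idea, the time for accumulated benefits to equal the initial investment, can be sketched directly in Python. This is a deliberately simplified sketch that ignores the lettered factors above (taxes, investment credit, and so on); the figures in the example are invented:

```python
def payback_period(initial_investment, annual_benefits):
    """Return the year in which accumulated benefits first cover the investment.

    Simplified sketch: taxes, investment credit, and the other lettered
    factors listed above are deliberately ignored.
    """
    accumulated = 0.0
    for year, benefit in enumerate(annual_benefits, start=1):
        accumulated += benefit
        if accumulated >= initial_investment:
            return year
    return None  # not recovered within the given horizon

# A 100,000 investment recovered by growing annual savings:
print(payback_period(100_000, [30_000, 40_000, 50_000]))  # 3
```

Because the result is a single number of years, two or more candidate projects can be ranked by it, as the notes describe.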
LIST OF DELIVERABLES
When the design of an information system is complete, the specifications are
documented in a form that outlines the features of the application.
These specifications are termed the 'deliverables' or the 'design book' by systems
analysts.
No design is complete without the design book, since it contains all the details that must
be included in the computer software, datasets, and procedures that the working information
system comprises.
The deliverables include the following:
1. Layout charts
Input & output descriptions showing the location of all details shown on reports, documents, &
display screens.
2. Record layouts:
Descriptions of the data items in transaction & master files, as well as related database
schematics.
3. Coding systems:
Descriptions of the codes that explain or identify types of transactions, classification, &
categories of events or entities.
4. Procedure Specification:
Planned procedures for installing & operating the system when it is constructed.
5. Program specifications:
Charts, tables & graphic descriptions of the modules & components of computer
software & the interaction between each as well as the functions performed &
data used or produced by each.
6. Development plan:
Timetables describing elapsed calendar time for development activities; personnel
staffing plans for systems analysts, programmers , & other personnel; preliminary
testing & implementation plans.
7. Cost Package:
Anticipated expenses for development, implementation and operation of the new
system, focusing on such major cost categories as personnel, equipment,
communications, facilities and supplies.
DFDs help system designers and others during initial analysis stages visualize a current
system or one that may be necessary to meet new requirements. Systems analysts prefer
working with DFDs, particularly when they require a clear understanding of the boundary
between existing systems and postulated systems. DFDs represent the following:
1. External devices sending and receiving data
2. Processes that change that data
3. Data flows themselves
4. Data storage locations
Purpose/Objective:
The purpose of data flow diagrams is to provide a semantic bridge between users and systems
developers. The diagrams are:
• graphical, eliminating thousands of words;
• logical representations, modeling WHAT a system does, rather than physical models
showing HOW it does it;
• hierarchical, showing systems at any level of detail; and
• jargonless, allowing user understanding and reviewing.
The goal of data flow diagramming is to have a commonly understood model of a system. The
diagrams are the basis of structured systems analysis. Data flow diagrams are supported by
other techniques of structured systems analysis such as data structure diagrams, data
dictionaries, and procedure-representing techniques such as decision tables, decision trees, and
structured English.
Data flow diagrams have the objective of avoiding the cost of:
• user/developer misunderstanding of a system, resulting in a need to redo systems or in
not using the system.
• having to start documentation from scratch when the physical system changes since the
logical system, WHAT gets done, often remains the same when technology changes.
• systems inefficiencies because a system gets "computerized" before it gets
"systematized".
• being unable to evaluate system project boundaries or degree of automation, resulting
in a project of inappropriate scope.
The Resource Flow symbol (a broad arrow, labeled here with "Cash" as an example) represents
the movement of a physical resource rather than data.
The External Entity symbol represents sources of data to the system or destinations of data from
the system.
The Data Flow symbol represents movement of data.
The Data Store symbol represents data that is not moving (delayed data at rest).
The Process symbol represents an activity that transforms or manipulates the data (combines,
reorders, converts, etc.).
Any system can be represented at any level of detail by these four symbols.
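These element types can also be captured in plain data structures, which is a useful way to check a diagram mechanically. Below is a hypothetical single-process order-handling DFD; all names are invented for illustration:

```python
# A tiny hypothetical DFD held as plain data; names are illustrative only.
external_entities = {"Customer"}
processes = {"1.0 Process Order"}
data_stores = {"D1 Orders"}

# Each data flow is (source, data description, destination).
data_flows = [
    ("Customer", "order details", "1.0 Process Order"),
    ("1.0 Process Order", "accepted order", "D1 Orders"),
    ("1.0 Process Order", "order confirmation", "Customer"),
]

# Basic well-formedness rule: every data flow must start or end at a process.
for source, _, destination in data_flows:
    assert source in processes or destination in processes

print(len(data_flows))  # 3
```

Checks like the one in the loop (flows must touch a process, stores must not connect directly to entities) are exactly the kinds of rules CASE tools enforce on DFDs.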
Resource flows are usually restricted to early, high-level diagrams and are used when a description of the physical
flow of materials is considered to be important to help the analysis.
Structured English
This is another method of overcoming ambiguity in defining conditions and actions. This
method uses narrative statements to describe a procedure. It works well for decision analysis
and can be carried over into programming and software development. Structured English uses
three basic types of statements to describe a process. They are:
• Sequence structures
A sequence structure is a single step or action included in a process. It does not depend
on the existence of any condition, and when encountered, is always taken.
• Decision structures
The action sequences are often included within decision structures that identify
conditions. Decision structures occur when two or more actions can be taken,
depending on the value for a certain condition.
• Iteration structures
In routine operating activities, it is common to find that certain activities are repeated
while a certain condition exists or until a condition occurs. Iteration instructions permit
analysts to describe these cases.
The two building blocks of Structured English are (1) structured logic or instructions
organized into nested or grouped procedures, and (2) simple English statements such as add,
multiply, move, etc. (strong, active, specific verbs).
Five conventions to follow when using Structured English:
• Express all logic in terms of sequential structures, decision structures, or iterations.
• Use and capitalize accepted keywords such as: IF, THEN, ELSE, DO, DO WHILE, DO UNTIL,
PERFORM
• Indent blocks of statements to show their hierarchy (nesting) clearly.
• When words or phrases have been defined in the Data Dictionary, underline those words
or phrases to indicate that they have a specialized, reserved meaning.
• Be careful when using "and" and "or", as well as "greater than" and "greater than or equal to"
and other logical comparisons.
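A short invented example (an order-discount procedure) that uses all three structure types and the keyword conventions above:

```
DO WHILE there are ORDER records to process
    READ the next ORDER record
    Compute ORDER-TOTAL as the sum of the line amounts
    IF ORDER-TOTAL is greater than 10000
        THEN Apply a 5 percent discount
        ELSE Apply no discount
    Write the order to the INVOICE-FILE
END DO
```

Reading a record and computing ORDER-TOTAL are sequence structures, the IF...THEN...ELSE is a decision structure, and the DO WHILE loop is an iteration structure.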
Decision tables
Decision tables are a precise yet compact way to model complicated logic. Decision
tables, like if-then-else and switch-case statements, associate conditions with actions to
perform. But, unlike the control structures found in traditional programming languages,
decision tables can associate many independent conditions with several actions in an elegant
way.
Structure
Decision tables are typically divided into four quadrants, as shown below.
The four quadrants:
Conditions | Condition alternatives
Actions    | Action entries
Each decision corresponds to a variable, relation or predicate whose possible values are
listed among the condition alternatives. Each action is a procedure or operation to perform, and
the entries specify whether (or in what order) the action is to be performed for the set of
condition alternatives the entry corresponds to. Many decision tables include in their condition
alternatives the don't care symbol, a hyphen. Using don't cares can simplify decision tables,
especially when a given condition has little influence on the actions to be performed. In some
cases, entire conditions thought to be important initially are found to be irrelevant when none of
the conditions influence which actions are performed.
Aside from the basic four quadrant structure, decision tables vary widely in the way the
condition alternatives and action entries are represented. Some decision tables use simple
true/false values to represent the alternatives to a condition (akin to if-then-else), other tables
may use numbered alternatives (akin to switch-case), and some tables even use fuzzy logic or
probabilistic representations for condition alternatives. In a similar way, action entries can
simply represent whether an action is to be performed (check the actions to perform), or in more
advanced decision tables, the sequencing of actions to perform (number the actions to perform).
Example
The limited-entry decision table is the simplest to describe. The condition alternatives are
simple Boolean values, and the action entries are check-marks, representing which of the
actions in a given column are to be performed. A technical support company writes a decision
table to diagnose printer problems based upon symptoms described over the phone by its
clients.
Printer troubleshooter
Of course, this is just a simple example (and it does not necessarily correspond to the
reality of printer troubleshooting), but even so, it is possible to see how decision tables can scale
to several conditions with many possibilities.
Decision tables make it easy to observe that all possible conditions are accounted for. In
the example above, every possible combination of the three conditions is given. In decision
tables, when conditions are omitted, it is obvious even at a glance that logic is missing. Compare
this to traditional control structures, where it is not easy to notice gaps in program logic with a
mere glance; sometimes it is difficult to follow which conditions correspond to which actions!
Just as decision tables make it easy to audit control logic, decision tables demand that a
programmer think of all possible conditions. With traditional control structures, it is easy to
forget about corner cases, especially when the else statement is optional. Since logic is so
important to programming, decision tables are an excellent tool for designing control logic. In
one incredible anecdote, after a failed 6 man-year attempt to describe program logic for a file
maintenance system using flow charts, four people solved the problem using decision tables in
just four weeks. Choosing the right tool for the problem is fundamental.
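A limited-entry decision table maps directly onto a dictionary keyed by the tuple of condition values. The sketch below follows the spirit of the printer troubleshooter mentioned above; since the table itself is not reproduced in these notes, the rules here are hypothetical:

```python
# Conditions: (printer does not print, a red light is flashing, printer is unrecognized)
RULES = {
    (True,  True,  True):  ["check the power cable",
                            "check the printer-computer cable",
                            "ensure printer software is installed"],
    (True,  True,  False): ["check/replace ink"],
    (True,  False, True):  ["check the printer-computer cable",
                            "ensure printer software is installed"],
    (True,  False, False): ["check for paper jam"],
    (False, True,  True):  ["ensure printer software is installed",
                            "check/replace ink"],
    (False, True,  False): ["check/replace ink"],
    (False, False, True):  ["ensure printer software is installed"],
    (False, False, False): [],  # no action needed
}

def diagnose(does_not_print, red_light, unrecognized):
    # All 2**3 = 8 combinations are present, so no case can fall through.
    return RULES[(does_not_print, red_light, unrecognized)]

print(diagnose(True, False, False))  # ['check for paper jam']
```

The completeness property discussed above becomes a one-line check: the table must contain an entry for every combination of condition values.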
Decision tree
In operations research, specifically in decision analysis, a decision tree is a decision
support tool that uses a graph or model of decisions and their possible consequences, including
chance event outcomes, resource costs, and utility. A decision tree is used to identify the
strategy most likely to reach a goal. Another use of trees is as a descriptive means for
calculating conditional probabilities. In data mining and machine learning, a decision tree is a
predictive model; that is, a mapping from observations about an item to conclusions about its
target value.
More descriptive names for such tree models are classification tree or reduction tree. In
these tree structures, leaves represent classifications and branches represent conjunctions of
features that lead to those classifications. The machine learning technique for inducing a
decision tree from data is called decision tree learning, or (colloquially) decision trees.
Four major steps in building Decision Trees:
1. Identify the conditions
2. Identify the outcomes (condition alternatives) for each decision
3. Identify the actions
4. Identify the rules.
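The four steps can be illustrated with a small invented example. The conditions are an applicant's income and whether they are an existing customer; the outcomes are the branches; the actions are the returned decisions; and the nesting of the branches encodes the rules:

```python
def credit_decision(income, existing_customer):
    """A toy decision tree for a hypothetical credit application."""
    if income >= 50_000:            # condition 1: income level
        return "approve"            # action on this branch
    if existing_customer:           # condition 2: relationship with customer
        return "approve with limit"
    return "reject"

print(credit_decision(60_000, False))  # approve
print(credit_decision(20_000, True))   # approve with limit
print(credit_decision(20_000, False))  # reject
```

The same logic could equally be written as a decision table; trees tend to read better when conditions are naturally checked in a particular order.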
Data Dictionary
A data dictionary is a set of metadata that contains definitions and representations of
data elements. Within the context of a DBMS, a data dictionary is a read-only set of tables and
views. Amongst other things, a data dictionary holds the following information:
• Precise definition of data elements
• Usernames, roles and privileges
• Schema objects
• Integrity constraints
• Stored procedures and triggers
• General database structure
• Space allocations
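Outside a DBMS, the same kind of metadata can be kept by hand. Below is a hypothetical entry for one data element, held as a plain Python mapping; all field names and values are invented for illustration:

```python
# A hypothetical data dictionary entry for a single data element.
invoice_total = {
    "name":        "INVOICE-TOTAL",
    "definition":  "amount owed by the customer, including tax and shipping",
    "type":        "decimal(10,2)",
    "source":      "computed from the line-item amounts of an order",
    "used_by":     ["Order Processing", "Accounts Receivable"],
    "constraints": "must be greater than or equal to 0",
}

print(invoice_total["name"], invoice_total["type"])
```

Recording the definition, source, and users of each element is what lets a data dictionary settle questions like "does invoice include tax?" for everyone on the project.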
Beyond ER Design
4. Schema Refinement:
This step analyzes the collection of relations in our relational database schema to
identify potential problems, and refines it.
Entities are specific objects or things in the mini-world that are represented in the database.
For example, Employee (or Staff), Department (or Branch), and Project are entities.
Types of Attributes:
• Simple (atomic) vs. composite
• Single-valued vs. multi-valued
• Stored vs. derived
• Null values
Null Values
In some cases a particular entity may not have an applicable value for an attribute. For
example, a college-degree attribute applies only to persons with college degrees. For such
situations, a special value called null is used.
A relationship is an association among two or more entities. For example, we may have
the relationship that Antony works in the Marketing department. A relationship type R among
the n entity types E1, E2, … , En defines a set of associations (a relationship set) among entities
from these entity types.
Informally, each relationship instance ri in R is an association of entities, where the
association includes exactly one entity from each participating entity type. Each such
relationship instance ri represents the fact that the entities participating in ri are related in
some way in the corresponding mini-world situation. For example, consider a relationship type
Works_for between the two entity types Employee and Department, which associates each
employee with the department for which the employee works. Each relationship instance in the
relationship set Works_for associates one employee entity and one department entity. The figure
illustrates this example.
Degree of a Relationship:
The degree of a relationship is the number of participating entity types. Hence the
Works_for relationship above is of degree two. A relationship type of degree two is called binary, and
one of degree three is called ternary. An example of a ternary relationship is given below.
Relationship of degree:
o two is binary
o three is ternary
o four is quaternary
Types of Entity:
1. Strong Entity
a. An entity that has a key attribute in its attribute list.
2. Weak Entity
a. An entity that does not have a key attribute of its own.
ER Diagram
Notations for ER Diagram
Sample ER diagram for a Company Schema with structural Constraints is shown below.
Identifying Relationship:
It is the relationship between a strong entity and a weak entity through which the weak entity is identified.
Data Dictionary
The data dictionary (or data repository, or system catalog) is an important part of the
DBMS. It contains data about data (metadata), i.e., the actual database descriptions used by
the DBMS. In most DBMSs the data dictionary is active and integrated, which means that the
DBMS checks the data dictionary every time the database is accessed. The data dictionary
contains the following information.
• Logical structure of the database
• Schemas, mappings, and constraints
• Descriptions of application programs
• Descriptions of record types, data item types, and data aggregates in the database
• Descriptions of the physical database design, such as storage structures, access paths, etc.
• Descriptions of the users of the DBMS and their access rights
1. To manage the detail in large systems:- For example, each data item, its source, its name,
its uses, its meaning, and its relationship to other items is stored in the data dictionary.
A data dictionary is a catalog - a repository - of the elements in the system: a list of all the
elements composing the data flowing through a system. The major elements are data flows,
data stores, and processes.
2. To communicate a common meaning for all system elements:- Data dictionaries assist in
ensuring common meanings for system elements and activities.
For example: order processing (sales orders from customers are processed so that the specified
items can be shipped) has invoice as one of its data elements. This is a common business term,
but does it mean the same thing to all the people referring to it?
Does invoice mean the amount owed by the supplier?
Does the amount include tax and shipping costs?
How is one specific invoice identified among others?
Answers to these questions will clarify and define systems requirements by more completely
describing the data used or produced in the system. Data dictionaries record additional details
about the data flow in a system so that all persons involved can quickly look up the description
of data flows, data stores, or processes.
3. To document the features of the system:- Features include the parts or components and the
characteristics that distinguish each. We want to know about the processes and data stores,
but we also need to know under what circumstances each process is performed and how often
those circumstances occur. Once the features have been articulated and recorded, all
participants in the project will have a common source for information about the system.
For example: Payment voucher - the vendor details include a vendor-telephone field, in which
the area code can be optional for a local phone number. The item details can be repeated for each
item. A purchasing-authorisation field may be added after the invoice arrives if it is a special order.
4. To facilitate analysis of the details in order to evaluate characteristics and determine where
system changes should be made:- The fourth reason for using data dictionaries is to determine
whether new features are needed in a system or whether changes of any type are in order.
Optimum Format: The structures and elements are on the ERD. The Data Flows, Processes, and
External Entities are from the DFDs. Number the objects on the diagrams and refer to those
objects in the data dictionary by their numbers. Use "dot notation." Example: Diagram 1 has 4
processes, 1,2,3,4. The explosion for process 1 will be Diagram 1.1, 1.2, etc, depending on how
much detail can be derived from your Diagram 1. Explosions of 1.1 would be 1.1.1, 1.1.2, etc.
The Listings of your objects have the page numbers of the object descriptions.
Here are the definitions of these items. Keep in mind that a data dictionary can be created
INDEPENDENT of how the system will be implemented (i.e., a DBMS such as Sybase or Oracle,
a spreadsheet, or a yellow legal pad). Please refer to the Casebook Appendix on page 79 for each of
the following.
Data Stores: An inventory of data. The whole of the data in a small system. A database. Very
large systems may have more than one database. Data Store Listing: The A# page is a reference
to the page number of each data store description. More than one data store description on
each page is OK. Data Store Description:
Brief description: A description which distinguishes this data store from others. In FSS, I
would anticipate only one data store.
Organization/Volume/Physical Implementation Info: How is the data store organized, i.e.,
geographically or functionally? How big (MB) is the data store expected to be? What
implementation considerations are there? If this is a proposed data dictionary, keep your
mind open; if it is an existing data dictionary, go ahead and note what DBMS (or colored legal
pad) you will be using. Note any other implementation issues.
Contents: List the structures in the data store.
Inflows and outflows: List these.
Data Structures: Specific arrangements of data attributes (elements) that define the
organization of a single instance of a data flow. Data structures belong to a particular data
store. Think about this... Remember that this could be manifested in any way from a yellow legal
pad to an advanced DBMS. One instance of a data flow could be the answers on a form. The
answers would be put into an organized structure. However, I'd like you to keep your mind
open to the possibility that the "answers" on your form may actually populate more than one
"table" in a database. For example, you may desire to collect information about an instance of
an "employee." Employees have payroll information, perhaps work address information, and
human resources information. Because the purpose of this information is to serve several
systems or data flows, you might want to store the whole of the employee information in
several data structures. Data Structure Listing: Same as Data Store Listing above. Data Structure
Description:
Brief Description: Same as above.
Contents: List the data element (or attribute) names. Just list them, no need to give
details.
Used in Data Store(s), Data Structure(s), Data Flows(s): List these.
Data Elements: The descriptive property or characteristic of an entity. In database terms, this is
an "attribute" or a "field." In Microsoft and other vendor terms, this is a "property." Therefore,
the data element is part of a data structure, which is part of a data store. It has a source, and
there may be data entry rules you wish to enforce on this element. Data Element Listing: Same
as above. Data Element Description:
Brief Description: Same as above. Must distinguish from other elements.
Value Range/Interpretation: Has to do with the datatype or range of acceptable values.
Example: a data element for a zip code is any group of 5 digits (or 5+4); it must be numbers
and must be 5 digits. Or, another example, a code must be "A" through "E."
Source/Policy/Control/Editing Info: Where does the data element come from? Example:
Source: "from order form line 12" What are some business rules about this data? Must
this element also exist in another structure before it can exist in this one?
Used in Data Store(s), Data Structure(s), Data Flows(s): List these.
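The value-range rules above can be enforced directly as edit checks at data entry. A minimal Python sketch of the two examples given (the zip-code rule and the "A" through "E" code rule); the function names are invented for illustration:

```python
import re

def valid_zip(value: str) -> bool:
    """Accept 5 digits, or 5 digits, a hyphen, and 4 digits (ZIP+4)."""
    return re.fullmatch(r"\d{5}(-\d{4})?", value) is not None

def valid_code(value: str) -> bool:
    """Accept a single letter in the range 'A' through 'E'."""
    return value in {"A", "B", "C", "D", "E"}
```

Each data element's description would name the rule; the implementation, whether a DBMS constraint or a form check, follows from it.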
Data Flows: Represent an input of data to a process, or the output of data (or information) from
a process. A data flow is also used to represent the creation, deletion, or updating of data in a
file or database (called a "data store" in the Data Flow Diagram (DFD)). Note that a data flow is
not a process, but the passing of data only. These are represented in the DFD. Each data flow is
a part of a DFD explosion of a particular process. Data Flow Listing: Same as above. Data Flow
Description: Note the DFD Explosion #.
Brief Description: Same as above.
Volume/Physical Implementation Info: What is the volume of this data flow per day,
week, month, or whatever is appropriate? Does this data flow require an Internet
connection? A car?
Source and Destination: Fill out as appropriate. What are the sources and destinations of
this data flow?
Processes: Work performed on, or in response to, incoming data flows or conditions. The FSS
Context Diagram includes "FSS System" and all the inputs and outputs of the FSS system. The
DFD Diagram 0 (if you’d included everything, just as an example) would be 1. Accounts Payable,
2. Order Processing, 3. Catering Processing, 4. Payroll Processing, 5. Accounts Payable
Processing, 6. Warehouse Processing, 7. Inventory Processing, and 8. Store Processing. Each of
these processes has inputs and outputs that are independent of the other. Process Listing: Same
as above. Process Description: Note the DFD Explosion #.
Brief Description: Same as above.
Purpose/Physical Implementation Info: Why do we need this process? State a specific
purpose. Indicate any implementation issues. Does the process need to accept certain
types of data? Certain type of DBMS or ODBC? What are the inflows and outflows? (If
inflows and outflows are the same... then you have a data flow and not a process.)
External Entities: A person, organization unit, other system, or other organization that lies
outside the scope of the project but that interacts with the system being studied. External
entities provide the net inputs into a system and receive net outputs from the system. This and
the others are definitions from the book. Basically, external entities are those entities that are
outside the scope of the project or system at hand. External Entities Listing: Same as above.
External Entities Description:
Chapter 6- Design
Design is the first step in the development phase of any engineering product or system. It
may be defined as the process of applying various techniques and principles for the purpose of
defining a device, a process or a system in sufficient detail to permit its physical realization.
The designer's goal is to produce a model of an entity that will later be built. The process by
which the model is developed combines:
• Intuition and judgment based on experience in building similar entities
• A set of principles and/or heuristics that guide the way in which the model evolves
• A set of criteria that enables the quality to be judged.
Using one of a number of design methods, the design step produces a data design, an
architectural design and a procedural design. The data design transforms the information
domain model created during analysis into the data structures that will be required to
implement the software. The architectural design defines the relationship between major
structural components of the program. The procedural design transforms structural components
into a procedural description of the software. Source code is generated and testing is conducted
to integrate and validate the software. Software design is important primarily to create quality
software. Design is the place where quality is fostered in software development. Design
provides us with a representation of the software that can be assessed for quality. Design is the
only way one can accurately translate a customer's requirements into a finished software
product or a system.
Consequently, each program symbol on the system flowchart represents an EXEC statement and
each file or peripheral device symbol linked to a program by a flowline implies a need for one DD
statement. Working backward, preparing a system flowchart from a JCL listing is a good way to
identify a program's linkages.
A system flowchart’s symbols represent physical components, and the mere act of
drawing one implies a physical decision. Consequently, system flowcharts are poor analysis tools
because the appropriate time for making physical decisions is after analysis has been
completed.
A system flowchart can be misleading. For example, an on-line storage symbol might
represent a diskette, a hard disk, a CD-ROM, or some combination of secondary storage devices.
Given such ambiguity, two experts looking at the same flowchart might reasonably envision two
different physical systems. Consequently, the analyst’s intent must be clearly documented in an
attached set of notes.
Predefined processes
The flowchart for a complex system can be quite large. An off-page connector symbol
(resembling a small home plate) can be used to continue the flowchart on a subsequent page,
but multiple-page flowcharts are difficult to read. When faced with a complex system, a good
approach is to draw a high-level flowchart showing key functions as predefined processes and
then explode those predefined processes to the appropriate level on subsequent pages.
Predefined processes are similar to subroutines.
For example, Figure 37.2 shows a system flowchart for processing a just-arrived
shipment into inventory. Note that the shipment is checked (in a predefined process) against the
Shipping documents (the printer symbol) and recorded (the rectangle) in both the Inventory file
and the Vendor file using data from the Item ordered file.
Figure 37.3 is a flowchart of the predefined process named Check shipment. When a shipment
arrives, it is inspected manually and either rejected or tentatively accepted. The appropriate
data are then input via a keyboard/display unit to the Record shipment program (the link to
Figure 37.2). Unless the program finds something wrong with the shipment, the newly arrived
stock is released to the warehouse. If the shipment is rejected for any reason, the Shipping
documents are marked and sent to the reorder process.
Structure Chart
A structure chart is a hierarchy chart with arrows showing the flow of data and control
information between the modules. Structure charts are used as design tools for functionally
decomposing structured programs.
A structure chart is a hierarchy chart that shows the data and control information flows
between the modules. (Figure shows a partial structure chart.) Each module is represented as a
rectangle. Each data flow (or data couple) is shown as an arrow with an open circle at the origin
end. A control couple (a flow of control information such as a flag or a switch setting) is shown
as an arrow with a solid circle at the origin end, see the control couple labeled Reorder flag
between Process sale and Process transaction in Figure. (Note: In this program design, Initiate
reorder is an independent (not shown) level-2 module called by Process transaction when the
Reorder flag is set.) As appropriate, the names of the data elements, data composites, and/or
control fields are written alongside the arrows.
A structure chart does not show the program’s sequence, selection, or repetitive logical
structures; those details are inside the modules, which are viewed as black boxes. However,
some designers identify high-level case structures by adding a transaction center to a key
control module. For example, the solid diamond (the transaction center symbol) at the bottom
of the Process transaction module indicates that, based on the transaction type, either Process
sale, Process customer return, or Process shipment arrival is called.
A data couple might list a composite item; for example, Get data passes a complete transaction
and the associated master record back to Process transaction. Higher-level modules generally
select substructures or specific data elements from a composite and pass them down to their
children. At the bottom of the structure, the detailed computational modules accept and return
data elements.
The structured program designer’s objective is to define independent, cohesive, loosely
coupled modules. Coupling is a function of the amount of data and control information flowing
between two modules, and the structure chart graphically shows the data and control flows. An
excessive number of data or control flows suggests a poor design or a need for further
decomposition.
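The data couple / control couple distinction can be sketched in code. The following Python sketch loosely follows the Process transaction example above; the module and field names are invented for illustration. Modules are functions, data couples are parameters and return values, and the Reorder flag is a control couple that decides whether Initiate reorder is called:

```python
def get_data():
    # Data couple passed up: a transaction and its matching master record.
    transaction = {"type": "sale", "stock_number": "S-100", "units": 2}
    master = {"stock_number": "S-100", "on_hand": 5, "reorder_point": 4}
    return transaction, master

def process_sale(transaction, master):
    master["on_hand"] -= transaction["units"]
    # Control couple passed back: a flag, not data.
    return master["on_hand"] < master["reorder_point"]

def initiate_reorder(master):
    return "reorder " + master["stock_number"]

def process_transaction():
    # Transaction-center logic: dispatch on the transaction type.
    transaction, master = get_data()
    if transaction["type"] == "sale":
        reorder_flag = process_sale(transaction, master)
        if reorder_flag:
            return initiate_reorder(master)
    return "done"
```

Note the loose coupling: each child sees only the data it needs, and the single flag is the only control information flowing between modules.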
These are broadly termed software development (design) tools. CASE (Computer Aided
Software Engineering) tools are another, broader set of tools.
GUI Design:
GUI stands for Graphical User Interface. Since the PC was invented, computer
systems and application systems have treated the GUI as one of the most
important aspects of system design. The GUI design component in every application system will
vary depending upon the importance of user interaction to the success of the system.
1. Provide the best way for people to interact with computers. This is commonly known as
Human Computer Interaction (HCI).
2. The presentation should be user friendly. A user-friendly interface is helpful: for
example, it should not only tell the user that he/she has committed an error, but also
provide guidance as to how to rectify it quickly.
3. It should provide information on what the error is and how to fix it.
4. User-friendliness also includes being tolerant and adaptable.
5. A good GUI makes the user more productive.
6. A good GUI is more effective, because it finds the best solutions to a problem.
7. It is also efficient, because it helps the user find such solutions more quickly and with
the least error.
8. For a user using a computer system, the workspace is the computer screen. The goal of
a good GUI is to make the best use, if not all, of a user's workspace.
9. A good GUI should be robust. High robustness implies that the interface should not fail
because of some action taken by the user. Also, any user error should not lead to a
system breakdown.
10. The usability of a GUI is expected to be high. Usability is measured in various terms.
11. A good GUI has high analytical capability: most of the information needed by the user
appears on the screen.
12. With a good GUI, the user finds the work easier and more pleasant.
13. The user should be happy and confident using the interface.
14. A good GUI has a high cognitive workload ability, i.e. the mental effort required of the
user to use the system should be the least. In fact, the GUI closely approximates the
user's mental model of, or reactions to, the screen.
15. For a good GUI, user satisfaction is high.
These are the Characteristics or Goals of a good GUI.
1. HIPO is a commonly used method for developing software. An acronym for Hierarchical
Input Process Output, this method was developed by IBM for its large, complex
operating systems.
2. Purpose: The assumption on which HIPO is based is that one often loses track of the
intended function of a large system. This is one reason why it is difficult to compare
existing systems against their original specifications, and therefore why failures can occur
even in systems that are technically well formulated. From the user's view, a single function
can often extend across several modules. The concern of the analyst is
understanding, describing and documenting the modules and their interaction in a way
that provides sufficient detail but does not lose sight of the larger picture.
3. HIPO diagrams are graphic, rather than prose or narrative, descriptions of the system.
4. They assist the analyst in answering three questions:
a. What does the system or module do?
b. How does it do it?
c. What are the inputs and outputs?
5. A HIPO description for a system consists of:
a. A visual table of contents (VTOC), and
b. Functional diagrams.
6. Advantages:
a. HIPO diagrams are effective for documenting a system.
b. They also aid designers and force them to think about how specifications will be
met and where activities and components must be linked together.
7. Disadvantages:
a. They rely on a set of specialized symbols that require explanation, an extra
concern when compared to the simplicity of, for example, data flow diagrams.
b. HIPO diagrams are not as easy to use for communication purposes as many people
would like. And, of course, they do not guarantee error-free systems. Hence, their
greatest strength is the documentation of a system.
Warnier-Orr diagrams
A Warnier-Orr diagram, a graphical representation of a horizontal hierarchy with
brackets separating the levels, is used to plan or document a data structure, a set of detailed
logic, a program, or a system.
Warnier-Orr diagrams are excellent tools for describing, planning, or documenting data
structures. They can show a data structure or a logical structure at a glance. Because only a
limited number of symbols are required, specialized software is unnecessary and diagrams can
be created quickly by hand. The basic elements of the technique are easy to learn and easy to
explain. Warnier-Orr diagrams map well to structured code.
The structured requirements definition methodology and, by extension, Warnier-Orr
diagrams are not as well known as other methodologies or tools. Consequently, there are
relatively few software tools to create and/or maintain Warnier-Orr diagrams and relatively few
systems analysts or information system consultants who are proficient with them.
In-out diagrams
A Warnier-Orr diagram shows a data structure or a logical structure as a horizontal
hierarchy with brackets separating the levels. Once the major tasks are identified, the systems
analyst or information system consultant prepares an in-out Warnier-Orr diagram to document
the application’s primary inputs and outputs.
For example, Figure shows an in-out diagram for a batch inventory update application.
Start at the left (the top of the hierarchy). The large bracket shows that the program, Update
Inventory, performs five primary processes ranging from Get Transaction at the top to Write
Reorder at the bottom. The letter N in parentheses under Update Inventory means that the
program is repeated many (1 or more) times. The digit 1 in parentheses under Get Transaction
(and the next three processes) means the process is performed once. The (0, 1) under Write
Reorder means the process is repeated 0 or 1 times, depending on a run-time condition. (Stock
may or may not be reordered as a result of any given transaction.)
Data flow into and out from every process. The process inputs and outputs are identified
to the right of the in-out diagram. For example, the Get Transaction process reads an Invoice
and passes it to a subsequent process. The last column is a list of the program’s primary input
and output data structures. Note how the brackets indicate the hierarchical levels.
Data structures
After the in-out diagram is prepared, the data structures are documented. For example,
Figure 31.2 shows the data structure for an invoice.
The highest level composite, Invoice, is noted at the left. The N in parentheses under the
data name means that there are many (one or more) invoices. Moving to the right of the first
bracket are the components that make up an invoice. Invoice number, Date-of-sale, Customer
telephone, Subtotal, Sales tax, and Total due are data elements, while Customer name,
Customer address, and Item purchased are composite items that are further decomposed.
Consider the composite item Customer name. The composite name appears at the left
separated from its lower-level data elements by a bracket. Three of the data elements that
make up Customer name are conditional; Customer title (Dr, Mr., Ms.), Customer middle (not
everyone has a middle name), and Customer suffix (Sr., Jr., III) may or may not be present on a
given Invoice. The entry (0, 1) under a data element name indicates that it occurs 0 or 1 times.
A given sales transaction might include several different products, so Item purchased is a
repetitive data structure that consists of one or more sets of the data elements Stock number,
Description, Units, Unit price, and Item total. The letter M in parenthesis under Item purchased
indicates that the substructure is repeated an unknown number of times. (Note: M and N are
different values.) The composite item, Units, can hold either Weight or Quantity, but not both.
The “plus sign in a circle” is an exclusive or symbol.
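The invoice structure described above can be mirrored directly in code. A minimal Python sketch (all field values invented) showing the (0, 1) conditional elements, the repeating Item purchased substructure, and the exclusive-or Units composite:

```python
invoice = {
    "invoice_number": "INV-1001",
    "date_of_sale": "2011-06-15",
    "customer_name": {
        "title": "Dr",     # (0,1): conditional, may be absent
        "first": "Asha",
        "middle": None,    # (0,1): not everyone has a middle name
        "last": "Rao",
        "suffix": None,    # (0,1)
    },
    "items_purchased": [   # (M): repeated one or more times
        # Units is an exclusive or: weight OR quantity, never both.
        {"stock_number": "S-100", "units": {"quantity": 2}, "unit_price": 50.0},
        {"stock_number": "S-200", "units": {"weight": 1.5}, "unit_price": 30.0},
    ],
}

def item_total(item):
    # Whichever member of the exclusive-or pair is present drives the total.
    units = item["units"]
    amount = units.get("quantity", units.get("weight"))
    return amount * item["unit_price"]

subtotal = sum(item_total(item) for item in invoice["items_purchased"])
```

The nesting of dictionaries and lists plays the role of the Warnier-Orr brackets: each level of nesting is one level of the horizontal hierarchy.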
1. The ability to show the relationship between processes and steps in a process is not
unique to Warnier/Orr diagrams, nor is the use of iteration or the treatment of individual
cases.
2. Both the structured-flowchart and structured-English methods do this equally well.
However, the approach used to develop system definitions with Warnier/Orr diagrams
is different and fits well with those used in logical system design.
3. To develop a Warnier/Orr diagram, the analyst works backwards, starting with the
system's output and using an output-oriented analysis. On paper, the development
moves from left to right. First, the intended output or results of processing are defined.
4. At the next level, shown by inclusion with a bracket, the steps needed to produce the
output are defined. Each step in turn is further defined. Additional brackets group the
processes required to produce the result on the next level.
5. A complete Warnier/Orr diagram includes both process groupings and data
requirements. Both elements are listed for each process or process component. These
data elements are the ones needed to determine which alternative or case should be
handled by the system and to carry out the process.
6. The analyst must determine where each data element originates, how it is used, and
how individual elements are combined. When the definition is completed, the data structure
for each process is documented. This, in turn, is used by the programmers, who work from
the diagrams to code the software.
Advantages:
1. Warnier/Orr diagrams offer some distinct advantages to system experts. They are simple
in appearance and easy to understand. Yet they are powerful design tools.
2. They have the advantage of showing grouping of processes and the data that must be
passed from level to level.
3. In addition, the sequence of working backwards ensures that the system will be result-
oriented.
4. This method also follows a natural process. With structured flowcharts, for example, it is
often necessary to determine the innermost steps before the iterations and modularity.
Design Database
Input Design
Goals of Input Design:
1. The data which is input to a computerized system should be correct. Data that is carelessly
entered will lead to erroneous results. The system designer should ensure that the system
prevents such errors.
2. The volume of data should be considered, since if any error occurs it will take a long time to
find its source. The design of inputs involves the following four tasks:
a. Identify data input devices and mechanisms.
b. Identify the input data and attributes.
c. Identify input controls.
d. Prototype the input forms.
a. Identify data input devices and mechanisms.
To reduce input error:
1. Automate the data entry process for reducing human errors.
2. Validate the data completely and correctly at the location where it is entered.
Reject the wrong data at its source only.
b. Identify the input data and attributes
1. It involves identifying information flow across the system boundary.
2. When the input form is designed the designer ensures that all these data
elements are provided for entry, validations and storage.
c. Identify input controls
1. Input integrity controls are used with all kinds of mechanisms which help
reduce data entry errors at the input stage and ensure completeness.
2. Various error detection and correction techniques are applied.
d. Prototype the input forms
1. The users should be provided with the form prototype and related
functionality, including validation, help, and error messages.
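The steps above, validating at the point of entry and rejecting wrong data at its source, can be sketched as a small input-control routine. The form fields (item code, quantity) and messages are invented for illustration:

```python
def validate_input(record):
    # Input integrity checks applied where the data is entered. An empty
    # error list means the record is accepted; otherwise the record is
    # rejected at the source and the messages shown beside the fields.
    errors = []
    if not record.get("item_code", "").strip():
        errors.append("Item code is required.")
    quantity = record.get("quantity")
    if not isinstance(quantity, int) or quantity <= 0:
        errors.append("Quantity must be a positive whole number.")
    return errors
```

For example, `validate_input({"item_code": "A1", "quantity": 3})` returns an empty list, while a blank item code or a zero quantity produces messages the form prototype can display.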
Output Design
Goals of Output Design:
In order to select an appropriate output device and to design a proper output format,
it is necessary to understand the objectives the output is expected to serve.
The following questions can address these objectives:
1. Who will use the report?
2. What is the proposed use of the report?
3. What is the volume of the output?
Chapter 8- Testing
Testing is a process of executing a program with the intent of finding an error. A good test
case is one that has a high probability of finding an as yet undiscovered error. A successful test is
one that uncovers previously undiscovered errors. The main objective of testing, thus, is to
design tests that uncover different classes of errors, and to do so with minimum time and effort
spent.
Two classes of input are provided to the test process:
• A software configuration that includes a Software Requirement Specification, Design
Specification and source code.
• A test configuration that includes a test plan and procedure, any testing tools to be used
and testing cases with their expected results.
Tests are conducted and the results are compared to the expected results. When erroneous
data are uncovered, an error is implied and debugging commences. The results of testing give a
qualitative indication of the software quality and reliability. If severe errors requiring design
modification are encountered regularly, the reliability is suspect. On the other hand, uncovering
of errors that can be easily fixed implies either that the software quality is good, or that the test
plan is inadequate.
Testing Objectives
In an excellent book on software testing, Glen Myers [MYE79] states a number of rules
that can serve well as testing objectives:
1. Testing is a process of executing a program with the intent of finding an error.
2. A good test case is one that has a high probability of finding an as-yet-undiscovered
error.
3. A successful test is one that uncovers an as-yet-undiscovered error.
These objectives imply a dramatic change in viewpoint. They move counter to the commonly
held view that a successful test is one in which no errors are found. Our objective is to design
tests that systematically uncover different classes of errors and to do so with a minimum
amount of time and effort.
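As an illustration of these objectives, consider a hypothetical discount routine (invented here, not from the text) with an off-by-one boundary error. A test case chosen at the partition boundary has a high probability of uncovering the as-yet-undiscovered error, while a case well inside the partition passes and reveals nothing:

```python
def discount(amount):
    # Intended business rule (invented for illustration):
    # orders of 100 or more get 10% off.
    # Bug: '>' should be '>=', so the boundary value 100 gets no discount.
    if amount > 100:
        return amount * 0.9
    return amount

boundary_case = discount(100)   # expected 90.0, actually returns 100
interior_case = discount(150)   # 135.0, as expected
```

By Myers's definition, the boundary test is the successful one: it is the test that uncovers the error.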
Testing Principles
Before applying methods to design effective test cases, a software engineer must
understand the basic principles that guide software testing.
• All tests should be traceable to customer requirements.
As we have seen, the objective of software testing is to uncover errors. It follows that the
most severe defects (from the customer’s point of view) are those that cause the program to fail
to meet its requirements.
• Testing should begin “in the small” and progress toward testing “in the large.”
The first tests planned and executed generally focus on individual components. As
testing progresses, focus shifts in an attempt to find errors in integrated clusters of components
and ultimately in the entire system.
A strategy for software testing must accommodate low-level tests that are necessary to
verify that a small source code segment has been correctly implemented as well as high-level
tests that validate major system functions against customer requirements. A strategy must
provide guidance for the practitioner and a set of milestones for the manager. Because the steps
of the test strategy occur at a time when deadline pressure begins to rise, progress must be
measurable and problems must surface as early as possible.
Even though code testing seems like an ideal method for testing software, it does not
guarantee against software failures due to faulty data. Also, it is realistically not possible to
perform such extensive and exhaustive testing in large organizations with complex systems.
Basis path testing is a white box testing technique. The basis path enables the test case
designer to derive a logical complexity measure of a procedural design and use this measure as
a guide for defining the basis set of execution paths. Test cases derived to exercise the basis set
are guaranteed to execute every statement in the program at least once during testing.
2. Cyclomatic complexity
Cyclomatic complexity is a software metric that provides a quantitative measure of the
logical complexity of a program. It defines the number of independent paths in the basis set of a
program, and hence establishes the number of tests that must be conducted to ensure that
every statement is executed at least once.
An independent path is defined as any path in the program that introduces at least one
new processing statement or condition. When stated in terms of a flow graph, an independent
path must traverse at least one edge that has not been followed before.
For example, in figure 11.2 (B), the set of independent paths is:
Path 1: 1 → 11
Path 2: 1 → 2 → 3 → 4 → 5 → 10 → 1 → 11
Path 3: 1 → 2 → 3 → 6 → 8 → 9 → 10 → 1 → 11
Path 4: 1 → 2 → 3 → 6 → 7 → 9 → 10 → 1 → 11
Note that each of these paths introduces a new edge. Paths 1, 2, 3 and 4 thus constitute a set of
basis paths for the flow graph in figure 11.2 (B). That is, if tests can be designed to force
execution along these paths, every statement in the program will be executed at least once and
every condition will be evaluated on its true and false side.
The number of paths is determined by the cyclomatic complexity. Complexity is computed in one
of three ways.
i. The number of regions of the flow graph corresponds to the cyclomatic complexity.
ii. Cyclomatic complexity V(G) for a flow graph, G, is also defined as:
V(G) = E - N + 2
where E is the number of edges and N is the number of nodes in the graph.
iii. Cyclomatic complexity V(G) for a flow graph, G, is also defined as :
V(G) = P + 1
where P is the number of predicate nodes contained in the flow graph G.
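Both formulas can be checked mechanically. A minimal Python sketch, using an invented four-node flow graph rather than the chapter's figure:

```python
def cyclomatic_complexity(graph):
    # V(G) = E - N + 2, with the flow graph given as {node: [successor, ...]}.
    nodes = set(graph)
    for successors in graph.values():
        nodes.update(successors)
    edges = sum(len(successors) for successors in graph.values())
    return edges - len(nodes) + 2

# Node 1 is the only predicate node (two outgoing edges);
# both branches rejoin at node 4.
flow = {1: [2, 3], 2: [4], 3: [4], 4: []}
# E = 4, N = 4, so V(G) = 4 - 4 + 2 = 2; equivalently P + 1 = 1 + 1 = 2.
```

The result, 2, is the number of basis paths, and therefore the minimum number of test cases needed to execute every statement at least once.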
The condition testing method focuses on testing each condition in the program.
The purpose of condition testing is to detect not only errors in the conditions of a
program but also other errors in the program.
3) Loop testing
Loop testing is another white-box testing technique that focuses exclusively on
the validity of loop constructs. Four different classes of loops can be defined:
a) Simple loops: These are tested by applying the following tests:
• Skip the loop entirely
• Only one pass through the loop
• m passes through the loop where m < n (n is the maximum allowable number of
passes through the loop)
• n-1, n, n+1 passes through the loop.
b) Nested loops: If we were to extend the same strategy used for simple loops to
nested loops, the number of tests would increase geometrically with the level of
nesting. A different method is used to test these loops, which helps to reduce the
number of tests:
• Start at the innermost loop. Set all other loops to their minimum value.
• Conduct simple tests for the inner loop while holding other loops at their
minimum iteration value. Conduct additional tests for out of range and excluded
values.
• Work outward, conducting tests for the next loop, while holding all outer loops at
their minimum iteration value.
• Continue until all loops are tested.
c) Concatenated loops
Concatenated loops can be tested using the approach defined for simple loops, if the
two loops are independent of each other. If, however, the loop counter for the first
loop is used as the initial value for the second loop, then the two loops are
dependent. In this case the approach recommended for nested loops is suggested.
d) Unstructured loops
Whenever possible, this class of loops should be redesigned to reflect the use of
structured programming constructs.
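The pass counts prescribed for a simple loop can be generated mechanically. This is a rough illustrative sketch (the function name is invented, not from any testing library):

```python
def simple_loop_pass_counts(n, m=None):
    """Pass counts for simple-loop testing: skip the loop (0 passes),
    one pass, m passes (m < n), and n-1, n, n+1 passes, where n is the
    maximum allowable number of passes through the loop."""
    if m is None:
        m = n // 2                      # pick a typical interior value
    counts = {0, 1, m, n - 1, n, n + 1}
    return sorted(c for c in counts if c >= 0)

print(simple_loop_pass_counts(10))      # [0, 1, 5, 9, 10, 11]
```

A test driver would then execute the loop once per listed count and check the result each time.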
Black box testing
In this method, the system is treated as a black box, i.e. the analyst does not look into the code to see if each path is followed. Due to this, specification testing is not complete. However, it is a complementary approach to white box testing that is likely to uncover a different class of errors than the white-box methods. Some of the black-box testing methods are dealt with in the following subsections.
1. Equivalence partitioning
This is a black box testing method that divides the input domain of a program into
classes of data from which test cases can be derived. An ideal test case uncovers a class of
errors that might otherwise require many cases to be executed before the general error is
observed. Equivalence partitioning attempts to define a test case that uncovers classes of errors,
thus reducing the total number of test cases that must be developed.
Test case design for equivalence partitioning is based on an evaluation of equivalence
classes for an input condition. An equivalence class represents a set of valid or invalid states for
input conditions. Typically, an input condition is either a specific numeric value, a range of
values, a set of related values or a Boolean condition. Equivalence classes may be defined
according to the following guidelines:
i) If an input condition specifies a range, one valid and two invalid equivalence classes
are defined.
ii) If an input condition requires a specific value, one valid and two invalid equivalence
classes are defined.
iii) If an input condition specifies a member of a set, one valid and one invalid
equivalence class are defined.
iv) If an input condition is Boolean, one valid and one invalid equivalence class are
defined.
For example, consider an application that requires the following data:
• Area code: blank or 3-digit number
• Prefix: 3-digit number not beginning with 0 or 1
• Suffix: 4-digit number
• Password: 6-digit alphanumeric string
• Commands: check, deposit, etc.
The input conditions associated with the above data elements can be specified as follows:
• Area code: Input condition, Boolean - the area code may or may not be present
Input condition, range - values defined between 000 and 999
• Prefix: Input condition, range - values defined between 200 and 999
• Suffix: Input condition, range - values defined between 0000 and 9999
• Password: Input condition, Boolean - the password may or may not be present
Input condition, value - 6-character string
• Commands: Input condition, set - containing the commands noted previously
Applying the guidelines for the derivation of equivalence classes, test cases for each
domain data item can be developed and executed. Test cases are developed so that the largest
number of attributes of an equivalence class are exercised at once.
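To make the range guideline concrete, the sketch below (illustrative code, not from the text) derives one representative test value per equivalence class for the prefix field, whose valid range is 200 to 999:

```python
def range_equivalence_cases(low, high):
    """For a range input condition: one valid and two invalid
    equivalence classes, each represented by a single test value."""
    return {
        "valid": (low + high) // 2,     # any value inside the range
        "invalid_below": low - 1,
        "invalid_above": high + 1,
    }

def prefix_is_valid(value):
    # Prefix: 3-digit number not beginning with 0 or 1, i.e. 200-999
    return 200 <= value <= 999

cases = range_equivalence_cases(200, 999)
print(cases)
print([prefix_is_valid(v) for v in cases.values()])   # [True, False, False]
```

One representative per class is enough: any other member of the same class is expected to expose the same class of errors.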
2. Boundary value analysis
Boundary value analysis is a test case design technique that complements equivalence partitioning. Rather than selecting any element of an equivalence class, test cases are chosen at the "edges" of the class. Guidelines for boundary value analysis:
i) If an input condition specifies a range bounded by values a and b, test cases should be designed with values a and b, and values just above and just below a and b.
ii) If an input condition specifies a number of values, test cases should be developed that exercise the minimum and maximum numbers. Values just above and below the minimum and maximum are also tested.
iii) Apply guidelines (i) and (ii) to output conditions; i.e. test cases should be designed to
generate the maximum and minimum number of possible values as output.
iv) If internal data structures have prescribed boundaries, test the data structure at its
boundary (e.g. an array with a defined limit of 100 entries).
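For a range bounded by values a and b, boundary value analysis selects the bounds themselves plus values just inside and just outside them. A minimal sketch (the helper name is invented for illustration):

```python
def boundary_values(a, b):
    """Boundary test values for a range [a, b]: the bounds a and b,
    plus the values just above and just below each bound."""
    return sorted({a - 1, a, a + 1, b - 1, b, b + 1})

print(boundary_values(1, 100))   # [0, 1, 2, 99, 100, 101]
```

The values 0 and 101 fall outside the range and are expected to be rejected; the other four must be accepted.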
3. Comparison testing
There are some situations where the reliability of the system is absolutely critical. In
some applications, redundant hardware and software are often used to minimize the possibility
of errors. When redundant software is developed, separate software engineering teams develop
independent versions of an application using the same specifications. In such situations, both
versions can be tested using the same test data to ensure that they provide identical outputs.
Then all the versions are executed in parallel with a real-time comparison of results to ensure
consistency.
If the output is different, each of the applications is investigated to determine if a defect
in one or more versions is responsible for the difference. In most cases, this comparison can be
done with automated tools.
This method is not foolproof. If the specifications are incorrect, it is likely that all versions
of the software reflect the error. In addition, if the comparison provides identical, but incorrect
results, comparison testing fails to detect the error.
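Comparison testing amounts to running independently developed versions on the same test data and flagging any disagreement. A minimal sketch, with two deliberately trivial stand-in "versions" (one seeded with a defect for demonstration):

```python
def compare_versions(test_data, version_a, version_b):
    """Run two independently developed implementations on identical
    test data and return the inputs on which their outputs differ."""
    return [x for x in test_data if version_a(x) != version_b(x)]

# Two hypothetical independent implementations of the same specification
square_a = lambda x: x * x
square_b = lambda x: 10 if x == 3 else x ** 2    # seeded defect at x == 3

print(compare_versions(range(5), square_a, square_b))   # [3]
```

Note the limitation stated above: if both versions contained the same defect, this comparison would report nothing.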
Unit testing
Unit testing focuses verification effort on the smallest unit of software design: the
software module. Using the component-level design description as a guide, important control
paths are tested to uncover errors within the boundary of the module. The unit test is white-box
oriented, and the steps can be conducted in parallel for multiple modules.
• The module interface is tested to ensure that information flows properly into and out of
the unit under test.
• The local data structure is tested to ensure that data stored temporarily maintains its
integrity during all the steps in the execution.
• Boundary conditions are tested to establish that the module operates properly at the
established boundaries.
• All independent paths through the module are exercised to ensure that all statements
have been executed at least once.
• Finally, all error-handling paths are also tested.
Unit testing is normally considered an adjunct to the coding step. After source level code is
developed, reviewed and verified for syntax, unit test case design begins. Because a module is
not a stand-alone program, driver and/or stub software must be developed. A driver program
is no more than a main program that accepts test case data, passes it to the component to
be tested and prints the relevant results. Stubs serve to replace modules subordinate to the
module under test. Drivers and stubs represent overhead, as they are additional software that must
be developed (though not formally designed) but is not delivered with the final software. Unit testing
becomes simpler when a component is designed with high cohesion. When only one function is
addressed by a component, the number of test cases is reduced and errors are more easily
predicted and uncovered.
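The driver-and-stub arrangement can be sketched as follows. The module under test (net_price), the stub and the driver are all hypothetical names invented for illustration:

```python
def tax_rate_stub(region):
    # Stub: stands in for the real tax-lookup module, which is
    # subordinate to the module under test and not yet integrated.
    return 0.10

def net_price(gross, region, tax_lookup):
    # Module under test: depends on a subordinate tax-lookup module,
    # passed in so the stub can replace it during unit testing.
    return round(gross * (1 + tax_lookup(region)), 2)

def driver():
    # Driver: a minimal main program that feeds test case data to the
    # component and reports the results.
    cases = [(100.0, "north", 110.0), (50.0, "south", 55.0)]
    return all(net_price(g, r, tax_rate_stub) == want for g, r, want in cases)

print(driver())   # True
```

Neither the stub nor the driver ships with the final software; both exist only so the module can be exercised in isolation.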
Integration testing
Interfacing of various modules can cause problems. Data can be lost across an interface,
one module may affect the other, and individually acceptable imprecision may be magnified
when combined.
Integration testing is a systematic technique for constructing the program structure
while at the same time conducting tests to uncover errors associated with interfacing. The
objective is to take unit tested components and build a program structure that has been
dictated by design.
There is often a tendency to attempt non-incremental integration, i.e. to combine all
components in advance and test the entire program as a whole. This usually results in an
abundance of errors, and correction becomes difficult because isolation of causes is complicated.
Incremental integration is the antithesis of the above approach. The program is
constructed and tested in small increments, where errors are easier to isolate and correct,
interfaces are tested more completely and a systematic test approach is applied. The following
subsections deal with the different approaches to incremental integration.
Top-down testing
Top-down integration testing is an incremental approach to construction of program
structure. Modules are integrated by moving downward through the control hierarchy,
beginning with the main control module. Modules subordinate to the main module are
incorporated into the structure in steps.
The integration process is performed in five steps:
i) The main control module is used as a test driver and stubs are substituted for all
components directly subordinate to the main module.
ii) Depending on the integration approach selected, subordinate stubs are replaced one
at a time with actual components, in breadth-first or depth-first order.
iii) Tests are conducted as each component is integrated.
iv) On completion of each set of tests, another stub is replaced by the real component.
v) Regression testing may be conducted to ensure that new errors have not been
introduced.
The top-down strategy verifies major control or decision points early in the test process.
In a well-factored program structure, decision-making occurs at higher levels in the hierarchy
and is thus encountered first.
Top-down strategy sounds relatively uncomplicated, but in practice, logistical problems
can arise. The most common of these problems occurs when processing at low levels in the
hierarchy is required to adequately test upper levels.
In such cases another approach to integration testing is used: the bottom-up method discussed
in the next subsection.
Bottom-up testing
Bottom-up integration testing, as the name implies, begins construction and testing with
atomic modules (i.e. components at the lowest levels in the program structure). Because
components are integrated from the bottom up, processing required for components
subordinate to a given level is always available and the need for stubs is eliminated.
A bottom-up integration strategy may be implemented using the following steps:
i) Low-level components are combined into clusters that perform a specific sub-function.
ii) A driver is written to co-ordinate test case input and output.
iii) The cluster is tested.
iv) Drivers are removed and clusters are combined moving upward in the program structure.
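Steps (i) to (iii) can be sketched with a tiny cluster of two hypothetical low-level components and a driver written only to coordinate test input and output:

```python
# Two atomic, low-level components
def parse_amount(text):
    return float(text)

def apply_discount(amount, percent):
    return amount * (1 - percent / 100)

def billing_cluster(text, percent):
    # Cluster: the low-level components combined into one sub-function
    return apply_discount(parse_amount(text), percent)

def cluster_driver():
    # Driver: exists only to feed test input to the cluster and collect
    # the output; it is removed once the cluster is integrated upward.
    return round(billing_cluster("200.0", 10), 2)

print(cluster_driver())   # 180.0
```

Because the components below the cluster are real, no stubs are needed; only the throwaway driver is extra software.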
An overall plan for integration of the software and a description of specific tests are
documented in a test specification. This document contains a test plan and a test procedure. It is
a work product of the software process, and becomes a part of the software configuration.
Validation testing
Software validation is achieved through a series of black box tests that demonstrate
conformity with requirements. A test plan outlines the classes of tests to be conducted and a
test procedure defines specific test cases that will be used to demonstrate conformity with
requirements.
After each validation test case has been conducted, one of two possible conditions exists:
i) The function or performance characteristics conform to the specifications and are
accepted.
ii) A deviation from specifications is discovered and a deficiency list is created.
If software is developed for the use of many customers, it is impractical to perform formal
acceptance tests with each one. Many software product builders use a process called alpha
and beta testing to uncover errors that only the end-user is able to find.
The alpha test is conducted at the developer's site by the customer. The software is used in a
natural setting, with the developer recording errors and usage problems. Alpha tests are
performed in a controlled environment.
Beta tests are conducted at one or more customer sites by the end-users of the software.
Unlike alpha testing, the developer is generally not present. The beta test is thus a live
application of the software in an environment that cannot be controlled by the developer.
The customer records all the errors and reports them to the developer at regular intervals.
As a result of problems recorded or reported during these tests, the developers make
modifications and prepare for release to the entire customer base.
System testing
Software is only one element of a larger computer-based system. Ultimately, software is
incorporated with other system elements and a series of system integration and validation tests
are conducted. These tests fall outside the scope of the software process and are not conducted
solely by software engineers. However, steps taken during software design and testing can
greatly improve the probability of successful software integration in a larger system.
System testing is actually a series of tests whose purpose is to fully exercise the computer-based
system. Although each test has a different purpose, all work to verify that system elements have
been properly integrated and perform allocated functions.
i) Recovery testing
Many computer-based systems must recover from faults and resume processing within a
pre-specified time. In some cases, a system must be fault-tolerant, i.e. processing faults
must not cause overall system function to cease. In other cases, a system failure must be
corrected within a specified period of time or severe economic damage will occur.
Recovery testing is a system test that forces the software to fail in a variety of ways and
verifies that recovery is properly performed. If recovery is automatic, re-initialization,
check-pointing mechanisms, data recovery and restart are evaluated for correctness. If
recovery requires human intervention, the mean-time-to-repair (MTTR) is evaluated to
determine whether it is within acceptable limits.
Test data
The type of data that is used for testing is called test data. There are two very important
sources of data each with their own sets of advantages and disadvantages. They are:
i) Live test data
Live test data is actually extracted from organization files. After a system is partially
constructed, analysts may ask the users to key in data from their normal activities. The
analyst uses this set of data to partially test the system.
It is difficult to obtain live data in sufficient amounts to perform extensive testing.
Although this data is realistic, i.e. typical for regular processing activities, it does not help
to test for unusual occurrences, thus ignoring factors that may cause system failure.
ii) Artificial test data
Artificial test data is created solely for testing purposes and is maintained in a testing
library. Test data can be generated to test all combinations of formats and values. Since
they are generated artificially, large volumes can be generated for extensive testing. The
test data should be capable of testing all the functionality and revealing the bugs in the
system. For this reason the library must contain the following classes of data:
• Correct test data
• Erroneous test data
• Boundary condition test data
Perfective maintenance
Perfective maintenance enhances the performance or functionality of the system. It extends the software beyond its original functional requirements. This type of maintenance involves more time and money than both corrective and adaptive maintenance.
For example:
• Automatic generation of dates and invoice numbers.
• Reports with graphical analysis such as pie charts, bar charts, etc.
• Providing an on-line help system
Chapter 10-Documentation
CASE tools
Today software engineers use tools that are analogous to the computer-aided design and
engineering tools used by hardware engineers. There are several different tools with different
functionalities. They are:
i) Business systems planning tools
By modeling the strategic information requirements of an organization, these tools
provide a meta-model from which specific information systems are derived. Business
systems planning tools help software developers create information systems that route
data to those who need the information. The transfer of data is improved and
decision-making is expedited.
ii) Project management tools
Using these management tools, a project manager can generate useful estimates of cost,
effort and duration of a software project, plan a work schedule and track projects on a
continuous basis. In addition, the manager can use these tools to collect metrics that will
establish a baseline for software development quality and productivity.
iii) Support tools
This category encompasses document production tools, network system software,
databases, electronic mail, bulletin boards and configuration management tools that are
used to control and manage the information that is created as the software is developed.
iv) Analysis and design tools
These enable the software engineer to model the system that is being built. These tools
assist in the creation of the model and an assessment of the model's quality. By
performing consistency and validity tests on each model, these tools provide the engineer
with insight and help to eliminate errors before they propagate into the program.
v) Programming tools
System software utilities, editors, compilers, debuggers are a legitimate part of CASE
tools. In addition to these, new and powerful tools can be added. Object-oriented
programming tools, 4G languages, advanced database query systems all fall into this tool
category.
vi) Integration and testing tools
Testing tools provide a variety of different levels of support for the software testing steps
that are applied as part of the software engineering process. Some tools provide direct
support for the design of test cases and are used in the early stages of testing. Other tools,
such as automatic regression testing and test data generation tools, are used during
integration and validation testing and can help reduce the amount of effort required for
testing.
vii) Prototyping and simulation tools
These tools span a wide range, from simple screen painters to simulation
products for timing and sizing analysis of real-time embedded systems. At their most
fundamental, prototyping tools focus on the creation of screens and reports that will
enable a user to understand the input and output domain of an information system.
viii) Maintenance tools
These tools can help to decompose an existing program and provide the engineer with
some insight. However, the engineer must use intuition, design sense and intelligence to
complete the reverse engineering process and / or to re-engineer the application.
ix) Framework tools
These tools provide a framework from which an integrated project support environment
(IPSE) can be created. In most cases, framework tools actually provide database
management and configuration management capabilities along with utilities that enable
tools from different vendors to be integrated into the IPSE.
Types of Documentation
Documentation is an important part of software engineering that is often overlooked.
Types of documentation include:
• Architecture/Design - Overview of software. Includes relations to an environment and
construction principles to be used in design of software components.
• Technical - Documentation of code, algorithms, interfaces, and APIs.
• End User - Manuals for the end-user, system administrators and support staff.
• Marketing - Product briefs and promotional collateral
Architecture/Design Documentation
Architecture documentation is a special breed of design document. In a way,
architecture documents are a third derivative of the code (design documents being the
second derivative, and code documents the first). Very little in the architecture documents is
specific to the code itself. These documents do not describe how to program a particular
routine, or even why that particular routine exists in the form that it does, but instead merely
lay out the general requirements that would motivate the existence of such a routine. A good
architecture document is short on details but thick on explanation. It may suggest approaches
for lower-level design, but leaves the actual exploration of trade studies to other documents.
Technical Documentation
This is what most programmers mean when using the term software documentation.
When creating software, code alone is insufficient. There must be some text along with it to
describe various aspects of its intended operation. This documentation is usually embedded
within the source code itself so it is readily accessible to anyone who may be traversing it.
User Documentation
Unlike code documents, user documents are usually far divorced from the source code of the program, and instead simply describe how the software is used.
Marketing Documentation
For many applications it is necessary to have some promotional materials to encourage
casual observers to spend more time learning about the product. This form of documentation
has three purposes:
1. To excite the potential user about the product and instill in them a desire to become more
involved with it.
2. To inform them about what exactly the product does, so that their expectations are in line
with what they will be receiving.
3. To explain the position of this product with respect to other alternatives.
2. Under what circumstances or for what purposes would one use an interview rather than
other data collection methods? Explain. (May-06, 12 Marks)
OR
Discuss how the interview technique can be used to determine the needs of a new system. (May-03)
Ans:
a) An interview is a fact-finding technique whereby the system analyst collects information
from individuals through face-to-face interaction.
b) There are two types of interviews:
i. Unstructured interviews: an interview that is conducted with only a general
goal or subject in mind and with few, if any, specific questions.
ii. Structured interviews: an interview in which the interviewer has a specific set
of questions to ask the interviewee.
c) Unstructured interviews tend to involve asking open-ended questions, while structured
interviews tend to involve asking more close-ended questions.
d) Advantages:
• Interviews give the analyst an opportunity to motivate the interviewee to respond
freely and openly to questions.
• Interviews allow the system analyst to probe (penetrating investigation) for more
feedback from the interviewee.
• Interviews permit the system analyst to adapt or reword questions for each
individual.
• A good system analyst may be able to obtain information by observing the
interviewee's body movements and facial expressions as well as listening to verbal
replies to questions.
e) This technique is advised in the following situations:
• Where the application system under consideration is highly specific to the user
organization.
• When the application system may not be very specialized, but the practices followed
by the organization are specific.
• The organization does not have any documentation in which the relevant information
requirements are recorded, or such documentation is irrelevant, not available, or
cannot be shared with developers due to privacy issues.
• The organization has not decided on the details of the practices that the new application
system would demand, or would use the system to implement new practices, and wants
to decide while responding to the information requirements determination.
f) A structured interview meeting is useful in the following situations:
• When the development team and the user team members know the broad system
environment with high familiarity. This reduces the amount of communication
significantly, to just a few words or a couple of sentences.
• When responding to the questions involves collecting data from different sources or
persons, and/or analyzing it and/or making the analysis ready for discussion at the
meeting, in order to save on the duration of the meeting.
• When costly communication media, such as international phone or conference calls,
are to be used.
• When some or all of the members of the user team represent the top management,
external consultants or specialists.
g) A semi-structured interview meeting is useful in the following situations:
• Usually in the initial stage of information requirements determination, the users and
the developers need to exchange a significant amount of basic information. Structured
interview meetings may not be effective here, since there is not enough information
to ask questions of importance.
• Very early on, or with a new organization, personal meetings are important because
they help the development team not only in knowing the decisions but also in
understanding the decision-making process of the organization, the role of every
user team member in the decisions, and their organizational and personal interests.
These observations during the meetings can help the software development team
significantly in future communications.
• When the new application system is not a standard one and/or the users follow
radically different or highly specialized business practices, it is very difficult to
predict which questions are important from the point of view of determining the
information requirements.
• When the members of the development team expect to participate in formulating
some new information requirements, or in freezing them, as advisors, the
interview meeting has to be personal and therefore semi-structured.
• If the users are generally available and located very close, and the number of
top-management users or external consultants is nil or negligible, then personal
meetings are conducted.
• When there are no warranted reasons that only structured interviews are to be
conducted.
3. What are CASE tools? Explain some CASE tools used for prototyping. (May-06, 15 Marks; Nov-03, M-05, Dec-04)
Ans: Computer-assisted software engineering (CASE)
• Computer-assisted software engineering (CASE) is the use of automated software tools
that support the drawing and analysis of system models and associated specifications.
Some tools also provide prototyping and code generation facilities.
• At the center of any true CASE tool's architecture is a developer's database called a CASE
repository, where developers can store system models, detailed descriptions and
specifications, and other products of systems development.
• A CASE tool enables people working on a software project to store data about the project,
its plan and schedules; to track its progress and make changes easily; to analyze
and store data about users; and to store the design of a system through automation.
• A CASE environment makes system development economical and practical. The
automated tools and environment provide a mechanism for systems personnel to
capture, document and model an information system.
• A CASE environment is a number of CASE tools which use an integrated approach to
support the interaction between the environment's components and the user of the
environment.
CASE Components
CASE tools generally include five components - diagrammatic tools, an information
repository, interface generators, code generators, and management tools.
1. Diagrammatic Tools
• Typically, they include the capabilities to produce data flow diagrams, data structure
diagrams, and program structure charts.
• These high-level tools are essential for support of structured analysis methodology,
and CASE tools incorporate structured analysis extensively.
• They support the capability to draw diagrams and charts and to store the details
internally. When changes must be made, the nature of the changes is described to the
system, which can then redraw the entire diagram automatically.
• The ability to change and redraw eliminates an activity that analysts find both
tedious and undesirable.
2. Information Repository
• The information repository, or data dictionary, stores the details of the system.
Dictionaries are designed so that the information is easily accessible. They also
include built-in controls and safeguards to preserve the accuracy and consistency of
the system details.
• The use of authorization levels, process validation, and procedures for testing
consistency of the descriptions ensures that access to definitions, and the revisions
made to them in the information repository, occur properly according to the
prescribed procedures.
3. Interface Generators
• System interfaces are the means by which users interact with an application, both to
enter information and data and to receive information.
• Interface generators provide the capability to prepare mockups and prototypes of
user interfaces.
• Typically, they support the rapid creation of demonstration system menus,
presentation screens and report layouts.
• Interface generators are an important element for application prototyping,
although they are useful with all development methods.
4. Code Generators:
• Code generators automate the preparation of computer software.
• They incorporate methods that allow the conversion of system specifications into
executable source code.
• The best generators will produce approximately 75 percent of the source code for
an application. The rest must be written by hand. This hand coding, as the process is
termed, is still necessary.
• Because CASE tools are general-purpose tools, not limited to any specific area such as
manufacturing control, investment portfolio analysis, or accounts management, the
challenge of fully automating software generation is substantial.
• The greatest benefits accrue when the code generators are integrated with the central
information repository; such a combination achieves the objective of creating reusable
computer code.
• When specifications change, code can be regenerated by feeding details from the data
dictionary through the code generators. The dictionary contents can be reused to
prepare the executable code.
5. Management Tools:
CASE systems also assist project manager in maintaining efficiency and effectiveness
throughout the application development process.
• The CASE components assist development manager in the scheduling of the analysis
and designing activities and allocation of resources to different project activities.
4. What is cost-benefit analysis? Describe any two methods of performing the same. (May-06, May-04)
Ans:
Cost-benefit analysis:
Cost-benefit analysis is a procedure that gives a picture of the various costs, benefits, and
risks associated with each alternative system.
Cost-benefit analysis is a part of the economic feasibility study of a system.
The following are the methods of performing cost and benefit analysis:
1. Net benefit analysis.
2. Present value analysis.
3. Payback analysis.
4. Break-even analysis.
5. Cash flow analysis.
6. Return on investment analysis.
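Two of these methods, present value analysis and payback analysis, reduce to short calculations. The sketch below is illustrative only; the rupee figures are invented for the example:

```python
def present_value(future_amount, discount_rate, year):
    # Present value analysis: discount a future benefit to today's terms,
    # PV = amount / (1 + rate) ** year.
    return future_amount / (1 + discount_rate) ** year

def payback_period(initial_cost, annual_net_benefit):
    # Payback analysis: years for cumulative benefits to cover the cost
    # (assumes a constant annual benefit).
    return initial_cost / annual_net_benefit

# A system costing Rs. 50,000 that saves Rs. 12,500 per year:
print(payback_period(50_000, 12_500))            # 4.0 years
# The first year's saving, discounted at 10 percent:
print(round(present_value(12_500, 0.10, 1), 2))  # 11363.64
```

Comparing such figures across the alternative systems is what turns the raw cost and benefit lists into a decision.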
5. Explain the concept of normalization with examples. Why would you denormalize? (May-04)
Ans:
Normalization:
Normalization is a process of simplifying the relationship between data elements in a record.
Through normalization a collection of data in a record structure is replaced by successive record
structures that are simpler and more predictable and therefore more manageable.
Normalization is carried out for four reasons:
1. To structure the data so that any important relationships between entities can be
represented.
2. To permit simple retrieval of data in response to query and report requests.
3. To simplify the maintenance of the data through updates, insertions and deletions.
4. To reduce the need to restructure or reorganize data when new application
requirements arise.
license number. Thus, if you know the serial number of a vehicle, you can determine the state
license number. This is functional dependency.
In contrast, if a motor vehicle record contains the names of all the individuals who drive the
vehicle, functional dependency is lost. If we know the license number, we do not know who the
driver is (there can be many), and if we know the name of the driver, we do not know the specific
license number or vehicle serial number, since a driver can be associated with more than one
vehicle in the file. Thus, to achieve second normal form, every data item in a record that is not
dependent on the primary key of the record should be removed and used to form a separate
relation.
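The decomposition described above can be sketched with plain data structures. The field names and values are illustrative assumptions, not part of any real schema:

```python
# Unnormalized records: driver names do not depend on the primary key
# (the vehicle serial number), so the license number is repeated per driver.
unnormalized = [
    {"serial_no": "S1", "license_no": "MH-01", "driver": "Ann"},
    {"serial_no": "S1", "license_no": "MH-01", "driver": "Raj"},
]

# Vehicle relation: every attribute depends fully on the key serial_no.
vehicles = {r["serial_no"]: r["license_no"] for r in unnormalized}

# Separate relation associating drivers with vehicles.
drives = {(r["serial_no"], r["driver"]) for r in unnormalized}

print(vehicles)         # the license number is now stored only once
print(sorted(drives))   # driver facts live in their own relation
```

Splitting the driver facts out removes the repeated license number, which is exactly the simplification that moving toward second normal form achieves.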
Denormalization:
Performance needs dictate very quick retrieval capability for data stored in relational
databases. To accomplish this, the decision is sometimes made to denormalize the physical
implementation. Denormalization is the process of putting one fact in numerous places. This
speeds up data retrieval at the expense of data modification. Of course, a normalized set of
relational tables is the optimal environment and should be implemented whenever possible.
Yet, in the real world, denormalization is sometimes necessary. Denormalization is not
necessarily a bad decision if implemented wisely. You should always consider these issues before
denormalizing:
• Can the system achieve acceptable performance without denormalizing?
• Will the performance of the system after denormalizing still be unacceptable?
• Will the system be less reliable due to denormalization?
If the answer to any of these questions is "yes," then you should avoid denormalization,
because any benefit that is accrued will not exceed the cost. If, after considering these issues,
you decide to denormalize, be sure to adhere to the general guidelines that follow.
6. Write a detailed note about the different levels and methods of testing software (May-06).
Ans:
A test case is a set of data that the system will process as normal input. However, the
data are created with the express intent of determining whether the system will process them
correctly. There are two logical strategies for testing software: code testing and specification
testing.
CODE TESTING:
The code testing strategy examines the logic of the program. To follow this testing
method, the analyst develops test cases that result in executing every instruction in the program
or module; that is, every path through the program is tested.
This testing strategy does not indicate whether the code meets its specification, nor does
it determine whether all aspects are even implemented. Code testing also does not check the
range of data that the program will accept, even though, when software failures occur in
actual use, it is frequently because users submitted data outside the expected ranges (for example,
a sales order for an amount larger than any in the history of the organization).
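The idea of executing every path can be sketched as follows. The discount rule here is a made-up assumption; the point is that one test case is chosen per path through the code:

```python
def discount(order_amount):
    """Toy pricing rule with three distinct execution paths."""
    if order_amount >= 1000:
        return order_amount * 10 // 100   # path 1: large orders, 10%
    elif order_amount >= 500:
        return order_amount * 5 // 100    # path 2: medium orders, 5%
    return 0                              # path 3: small orders, none

# Code testing: one case per path, so every instruction executes.
assert discount(2000) == 200   # exercises path 1
assert discount(600) == 30     # exercises path 2
assert discount(100) == 0      # exercises path 3
```

Note that these cases prove nothing about out-of-range inputs (a negative amount, say), which is exactly the limitation of code testing described above.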
SPECIFICATION TESTING:
To perform specification testing, the analyst examines the specifications stating what the
program should do and how it should perform under various conditions. Then test cases are
developed for each condition or combination of conditions and submitted for processing. By
examining the results, the analyst can determine whether the program performs according to its
specified requirements.
LEVELS OF TESTING:
Systems are not designed as entire systems, nor are they tested as single systems. The
analyst must perform both unit and system testing.
UNIT TESTING:
In unit testing, the analyst tests the programs making up a system. (For this reason,
unit testing is sometimes also called program testing.)
Unit testing focuses first on the modules, independently of one another, to locate errors. This
enables the tester to detect errors in coding and logic that are contained within that module
alone. Unit testing can be performed only from the bottom up, starting with the smallest and
lowest-level modules and proceeding one at a time. For each module in bottom-up testing, a short
program executes the module and provides the needed data, so that the module is asked to perform
the way it will when embedded within the larger system. When the bottom-level modules are tested,
attention turns to those on the next level that use the lower ones. They are tested individually
and then linked with the previously examined lower-level modules.
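The bottom-up procedure above can be sketched like this. The module names are assumptions; the short "driver" that exercises the lowest-level module first is shown explicitly:

```python
def parse_amount(text):
    """Lowest-level module: convert raw user input to a number."""
    return float(text.strip().replace(",", ""))

def total(lines):
    """Next-level module that relies on the lower one."""
    return sum(parse_amount(line) for line in lines)

# Driver for the bottom-level module, run first and in isolation:
assert parse_amount(" 1,250.50 ") == 1250.50

# Only after the lower module passes is the next level tested,
# now linked with the previously examined module:
assert total(["100", "1,250.50"]) == 1350.50
```

Each driver supplies the data the module would receive inside the larger system, so errors are localized to one module at a time.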
SYSTEM TESTING:
System testing does not test the software as such, but rather the integration of each module
in the system. It also tests to find discrepancies between the system and its original objectives,
current specifications, and system documentation. The primary concern is the compatibility of
individual modules. The analyst tries to find areas where modules have been designed with
different specifications for data length, type, and data element names. For example, one module
may expect the data item for the customer identification number to be a character data item.
The most common question that defines validity is: Does the instrument measure
what it is intended to measure? It refers to the notion that the questions asked are worded to
produce the information sought. In validity, the emphasis is on what is being
measured.
2. System Integrity:
System integrity refers to the proper functioning of hardware and programs, appropriate
physical security, and safety against external threats such as eavesdropping and wiretapping.
3. Privacy:
Privacy defines the rights of the users or organizations to determine what information
they are willing to share with or accept from others and how the organization can be protected
against unwelcome, unfair, or excessive dissemination of information about it.
4. Confidentiality:
Confidentiality is a special status given to sensitive information in a database to
minimize possible invasion of privacy. It is an attribute of information that characterizes its
need for protection.
Disaster/Recovery Planning:
1. Disaster/recovery planning is a means of addressing the concern for system availability
by identifying potential exposures, prioritizing applications, and designing safeguards
that minimize loss if a disaster occurs. It means that no matter what the disaster, the
system can be recovered. The business will survive because a disaster/recovery plan
allows quick recovery under the circumstances.
2. In disaster/recovery planning, management’s primary role is to accept the need for
contingency planning, select an alternative measure, and recognize the benefits that can
be derived from establishing a disaster/recovery plan. Top management should
establish a disaster/recovery policy and commit corporate support staff to its
implementation.
3. The user’s role is also important. The user’s responsibilities include the following:
a. Identifying critical applications, why they are critical, and how computer
unavailability would affect the department.
b. Approving data protection procedures and determining how long and how well
operations will continue without the data.
c. Funding the costs of backup.
9. What are the roles of the system analyst in system analysis and design? (Nov-03, May-01)
Ans:
The various roles of system analyst are as follows:
1. Change Agent:
The analyst may be viewed as an agent of change. A candidate system is designed to
introduce change and reorientation in how the user organization handles information or makes
decisions. In the role of a change agent, the system analyst may select various styles to introduce
change to the user organization. The styles range from that of the persuader to the imposer; in
between are the catalyst and confronter roles. When the user appears to have a tolerance for
change, the persuader or catalyst (helper) style is appropriate. On the other hand, when drastic
changes are required, it may be necessary to adopt the confronter or even the imposer style. No
matter which style is used, however, the goal is the same: to achieve acceptance of the candidate
system with a minimum of resistance.
3. Architect:
Just as an architect relates the client’s abstract design requirements to the contractor’s
detailed building plan, an analyst relates the user’s logical design requirements to the detailed
physical system design. As an architect, the analyst also creates a detailed physical design
of the candidate system.
4. Psychologist:
In system development, systems are built around people. The analyst plays the role
of psychologist in the way he/she reaches people, interprets their thoughts, assesses their
behaviour, and draws conclusions from these interactions. Understanding interfunctional
relationships is important. It is important that the analyst be aware of people’s feelings and be
prepared to get around things in a graceful way. The art of listening is important in evaluating
responses and feedback.
5. Salesperson:
Selling change can be as crucial as initiating change. Selling the system actually
takes place at each step in the system life cycle. Sales skills and persuasiveness are crucial to the
success of the system.
6. Motivator:
The analyst’s role as a motivator becomes obvious during the first few weeks after
implementation of a new system, and during times when turnover results in new people being
trained to work with the candidate system. The amount of dedication it takes to motivate the
users often taxes the analyst’s abilities to maintain the pace.
7. Politician:
Related to the role of motivator is that of politician. In implementing a candidate
system, the analyst tries to appease all parties involved. Diplomacy and finesse in dealing with
people can improve acceptance of the system. Just as a politician must have the
support of his/her constituency, a good analyst’s goal is to have the support of the users’
staff.
Interpersonal skills deal with relationships and the interface of the analyst with people in the
business. The interpersonal skills relevant to systems work include the following:
a. Good interpersonal communication skills:
An analyst must be able to communicate, both orally and in writing. Communication is
not just reports, telephone conversations, and interviews. It is people talking, listening,
feeling, and reacting to one another, their experiences and reactions. Open communication
channels are a must for system development.
b. Good interpersonal relations skills:
An analyst interacts with all stakeholders in a system development project. This
interaction requires effective interpersonal skills that enable the analyst to deal with group
dynamics, business politics, conflict, and change.
iii) General knowledge of business processes and terminology:
System analysts must be able to communicate with business experts to gain an
understanding of their problems and needs. They should avail themselves of every opportunity
to complete basic business literacy courses such as financial accounting, management or cost
accounting, finance, marketing, manufacturing or operations management, quality
management, economics, and business law.
iv) General problem-solving skills:
The system analyst must be able to take a large business problem, break that
problem down into its parts, determine its causes and effects, and then recommend a
solution. Analysts must avoid the tendency to suggest a solution before analyzing the problem.
v) Flexibility and adaptability:
No two projects are alike. Accordingly, there is no single, magical approach or standard that is
equally applicable to all projects. Successful system analysts learn to be flexible and to adapt to
unique challenges and situations.
vi) Character and ethics:
The nature of the systems analyst’s job requires strong character and a sense of right and
wrong. Analysts often gain access to sensitive and confidential facts and information that are
not meant for public disclosure.
subsequent version. This process is repeated until the system meets the client’s conditions of
acceptance.
4. Any given prototype may omit certain functions or features until such time as the
prototype has sufficiently evolved into an acceptable implementation of the requirements.
Advantages:
1. Shorter development time.
2. More accurate user requirements.
3. Greater user participation and support.
4. Relatively inexpensive to build as compared with the cost of a conventional system.
This method is most useful for unique applications where developers have little information
or experience, or where the risk of error may be high. It is useful for testing the feasibility of the
system or for identifying user requirements.
12. Discuss the features of good user interface design, using the login context (M-03).
Ans:
The features of a good user interface design are as follows:
1. Provide the best way for people to interact with computers. This is commonly known as
Human-Computer Interaction (HCI).
2. The presentation should be user-friendly. A user-friendly interface is helpful, e.g. it should
not only tell the user that he has committed an error, but also provide guidance as to how
he/she can rectify it soon.
3. It should provide information on what the error is and how to fix it.
4. User-friendliness also includes being tolerant and adaptable.
5. A good GUI makes the User more productive.
6. A good GUI is more effective, because it finds the best solutions to a problem.
7. It is also efficient, because it helps the User to find such solutions more quickly and with
the fewest errors.
8. For a User using a computer system, the workspace is the computer screen. The goal of
a good GUI is to make the best use of a User’s workspace.
9. A good GUI should be robust. High robustness implies that the interface should not fail
because of some action taken by the User. Also, a User error should not lead to a
system breakdown.
10. The usability of the GUI is expected to be high. Usability is measured in various
terms.
11. A good GUI has high analytical capability; most of the information needed by the User
appears on the screen.
12. With a good GUI, the User finds the work easier and more pleasant.
13. The User should be happy and confident using the interface.
14. A good GUI demands a low cognitive workload, i.e. the mental effort required of the
User to use the system should be the least. In fact, the GUI closely approximates the
User’s mental model of, and reactions to, the screen.
15. For a good GUI, User satisfaction is high.
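Points 2 and 3 above can be sketched in the login context. The validation rules and messages below are illustrative assumptions, not a real API; the point is that each message states what is wrong and how to rectify it:

```python
def validate_login(username, password):
    """Return a list of helpful error messages; empty means acceptable."""
    errors = []
    if not username:
        # Says what is wrong AND how to fix it:
        errors.append("Username is empty. Enter your registered e-mail id.")
    if len(password) < 8:
        errors.append("Password must be at least 8 characters. "
                      "Use 'Forgot password' if you cannot recall it.")
    return errors

for message in validate_login("", "abc"):
    print(message)   # guidance, not just "error occurred"
```

A tolerant interface (point 4) would additionally accept minor input variations, e.g. trimming stray spaces around the username before validating it.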
The prototype form can be provided to the users. The steps are as follows:
a) The users should be provided with the form prototype and its related functionality,
including validations, help, and error messages.
b) The users should be invited to use the prototype forms, and the designers should observe
them.
c) The designers should observe the users’ body language in terms of ease of use, comfort
levels, satisfaction levels, help asked for, etc. They should also note the errors in the data
entry.
d) The designers should ask the users for their critical feedback.
e) The designers should improve the GUI design and associated functionality and run a
second round of tests for the users in the same way.
f) This exercise should be continued until the input form delivers the expected levels of
usability.
The common heads of tangible benefits vary from organisation to organisation, but some of
them are as follows:
i. Benefits due to reduced business cycle times, i.e. production cycle, marketing cycle, etc.
ii. Benefits due to increased efficiency.
iii. Savings in the salaries of operational users.
iv. Savings in space rent and taxes.
v. Savings in the cost of stationery, telephone, and other communication costs, etc.
vi. Savings in costs due to the use of storage media.
vii. Benefits due to an overall increase in profits before tax.
The common heads of intangible benefits vary from organisation to organisation, but some of
them are as follows:
i. Benefits due to increased working satisfaction of employees.
ii. Benefits due to the increased quality of products and/or services provided to the end-customer.
iii. Benefits in terms of business growth due to better customer services.
iv. Benefits due to an increased brand image.
v. Benefits due to captured errors, which could not be captured in the current system.
vi. Savings on the costs of extra activities that were carried out earlier and are not required
to be carried out in the new system.
17. Describe the concept and procedure used in constructing DFDs, using an example of your
own to illustrate. [Apr-04]
Data flow diagram (DFD):
1. A DFD is a process model used to depict the flow of data through a system & the work or
processing performed by the system.
2. It is also known as a bubble chart, transformation graph & process model.
3. A DFD is a graphic tool to describe & analyze the movement of data through a system,
using the processes, stores of data & delays in the system.
4. DFDs are of two types:
A) Physical DFD
B) Logical DFD
Physical DFD:
Represents an implementation-dependent view of the current system & shows what tasks
are carried out & how they are performed. Physical characteristics are:
1. names of people
2. form & document names or numbers
3. names of departments
4. master & transaction files
5. locations
6. names of procedures
Logical DFD:
Represents an implementation-independent view of the system & focuses on the flow of
data, rather than on the specific devices, storage locations, or people in the system. It does not
specify the physical characteristics listed above for physical DFDs.
The most useful approach to developing an accurate & complete description of a system begins with
the development of a physical DFD, which is then converted to a logical DFD.
Procedure:
Step 1: Make a list of business activities & use it to determine:
1. External entities
2. Data flows
3. Processes
4. Data stores
Step 2: Draw a context-level diagram:
The context-level diagram is a top-level diagram & contains only one process, representing
the entire system. Anything that is not inside the context diagram will not be part of the
system study.
Step 3: Develop a process chart:
It is also called a hierarchy chart or decomposition diagram. It shows the top-down
functional decomposition of the system.
Step 4: Develop the first-level DFD:
It is also known as diagram 0 or the level 0 diagram. It is the explosion of the context-level
diagram. It includes data stores & external entities. Here the processes are numbered.
Step 5: Draw more detailed levels:
Each process in diagram 0 may in turn be exploded to create a more detailed DFD. New
data flows & data stores are added. This is the decomposition/levelling of processes, e.g. for a
library management system.
20. Discuss the six special system tests. Give specific examples.
Ans:
2. Storage Testing:
This test is carried out to determine the capacity of the system to store transaction
data on a disk or in other files. Capacities here are measured in terms of the number of records
that a disk will handle or a file can contain. If this test is not carried out, then there is a
possibility that during installation one may discover that there is not enough storage capacity
for transactions and master file records.
4. Recovery Testing:
The analyst must never be too sure of anything. He must always be prepared for the
worst. One should assume that the system will fail and data will be damaged or lost. Even
though plans and procedures are written to cover these situations, they too must be tested.
5. Procedure Testing:
Documentation & manuals telling the user how to perform certain functions are tested
quite easily by asking the user to follow them exactly through a series of events. Not including
instructions about aspects such as when to press the Enter key, or removing the diskettes before
switching off the power, and so on, could cause problems. This type of testing brings out what is
not mentioned in the documentation, as well as the errors in it.
6. Human Factors Testing:
If, during processing, the screen goes blank, the operator may start to wonder what is
happening, and may just do things like press the Enter key a number of times, or switch off the
system, and so on. But if a message is displayed saying that processing is in progress and asking
the operator to wait, then these types of problems can be avoided.
Thus, during this test we determine how users will use the system when processing
data or preparing reports.
As we have noticed, these special tests are used for some special situations, and
hence the name Special System Tests.
21. There are two ways of debugging program software: bottom-up and top-down. How do they
differ?
Ans:
It is a long-standing principle of programming style that the functional elements of a
program should not be too large. If some component of a program grows beyond the stage
where it is readily comprehensible, it becomes a mass of complexity which conceals errors. Such
software will be hard to read, hard to test, and hard to debug.
In accordance with this principle, a large program must be divided into pieces, and the
larger the program, the more it must be divided. How do you divide a program? The traditional
approach is called top-down design: you say "the purpose of the program is to do these seven
things, so I divide it into seven major subroutines. The first subroutine has to do these four
things, so it in turn will have four of its own subroutines," and so on. This process continues until
the whole program has the right level of granularity: each part large enough to do something
substantial, but small enough to be understood as a single unit.
As well as top-down design, some programmers follow a principle which could be called
bottom-up design: changing the language to suit the problem. It is worth emphasizing that
bottom-up design does not mean just writing the same program in a different order. When you
work bottom-up, you usually end up with a different program. Instead of a single, monolithic
program, you will get a larger language with more abstract operators, and a smaller program
written in it. Instead of a lintel, you'll get an arch.
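The top-down division described above can be sketched in miniature. The report task and the subroutine names are made-up assumptions; the top-level function is nothing but the outline of the program's purpose, delegated to small, individually comprehensible units:

```python
def read_records():
    """Subroutine 1: obtain the input data (stubbed here)."""
    return [("widget", 3), ("gadget", 5)]

def summarize(records):
    """Subroutine 2: compute the figure the report needs."""
    return sum(qty for _, qty in records)

def format_report(total):
    """Subroutine 3: present the result."""
    return f"Total items: {total}"

def report_program():
    """The top level: just the outline of what the program does."""
    return format_report(summarize(read_records()))

print(report_program())  # Total items: 8
```

A bottom-up designer would instead grow `read_records`-style utilities first, reusing them across programs, and only then compose the specific report from that substrate.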
thus there is less chance for errors there. As industrial designers strive to reduce the number of
moving parts in a machine, experienced Lisp programmers use bottom-up design to
reduce the size and complexity of their programs.
2. Bottom-up design promotes code re-use. When you write two or more programs, many
of the utilities you wrote for the first program will also be useful in the succeeding ones.
Once you've acquired a large substrate of utilities, writing a new program can take only
a fraction of the effort it would require if you had to start with raw Lisp.
3. Bottom-up design makes programs easier to read. An instance of this type of abstraction
asks the reader to understand a general-purpose operator; an instance of functional
abstraction asks the reader to understand a special-purpose subroutine.
4. Because it causes you always to be on the lookout for patterns in your code, working
bottom-up helps to clarify your ideas about the design of your program. If two distant
components of a program are similar in form, you'll be led to notice the similarity and
perhaps to redesign the program in a simpler way.
Top-Down
Advantages
1. Design errors are trapped earlier.
2. A working (prototype) system is available early.
3. Enables early validation of the design.
4. No test drivers are needed.
5. The control program plus a few modules forms a basic early prototype.
6. Interface errors are discovered early.
7. Modular features aid debugging.
Disadvantages
1. Difficult to produce a stub for a complex component.
2. Difficult to observe test output from top-level modules.
3. Test stubs are needed.
4. Extended early phases dictate a slow manpower buildup.
5. Errors in critical modules at low levels are found late.
Bottom-Up
Advantages
1. Easier to create test cases and observe output.
2. Uses simple drivers for low-level modules to provide data and the interface.
3. Natural fit with OO development techniques.
4. No test stubs are needed.
5. Easier to adjust manpower needs.
6. Errors in critical modules are found early.
22. Explain how you would expect documentation to help analysts and designers.
Introduction:
Documentation is not a step in the SDLC. It is an activity ongoing in every phase of the SDLC. It
is about developing documents, initially as a draft, later as a reviewed document, and then a
signed-off document. The document is born either after it is signed-off by an authority or after
its review, and it carries an initial version number. However, the document also undergoes changes,
and then the only way to keep your document up to date is to incorporate these changes.
Software Documentation helps Analysts and Designers in the following ways:
1. The development of software starts with abstract ideas in the minds of the Top
Management of User organization, and these ideas take different forms as the software
development takes place. The Documentation is the only link between the entire
complex processes of software development.
2. The documentation is a written communication, therefore, it can be used for future
reference as the software development advances, or even after the software is
developed, it is useful for keeping the software up to date.
3. The documentation carried out during a SDLC stage, say system analysis, is useful for the
respective system developer to draft his/her ideas in the form which is shareable with
the other team members or Users. Thus it acts as a very important media for
communication.
4. The document reviewer(s) can use the document for pointing out the deficiencies in
them, only because the abstract ideas or models are documented. Thus, documentation
provides facility to make abstract ideas, tangible.
5. When the draft document is reviewed and recommendations incorporated, the same is
useful for the next stage developers, to base their work on. Thus documentation of a
stage is important for the next stage.
6. Documentation is very important because it records key decisions about freezing
the system requirements, the system design, and implementation decisions,
agreed between the Users and Developers or amongst the developers themselves.
7. Documentation provides a lot of information about the software system. This makes it
very useful tool to know about the software system even without using it.
8. Since members keep getting added to a software development team as the software
development project goes on, the documentation acts as an important source of detailed
and complete information for the newly joined members.
9. Also, the User organization may spread implementation of a successful software system
to a few other locations in the organization. The documentation will help the new Users to
know the operations of the software system. The same advantage can be drawn when a
new User joins the existing team of Users. Thus documentation makes Users
productive on the job very fast and at low cost.
10. Documentation is live and important as long as the software is in use by the User
organization.
11. When the User organization starts developing a new software system to replace this
one, even then the documentation is useful; e.g. the system analysts can refer to it as a
starting point for discussions on the new system requirements.
Hence, we can say that Software documentation is a very important aspect of SDLC.
23. What are the major threats to system security? Which one is the most serious and
important, and why?
System Security Threats
ComAir’s system crash on December 24, 2004, was just one example showing that the
availability of data and system operations is essential to ensure business continuity. Due to
resource constraints, organizations cannot implement unlimited controls to protect their
systems. Instead, they should understand the major threats, and implement effective controls
accordingly. An effective internal control structure cannot be implemented overnight, and
internal control over financial reporting must be a continuing process.
The term “system security threats” refers to the acts or incidents that can and will affect
the integrity of business systems, which in turn will affect the reliability and privacy of business
data. Most organizations are dependent on computer systems to function, and thus must deal
with systems security threats. Small firms, however, are often understaffed for basic
information technology (IT) functions as well as system security skills. Nonetheless, to protect a
company’s systems and ensure business continuity, all organizations must designate an
individual or a group with the responsibilities for system security. Outsourcing system security
functions may be a less expensive alternative for small organizations.
Viruses
A computer virus is a software code that can multiply and propagate itself. A virus can
spread into another computer via e-mail, downloading files from the Internet, or opening a
contaminated file. It is almost impossible to completely protect a network computer from virus
attacks; the CSI/FBI survey indicated that virus attacks were the most widespread attack for six
straight years since 2000.
Denial of Service
A denial of service (DoS) attack is specifically designed to interrupt normal system
functions and affect legitimate users’ access to the system. Hostile users send a flood of fake
requests to a server, overwhelming it and making a connection between the server and
legitimate clients difficult or impossible to establish. The distributed denial of service (DDoS)
allows the hacker to launch a massive, coordinated attack from thousands of hijacked (zombie)
computers remotely controlled by the hacker. A massive DDoS attack can paralyze a network
system and bring down giant websites. For example, the 2000 DDoS attacks brought down
websites such as Yahoo! and eBay for hours. Unfortunately, any computer system can be a
hacker’s target as long as it is connected to the Internet.
access when there is no business need. The LAN should be in a controlled environment accessed
by authorized employees only. Employees should be allowed to access only the data necessary
for them to perform their jobs.
System Penetration
Hackers penetrate systems illegally to steal information, modify data, or harm the
system.
Telecom Fraud
In the past, telecom fraud involved fraudulent use of telecommunication (telephone) facilities.
Intruders often hacked into a company’s private branch exchange (PBX) and its administration or
maintenance port for personal gain, including free long-distance calls, stealing (or changing)
information in voicemail boxes, diverting calls illegally, wiretapping, and eavesdropping.
intrusion-prevention systems and attack web applications directly. They can inject commands
into databases via the web application user interfaces and surreptitiously steal data, such as
customer and credit card information.
Website Defacement
Website defacement is the sabotage of webpages by hackers inserting or altering
information. The altered webpages may mislead unknowing users and represent negative
publicity that could affect a company’s image and credibility. Web defacement is in essence a
system attack, and the attackers often take advantage of undisclosed system vulnerabilities or
unpatched systems.
24. What do you mean by structured analysis? Describe the various tools used for structured
analysis with the pros and cons of each.
STRUCTURED ANALYSIS:-
A set of techniques and graphical tools that allow the analyst to develop a new kind of
system specification that is understandable to the user.
Tools of Structured Analysis are:
1: DATA FLOW DIAGRAM (DFD) -- [Bubble Chart]
Supports modular design.
Symbols:
i. External Entities: a rectangle or cube -- represents a SOURCE or DESTINATION of data.
ii. Data Flows/Arrows: show the MOTION of data.
iii. Processes/Circles/Bubbles: transform INCOMING data flows into OUTGOING ones.
iv. Open Rectangles: a file or data store (data at rest, or a temporary repository of data).
- A DFD describes data flow logically rather than how it is physically processed.
- It is independent of hardware, software, data structure, or file organization.
ADV - Ability to represent data flow.
Useful in both HIGH- and LOW-level analysis.
Provides good system documentation.
DISADV - Weak on input and output details.
Confusing initially.
Requires several iterations.
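A DFD drawn with the four symbols above can also be captured as plain data, which makes simple consistency checks possible. The fragment below is a sketch only — the entity, process, and store names describe an invented order-processing example, not a real system:

```python
# A tiny DFD as plain data. All names are invented for illustration.
dfd = {
    "external_entities": ["Customer"],
    "processes": ["Validate Order", "Update Inventory"],
    "data_stores": ["Orders File"],
    # Each flow is (source, label, destination): data in MOTION.
    "flows": [
        ("Customer", "order details", "Validate Order"),
        ("Validate Order", "valid order", "Orders File"),
        ("Orders File", "pending orders", "Update Inventory"),
    ],
}

def check_flows(d):
    """DFD rule of thumb: every flow must touch at least one process;
    data may not move directly between an entity and a store."""
    procs = set(d["processes"])
    return all(src in procs or dst in procs for src, _, dst in d["flows"])

print(check_flows(dfd))  # True: every flow above involves a process
```

Encoding the diagram this way is one way CASE tools can validate a DFD mechanically before it is levelled further.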
3: DECISION TREE --
ADV - Used to verify logic.
Used where complex decisions are involved.
Used, for example, to calculate discounts or commissions in an inventory control system.
DISADV - A large number of branches with many through paths makes the analysis more
difficult rather than easier.
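The discount example mentioned above can be written as a decision tree directly in code: each branch point is a condition, each leaf an action. The policy and figures below are invented purely for illustration:

```python
def discount(order_total, is_member):
    """Toy decision tree for a discount policy (all figures invented)."""
    if is_member:                    # first branch point
        if order_total >= 1000:
            return 0.15              # leaf: member, large order
        return 0.10                  # leaf: member, small order
    if order_total >= 1000:
        return 0.05                  # leaf: non-member, large order
    return 0.0                       # leaf: non-member, small order

print(discount(1200, True))   # 0.15
print(discount(500, False))   # 0.0
```

With four leaves this reads easily; as the DISADV above notes, a tree with many branches and through paths quickly becomes harder to follow than the equivalent decision table.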
4: DECISION TABLE -- A matrix of rows and columns that shows conditions and actions.
Types: LIMITED / EXTENDED / MIXED entry decision tables.
CONDITION STUB | CONDITION ENTRY
ACTION STUB | ACTION ENTRY
ADV - The condition statements identify the relevant conditions.
The condition entries tell which value applies for any particular condition.
Actions are based on the condition statements and entries above.
DISADV - Drawing the table becomes cumbersome when there are many conditions and
respective entries.
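The stub/entry layout above maps naturally onto a rule list in code: each rule pairs one column of condition entries with its action entry. The conditions and discount values below are invented for illustration:

```python
# A limited-entry decision table as data.
# Condition stub: (is_member, order_total >= 1000); action stub: discount.
RULES = [
    # (is_member, large_order) -> discount   (one rule per table column)
    ((True,  True),  0.15),
    ((True,  False), 0.10),
    ((False, True),  0.05),
    ((False, False), 0.00),
]

def lookup(is_member, large_order):
    """Find the action entry whose condition entries match the inputs."""
    for conditions, action in RULES:
        if conditions == (is_member, large_order):
            return action
    # Every combination of condition entries must be covered.
    raise ValueError("incomplete decision table")

print(lookup(False, True))  # 0.05
```

Because every combination of entries appears exactly once, completeness and contradictions are easy to check — the main reason analysts prefer tables over trees when conditions multiply.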
5: STRUCTURED ENGLISH -- Three kinds of statements are used:
i. Sequence - All statements written in sequence are executed sequentially.
The execution of a statement does not depend on the existence of any other statement.
ii. Selection - Makes a choice from given options, based on conditions.
Normally the condition is written after 'IF' and the action after 'THEN'.
For two-way selection, 'ELSE' is used.
iii. Iteration - When a set of statements is to be performed a number of times,
the statements are put in a loop.
ADV - Easy to understand.
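The three constructs map one-to-one onto code, which is why Structured English translates so directly into programs. A short sketch with invented order data and an invented 5% discount rule:

```python
orders = [250, 1200, 80]   # invented sample data

# SEQUENCE: statements executed one after another.
total = 0.0
count = 0

# ITERATION: a set of statements repeated for each order.
for amount in orders:
    # SELECTION: IF ... THEN ... ELSE chooses between actions.
    if amount >= 1000:
        total += amount * 0.95   # large orders get 5% off (invented rule)
    else:
        total += amount
    count += 1

print(count, total)
```

Any processing logic expressible in Structured English can be built from just these three constructs, which is exactly the claim of structured programming.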
26. Build the current Admission System for MCA. Draw a context-level diagram, a DFD up to two
levels, and an ER diagram, and identify the data flows, data stores, and processes. Draw the
input and output screens. (May-04)
EVENT LIST FOR THE CURRENT MCA ADMISSION SYSTEM
1. Administrator enters the college details.
2. Issue of admission forms.
3. Administrator enters the student details into the system.
4. Administrator verifies the details of the student.
5. System generates the hall tickets for the student.
6. Administrator updates the CET score of student in the system.
7. System generates the score card for the students.
8. Student enters his list of preference of college into the system.
9. System generates college-wise student list according to CET score.
10. System sends the list to the college as well as student.
Input Files
1) Student Details Form:
Student Name: ___________________________
Student Address:
Student Contact NO:
Student Qualification:
Students Percentage 12th:
Students Percentage 10th:
Students Degree Percentage:
(optional)
2) Student preference List:
Student_rollNo: _________
Preference No 1:
Preference No 2:
Preference No 3:
Preference No 4:
Preference No 5:
3) Student CET Details:
Student id :
Student rollno:
Student Score:
Student Percentile:
4) College List :
College Name:
College Address:
Seats Available:
Fees :
OUTPUT FILES
1) Student ScoreCard
Student RollNo :
Student Name :
Student Score:
Percentile :
2) Student List to College
RollNo Name Score Percentile
1
2
3
4
5
6
7
8
9
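The event list and forms above imply a small data model. The sketch below, with invented sample data, shows event 9 — generating a college-wise student list ordered by CET score — over a minimal `Student` record; the field names are my own reading of the forms, not the exam's required design:

```python
from dataclasses import dataclass

@dataclass
class Student:
    roll_no: int
    name: str
    cet_score: int
    preferences: list   # ordered list of preferred college names

# Invented sample data for illustration.
students = [
    Student(1, "Asha", 142, ["College A", "College B"]),
    Student(2, "Ravi", 151, ["College A"]),
    Student(3, "Meena", 130, ["College B", "College A"]),
]

def college_wise_list(students, college):
    """Event 9: students whose first preference is `college`,
    ordered best CET score first."""
    picked = [s for s in students
              if s.preferences and s.preferences[0] == college]
    return sorted(picked, key=lambda s: s.cet_score, reverse=True)

merit = college_wise_list(students, "College A")
print([s.name for s in merit])  # ['Ravi', 'Asha']
```

A full answer would also allocate seats down the preference list once a college fills up; the sketch covers only the merit-ordering step.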
26. RAD
Rapid Application Development (RAD) is an incremental software development process
model that emphasizes an extremely short development cycle. If requirements are well
understood and the project scope is constrained, the RAD process enables a development team
to create a fully functional system within a very short time period (e.g., 60 to 90 days).
1. Business modeling:
The information flow among business functions is modeled in a way that answers the
following questions:
What information drives the business process?
What information is generated?
Who generates it?
Where does the information go?
Who processes it?
2. Data Modeling:
The information flow defined as part of the business modeling phase is refined into a set
of data objects that are needed to support the business.
The characteristics (called attributes) of each object are identified and the relationships
between these objects defined.
3. Process Modeling:
The data objects defined in the data modeling phase are transformed to achieve the
information flow necessary to implement a business function.
Processing descriptions are created for adding, modifying, deleting, or retrieving a data
object.
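Those four processing descriptions are the familiar create/read/update/delete operations. A minimal sketch over an in-memory keyed store — the function names and the sample "student" object are invented for illustration, not part of RAD itself:

```python
# Processing descriptions for one data object, sketched as four functions.
store = {}   # in-memory stand-in for the data store

def add(obj_id, data):             # adding (create)
    store[obj_id] = dict(data)

def modify(obj_id, **changes):     # modifying (update); assumes obj_id exists
    store[obj_id].update(changes)

def retrieve(obj_id):              # retrieving (read); None if absent
    return store.get(obj_id)

def delete(obj_id):                # deleting; silently ignores missing ids
    store.pop(obj_id, None)

add(1, {"name": "Asha", "score": 142})
modify(1, score=150)
print(retrieve(1))                 # {'name': 'Asha', 'score': 150}
delete(1)
print(retrieve(1))                 # None
```

In an actual RAD project these descriptions would typically be generated against a database by a fourth-generation tool rather than hand-written.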
4. Application generation:
RAD assumes the use of fourth-generation techniques. Rather than creating software
using conventional third-generation programming languages, the RAD process works to
reuse existing program components (when possible) or create reusable components (when
necessary). In all cases, automated tools are used to facilitate construction of the software.
5. Testing and Turnover:
Since the RAD process emphasizes reuse, many of the program components have already
been tested. This reduces overall testing time. However, new components must be tested,
and all interfaces must be fully exercised.
The time constraints imposed on a RAD project demand “scalable scope”. If a business
application can be modularized in a way that enables each major function to be completed in
less than three months, it is a candidate for RAD. Each major function can be addressed by a
separate RAD team and then integrated to form a whole.
• If high performance is an issue and performance is to be achieved through tuning the
interfaces to system components, the RAD approach may not work.
• RAD is not appropriate when technical risks are high. This occurs when a new application
makes heavy use of new technology or when the new software requires a high degree of
interoperability with existing computer programs.