UNIT I
INTRODUCTION
Data, the raw material for information, is defined as groups of non-random symbols which represent quantities, actions, objects, etc. Data items in information systems are formed from characters: alphabetic, numeric or special symbols organized for processing purposes into data structures, file structures and databases. Data relevant to information processing and decision making may also be in the form of text, images or voice. Information, however, is generally defined as data that is meaningful or useful to the recipient. Data items are therefore the raw material for producing information.
For instance, when I want to take a decision on purchasing a bike, mileage and pickup are information, whereas the kilometres from destination 1 to destination 2, which are useful in calculating the mileage and hence in taking the decision, are termed data. The terms "information" and "data" are frequently used interchangeably.
The system utilizes computer hardware and software; manual procedures; models for analysis, planning, control and decision making; and a database.
INTEGRATED SYSTEMS
In the design of an information system, data and applications are kept separate. A separate database is maintained, while integration occurs across many applications and for a variety of users.
IMPORTANCE OF A DATABASE
All access to and use of the database is controlled through a database management system. All applications utilizing a particular data item access the same data item, which is stored in only one place. A single update of the data item updates it for all uses. Integration through a database management system requires a central authority for the
database. The data can be stored in one central computer or dispersed among several
computers; the overriding requirement is that there be an organizational function to exercise
control.
UTILIZATION OF MODELS
Data need to be processed to arrive at business decisions. To do this, the processing of data items is based on a decision model. For instance, an investment decision relating to new capital expenditure might be processed in terms of a capital expenditure decision model.
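As a hedged illustration (not taken from the text), a capital expenditure decision model often reduces to a net present value calculation: accept the expenditure if the discounted cash flows exceed the outlay. The cash flows, discount rate and project below are hypothetical.

# Minimal sketch of a capital expenditure decision model (hypothetical figures).
# Decision rule: accept if the net present value (NPV) of projected cash
# flows, discounted at the firm's cost of capital, is positive.

def npv(rate, cash_flows):
    """NPV of cash_flows; cash_flows[0] is the initial outlay (negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical project: 100,000 outlay, five annual inflows of 30,000, 10% rate.
project = [-100_000, 30_000, 30_000, 30_000, 30_000, 30_000]
value = npv(0.10, project)
print(f"NPV = {value:,.2f} -> {'accept' if value > 0 else 'reject'}")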
The word 'decision' has been derived from the Latin word 'decidere', which means 'a cutting away or a cutting off'. Thus, a decision involves a cutting off of alternatives, between those that are desirable and those that are not. A decision is a choice of a desirable alternative. Decision making is the process of arriving at a decision: the process by which an individual or organisation selects one position or action from several alternatives.
Shull et al. have defined decision making as follows:
A non-programmed decision is relevant for solving a unique or unusual problem for which the various alternatives cannot be decided in advance. For such a decision, the situation is not well-structured and the outcomes of the various alternatives cannot be arranged in advance. For instance, if an organisation wants to expand, it may have several alternative routes, such as a takeover or acquisition of an existing company. In each situation, the managers evaluate the likely outcomes of each alternative to arrive at a decision, considering various factors, many of which lie outside the organisation. Non-programmed decisions are novel and non-recurring; therefore, readymade solutions are not available. Since these decisions are of high importance because of their long-term consequences, they are made by managers at higher levels in the organisation.
The decision maker makes today's decision for future conditions whose impact will be known only in a future period. The future conditions for a decision vary along a continuum ranging from perfect certainty to complete uncertainty.
In each of these conditions, knowledge of outcome of the decision differs. An
outcome defines what will happen if a particular alternative or course of action is chosen and
implemented. Knowledge of outcome of each decision alternative is important when there
are multiple alternatives and only one alternative is to be chosen. In the analysis for decision
making, three types of knowledge with respect to outcomes are usually distinguished as
shown in Table 1.1.
TABLE 1.1
TABLE 1.2
PROBLEM IDENTIFICATION
Without proper formulation, the identified problem remains vague. At this stage, the problem identified earlier is defined more precisely and its complexity is clarified. MacCrimmon and Taylor have suggested four strategies for reducing complexity and formulating a problem:
ALTERNATIVE GENERATION
In this phase, decision maker generates possible alternatives through which the
problem can be solved. If there is only one way of solving a problem, no question of decision
arises. So, the decision maker must try to find out the various alternatives available in order
to get the most satisfactory result of a decision. Identification of various alternatives not only
serves the purpose of selecting the most satisfactory one, but it also avoids bottlenecks in operation, as alternatives are available if a particular decision goes wrong. However, it should be borne in mind that it may not be possible to consider all alternatives, either because some alternatives cannot be considered for selection owing to obvious limitations of the decision maker, or because information about all alternatives may not be available. Therefore, while generating alternatives, the concept of the limiting factor should be applied.
A decision maker can use several sources for identifying alternatives: his own past experience; practices followed by others; creative, forecasting and statistical techniques; and research.
CHOOSING AN ALTERNATIVE
In this phase, the best alternative is chosen to solve the problem. Evaluation of
various alternatives presents a clear picture as to how each of these contributes to solution of
the problem. A comparison is made among likely outcomes of the various alternatives and
the most appropriate one is chosen. The choice aspect of decision making is thus related to deciding the most acceptable alternative which fits the organisational objectives. It may
be seen that the chosen alternative should be acceptable in the light of organisational
objectives, and it is not necessary that the chosen alternative is the best one. However, all
alternatives available for decision making will not be taken for detailed evaluation because of
the obvious limitations of managers in evaluating all alternatives.
IMPLEMENTATION
Even though the actual process of decision making ends with the choice of an alternative through which the objectives can be achieved, this phase helps managers to know in what way their choice has contributed. The implementation of the decision may be seen as an integral aspect of decision making.
There are different methods to evaluate various alternatives through which a problem
can be solved. In evaluating alternatives, an attempt is made to find out the likely outcome of
each alternative so that the alternative which is likely to provide maximum outcome is
chosen. In evaluating the likely outcomes of the various alternatives, the following methods are generally used:
1. Optimization techniques.
2. Pay-off matrices (see the sketch after this list).
3. Decision tree.
4. Decision table.
5. Game theory.
6. Elimination by aspects.
7. Decisional balance sheet
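To make the pay-off matrix method concrete, the following sketch (with hypothetical pay-offs and probabilities, not taken from the text) computes the expected value of each alternative under risk, and the maximin choice when probabilities are unknown.

# Sketch of a pay-off matrix evaluation (hypothetical figures).
# Rows are decision alternatives, columns are states of nature.
payoffs = {
    "expand plant": [80, 30, -20],   # pay-off under high/medium/low demand
    "subcontract":  [50, 25,   5],
    "do nothing":   [ 0,  0,   0],
}
probabilities = [0.3, 0.5, 0.2]      # probabilities known -> decision under risk

# Under risk: choose the alternative with the highest expected value.
expected = {a: sum(p * v for p, v in zip(probabilities, row))
            for a, row in payoffs.items()}
best_under_risk = max(expected, key=expected.get)

# Under uncertainty (probabilities unknown): the maximin rule picks the
# alternative whose worst outcome is best.
best_maximin = max(payoffs, key=lambda a: min(payoffs[a]))

print("expected values:", expected)
print("choice under risk:", best_under_risk)
print("maximin choice under uncertainty:", best_maximin)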
DEFINITION
FIGURE 1.2 System model: input, process, output.
FIGURE 1.3 Subsystems of a system: input subsystem, processing subsystem, storage subsystem (memory units), output subsystem, and external interfaces.
A probabilistic system is so named because of its probabilistic nature, so there is a chance of error in predicting the output of the system. A pricing system (for instance, a share market price) is a probabilistic system which involves prediction of demand and the market, but the exact value at any given time is not known.
SUBSYSTEMS
A system is an integration of various subsystems and the interfaces between them. This is a basic concept in the analysis and development of systems. Developing a complex system is tedious, so the system is decomposed into subsystems. This process of decomposition continues until the subsystems are independent in behaviour and manageable in size. For instance, a funds management system is described in the figure as a combination of subsystems.
FIGURE 1.4 Funds management system as a combination of subsystems.
The information system processes the received input data and produces outputs. The basic system model consists of input, process and output, as well as data storage. Rather than collecting and transforming data afresh each time, keeping data in storage helps in subsequent use. This basic information processing model is useful in understanding not only the overall information processing system but also individual information processing applications. Each application may be analyzed in terms of input, storage, processing and output. The information processing system has functional subsystems and activity-based subsystems.
FIGURE 1.5 Decision structure by management level: higher-level management makes unstructured decisions (strategic planning); middle management handles management control; lower-level management and clerical staff make structured decisions (operational control and transaction processing).
PRODUCTION SUBSYSTEM
PERSONNEL SUBSYSTEM
LOGISTICS SUBSYSTEM
Operational control and decision making (MIS, DSS) over an integrated database.
SUBSYSTEMS OF AN MIS
TABLE 1.4
TPS serve the needs of the operational level of the organization. They record the day-to-day transactions that form the basis for the conduct of business, and they are highly structured. Transaction processing may be performed manually or with mechanical machines; computer-based data processing has altered the speed and complexity of transaction processing, but not its basic function.
TPS are major producers of information for the other types of systems at the various levels of management. Without transaction processing, normal organizational functioning would be impossible, and the data for management activities would not be available. For instance, in banking, transaction processing systems supply data to the organization's ledger systems, which are responsible for maintaining the records of the organization and for analyzing its performance through the balance sheet and so on. TPS can also connect the organization with its stakeholders (customers and suppliers).
The transaction processing cycle begins with a transaction which is recorded in some
way. Although hand-written forms are still very common, transactions are often recorded
directly to a computer by the use of an online terminal. Recording of the transaction is
generally the trigger to produce a transaction document. Data from the transaction is
frequently required for the updating of master files; this updating may be performed
concurrently with the processing of transaction document or by a subsequent computer run.
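The cycle described above can be sketched in code. The record layout, validation rule and master file below are hypothetical, intended only to show the record, validate, update and document sequence.

# Minimal sketch of a transaction processing cycle (hypothetical record layout).
# Each transaction is recorded, validated, used to update a master file,
# and a transaction document (here a printed line) is produced.

master_file = {"A-100": {"balance": 500.0}}    # account master records

def validate(txn):
    """Reject transactions for unknown accounts or with non-positive amounts."""
    return txn["account"] in master_file and txn["amount"] > 0

def process(transactions):
    control_log = []                           # invalid records kept for correction
    for txn in transactions:
        if not validate(txn):
            control_log.append(txn)
            continue
        # Update the master file concurrently with producing the document.
        master_file[txn["account"]]["balance"] += txn["amount"]
        print(f"DOC: credited {txn['amount']:.2f} to {txn['account']}")
    return control_log

invalid = process([
    {"account": "A-100", "amount": 250.0},
    {"account": "B-999", "amount": 90.0},      # unknown account -> control log
])
print("for correction:", invalid)
print("master file:", master_file)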
FIGURE 1.7 Transaction processing cycle: sales or purchase transactions are entered on transaction forms, assembled into batches, and converted from forms to input records; validation separates valid transactions, which update the records and produce transaction reports, from invalid records, which are written to a control log for correction.
Decision support systems, executive information systems and expert systems are used at the middle and top levels of management; they will be discussed in Unit 3.
MANAGEMENT ACCOUNTING
BEHAVIOURAL THEORY
MIS helps in the effective functioning of the organization. The fields of management and behavioural theory help in understanding the functions of an MIS and the behaviour of managers at each level of the organization: individual and group decision making, individual and team motivation, leadership styles and the traits of leaders, organizational change management and group dynamics, organisation structure and culture, and so on. Knowledge of these concepts helps the designer of an MIS understand the behaviour patterns and the types of decisions made by managers at each level of the organization.
OPERATIONS RESEARCH
COMPUTER SCIENCE
Computer science deals with the hardware and software of computer systems. Knowledge of computer science enables faster information storage, processing and retrieval. Computer science covers the concepts of algorithms, computation and data structures, which are important in the development of an MIS. However, a modern MIS is not merely an extension of computer science; the emphasis in MIS is on the application of the technical capabilities that computer science has made available. The fundamental processes of management information systems relate more to organizational processes and organizational effectiveness than to computer algorithms.
As long as organizations are small and have limited operational goals, manual information systems are satisfactory. Many trends in the development of industry and commerce have made computer-based information systems essential to efficiently run organizations. These are:
The size of organizations is becoming larger. This is particularly true in India due to
increase in population and rapid rate of industrial development.
Computer - based processing enables the same data to be processed in many ways,
based on needs, thereby allowing managers to look at the performance of an organization
from different angles.
As the volume of data has increased, and as the variety of information and its timeliness have become of great importance, computer-based information processing has become essential for efficiently managing organizations.
The general socio - economic environment demands more up-to-date and accurate
information. Human systems are changing faster than ever before. Governmental regulations
have become complex. Organizations have to interact with many other interested parties such
as consumer groups, environmental protection groups, financial institutions, etc., which did
not exist before.
All the above developments demand decision making based on up-to-date, well-analyzed and well-presented information, rather than the rules of thumb and hunches of an earlier era.
1.8 THE ROLE OF A SYSTEMS ANALYST
An MIS is possible without computers too, but when an organization adopts a computer-based information system, the systems analyst plays a vital role. A systems analyst's primary responsibility is to identify the information needs of an organization and obtain a logical design of an information system which will meet those needs. Such an information system will be a combination of manual and computer-based procedures for processing a collection of data, useful to managers in taking decisions. The analyst must have knowledge of the data flows and processes of the organization.
The personnel involved in the management information systems are: end users (clerical,
operational and top level managers); professionals (database administrator, programmers).
The systems analyst coordinates the efforts of all these personnel to effectively develop and
operate computer - based information systems.
Requirements must be collected from the users of the system, who may not be very familiar with technical terminology. This is best achieved by holding a common meeting with all the users and arriving at a consensus.
The systems analyst requires good interpersonal relations and diplomacy. He must be able to
convince all the users about the soundness of the group decision and obtain their cooperation.
An analyst studies the problem in depth and helps in choosing the best solution, weighing the relative difficulties of implementing each of the alternatives.
An analyst is responsible for obtaining the functional specification; it must be precise, detailed and non-technical, so that users, clerks, middle-level managers and top managers of the organization are able to understand it.
He is also responsible for designing a system which is understandable and accommodates changes easily. The analyst must know the latest design tools to assist him in his task. As part of the design he must also create a system test plan.
A systems analyst must be familiar with all the functions of the organization. He must be aware of new technological changes, since he is responsible for feasibility analyses. He must be a good listener, a good communicator, a good diplomat, a conflict resolver, a motivator and an influencer.
Based on the needs and requirements of organizations, MIS has been evolving over time. MIS were operated manually before computers were applied in this area. The following table illustrates the evolution of information systems.
TABLE 1.6
All organizations are divided into many departments or sections, each department having an assigned functional responsibility. Consider, for example, an educational institute such as Anna University. Besides academic departments, it will typically have a central administrative office. The administrative office will be divided into many sections, each with an assigned function: typically a student section (which normally deals with student records, student admissions, etc.), an accounts section, a purchase section, a stores section, a personnel section, a medical section and a student hostel office. A hierarchical chart of the sections is shown in the figure and their functions in Table 1.7. A manufacturing organization, for instance, will have the functions shown in Table 1.8. Division of an organization into departments with specified functions is mainly intended to let each department focus on an area of responsibility. All departments have to coordinate their activities to meet the overall objectives of the organization; this coordination is normally provided by higher-level management. The functions of the various departments of a university are listed below.
TABLE 1.7
Sections: Functions
Student section: Students' admission records; administering admission tests; students' academic records; students' registration information; placement
Accounts section: University budget; payroll; general ledger of receipts/payments
TABLE 1.8
Sections: Functions
Production: Production planning and control; maintenance management; bill of materials processing
Marketing: Order processing; advertising; customer records/follow-up; sales analyses
Finance: Billing, payments; payroll; costing; share accounting; budget and finance planning; tax planning; resource mobilization
Personnel: Recruitment; records; training; deployment of labor; assessment/promotions
Stores: Stock ledger keeping; issues/reorder; receipts; enquiry processing
Purchase: Order processing; vendor development; vendor selection
Maintenance: Physical facilities; communication facilities; electricity and water supply
Research and development: Production improvement; product development; product testing; product design
We see that there are some common functions such as personnel, purchase, stores and
accounts and there are organization specific functions such as students section in a university
and production section in a manufacturing organization. This is a general observation.
Information processing methods, however, have general features regardless of the
organization for which they are designed.
SUMMARY
The vital role of information in an organization has been discussed, including how the manager should react to information about the external and internal environments. Information systems have become essential for helping organizations deal with changes in global economies and in business. The kinds of systems built today are very important for the organization's overall performance, especially in today's highly globalized and information-based economy. Information systems are driving both daily operations and organizational strategy. Powerful computers, software and networks, including the Internet, have helped organizations become more flexible, eliminate layers of management, separate work from location, coordinate with suppliers and customers, and restructure work flows, giving new powers to both line workers and management. Information technology provides managers with tools for more precise planning, forecasting and monitoring of the business. To maximize the advantages of information technology, there is a much greater need to plan the organization's information architecture and information technology infrastructure.
REVIEW QUESTIONS
2.1 OVERVIEW OF SYSTEM DEVELOPMENT
The systems life cycle is the oldest method for building information systems.
The life cycle methodology is a phased approach to building a system, dividing
systems development into formal stages.
The systems life cycle methodology maintains a very formal division of labour
between end users and information systems specialists. Technical specialists, such as
system analysts and programmers, are responsible for much of the systems analysis,
design, and implementation work; end users are limited to providing information
requirements and reviewing the technical staff’s work. The life cycle also emphasizes
formal specifications and paperwork, so many documents are generated during the course of a systems project.
The systems life cycle is still used for building large complex systems that
require a rigorous and formal requirements analysis, predefined specifications, and
tight controls over the systems - building process. However, the systems life cycle
approach can be costly, time consuming, and inflexible. Although systems builders
can go back and forth among stages in the life cycle, the systems life cycle is
predominantly a “waterfall” approach in which tasks in one stage are completed
before work for the next stage begins. Activities can be repeated, but volumes of new
documents must be generated and steps retraced if requirements and specifications
need to be revised. This encourages freezing of specifications relatively early in the
development process. The life cycle approach is also not suitable for many small
desktop systems, which tend to be less structured and more individualized.
The systems development life cycle activities depicted here usually take place
in sequential order. But some of the activities may need to be repeated or some may
take place simultaneously, depending on the approach to system building that is being
employed.
The systems analyst creates a road map of the existing organization and
systems, identifying the primary owners and users of data along with existing
hardware and software. The systems analyst then details the problems of existing
systems. By examining documents, work papers, and procedures; observing system
operations; and interviewing key users of the systems, the analyst can identify the
problem areas and objectives a solution would achieve. Often the solution requires
building a new information system or improving an existing one. The systems
analysis would include a feasibility study to determine whether that solution was
feasible, or achievable, from a financial, technical, and organizational standpoint.
Building a system can be broken down into six core activities.
FIGURE 2.1 The six core activities of systems development: system analysis, system design, programming, testing, conversion, and production and maintenance.
TABLE 2.1
1. FEASIBILITY STUDY
ECONOMIC FEASIBILITY
TECHNICAL FEASIBILITY
OPERATIONAL FEASIBILITY
LEGAL FEASIBILITY
Legal feasibility tries to ensure whether the new system meets the
requirements of various information technology regulations, such as privacy laws,
computer crime laws, software theft laws, malicious access to the data, international
laws, etc. Government of India has formulated a comprehensive Act, known as
Information Technology Act 2000 that provides regulations for the use of information
technology.
Many feasibility studies are disillusioning for both users and analysts. These studies often presuppose that, when the feasibility document is being prepared, the analyst is in a position to evaluate solutions. In order to make a feasibility study report meaningful for decision making, a logical procedure must be followed, consisting of the following steps.
1. FORMATION OF A PROJECT TEAM:
The team may consist of system analysts and user staff. In some cases, even
outside consultants and information system specialists may be included in the team.
The team should have a project leader who may guide the other members as to how to
proceed for the job.
Keeping in mind the objectives for which a new system is required, alternative
systems should be identified. All systems work on the principle of equifinality. This
principle suggests that a system can reach the same final state from differing initial
conditions and by a variety of paths. It implies that many information systems may be
able to achieve pre-determined objectives though their approaches may be different.
Therefore, at this stage, a number of systems should be identified so that the project
team has several alternatives to choose a system that best fits organisational
requirements.
At this stage the team identifies the various characteristics of systems so that
those systems that do not meet the initial selection criteria are eliminated as it is
difficult and time consuming to have a detailed evaluation of large number of
systems. These initial criteria may be in the form of the volume of investment required, operational efficiency, organisational constraints, etc. Only those systems that meet these initial criteria go to the next step.
Detailed performance and cost evaluation is carried out for those systems which pass successfully through the previous step. At this stage, the performance of each system is evaluated against the performance criteria set before the start of the feasibility study. Whatever criteria are set, there has to be as close a match as possible. Besides performance, each system should be evaluated in terms of costs too. The costs include all types of costs: the cost of initial investment in additional hardware, software and physical facilities; development and installation costs; the cost of training, updating the software and documentation; and the recurring operating cost. In many systems, the initial investment cost may be low but the operating cost high enough to offset the advantage of the lower initial cost. This fact should be taken into account. Weights can be assigned to performance criteria like system accuracy, growth potential, response time, user friendliness, etc. In the same way, weights can be assigned to the different components of cost.
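A weighted evaluation of candidate systems can be sketched as follows; the criteria, weights and ratings are hypothetical and only illustrate the scheme described above.

# Sketch of weighted performance scoring for candidate systems
# (hypothetical criteria, weights, and ratings on a 1-10 scale).
weights = {"accuracy": 0.4, "growth potential": 0.2,
           "response time": 0.2, "user friendliness": 0.2}

candidates = {
    "System A": {"accuracy": 8, "growth potential": 6,
                 "response time": 9, "user friendliness": 5},
    "System B": {"accuracy": 7, "growth potential": 9,
                 "response time": 6, "user friendliness": 8},
}

def weighted_score(ratings):
    return sum(weights[c] * ratings[c] for c in weights)

for name, ratings in candidates.items():
    print(name, "->", round(weighted_score(ratings), 2))

# The same scheme can be applied to cost components, with the final choice
# balancing the weighted performance score against the weighted cost.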
Based on the feasibility report, management takes suitable action including the
final selection of a system. Usually, a project report contains the following items:
1. Covering letter containing briefly the nature of the study, general finding and
recommendations.
2. Table of contents indicating location of various parts of the Study.
3. Overview of the study indicating its objectives and reasons undertaking it.
4. Detailed findings indicating the projected performance of the system and
costs involved.
5. Recommendations and conclusions suggesting to the management the most
beneficial and cost-effective system.
6. System design and implementation schedule indicating the time to be taken in
completing various activities and the time by which the system becomes ready
for operational use.
After the final decision about a system is made by management, subsequent activities
proceed.
2. REQUIREMENT ANALYSIS
Requirement analysis defines the scope of the system and the functions it is
expected to perform. If the system is not designed according to information
requirements it will fail to achieve its objective in spite of choosing the best system.
SECONDARY DATA
OBSERVATION
INTERVIEWS
Information about likely requirements can be collected through personal interviews. An interview is a formal, in-depth conversation conducted to gather information about how the present systems work and what modifications are required in them. Interviews can be used for two main purposes: as an exploratory device to identify relations or verify information, and to capture information as it exists. In conducting an interview, the analyst should proceed in the following manner:
QUESTIONNAIRES
The systems designer details the system specifications that will deliver the
functions identified during systems analysis. These specifications should address all
of the managerial, organizational, and technological components of the system
solution.
The physical system design phase, also called internal or detailed design,
consists of activities to prepare the detailed technical design of the application system.
The physical system design is based on the information requirements and the
conceptual design.
The physical system design work is performed by systems analysts and other
technical personnel. Users may participate in this phase, but much of the work
requires data processing expertise rather than user-function expertise. There are a number of different methods an analyst may employ. By supporting the systematic
process of expanding the level of detail and documenting the results, the methods aid
in reducing the complexity and cost of developing application systems and in
improving the reliability and modifiability of the designs.
Conceptual model: specifies the relationships between data. Entity-relationship analysis is used to group the data.
Relationships between data may take three forms:
• One relationship
• A fixed maximum number of relationships
• A variable number of relationships
FIGURE 2.3 Database design process: an E-R diagram, based on existing and potential applications, is used to design the conceptual model of the database; the performance of the physical model is then evaluated, and the model is implemented.
The next step is to design a logical model of the database. The logical model depends on the DBMS software. If it is a relational DBMS, then relations form the logical data model. There are two other types of DBMS, the network DBMS and the hierarchical DBMS; their logical data models differ from relations, and we will not discuss them in this text.
The physical data model is designed to ensure good performance. In this step
data on frequency of use of data elements, access time needs, etc., are taken into
account. The physical data model is implemented and evaluated. The system structure
is shown in Figure 2.4
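For a relational DBMS, the logical model consists of relations (tables). The sketch below, with a hypothetical student/course schema, suggests how a conceptual E-R model might be realized as relations, with an index added as a physical design decision for performance.

# Sketch: realizing a conceptual E-R model as a relational logical model,
# using Python's built-in sqlite3 module. The schema is hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Entities become relations; the enrolment relationship becomes a table
    -- holding foreign keys to both entities.
    CREATE TABLE student (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE course  (code TEXT PRIMARY KEY, title TEXT, max_size INTEGER);
    CREATE TABLE enrolment (
        student_id INTEGER REFERENCES student(id),
        course_code TEXT REFERENCES course(code)
    );
    -- Physical design decision: an index on a frequently accessed column.
    CREATE INDEX idx_enrol_course ON enrolment(course_code);
""")
conn.execute("INSERT INTO student VALUES (1, 'Asha')")
conn.execute("INSERT INTO course VALUES ('CS101', 'Intro to MIS', 60)")
conn.execute("INSERT INTO enrolment VALUES (1, 'CS101')")
print(conn.execute(
    "SELECT s.name, c.title FROM student s "
    "JOIN enrolment e ON e.student_id = s.id "
    "JOIN course c ON c.code = e.course_code").fetchall())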
2.1.1.3. PROGRAMMING
2.1.1.4. TESTING
SYSTEM TESTING
ACCEPTANCE TESTING
The systems development team works with users to devise a systematic test
plan. The test plan includes all of the preparations for the series of tests we have just
described.
Figure 2.5 shows an example of a test plan. The general condition being tested
is a record change. The documentation consists of a series of test-plan screens
maintained on a database that is ideally suited to this kind of application.
2.1.1.5. CONVERSION
Conversion is the process of changing from the old system to the new system.
Four main conversion strategies can be employed: the parallel strategy, the direct
cutover strategy, the pilot study strategy, and the phased approach strategy.
In a parallel strategy both the old system and its potential replacement are
run together for a time until everyone is assured that the new one functions correctly.
This is the safest conversion approach because, in the event of errors or processing
disruptions, the old system can still be used as a backup. However, this approach is
very expensive, and additional staff or resources may be required to run the extra
system.
The direct cutover strategy replaces the old system entirely with the new
system on an appointed day. It is a very risky approach that can potentially be more
costly than running two systems in parallel if serious problems with the new system
are found. There is no other system to fall back on. Dislocations, disruptions, and the
cost of corrections may be enormous.
The pilot study strategy introduces the new system to only a limited area of the organization, such as a single department or operating unit. When this pilot version is complete and working smoothly, it is installed throughout the rest of the organization, either simultaneously or in stages.
The phased approach strategy introduces the new system in stages, either by
functions or by organizational units. If, for example, the system is introduced by
functions, a new payroll system might begin with hourly workers who are paid
weekly, followed six months later by adding salaried employees (who are paid
monthly) to the system. If the system is introduced by organizational units, corporate
headquarters might be converted first, followed by outlying operating units four
months later.
Moving from an old system to a new one requires that end users be trained to
use the new system. Detailed documentation showing how the system works from
both a technical and end-user standpoint is finalized during conversion time for use in
training and everyday operations. Lack of proper training and documentation
contributes to system failure, so this portion of the systems development process is
very important. (A sample test plan to test a record change is shown in Figure 2.5.)
After the new system is installed and conversion is complete, the system is
said to be in production. During this stage, the system will be reviewed by both users
and technical specialists to determine how well it has met its original objectives and to
decide whether any revisions or modifications are in order. In some instances, a formal post-implementation audit document is prepared. After the system has been
fine-tuned, it must be maintained while it is in production to correct errors, meet
requirements, or improve processing efficiency. Changes in hardware, software,
documentation, or procedures to a production system to correct errors, meet new
requirements, or improve processing efficiency are termed maintenance.
In analysing the present system and the likely future requirements of the proposed system, the analyst collects a great deal of relatively unstructured data through procedure manuals, interviews, questionnaires and other sources. The traditional approach is to organize and convert the data into system flowcharts, which support future development of the system and simplify communication with the users. However, a system flowchart represents a physical rather than a logical system, making it difficult to distinguish between what happens and how it happens in the system. In order to overcome this problem, structured analysis is undertaken: a set of techniques and graphical tools that allow the analyst to develop a new kind of system specification that is easily understandable to the users. Structured analysis uses data flow diagrams. Structured analysis has the following features.
The primary tool for representing a system's component processes and the flow of data between them is the data flow diagram (DFD). The data flow diagram offers a logical graphic model of information flow, partitioning a system into modules that show manageable levels of detail. It rigorously specifies the processes or transformations that occur within each module and the interfaces that exist between them.
FIGURE 2.6 Data flow diagram symbols: process, external entity, data store and data flow.
The figure shows a simple data flow diagram for a mail-in university course registration system. The rounded boxes represent processes, which portray the transformation of data. The square box represents an external entity, an originator or receiver of data located outside the boundaries of the system being modelled. The open rectangles represent data stores, which are either manual or automated inventories of data. The arrows represent data flows, which show the movement of data between processes, external entities and data stores. They always contain packets of data, with the name or content of each data flow listed beside the arrow.
This data flow diagram shows that students submit registration forms with their name, identification number, and the numbers of the courses they wish to take. In process 1.0, the system verifies that each course selected is still open by referencing the university's course file. The file distinguishes courses that are open from those that have been canceled or filled. Process 1.0 then determines which of the student's selections can be accepted or rejected. Process 2.0 enrolls the student in the courses for which he or she has been accepted. It updates the university's course file with the student's name and identification number and recalculates the class size. If maximum enrollment has been reached, the course number is flagged as closed. Process 2.0 also updates the university's student master file with information about new students or changes of address. Process 3.0 then sends each student applicant a confirmation-of-registration letter listing the courses for which he or she is registered and noting the course selections that could not be fulfilled.
The diagrams can be used to depict higher - level processes as well as lower -
level details. Through leveled data flow diagrams, a complex process can be broken
down into successive levels of details. An entire system can be divided into
subsystems with a high - level data flow diagram. Each subsystem in turn, can be
divided into additional subsystems with second - level data flow diagrams, and the
lower - level subsystems can be broken down again until the lowest level of details
has been reached.
The system has three processes: Verify availability (1.0), Enroll student
(2.0), and Confirm registration (3.0). The name and content of each of the data flows
appear adjacent to each arrow. There is one external entity in this system: the student.
There are two data stores: the student master file and the course file.
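The three processes of this DFD can be sketched directly in code. The data structures below are hypothetical stand-ins for the course file and student master file.

# Sketch of the three registration processes (hypothetical data stores).
course_file = {"CS101": {"enrolled": 59, "max": 60, "open": True},
               "MA201": {"enrolled": 40, "max": 40, "open": False}}
student_master = {}

def verify_availability(requested):                     # process 1.0
    return [c for c in requested if course_file.get(c, {}).get("open")]

def enroll_student(student_id, name, accepted):         # process 2.0
    student_master[student_id] = {"name": name}
    for c in accepted:
        course_file[c]["enrolled"] += 1
        if course_file[c]["enrolled"] >= course_file[c]["max"]:
            course_file[c]["open"] = False              # flag the course as closed
    return accepted

def confirm_registration(name, requested, accepted):    # process 3.0
    rejected = [c for c in requested if c not in accepted]
    return f"Dear {name}: registered for {accepted}; unavailable: {rejected}"

requested = ["CS101", "MA201"]
accepted = enroll_student("S42", "Asha", verify_availability(requested))
print(confirm_registration("Asha", requested, accepted))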
FIGURE 2.7 Data flow diagram for the mail-in university registration system.
Some of the other available techniques include the Warnier-Orr diagram, which represents processing as a test of a condition, with alternative actions taken on its true and not-true branches.
• Primary users: This includes instructions for how to interpret a report, how to select
different options for a report, etc. If the user can execute the system directly, as in on-line
queries, it includes detailed instructions for accessing the system and formulating
different types of queries.
• Secondary users: This includes detailed instructions on how to enter each kind of input. It
is more oriented to “how to” and less to “what” for different inputs (when compared with
the instructions to primary users).
• Computer operating personnel: There are generally maintenance procedures to be
performed by computer operators and/or control personnel. Procedures include
instructions for quality assurance, backing up system files, maintaining program
documentation, etc.
• Training procedures: In some cases, a separate training manual or set of training screens
is developed for the implementation stage and subsequent training.
One of two groups may be responsible for writing procedures: analysts or users. The
advantages of analyst-written procedures are technical accuracy, project control over
completion, and conformance to the overall documentation; disadvantages are the tendency of
analysts to write in technical jargon or technical abbreviations and to assume users have
knowledge of the technical environment. The advantages of user-written procedures are an
appropriate level of technical description and instructions that are understandable;
disadvantages are the difficulty of assuring clear, complete instruction. A mixed strategy is
one in which analysts and users work together to produce a technically correct,
understandable manual.
TABLE 2.2
Step 1: Identify the user’s basic information requirements. In this stage, the user
articulates his or her basic needs in terms of output from the system. The designer’s
responsibility is to establish realistic user expectations and to estimate the cost of
developing an operational prototype. The data elements required are defined and their
availability determined. The basic models to be computerized are kept as simple as
possible.
Step 2: Develop the initial prototype system. The objective of this step is to build a
functional interactive application system that meets the user’s basic stated information
requirements. The system designer has the responsibility for building the system using
very high level development languages or other application development tools.
Emphasis is placed on speed of building rather than efficiency of operation. The initial prototype responds only to the user's basic requirements: it is understood to be incomplete. The early prototype is delivered to the user.
Step 3: Use of the prototype system to refine the user’s requirements. This step allows
the user to gain hands-on experience with the system in order to understand his or her
information needs and what the system does and does not do to meet those needs. It is
expected that the user will find problems with the first version. The user rather than
the designer decides when changes are necessary and thus controls the overall
development time.
Step 4: Revise and enhance the prototype system. The designer makes requested
changes using the same principles as stated in step 2. Only the changes the user
requests are made.
FIGURE: Prototype development. The user's basic needs, the scope of the application and the estimated costs feed an initial prototype; the working prototype is enhanced until the user/designer is satisfied, yielding the operational prototype.
A spiral model fits well when we are developing large systems, where the specifications cannot be ascertained completely and correctly in one stroke. Some of them surface only when the system is put to use after testing. Continuous revision of these steps in system development is very common, and the designers then call the results versions. Each new version provides additional functionality, features and facilities to the user, and addresses the concerns of the users of the system, viz. performance, response, security and so on.
FIGURE 2.10 Spiral model.
Some types of information systems can be developed by end users with little
or no formal assistance from technical specialists. This phenomenon is called end -
user development. A series of software tools categorized as fourth-generation
languages makes this possible. Fourth - generation languages are software tools that
enable end users to create reports or develop software applications with minimal or no
technical assistance. Some of these fourth - generation tools also enhance professional
programmers’ productivity.
End users are most likely to work with PC software tools and query languages. Query languages are software tools that provide immediate online answers to requests for information that are not predefined, such as "Who are the highest-performing sales representatives?" Query languages are often tied to data management software and to database management systems.
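A query of the kind quoted above might be expressed in SQL, a common query language tied to relational database management systems. The table and figures below are hypothetical.

# Sketch: answering an ad hoc question with a query language (SQL via sqlite3).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (rep TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("Kumar", 9000), ("Priya", 14000), ("Ravi", 11000)])

# "Who are the highest-performing sales representatives?"
for rep, total in conn.execute(
        "SELECT rep, SUM(amount) AS total FROM sales "
        "GROUP BY rep ORDER BY total DESC LIMIT 2"):
    print(rep, total)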
On the whole, end - user - developed systems can be completed more rapidly
than those developed through the conventional systems life cycle. Allowing users to
specify their own business needs improves requirements gathering and often leads to a
higher level of user involvement and satisfaction with the systems. However, fourth-generation tools still cannot replace conventional tools for some business applications, because they cannot easily handle the processing of large numbers of transactions or
applications with extensive procedural logic and updating requirements.
End-user computing also poses organisational risks because it occurs outside of traditional mechanisms for information systems management and control. When systems are created rapidly, without a formal development methodology, testing and documentation may be inadequate. Control over data can be lost in systems outside
the traditional information systems department.
The software for most systems today is not developed in-house but is purchased from external sources. Firms can rent the software from an application service provider, purchase a software package from a commercial vendor, or have a custom application developed by an outside outsourcing firm.
During the past several decades, many systems have been built on an
application software package foundation. Many applications are common to all
business organizations - for example, payroll, accounts receivable, general ledger, or
inventory control. For such universal functions with standard processes that do not
change a great deal over time, a generalized system will fulfil the requirements of
many organizations.
If an organization has unique requirements that the package does not address,
many packages include capabilities for customization. Customization features allow
a software package to be modified to meet an organization’s unique requirements
without destroying the integrity of the package software. If a great deal of
customization is required, additional programming and customization work may
become so expensive and time consuming that they negate many of the advantages of
software packages.
Figure 2.11 shows how package costs in relation to total implementation costs
rise with the degree of customization. The initial purchase price of the package can be
deceptive because of these hidden implementation costs. If the vendor releases new
versions of the package, the overall costs of customization will be magnified because
these changes will need to be synchronized with future versions of the software.
When a system is developed using an application software package, systems analysis
will include a package evaluation effort. The most important evaluation criteria are
the functions provided by the package, flexibility, user friendliness, hardware and
software resources, database requirements, installation and maintenance efforts,
documentation, vendor quality, and cost. The package evaluation process often is
based on a Request for Proposal (RFP), which is a detailed list of questions
submitted to packaged - software vendors. When a software package solution is
selected, the organization no longer has total control over the system design process.
Instead of tailoring the system design specifications directly to user requirements, the
design effort will consist of trying to mould user requirements to conform to the
features of the package. If the organization’s requirements conflict with the way the
package works and the package cannot be customized, the organization will have to
adapt to the package and change its procedures. Even if the organization's business processes seem compatible with those supported by a software package, the package may be too constraining if these business processes are continually changing.
FIGURE 2.11 The effects of customization: total implementation cost rises relative to package cost as the degree of customization increases.
OUTSOURCING
If a firm does not want to use its internal resources to build or operate
information systems, it can outsource the work to an external organization that
specializes in providing these services. Application service providers (ASPs) are one
form of outsourcing. An application service provider (ASP) is a business that delivers and manages applications and computer services from remote computer centers to multiple users over the Internet or a private network. Instead of buying
and installing software programs, subscribing companies can rent the same functions
from these services. Users pay for the use of this software either on a subscription or
per - transaction basis.
The ASP's solution combines packaged software applications with all of the related hardware, system software, network and other infrastructure services that the customer would otherwise have had to purchase, integrate and manage independently. The ASP customer interacts with a single entity instead of an array of technologies and service vendors.
TABLE 2.3
The process of system design and implementation has been described. The most common methodology applied to large, highly structured application systems is the system development life cycle. This represents a linear assurance strategy but can be modified into an iterative assurance strategy. The phases in the life cycle provide a basis for system management and control by breaking the process down into small, well-defined segments. An experimental assurance strategy is generally accomplished by prototyping. The prototyping methodology is an iterative process carried out by a user and a designer. Very high level development languages are used to build a system quickly and to iterate modifications based on the user's actual experience with the prototype.
The design phase includes four types of activities: physical system design, physical
database design, program development and procedure development. Physical system design
involves translating information requirements and conceptual design into technical
specifications and general flow of processing.
The system development life cycle includes conversion to the new system, ongoing operation and maintenance, and post audit. Conversion activities are acceptance testing, file building and user training; the various other approaches to system development have also been discussed.
REVIEW QUESTIONS
Most organizations are structured along functional lines or areas. This functional
structure is usually apparent from an organization chart, which typically shows vice
presidents under the president.
A management information system can be divided to produce tailor-made reports in each of the above-mentioned functional areas.
Figure 3.1 shows typical inputs, function-specific subsystems, and outputs of a financial
MIS.
FIGURE 3.1
Profit/loss and cost systems are two financial subsystems that organize revenue and cost data for the firm. Revenue and expense data for the various departments, captured by the transaction processing system (TPS), becomes a primary internal source for the financial information system.
Many departments within an organization are profit centers, which means they track total expenses and net profits. An investment division of a large insurance or credit
card company is an example of a profit center. Other departments may be revenue
centers, which are divisions within the company that primarily track sales or revenues,
such as a marketing or sales department.
Still other departments may be cost centers, which are divisions within a
company that do not directly generate revenue, such as manufacturing or research and
development. These units incur costs with little or no direct revenues.
AUDITING
Auditing is the process of analyzing the financial condition of a firm and determining whether the reports and statements produced by the financial MIS are accurate. Because financial statements such as income statements and balance sheets are used by so many people and organizations (investors, bankers, insurance companies, federal and state government agencies, competitors, and customers), sound auditing procedures are important. Auditing can reveal potential fraud, and can also reveal false or misleading information in a firm.
Internal uses of funds include additional inventory, new or updated plants and
equipment, additional labor, the acquisition of other companies, new computer systems,
marketing and advertising, raw materials, land, investments in new products, and
research and development. External uses of funds are typically investment related. On
occasion, a company might have excess cash from sales that is placed into an external
investment. External uses of funds often include bank accounts, stocks, bonds, bills,
notes, futures, options, and foreign currency.
Figure 3.2 gives an overview of some of the manufacturing MIS inputs, subsystems and outputs. The subsystems and outputs of the manufacturing MIS monitor and control the flow of materials, products and services through the organization. Some common information subsystems and outputs used in manufacturing are:
• Master Production Scheduling
• Inventory Control
• Process Control
• Quality Control and Testing
FIGURE 3.2
In any manufacturing company the critical tasks are production scheduling and
inventory control. The overall objective of master production scheduling is to provide
detailed plans for both short-term and long-range scheduling of manufacturing facilities.
Master production scheduling software packages can include forecasting techniques that
attempt to determine current and future demand for products and services. After current
demand has been determined and future demand has been estimated, the master
production scheduling package can determine the best way to use the manufacturing
facility and all its related equipment. The result of the process is a detailed plan that
reveals a schedule for every item that will be manufactured.
Another inventory technique, used when the demand for one item depends on the demand for another, is material requirements planning (MRP). The basic goal of MRP is to determine when finished products, like automobiles or airplanes, are needed, and then to work backward to determine the deadlines and resources, such as engines and tires, needed to complete the final product on schedule.
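Working backward from the finished product's need date is the heart of MRP. A minimal sketch follows; the bill of materials and lead times are hypothetical.

# Minimal MRP sketch: given a need date for a finished product, work
# backward through the bill of materials to find order/start deadlines.
import datetime

lead_time = {"car": 2, "engine": 6, "tyre": 3}           # weeks to produce/procure
bill_of_materials = {"car": ["engine", "tyre"]}          # components of each item

def order_dates(item, need_date, schedule=None):
    """Order/start date for an item is its need date minus its lead time;
    its components must in turn be available by that start date."""
    schedule = {} if schedule is None else schedule
    start = need_date - datetime.timedelta(weeks=lead_time[item])
    schedule[item] = start
    for part in bill_of_materials.get(item, []):
        order_dates(part, start, schedule)
    return schedule

need = datetime.date(2025, 12, 1)                        # when the car is needed
for item, date in order_dates("car", need).items():
    print(f"order/start {item} by {date}")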
PROCESS CONTROL
Information generated from quality-control programs can help workers locate problems in manufacturing equipment. Quality-control reports can also be used to design better products. With the increased emphasis on quality, workers should continue to rely on the reports and outputs from this important application.
These subsystems and their outputs help marketing managers and executives increase
sales, reduce marketing expenses, and develop plans for future products and services to
meet the changing needs of customers.
MARKETING RESEARCH
Popular marketing research (MR) tools are surveys, pilot studies and interviews. The objective of MR is to conduct a formal study of the market and of customer preferences. Marketing research can identify prospects as well as the features that current customers really want in a good or service. Attributes such as style, color, size, appearance and general fit can be investigated through marketing research. Pricing, distribution channels, guarantees and warranties, and customer service can also be determined.
Once entered into the marketing information system, data collected from marketing research projects is manipulated to generate reports on key indicators like customer satisfaction and total service calls. Forecasting demand with sophisticated software can be an important result of marketing research. Demand forecasts for products and services are also critical for making sure raw materials and supplies are properly managed.
PRODUCT DEVELOPMENT
Figure 3.3 Marketing information system: databases of internal and external data, and databases of valid transactions from each TPS, feed the marketing MIS, DSS, ESS and ES through operational and marketing application databases; outputs cover marketing research, product development, promotion and advertising, product pricing, sales by customer, sales by salesperson, sales by product, pricing reports, total service calls and customer satisfaction.
PRODUCT PRICING
Product pricing is a key area for an organization, informed by sales analysis that identifies the products, sales personnel and customers that contribute to profits and those that do not. A variety of reports can be generated to help managers make good sales decisions. The sales-by-product report lists all major products and their sales for a period of time, such as a month. This report shows which products are doing well and which ones need improvement or should be discarded altogether. The sales-by-salesperson report lists total sales for each salesperson for each week or month. This report can also be subdivided by product to show which products are being sold by each salesperson. Sales-by-customer reports are a tool for identifying high- and low-volume customers.
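The reports listed above amount to grouping the same transaction data by different keys. A minimal sketch with hypothetical sales records:

# Sketch: sales-by-product and sales-by-salesperson reports from the same
# transaction data (hypothetical records: product, salesperson, amount).
from collections import defaultdict

sales = [("bike", "Kumar", 1200), ("car", "Priya", 9500),
         ("bike", "Priya", 1100), ("car", "Kumar", 8700)]

def report(records, key_index, title):
    totals = defaultdict(float)
    for record in records:
        totals[record[key_index]] += record[2]           # sum the sale amount
    print(title)
    for key, total in sorted(totals.items(), key=lambda kv: -kv[1]):
        print(f"  {key}: {total:,.2f}")

report(sales, 0, "Sales by product")
report(sales, 1, "Sales by salesperson")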
Human resource subsystems and outputs range from the determination of human resource needs and hiring through retirement and outplacement. Most medium and large organizations have computer systems to assist with human resource planning, hiring, training and skills inventory, and wage and salary administration. Outputs of the HRIS include reports such as human resource planning reports, job applicant review profiles, skills inventory reports, and salary surveys.
One of the first aspects of any HRIS is determining personnel and human resource needs. The overall purpose of this MIS subsystem is to put the right number and kinds of
employees in the right jobs when they are needed. Effective human resource planning
requires defining the future number of employees needed and anticipating the future
supply of people for these jobs.
HRIS can be used to help grade and select potential employees. For every
candidate, the results of interviews, tests, and company visits can be analyzed by the
system and printed. This report, called a job applicant review profile, can assist corporate
recruiting teams in final selection. Web-based screening of job applicants has been adopted
by many companies. Applicants use a template to load their resume onto the Internet site.
HR managers can then access these resumes and identify applicants they are interested in
interviewing.
Some jobs, such as programming, equipment repair, and tax preparation, require
very specific training. Other jobs may require general training about the organizational
culture, orientation, dress standards, and expectations of the organization. Today, many
organizations conduct their own training, with the assistance of information systems and
technology. Self-paced training can involve computerized tutorials, video programs, and
CD-ROM books and materials. Distance learning, where training and classes are
conducted over the Internet, is also becoming a viable alternative to more traditional
training and learning approaches. This text and its supporting material, for example, can be
used in a distance-learning environment.
FIGURE 3.4 Human Resource Information System
(Business transactions from the payroll, order processing and personnel TPSs flow into databases of valid transactions from each TPS and into the human resource application databases; together with databases of internal and external data, these feed the human resource MIS, DSS, ESS and ES, whose outputs include benefit reports, salary surveys, scheduling reports, training test scores, job applicant profiles, and needs and planning reports.)
Employee schedules are developed for each employee, showing their job
assignments over the next week or month. Job placements are often determined based on
skills inventory reports, which show which employee might be best suited to a particular
job.
The last of the major HRIS subsystems involves determining wages, salaries, and
benefits, including medical payments, savings plans, and retirement accounts. Wage data,
such as industry averages for positions, can be taken from the corporate database and
manipulated by the HRIS to provide wage information and reports to higher levels of
management. Wage and salary administration also entails designing retirement programs
for employees. Some companies use computerized retirement programs to help
employees gain the most from their retirement accounts and options.
An expert system is a knowledge-based information system that uses its knowledge
about a specific, complex application area to act as an expert consultant to end users. An
expert system is a knowledge-intensive program that solves a problem by capturing the
expertise of a human in limited domains of knowledge and experience. Expert systems
provide answers to questions in a very specific problem area by making humanlike
inferences about knowledge contained in a specialized knowledge base. They must also be
able to explain their reasoning process and conclusions to a user.
User interface: the user interface is the means by which the user interacts with the
expert system; the most familiar form is the graphical user interface. Through it the user
gives instructions and asks for an explanation, and in return the system explains the
various steps involved in the problem solution.
Expert systems are being used for many different types of applications, and the
variety of applications is expected to continue to increase. However, you should realize
that expert systems typically accomplish one or more generic uses. As you can see, expert
systems are being used in many different fields, including medicine, engineering, the
physical sciences, and business. Expert systems now help diagnose illnesses, search for
minerals, analyze compounds, recommend repairs, and do financial planning. From a
strategic business standpoint, expert systems can be, and are being, used to improve every
step of the product cycle of a business, from finding customers to shipping products to
providing customer service.
FIGURE 3.5 Components of an Expert System
(The user, at a workstation, receives expert advice through the user interface programs; the inference engine programs operate on the knowledge base, which is built and maintained through knowledge acquisition programs and knowledge engineering.)
The major limitations of expert systems arise from their limited focus, inability to
learn, maintenance problems, and development cost. Expert systems excel only in solving
specific types of problems in a limited domain of knowledge. They fail miserably in
solving problems requiring a broad knowledge base and subjective problem solving.
They do well with specific types of operational or analytical tasks, but falter at subjective
managerial decision making.
Expert systems may also be difficult and costly to develop and maintain properly.
The costs of knowledge engineers, lost expert time, and hardware and software resources
may be too high to offset the benefits expected from some applications. Also, expert
systems can't maintain themselves; that is, they can't learn from experience but must be
taught new knowledge and modified as new expertise is needed to match developments in
their subject areas.
Many real world situations do not fit the suitability criteria for expert system
solutions. Hundreds of rules may be required to capture the assumptions, facts, and
reasoning that are involved in even simple problem situations. For example, a task that
might take an expert a few minutes to accomplish might require an expert system with
hundreds of rules and take several months to develop.
The easiest way to develop an expert system is to use an expert system shell as a
development tool. An expert system shell is a software package consisting of an expert
system without its kernel, that is, its knowledge base. This leaves a shell of software (the
inference engine and user interface programs) with generic inferencing and user interface
capabilities. Other development tools (such as rule editors and user interface generators)
are added, making the shell a powerful expert system development tool.
Expert system shells are now available as relatively low-cost software packages
that help users develop their own expert systems on microcomputers. They allow trained
users to develop the knowledge base for a specific expert system application. For
example, one shell uses a spreadsheet format to help end users develop IF - THEN rules,
automatically generating rules based on examples furnished by a user. Once a knowledge
base is constructed, it is used with the shell’s inference engine and user interface modules
as a complete expert system on a specific subject area. Other software tools may require
an IT specialist to develop expert systems.
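To make the idea of IF-THEN rules concrete, here is a minimal sketch of the forward-chaining inference that a shell's inference engine performs. It is a toy written in Python, not any particular shell's API, and the diagnostic rules and facts are invented for illustration.

    # Toy forward-chaining inference over IF-THEN rules (illustrative only).
    rules = [
        ({"engine_cranks", "no_fuel"}, "check_fuel_pump"),  # IF ... AND ... THEN ...
        ({"no_crank", "lights_dim"}, "battery_flat"),
        ({"battery_flat"}, "recharge_battery"),
    ]

    facts = {"no_crank", "lights_dim"}   # observations entered by the user

    # Repeatedly fire any rule whose conditions are all satisfied.
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(facts)   # now also contains 'battery_flat' and 'recharge_battery'

A real shell adds to this a rule editor, an explanation facility and a user interface; the loop above is only the core inference step.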
Experts skilled in the use of expert system shells could develop their own expert
systems. If a shell is used, facts and rules of thumb about a specific domain can be
defined and entered into a knowledge base with the help of a rule editor or other
knowledge acquisition tool. A limited working prototype of the knowledge base is then
constructed, tested, and evaluated using the inference engine and user interface programs
of the shell. Domain experts can modify the knowledge base, then retest the system and
evaluate the results. This process is repeated until the knowledge base and the shell result
in acceptable expert systems.
FIGURE 3.6 Steps in Expert System Development
(Determining requirements, identifying experts, implementing results.)
An EIS is a special type of DSS, and, like a DSS, an EIS is designed to support
higher - level decision making in the organization. The two systems are, however,
different in important ways. DSSs provide a variety of modeling and analysis tools to
enable users to thoroughly analyze problems - that is, they allow users to answer
questions. EISs present structured information about aspects of the organization that
executives consider important - in other words, they allow executives to ask the right
questions.
CASE STUDY
SOURCE:
REVIEW QUESTIONS:
1. Assessment of objective.
2. Organisational change.
3. Assessing the future.
Support activities include the firm's infrastructure, technology development and procurement.
Primary activities
Primary activities are those that are involved in the creation of a product or service.
Identification of primary activities requires the isolation of activities that are
technologically or strategically distinct. Porter has classified these primary activities
into five groups, which are as follows:
For effective use of value chain, it is not just enough that various activities are
performed efficiently but these must be performed in a coordinative way so that each
activity contributes positively to other activities.
Value chain analysis provides the best framework for identifying the
information that an organisation needs and for designing its information systems. The table
presents the information systems required for effective performance of various
activities of the value chain.
Costs and benefits are not easily comparable. For this purpose, Capital
budgeting models are used. After the evaluation of costs and benefits of various
competing systems is completed, the results obtained have to be interpreted to arrive
at the final choice of a system. The interpretation and final choice are mostly
subjective requiring judgment and intuition.
TABLE 4.2 Costs and Benefits of Information Systems

Costs
  Hardware
  Telecommunications
  Software
  Personnel
  Accessories: computer forms, computer ink/ribbon, UPS
  Services: insurance, maintenance
  Physical facilities: building, furniture

Benefits
  Tangible:
    Increased productivity
    Low operating costs
    Reduced workforce
    Lower computer costs
    Lower outside vendor costs
    Reduced rate of growth in expenses
    Reduced facility costs
  Intangible:
    Improved organisational planning
    Increased organisational flexibility
    Improved decision making
    Improved operations
    Improved asset utilisation
    Enhanced employee morale
    Increased job satisfaction
    Higher customer satisfaction
    Improved organisational image
Consider Solution B of the hostel mess management problem. The direct costs are:
1. Cost of PC/XT, printer, voltage regulator = Rs. 70,000
2. Cost of space (nil). No extra space allocated
3. Cost of systems analysts/Programmers/Consultants for 3 months
= Rs. 15,000
4. Stationery cost/Floppy cost /Maintenance/Electricity = Rs. 900 per month
5. Capital cost = Rs. 85,000
6. Recurring cost = Rs. 900 per month.
INTANGIBLE BENEFITS
Once the costs of the project and the benefits have been quantified, the next
step is to find out whether the benefits justify the costs. There are two ways of finding
this out, known as the payback method and the present value method. The
payback method is used to find out in how many years (or months) the money spent is recovered
as benefits. In the example considered, we found that the cost was Rs. 85,000 and the
benefits Rs. 10,280 per month. Thus in 8.3 months we recover Rs. 85,324, which
exceeds the cost. The payback is thus 8.3 months. It can be compared across
alternatives to finalize the system when there is more than one alternative.
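The payback arithmetic above can be checked with a few lines of Python; the figures are exactly those of the example.

    # Payback period = capital cost / monthly net benefit.
    def payback_months(capital_cost, monthly_benefit):
        return capital_cost / monthly_benefit

    print(round(payback_months(85000, 10280), 1))   # 8.3 months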
4.5 CONTROLS IN INFORMATION SYSTEM
Careful entry of data will reduce incorrect data. Special measures should be taken
during input preparation. These measures are:
(a) Sequence numbering
(b) Batch controls
(c) Data entry and verification
(d) Record totals
(e) Self - checking digit.
(a) Sequence numbering:
Each data record is given a sequence number. This provides the information to
pick the incorrect record when an error is detected in it. A missing record can also be
tracked using the sequence number.
(b) Batch controls:
Input data records consist of batches of about 100 records and each batch is
numbered. One or more important fields in each record are selected and the values in
these fields in all the records are added to form a batch control total. A batch control
record is designed, which includes information such as the number of records in the batch
and the batch control total. This batch control record is entered at the
end of each batch of records into the disk file. It is used by the data validation
program to check that no records in the batch are missed and that there is no data entry
error in the selected field(s). In the table we show a batch of records and the
corresponding batch control record.
TABLE
Records are read one by one by the validation program and counted. The
program also counts the number of A's, B's, C's, D's and F's in the batch. The count
of records and the counts of A's, B's, etc. are compared with those in the batch control
record. If they match, there is no error. Otherwise, a data entry error is detected. The
error could be in any of the 100 records, so a manual comparison of each record in
the file with that in the form is now needed. This is feasible only if the number of
records in a batch is not very large.
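The batch control check described above can be sketched in Python as follows. The field name ('marks') and the batch contents are invented assumptions; the point is only the recompute-and-compare logic.

    # Sketch of a batch control check: record count plus a control total
    # over one selected field (here a hypothetical 'marks' field).
    batch = [{"roll_no": i, "marks": 40 + i % 60} for i in range(1, 101)]

    # Control record prepared at data entry time, stored at the end of the batch.
    control = {"count": len(batch), "marks_total": sum(r["marks"] for r in batch)}

    # The data validation program recomputes both quantities and compares.
    if len(batch) == control["count"] and \
       sum(r["marks"] for r in batch) == control["marks_total"]:
        print("Batch OK")
    else:
        print("Data entry error somewhere in this batch - manual check needed")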
(c) Data entry and verification:
The same set of data records is entered by two different operators and two
files are created. Records in these two files are compared by a program. Any
difference in two corresponding records indicates an error. The records are then
retrieved and compared with those in the data entry form.
(d) Record totals:
Selected fields in a record are summed and the sum is entered as a separate
field in each record. The data validation program sums the fields as entered by the
data entry operator and compares this sum with the manually computed hash total
field. Any difference is signalled and the record is manually checked.
(e) Self-checking digit:
For important fields, modulus-N check digits are used to detect errors. Other
checks carried out by a data validation program are:
g) Missing data. If a full record is missing the batch control total will indicate it.
If no value is entered for a field it should be checked and indicated by the data
validation program.
h) Data records in wrong order. Sequence numbers in records are used to detect
this error.
i) Inter - field relationship check. If different fields are related in a known way
this relationship can be used to check the data entered. For example, if the
entry for year of study of a student in a school is 4 and the age entry is 72, then
there is probably an error.
j) File header. All input data files should have a record at the beginning, giving
it identification. Every program using a file should check the identification to
ensure that it is processing the right file.
We again reiterate that careful design of controls for data entry and of the input
validation program is essential in data processing. As the volumes of data processed are
large and the consequences of data errors in files can be very expensive, "cleaning" input
data before they get into a database is very important.
(a) Proof figure: Another field, known as a proof figure, is appended to the record. The modified record is:
(b) Duplicate computation check:
The same quantity can be calculated in two different ways and the results compared. For
example, in a payroll system, for each record the system calculates:
Gross pay, deductions, net pay
The gross pay, deductions and net pay can be accumulated separately, and one may
check if (sum of gross pay - sum of deductions) = sum of net pay.
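A minimal sketch of this duplicate computation, with invented pay records:

    # Each record carries gross pay, deductions and net pay (hypothetical data).
    payroll = [(10000, 1500, 8500), (12000, 2000, 10000), (9000, 900, 8100)]

    sum_gross = sum(g for g, d, n in payroll)
    sum_ded = sum(d for g, d, n in payroll)
    sum_net = sum(n for g, d, n in payroll)

    # The accumulated totals must satisfy: sum gross - sum deductions = sum net.
    assert sum_gross - sum_ded == sum_net, "Processing error detected"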
(c) Relationship check:
When a known relationship between two data elements exists and one of the
elements is correct, it is possible to check the other element by using a relationship.
For example, if a discount is known for a set of sales then the procedure can sum all
sales and sum all rebates and check if rebates total equals (discount * sum of all
sales).
If sum of sales = 150,000 and discount is 5% then the rebate total should equal 7500.
Check point procedure divides a long job into a series of short ones. Each
short run is independent and is checked independently. If the check is correct then
enough information is written on the disk to enable automatic restarting of processing
from this check point. If the check is incorrect the last part of the process is discarded;
errors, if any, are corrected and the system is restarted from the previous check point.
The proper use of check point and restart procedure in a program improves the
operational efficiency of a data processing system. If power failure, or hardware or
software failure occurs, then the system can be restored to the last check point. The
processing done between the last check point and the system failure is the only work
that is wasted.
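A checkpoint/restart scheme can be sketched as below. The checkpoint file name, the run length of 100 records and the "processing" step are all assumptions made for illustration.

    import json, os

    CHECKPOINT = "job.ckpt"            # hypothetical checkpoint file
    records = list(range(1000))        # stand-in for a long batch of records

    # Restart from the last checkpoint if one exists.
    start, running_total = 0, 0
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            state = json.load(f)
        start, running_total = state["next"], state["total"]

    for i in range(start, len(records), 100):      # process in short runs of 100
        running_total += sum(records[i:i + 100])   # the actual processing step
        with open(CHECKPOINT, "w") as f:           # checkpoint after each run
            json.dump({"next": i + 100, "total": running_total}, f)

    print("Job complete, total =", running_total)
    os.remove(CHECKPOINT)                          # clear checkpoint on success

If the job fails part-way, rerunning the same program resumes from the last completed run rather than from the beginning.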
Many key fields in data input to a computer-based system are coded. Codes
are used to identify persons (e.g., students' roll numbers), products, components, materials,
machines, vendors, customers, locations, etc. The main requirements of a good code are
that it be concise, precise, meaningful, comprehensive and expandable.
For example, for bank accounts one may assign a serial number to each
account holder. As and when a new account is opened, the person is given the next
serial number. The advantage of this method is that it is concise, precise and
expandable. It is, however, not meaningful: from the serial number it is not possible
to find out anything about the account holder. It is also not comprehensive.
The block codes use blocks of serial numbers. For example, account numbers
0000 to 9999 may be used for savings accounts in a bank; 10000 to 99999 for current
accounts; 100000 to 999999 for special deposit accounts, etc. A similar block code
can be used in other areas such as coding items in a store, and assigning roll numbers
to students. This code is expandable and more meaningful than the serial number
coding. It is precise but not comprehensive.
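The block code for bank accounts translates directly into a lookup; the ranges below are exactly those given in the text.

    # Classify an account number by the block it falls in.
    def account_type(number):
        if 0 <= number <= 9999:
            return "savings"
        if 10000 <= number <= 99999:
            return "current"
        if 100000 <= number <= 999999:
            return "special deposit"
        return "unknown"

    print(account_type(10523))   # current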
(iii) Group classification code. Here each group of digits describes one attribute of the
item being coded. For example, a student's roll number may be coded as

2005 24 05 2 101

where 2005 is the year of admission, 05 identifies the department, 2 gives the status
(UG/PG), and 101 is a serial number.
Often mnemonics instead of numbers make the code more meaningful. For
instance, in the above representation we can write the code as

2005 24 CS UG 101

with CS standing for Computer Science and UG for Undergraduate.
(iv) Significant codes. These codes use some or all of the digits to describe a value of the
product being coded. For example, the code

BA M 95 C B (style)

describes a T-shirt for males of chest size 95 cm, made of cotton, of blue colour and of
round-neck style. The values may be directly used in some calculations. This code
is meaningful, precise, comprehensive and expandable. It is not concise.
There are often special coding schemes which are standardized. Simple
examples of international standardization are Dewey decimal classification codes
used for books for simplicity of locating books in libraries and the ISBN
(International Standard Book Numbers) used by publishers all over the world to
assign a number to a book when it is published.
By experimental study it has been found that the common types of errors
committed during data entry are as shown in Table. This table also gives the
probability of occurrence of each of these errors.
Table 4.3 - Common Errors Made during Data Entry
Type of error                                 Correct code   As entered   % of errors
Single transcription
(one digit incorrectly typed)                 45687          49687        86
Transposition
(two digits interchanged)                     96845          96485         9
                                              96845          94865
All other errors                                                           5
If a code is designed which is able to detect the two types of common errors,
namely, single transcription and transposition errors, it will be reasonably good. Such
a code has been designed and is called modulus 11 code. We will first describe how
such a code is constructed, then see how it is able to detect errors, and finally show
the theoretical basis of this design.
Given a set of codes, they are transformed to another set of codes with an error-detecting
property as follows. Consider the code 48793. Each digit is multiplied by a weight, the
leftmost digit by 6, the next by 5, and so on down to 2 for the rightmost digit:

Code digits:  4    8    7    9    3
Weights:      6    5    4    3    2
Weighted sum = 4 * 6 + 8 * 5 + 7 * 4 + 9 * 3 + 3 * 2 = 125

Divide the weighted sum of the digits by 11; here 125 / 11 leaves remainder 4.
Append (11 - remainder) to the right of the code. This is the new code to be
used. In this case, as the remainder is 4, we append (11 - 4) = 7 to the code and get
the new code 487937. The digit 7 appended to the code is called a check digit. If
the remainder after division is 1, then (11 - remainder) = (11 - 1) = 10; in such a
case the character X is appended to the code. If the remainder after division is 0, then
0 is appended to the code.
In the table we show some examples of codes and their equivalent modulus-11 codes.

Code     Weighted sum                                        Modulus-11 code
45687    4*6 + 5*5 + 6*4 + 8*3 + 7*2 = 111 (remainder 1,     45687X
         so X, representing 10, is appended)
68748    6*6 + 8*5 + 7*4 + 4*3 + 8*2 = 132 (remainder 0)     687480
65432    6*6 + 5*5 + 4*4 + 3*3 + 2*2 = 90  (remainder 2)     654329
69752    6*6 + 9*5 + 7*4 + 5*3 + 2*2 = 128 (remainder 7)     697524
An algorithm for generating the modulus-11 code is given below. Let dn dn-1
... d2 be the given code (n < 10).
Algorithm for finding d1, to be appended to the code as the least significant
digit:

    weighted_sum <- 0
    for i = 2 to n do
        weighted_sum <- weighted_sum + i * di
    endfor
    r <- weighted_sum mod 11
    (r is the remainder obtained by dividing weighted_sum by 11)
    if r = 0
        then d1 <- 0
        else if r = 1
            then d1 <- X
            else d1 <- (11 - r)
        endif
    endif
    Append d1 to the code.
An algorithm for checking a received modulus-11 code dn dn-1 ... d2 d1 is:

    if d1 = X
        then weighted_sum <- 10
        else weighted_sum <- d1
    endif
    for i = 2 to n do
        weighted_sum <- weighted_sum + i * di
    endfor
    r <- weighted_sum mod 11
    if r = 0
        then report "No error"
        else report "Error in code"
    endif.
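Both algorithms translate directly into Python. The sketch below follows the weighting scheme described above and reproduces the examples in the table.

    # Modulus-11 check digit: digit d_i (positions n .. 2) is weighted by i.
    def weighted_sum(digits):
        n = len(digits) + 1                 # leftmost digit gets weight n
        return sum(i * d for i, d in zip(range(n, 1, -1), digits))

    def encode(code):
        digits = [int(c) for c in code]
        r = weighted_sum(digits) % 11
        check = "0" if r == 0 else "X" if r == 1 else str(11 - r)
        return code + check

    def is_valid(code):
        d1 = 10 if code[-1] == "X" else int(code[-1])   # check digit, weight 1
        digits = [int(c) for c in code[:-1]]
        return (weighted_sum(digits) + d1) % 11 == 0

    print(encode("48793"))        # 487937
    print(encode("45687"))        # 45687X
    print(is_valid("654329"))     # True
    print(is_valid("654239"))     # False - transposition detected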
The scope of a system test should include both manual operations and
computerized operations. System testing is a comprehensive evaluation of the
programs, manual procedures, computer operations and controls. System tests may be
classified as program tests, string tests, system tests, pilot tests and parallel tests.
These are now discussed.
Program tests: These are designed to test the logic of programs. Program logic is normally
very complicated and it is practically impossible to test all the paths taken by a
program. Normally individual modules are tested. Test data are generated to test all
logical paths in the module. A very good tool for determining the paths to be taken is a
decision table listing all condition tests. All logically possible rules in the
decision table can then be determined and used to generate a complete set of test data.
The most common errors in programs occur at boundary points in tests. Test data should
test all boundary points where there is a transition from true to false, or vice versa, in
the conditions being tested in the program. Another common error is made in counting.
Thus the program must be tested with counts which are one more or one less than
what is specified. Some tools are available which analyze a program and create a flow
chart equivalent to it. Such a tool is useful in generating test data.
String tests: These are carried out on a set of related program modules. The purpose of a
string test is to ensure that data are correctly transferred from one program in the
string to the next. For example, in a student result processing system, the grade
processing module for post-graduate students must use valid output from the post-graduate
student registration module.
System tests: These are used to test all the programs which together constitute the system. In a
payroll system, for instance, all programs in the system, such as the calculation of bonus
and overtime, and the costing program (which uses the output of the payroll system), are to be
tested together. Testing is conducted using synthetic data. Both valid and invalid
transactions are used in this test. For example, a non-existent employee code may be
used in a transaction to see if it is rejected. Similarly, some unreasonable data may be
used to check whether the input controls function as expected.
Pilot tests: A set of transactions which have been run on the present system is collected,
and the results of their processing on the existing manual system are also kept. This set is
used as test data when a computer-based system is initially developed, and the two
results are matched. The reason for any discrepancy is investigated to modify the new
system.
Parallel tests: The aim here is similar to that of pilot tests. In this case, however, both the
manual and the computer-based systems are run simultaneously for a period of time and the results
from the two systems are compared. This is a good method for complex systems but is
expensive.
Preliminary tests on a system before it is released for general use are known as
alpha tests. This is followed by a beta testing period when the system is released for
general use and carefully observed for malfunctions. After this phase it is put to
general unrestricted use.
Data entering a data processing system and the programs processing the data
must be kept secure. By security we mean protecting the data and programs against
accidental or intentional modification or destruction or disclosure to unauthorized
persons. It is the responsibility of a systems analyst to ensure the security of both data
and procedures. The following requirements should be met to ensure security:
1. The data and programs must be protected from theft, fire, disk corruption
and other types of physical destruction. Duplicate copies are kept in a fire-
proof vault in a place away from the data processing centre. This is
particularly important for financial data.
2. Data should be reconstructable in case of loss despite precautions. Backup
copies of master files and transaction files are kept.
3. The system should be tamper-proof. A password system and file security
keys are used to bar unauthorized access. In case the password system is broken by
a clever programmer, a secrecy transformation may be used to transform the
stored data; even if the data are accessed, they will not be meaningful once
transformed. One simple data transformation would be to map every
character in a file to another character by using a secret mapping table. For
example, a table may transform 1 to 8, 2 to 7, 3 to 5, ... A to W, B to U,
C to X, ... The table is kept secret. A person who gains
unauthorized access cannot understand the data, while an authorized person can
get back the original data by applying the inverse transformation using the
table (a small sketch of this idea follows this list).
4. Any person gaining access to a file should be identified. Thus attempts to
access data are logged and the identity of the person is also recorded. This will
inhibit potential data lock breakers.
5. Only authorized persons should be allowed to change data. A password
system is used to prevent unauthorized access. Every access, authorized or
unauthorized, should be logged by the system, and any change should be
monitored by the system.
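The character-mapping transformation mentioned in point 3 can be sketched with a translation table. The particular mapping below is an arbitrary toy example built from the pairs given in the text; it is not a secure cipher.

    # Secrecy transformation via a secret character-mapping table (toy example).
    forward = str.maketrans("123ABC", "875WUX")   # 1->8, 2->7, 3->5, A->W, ...
    inverse = str.maketrans("875WUX", "123ABC")   # inverse table for authorized users

    stored = "A12B3C".translate(forward)
    print(stored)                      # W87U5X - meaningless to an intruder
    print(stored.translate(inverse))   # A12B3C - original recovered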
Viruses also spread through computer networks, replicating themselves on many
computers connected to the network. It is thus essential for a security system to
protect files from viruses. One physical control is not to allow free copying of floppy
disks from unknown sources. Vaccines against viruses have appeared in the market;
these examine the boot sectors of floppies and detect attempts by viruses to copy themselves. A
warning is issued to the user when this is found, saving the user's disk from getting
infected.
REVIEW QUESTIONS:
SYSTEM AUDIT
5.1 INTRODUCTION
5.2 SOFTWARE ENGINEERING QUALITIES
5.2.1 DIMENSION OF SOFTWARE QUALITIES
5.3 A SOFTWARE QUALITY ENGINEERING PROGRAM
5.3.1 QUALITIES AND ATTRIBUTES
5.3.2 QUALITY EVALUATIONS
5.3.3 NONCONFORMANCE ANALYSIS
5.3.4 FAULT TOLERANCE ENGINEERING
5.3.5 TECHNIQUES AND TOOLS
5.4 SOFTWARE RELIABILITY AND METRICS
5.4.1 SOFTWARE RELIABILITY METRICS:
5.4.1.1 PRODUCT METRICS
5.4.1.2 FUNCTION POINT METRIC
5.4.1.3 TEST COVERAGE METRICS
5.4.1.4 PROJECT MANAGEMENT METRICS
5.4.1.5 PROCESS METRICS
5.4.1.6 FAULT AND FAILURE METRICS
5.5 VERIFICATION AND VALIDATION:
5.5.1 VERIFICATION TECHNIQUES
5.5.1.1 DYNAMIC TESTING
5.5.1.2 FUNCTIONAL TESTING
5.5.1.3 STRUCTURAL TESTING
5.5.1.4 RANDOM TESTING
5.5.1.5 STATIC TESTING
5.5.1.6 CONSISTENCY TECHNIQUES
5.5.1.7 MEASUREMENT TECHNIQUES
5.5.2 VALIDATION TECHNIQUES
5.5.2.1 FORMAL METHODS
5.5.2.2 FAULT INJECTION
5.5.2.3 HARDWARE FAULT INJECTION
5.5.2.4 SOFTWARE FAULT INJECTION
5.5.2.5 DEPENDABILITY ANALYSIS
5.5.2.6 HAZARD ANALYSIS AND RISK ANALYSIS
5.5.3 TYPES OF TESTING
5.5.3.1 COMPONENT.
5.5.3.2 ACCEPTANCE.
5.5.3.3 INTERFACE SYSTEM.
5.5.3.4 RELEASE
5.6 SOFTWARE QUALITY ASSURANCE ACTIVITIES
5.6.1 SQA RELATIONSHIPS TO OTHER ASSURANCE ACTIVITIES:
5.6.1.1 CONFIGURATION MANAGEMENT MONITORING
5.6.1.2 VERIFICATION AND VALIDATION MONITORING:
5.6.1.3 FORMAL TEST MONITORING
5.7 SOFTWARE QUALITY ASSURANCE
5.7.1 CONCEPTS AND INITIATION PHASE
5.7.2 SOFTWARE REQUIREMENTS PHASE
5.7.3 SOFTWARE ARCHITECTURAL (PRELIMINARY) DESIGN PHASE
5.7.4 SOFTWARE DETAILED DESIGN PHASE
5.7.5 SOFTWARE IMPLEMENTATION PHASE
5.7.6 SOFTWARE INTEGRATION AND TEST PHASE
5.7.7 SOFTWARE ACCEPTANCE AND DELIVERY PHASE
5.7.8 SOFTWARE SUSTAINING ENGINEERING AND OPERATIONS
PHASE
5.7.9 TECHNIQUES AND TOOLS
5.8 SOFTWARE DESIGN QUALITY:
5.8.1 COUPLING
5.8.2 COHESION
5.8.3 CHARACTERISTICS OF PERFECT DESIGN:
5.9 SOFTWARE REVIEWS, INSPECTIONS AND WALKTHROUGHS
5.10 CAPABILITY MATURITY MODEL
SUMMARY
REVIEW QUESTIONS
LEARNING OBJECTIVES
Primarily, companies appreciate the benefits they gain from an effective and
up-to-date information system. Frequently, however, the risks that emerge as new technologies
are implemented are not fully understood, nor is that issue considered in the risk
analysis of business processes. Successful organizations, by contrast, perceive and manage the
risks related to the implementation of new technologies and establish the required quality,
reliability and security demands for their information systems. At the same time, they
demand that these requirements be realized at as small an expense as
possible.
In addition, an adequate level of know-how is required to establish the tasks of
a company's information system, to order an information system, and to check how well the
information system developed conforms to the requirements set.
For information systems there are generally two kinds of auditors: internal and
external. Internal auditors work for the same organization that owns the information
system, whereas external auditors are hired from outside. Conformance to the
requirements is checked at various phases of the system life cycle.
5.2 SOFTWARE ENGINEERING QUALITIES
Quality must be built into a software product during its development to satisfy
quality requirements established for it. SQE ensures that the process of incorporating
quality in the software is done properly, and that the resulting software product meets the
quality requirements. The degree of conformance to quality requirements usually must
be determined by analysis, while functional requirements are demonstrated by testing.
SQE performs a function complementary to software development engineering. Their
common goal is to ensure that a safe, reliable, and quality engineered software product is
developed.
Qualities for which an SQE evaluation is to be done must first be selected and
requirements set for them. Some commonly used dimensions of qualities are:
1. Reliability
2. Maintainability
3. Transportability
4. Interoperability
5. Testability
6. Usability
7. Reusability
8. Traceability
9. Sustainability and
10. Efficiency.
5.3 A SOFTWARE QUALITY ENGINEERING PROGRAM
The two software qualities, which command the most attention, are reliability
and maintainability. Some practical programs and techniques have been developed to
improve the reliability and maintainability of software, even if they are not measurable or
predictable. The types of activities that might be included in an SQE program are
described here in terms of these two qualities. These activities could be used as a model
for the SQE activities for additional qualities.
5.3.1 Qualities and Attributes
An initial step in laying out an SQE program is to select the qualities that are
important in the context of the use of the software that is being developed. For example,
the highest priority qualities for flight software are usually reliability and efficiency. If
revised flight software can be up-linked during flight, maintainability may be of interest,
but considerations like transportability will not drive the design or implementation. On
the other hand, the use of science analysis software might require ease of change and
maintainability, with reliability a concern and efficiency not a driver at all.
After the software qualities are selected and ranked, specific attributes of the
software that help to increase those qualities should be identified. For example,
modularity is an attribute that tends to increase both reliability and maintainability.
Modular software is designed to result in code that is apportioned into small, self-
contained, functionally unique components or units. Modular code is easier to maintain,
because the interactions between units of code are easily understood, and low-level
functions are contained in few units of code. Modular code is also more reliable, because
it is easier to completely test a small, self-contained unit.
Not all software qualities are so simply related to measurable design and code
attributes, and no quality is so simple that it can be easily measured. The idea is to select
or devise measurable, analyzable, or testable design and code attributes that will increase
the desired qualities. Attributes like information hiding, strength, cohesion, and coupling
should be considered.
Once some decisions have been made about the quality objectives and software
attributes, quality evaluations can be done. The intent in an evaluation is to measure the
effectiveness of a standard or procedure in promoting the desired attributes of the
software product. For example, the design and coding standards should undergo a quality
evaluation. If modularity is desired, the standards should clearly say so and should set
standards for the size of units or components. Since internal documentation is linked to
maintainability, the documentation standards should be clear and require good internal
documentation.
Quality of designs and code should also be evaluated. This can be done as a
part of the walkthrough or inspection process, or a quality audit can be done. In either
case, the implementation is evaluated against the standard and against the evaluator's
knowledge of good software engineering practices, and examples of poor quality in the
product are identified for possible correction.
For software that must be of high reliability, a fault tolerance activity should be
established. It should identify the software that provides and accomplishes critical
functions and requirements. For this software, the engineering activity should determine
and develop techniques which will ensure that the needed reliability or fault tolerance
is attained. Some of the techniques that have been developed for high-reliability
environments include:
Input data checking and error tolerance. For example, if out-of-range or missing
input data can affect reliability, then sophisticated error checking and data
interpolation/extrapolation schemes may significantly improve reliability.
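As a hedged illustration of such input checking, the sketch below rejects out-of-range readings and fills an interior gap by averaging its valid neighbours; the valid range (0 to 100) and the data are assumptions.

    # Input checking with simple error tolerance (illustrative assumptions:
    # valid readings lie in 0..100, None marks a missing sample).
    readings = [20.0, 22.5, None, 27.0, 350.0]    # 350.0 is out of range

    cleaned = [r if r is not None and 0 <= r <= 100 else None for r in readings]

    # Linearly interpolate any interior gap from its valid neighbours.
    for i, r in enumerate(cleaned):
        if r is None and 0 < i < len(cleaned) - 1:
            left, right = cleaned[i - 1], cleaned[i + 1]
            if left is not None and right is not None:
                cleaned[i] = (left + right) / 2

    print(cleaned)   # [20.0, 22.5, 24.75, 27.0, None]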
Metrics:
Metrics are quantitative values, usually computed from the design or code that measure
the quality in question, or some attribute of the software related to the quality. Many
metrics have been invented, and a number have been successfully used in specific
environments, but none has gained widespread acceptance. A metric (noun) is the
measurement of a particular characteristic of a program's performance or efficiency.
Reliability:
Measuring software reliability remains a difficult problem because we do not have a good
understanding of the nature of software. There is no clear definition of which aspects are
related to software reliability, and we cannot find a suitable way to measure software
reliability or most of the aspects related to it. Even the most obvious
product metrics, such as software size, do not have a uniform definition.
Test coverage metrics are a way of estimating fault and reliability by performing
tests on software products, based on the assumption that software reliability is a function
of the portion of software that has been successfully verified or tested. Detailed
discussion about various software testing methods can be found in topic Software
Testing.
5.4.1.5 Process metrics
Based on the assumption that the quality of the product is a direct function of the
process, process metrics can be used to estimate, monitor and improve the reliability and
quality of software. ISO-9000 certification, or "quality management standards", is the
generic reference for a family of standards developed by the International Standards
Organization (ISO).
The goal of collecting fault and failure metrics is to be able to determine when the
software is approaching failure-free execution. Minimally, both the number of faults
found during testing (i.e., before delivery) and the failures (or other problems) reported
by users after delivery are collected, summarized and analyzed to achieve this goal. Test
strategy is highly relative to the effectiveness of fault metrics, because if the testing
scenario does not cover the full functionality of the software, the software may pass all
tests and yet be prone to failure once delivered. Usually, failure metrics are based upon
customer information regarding failures found after release of the software. The failure
data collected is therefore used to calculate failure density, Mean Time Between Failures
(MTBF) or other parameters to measure or predict software reliability.
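Failure density and MTBF follow directly from the collected data. The figures below are invented for illustration; only the two formulas come from standard practice.

    # Failure metrics from hypothetical post-release field data.
    operating_hours = 5000.0    # total observed operating time
    failures = 4                # failures reported by users
    ksloc = 80.0                # release size in thousands of lines of code

    mtbf = operating_hours / failures    # Mean Time Between Failures
    failure_density = failures / ksloc   # failures per KSLOC

    print("MTBF =", mtbf, "hours")                             # 1250.0
    print("Failure density =", failure_density, "per KSLOC")   # 0.05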
There are many different verification techniques, but they all basically fall into two
major categories: dynamic testing and static testing.
• Functional testing - Testing that involves identifying and testing all the functions
of the system as defined within the requirements. This form of testing is an
example of black box testing since it involves no knowledge of the
implementation of the system.
• Structural testing - Testing that has full knowledge of the implementation of the
system and is an example of white-box testing. It uses the information from the
internal structure of a system to devise tests to check the operation of individual
components. Functional and structural testing both choose test cases that
investigate a particular characteristic of the system.
• Random testing - Testing that freely chooses test cases among the set of all
possible test cases. The use of randomly determined inputs can detect faults that
go undetected by other systematic testing techniques. Exhaustive testing, where
the input test cases consist of every possible set of input values, is a form of
random testing. Although exhaustive testing performed at every stage in the life
cycle results in a complete verification of the system, it is realistically impossible
to accomplish.
• Static testing - Testing that does not involve the operation of the system or
component. Some of these techniques are performed manually while others are
automated. Static testing can be further divided into 2 categories - techniques that
analyze consistency and techniques that measure some program property.
Validation Techniques
There are also numerous validation techniques, including formal methods, fault injection,
and dependability analysis. Validation usually takes place at the end of the development
cycle, and looks at the complete system as opposed to verification, which focuses on
smaller sub-systems.
• Formal methods - Formal methods are not only a verification technique but also a
validation technique. Formal methods mean the use of mathematical and logical
techniques to express, investigate, and analyze the specification, design,
documentation, and behavior of both hardware and software.
• Hardware fault injection - Can also be called physical fault injection because we
are actually injecting faults into the physical hardware.
• Software fault injection - Errors are injected into the memory of the computer by
software techniques. Software fault injection is basically a simulation of hardware
fault injection.
• Hazard analysis - Involves using guidelines to identify hazards, their root causes,
and possible countermeasures.
• Risk analysis - Takes hazard analysis further by identifying the possible
consequences of each hazard and their probability of occurring.
Verification and validation can be performed by the same organization performing the
design, development, and implementation, but sometimes it is performed by an
independent testing agency.
The software verification process is pervasive throughout the development lifecycle. It is
usually a combination of reviews, analyses, and testing. Reviews and analyses are
performed on the following components.
• Requirements analyses - To detect and report requirements errors that may have
surfaced during the software requirements and design process.
• Software architecture - To detect and report errors that occurred during the
development of the software architecture.
• Source code - To detect and report errors that developed during source coding.
• Outputs of the integration process - To ensure that the results of the integration
process are complete and correct.
• Test cases and their procedures and results - To ensure that the testing is
performed accurately and completely.
The two main objectives of the software testing process are to demonstrate that the software
satisfies all its requirements and to demonstrate that errors leading to unacceptable failure conditions
have been removed. The testing process includes the following types of testing.
Types of Testing
• Component.
• Acceptance.
• Interface System.
• Release
Component Testing: Starting from the bottom the first test level is "Component
Testing", sometimes called Unit Testing. It involves checking that each feature specified
in the "Component Design" has been implemented in the component.
In theory an independent tester should do this, but in practice the developer usually does
it, as developers are the only people who understand how a component works. The problem
with a component is that it performs only a small part of the functionality of a system,
and it relies on co-operating with other parts of the system, which may not have been
built yet. To overcome this, the developer either builds, or uses, special software to trick
the component into believing it is working in a fully functional system.
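The "special software" that tricks a component is conventionally a stub or driver. A minimal sketch using Python's unittest, with an invented billing component and a stub standing in for a service that has not been built yet:

    import unittest

    # Component under test: computes a bill using a tariff service.
    def compute_bill(units, tariff_service):
        return units * tariff_service.rate_per_unit()

    class StubTariffService:
        # Stub that mimics the real, not-yet-built tariff component.
        def rate_per_unit(self):
            return 5.0

    class TestComputeBill(unittest.TestCase):
        def test_bill_with_stub(self):
            self.assertEqual(compute_bill(12, StubTariffService()), 60.0)

    if __name__ == "__main__":
        unittest.main()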
Interface Testing: As the components are constructed and tested they are then linked
together to check if they work with each other. It is a common experience that two components
that have passed all their tests produce, when connected to each other, one new component full of
faults. These tests can be done by specialists, or by the developers.
Interface Testing is not focused on what the components are doing but on how they
communicate with each other, as specified in the "System Design". The "System Design"
defines relationships between components, and this involves stating:
• What a component can expect from another component in terms of services.
• How these services will be asked for.
• How they will be given.
• How to handle non-standard conditions, i.e. errors.
Tests are constructed to deal with each of these.
The tests are organized to check all the interfaces, until all the components have been
built and interfaced to each other, producing the whole system.
System Testing:
Once the entire system has been built then it has to be tested against the "System
Specification" to check if it delivers the features required. It is still developer focused,
although specialist developers known as systems testers are normally employed to do it.
In essence System Testing is not about checking the individual parts of the design, but
about checking the system as a whole. In effect it is one giant component.
System testing can involve a number of specialist types of test to see if all the functional
and non-functional requirements have been met. In addition to functional requirements,
these may include the following types of testing for the non-functional requirements:
• Load testing. Load testing is a generic term covering Performance Testing and
Stress Testing.
• Systems Testing checks that the system that was specified has been delivered.
• Acceptance Testing checks that the system delivers what was requested.
The customer, and not the developer, should always do acceptance testing. The customer
knows what is required from the system to achieve value in the business and is the only
person qualified to make that judgment.
The forms of the tests may follow those in system testing, but at all times they are
informed by the business needs.
Release Testing: Even if a system meets all its requirements, there is still a case to
be answered that it will benefit the business. The linking of "Business Case" to Release
Testing is looser than the others, but is still important.
Release Testing is about seeing if the new or changed system will work in the existing
business environment. Mainly this means the technical environment, and it checks concerns
such as whether the system will run on the existing hardware alongside the systems
already in place.
These tests are usually run by the computer operations team in a business. The
answers to their questions could have a significant financial impact if new computer
hardware should be required, and adversely affect the "Business Case".
It would appear obvious that the operations team should be involved right from the start
of a project to give their opinion of the impact a new system may have. They could then
make sure the "Business Case" is relatively sound, at least from the capital expenditure,
and ongoing running costs aspects. However in practice many operations teams only find
out about a project just weeks before it is supposed to go live, which can result in major
problems.
The standard includes a plan that outlines the information necessary for the certification
of the software and the software verification plan. A section also details tool
qualification. This is necessary when processes in the standard are eliminated, reduced, or
automated by using a software tool without following the software verification process.
There are an abundance of verification and validation tools and techniques. It is important
that in selecting verification and validation tools, all stages of the development cycle are
covered. For example, Table lists the techniques used from the requirements analysis
stage through the validation stage. Sources such as the Software Engineer's Reference
Book (McDermid, 1992), Standard for Software Component Testing (British Computer
Society, 1995), and standards such as DO-178B and IEC 1508 are useful in selecting
appropriate tools and techniques.
Table: Verification and validation techniques from requirements analysis through validation

Requirements analysis and functional specification
  Static: walkthroughs, design reviews, checklists

Top-level design
  Static: walkthroughs, design reviews, checklists, formal proofs, Fagan inspection

Detailed design
  Static: walkthroughs, design reviews, control flow analysis, data flow analysis,
  symbolic execution, checklists, Fagan inspection, metrics

Implementation
  Static: static analysis
  Dynamic: functional testing, boundary value analysis, structure-based testing,
  probabilistic testing, error guessing, process simulation, error seeding

Integration testing
  Static: walkthroughs, design reviews, sneak circuit analysis
  Dynamic: functional testing, time and memory tests, boundary value analysis,
  performance testing, stress testing, probabilistic testing, error guessing

Validation
  Dynamic: functional testing
There are many organizations and companies that perform independent verification and
validation. SEEC COBOL Analyst 2000 and SEEC Smart Change 2000 are used for
verification and the SEEC COBOL Slicer and SEEC/TestDirector are used for validation.
Documentation Standards specify form and content for planning, control, and product
documentation and provide consistency throughout a project. Design Standards specify
the form and content of the design product. They provide rules and methods for
translating the software requirements into the software design and for representing it in
the design documentation.
Code Standards specify the language in which the code is to be written and define any
restrictions on use of language features. They define legal language structures, style
conventions, rules for data structures and interfaces, and internal code documentation.
Procedures are explicit steps to be followed in carrying out a process. All processes
should have documented procedures. Examples of processes for which procedures are
needed are configuration management, nonconformance reporting and corrective action,
testing, and formal inspections.
The Management Plan describes the software development control processes, such as
configuration management, for which there have to be procedures, and contains a list of
the product standards. Standards are to be documented according to the Standards and
Guidelines in the Product Specification. The planning activities required to assure that
both products and processes comply with designated standards and procedures are
described in the QA portion of the Management Plan.
Product evaluation and process monitoring are the SQA activities that assure the
software development and control processes described in the project's Management Plan
are correctly carried out and that the project's procedures and standards are followed.
Products are monitored for conformance to standards and processes are monitored for
conformance to procedures. Audits are a key technique used to perform product
evaluation and process monitoring. Review of the Management Plan should ensure that
appropriate SQA approval points are built into these processes. Product evaluation is an
SQA activity that assures standards are being followed. Ideally, the first products
monitored by SQA should be the project's standards and procedures. SQA assures that
clear and achievable standards exist and then evaluates compliance of the software
product to the established standards. Product evaluation assures that the software product
reflects the requirements of the applicable standard(s) as identified in the Management
Plan.
Process monitoring is an SQA activity that ensures that appropriate steps to carry out the
process are being followed. SQA monitors processes by comparing the actual steps
carried out with those in the documented procedures. The Assurance section of the
Management Plan specifies the methods to be used by the SQA process monitoring
activity.
A fundamental SQA technique is the audit, which looks at a process and/or a product in
depth, comparing them to established procedures and standards. Audits are used to
review management, technical, and assurance processes to provide an indication of the
quality and status of the software product.
The purpose of an SQA audit is to assure that proper control procedures are being
followed, that required documentation is maintained, and that the developer's status
reports accurately reflect the status of the activity. The SQA product is an audit report to
management consisting of findings and recommendations to bring the development into
conformance with standards and/or procedures.
SQA assures that software Configuration Management (CM) activities are performed in
accordance with the CM plans, standards, and procedures. SQA reviews the CM plans for
compliance with software CM policies and requirements and provides follow-up for
nonconformance. SQA audits the CM functions for adherence to standards and
procedures and prepares reports of its findings. The CM activities monitored and audited
by SQA include baseline control, configuration identification, configuration control,
configuration status accounting, and configuration authentication. SQA also monitors
and audits the software library. SQA assures that baselines are established and
consistently maintained for use in subsequent baseline development and control, and that
software configuration identification is consistent and accurate with respect to the
numbering or naming of computer programs, software modules, software units, and
associated software documents.
Formal software reviews should be conducted at the end of each phase of the life
cycle to identify problems and determine whether the interim product meets all applicable
requirements. Examples of formal reviews are the Preliminary Design Review (PDR),
Critical Design Review (CDR), and Test Readiness Review (TRR). A review looks at
the overall picture of the product being developed to see if it satisfies its requirements.
Reviews are part of the development process, designed to provide a ready/not-ready
decision to begin the next phase. In formal reviews, actual work done is compared with
established standards. SQA's main objective in reviews is to assure that the Management
and Development Plans have been followed and that the product is ready to proceed with
the next phase of development. Although the decision to proceed is a management
decision, SQA is responsible for advising management and participating in the decision.
SQA assures that formal software testing, such as acceptance testing, is done in
accordance with plans and procedures. SQA reviews testing documentation for
completeness and adherence to standards. The documentation review includes test plans,
test specifications, test procedures, and test reports. SQA monitors testing and provides
follow-up on nonconformances. By test monitoring, SQA assures software
completeness and readiness for delivery.
The objectives of SQA in monitoring formal software testing are to assure that:
• The test procedures are testing the software requirements in accordance with test
plans.
• The test procedures are verifiable.
• The correct or "advertised" version of the software is being tested (by SQA
monitoring of the CM activity).
• The test procedures are followed.
• Nonconformances occurring during testing (that is, any incident not expected in
the test procedures) are noted and recorded.
• Test reports are accurate and complete.
• Regression testing is conducted to assure nonconformances have been corrected.
• Resolution of all nonconformances takes place prior to delivery.
Software testing verifies that the software meets its requirements. The quality of testing
is assured by verifying that project requirements are satisfied and that the testing process
is in accordance with the test plans and procedures.
Software Quality Assurance during the Software Acquisition Life Cycle (Waterfall
model)
Let us review a few life cycle models and the various kinds of activities carried out in each phase.
In addition to the general activities described, there are phase-specific SQA activities that
should be conducted during the Software Acquisition Life Cycle. At the conclusion of
each phase, SQA concurrence is a key element in the management decision to initiate the
following life cycle phase. Suggested activities for each phase are described below.
SQA should be involved in both writing and reviewing the Management Plan in order to
assure that the processes, procedures, and standards identified in the plan are appropriate,
clear, specific, and auditable. During this phase, SQA also provides the QA section of the
Management Plan.
Determine goals, inputs, outputs, storage, logic, critical response time, inflexible I/O,
conversion requirements, security and backup, storage amounts, inflexible
hardware/software, performance targets, cost/benefit analysis revision, audit
requirements, plan and budget for future.
During the software requirements phase, SQA assures that software requirements are
complete, testable, and properly expressed as functional, performance, and interface
requirements.
SQA activities during the implementation phase include the audit of:
• Results of coding and design activities including the schedule contained in the
Software Development Plan.
• Status of all deliverable items.
• Configuration management activities and the software development library.
• Nonconformance reporting and corrective action system.
As a minimum, SQA activities during the software acceptance and delivery phase include
assuring the performance of a final configuration audit to demonstrate that all deliverable
items are ready for delivery.
During this phase, there will be mini-development cycles to enhance or correct the
software. During these development cycles, SQA conducts the appropriate phase-
specific activities described above.
SQA should evaluate its needs for assurance tools versus those available off-the-shelf for
applicability to the specific project, and must develop the others it requires. Useful tools
might include audit and inspection checklists and automatic code standards analyzers.
Choosing the best model also plays a vital role in bringing out a quality product. Let us
review the pros and cons of familiar methodologies, beginning with the problems of the
waterfall life cycle as seen from an individual project's perspective.
Prototyping: building and modifying a model system in response to user feedback until
the user likes the system.
Advantages
Disadvantages
• Requires high upfront costs (software for database, modeling, report generation,
screen generation)
• Difficult to use when building large systems.
• Sometimes difficult to maintain user enthusiasm
• User never satisfied
• Tendency not to document
Coupling:
At first glance, coupling may seem like the perfect idea. However, you should not lose
sight of what it really requires - the tight binding of one application domain to the next.
As a consequence of this requirement, all coupled source and target systems will have to
be extensively changed to couple them (in most cases).
Further, as events and circumstances evolve over time, any change to any source or target
system demands a corresponding change to the coupled systems as well. Coupling creates
one system out of many, with each tightly dependent upon the other. Service-oriented
application integration clearly leverages coupling in how applications are bound together.
Of course, the degree of coupling that occurs is really dependent on the architect, and
how he or she binds source and target systems together. In some instances systems are tightly coupled, meaning they are highly dependent on each other. In other instances they are loosely coupled, meaning they are more independent. There are, of course, more pros and cons of coupling that should be considered in the context of the problem you're looking to solve.
The classic ranking of module coupling, from worst to best, is:
1. Content (worst)
2. Common
3. Control
4. Stamp
5. Data (best)
On the pros side you have:
• The ability to bind systems by sharing behavior and bound data, versus simply sharing information. This gives the integration solution the ability to share services that would otherwise be duplicated across the integrated systems, thus reducing development costs.
• The ability to tightly couple processes as well as share behavior. This means that process integration engines, layered on top of service-oriented integration solutions, are better able to bind actual behavior (functions), versus simply moving information from place to place.
The downsides include:
• The need, in many cases, to change source and target systems in order to couple services. This adds cost because development and testing time is involved; it is no longer a matter of leveraging an interface that is abstracted from the core system.
• The fact that coupled systems could cease to function if one or more of them goes down. A single system failure could bring down all of the coupled systems, creating a vulnerability.
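To make the ranking above concrete, consider the following Python sketch; the function and variable names are hypothetical, invented for this example. The first function communicates through a shared global, which is common coupling; the second passes exactly the data it needs, which is data coupling.

# Common coupling (poor): caller and callee depend on shared global
# state, so any change to order_state can ripple everywhere it is used.
order_state = {"subtotal": 0.0, "tax": 0.0}

def apply_tax_common():
    order_state["tax"] = order_state["subtotal"] * 0.08

# Data coupling (good): the function receives only the data it needs
# and communicates its result through the return value alone.
def apply_tax_data(subtotal, tax_rate=0.08):
    return subtotal * tax_rate

order_state["subtotal"] = 100.0
apply_tax_common()
print(order_state["tax"])      # 8.0, obtained via shared global state
print(apply_tax_data(100.0))   # 8.0, obtained via explicit parameters

The data-coupled version can be tested and reused in isolation, whereas the common-coupled version silently depends on whoever last touched the global dictionary.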
Cohesion
The classic ranking of module cohesion, from worst to best, is:
1. Coincidental (worst)
2. Logical
3. Temporal
4. Procedural
5. Communicational
6. Sequential
7. Functional (best)
• You don't want a change in one module to cause errors to ripple throughout your
system. Encapsulate information, make modules highly cohesive, seek low
coupling among modules.
• Examples
o Procedures & functions in classical Procedural Languages
o Objects & methods within objects in Object Oriented Programming
Languages
• Good vs. Bad Modules
o Modules in themselves are not “good”
o Must design them to have good properties
[Figure: two decompositions of a CPU into 3 chips (modules) — in (i) each chip holds one coherent component (Chip 1: registers, Chip 2: ALU, Chip 3: shifter), while in (ii) the components are split arbitrarily across chips (e.g., the shifter and NOT gates sharing a chip) — contrasting good and bad module boundaries.]
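The same contrast can be shown in software. In the hypothetical Python sketch below (names invented for the example), the first module is only logically cohesive, because unrelated operations are bundled behind a mode flag, while the second set of functions is functionally cohesive, because each performs a single well-defined task.

# Logical cohesion (poor): unrelated operations bundled into one
# function and selected by a flag; every caller depends on all of them.
def handle_record(mode, payload):
    if mode == "read":
        return payload.upper()      # stand-in for a read operation
    if mode == "write":
        return payload.lower()      # stand-in for a write operation
    if mode == "log":
        return "LOG: " + payload    # stand-in for a logging operation
    raise ValueError("unknown mode: " + mode)

# Functional cohesion (good): each function does exactly one job and
# can be understood, tested, and reused independently.
def read_record(payload):
    return payload.upper()

def write_record(payload):
    return payload.lower()

def log_record(payload):
    return "LOG: " + payload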
Software inspection is the most formal, commonly used form of peer review. The key features of an inspection are the use of checklists to facilitate error detection and defined roles for participants.
The focus of a software inspection is on identifying problems, not on resolving them.
Suggested software inspection participant roles:
o Moderator - responsible for leading the inspection.
o Reader - leads the team through the logic of the work product.
o Recorder - documents the problems found during the inspection.
o Reviewers - identify and describe possible problems and defects.
o Author - contributes his or her understanding of the work product.
Typical inspection materials include:
o a list of questions prepared by each team member after reading the program or unit under review
o flow charts
o data dictionaries, lists of variables, classes, etc.
The code walkthrough, like the inspection, is a set of procedures and error-detection
techniques for group code reading. It shares much in common with the inspection
process, but the procedures are slightly different, and a different error-detection technique
is employed.
Role of participants:
Like the inspection, the walkthrough is an uninterrupted meeting of one to two hours in
duration. The walkthrough team consists of three to five people. One of these people
plays a role similar to that of the moderator in the inspection process, another person
plays the role of a secretary (a person who records all errors found), and a third person
plays the role of a "tester". Suggestions as to who the three to five people should be vary.
Of course, the programmer is one among these people. Suggestions for the other
participants include (1) a highly experienced programmer, (2) a programming-language
expert, (3) a new programmer (to give a fresh, unbiased outlook), (4) the person who will
eventually maintain the program, (5) someone from a different project, and (6) someone
from the same programming team as the programmer.
3. Defined level. Here, standards for the processes of software development and
maintenance are introduced and documented (including project management). The
software process for both management and engineering activities is documented,
standardized, and integrated into a standard software process for the organization. During
the introduction of standards, a transition to more effective technologies occurs. There is
a special quality management department for building and maintaining these standards.
All projects use an approved, tailored version of the organization's standard software
process for developing and maintaining software. A program of constant, advanced
training of staff is required for achievement of this level. Starting with this level, the
degree of organizational dependence on the qualities of particular developers decreases
and the process does not tend to roll back to the previous level in critical situations. These
defined standards give the organization a commitment to perform because the
organization follows a written policy for implementing software process improvements
and senior management sponsors the organization's activities for software process
improvement.
4. Managed level. There are quantitative indices (for both the software and the
   process as a whole) established in the organization. Detailed measures of the
   software process and product quality are collected. Better project management
   is achieved due to the decreased variation in project indices, and meaningful
   variations in process efficiency can be distinguished from random variations
   (noise), especially in mastered areas. Both the software process and products
   are quantitatively understood and controlled; a sketch of what such
   quantitative control can look like appears after this list.
5. Optimizing level. Improvement procedures are carried out not only for existing
   processes, but also for evaluating the efficiency of newly introduced innovative
   technologies. The main goal of an organization at this level is permanent
   improvement of existing processes. Continuous process improvement is enabled
   by quantitative feedback from the process and from piloting innovative ideas and
   technologies. This should anticipate possible errors and defects and decrease the
   costs of software development, for example by creating reusable components.
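As a sketch of the quantitative control mentioned under the Managed level (the metric, the sample values, and the three-sigma limits are assumptions made for this example), a team might track defect density per build and treat only measurements outside the control limits as meaningful variation:

import statistics

# Hypothetical defect densities (defects per KLOC) from recent builds.
history = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2, 4.3, 3.7]

mean = statistics.mean(history)
sigma = statistics.stdev(history)
# Control limits in the style of a statistical process control chart.
upper, lower = mean + 3 * sigma, mean - 3 * sigma

def assess(new_density):
    """Classify a new measurement against the process control limits."""
    if lower <= new_density <= upper:
        return "within limits: treat as random variation (noise)"
    return "outside limits: meaningful variation, find the cause"

print(assess(4.2))   # a typical build, inside the limits
print(assess(7.5))   # an unusual build, outside the limits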
A process area (PA) contains the goals that must be reached in order to improve a
software process. A PA is said to be satisfied when procedures are in place to reach
the corresponding goals. A software organization has achieved a specific maturity
level once all the corresponding PAs are satisfied. Different maturity levels have different pre-defined process areas. For instance, SEI CMM Level 5 has three PAs defined:
Defect prevention: to identify the causes of defects and prevent them from recurring.
Technology change management: to identify beneficial new technologies and transfer them into the organization in an orderly manner.
Process change management: to continually improve the organization's software processes with the intent of improving software quality and productivity.
The Software Engineering Institute (SEI) constantly analyzes the results of CMM usage by different companies and refines the model, taking the accumulated experience into account. The CMM establishes a yardstick against which it is possible to judge, in a repeatable way, the maturity of an organization's software process and compare it to the state of the practice of the industry. The CMM can also be used by an organization to plan improvements to its software process. It also reflects the needs of individuals performing software process improvement and software process assessments. The software capability evaluation method is documented and publicly available. At the Optimizing Level (CMM Level 5), the entire organization is focused on continuous process improvement. Quantitative feedback from previous projects is used to improve project management, usually through pilot projects, using the skills attained at Level 4.
Software project teams in SEI CMM Level 5 organizations analyze defects to determine
their causes. Software processes are evaluated to prevent known types of defects from
recurring, and lessons learned are disseminated to other projects. The software process
capability of Level 5 organizations can be characterized as continuously improving
because Level 5 organizations are continuously striving to improve the range of their
process capability, thereby improving the process performance of their projects.
Improvement occurs both by incremental advancements in the existing process and by
innovations using new technologies and methods.
The Capability Maturity Model has been criticized in that it does not describe how to create an effective software development organization. The traits it measures are in practice very hard to develop in an organization, even though they are very easy to recognize. However, it cannot be denied that the Capability Maturity Model reliably assesses an organization's sophistication in software development.
Summary
Auditing is a way of ensuring the quality of the information contained in the system. Auditing refers to having an expert who is not involved in setting up or using a system examine the information in order to ascertain its reliability. In each phase of the development cycle we apply quality ingredients that add value to the product. Specifically, in software design, modular design is one of the fundamental principles of good design. A module having high cohesion and low coupling is said to be functionally independent of other modules. A software metric is any type of measurement that relates to a software system, process, or related documentation. The specific metrics that are relevant depend on the project, the goals of the quality management team, and the type of software being developed. We have discussed the quality assurance of software products and related documents, and finally the various levels of the Capability Maturity Model.
Review questions: