

UNIT I

INTRODUCTION

1.1 CONCEPTS OF DATA AND INFORMATION


1.2 DEFINITION OF A MANAGEMENT INFORMATION SYSTEM
1.3 CONCEPTS OF DECISION MAKING
1.4 CONCEPTS OF SYSTEM
1.5 INFORMATION SYSTEMS AS A SYSTEM
1.5.1 ORGANIZATIONAL FUNCTION SUBSYSTEMS
1.5.2. ACTIVITIES SUBSYSTEMS
1.6 MIS AND OTHER DISCIPLINES
1.7 NEED FOR COMPUTER-BASED INFORMATION SYSTEM
1.8 THE ROLE OF A SYSTEMS ANALYST
1.9 EVOLUTION OF MANAGEMENT INFORMATION SYSTEMS
1.10 BUSINESS CASES

SUMMARY
REVIEW QUESTIONS

After reading this unit, you will be able to understand:

The distinction between data and information
Concepts of a system and its classification
Concepts of information
Definitions of management information systems
Need for computer-based information systems
Functional and hierarchy levels of management
Evolution of management information systems

1.1 CONCEPTS OF DATA AND INFORMATION

Data, the raw material for information, is defined as groups of non-random symbols, such as letters, digits and special characters, which represent quantities, actions, objects, etc. Data items in information systems are formed from characters. These alphabetic, numeric or special symbols are organized for processing purposes into data structures, file structures and databases. Data relevant to information processing and decision making may also be in the form of text, images or voice. Information, however, is generally defined as data that is meaningful or useful to the recipient. Data items are therefore the raw material for producing information.

For instance, when making a decision about purchasing a bike, the mileage and pickup figures are information, whereas the kilometres recorded from destination 1 to destination 2, which are used to calculate the mileage and so help in taking the decision, are data. The terms “information” and “data” are frequently used interchangeably.
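
To make the distinction concrete, here is a minimal Python sketch (all figures are invented for illustration): the odometer readings and fuel consumed are data, and the mileage computed from them is information that supports the purchase decision.

```python
# Hypothetical figures: odometer readings and fuel used are data;
# the derived mileage is information for the purchase decision.
odometer_at_destination_1 = 12450.0   # km (data)
odometer_at_destination_2 = 12630.0   # km (data)
fuel_consumed_litres = 4.5            # litres (data)

distance_km = odometer_at_destination_2 - odometer_at_destination_1
mileage_km_per_litre = distance_km / fuel_consumed_litres   # information

print(f"Distance travelled: {distance_km:.1f} km")
print(f"Mileage (information for the buyer): {mileage_km_per_litre:.1f} km/litre")
```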

1.2 DEFINITION OF A MANAGEMENT INFORMATION SYSTEM

A management information system, as the term is generally understood, is an integrated user-machine system for providing information to support the operations, management, and decision-making functions in an organization. The system utilizes computer hardware and software; manual procedures; models for analysis, planning, control and decision making; and a database. The fact that it is an integrated system does not mean that it is a single, monolithic structure; rather, it means that the parts fit into an overall design. The elements of the definition are highlighted below:

A management information system is an integrated user-machine system for providing information to support the operations, management, analysis, and decision-making functions in an organization.

The system utilizes computer hardware and software; manual procedures; models for analysis, planning, control and decision making; and a database.

INTEGRATED SYSTEMS

A management information system is an integrated base for an organization's individual subsystems, developed for and by users at various levels of the organization. If the system is not integrated, the individual applications may be inconsistent and incompatible. Data may not be compatible across applications, because the same data may be specified differently. There may be redundant modules where a single application could actually serve more than one need. A user wanting to perform analysis using data from two different applications may find the task very difficult and sometimes impossible.

The first step in integrating diverse information system applications is to prepare an overall information system plan which determines how the applications fit in with other functions. In short, the information system is designed as a planned federation of small individual subsystems.

Information system integration is also achieved through standards, guidelines, and procedures set by the MIS function. The enforcement of such standards and procedures permits diverse applications to share data, meet audit and control requirements, and be shared by multiple users. For instance, an application may be developed to run on a particular small computer. Standards for integration may dictate that the equipment selected be compatible with existing computers and that the application be designed for communication with the centralized database.

In the design of an information system, data and applications are kept separate. A common database is maintained so that integration is achieved across many applications and for a variety of users.

IMPORTANCE OF A DATABASE

The underlying concept of a database is that data needs to be managed in order to be available for processing and to have appropriate quality. This data management includes both software and organization. The software to create and manage a database is a database management system.

When all access to and use of the database is controlled through a database management system, all applications utilizing a particular data item access the same data item, which is stored in only one place. A single updating of the data item updates it for all uses. Integration through a database management system requires a central authority for the database. The data can be stored in one central computer or dispersed among several computers; the overriding requirement is that there be an organizational function to exercise control.
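
As a minimal sketch of this idea, assuming an in-memory SQLite database as a stand-in DBMS and an invented item table, the fragment below stores a data item in one place; a single update is then seen by every "application" that reads it.

```python
import sqlite3

# In-memory SQLite database standing in for a central DBMS (illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE item (item_id INTEGER PRIMARY KEY, unit_price REAL)")
conn.execute("INSERT INTO item (item_id, unit_price) VALUES (1, 100.0)")

def billing_application(db):
    # One application reading the shared data item.
    return db.execute("SELECT unit_price FROM item WHERE item_id = 1").fetchone()[0]

def sales_report_application(db):
    # A second application reading the very same stored data item.
    return db.execute("SELECT unit_price FROM item WHERE item_id = 1").fetchone()[0]

# A single update of the data item updates it for all uses.
conn.execute("UPDATE item SET unit_price = 120.0 WHERE item_id = 1")
conn.commit()

print(billing_application(conn), sales_report_application(conn))  # both print 120.0
```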

UTILIZATION OF MODELS

Data need to be processed to support decisions in the business. To do this, the processing of data items is based on a decision model. For instance, an investment decision relative to new capital expenditures might be processed in terms of a capital expenditure decision model.

Decision models can be used to support different stages in the decision-making process. “Intelligence” models can be used to search for problems and/or opportunities. Models can be used to identify and analyze possible solutions. Choice models, such as optimization models, may be used to find the most desirable solution.

1.3 CONCEPTS OF DECISION MAKING

The word decision has been derived from the Latin word ‘decidere’, which means ‘a cutting away or a cutting off’. Thus, a decision involves a cutting off of alternatives, between those that are desirable and those that are not desirable. A decision is a kind of choice of a desirable alternative. Decision making is the process of arriving at a decision; the process by which an individual or organisation selects one position or action from several alternatives.
Shull et al have defined decision making as follows:

“Decision making is a conscious process involving both individual and social


phenomena based upon factual and value premises which conclude with a choice of one
behavioural activity from among one or more alternatives with the intention of moving
toward some desired state of affairs.”
A manager’s life is filled with making decision after decision. Looking at the role of decision making in management, William Moore has equated it with management when he says that “management means decision making.” In fact, decision making permeates all managerial functions, and classical management theorists have viewed it as the centre of managerial activities. Though this classical view of decision making is not exactly true, decision making is one of a manager’s more challenging roles. This is the reason why information systems designers have focused their maximum attention on designing systems that help managers communicate and distribute information for decision making.

Decisions may be classified into various categories:

Routine and non-routine
Programmed and non-programmed
Strategic and tactical or operational decisions

Programmed/structured and non-programmed/unstructured decisions are mutually exclusive. Strategic decisions are non-programmed and non-routine, while tactical or operational decisions are mostly routine and programmed. Therefore, an understanding of programmed and non-programmed decisions is important.

PROGRAMMED DECISION OR STRUCTURED DECISION

A programmed decision is a routine decision made within the policies and rules governing the organizational system. These policies and rules are established well in advance to solve recurring problems in the organisation. The factors affecting such decision making are static and well structured. For example, the problem of recruiting employees is solved by recruiting those employees who meet the selection criteria. These criteria are established by the recruitment policy, and the managers have just to decide which employees meet the selection criteria; the decision is made accordingly. Programmed decisions are comparatively easier to make, as they relate to problems which are solved by considering internal organizational factors. Such decisions are made by lower-level managers in the organization.

NON - PROGRAMMED DECISION OR UNSTRUCTURED DECISION

A non-programmed decision is relevant for solving unique or unusual problems in which the various alternatives cannot be decided in advance. For such a decision, the situation is not well structured and the outcomes of the various alternatives cannot be arranged in advance. For instance, if an organisation wants to expand, it may have several alternative routes, such as taking over or acquiring an existing company. In each situation, the managers evaluate the likely outcomes of each alternative to arrive at a decision, considering various factors, many of which lie outside the organisation. Non-programmed decisions are novel and non-recurring; therefore, readymade solutions are not available. Since these decisions are of high importance because of their long-term consequences, they are made by managers at higher levels in the organisation.

DECISION - MAKING CONDITIONS

The decision maker makes today’s decision for future conditions whose impact will be known only in a future period. The future conditions for a decision vary along a continuum ranging from a condition of perfect certainty to a condition of complete uncertainty.
In each of these conditions, knowledge of the outcome of the decision differs. An outcome defines what will happen if a particular alternative or course of action is chosen and implemented. Knowledge of the outcome of each decision alternative is important when there are multiple alternatives and only one alternative is to be chosen. In the analysis for decision making, three types of knowledge with respect to outcomes are usually distinguished, as shown in the table.

TABLE 1.1

Certainty: only one outcome for each alternative; complete and accurate knowledge of the outcome of each alternative.
Risk: multiple outcomes for each alternative; a probability of occurrence can be attached to each outcome.
Uncertainty: multiple outcomes for each alternative; no knowledge of the probability to be attached to each outcome.

TABLE: OUTCOMES IN DIFFERENT DECISION-MAKING CONDITIONS

Decision-making strategy differs based on the variation in the knowledge of outcomes of the different alternatives under different decision-making conditions. The degree of structuring in a decision can be seen in terms of a continuum, as shown in the table.

TABLE 1.2

Top management: strategic decisions; low level of structuring; supported by strategic information systems, expert systems and executive support systems.
Middle management: tactical decisions; medium level of structuring; supported by decision support systems and management information systems.
Lower management: operational decisions; high level of structuring; supported by transaction processing systems and office automation systems.

TABLE: MANAGERIAL DECISIONS, LEVEL OF STRUCTURING AND SUPPORT SYSTEMS

As we move upward in the managerial hierarchy, the degree of structuring in decision making gradually reduces, and the support systems required increasingly have to deal with unstructured decisions. Having clarified the concepts of decision and decision making as well as the types of decisions, let us understand the decision-making process by which a decision is arrived at.

DECISION - MAKING PROCESS

FIGURE 1.1: Decision-making process (problem identification, alternative generation, choosing an alternative, implementation)

A decision is the outcome of a dynamic process which consists of various steps and involves various factors. The decision-making process is shown in Figure 1.1.

The process presented is more relevant for non-programmed decisions. Problems that occur infrequently are unstructured and are characterized by a great deal of uncertainty regarding the outcomes of the various alternatives; they require the manager to utilize the entire process. For frequently occurring structured problems, it is not necessary to consider the entire process, because decision rules are developed to handle such problems and it is not necessary to develop and evaluate the various alternatives each time such a problem arises.

PHASES IN DECISION - MAKING PROCESS

PROBLEM IDENTIFICATION

This phase of the decision-making process involves searching the environment for conditions calling for decisions; the problem is identified and formulated. A problem is the difference between the current state of affairs and the expected state of affairs on the subject matter of the decision.

For instance, in an organizational scenario, a problem is found when outcomes deviate from the desired results. The manager develops a suitable model and tries to establish the existence of the problem.

Without formulation, the identified problem remains vague. At this stage, the problem identified earlier is defined more precisely and its complexity is clarified. MacGrimmon and Taylor have suggested four strategies for reducing complexity and formulating a problem:

1. Determining the boundaries (clearly identifying what is included in the


problem).
2. Examining changes that may have precipitated the problem.
3. Factoring the problem into smaller sub-problems.
4. Focusing on controllable elements.

ALTERNATIVE GENERATION

In this phase, the decision maker generates the possible alternatives through which the problem can be solved. If there is only one way of solving a problem, no question of decision arises. So the decision maker must try to find out the various alternatives available in order to get the most satisfactory result of the decision. Identification of various alternatives not only serves the purpose of selecting the most satisfactory one, but it also avoids bottlenecks in operation, as alternatives are available if a particular decision goes wrong. However, it should be borne in mind that it may not be possible to consider all alternatives, either because some of the alternatives cannot be considered for selection owing to obvious limitations of the decision maker, or because information about all alternatives may not be available. Therefore, while generating alternatives, the concept of the limiting factor should be applied.

A decision maker can use several sources for identifying alternatives: his own past experience, practices followed by others, creative forecasting, statistical techniques, and research.

CHOOSING AN ALTERNATIVE

In this phase, the best alternative is chosen to solve the problem. Evaluation of
various alternatives presents a clear picture as to how each of these contributes to solution of
the problem. A comparison is made among likely outcomes of the various alternatives and
the most appropriate one is chosen. Choice aspect of decision making is, thus, related to
deciding the most acceptable alternative which fits with the organisational objectives. It may
be seen that the chosen alternative should be acceptable in the light of organisational
objectives, and it is not necessary that the chosen alternative is the best one. However, all
alternatives available for decision making will not be taken for detailed evaluation because of
the obvious limitations of managers in evaluating all alternatives.

IMPLEMENTATION

Even though the actual process of decision making ends with the choice of an alternative through which the objectives can be achieved, this phase helps the manager to know in what way the choice has contributed. The implementation of the decision may be seen as an integral aspect of the decision.

Implementation of a decision requires the communication of the decision to subordinates, getting the subordinates' acceptance of the matters involved in the decision, and getting their support for putting the decision into action. The decision should be put into effect at the appropriate time and in the proper way to make the implementation more effective. The effectiveness of
implementation is important because it is only effective action through which organisational
objectives can be achieved. When a decision is put into action, it brings certain results. These
results provide indication whether decision making and its implementation is proper.
Therefore, managers should take follow-up action in the light of feedback received from the
results. If there is any deviation between objectives and results, this should be analyzed and
factors responsible for this deviation should be located. The feedback may also help in
reviewing the decision when conditions change which may require change in the decision.

METHODS OF DECIDING AMONG ALTERNATIVES

There are different methods of evaluating the various alternatives through which a problem can be solved. In evaluating alternatives, an attempt is made to find out the likely outcome of each alternative so that the alternative which is likely to provide the maximum outcome is chosen. In evaluating the likely outcomes of the various alternatives, the following methods are generally used (a small worked sketch follows the list below):

1. Optimization techniques.
2. Pay-off matrices.
3. Decision tree.
4. Decision table.
5. Game theory.
6. Elimination by aspects.
7. Decisional balance sheet
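
As an illustration of one such method, the sketch below evaluates a small pay-off matrix under risk: each alternative has several possible outcomes with assumed probabilities, and the alternative with the highest expected pay-off is chosen. The alternatives, pay-offs and probabilities are invented purely for illustration.

```python
# Hypothetical pay-off matrix (values in lakhs of rupees) under three market states.
probabilities = {"high demand": 0.3, "medium demand": 0.5, "low demand": 0.2}

payoffs = {
    "expand plant": {"high demand": 80, "medium demand": 40, "low demand": -20},
    "outsource":    {"high demand": 50, "medium demand": 35, "low demand": 10},
    "do nothing":   {"high demand": 20, "medium demand": 20, "low demand": 20},
}

# Expected pay-off of an alternative = sum of (probability x pay-off) over outcomes.
expected = {
    alternative: sum(probabilities[state] * value for state, value in outcomes.items())
    for alternative, outcomes in payoffs.items()
}

for alternative, value in expected.items():
    print(f"{alternative}: expected pay-off = {value:.1f}")
print("Chosen alternative:", max(expected, key=expected.get))
```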

1.4 CONCEPTS OF SYSTEM

DEFINITION

Systems can be abstract or physical. An abstract system is an orderly arrangement of interdependent ideas or constructs. A physical system is a set of elements which operate together to accomplish an objective. For example, a weapons system is an orderly arrangement of the equipment, procedures, and personnel which make it possible to use a weapon; an accounting system consists of the records, rules, procedures, equipment, and personnel which operate to record data, measure income, and prepare reports; and the circulatory system involves the heart and blood vessels which move blood through the body.

A computer system is another physical system: it consists of several subsystems which function together to accomplish computer processing. The computer system is shown as a simple system model in the following figures.

FIGURE 1.2: System model (input, process, output)

FIGURE 1.3: Computer as a system, with an input subsystem, a processing subsystem (CPU), a storage subsystem (memory units), an output subsystem, and external interfaces
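
The input-process-output model in the figures can also be expressed as a small Python sketch; the subsystem functions and the sample data below are invented, and the point is only that a system is a composition of subsystems connected through their interfaces.

```python
# A system seen as input -> process -> output, composed of subsystems.

def input_subsystem(raw_text):
    # Accepts raw input from the environment and converts it into internal records.
    return [int(token) for token in raw_text.split()]

def processing_subsystem(records):
    # Transforms the records (here, a simple aggregation).
    return sum(records)

def output_subsystem(result):
    # Presents the processed result back to the environment.
    return "Total = {}".format(result)

def whole_system(raw_text):
    # The overall system is the composition of its subsystems.
    return output_subsystem(processing_subsystem(input_subsystem(raw_text)))

print(whole_system("10 20 30"))   # Total = 60
```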

DETERMINISTIC AND PROBABILISTIC SYSTEMS

A deterministic system operates in a predictable manner; the interactions among its subsystems are known with certainty. If the state of the system at a given point in time and its operation are described, the next state of the system can be given exactly, without error. For example, combining a correct set of elements under specified constraints will produce an exactly known output.

A probabilistic system is so named because of its probabilistic nature; there is some chance of error in predicting the output of the system. A pricing system (for instance, share market prices) is a probabilistic system: it involves prediction of demand and of the market, but the exact value at any given time is not known.
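
The contrast can be sketched as follows, with invented figures: the deterministic function always returns the same next state for the same inputs, while the probabilistic one can only be described in terms of the distribution of its outcomes.

```python
import random

def deterministic_system(state, step):
    # Given the present state and the operation, the next state is known exactly.
    return state + step

def probabilistic_system(expected_price):
    # Only probable behaviour is known; the exact value at any given time is not.
    return expected_price * random.uniform(0.95, 1.05)   # assumed +/- 5% variation

print(deterministic_system(100, 5))    # always prints 105
print(probabilistic_system(250.0))     # varies around 250 on every call
```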

CLOSED AND OPEN SYSTEMS

A closed system is self-contained; it does not exchange material, information, or energy with its external environment. For instance, a radioactive reaction in an insulated, sealed container is a closed system; such a system eventually becomes disorganized.
Open systems exchange information, material and energy with the environment. Humans and organizational systems, for instance, are examples of open systems. An open system reacts to changes in the external environment and survives in that environment.

SUBSYSTEMS

A system is an integration of various subsystems and the interfaces between them. This is a basic concept in the analysis and development of systems. Developing a complex system as a whole is tedious, so the system is decomposed into subsystems. This process of decomposition is continued until the subsystems are reasonably independent in behaviour and manageable in size. For instance, the funds management system is described in the figure as a combination of subsystems.

FIGURE 1.4: Funds management system decomposed into subsystems: internal funds flow processing, cash flow processing, check printing, and working capital processing

1.5 INFORMATION SYSTEM AS A SYSTEM

The information system processes the inputs of data it receives and produces outputs. The basic system model consists of input, process and output, as well as data storage. Rather than collecting and transforming data into information each time, keeping the data in storage helps in subsequent use. This basic information processing model is useful in understanding not only the overall information processing system but also individual information processing applications. Each application may be analyzed in terms of input, storage, processing and output. The information processing system has functional subsystems and activity-based subsystems.
FIGURE 1.5: The management information system pyramid, with transaction processing and operational control at the base, handled by lower-level management and clerical staff (structured decisions), and management control and strategic planning at the top, handled by higher-level management (unstructured decisions)

The management information system has been described as a pyramid structure in which the bottom layer consists of information for transaction processing, status inquiries, etc.; the next level consists of information resources in support of day-to-day operations and control; the third level consists of information system resources to aid in tactical planning and decision making for management control; and the top level consists of information resources to support strategic planning and policy making by higher levels of management. Each level of information processing may make use of data provided by lower levels, but new data about activities external to the organization may also be introduced.
FIGURE 1.6: Conceptual framework of Management Information Systems. The functional subsystems (finance and accounting, sales and marketing, production, personnel, and logistics) cut across the levels of strategic planning, management control and decision making (DSS, EIS), operational control and decision making (MIS, DSS), and transaction processing (TPS, OAS), all built on an integrated database.

SUBSYSTEMS OF AN MIS

A management information system is a composite of subsystems. There are two categories of subsystems: those based on the organizational functions which they support, and those based on the managerial activities for which they are used.

1.5.1 ORGANIZATIONAL FUNCTION SUBSYSTEMS

The organizational functions are separable based on managerial responsibility. Each functional subsystem has its own characteristic procedures, models, etc. Some of the support systems and data used by a subsystem are common to more than one subsystem. Typical major subsystems for a business organization engaged in manufacturing are:
TABLE 1.3

Marketing: market research, market testing, demand forecasting, sales planning.
Manufacturing: material resource planning, scheduling, resource allocation, cost control.
Logistics: stock analysis, inventory control, distribution.
Personnel: human resource planning, performance appraisal, administrative control, payroll.
Finance and accounting: financial control, cost analysis, capital budgeting, income measurement.

1.5.2. ACTIVITIES SUBSYSTEMS

Another approach to understanding the structure of an information system is in terms


of the subsystems which perform various activities. Some of the activities subsystems will be
useful for more than one organizational function subsystem; others will be useful for only
one function. Examples of major activities subsystems are:

TABLE 1.4

Transaction processing: processing of orders, shipments and receipts.
Operational control: scheduling, analysis of performance reports.
Management control: budget preparation and resource allocation.
Strategic planning: formulation of objectives and strategic plans.


TABLE 1.5

Strategic planning
Definition: What will the organization serve, and what will it be like, five years from now and beyond? The strategic plan should include the businesses to be in, the markets to sell to, etc.
Comments: Definition of goals, policies, and general guidelines charting the course for the organization; determination of organizational objectives.

Tactical planning
Definition: Physical implementation of the strategic plans over one to five years, reflected in the capital expenditure budget and the long-range staffing plan. What is the optimal product pricing pattern?
Comments: Acquisition of resources, acquisition tactics, plant location, new products; establishment and monitoring of budgets.

Operations planning
Definition: Allocation of tasks to each organizational unit in order to achieve the objectives of the tactical plan over one to twelve months; the yearly budget.
Comments: Effective and efficient use of existing facilities and resources to carry out activities within budget constraints.

TRANSACTION PROCESSING SYSTEM AND OFFICE AUTOMATION SYSTEM

A TPS serves the needs of the operational level of the organization. It records the day-to-day transactions that are the basis for the conduct of business, and it is highly structured. Transaction processing may be performed manually or with mechanical machines; computer-based data processing has altered the speed and complexity of transaction processing, but not the basic function.

TPS are major producers of information for the other types of systems used at the various other levels of management. Without transaction processing, normal organizational functioning would be impossible, and the data for management activities would not be available. For instance, in banking, the transaction processing systems supply data to the organization's ledgers, which are responsible for maintaining the records used to analyze the performance of the organization through the balance sheet and so on. A TPS can also connect the organization with its stakeholders (customers and suppliers).

The transaction processing cycle begins with a transaction which is recorded in some
way. Although hand-written forms are still very common, transactions are often recorded
directly to a computer by the use of an online terminal. Recording of the transaction is
generally the trigger to produce a transaction document. Data from the transaction is
frequently required for the updating of master files; this updating may be performed
concurrently with the processing of transaction document or by a subsequent computer run.
FIGURE 1.7: Transaction processing cycle. Sales or purchase transactions are entered on transaction forms, assembled into a batch, converted from forms to input records, and validated; valid transactions update the records and appear on a transactions report, invalid transactions are corrected and re-entered, and a control log is maintained throughout.
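
A minimal sketch of this cycle, with invented record layouts and transactions, might look like the following: every transaction is written to a control log, valid transactions update the master file and appear on the transactions report, and invalid ones are set aside for correction.

```python
# Hypothetical transaction processing cycle: log, validate, update master file.
master_file = {"A-101": 500, "A-102": 120}      # item code -> stock on hand (invented)
control_log, transactions_report, invalid_batch = [], [], []

batch = [
    {"item": "A-101", "qty": -30},   # issue of stock
    {"item": "A-999", "qty": 10},    # unknown item: will fail validation
    {"item": "A-102", "qty": 50},    # receipt of stock
]

for transaction in batch:
    control_log.append(transaction)                 # every transaction is logged
    if transaction["item"] not in master_file:      # validation step
        invalid_batch.append(transaction)           # set aside for correction
        continue
    master_file[transaction["item"]] += transaction["qty"]   # master file updating
    transactions_report.append(transaction)         # appears on the transactions report

print("Master file:", master_file)
print("Valid:", len(transactions_report), "Invalid:", len(invalid_batch))
```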

Office automation is a popular term for the application of computer and


communications technology to office functions. It supports not only clerical office work but
also the work of management and professionals. Document preparation, message and
document communications, and public data services are some of the examples of OAS.
Office automation has a number of applications ranging from internal communication to
long-distance external communication. The major applications of office automation are in
following areas:
Word processing.
Desktop publishing.
Videotex.
Document imaging.
Electronic mail.
Electronic calendaring.
Video & Audio conferencing.
Facsimile transmission.

Decision support systems, executive information systems, and expert systems are used at the middle and top levels of management; they will be discussed in Unit 3.

1.6 MIS AND OTHER DISCIPLINES

Concepts of MIS are found in management accounting, management and organization


theory, operations research, and computer science.

MANAGEMENT ACCOUNTING

Financial accounting and management accounting are the two broad categories of accounting. Financial accounting deals with the measurement of profit for an accounting period and with analysing the financial status at the end of the period. This is helpful to the stakeholders, but it has limited decision content for management. In contrast, management accounting consists of the methods and concepts necessary for planning, for choosing among alternative business actions, and for control through the evaluation and interpretation of performance. Management accounting thus provides inputs for decision making in the areas of planning and control.

BEHAVIOURAL THEORY

An MIS helps in the effective functioning of the organization. The fields of management and behavioural theory help in understanding the functions of an MIS and the behaviour of managers at each level of the organization: individual and group decision making, individual and team motivation, leadership styles and the traits of leaders, organizational change management and group dynamics, organisation structure and culture, and so on. Knowledge of these concepts helps the designer of an MIS to understand the behaviour patterns and the types of decisions made by managers at each level of the organization.

OPERATIONS RESEARCH

Operations research applies quantitative techniques to managerial decision problems in which managers search for an optimum result. In operations research, various mathematical and decision models are used for solving problems. Operations research concepts are helpful in developing models for “what if” analysis, mainly used in DSS and MIS, and in computer-based solutions of many types of decision problems. In other words, the systematic approach to problem solving, the use of models, and computer-based solutions are generally incorporated in decision support systems.

COMPUTER SCIENCE

Computer science deals with the hardware and software of computer systems. Knowledge of computer science makes information storage, processing, and retrieval faster. Computer science covers the concepts of algorithms, computation and data structures, which are important in the development of an MIS. However, a modern MIS is not merely an extension of computer science; the emphasis in MIS is on the application of the technical capabilities that computer science has made available. The fundamental processes of management information systems are more related to organizational processes and organizational effectiveness than to computer algorithms.

Various academic disciplines thus contribute to the development of MIS, which is considered a separate field of study; these other disciplines help in designing and developing an MIS.

1.7 NEED FOR COMPUTER-BASED INFORMATION SYSTEM

As long as organizations are small and have limited operational goals, manual information systems are satisfactory. However, many trends in the development of industry and commerce have made computer-based information systems essential for efficiently running organizations. These trends are:

The size of organizations is becoming larger. This is particularly true in India due to
increase in population and rapid rate of industrial development.

Computer - based processing enables the same data to be processed in many ways,
based on needs, thereby allowing managers to look at the performance of an organization
from different angles.

As the volume of data has increased and the variety of information and their
timeliness is now of great importance, computer – based information processing has now
become essential for efficiently managing organizations.

Organizations are now distributed, with many branches. Markets are becoming competitive. To maintain a favourable balance of payments in a country, organizations have to be internationally competitive.

The general socio - economic environment demands more up-to-date and accurate
information. Human systems are changing faster than ever before. Governmental regulations
have become complex. Organizations have to interact with many other interested parties such
as consumer groups, environmental protection groups, financial institutions, etc., which did
not exist before.

All the above developments demand decision making based on up-to-date, well
analyzed and presented information rather than thumb rules and hunches of an earlier era.
1.8 THE ROLE OF A SYSTEMS ANALYST

An MIS is possible without computers too; however, when an organization adopts a computer-based information system, the systems analyst plays a vital role. A systems analyst's primary responsibility is to identify the information needs of an organization and obtain a logical design of an information system which will meet these needs. Such an information system will be a combination of manual and computer-based procedures to process a collection of data useful to managers in taking decisions. The analyst must have knowledge of the data flow and processes of the organization.

The personnel involved in the management information systems are: end users (clerical,
operational and top level managers); professionals (database administrator, programmers).

The systems analyst coordinates the efforts of all these personnel to effectively develop and
operate computer - based information systems.

The requirements must be collected from the users of the system. Users may not be very familiar with coding terminology, so the systems analyst must gather the requirements from the users of the information system. This is best achieved by having a common meeting with all the users and arriving at a consensus.
The systems analyst requires good interpersonal relations and diplomacy: he must be able to convince all the users about the soundness of the group decision and obtain their cooperation. An analyst studies the problem in depth and helps in choosing the best solution, weighing the relative difficulties in implementing each of the alternatives.
An analyst is responsible for obtaining the functional specification; it must be precise and detailed, and it must be non-technical so that users, clerks, middle-level managers and top managers of the organization are able to understand it.
He is also responsible for designing a system which is understandable and accommodates changes easily. The analyst must know the latest design tools to assist him in his task. As part of the design he must also create a system test plan.

A systems analyst must be familiar with all the functions of the organization. He must be aware of new technological changes, since he is responsible for the feasibility analyses. He must be a good listener, a good communicator, a good diplomat, a conflict resolver, a motivator and an influencer.

1.9 EVOLUTION OF MANAGEMENT INFORMATION SYSTEMS

Based on the needs and requirements of organizations, MIS has been evolving over a period of time. MIS was operated manually before computer applications were introduced in this area. The following table illustrates the evolution of information systems.

TABLE 1.6

1951-60: Electronic data processing. Collecting, manipulating and storing of data; no scope for decision making.
1961-70: Management information systems. Pervasive in decisions at all levels of management; solutions for structured decisions.
1971-80: Decision support systems, expert systems. Analytical models for semi-structured decisions.
1981 and above: Artificial intelligence, executive information systems. Solutions for unstructured decision making through advanced graphics.
1985 and above: Knowledge management systems, end-user computing. Intelligent workstations for knowledge work, which involves thinking, processing information and formulating analyses, recommendations and procedures.

1.10 BUSINESS CASES

All organizations are divided into many departments or sections, with each department having an assigned functional responsibility. Consider, for example, an educational institute such as Anna University. Besides academic departments, it will typically have a central administrative office. The administrative office will be divided into many sections, each with an assigned function. Typically the sections will be: a student section, which normally deals with student records, student admissions, etc.; an accounts section; a purchase section; a stores section; a personnel section; a medical section; and a student hostel office. A hierarchical chart of the sections is shown in the figure and their functions in the table. A manufacturing organization, for instance, will have the functions shown in the subsequent table. Division of an organization into departments with specified functions is mainly intended to let each department focus on an area of responsibility. All departments have to coordinate their activities to meet the overall objectives of the organization; this coordination is normally provided by higher-level management in the organization. The functions of various departments of a University are listed below.

TABLE 1.7

Student section: students' admission records; administering admission tests; students' academic records; students' registration information; placement.
Accounts section: university budget; payroll; general ledger of receipts/payments; scholarships.
Purchase section: order processing; vendor selection.
Stores: stock register maintenance; issues; receipts.
Hostel office: mess records; hostel purchases/stores; room assignment; residents' data.
Medical centre: medical records; medicine purchase/stores.
Works department: building construction; building maintenance; maintenance of electrical installations, water supply; maintenance of roads, gardens.
Personnel: personnel records (leave, tenure); personnel assessment; personnel recruitment.
Miscellaneous: mailing; telephones; transport.

We look at two diverse organizations in this section: a University and a manufacturing organization. The functions of various departments of a manufacturing organization are listed below.

TABLE 1.8

Production: production planning and control; maintenance management; bill of materials processing.
Marketing: order processing; advertising; customer records/follow up; sales analyses.
Finance: billing, payments; payroll; costing; share accounting; budget and finance planning; tax planning; resource mobilization.
Personnel: recruitment; records; training; deployment of labor; assessment/promotions.
Stores: stock ledger keeping; issues/reorder; receipts; enquiry processing.
Purchase: order processing; vendor development; vendor selection.
Maintenance: physical facilities; communication facilities; electricity and water supply.
Research and development: production improvement; product development; product testing; product design.

We see that there are some common functions, such as personnel, purchase, stores and accounts, and there are organization-specific functions, such as the student section in a university and the production section in a manufacturing organization. This is a general observation. Information processing methods, however, have general features regardless of the organization for which they are designed.

SUMMARY

The vital role of information in an organization has been discussed, along with how the manager should react to information from the external and internal environments. Information systems have become essential for helping organizations deal with changes in global economies and in business. The kinds of systems built today are very important for the organization's overall performance, especially in today's highly globalized and information-based economy. Information systems are driving both daily operations and organizational strategy. Powerful computers, software and networks, including the internet, have helped organizations become more flexible, eliminate layers of management, separate work from location, coordinate with suppliers and customers, and restructure work flows, giving new powers to both line workers and management. Information technology provides managers with tools for more precise planning, forecasting and monitoring of the business. To maximize the advantages of information technology, there is a much greater need to plan the organization's information architecture and information technology infrastructure.

REVIEW QUESTIONS

1. Define Management Information systems.


2. What is a system? Discuss some systems you know.
3. Discuss the types of subsystem in each level of the management.
4. Explain the subsystem based on the functionality of the organization.
UNIT 2
SYSTEM DEVELOPMENT

2.1 OVERVIEW OF SYSTEM DEVELOPMENT


2.1.1. SYSTEM DEVELOPMENT LIFE CYCLE
2.1.1.1. SYSTEMS ANALYSIS
2.1.1.2. SYSTEM DESIGN
2.1.1.2.1. PHYSICAL SYSTEM DESIGN
2.1.1.2.2. DATABASE DESIGN
2.1.1.3. PROGRAMMING
2.1.1.4. TESTING
2.1.1.5. CONVERSION
2.1.1.6. PRODUCTION AND MAINTENANCE
2.1.1.7. POST AUDIT
2.1.1a. STRUCTURED METHODOLOGIES
2.1.1a.1. STRUCTURED ANALYSIS
2.1.1a.2. STRUCTURED DESIGN
2.1.2. SPIRAL MODEL
2.1.3. RAPID APPLICATION DEVELOPMENT (RAD)
2.1.4. END - USER DEVELOPMENT
2.1.5. APPLICATION SOFTWARE PACKAGES AND OUTSOURCING
2.2. COMPARISON BETWEEN SYSTEM DEVELOPMENT APPROACHES

SUMMARY
REVIEW QUESTIONS
2.1 OVERVIEW OF SYSTEM DEVELOPMENT

The systems life cycle is the oldest method for building information systems.
The life cycle methodology is a phased approach to building a system, dividing
systems development into formal stages.

The systems life cycle methodology maintains a very formal division of labour
between end users and information systems specialists. Technical specialists, such as
system analysts and programmers, are responsible for much of the systems analysis,
design, and implementation work; end users are limited to providing information
requirements and reviewing the technical staff’s work. The life cycle also emphasizes
formal specifications and paperwork; so many documents are generated during the
course of a systems project.

The systems life cycle is still used for building large complex systems that
require a rigorous and formal requirements analysis, predefined specifications, and
tight controls over the systems - building process. However, the systems life cycle
approach can be costly, time consuming, and inflexible. Although systems builders
can go back and forth among stages in the life cycle, the systems life cycle is
predominantly a “waterfall” approach in which tasks in one stage are completed
before work for the next stage begins. Activities can be repeated, but volumes of new
documents must be generated and steps retraced if requirements and specifications
need to be revised. This encourages freezing of specifications relatively early in the
development process. The life cycle approach is also not suitable for many small
desktop systems, which tend to be less structured and more individualized.

The systems development life cycle activities depicted here usually take place
in sequential order. But some of the activities may need to be repeated or some may
take place simultaneously, depending on the approach to system building that is being
employed.

2.1.1. SYSTEM DEVELOPMENT LIFE CYCLE (waterfall model)


2.1.1.1. SYSTEMS ANALYSIS
Systems analysis is the analysis of the problem that the organization will try to
solve with an information system. It consists of defining the problem, identifying its
causes, specifying the solution and identifying the information requirements that must
be met by a system solution.

The systems analyst creates a road map of the existing organization and
systems, identifying the primary owners and users of data along with existing
hardware and software. The systems analyst then details the problems of existing
systems. By examining documents, work papers, and procedures; observing system
operations; and interviewing key users of the systems, the analyst can identify the
problem areas and objectives a solution would achieve. Often the solution requires
building a new information system or improving an existing one. The systems
analysis would include a feasibility study to determine whether that solution was
feasible, or achievable, from a financial, technical, and organizational standpoint.
Building a system can be broken down into six core activities.
FIGURE 2.1

System Analysis

System Design

Programming

Testing

Conversion

Production and
Maintenance

Fig 2.1 Waterfall model


The feasibility study would determine whether the proposed system was a
good investment, whether the technology needed for the system was available and
could be handled by the firm’s information systems specialists, and whether the
organization could handle the changes introduced by the system.

TABLE 2.1

Systems analysis
Proposal definition: preparation of a request for a proposed application.
Feasibility assessment: evaluation of feasibility and cost-benefit of the proposed application.
Information requirement analysis: determination of the information needed.
Conceptual design: user-oriented design of the application.

Systems design
Physical system design: detailed design of flows and processes; design of the application processing system; preparation of program specifications.
Physical database design: design of the internal schema for data in the database, or design of files.

Programming
Program development: translate design specifications into program code.
Procedure development: design of procedures and preparation of user instructions.

Testing
Unit test: code testing.
Systems test: system integration testing.

Conversion
Acceptance test: final system test and conversion; train users and technical staff.

Operation & maintenance
Operate the system: day-to-day operation.
Evaluate the system: finding bugs and maintenance.
Modify the system: modification.

Post audit
Evaluation of the development process, the application system and the results of use.

TABLE 2.1: Systems Development

Normally, the systems analysis process identifies several alternative solutions that the organization can pursue, and then assesses the feasibility of each. A written systems proposal report describes the costs and benefits, and the advantages and disadvantages, of each alternative. It is up to management to determine which mix of costs, benefits, technical features, and organizational impacts represents the most desirable alternative.

System analysis involves the following aspects of system development:


1. Feasibility study.
2. Requirement analysis.

1. FEASIBILITY STUDY

A feasibility study is the process of determining whether a system is appropriate in the context of organisational resources and constraints and meets the user requirements. The basic objective of the feasibility study is to identify whether the proposed system is feasible and will be more appropriate than the existing system, before organisational resources are committed to the system.
A feasibility study covers economic feasibility, technical feasibility,
operational feasibility, and legal feasibility.

ECONOMIC FEASIBILITY

Economic feasibility involves determining whether the given system is economically viable. This is done through a cost/benefit analysis of the system to identify whether the benefits are more than the costs. For a system to be economically viable, its benefits must exceed its costs.
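
A simple way to express this check is sketched below with invented figures: the estimated costs and benefits over the system's useful life are totalled, and the system is considered economically viable only if the benefits exceed the costs (a real study would usually also discount future values).

```python
# Hypothetical cost/benefit check for economic feasibility (amounts in rupees).
development_cost = 800_000
annual_operating_cost = 150_000
annual_benefit = 400_000          # assumed savings plus additional revenue
useful_life_years = 5

total_cost = development_cost + annual_operating_cost * useful_life_years
total_benefit = annual_benefit * useful_life_years

print("Total cost    :", total_cost)
print("Total benefit :", total_benefit)
print("Economically viable:", total_benefit > total_cost)
```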

TECHNICAL FEASIBILITY

Technical feasibility identifies whether the proposed system is technically viable with the available hardware, software, and technical resources. It centres on the existing technical facilities and the extent to which these can support the proposed addition. If additional technical facilities are required to implement the system, technical feasibility also takes into account whether the required technology is available in the market and is compatible with the organisation's existing technology. The technological risk involved in the technology to be acquired is also assessed.

OPERATIONAL FEASIBILITY

Operational feasibility, also known as behavioural feasibility, determines whether the proposed system will work effectively within the existing managerial and organisational framework. Often a new system faces resistance from people, as they are inherently resistant to change while the new system requires change. It is a common phenomenon that computer installations have something to do with the transfer, retraining, and job status of employees. Similarly, resistance may come from management too if the new system does not fit with the existing organisational policies relating to information systems. For example, if the new system requires web-based technology while the existing policy is to use fourth-generation technology, the new system will face resistance from the management. Operational feasibility must take this factor into account while measuring operational viability.

LEGAL FEASIBILITY

Legal feasibility tries to ensure that the new system meets the requirements of the various information technology regulations, such as privacy laws, computer crime laws, software theft laws, laws on malicious access to data, international laws, etc. The Government of India has formulated a comprehensive Act, the Information Technology Act, 2000, which provides regulations for the use of information technology.

STEPS IN FEASIBILITY STUDIES

Many feasibility studies are disillusioning for both users and analysts. These studies often presuppose that, when the feasibility document is being prepared, the analyst is in a position to evaluate the solution. In order to make the feasibility study report meaningful for decision making, a logical procedure must be followed, consisting of the following steps.
1. FORMATION OF A PROJECT TEAM:

The team may consist of system analysts and user staff. In some cases, even
outside consultants and information system specialists may be included in the team.
The team should have a project leader who may guide the other members as to how to
proceed for the job.

2. IDENTIFICATION OF SUITABLE SYSTEM:

Keeping in mind the objectives for which a new system is required, alternative
systems should be identified. All systems work on the principle of equifinality. This
principle suggests that a system can reach the same final state from differing initial
conditions and by a variety of paths. It implies that many information systems may be
able to achieve pre-determined objectives though their approaches may be different.
Therefore, at this stage, a number of systems should be identified so that the project
team has several alternatives to choose a system that best fits organisational
requirements.

3. IDENTIFICATION OF CHARACTERISTICS OF SYSTEMS:

At this stage the team identifies the various characteristics of the systems so that those systems that do not meet the initial selection criteria are eliminated, as it is difficult and time consuming to carry out a detailed evaluation of a large number of systems. These initial criteria may be in the form of the volume of investment required, operational efficiency, organisational constraints, etc. Only those systems that meet these initial criteria go to the next step.

4. PERFORMANCE AND COST EVALUATION:

Detailed performance and cost evaluation is carried out for those systems which pass successfully through the previous step. At this stage, the performance of each system is evaluated against the performance criteria set before the start of the feasibility study. Whatever criteria are set, there has to be as close a match as possible. Besides performance, each system should be evaluated in terms of costs too. The costs include all types of costs: the cost of initial investment in additional hardware, software and physical facilities; development and installation cost; the cost of training, updating the software and documentation; and the recurring operating cost. In many systems, the initial investment cost may be low but the operating cost may be high enough to offset the advantage of the lower initial cost. This fact should be taken into account. Weights can be assigned to performance criteria such as system accuracy, growth potential, response time, user friendliness, etc. In the same way, weights can be assigned to the different components of cost.
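
The weighting idea can be sketched as a simple weighted-score computation; the criteria, weights and ratings below are invented for illustration, and in practice the cost components would be weighted and scored in the same way.

```python
# Hypothetical weighted scoring of candidate systems against performance criteria.
weights = {"accuracy": 0.35, "growth potential": 0.25,
           "response time": 0.20, "user friendliness": 0.20}

# Ratings of each candidate system on a 1-10 scale (assumed values).
ratings = {
    "System A": {"accuracy": 8, "growth potential": 6,
                 "response time": 7, "user friendliness": 5},
    "System B": {"accuracy": 7, "growth potential": 8,
                 "response time": 6, "user friendliness": 8},
}

scores = {name: sum(weights[criterion] * value for criterion, value in rating.items())
          for name, rating in ratings.items()}

for name, score in scores.items():
    print(f"{name}: weighted score = {score:.2f}")
print("Highest-scoring system:", max(scores, key=scores.get))
```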

5. FINAL SELECTION OF THE BEST INFORMATION SYSTEMS:

Based on the weights assigned to the different systems, the system with the highest weighted score may be selected. However, in actual practice, the weighted scores of the different systems are not the sole criterion for selecting a system; other organisational factors, such as the organisational policy for procuring capital goods (hardware, software, etc. are capital goods), resource constraints, suppliers' reputation, etc., also play an important role in the selection of a system. Therefore, these factors should be taken into account.

6. PREPARATION OF FEASIBILITY REPORT:

Based on the feasibility report, management takes suitable action including the
final selection of a system. Usually, a project report contains the following items:
1. Covering letter containing briefly the nature of the study, general finding and
recommendations.
2. Table of contents indicating location of various parts of the Study.
3. Overview of the study indicating its objectives and the reasons for undertaking it.
4. Detailed findings indicating the projected performance of the system and
costs involved.
5. Recommendations and conclusions suggesting to the management the most
beneficial and cost-effective system.
6. System design and implementation schedule indicating the time to be taken in
completing various activities and the time by which the system becomes ready
for operational use.

After the final decision about a system is made by management, subsequent activities
proceed.

2. REQUIREMENT ANALYSIS

Requirement analysis defines the scope of the system and the functions it is
expected to perform. If the system is not designed according to information
requirements it will fail to achieve its objective in spite of choosing the best system.

What information is required by the end users of a system depends on the


organisation and its strategy to use information systems for generating competitive
advantage. At this stage, the system development team might be more interested in
analyzing how users do the job and what information they need for doing the job. For
this purpose, the team may use a variety of tools to identify the requirements -
secondary data (recorded), observation, interviews, questionnaires, and systems used
in other similar organizations.

SECONDARY DATA

Every organisation uses certain procedures and forms in performing different functions. A procedure is a series of related tasks that make up the chronological sequence and the established way of performing the work to be accomplished. If an organisation has adopted a policy of formalization, the various procedures are codified in a procedure manual, and information for completing a procedure can be found in the manual. However, before extracting information from the manual, it is necessary for the project team to ensure that the manual is updated and reflects the present situation.
Like procedures, an organisation uses different types of forms for conducting its operations. Printed forms are widely used for capturing and providing information. The objective of getting information from forms is to understand how the forms are used.

OBSERVATION

Another method of gathering information for analysis is on-site observation. The major objective of on-site observation is to get as close as possible to the system being studied. For this reason, it is important that the analyst is knowledgeable about the general makeup and activities of the system. The analyst should try to find out through observation:

1. The nature of the system: what does it do?
2. The users of the system and the person who is responsible for running the system.
3. The history of the system: how did it reach its present state?
4. The nature of the system in comparison to other systems of the organisation.

INTERVIEWS

Information about the likely requirements can also be collected through personal interviews. An interview is a formal, in-depth conversation conducted to gather information about how the present systems work and what modifications are required in them. Interviews can be used for two main purposes: as an exploratory device to identify relations or verify information, and to capture information as it exists. In conducting an interview, the analyst should proceed in the following manner:

1. Setting the stage for the interview by explaining the purpose of the interview and the nature of the information required.
2. Establishing rapport with the interviewee so as to obtain as much information as possible.
3. Asking questions and initiating deliberations for seeking information.
4. Obtaining and recording the information solicited through the interview.

QUESTIONNAIRES

A questionnaire is a formalized written schedule containing different questions
relevant to the object being studied. It obtains and records specific, relevant
information with tolerable accuracy and completeness. A questionnaire can be
administered directly, with the analyst filling it in on the basis of information
provided by the respondent, or indirectly, by requesting the respondent to fill it in
himself. Detailed questionnaires are quite useful for collecting quantitative
information where the group of respondents is small and the responses are well
structured.

2.1.1.2. SYSTEMS DESIGN

Systems analysis describes what a system should do to meet information


requirements; systems design shows how the system will fulfil this objective. The
design of an information system is the overall plan or model for that system. Like the
blueprint of a building or house, it consists of all the specifications that give the
system its form and structure.

The systems designer details the system specifications that will deliver the
functions identified during systems analysis. These specifications should address all
of the managerial, organizational, and technological components of the system
solution.

It is useful to divide application design into a conceptual design phase and a


physical design phase. The conceptual design emphasizes the application as seen by
those who will operate or use the outputs of the system; the physical design translates
those requirements into specifications for implementing the system. The conceptual
design establishes the inputs and outputs, functions to be performed by the
application, and application audits and controls. The physical design specifies how
the processing outlined in the conceptual design will actually be carried out.

Typical contents of a conceptual design report are the following:


• Input for the application
• Outputs produced by the application
• Functions to be performed by the application system.
• General flow of processing with relationships of major programs, files,
inputs, and outputs.
• Outlines of operating manuals, user manuals, and training materials needed
for the application.
• Audit and control processes and procedures for ensuring the accuracy and
integrity of processing.

2.1.1.2.1. PHYSICAL SYSTEM DESIGN

The physical system design phase, also called internal or detailed design,
consists of activities to prepare the detailed technical design of the application system.
The physical system design is based on the information requirements and the
conceptual design.

Physical design involves

• System design showing flow of work, programs, and user functions.


• Control design showing controls to be implemented at various points in the
flow of processing.
• Hardware specifications for the applications if new hardware is required
• Data communications requirements and specifications.
• The overall structure of programs required by the application with procedural
specifications on functions to be performed by, each.
• Security and backup provisions.
• An application test or quality assurance plan for the remainder of the
development.

The physical system design work is performed by systems analysts and other
technical personnel. Users may participate in this phase, but much of the work
requires data processing expertise rather than user function expertise. There are a
number of different methods an analyst may employ. By supporting the systematic
process of expanding the level of detail and documenting the results, the methods aid
in reducing the complexity and cost of developing application systems and in
improving the reliability and modifiability of the designs.

Generally, physical system design techniques achieve simplicity by subdividing


the application system into small, relatively self-contained modules. System modules
can be programs or procedures which are subsections of programs. System
complexity is reduced because each module can be developed, coded, and tested
relatively independently of the others. Reliability and modifiability are enhanced
because a change (whether a change in specifications, an enhancement, or a repair)
can be made to a system module with minimal, well - understood effects on the rest of
the system. The techniques are thus based on well - understood principles of systems
theory.

2.1.1.2.2. DATABASE DESIGN

Database design begins with designing the conceptual model of a database.


Conceptual modelling uses both current and potential applications and ends with an
E-R model and a data dictionary.

Conceptual model: specifies the relationships between data items; entity-relationship
analysis is used to group the data.

Normalization: a method of organizing data into tables so as

1. to reduce duplication of data;
2. to simplify adding, deleting and changing data - in other words, updating data; and
3. to select the data required for processing - in other words, retrieving data.

The normalized database is later converted into a physical database.
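To make the idea concrete, a minimal sketch follows; the table and field names are
invented for illustration and are not drawn from the text. An order record that repeats
supplier details is split so that each fact is stored only once, and retrieval then joins
the tables through a key.

# A hypothetical unnormalized order table repeats the supplier's name and city in
# every row, so a change to the supplier must be made in many places.
unnormalized_orders = [
    {"order_no": 101, "item": "Bolt", "supplier": "Acme", "supplier_city": "Pune"},
    {"order_no": 102, "item": "Nut",  "supplier": "Acme", "supplier_city": "Pune"},
]

# After normalization the supplier facts appear exactly once and each order refers to
# them through a key, which reduces duplication and simplifies updates and deletions.
suppliers = {"S1": {"name": "Acme", "city": "Pune"}}
orders = [
    {"order_no": 101, "item": "Bolt", "supplier_id": "S1"},
    {"order_no": 102, "item": "Nut",  "supplier_id": "S1"},
]

# Retrieval joins the two tables through the key.
for o in orders:
    s = suppliers[o["supplier_id"]]
    print(o["order_no"], o["item"], s["name"], s["city"])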


FIGURE 2.2

[Flowchart: design the conceptual model of the database (an E-R diagram based on
existing and potential applications); normalize the data; using the features of the DBMS
to be used and data on frequency of access, volume, additions and deletions, design the
physical model of the database; evaluate the performance of the physical model, using
feedback to improve it; implement.]

Fig 2.2 Steps in the design of a database system.

ENTITY-RELATIONSHIP MODEL:

Entity: distinct real-world items in an application (the nouns). For example, in
"a vendor supplies an item to a company", the entities are Supplier and Product.

Relationships: connect entities and represent meaningful dependencies between them
(the verbs). The act of supplying defines the relationship.

Attributes: specify the properties of entities and relationships.

In an E-R diagram an entity is drawn as a rectangle and a relationship as a
diamond-shaped box. In "orders placed to suppliers", the entities are Orders and
Suppliers and the relationship is "placed to".

Relationship cardinality is the number of relationships in which an entity can appear:

• one relationship,
• a fixed maximum number of relationships, or
• a variable number of relationships.
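As a rough sketch of how such an E-R description can be carried into a relational design
(the table and column names below are assumptions chosen only for illustration), each
entity becomes a table and the "placed to" relationship, with many orders placed to one
supplier, becomes a foreign key.

import sqlite3

# In-memory database used only for illustration.
con = sqlite3.connect(":memory:")
cur = con.cursor()

# Each entity (Supplier, Order) becomes a table; its attributes become columns.
cur.execute("CREATE TABLE supplier (supplier_id TEXT PRIMARY KEY, name TEXT)")

# The 'placed to' relationship has a cardinality of many orders to one supplier,
# so it is represented by a foreign key column in the order table.
cur.execute("""CREATE TABLE purchase_order (
                   order_no INTEGER PRIMARY KEY,
                   item TEXT,
                   supplier_id TEXT REFERENCES supplier(supplier_id))""")

cur.execute("INSERT INTO supplier VALUES ('S1', 'Acme Fasteners')")
cur.execute("INSERT INTO purchase_order VALUES (101, 'Bolt', 'S1')")

# Traversing the relationship is a join between the two tables.
for row in cur.execute("""SELECT o.order_no, o.item, s.name
                          FROM purchase_order o JOIN supplier s
                          ON o.supplier_id = s.supplier_id"""):
    print(row)

A one-to-one or variable cardinality would change only where the key is placed, or
whether a separate table is needed for the relationship itself.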

FIGURE 2.3

Fig 2.3 one to one relationship


FIGURE 2.4

Fig 2.4 E-R Diagram

The next step is to design a logical model of the database. The logical model
depends on the DBMS software. If it is a relational DBMS then relations form the
logical data model. There are two other DBMS, called the network DBMS and
hierarchical DBMS. Logical data model for these are different from relations. We will
not discuss these in this text.

The physical data model is designed to ensure good performance. In this step
data on frequency of use of data elements, access time needs, etc., are taken into
account. The physical data model is implemented and evaluated. The system structure
is shown in Figure 2.4

A good database system should


• Satisfy current and future application needs of an organization;
• Cater to unanticipated user requirements in the best possible way;
• Be expandable with the growth and changes in an organization;
• Be easy to modify with changes in hardware and software environment;
• Validate data before storage, as sketched after this list (validated data should be
correct and remain correct with changes in the system); and
• Allow only authorized persons to have access to the data stored in the
database.
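A minimal sketch of the validation point in the list above; the field names and rules are
assumed purely for illustration. Checks of this kind are applied before a record is
written to the database.

def validate_employee(record):
    """Return a list of validation errors; an empty list means the record may be stored.
    The rules below are illustrative only."""
    errors = []
    if not record.get("emp_id"):
        errors.append("emp_id is mandatory")
    if record.get("basic_pay", 0) <= 0:
        errors.append("basic_pay must be a positive amount")
    if record.get("department") not in {"FINANCE", "MARKETING", "HR", "OPERATIONS"}:
        errors.append("unknown department code")
    return errors

print(validate_employee({"emp_id": "E007", "basic_pay": 25000, "department": "HR"}))  # []
print(validate_employee({"emp_id": "", "basic_pay": -1, "department": "SALES"}))      # three errors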

2.1.1.3. PROGRAMMING

During the programming stage, system specifications that were prepared


during the design stage are translated into software program code. Today, many
organizations no longer do their own programming for new systems. Instead, they
purchase the software that meets the requirements for a new system from external
sources such as software packages from a commercial software vendor, software
services from an application service provider, or outsourcing firms that develop
custom application software for their clients.

2.1.1.4. TESTING

Exhaustive and thorough testing must be conducted to ascertain whether the


system produces the right results. Testing answers the question, “Will the system
produce the desired results under known conditions?”

Testing is time consuming: Test data must be carefully prepared, results


reviewed, and corrections made in the system. In some instances parts of the system
may have to be redesigned. The risks resulting from glossing over this step are
enormous.

Testing an information system can be broken down into three types of


activities: unit testing, system testing, and acceptance testing. Unit testing, or program
testing, consists of testing each program separately in the system. It is widely believed
that the purpose of such testing is to guarantee that programs are error free, but this
goal is realistically impossible. Testing should be viewed instead as a means of
locating errors in programs, focusing on finding all the ways to make a program fail.
Once they are pinpointed, problems can be corrected.
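A minimal sketch of unit (program) testing with Python's standard unittest module; the
function under test and its cases are invented for illustration. As noted above, the aim
is to find inputs that make the program fail, so a boundary case is included deliberately.

import unittest

def reorder_needed(stock_on_hand, reorder_point):
    # Program under test: flag an item when stock falls to or below the reorder point.
    return stock_on_hand <= reorder_point

class TestReorderNeeded(unittest.TestCase):
    def test_below_reorder_point(self):
        self.assertTrue(reorder_needed(5, 10))

    def test_exactly_at_reorder_point(self):
        # Boundary case - the kind of input most likely to expose an error.
        self.assertTrue(reorder_needed(10, 10))

    def test_above_reorder_point(self):
        self.assertFalse(reorder_needed(25, 10))

if __name__ == "__main__":
    unittest.main()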

SYSTEM TESTING

Tests the functioning of the information system as a whole. It tries to


determine whether discrete modules will function together as planned and whether
discrepancies exist between the way the system actually works and the way it was
conceived. Among the areas examined are performance time, capacity for file storage
and handling peak loads, recovery and restart capabilities, and manual procedures.

ACCEPTANCE TESTING

Provides the final certification that the system is ready to be used in a


production setting. Systems tests are evaluated by users and reviewed by
management. When all parties are satisfied that the new system meets their standards,
the system is formally accepted for installation.

The systems development team works with users to devise a systematic test
plan. The test plan includes all of the preparations for the series of tests we have just
described.

Figure 2.5 shows an example of a test plan. The general condition being tested
is a record change. The documentation consists of a series of test-plan screens
maintained on a database that is ideally suited to this kind of application.
2.1.1.5. CONVERSION

Conversion is the process of changing from the old system to the new system.
Four main conversion strategies can be employed: the parallel strategy, the direct
cutover strategy, the pilot study strategy, and the phased approach strategy.

In a parallel strategy both the old system and its potential replacement are
run together for a time until everyone is assured that the new one functions correctly.
This is the safest conversion approach because, in the event of errors or processing
disruptions, the old system can still be used as a backup. However, this approach is
very expensive, and additional staff or resources may be required to run the extra
system.

The direct cutover strategy replaces the old system entirely with the new
system on an appointed day. It is a very risky approach that can potentially be more
costly than running two systems in parallel if serious problems with the new system
are found. There is no other system to fall back on. Dislocations, disruptions, and the
cost of corrections may be enormous.

The pilot study strategy introduces the new system to only a limited area of
the organization, such as a single department or operating unit. When this pilot
version is complete and working smoothly, it is installed throughout the rest of the
organization, either simultaneously or in stages.

The phased approach strategy introduces the new system in stages, either by
functions or by organizational units. If, for example, the system is introduced by
functions, a new payroll system might begin with hourly workers who are paid
weekly, followed six months later by adding salaried employees (who are paid
monthly) to the system. If the system is introduced by organizational units, corporate
headquarters might be converted first, followed by outlying operating units four
months later.

Moving from an old system to a new one requires that end users be trained to
use the new system. Detailed documentation showing how the system works from
both a technical and end-user standpoint is finalized during conversion for use in
training and everyday operations. Lack of proper training and documentation
contributes to system failure, so this portion of the systems development process is
very important.

When developing a test plan, it is imperative to include the various conditions


to be tested, the requirements for each condition tested, and the expected results. Test
plans require input from both end users and information systems specialists.
FIGURE 2.5

Address and Maintenance Procedure Test Series 2
"Record Change Series"

Prepared By:                 Date:                 Version:

Test   Condition                Special                Expected         Output On         Next
Ref.   Tested                   Requirements           Results                            Screen
2.0    Change records
2.1    Change existing record   Key field              Not allowed
2.2    Change nonexistent       Other fields           "Invalid key"
       record                                          message
2.3    Change deleted record    Deleted record         "Deleted"
                                must be available      message
2.4    Make second record       Change 2.1 above       OK if valid      Transaction file  V45
       change
2.5    Insert record                                   OK if valid      Transaction file  V45
2.6    Abort during change      Abort 2.5              No change        Transaction file  V45

Fig 2.5 Test Plan

2.1.1.6. PRODUCTION AND MAINTENANCE

After the new system is installed and conversion is complete, the system is
said to be in production. During this stage, the system will be reviewed by both users
and technical specialists to determine how well it has met its original objectives and to
decide whether any revisions or modifications are in order. In some instances, a
formal post-implementation audit document is prepared. After the system has been
fine-tuned, it must be maintained while it is in production to correct errors, meet new
requirements, or improve processing efficiency. Changes in hardware, software,
documentation, or procedures to a production system for these purposes are termed
maintenance.

Studies of maintenance have examined the amount of time required for


various maintenance tasks. Approximately 20 percent of the time is devoted to
debugging or correcting emergency production problems; another 20 percent is
concerned with changes in data, files, reports, hardware, or system software. But 60
percent of all maintenance work consists of making user enhancements, improving
documentation, and recoding system components for greater processing efficiency.
The amount of work in the third category of maintenance problems could be reduced
significantly through better systems analysis and design practices. Table summarizes
the systems development activities.

2.1.1.7. POST AUDIT

A desirable part of the system development life cycle is a review of the


application after it has been in operation for a period, such as a year. An audit team
with representatives from users, development, maintenance, operations, and perhaps
internal audit reviews the operation, use, cost, and benefits of the application.
Recommendations from a post audit include specific recommendations for dropping,
repairing, or enhancing an application and suggestions for improving the development
process on subsequent applications.

2.1.1a. STRUCTURED METHODOLOGIES

Structured methodologies have been used to document, analyze, and design


information systems since the 1970s. Structured refers to the fact that the techniques
are step by step, with each step building on the previous one. Structured
methodologies are top - down, progressing from the highest, most abstract level to the
lowest level of detail - from the general to the specific.

2.1.1a.1. STRUCTURED ANALYSIS

In analysing the present system and likely future requirements of the proposed
system, the analyst collects a great deal of relatively unstructured data through procedure
manuals, interviews, questionnaires, and other sources. The traditional approach is to
organize and convert the data into system flowcharts, which support future
development of the system and simplify communication with the users. However, a
system flowchart represents a physical rather than a logical system, which makes it
difficult to distinguish between what happens and how it happens in the system. To
overcome this problem, structured analysis is undertaken: a set of techniques and
graphical tools that allow the analyst to develop a new kind of system specification
that is easily understandable to the users. Structured analysis uses data flow
diagrams. Structured analysis has the following features:

1. Structured analysis is graphic: it presents a picture of what is being
specified and is a conceptually easy-to-understand presentation of the
application.
2. The process used in structured analysis is partitioned so that a clear picture of
the progression from general to specific in the system flow emerges.
3. Structured analysis is logical rather than physical. It specifies in a precise,
concise, and highly readable manner the working of the system.
4. In structured analysis, certain tasks that are normally carried out late in
system development are undertaken at the analysis phase. For example, user
procedures are documented during analysis rather than in the design or
implementation phase.

2.1.1a.2. STRUCTURED DESIGN

Structured development methods are process - oriented, focusing primarily on


modelling the processes, or actions that capture, store, manipulate, and distribute data
as the data flow through a system. These methods separate data from processes. A
separate programming procedure must be written every time someone wants to take
an action on a particular piece of data. The procedures act on data that the program
passes to them.

The primary tool for representing a system's component processes and the flow
of data between them is the data flow diagram (DFD). The data flow diagram offers
a logical graphic model of information flow, partitioning a system into modules that
show manageable levels of detail. It rigorously specifies the processes or
transformations that occur within each module and the interfaces that exist between
them.

FIGURE 2.6

[Symbols used in a DFD: external entity, flow of data, process, and data store.]

Fig 2.6 Symbols in DFD

Figure 2.7 shows a simple data flow diagram for a mail-in university course
registration system. The rounded boxes represent processes, which portray the
transformation of data. The square box represents an external entity, which is an
originator or receiver of data located outside the boundaries of the system being
modelled.

The open rectangles represent data stores, which are either manual or
automated inventories of data. The arrows represent data flows, which show the
movement between processes, external entities, and data stores. They always contain
packets of data with the name or content of each data flow listed beside the arrow.

This data flow diagram shows that students submit registration forms with
their name, identification number, and the numbers of the courses they wish to take.
In process 1.0 the system verifies that each course selected is still open by
referencing the university's course file. The file distinguishes courses that are open
from those that have been canceled or filled. Process 1.0 then determines which of
the student's selections can be accepted or rejected. Process 2.0 enrolls the student in
the courses for which he or she has been accepted. It updates the university's course
file with the student's name and identification number and recalculates the class size.
If maximum enrollment has been reached, the course number is flagged as closed.
Process 2.0 also updates the university's student master file with information about
new students or changes in address. Process 3.0 then sends each student applicant a
confirmation-of-registration letter listing the courses for which he or she is
registered and noting the course selections that could not be fulfilled.
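To make the flow concrete, a very small sketch of the three processes follows; the data
structures and function names are assumptions made for illustration and are not part of
the diagram itself.

# Hypothetical course file: course number -> seats remaining and enrolled students.
course_file = {"CS101": {"seats": 1, "students": []},
               "MA201": {"seats": 0, "students": []}}
student_master_file = {}

def verify_availability(requested_courses):            # Process 1.0
    accepted = [c for c in requested_courses
                if course_file.get(c, {}).get("seats", 0) > 0]
    rejected = [c for c in requested_courses if c not in accepted]
    return accepted, rejected

def enroll_student(student_id, name, accepted):         # Process 2.0
    student_master_file[student_id] = {"name": name}
    for c in accepted:
        course_file[c]["students"].append(student_id)
        course_file[c]["seats"] -= 1                    # flagged closed when seats reach 0

def confirm_registration(name, accepted, rejected):     # Process 3.0
    return f"Dear {name}: registered for {accepted}; not available: {rejected}"

accepted, rejected = verify_availability(["CS101", "MA201"])
enroll_student("S42", "A. Student", accepted)
print(confirm_registration("A. Student", accepted, rejected))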

The diagrams can be used to depict higher - level processes as well as lower -
level details. Through leveled data flow diagrams, a complex process can be broken
down into successive levels of details. An entire system can be divided into
subsystems with a high - level data flow diagram. Each subsystem in turn, can be
divided into additional subsystems with second - level data flow diagrams, and the
lower - level subsystems can be broken down again until the lowest level of details
has been reached.

Another tool for structured analysis is a data dictionary, which contains


information about individual pieces of data and data groupings within a system. The
data dictionary defines the contents of data flows and data stores so that systems
builders understand exactly what pieces of data they contain. Process specifications
describe the transformation occurring within the lowest level of the data flow
diagrams. They express the logic for each process.
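As an illustrative sketch (the entry shown is assumed, not taken from an actual
dictionary), a data dictionary entry records the content and characteristics of a data
flow or data store so that every builder interprets it the same way.

# One hypothetical data dictionary entry for the "Requested-courses" data flow.
data_dictionary = {
    "Requested-courses": {
        "type": "data flow",
        "source": "Student",
        "destination": "Process 1.0 Verify availability",
        "composition": ["student-name", "student-id",
                        "course-number (repeats 1 to 6 times)"],
        "notes": "Arrives on the mail-in registration form",
    }
}

for name, entry in data_dictionary.items():
    print(name, "->", ", ".join(entry["composition"]))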

The system has three processes: Verify availability (1.0), Enroll student
(2.0), and Confirm registration (3.0). The name and content of each of the data flows
appear adjacent to each arrow. There is one external entity in this system: the student.
There are two data stores: the student master file and the course file.
FIGURE 2.7

[Data flow diagram: the Student entity sends Requested-courses to process 1.0 Verify
availability, which checks Open-courses against the Course file; Course details flow to
process 2.0 Enroll student, which updates the Course file with Course-enrolment data
and the Student master file with Student-details; Registration data then flows to
process 3.0 Confirm registration, which sends a Confirmation letter back to the
Student.]
Figure 2.7 Data flow diagram for mail – in university registration systems.
Some other available techniques are:

Hierarchy – Input – Process – Output (HIPO):

HIPO is a documentation technique which can also be used to communicate system
specifications to project participants throughout the design process. It uses three types
of charts, each depicting a finer level of detail.

Warnier – Orr:

The methodology was originally developed in France by Warnier; it has been
adapted and expanded by Orr. The methodology assists in defining the hierarchical
structure of systems. It can be used in defining the logical structure of the processing
system, data structures, and control structures and flow of control in programs. Nested
braces are used in documenting the hierarchical structures.

In structured programming, a small set of basic coding structures is used for
all program code, which generally results in programs that are straightforward in flow
of logic. Structured programming is related to modularity in that use of the coding
structures ensures well-defined modules. The three primary coding structures in
structured programming are illustrated in Figure 2.8.
FIGURE 2.8

[Flowcharts of the three coding structures: simple sequence (A followed by B),
selection (a test of a condition choosing between C and D), and repetition (the body P
repeated while the condition remains true).]

Figure 2.8: Flowcharts of coding structures permitted in structured programming.
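A small sketch of the same three structures expressed in code; the payroll task is
invented for illustration. Each structure has one entry and one exit, which is what
keeps the flow of logic straightforward.

def gross_pay(hours_worked, rate):
    # Simple sequence: statements executed one after the other.
    base_hours = min(hours_worked, 40)
    base_pay = base_hours * rate

    # Selection: one of two branches is taken, then control rejoins at a single exit.
    if hours_worked > 40:
        overtime_pay = (hours_worked - 40) * rate * 1.5
    else:
        overtime_pay = 0

    return base_pay + overtime_pay

# Repetition: the loop body is executed once for each remaining item.
total = 0
for hours in [38, 42, 45]:
    total += gross_pay(hours, rate=100)
print(total)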


PROCEDURE DEVELOPMENT

Procedure development (manuals, instruction sheets, input forms, HELP screens,
etc.) can take place concurrently with program development. Procedures should be
written for all personnel who have contact with the system. This includes the following:

• Primary users: This includes instructions for how to interpret a report, how to select
different options for a report, etc. If the user can execute the system directly, as in on-line
queries, it includes detailed instructions for accessing the system and formulating
different types of queries.

• Secondary users: This includes detailed instructions on how to enter each kind of input. It
is more oriented to “how to” and less to “what” for different inputs (when compared with
the instructions to primary users).
• Computer operating personnel: There are generally maintenance procedures to be
performed by computer operators and/or control personnel. Procedures include
instructions for quality assurance, backing up system files, maintaining program
documentation, etc.

• Training procedures: In some cases, a separate training manual or set of training screens
is developed for the implementation stage and subsequent training.

One of two groups may be responsible for writing procedures: analysts or users. The
advantages of analyst-written procedures are technical accuracy, project control over
completion, and conformance to the overall documentation; the disadvantages are the tendency of
analysts to write in technical jargon or technical abbreviations and to assume users have
knowledge of the technical environment. The advantages of user-written procedures are an
appropriate level of technical description and instructions that are understandable; the
disadvantages are the difficulty of assuring clear, complete instructions. A mixed strategy is
one in which analysts and users work together to produce a technically correct,
understandable manual.

TABLE 2.2

FORMAT                    CONDITIONS FOR USE

Conventional (cookbook)   Sequenced set of instructions for one job title or unit.
                          Possible use for policy statements.
Playscript                Sequenced set of instructions involving several job
                          titles or units.
Caption                   Unsequenced procedures or policies.
Matrix                    Two conditions determine a procedure or an action (IF
                          and IF).
Decision table            Several conditions and one or more procedures or
                          actions.
Live specimen             Supplement to all of the above.
Flowchart                 Supplement to playscript, matrix, or decision table.

Table 2.2 Procedure Development


It is important that procedures be kept current to coincide with changes to the
information system. A document numbering system that ties procedures to particular
programs, modules, or data files is useful for this purpose.

2.1.2. PROTOTYPE MODEL

Prototyping an application system is basically a four-step process as described


below. There are two significant roles: the user and the system designer.

Step 1: Identify the user’s basic information requirements. In this stage, the user
articulates his or her basic needs in terms of output from the system. The designer’s
responsibility is to establish realistic user expectations and to estimate the cost of
developing an operational prototype. The data elements required are defined and their
availability determined. The basic models to be computerized are kept as simple as
possible.

Step 2: Develop the initial prototype system. The objective of this step is to build a
functional interactive application system that meets the user’s basic stated information
requirements. The system designer has the responsibility for building the system using
very high level development languages or other application development tools.
Emphasis is placed on speed of building rather than efficiency of operation. The
initial prototype responds only to the user's basic requirements; it is understood to be
incomplete. The early prototype is delivered to the user.

Step 3: Use of the prototype system to refine the user’s requirements. This step allows
the user to gain hands-on experience with the system in order to understand his or her
information needs and what the system does and does not do to meet those needs. It is
expected that the user will find problems with the first version. The user rather than
the designer decides when changes are necessary and thus controls the overall
development time.

Step 4: Revise and enhance the prototype system. The designer makes requested
changes using the same principles as stated in step 2. Only the changes the user
requests are made.

Speed in modifying the system and returning it to the user is emphasized.

The model is depicted in Fig. 2.9 below.

FIGURE 2.9

[Flowchart: identify the basic information requirements (basic needs, scope of the
application, estimated costs); develop the initial prototype; use the prototype system
and refine the requirements; if the user/designer is not yet satisfied, revise and
enhance the prototype and repeat; once satisfied, the working prototype becomes an
operational prototype, used either as the specification for application development or
directly as the application.]
As illustrated above, steps 3 and 4 are iterative. The number of iterations may
vary considerably. Iterative modification ceases for one of two reasons. First, the
user determines that the prototype is not useful, and the working prototype is
discarded. Second, the user is satisfied with the system and it becomes an
"operational prototype". It may be modified at a later stage, but at this point it is
considered usable and may be distributed to other users. Alternatively, it may "seed"
the idea of a new major application and be used to provide initial specifications for
the application development approach.

The prototyping methodology, as outlined above, has several significant


advantages in development of applications having high uncertainty as to
requirements.

• Ability to try out ideas without incurring large costs.


• Lower overall development costs when requirements change frequently.
• The ability to get a functioning system into the hands of the user quickly.
• Effective division of labor between the user professional and MIS
professional.
• Reduced application development time to achieve a functioning system.
• Effective utilization of scarce (human) resources.

A major difficulty with prototyping is management of the development


process because of frequent changes. Also, there may be a tendency to accept a
prototype as the final product when it should only be the basis for a fully-specified
design. For example, a prototype may not handle all exceptions or be complete as to
the controls. It “works,” but it is not complete.

2.1.3. SPIRAL MODEL

A spiral model fits well when we are developing large systems whose
specifications cannot be ascertained completely and correctly in one stroke. Some
requirements surface only after the system has been tested and put to use. Continuous
revision of the development steps is therefore very common, and designers refer to the
successive results as versions. Each new version provides additional functionality,
features, and facilities to the user, and addresses the concerns of the users of the
system, viz. performance, response, security and so on.

FIGURE 2.10
Fig 2.10 Spiral model

2.1.4. RAPID APPLICATION DEVELOPMENT (RAD)

Object - oriented software tools, reusable software, prototyping, and fourth –


generation language tools are helping systems builders create working systems much
more rapidly than they could using traditional systems - building methods and
software tools. The term rapid application development (RAD) is used to describe
this process of creating workable systems in a very short period of time. R&D can
include the use of visual programming and other tools for building graphical user
interfaces, iterative prototyping of key system elements, the automation of program
code generation, and close teamwork among end users and information systems
specialists. Simple systems often can be assembled from pre built components. The
process does not have to be sequential, and key parts development can occur
simultaneously.

Sometimes a technique called joint application design (JAD) is used to


accelerate the generation of information requirements and to develop the initial
systems design. JAD brings end users and information systems specialists together in
an interactive session to discuss the system’s design. Properly prepared and
facilitated, JAD sessions can significantly speed up the design phase and involve
users at an intense level.

2.1.5. END - USER DEVELOPMENT

Some types of information systems can be developed by end users with little
or no formal assistance from technical specialists. This phenomenon is called end -
user development. A series of software tools categorized as fourth-generation
languages makes this possible. Fourth - generation languages are software tools that
enable end users to create reports or develop software applications with minimal or no
technical assistance. Some of these fourth - generation tools also enhance professional
programmers’ productivity.

Fourth - generation languages tend to be nonprocedural, or less procedural,


than conventional programming languages. Procedural languages require specification
of the sequence of steps, or procedures, that tell the computer what to do and how to
do it. Nonprocedural languages need only specify what has to be accomplished rather
than provide details about how to carry out the task.

End users are most likely to work with PC software tools and query languages.
Query languages are software tools that provide immediate online answers to
requests for information that are not predefined, such as "Who are the highest-
performing sales representatives?" Query languages are often tied to data management
software and to database management systems.
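The contrast can be sketched as follows; the table and data are hypothetical. The
procedural version spells out how to scan and compare records, while the nonprocedural
query states only what result is wanted.

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales_rep (name TEXT, sales REAL)")
con.executemany("INSERT INTO sales_rep VALUES (?, ?)",
                [("Anita", 98000), ("Ravi", 125000), ("Mohan", 87000)])

# Procedural style: the programmer specifies each step of the search.
best_name, best_sales = None, -1
for name, sales in con.execute("SELECT name, sales FROM sales_rep"):
    if sales > best_sales:
        best_name, best_sales = name, sales
print(best_name, best_sales)

# Nonprocedural (query-language) style: only the desired result is stated.
print(con.execute("SELECT name, sales FROM sales_rep "
                  "ORDER BY sales DESC LIMIT 1").fetchone())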

On the whole, end - user - developed systems can be completed more rapidly
than those developed through the conventional systems life cycle. Allowing users to
specify their own business needs improves requirements gathering and often leads to a
higher level of user involvement and satisfaction with the systems. However, fourth-
generation tools still cannot replace conventional tools for some business applications
because they cannot easily handle the processing of large numbers of transactions or
applications with extensive procedural logic and updating requirements.

End - user computing also poses organisational risks because it occurs outside
of traditional mechanisms for information systems management and control. When
systems are created rapidly, without a formal development methodology, testing and
documentation may be inadequate. Control over data can be lost in systems outside
the traditional information systems department.

To help organisations maximize the benefits of end – user applications


development, management should control the development of end-user applications
by requiring cost justification of end-user information system projects and by
establishing hardware, software, and quality standards for user-developed
applications.

2.1.6. APPLICATION SOFTWARE PACKAGES AND OUTSOURCING

The software for most systems today is not developed in-house but is
purchased from external sources. Firms can rent the software from an application
service provider, they can purchase a software package from a commercial vendor,
or they can have a custom application developed by an outside outsourcing firm.

APPLICATION SOFTWARE PACKAGES

During the past several decades, many systems have been built on an
application software package foundation. Many applications are common to all
business organizations - for example, payroll, accounts receivable, general ledger, or
inventory control. For such universal functions with standard processes that do not
change a great deal over time, a generalized system will fulfil the requirements of
many organizations.

If a software package can fulfil most of an organization’s requirements, the


company does not have to write its own software. The company can save time and
money by using the prewritten, predesigned, pretested software programs from the
package. Package vendors supply much of the ongoing maintenance and support for
the system, including enhancements to keep the system in line with ongoing technical
and business developments.

If an organization has unique requirements that the package does not address,
many packages include capabilities for customization. Customization features allow
a software package to be modified to meet an organization’s unique requirements
without destroying the integrity of the package software. If a great deal of
customization is required, additional programming and customization work may
become so expensive and time consuming that they negate many of the advantages of
software packages.

Figure 2.11 shows how package costs in relation to total implementation costs
rise with the degree of customization. The initial purchase price of the package can be
deceptive because of these hidden implementation costs. If the vendor releases new
versions of the package, the overall costs of customization will be magnified because
these changes will need to be synchronized with future versions of the software.
When a system is developed using an application software package, systems analysis
will include a package evaluation effort. The most important evaluation criteria are
the functions provided by the package, flexibility, user friendliness, hardware and
software resources, database requirements, installation and maintenance efforts,
documentation, vendor quality, and cost. The package evaluation process often is
based on a Request for Proposal (RFP), which is a detailed list of questions
submitted to packaged - software vendors. When a software package solution is
selected, the organization no longer has total control over the system design process.
Instead of tailoring the system design specifications directly to user requirements, the
design effort will consist of trying to mould user requirements to conform to the
features of the package. If the organization’s requirements conflict with the way the
package works and the package cannot be customized, the organization will have to
adapt to the package and change its procedures. Even if the organization's business
processes seem compatible with those supported by a software package, the package
may be too constraining if these business processes are continually changing.
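One common way to structure such an evaluation, sketched below with invented criteria
weights and vendor ratings, is to rate each candidate package against the criteria listed
above and weight the ratings by their importance.

# Hypothetical weights (importance, summing to 1.0) and vendor ratings (1-10).
weights = {"functions": 0.30, "flexibility": 0.20, "user friendliness": 0.15,
           "database requirements": 0.10, "documentation": 0.10,
           "vendor quality": 0.10, "cost": 0.05}

ratings = {
    "Package A": {"functions": 8, "flexibility": 6, "user friendliness": 9,
                  "database requirements": 7, "documentation": 8,
                  "vendor quality": 7, "cost": 5},
    "Package B": {"functions": 7, "flexibility": 8, "user friendliness": 6,
                  "database requirements": 8, "documentation": 6,
                  "vendor quality": 8, "cost": 8},
}

# Weighted total for each package; the highest total is the preferred candidate.
for package, r in ratings.items():
    total = sum(weights[c] * r[c] for c in weights)
    print(package, round(total, 2))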

FIGURE 2.11

[Graph: total implementation cost (vertical axis) plotted against package customization
cost (horizontal axis); total cost rises steeply as the degree of customization increases.]

Fig 2.11 Total cost vs. Packaging Cost

OUTSOURCING

If a firm does not want to use its internal resources to build or operate
information systems, it can outsource the work to an external organization that
specializes in providing these services. Application service providers (ASPs) are one
form of outsourcing. An application service provider (ASP) is a business that
delivers and manages applications and computer services from remote computer
centers to multiple users using the Internet or a private network. Instead of buying
and installing software programs, subscribing companies can rent the same functions
from these services. Users pay for the use of this software either on a subscription or
per-transaction basis.

The ASP's solution combines packaged software applications and all of the
related hardware, system software, network, and other infrastructure services that the
customers otherwise would have had to purchase, integrate, and manage independently.
The ASP customer interacts with a single entity instead of an array of technologies and
service vendors.

Subscribing companies would use the software and computer hardware


provided by the ASP as the technical platform for their systems. In another form of
outsourcing, a company could hire an external vendor to design and create the
software for its system, but that company would operate the system on its own
computers. The outsourcing vendor might be domestic or in another country.

Outsourcing has become popular because some organizations perceive it as


providing more value than an in-house computer centre or information systems staff.
The provider of outsourcing services benefits from economies of scale and
complementary core competencies that would be difficult for a firm that does not
specialize in information technology services to replicate. The vendor’s specialized
knowledge and skills can be shared with many different customers, and the
experience of working with so many information systems projects further enhances
the vendor’s expertise. Outsourcing enables a company with fluctuating needs for
computer processing to pay for only what it uses rather than build its own computer
center, which would be underutilized when there is no peak load. Some firms
outsource because their internal information systems staff cannot keep pace with
technological change or innovative business practices or because they want to free up
scarce and costly talent for activities with higher paybacks.

Not all organizations benefit from outsourcing, and the disadvantages of


outsourcing can create serious problems for organizations if they are not well
understood and managed. Many firms underestimate costs for identifying and
evaluating vendors of information technology services, for transitioning to a new
vendor, and for monitoring vendors to make sure they are fulfilling their contractual
obligations. These hidden costs can easily undercut anticipated benefits from
outsourcing. When a firm allocates the responsibility for developing and operating its
information systems to another organization, it can lose control over its information
systems function. If the organization lacks the expertise to negotiate a sound contract,
the firm’s dependency on the vendor could result in high costs or loss of control over
technological direction.

Firms should be especially cautious when using an outsourcer to develop or to


operate applications that give it some type of competitive advantage. A firm is most
likely to benefit from outsourcing if it understands exactly how the outsourcing
vendor will provide value and can manage the vendor relationship using an
appropriate outsourcing strategy. Table compares the advantages and disadvantages of
each of the systems-building alternatives.
2.2. COMPARISON BETWEEN SYSTEM DEVELOPMENT APPROACHES

TABLE 2.3

Approach: System life cycle
Features: Sequential step-by-step formal process; written specifications and approvals;
limited role of users.
Advantages: Useful for large, complex systems and projects.
Disadvantages: Slow and expensive; discourages changes; massive paperwork to manage.

Approach: Prototyping
Features: Requirements specified dynamically with an experimental system; rapid,
informal, and iterative process; users continually interact with the prototype.
Advantages: Rapid and relatively inexpensive; useful when requirements are uncertain or
when the end-user interface is very important; promotes user participation.
Disadvantages: Inappropriate for large, complex systems; can gloss over steps in
analysis, documentation, and testing.

Approach: Application software package
Features: Commercial software eliminates the need for internally developed software
programs.
Advantages: Design, programming, installation and maintenance work reduced; can save
time and cost when developing common business applications; reduces the need for
internal information systems resources.
Disadvantages: May not meet the organization's unique requirements; may not perform
many business functions well; extensive customization raises development costs.

Approach: End-user development
Features: Systems created by end users using fourth-generation software tools; rapid and
informal; minimal role of information systems specialists.
Advantages: Users control systems building; saves development time and cost; reduces
the application backlog.
Disadvantages: Can lead to proliferation of uncontrolled information systems and data;
systems do not always meet quality assurance standards.

Approach: Outsourcing
Features: Systems built and sometimes operated by an external vendor.
Advantages: Can reduce or control costs; can produce systems when internal resources
are not available or are technically deficient.
Disadvantages: Loss of control over the information systems function; dependence on the
technical direction and prosperity of external vendors.
SUMMARY

The process of system design and implementation was described. The most common
methodology applied to large, highly structured application systems is the system
development life cycle. This represents a linear assurance strategy but can be modified into an
iterative assurance strategy. The phases in the life cycle provide a basis for system
management and control by breaking the process down into small, well-defined segments. An
experimental assurance strategy is generally accomplished by prototyping. The prototyping
methodology is an iterative process carried out by a user and a designer. Very high level
development languages are used to build a system quickly and to iterate modifications based on
the user's actual experience with the prototype.

In the analysis phases a proposal definition initiates a project. A feasibility


assessment is made to analyze alternative approaches. Once a solution is selected, information
requirements are determined. A general user-oriented conceptual design is prepared in the
conceptual design process.

The design phase includes four types of activities: physical system design, physical
database design, program development and procedure development. Physical system design
involves translating information requirements and conceptual design into technical
specifications and general flow of processing.

The system development life cycle includes conversion to the new system, ongoing
operation and maintenance, and a post audit. Conversion activities include acceptance testing,
file building and user training. The various other approaches to system development have also
been discussed.

REVIEW QUESTIONS

1. Discuss the Various Lifecycle models in Software Development


2. What do you understand by the term Feasibility Study of a Solution?
3. Compare the different Systems Development Approaches
UNIT 3

INFORMATION SYSTEM

3.1 MANAGEMENT INFORMATION SYSTEM – OVERVIEW AND


FUNCTIONAL ASPECTS
3.1.1 FINANCIAL INFORMATION SYSTEM
3.1.2 MANUFACTURING INFORMATION SYSTEM
3.1.3 MARKETING INFORMATION SYSTEM
3.1.4 HUMAN RESOURCE INFORMATION SYSTEM

3.2 EXPERT SYSTEMS


3.3 EXECUTIVE INFORMATION SYSTEM
3.4 CASE

REVIEW QUESTIONS

CHAPTER SUMMARY

This chapter deals with functional aspects of Management Information System.


MIS is a computer based information system which deals with routine, systematic and
structured problems. It works with a Database Management System (DBMS). Other
systems dealt in this chapter are Expert Systems and Executive Information System
which are the Strategic components of an Information System.
3.1 Management Information System – Overview and Functional aspects

Most organizations are structured along functional lines or areas. This functional
structure is usually apparent from an organization chart, which typically shows vice
presidents under the president.

The traditional functional areas in any organization constitute:


• Finance
• Marketing
• Personnel (HR)
• R&D
• Operations

A Management Information System can be subdivided to produce tailor-made reports for
each of the above-mentioned functional areas individually.

3.1.1 FINANCIAL INFORMATION SYSTEM

A Financial Information System (FIS) is an information system that serves as the main
repository of data used for managing and reporting financial information to managers
and other people within an organization, as well as to a broader set of decision makers
outside it.

The primary functions of FIS are

• Recording of all financial transactions in general ledger accounts


• Generating financial reports to meet management and statutory requirements
• Controlling overall spending through budgetary controls embedded in the system
• Generating the financial statements
• Integrating financial and operational information from multiple sources, including
the Internet, into a single MIS.
• Providing easy access to data for both financial and non- financial users, often
through use of the corporate intranet to access corporate Web pages of financial
data and information.
• Making financial data available on a timely basis to shorten analysis turnaround
time.
• Enabling analysis of financial data along multiple dimensions-time, geography,
product, plant, customer.
• Analyzing historical and current financial activity.
• Monitoring and controlling the use of funds over time.

Figure 3.1 shows typical inputs, function-specific subsystems, and outputs of a financial
MIS.

In addition, the FIS provides information to individuals and groups outside the
organization, such as stockholders and federal agencies. Public companies are required to
disclose their financial results to stockholders and the public. The federal government also
requires financial statements and related information.

The financial management information system plays a unique role in adding value
to a company's business processes by integrating internal and external systems that assist
the organization in acquiring, using and controlling cash, funds and other vital
financial resources.

For example, a projects development company might use a financial MIS
subsystem to help it use and manage funds. Suppose the firm takes Rs. 10,000 deposits
on condominiums in a new development. Until construction begins, the company will be
able to invest these surplus funds. By using reports produced by the financial MIS,
finance staff can analyze investment alternatives. The company might invest in new
equipment or purchase global stocks and bonds. The profits generated from the
investment can be passed along to customers in different ways. The company can pay
stockholders dividends, buy higher-quality materials, or sell the condominiums at a lower
cost.

FIGURE 3.1

Figure 3.1 Financial Information System (FIS)


Other important financial subsystems include,
• Profit/loss and cost accounting,
• Auditing

PROFIT/LOSS AND COST SYSTEMS

Profit/loss and cost systems are two financial subsystems that organize revenue
and cost data for the firm. Revenue and expense data for various departments, captured
by the transaction processing system (TPS), become a primary internal source of
financial information.

Many departments within an organization are profit centers, which mean they
track total expenses and net profits. An investment division of a large insurance or credit
card company is an example of a profit center. Other departments may be revenue
centers, which are divisions within the company that primarily track sales or revenues,
such as a marketing or sales department.

Still other departments may be cost centers, which are divisions within a
company that do not directly generate revenue, such as manufacturing or research and
development. These units incur costs with little or no direct revenues.

DJ Pharmaceuticals, for example, constructed a supercomputer with 112


processors to help it accelerate drug research and development. The company hopes that
the estimated Rs. 50,00,000 cost center will result in new or improved drugs. Data on
profit, revenue, and cost centers is gathered (mostly through the TPS but sometimes
through other channels as well), summarized, and reported by the profit loss and cost
subsystems of the financial MIS.

AUDITING

Auditing is the process of analyzing the financial situation of a firm and determining
whether the reports and statements produced by the FIS are accurate. Because financial
statements, such as income statements and balance sheets, are used by so many people and
organizations (investors, bankers, insurance companies, federal and state government
agencies, competitors, and customers), sound auditing procedures are important.
Auditing can reveal potential fraud and can also reveal false or misleading information in
a firm.

Auditing can be classified into Internal and External. Internal auditing is


performed by individuals within the organization. For example, the finance department of
a corporation may use a team of employees to perform an audit. Typically, an internal
audit is conducted to see how well the organization is meeting established company goals
and objectives - no more than five weeks of inventory on hand, all travel reports
completed within one week of returning from a trip, and similar measures.
External auditing is performed by an outside group, such as an accounting or
consulting firm. The purpose of an external audit is to provide an unbiased picture of the
financial condition of an organization. Auditing can also uncover fraud and other
problems. In some cases, the financial picture from an external auditing firm may not
always completely reflect the performance of the company.

USES AND MANAGEMENT OF FUNDS

Usage and management of funds is another important function of the FIS.
Companies that do not manage and use funds effectively often have low profits or face
bankruptcy. To help with funds usage and management, some banks are backing a new
computerized payment system. The new system has the potential to clear payments in a
day instead of several days or more. Outputs from the funds usage and management
subsystem, when combined with other subsystems of the financial MIS, can locate
serious cash flow problems and help the organization increase profits.

Internal uses of funds include additional inventory, new or updated plants and
equipment, additional labor, the acquisition of other companies, new computer systems,
marketing and advertising, raw materials, land, investments in new products, and
research and development. External uses of funds are typically investment related. On
occasion, a company might have excess cash from sales that is placed into an external
investment. External uses of funds often include bank accounts, stocks, bonds, bills,
notes, futures, options, and foreign currency.

3.1.2 MANUFACTURING INFORMATION SYSTEM

The objective of the manufacturing MIS is to produce products that meet
customer needs - from the raw materials provided by suppliers to the finished goods and
services delivered to customers - at the lowest possible cost. As raw materials are
converted to finished goods, the manufacturing MIS monitors the process at almost every
stage. Bar codes and smart labels can make this process easier. The smart labels, made of
chips and tiny radio transmitters, allow materials and products to be monitored through
the entire manufacturing process. Procter & Gamble, Gillette and Wal-Mart are among the
firms researching this new manufacturing MIS technology. Car manufacturers, which
convert raw steel, plastic, and other materials into a finished automobile, monitor the
manufacturing process. In doing so, the MIS helps provide the company the edge that can
differentiate it from competitors. The success of an organization can depend on the
manufacturing function.

Figure 3.2 gives an overview of some of the manufacturing MIS inputs, subsystems,
and outputs. The subsystems and outputs of the manufacturing MIS monitor and control
the flow of materials, products, and services through the organization. Some common
information subsystems and outputs used in manufacturing are:
• Master Production Scheduling
• Inventory Control
• Process Control
• Quality Control and Testing
FIGURE 3.2

Figure 3.2 Manufacturing Information System


MASTER PRODUCTION SCHEDULING AND INVENTORY CONTROL

In any manufacturing company the critical tasks are production scheduling and
inventory control. The overall objective of master production scheduling is to provide
detailed plans for both short-term and long-range scheduling of manufacturing facilities.
Master production scheduling software packages can include forecasting techniques that
attempt to determine current and future demand for products and services. After current
demand has been determined and future demand has been estimated, the master
production scheduling package can determine the best way to use the manufacturing
facility and all its related equipment. The result of the process is a detailed plan that
reveals a schedule for every item that will be manufactured.

An important key to the manufacturing process is inventory control. Ford Motor
Company decided to use UPS Logistics to help the company speed the delivery of parts
to factories and finished cars to dealerships. The new inventory control system has
reduced by four days the time it typically takes to ship a finished vehicle to a dealership.
But more importantly, the new system has also reduced vehicle inventory by about
1 billion dollars, saving the company 125 million dollars in annual inventory carrying
costs, which dramatically improves Ford's profitability. Many inventory control
techniques like Ford's attempt to minimize inventory-related costs.

The inventory control techniques used in organizations include the Economic Order
Quantity (EOQ) and the Reorder Point (ROP), which together determine how much and
when to order. The economic order quantity (EOQ) is the order size that minimizes total
inventory costs. The "when to order?" question is based on inventory usage over time.
Typically, the question is answered in terms of a reorder point (ROP), which is a critical
inventory quantity level. When the inventory level for a particular item falls to the
reorder point, or critical level, a report might be output so that an order is immediately
placed for the EOQ of the product.
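Using the standard formulas - EOQ equal to the square root of (2 x annual demand x
ordering cost per order / carrying cost per unit), and ROP equal to daily demand x lead
time in days - a small sketch with assumed figures follows.

import math

def economic_order_quantity(annual_demand, ordering_cost, carrying_cost_per_unit):
    # Classic EOQ formula: the order size that minimizes ordering plus carrying costs.
    return math.sqrt(2 * annual_demand * ordering_cost / carrying_cost_per_unit)

def reorder_point(daily_demand, lead_time_days):
    # Order again when stock falls to the quantity consumed during the lead time.
    return daily_demand * lead_time_days

# Assumed figures for illustration only.
eoq = economic_order_quantity(annual_demand=12000, ordering_cost=500,
                              carrying_cost_per_unit=20)
rop = reorder_point(daily_demand=12000 / 300, lead_time_days=5)
print(round(eoq), "units per order; reorder when stock falls to", round(rop), "units")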

Another inventory technique, used when the demand for one item depends on
the demand for another, is called material requirements planning (MRP). The basic
goal of MRP is to determine when finished products, like automobiles or airplanes, are
needed and then to work backward to determine deadlines and resources needed, such as
engines and tires, to complete the final product on schedule.
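A tiny sketch of that backward calculation with assumed lead times: starting from the
date the finished product is needed, each component's lead time is offset to find the date
by which it must be ordered.

from datetime import date, timedelta

# Assumed bill of materials: component -> procurement lead time in days.
lead_times = {"engine": 45, "tyres": 20, "windshield": 10}
finished_product_needed = date(2024, 6, 1)
assembly_days = 7  # assumed time to assemble the final product

assembly_start = finished_product_needed - timedelta(days=assembly_days)
for component, lead in lead_times.items():
    order_by = assembly_start - timedelta(days=lead)
    print(f"Order {component} by {order_by} (lead time {lead} days)")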

Manufacturing resource planning (MRPII) refers to an integrated, company-wide
system based on network scheduling that enables people to run their business with a
high level of customer service and productivity while lowering costs and inventories.
MRPII places a heavy emphasis on planning. This helps companies ensure that the right
product is in the right place at the right time.

Just-in-time (JIT) is a Japanese approach that maintains inventory at the
lowest levels without sacrificing the availability of finished products. With this approach,
inventory and materials are delivered just before they are used in a product. A JIT
inventory system would arrange for a car windshield to be delivered to the assembly line
only a few moments before it is secured to the automobile, rather than having it sitting
around the manufacturing facility while the car's other components are being assembled.
Although JIT has many advantages, it also renders firms more vulnerable to process
disruptions.

PROCESS CONTROL

Managers can use a number of technologies to control and streamline the
manufacturing process. For example, computers can be used to directly control
manufacturing equipment using systems called Computer Aided Manufacturing
(CAM) and Computer Integrated Manufacturing (CIM). CAM systems can control
drilling machines, assembly lines, and more. Some of them operate quietly, are easy to
program, and have self-diagnostic routines to test for difficulties with the computer
system or the manufacturing equipment.

Computer Integrated Manufacturing (CIM) uses computers to link the
components of the production process into an effective system. The goal of CIM is to tie
together all aspects of production, including order processing, product design,
manufacturing, inspection and quality control, and shipping. CIM systems also increase
efficiency by coordinating the actions of various production units. In some areas, CIM is
used for even broader functions; it can integrate all organizational subsystems, not just
the production systems. In automobile manufacturing, design engineers can have their
ideas evaluated by financial managers before new components are built to see whether
they are economically viable, saving not only time but also money.

A Flexible Manufacturing System (FMS) allows manufacturing facilities to change
rapidly and efficiently from making one product to another. In the middle of a
production run, for example, the production process can be changed to make a different
product or to switch manufacturing materials. Often a computer is used to direct and
implement the changes. With FMS, the time and cost of changing manufacturing jobs
can be substantially reduced, and companies can react quickly to market needs and
competition. FMS is normally implemented using computer systems, robotics, and other
automated manufacturing equipment. New product specifications are fed into the
computer system, and the computer then makes the necessary changes.

QUALITY CONTROL AND TESTING

Quality Control (QC) is being emphasized in all organizations due to
increased pressure from customers and concern for high quality. QC is a process that
ensures that the finished product meets the customers' needs. For continuous processes,
control charts that measure weight, volume, temperature, or similar attributes are used.
Upper and lower control chart limits are established, and if these limits are exceeded,
the manufacturing equipment is inspected for possible defects or potential problems.

Acceptance sampling is used to accept or reject products when the manufacturing
process is not continuous. It is applied to items as simple as nuts and bolts or as complex
as airplanes. The development of the control chart limits and the specific acceptance
sampling plans can be fairly complex.
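
As a rough illustration of how control chart limits might be computed and checked in
software (a minimal sketch; the sample measurements and the common three-sigma rule
are assumptions, not taken from any specific QC package):

    import statistics

    def control_limits(samples, sigmas=3):
        """Compute lower/upper control limits from historical measurements."""
        mean = statistics.mean(samples)
        sd = statistics.stdev(samples)
        return mean - sigmas * sd, mean + sigmas * sd

    def out_of_control(measurement, lower, upper):
        """Flag a measurement that falls outside the control limits."""
        return measurement < lower or measurement > upper

    # Assumed historical fill weights (grams), for illustration
    history = [498.2, 501.1, 499.8, 500.4, 502.0, 497.9, 500.6, 499.3]
    low, high = control_limits(history)
    print(f"Control limits: {low:.1f} - {high:.1f}")
    print("Inspect equipment!" if out_of_control(507.5, low, high) else "Within limits")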

Whether the manufacturing operation is continuous or discrete, the results from
quality control are analyzed closely to identify opportunities for improvement. Teams
using total quality management (TQM) or the continuous improvement process often
analyze this data to increase the quality of the product or eliminate problems in the
manufacturing process. The result can be a cost reduction or an increase in sales.

Information generated from quality – control programs can help workers locate
problems in manufacturing equipment. Quality Control reports can also be used to design
better products. With the increased emphasis on quality, workers should continue to rely
on the reports and outputs from this important application.

3.1.3. MARKETING MANAGEMENT INFORMATION SYSTEMS

A Marketing MIS supports managerial activities in product development,
distribution, pricing decisions, promotional effectiveness, and sales forecasting.

Subsystems for the Marketing Information System include:
• Marketing research
• Product Development
• Promotion and Advertising
• Product pricing.

These subsystems and their outputs help marketing managers and executives increase
sales, reduce marketing expenses, and develop plans for future products and services to
meet the changing needs of customers.

MARKETING RESEARCH

Popular marketing research (MR) tools are surveys, pilot studies, and
interviews. The objective of MR is to conduct a formal study of markets and customer
preferences. Marketing research can identify prospects as well as the features that current
customers really want in a good or service. Attributes such as style, color, size,
appearance, and general fit can be investigated through marketing research. Pricing,
distribution channels, guarantees and warranties, and customer service can also be
determined.

Once entered into the marketing information system, data collected from
marketing research projects is manipulated to generate reports on key indicators like
customer satisfaction and total service calls. Forecasting demand, often with the aid of
sophisticated software, can be an important result of marketing research. Demand
forecasts for products and services are also critical to make sure raw materials and
supplies are properly managed.

PRODUCT DEVELOPMENT

Product development is the process of converting raw materials into
finished goods and services, and it focuses primarily on the physical attributes of the
product. Plant capacity, labor skills, engineering factors, and materials are important
factors in product development decisions. In many cases, a computer program is used to
analyze these various factors and to select the appropriate mix of labor, materials, plant
and equipment, and engineering designs. Computer programs also assist with
make-or-buy decisions.

PROMOTION AND ADVERTISING

Advertising and sales promotion is one of the important marketing functions;
the type of promotion done often determines a product's success. The size of the
promotion budget and the allocation of this budget among various campaigns are
important factors in planning. Television coverage, newspaper ads, promotional
brochures and literature, and training programs for salespeople are all components of
these campaigns. Because of the time and scheduling savings they offer, computer
programs are used to set up the original budget and to monitor expenditures and the
overall effectiveness of various promotional campaigns.
FIGURE 3.3

[Figure: business transactions from each TPS, together with databases of internal and
external data, feed the marketing MIS through databases of valid transactions and
marketing application databases. Its subsystems (marketing research, product
development, promotion and advertising, product pricing) produce outputs such as sales
by customer, sales by salesperson, sales by product, pricing reports, total service calls,
and customer satisfaction, which in turn support the marketing DSS, ESS, and ES.]

Figure 3.3 Marketing Information System
PRODUCT PRICING

Product pricing is a key area for an organization and is determined by sales
analysis, which identifies products, sales personnel, and customers that contribute to
profits and those that do not. A variety of reports can be generated to help managers
make good sales decisions. The sales-by-product report lists all major products and their
sales for a period of time, such as a month. This report shows which products are doing
well and which ones need improvement or should be discarded altogether. The
sales-by-salesperson report lists total sales for each salesperson for each week or month.
This report can also be subdivided by product to show which products are being sold by
each salesperson. Sales-by-customer reports are a tool used to identify high- and
low-volume customers.

3.1.4. HUMAN RESOURCE MANAGEMENT INFORMATION SYSTEMS

A human resource information system (HRIS) is concerned with activities
related to employees and potential employees of the organization. Because the personnel
function relates to all other functional areas in the business, the human resource MIS
plays a valuable role in ensuring organizational success. Some of the activities performed
by this important MIS include workforce analysis and planning, hiring, training, job and
task assignment, and many other personnel-related issues. Personnel issues can include
offering new hires attractive stock option and incentive programs. Figure 3.4 shows some
of the inputs, subsystems, and outputs of the HRIS.

Human resource subsystems and outputs range from the determination of human
resource needs and hiring through retirement and outplacement. Most medium and large
organizations have computer systems to assist with human resource planning, hiring,
training and skills inventory, and wage and salary administration. Outputs of the HRIS
include reports such as human resource planning reports, job applicant review profiles,
skills inventory reports, and salary surveys.

HUMAN RESOURCE PLANNING

One of the first aspects of any HRIS is determining personnel and human resource
needs. The overall purpose of this MIS subsystem is to put the right number and kinds of
employees in the right jobs when they are needed. Effective human resource planning
requires defining the future number of employees needed and anticipating the future
supply of people for these jobs.

PERSONNEL SELECTION AND RECRUITING

HRIS can be used to help grade and select potential employees. For every
candidate, the results of interviews, tests, and company visits can be analyzed by the
system and printed. This report, called a job applicant review profile, can assist corporate
recruiting teams in final selection. Web based screening for the job applicants is adopted
by many companies. Applicants use a template to load their resume onto the Internet site.
HR managers can then access these resumes and identify applicants they are interested in
interviewing.

TRAINING AND SKILLS INVENTORY

Some jobs, such as programming, equipment repair, and tax preparation, require
very specific training. Other jobs may require general training about the organizational
culture, orientation, dress standards, and expectations of the organization. Today, many
organizations conduct their own training, with the assistance of information systems and
technology. Self - paced training can involve computerized tutorials, video programs, and
CD-ROM books and materials. Distance learning, where training and classes are
conducted over the Internet, is also becoming a viable alternative to more traditional
training and learning approaches. This text and supporting material, for example, can be
used in a distance - learning environment.
FIGURE 3.4

[Figure: business transactions from each TPS (payroll, order processing, personnel) and
databases of internal and external data feed the human resource MIS through operational
and human resource application databases. Its subsystems (needs and planning
assessments, recruiting, training and skills development, scheduling and assignment,
employee benefits) produce outputs such as benefit reports, salary surveys, scheduling
reports, training test scores, job applicant profiles, and needs and planning reports, which
in turn support the human resource DSS, ESS, and ES.]

Figure 3.4 HRIS


SCHEDULING AND JOB PLACEMENT

Employee schedules are developed for each employee, showing their job
assignments over the next week or month. Job placements are often determined based on
skills inventory reports, which show which employee might be best suited to a particular
job.

WAGE AND SALARY ADMINISTRATION

The last of the major HRIS subsystems involves determining wages, salaries, and
benefits, including medical payments, savings plans, and retirement accounts. Wage data,
such as industry averages for positions, can be taken from the corporate database and
manipulated by the HRIS to provide wage information and reports to higher levels of
management. Wage and salary administration also entails designing retirement programs
for employees. Some companies use computerized retirement programs to help
employees gain the most from their retirement accounts and options.

3.2 EXPERT SYSTEMS

An expert system is a knowledge-based information system that uses its knowledge
about a specific, complex application area to act as an expert consultant to end users. An
expert system is a knowledge-intensive program that solves a problem by capturing the
expertise of a human in limited domains of knowledge and experience. Expert systems
provide answers to questions in a very specific problem area by making humanlike
inferences about knowledge contained in a specialized knowledge base. They must also
be able to explain their reasoning process and conclusions to a user.

COMPONENTS OF AN EXPERT SYSTEM

User interface: the user interface connects the user to the system, typically through
a graphical user interface. The user supplies instructions and questions, and in return the
system explains the various steps involved in the problem solution.

Knowledge base: the knowledge base of an expert system contains facts about a
specific subject area (for instance, Ravi is a programmer) and heuristics that express the
reasoning procedures of an expert on the subject (if Ravi is a programmer, then he needs
a computer).

Inference engine: the inference engine examines the rules of the knowledge base one at a
time and checks their conditions. If a condition is true, the specified action is taken; the
rule is then said to be fired.
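
A toy forward-chaining loop can make the fact/rule/firing terminology concrete. This is a
minimal sketch written for this text, not the architecture of any commercial expert system
shell; the facts and rules about Ravi simply echo the example above.

    # Minimal forward-chaining sketch: facts, IF-THEN rules, and rule firing.
    facts = {"ravi is a programmer"}

    # Each rule: if all conditions are in the fact base, add the conclusion.
    rules = [
        ({"ravi is a programmer"}, "ravi needs a computer"),
        ({"ravi needs a computer"}, "ravi needs a power supply"),
    ]

    fired = True
    while fired:                       # keep scanning until no rule can fire
        fired = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)  # the rule "fires"
                print(f"Fired: {conditions} -> {conclusion}")
                fired = True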

Expert systems are being used for many different types of applications, and the
variety of applications is expected to continue to increase. However, you should realize
that expert systems typically accomplish one or more generic uses. As you can see, expert
systems are being used in many different fields, including medicine, engineering, the
physical sciences, and business. Expert systems now help diagnose illnesses, search for
minerals, analyze compounds, recommend repairs, and do financial planning. So from a
strategic business standpoint, expert systems can and are being used to improve every
step of the product cycle of a business, from finding customers to shipping products to
providing customer service.

BENEFITS OF EXPERT SYSTEMS

An expert system captures the expertise of an expert or a group of experts in a
computer-based information system. Thus, it can outperform a single human expert in
many problem situations. Expert systems also help preserve and reproduce the knowledge
of experts. They allow a company to preserve the expertise of an expert before she leaves
the organization.

FIGURE 3.5

[Figure: the expert system consists of expert system software (user interface programs,
inference engine programs, and the knowledge base) accessed from a user workstation
that delivers expert advice. Expert system development uses knowledge acquisition
programs and a knowledge engineering workstation operated by the expert and/or
knowledge engineer.]

Fig 3.5 Expert System


LIMITATIONS OF EXPERT SYSTEMS

The major limitations of expert systems arise from their limited focus, inability to
learn, maintenance problems, and development cost. Expert systems excel only at solving
specific types of problems in a limited domain of knowledge. They fail miserably at
solving problems requiring a broad knowledge base and subjective problem solving.
They do well with specific types of operational or analytical tasks, but falter at subjective
managerial decision making.

Expert systems may also be difficult and costly to develop and maintain properly.
The costs of knowledge engineers, lost expert time, and hardware and software resources
may be too high to offset the benefits expected from some applications. Also, expert
systems cannot maintain themselves; they cannot learn from experience but must be
taught new knowledge and modified as new expertise is needed to match developments in
their subject areas.

DEVELOPING EXPERT SYSTEMS

Many real world situations do not fit the suitability criteria for expert system
solutions. Hundreds of rules may be required to capture the assumptions, facts, and
reasoning that are involved in even simple problem situations. For example, a task that
might take an expert a few minutes to accomplish might require an expert system with
hundreds of rules and take several months to develop.

The easiest way to develop an expert system is to use an expert system shell as a
development tool. An expert system shell is a software package consisting of an expert
system without its kernel, that is, its knowledge base. This leaves a shell of software (the
inference engine and user interface programs) with generic inferencing and user interface
capabilities. Other development tools (such as rule editors and user interface generators)
are added, making the shell a powerful expert system development tool.

Expert system shells are now available as relatively low-cost software packages
that help users develop their own expert systems on microcomputers. They allow trained
users to develop the knowledge base for a specific expert system application. For
example, one shell uses a spreadsheet format to help end users develop IF - THEN rules,
automatically generating rules based on examples furnished by a user. Once a knowledge
base is constructed, it is used with the shell’s inference engine and user interface modules
as a complete expert system on a specific subject area. Other software tools may require
an IT specialist to develop expert systems.

Experts skilled in the use of expert system shells could develop their own expert
systems. If a shell is used, facts and rules of thumb about a specific domain can be
defined and entered into a knowledge base with the help of a rule editor or other
knowledge acquisition tool. A limited working prototype of the knowledge base is then
constructed, tested, and evaluated using the inference engine and user interface programs
of the shell. Domain experts can modify the knowledge base, then retest the system and
evaluate the results. This process is repeated until the knowledge base and the shell result
in acceptable expert systems.

FIGURE 3.6

Steps in the expert system development process:
1. Determining requirements
2. Identifying experts
3. Constructing expert system components
4. Implementing results
5. Maintaining and reviewing systems

Fig 3.6 Expert System – Process Flow

3.3 EXECUTIVE INFORMATION SYSTEMS

Many companies have developed systems to assist executives in decision making.
An executive information system (EIS) is a specialized DSS that includes all the
hardware, software, data, procedures, and people used to assist senior-level executives
within the organization. An EIS supports the actions of members of the board of
directors, who are responsible to stockholders. An EIS can also be used by individuals
farther down in the organizational structure. Once targeted at top-level executive decision
makers, EISs are now marketed to, and used by, employees at other levels in the
organization. In the traditional view, EISs give top executives a means of tracking critical
success factors. Today, all levels of the organization share information from the same
databases. However, for our discussion, we will assume EISs remain at the upper
management levels, where they highlight important corporate issues, indicate new
directions the company may take, and help executives monitor the company's progress.
CHARACTERISTICS OF EIS

An EIS is a special type of DSS, and, like a DSS, an EIS is designed to support
higher - level decision making in the organization. The two systems are, however,
different in important ways. DSSs provide a variety of modeling and analysis tools to
enable users to thoroughly analyze problems - that is, they allow users to answer
questions. EISs present structured information about aspects of the organization that
executives consider important - in other words, they allow executives to ask the right
questions.

Following are general characteristics of EISs:

• Tailored to individual executives. Unlike a DSS, which is not tailored to particular
users, an EIS is an interactive, hands-on tool that allows an executive to focus, filter,
and organize data and information.
• Easy to learn and use and not overly complex.
• Have drill-down abilities. An EIS allows executives to drill down to get more
detailed information if needed.
• Support the need for external data. Information from competitors, the federal
government, trade associations and journals, and consultants is needed to make
top-level decisions. An effective EIS is able to extract data useful to the decision
maker from a wide variety of sources, including the Internet and other
electronic publishing sources such as LexisNexis.
• Can help with situations that have a high degree of uncertainty. There is a high
degree of uncertainty with most executive decisions. Handling these unknown
situations using modeling and other EIS procedures helps top-level managers
measure the amount of risk in a decision.
• Have a future orientation. Executive decisions are future oriented, meaning that
decisions will have a broad impact for years or decades. The information sources
to support future-oriented decision making are usually informal, from golf
partners to members of social clubs or civic organizations.
• Are linked with value-added business processes. Like other information systems,
EISs are linked with executive decision making about value-added business
processes. For instance, executive information systems can be used by car-rental
companies to analyze trends.
• An effective EIS should have the capability to support executive decisions with
many of these capabilities, such as strategic planning and organizing, crisis
management, and more.

CASE STUDY

SOURCE:

Multibase Company Limited has a well-diversified business portfolio comprising
textiles (including fabric and yarn manufacturing), paper and pulp, and cement. The
company has eight manufacturing locations: one fabric manufacturing unit, two yarn
manufacturing units, two paper and pulp units, and three cement units, all located across
the country. While the head of each unit has considerable operational autonomy, strategic
decisions concerning these units, such as capacity expansion and procurement of new
technology involving substantial investment, are made at the headquarters, located in a
metropolitan city. The company headquarters monitors the performance of every unit
through weekly and monthly reports prepared by computer-based information systems
installed at each unit. A considerable amount of the time of senior executives located at
headquarters is taken up in analysing these reports and drawing inferences for planning
and control. This process leaves little time for the strategic thinking which they feel is a
must in the present competitive environment. Based on this need, the Chairman of the
company proposed to develop suitable computer-based systems which might be helpful in
understanding the current status of the various manufacturing units in terms of their
overall performance, the type of environmental constraints that operate in the three
business sectors, and the opportunities that exist for enhancing capacity in these business
areas. Assess the data requirements of the proposed systems that would serve the
company's needs.

REVIEW QUESTIONS:

1. What is the difference between MIS and DSS?
2. What will an MIS provide in a Marketing Function?
3. With an example, enumerate the applications of EIS and ES.
4. How is an HRIS useful in recruiting?
UNIT 4
IMPLEMENTATION AND CONTROL

4.1 ASSESSING RISK OF THE SYSTEM


4.1.1 ASSESSMENT OF OBJECTIVE:
4.2 VALUE CHAIN
4.2.1 PRIMARY ACTIVITIES
4.2.2 SUPPORT ACTIVITIES.
4.3 VALUE CHAIN AND INFORMATION SYSTEMS
4.4 COST / BENEFIT ANALYSIS FOR INFORMATION SYSTEM
4.4.1 PROCESS OF COST / BENEFIT ANALYSIS
4.4.2 EVALUATION OF COST AND BENEFITS
4.5 CONTROLS IN INFORMATION SYSTEM
4.5.1 OBJECTIVES OF CONTROL
4.5.2 CONTROL TECHNIQUES
4.6 CODING TECHNIQUES
4.6.1 REQUIREMENTS OF A CODING SCHEME
4.6.2 TYPES OF CODES
4.6.3 DETECTION OF ERROR IN CODES
4.6.4 CONSTRUCTING MODULUS - 11 CODES
4.7 ERROR DETECTION
4.7.1 ALGORITHM FOR ERROR DETECTION
4.8 TESTING OF INFORMATION SYSTEMS
4.9 SECURITY OF INFORMATION SYSTEMS
4.1 ASSESSING RISK OF THE SYSTEM

All projects relating to information systems cannot be developed and
implemented concurrently; priorities must be set. The organisation's strategic plan
should be the basis for the information system strategic plan. Therefore, the
information system plan should be integrated with the organisation plan.

An information system implementation plan has two time perspectives: long
range and short range. The long-range plan, which usually covers three to five years,
provides general guidelines for direction. The short-range plan provides a basis for
specific accountability as to operational and financial performance. Since the short-range
plan is derived from the long-range plan, both plans should be fully integrated. An
information system implementation plan consists of:

1. Assessment of objective.
2. Organisational change.
3. Assessing the future.

4.1.1 ASSESSMENT OF OBJECTIVE:

IS objectives should be defined so that those who are responsible for
developing the plan are clear about them. While defining the information system
objectives, the following factors should be considered.

How information technology, both in terms of hardware and software, will take
shape in the future should be given adequate consideration. Though it is very difficult to
predict the nature of technological development at the time of preparing the plan,
organizations acting on a proactive basis can plan the assimilation of new technology
easily because of the time lag between technology development and its application.
Usually, technology development is announced much earlier than its commercial use.

Future plans are affected by changes in technology, experience with the
systems that have been developed, changing needs for new systems, and changes in
the organisation itself. The plan should be updated in anticipation of these changes
rather than after the actual changes occur.

4.2 VALUE CHAIN

Value chain analysis helps an organisation to define the information which it
needs to operate efficiently and thereby develop competitive advantage. Every
organisation performs a chain of activities. These activities are interrelated, and each
activity creates a value important to the whole chain. Based on this, Porter has
proposed the value chain to create more customer value. Accordingly, every
organisation is a collection of activities that are performed to design, produce, market,
deliver, and support its products. The value chain identifies nine strategically relevant
activities that create value in a business. These nine value-creating activities consist
of five primary activities and four support activities, as shown in Figure 4.1.
FIGURE 4.1

[Figure: the value chain. Support activities (firm's infrastructure, human resource
management, technology, procurement) span the five primary activities: inbound
logistics, operations, outbound logistics, marketing and sales, and service.]

Fig 4.1 Value Chain

4.2.1 PRIMARY ACTIVITIES

Primary activities are those that involve the creation of a product or service.
Identification of primary activities requires the isolation of activities that are
technologically or strategically distinct. Porter has classified these primary activities
into five groups, which are as follows:

1. Inbound logistics - receiving, storing, and disseminating various inputs with a
purpose to transform them into outputs.
2. Operations - activities through which inputs are transformed into outputs.
3. Outbound logistics - activities related to receiving outputs, preparing delivery
schedules, physical distribution, etc.
4. Marketing and sales - activities related to advertising and sales promotion,
selection and management of distribution channels, creating a sales force, etc.
5. Service - providing after-sales service, supply of parts, providing training to
customers' employees, etc.

4.2.2 SUPPORT ACTIVITIES.

Support activities are those that support the effective performance of the
primary activities in the value chain. Support activities are grouped into four categories
as follows:
1. Firm's infrastructure - activities related to general management, accounting,
finance, legal, secretarial, etc.
2. Human resource management - activities related to employee recruitment,
development, appraisal, wage and salary administration, etc.
3. Technology - activities related to the production of products/services and
creating and improving various activities in the entire value chain.
4. Procurement - activities related to obtaining purchased inputs like raw
materials, machinery, services, etc.

For effective use of the value chain, it is not enough that the various activities are
performed efficiently; they must be performed in a coordinated way so that each
activity contributes positively to the other activities.

4.3 VALUE CHAIN AND INFORMATION SYSTEMS

Value chain analysis provides the best framework for identifying the
information that an organisation needs and designing its information systems. Table 4.1
presents the information systems required for effective performance of the various
activities of the value chain.

Activity                  Information systems

Inbound logistics         Automated warehousing information systems
Operations                Computer-controlled machining systems
Outbound logistics        Automated shipment scheduling systems
Marketing and sales       Computerized ordering systems
Service                   Equipment maintenance systems
Firm's infrastructure     Electronic scheduling and messaging
HR management             Human resource planning systems
Technology                Computer-aided design systems
Procurement               Computerized order systems
Table 4.1 – Value Chain and Information Systems

4.4 COST / BENEFIT ANALYSIS FOR INFORMATION SYSTEM

Identification of the information which a system can provide is not by itself a
sufficient criterion to gauge the utility of the system; it requires one step further, that
is, the utility must be seen in the context of the costs involved in the system. This is
called cost/benefit analysis.

4.4.1 PROCESS OF COST / BENEFIT ANALYSIS

Cost/benefit analysis is a process that presents a picture of the various costs and
benefits associated with an information system. Being a process, it consists of several
activities, as shown in Figure 4.2.
FIGURE 4.2

Identification of costs and benefits -> Evaluation of costs and benefits -> Choice of system

Figure 4.2 - Process of Cost/Benefit Analysis

4.4.2 EVALUATION OF COST AND BENEFITS

Costs and benefits are not easily comparable. For this purpose, capital
budgeting models are used. After the evaluation of the costs and benefits of the various
competing systems is completed, the results obtained have to be interpreted to arrive
at the final choice of a system. The interpretation and final choice are mostly
subjective, requiring judgment and intuition.

TABLE 4.2

Costs:
  Hardware
  Telecommunications
  Software
  Personnel
  Accessories: computer forms, computer ink/ribbon, UPS
  Services: insurance, maintenance
  Physical facilities: building, furniture

Benefits:
  Tangible: increased productivity, lower operating costs, reduced workforce,
  lower computer costs, lower outside vendor costs, reduced rate of growth in
  expenses, reduced facility costs
  Intangible: improved organisational planning, increased organisational
  flexibility, improved decision making, improved operations, improved asset
  utilisation, enhanced employee morale, increased job satisfaction, higher
  customer satisfaction, improved organisational image

Table 4.2 – Costs and Benefits of Information Systems


EXAMPLE

Consider Solution B of the hostel mess management problem. The direct costs are:
1. Cost of PC/XT, printer, voltage regulator = Rs. 70,000
2. Cost of space (nil). No extra space allocated
3. Cost of systems analysts/Programmers/Consultants for 3 months
= Rs. 15,000
4. Stationery cost/Floppy cost /Maintenance/Electricity = Rs. 900 per month
5. Capital cost = Rs. 85,000
6. Recurring cost = Rs. 900 per month.

Benefits (Direct savings)

1. Savings per month due to inventory reduction and wastage
= 5% of mess bill of 400 students
= 0.75 * 400 * 30
= Rs. 9,000
(Assume a mess bill of Rs. 15 per day per student. Savings per day is 75 p. per student.)
2. Savings in cartage (estimate) Rs. 200 per month.
3. Savings due to early payment to vendors
= 1.2% of total billing to vendors
= 12.5 * 400 * 30 * 0.012
= Rs. 1800 per month
(Rs. 12.50 per day is assumed to be material cost in mess bill)
4. Savings due to better collection (40 defaulting students, 1% interest per
month)
= 40 * 450 * 0.01
= Rs. 180 per month.

INTANGIBLE BENEFITS

1. Student satisfaction due to itemized bills and less variation
2. Better menu planning

Total benefits = Rs. 11,180 per month
Recurring cost = Rs. 900 per month
Net benefit per month = Rs. 10,280
Total cost = Rs. 85,000

PAY BACK PERIOD – evaluation of the system

Once the costs of the project and the benefits have been quantified, the next
step is to find out whether the benefits justify the cost. There are two ways of finding
this out, known as the payback method and the present value method. The payback
method is used to find out in how many years (or months) the money spent is recovered
as benefits. In the example considered, we found that the cost was Rs. 85,000 and the
benefits Rs. 10,280 per month. Thus in 8.3 months we recover Rs. 85,324, which
exceeds the cost. The payback period is thus 8.3 months. It can be compared across
alternatives to finalize the system when there is more than one alternative.
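
The payback computation for the hostel mess example is simple enough to express
directly in code; the sketch below just restates the arithmetic above (capital cost
Rs. 85,000, benefits Rs. 11,180 and recurring cost Rs. 900 per month) and is illustrative only.

    def payback_period_months(capital_cost, monthly_benefit, monthly_recurring_cost=0):
        """Months needed for net monthly benefits to recover the capital cost."""
        net_monthly_benefit = monthly_benefit - monthly_recurring_cost
        return capital_cost / net_monthly_benefit

    # Figures taken from the hostel mess example above
    months = payback_period_months(capital_cost=85_000,
                                   monthly_benefit=11_180,
                                   monthly_recurring_cost=900)
    print(f"Payback period: {months:.1f} months")   # about 8.3 months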
4.5 CONTROLS IN INFORMATION SYSTEM

Control is a method to ensure that a system processes data as it was designed
to. Controls are essential to ensure the reliability of the reports produced by the
system, and they must be built into the design of the system. Some simple spot checks of
outputs for their correctness are not sufficient. Information systems are often designed
to handle massive volumes of data. For example, the processing of SSLC
examination results will have to handle over a million records. To reliably
process such large volumes of data, formal controls are essential. Methods should be
devised to control the flow of data in and out of the data processing system, and to
ensure that all data entering the system are correct and included, and that no duplicate
data enter the system. If some record is omitted by mistake, a method should be devised
to find the location of such an error without having to search the whole system.

Controls do cost more programming time, processing time, and memory
space. These considerations should not, however, limit the extent of control. Necessary
controls should be planned and established during the initial stage of the design.

4.5.1 OBJECTIVES OF CONTROL

The primary objectives of controls are:


1. To make sure that the data entering the computer are correct
2. To check the clerical handling of data before these reach the computer.
3. To provide a method of auditing the steps in a procedure to detect quickly
where an error has occurred in the procedure.
4. To ensure that accounting for tax purpose or other legal requirements is
carried out according to the law.
5. To guard against fraud that may affect the financial standing or reputation of
the business.

4.5.2 CONTROL TECHNIQUES

(i) Organization measures.

It is necessary to assign well-defined responsibilities for input preparation,
delivery, output use, and operation and maintenance of systems. Specific
responsibilities for reporting any changes in a system must be assigned. All
changes must be monitored. Performance and recording of operations should be in
different hands.

(ii) Input preparation controls.

Careful entry of data will reduce incorrect data. Special measures, discussed
earlier, should be taken during input preparation:
(a) Sequence numbering
(b) Batch controls
(c) Data entry and verification
(d) Record totals
(e) Self-checking digit
(a) Sequence numbering:

Each data record is given a sequence number. This provides the information to
pick the incorrect record when an error is detected in it. A missing record can also be
tracked using the sequence number.

(b) Batch control:

Input data records are grouped into batches of about 100 records, and each batch is
numbered. One or more important fields in each record are selected, and the values of
these fields in all the records are added to form a batch control total. A batch control
record is designed, which includes information such as the number of records in the batch
and the batch control total. This batch control record is entered at the end of each batch
of records into the disk file. It is used by the data validation program to check that no
records in the batch are missed and that there is no data entry error in the selected
field(s). The table below shows a batch of records and the

TABLE

Serial no. Roll no Name Grade


1 20751 Bhaskar C
2 21101 Ganesh D
3 21202 Batch E
4 22313 John B
5 32141 Xavier A
- - - -
- - - -
- - - -
100 33527 Rajesh A

corresponding batch control record is prepared from it.

Records are read one by one by the validation program and counted. The
program also counts the number of A's, B's, C's, etc. in the batch. The count of records
and the counts of A's, B's, etc. are compared with those in the batch control record. If
they match, there is no error. Otherwise, a data entry error is detected. The error could be
in any of the 100 records, so a manual comparison of each record in the file with that in
the data entry form is then needed. This is feasible only if the number of records in a
batch is not very large.
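
A batch control check of this kind might look roughly like the following sketch
(illustrative only; the record layout and the choice of grade counts as the control totals
are assumptions based on the example above):

    from collections import Counter

    # A batch of (roll_no, name, grade) records as keyed in by the operator
    batch = [(20751, "Bhaskar", "C"), (21101, "Ganesh", "D"),
             (22313, "John", "B"), (32141, "Xavier", "A")]

    # Batch control record prepared manually before data entry:
    # expected record count and expected count of each grade
    control = {"records": 4, "grades": {"A": 1, "B": 1, "C": 1, "D": 1}}

    def batch_ok(batch, control):
        """Compare actual counts against the batch control record."""
        grade_counts = Counter(grade for _, _, grade in batch)
        return len(batch) == control["records"] and dict(grade_counts) == control["grades"]

    print("Batch accepted" if batch_ok(batch, control) else "Data entry error in batch")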

(c) Data entry and verification:

The same set of data records is entered by two different operators and two
files are created. Records in these two files are compared by a program. Any
difference in two corresponding records indicates an error. The records are then
retrieved and compared with those in the data entry form.

(d) Record totals:

Selected fields in a record are summed and the sum is entered as a separate
field in each record. The data validation program sums the fields as entered by the
data entry operator and compares this sum with the manually computed hash total
field. Any difference is signaled and the record is manually checked.

(e) Check digit:

For important fields, modulus - N check digits are used to detect errors. Other
checks carried out by a data validation program are:

a) Character check. Data fields containing illegal characters (e.g., a numeric
field containing the character S instead of the number 5) due to data entry errors are
checked by the input validation program.
b) Radix error. Data fields with known limits on values are checked. For
example, if a field for minutes has a number greater than 60 it is a mistake.
c) Range check. Check if a field lies within allowable range. For example, an
entry of 98 for number of hours worked per week by a factory worker is
wrong as it is larger than the maximum allowed number of working hours per
week.
d) Reasonableness check. Values of fields are checked to see whether they
are reasonable. For example, if the telephone charge of a customer is
entered as Rs. 6,000 when the customer’s average charge is around Rs.600,
then it is probably an error. Such errors should be indicated by a data
validation program for closer check.
e) Inconsistent data. For example, an entry 30-02-79 as a date is a mistake and
should be detected.
f) Incorrect values entered. These values are checked by using batch control
totals.

g) Missing data. If a full record is missing the batch control total will indicate it.
If no value is entered for a field it should be checked and indicated by the data
validation program.
h) Data records in wrong order. Sequence numbers in records are used to detect
this error.
i) Inter - field relationship check. If different fields are related in a known way
this relationship can be used to check the data entered. For example, if the
entry for year of study of a student in a school is 4 and the age entry is 72, then
there is probably an error.
j) File header. All input data files should have a record at the beginning, giving
it identification. Every program using a file should check the identification to
ensure that it is processing the right file.
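
A data validation program implementing a few of the checks listed above might be
sketched as follows (a minimal illustration; the field names, limits, and error messages
are assumptions, not taken from any specific system):

    from datetime import date

    def validate(record):
        """Return a list of validation errors for one input record."""
        errors = []
        # Character check: the roll number field must be purely numeric
        if not record["roll_no"].isdigit():
            errors.append("roll_no contains illegal characters")
        # Range check: weekly working hours must lie within an allowable range
        if not (0 <= record["hours_worked"] <= 60):
            errors.append("hours_worked outside allowable range")
        # Inconsistent data: the date fields must form a valid calendar date
        try:
            date(record["year"], record["month"], record["day"])
        except ValueError:
            errors.append("inconsistent date")
        # Missing data: no field may be left empty
        if any(v in ("", None) for v in record.values()):
            errors.append("missing data in one or more fields")
        return errors

    rec = {"roll_no": "2075I", "hours_worked": 98, "year": 1979, "month": 2, "day": 30}
    print(validate(rec))   # flags all three deliberate errors in this record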

We again reiterate that careful design of controls for data entry and the input
validation program is essential in data processing. As the volumes of data processed are
large and the consequences of data errors in files can be very expensive, "cleaning" input
data before they get into a database is very important.

(iii) PROCESSING CONTROL

(a) Proof figures:

These are used to check an important arithmetic operation in a procedure. The
check serves the dual purpose of detecting any data entry/processing error or a
hardware error. A proof figure is an additional data element entered in each record.
For example, assume the following record structure:

Item no., qty. supplied, cost per unit

Another field, known as the proof cost, is appended to the record. The modified record is:

Item no., qty. supplied, cost per unit, proof cost

The proof cost is manually calculated for each record in the following way. Assume
that the highest cost which is ever expected is H. Then

Proof cost = H - cost per unit


As part of the procedure, the following additional calculations are performed:

total cost ← 0; total qty ← 0; check total ← 0
for all records do
    total qty ← total qty + qty. supplied
    total cost ← total cost + qty. supplied * cost per unit
    check total ← check total + qty. supplied * proof cost
endfor
if total qty * H = total cost + check total
    then print "No error"
    else print "Error"
endif

If an error message results, it could be due to a data entry error or an
arithmetic error. The location of the error may be found by breaking the loop into
a number of sets of records and calculating the proof totals for each set.
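
In code, the proof-figure identity (total qty * H = total cost + check total) can be
verified as in the sketch below (illustrative only; the records and the assumed highest
cost H are made up for this example):

    H = 1000  # assumed highest cost per unit ever expected

    # (item_no, qty_supplied, cost_per_unit, proof_cost) with proof_cost = H - cost_per_unit
    records = [(1, 10, 250, 750), (2, 5, 400, 600), (3, 20, 120, 880)]

    total_qty = sum(qty for _, qty, _, _ in records)
    total_cost = sum(qty * cost for _, qty, cost, _ in records)
    check_total = sum(qty * proof for _, qty, _, proof in records)

    # If any qty, cost, or proof cost was entered wrongly, the identity fails
    print("No error" if total_qty * H == total_cost + check_total else "Error")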

(b) Two - way check:

The same quantity can be calculated in two different ways and the results compared.
For example, in a payroll system the system calculates, for each record, gross pay,
deductions, and net pay. The gross pay, deductions, and net pay can be accumulated
separately, and one may check whether (sum of gross pay - sum of deductions) = sum of
net pay.
(c) Relationship check:

When a known relationship between two data elements exists and one of the
elements is correct, it is possible to check the other element by using the relationship.
For example, if the discount rate is known for a set of sales, the procedure can sum all
sales and sum all rebates and check whether the rebate total equals (discount * sum of all
sales). If the sum of sales = 150,000 and the discount is 5%, then the rebate total should
equal 7,500.

(d) Check point and restart:

A check point procedure is a check routine which is executed at specific
intervals during the execution of a large data processing program. The point where the
check routine is run is called a check point. The purpose of a check point is to find
out whether processing has been carried out correctly up to that point. At check points,
quantities such as control totals and proof figures can be checked. If the processing is
correct, the status of the machine is recorded by writing it on a disk. The normal
procedure is then resumed.

Check point procedure divides a long job into a series of short ones. Each
short run is independent and is checked independently. If the check is correct then
enough information is written on the disk to enable automatic restarting of processing
from this check point. If the check is incorrect the last part of the process is discarded;
errors, if any, are corrected and the system is restarted from the previous check point.

A restart procedure restores the status of the computer as it existed at the
specified check point. Restoring status implies restoring the values of all CPU
registers, restoration of accumulated totals, re-establishing switches and counters,
restoring the file status of secondary files, and so on.

The proper use of check point and restart procedure in a program improves the
operational efficiency of a data processing system. If power failure, or hardware or
software failure occurs, then the system can be restored to the last check point. The
processing done between the last check point and the system failure is the only work
that is wasted.

Restart procedures also allow intentional interruption of a running program.
Programs may be intentionally interrupted to accommodate priority jobs, emergency
maintenance, change of shift, etc.

4.6 CODING TECHNIQUES

Many key fields in the data input to a computer-based system are coded. Codes
are used to identify persons (e.g., a student's roll number), products, components,
materials, machines, vendors, customers, locations, etc. The main reasons for coding are:

(i) Unique identification:

Each item in a system should be identified uniquely and correctly. For
example, in a college there may be two students with the same name (e.g., two students
named Gupta). Unless the two can be distinguished, for example by using different roll
numbers, there will be confusion.

(ii) Cross - referencing.

Diverse activities in an organization give rise to transactions in different
subsystems that affect the same item. For example, items are issued to a customer by the
stores. Customers are billed by the accounts department. The marketing department
keeps a list of customers for follow-up on sales. If bills are not paid in time, stores
may suspend the issue of items to customers. Stores and accounts have to correlate their
references to a customer. A common code is thus essential for cross-referencing.

(iii) Efficient storage and retrieval.

A code is a concise representation. It reduces data entry time and improves
reliability. A code used as a key reduces the storage space required for the data, and
retrieval based on a key search is faster in a computer.

4.6.1 REQUIREMENTS OF A CODING SCHEME

A coding scheme must be concise: the number of digits/characters used in a code
must be minimal, to reduce the storage space taken by the code and to improve retrieval
efficiency. It should be expandable, that is, it must allow new items to be
added easily. It should be meaningful and convey to a user some information about
the characteristics of the item to enable quick recognition and identification of the
item. It should be comprehensive; in other words, it should include the characteristics
relevant to all the related activities in which the item will be involved. It should also be
precise, i.e. the scheme should produce unique, unambiguous codes.

4.6.2 TYPES OF CODES

Some common codes are:

(i) Serial numbers.

For example, for bank accounts one may assign a serial number to each
account holder. As and when a new account is opened, the person is given the next
serial number. The advantage of this method is that it is concise, precise and
expandable. It is, however, not meaningful: from the serial number it is not possible
to find out anything about the account holder. It is also not comprehensive.

(ii) Block codes.

The block codes use blocks of serial numbers. For example, account numbers
0000 to 9999 may be used for savings accounts in a bank; 10000 to 99999 for current
accounts; 100000 to 999999 for special deposit accounts, etc. A similar block code
can be used in other areas such as coding items in a store, and assigning roll numbers
to students. This code is expandable and more meaningful than the serial number
coding. It is precise but not comprehensive.
(iii) Group classification code.

This is an improvement on the block code and is more meaningful. For
example, a method of assigning roll numbers to students may use the following
system:

2005 24 05 2 101
(year of admission, term admitted, department, status (UG/PG), serial no. in the department)

Often mnemonics instead of numbers make the code more meaningful. For
instance, in the above representation we can write the code as

2005 24 CS UG 101
(with CS for Computer Science and UG for Undergraduate)

This code is meaningful, comprehensive, expandable and precise. It is,
however, not concise. The code can become very long if an attempt is made to make it
more comprehensive. This is a good coding method and is widely used. For example,
the PIN code used by the Postal Department is based on this idea.
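
A group classification code of this kind is easy to compose and decompose in software.
The sketch below is purely illustrative, using the roll-number fields from the example
above; the field widths and helper names are assumptions made for this text.

    def build_roll_no(year, term, dept, status, serial):
        """Compose a group classification code from its component fields."""
        return f"{year:04d}{term:02d}{dept}{status}{serial:03d}"

    def parse_roll_no(code):
        """Split a roll number back into its meaningful groups."""
        return {"year": code[0:4], "term": code[4:6],
                "dept": code[6:8], "status": code[8:10], "serial": code[10:13]}

    roll = build_roll_no(2005, 24, "CS", "UG", 101)
    print(roll)                 # 200524CSUG101
    print(parse_roll_no(roll))  # each field recovered from its position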

(iv) Significant codes.

These codes use some or all of the digits to describe a value of the product
being coded. For example, the code

BA M 95 C B [style]
(Banian, Male, chest size 95 cm, Cotton, Blue colour, style)

describes a T-shirt (banian) for males of chest size 95 cm, made of cotton, of blue
colour and with a round neck. The value may be directly used in some calculations. This
code is meaningful, precise, comprehensive and expandable. It is not concise.

There are also special coding schemes which are standardized. Simple
examples of international standardization are the Dewey decimal classification codes
used for books, which simplify locating books in libraries, and the ISBN
(International Standard Book Number) used by publishers all over the world to
assign a number to a book when it is published.

4.6.3 DETECTION OF ERROR IN CODES

In the previous section we discussed various methods of coding but did not
specifically deal with codes capable of error detection. When data are
entered into a computer by a data entry operator, typing errors can occur. It is
desirable to detect errors in typing at the data entry stage to avoid delays and difficulties
which may arise if they are not detected early. For instance, an incorrect entry of a
customer's account number when he cashes a cheque may, unless detected during
data entry, lead to debiting a wrong account. It is thus necessary to design a code
such that if there is an error in the code it can be detected during data entry by a
simple program. It is practically impossible to design a code so that any arbitrary
error is detected. A code can, however, be designed if the types of errors normally
committed in data entry are known. The code will then be able to detect all such errors.

By experimental study it has been found that the common types of errors
committed during data entry are as shown in Table 4.3, together with the probability of
occurrence of each of these errors.
Table 4.3 - Common Errors Made during Data Entry

Type of error                                     Example            Occurrence (percent)
Single transcription error
(one digit incorrectly typed)                     45687 -> 49687     86
Transposition error
(two digits interchanged)                         96845 -> 96485     9
                                                  96845 -> 94865
All other errors                                                     5

If a code is designed which is able to detect the two common types of errors,
namely single transcription and transposition errors, it will be reasonably good. Such
a code has been designed and is called the modulus-11 code. We will first describe how
such a code is constructed, then see how it is able to detect errors, and finally indicate
the theoretical basis of the design.

4.6.4 CONSTRUCTING MODULUS - 11 CODES

Given a set of codes, they are transformed into another set of codes with the
error-detecting property as follows.

Let the given code be: 48793

Multiply the least significant digit by 2, the digit to its left by 3, the digit to its
left by 4, and so on, and add the products. (The digits 2, 3, 4, etc. with which we multiply
the digits of the code are called weights.) For the given code 48793, applying this
operation we get

Weighted sum of digits = 4 * 6 + 8 * 5 + 7 * 4 + 9 * 3 + 3 * 2 = 125

Divide the weighted sum of digits by 11 and append (11 - remainder) to the right of
the code. This is the new code to be used. In this case, as 125 mod 11 = 4, we append
(11 - 4) = 7 to the code and get the new code 487937. The digit 7 appended to the code
is called a check digit. If the remainder after division is 1, then (11 - remainder) =
(11 - 1) = 10; in such a case the character X is appended to the code. If the remainder
after division is 0, then 0 is appended to the code.
Table 4.4 below shows some examples of codes and their equivalent modulus-11 codes.

Table 4.4 - Illustration of Construction of Modulus-11 Codes

Code     Working                                                      Modulus-11 code
45687    4*6 + 5*5 + 6*4 + 8*3 + 7*2 = 111; 111 mod 11 = 1;
         digit to append = (11 - 1) = 10, represented as X            45687X
68748                                                                 687480
65432                                                                 654329
69752                                                                 697524

An algorithm for generating the modulus-11 code is given below. Let dn dn-1 ... d2
be the given code (n < 10). The algorithm finds the check digit d1 to be appended to the
code as its least significant digit.

weighted sum ← 0
for i = 2 to n do
    weighted sum ← weighted sum + i * di
endfor
r ← weighted sum mod 11
(r is the remainder obtained by dividing the weighted sum by 11)
if r = 0
    then d1 ← 0
else if r = 1
    then d1 ← X
    else d1 ← (11 - r)
endif
endif
Append d1 to the code.
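
The same algorithm in executable form might look like the following sketch (written
for this text; the function name is an assumption):

    def append_check_digit(code):
        """Append a modulus-11 check digit to a numeric code string."""
        digits = [int(c) for c in code]
        # weights 2, 3, 4, ... starting from the least significant digit
        weighted_sum = sum(w * d for w, d in zip(range(2, len(digits) + 2), reversed(digits)))
        r = weighted_sum % 11
        check = "0" if r == 0 else ("X" if r == 1 else str(11 - r))
        return code + check

    print(append_check_digit("48793"))  # 487937
    print(append_check_digit("45687"))  # 45687X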

4.7 ERROR DETECTION

While a code is being entered by a data entry operator, a program multiplies
the digits in the code by the weights 1, 2, 3, ..., starting from the least significant
digit. The sum of these products is divided by 11. If the remainder is not equal to 0,
then an error in data entry is indicated. The algorithm for error detection is given
below. Let the given code be dn dn-1 ... d1.
4.7.1 ALGORITHM FOR ERROR DETECTION

if d1 = X
    then weighted sum ← 10
    else weighted sum ← d1
endif
for i = 2 to n do
    weighted sum ← weighted sum + i * di
endfor
r ← weighted sum mod 11
if r = 0
    then "No error"
    else "Error in code"
endif
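
A corresponding check at data entry time can be sketched as below (illustrative only;
it simply re-computes the weighted sum, with the check digit itself given weight 1):

    def is_valid(code_with_check):
        """True if a modulus-11 coded value (possibly ending in X) shows no error."""
        values = [10 if c == "X" else int(c) for c in code_with_check]
        # weights 1, 2, 3, ... starting from the least significant digit (the check digit)
        weighted_sum = sum(w * d for w, d in zip(range(1, len(values) + 1), reversed(values)))
        return weighted_sum % 11 == 0

    print(is_valid("487937"))  # True  - correctly entered
    print(is_valid("487397"))  # False - transposition error detected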

4.8 TESTING OF INFORMATION SYSTEMS

When a system is developed, it is hoped that it performs properly. In practice,
however, some errors always occur. The main purpose of testing an information
system is to find these errors and correct them. A successful test is one which finds an
error. The main objectives of system testing are:

• To ensure that during operation the system will perform as per specifications.
• To make sure that the system meets users' requirements during operation.
• To verify that the controls incorporated in the system function as intended.
• To see that when correct inputs are fed to the system the outputs are correct.
• To make sure that during operation, incorrect input, processing and outputs
  will be detected.

The scope of a system test should include both manual operations and
computerized operations. System testing is a comprehensive evaluation of the
programs, manual procedures, computer operations and controls. System tests may be
classified as program tests, string tests, system tests, pilot tests and parallel tests.
These are now discussed.

(i) Program tests:

These are designed to test the logic of programs. Program logic is normally
very complicated and it is practically impossible to test all the paths taken by a
program. Normally, individual modules are tested. Test data are generated to test all
logical paths in the module. A very good tool for determining the paths to be taken is a
decision table listing all condition tests. All logically possible rules in the
decision table can then be determined and used to generate a complete set of test data.
The most common errors in programs occur at boundary points in tests, so test data
should exercise all boundary points where there is a transition from true to false or vice
versa in the conditions being tested in the program. Another common error is made in
counting; thus the program must be tested with counts which are one more or one less
than what is specified. Some tools are available which analyze a program and create a
flow chart equivalent to it. Such a tool is useful in generating test data.

(ii) String tests:

These are carried out on a set of related program modules. The purpose of a
string test is to ensure that data are correctly transferred from one program in the
string to the next. For example, in a student result processing system, the grade
processing module for post - graduate students must use valid output from the post -
graduate student registration module.

(iii) System tests:

These are used to test all programs which together constitute the system. In a
payroll system, for instance, all programs in the system such as calculation of bonus,
overtime, and costing program (which uses the output of the payroll system) are to be
tested together. Testing is conducted using synthetic data. Both valid and invalid
transactions are used in this test. For example, a non - existent employee code may be
used in a transaction to see if it is rejected. Similarly, some unreasonable data may be
used to check whether the input controls function as expected.

(iv) Pilot tests:

A set of transactions which have been run on the present system are collected.
Results of their processing on the existing manual system are also kept. This set is
used as test data when a computer - based system is initially developed. The two
results are matched. The reason for any discrepancy is investigated to modify the new
system.

(v) Parallel runs:

The aim here is similar to pilot tests. In this, however, both manual and
computer - based systems are run simultaneously for a period of time and the results
from the two systems are compared. This is a good method for complex systems but is
expensive.

Preliminary tests on a system before it is released for general use are known as
alpha tests. This is followed by a beta testing period when the system is released for
general use and carefully observed for malfunctions. After this phase it is put to
general unrestricted use.

4.9 SECURITY OF INFORMATION SYSTEMS

Data entering a data processing system and the programs processing the data
must be kept secure. By security we mean protecting the data and programs against
accidental or intentional modification or destruction or disclosure to unauthorized
persons. It is the responsibility of a systems analyst to ensure the security of both data
and procedures. The following requirements should be met to ensure security:
1. The data and programs must be protected from theft, fire, disk corruption
and other types of physical destruction. Duplicate copies are kept in a fire-
proof vault in a place away from the data processing centre. This is
particularly important for financial data.
2. Data should be reconstructable in case of loss despite precautions. Backup
copies of master files and transaction files are kept.
3. The system should be tamper-proof. A password system and file security
keys are used to bar unauthorized access. If the password system is broken by
a clever programmer, a secrecy transformation may be used to transform the
stored data. Even if the data is accessed, it will not be meaningful in its
transformed form. One simple transformation is to map every character in a
file to another character using a secret mapping table. For example, a table
may transform 1 to 8, 2 to 7, 3 to 5, ……, A to W, B to U, C to X, …………..
The table is kept secret. A person who gains unauthorized access cannot
understand the data, while an authorized person recovers the original data by
applying the inverse transformation using the table. (A small sketch of such a
mapping follows this list.)
4. Any person gaining access to a file should be identified. Thus attempts to
access data are logged and the identity of the person making the attempt is
also recorded. This will deter potential intruders.
5. Only authorized persons should be allowed to change data. A password
system is used to prevent unauthorized access. Every access, authorized or
unauthorized, should be logged by the system. Any change should be
monitored by the system.
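
The secrecy transformation mentioned in point 3 can be sketched as follows. The mapping table is illustrative only (in practice the table would be generated and kept secret); Python's str.translate is used merely to apply it.

import string

plain  = string.digits + string.ascii_uppercase
secret = "0875319246" + "WUXZQRSTVYABCDEFGHIJKLMNOP"   # hypothetical secret permutation

encode_table = str.maketrans(plain, secret)            # 1->8, 2->7, 3->5, A->W, B->U, C->X, ...
decode_table = str.maketrans(secret, plain)            # inverse transformation for authorized users

stored = "ACCOUNT 123".translate(encode_table)         # what an intruder would see
assert stored.translate(decode_table) == "ACCOUNT 123" # an authorized user recovers the original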

Besides security, another concern of a system designer is to ensure the
privacy of data, particularly individuals' data in a database. Privacy refers to the
rights of individuals and organizations to ensure that data about them is not revealed
to others without their consent. This issue of privacy goes beyond the merely
technical issue of data security. It is a moral, ethical and legal issue to be tackled by
society through appropriate legislation. With massive databases on individuals, which
store tax returns, medical data, personal data, etc., an unscrupulous person or even a
government may use it to harass an individual unless data privacy is ensured.

Another recent problem which affects file security is the prevalence of the
computer virus. A virus is a small program written by a mischief maker which spreads
by copying itself from one computer's secondary memory to another's secondary
memory. For example, a virus known as brain virus copies itself from the boot sector
of a floppy disk in which it resides to a new floppy when it is mounted on a PC and
opened for writing. The virus thus spreads. Some viruses store a counter and every
time an infected floppy is copied the count decreases by 1. When the count reaches
zero, a program in it destroys files stored in the hard disk.

Virus also spreads through computer networks and replicates itself on many
computers connected to the network. It is thus essential for a security system to
protect files from viruses. One physical control is not to allow free copying of floppy
disks from unknown sources. Vaccines against viruses have appeared in the market
which examine the boot sectors of floppies and detect attempts by viruses to copy
themselves. A warning is issued to the user when this is found, saving the user's disk
from getting infected.

There is no fool-proof technique of protection against viruses in networks.
Strict access controls, logging of all accesses, and examining suspicious programs in
detail are the only means. The threat of viruses may decrease if virus propagators are
convicted and jailed when caught. This is already happening in the West.

REVIEW QUESTIONS:

1. Why are Controls necessary in an Information System?


2. Enumerate the Audit procedures
3. Why should Information Systems be audited?
4. Why are Systems Tests necessary?
UNIT 5

SYSTEM AUDIT

5.1 INTRODUCTION
5.2 SOFTWARE ENGINEERING QUALITIES
5.2.1 DIMENSION OF SOFTWARE QUALITIES
5.3 A SOFTWARE QUALITY ENGINEERING PROGRAM
5.3.1 QUALITIES AND ATTRIBUTES
5.3.2 QUALITY EVALUATIONS
5.3.3 NONCONFORMANCE ANALYSIS
5.3.4 FAULT TOLERANCE ENGINEERING
5.3.5 TECHNIQUES AND TOOLS
5.4 SOFTWARE RELIABILITY AND METRICS
5.4.1 SOFTWARE RELIABILITY METRICS:
5.4.1.1 PRODUCT METRICS
5.4.1.2 FUNCTION POINT METRIC
5.4.1.3 TEST COVERAGE METRICS
5.4.1.4 PROJECT MANAGEMENT METRICS
5.4.1.5 PROCESS METRICS
5.4.1.6 FAULT AND FAILURE METRICS
5.5 VERIFICATION AND VALIDATION:
5.5.1 VERIFICATION TECHNIQUES
5.5.1.1 DYNAMIC TESTING
5.5.1.2 FUNCTIONAL TESTING
5.5.1.3 STRUCTURAL TESTING
5.5.1.4 RANDOM TESTING
5.5.1.5 STATIC TESTING
5.5.1.6 CONSISTENCY TECHNIQUES
5.5.1.7 MEASUREMENT TECHNIQUES
5.5.2 VALIDATION TECHNIQUES
5.5.2.1 FORMAL METHODS
5.5.2.2 FAULT INJECTION
5.5.2.3 HARDWARE FAULT INJECTION
5.5.2.4 SOFTWARE FAULT INJECTION
5.5.2.5 DEPENDABILITY ANALYSIS
5.5.2.6 HAZARD ANALYSIS RISK ANALYSIS
5.5.3 TYPES OF TESTING
5.5.3.1 COMPONENT.
5.5.3.2 ACCEPTANCE.
5.5.3.3 INTERFACE SYSTEM.
5.5.3.4 RELEASE
5.6 SOFTWARE QUALITY ASSURANCE ACTIVITIES
5.6.1 SQA RELATIONSHIPS TO OTHER ASSURANCE ACTIVITIES:
5.6.1.1 CONFIGURATION MANAGEMENT MONITORING
5.6.1.2 VERIFICATION AND VALIDATION MONITORING:
5.6.1.3 FORMAL TEST MONITORING
5.7 SOFTWARE QUALITY ASSURANCE
5.7.1 CONCEPTS AND INITIATION PHASE
5.7.2 SOFTWARE REQUIREMENTS PHASE
5.7.3 SOFTWARE ARCHITECTURAL (PRELIMINARY) DESIGN PHASE
5.7.4 SOFTWARE DETAILED DESIGN PHASE
5.7.5 SOFTWARE IMPLEMENTATION PHASE
5.7.6 SOFTWARE INTEGRATION AND TEST PHASE
5.7.7 SOFTWARE ACCEPTANCES AND DELIVERY PHASE
5.7.8 SOFTWARE SUSTAINING ENGINEERING AND OPERATIONS
PHASE
5.7.9 TECHNIQUES AND TOOLS
5.8 SOFTWARE DESIGN QUALITY:
5.8.1 COUPLING
5.8.2 COHESION
5.8.3 CHARACTERISTICS OF PERFECT DESIGN:
5.9 SOFTWARE REVIEWS, INSPECTIONS AND WALKTHROUGHS
5.10 CAPABILITY MATURITY MODEL

SUMMARY

REVIEW QUESTIONS

LEARNING OBJECTIVES

After learning this unit you must be able to:

• Understand the concepts of software engineering qualities

• Analyze different dimensions of software quality

• Identify the measurement tools for software reliability

• Explain the importance of quality and continuous improvement

• Describe the validation and verification techniques

• Understand the importance of software reviews, inspections and walkthroughs

• Identify the key areas of software quality model


5.1 INTRODUCTION

Information systems are the heart of an organization. Their success depends on the
methodologies and process framework behind them. Teamed up with the right infrastructure,
companies that have properly planned information systems in place are the ones that
gain maximum competitive advantage.

Frequently, the manager of an information system needs to evaluate the
actual properties of the system. The reason may be the need for assurance that a
development project delivered conforms to the requirements set, or for proof of the
quality of one's information system or any of its parts, for instance a particular
software security solution.

A characteristic of today's organizations is an abundance of information and
information systems. Information and its supporting technologies are frequently
regarded as the most valuable assets. In the rapidly changing work environment,
expectations related to benefits from information technologies are high.
Management therefore demands that information systems demonstrate
ever-enhancing quality, higher functionality, easier use, shorter delivery periods and
ever-improving service levels.

Primarily, companies appreciate the benefits they gain from an effective and
up-to-date information system. However, the risks that emerge as new technologies
are implemented are frequently not fully understood, nor is that issue considered in the risk
analysis of business processes. Still, successful organizations perceive and manage the
risks related to the implementation of new technologies and establish the required quality,
reliability and security demands for their information systems. At the same time, they
demand that these requirements be realized at the lowest possible expense.

At the same time, an adequate level of know-how is required to establish tasks for
a company's information system, to order an information system, and to check how well the
information system developed conforms to the requirements set.

An auditing service involves evaluation of an organization and its information
system against well-known requirements and practices of IT supervision and
management, including evaluation of how well the different stages of the
software development process conform to the standards of the development process.

For information systems there are generally two kinds of auditors: internal and
external. Internal auditors work for the same organization that owns the information
system, whereas external auditors are hired from outside. Conformity to the
requirements is checked at various phases of the system life cycle.
5.2 SOFTWARE ENGINEERING QUALITIES

Software Quality Engineering (SQE) is a process that evaluates, assesses, and


improves the quality of software.
Software quality is often defined as “the degree to which software meets requirements for
reliability, maintainability, transportability, etc., as contrasted with functional,
performance, and interface requirements that are satisfied as a result of software
engineering.”

Quality must be built into a software product during its development to satisfy
quality requirements established for it. SQE ensures that the process of incorporating
quality in the software is done properly, and that the resulting software product meets the
quality requirements. The degree of conformance to quality requirements usually must
be determined by analysis, while functional requirements are demonstrated by testing.
SQE performs a function complementary to software development engineering. Their
common goal is to ensure that a safe, reliable, and quality engineered software product is
developed.

5.2.1 Dimension of Software Qualities

Qualities for which an SQE evaluation is to be done must first be selected and
requirements set for them. Some commonly used dimensions of qualities are:
1. Reliability
2. Maintainability
3. Transportability
4. Interoperability
5. Testability
6. Usability
7. Reusability
8. Traceability
9. Sustainability and
10. Efficiency.

Some of the key ones are discussed below.

1. Reliability

Hardware reliability is often defined in terms of the Mean- Time-To-Failure, or MTTF,


of a given set of equipment. An analogous notion is useful for software, although the
failure mechanisms are different and the mathematical predictions used for hardware
have not yet been usefully applied to software. Software reliability is often defined as the
extent to which a program can be expected to perform intended functions with required
precision over a given period of time. Software reliability engineering is concerned with
the detection and correction of errors in the software; even more, it is concerned with
techniques to compensate for unknown software errors and for problems in the hardware
and data environments in which the software must operate.

2. Maintainability

Software maintainability is defined as the ease of finding and correcting errors


in the software. It is analogous to the hardware quality of Mean-Time-To-Repair, or
MTTR. While there is as yet no way to directly measure or predict software
maintainability, there is a significant body of knowledge about software attributes that
make software easier to maintain. These include modularity, internal documentation,
code readability, and structured coding techniques. These same attributes also improve
sustainability, the ability to make improvements to the software.

3. Transportability

Transportability is defined as the ease of transporting a given set of software to


a new hardware and/or operating system environment.

4. Interoperability

Software interoperability is the ability of two or more software systems to


exchange information and to mutually use the exchanged information.

5. Efficiency

Efficiency is the extent to which software uses minimum hardware resources to


perform its functions.

Some of them will not be important to a specific software system, thus no


activities will be performed to assess or improve them. Maximizing some qualities may
cause others to be decreased. For example, increasing the efficiency of a piece of
software may require writing parts of it in assembly language. This will decrease the
transportability and maintainability of the software.

5.3 A SOFTWARE QUALITY ENGINEERING PROGRAM

The two software qualities, which command the most attention, are reliability
and maintainability. Some practical programs and techniques have been developed to
improve the reliability and maintainability of software, even if they are not measurable or
predictable. The types of activities that might be included in an SQE program are
described here in terms of these two qualities. These activities could be used as a model
for the SQE activities for additional qualities.
5.3.1 Qualities and Attributes

An initial step in laying out an SQE program is to select the qualities that are
important in the context of the use of the software that is being developed. For example,
the highest priority qualities for flight software are usually reliability and efficiency. If
revised flight software can be up-linked during flight, maintainability may be of interest,
but considerations like transportability will not drive the design or implementation. On
the other hand, the use of science analysis software might require ease of change and
maintainability, with reliability a concern and efficiency not a driver at all.

After the software qualities are selected and ranked, specific attributes of the
software that help to increase those qualities should be identified. For example,
modularity is an attribute that tends to increase both reliability and maintainability.
Modular software is designed to result in code that is apportioned into small, self-
contained, functionally unique components or units. Modular code is easier to maintain,
because the interactions between units of code are easily understood, and low-level
functions are contained in few units of code. Modular code is also more reliable, because
it is easier to completely test a small, self-contained unit.

Not all software qualities are so simply related to measurable design and code
attributes, and no quality is so simple that it can be easily measured. The idea is to select
or devise measurable, analyzable, or testable design and code attributes that will increase
the desired qualities. Attributes like information hiding, strength, cohesion, and coupling
should be considered.

5.3.2 Quality Evaluations

Once some decisions have been made about the quality objectives and software
attributes, quality evaluations can be done. The intent in an evaluation is to measure the
effectiveness of a standard or procedure in promoting the desired attributes of the
software product. For example, the design and coding standards should undergo a quality
evaluation. If modularity is desired, the standards should clearly say so and should set
standards for the size of units or components. Since internal documentation is linked to
maintainability, the documentation standards should be clear and require good internal
documentation.

Quality of designs and code should also be evaluated. This can be done as a
part of the walkthrough or inspection process, or a quality audit can be done. In either
case, the implementation is evaluated against the standard and against the evaluator's
knowledge of good software engineering practices, and examples of poor quality in the
product are identified for possible correction.

5.3.3 Nonconformance Analysis

One very useful SQE activity is an analysis of a project's nonconformance


records. The nonconformance should be analyzed for unexpectedly high numbers of
events in specific sections or modules of code. If areas of code are found that have had
an unusually high error count (assuming it is not because the code in question has been
tested more thoroughly), then the code should be examined. The high error count may be
due to poor quality code, an inappropriate design, or requirements that are not well
understood or defined. In any case, the analysis may indicate changes and rework that
can improve the reliability of the completed software. In addition to code problems, the
analysis may also reveal software development or maintenance processes that allow or
cause a high proportion of errors to be introduced into the software. If so, an evaluation
of the procedures may lead to changes, or an audit may discover that the procedures are
not being followed.
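
A minimal sketch of such an analysis is shown below; the module names and counts are invented for illustration, and the 1.5x threshold is an arbitrary choice rather than a recommended statistic.

from collections import Counter

# One entry per nonconformance report, tagged with the module it was raised against.
reports = ["payroll", "payroll", "ledger", "payroll", "billing", "payroll", "ledger"]

counts  = Counter(reports)
average = sum(counts.values()) / len(counts)

for module, errors in counts.items():
    if errors > 1.5 * average:        # flag modules with unexpectedly high error counts
        print(f"Examine {module}: {errors} nonconformances (average {average:.1f})")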

5.3.4 Fault Tolerance Engineering

For software that must be of high reliability, a fault tolerance activity should be
established. It should identify software, which provides and accomplishes critical
functions and requirements. For this software, the engineering activity should determine
and develop techniques, which will ensure that the needed reliability or fault tolerance
will be attained. Some of the techniques that have been developed for high reliability
environments include:

Input data checking and error tolerance. For example, if out-of-range or missing
input data can affect reliability, then sophisticated error checking and data
interpolation/extrapolation schemes may significantly improve reliability.

Proof of correctness. For limited amounts of code, formal "proof of correctness"


methods may be able to demonstrate that no errors exist.

N-Item voting. This is a design and implementation scheme where a number of


independent sets of software and hardware operate on the same input. Some comparison
(voting) scheme is used to determine which output to use. This is especially effective
where subtle timing or hardware errors may be present.

Independent development. In this scheme, one or more of the N-items are


independently developed units of software. This helps prevent the simultaneous failure
of all items due to a common coding error.
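
A minimal sketch of N-item (majority) voting is given below, assuming three independently developed implementations of the same calculation; the implementations and the deliberately faulty case are hypothetical.

from collections import Counter

def version_a(x): return x * x
def version_b(x): return x ** 2
def version_c(x): return x * x + 1 if x == 3 else x * x   # deliberately faulty for x == 3

def voted_result(x):
    outputs = [version_a(x), version_b(x), version_c(x)]
    value, votes = Counter(outputs).most_common(1)[0]
    if votes < 2:                                          # no majority among the three items
        raise RuntimeError("outputs disagree completely")
    return value

assert voted_result(3) == 9   # the faulty version is outvoted by the other two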

5.3.5 Techniques and Tools

Some of the useful fault-tolerance techniques are described above. Standard


statistical techniques can be used to manipulate nonconformance data. In addition, there
is considerable experimentation with the Failure Modes and Effects Analysis (FMEA)
technique adapted from hardware reliability engineering. In particular, the FMEA can be
used to identify failure modes or other assumable (hardware) system states which can
then lead the quality engineer to an analysis of the software that controls the system as it
assumes those states.
There are also tools that are useful for quality engineering. They include
system and software simulators, which allow the modeling of system behavior; dynamic
analyzers, which detect the portions of the code that are used most intensively; software
tools that are used to compute metrics from code or designs; and a host of special purpose
tools that can, for example, detect all system calls to help decide on portability limits. In
this unit, let us review the various methodologies and concepts relating to the quality of an
information system.

5.4 SOFTWARE RELIABILITY AND METRICS

Metrics:

Metrics are quantitative values, usually computed from the design or code that measure
the quality in question, or some attribute of the software related to the quality. Many
metrics have been invented, and a number have been successfully used in specific
environments, but none has gained widespread acceptance. A metric (noun) is the
measurement of a particular characteristic of a program's performance or efficiency.

Reliability:

According to ANSI, Software Reliability is defined as “the probability of failure-free


software operation for a specified period of time in a specified environment”. Although
Software Reliability is defined as a probabilistic function, and comes with the notion of
time, we must note that, different from traditional Hardware Reliability, Software
Reliability is not a direct function of time. Electronic and mechanical parts may become
"old" and wear out with time and usage, but software will not rust or wear-out during its
life cycle. Software will not change over time unless intentionally changed or upgraded.

Software Reliability is an important attribute of software quality, together with


functionality, usability, performance, serviceability, capability, installability,
maintainability, and documentation. Software Reliability is hard to achieve, because the
complexity of software tends to be high. While it is hard for any system with a high degree
of complexity, including software, to reach a certain level of reliability, system
developers tend to push complexity into the software layer, encouraged by the rapid growth of
system size and ease of doing so by upgrading the software. For example, large next-
generation aircraft will have over one million source lines of software on-board; next-
generation air traffic control systems will contain between one and two million lines; the
upcoming international Space Station will have over two million lines on-board and over
ten million lines of ground support software; several major life-critical defense systems
will have over five million source lines of software. While the complexity of software is
inversely related to software reliability, it is directly related to other important factors in
software quality, especially functionality, capability, etc. Emphasizing these features will
tend to add more complexity to software.
5.4.1 SOFTWARE RELIABILITY METRICS:

Measuring software reliability remains a difficult problem because we do not have a good
understanding of the nature of software. There is no clear definition of which aspects are
related to software reliability, and we cannot find a suitable way to measure software
reliability or most of the aspects related to it. Even the most obvious
product metrics, such as software size, do not have a uniform definition.

It is tempting to measure something related to reliability to reflect the characteristics, if


we cannot measure reliability directly. The current practices of software reliability
measurement can be divided into four categories:

5.4.1.1 Product metrics

Software size is thought to be reflective of complexity, development effort and


reliability. Lines Of Code (LOC), or LOC in thousands (KLOC), is an intuitive initial
approach to measuring software size. But there is not a standard way of counting.
Typically, source code is used (SLOC, KSLOC) and comments and other non-executable
statements are not counted. This method cannot faithfully compare software not written
in the same language. The advent of new technologies of code reuse and code generation
technique also cast doubt on this simple method.
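
A minimal sketch of one counting convention is given below: blank lines and comment-only lines are excluded, and a '#' comment marker is assumed, so other languages would need other rules.

def count_sloc(path):
    # Count source lines of code, ignoring blank lines and comment-only lines.
    with open(path) as source:
        return sum(1 for line in source
                   if line.strip() and not line.strip().startswith("#"))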

5.4.1.2 Function point metric

Function point metric is a method of measuring the functionality of a proposed


software development based upon a count of inputs, outputs, master files, inquiries, and
interfaces. The method can be used to estimate the size of a software system as soon as
these functions can be identified. It is a measure of the functional complexity of the
program. It measures the functionality delivered to the user and is independent of the
programming language. It is used primarily for business systems; it is not proven in
scientific or real-time applications.
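
An unadjusted function point count can be sketched as a weighted sum of the five counts named above. The weights below are commonly quoted average-complexity values and the counts are hypothetical, so the result is for illustration only.

weights = {"inputs": 4, "outputs": 5, "inquiries": 4, "master_files": 10, "interfaces": 7}
counts  = {"inputs": 12, "outputs": 8, "inquiries": 5, "master_files": 3, "interfaces": 2}

unadjusted_fp = sum(weights[item] * counts[item] for item in weights)
print(unadjusted_fp)   # 48 + 40 + 20 + 30 + 14 = 152 unadjusted function points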

Complexity is directly related to software reliability, so representing complexity
is important. Complexity-oriented metrics are methods of determining the complexity of a
program's control structure by simplifying the code into a graphical representation.

5.4.1.3 Test coverage metrics

Test coverage metrics are a way of estimating fault and reliability by performing
tests on software products, based on the assumption that software reliability is a function
of the portion of software that has been successfully verified or tested. Detailed
discussion about various software testing methods can be found in topic Software
Testing.
5.4.1.4 Project management metrics

Higher reliability can be achieved by using better development process, risk


management process, configuration management process, etc.

5.4.1.5 Process metrics

Based on the assumption that the quality of the product is a direct function of the
process, process metrics can be used to estimate, monitor and improve the reliability and
quality of software. ISO-9000 certification, or "quality management standards", is the
generic reference for a family of standards developed by the International Organization
for Standardization (ISO).

5.4.1.6 Fault and failure metrics

The goal of collecting fault and failure metrics is to be able to determine when the
software is approaching failure-free execution. Minimally, both the number of faults
found during testing (i.e., before delivery) and the failures (or other problems) reported
by users after delivery are collected, summarized and analyzed to achieve this goal. The test
strategy strongly affects the effectiveness of fault metrics, because if the testing
scenario does not cover the full functionality of the software, the software may pass all
tests and yet be prone to failure once delivered. Usually, failure metrics are based upon
customer information regarding failures found after release of the software. The failure
data collected is therefore used to calculate failure density, Mean Time Between Failures
(MTBF) or other parameters to measure or predict software reliability.
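
The two measures named above can be computed directly from the collected data; the figures in the sketch below are hypothetical.

operating_hours = 4000      # total time the delivered software was in operation
failures        = 8         # failures reported by users in that period
size_kloc       = 20        # delivered size in thousands of lines of code

mtbf            = operating_hours / failures    # 500 hours between failures
failure_density = failures / size_kloc          # 0.4 failures per KLOC
print(mtbf, failure_density)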

5.5 VERIFICATION AND VALIDATION:

According to the IEEE Standard Glossary of Software Engineering Terminology,
verification is defined as "The process of evaluating a system or component to determine
whether the products of a given development phase satisfy the conditions imposed at the
start of that phase."

Validation, on the other hand, is defined as "The process of evaluating a system or


component during or at the end of the development process to determine whether it
satisfies specified requirements."

So verification simply demonstrates whether the output of a phase conforms to the


input of a phase as opposed to showing that the output is actually correct. Verification
will not detect errors resulting from incorrect input specification and these errors may
propagate without detection through later stages in the development cycle. It is not
enough to only depend on verification, so validation is necessary to check for problems
with the specification and to demonstrate that the system is operational.
5.5.1 Verification Techniques

There are many different verification techniques but they all basically fall into 2
major categories - dynamic testing and static testing.

• Dynamic testing - Testing that involves the execution of a system or component.


Basically, a number of test cases are chosen where each test case consists of test
data. These input test cases are used to determine output test results. Dynamic
testing can be further divided into three categories - functional testing, structural
testing, and random testing.

• Functional testing - Testing that involves identifying and testing all the functions
of the system as defined within the requirements. This form of testing is an
example of black box testing since it involves no knowledge of the
implementation of the system.
• Structural testing - Testing that has full knowledge of the implementation of the
system and is an example of white-box testing. It uses the information from the
internal structure of a system to devise tests to check the operation of individual
components. Functional and structural testing both choose test cases that
investigate a particular characteristic of the system (a short sketch contrasting
the two appears after this list).
• Random testing - Testing that freely chooses test cases among the set of all
possible test cases. The use of randomly determined inputs can detect faults that
go undetected by other systematic testing techniques. Exhaustive testing, where
the input test cases consist of every possible set of input values, is a form of
random testing. Although exhaustive testing performed at every stage in the life
cycle results in a complete verification of the system, it is realistically impossible
to accomplish.

• Static testing - Testing that does not involve the operation of the system or
component. Some of these techniques are performed manually while others are
automated. Static testing can be further divided into 2 categories - techniques that
analyze consistency and techniques that measure some program property.

• Consistency techniques - Techniques that are used to ensure program properties
such as correct syntax, correct parameter matching between procedures, correct
typing, and correct requirements and specifications translation.
• Measurement techniques - Techniques that measure properties such as error
proneness, understandability, and well-structuredness.
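
The sketch below contrasts functional and structural test cases on a hypothetical discount rule; the rule, the values and the test cases are assumptions used only to illustrate the two viewpoints.

def discount(amount):
    return amount // 10 if amount > 1000 else 0

# Functional (black-box) tests: derived from the stated requirement alone.
assert discount(2000) == 200
assert discount(500) == 0

# Structural (white-box) tests: chosen so that both branches of the condition are
# executed, including the boundary where the behaviour changes.
assert discount(1001) == 100
assert discount(1000) == 0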

5.5.2 Validation Techniques

There are also numerous validation techniques, including formal methods, fault injection,
and dependability analysis. Validation usually takes place at the end of the development
cycle, and looks at the complete system as opposed to verification, which focuses on
smaller sub-systems.
• Formal methods - Formal methods are not only a verification technique but also a
validation technique. Formal methods mean the use of mathematical and logical
techniques to express, investigate, and analyze the specification, design,
documentation, and behavior of both hardware and software.

• Fault injection - Fault injection is the intentional activation of faults by either


hardware or software means to observe the system operation under fault
conditions.

• Hardware fault injection - Can also be called physical fault injection because we
are actually injecting faults into the physical hardware.
• Software fault injection - Errors are injected into the memory of the computer by
software techniques. Software fault injection is basically a simulation of hardware
fault injection (a small sketch appears after this list).

• Dependability analysis - Dependability analysis involves identifying hazards and
then proposing methods that reduce the risk of the hazard occurring.

• Hazard analysis - Involves using guidelines to identify hazards, their root causes,
and possible countermeasures.
• Risk analysis - Takes hazard analysis further by identifying the possible
consequences of each hazard and their probability of occurring.
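
A minimal sketch of software fault injection is given below; the component, the injected fault and the expected handling are all hypothetical.

def read_sensor(raw):
    # Component under test: rejects readings outside its specified range.
    if not 0 <= raw <= 100:
        raise ValueError("sensor reading out of range")
    return raw

def inject_fault(value):
    # Deliberately corrupt the value, simulating a damaged memory word.
    return value + 1000000

try:
    read_sensor(inject_fault(42))
except ValueError:
    print("injected fault was detected and handled")   # desired behaviour under fault conditions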

Verification and validation can be performed by the same organization performing the
design, development, and implementation, but sometimes it is performed by an
independent testing agency.

Verification and validation is a very time consuming process as it consists of planning


from the start, the development of test cases, the actual testing, and the analysis of the
testing results. It is important that there are people specifically in charge of Verification
& Validation who can work with the designers. Since exhaustive testing is not feasible for
any complex system, an issue that arises is how much testing is enough. Certainly, the
more testing the better, but at some point the cost and time of testing outweigh the advantages
gained from it. The amount of time and money spent on Verification & Validation
will certainly vary from project to project. In many organizations, testing is done until
either or both time and money run out. Whether this method is effective or not, it is a
technique used by many companies.

The software verification process is pervasive throughout the development lifecycle. It is
usually a combination of reviews, analyses, and testing. Reviews and analyses are
performed on the following components.

• Requirements analyses - To detect and report requirements errors that may have
surfaced during the software requirements and design process.
• Software architecture - To detect and report errors that occurred during the
development of the software architecture.
• Source code - To detect and report errors that developed during source coding.
• Outputs of the integration process - To ensure that the results of the integration
process are complete and correct.
• Test cases and their procedures and results - To ensure that the testing is
performed accurately and completely.

The two main objectives of the software testing process are to demonstrate that the software
satisfies all its requirements and to demonstrate that errors leading to unacceptable failure
conditions have been removed. The testing process includes the following types of testing.

Types of Testing

The main software testing types are:

• Component.
• Acceptance.
• Interface System.
• Release

Component Testing: Starting from the bottom the first test level is "Component
Testing", sometimes called Unit Testing. It involves checking that each feature specified
in the "Component Design" has been implemented in the component.

In theory an independent tester should do this, but in practice the developer usually does
it, as they are the only people who understand how a component works. The problem
with a component is that it performs only a small part of the functionality of a system,
and it relies on co-operating with other parts of the system, which may not have been
built yet. To overcome this, the developer either builds, or uses, special software to trick
the component into believing it is working in a fully functional system.
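
A minimal sketch of a component test using a stub is shown below; the pay calculation, the stub and the figures are hypothetical.

def gross_pay(employee_id, hours, rate_lookup):
    # Component under test: gross pay is hours worked times the hourly rate.
    return hours * rate_lookup(employee_id)

def stub_rate_lookup(employee_id):
    # Stub standing in for the personnel module, which may not have been built yet.
    return 12.5

assert gross_pay("E042", 40, stub_rate_lookup) == 500.0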

Software integration testing - To verify the interrelationships between the software


requirements and components and to verify the implementation of the requirements and
components in the software architecture.

Interface Testing: As the components are constructed and tested they are then linked
together to check if they work with each other. It is a fact that two components that have
passed all their tests, when connected to each other produce one new component full of
faults. These tests can be done by specialists, or by the developers.

Interface Testing is not focused on what the components are doing but on how they
communicate with each other, as specified in the "System Design". The "System Design"
defines relationships between components, and this involves stating:

• What a component can expect from another component in terms of services.
• How these services will be asked for.
• How they will be given.
• How to handle non-standard conditions, i.e. errors.

Tests are constructed to deal with each of these. The tests are organized to check all the
interfaces, until all the components have been built and interfaced to each other, producing
the whole system.

System Testing:

Once the entire system has been built then it has to be tested against the "System
Specification" to check if it delivers the features required. It is still developer focused,
although specialist developers known as systems testers are normally employed to do it.

In essence System Testing is not about checking the individual parts of the design, but
about checking the system as a whole. In effect it is one giant component.

System testing can involve a number of specialist types of test to see if all the functional
and non-functional requirements have been met. In addition to functional requirements
these may include the following types of testing for the non-functional requirements:

• Performance - Are the performance criteria met?


• Volume - Can large volumes of information be handled?
• Stress - Can peak volumes of information be handled?
• Documentation - Is the documentation usable for the system?
• Robustness - Does the system remain stable under adverse circumstances?

• Load testing. Load testing is a generic term covering Performance Testing and
Stress Testing.

• Performance testing. Performance testing can be applied to understand your
application's scalability, or to benchmark the performance in an environment of
third party products such as servers and middleware for potential purchase. This
sort of testing is particularly useful to identify performance bottlenecks in high
use applications. Performance testing generally involves an automated test suite
as this allows easy simulation of a variety of normal, peak, and exceptional load
conditions.
• Stress testing: Testing conducted to evaluate a system or component at or beyond
the limits of its specified requirements to determine the load under which it fails
and how. A graceful degradation under load leading to non-catastrophic failure is
the desired result. Often Stress Testing is performed using the same process as
Performance Testing but employing a very high level of simulated load.
• Functional testing: Validating an application conforms to its specifications and
correctly performs all its required functions. This entails a series of tests that
perform a feature-by-feature validation of behavior, using a wide range of normal
and erroneous input data. This can involve testing of the product's user interface,
APIs, database management, security, installation, networking, etc.
Acceptance Testing: Acceptance Testing checks the system against the
"Requirements". It is similar to systems testing in that the whole system is checked but
the important difference is the change in focus:

• Systems Testing checks that the system that was specified has been delivered
• Acceptance Testing checks that the system delivers what was requested.

The customer, and not the developer, should always do acceptance testing. The customer
knows what is required from the system to achieve value in the business and is the only
person qualified to make that judgment.

The forms of the tests may follow those in system testing, but at all times they are
informed by the business needs.

Release Testing: Even if a system meets all its requirements, there is still a case to
be answered that it will benefit the business. The linking of "Business Case" to Release
Testing is looser than the others, but is still important.

Release Testing is about seeing if the new or changed system will work in the existing
business environment. Mainly this means the technical environment, and checks concerns
such as:

• Does it affect any other systems running on the hardware?


• Is it compatible with other systems?
• Does it have acceptable performance under load?

These tests are usually run by the computer operations team in a business. The
answers to their questions could have a significant financial impact if new computer
hardware were required, adversely affecting the "Business Case".

It would appear obvious that the operations team should be involved right from the start
of a project to give their opinion of the impact a new system may have. They could then
make sure the "Business Case" is relatively sound, at least from the capital expenditure,
and ongoing running costs aspects. However in practice many operations teams only find
out about a project just weeks before it is supposed to go live, which can result in major
problems.

Regression testing: Similar in scope to a functional test, a regression test allows a
consistent, repeatable validation of each new release of a product. Such testing ensures
that reported product defects have been corrected for each new release and that no new
quality problems were introduced in the maintenance process. Though regression testing
can be performed manually, an automated test suite is often used to reduce the time and
resources needed to perform the required testing. There are many other types, such as:
• Hardware/software integration testing - To verify that the software is operating
correctly in the computer environment.
• Low-level testing - To verify the implementation of software low-level
requirements.

The standard includes a plan that outlines the information necessary for the certification
of the software and the software verification plan. A section also details tool
qualification. This is necessary when processes in the standard are eliminated, reduced, or
automated by using a software tool without following the software verification process.

Available tools, techniques, and metrics

There are an abundance of verification and validation tools and techniques. It is important
that in selecting verification and validation tools, all stages of the development cycle are
covered. For example, Table lists the techniques used from the requirements analysis
stage through the validation stage. Sources such as the Software Engineer's Reference
Book (McDermid, 1992), Standard for Software Component Testing (British Computer
Society, 1995), and standards such as DO-178B and IEC 1508 are useful in selecting
appropriate tools and techniques.

Table: Use of testing methods throughout the development life cycle

Requirements analysis and functional specification
Static: Walkthroughs, Design reviews, Checklists
Dynamic: -

Top-level design
Static: Walkthroughs, Design reviews, Checklists, Formal proofs, Fagan inspection
Dynamic: -

Detailed design
Static: Walkthroughs, Design reviews, Control flow analysis, Data flow analysis,
Symbolic execution, Checklists, Fagan inspection, Metrics
Dynamic: -

Implementation
Static: Static analysis
Dynamic: Functional testing, Boundary value analysis, Structure-based testing,
Probabilistic testing, Error guessing, Process simulation, Error seeding

Integration testing
Static: Walkthroughs, Design reviews, Sneak circuit analysis
Dynamic: Functional testing, Time and memory tests, Boundary value analysis,
Performance testing, Stress testing, Probabilistic testing, Error guessing

Validation
Static: -
Dynamic: Functional testing

There are many organizations and companies that perform independent verification and
validation. SEEC COBOL Analyst 2000 and SEEC Smart Change 2000 are used for
verification and the SEEC COBOL Slicer and SEEC/TestDirector are used for validation.

5.6 SOFTWARE QUALITY ASSURANCE

Concepts and Definitions:

Software Quality Assurance (SQA) is defined as a planned and systematic approach to


the evaluation of the quality of and adherence to software product standards, processes,
and procedures. SQA includes the process of assuring that standards and procedures are
established and are followed throughout the software acquisition life cycle. Compliance
with agreed-upon standards and procedures is evaluated through process monitoring,
product evaluation, and audits. Software development and control processes should
include quality assurance approval points, where an SQA evaluation of the product may
be done in relation to the applicable standards.

Standards and Procedures:


Establishing standards and procedures for software development is critical, since these
provide the framework from which the software evolves. Standards are the established
criteria to which the software products are compared. Procedures are the established
criteria to which the development and control processes are compared. Standards and
procedures establish the prescribed methods for developing software. The SQA role is to
ensure their existence and adequacy. Proper documentation of standards and procedures
is necessary since the SQA activities of process monitoring, product evaluation and
auditing rely upon unequivocal definitions to measure project compliance.

Types of standards include:

Documentation Standards specify form and content for planning, control, and product
documentation and provide consistency throughout a project. Design Standards specify
the form and content of the design product. They provide rules and methods for
translating the software requirements into the software design and for representing it in
the design documentation.

Code Standards specify the language in which the code is to be written and define any
restrictions on use of language features. They define legal language structures, style
conventions, rules for data structures and interfaces, and internal code documentation.

Procedures are explicit steps to be followed in carrying out a process. All processes
should have documented procedures. Examples of processes for which procedures are
needed are configuration management, nonconformance reporting and corrective action,
testing, and formal inspections.

The Management Plan describes the software development control processes, such as
configuration management, for which there have to be procedures, and contains a list of
the product standards. Standards are to be documented according to the Standards and
Guidelines in the Product Specification. The planning activities required to assure that
both products and processes comply with designated standards and procedures are
described in the QA portion of the Management Plan.

Software Quality Assurance Activities:

Product evaluation and process monitoring are the SQA activities that assure the
software development and control processes described in the project's Management Plan
are correctly carried out and that the project's procedures and standards are followed.
Products are monitored for conformance to standards and processes are monitored for
conformance to procedures. Audits are a key technique used to perform product
evaluation and process monitoring. Review of the Management Plan should ensure that
appropriate SQA approval points are built into these processes. Product evaluation is an
SQA activity that assures standards are being followed. Ideally, the first products
monitored by SQA should be the project's standards and procedures. SQA assures that
clear and achievable standards exist and then evaluates compliance of the software
product to the established standards. Product evaluation assures that the software product
reflects the requirements of the applicable standard(s) as identified in the Management
Plan.

Process monitoring is an SQA activity that ensures that appropriate steps to carry out the
process are being followed. SQA monitors processes by comparing the actual steps
carried out with those in the documented procedures. The Assurance section of the
Management Plan specifies the methods to be used by the SQA process monitoring
activity.

A fundamental SQA technique is the audit, which looks at a process and/or a product in
depth, comparing them to established procedures and standards. Audits are used to
review management, technical, and assurance processes to provide an indication of the
quality and status of the software product.

The purpose of an SQA audit is to assure that proper control procedures are being
followed, that required documentation is maintained, and that the developer's status
reports accurately reflect the status of the activity. The SQA product is an audit report to
management consisting of findings and recommendations to bring the development into
conformance with standards and/or procedures.

SQA Relationships to Other Assurance Activities:

Some of the more important relationships of SQA to other management and


assurance activities are described below.

1. Configuration Management Monitoring

SQA assures that software Configuration Management (CM) activities are performed in
accordance with the CM plans, standards, and procedures. SQA reviews the CM plans for
compliance with software CM policies and requirements and provides follow-up for
nonconformance. SQA audits the CM functions for adherence to standards and
procedures and prepares reports of its findings. The CM activities monitored and audited
by SQA include baseline control, configuration identification, configuration control,
configuration status accounting, and configuration authentication. SQA also monitors
and audits the software library. SQA assures that baselines are established and
consistently maintained for use in subsequent baseline development and control.
Software configuration identification is consistent and accurate with respect to the
numbering or naming of computer programs, software modules, software units, and
associated software documents.

Configuration control is maintained such that the software configuration used in


critical phases of testing, acceptance, and delivery is compatible with the associated
documentation. Configuration status accounting is performed accurately including the
recording and reporting of data reflecting the software's configuration identification,
proposed changes to the configuration identification, and the implementation status of
approved changes. Software configuration authentication is established by a series of
configuration reviews and audits that exhibit the performance required by the software
requirements specification and the configuration of the software is accurately reflected in
the software design documents.

Software development libraries provide for proper handling of software code,


documentation, media, and related data in their various forms and versions from the
time of their initial approval or acceptance until they have been incorporated into the final
media. Approved changes to baselined software are made properly and consistently in all
products and no unauthorized changes are made.

2. Verification and Validation Monitoring:

SQA assures Verification and Validation (V&V) activities by monitoring technical


reviews, inspections, and walkthroughs. The SQA role in formal testing is described in
the next section. The SQA role in reviews, inspections and walkthroughs is to observe,
participate as needed and verify that they were properly conducted and documented. SQA
also ensures that any actions required are assigned, documented, scheduled, and updated.

Formal software reviews should be conducted at the end of each phase of the life
cycle to identify problems and determine whether the interim product meets all applicable
requirements. Examples of formal reviews are the Preliminary Design Review (PDR),
Critical Design Review (CDR), and Test Readiness Review (TRR). A review looks at
the overall picture of the product being developed to see if it satisfies its requirements.
Reviews are part of the development process, designed to provide a ready/not-ready
decision to begin the next phase. In formal reviews, actual work done is compared with
established standards. SQA's main objective in reviews is to assure that the Management
and Development Plans have been followed and that the product is ready to proceed with
the next phase of development. Although the decision to proceed is a management
decision, SQA is responsible for advising management and participating in the decision.

An inspection or walkthrough is a detailed examination of a product on a step-by-


step or line-of-code by line-of-code basis to find errors. For inspections and
walkthroughs, SQA assures, at a minimum that the process is properly completed and
that needed follow-up is done. The inspection process may be used to measure
compliance to standards.

3. Formal Test Monitoring

SQA assures that formal software testing such as acceptance testing is done in
accordance with plans and procedures. SQA reviews testing documentation for
completeness and adherence to standards. The documentation review includes test plans,
test specifications, test procedures, and test reports. SQA monitors testing and provides
follow-up on non conformances. By test monitoring, SQA assures software
completeness and readiness for delivery.
The objectives of SQA in monitoring formal software testing are to assure that:

• The test procedures are testing the software requirements in accordance with test
plans.
• The test procedures are verifiable.
• The correct or "advertised" version of the software is being tested (by SQA
monitoring of the CM activity).
• The test procedures are followed.
• Non conformances occurring during testing (that is, any incident not expected in
the test procedures) are noted and recorded.
• Test reports are accurate and complete.
• Regression testing is conducted to assure non conformances have been corrected.
• Resolution of all non conformances takes place prior to delivery.

Software testing verifies that the software meets its requirements. The quality of testing
is assured by verifying that project requirements are satisfied and that the testing process
is in accordance with the test plans and procedures.

Software Quality Assurance during the Software Acquisition Life Cycle (Waterfall
model)

Let us review a few life cycle models and the various kinds of activities carried out in each phase.

• A conceptualization of the steps necessary to develop an IS


• A plan used by management to help control the development of an IS

In addition to the general activities described there are phase- specific SQA activities that
should be conducted during the Software Acquisition Life Cycle. At the conclusion of
each phase, SQA concurrence is a key element in the management decision to initiate the
following life cycle phase. Suggested activities for each phase are described below.

1. Software Concepts and Initiation Phase

SQA should be involved in both writing and reviewing the Management Plan in order to
assure that the processes, procedures, and standards identified in the plan are appropriate,
clear, specific, and auditable. During this phase, SQA also provides the QA section of the
Management Plan.

2. Software Requirements Phase

In this phase the following activities are held

Determine goals, inputs, outputs, storage, logic, critical response time, inflexible I/O,
conversion requirements, security and backup, storage amounts, inflexible
hardware/software, performance targets, cost/benefit analysis revision, audit
requirements, plan and budget for future.

During the software requirements phase, SQA assures that software requirements are
complete, testable, and properly expressed as functional, performance, and interface
requirements.

3. Software Architectural (Preliminary) Design Phase

SQA activities during the architectural (preliminary) design phase include:

• Assuring adherence to approved design standards as designated in the


Management Plan.
• Assuring all software requirements are allocated to software components,
hardware and software purchases.
• Assuring that a testing verification matrix exists and is kept up to date.
• Assuring the Interface Control Documents are in agreement with the standard in
form and content.
• Reviewing PDR documentation and assuring that all action items are resolved.
• Assuring the approved design is placed under configuration management.

4. Software Detailed Design Phase

SQA activities during the detailed design phase include:

• Assuring that approved design standards are followed.


• Assuring that allocated modules are included in the detailed design.
• Assuring that results of design inspections are included in the design.
• Reviewing CDR documentation and assuring that all action items are resolved.
• Module specifications, input/output specs., storage specs., implementation plan,
documentation specs.

5. Software Implementation Phase

SQA activities during the implementation phase include the audit of:

• Results of coding and design activities including the schedule contained in the
Software Development Plan.
• Status of all deliverable items.
• Configuration management activities and the software development library.
• Nonconformance reporting and corrective action system.

6. Software Integration and Test Phase

SQA activities during the integration and test phase include:


• Assuring readiness for testing of all deliverable items.
• Assuring that all tests are run according to test plans and procedures and that any
nonconformances are reported and resolved.
• Assuring that test reports are complete and correct.
• Certifying that testing is complete and software and documentation are ready for
delivery.
• Participating in the Test Readiness Review and assuring all action items are
completed.

7. Software Acceptance and Delivery Phase

As a minimum, SQA activities during the software acceptance and delivery phase include
assuring the performance of a final configuration audit to demonstrate that all deliverable
items are ready for delivery.

8. Software Sustaining Engineering and Operations Phase

During this phase, there will be mini-development cycles to enhance or correct the
software. During these development cycles, SQA conducts the appropriate phase-specific
activities described above.

Techniques and Tools

SQA should evaluate off-the-shelf assurance tools for applicability to the specific project
and develop any others it requires. Useful tools might include audit and inspection
checklists and automatic code standards analyzers.

Choosing the right life cycle model also plays a vital role in delivering a quality product. Let
us review the pros and cons of some familiar methodologies.

Problems with waterfall life cycle

• Does not work well with all individual projects.


• Lacks a strategic planning phase

Problems with the waterfall life cycle from an individual project's perspective:

• Takes too long (needs may change, high opportunity cost)


• Difficult to get user participation
• Difficult to determine the user's needs when nothing in the current environment
resembles the new system (e.g., expert systems, DSS)
• Empirical evidence suggests high maintenance cost when developed with this
approach (this should not be true if done right)
Prototyping

Building and modifying a model system in response to user feedback until the user likes
the system

Advantages

• User sees a real system fast (1 or 2 days)


• User directly involved in specifying requirements.
• Designed for change, and therefore possibly lower maintenance cost
• Serves as a basis for discussion and helps identify requirements when there is no
current system like desired system.
• In some cases development cost is less

Disadvantages

• Requires high upfront costs (software for database, modeling, report generation,
screen generation)
• Difficult to use when building large systems.
• Sometimes difficult to maintain user enthusiasm
• User never satisfied
• Tendency not to document

SOFTWARE DESIGN QUALITY:

Coupling:

Coupling is the degree of interaction between two modules. In the context of application
integration, coupling is the binding of applications together in such a way that they are
dependent on each other, sharing the same methods, interfaces, and perhaps data. This is
the core notion of service-oriented application integration, where the applications are
bound by shared services rather than by the simple exchange of information.

At first glance, coupling may seem like the perfect idea. However, you should not lose
sight of what it really requires - the tight binding of one application domain to the next.
As a consequence of this requirement, all coupled source and target systems will have to
be extensively changed to couple them (in most cases).
Further, as events and circumstances evolve over time, any change to any source or target
system demands a corresponding change to the coupled systems as well. Coupling creates
one system out of many, with each tightly dependent upon the other. Service-oriented
application integration clearly leverages coupling in how applications are bound together.
Of course, the degree of coupling that occurs is really dependent on the architect, and
how he or she binds source and target systems together. In some instances systems are
tightly coupled, meaning they are dependent on each other. In other instances, they are
loosely coupled, meaning that they are more independent. There are, of course, more pros
and cons of coupling that should be considered in the context of the problem you are looking
to solve.

The five levels of coupling, from worst to best, are listed below; a short code sketch contrasting two of them follows the list.

1. Content (worst)

2. Common

3. Control

4. Stamp

5. Data (best)
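As a rough illustration (a minimal sketch, not drawn from the list above), the fragment below contrasts control coupling, where the caller passes a flag that steers the callee's internal logic, with data coupling, where each routine receives only the simple data it needs. The function names and data are invented.

# Control coupling: the caller passes a flag that dictates the callee's internal logic.
def print_report(data, mode_flag):
    if mode_flag == "summary":
        print(len(data), "records")
    else:
        for row in data:
            print(row)

# Data coupling: each routine receives only the data it needs, so either side can
# change internally without breaking the other.
def print_summary(data):
    print(len(data), "records")

def print_detail(data):
    for row in data:
        print(row)

records = ["r1", "r2", "r3"]
print_report(records, "summary")   # control-coupled call
print_summary(records)             # data-coupled call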
On the pros side you have:
• The ability to bind systems by sharing behavior and bound data, rather than simply
sharing information. This gives the integration solution the ability to share services
that would otherwise be duplicated across the integrated systems, thus reducing
development costs.
• The ability to tightly couple processes as well as shared behavior. This means that
process integration engines, layered on top of service-oriented integration solutions,
are better able to bind actual behavior (functions) rather than just moving
information from place to place.
The downsides include:
• The need, in many cases, to change source and target systems in order to couple
services. This adds cost because development and testing time is involved; it is no
longer a matter of leveraging an interface that is abstracted from the core system.
• The fact that coupled systems could cease to function if one or more of them goes
down. A single system failure could bring down all of the coupled systems, creating
a vulnerability.
Cohesion

Cohesion is the degree of interaction within a module. In contrast to coupling, cohesion is
the "act or state of sticking together" or "the logical agreement." Cohesively integrated
source and target systems are independent from one another. Changes to any source or
target system should not affect the others directly. In this scenario, information can be
shared between systems without worrying about changes to the applications or
databases, leveraging some type of loosely coupled middleware layer to move
information between applications and make adjustments for differences in application
semantics.

There are seven levels of cohesion, from worst to best; a short code sketch contrasting the two extremes appears at the end of this discussion of cohesion.

1. Coincidental (worst)

2. Logical

3. Temporal

4. Procedural

5. Communicational

6. Sequential

7. Functional / Informational (best)

The advantages of using cohesion include:


• The ability to avoid changing source and target systems just to facilitate
integration. You no longer have to make changes to the systems because the
points of integration are less invasive.
• The fact that a single system failure won’t bring down all connected systems.
Since the systems are not dependent, a failure typically won’t affect the integrated
systems (at least immediately).
The disadvantages include:
• The inability to provide visibility into the services layer, and thus to gain value from
encapsulated business services and tactical functions. This is simple information
movement; there is usually no notion of service access, so remote applications can
see only information, not behavior, and there is no reuse of services.

• You don't want a change in one module to cause errors to ripple throughout your
system. Encapsulate information, make modules highly cohesive, seek low
coupling among modules.
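The sketch below illustrates the two extremes of the cohesion scale with invented class and method names: a "utilities" class with coincidental cohesion, where unrelated responsibilities are grouped only for convenience, against a class with functional cohesion, where every element contributes to one well-defined task.

# Coincidental cohesion (worst): unrelated responsibilities lumped into one module
# only because they had nowhere else to go.
class Utilities:
    def parse_date(self, text):
        return text.split("-")

    def send_email(self, address, body):
        print("sending to", address)

    def compute_tax(self, amount):
        return amount * 0.18

# Functional cohesion (best): every element of the module contributes to a single,
# well-defined task -- here, tax calculation.
class TaxCalculator:
    def __init__(self, rate):
        self.rate = rate

    def compute_tax(self, amount):
        return amount * self.rate

    def total_with_tax(self, amount):
        return amount + self.compute_tax(amount)

calc = TaxCalculator(0.18)
print(calc.total_with_tax(100.0))   # 118.0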

Characteristics of Perfect Design:

A module can be defined as a lexically contiguous sequence of program statements,
bounded by boundary elements, with an aggregate identifier.

• Examples
o Procedures & functions in classical Procedural Languages
o Objects & methods within objects in Object Oriented Programming
Languages
• Good vs. Bad Modules
o Modules in themselves are not “good”
o Must design them to have good properties
Consider two functionally equivalent designs of a CPU, each built from three chips (modules):

i) In the first design, each chip holds one logical unit: Chip 1 contains the registers, Chip 2 the ALU, and Chip 3 the shifter.

ii) In the second design, the same logic is spread across the chips by gate type (AND gates, OR gates, NOT gates), so each chip contains fragments of the registers, the ALU and the shifter.
Analyzing the two designs:

• They are functionally equivalent, but the second design is:
o Hard to understand
o Hard to locate faults in
o Difficult to extend or enhance
o Unable to be reused in another product
o As a result, expensive to maintain
• Good modules must be like the first design:
o maximal relationships within modules (cohesion)
o minimal relationships between modules (coupling)
o this is the main contribution of structured design

SOFTWARE REVIEWS, INSPECTIONS AND WALKTHROUGHS

A quarter-century ago, Michael Fagan of IBM developed the software inspection


technique, a method for finding defects through manual examination of software work
products by a group of the author's peers. Many organizations have achieved dramatic
results from inspections, including IBM, Raytheon, Motorola, Hewlett Packard, and Bull
HN. However, other organizations have had difficulty getting any kind of software
review process going. Considering that effective technical reviews are one of the most
powerful software quality practices available, all software groups should become skilled
in their application.

Software Inspection. "A formal evaluation technique in which software requirements,
design, or code are examined in detail by a person or group other than the author to detect
faults, violations of development standards, and other problems" [IEEE94]. It is a quality
improvement process for written material that consists of two dominant components:
product (document) improvement and process improvement (improvement of document
production and inspection).

Software inspection is the most formal, commonly used form of peer review. The key
features of an inspection are the use of checklists to facilitate error detection and defined
roles for participants.

The focus of a software inspection is on identifying problems, not on resolving them.
Suggested software inspection participant roles:
o Moderator - responsible for leading the inspection.
o Reader - leads the inspection team through the logic of the work product.
o Recorder - documents the problems found during the inspection (a minimal sketch
of such a defect log follows this list).
o Reviewers - identify and describe possible problems and defects.
o Author - contributes his or her understanding of the work product.
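As a hedged illustration of the recorder's log, the sketch below keeps findings as a simple structured list. The checklist items, locations, and severity labels are all invented for this example.

from dataclasses import dataclass

# Minimal sketch of an inspection defect log kept by the recorder.
@dataclass
class InspectionFinding:
    item: str          # checklist item that triggered the finding
    location: str      # file and line being inspected
    description: str
    severity: str      # e.g. "major" or "minor"

findings = [
    InspectionFinding("uninitialized variable", "billing.py:42",
                      "total used before assignment", "major"),
    InspectionFinding("naming standard", "billing.py:10",
                      "function name not in lower_case style", "minor"),
]

majors = [f for f in findings if f.severity == "major"]
print(len(findings), "findings recorded,", len(majors), "major")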

Software Walkthrough. In the most usual sense of the term, a walkthrough is a step-by-step
simulation of the execution of a procedure, as when walking through code line by line with
an imagined set of inputs (a small example follows). The term has been extended to the
review of material that is not procedural, such as data descriptions, reference manuals,
specifications, etc.
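The fragment below is a hypothetical illustration of such a hand simulation: a small function is traced on paper for the imagined input n = 3, and the trace is recorded alongside the code.

# A walkthrough "executes" the code on paper. For the imagined input n = 3,
# the team traces each line and records the state:
#
#   call factorial(3):  result = 1
#   i = 2  ->  result = 2
#   i = 3  ->  result = 6
#   return 6
#
def factorial(n):
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

print(factorial(3))   # 6, matching the hand trace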

THE RULES OF A WALKTHROUGH


The rules governing a walkthrough are:
o Provide adequate time
o Use multiple sessions when necessary
o Prepare a set of test cases
o Provide a copy of the program being tested to each team member
o Provide other support materials

Other support materials may be required to conduct a walkthrough efficiently. These
include:

o a list of questions prepared by each team member after reading the program or
unit under review
o flow charts
o data dictionaries, list of variables, classes etc.
The code walkthrough, like the inspection, is a set of procedures and error-detection
techniques for group code reading. It shares much in common with the inspection
process, but the procedures are slightly different, and a different error-detection technique
is employed.
Role of participants:
Like the inspection, the walkthrough is an uninterrupted meeting of one to two hours in
duration. The walkthrough team consists of three to five people. One of these people
plays a role similar to that of the moderator in the inspection process, another person
plays the role of a secretary (a person who records all errors found), and a third person
plays the role of a "tester”. Suggestions as to who the three to five people should be vary.
Of course, the programmer is one among these people. Suggestions for the other
participants include (1) a highly experienced programmer, (2) a programming-language
expert, (3) a new programmer (to give a fresh, unbiased outlook), (4) the person who will
eventually maintain the program, (5) someone from a different project, and (6) someone
from the same programming team as the programmer.

Comparison of Software Walkthroughs versus Software Inspections


o Participants: walkthroughs involve peer(s) led by the author; inspections involve peers
in designated roles.
o Rigor: walkthroughs range from informal to formal; inspections are formal.
o Training required: walkthroughs need none, informal, or structured training;
inspections require structured training, preferably by teams.
o Purpose: walkthroughs judge quality, find defects, and provide training; inspections
measure and improve the quality of both the product and the process.
o Effectiveness: walkthroughs are low to medium; inspections range from low to very
high, depending on training and commitment.
CAPABILITY MATURITY MODEL

CMM (Capability Maturity Model) is a model of process maturity for software
development - an evolutionary model of the progress of a company's abilities to develop
software.

According to the Carnegie Mellon University Software Engineering Institute,


"CMM is a common-sense application of software or business process management and
quality improvement concepts to software development and maintenance". It is a
community-developed guide for evolving towards a culture of engineering excellence and a
model for organizational improvement, and it provides the underlying structure for reliable
and consistent software process assessments and software capability evaluations.
In November 1986, the American Software Engineering Institute (SEI) in
cooperation with Mitre Corporation created the Capability Maturity Model for Software.
Development of this model was necessary so that the U.S. federal government could
objectively evaluate software providers and their abilities to manage large projects.
Many companies had been completing their projects with significant overruns in schedule
and budget. The development and application of CMM helps to solve this problem.

The key concept of the standard is organizational maturity. A mature organization


has clearly defined procedures for software development and project management.
These procedures are adjusted and perfected as required. In any software development
company there are standards for processes of development, testing, and software
application; and rules for appearance of final program code, components, interfaces, etc.

The goals it seeks to achieve are as follows:

o Continuous process improvement is planned.


o Participation in the organization's software process improvement activities is
organization wide.

The CMM model defines five levels of organizational maturity:


1. Initial level is a basis for comparison with the next levels. The software process is
characterized as ad hoc, and occasionally even chaotic. Few processes are
defined, and success depends on individual effort and heroics. In an organization
at the initial level, conditions are not stable for the development of quality
software. The results of any project depend totally on the manager’s personal
approach and the programmers’ experience, meaning the success of a particular
project can be repeated only if the same managers and programmers are assigned
to the next project. In addition, if managers or programmers leave the company,
the quality of produced software will sharply decrease. In many cases, the
development process comes down to writing code with minimal testing.

2. Repeatable level. Basic project management processes are established to track cost,
schedule, and functionality. At this level, project management technologies have been
introduced in a company. The necessary process discipline is in place to repeat earlier
successes on projects with similar applications. Project planning and management are
based on accumulated experience, there are documented standards for the software
produced, and there is a special quality management group. At critical times, the
process tends to roll back to the initial level.

3. Defined level. Here, standards for the processes of software development and
maintenance are introduced and documented (including project management). The
software process for both management and engineering activities is documented,
standardized, and integrated into a standard software process for the organization. During
the introduction of standards, a transition to more effective technologies occurs. There is
a special quality management department for building and maintaining these standards.
All projects use an approved, tailored version of the organization's standard software
process for developing and maintaining software. A program of constant, advanced
training of staff is required for achievement of this level. Starting with this level, the
degree of organizational dependence on the qualities of particular developers decreases
and the process does not tend to roll back to the previous level in critical situations. These
defined standards give the organization a commitment to perform because the
organization follows a written policy for implementing software process improvements
and senior management sponsors the organization's activities for software process
improvement.

4. Managed level. There are quantitative indices (for both the software and the process as
a whole) established in the organization. Detailed measures of the software process
and product quality are collected. Better project management is achieved thanks to
the reduced dispersion of the various project indices, and meaningful variations in
process efficiency can be distinguished from random variation (noise), especially in
well-mastered areas. Both the software process and products are quantitatively
understood and controlled (see the sketch after this list).

5. Optimizing level. Improvement procedures are carried out not only for existing
processes, but also for evaluation of the efficiency of newly introduced innovative
technologies. The main goal of an organization on this level is permanent
improvement of existing processes. Continuous process improvement is enabled
by quantitative feedback from the process and from piloting innovative ideas and
technologies. This should anticipate possible errors and defects and decrease the
costs of software development, by creating reusable components for example.
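To make the Managed level concrete, the sketch below shows one simple way an organization might treat collected measurements quantitatively: defect densities from past projects (all numbers invented) define a statistical band, and a new project's value is flagged only when it falls outside that band, separating meaningful variation from noise. This is an illustrative assumption, not a procedure prescribed by the CMM itself.

import statistics

# Defect densities (defects per KLOC) collected from recent projects; invented numbers.
defect_density = [4.2, 3.8, 4.5, 4.0, 3.9, 4.3]

mean = statistics.mean(defect_density)
sd = statistics.stdev(defect_density)

# A simple 3-sigma band: values inside it are treated as normal variation (noise);
# values outside it signal a meaningful change in the process worth investigating.
upper, lower = mean + 3 * sd, mean - 3 * sd

new_project = 6.1
if lower <= new_project <= upper:
    print("within normal variation")
else:
    print("outside the band [%.2f, %.2f] - investigate" % (lower, upper))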

A process area (PA) contains the goals that must be reached in order to improve a
software process. A PA is said to be satisfied when procedures are in place to reach
the corresponding goals. A software organization has achieved a specific maturity
level once all the corresponding PAs are satisfied. The process areas (PA's) have the
following features:

1) Identify a cluster of related activities that, when performed collectively, achieve a


set of goals considered important for enhancing process capability.

2) Defined to reside at a single maturity level.

3) Identify the issues that must be addressed to achieve a maturity level.

Each maturity level has its own pre-defined process areas. For instance, SEI CMM Level 5
has three PAs defined:

Defect prevention: To identify the causes of defects and prevent them from recurring.

Technology change management: To identify beneficial new technologies and transfer them
into the organization in an orderly manner.

Process change management: To continually improve the software processes used in the
organization so as to improve quality, increase productivity, and decrease development time.

The Process Area Activities performed include:

o A software process improvement program is established which empowers the


members of the organization to improve the processes of the organization.
o The group responsible for the organization's software process activities
coordinates the software process improvement activities.
o The organization develops and maintains a plan for software process
improvement according to a documented procedure.
o The software process improvement activities are performed in accordance with
the software process improvement plan.
o Software process improvement proposals are handled according to a documented
procedure.
o Members of the organization actively participate in teams to develop software
process improvements for assigned process areas.
o Where appropriate, the software process improvements are installed on a pilot
basis to determine their benefits and effectiveness before they are introduced into
normal practice.
o When the decision is made to transfer a software process improvement into
normal practice, the improvement is implemented according to a documented
procedure.
o Records of software process improvement activities are maintained.
o Software managers and technical staff receive feedback on the status and results
of the software process improvement activities on an event-driven basis.

The Software Engineering Institute (SEI) constantly analyzes the results of CMM usage
by different companies and refines the model, taking accumulated experience into
account. The CMM establishes a yardstick against which it is possible to judge, in a
repeatable way, the maturity of an organization's software process and compare it to the
state of the practice of the industry. The CMM can also be used by an organization to
plan improvements to its software process. It also reflects the needs of individuals
performing software process improvement and software process assessments. Software
capability evaluation is documented and publicly available. At the Optimizing Level, or
CMM Level 5, the entire organization is focused on continuous process improvement.
Quantitative feedback from previous projects, and from pilot projects, is used to improve
project management using the skills established at Level 4.

Software project teams in SEI CMM Level 5 organizations analyze defects to determine
their causes. Software processes are evaluated to prevent known types of defects from
recurring, and lessons learned are disseminated to other projects. The software process
capability of Level 5 organizations can be characterized as continuously improving
because Level 5 organizations are continuously striving to improve the range of their
process capability, thereby improving the process performance of their projects.
Improvement occurs both by incremental advancements in the existing process and by
innovations using new technologies and methods.

The Capability Maturity Model has been criticized in that it does not
describe how to create an effective software development organization. The traits it
measures are in practice very hard to develop in an organization, even though they are
very easy to recognize. However, it cannot be denied that the Capability Maturity Model
reliably assesses an organization's sophistication about software development.

Summary

Auditing is a way of ensuring the quality of the information contained in the system.
Auditing refers to having an expert who is not involved in setting up or using a system
examine the information in order to ascertain its reliability. In each phase of the
development cycle we apply quality practices that add value to the product. In software
design specifically, modularity is one of the fundamental principles of good design. A
module having high cohesion and low coupling is said to be functionally independent of
other modules. A software metric is any type of measurement that relates to a software
system, process, or related documentation. The specific metrics that are relevant depend
on the project, the goals of the quality management team, and the type of software that is
being developed. We have also discussed the quality assurance of software products and
related documents. Finally, the various levels of the Capability Maturity Model have been
discussed.

Review questions:

1. Discuss the dimensions of software quality.


2. What do you mean by CMM?
3. Explain the various activities associated in each phase of the development cycle.
4. What is a system audit? Why is it necessary?
5. What are software verification tests?
6. What are software metrics?
7. What is software quality assurance?
8. What is validation of software?
