SYSTEM ANALYSIS AND DESIGN
1. How can a data dictionary be used during software development? (May-06, 5 Marks)
Data Dictionary
a) Data dictionaries are integral components of structured analysis, since data
flow diagrams by themselves do not fully describe the subject of the
investigation.
b) A data dictionary is a catalog, a repository of the elements in the system.
c) In the data dictionary one will find a list of all the elements composing the data
flowing through a system. The major elements are
Data flows
Data stores
Processes
d) The dictionary is developed during data flow analysis and assists the analysts
involved in determining system requirements, and its contents are used during
system design as well.
e) The data dictionary contains the following descriptions of the data:
The name of the data element.
The physical source/destination name.
The type of the data element.
The size of the data element.
Usage, such as input, output, or update.
Reference(s) of the DFD process numbers where it is used.
Any special information useful for system specification, such as
validation rules.
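The descriptions above can be sketched as a record structure. The following is a minimal illustration of one data dictionary entry held as a plain Python dict; every field name and value here is invented for illustration, not taken from a real system.

```python
# One data dictionary entry, modeled as a dict. All names/values are illustrative.
entry = {
    "name": "customer_id",            # name of the data element
    "source": "Order Entry screen",   # physical source/destination
    "type": "numeric",                # numeric, textual, image, audio, ...
    "size": 8,                        # size of the data element
    "usage": "input",                 # input, output, or update
    "dfd_processes": [1.2, 3.1],      # DFD process numbers where used
    "validation": "must be a positive integer",
}

def lookup(dictionary, name):
    """Return every entry matching a name; per point g), there should be exactly one."""
    return [e for e in dictionary if e["name"] == name]

data_dictionary = [entry]
print(lookup(data_dictionary, "customer_id")[0]["type"])  # -> numeric
```

The `lookup` helper reflects point g) below: each element named in the DFD should resolve to exactly one dictionary entry.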
f) This is appropriately called a system development data dictionary, since it is
created during system development, facilitates the development
functions, is used by the developers, and is designed with the developers'
information needs in mind.
g) For every data element mentioned in the DFD there should be exactly
one unique data element in the data dictionary.
h) The type of a data element may be numeric, textual, image, audio,
etc.
i) Usage should specify whether the referenced DFD process uses the element as input
data (read only), creates it as output data (e.g., insert), or updates it.
j) A data element can be mentioned with reference to multiple DFD processes,
but if the usages differ, there should be one entry
for each usage.
k) The data dictionary serves as important basic information during the
development stages.
l) Importance of the data dictionary:
To manage the details in large systems.
To communicate a common meaning for all system elements.
To document the features of the system.
To facilitate analysis of the details in order to evaluate characteristics
and determine where system changes should be made.
5. What are CASE tools? Explain some CASE tools used for
prototyping. (May-06, 15 Marks; Nov-03; May-05; Dec-04)
Ans. Computer-aided software engineering (CASE)
Interface Generators
System interfaces are the means by which users interact
with an application, both to enter information and data
and to receive information.
Code Generators:
Variable costs are incurred on a regular basis. They are usually proportional
to work volume and continue as long as the system is in operation.
For example: the cost of computer forms varies in proportion to the amount of
processing or the length of the reports required.
Variable benefits are realized on a regular basis.
For example: consider a safe deposit tracking system that saves 20 minutes
in preparing customer records compared with the manual system.
The following are the methods of performing cost and benefit analysis:
Net benefit analysis.
Present value analysis.
Payback analysis.
Break-even analysis.
Cash flow analysis.
Return on investment analysis.
rates, inflation and other factors that alter the value of the investment. Present value
analysis controls for these problems by calculating the costs and benefits of the
system in terms of today's value of the investment and then comparing across
alternatives.

Present value = Future value / (1 + i)^n

Net present value is equal to discounted benefits minus discounted costs.
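The two formulas above can be computed directly. The following is a small sketch; the discount rate and cash-flow figures are made-up illustrative numbers, not from the text.

```python
# Present value and net present value, per the formulas above.
def present_value(future_value, rate, years):
    """PV = FV / (1 + i)^n"""
    return future_value / (1 + rate) ** years

def net_present_value(benefits, costs, rate):
    """Discounted benefits minus discounted costs; the first entry is year 1."""
    def pv_total(flows):
        return sum(present_value(f, rate, n) for n, f in enumerate(flows, start=1))
    return pv_total(benefits) - pv_total(costs)

# Rs. 1000 received two years from now, at a 10% discount rate:
print(round(present_value(1000, 0.10, 2), 2))  # 1000 / 1.21 -> 826.45

# Two years of illustrative benefits and costs:
print(round(net_present_value([500, 500], [400, 300], 0.10), 2))  # -> 256.2
```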
Second normal form is achieved when a record is in first normal form and each item
in the record is fully dependent on the primary record key for identification. In other
words, the analyst seeks functional dependency.
For example: state motor vehicle departments go to great lengths to ensure that
only one vehicle in the state is assigned a specific license tag number. The license
number uniquely identifies a specific vehicle; a vehicle's serial number is
associated with one and only one state license number. Thus if you know the serial
number of a vehicle, you can determine the state license number. This is functional
dependency.
In contrast, if a motor vehicle record contains the names of all individuals who
drive the vehicle, functional dependency is lost. If we know the license number, we
do not know who the driver is; there can be many. And if we know the name of the
driver, we do not know the specific license number or vehicle serial number, since a
driver can be associated with more than one vehicle in the file.
Thus to achieve second normal form, every data item in a record that is not
dependent on the primary key of the record should be removed and used to form a
separate relation.
Third Normal Form:
Third normal form is achieved when transitive dependencies are removed from a
record design. The following is an example of a transitive dependency:
A, B, C are three data items in a record.
If C is a functionally dependent on B and
B is functionally dependent on A,
Then C is functionally dependent on A
Therefore, a transitive dependency exists.
In data management, transitive dependency is a concern because data can
inadvertently be lost when the relationship is hidden. In the above case, if A is
deleted, then B and C are deleted also, whether or not this is intended. This problem
is eliminated by designing the record for third normal form. Conversion to third
normal form removes the transitive dependency by splitting the relation into two
separate relations.
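The split described above can be sketched concretely. In the following illustration, employee, department and location stand in for A, B and C; all names and values are invented for the example.

```python
# Removing the transitive dependency A -> B -> C (employee -> dept -> location)
# by splitting one relation into two. All data values are illustrative.
unnormalized = [
    {"employee": "Alice", "dept": "Sales", "dept_location": "Mumbai"},
    {"employee": "Bob",   "dept": "Sales", "dept_location": "Mumbai"},
]

# Relation 1: employee -> dept
employee_dept = {row["employee"]: row["dept"] for row in unnormalized}
# Relation 2: dept -> dept_location
dept_location = {row["dept"]: row["dept_location"] for row in unnormalized}

# Deleting every employee (A) no longer loses the department's location (C):
employee_dept.clear()
print(dept_location["Sales"])  # -> Mumbai
```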
Denormalization
Performance needs dictate very quick retrieval capability for data stored in relational
databases. To accomplish this, sometimes the decision is made to denormalize the
physical implementation. Denormalization is the process of putting one fact in
numerous places. This speeds data retrieval at the expense of data modification.
Of course, a normalized set of relational tables is the optimal environment and
should be implemented whenever possible. Yet, in the real world, denormalization
is sometimes necessary. Denormalization is not necessarily a bad decision if
implemented wisely. You should always consider these issues before denormalizing:
if the answer to any of these questions is "yes," then you should avoid
denormalization because any benefit that is accrued will not exceed the cost. If, after
considering these issues, you decide to denormalize, be sure to adhere to the general
guidelines that follow. Denormalization should be considered in situations such as the following:
Many critical queries and reports exist which rely upon data from more than
one table. Oftentimes these requests need to be processed in an on-line
environment.
Repeating groups exist which need to be processed in a group instead of
individually.
Many calculations need to be applied to one or many columns before queries
can be successfully answered.
Tables need to be accessed in different ways by different users during the
same timeframe.
Many large primary keys exist which are clumsy to query and consume a
large amount of DASD when carried as foreign key columns in related tables.
Certain columns are queried a large percentage of the time. Consider 60% or
greater to be a cautionary number flagging denormalization as an option.
Be aware that each new RDBMS release usually brings enhanced performance and
improved access options that may reduce the need for denormalization. However,
most of the popular RDBMS products on occasion will require denormalized data
structures. There are many different types of denormalized tables which can resolve
the performance problems caused when accessing fully normalized data. The
following topics will detail the different types and give advice on when to implement
each of the denormalization types.
6. Write a detailed note about the different levels and methods of testing
software. (May-06)
Ans. A table that is not sufficiently normalized can suffer from logical
inconsistencies of various types, and from anomalies involving data operations. In
such a table:
The same fact can be expressed on multiple records; therefore updates to the
table may result in logical inconsistencies. For example, each record in an
unnormalized "DVD Rentals" table might contain a DVD ID, Member ID, and
Member Address; thus a change of address for a particular member will
potentially need to be applied to multiple records. If the update is not carried
through successfully (that is, the member's address is updated on some
records but not others), then the table is left in an inconsistent state.
Specifically, the table provides conflicting answers to the question of what this
particular member's address is. This phenomenon is known as an update
anomaly.
There are circumstances in which certain facts cannot be recorded at all. In
the above example, if it is the case that Member Address is held only in the
"DVD Rentals" table, then we cannot record the address of a member who
has not yet rented any DVDs. This phenomenon is known as an insertion
anomaly.
There are circumstances in which the deletion of data representing certain
facts necessitates the deletion of data representing completely different facts.
For example, suppose a table has the attributes Student ID, Course ID, and
Lecturer ID (a given student is enrolled in a given course, which is taught by
a given lecturer). If the number of students enrolled in the course temporarily
drops to zero, the last of the records referencing that course must be deleted,
meaning, as a side effect, that the table no longer tells us which lecturer
has been assigned to teach the course. This phenomenon is known as a
deletion anomaly.
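The update anomaly described above can be demonstrated in a few lines. The following sketch uses the "DVD Rentals" example; the data values themselves are invented.

```python
# Update anomaly: the member's address is duplicated on every rental row.
rentals = [
    {"dvd_id": 1, "member_id": 7, "member_address": "12 Elm St"},
    {"dvd_id": 2, "member_id": 7, "member_address": "12 Elm St"},
]

# A partial update leaves the table answering the address question two ways:
rentals[0]["member_address"] = "9 Oak Ave"
addresses = {r["member_address"] for r in rentals if r["member_id"] == 7}
print(len(addresses))  # -> 2 distinct answers: an inconsistent state

# Normalized alternative: hold the address once, keyed by member ID.
members = {7: "9 Oak Ave"}
rentals_nf = [{"dvd_id": 1, "member_id": 7}, {"dvd_id": 2, "member_id": 7}]
print(members[rentals_nf[1]["member_id"]])  # -> 9 Oak Ave, stored exactly once
```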
History
Edgar F. Codd first proposed the process of normalization and what came to be
known as the 1st normal form:
There is, in fact, a very simple elimination [1] procedure which we shall call
normalization. Through decomposition non-simple domains are replaced by
"domains whose elements are atomic (non-decomposable) values."
Edgar F. Codd, A Relational Model of Data for Large Shared Data Banks[2]
In his paper, Edgar F. Codd used the term "non-simple" domains to describe a
heterogeneous data structure, but later researchers would refer to such a structure
as an abstract data type.
Normal forms
The normal forms (abbrev. NF) of relational database theory provide criteria for
determining a table's degree of vulnerability to logical inconsistencies and anomalies.
The higher the normal form applicable to a table, the less vulnerable it is to such
inconsistencies and anomalies. Each table has a "highest normal form" (HNF): by
definition, a table always meets the requirements of its HNF and of all normal forms
lower than its HNF; also by definition, a table fails to meet the requirements of any
normal form higher than its HNF.
The normal forms are applicable to individual tables; to say that an entire database
is in normal form n is to say that all of its tables are in normal form n.
Newcomers to database design sometimes suppose that normalization proceeds in
an iterative fashion, i.e. a 1NF design is first normalized to 2NF, then to 3NF, and so
on. This is not an accurate description of how normalization typically works. A
sensibly designed table is likely to be in 3NF on the first attempt; furthermore, if it is
3NF, it is overwhelmingly likely to have an HNF of 5NF. Achieving the "higher"
normal forms (above 3NF) does not usually require an extra expenditure of effort on
the part of the designer, because 3NF tables usually need no modification to meet
the requirements of these higher normal forms.
Edgar F. Codd originally defined the first three normal forms (1NF, 2NF, and 3NF).
These normal forms have been summarized as requiring that all non-key attributes
be dependent on "the key, the whole key and nothing but the key". The fourth and
fifth normal forms (4NF and 5NF) deal specifically with the representation of
many-to-many and one-to-many relationships among attributes. Sixth normal form (6NF)
incorporates considerations relevant to temporal databases.
First normal form
The criteria for first normal form (1NF) are:
A table must be guaranteed not to have any duplicate records;
therefore it must have at least one candidate key.
There must be no repeating groups, i.e. no attributes which occur a
different number of times on different records. For example, suppose
that an employee can have multiple skills: a possible representation of
employees' skills is {Employee ID, Skill1, Skill2, Skill3 ...}, where
{Employee ID} is the unique identifier for a record. This
representation would not be in 1NF.
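The repeating-group example above can be contrasted with its 1NF equivalent. In the following sketch the employee ID and skill names are illustrative.

```python
# Not in 1NF: a varying number of skill attributes per employee record.
not_1nf = {"employee_id": 1, "skill1": "COBOL", "skill2": "SQL", "skill3": None}

# 1NF: one record per (employee, skill) pair, no repeating group.
employee_skill = [
    {"employee_id": 1, "skill": "COBOL"},
    {"employee_id": 1, "skill": "SQL"},
]
skills = [r["skill"] for r in employee_skill if r["employee_id"] == 1]
print(skills)  # -> ['COBOL', 'SQL']
```

Note that in the 1NF form the candidate key is the pair {Employee ID, Skill}, and any number of skills can be recorded without changing the record structure.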
Second normal form
The criteria for second normal form (2NF) are:
The table must be in 1NF.
None of the non-prime attributes of the table are functionally
dependent on a part (proper subset) of a candidate key; in other
words, all functional dependencies of non-prime attributes on
candidate keys are full functional dependencies. For example, consider
a "Department Members" table whose attributes are Department ID,
Employee ID, and Employee Date of Birth; and suppose that an
employee works in one or more departments. The combination of
Department ID and Employee ID uniquely identifies records within the
table. Given that Employee Date of Birth depends on only one of those
attributes (namely, Employee ID), the table is not in 2NF.
Note that if none of a 1NF table's candidate keys are composite (i.e.,
every candidate key consists of just one attribute), then we can say
immediately that the table is in 2NF.
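The "Department Members" decomposition can be sketched as follows. Employee Date of Birth depends only on Employee ID, a part of the key, so it moves to its own relation; the data values are made up for illustration.

```python
# 2NF decomposition of the "Department Members" example.
department_members = [
    {"dept_id": "D1", "emp_id": 101, "dob": "1990-05-01"},
    {"dept_id": "D2", "emp_id": 101, "dob": "1990-05-01"},  # dob duplicated
]

# 2NF: membership keyed by the full (dept_id, emp_id) pair...
membership = {(r["dept_id"], r["emp_id"]) for r in department_members}
# ...and date of birth keyed by emp_id alone.
employee_dob = {r["emp_id"]: r["dob"] for r in department_members}

print(len(membership))        # -> 2 memberships
print(employee_dob[101])      # dob stored once -> 1990-05-01
```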
This normal form was, as of 2005, only recently proposed: the sixth normal form
(6NF) was only defined when extending the relational model to take into account the
temporal dimension. Unfortunately, most current SQL technologies as of 2005 do not
take into account this work, and most temporal extensions to SQL are not relational.
See work by Date, Darwen and Lorentzos[3] for a relational temporal extension, or
see TSQL2 for a different approach.
Denormalization
Databases intended for Online Transaction Processing (OLTP) are typically more
normalized than databases intended for Online Analytical Processing (OLAP). OLTP
applications are characterized by a high volume of small transactions, such as
updating a sales record at a supermarket checkout counter. The expectation is that
each transaction will leave the database in a consistent state. By contrast, databases
intended for OLAP operations are primarily "read only" databases. OLAP applications
tend to extract historical data that has accumulated over a long period of time. For
such databases, redundant or "denormalized" data may facilitate Business
Intelligence applications. Specifically, dimensional tables in a star schema often
contain denormalized data. The denormalized or redundant data must be carefully
controlled during ETL processing, and users should not be permitted to see the data
until it is in a consistent state. The normalized alternative to the star schema is the
snowflake schema.
Denormalization is also used to improve performance on smaller computers as in
computerized cash-registers. Since these use the data for look-up only (e.g. price
lookups), no changes are to be made to the data and a swift response is crucial.
Non-first normal form (NF²)
In recognition that denormalization can be deliberate and useful, the non-first
normal form is a definition of database designs which do not conform to first
normal form, by allowing "sets and sets of sets to be attribute domains" (Schek
1982). This extension introduces hierarchies in relations.
Consider the following table:

Non-First Normal Form
Person | Favorite Colors
Bob    | blue, red
Jane   | green, yellow, red

Assume a person has several favorite colors. Obviously, favorite colors consist of a
set of colors modeled by the given table.
The most useful and practical approach is with the understanding that
testing is the process of executing a program with the explicit intention of finding
errors, that is, making the program fail.
TESTING STRATEGIES:
A test case is a set of data that the system will process
as normal input. However, the data are created with the express intent of
determining whether the system will process them correctly.
There are two logical strategies for testing software: the strategies of code
testing and specification testing.
CODE TESTING:
The code testing strategy examines the logic of the
program. To follow this testing method, the analyst develops test cases that result in
executing every instruction in the program or module; that is, every path through the
program is tested.
This testing strategy does not indicate whether the
code meets its specifications, nor does it determine whether all aspects are even
implemented. Code testing also does not check the range of data that the program
will accept, even though, when software failures occur in actual use, it is frequently
because users submitted data outside of expected ranges (for example, a sales order
for $1, the largest in the history of the organization).
SPECIFICATION TESTING:
To perform specification testing, the analyst
examines the specifications stating what the program should do and how it should
perform under various conditions. Then test cases are developed for each condition
or combination of conditions and submitted for processing. By examining the
results, the analyst can determine whether the program performs according to its
specified requirements.
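The idea above can be sketched in code: test cases come from the stated conditions, not from the program's internal paths. The discount rule below is a hypothetical specification invented purely for illustration.

```python
# Specification testing sketch. Hypothetical spec: orders of 1000 or more
# get a 10% discount; smaller orders get none.
def order_discount(amount):
    """Return the discount the specification promises for this order amount."""
    return round(amount * 0.10, 2) if amount >= 1000 else 0.0

# One test case per specified condition, plus the boundary between them.
test_cases = [(999, 0.0), (1000, 100.0), (1500, 150.0)]
for amount, expected in test_cases:
    assert order_discount(amount) == expected
print("all specification test cases passed")
```

Note that these cases say nothing about how `order_discount` is written internally; they check only the behaviour the specification promises, including the boundary value 1000.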
LEVELS OF TESTING:
7. What are structured walkthroughs and how are they carried out? Describe
the composition of the walkthrough team. (May-06, Nov-03, May-05)
A structured walkthrough is a planned review of a system or its software by
persons involved in the development effort. Sometimes a structured walkthrough is
called a peer walkthrough because the participants are colleagues at the same level in
the organization.
PURPOSE:
1. The purpose of a structured walkthrough is to find areas where improvement can
be made in the system or the development process.
2. Structured walkthroughs are often employed to enhance quality and to provide
guidance to systems analysts and programmers.
3. A walkthrough should be viewed by the programmers and analysts as an
opportunity to receive assistance, not as an obstacle to be avoided or tolerated.
4. The structured walkthrough can be used as a constructive and cost-effective
management tool after detailed investigation, following design, and
during program development.
PROCESS OF STRUCTURED WALKTHROUGH:
1. The walkthrough concept recognizes that system development is a team
process. The individuals who formulated the design specifications or created
the program code are part of the review team.
2. A moderator is chosen to lead the review.
3. A scribe, or recorder, is also needed to capture the details of the discussions
and the ideas that are raised.
4. Maintenance should be addressed during the walkthrough.
5. Generally no more than seven persons should be involved, including the
individuals who actually developed the product under review, the recorder,
and the review leader.
6. Structured reviews rarely exceed 90 minutes in length.
REQUIREMENT REVIEW:
1. A requirement review is a structured walkthrough conducted to examine the
requirement specifications formulated by the analyst.
2. It is also called a specification review.
3. It aims at examining the functional activities and processes that the new
system will handle.
4. It includes documentation that participants read and study prior to the actual
walkthrough.
DESIGN REVIEW:
1. A design review focuses on the design specifications for meeting previously identified
system requirements.
2. The purpose of this type of structured walkthrough is to determine whether
the proposed design will meet the requirements effectively and efficiently.
Ans:
User interface design is the specification of a conversation between the system
user and the computer. It generally results in the form of input or output. There are
several types of user interface styles, including menu selection, instruction sets,
question-answer dialogue and direct manipulation.
1. Menu selection: It is a strategy of dialogue design that presents a
list of alternatives or options to the user. The system user selects the
desired alternative or option by keying in the number
associated with that option.
2. Instruction sets: It is a strategy where the application is designed using
a dialogue syntax that the user must learn. There are three types of
syntax: structured English, mnemonic syntax and natural language.
3. Question-answer dialogue strategy: It is a style that was primarily used
to supplement either menu-driven or syntax-driven dialogues. The
system question involves yes or no. It was also popular in developing
interfaces for character-based screens in mainframe applications.
4. Direct manipulation: It allows graphical objects to appear on a
screen. Essentially, this user interface style focuses on using icons,
small graphical images, to suggest functions to the user.
Advantages:
i. Shorter development time.
ii. More accurate user requirements.
iii. Greater user participation and support.
iv. Relatively inexpensive to build as compared with the cost of a
conventional system.
Disadvantages:
i. An appropriate operating system or programming language may be
used simply because it is available.
ii. The completed system may not contain all the features and final
touches. For instance, headings, titles and page numbers in the reports
may be missing.
iii. File organization may be temporary and record structures may be left
incomplete.
iv. Processing and input controls may be missing and documentation of
the system may have been avoided entirely.
v. Development of the system may become a never-ending process, as changes
will keep happening.
vi. Adds to the cost and time of developing the system if left uncontrolled.
Application:
i. This method is most useful for unique applications where developers
have little information or experience, or where the risk of error may be
high.
ii. It is useful to test the feasibility of the system or to identify user
requirements.
11. Distinguish between reliability and validity. How are they related? (May-06, Nov-03)
Ans:
Reliability and validity are two faces of information gathering. The term reliability is
synonymous with dependability, consistency and accuracy. Concern for reliability
comes from the necessity for dependability in measurement, whereas validity is
concerned with what is being measured rather than consistency and accuracy.
Reliability can be approached in three ways:
1. It is defined as stability, dependability, and predictability.
2. It focuses on the accuracy aspect.
3. Errors of measurement: these are random errors stemming from fatigue or
fortuitous conditions at a given time.
The most common question that defines validity is: does the instrument measure
what it is supposed to measure? It refers to the notion that the questions asked are worded to
produce the information sought. In validity the emphasis is on what is being
measured.
It is drawn to describe a
process or program component shown in the System Flow Chart in
more detail. The structure charts create further top-down
decomposition. Thus it is another, lower level of abstraction of the
design before implementation.
A computer program is
modularized so that it is very easy to understand; it avoids
repetition of the same code in multiple places of the same (or
different) programs and can be reused to save development time,
effort and costs. The structure chart makes it possible to model
even at the lower levels of design.
These modules of a
program, called subprograms, are placed physically one after
the other, serially. However, they are referenced in the order that
the functionality requires them. Thus program control is
transferred from a line in the calling subprogram to the first line in
the called subprogram. Thus they perform the role of calling
and/or called subprograms at different times during the program
execution.
The calling subprogram is
referred to as a parent and the called one is referred to as a child
subprogram. Thus a program can be logically arranged into a
hierarchy of subprograms. The structure charts can represent
these parent-child relationships between subprograms
effectively.
The subprograms also
communicate with each other in either direction. The structure
chart can describe the data flows effectively. These individual
data items passed between program modules are called data couples.
They are represented by an arrow starting from a hollow circle,
as shown in the diagram. The arrow is labeled with the name of the
data item passed.
The subprograms also
communicate among themselves via a type of data item called a flag,
which is purely internal information between subprograms, used to
indicate some result. Flags could be binary values, indicating the
presence or absence of a thing.
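The parent-child module relationship described above can be sketched in code: the parent passes a data couple down and receives data plus a flag back. The module names and the order data below are invented for illustration, not from any particular structure chart.

```python
# Called (child) module: receives a data couple, returns data plus a flag.
def validate_order(order):
    """Return the order total and a flag indicating whether the order is valid."""
    total = sum(order.values())
    is_valid = total > 0          # flag: presence or absence of a condition
    return total, is_valid

# Calling (parent) module: transfers control to the child, then acts on the flag.
def process_order(order):
    total, ok = validate_order(order)   # data couple out; data and flag back
    return f"total={total}" if ok else "rejected"

print(process_order({"pens": 30, "paper": 70}))  # -> total=100
print(process_order({}))                         # -> rejected
```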
[Figure: a structure chart fragment showing a Calling Module and a Called Module, connected by a "Data Passed" data couple, a "Data Returned" data couple, and a "Control Flag".]
Warnier/Orr Diagram:
1. The ability to show the relationship between processes and steps in a
process is not unique to Warnier/Orr diagrams, nor is the use of
iteration or the treatment of individual cases.
2. Both the structured flowchart and structured-English methods do
this equally well. However, the approach used to develop system
definitions with Warnier/Orr diagrams is different and fits well with
those used in logical system design.
3. To develop a Warnier/Orr diagram, the analyst works backwards,
starting with systems output and using an output-oriented analysis.
On paper, the development moves from left to right. First, the
intended output or results of processing are defined.
4. At the next level, shown by inclusion with a bracket, the steps
needed to produce the output are defined. Each step in turn is
further defined. Additional brackets group the processes required to
produce the result on the next level.
5. A complete Warnier/Orr diagram includes both process groupings
and data requirements. Both elements are listed for each process or
process component. These data elements are the ones needed to
determine which alternative or case should be handled by the system
and to carry out the process.
6. The analyst must determine where each data element originates, how
it is used, and how individual elements are combined. When the
definition is completed, the data structure for each process is
documented. It, in turn, is used by the programmers, who work from
the diagrams to code the software.
7. Advantages:
a. Warnier/Orr diagrams offer some distinct advantages to
system experts. They are simple in appearance and easy to
understand. Yet they are powerful design tools.
b. They have the advantage of showing grouping of processes
and the data that must be passed from level to level.
c. In addition, the sequence of working backwards ensures that
the system will be result-oriented.
13. What are the roles of the system analyst in system analysis and
design? (Nov-03, May-01)
The various roles of system analyst are as follows:
Change Agent:
The analyst may be viewed as an agent of change. A candidate system is designed to
introduce change and reorientation in how the user organization handles information
or makes decisions. In the role of a change agent, the system analyst may select various
styles to introduce change to the user organization. The styles range from that of the
persuader to the imposer. In between, there are the catalyst and confronter roles. When
the user appears to have a tolerance for change, the persuader or catalyst (helper) style
is appropriate. On the other hand, when drastic changes are required, it may be
necessary to adopt the confronter or even the imposer style. No matter what style is
used, however, the goal is the same: to achieve acceptance of the candidate system with a
minimum of resistance.
Investigator and Monitor:
In defining a problem, the analyst pieces together the information gathered to
determine why the present system does not work well and what changes will correct
the problem. In one respect, this work is similar to that of an investigator. Related to
the role of investigator is that of monitoring programs in relation to time, cost, and
quality. Of these resources, time is the most important. If time gets away, the
project suffers from increased costs and wasted human resources.
Architect:
Just as an architect relates the client's abstract design requirements and the
contractor's detailed building plan, an analyst relates the user's logical design
requirements with the detailed physical system design. As an architect, the analyst also
creates a detailed physical design of the candidate system.
Psychologist:
In system development, systems are built around people. The analyst plays the role
of psychologist in the way he/she reaches people, interprets their thoughts, assesses
their behaviour and draws conclusions from these interactions. Understanding
interfunctional relationships is important. It is important that the analyst be aware of
people's feelings and be prepared to get around things in a graceful way. The art of
listening is important in evaluating responses and feedback.
Salesperson:
Selling change can be as crucial as initiating change. Selling the system actually
takes place at each step in the system life cycle. Sales skills and persuasiveness are
crucial to the success of system.
Motivator:
The analyst's role as a motivator becomes obvious during the first few weeks after
implementation of a new system and during times when turnover results in new people
being trained to work with the candidate system. The amount of dedication it takes
to motivate the users often taxes the analyst's abilities to maintain the pace.
Politician:
Related to the role of motivator is that of politician. In implementing a candidate
system, the analyst tries to appease all parties involved. Diplomacy and finesse in
dealing with people can improve acceptance of the system. Inasmuch as a
politician must have the support of his/her constituency, a good analyst's goal is to
have the support of the users' staff.
i) Technical Skills:
Technical skills focus on procedures and techniques for operational analysis,
systems analysis and computer science. The technical skills relevant to systems
include the following:
a. Working knowledge of information technologies:
The system analyst must be aware of both existing and emerging information
technologies. They should also stay current through disciplined reading and
participation in appropriate professional societies.
b. Computer programming experience and expertise:
A system analyst must have some programming experience.
Most system analysts need to be proficient in one or more high-level
programming languages.
ii) Interpersonal Skills:
Interpersonal skills deal with relationships and the interface of the analyst with
people in the business. The interpersonal skills relevant to systems include the
following:
a. Good interpersonal communication skills:
An analyst must be able to communicate, both orally and in writing.
Communication is not just reports, telephone conversations and interviews.
It is people talking, listening, feeling and reacting to one another; their
experiences and reactions. Open communication channels are a must for
system development.
b. Good interpersonal relations skills:
An analyst interacts with all stakeholders in a system development project.
This interaction requires effective interpersonal skills that enable the analyst
to deal with group dynamics, business politics, conflict and change.
iii)
General knowledge of Business process and terminology:
System analyst must be able to communicate with business
experts to gain on understanding of their problems and needs. They should
avail themselves of every opportunity to complete basic business literacy
courses such as financial accounting, management or cost accounting,
finance, marketing , manufacturing or operations management, quality
management, economics and business law.
iv)
General Problem solving skills:
The system analyst must be able to take a large business problem, break
that problem down into its parts, determine problem causes and effects and
then recommend a solution. Analysts must avoid the tendency to suggest a
solution before analyzing the problem.
v)
Flexibility and Adaptability:
No two projects are alike. Accordingly, there is no single, magical approach or
standard that is equally applicable to all projects. Successful system analysts
learn to be flexible and to adapt to unique challenges and situations.
vi)
Character and Ethics:
The nature of the systems analyst's job requires strong character and a
sense of right and wrong. Analysts often gain access to sensitive and
confidential facts and information that are not meant for public
disclosure.
15. Build a current Admission for MCA system. Draw a context level diagram,
DFD up to two levels, ER diagram, a data flow, data stores and
a process. Draw input and output screens. (May-04)
EVENT LIST FOR THE CURRENT MCA ADMISSION SYSTEM
1. Administrator enters the college details.
2. Issue of admission forms.
3. Administrator enters the student details into the system.
4. Administrator verifies the details of the student.
5. System generates the hall tickets for the student.
6. Administrator updates the CET score of student in the system.
7. System generates the score card for the students.
8. Student enters his list of preference of college into the system.
9. System generates college-wise student list according to CET score.
10. System sends the list to the college as well as student.
Data Store used in MCA admission:
1) Student_Details:
   i) Student_id
   ii) Student_Name
   iii) Student_Address
   iv) Student_ContactNo
   v) Student_Qualification
   vi) Student_Marks_10th
   vii) Student_Marks_12th
   viii) Student_Marks_Degree
2) Stud_CET_Details:
   i) Student_id
   ii) Student_rollNo
   iii) Student_cetscore
   iv) Student_percentile
3) Stud_Preference_List:
   i) Student_id
   ii) Student_rollNo
   iii) Student_preference1
   iv) Student_preference2
   v) Student_preference3
   vi) Student_preference4
   vii) Student_preference5
4) College_List:
   i) College_id
   ii) College_name
   iii) College_address
   iv) Seats_available
   v) Fee

Input Files
1)
Student Name: ___________________________
Student Address: ___________________________
2)
Student_rollNo: _________
Preference No 1:
Preference No 2:
Preference No 3:
Preference No 4:
Preference No 5:
3)
Student id :
Student rollno:
Student Score:
Student Percentile:
4)
College List :
College Name:
College Address:
Seats Available :
Fees :
OUTPUT FILES
1)
Student ScoreCard
Student RollNo
Student Name
Student Score
Percentile
2)
College-wise merit list (columns: Rank 1-9, Name, Score, Percentile)
7. The preceding steps are iterated until the customer is satisfied that the
refined prototype represents the final product desired.
8. The final system is constructed, based on the refined prototype.
9. The final system is thoroughly evaluated and tested. Routine maintenance is
carried out on a continuing basis to prevent large-scale failures and to
minimize downtime.
Advantages
Estimates (i.e. budget, schedule, etc.) get more realistic as work progresses,
because important issues are discovered earlier.
It is more able to cope with the (nearly inevitable) changes that software
development generally entails.
Software engineers (who can get restless with protracted design processes)
can get their hands in and start working on a project earlier.
The spiral model is a realistic approach to the development of large-scale
software products because the software evolves as the process progresses. In
addition, the developer and the client better understand and react to risks at
each evolutionary level.
The model uses prototyping as a risk reduction mechanism and allows for the
development of prototypes at any stage of the evolutionary development.
It maintains a systematic stepwise approach, like the classic life cycle model,
but incorporates it into an iterative framework that more closely reflects the
real world.
If employed correctly, this model should reduce risks before they become
problematic, since technical risks are considered at all stages.
Disadvantages
17. What is the 4GL model? What are its advantages and disadvantages?
(M-03)
Fourth Generation Technique:
1)
The term 'Fourth Generation Technique' encompasses a broad array of
software tools that have one thing in common: each tool enables
the software developer to specify some characteristics of software at a
high level. The tool then automatically generates source code based on
the developer's specification.
2)
There is little debate that the higher the level at which software can be
specified to a machine, the faster a program can be built.
3)
The 4GT model focuses on the ability to specify software to a machine
at a level that is close to natural language, or using a notation that
imparts significant functions.
4)
Currently, a software development environment that supports the 4GT
model includes some or all of the following tools: non-procedural
languages for database query, report generation, data manipulation,
screen interaction and definition, code generation, high-level graphics
capability, and spreadsheet capability.
5)
There is no 4GT environment today that may be applied with equal
facility to each of the software application categories.
Steps in 4GT:
1)
Requirement Gathering:
i)
Like other paradigms, 4GT begins with the requirements gathering
step. Ideally, the customer would describe requirements and these would be
directly translated into an operational prototype. But this is
unworkable.
ii)
The customer may be unsure of what is required, may be ambiguous in
specifying facts that are known, and may be unable or unwilling to
specify information in a manner that a 4GT tool can consume.
iii)
In addition, current 4GT tools are not sophisticated enough to
accommodate truly 'natural language' and won't be for some time. At this
time, the customer-developer dialogue described for the other paradigms
remains an essential part of the 4GT model.
2)
Design Strategy:
i)
For small applications it may be possible to move directly from the
requirements gathering step to implementation using a non-procedural
fourth-generation language (4GL).
ii)
However, for large applications it is necessary to develop a design strategy
for the system, even if a 4GL is to be used.
iii)
The use of 4GT without design (for large projects) will cause the same
difficulties (poor quality, poor maintainability, etc.) that we
have encountered when developing software using conventional
approaches.
3)
Implementation using a 4GL:
4)
Testing:
This is to ensure that all system inputs are identified and have been
specified correctly.
Basically, it involves identifying the information flows across the system
boundary.
Using the DFD models, at the lowest levels of the system the developers
mark the system boundary, considering what part of the system is to be
automated and what is not.
Each input data flow in the DFD may translate into one or more of the
physical inputs to the system. Thus it is easy to identify the DFD
processes, which would input the data from outside sources.
Knowing now the details of what data elements to input, the GUI
designer prepares a list of input data elements.
When the input form is designed the designer ensures that all these data
elements are provided for entry, validations and storage.
Input integrity controls are used with all kinds of input mechanisms.
They help reduce the data entry errors at the input stage. One more
control is required on the input data to ensure the completeness.
Various error detection and correction methods are employed these
days. Some of them are listed as follows:
i.
Data validation controls introduce an additional digit, called a
check digit, which is computed from the data using a mathematical
formula. The data verification step recalculates it; if there is a
data entry error, the check digit will not match and an error is
flagged.
ii.
Range of acceptable values in a data item can also be used to
check the defects going into the data. If the data value being
entered is beyond the acceptable range, it flags out an error.
iii.
References to master data tables already stored can be used
to validate codes, such as customer codes, product codes,
etc. if there is an error then either the reference would not be
available or will be wrong.
iv.
Some of the controls are used in combination depending upon
the business knowledge.
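The three controls above can be sketched in code. This is a minimal illustration, not from the text: the function names, the mod-10 formula and the sample master table are all assumptions.

```python
def mod10_check_digit(number: str) -> int:
    """Compute a simple position-weighted mod-10 check digit (assumed formula)."""
    total = sum(int(d) * (i + 1) for i, d in enumerate(number))
    return total % 10

def validate_check_digit(code: str) -> bool:
    """The last digit of the entered code must match the recomputed check digit."""
    body, entered = code[:-1], int(code[-1])
    return mod10_check_digit(body) == entered

def validate_range(value: float, low: float, high: float) -> bool:
    """Range control: flag values outside the acceptable range for the data item."""
    return low <= value <= high

# Assumed master table of already-stored customer codes.
MASTER_CUSTOMER_CODES = {"C001", "C002", "C003"}

def validate_master_reference(customer_code: str) -> bool:
    """Master-data control: the referenced code must already exist."""
    return customer_code in MASTER_CUSTOMER_CODES

code = "1234" + str(mod10_check_digit("1234"))
print(validate_check_digit(code))         # a correctly formed code passes
print(validate_range(150, 0, 100))        # an out-of-range value is rejected
print(validate_master_reference("C999"))  # an unknown customer code is rejected
```

A transposition such as entering "12430" instead of "12340" changes the weighted sum, so the recomputed check digit no longer matches and the error is flagged.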
Transaction logging is another technique of input control. It logs
important database update operations by user. The transaction log
records the user id, date, time and location of the user. This information is
useful for controlling fraudulent transactions, and also provides a
recovery mechanism for erroneous transactions.
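A minimal sketch of such a transaction log; the field names and the in-memory list are assumptions (a real system would write to a protected file or table).

```python
import datetime

transaction_log = []  # stand-in for a protected log file or table

def log_transaction(user_id: str, location: str, operation: str, record_key: str):
    """Record who did what, when, and from where, for audit and recovery."""
    transaction_log.append({
        "user_id": user_id,
        "timestamp": datetime.datetime.now().isoformat(),
        "location": location,
        "operation": operation,    # e.g. INSERT / UPDATE / DELETE
        "record_key": record_key,  # which record was touched
    })

log_transaction("clerk01", "terminal-3", "UPDATE", "CUST/0042")
print(len(transaction_log))  # one audit entry recorded
```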
Use the paper form for recording the transaction mainly, if an evidence is
required or direct data capture is not feasible.
Provide help to every data element to be entered.
It should be easy to use, appear natural to the user and should be
complete.
The data elements asked should be related in logical order. Typically, the
left to right and top to bottom order of entry is more natural.
A form should not have too many data elements in one screen display.
The prototype form can be provided to the users. The steps are as follows :
a) The users should be provided with the form prototype and related
functionality including validations, help, error messages.
b) The users should be invited to use the prototype forms and the designers
should observe them.
c) The designers should observe their body language in terms of ease of use,
comfort levels, satisfaction levels, help asked for, etc. They also should note
the errors in the data entry.
d) The designers should ask the users their critical feedback.
e) The designers should improve the GUI design and associated functionality
and run second test runs for the users in the same way.
f) This exercise should be continued until the input form delivers expected
levels of usability.
As soon as specific requirements and solutions have been identified, the analyst can
weigh the costs and benefits of each alternative. This is called cost benefit analysis.
o Variable costs occur in proportion to some usage factor. For example:
Costs of computer usage (e.g. CPU time used), which vary with the workload.
Supplies (printer paper used), which vary with the workload.
Overhead costs (utilities), which can be allocated throughout the
lifetime of the system using standard techniques of cost
accounting.
Benefits
Benefits are classified as tangible and intangible.
Tangible benefits: They are those benefits that can be easily quantified.
Tangible benefits are usually measured in terms of monthly or annual savings
or of profits to the firm. E.g. fewer processing errors, decreased response
time, etc.
Intangible benefits: They are those benefits believed to be difficult or
impossible to quantify. Unless these benefits are at least identified, it is
entirely possible that many projects would not be feasible. E.g. improved
customer goodwill, better decision making, etc.
Cost benefit analysis :
a) The cost benefit analysis is a part of economic feasibility analysis :
b) The basic tasks here are as follows:
i.
To compute the total costs involved.
ii.
To compute the totals benefits from the project.
iii.
To compare the two to decide whether the project provides net gains
or not.
c) The costs are classified under two heads :
i.
Development costs :
Although the project manager has final responsibility for estimating the
costs of development, a senior analyst always assists with the calculations.
Generally project costs come in the following categories :
Salaries and wages
Software and licenses
Training
Facilities
Support staff.
22. What are the most important reasons why analysts use a Data
Dictionary? Give at least one example illustrating each reason.
(M-03)
Analysts use data dictionaries for five important reasons:
1. To manage the detail in large systems.
2. To communicate a common meaning.
3. To document the features of the system.
4. To facilitate analysis of the details in order to evaluate
characteristics and determine where system changes should be
made.
5. To locate errors and omissions in the system.
Ans:
A data dictionary is a catalog - a repository of the elements in the
system. A data dictionary is a list of all the elements composing the data
flowing through a system. The major elements are data flows, data stores,
and processes.
Analysts use data dictionary for five important reasons:
1. To manage the details in large systems:- Large systems have huge
volumes of data flowing through them in the form of documents,
reports, and even conversations. Similarly, many different activities take
place that use existing data or create new details. All systems are
ongoing all of the time and management of all the descriptive details is a
challenge. So the best way is to record the information.
2. To communicate a common meaning for all system elements:- Data
dictionaries assist in ensuring common meanings for system elements
and activities.
For example: Order processing (Sales orders from customers are
processed so that specified items can be shipped) have one of its data
element invoice. This is a common business term but does this mean
same for all the people referring it?
Does invoice mean the amount owed by the supplier?
Does the amount include tax and shipping costs?
How is one specific invoice identified among others?
Answers to these questions will clarify and define systems requirements
by more completely describing the data used or produced in the system.
Data dictionaries record additional details about the data flow in a
system so that all persons involved can quickly look up the description
of data flows, data stores, or processes.
3. To document the features of the system :- Features include the parts or
components and the characteristics that distinguish each. We want to
know about the processes and data stores. But we also need to know
under what circumstances each process is performed and how often the
circumstances occur. Once the features have been articulated and
recorded, all participants in the project will have a common source for
information about the system.
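The descriptive fields a data dictionary records for each element (name, type, size, usage, DFD process references, validation rules) can be sketched as a record; the `DataElement` class and the sample invoice entry below are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class DataElement:
    """One data dictionary entry, mirroring the fields listed in the text."""
    name: str
    data_type: str          # numeric, textual, image, audio, ...
    size: int
    usage: str              # input (read only), output (insert), update
    dfd_processes: list = field(default_factory=list)  # DFD process nos. where used
    validation: str = ""    # any special rules for system specification

invoice_amount = DataElement(
    name="invoice_amount",
    data_type="numeric",
    size=10,
    usage="input",
    dfd_processes=["2.1", "3.4"],
    validation="must be > 0; includes tax and shipping",
)
print(invoice_amount.name)
```

Recording the entry this way answers exactly the "invoice" questions in the example: one unique entry per element and usage, so every participant looks up the same definition.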
[ER diagram: Manufacturer licenses Dealership; Dealership stocks Car;
Manufacturer builds Car; Manufacturer contracts Shipper; Shipper transports Car.]
[Diagram: CUSTOMER -- ORDER -- ITEM; one order may include many items.]
An ORDER
entity cannot be uniquely identified by its own attributes. In given example orders
cannot exist unless there is first a customer.
Extent of dependency includes two interrelated concerns: the direction of the
relationship and the type of association between them. In the given example, the
customer entity points to the order entity, as indicated by the crow's foot. The
relationship means that customers own/have orders.
Once the entities & relationships are determined we need to focus on data
requirements for each entity.
In addition to the basic components we have already identified in a data structure
diagram- entities , attributes, and records- two additional elements are essential:
Attribute pointers: Link two entities by common information, usually a key attribute
in one and an attribute (non key)in another.
Logical pointers: identify the relationship between entities, and serve to gain
immediate access to the information in one entity by defining a key attribute in
another entity.
[ER diagram: entities Customer (Cust_id, Cust_name, Cust_add), Supplier
(Supplier_id, Supplier_Name, Supplier_Address), General Manager (id, Tel.No,
Dept No), Production Manager (id), Account Manager (Mgr_Id, Name, Dept_Id,
Dept_Name) and Workers. Relationships: Customer gives the order to Supplier;
Supplier supplies raw material; General Manager gives orders; Workers give
order details.]
Order: Order_No, Order_Type, Order_Quantity, Order_Price, Cust_Id, Supplier_Id
Customer: Cust_Id, Cust_Name, Cust_Add, Cust_Tel, General_Mgr_Id
Supplier: Supplier_Id, Supplier_Name, Supplier_Add, Supplier_Tel_No
General Manager: Manager_Id, Name, Department_No
Worker: Worker_Id, Worker_Name, Prod_Manager_Id, Tel_No
Production Manager: Manager_Id, Name, Department_No, Tel_No, General_Mgr_Id
Account Manager: Id, Name, Dept_No, Tel_No, General_Mgr_Id
Architecture/Design Documentation
Architecture documentation is a special breed of design documents. In a way,
architecture documents are third derivative from the code (design documents being
second derivative, and code documents being first). Very little in the architecture
documents is specific to the code itself. These documents do not describe how to
program a particular routine, or even why that particular routine exists in the form
that it does, but instead merely lay out the general requirements that would
motivate the existence of such a routine. A good architecture document is short on
details but thick on explanation. It may suggest approaches for lower level design,
but leave the actual exploration trade studies to other documents.
Technical Documentation
This is what most programmers mean when using the term software documentation.
When creating software, code alone is insufficient. There must be some text along
with it to describe various aspects of its intended operation. This documentation is
usually embedded within the source code itself so it is readily accessible to anyone
who may be traversing it.
User Documentation
Unlike code documents, user documents are usually far divorced from the source
code of the program, and instead simply describe how it is used.
In the case of a software library, the code documents and user documents could be
effectively equivalent and are worth conjoining, but for a general application this is
not often true. On the other hand, the Lisp machine grew out of a tradition in which
every piece of code had an attached documentation string. In combination with
strong search capabilities (based on a Unix-like apropos command), and online
sources, Lispm users could look up documentation and paste the associated function
directly into their own code. This level of ease of use is unheard of in putatively more
modern systems.
Marketing Documentation
For many applications it is necessary to have some promotional materials to
encourage casual observers to spend more time learning about the product. This
form of documentation has three purposes:
1. To excite the potential user about the product and instill in
them a desire for becoming more involved with it.
2. To inform them about what exactly the product does, so that
their expectations are in line with what they will be receiving.
3. To explain the position of this product with respect to other alternatives.
Requirement/Characteristics of Good Requirements
As described above, a list of system requirements contains a complete description of
the important requirements for a product design. Subsequent design decisions can
be based on this list. Of course, if one is to place one's trust in it, we must
assume that all the requirements in such a list are good. This raises the question,
How does one differentiate between good requirements and those that are not so
good?
It turns out that good requirements have the following essential qualities:
1. A good requirement contains one idea. If a requirement is found to contain more
than one idea then it should be broken into two or more new requirements.
2. A good requirement is clear; that is, the idea contained within it is not open to
interpretation. If any aspects of a
requirement are open to interpretation then the designer should consult the relevant
parties and clarify the statement.
3. Requirements should remain as general as possible. This ensures that the scope of
the design is not unnecessarily limited.
4. A good requirement is easily verifiable, that is, at the end of the design process it
is possible to check whether the requirement has been met.
These criteria apply to both user and systems requirements. Examples are given in
the section below.
In addition to the qualities listed above, a set of good requirements should
completely describe all aspects relevant to a product's design.
Examples
User Requirement
A good user requirement is listed below
The seat shall be comfortable for 95% of the population of each country in which the
vehicle is sold.
It meets the four criteria as follows:
1. It contains one idea. If the requirement had also made reference to leg room,
then the requirement would need to have been broken into two requirements.
2. It is clear. The statement has given quantitative limits from which the
seat can be designed. Stating that, The seat shall be comfortable for the majority of
its intended users, would be unacceptable.
3. The requirement is general. The requirement does not state how the seat should
be made so as to be
Comfortable for 95% of the population. To do so would unnecessarily narrow the
scope, and limit the design
process.
4. The requirement is verifiable. To test whether the seat is comfortable or
uncomfortable for the right number of people, you need only get people to sit in it,
and see if during normal operation they experience discomfort.
the steps a user will take to accomplish a particular task on your site
the way the Web site should respond to a user's actions
A use case begins with a user's goal and ends when that goal is fulfilled.
[DFD: the Customer requests a product item; the system checks availability
against the Inventory item and Catalogue item stores and returns the details;
the Customer then pays cash.]
We need an extra database for the sale of a product to a customer after he/she has
paid cash, as the same details can be recorded in the customer database and
checked later if he returns the product, or if the sale is made on a credit or cash
basis; the record of customers is needed for future use. This is clearly seen in the
sample entry of the customer database.
Product Item:
Name | Availability | Quantity Available | Price/unit
Milk | Y            | 100 liters         | 15
Soap | Y            | 2 pieces           | 20
Pen  | N            | -                  | 10

Customer:
Name | Address | Phone No | Products Purchased | Quantity Purchased | Total Amount | Cash/Credit | Return
Xyz  | PQR     | 5641     | Milk, Soap         | 1 liter, 1 piece   | 35           | Cash        | N
ABC  | Jkn     | 323      | Milk               | 2 liters           | 30           | Credit      | N
LMN  | Jsur    | 6543     | Soap               | 1 piece            | 20           | Cash        | -
RAD is used primarily for information systems applications. The RAD approach
encompasses the following phases:
Business modeling
The information flow among business functions is modeled in a way that answers the
following questions:
What information drives the business process?
What information is generated?
Who generates it?
Where does the information go?
Who processes it?
Data modeling
The information flow defined as part of the business modeling phase is refined into a
set of data objects that are needed to support the business. The characteristics
(called attributes) of each object are identified and the relationships between these
objects are defined.
Process modeling
The data objects defined in the data-modeling phase are transformed to achieve the
information flow necessary to implement a business function. Processing descriptions
are created for adding, modifying, deleting, or retrieving a data object.
Application generation
RAD assumes the use of fourth generation techniques and tools like VB,
VC++, Delphi, etc. rather than creating software using conventional third
generation programming languages. RAD works to reuse existing program
components (when possible) or create reusable components (when necessary).
In all cases, automated tools are used to facilitate construction of the software.
Testing and turnover
Since the RAD process emphasizes reuse, many of the program components have
already been tested. This minimizes the testing and development time.
If a business application can be modularized so that each major function can be
completed within the development cycle then it is a candidate for the RAD model. In
this case, each team can be assigned a model, which is then integrated to form a
whole.
Disadvantages
For Large (but scalable) projects, RAD requires sufficient resources to create
the right number of RAD teams.
RAD is not appropriate when technical risks are high, e.g. this occurs when a
new application makes heavy use of new technology.
29. What is the difference between system analysis and system design? How
does the focus of information system analysis differ from information
system design? (m-04, m-05)
System Analysis:
System analysis is a problem solving technique that decomposes a system
into its component pieces for the purpose of studying how well those component
parts work and interact to accomplish their purpose
System design :
System design is a complementary problem solving technique (to system
analysis) that reassembles a system's component pieces back into a complete
system. This may involve adding, deleting and changing pieces relative to the
original system.
Information system analysis:
Information system Analysis primarily focuses on the business problems and
requirements, independent of any technology that can or will be used to implement a
solution to that problem .
Information system design:
Information system design is defined as those tasks that follow system
analysis and focus on the specification of a detailed computer-based solution.
Whereas system analysis emphasizes the business problems, system design
focuses on the technical implementation concerns of the system.
1.
Hardware costs:
Hardware costs relate to the actual purchase or lease of the
computer and peripherals (e.g. printer, disk drive, tape unit, etc.). Determining
the actual costs of hardware is generally more difficult when the system is
shared by many users than for a dedicated stand-alone system.
2.
Personnel costs:
Personnel costs include EDP staff salaries and benefits (health
insurance, vacation time, sick pay, etc.) as well as payment of those involved
in developing the system. Costs incurred during the development of a system
are one-time costs and are labeled development costs.
3.
Facility cost:
Facility costs are costs incurred in the preparation of the
physical site where the application or computer will be in operation. This
includes wiring, flooring, lighting and air conditioning. These costs are treated
as one-time costs.
4.
Operating costs:
Operating costs include all costs associated with the day-to-day
operation of the system. The amount depends on the number of shifts, the
nature of the applications and the caliber of the operating staff. The amount
charged is based on computer time, staff time and volume of the output
produced.
5.
Supply cost:
Supply costs are variable costs that increase with increased use of
paper, ribbons, disks and the like.
Procedure for cost benefit determination:
The determination of cost and benefit entails the following steps :
1.
Identify the cost and benefits pertaining to a given project
2.
Categorize the various costs and benefits for analysis
3.
Select a method for evaluation
4.
Interpret the result of the system
5.
Take action
Classification of costs and benefits:
1.
a.
b.
c.
d.
2.
3.
Fixed costs are sunk costs. They are constant and do not change.
Variable costs are incurred on a regular basis. They are usually
proportional to work volume and continue as long as the system is in
operation.
3.
Payback analysis:
The payback method is a common measure of the relative time value
of a project. It is easy to calculate and allows two or more activities to
be ranked. The payback period may be computed by the following formula:

Payback period (years) = Overall cash outlay / Annual cash return
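The payback computation can be sketched both in its simple ratio form and as a cumulative cash-flow calculation; the figures below are hypothetical.

```python
def simple_payback(outlay: float, annual_return: float) -> float:
    """Payback period (years) = overall cash outlay / annual cash return."""
    return outlay / annual_return

def payback_from_cashflows(outlay: float, yearly_returns: list) -> float:
    """Years (fractional) until cumulative returns recover the outlay."""
    recovered = 0.0
    for year, cash in enumerate(yearly_returns, start=1):
        if recovered + cash >= outlay:
            # Interpolate within the year in which the outlay is recovered.
            return year - 1 + (outlay - recovered) / cash
        recovered += cash
    raise ValueError("outlay never recovered")

print(simple_payback(50000, 12500))                                 # 4.0 years
print(payback_from_cashflows(50000, [10000, 20000, 20000, 20000]))  # 3.0 years
```

Because both forms yield a single number of years, two or more candidate projects can be ranked directly, which is the point the text makes.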
32. What is the reason for selecting the prototype development method? What
is the desired impact on the application development process?(m-05)
1. Prototyping is a technique for quickly building a functioning but incomplete model
of the information system.
2. A prototype is a small, representative, or working model of users requirements or
a proposed design for an information system.
3. The development of the prototype is based on the fact that the requirements are
seldom fully known at the beginning of the project. The idea is to build a first,
simplified version of the system and seek feedback from the people involved, in
order to then design a better subsequent version. This process is repeated until the
system meets the client's conditions of acceptance.
4. Any given prototype may omit certain functions or features until such a time as
the prototype has sufficiently evolved into an acceptable implementation of
requirements.
Reasons for Prototyping:
1. Information requirements are not always well defined. Users may know only
that certain business areas need improvement or that the existing procedures
must be changed. Or, they may know that they need better information for
managing certain activities but are not sure what that information is.
2. The user's requirements may be too vague to even begin formulating a design.
3. Developers may have neither information nor experience of some unique
situations or some high-cost or high-risk situations, in which the proposed
design is new and untested.
4. Developers may be unsure of the efficiency of an algorithm, the adaptability
of an operating system, or the form that human-machine interaction should
take.
5. In these and many other situations, a prototyping approach may offer the
best approach.
Advantages:
This method is most useful for unique applications where developers have little
information or experience or where risk of error may be high. It is useful to test
the feasibility of the system or to identify user requirements.
33. What is a feasibility study? What are the different types of feasibility
study? (m-05)
Feasibility study
Types of Feasibility
Within a feasibility study, six areas must be reviewed: Economic, Technical,
Schedule, Organizational, Cultural, and Legal.
Economic feasibility study
This involves questions such as whether the firm can afford to build the system,
whether its benefits should substantially exceed its costs, and whether the project
has higher priority and profits than other projects that might use the same
resources. This also includes whether the project is in the condition to fulfill all the
eligibility criteria and the responsibility of both sides in case there are two parties
involved in performing any project.
Technical feasibility study
This involves questions such as whether the technology needed for the system
exists, how difficult it will be to build, and whether the firm has enough experience
using that technology. The assessment is based on an outline design of system
requirements in terms of Input, Output, Fields, Programs, and Procedures. This can
be quantified in terms of volumes of data, trends, frequency of updating, etc. in
order to give an introduction to the technical system.
Printer troubleshooter decision table:

Conditions:
  Printer does not print            | Y Y Y Y N N N N
  A red light is flashing           | Y Y N N Y Y N N
  Printer is unrecognized           | Y N Y N Y N Y N
Action entries:
  Check the power cable             |     X
  Check the printer-computer cable  | X   X
  Ensure printer software installed | X   X   X   X
  Check/replace ink                 | X X     X X
  Check for paper jam               |   X   X
Of course, this is just a simple example (and it does not necessarily correspond to
the reality of printer troubleshooting), but even so, it is possible to see how decision
tables can scale to several conditions with many possibilities.
Software engineering benefits
Decision tables make it easy to observe that all possible conditions are accounted
for. In the example above, every possible combination of the three conditions is
given. In decision tables, when conditions are omitted, it is obvious even at a glance
that logic is missing. Compare this to traditional control structures, where it is not
easy to notice gaps in program logic with a mere glance --- sometimes it is difficult
to follow which conditions correspond to which actions!
Just as decision tables make it easy to audit control logic, decision tables demand
that a programmer think of all possible conditions. With traditional control structures,
it is easy to forget about corner cases, especially when the else statement is
optional. Since logic is so important to programming, decision tables are an excellent
tool for designing control logic. In one incredible anecdote, after a failed 6 man-year
attempt to describe program logic for a file maintenance system using flow charts,
four people solved the problem using decision tables in just four weeks. Choosing the
right tool for the problem is fundamental.
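The completeness property described above is easy to check in code as well. Below is a minimal sketch of the printer-troubleshooting table as a lookup from condition combinations to actions; the function and variable names are illustrative, not part of any standard API:

```python
# The printer-troubleshooting decision table as a lookup. Each key is a tuple of
# condition entries (does_not_print, red_light, unrecognized); each value is the
# list of actions marked with an X for that rule.
ACTIONS = {
    (True,  True,  True):  ["Check the printer-computer cable",
                            "Ensure printer software is installed",
                            "Check/replace ink"],
    (True,  True,  False): ["Check/replace ink", "Check for paper jam"],
    (True,  False, True):  ["Check the power cable",
                            "Check the printer-computer cable",
                            "Ensure printer software is installed"],
    (True,  False, False): ["Check for paper jam"],
    (False, True,  True):  ["Ensure printer software is installed",
                            "Check/replace ink"],
    (False, True,  False): ["Check/replace ink"],
    (False, False, True):  ["Ensure printer software is installed"],
    (False, False, False): [],
}

def advise(does_not_print, red_light, unrecognized):
    """Return the actions for the matching rule."""
    return ACTIONS[(does_not_print, red_light, unrecognized)]

# All 2**3 combinations of the three conditions are accounted for:
assert len(ACTIONS) == 8
```

Because the table is a plain mapping, a missing rule shows up immediately as a `KeyError` rather than as a silent gap in `if`/`else` logic.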
Decision tree
In operations research, specifically in decision analysis, a decision tree is a decision
support tool that uses a graph or model of decisions and their possible
consequences, including chance event outcomes, resource costs, and utility. A
decision tree is used to identify the strategy most likely to reach a goal. Another use
of trees is as a descriptive means for calculating conditional probabilities.
In data mining and machine learning, a decision tree is a predictive model; that is,
a mapping from observations about an item to conclusions about its target value.
More descriptive names for such tree models are classification trees or regression
trees. In these tree structures, leaves represent classifications and branches
represent conjunctions of features that lead to those classifications [1]. The machine
learning technique for inducing a decision tree from data is called decision tree
learning, or (colloquially) decision trees.
Four major steps in building Decision Trees:
1. Identify the conditions.
2. Identify the outcomes (condition alternatives) for each decision.
3. Identify the actions.
4. Identify the rules.
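The four steps above can be sketched in code. The following minimal example uses a hypothetical discount policy: the conditions, their outcomes, the actions, and the rules are encoded as a tree of nested branches (all names are illustrative):

```python
# Each internal node is (condition, {outcome: subtree}); each leaf is an action.
# Rules are the root-to-leaf paths.
tree = ("customer_type", {
    "retail": ("order_size", {
        "small": "no discount",       # rule 1
        "large": "5% discount",       # rule 2
    }),
    "wholesale": ("order_size", {
        "small": "5% discount",       # rule 3
        "large": "10% discount",      # rule 4
    }),
})

def decide(tree, facts):
    """Walk the tree: at each node, follow the branch for the observed outcome."""
    while isinstance(tree, tuple):
        condition, branches = tree
        tree = branches[facts[condition]]
    return tree

print(decide(tree, {"customer_type": "wholesale", "order_size": "large"}))
```

Each path from the root to a leaf corresponds to one rule, which is why decision trees and decision tables describe the same logic in different forms.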
Structured English
The two building blocks of Structured English are (1) structured logic or instructions
organized into nested or grouped procedures, and (2) simple English statements
such as add, multiply, move, etc. (strong, active, specific verbs)
Five conventions to follow when using Structured English:
1. Express all logic in terms of sequential structures, decision structures, or
iterations.
2. Use and capitalize accepted keywords such as: IF, THEN, ELSE, DO, DO
WHILE, DO UNTIL, PERFORM
3. Indent blocks of statements to show their hierarchy (nesting) clearly.
4. When words or phrases have been defined in the Data Dictionary, underline
those words or phrases to indicate that they have a specialized, reserved
meaning.
5. Be careful when using "and" and "or" as well as "greater than" and "greater
than or equal to" and other logical comparisons.
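Applying these conventions, a small hypothetical order-approval policy might read as follows (in a real specification, terms such as order-total and credit-limit would be underlined if defined in the Data Dictionary):

```
DO WHILE there are orders to process
    READ next order
    IF order-total is greater than credit-limit
        THEN REJECT order
        ELSE ACCEPT order
             SUBTRACT order-total from credit-limit
    ENDIF
ENDDO
```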
Data dictionary
A data dictionary is a set of metadata that contains definitions and representations
of data elements. Within the context of a DBMS, a data dictionary is a read-only set
of tables and views. Amongst other things, a data dictionary holds the following
information:
Precise definitions of data elements
Usernames, roles, and privileges
Schema objects
Integrity constraints
Stored procedures and triggers
General database structure
Space allocations
Data dictionaries are more precise than glossaries (terms and definitions) because
they frequently have one or more representations of how data is structured. Data
dictionaries are usually separate from data models, since data models usually
include complex relationships between data elements.
Data dictionaries can evolve into full ontologies when discrete logic has been
added to data element definitions.
A sample data dictionary table:

Attribute    Data type  Size  Integrity     Description
Book_Id      Number           Primary key   Book Identity
Bk_name      Text       30                  Name of book
Stud_Id      Number     4     Foreign Key   Student Identity
Bk_author    Text       20                  Name of author
Bk_price     Number                         Price of book
Bk_edition   Text                           Number of edition
Database normalization
Purpose
The primary purpose of database normalization is to improve data quality through
the elimination of redundancy. This involves identification and isolation of repeating
data, so that the repeated information may be reduced down to a single record, then
conveniently retrieved wherever it is needed, reducing the potential for anomalies
during data operations. Maintenance of normalized data is simpler because the user
need only modify the repeated information in one place, with confidence that the
new information will be immediately available wherever it is needed. This is because
all data duplication is system maintained by key field inheritance.
Uses
Database normalization is a useful tool for requirements analysis and data modeling
processes in software development. The process of database normalization provides
many opportunities to improve understanding of the information which the data
represents, leading to the development of a logical data model which may be used
for design of tables in a relational database, classes in an object database, or
elements in an XML schema, to offer just a few examples.
Description
A non-normalized database can suffer from data anomalies, such as update,
insertion, and deletion anomalies.
Normalized databases have a design that reflects the true dependencies between
tracked quantities, allowing quick updates to data with little risk of introducing
inconsistencies. Instead of attempting to lump all information into one table, data is
spread out logically into many tables. Normalizing the data is decomposing a single
relation into a set of smaller relations which satisfy the constraints of the original
relation. Redundancy can be solved by decomposing the tables. However certain new
problems are caused by decomposition.
One can only describe a database as having a normal form if the relationships
between quantities have been rigorously defined. It is possible to use set theory to
express this knowledge once a problem domain has been fully understood, but most
database designers model the relationships in terms of an "idealized schema". (The
mathematical support came back into play in proofs regarding the process of
transforming from one form to another.)
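The decomposition described above can be illustrated with a minimal sketch. A single relation that repeats author details for every book risks update anomalies; splitting out the repeated data removes the redundancy. The field names and data are purely illustrative:

```python
# A denormalized relation: author_name is repeated for every book by the
# same author, so an update must touch many rows.
books = [
    {"book_id": 1, "title": "DB Basics",  "author_id": 10, "author_name": "Kent"},
    {"book_id": 2, "title": "SQL Primer", "author_id": 10, "author_name": "Kent"},
    {"book_id": 3, "title": "OS Notes",   "author_id": 11, "author_name": "Codd"},
]

# Decompose: author_name depends only on author_id, not on the whole record,
# so it moves to its own relation keyed by author_id.
authors = {r["author_id"]: r["author_name"] for r in books}
book_rel = [{"book_id": r["book_id"], "title": r["title"],
             "author_id": r["author_id"]} for r in books]

# The repeated information now lives in exactly one place: one update fixes
# every book by this author.
authors[10] = "W. Kent"

# Re-joining on the common field shows the new value everywhere it is needed.
joined = [dict(b, author_name=authors[b["author_id"]]) for b in book_rel]
assert all(j["author_name"] == "W. Kent" for j in joined if j["author_id"] == 10)
```

This is the essence of normalization: the decomposed relations carry the same information as the original, but each fact is stored only once.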
36. What are the major threats to system security? Which one is the most
serious and important, and why?
System Security Threats
ComAir's system crash on December 24, 2004, was just one example showing that
the availability of data and system operations is essential to ensure business
continuity. Due to resource constraints, organizations cannot implement unlimited
controls to protect their systems. Instead, they should understand the major threats,
and implement effective controls accordingly. An effective internal control structure
cannot be implemented overnight, and internal control over financial reporting must
be a continuing process.
The term system security threats refers to the acts or incidents that can and will
affect the integrity of business systems, which in turn will affect the reliability and
privacy of business data. Most organizations are dependent on computer systems to
function, and thus must deal with systems security threats. Small firms, however,
are often understaffed for basic information technology (IT) functions as well as
system security skills. Nonetheless, to protect a company's systems and ensure
business continuity, all organizations must designate an individual or a group with
responsibility for system security. Outsourcing system security functions may
be a less expensive alternative for small organizations.
Top System Security Threats
The 2005 CSI/FBI Computer Crime and Security Survey of 700 computer security
practitioners revealed that the frequency of system security breaches has been
steadily decreasing since 1999 in almost all threats except the abuse of wireless
networks.
Viruses
A computer virus is a software code that can multiply and propagate itself. A virus
can spread into another computer via e-mail, downloading files from the Internet, or
opening a contaminated file. It is almost impossible to completely protect a network
computer from virus attacks; the CSI/FBI survey indicated that virus attacks were
the most widespread attack for six straight years since 2000.
Insider Abuse of Internet Access
Annual U.S. productivity growth was 2.5% during the second half of the 1990s, as
compared to 1.5% from 1973 to 1995, a jump that has been attributed to the use of
IT (Stephen D. Oliner and Daniel E. Sichel, "Information Technology and
Productivity: Where Are We Now and Where Are We Going?", Federal Reserve Bank of
Atlanta Economic Review, Third Quarter 2002). Unfortunately, IT tools can be
abused. For example, e-mail and Internet connections are available in almost all
offices to improve productivity, but employees may use them for personal reasons,
such as online shopping, playing games, and sending instant messages to friends
during work hours.
Laptop or Mobile Theft
Because they are relatively expensive, laptops and PDAs have become the targets of
thieves. Although the percentage has declined steadily since 1999, about half of
network executives indicated that their corporate laptops or PDAs were stolen in
2005 (Network World Technology Executive Newsletter, 02/21/05). Besides being
expensive, these devices often contain proprietary corporate data and access codes.
Denial of Service
A denial of service (DoS) attack is specifically designed to interrupt normal system
functions and affect legitimate users' access to the system. Hostile users send a flood
of fake requests to a server, overwhelming it and making a connection between the
server and legitimate clients difficult or impossible to establish. The distributed denial
of service (DDoS) allows the hacker to launch a massive, coordinated attack from
thousands of hijacked (zombie) computers remotely controlled by the hacker. A
massive DDoS attack can paralyze a network system and bring down giant websites.
For example, the 2000 DDoS attacks brought down websites such as Yahoo! and
eBay for hours. Unfortunately, any computer system can be a hacker's target as long
as it is connected to the Internet.
Unauthorized Access to Information
To control unauthorized access to information, access controls, including passwords
and a controlled environment, are necessary. Computers installed in a public area,
such as a conference room or reception area, can create serious threats and should
be avoided if possible. Any computer in a public area must be equipped with a
physical protection device to control access when there is no business need. The LAN
should be in a controlled environment accessed by authorized employees only.
Employees should be allowed to access only the data necessary for them to perform
their jobs.
Abuse of Wireless Networks
Wireless networks offer the advantage of convenience and flexibility, but system
security can be a big issue. Attackers do not need to have physical access to the
network. Attackers can take their time cracking the passwords and reading the
network data without leaving a trace. One option to prevent an attack is to use one
of several encryption standards that can be built into wireless network devices. One
example, wired equivalent privacy (WEP) encryption, can be effective at stopping
amateur snoopers, but it is not sophisticated enough to foil determined hackers.
Consequently, any sensitive information transmitted over wireless networks should
be encrypted at the data level as if it were being sent over a public network.
System Penetration
Hackers penetrate systems illegally to steal information, modify data, or harm the
system.
Telecom Fraud
In the past, telecom fraud involved fraudulent use of telecommunication (telephone)
facilities. Intruders often hacked into a company's private branch exchange (PBX)
and administration or maintenance port for personal gain, including free
long-distance calls, stealing (changing) information in voicemail boxes, diverting calls
illegally, wiretapping, and eavesdropping.
Theft of Proprietary Information
Information is a commodity in the e-commerce era, and there are always buyers for
sensitive information, including customer data, credit card information, and trade
secrets. Data theft by an insider is common when access controls are not
implemented. Outside hackers can also use Trojan viruses to steal information
from unprotected systems. Beyond installing firewall and anti-virus software to
secure systems, a company should encrypt all of its important data.
Financial Fraud
The nature of financial fraud has changed over the years with information
technology. System-based financial fraud includes scam e-mails, identity theft, and
fraudulent transactions. With spam, con artists can send scam e-mails to thousands
of people in hours. Victims of the so-called 419 scam are often promised a lottery
winning or a large sum of unclaimed money sitting in an offshore bank account, but
they must pay a fee first to get their shares. Anyone who gets this kind of e-mail is
recommended to forward a copy to the U.S. Secret Service
Misuse of Public Web Applications
The nature of e-commerce (convenience and flexibility) makes Web applications
vulnerable and easily abused. Hackers can circumvent traditional network firewalls
and intrusion-prevention systems and attack web applications directly. They can
inject commands into databases via the web application user interfaces and
surreptitiously steal data, such as customer and credit card information.
Website Defacement
Website defacement is the sabotage of webpages by hackers inserting or altering
information. The altered webpages may mislead unknowing users and represent
negative publicity that could affect a companys image and credibility. Web
defacement is in essence a system attack, and the attackers often take advantage of
undisclosed system vulnerabilities or unpatched systems.
Viruses are just one of several programmed threats or malicious codes (malware) in
todays interconnected system environment. Programmed threats are computer
programs that can create a nuisance, alter or damage data, steal information, or
cripple system functions. Programmed threats include computer viruses, Trojan
horses, logic bombs, worms, spam, spyware, and adware.
According to a recent study by the University of Maryland, more than 75% of
participants received e-mail spam every day. There are two problems with spam:
Employees waste time reading and deleting spam, and it increases the system
overhead to deliver and store junk data. The average daily spam is 18.5 messages,
and the average time spent deleting them all is 2.8 minutes.
Spyware is a computer program that secretly gathers users' personal information
and relays it to third parties, such as advertisers. Common functionalities of spyware
include monitoring keystrokes, scanning files, snooping on other applications such as
chat programs or word processors, installing other spyware programs, reading
cookies, changing the default homepage on the Web browser, and consistently
relaying information to the spyware home base. Unknowing users often install
spyware as the result of visiting a website, clicking on a disguised pop-up window, or
downloading a file from the Internet.
Adware is a program that can display advertisements such as pop-up windows or
advertising banners on webpages. A growing number of software developers offer
free trials for their software until users pay to register. Free-trial users view
sponsored advertisements while the software is being used. Some adware does more
than just present advertisements, however; it can report users' habits, preferences,
or even personal information to advertisers or other third parties, similar to spyware.
To protect computer systems against viruses and other programmed threats,
companies must have effective access controls and install and regularly update
quarantine software. With effective protection against unauthorized access and by
encouraging staff to become defensive computer users, virus threats can be reduced.
Some viruses can infect a computer through operating system vulnerabilities. It is
critical to install system security patches as soon as they are available. Furthermore,
effective security policies can be implemented with server operating systems such as
Microsoft Windows XP and Windows Server 2003. Other kinds of software (e.g., Deep
Freeze) can protect and preserve original computer configurations. Each system
restart eradicates all changes, including virus infections, and resets the computer to
its original state. The software eliminates the need for IT professionals to perform
time-consuming and counterproductive rebuilding, re-imaging, or troubleshooting
when a computer becomes infected.
Fighting against programmed threats is an ongoing and ever-changing battle.
Many organizations, especially small ones, are understaffed and underfunded
for system security. Organizations can use one of a number of effective security
suites (e.g., Norton Internet Security 2005, ZoneAlarm Security Suite 5.5,
McAfee VirusScan) that offer firewall, anti-virus, anti-spam, anti-spyware, and
parental controls (for home offices) at the desktop level. Firewalls and routers
should also be installed at the network level to eliminate threats before they
reach the desktop.
37. Define data structure. What are the major types of data structures?
Illustrate.
Data are structured according to the data model. An entity is a conceptual
representation of an object. Relationships between entities make up a data structure.
A data model represents a data structure that is described to the DBMS in a DDL
(data definition language).
Types of Data-Structure
Data structuring determines whether the system can create 1:1, 1:m, or m:m
relationships among entities. Although all DBMSs have a common approach to data
management, they differ in the way they structure data. The three types are:
i. Hierarchical
ii. Network
iii. Relational
Hierarchical Structuring
i. Hierarchical (also called tree) structuring specifies that an entity can have
no more than one owning entity; that is, we can establish a 1:1 or a 1:m
relationship.
ii. The owning entity is called the parent; the owned entity, the child. A parent
can have many children (1:m), whereas a child can have only one parent.
iii. A parent with no owner is called the root. There is only one root in a
hierarchical model.
iv. Elements at the ends of the branches with no children are called leaves.
v. Trees are normally drawn upside down, with the root at the top and the leaves
at the bottom.
vi. The hierarchical model is easy to design and understand. Some applications,
however, do not conform to such a scheme. The problem is sometimes resolved by
using a network structure.
Network Structuring
i. A network structure allows 1:1, 1:m, or m:m relationships among entities.
For example, an auto parts shop may have dealings with more than one
automaker (parent).
ii. A network structure reflects the real world, although the corresponding
program structure can become complex. The solution is to separate the network
into several hierarchies with duplicates. This simplifies the relationships to
no more complex than 1:m; a hierarchy, then, becomes a subview of the network
structure.
Relational Structuring
i. Relational structuring organizes all data into flat, two-dimensional tables
(relations) of rows and columns.
ii. Relationships among entities are represented through common fields (shared
attributes) rather than physical links or pointers.
iii. Because tables are related through shared values, 1:1, 1:m, and m:m
relationships can all be represented.
iv. The structure is easy for users to understand, and new relationships can be
formed flexibly through operations such as joins.
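The three structurings can be contrasted in a minimal Python sketch; all names and data below are illustrative:

```python
# Hierarchical (tree): each child has exactly one parent, so only 1:1 and 1:m
# relationships are possible.
hierarchy = {"root": ["dept_A", "dept_B"],
             "dept_A": ["emp_1", "emp_2"],
             "dept_B": ["emp_3"]}

# Network: a child may have several owners, so m:m is allowed directly
# (here, one shop deals with two automakers).
network = {("shop_1", "automaker_X"), ("shop_1", "automaker_Y"),
           ("shop_2", "automaker_X")}

# Relational: relationships are carried by common fields in flat tables.
parts  = [{"part": "p1", "maker": "automaker_X"},
          {"part": "p2", "maker": "automaker_Y"}]
orders = [{"shop": "shop_1", "part": "p1"},
          {"shop": "shop_1", "part": "p2"}]

# Joining on the common field recovers the shop -> maker relationships:
shop_makers = {(o["shop"], p["maker"])
               for o in orders for p in parts if p["part"] == o["part"]}
assert shop_makers == {("shop_1", "automaker_X"), ("shop_1", "automaker_Y")}
```

Note how the relational form needs no pointers at all: the m:m relationship emerges from matching values in the shared `part` field.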
38. What cost elements are considered in the cost/benefit analysis? Which do
you think is most difficult to estimate? Why?
The cost-benefit analysis is a part of the economic feasibility study of a system. The
basic elements of cost-benefit analysis are as follows:
I. To compute the total costs involved.
II. To compute the total benefits from the project.
III. To compare the two in order to decide whether the project provides a net
gain.
The costs are classified under two heads, as follows:
i. Initial costs: mainly the development costs.
ii. Recurring costs: the costs incurred in running the application system or
operating the system (usually per month or per year).
The initial costs include the following major heads of expense:
Salary of development staff
Consultant fees
Costs of software development tools (licensing fees)
Infrastructure development costs
Training and training material costs
Travel costs
Development hardware and networking costs
Salary of support staff
The development costs apply for the entire duration of the software development;
they are incurred once, but are spread over the whole development period. The key
to keeping these costs low is reducing the development time.
The recurring costs include the following major heads:
Salary of operational users
License fees for software tools used for running the software system, if any
Hardware/networking maintenance charges
Costs of hard disks, magnetic tapes, and other media used for data storage
Furniture and fixture leasing charges
Electricity and other charges
Rents and taxes
Infrastructure maintenance charges
Spare parts and tools
The benefits are classified under two heads, as follows:
i. Tangible benefits: benefits that can be measured in terms of rupee value.
ii. Intangible benefits: benefits that cannot be measured in terms of rupee
value.
The common heads of tangible benefits vary from organisation to organisation, but
some of them are as follows:
i. Benefits due to reduced business cycle times, i.e., production cycle,
marketing cycle, etc.
ii. Benefits due to increased efficiency.
iii. Savings in the salary of operational users.
The common heads of intangible benefits vary from organisation to organisation, but
some of them are as follows:
i. Benefits due to increased working satisfaction of employees.
ii. Benefits due to increased quality of products and/or services provided to the
end-customer.
iii. Benefits in terms of business growth due to better customer service.
iv. Benefits due to increased brand image.
v. Benefits due to captured errors, which could not be captured in the current
system.
vi. Savings on costs of extra activities that were carried out previously and are
no longer required in the new system.
The net gain or loss is worked out after calculating the difference between the
costs and the benefits.
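The three basic steps (total the costs, total the benefits, compare) can be sketched in a few lines of Python; the figures and category names below are purely illustrative:

```python
# Illustrative cost and benefit heads (amounts are made up for the sketch).
initial_costs = {"development salaries": 500_000,
                 "tools/licensing": 60_000,
                 "training": 25_000}
recurring_costs_per_year = {"operations salaries": 120_000,
                            "maintenance": 30_000}
tangible_benefits_per_year = {"reduced cycle times": 200_000,
                              "efficiency savings": 150_000}

def net_gain(years):
    """Net gain over a planning horizon of `years` years."""
    costs = (sum(initial_costs.values())
             + years * sum(recurring_costs_per_year.values()))
    benefits = years * sum(tangible_benefits_per_year.values())
    return benefits - costs

# A go/no-go decision based on the computed value: positive means net gain.
print(net_gain(5))
```

Note that only tangible benefits enter the computation directly; the intangible benefits listed above resist this kind of quantification, which is precisely why they are the hardest element to estimate.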
The advantages of cost/benefit analysis are many. Some of them are listed below:
i. Since it is a computed value, go/no-go decisions based on it are likely to
be more effective than just going ahead without it.
ii. It compels the user, the management, and the proposal developer to think
through the costs and benefits ahead of time, proactively, and to assign
numbers to these heads of account.
iii. Since it assigns amounts to each of these heads, they can be used for
verification after the go-ahead decision is implemented, to test the
correctness of the assumptions made during the planning stage. This acts as a
very effective control tool.
iv. If used properly, the cost/benefit analysis can be a very good opportunity
for organisational learning, which goes into building up the organisational
maturity in that key performance area (KPA).
Some common drawbacks of the cost/benefit analysis are as follows:
i. The bases for computing the costs and the benefits should be balanced;
e.g., salary expenses incurred and salary expenses saved should have some
bearing on reality. Unfortunately, these bases are subjective and therefore
may be difficult to balance.
ii. The personal bias of the team members may be reflected in the
computations. If a person or team is in favour of developing the new system,
they may overlook some important cost elements or are likely to underestimate
the costs, and vice versa.
iii. Some of the intangible benefits may be very hard to compute accurately.
iv. Many times, in practice, the top management's decision does not depend
upon the outcome of the cost/benefit analysis. If the project manager/system
analyst knows about the decision in advance, they may not make the effort to
collect data and carry out the comparative analysis.
v. Many times, software development projects are business compulsions rather
than a matter of choice. In such cases, the cost/benefit analysis may be
either a futile exercise or act only as a paper formality.
39. There are two ways of building program software, bottom-up and top-down.
How do they differ?
It's a long-standing principle of programming style that the functional elements of a
program should not be too large. If some component of a program grows beyond the
stage where it's readily comprehensible, it becomes a mass of complexity which
conceals errors. Such software will be hard to read, hard to test, and hard to debug.
In accordance with this principle, a large program must be divided into pieces, and
the larger the program, the more it must be divided. How do you divide a program?
The traditional approach is called top-down design: you say "the purpose of the
program is to do these seven things, so I divide it into seven major subroutines. The
first subroutine has to do these four things, so it in turn will have four of its own
subroutines," and so on. This process continues until the whole program has the
right level of granularity-- each part large enough to do something substantial, but
small enough to be understood as a single unit.
As well as top-down design, experienced programmers follow a principle which could
be called bottom-up design: changing the language to suit the problem. It's worth
emphasizing that
bottom-up design doesn't mean just writing the same program in a different order.
When you work bottom-up, you usually end up with a different program. Instead of a
single, monolithic program, you will get a larger language with more abstract
operators, and a smaller program written in it. Instead of a lintel, you'll get an arch.
This brings several advantages:
1. By making the language do more of the work, bottom-up design yields programs
which are smaller and more agile. A shorter program doesn't have to be divided into
so many components, and fewer components means programs which are easier to
read or modify. Fewer components also means fewer connections between
components, and thus less chance for errors there. As industrial designers strive to
reduce the number of moving parts in a machine, experienced Lisp programmers use
bottom-up design to reduce the size and complexity of their programs.
2. Bottom-up design promotes code re-use. When you write two or more programs,
many of the utilities you wrote for the first program will also be useful in the
succeeding ones. Once you've acquired a large substrate of utilities, writing a new
program can take only a fraction of the effort it would require if you had to start with
raw Lisp.
3. Bottom-up design makes programs easier to read. An instance of this type of
abstraction asks the reader to understand a general-purpose operator; an instance
of functional abstraction asks the reader to understand a special-purpose subroutine.
4. Because it causes you always to be on the lookout for patterns in your code,
working bottom-up helps to clarify your ideas about the design of your program. If
two distant components of a program are similar in form, you'll be led to notice the
similarity and perhaps to redesign the program in a simpler way.
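The bottom-up idea can be sketched in miniature: first grow small general-purpose utilities, then write a much shorter program in terms of them. The functions below are illustrative, not from any particular library:

```python
def words(text):
    """General-purpose utility: split text into lowercase words."""
    return text.lower().split()

def tally(items):
    """General-purpose utility: count occurrences of each item."""
    counts = {}
    for item in items:
        counts[item] = counts.get(item, 0) + 1
    return counts

# The "program" is now a one-liner composed from the substrate of utilities,
# and both utilities remain reusable in later programs.
def word_frequencies(text):
    return tally(words(text))

assert word_frequencies("the cat the hat") == {"the": 2, "cat": 1, "hat": 1}
```

A top-down version would instead start from `word_frequencies` and push its sub-steps into purpose-built subroutines; the bottom-up version arrives at the same program, but leaves behind utilities that other programs can reuse.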
Top-Down
Advantages
1. Design errors are trapped earlier.
2. A working (prototype) system is available early.
3. Enables early validation of the design.
4. No test drivers are needed.
5. The control program plus a few modules forms a basic early prototype.
6. Interface errors are discovered early.
7. Modular features aid debugging.
Disadvantages
1. Difficult to produce a stub for a complex component.
2. Difficult to observe test output from top-level modules.
3. Test stubs are needed.
4. Extended early phases dictate a slow manpower buildup.
5. Errors in critical modules at low levels are found late.
Bottom-Up
Advantages
1. Easier to create test cases and observe output.
2. Uses simple drivers for low-level modules to provide data and the interface.
3. Natural fit with OO development techniques.
4. No test stubs are needed.
5. Easier to adjust manpower needs.
6. Errors in critical modules are found early.
Disadvantages
1. No working program until all modules are tested.
2. High-level errors may cause changes in lower modules.
3. Test drivers are needed.
4. Many modules must be integrated before a working program is available.
5. Interface errors are discovered late.
40. Discuss the six special system tests. Give specific examples.
Special System Tests:
The tests which do not focus on the normal running of the system, but instead
fall under a special category used for performing specific tasks, are termed
Special System Tests.
They are listed as follows:
1. Peak Load Test:
This is used to determine whether the system will handle the volume of activities
that occur when the system is at peak of the processing demand. For instance when
all terminals are active at the same time. This test applies mainly for on-line
systems.
For example, in a banking system, the analyst wants to know what will happen if all
the tellers sign on at their terminals at the same time before the start of the
business day. Will the system handle them one at a time without incident, will it
attempt to handle all of them at once and be so confused that it locks up and must
be restarted, or will terminal addresses be lost? The only sure way to find out is
to test for it.
2. Storage Testing:
This test is to be carried out to determine the capacity of the system to store
transaction data on a disk or in other files. Capacities here are measured in terms
of the number of records that a disk will handle or a file can contain. If this
test is not carried out, one may discover during installation that there is not
enough storage capacity for transactions and master file records.
3. Performance Time Testing:
This test refers to the response time of the system being installed. Performance time
testing is conducted prior to implementation to determine how long it takes to
receive a response to an inquiry, make a backup copy of a file, or send a
transmission and receive a response.
This also includes test runs to time the indexing or restoring of large files of
the size the system will have during a typical run, or to prepare a report. A
system may run well with only a handful of test transactions but may be
unacceptably slow when fully loaded. This test should be done using the entire
volume of live data.
4. Recovery Testing:
Analyst must never be too sure of anything. He must always be prepared for the
worst. One should assume that the system will fail and data will be damaged or lost.
Even though plans and procedures are written to cover these situations, they also
must be tested.
5. Procedure Testing:
Documentation and manuals telling the user how to perform certain functions are
tested quite easily by asking the user to follow them exactly through a series of
events. Not including instructions about aspects such as when to press the Enter
key, or removing the diskettes before switching off the power, and so on, could
cause trouble for the user, since what is not mentioned in the procedures may be
done incorrectly.
6. Human Factors:
If, during processing, the screen goes blank, the operator may start to wonder
what is happening and may simply press the Enter key a number of times, or switch
off the system, and so on; but if a message is displayed saying that processing is
in progress and asking the operator to wait, these types of problems can be
avoided.
Thus, during this test we determine how users will use the system when processing
data or preparing reports.
As we have seen, these special tests are used for special situations, hence the
name Special System Tests.
41. Define the following types of maintenance. Give examples for each.
a. corrective maintenance
b. adaptive maintenance
c. perfective maintenance
d. preventive maintenance
Solution
Corrective Maintenance:
Corrective maintenance means repairing processing or performance
failures or making changes because of previously uncorrected problems or false
assumptions.
For example,
fixing cosmetic problems, like correcting a misspelled word in the
user interface;
fixing functional errors that don't obviously affect processing,
like correcting a mathematical function so that it is calculated
correctly;
fixing algorithmic errors that cause severe performance problems,
like changing a program to avoid crashes or infinite loops;
fixing algorithmic problems that damage, lose, corrupt, or destroy
data in the program or in files.
Adaptive Maintenance:
Adaptive maintenance is a type of software maintenance where the software is
changed in response to changes in the working environment, such as a system
upgrade or hardware replacement; in short, the program is changed so that it keeps
functioning in the new environment.
For example,
porting to use newer versions of the development tools and/or
components,
porting product to a different operating system,
adapting a program for new locales,
modification of code to take full advantage of hardware-supported
operations.
Perfective Maintenance:
Perfective maintenance is a type of software maintenance where a working system
is enhanced to improve its performance, maintainability, or usability, or to add
features requested by users.
For example,
adding a new report to an existing application,
restructuring code to make future changes easier.
Preventive Maintenance:
Preventive maintenance is a type of software maintenance where work is
done in order to try to prevent malfunctions or improve
maintainability.
42. What are the different methods of file organization? Explain the advantages
and disadvantages of each?
File Organization:
A file is organized to ensure that records are available for processing. It should be
designed in line with the activity and volatility of the information and the nature of
the storage media and devices. Other considerations are the cost of the media, and
file privacy, security, and confidentiality.
There are four methods of organizing files:
1. Sequential organization:
- Sequential organization simply means storing data in physical, contiguous blocks
within files on tapes or disks. Records are also in sequence within each block.
- To access a record, previous records within the block are scanned. Thus this design
is best suited for get next activities, reading one record after another without a
search delay.
- In this, records can be added only at the end of the file. It is not possible to insert
a record in the middle of file without rewriting the file.
- In this, file update, transactions records are in the same sequence as in the master
file. Records from both files are matched, one record at a time, resulting in an
updated master file.
Advantages:
- Simple to design and Easy to program.
- Variable length and blocked records are available.
- Best use of disk storage.
Disadvantages:
- Records cannot be added to middle of the file.
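The sequential master-file update described above can be sketched in Python. This is a minimal illustration, not part of the answer: the two-field record layout, the overwrite policy, and the sample data are all assumptions.

```python
# Minimal sketch of a sequential master-file update: both files are
# sorted by key, and records are matched one at a time. Transactions
# for keys not in the master are dropped here, reflecting that new
# records can only be appended at the end of a sequential file.

def sequential_update(master, transactions):
    """Merge sorted (key, value) transaction records into a sorted master."""
    updated = []
    t = iter(transactions)
    txn = next(t, None)
    for key, value in master:
        # Apply every transaction whose key matches this master record.
        while txn is not None and txn[0] == key:
            value = txn[1]          # overwrite with the transaction value
            txn = next(t, None)
        updated.append((key, value))
    return updated

master = [(1, "A"), (2, "B"), (3, "C")]
transactions = [(2, "B2"), (3, "C2")]
print(sequential_update(master, transactions))
# [(1, 'A'), (2, 'B2'), (3, 'C2')]
```

Note how the whole master file is rewritten even though only two records change: this is exactly the update cost the text describes.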
2. Indexed Sequential Organization:
- Like sequential organization, it also stores data in physically contiguous blocks.
- The difference is in the use of indexes to locate the records.
-Disk storage is divided into three areas:
(a). Prime area: It contains file records stored by key or ID numbers. All records
are initially stored in the prime area.
(b). Overflow area: It contains records added to the files that cannot be placed in
logical sequence in the prime area.
(c) Index area: This is more like a data dictionary. It contains keys of records and
their locations on the disks. A pointer associated with each key is an address that
tells the system where to find a record.
Advantages:
- Indexed sequential organization reduces the magnitude of the sequential
search and provides quick access for sequential and direct processing.
- Records can be inserted or updated in middle of the file.
Disadvantages:
- The prime drawback is the extra storage required for the index.
- It also takes longer to search the index for data access or retrieval.
- Periodic reorganization of file is required.
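The prime/index/overflow layout described above can be sketched in Python. This is a minimal illustration only: in-memory lists and dicts stand in for the disk areas, and the record names are invented.

```python
# Minimal sketch of indexed-sequential access: a prime area holds
# records in key order, an index maps keys to their positions, and an
# overflow area holds later insertions that cannot fit in sequence.

prime_area = [(10, "rec-10"), (20, "rec-20"), (30, "rec-30")]
index = {key: pos for pos, (key, _) in enumerate(prime_area)}
overflow_area = {}

def insert(key, data):
    # New records that break the physical sequence go to overflow.
    overflow_area[key] = data

def lookup(key):
    # The index gives direct access without a full sequential scan.
    if key in index:
        return prime_area[index[key]][1]
    return overflow_area.get(key)

insert(25, "rec-25")
print(lookup(20))   # rec-20  (found via the index)
print(lookup(25))   # rec-25  (found in the overflow area)
```

Periodic reorganization, as the text notes, corresponds to merging the overflow area back into the prime area and rebuilding the index.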
3. Inverted List Organization:
Testing strategies: 1) Code Testing, 2) Specification Testing.
Code Testing:
The code-testing strategy examines the logic of the program. To follow the
testing method, the analyst develops test cases that result in executing every
instruction in the program or module, that is, every path through the program is
tested. A path is a specific combination of
conditions that is handled by the program.
Code testing seems the ideal method for testing software. However, this testing
strategy does not indicate whether the code meets its specification, nor does it
determine whether all aspects are even implemented. It also
does not check the range of data that the program will accept.
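Path testing can be illustrated with a small example: a function with one condition has two paths, so two test cases are needed to execute every instruction. The discount rule below is invented purely for illustration.

```python
# Code testing exercises every path through a program. This function
# has one condition, hence two paths, hence two test cases.

def discount(amount):
    if amount > 100:          # path 1: discount branch
        return amount * 0.9
    return amount             # path 2: no-discount branch

# One test case per path gives full path coverage of this function.
assert discount(200) == 180.0   # exercises path 1
assert discount(50) == 50       # exercises path 2
```

Note that both tests can pass even if the 10% rate is the wrong business rule, which is exactly the limitation described above: path coverage says nothing about whether the code meets its specification.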
Specification Testing:
In this the analyst examines the specifications stating what the
program should do and how it should perform under various conditions. Then the
test case are developed for each condition or combination of
conditions and submitted for the processing. By examining the results, the analyst
can determine whether the program performs according to its
specified requirements.
This strategy treats the program as if it were a black box: the
analyst does not look into the program to study the code and is not concerned about
whether every instruction or path through the program is tested.
The specification testing strategy is more efficient, since it focuses on the way
the software is expected to be used.
[Figure: SDLC phases: System Analysis, High-Level Design, Low-Level Design,
Construction, Testing, Implementation, Project Management, Software Configuration
& Control Management]
46. What is the purpose of a feasibility study? What are the parameters used to
decide the different feasibilities of a system, in detail?
Feasibility study is an analysis of possible alternative solutions to a problem and a
recommendation on the best alternative. It can, for example, decide whether order
processing can be carried out by a new system more efficiently than the previous
one.
A feasibility study is a study conducted to find out whether the proposed system
would be :
1. Possible to build with given technology and resources
2. Affordable given the time and cost constraint of the organisation ,and
3. Acceptable for use by the eventual users of the system
47. Explain how the waterfall model and the prototyping model can be
accommodated in spiral process model?
DEFINITION - The spiral model, also known as the spiral lifecycle model, is a
systems development method (SDM) used in information technology (IT). This model
of development combines the features of the prototyping model and the waterfall
model. The spiral model is favored for large, expensive, and complicated projects.
25. The preceding steps are iterated until the customer is satisfied that the
refined prototype represents the final product desired.
26. The final system is constructed, based on the refined prototype.
27. The final system is thoroughly evaluated and tested. Routine maintenance is
carried out on a continuing basis to prevent large-scale failures and to
minimize downtime.
Advantages
Estimates (i.e. budget, schedule, etc.) get more realistic as work progresses,
because important issues are discovered earlier.
It is more able to cope with the (nearly inevitable) changes that software
development generally entails.
Software engineers (who can get restless with protracted design processes)
can get their hands in and start working on a project earlier.
The spiral model is a realistic approach to the development of large-scale
software products because the software evolves as the process progresses. In
addition, the developer and the client better understand and react to risks at
each evolutionary level.
The model uses prototyping as a risk reduction mechanism and allows for the
development of prototypes at any stage of the evolutionary development.
It maintains a systematic stepwise approach, like the classic life cycle model,
but incorporates it into an iterative framework that more closely reflects the
real world.
If employed correctly, this model should reduce risks before they become
problematic, since technical risks are considered at all stages.
Disadvantages
48. Discuss the advantages of graphical information displays and suggest four
application areas where it would be more appropriate to use graphical
rather than digital display of numeric information?
The advantages of graphical information display are:
- Improves effectiveness of output.
- Manages information volume.
- Fulfils personal preferences.
- Use of different graphic forms ensures better readability of
information.
Use of ICONS (pictorial representations of entities described by data):
- Properly selected icons communicate information immediately, since
they duplicate images that users are already familiar with.
- An appropriate icon ensures that the right words and phrases, that is,
the intended meaning, is conveyed.
COLOUR REPRESENTATION: consistent colour usage enhances a good output design,
e.g. RED for exceptions, GREEN/BLUE for normal situations.
- The brightest colours emphasize the most important information on the
display screen, e.g. TURQUOISE and PINK.
Application areas:
i. BUSINESS REPORTS: pie diagrams and bar charts help the reader understand
the information better than just printing the values and figures.
ii. AUTOMOBILE SHOWROOMS: a comparison of different models using different
graphics gives a better output value to the user.
iii. HOSPITALS: diagrams showing different dosages, and instrument outputs on
paper (scans), give a quick idea about the patient's condition and needs.
50. What do you mean by structured analysis? Describe the various tools used
for structured analysis with the pros and cons of each.
STRUCTURED ANALYSIS: a set of techniques and graphical tools that allow the
analyst to develop a new kind of system specification that is understandable
to the user.
Tools of Structured Analysis are:
1: DATA FLOW DIAGRAM ( DFD )--[ Bubble Chart ]
Modular Design;
Symbols:
i. External Entities: A Rectangle or Cube -- Represents SOURCE or
DESTINATION.
ii. Data Flows/Arrows: Shows MOTION of Data.
iii.Processes/Circles/Bubbles: Transform INCOMING to OUTGOING
functions.
iv. Open Rectangles: File or Data Store.
( Data at rest or temporary repository of
data )
-DFD describes data flow logically rather than how it is processed.
-It is independent of hardware, software, data structure or file organization.
ADV- Ability to represent data flow.
Useful in HIGH & LOW level Analysis.
Provides Good System Documentation.
DIS ADV- Weak input & output details.
Confusing Initially.
Iterations.
2: DATA DICTIONARY-- It is a STRUCTURED REPOSITORY of DATA
METADATA: Data about Data
3 items of Data present in Data Dictionary are:
i. DATA ELEMENT.
ii. DATA STRUCTURE.
iii. DATA FLOW & DATA STORES.
ADV- Supports Documentation.
Improves Communication b/w Analyst & User.
Provides common database for Control.
Easy to locate error.
DIS ADV- No Functional Details.
Not Acceptable by non-technical users.
3: DECISION TREE
ADV- Used to verify logic.
Used where complex decisions are involved.
51. Describe how you would expect documentation to help analyst and
designers?
Introduction:
Documentation is not a step in SDLC. It is an activity on-going in every phase of
SDLC. It is about developing documents initially as a draft, later on the review
document and then a signed-off document.
The document is born either after it is signed off by an authority or after its
review. It carries an initial version number. However, the document also undergoes
changes, and then the only way to keep your document up to date is to incorporate
these changes.
Software Documentation helps Analysts and Designers in the following
ways:
1. The development of software starts with abstract ideas in the minds of the
Top Management of the User organization, and these ideas take different forms
as the software development takes place. The documentation is the only link
across the entire complex process of software development.
2. The documentation is a written communication, therefore, it can be used for
future reference as the software development advances, or even after the
software is developed, it is useful for keeping the software up to date.
3. The documentation carried out during a SDLC stage, say system analysis, is
useful for the respective system developer to draft his/her ideas in the form
which is shareable with the other team members or Users. Thus it acts as a
very important media for communication.
4. The document reviewer(s) can use the document for pointing out the
deficiencies in them, only because the abstract ideas or models are
documented. Thus, documentation provides facility to make abstract ideas,
tangible.
5. When the draft document is reviewed and recommendations incorporated, the
same is useful for the next stage developers, to base their work on. Thus
documentation of a stage is important for the next stage.
6. Documentation is very important because it records very important
decisions about freezing the system requirements, the system design and
implementation decisions, agreed between the Users and Developers or
amongst the developers themselves.
7. Documentation provides a lot of information about the software system. This
makes it very useful tool to know about the software system even without
using it.
8. Since team members keep joining a software development team as the
project goes on, the documentation acts as an important source of detailed
and complete information for the newly joined members.
9. Also, the User organization may spread implementation of a successful
software system to a few other locations in the organization. The
documentation will help the new Users to know the operations of the software
system. The same advantage can be drawn when a new User joins the
existing team of Users. Thus documentation makes the Users productive on
the job very fast and at low cost.
10. Documentation is live and important as long as the software is in use by the
User organization.
11. When the User organization starts developing a new software system to
replace this one, even then the documentation is useful, e.g. the system
analysts can refer to it as a starting point for discussions on the new
system requirements.
Hence, we can say that Software documentation is a very important aspect of SDLC.
52. What are the elements of cost/benefit analysis? Take a suitable example
and give a system proposal for it.
Soln: Cost-benefit analysis:
Cost-benefit analysis is a procedure that gives the picture of the various
costs, benefits and rules associated with each alternative system.
Cost-benefit categories:
In developing cost estimates for a system we need to consider several cost
elements. Among them are following:
1. Hardware costs:
Hardware costs relate to the actual purchase or lease of the computer and
peripherals (e.g. printer, disk drive, tape unit, etc.). Determining the actual costs of
hardware is generally more difficult when the system is shared by many users than
for a dedicated stand-alone system.
2.Personnel Costs:
Personnel costs include EDP staff salaries and benefits (health insurance,
vacation time, sick pay, etc.) as well as payment of those involved in developing the
system . Costs incurred during the development of a system are one-time costs and
are labeled development costs.
3. Facility Costs:
Facility costs are expenses incurred in the preparation of the physical site
where the application or computer will be in operation. This includes wiring, flooring,
lighting and air conditioning. These costs are treated as one-time costs.
4. Operating costs:
Operating costs includes all costs associated with the day-to-day operation of
the system. The amount depends on the number of shifts, the nature of the
application, and the caliber of the operating staff. The amount charged is based on
computer time, staff time and the volume of output produced.
5. Supply Costs:
Supply costs are variable costs that increase with increased use of paper,
ribbons, disks and the like.
Procedure for cost-benefit Determination:
The determination of costs and benefit entails the following steps:
1. Identify the costs and benefits pertaining to a given project.
2. Categorize the various costs and benefits for analysis.
3. Select a method of evaluation.
4. Interpret the result of the system.
5. Take action.
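For step 3, one commonly selected method of evaluation is net present value, which discounts each year's net benefit back to today. This is a minimal sketch with all figures invented for illustration; it is not a method prescribed by the text.

```python
# Net present value: discount each year's (benefits - costs) back to
# today and sum. A positive NPV favours the proposed system.
# All figures below are invented for illustration.

def npv(cash_flows, rate):
    """cash_flows[t] = benefits - costs in year t (t = 0 is today)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Year 0: one-time development cost; years 1-3: net operating benefit.
flows = [-10000, 5000, 5000, 5000]
print(round(npv(flows, 0.10), 2))   # 2434.26, so the proposal pays off
```

Other common evaluation methods mentioned in the same context are payback analysis and break-even analysis; the choice of method is part of step 3.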
Classification of costs and benefits:
1. Tangible and intangible costs and benefits:
a. Tangible refers to the ease with which costs or benefits can be
measured. An outlay of cash for a specific item or activity is referred to
as tangible costs. The purchase of a hardware or software, personnel
training and employee salaries are examples of tangible costs.
b. Costs that are known to exist but whose financial value cannot be
accurately measured are referred to as intangible costs. For
example, employee morale problems caused by a new system or a
lowered company image is an intangible cost.
c. Benefits can also be classified as tangible and intangible. Tangible
benefits such as completing jobs in few hours or producing reports
with no errors are quantifiable.
d. Intangible benefits such as more satisfied customers or an improved
corporate image are not easily quantified.
2. Direct and Indirect Cost and benefits:
a. Direct costs are those with which a money figure can be directly
associated in a project. They are applied directly to a particular
operation. For example the purchase of a box of diskettes for $35 is a
direct cost.
b. Indirect costs are the result of operation that are not directly
associated with a given system or activity. They are often referred to
as overhead.
c. Direct benefits also can be specifically attributable to a given project.
For example a new system that can handle 25 percent more
transaction per day is a direct benefit.
d. Indirect benefits are realized as a by-product of another activity or
system.
3. Fixed or variable cost and benefits:
a. Fixed costs are sunk costs. They are constant and do not change. Once
encountered, they will not recur. Examples are straight-line
depreciation of hardware, exempt employee salaries, and insurance.
b. Variable costs are incurred on a regular basis. They are usually
proportional to work volume and continue as long as system is in
operation. For example the costs of the computer forms vary in
proportion to amount of processing or the length of the reports
required.
c. Fixed benefits are also constant and do not change. An example is a
decrease in the number of personnel by 20 percent resulting from the
use of a new computer.
d. Variable benefits are realized on a regular basis. For example, consider
a safe deposit tracking system that saves 20 minutes preparing
customer notices compared with the manual system.
Examples of Tangible benefits:
Fewer processing errors.
Increased throughput.
Decreased response time.
Elimination of job step.
Reduced expenses.
Increased sales.
Faster turnaround.
Better credit.
Reduced credit losses.
Examples of Intangible Benefits:
Improved customer goodwill.
Improved employee morale.
Improved employee job satisfaction.
Better service to the community.
53. What are the sources of information used for evaluating hardware and
software? Which source do you consider the most reliable and why?
Hardware Selection:
1. Determining Size and Capacity Requirements:
The starting point in the equipment decision process is the size and
capacity requirements. One particular system may be appropriate for one workload
and inappropriate for another. Systems capacity is frequently the determining factor.
Relevant features to consider include the following:
1. Internal memory size.
2. Cycle speed of system for processing.
3. Number of channels for input, output and communication.
4. Characteristics of display and communication components.
5. Types and numbers of auxiliary storage units that can be attached.
6. Systems support and utility software provided or available.
Auxiliary storage capacity is generally determined by file storage and processing
needs. To estimate the disk storage needed for a system, the analyst must consider
the space needed for each master file, the space for programs and software,
including systems software, and the method by which backup copies will be made.
When using flexible diskettes on a small business system the analyst must determine
whether master and transaction files will be maintained on the same diskette and on
which diskette programs will be stored. Backup considerations as well as file size,
guide the decision about how many disk drives are needed.
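The auxiliary-storage estimate described above is simple arithmetic: space for each master file, plus programs and system software, plus backup copies. The sketch below is a minimal illustration; every figure in it is invented.

```python
# Estimating disk storage for a system. All figures are invented
# assumptions, not values from the text.

record_bytes = 200          # assumed size of one master-file record
record_count = 50_000       # assumed number of records
program_bytes = 5_000_000   # assumed space for programs and system software
backup_copies = 1           # one full backup copy of everything

file_bytes = record_bytes * record_count
total_bytes = (file_bytes + program_bytes) * (1 + backup_copies)
print(total_bytes / 1_000_000, "MB")   # 30.0 MB
```

The backup multiplier reflects the text's point that the backup method, as well as file size, guides how much storage (and how many drives) are needed.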
2. Design of Synthetic Programs:
A synthetic job is a program written to exercise a computer's resources in a way
that allows the analyst to imitate the expected job stream and determine the
results. Then the artificial job stream can be adjusted and rerun to determine the
impact. The process can be repeated as many times as necessary to see which tasks
a comparison set of computers handles well and which they do not handle as well.
The synthetic jobs can be adjusted to produce the same type of activity as actual
programs, including random access of files, sequential searching of files with varying
size records.
3. Plug Compatible Equipment:
Working from this basic set of questions, coupled with mandated cost and
expenditure limitations, the analyst is able to quickly remove from consideration
those packages that do not meet requirements. Then it is necessary to further
examine the remaining candidates for adoption on the basis of other attributes such
as flexibility, capacity and vendor support.
2. Flexibility:
The flexibility of a software system should include the ability to meet
changing requirements and varying user needs. Software that is flexible is generally
more valuable than a program that is totally inflexible. However, excessive
flexibility is not desirable, since that requires the user or analyst to define many
details in the system that could be included in the design as the standard feature.
Areas where flexibility is needed are data storage, reporting and options,
definition of parameters, and input output. The flexibility of software varies according
to the types of hardware it will support. The capability to instruct the system to
handle one of the optional formats is another dimension of software flexibility.
3. Audit and Reliability Provisions:
Users often have a tendency to trust systems more than they should, to the
extent that they frequently believe the results produced through a computer-based
information system without sufficient skepticism. Therefore the need to ensure that
adequate controls are included in the system is an essential step in the selection of
the software. Auditors must have the ability to validate reports and output and to
test the authenticity and accuracy of data and information.
Systems reliability means that the data are reliable, that they are accurate and
believable. It also includes the element of security, which the analyst evaluates by
determining the method and suitability of protecting the system against
unauthorized use. Ensuring system has passwords is not sufficient access protection.
Multiple levels of passwords are often needed to allow different staff members access
to those files and databases or capabilities that they need.
4. Capacity:
Systems capacity refers to the number of files that can be stored and the
amount each file will hold. To show complete capacity, it may be necessary to
consider the specific hardware on which the software will be used. Capacity also
depends on the language in which the software is written.
Capacity is also determined by following:
The maximum size of each record measured in number of bytes.
54. STAR HOTEL is a medium-size hotel having a capacity of 100 rooms
belonging to different categories such as AC/Non-AC/Deluxe/Super
Deluxe/Suite etc.
The main purpose of the Hotel is to computerize its billing
system and get some finance-related MIS reports.
The Hotel wants to keep the following information in their database:
1. Customers who are booking and checking in at the hotel.
2. The Hotel offers different kinds of services such as Bar and
Restaurant, Laundry, Room Service, Rent-a-Car etc.
3. The customers are charged daily for the given services, through
vouchers, and such transactions are maintained.
4. The room tariff depends on the category of the room selected by
the customer.
5. When the customer checks out, the bill is generated and details of
payments received are maintained for generating various financial
reports.
a. Draw context level DFD for the above case.
b. Draw an ER-Diagram mentioning the key attributes of the Entities.
(May-04)
55. Draw screen layouts for capturing the information written on the following
input documents:
i. Purchase Order.
ii. Case paper of patients admitted in the Hospital.
iii. Savings Bank Account opening form. [M-04]
Purchase Order

Purchase Order No.:                 Date:
Supl No.:
Supplier:
Address:
Pin:

Sr. No | Class | Title | Qty | Rate | Value
1.
2.
3.
4.
5.

                                    Signature
A decision table consists of condition statements and action statements,
evaluated against decision rules (condition entries and action entries).
For example:
Conditions:
C1 patient has basic health insurance
C2 patient has social health assurance

Decision rules (condition entries):
Rule:  1   2   3   4
C1:    Y   N   Y   N
C2:    Y   N   N   Y
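A decision table like the one above maps each combination of condition entries to an action. This minimal Python sketch follows the two patient-insurance conditions from the example, but the action strings are invented placeholders, not from the case.

```python
# A decision table as a lookup: each rule is a tuple of condition
# entries (C1, C2) mapped to an action. The action names are
# invented placeholders for illustration.

rules = {
    (True,  True):  "apply combined cover",
    (True,  False): "apply basic cover only",
    (False, True):  "apply social cover only",
    (False, False): "bill patient directly",
}

def decide(has_basic_insurance, has_social_assurance):
    # C1 = basic health insurance, C2 = social health assurance.
    return rules[(has_basic_insurance, has_social_assurance)]

print(decide(True, False))   # apply basic cover only
```

Because the table enumerates every combination of conditions, it is easy to check for completeness: with 2 conditions there must be exactly 4 rules.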
57. Solve
a. Draw the report of the pay-slip given to employees in a payroll system.
b. Draw the layout of the Hotel Bill given to a customer for lodging and
boarding.
Employee No:              Employee Name:
Designation:              Branch Name:
Month:                    Basic:             Days:

Sal Code | Earnings | Current | Arrears    Sal Code | Deduction  | Current | Arrears
           Basic                                      NPF
           DA                                         EDB
           TSA                                        PT
           MMA                                        PF-ADV-BAL
           HRA                                        PF-AC-NO
           Total

COIN B/F:        RC:          COIN C/F:        POST-EXP:
PAN No:                       Net Rs:

Signature                     Officer In Charge
b) Feasibility Study
The feasibility of a project can be ascertained in terms of technical factors, economic
factors, or both. A feasibility study is documented with a report showing all the
ramifications of the project. In project finance, the pre-financing work (sometimes
referred to as due diligence) is to make sure there is no "dry rot" in the project and
to identify project risks ensuring they can be mitigated and managed in addition to
ascertaining "debt service" capability.
Technical Feasibility
Technical feasibility refers to the ability of the process to take advantage of the
current state of the technology in pursuing further improvement. The technical
capability of the personnel, as well as the capability of the available technology,
should be considered.
Managerial Feasibility
Managerial feasibility involves the capability of the infrastructure of a process to
achieve and sustain process improvement. Management support, employee
involvement, and commitment are key elements required to ascertain managerial
feasibility.
Economic Feasibility
This involves the feasibility of the proposed project to generate economic benefits. A
benefit-cost analysis and a breakeven analysis are important aspects of evaluating
the economic feasibility of new industrial projects.
Financial Feasibility
Financial feasibility should be distinguished from economic feasibility. Financial
feasibility involves the capability of the project organization to raise the appropriate
funds needed to implement the proposed project. Project financing can be a major
obstacle in large multi-party projects because of the level of capital required. Loan
availability, credit worthiness, equity, and loan schedule are important aspects of
financial feasibility analysis.
Cultural Feasibility
Cultural feasibility deals with the compatibility of the proposed project with the
cultural setup of the project environment. Planned project functions must be
integrated with the local cultural practices and beliefs. For example, religious
beliefs may influence what an individual is willing to do or not do.
Social Feasibility
Social feasibility addresses the influences that a proposed project may have on the
social system in the project environment. The ambient social structure may be such
that certain categories of workers may be in short supply or nonexistent. The effect
of the project on the social status of the project participants must be assessed to
ensure compatibility. It should be recognized that workers in certain industries may
have certain status symbols within the society.
Safety Feasibility
Safety feasibility is another important aspect that should be considered in project
planning. Safety feasibility refers to an analysis of whether the project is capable of
being implemented and operated safely with minimal adverse effects on the
environment. Unfortunately, environmental impact assessment is often not
adequately addressed in project planning.
Political Feasibility
A politically feasible project may be referred to as a "politically correct project."
Political considerations often dictate the direction for a proposed project. This is
particularly true for large projects with national visibility that may have significant
government inputs and political implications. For example, political necessity may be
a source of support for a project regardless of the project's merits. On the other
hand, worthy projects may face insurmountable opposition simply because of
political factors. Political feasibility analysis requires an evaluation of the
compatibility of project goals with the prevailing goals of the political system.
Environmental Feasibility
Often a killer of projects through long, drawn-out approval processes and outright
active opposition by those claiming environmental concerns. This is an aspect that
deserves attention in the very early stages of a project. Concern must be shown
and action must be taken to address any and all environmental concerns raised or
anticipated. A perfect example was the recent attempt by Disney to build a theme
park in Virginia. After a lot of funds and effort, Disney could not overcome the
local opposition to the environmental impact that the Disney project would have
on the historic Manassas battleground area.
Market Feasibility
Another concern is market variability and impact on the project. This area should
not be confused with the Economic Feasibility. The market needs analysis to view
the potential impacts of market demand, competitive activities, etc. and "divertible"
market share available. Price war activities by competitors, whether local, regional,
national or international, must also be analyzed for early contingency funding and
debt service negotiations during the start-up, ramp-up, and commercial start-up
phases of the project.
58.c
Information Gathering
Gathering all the relevant information is one of the most crucial tasks in the
analysis of a system. Sources include:
Procedure manual
Book of rules
Group Discussion
Trade Journals
Conference Proceedings
Trade Statistics
Interviews
o Taking a prior appointment, informing the purpose and time needed for the
interview always helps.
Group Discussions
Questionnaires
59. What is a data structure? What is its relation to data elements, data
processes, data flows, and data stores? [Nov-04]
Data Elements:
The most fundamental level is the data element; e.g. invoice no., invoice
date, and amount due are data elements included in the invoice data flow.
These serve as building blocks for all other data in the system. By
themselves, they do not convey enough meaning to any user; e.g. the meaning of
the data item DATE on an invoice is well understood: it means the date the
invoice was issued. However, out of this context it is meaningless. It might
pertain to a pay date, graduation date, starting date or invoice date.
Data Structures:
A data structure is a set of data items that are related to one another and that collectively describe a component in the system. Both data flows and data stores are data structures. They consist of relevant elements that describe the activity or entity being studied.
Relationship between data structures, data elements, data processes, data flows, and data stores:
A data element is the smallest unit of data; it allows no further decomposition.
A data structure is a group of data elements handled as a unit, e.g. a Student data structure consisting of student_id, student_name, gender, prog_enrolled, blood_grp, address, contact_no.
Data flows and data stores are data structures in motion and data structures at rest, respectively.
A process is a procedure that transforms incoming data flows into outgoing data flows.
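The relationships above can be sketched in code; a minimal sketch in which the field names follow the Student example from the text, while the store and process names are illustrative assumptions.

```python
from dataclasses import dataclass

# Each field is a data element: the smallest unit of data, with no
# further decomposition. The class as a whole is a data structure,
# a group of related elements handled as one unit.
@dataclass
class Student:
    student_id: str
    student_name: str
    gender: str
    prog_enrolled: str
    blood_grp: str
    address: str
    contact_no: str

# A data store holds data structures at rest; a data flow is a data
# structure in motion, here the Student passed into the process below.
store = {}

def enroll(student: Student) -> None:
    """A process: transforms an incoming data flow into a record at rest."""
    store[student.student_id] = student

enroll(Student("S01", "Asha", "F", "BSc IT", "O+", "Pune", "9800000000"))
print(store["S01"].student_name)  # Asha
```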
60. Build an airline reservation system. Draw the context level diagram, a DFD up to two levels, an ER diagram, and the input and output screens. [Nov-04]
62. Describe the concepts and procedure used in constructing DFDs, using an example of your own to illustrate. [Apr-04]
Data flow diagram (DFD):
A DFD is a process model used to depict the flow of data through a system and the work or processing performed by the system. It is also known as a bubble chart, transformation graph and process model. The DFD is a graphic tool to describe and analyze the movement of data through a system, using the processes, stores of data and delays in the system.
DFDs are of two types:
A) Physical DFD
B) Logical DFD
Physical DFD:
It represents an implementation-dependent view of the current system and shows what tasks are carried out and how they are performed. Physical characteristics are:
1. names of people
2. form and document names or numbers
3. names of departments
4. master and transaction files
5. locations
6. names of procedures
Logical DFD:
It represents an implementation-independent view of the system and focuses on the flow of data through the system rather than on the specific devices, storage locations, or people in the system. It does not specify the physical characteristics listed above for physical DFDs.
The most useful approach to developing an accurate and complete description of the system begins with the development of a physical DFD, which is then converted to a logical DFD.
DFD symbols: process, data store, external entity.
Procedure:
Step 1: make a list of business activities & use it to determine:
External entities
Data flows
Processes
Data stores
Step 2: draw a context level diagram:
The context level diagram is a top-level diagram and contains only one process, representing the entire system. Anything that is not inside the context diagram will not be part of the system study.
Step 3: develop a process chart:
It is also called a hierarchy chart or decomposition diagram. It shows the top-down functional decomposition of the system.
Step 4: develop the first-level DFD:
It is also known as diagram 0 or the level 0 diagram. It is the explosion of the context level diagram. It includes data stores and external entities. Here the processes are numbered.
Step 5: draw more detailed levels:
Each process in diagram 0 may in turn be exploded to create a more detailed DFD. New data flows and data stores are added. There is further decomposition/leveling of processes.
E.g. library management system: process 1.0, Maintenance of Book Master, receives data from the Librarian external entity and updates the Book Master data store.
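The leveling steps above can be sketched as data; a minimal sketch with illustrative names, showing the balancing check that every flow into a parent process must reappear at the child level.

```python
# Each DFD level held as a small graph: nodes (processes, entities,
# stores) plus labelled data flows between them (names illustrative).
context = {
    "processes": {"0": "Library Management System"},
    "entities": ["Librarian"],
    "stores": [],
    "flows": [("Librarian", "0", "book details")],
}

# Exploding process 0 yields diagram 0 (the first-level DFD): numbered
# sub-processes, with data stores now visible.
level_0 = {
    "processes": {"1.0": "Maintenance of Book Master"},
    "entities": ["Librarian"],
    "stores": ["Book Master"],
    "flows": [("Librarian", "1.0", "book details"),
              ("1.0", "Book Master", "book record")],
}

# Balancing rule: every flow into the parent process must reappear as an
# input to some child process at the next level.
parent_inputs = {lbl for (src, dst, lbl) in context["flows"] if dst == "0"}
child_inputs = {lbl for (src, dst, lbl) in level_0["flows"]
                if dst in level_0["processes"]}
print(parent_inputs <= child_inputs)  # True
```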
65. Discuss the six special system tests. Explain the purpose of each. Give
example to show the purpose of the tests.
Following are the types of system testing:
Functional Requirement Testing
Regression Testing
Parallel Testing
Execution Testing
Recovery Testing
Operations Testing
The first three types of system testing focus on testing the functional aspects of the system, i.e. examining if the system is doing all that it was expected to do, and doing it completely and accurately.
The last three types of system testing focus on testing the structural aspects of the system, i.e. examining if the structure on which the system is built meets expectations, and how well.
1) Functional Requirement Testing
This is described as follows:
a. The focus of this system testing is to ensure that the system
requirements and specifications are all met by the system. It also
examines, if the application system meets the user standards.
b. The test conditions are created directly from the user requirements
and they aim to prove the correctness of the system functionality.
c. The development team may prepare a list of core functionality to
check and follow it rigorously during the system testing for completion.
d. Every software system must be tested for the functional requirements
testing. It is a mandatory system testing.
e. The testing activities should begin at the system analysis phase and
continue in every phase, as expected.
2) Regression Testing
This is described as follows:
a. Software testing is peculiar in that a system change carried out in one part of the application system may change its (previously tested) functionality.
b. Regression testing examines that the changes carried out in one part have not changed the functionality in the other parts of the system.
c. This may happen if a change (in software requirements or due to testing) is implemented incorrectly, either to the wrong program (or lines) or wrongly.
d. In order to carry out regression testing, the previously run test-data packs are applied at the input of the tested and/or unchanged programs, and their actual outputs are compared to match exactly with the expected outputs.
e. It also checks that there is no change in the manual procedures of the unchanged parts of the application system.
f. Regression testing is more useful during system testing of large, complex systems, where the development team is very large and/or team communication may be weak.
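The idea in point (d) can be sketched as follows: previously captured test-data packs are re-run and the actual outputs must match the recorded expected outputs. The function under test and its pack are illustrative assumptions.

```python
# Illustrative system function under test (discount rules assumed).
def compute_discount(amount):
    if amount > 10000:
        return round(amount * 0.03, 2)
    if amount >= 5000:
        return round(amount * 0.02, 2)
    return 0.0

# Test-data pack recorded before the change: (input, expected output).
regression_pack = [
    (20000, 600.0),
    (7500, 150.0),
    (1000, 0.0),
]

# Regression check: every recorded case must still produce its expected
# output exactly after the change.
failures = [(inp, exp, compute_discount(inp))
            for inp, exp in regression_pack
            if compute_discount(inp) != exp]
print("regression passed" if not failures else failures)
```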
5) Recovery Testing
This is described as follows:
a. Due to any attempts of attacks on the software system's integrity, such as a virus or unauthorized intrusion, the system's integrity may be threatened or sometimes harmed. In that case, the software system's reliability is said to be in danger.
b. Recovery testing is expected to examine how far the system can recover from such a disintegrated state, and how fast.
c. Recovery testing is also aimed at the following:
Establishing the procedures for successful recovery.
Create Operational documentation for recovery procedure.
Train the users on the same and provide them an opportunity
to develop/build confidence on the system security.
d. The integrity loss may happen at any time for any long duration.
Therefore the system recovery processing must identify the time of
failure, duration of failure and the scope of damages carried out. The
recovery testing should examine various commonly occurring
situations and even some exceptional situations.
e. The recovery procedure essentially involves the procedures for backing up data, and documenting and training on the same.
f. The recovery requirements differ from one application system to another. Therefore it may not be carried out for some application systems.
g. The drawback of recovery system testing is that the number of security-failure scenarios it has been built for may be inadequate.
h. Also, time and budget inadequacies may lead to it being avoided or given inadequate focus.
6) Operation Testing
This is described as follows:
a. Operation testing examines the operations of the new system
in the operational environment of the organization, along with the
other systems being executed simultaneously.
b. Typically, it can be used for other purposes also, as mentioned below:
To examine if the operations documents are complete,
unambiguous and user friendly.
To examine the effectiveness of the User training.
To examine the completeness and correctness of the Job
Control source codes developed to automate system
operations.
To examine the operations of external interfaces and their
Users efficiency in handling them at work.
c. The advantage of operation testing is that it helps surface otherwise hidden problems related to user training, documentation, and other operational flaws.
d. Also, if planned properly, it need not be carried out separately; it can be combined with other testing to save testing time, effort and costs.
e. The drawback is that the users' availability is always for a very limited time. Therefore, this testing is either avoided or inadequately performed.
66. Consider a payroll system for a college. Explain the system to be developed and work through the tasks. Develop the context level DFD, draw the physical and logical DFDs, the data dictionary, and the ERD diagram.
67. What is the purpose of the system study? Whom should it involve? What outcome is expected?
Different stages in the System Study (SDLC) addresses the following keys points:
1. Recognition of needs:
Identify the business problem and opportunities.
2. Feasibility Study:
Check whether the problem is worth solving. Redefine the problem.
3. Analysis:
Select appropriate solution for solving the problem.
4. Design:
Design the system to address how the problem must be solved and define the system flow. The user's approval is important.
Proper testing should be exercised over each and every program/module.
5. Implementation:
Actual operations should be identified. Manuals should be provided to the
user.
6. Post Implementation and Maintenance:
Proper maintenance support should be provided. Modifications are done if
some change occurs.
The primary source of information for the functional system requirements is the various types of stakeholders of the new system. Stakeholders are all the people who have an interest in the successful implementation of the system.
Stakeholders are classified in three groups:
1. The users: who actually work on the system on a daily basis.
2. The clients: who pay for or own the system.
3. The technical staff: who ensure that the system operates in the computing environment of the organization.
The next important step, after identifying the stakeholders, is to identify the critical persons from each stakeholder type.
A drawback of observation is that people may behave differently when they know they are being observed. In other words, people may let you see what they want you to see.
Interviews:
1. Interview is a fact-finding technique where by the system analyst collects
information from individuals through face-to-face interaction.
2. There are two roles assumed in an interview:
a. Interviewer: The system analyst is the interviewer responsible for
organizing interviews.
b. Interviewee: The system user or system owner is the Interviewee,
who is asked to respond to a series of questions.
3. There are two types of interviews:
a. Unstructured interviews: This is an interview that is conducted with
only a general goal or subject in mind and with few if any specific
questions. The interviewer counts on the interviewee to provide a
framework and direct the conversation.
b. Structured interviews: This is an interview in which the interviewer
has a specific set of questions to ask of the interviewee.
4. Unstructured interviews tend to involve asking open-ended questions while structured interviews tend to involve asking more closed-ended questions.
Advantages:
1. Interviews give the analyst an opportunity to motivate the interviewee to
respond freely and openly to questions.
2. Interviews allow the system analyst to probe for more feedback from the
interviewee.
3. Interviews permit the system analyst to adapt or reword questions for each individual.
4. A good system analyst may be able to obtain information by observing the interviewee's body movements and facial expressions as well as by listening to verbal replies to questions.
Disadvantages:
1. Interviewing is a very time-consuming fact-finding approach.
2. It is costlier than other approaches.
3. The success of interviews is highly dependent on the system analyst's human skills.
4. Interviewing may be impractical due to the location of the interviewees.
Questionnaires:
1. A questionnaire is a special-purpose document that allows the analyst to collect information and opinions from respondents.
2. There are two formats for questionnaires:
a. Free-format questionnaires: these are designed to offer the respondent greater latitude in answering. A question is asked, and the respondent records the answer in the space provided after the question.
69. What are the basic components of a file? Give an example of each. Explain how files differ.
70. Explain how you would expect documentation to help analysts and designers.
Introduction:
Documentation is not a step in the SDLC. It is an activity ongoing in every phase of the SDLC. It is about developing documents initially as a draft, later as a reviewed document and then as a signed-off document.
A document is born either after it is signed off by an authority or after its review. It carries an initial version number. However, the document also undergoes changes, and then the only way to keep the document up to date is to incorporate these changes.
Software Documentation helps Analysts and Designers in the following
ways:
12. The development of software starts with abstract ideas in the minds of the
Top Management of User organization, and these ideas take different forms
as the software development takes place. The Documentation is the only link
between the entire complex processes of software development.
13. The documentation is a written communication, therefore, it can be used for
future reference as the software development advances, or even after the
software is developed, it is useful for keeping the software up to date.
14. The documentation carried out during a SDLC stage, say system analysis, is
useful for the respective system developer to draft his/her ideas in the form
which is shareable with the other team members or Users. Thus it acts as a
very important media for communication.
15. The document reviewer(s) can use the document for pointing out the
deficiencies in them, only because the abstract ideas or models are
documented. Thus, documentation provides facility to make abstract ideas,
tangible.
16. When the draft document is reviewed and recommendations incorporated, the
same is useful for the next stage developers, to base their work on. Thus
documentation of a stage is important for the next stage.
17. Documentation is very important because it records very important decisions about freezing the system requirements, the system design and implementation decisions, agreed between the users and developers or amongst the developers themselves.
18. Documentation provides a lot of information about the software system. This
makes it very useful tool to know about the software system even without
using it.
19. Since team members keep getting added to a software development team as the software development project goes on, the documentation acts as an important source of detailed and complete information for the newly joined members.
20. Also, the user organization may spread implementation of a successful software system to a few other locations in the organization. The documentation will help the new users to learn the operations of the software system. The same advantage can be drawn when a new user joins the existing team of users. Thus documentation makes the users productive on the job very fast and at low cost.
21. Documentation is live and important as long as the software is in use by the user organization.
22. When the user organization starts developing a new software system to replace this one, even then the documentation is useful. E.g. the system analysts can refer to the documentation as a starting point for discussions on the new system requirements.
Hence, we can say that Software documentation is a very important aspect of SDLC.
Functional Requirement Testing:
This is described as follows:
o The focus of this system testing is to ensure that the system requirements and specifications are all met by the system. It also examines if the application system meets the user standards.
o The test conditions are created directly from the user requirements and they aim to prove the correctness of the system functionality.
o The development team may prepare a list of core functionality to check and follow it rigorously during the system testing for completion.
o Every software system must be tested for the functional requirements. It is a mandatory system testing.
o The testing activities should begin at the system analysis phase and continue in every phase, as expected.
Regression Testing:
This is described as follows:
o A system change carried out in one part of the application system may change its (previously tested) functionality.
o Regression testing examines that the changes carried out in one part have not changed the functionality in the other parts of the system.
o To carry it out, the previously run test-data packs are applied at the input of the tested and/or unchanged programs, and their actual outputs are compared to match exactly with the expected outputs.
Parallel Testing:
It is described as below:
o The objective of parallel testing is to ensure that the new application system generates exactly similar output data as the just-previous or current application system.
o Parallel testing for an existing computerized current system mainly involves setting up the environment for accepting the same input data, running the new application system and matching the results to highlight the differences.
o Parallel testing is advantageous because the testing activities are minimal, and user confidence, which is otherwise very difficult to build, is built.
o The drawback of parallel testing is that variations in the new and current system functionality may make it difficult.
Execution testing:
It is described as below:
o Execution testing examines the new system by executing it. The purpose is to check how far it meets the operational expectations of the users.
o It focuses on measuring the actual performance of the system, by measuring the response time, turnaround time, etc.
o Execution testing is also used to find out the resource requirements of the new system, such as internal memory, storage space, etc. This can also be used to plan for capacity expansion.
o Execution testing is advantageous because it can be carried out with little modification along with the other types of testing. Therefore it saves time, effort and cost of total testing.
o The drawback of execution testing is the difficulty of creating situations like remote processing, or the use of new hardware/software that is proposed to be bought but is not in possession at the time of execution.
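Measuring response time, as execution testing does, can be sketched as follows; the operation under test and the acceptance threshold are illustrative assumptions.

```python
import time

# Stand-in for the system operation under test (illustrative).
def operation():
    return sum(range(100000))

# Measure the response time of one execution.
start = time.perf_counter()
result = operation()
elapsed = time.perf_counter() - start

# Assumed operational expectation agreed with the users.
THRESHOLD_SECONDS = 2.0
print(elapsed < THRESHOLD_SECONDS)
```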
Recovery testing:
It is described as below:
o Due to any attempts of attacks on the software system's integrity, such as a virus or unauthorized intrusion, the system's integrity may be threatened or sometimes harmed. In that case, the software system's reliability is said to be in danger.
o Recovery testing is also aimed at the following:
- Establishing the procedures for successful recovery.
- Creating operational documentation for the recovery procedure.
- Training the users on the same and providing them an opportunity to develop/build confidence in the system security.
o The integrity loss may happen at any time, for any long duration. Therefore the system recovery processing must identify the time of failure, the duration of failure and the scope of damages. Recovery testing should examine various commonly occurring situations and even some exceptional situations.
Operations testing:
It is described as below:
o The operation testing is to examine the operations of the new system in the
operational environment of the organization, along with the other systems
being executed simultaneously.
o Typically it can be used for other purposes also, as mentioned below:
1. To examine if the operations documents are complete, unambiguous and user friendly.
2. To examine the effectiveness of the user training.
3. To examine the completeness and correctness of the Job Control source codes developed to automate system operations.
4. To examine the operations of external interfaces and the users' efficiency in handling them at work.
Logical record:
1) A logical record maintains a logical relationship among all the data items in the record.
2) It is the way the program or user sees the data.
3) The software presents the logical record in the required sequence.
Physical record:
1) A physical record is the way data is recorded on the storage medium.
2) The programmer does not need to be concerned with the physical map of the data on the disk.
Data Item:
1) Individual elements of data are called data items (also known as fields). Each data item is identified by name and has a specific value associated with it. The association of a value with a field creates one instance of the data item.
2) Data items can comprise sub-items or subfields; for example, Date is often used as a single data item, consisting of the subfields month, day and year.
3) Whenever a subfield of a field is referenced by name, it automatically includes only that subfield and excludes all other subfields in the data item. Therefore the subfield day in the Date data item excludes month and year.
Data Fields:
1) Fields are the smallest units of meaningful data stored in a file or database. There are four types of fields that can be stored: primary keys, secondary keys, foreign keys, and descriptive fields.
File Activity:
1) It specifies the percentage of actual records processed in a single run.
2) If a small percentage of records is accessed at any given time, the file should be organized on disk for direct access; if a fair percentage is affected regularly, then storing the file on tape would be more efficient and less costly.
File Volatility:
1) It addresses the properties of record changes.
2) File records with substantial changes are highly volatile, meaning a disk design would be more efficient than tape.
3) The higher the volatility, the more attractive is the disk design.
Sequential:1) In computer science sequential access means that a group of elements (e.g.
data in a memory array or a disk file or on a tape) is accessed in a predetermined,
ordered sequence. Sequential access is sometimes the only way of accessing the
data, for example if it is on a tape. It may also be the access method of choice, for
example if we simply want to process a sequence of data elements in order.
2) Write operations require that data is first arranged in order; thus data needs to be sorted before entry. Appending (adding at the end of the file) is simple.
3) To access a record, the previous records within the block are scanned; thus the sequential record design is best suited for "get next" activities, reading one record after another without a search delay.
4) Advantages:
Simple to design
Easy to program
Variable-length and blocked records are available
Best use of disk storage
5) Disadvantage:
Records cannot be added to the middle of the file.
Indexed Sequential:1) ISAM stands for Indexed Sequential Access Method, a method for storing data for
fast retrieval. ISAM was originally developed by IBM and today forms the basic data
store of almost all databases, both relational and otherwise.
2) The difference is in the use of indexes to locate the records.
3) Disk storage:
Disk storage is divided into 3 parts:
1) Prime area: contains file records stored by key or id number.
2) Overflow area: contains records added to the file that cannot be placed in the logical sequence in the prime area.
3) Index area: contains keys of records and their locations on the disk.
4) Advantages:
Reduces the magnitude of the sequential search.
Records can be inserted or updated in the middle of the file.
5) Disadvantages:
Extra storage is required.
Longer to search.
Periodic reorganization of the file is required.
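The prime/overflow/index structure above can be sketched in code; a minimal in-memory sketch of the idea, not real ISAM, with illustrative keys and records.

```python
# Prime area: records stored in key sequence.
prime_area = [("K01", "rec1"), ("K03", "rec3"), ("K05", "rec5")]
# Overflow area: records that no longer fit the physical key sequence.
overflow_area = []
# Index area: maps each key to the location of its record.
index = {key: pos for pos, (key, _) in enumerate(prime_area)}

def insert(key, rec):
    # A new key that breaks the physical sequence goes to the overflow
    # area; the index still lets us find it without a sequential scan.
    overflow_area.append((key, rec))
    index[key] = ("overflow", len(overflow_area) - 1)

def lookup(key):
    loc = index[key]
    if isinstance(loc, tuple):           # stored in the overflow area
        return overflow_area[loc[1]][1]
    return prime_area[loc][1]            # stored in the prime area

insert("K04", "rec4")
print(lookup("K03"), lookup("K04"))  # rec3 rec4
```

Periodic reorganization corresponds to merging the overflow area back into a freshly sorted prime area and rebuilding the index.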
amplification. Some of these questions may be planned and others spontaneous.
Unstructured interviews tend to involve asking open-ended questions. These questions give the interviewee significant latitude in their answers. An example of an open-ended question is "Why are you dissatisfied with the report of uncollectible accounts?".
Structured interviews tend to involve asking more closed-ended questions that are designed to elicit short, direct responses from the interviewee. Examples of such questions are "Are you receiving the report of uncollectible accounts on time?" and "Does the report of uncollectible accounts contain accurate information?". Realistically, most questions fall between the two extremes.
76. Draw a context diagram for a purchasing system. Also draw two levels of detail for the same. Write data dictionary entries for any 2 data elements, 2 data stores, 2 processes and 2 data structures of your choice for the above system.
CONTEXT DIAGRAM (summary of the figure):
External entities: Customer, Warehouse, Purchasing Dept, Supplier.
The Customer sends an order request to the Purchasing System and receives bills. The Warehouse sends low inventory notices and product details and receives shipment requests. The Purchasing Dept receives order notifications and low inventory notices. The Supplier receives purchase orders and payment, raises invoices and supplies product.

FIRST AND SECOND LEVEL DFD (summary of the figure):
1.0 Manage stock: the Warehouse supplies stock details, which are recorded in data store 1 (Stock master).
2.0 Process order: the Customer supplies customer and order details; order details and customer details are recorded, and a shipment request is sent to the Warehouse.
3.0 Process request: the Warehouse gives product confirmation/low inventory notification to the system; the system gives confirmation of delivery to the Customer and the low inventory notice to the Purchasing Dept.
4.0 Make purchase order: the Purchasing Dept supplies purchasing details; purchase details are recorded in data store 4 (Purchase details) and purchase order details are sent to the Supplier.
5.0 Receive product: the Supplier supplies product; the receipt of the supplied product is recorded against the purchasing details for the Purchasing Dept and the Warehouse.
6.0 Manage inventory: purchasing order details and purchasing details update inventory details in data store 1 (Stock master) and data store 4 (Purchase details).
7.0 Generate invoice: the Supplier raises an invoice; invoice details are recorded in data store 5 (Invoice master) for the Purchasing Dept.
8.0 Make payment: the Purchasing Dept makes payment to the Supplier against the invoice details; payment details are recorded in data store 2 (Payment details).
9.0 Generate bills: bills are issued to the Customer from the order details, invoice details (data store 5) and customer details (data store 3).
10.0 Receive payment: the Customer makes payment; payment details are recorded in data store 2 (Payment details).
DATA DICTIONARY
Data structures:
1. Shipment Request
Order number
Product id
Product name
Quantity
Order date
Invoice details
Order details
Customer details
Data stores:
1. Order details
Order number
Product code
Product name
Quantity demanded
Quantity supplied
Customer name
Customer id
2. Stock Master
Product code
Product name
Quantity
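One of the elements above, written out with the descriptive fields a system-development data dictionary records (name, source/destination, type, size, usage, DFD process references, validation). The concrete values and process numbers are illustrative assumptions.

```python
# A data dictionary entry for one data element (values illustrative).
order_number = {
    "name": "Order number",
    "source_destination": "Customer / Order details store",
    "type": "numeric",
    "size": 8,
    "usage": "input",            # read-only in the referenced process
    "dfd_processes": ["2.0"],    # Process order
    "validation": "must be unique and non-empty",
}

# One entry per usage: the same element updated elsewhere would get its
# own entry keyed by (name, usage).
data_dictionary = {(order_number["name"], order_number["usage"]): order_number}
print(sorted(data_dictionary))
```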
77. Develop a decision tree and a decision table for the following:
a. If the person is under 3 years of age, there is no admission fee. If a person is under 16, half the full admission is charged, and this admission is reduced to a quarter of the full admission if the person is accompanied by an adult (the reduction applies only if the person is under 12). Between 16 and 18, half the full admission fee is charged if the person is a student; otherwise the full admission is charged.
b. Over 18, the full admission fee is charged.
c. A discount of 10% is allowed for a person over 16 if they are in a group of 10 or more.
d. There are no student concessions during weekends. On weekends, under-12s get one free ride.
DECISION TREE (Ans 79): summary of the branches.
- Age < 3: free.
- 3 <= age < 12: half fee; quarter fee if accompanied by an adult; on weekends, one free ride.
- 12 <= age < 16: half fee.
- 16 <= age <= 18: half fee if a student (not on weekends), otherwise full fee; in a group of 10 or more, the 10% discount applies (9/20 fee on the half fee, 9/10 fee on the full fee).
- Age > 18: full fee; in a group of 10 or more, 9/10 fee.

DECISION TABLE
Conditions: age (< 3, 3-12, 12-16, 16-18, > 18); accompanied by adult (yes/no); student (yes/no); group of 10 or more (yes/no); weekend (yes/no).
Actions: free; 1/4 fee; 1/2 fee; full fee; 9/20 fee; 9/10 fee.
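The rules of question 77 can also be expressed as a function; FULL_FEE and the exact branch boundaries are illustrative assumptions, and the weekend free-ride rule is kept out of the fee calculation itself.

```python
FULL_FEE = 100.0  # assumed full admission fee

def admission_fee(age, with_adult=False, student=False,
                  group_size=1, weekend=False):
    if age < 3:
        fee = 0.0
    elif age < 16:
        # Quarter fee only for under-12s accompanied by an adult.
        fee = FULL_FEE / 4 if (age < 12 and with_adult) else FULL_FEE / 2
    elif age <= 18:
        # No student concession on weekends.
        fee = FULL_FEE / 2 if (student and not weekend) else FULL_FEE
    else:
        fee = FULL_FEE
    # 10% group discount for over-16s in groups of 10 or more.
    if age > 16 and group_size >= 10:
        fee *= 0.9
    return fee

print(admission_fee(17, student=True, group_size=10))  # 45.0 (9/20 of full)
print(admission_fee(25, group_size=12))                # 90.0 (9/10 of full)
```

Note that 9/20 of the full fee is exactly the half fee with the 10% discount applied (0.5 x 0.9), matching the 9/20 and 9/10 outcomes in the decision tree.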
What is the difference between system analysis and system design? How does the focus of information system analysis differ from information system design?
System analysis:
System analysis is a problem-solving technique that decomposes a system into its component pieces for the purpose of studying how well those component parts work and interact to accomplish their purpose.
System design:
System design is the process of planning a new business system or one to replace or
complement an existing system.
Information system analysis:
Information system analysis primarily focuses on the business problem and requirements, independent of any technology that can or will be used to implement a solution to that problem.
Information system design:
Information system design is defined as those tasks that follow system analysis and focus on the specification of a detailed computer-based solution.
System analysis emphasizes the business problems; system design focuses on the
technical or implementation concerns of the system.
What are the elements of cost-benefit analysis?
Cost-benefit analysis is a procedure that gives a picture of the various costs, benefits and rules associated with each alternative system.
Cost benefit categories:
In developing cost estimates for a system, we need to consider several cost elements.
They are:
Hardware costs: relate to the actual purchase or lease of the computer and peripherals (e.g. printer, disk drive, tape unit). Determining the actual cost of hardware is generally more difficult when the system is shared by various users than for a dedicated stand-alone system. In some cases, the best way to control this cost is to treat it as an operating cost.
Personnel costs: It includes EDP staff salaries and benefits (health insurance,
vacation time, sick pay etc.) as well as pay for those involved in developing the
system. Costs incurred during the development of a system are one time costs
and are labeled developmental costs. Once the system is installed, the costs of
operating and maintaining the system become recurring costs.
Facility costs: They are the expenses incurred in the preparation of the physical
site where the application or the computer will be in operation. This includes
wiring, flooring, acoustics lighting and air conditioning. These costs are treated as
one time cost and are incorporated into overall cost estimate of the candidate
system.
Operating costs: include all costs associated with the day-to-day operation of the system; the amount depends on the number of shifts, the nature of the applications and the caliber of the operating staff. There are various ways of covering operating costs. One approach is to treat operating costs as overhead. Another method is to charge each authorized user for the amount of processing they request from the system. The amount charged is based on computer time and the volume of the output produced. In any case, some accounting is necessary to determine how operating costs should be handled.
Supply costs: variable costs that increase with increased use of paper, ribbons, disks, etc. They should be estimated and included in the overall cost of the system.
The two major benefits are improving performance and minimizing the cost of processing. The performance category emphasizes improvement in the accuracy of, or access to, information and easier access to the system by authorized users. Minimizing costs through an efficient system, error control or reduction of staff is a benefit that should be measured and included in the cost-benefit analysis.
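The cost elements above divide into one-time developmental costs and recurring costs, which supports a simple payback-period sketch; all figures are illustrative assumptions.

```python
# One-time (developmental) costs versus recurring annual costs and
# benefits (all figures illustrative).
one_time_costs = {"hardware": 50000, "facility": 8000,
                  "development_personnel": 30000}
annual_costs = {"operating": 12000, "supplies": 3000,
                "maintenance_personnel": 10000}
annual_benefits = 45000  # e.g. staff reduction and error-control savings

initial_investment = sum(one_time_costs.values())
net_annual_benefit = annual_benefits - sum(annual_costs.values())

# Payback period: years of net benefit needed to recover the investment.
payback_years = initial_investment / net_annual_benefit
print(round(payback_years, 1))  # 4.4
```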
Summarize the procedure for developing a DFD, using your own example to illustrate.
Structured analysis is a model driven, process centered technique used to either
analyze an existing system or define business requirements for a new system or
both.
One of the tools of structured analysis is the DFD.
DFD is a process model used to depict the flow of data through a system and work or
processing performed by the system.
DFD are of 2 types:
Physical DFD
Logical DFD
Physical DFDs represent an implementation-dependent view of the current system and show what tasks are carried out and how they are performed.
Logical DFDs represent an implementation-independent view of the system and focus on the flow of data rather than on the specific devices, storage locations or people in the system.
The most comprehensive and useful approach to developing an accurate and complete description of the current system begins with the development of physical DFDs, which are then converted to logical DFDs.
Developing DFD:
1. make a list of business activities & use it to determine
External entities i.e. source & sink
Data flows
Processes
Data stores
2. draw a context level diagram
Context level diagram is a top level diag and contains only one process
representing the entire system. It determines the boundaries of the system.
Anything that is not inside the context diag will not be the part of system
study.
3. develop process chart
It is also called as hierarchy charts or decomposition diagram. It shows top
down functional decomposition of the sys.
4. develop the first level DFD
It is aka diag 0 or 0 level diag. It is the explosion of the context level
diagram. More processes are included. It includes data stores and external
entities. Here the processes are numbered.
5. draw more detailed level :
Each process in diagram 0 may in turn be exploded to create a more detailed
DFD. New data flows & data stores are added. There are further
decomposition/ leveling of processes.
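The leveling steps above can be sketched as plain data. The system, process, and flow names below are an invented illustration of an order-processing example, not from the text:

```python
# Minimal sketch of a leveled DFD captured as Python data structures.
# Step 2: the context diagram has one process for the whole system.
context = {
    "process": "0 Order-processing system",
    "external_entities": ["Customer", "Warehouse"],
}

# Step 4: diagram 0 explodes the context diagram; processes are numbered,
# and data stores and external entities appear.
diagram_0 = {
    "processes": ["1 Validate order", "2 Apply discount", "3 Issue invoice"],
    "data_stores": ["D1 Customers", "D2 Orders"],
    "data_flows": [
        ("Customer", "1 Validate order", "order details"),
        ("1 Validate order", "D2 Orders", "valid order"),
        ("2 Apply discount", "3 Issue invoice", "priced order"),
        ("3 Issue invoice", "Customer", "invoice"),
    ],
}

# A basic DFD consistency rule: every data flow must begin or end at a process.
processes = set(diagram_0["processes"])
for src, dst, _label in diagram_0["data_flows"]:
    assert src in processes or dst in processes
print(len(diagram_0["data_flows"]))  # 4
```

In step 5, each process (e.g. "1 Validate order") would get its own child diagram with processes numbered 1.1, 1.2, and so on.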
(Decision tree for the payment-discount policy: the first branch tests the number
of days to payment. If payment is made within 10 days, the amount is tested:
orders over $10,000 take a 3% discount from the invoice total, orders of $5,000
to $10,000 take a 2% discount, and orders of less than $5,000 receive no
discount. If payment takes more than 10 days, no discount is given.)
Decision tables
A decision table is a matrix of rows and columns, rather than a tree that
shows conditions and actions.
The columns on the right side of the table, linking conditions and actions,
form decision rules, which state the conditions that must be satisfied for a
particular set of actions to be taken.
e.g. decision table using Y/N format for the payment discount:

Conditions                      Rules:  1    2    3    4
Paid within 10 days?                    Y    Y    Y    N
Amount over $10,000                     Y    N    N    -
Amount $5,000 to $10,000                N    Y    N    -
Amount less than $5,000                 N    N    Y    -
Actions
Take 3% discount                        X
Take 2% discount                             X
Pay full invoice amount                           X    X
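The payment-discount decision table can be expressed directly as code. This is a sketch of the same policy; the function name and amounts-as-floats representation are my choices:

```python
# The payment-discount decision table as a function: a discount applies only
# when payment is made within 10 days, and its size depends on the amount.

def discount_rate(amount: float, paid_within_10_days: bool) -> float:
    if not paid_within_10_days:
        return 0.0          # pay full invoice amount
    if amount > 10_000:
        return 0.03         # take 3% discount
    if amount >= 5_000:
        return 0.02         # take 2% discount
    return 0.0              # under $5,000: pay full invoice amount

print(discount_rate(12_000, True))   # 0.03
print(discount_rate(7_500, True))    # 0.02
print(discount_rate(7_500, False))   # 0.0
```

Each branch of the function corresponds to one decision rule (one column) of the table.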
Structured English
This technique expresses process logic in a restricted, English-like form:
imperative statements built from a limited vocabulary of action verbs and
data-dictionary names, combined with keywords such as IF-THEN-ELSE, CASE, and
REPEAT/WHILE for decisions and repetition. For example, the discount policy can
be written as:
IF payment is received within 10 days
    IF invoice amount is over $10,000
        THEN take 3% discount
    ELSE IF invoice amount is $5,000 or more
        THEN take 2% discount
    ELSE pay full invoice amount
ELSE pay full invoice amount
Data Dictionary
Data dictionaries are integral components of structured analysis, since data flow
diagrams by themselves do not fully describe the subject of the investigation.
A data dictionary is a catalog, a repository of the elements in a system.
In the data dictionary one will find a list of all the elements composing the data
flowing through a system. The major elements are:
Data flows
Data stores
Processes
The dictionary is developed during data flow analysis and assists the analysts
involved in determining system requirements; its contents are used during system
design as well.
Importance
To manage the details in large systems
To communicate a common meaning for all system elements
To document the features of the system
To facilitate analysis of the details in order to evaluate characteristics and
determine where system changes should be made
To locate errors and omissions in the system
Contents of data dictionary
Data elements
The most fundamental data level is the data element. Data elements are the
building blocks for all other data in the system.
Data structures
A data structure is a set of data items that are related to one another and that
collectively describe a component in the system.
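A single dictionary entry covers the descriptive fields listed earlier (name, type, size, source, usage, DFD references, validation). The concrete element and its values below are invented for illustration:

```python
# Illustrative data-dictionary entry for one data element. Every field value
# here is an assumed example, not taken from a real system.

customer_id = {
    "name": "customer-id",
    "type": "numeric",
    "size": 8,
    "source": "Customer master file",
    "usage": "input (read-only)",
    "dfd_processes": ["1 Validate order", "3 Issue invoice"],
    "validation": "must be an existing customer number",
}

# A data structure groups related elements that describe one component:
order = {"name": "order", "elements": ["customer-id", "order-date", "amount"]}

print(order["elements"][0] == customer_id["name"])  # True
```

Note that the structure refers to the element by name only; the element's full description lives in exactly one dictionary entry.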
What is the reason for selecting the prototype development method? What
are the desired impacts on the application development process?
The system prototype method involves the user more directly in the analysis
and design experience than does the SDLC or structured analysis method.
A prototype is a working system, not just an idea on paper, that is developed
to test ideas and assumptions about the new system. Like any computer-based
system, it consists of working software that accepts input, performs
calculations, produces printed or displayed information, or performs other
meaningful activities. It is the first version or iteration of an information
system, an original model.
The design and the information produced by the system are evaluated by
users. This can be done effectively only if the data are real and the
situations live. Changes are expected as the system is used.
Information requirements are not always well defined. Users may know only
that certain business areas need improvement or that existing procedures must
be changed. Or they may know that they need better information for managing
certain activities but are not sure what that information is.
The users' requirements might be too vague to even begin formulating a
design. In other cases, a well-managed systems investigation may produce a
comprehensive set of system requirements, but building a system that will
meet those requirements may require development of new technology.
Unique situations, about which developers have neither information nor
experience, and high-cost or high-risk situations in which the proposed
design is new and untested, are often evaluated through prototypes.
The prototype is actually a pilot or test model; the design evolves through
use. The method consists of the following steps:
1. Identify the user's known information requirements and the features needed
in the system.
2. Develop a working prototype that implements those features.
3. Use the prototype, noting needed enhancements and changes. These expand
the list of known system requirements.
4. Revise the prototype based on information gained through user experience.
5. Repeat these steps as needed to achieve a satisfactory system.
Feasibility considerations:
Economic feasibility:
- Economic feasibility is a measure of the cost-effectiveness of the project
or solution, usually established through a cost-benefit analysis of the
development and operating costs against the expected benefits.
Technical feasibility :
- Technical feasibility is a measure of the practicality of a technical
solution and the availability of the technical resources and expertise.
- It helps in understanding what level and kind of technology is needed
for a system.
- It includes functions, performance issues and constraints that may
affect the availability of the technical resources and expertise.
- Technical feasibility entails an understanding of the different technologies
involved in the proposed system, the existing technology levels within the
organization, and the level of expertise needed to use the suggested
technology.
Legal feasibility:
- Legal feasibility is a measure of whether the proposed system conflicts with
legal requirements, such as data protection laws or contractual obligations.
Political feasibility:
- Political feasibility is a measure of how key stakeholders within the
organization view the proposed system.
Evaluation of alternatives:
- The candidate solutions that pass these feasibility tests are compared, and
the most feasible alternative is recommended for development.
To structure the data so that any pertinent relationships between entities can
be represented.
To permit simple retrieval of data in response to query and report requests.
To simplify the maintenance of the data through updates, insertions and
deletions.
To reduce the need to restructure or reorganize data when new application
requirements arise.
Systems analysts should be familiar with the steps in normalization, since this
process can improve the quality of design for an application.
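The goals above can be illustrated with a tiny example of the idea behind normalization: a flat order table that repeats customer details is split into two relations linked by a key. The table contents and names are invented for illustration:

```python
# Denormalized data: the customer's name is repeated on every order row,
# so an update or deletion must touch many rows consistently.
flat_orders = [
    {"order_no": 1, "cust_no": 10, "cust_name": "Acme", "amount": 500},
    {"order_no": 2, "cust_no": 10, "cust_name": "Acme", "amount": 900},
]

# Normalized form: customers and orders become separate relations, related
# through the cust_no key.
customers = {row["cust_no"]: row["cust_name"] for row in flat_orders}
orders = [
    {"order_no": r["order_no"], "cust_no": r["cust_no"], "amount": r["amount"]}
    for r in flat_orders
]

# The name is now stored once, so maintenance is a single update,
# and the relationship is still retrievable through the key.
customers[10] = "Acme Ltd"
print(customers[orders[0]["cust_no"]])  # Acme Ltd
```

This is the sense in which normalization simplifies updates, insertions, and deletions while preserving the relationships between entities.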
What are major threats of system security? Which one is more serious?
Why?
System security:
The system security problem can be divided into four related issues:
System security:
System security refers to the technical innovations and procedures applied to
the hardware and operating systems to protect against deliberate or accidental
damage from a defined threat.
System integrity:
System integrity refers to the proper functioning of hardware and programs,
appropriate physical security, and safety against external threats such as
eavesdropping and wiretapping.
Privacy :
Privacy defines the rights of users or organizations to determine what
information they are willing to share with or accept from others, and how the
organization can be protected against unwelcome, unfair, or excessive
dissemination of information about it.
Confidentiality:
Confidentiality is a special status given to sensitive information to
minimize possible invasion of privacy; it governs how data may be handled
and who may access it.
The major threats to system security include:
Errors and omissions: errors and omissions encompass a broad range of
miscues. Some result in minor, short-lived problems, while others corrupt
data or disrupt processing.
Disgruntled and dishonest employees: when huge quantities of information are
stored in one database, sensitive data can easily be copied and stolen. A
dishonest programmer can bypass controls and surreptitiously authorize his
or her own transactions. Dishonest employees have an easier time identifying
the vulnerabilities of a software system than outside hackers do, because
they have access to the system for a much longer time and can capitalize on
its weaknesses.
Fire: fire and other man-made disasters that deny the system power, air
conditioning, or needed supplies can have a crippling effect. In the design
of a system facility, provision should therefore be made for fire-fighting
equipment.
Natural disaster: natural disasters include floods, hurricanes, snowstorms,
lightning, and other calamities. Although there is no way to prevent them
from occurring, there are measures to protect computer-based systems from
being wiped out.
External attack: outside hackers can get into the system and gain access to
confidential, sensitive data. This is possible because of bugs or
vulnerabilities in the current system.
Of these threats, attacks by disgruntled and dishonest employees are often
the most serious, because insiders have prolonged access to the system and
can capitalize on weaknesses that outside hackers would find much harder to
discover.
Define data structure? What are the major types of data structure?
An entity is a conceptual representation of an object. Relationships between
entities make up a data structure.
Three types of relationships exist among entities: one-to-one, one-to-many, and
many-to-many.
A one-to-one (1:1) relationship is an association between two entities in which
each occurrence of one entity is related to exactly one occurrence of the other.
A one-to-many (1:M) relationship describes an entity that may have two or more
entities related to it.
A many-to-many (M:M) relationship describes entities that may have many
relationships in both directions.
Types of data structures:
Data structuring determines whether the system can create 1:1, 1:M, and M:M
relationships among entities. There are three types of data structures:
hierarchical, network, and relational.
(Figures: example hierarchical data structures, each a top-down tree of parent
and child nodes, e.g. a parent-children-toys tree, an auto-parts hierarchy with
Ford and GM branches containing grille, radiator, alternator, distributor,
batteries, and drive shaft, and a degree hierarchy branching into MBA, MCA, and
high school.)
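The three relationship degrees can be sketched with plain Python structures. The entity names below (employees, parking spaces, customers, students, courses) are invented illustrations:

```python
# 1:1 - each employee occupies exactly one parking space.
parking = {"E1": "P7"}

# 1:M - one customer places many orders (one key, many related entities).
orders_by_customer = {"C10": ["O1", "O2", "O3"]}

# M:M - students enroll in many courses and courses have many students;
# this is typically resolved with an association (link) structure.
enrollments = [("S1", "MCA"), ("S1", "MBA"), ("S2", "MCA")]

def courses_of(student):
    """Traverse the link structure in one direction."""
    return [course for st, course in enrollments if st == student]

print(courses_of("S1"))  # ['MCA', 'MBA']
```

A relational data structure stores each of these as tables of rows, with the M:M case represented by the enrollment table of (student, course) pairs.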