
Q. Identify the difference between FSM and FSA. Describe their functional workings, characteristics and applications.

Ans:

In deterministic finite automata, there's exactly one state transition for every input symbol-state pair. There are also no epsilon transitions, meaning that you're not allowed to change states without consuming anything from the input.
In non-deterministic finite automata, there can be 0 or more state transitions for every input-state pair. You can also have epsilon transitions. When there's no state transition for a given input-state pair, we say that the automaton has crashed: it can't proceed processing the input, and therefore it doesn't accept the input. When there's more than one choice of state transition for a given input-state pair, the machine can follow all possible paths (think of it as parallel computation); if one of the paths ends up in an accept state, then we say that the automaton accepts the input string.
Both kinds of automata are equivalent in terms of power, though. It may seem that a nondeterministic automaton is more powerful, but the two models are proven to be equivalent, meaning that they recognize the same class of languages, called the regular languages. The proof of equivalence is by construction, in which you show that given a DFA you can construct an equivalent NFA, and vice versa. The proof can be found in any textbook on the theory of computation.
***********************************
In general, which you probably know, a finite automaton has a set of states, starts in a start state, and reads an input string character by character, each character making it switch states depending on which character it read and which state it was previously in (the state-character pair); this is called the transition function or transition relation. Some states may also have ε-transitions, which let the machine move to another state without reading any character. A certain set of states are designated accept states; whether the finite automaton accepts or rejects depends on whether it's in an accept state or a reject state after reading the entire string. The description I just gave was, intentionally, vague enough to apply to both deterministic and nondeterministic finite automata, which I'll refer to as DFAs and NFAs, respectively, from here on.
In a nondeterministic finite automaton, the transition relation specifies any number of possible states, including 0, 1, 2, or more, that the NFA can transition to for each state-character pair. How it decides which one to take is neither defined nor relevant to the abstract concept, but you can pretend that it chooses uniformly at random. This can produce many different computation paths for the same input string, and we say that an NFA accepts a string if at least one of those paths ends in an accept state.
In some sense, we can just say a deterministic finite automaton is one that isn't
nondeterministic - where the transition function specifies only one future state for each
state-character pair, and thus there is only one computation path on any given input string.
If it ends in a designated accept state, the machine accepts. So the key difference is whether
the computation path is determined by the input string, or if there's some additional
nondeterminism involved. But there's another difference, and it has to do with size.
Obviously, any DFA can be made into an "NFA with no nondeterminism", so any language
accepted by a DFA can be accepted by an NFA. Given an NFA, at any point in the string we
can look at all the computation paths up to this point, and the set of states that at least one
computation path is in; and these sets of states can be considered the states of a DFA, since
this transition is deterministic. (If it's not obvious from this explanation, think about it a
bit.) So the languages recognized by DFAs and those recognized by NFAs are identical.
However, if you take an NFA with n states and make a DFA out of it using this method, that DFA can have up to 2^n states, and for some languages you can't actually do better than that. It's also proven that the languages that can be described by regular expressions are the same set as those that can be recognized by DFAs or NFAs; but given a regular expression of length n, the smallest equivalent NFA has O(n) states, while the smallest DFA in general has O(2^n) states.
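The subset construction just described is easy to carry out in code. Here is a minimal Python sketch (my own illustration; the encoding, with integer NFA states and a dict-based transition relation, is an assumption, and epsilon transitions are omitted for brevity):

from collections import deque

def nfa_to_dfa(nfa_delta, start, accepts, alphabet):
    # Subset construction: each DFA state is a frozenset of NFA states.
    # nfa_delta maps (state, symbol) -> set of possible next states.
    start_set = frozenset([start])
    dfa_delta = {}
    seen = {start_set}
    queue = deque([start_set])
    while queue:
        current = queue.popleft()
        for sym in alphabet:
            nxt = frozenset(s for q in current
                            for s in nfa_delta.get((q, sym), set()))
            dfa_delta[(current, sym)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    # A DFA state accepts iff it contains at least one NFA accept state.
    dfa_accepts = {S for S in seen if S & accepts}
    return dfa_delta, start_set, dfa_accepts

# Example NFA: accepts strings over {0,1} whose second-to-last symbol is 1.
delta = {(0, '0'): {0}, (0, '1'): {0, 1}, (1, '0'): {2}, (1, '1'): {2}}
dfa_delta, q0, final = nfa_to_dfa(delta, 0, {2}, ['0', '1'])
print(len({s for (s, _) in dfa_delta}))  # 4 reachable DFA states

For this NFA only 4 subset-states are reachable, but the 2^n bound above says that in the worst case all subsets can appear.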

Q. Define phrase structure grammar. State and explain different types of phrase structure grammar. How
can you tie up the concept of grammar, languages and FSA together? Give details with examples.
Ans:

Definition of Phrase-Structure Grammars:


A phrase-structure grammar G = (V, T, S, P) consists of a vocabulary V, a subset T of V consisting of terminal symbols, a start symbol S from V, and a finite set of productions P. The set V − T is denoted by N. Elements of N are called nonterminal symbols. Every production in P must contain at least one nonterminal on its left side.

Example: Let G = (V, T, S, P),


Where V = {a, b, A, B, S},
T = {a, b},
S is the start symbol, and
P = {S → ABa, A → BB, B → ab, AB → b}.
G is an example of a phrase-structure grammar.

Types of Phrase-Structure Grammars


Phrase-structure grammars can be classified according to the
types of productions that are allowed. The different types of
languages defined in this scheme correspond to the classes of
languages that can be recognized using different models of
computing machines.
A type 0 grammar has no restrictions on its productions.
A type 1 grammar can have productions of the form w1 → w2, where w1 = lAr and w2 = lwr, where A is a nonterminal symbol, l and r are strings of zero or more terminal or nonterminal symbols, and w is a nonempty string of terminal or nonterminal symbols. It can also have the production S → λ as long as S does not appear on the right-hand side of any other production.
A type 2 grammar can have productions only of the form w1 → w2, where w1 is a single symbol that is not a terminal symbol.
A type 3 grammar can have productions only of the form w1 → w2 with w1 = A and either w2 = aB or w2 = a, where A and B are nonterminal symbols and a is a terminal symbol, or with w1 = S and w2 = λ.
Type 2 grammars are called context-free grammars because a
nonterminal symbol that is the left side of a production can be
replaced in a string whenever it occurs, no matter what else is in
the string. A language generated by a type 2 grammar is called a
context-free language.
When there is a production of the form lw1r → lw2r (but not of the form w1 → w2), the grammar is called type 1 or context-sensitive because w1 can be replaced by w2 only when it is surrounded by the strings l and r. A language generated by a type 1 grammar is called a context-sensitive language.
Type 3 grammars are also called regular grammars. A language generated by a regular grammar is called a regular language.
Language Recognition by Finite-State Machines
Here, we define some terms that are used when studying the
recognition by finite-state automata of certain sets of strings.
DEFINITION
A string x is said to be recognized or accepted by the machine M
= (S, I, f, s0, F) if it takes the initial state s0 to a final state, that is,
f (s0, x) is a state in F. The language recognized or accepted by
the machine M, denoted by L (M), is the set of all strings that are
recognized by M. Two finite-state automata are called equivalent if
they recognize the same language.
EXAMPLE:

Determine the languages recognized by the finite-state automata


M1, M2, and M3 in Figure 1.

FIGURE 1 Some Finite-State Automata.

Solution: The only final state of M1 is s0. The strings that take s0 to itself are those consisting of zero or more consecutive 1s. Hence, L(M1) = {1^n | n = 0, 1, 2, . . . }.
The only final state of M2 is s2. The only strings that take s0 to s2 are 1 and 01. Hence, L(M2) = {1, 01}.
The final states of M3 are s0 and s3. The only strings that take s0 to itself are λ, 0, 00, 000, . . . , that is, any string of zero or more consecutive 0s. The only strings that take s0 to s3 are a string of zero or more consecutive 0s, followed by 10, followed by any string. Hence, L(M3) = {0^n, 0^n10x | n = 0, 1, 2, . . . , and x is any string}.
Q. a). Construct a phrase structure grammar that generates all signed decimal numbers, consisting of a sign, either + or -; a nonnegative integer; and a decimal fraction that is either the empty string or a decimal point followed by a positive integer, where initial zeros in an integer are allowed.
b). Give the BNF form of this grammar.
Ans:
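a) One possible grammar (a sketch; λ denotes the empty string, and other constructions are equally valid). The terminals are +, -, the decimal point, and the digits 0-9; S is the start symbol and the bracketed names are nonterminals:

S → ⟨sign⟩⟨integer⟩⟨fraction⟩
⟨sign⟩ → + | -
⟨integer⟩ → ⟨digit⟩ | ⟨digit⟩⟨integer⟩
⟨fraction⟩ → λ | .⟨positive⟩
⟨positive⟩ → ⟨zeros⟩⟨nonzero⟩ | ⟨zeros⟩⟨nonzero⟩⟨integer⟩
⟨zeros⟩ → λ | 0⟨zeros⟩
⟨digit⟩ → 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
⟨nonzero⟩ → 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9

Here ⟨positive⟩ generates exactly the digit strings containing at least one nonzero digit (leading zeros allowed), so the fraction part, when present, is a positive integer.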

b). Give the BNF form of this grammar.


Ans:
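b) The same grammar written in BNF (a sketch following the construction in part a):

⟨signed decimal number⟩ ::= ⟨sign⟩⟨integer⟩⟨fraction⟩
⟨sign⟩ ::= + | -
⟨integer⟩ ::= ⟨digit⟩ | ⟨digit⟩⟨integer⟩
⟨fraction⟩ ::= λ | .⟨positive⟩
⟨positive⟩ ::= ⟨zeros⟩⟨nonzero⟩ | ⟨zeros⟩⟨nonzero⟩⟨integer⟩
⟨zeros⟩ ::= λ | 0⟨zeros⟩
⟨digit⟩ ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
⟨nonzero⟩ ::= 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9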

Q. Identify the difference between FSM and FSA. Describe their functional
workings, characteristics and applications.
Ans:
A finite-state machine (FSM) or finite-state automaton (FSA, plural: automata), or simply a state machine, is a mathematical model of computation used to design both computer programs and sequential logic circuits. It is conceived as an abstract machine that can be in one of a finite number of states. The machine is in only one state at a time; the state it is in at any given time is called the current state. It can change from one state to another when initiated by a triggering event or condition; this is called a transition. A particular FSM is defined by a list of its states and the triggering condition for each transition.
Simple examples are vending machines, which dispense products when the proper combination of coins is deposited; elevators, which drop riders off at upper floors before going down; traffic lights, which change sequence when cars are waiting; and combination locks, which require the input of combination numbers in the proper order.
Finite-state machines can model a large number of problems, among which are electronic design automation, communication protocol design, language parsing and other engineering applications.
Considered as an abstract model of computation, the finite state machine
has less computational power than some other models of computation such
as the Turing machine.
A finite automaton (FA): is a simple idealized machine used to recognize
patterns within input taken from some character set (or alphabet) C. The
job of an FA is to accept or reject an input depending on whether the
pattern defined by the FA occurs in the input.
A finite automaton consists of:
a finite set S of N states
a special start state
a set of final (or accepting) states
a set of transitions T from one state to another, labeled with characters in C
As noted above, we can represent an FA graphically, with nodes for states and arcs for transitions.
We execute our FA on an input sequence as follows:
Begin in the start state
If the next input char matches the label on a transition from the current state to a new state, go to that new state
Continue making transitions on each input char
o If no move is possible, then stop
o If in an accepting state, then accept
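To make this execution procedure concrete, here is a minimal Python sketch (my own illustration; the state and transition encoding is an assumption, and the machine here is deterministic):

def run_fa(transitions, start, accepting, text):
    # transitions maps (state, char) -> next state.
    state = start                      # begin in the start state
    for ch in text:
        if (state, ch) not in transitions:
            return False               # no move possible: stop and reject
        state = transitions[(state, ch)]
    return state in accepting          # accept iff we end in a final state

# Example: FA over {a, b} that accepts strings ending in 'ab'.
delta = {('q0', 'a'): 'q1', ('q0', 'b'): 'q0',
         ('q1', 'a'): 'q1', ('q1', 'b'): 'q2',
         ('q2', 'a'): 'q1', ('q2', 'b'): 'q0'}
print(run_fa(delta, 'q0', {'q2'}, 'aabab'))  # True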

Application of FSM or FA:
Finite automata and their variants are used in formal methods (a field of software engineering), which deals with proving a software system mathematically correct. Automata are used in modelling the system, and regular expressions and different logics are used to specify the requirements. The important question then becomes checking whether a given model satisfies the given logic statement (model checking). Automata are also used for proving decidability of satisfiability problems.

Q. Define the role of Induction, Recursion and Iteration to describe the behaviour
of a procedure. Give examples of code blocks to define their status. (Hints:
Induction in proving program correctness that arises in recursive programs,
recursion in calling of functions and iteration in different loop status).
Ans:
Mathematical Induction
In general, mathematical induction can be used to prove
statements that assert that P(n) is true for all positive integers n,
where P(n) is a propositional function. A proof by mathematical
induction has two parts, a basis step, where we show that P(1) is

true, and an inductive step, where we show that for all positive
integers k, if P(k) is true, then P(k + 1) is true.

Recursion:
Sometimes it is difficult to define an object explicitly. However, it
may be easy to define this object in terms of itself. This process is
called recursion. For instance, the picture shown in Figure 1 is
produced recursively. First, an original picture is given. Then a
process of successively superimposing centered smaller pictures
on top of the previous pictures is carried out.
We can use recursion to define sequences, functions, and sets. In
Section 2.4, and in most beginning mathematics courses, the
terms of a sequence are specified using an explicit formula. For
instance, the sequence of powers of 2 is given by an = 2n for n =
0, 1, 2, . . . . Recall from Section 2.4 that we can also define a
sequence recursively by specifying how terms of the sequence
are found from previous terms. The sequence of powers of 2 can
also be defined by giving the first term of the sequence, namely,
a0 = 1, and a rule for finding a term of the sequence from the
previous one, namely, an+1 = 2an for n = 0, 1, 2, . . . . When we
define a sequence recursively by specifying how terms of the
sequence are found from previous terms,
we can use induction to prove results about the sequence.
When we define a set recursively, we specify some initial
elements in a basis step and
provide a rule for constructing new elements from those we
already have in the recursive
step. To prove results about recursively defined sets we use a
method called structural
induction.
1. Recurrence Relations
Recursive definition of sequences, solution of linear recurrences, solution of non-linear recurrences, application to algorithm analysis.
1. Recursive Definition of Sequences

Recursion is a process in which each step of a pattern is dependent on the step or steps that come before it.

EXAMPLE 1 Suppose that f is defined recursively by
f(0) = 3,
f(n + 1) = 2f(n) + 3.
Find f(1), f(2), f(3), and f(4).
Solution: From the recursive definition it follows that
f(1) = 2f(0) + 3 = 2 · 3 + 3 = 9,
f(2) = 2f(1) + 3 = 2 · 9 + 3 = 21,
f(3) = 2f(2) + 3 = 2 · 21 + 3 = 45,
f(4) = 2f(3) + 3 = 2 · 45 + 3 = 93.
Recursively defined functions are well defined. That is, for every
positive integer, the value of the function at this integer is
determined in an unambiguous way. This means that given any
positive integer, we can use the two parts of the definition to find
the value of the function at that integer, and that we obtain the
same value no matter how we apply the two parts of the
definition. This is a consequence of the principle of mathematical
induction.
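Since the question's hint asks for code blocks, here is a minimal Python sketch (my own illustration) of Example 1's function, written once with recursion (function calls mirroring the definition) and once with iteration (a loop):

def f_recursive(n):
    if n == 0:
        return 3                          # basis: f(0) = 3
    return 2 * f_recursive(n - 1) + 3     # recursive step: f(n) = 2 f(n-1) + 3

def f_iterative(n):
    value = 3                             # f(0)
    for _ in range(n):
        value = 2 * value + 3             # apply the recurrence once per pass
    return value

print([f_recursive(k) for k in range(5)])  # [3, 9, 21, 45, 93]
print(f_iterative(4))                      # 93

That the two versions agree for every n is exactly the kind of statement one proves by induction on n.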
EXAMPLE 2 Give a recursive definition of a^n, where a is a nonzero real number and n is a nonnegative integer.
Solution: The recursive definition contains two parts. First a^0 is specified, namely, a^0 = 1. Then the rule for finding a^(n+1) from a^n, namely, a^(n+1) = a · a^n for n = 0, 1, 2, 3, . . . , is given. These two equations uniquely define a^n for all nonnegative integers n.

EXAMPLE 3 Give a recursive definition of

Solution: The first part of the recursive definition is


In some recursive definitions of functions, the values of the
function at the first k positive integers are specified, and a rule is
given for determining the value of the function at larger integers
from its values at some or all of the preceding k integers. That
recursive definitions defined in this way produce well-defined
functions follows from strong induction.
Recall from Section 2.4 that the Fibonacci numbers, f_0, f_1, f_2, . . . , are defined by the equations f_0 = 0, f_1 = 1, and
f_n = f_(n-1) + f_(n-2)
for n = 2, 3, 4, . . . . [We can think of the Fibonacci number f_n either as the nth term of the sequence of Fibonacci numbers f_0, f_1, . . . or as the value at the integer n of a function f(n).] We can use the recursive definition of the Fibonacci numbers to prove many properties of these numbers. We give one such property in Example 4.
EXAMPLE 4 Show that whenever n ≥ 3, f_n > α^(n-2), where α = (1 + √5)/2.
Solution: We can use strong induction to prove this inequality. Let P(n) be the statement f_n > α^(n-2). We want to show that P(n) is true whenever n is an integer greater than or equal to 3.
BASIS STEP: First, note that
α < 2 = f_3 and α^2 = (3 + √5)/2 < 3 = f_4,
so P(3) and P(4) are true.
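The Fibonacci numbers give a compact example of all three ideas from the question. In the Python sketch below (my own illustration), the recursive version transcribes the definition, the iterative version uses a loop, and the comment states the loop invariant whose proof by induction establishes that the loop is correct:

def fib_recursive(n):
    # Recursion: direct transcription of f0 = 0, f1 = 1, fn = f(n-1) + f(n-2).
    if n < 2:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n):
    # Iteration. Invariant (proved by induction on the number of passes):
    # after k passes, (a, b) == (f_k, f_(k+1)).
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print([fib_recursive(k) for k in range(8)])  # [0, 1, 1, 2, 3, 5, 8, 13]
print(fib_iterative(7))                      # 13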

Digital System
Design
Q. Discuss about the major levels of abstraction for VLSI design
process. Explain and list out the steps for Design abstractions.
Ans:

Karnaugh Maps
Why Do You Need To Know About Karnaugh Maps?
What Is a Karnaugh Map?
Using Karnaugh Maps
Some Observations
Problems
Why Do You Need To Know About Karnaugh Maps?
Karnaugh Maps are used for many small design problems. It's true
that many larger designs are done using computer implementations of
different algorithms. However designs with a small number of variables
occur frequently in interface problems and that makes learning Karnaugh
Maps worthwhile. In addition, if you study Karnaugh Maps you will gain a
great deal of insight into digital logic circuits.
In this section we'll examine some Karnaugh Maps for three and
four variables. As we use them be particularly tuned in to how they are
really being used to simplify Boolean functions.

The goals for this lesson include the following. Given a Boolean function described by a truth table or logic function:
Draw the Karnaugh Map for the function.
Use the information from a Karnaugh Map to determine the smallest sum-of-products function.
What Does a Karnaugh Map Look Like?
A Karnaugh Map is a grid-like representation of a truth table. It is really just another way of presenting a truth table, but the mode of presentation gives more insight. A Karnaugh Map has zero and one entries at different positions. Each position in the grid corresponds to a truth table entry. Here's an example taken from the voting circuit presented in the lesson on Minterms. The truth table is shown first; the Karnaugh Map for this truth table is shown after it. [Truth table and Karnaugh Map figure not reproduced here.]

How Can a Karnaugh Map Help?


At first, it might seem that the Karnaugh Map is just another way of
presenting the information in a truth table. In one way that's true.
However, any time you have the opportunity to use another way of looking
at a problem advantages can accrue to you. In the case of the Karnaugh
Map the advantage is that the Karnaugh Map is designed to present the
information in a way that allows easy grouping of terms that can be
combined.
Let's start by looking at the Karnaugh Map we've already
encountered. Look at two entries side by side. We'll start by focussing
on the ones shown below in gray.

Let's examine the map again.
The term on the left in the gray area of the map corresponds to: [term shown in the original figure]
The term on the right in the gray area of the map corresponds to: [term shown in the original figure]
These two terms can be combined to give: [term shown in the original figure]
The beauty of the Karnaugh Map is that it has been cleverly designed so that any two adjacent cells in the map differ by a change in one variable. It's always a change of one variable any time you cross a horizontal or vertical cell boundary. (It's not fair to go through the corners!)
Notice that the order of terms isn't random. Look across the top boundary of the Karnaugh Map. Terms go 00, 01, 11, 10. If you think in binary, you might have expected the terms in the order 00, 01, 10, 11, the sequence of binary numbers for 0, 1, 2, 3. However, in a Karnaugh Map terms are not arranged in numerical sequence! That's done deliberately to ensure that crossing each horizontal or vertical cell boundary will reflect a change of only one variable. In the numerical sequence, the middle two terms, 01 and 10, differ by two variables! Anyhow, when only one variable changes, that means that you can eliminate that variable, as in the example above for the terms in the gray area.
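The 00, 01, 11, 10 ordering is known as a Gray code, and the one-variable-change property is easy to verify mechanically. A small Python sketch (my own illustration):

def gray_code(bits):
    # Reflected Gray code: consecutive values differ in exactly one bit.
    return [i ^ (i >> 1) for i in range(2 ** bits)]

labels = [format(g, '02b') for g in gray_code(2)]
print(labels)  # ['00', '01', '11', '10'] -- the K-map row/column order

# Adjacent labels, including the wraparound pair, differ in exactly one bit.
for a, b in zip(labels, labels[1:] + labels[:1]):
    assert sum(x != y for x, y in zip(a, b)) == 1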

The Karnaugh Map is a visual technique that allows you to generate groupings of terms that can be combined with a simple visual inspection. The technique you use is simply to examine the Karnaugh Map for any groups of ones that occur. Grouping ones into the largest groups possible and ensuring that all ones in the table have been included are the first steps in using a Karnaugh Map.
In the next section we will examine how you can generate groups
using Karnaugh Maps. First, however, we will look at some of the kinds of
groups that occur in Truth Tables, and how they appear in Karnaugh Maps.

The original lesson shows several example groupings here. There's one surprise, but it really is correct. In each case, be sure that you understand the term that the group represents.

There is a small surprise in one grouping above. The lower-left and lower-right 1s actually form a group. They differ only in having B and its inverse; consequently they can be combined. You will have to imagine that the right end and the left end of the map are connected.
So far we have focussed on K-maps for three variables. Karnaugh
Maps are useful for more than three variables, and we'll look at how to
extend ideas to four variables here. Shown below is a K-map for four
variables.

Note the following about the four-variable Karnaugh Map.
There are 16 cells in the map. Any time you have N variables, you will have 2^N possible combinations, and 2^N places in a truth table or Karnaugh Map.

Imagine moving around in the Karnaugh Map. Every time you cross a
horizontal or vertical boundary one - and only one - variable changes
value.
The two pairs of variables - WX and YZ - both change in the same
pattern.
Otherwise, if you can understand a Karnaugh Map for a three-variable
function, you should be able to understand one for a four-variable
function. Remember these basic rules that apply to Karnaugh maps of any
size.
In a Karnaugh Map of any size, crossing a vertical or horizontal cell
boundary is a change of only one variable - no matter how many
variables there are.
Each single cell that contains a 1 represents a minterm in the
function, and each minterm can be thought of as a "product" term
with N variables.
To combine variables, use groups of 2, 4, 8, etc. A group of 2 in an
N-variable Karnaugh map will give you a "product" term with N-1
variables. A group of 4 will have N-2 variables, etc.
You will never have a group of 3, a group of 5, etc. Don't even think
about it. See the points above.
Let's look at some examples of groups in a 4-variable Karnaugh Map.
Example 1 - A Group of 2
Here is a group of 2 in a 4-variable map.

Note that Y and Z are 00 and 01 at the top of the two columns in which you find the two 1s. The variable Z changes from a 0 to a 1 as you move from the left cell to the right cell. Consequently, these two 1s are not dependent upon the value of Z, and Z will not appear in the product term that results when we combine the 1s in this group of 2. Conversely, W, X and Y will be in the product term. Notice that in the row in which the 1s appear, W = 0 and X = 1. Also, in the two columns in which the 1s appear we have Y = 0. That means that the term represented by these two cells is W'XY' (W and Y complemented because they are 0, X uncomplemented because it is 1).

Problem
P1. Here is a Karnaugh map with two entries. Determine the product
term represented by this map.

Larger groups in Karnaugh Maps of any size can lead to greater simplification. Let's consider the group shown shaded below. There are four terms covered by the shaded area.
In the upper left: [term shown in the original figure]
In the upper right: [term shown in the original figure]
In the lower left: [term shown in the original figure]
In the lower right: [term shown in the original figure]
These terms can be combined (assuming they are all ones in the Karnaugh Map!). The result is obtained as follows.
By combining the first two terms above (the two terms at the top of the Karnaugh Map): [term shown in the original figure]
By combining the last two terms above (the two terms at the bottom of the Karnaugh Map): [term shown in the original figure]
Then, these two terms can be combined to give: [term shown in the original figure]
Notice how making the grouping larger reduces the number of
variables in the resulting terms. That simplification helps when you start
to connect gates to implement the function represented by a Karnaugh
map.
By now you should have inferred the rules for getting the sum-of-products form from the Karnaugh Map.
The number of ones in a group is a power of 2: 2, 4, 8, etc.
If a variable takes on both values (0 and 1) for different entries (1s) in the Karnaugh Map, that variable will not be in the sum-of-products form. Note that the variable should be one in half of the K-Map ones and zero (inverted) in the other half.
If a variable is always 1 or always 0 (it appears either inverted all the time in all entries that are one, or always not inverted), then that variable appears in that form in the sum-of-products form.
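These rules are mechanical enough to put into code. Here is a minimal Python sketch (my own illustration, not part of the lesson) that turns a group of minterms into its product term, dropping each variable that takes on both values:

def group_to_term(group, names):
    # group: minterms as bit strings; names: variable names in bit order.
    term = []
    for i, name in enumerate(names):
        values = {m[i] for m in group}
        if values == {'1'}:
            term.append(name)         # always 1: appears uncomplemented
        elif values == {'0'}:
            term.append(name + "'")   # always 0: appears complemented
        # if both values occur, the variable is eliminated
    return ''.join(term)

# A group of 4 in a 4-variable map: W = 1 and Y = 0 throughout, X and Z vary.
print(group_to_term(['1000', '1001', '1100', '1101'], 'WXYZ'))  # WY'

As the rules above predict, a group of 4 in a 4-variable map yields a product term with 4 - 2 = 2 variables.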
Now, let's see if you can apply those rules.
Problems
P2. Here is a Karnaugh Map with four entries. What is the sum-of-products form for the four ones shown?
P3. Here is a Karnaugh Map with four entries. What is the sum-of-products form for the four ones shown?
P4. Here is a Karnaugh Map with four entries. What is the sum-of-products form for the four ones shown?
P5. Here is a Karnaugh Map with eight entries. What is the sum-of-products form for the eight ones shown?

Some Further Observations
There are a few further observations that should be made. Note the following.
There may well be more than one solution of equal complexity.
o Here is an example Karnaugh Map. There are two groups that are obvious - one in orange, and one in light blue.
o In this example, the two terms shown are: [terms shown in the original figure]
o There is still one entry to account for. There is a 1 that can be joined to either of two other entries to form a group. There is no best way to go on this. Either way will take the same number of gates, inputs, etc.
And another observation
If there are more than four variables, it is still possible to use
Karnaugh Maps, and you will find larger Karnaugh Maps discussed in
many textbooks. However, as the number of variables increases it
becomes more difficult to see patterns, and computer methods
start to become more attractive.

*****************
A Logic Problem (3.1)
Here is a truth table.
A B C | F
0 0 0 | 1
0 0 1 | 0
0 1 0 | 1
0 1 1 | 1
1 0 0 | 1
1 0 1 | 0
1 1 0 | 0
1 1 1 | 0

Your Job:
Your job is to do the following

Sketch the Karnaugh Map for this function.

Using the Karnaugh Map, get the simplest sum-of-products form for this function

Draw the circuit diagram using all NANDs.
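If you want to check your hand-derived result, one option is to hand the minterms of F (the rows where F = 1: ABC = 000, 010, 011, 100, i.e. minterms 0, 2, 3, 4) to a Boolean minimizer. A sketch using the sympy library (this tooling is my suggestion, not part of the problem):

from sympy import symbols
from sympy.logic import SOPform

A, B, C = symbols('A B C')
# Rows of the truth table above where F = 1.
print(SOPform([A, B, C], minterms=[0, 2, 3, 4]))
# Expect a two-term minimal form equivalent to (~A & B) | (~B & ~C).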

************************
A Logic Problem (3.2)
Dr. Abner Mallity has been working with a logic circuit. He has a circuit
that works but he suspects that it can be made simpler. Here is the circuit.

Your Job:
Your job is to do the following

Determine the truth table for the function implemented in this circuit. Fill in the truth table
below.

W X Y Z | F
0 0 0 0 |
0 0 0 1 |
0 0 1 0 |
0 0 1 1 |
0 1 0 0 |
0 1 0 1 |
0 1 1 0 |
0 1 1 1 |
1 0 0 0 |
1 0 0 1 |
1 0 1 0 |
1 0 1 1 |
1 1 0 0 |
1 1 0 1 |
1 1 1 0 |
1 1 1 1 |
(The F column is to be filled in from the circuit shown in the original figure.)

Using the Karnaugh Map, get the simplest sum-of-products form for this function

Draw the circuit diagram using ANDs, ORs and NOTs.

Draw the circuit diagram using all NANDs.

******************
A Logic Problem (3.3)- The Solar Collector System (Prob 3.3) - Still Under
Construction.
Here's a solar heating system representation.

The sun shines out of an intense blue sky onto a solar collector.
The solar collector heats up.
Fans can be used to move the accumulated heat in the collector to
a rock bin - to store heat - or to the house itself.

o Fan A can be used to move air through the solar collector.
o Fan B can be used to move air into the heated space (the house).
The way the system works is:
When either fan is OFF, air cannot move through that fan.
When both Fan A and Fan B are ON air moves through the
collector directly into the house.
When Fan B is ON and Fan A is OFF air moves from the rock bin
into the heated space.
When Fan A is ON and Fan B is OFF (heated) air moves from the
collector to the rock bin.
Several sensors are available, producing several signals.
When the heated area needs heat the signal H becomes TRUE.
This signal is supplied by a temperature sensor that compares
measured temperature to desired temperature.
When the rock bin is warmer than the heated space - and can supply heat - a signal RH is TRUE. The measurements from two temperature sensors are compared to generate this signal, and the same scheme is used for the two measurements below.
When the collector is warmer than the heated space the
signal CH is TRUE.
When the collector is warmer than the rock bin the
signal CR is TRUE.
Your Problem:

Generate a truth table for all functions. Here is a blank truth table. FA is TRUE when Fan A
is ON, and FB is TRUE when Fan B is ON.

First, we note that when there is no need for heat to the heated space (H = 0), and the
collector is warmer than the rock bin (CR = 1) we should move heat from the collector to
the rock bin. To do that, turn FAN A ON and B OFF. That allows us to fill in four parts of
the truth table.

H RH CH CR | FA FB
0 0 0 0 |
0 0 0 1 | 1 0
0 0 1 0 |
0 0 1 1 | 1 0
0 1 0 0 |
0 1 0 1 | 1 0
0 1 1 0 |
0 1 1 1 | 1 0
1 0 0 0 |
1 0 0 1 |
1 0 1 0 |
1 0 1 1 |
1 1 0 0 |
1 1 0 1 |
1 1 1 0 |
1 1 1 1 |
(FA = 1, FB = 0 has been entered in the four rows with H = 0 and CR = 1, as described above.)

However, there is one situation here that is thought-provoking. If CR = 1 (collector warmer than the rock bin) and RH = 1 (rock bin warmer than the heated space), then it is clear that CH = 1 (collector warmer than the heated space). There are places in the truth table where that is not the case, and when CH = 0 in that situation, it's something that can't happen. Since it can't happen, we don't care what the function is for that case. That gives us a new truth table with DON'T CAREs. We'll add DON'T CAREs wherever that happens.

H RH CH CR | FA FB
0 0 0 0 |
0 0 0 1 | 1 0
0 0 1 0 |
0 0 1 1 | 1 0
0 1 0 0 |
0 1 0 1 | X X
0 1 1 0 |
0 1 1 1 | 1 0
1 0 0 0 |
1 0 0 1 |
1 0 1 0 |
1 0 1 1 |
1 1 0 0 |
1 1 0 1 | X X
1 1 1 0 |
1 1 1 1 |
(X marks the DON'T CARE rows, where RH = 1 and CR = 1 but CH = 0.)

Actually, we can note that anytime heat is needed and either the Collector or Rock
bin can supply it, we need to turn on Fan A. When we don't need heat, Fan A is

OFF. And, if we need heat (H = 1) and nothing can supply heat we won't turn on
Fan A. Let's put that into the truth table.

H RH CH CR | FA FB
0 0 0 0 | 0 0
0 0 0 1 | 1 0
0 0 1 0 | 0 X
0 0 1 1 | 1 0
0 1 0 0 | 0
0 1 0 1 | X
0 1 1 0 | 0
0 1 1 1 | 1
1 0 0 0 | 0
1 0 0 1 | 0
1 0 1 0 | 1
1 0 1 1 | 1
1 1 0 0 | 1
1 1 0 1 | X
1 1 1 0 | 1
1 1 1 1 | 1
(The FB entries below the fourth row were lost from the original document.)

Next, we note that we turn on Fan B when

Determine the simplest sum-of-products form for both fan functions.

Be careful. There may be some Don't Care terms in the truth table. Think about
what conditions are possible carefully.

Show the circuit.

*********************
A Logic Problem (3.4)
Your favorite professor, the ever-ebullient Dr. Abner Mallity, is working on a little consulting project. As usual he is going to stiff one of his graduate students, Willy Nilly or Millie Farad, and get one of them to do the project for him. They could use some help. They are also working hard to finish a term project. Here's the story.
You need to do a design for the logic circuit that adds ingredients to an
ice cream sundae. Mallity's client - the NutCase Sundae Company - wants to

develop a product that makes sundaes automatically. They plan to add


ingredients chosen from the following list. The list has 16 different ingredients
1. Avocado Slush
2. Bananas
3. Chocolate Syrup
4. Dacamania Nuts (newly discovered in the Fiji Islands, and not related to macadamia nuts)
5. Something starting with the letter "E", but the only thing they've thought of so far is Eggplant.
Here is the truth table for the Avocado Slush used for different sundaes. The sundaes are numbered from 0 to 15 (in binary numbers) in the truth tables. They plan to come up with some interesting names later, and will get rid of the numbers ASAP.
Sundae # | W X Y Z | A
0  | 0 0 0 0 | 0
1  | 0 0 0 1 | 0
2  | 0 0 1 0 | 0
3  | 0 0 1 1 | 1
4  | 0 1 0 0 | 1
5  | 0 1 0 1 | 1
6  | 0 1 1 0 | 1
7  | 0 1 1 1 | 0
8  | 1 0 0 0 | 1
9  | 1 0 0 1 | 0
10 | 1 0 1 0 | 1
11 | 1 0 1 1 | 0
12 | 1 1 0 0 | 1
13 | 1 1 0 1 | 0
14 | 1 1 1 0 | 1
15 | 1 1 1 1 | 1

Here is what you need to do.

Determine the Karnaugh map for the Avocado Slush function (A).

From the Karnaugh map determine the smallest sum-of-products form for the Avocado
Slush function.

Show an AND-OR-NOT implementation for the circuit.
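As with Problem 3.1, the K-map result can be sanity-checked by machine. A sketch using sympy (my suggestion, not part of the assignment), with the minterms read straight off the A column of the table above:

from sympy import symbols
from sympy.logic import SOPform

W, X, Y, Z = symbols('W X Y Z')
# Sundae numbers where A = 1 in the truth table above.
print(SOPform([W, X, Y, Z], minterms=[3, 4, 5, 6, 8, 10, 12, 14, 15]))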

Quine-McCluskey Method of Logic Minimization
I. INTRODUCTION

The Quine-McCluskey algorithm, or the method of prime implicants, is a method used for minimization of Boolean functions. It was developed by W. V. Quine and Edward J. McCluskey in 1956. It is functionally identical to Karnaugh mapping, but the tabular form makes it more efficient for use in computer algorithms, and it also gives a deterministic way to check that the minimal form of a Boolean function has been reached. It is sometimes referred to as the tabulation method.
The method involves two steps:
1. Finding all prime implicants of the function.
2. Using those prime implicants in a prime implicant chart to find the essential prime implicants of the function, as well as other prime implicants that are necessary to cover the function.
In this paper, we intend to discuss the Quine-McCluskey minimization procedure, as well as provide the readers with all the simulation codes which are available on the net in one single paper, highlighting the variations in each of the given codes implemented using a different computer language. The procedure which is discussed in the following sections 2 and 3 has also been taken from the net, and for that appropriate references have been given.
II. QUINE-McCLUSKEY MINIMIZATION PROCEDURE

This is basically a tabular method of minimization, and as such it is suitable for computer applications. The procedure for optimization is as follows:
Step 1: Describe individual minterms of the given expression by their equivalent
binary numbers.
Step 2: Form a table by grouping numbers with equivalent number of 1s in them,
i.e. first numbers with no 1s, then numbers with one 1, and then numbers with two
1s, etc.
Step 3: Compare each number in the top group with each minterm in the next lower group. If the two numbers are the same in every position but one, place a check sign to the right of both numbers to show that they have been paired and covered. Then enter the newly formed number in the next column (a new table). The new number is the old number, but where the literals differ an x is placed in the position of that literal.
Step 4: Using (3) above, form a second table and repeat the process again until no further pairing is possible. (On the second repeat, compare numbers only to numbers in the next group that have the same x position.)
Step 5: Terms which were not covered are the prime implicants; these are ORed and ANDed together to form the final function. Note: the procedure above gives you the prime implicants, but not the essential prime implicants.
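Steps 1-4 are straightforward to implement. Here is a minimal Python sketch (my own illustration; for clarity it compares all pairs of terms rather than grouping them by index first, which is slower but gives the same result). Terms are strings over '0', '1' and '-', with '-' marking an eliminated literal:

from itertools import combinations

def combine(a, b):
    # Merge two terms that differ in exactly one (non-dash) position.
    diff = [i for i in range(len(a)) if a[i] != b[i]]
    if len(diff) == 1 and '-' not in (a[diff[0]], b[diff[0]]):
        return a[:diff[0]] + '-' + a[diff[0] + 1:]
    return None

def prime_implicants(minterms):
    terms, primes = set(minterms), set()
    while terms:
        used, next_terms = set(), set()
        for a, b in combinations(sorted(terms), 2):
            merged = combine(a, b)
            if merged:
                used.update({a, b})      # both terms are checked off
                next_terms.add(merged)
        primes |= terms - used           # unchecked terms are prime (Step 5)
        terms = next_terms
    return primes

# f(A,B,C) with minterms 0, 1, 2, 5, 6, 7:
print(prime_implicants(['000', '001', '010', '101', '110', '111']))
# six prime implicants: 00-, 0-0, -01, -10, 1-1, 11- (set order may vary)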

***********************************************

Quine-McCluskey Method
When the number of variables is large, instead of using a Karnaugh map you can use the Quine-McCluskey method (using a CAD tool on a computer).
The Quine-McCluskey method reduces the minterm expansion of a function to obtain a minimum sum-of-products form. The procedure has two steps:
1. Eliminate as many literals as possible from each term by applying xy + xy' = x. The resulting terms are called prime implicants.
2. Use a prime implicant chart to select a minimum number of prime implicants such that, when they're OR'd together, the result equals the function being simplified and contains a minimum number of literals.

Determination of prime implicants
The function is given as a sum of minterms. To determine the prime implicants, we represent the minterms in binary notation and combine them using xy + xy' = x (two terms combine if they differ in exactly one variable). The binary minterms are grouped according to the number of 1's: group 0, 1, ..., x (column 1) has 0, 1, ..., x number of 1's in the minterm.
The blue column groups minterms from adjacent groups. For example, the top cell groups together minterms from group 0 and group 1 (the first two rows); the next cell groups minterms from groups 1 and 2 (e.g. 0-10 (2,6), formed from 2: 0010 and 6: 0110). The yellow column continues grouping minterms from the previous (blue) column. Grouping allows one to look for combinations of minterms in adjacent groups.

The duplicate terms (0,1; 8,9) and (0,8; 1,9) were formed by combining the same 4 minterms in different orders.
Finally we compare the minterms from the two groups in the last column, but there's no more reduction, so the procedure terminates.
Each time a term combines with another term it's checked off. Terms which have not been checked off are prime implicants. Since every minterm has been included in at least one of the prime implicants, the function is equal to the sum of its prime implicants.
The last row of the table could be reduced further using the consensus theorem; however, instead of using this theorem we can use the prime implicant chart, to be discussed next.

Definition: Given a function F of n variables, a product term P is an implicant of F iff for every combination of values of the n variables for which P = 1, F is also 1.
Definition: A prime implicant of a function F is a product term implicant which is no longer an implicant if any literal is deleted from it.
The Quine-McCluskey procedure finds all the prime implicants of a function F (implicants which are not prime are checked off).
The Prime Implicant Chart

The prime implicant chart is used to select a minimum set of prime implicants.
Minterms of f are listed across the top of the chart.
Prime implicants are listed down the side.
A prime implicant is equal to a sum of minterms and is said to cover these minterms.
If a prime implicant covers a minterm, an X is placed in the corresponding cell of the chart.
f = b'c' + cd' + a'bd

If a minterm is covered by only one prime implicant, then that prime implicant is essential and must be included in the minimum sum of products.
Each time a prime implicant is included in the minimum sum of products, its row should be crossed out, and the columns of the minterms it covers should also be crossed out.
Then a minimum set of prime implicants must be chosen to cover the remaining columns.
In the example, a'bd covers the remaining columns, so f = b'c' + cd' + a'bd. Note that even though a'bd is included in the minimum sum of products, it is not an essential prime implicant.
We can do this by trial and error (although for large charts there are special procedures).
Simplification of incompletely specified functions:
o Treat don't cares as if they were required minterms in step (1), so we can eliminate as many literals as possible.
o In step (2), when forming the prime implicant chart, do not list the don't care terms at the top.
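A minimal Python sketch of the chart step (my own illustration; covers maps each prime implicant to the set of minterm numbers it covers, and the example data reuses the six prime implicants found in the earlier sketch):

def select_implicants(covers, minterms):
    chosen, remaining = [], set(minterms)
    for m in minterms:
        rows = [p for p, cov in covers.items() if m in cov]
        if len(rows) == 1 and rows[0] not in chosen:
            chosen.append(rows[0])            # essential prime implicant
            remaining -= covers[rows[0]]
    while remaining:                          # cover what is left greedily
        best = max(covers, key=lambda p: len(covers[p] & remaining))
        chosen.append(best)
        remaining -= covers[best]
    return chosen

covers = {"A'B'": {0, 1}, "A'C'": {0, 2}, "B'C": {1, 5},
          "BC'": {2, 6}, "AC": {5, 7}, "AB": {6, 7}}
print(select_implicants(covers, [0, 1, 2, 5, 6, 7]))
# e.g. ["A'B'", "BC'", "AC"]: no minterm here has a unique cover,
# so there are no essentials and the greedy pass picks three implicants.

The greedy pass is the trial-and-error step: it works on small charts but does not guarantee minimality in general, which is why special procedures exist for large charts.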

**************************************************

Introduction
In order to understand the tabular method of minimization, it is best that you understand the numerical assignment of Karnaugh map cells and incompletely specified functions, also known as the can't-happen conditions, because the tabular method is based on these principles.
The tabular method, which is also known as the Quine-McCluskey method, is particularly useful when minimizing functions having a large number of variables, e.g. six-variable functions. Computer programs have been developed employing this algorithm. The method reduces a function in standard sum-of-products form to a set of prime implicants from which as many variables are eliminated as possible. These prime implicants are then examined to see if some are redundant.
The tabular method makes repeated use of the law A + A' = 1. Note that binary notation is used for the function, although decimal notation is also used for the functions. As usual, a variable in true form is denoted by 1, in inverted form by 0, and the absence of a variable by a dash (-).

Rules of Tabular Method
Consider a function of three variables f(A, B, C). Consider the function: [expression shown in the original figure]. Listing the two minterms shows they can be combined. Now consider the following: [expression shown in the original figure]. Note that these variables cannot be combined. This is because the FIRST RULE of the tabular method is that, for two terms to combine, and thus eliminate one variable, they must differ in only one digit position.
Bear in mind that when two terms are combined, one of the combined terms has one digit more at logic 1 than the other combined term. This indicates that the number of 1's in a term is significant and is referred to as its index.
For example: f(A, B, C, D)
0000...................Index 0
0010, 1000.............Index 1
1010, 0011, 1001.......Index 2
1110, 1011.............Index 3
1111...................Index 4
The necessary condition for combining two terms is that the indices of the two terms must differ by one, and the two terms must differ in only one variable position.
Example 1:
Consider the function: Z = f(A,B,C) = [expression with complemented variables; lost from the original document]
To make things easier, change the function into binary notation with index values and decimal values.
Tabulate the index groups in a column and insert the decimal value alongside.

From the first list, we combine terms that differ by 1 digit only from one index group to the next. These terms from the first list are then separated into groups in the second list. Note that the ticks are just there to show that one term has been combined with another term. From the second list we can see that the expression is now reduced to: Z = [expression lost from the original document]
From the second list, note that the term having an index of 0 can be combined with the terms of index 1. Bear in mind that the dash indicates a missing variable and must line up in order to get a third list. The final simplified expression is: Z = [expression lost from the original document]
Bear in mind that any unticked terms in any list must be included in the final expression (none occurred here except from the last list). Note that the only prime implicant here is the final term Z above.
The tabular method reduces the function to a set of prime implicants.
Note that the above solution can be derived algebraically. Attempt this in your notes.
Example 2:
Consider the function f(A, B, C, D) = Σ(0,1,2,3,5,7,8,10,12,13,15); note that this is in decimal form.
(0000,0001,0010,0011,0101,0111,1000,1010,1100,1101,1111) in binary form.
(0,1,1,2,2,3,1,2,2,3,4) in index form.

The prime implicants are: A'B' + A'D + BD + B'D' + AC'D' + ABC'.

The chart is used to remove redundant prime implicants. A grid is prepared having all
the prime implicants listed at the left and all the minterms of the function along the
top. Each minterm covered by a given prime implicant is marked in the appropriate
position.

From the above chart, BD is an essential prime implicant. It is the only prime implicant that covers the minterm decimal 15, and it also includes 5, 7 and 13. B'D' is also an essential prime implicant. It is the only prime implicant that covers the minterm denoted by decimal 10, and it also includes the terms 0, 2 and 8. The other minterms of the function are 1, 3 and 12. Minterm 1 is present in A'B' and A'D; similarly for minterm 3. We can therefore use either of these prime implicants for these minterms. Minterm 12 is present in AC'D' and ABC', so again either can be used.
Thus, one minimal solution is: Z = A'B' + BD + B'D' + AC'D'.

Problems
1. Minimise the function below using the tabular method of simplification:
Z = f(A,B,C,D) = [expression with complemented variables; lost from the original document]
2. Using the tabular method of simplification, find all equally minimal solutions for the function below.
Z = f(A,B,C,D) = Σ(1, 4, 5, 10, 12, 14)
Answer:
1. Consider the function: Z = f(A,B,C,D) = [expression as given in the problem statement]
Convert to decimal and binary equivalents:
Z = f(A,B,C,D) = Σ(0, 2, 4, 5, 8, 9, 12) - decimal equivalent
Z = f(A,B,C,D) = (0000, 0010, 0100, 0101, 1000, 1001, 1100) - binary equivalent
(0, 1, 1, 2, 1, 2, 2) - index values
The simplified answer is: Z = A'B'D' + A'BC' + AB'C' + C'D'

2. Note that the remaining term 12 is covered by BC'D' and ABD'.
One simplified answer is: Z = A'C'D + A'BC' + ACD' + BC'D'
Another answer is: Z = A'C'D + A'BC' + ACD' + ABD'

**************************************************

Object Oriented Software Engineering

Q.1 Developing software for complex systems is a challenging task. Explain how multiple system models help in understanding complex systems, along with suitable examples.

a) Discuss any five software-related myths that you find intriguing.

Software Piracy Facts and Myths

Top Five Myths about Software Copy Protection

1. Software piracy is a victimless crime.
2. Software copy protection makes software more expensive.
3. Software copy protection gets in the way of the legitimate user.
4. Inexpensive software is not copied.
5. Any protection system can be cracked. Therefore, protection is useless.

See more at: http://www.safenet-inc.com/software-monetization/software-piracy-and-myths/

5 Myths of Software Testing

As I scan the software testing stories of the day, I'm amazed at the frequency of
certain misconceptions. While there are too many to list, I wanted to share five of the
most common testing myths (in my brief experience). The first three I find to be
prevalent in mainstream news articles, while the other two are more common within the
tech industry in general.
Take a look and see if you agree with me.
Myth 1. Testing is boring: It's been said that "Testing is like sex. If it's not fun, then you're doing it wrong." The myth of testing as a monotonous, boring activity is seen frequently in mainstream media articles, which regard testers as the assembly line workers of the software business. In reality, testing presents new and exciting challenges every day. Here's a nice quote from Michael Bolton that pretty much sums it up:
"Testing is something that we do with the motivation of finding new information. Testing is a process of exploration, discovery, investigation, and learning. When we configure, operate, and observe a product with the intention of evaluating it, or with the intention of recognizing a problem that we hadn't anticipated, we're testing. We're testing when we're trying to find out about the extents and limitations of the product and its design, and when we're largely driven by questions that haven't been answered or even asked before."
Myth 2. Testing is easy: It's often assumed testing cannot be that difficult, since
everyday users find bugs all the time. In truth, testing is a very complex craft that's not
suited for your average Joe. Here's Google's Patrick Copeland on the qualities of a
great tester:
"It's a mindset and a passion. From the 100s of interviews I've done, great boils down to: 1) a special predisposition to finding problems and 2) a passion for testing to go along with that predisposition. In other words, they love testing and they are good at it. They also appreciate that the challenges of testing are, more often than not, equal to or greater than the challenges of programming. A great career tester with the testing gene and the right attitude will always be able to find a job. They are gold."
Myth 3. Testers only find bugs: Yes, testers do find bugs, but that's not their sole
purpose. Here's a good summary on this myth from Ankur of freesoftwaretesting.info:

This view of the tester's role is very limited and adds no value for the customer. Testers are experts with the system, application, or product under test. Unlike the developers, who are responsible for a specific function or component, the tester understands how the system works as a whole to accomplish customer goals. Testers understand the value added by the product, the impact of the environment on the product's efficiency, and the best ways to get the most out of the product.
Myth 4. Machines will make human testers obsolete: With advances in automated
technology, it's often assumed that computers will someday render human testers
obsolete. But since the ultimate users of an application are not robots or machines, but
rather live human beings, it stands to reason that human testing will always have an
important role to play. Here's testing author James Whittaker on the importance of
manual testing:
"Test automation is often built to solve too big a problem. This broad scope makes automation brittle and flaky because it's trying to do too much. There are certain things that automation is good at and certain things humans are good at, and it seems to me a hybrid approach is better. What I want is automation that makes my job as a human easier. Automation is good at analyzing data and noticing patterns. It is not good at determining relevance and making judgment calls. Fortunately humans excel at judgment."
Myth 5. Testers don't get along with developers: It's not hard to see why this myth persists. As testing guru James Bach once wrote: "Anyone who creates a piece of work and submits it for judgment is going to feel judged. That's not a pleasant feeling. And the problem is compounded by testers who glibly declare that this or that little nit or nat is a defect, as if anything they personally don't like is a quality problem for everybody."
What's not widely known is that many testers are actually former developers (and vice versa), so there's a mutual understanding and appreciation for the challenges each camp faces. This is not the case inside all companies, but to say that the majority of testers and developers don't get along is false, in our experience.

+++++++++++++++++

What is the basic philosophy of the Evolutionary Software Process Model?

Modeling
Modeling is the process of creating a representation of the domain or the software. Various modeling approaches can be used during both requirements analysis and design. These include:


Use case modeling. This involves representing the sequences of actions
performed by the users of the software. We will discuss this in Chapter 4.
Structural modeling. This involves representing such things as the classes and
objects present in the domain or in the software. This is the topic of Chapters 5
and 6.
Dynamic and behavioral modeling. This involves representing such things as
the states that the system can be in, the activities it can perform, and how its

components interact.

*************

Context models

Interaction models
Structural models
Behavioral models
Model-driven engineering

System modeling
System modeling is the process of developing abstract models of a
system, with each model presenting a different view or perspective of that
system.

System modeling has now come to mean representing a system using some kind of graphical notation, which is now almost always based on notations in the Unified Modeling Language (UML).

System modelling helps the analyst to understand the functionality of the system, and models are used to communicate with customers.

Existing and planned system models
Models of the existing system are used during requirements engineering. They help
clarify what the existing system does and can be used as a basis for discussing its
strengths and weaknesses. These then lead to requirements for the new system.
Models of the new system are used during requirements engineering to help explain
the proposed requirements to other system stakeholders. Engineers use these models
to discuss design proposals and to document the system for implementation.
In a model-driven engineering process, it is possible to generate a complete or partial
system implementation from the system model

Use of graphical models
As a means of facilitating discussion about an existing or proposed system: incomplete and incorrect models are OK, as their role is to support discussion.
As a way of documenting an existing system: models should be an accurate representation of the system but need not be complete.
As a detailed system description that can be used to generate a system implementation: models have to be both correct and complete.

Context models
Context models are used to illustrate the operational context of a system - they show what lies outside the system boundaries.
Social and organisational concerns may affect the decision on where to position
system boundaries.
Architectural models show the system and its relationship with other systems.

==================================================

OOSE:
==================================================

NON-Functional requirement:

In addition to the obvious features and functions that you will provide in your system, there are other requirements that don't actually DO anything, but are important characteristics nevertheless. These are called "non-functional requirements" or sometimes "quality attributes." For example, attributes such as performance, security, usability, and compatibility aren't a "feature" of the system, but are a required characteristic. You can't write a specific line of code to implement them; rather, they are "emergent" properties that arise from the entire solution. The specification needs to describe any such attributes the customer requires. You must decide the kind of requirements that apply to your project and include those that are appropriate.
Each requirement is simply stated in English. Each requirement must be objective and
quantifiable; there must be some measurable way to assess whether the requirement
has been met.
Often deciding on quality attributes requires making tradeoffs, e.g., between
performance and maintainability. In the APPENDIX you must include an engineering
analysis of any significant decisions regarding tradeoffs between competing attributes.
Here are some examples of non-functional requirements:
Performance requirements
Requirements about resources required, response time, transaction rates, throughput,
benchmark specifications or anything else having to do with performance.

Operating constraints
List any run-time constraints. This could include system resources, people, needed software, etc.
Platform constraints
Discuss the target platform. Be as specific or general as the user requires. If the user
doesn't care, there are still platform constraints.
Accuracy and Precision
Requirements about the accuracy and precision of the data. (Do you know the
difference?) Beware of 100% requirements; they often cost too much.
Modifiability
Requirements about the effort required to make changes in the software. Often, the measurement is personnel effort (person-months).
Portability
The effort required to move the software to a different target platform. The
measurement is most commonly person-months or % of modules that need changing.
Reliability
Requirements about how often the software fails. The measurement is often expressed
in MTBF (mean time between failures). The definition of a failure must be clear. Also,
don't confuse reliability with availability which is quite a different kind of
requirement. Be sure to specify the consequences of software failure, how to protect
from failure, a strategy for error detection, and a strategy for correction.
Security
One or more requirements about protection of your system and its data. The
measurement can be expressed in a variety of ways (effort, skill level, time, ...) to

break into the system. Do not discuss solutions (e.g. passwords) in a requirements
document.
Usability
Requirements about how difficult it will be to learn and operate the system. The
requirements are often expressed in learning time or similar metrics.
Legal
There may be legal issues involving privacy of information, intellectual property
rights, export of restricted technologies, etc.
Others
There are many others. Consult your resources.

=========================================

How Well Are NFRs Documented?


Academics and standards organizations have proposed many notations and templates to write
system requirement specifications to make documentation more efficient. However, nine of the 13
respondents acknowledged that they hadnft documented the NFRs. Respondent H said,
[Functional requirements] came in UML, using conceptual models and use cases, but there was
no mention of NFRs. Some respondents said that documentation is only necessary if the client or
the critical nature of the domain require it. The four respondents who explicitly documented their
NFRs used different methods. Respondent Dfs organization uses a domain-specific language.
Since we work in the field of aerospace, our NFRs had to be clearly stated and verifiable. We have
special templates, and we use different techniques from other engineering disciplines, such as risk
models, failure trees, etc. Two respondents used natural language with a certain structure.
Respondent B used Volere templates that provide a high degree of structure to the requirements;
Respondent K used plaintext classified under the ISO/IEC 9126 quality model.
Respondent J simply wrote a plaintext document. Respondents B and D were the only ones who
maintained up-to-date requirement documentation; J and K documented only the initial NFRs.
Respondent K said, "At first, we wrote down some initial ideas for NFRs in natural language, c but

afterward, we did not keep track of any of them or of any other NFRs arising during the design
process." It seems natural to think a relationship exists between measurability and continual (or at
least regular) documentation updates, but confirming this link will require further studies.
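The respondents' actual documents are not available, but as a purely invented illustration, "plaintext classified under the ISO/IEC 9126 quality model" might look something like the sketch below, which groups one-line NFRs under the model's six top-level characteristics.

# Hypothetical plaintext NFRs grouped under the six ISO/IEC 9126
# quality characteristics; the requirement texts are invented.
nfrs = {
    "functionality":   ["All exported reports must be digitally signed."],
    "reliability":     ["MTBF of at least 400 hours in production."],
    "usability":       ["A new clerk completes training in under 2 days."],
    "efficiency":      ["95% of queries are answered within 200 ms."],
    "maintainability": ["A typical change request costs under 2 person-weeks."],
    "portability":     ["Migrating to a new OS touches under 10% of modules."],
}

for characteristic, items in nfrs.items():
    print(characteristic.upper())
    for item in items:
        print(" -", item)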
So, we can see that NFRs are more tacit or even hidden than documented. When they are
documented, their accuracy and timeliness are seriously compromised. This situation can be
explained in terms of cost and benefit. Respondent C stated it plainly: "I rarely appropriately
document my projects, basically because it costs money." If practitioners don't perceive a clear
benefit from shaping NFRs into fully engineered artifacts, NFRs will remain elusive.

================================
For example, if you want to build a prototype web site you can go either way:
The Throw Away Prototype - you find a template as close as possible and hammer out
the HTML elements to mock up roughly what you want it to look like. You hack a back-end
service with no security and minimal performance.
The result - after a very short time you can show someone what you want and (hopefully) get
the investment / a co-founder / a new girlfriend.
Then you throw the whole thing away and start developing the real thing.
The Evolutionary Prototype: you carefully weigh all the options and select the
infrastructure and architecture based on scalability, security, technological barriers, package
availability and other considerations. After that you start developing the prototype using the
selected infrastructure and architecture and build one user story after the other, with
automatic tests for each story.
The result - after a pretty long time you can show something which can actually grow into a
market-worthy product. You can show off your craftsmanship and technological genius and
get the investment / a co-founder / a new girlfriend.
Then you can continue to build and extend the prototype or (as is the usual case) throw it
away and build a new one.
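To make the throwaway option concrete, here is a minimal sketch (invented for illustration, not taken from the original text) of the kind of quick-and-dirty back-end meant to be demoed and then discarded: no security, no persistence, canned data.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

FAKE_PRODUCTS = [{"id": 1, "name": "Demo widget", "price": 9.99}]

class PrototypeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Every path returns the same canned data -- good enough for a demo.
        body = json.dumps(FAKE_PRODUCTS).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Runs until killed; the whole thing is meant to be thrown away.
    HTTPServer(("localhost", 8000), PrototypeHandler).serve_forever()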

=========
Software prototyping refers to building software application prototypes
which display the functionality of the product under development but may
not actually hold the exact logic of the original software.
Software prototyping is becoming very popular as a software development
model, as it enables teams to understand customer requirements at an early
stage of development. It helps get valuable feedback from the customer and
helps software designers and developers understand what exactly is
expected from the product under development.

What is Software Prototyping?

Prototype is a working model of software with some limited functionality.

The prototype does not always hold the exact logic used in the actual software
application, and is an extra effort to be considered during effort estimation.

Prototyping is used to allow users to evaluate developer proposals and try them
out before implementation.

It also helps in understanding requirements which are user-specific and may not
have been considered by the developer during product design.

Following is the stepwise approach to design a software prototype:

Basic Requirement Identification: This step involves understanding the very
basic product requirements, especially in terms of the user interface. The more
intricate details of the internal design and external aspects like performance and
security can be ignored at this stage.

Developing the initial Prototype: The initial prototype is developed in this
stage, where the very basic requirements are showcased and user interfaces are
provided. These features may not work internally in exactly the same manner as
in the actual software; workarounds are used to give the customer the same
look and feel in the prototype.

Review of the Prototype: The prototype developed is then presented to the
customer and the other important stakeholders in the project. The feedback is
collected in an organized manner and used for further enhancements in the
product under development.

Revise and enhance the Prototype: The feedback and the review comments
are discussed during this stage, and some negotiations happen with the
customer based on factors like time and budget constraints and the technical
feasibility of the actual implementation. The changes accepted are again
incorporated in the new prototype developed, and the cycle repeats until
customer expectations are met (a loop sketch of this cycle follows below).
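The loop sketch below (with stub functions invented for illustration; real reviews and negotiations are human activities, not code) restates the four steps as a cycle that repeats until the customer accepts the prototype.

def build_prototype(requirements):
    # Stub: stands in for developing a prototype from the current requirements.
    return {"features": list(requirements)}

def collect_feedback(prototype):
    # Stub: stands in for the customer/stakeholder review of the prototype.
    satisfied = len(prototype["features"]) >= 5
    requested = [] if satisfied else [f"feature-{len(prototype['features']) + 1}"]
    return {"satisfied": satisfied, "requested": requested}

def negotiate_changes(feedback):
    # Stub: real negotiation weighs time, budget, and technical feasibility.
    return feedback["requested"]

requirements = ["login screen", "basic form", "summary page"]
prototype = build_prototype(requirements)          # initial prototype
while True:
    feedback = collect_feedback(prototype)         # review
    if feedback["satisfied"]:
        break                                      # expectations met
    requirements += negotiate_changes(feedback)    # revise and enhance
    prototype = build_prototype(requirements)

print("Accepted prototype with features:", prototype["features"])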

Prototypes can have horizontal or vertical dimensions. A horizontal prototype
displays the user interface for the product and gives a broader view of the
entire system, without concentrating on internal functions. A vertical
prototype, on the other hand, is a detailed elaboration of a specific function or
a sub-system in the product.
The purpose of horizontal and vertical prototypes is different. Horizontal
prototypes are used to get more information on the user interface level and
the business requirements. They can even be presented in sales demos to
get business in the market. Vertical prototypes are technical in nature and
are used to get details of the exact functioning of the sub-systems - for
example, database requirements, interaction and data processing loads in a
given sub-system.
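As a hedged illustration of the vertical case, the sketch below (an invented "orders" sub-system) probes exactly the kind of detail the text mentions: database requirements and data processing load, while ignoring the user interface entirely.

import sqlite3
import time

# In-memory database standing in for the sub-system's real store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")

# Measure insert load for a representative volume of data.
start = time.perf_counter()
conn.executemany(
    "INSERT INTO orders (total) VALUES (?)",
    [(i * 0.5,) for i in range(100_000)],
)
conn.commit()
print(f"inserted 100,000 rows in {time.perf_counter() - start:.2f} s")

# Exercise a representative query of the sub-system.
count, avg = conn.execute("SELECT COUNT(*), AVG(total) FROM orders").fetchone()
print(f"{count} orders, average total {avg:.2f}")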

Software Prototyping Types


There are different types of software prototypes used in the industry.
Following are the major software prototyping types used widely:

Throwaway/Rapid Prototyping: Throwaway prototyping is also called
rapid or close-ended prototyping. This type of prototyping uses very little effort
and minimal requirement analysis to build a prototype. Once the actual
requirements are understood, the prototype is discarded and the actual system
is developed with a much clearer understanding of user requirements.

Evolutionary Prototyping: Evolutionary prototyping, also called breadboard
prototyping, is based on building actual functional prototypes with minimal
functionality in the beginning. The prototype developed forms the heart of the
future prototypes, on top of which the entire system is built. In evolutionary
prototyping, only well-understood requirements are included in the prototype,
and further requirements are added as and when they are understood.

Incremental Prototyping: Incremental prototyping refers to building multiple
functional prototypes of the various sub-systems and then integrating all the
available prototypes to form a complete system.

Extreme Prototyping: Extreme prototyping is used in the web development
domain. It consists of three sequential phases. First, a basic prototype with all
the existing pages is presented in HTML format. Then the data processing is
simulated using a prototype services layer. Finally, the services are implemented
and integrated into the final prototype. The name Extreme Prototyping
draws attention to the second phase of the process, in which a fully
functional UI is developed with very little regard to the actual services
(a minimal sketch follows below).
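A minimal sketch of phase two (all names invented for illustration): the UI is written against a simulated services layer that returns canned data, and in phase three a real implementation of the same interface replaces it without UI changes.

class SimulatedOrderService:
    # Phase two: stands in for the real services layer during UI development.
    def list_orders(self, customer_id):
        return [{"id": 1, "customer": customer_id, "status": "shipped"}]

class RealOrderService:
    # Phase three: same interface, real implementation (stubbed here).
    def list_orders(self, customer_id):
        raise NotImplementedError("would call the production back end")

def render_orders(service, customer_id):
    # UI code depends only on the shared interface, so swapping the
    # simulated layer for the real one requires no UI changes.
    for order in service.list_orders(customer_id):
        print(f"Order {order['id']}: {order['status']}")

render_orders(SimulatedOrderService(), customer_id=42)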

Software Prototyping Application


Software prototyping is most useful in the development of systems with a high
level of user interaction, such as online systems. Systems which need users
to fill out forms or go through various screens before data is processed can
use prototyping very effectively to give the exact look and feel even before
the actual software is developed.
Software that involves a great deal of data processing, where most of the
functionality is internal with very little user interface, does not usually
benefit from prototyping. Prototype development could be an extra
overhead in such projects and may need a lot of extra effort.

Software Prototyping Pros and Cons


Software prototyping is used in typical cases, and the decision should be
taken very carefully so that the effort spent in building the prototype adds
considerable value to the final software developed. The model has its own
pros and cons, discussed below.
The following table lists the pros and cons of the Prototyping Model:
Pros

- Increased user involvement in the product even before implementation.
- Since a working model of the system is displayed, the users get a better
  understanding of the system being developed.
- Reduces time and cost, as defects can be detected much earlier.
- Quicker user feedback is available, leading to better solutions.
- Missing functionality can be identified easily.
- Confusing or difficult functions can be identified.

Cons

- Risk of insufficient requirement analysis owing to too much dependency on
  the prototype.
- Users may get confused between the prototypes and actual systems.
- Practically, this methodology may increase the complexity of the system, as
  the scope of the system may expand beyond original plans.
- Developers may try to reuse the existing prototypes to build the actual
  system, even when it is not technically feasible.
- The effort invested in building prototypes may be too much if not monitored
  properly.
