Ans:
In deterministic finite automata, there's exactly one state transition for every input symbol-state pair. There are also no epsilon transitions, meaning that you're not allowed to change states without consuming anything from the input.
In non-deterministic finite automata, there can be 0 or more state transitions for every
input-state pair. You can also have epsilon transitions. When there's no state transition for a
given input-state pair, we say that the automaton has crashed, meaning that it can't
proceed processing the input, and therefore it doesn't accept the input. When there's more
than one choice of state transition for a given input-state pair, the machine can simply
follow all possible paths (think of it as parallel computation); if one of the paths ends up in
an accept state, we say that the automaton accepts the input string.
Both automata are equivalent in terms of power, though. It may seem that a nondeterministic automaton is more powerful, but the two models are proven to be equivalent,
meaning that they recognize the same class of languages, called regular languages. The proof
of equivalence is by construction, in which you show that given a DFA you can construct an
equivalent NFA, and vice versa. The proof can be found in any textbook on the theory of
computation.
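To make the distinction concrete, here is a minimal sketch (function and state names are illustrative, not from the answer above) of how the two kinds of automata process input. The DFA follows its single transition per symbol; the NFA tracks the set of states reachable along all paths at once, and "crashes" (rejects) when that set becomes empty:

```python
def run_dfa(delta, start, accept, s):
    """Simulate a DFA: delta maps (state, symbol) -> exactly one next state."""
    state = start
    for ch in s:
        state = delta[(state, ch)]
    return state in accept

def run_nfa(delta, start, accept, s):
    """Simulate an NFA (epsilon moves omitted for brevity): delta maps
    (state, symbol) -> a set of next states; follow all paths in parallel."""
    states = {start}
    for ch in s:
        states = {q for p in states for q in delta.get((p, ch), set())}
        if not states:          # every path crashed: reject
            return False
    return bool(states & accept)

# DFA accepting strings over {0,1} with an even number of 1s.
even_ones = {('e', '0'): 'e', ('e', '1'): 'o',
             ('o', '0'): 'o', ('o', '1'): 'e'}

# NFA accepting strings that end in 1 (state 1 is reached by guessing
# "this is the last symbol").
ends_in_one = {(0, '0'): {0}, (0, '1'): {0, 1}}
```

For example, `run_dfa(even_ones, 'e', {'e'}, '101')` is `True`, and `run_nfa(ends_in_one, 0, {1}, '10')` is `False` because the only surviving path ends in state 0.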
***********************************
In general, as you probably know, a finite automaton has a set of states, starts in a start
state, and reads an input string character-by-character, each character making it switch
states depending on which character it read and which state it was previously in (the
state-character pair); this is called the transition function or transition relation. Some states may
also have ε-transitions, which let the machine move to another state without reading any
character. A certain set of states are designated accept states; whether the finite automaton
accepts or rejects depends on whether it's in an accept state or reject state after reading the
entire string. The description I just gave was, intentionally, vague enough to apply to both
deterministic and nondeterministic finite automata, which I'll refer to as DFAs and NFAs,
respectively, from here on.
In a nondeterministic finite automaton, the transition relation specifies any number,
including 0, 1, 2, or more, possible states that the NFA can transition to for each
state-character pair. How it decides which one to take is neither defined nor relevant to the
abstract concept, but you can pretend that it chooses uniformly at random. This can
produce many different computation paths for the same input string; and we say that an
NFA accepts a string if at least one of those paths ends in an accept state.
In some sense, we can just say a deterministic finite automaton is one that isn't
nondeterministic: the transition function specifies exactly one next state for each
state-character pair, and thus there is only one computation path on any given input string.
If it ends in a designated accept state, the machine accepts. So the key difference is whether
the computation path is fully determined by the input string, or whether there's some
additional nondeterminism involved. But there's another difference, and it has to do with size.
Obviously, any DFA can be made into an "NFA with no nondeterminism", so any language
accepted by a DFA can be accepted by an NFA. Conversely, given an NFA, at any point in
the string we can look at all the computation paths up to that point and form the set of
states that at least one computation path is in; these sets of states can be taken as the
states of a DFA, since the transition between such sets is deterministic. (If it's not obvious
from this explanation, think about it a bit.) So the languages recognized by DFAs and those
recognized by NFAs are identical.
However, if you take an NFA with n states and make a DFA out of it using this method, that
DFA can have up to 2^n states, and for some languages you can't actually do better than
that. It's also proven that the languages that can be described by regular expressions are
the same set as those that can be recognized by DFAs or NFAs; but given a regular
expression of length n, the smallest equivalent NFA has O(n) states, while the smallest DFA
in general has O(2^n) states.
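The subset construction described above can be sketched directly in code. This is a simplified illustration (no ε-transitions; names are mine): each DFA state is a frozen set of NFA states, and we explore only the sets actually reachable from the start set.

```python
from itertools import chain

def nfa_to_dfa(nfa, start, alphabet):
    """Subset construction. nfa maps (state, symbol) -> set of next states.
    Returns the reachable DFA states (each a frozenset of NFA states) and
    the DFA transition table."""
    start_set = frozenset([start])
    dfa_states = {start_set}
    dfa_trans = {}
    stack = [start_set]
    while stack:
        current = stack.pop()
        for sym in alphabet:
            # Union of all NFA moves from any state in the current set.
            nxt = frozenset(chain.from_iterable(
                nfa.get((q, sym), set()) for q in current))
            dfa_trans[(current, sym)] = nxt
            if nxt not in dfa_states:
                dfa_states.add(nxt)
                stack.append(nxt)
    return dfa_states, dfa_trans

# NFA for "strings over {0,1} containing 01": only 3 DFA states are
# reachable here, far fewer than the worst-case 2^3 = 8.
nfa = {(0, '0'): {0, 1}, (0, '1'): {0}, (1, '1'): {2}}
states, trans = nfa_to_dfa(nfa, 0, '01')
```

In the worst case the construction really does produce exponentially many states, but in practice only the reachable subsets are built, as here.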
Q. Define phrase structure grammar. State and explain different types of phrase structure grammar. How
can you tie up the concept of grammar, languages and FSA together? Give details with examples.
Ans:
Solution: The only final state of M1 is s0. The strings that take s0 to
itself are those consisting of zero or more consecutive 1s. Hence,
L(M1) = {1^n | n = 0, 1, 2, . . .}.
The only final state of M2 is s2. The only strings that take s0 to s2
are 1 and 01. Hence, L(M2) = {1, 01}.
The final states of M3 are s0 and s3. The only strings that take s0 to
itself are λ, 0, 00, 000, . . . , that is, any string of zero or more
consecutive 0s. The only strings that take s0 to s3 are a string of
zero or more consecutive 0s, followed by 10, followed by any
string. Hence, L(M3) = {0^n, 0^n 10x | n = 0, 1, 2, . . . , and x is any
string}.
Q. a). Construct a phrase structure grammar that generates all signed decimal
numbers, consisting of a sign, either+ or - ; a nonnegative integer; and a
decimal fraction that is either the empty string or a decimal point followed by a
positive integer, where initial zeros in an integer are allowed.
b). Give the BNF form of this grammar.
Ans:
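One possible grammar satisfying these requirements (a sketch, not the only valid answer; the BNF form for part b is shown, and the same productions, read as rewrite rules, give the phrase-structure grammar for part a):

```
<signed decimal number> ::= <sign><integer><fraction>
<sign>                  ::= + | -
<integer>               ::= <digit> | <integer><digit>
<fraction>              ::= <empty> | .<positive integer>
<positive integer>      ::= <nonzero digit>
                          | <integer><nonzero digit>
                          | <nonzero digit><integer>
                          | <integer><nonzero digit><integer>
<digit>                 ::= 0 | <nonzero digit>
<nonzero digit>         ::= 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
<empty>                 ::=
```

Here <integer> allows leading zeros, as required, while <positive integer> must contain at least one nonzero digit somewhere, so fractions like .050 are allowed but .000 is not.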
Q. Identify the difference between FSM and FSA. Describe their functional
workings, characteristics and applications.
Ans:
A finite-state machine (FSM) or finite-state automaton (FSA, plural: automata), or simply
a state machine, is a mathematical model of computation used to design both computer
programs and sequential logic circuits. It is conceived as an abstract machine that can be
in one of a finite number of states. The machine is in only one state at a time; the state
it is in at any given time is called the current state. It can change from one state to another
when initiated by a triggering event or condition; this is called a transition. A particular
FSM is defined by a list of its states and the triggering condition for each transition.
Simple examples are vending machines, which dispense products when
the proper combination of coins is deposited, elevators, which drop riders
off at upper floors before going down, traffic lights, which change sequence
when cars are waiting, and combination locks, which require the input of
combination numbers in the proper order.
Finite-state machines can model a large number of problems, among which
are electronic design automation, communication protocol design,
language parsing and other engineering applications.
Considered as an abstract model of computation, the finite state machine
has less computational power than some other models of computation such
as the Turing machine.
A finite automaton (FA): is a simple idealized machine used to recognize
patterns within input taken from some character set (or alphabet) C. The
job of an FA is to accept or reject an input depending on whether the
pattern defined by the FA occurs in the input.
A finite automaton consists of:
a finite set S of N states
a special start state
a set of final (accepting) states
a transition function that maps each (state, input symbol) pair to a next state
Application of FA:
Finite automata and their variants are used in formal methods (a field of software
engineering), which deals with proving a software system mathematically correct. Automata
are used to model the system, and regular expressions and various logics are used to
specify the requirements. The important question then becomes whether a given model
satisfies the given logic statement (model checking). Automata are also used for proving
decidability of satisfiability problems.
Q. Define the role of Induction, Recursion and Iteration to describe the behaviour
of a procedure. Give examples of code blocks to define their status. (Hints:
Induction in proving program correctness that arises in recursive programs,
recursion in calling of functions and iteration in different loop status).
Ans:
Mathematical Induction
In general, mathematical induction can be used to prove
statements that assert that P(n) is true for all positive integers n,
where P(n) is a propositional function. A proof by mathematical
induction has two parts, a basis step, where we show that P(1) is
true, and an inductive step, where we show that for all positive
integers k, if P(k) is true, then P(k + 1) is true.
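As a toy illustration (the proposition P and the bounds below are my own choices, not from the text), both parts of an induction proof can be spot-checked numerically, here for P(n): 1 + 3 + ... + (2n − 1) = n^2:

```python
def P(n):
    """P(n): the sum of the first n odd numbers equals n squared."""
    return sum(2 * i - 1 for i in range(1, n + 1)) == n * n

# Basis step: P(1) is true.
basis_holds = P(1)

# Inductive step, spot-checked for small k only; a real proof must
# argue P(k) -> P(k+1) for ALL positive integers k.
inductive_holds = all((not P(k)) or P(k + 1) for k in range(1, 100))
```

Checking finitely many cases is evidence, not proof; the point of mathematical induction is that the two steps together cover every positive integer n.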
Recursion:
Sometimes it is difficult to define an object explicitly. However, it
may be easy to define this object in terms of itself. This process is
called recursion. For instance, the picture shown in Figure 1 is
produced recursively. First, an original picture is given. Then a
process of successively superimposing centered smaller pictures
on top of the previous pictures is carried out.
We can use recursion to define sequences, functions, and sets. In
Section 2.4, and in most beginning mathematics courses, the
terms of a sequence are specified using an explicit formula. For
instance, the sequence of powers of 2 is given by a_n = 2^n for n =
0, 1, 2, . . . . Recall from Section 2.4 that we can also define a
sequence recursively by specifying how terms of the sequence
are found from previous terms. The sequence of powers of 2 can
also be defined by giving the first term of the sequence, namely,
a_0 = 1, and a rule for finding a term of the sequence from the
previous one, namely, a_{n+1} = 2a_n for n = 0, 1, 2, . . . . When we
define a sequence recursively by specifying how terms of the
sequence are found from previous terms,
we can use induction to prove results about the sequence.
When we define a set recursively, we specify some initial
elements in a basis step and provide a rule for constructing new
elements from those we already have in the recursive step. To
prove results about recursively defined sets we use a method
called structural induction.
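The recursively defined sequence a_0 = 1, a_{n+1} = 2a_n translates directly into code. A sketch in Python (function names are mine) contrasting recursion, where the definition is followed literally, with iteration, where a loop builds each term from the previous one:

```python
def pow2_recursive(n):
    """a_0 = 1, a_{n+1} = 2 * a_n -- the recursive definition, verbatim."""
    if n == 0:                          # basis step
        return 1
    return 2 * pow2_recursive(n - 1)    # recursive step

def pow2_iterative(n):
    """The same sequence computed by iteration (a loop)."""
    a = 1
    for _ in range(n):
        a *= 2
    return a
```

Both agree with the explicit formula a_n = 2^n; induction is exactly the tool for proving that agreement for all n.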
1. Recurrence Relation
Recursive Definition of sequences, Solution of linear recurrence, Solution of
Non-linear recurrence, Application to Algorithm Analysis.
1. Recursive Definition of sequences
In some recursive definitions of functions, the values of the
function at the first k positive integers are specified, and a rule is
given for determining the value of the function at larger integers
from its values at some or all of the preceding k integers. That
recursive definitions defined in this way produce well-defined
functions follows from strong induction.
Recall from Section 2.4 that the Fibonacci numbers, f_0, f_1, f_2, . . . ,
are defined by the equations f_0 = 0, f_1 = 1, and
f_n = f_{n-1} + f_{n-2}
for n = 2, 3, 4, . . . . [We can think of the Fibonacci number f_n
either as the nth term of the sequence of Fibonacci numbers f_0,
f_1, . . . or as the value at the integer n of a function f(n).] We can
use the recursive definition of the Fibonacci numbers to prove
many properties of these numbers. We give one such property in
Example 4.
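The recursive definition of the Fibonacci numbers also translates directly into code. A sketch (memoization added so the naive recursion does not take exponential time; this is an illustration, not part of the textbook solution):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """f_0 = 0, f_1 = 1, f_n = f_{n-1} + f_{n-2}.
    lru_cache memoizes results, so the recursion runs in O(n)."""
    if n < 2:
        return n                       # the two basis equations
    return fib(n - 1) + fib(n - 2)     # the recursive equation
```

One can also spot-check the inequality of Example 4, f_n > α^(n−2) for n ≥ 3 with α = (1 + √5)/2, numerically for small n before proving it by strong induction.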
EXAMPLE 4 Show that whenever n ≥ 3, f_n > α^(n−2), where α = (1 + √5)/2.
Solution: We can use strong induction to prove this inequality. Let
P(n) be the statement f_n > α^(n−2). We want to show that P(n) is true
whenever n is an integer greater than or equal to 3.
BASIS STEP: First, note that
α < 2 = f_3,  α^2 = (3 + √5)/2 < 3 = f_4,
so P(3) and P(4) are true.
Digital System Design
Q. Discuss about the major levels of abstraction for VLSI design
process. Explain and list out the steps for Design abstractions.
Ans:
Karnaugh Maps
Why Do You Need To Know About Karnaugh Maps?
What Is a Karnaugh Map?
Using Karnaugh Maps
Some Observations
Problems
Why Do You Need To Know About Karnaugh Maps?
Karnaugh Maps are used for many small design problems. It's true
that many larger designs are done using computer implementations of
different algorithms. However designs with a small number of variables
occur frequently in interface problems and that makes learning Karnaugh
Maps worthwhile. In addition, if you study Karnaugh Maps you will gain a
great deal of insight into digital logic circuits.
In this section we'll examine some Karnaugh Maps for three and
four variables. As we use them be particularly tuned in to how they are
really being used to simplify Boolean functions.
There is a small surprise in one grouping above. The lower left and
the lower right 1s actually form a group. They differ only in having B and
its inverse. Consequently they can be combined. You will have to imagine
that the right end and the left end are connected.
So far we have focused on K-maps for three variables. Karnaugh
Maps are useful for more than three variables, and we'll look at how to
extend ideas to four variables here. Shown below is a K-map for four
variables.
Imagine moving around in the Karnaugh Map. Every time you cross a
horizontal or vertical boundary one - and only one - variable changes
value.
The two pairs of variables - WX and YZ - both change in the same
pattern.
Otherwise, if you can understand a Karnaugh Map for a three-variable
function, you should be able to understand one for a four-variable
function. Remember these basic rules that apply to Karnaugh maps of any
size.
In a Karnaugh Map of any size, crossing a vertical or horizontal cell
boundary is a change of only one variable - no matter how many
variables there are.
Each single cell that contains a 1 represents a minterm in the
function, and each minterm can be thought of as a "product" term
with N variables.
To combine variables, use groups of 2, 4, 8, etc. A group of 2 in an
N-variable Karnaugh map will give you a "product" term with N-1
variables. A group of 4 will have N-2 variables, etc.
You will never have a group of 3, a group of 5, etc. Don't even think
about it. See the points above.
Let's look at some examples of groups in a 4-variable Karnaugh Map.
Example 1 - A Group of 2
Here is a group of 2 in a 4-variable map.
Note that Y and Z are 00 and 01 at the top of the two columns in which
you find the two 1s. The variable, Z, changes from a 0 to a 1 as you move
from the left cell to the right cell. Consequently, these two 1s are not
dependent upon the value of Z, and Z will not appear in the product term
that results when we combine the 1s in this group of 2. Conversely, W, X
and Y will be in the product term. Notice that in the row in which the 1s
appear, W = 0 and X = 1. Also, in the two columns in which the 1s appear
we have Y = 0. That means that the term represented by these two cells
is W'XY' (Z drops out because it takes both values within the group).
Problem
P1. Here is a Karnaugh map with two entries. Determine the product
term represented by this map.
Then these two terms can be combined to give a single, larger group.
Notice how making the grouping larger reduces the number of
variables in the resulting terms. That simplification helps when you start
to connect gates to implement the function represented by a Karnaugh
map.
By now you should have inferred the rules for getting the sum-of-products form from the Karnaugh map.
The number of ones in a group is a power of 2: 2, 4, 8, etc.
If a variable takes on both values (0 and 1) for different entries
(1s) in the Karnaugh Map, that variable will not be in the
sum-of-products form. Note that the variable should be one in half of
the K-Map ones and zero (inverted) in the other half.
If a variable is always 1 or always 0 across the entries (it appears
either inverted in all entries that are one, or always not inverted),
then that variable appears in that form in the sum-of-products form.
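The rules above can be sketched as a small helper (a simplified illustration; names are mine): a variable appears in the product term only if it is constant across every 1-cell of the group, complemented when that constant is 0.

```python
def term_from_group(cells, names):
    """Given the cells of a valid K-map group (each cell a tuple of 0/1
    values, one per variable) and the variable names, return the product
    term. Variables that change within the group drop out; constant
    variables appear, primed (complemented) when their value is 0."""
    term = []
    for i, name in enumerate(names):
        values = {cell[i] for cell in cells}
        if len(values) == 1:                       # constant across the group
            term.append(name if values.pop() == 1 else name + "'")
    return ''.join(term)

# The group of 2 from Example 1: cells (W,X,Y,Z) = 0100 and 0101.
example_term = term_from_group([(0, 1, 0, 0), (0, 1, 0, 1)], "WXYZ")
```

For the Example 1 group this yields W'XY': Z varies and drops out, matching the rule that a group of 2 in a 4-variable map gives a 3-variable product term.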
Now, let's see if you can apply those rules.
Problem
P2. Here is a Karnaugh Map with four entries. What is the sum-of-products form for the four ones shown?
P3. Here is a Karnaugh Map with four entries. What is the sum-of-products form for the four ones shown?
P4. Here is a Karnaugh Map with four entries. What is the sum-of-products form for the four ones shown?
P5. Here is a Karnaugh Map with eight entries. What is the sum-of-products form for the ones shown?
*****************
A Logic Problem (3.1)
Here is a truth table.
A B C F
0 0 0 1
0 0 1 0
0 1 0 1
0 1 1 1
1 0 0 1
1 0 1 0
1 1 0 0
1 1 1 0
Your Job:
Your job is to do the following
Using the Karnaugh Map, get the simplest sum-of-products form for this function
************************
A Logic Problem (3.2)
Dr. Abner Mallity has been working with a logic circuit. He has a circuit
that works but he suspects that it can be made simpler. Here is the circuit.
Your Job:
Your job is to do the following
Determine the truth table for the function implemented in this circuit. Fill in the truth table
below.
W X Y Z
0 0 0 0
0 0 0 1
0 0 1 0
0 0 1 1
0 1 0 0
0 1 0 1
0 1 1 0
0 1 1 1
1 0 0 0
1 0 0 1
1 0 1 0
1 0 1 1
1 1 0 0
1 1 0 1
1 1 1 0
1 1 1 1
Using the Karnaugh Map, get the simplest sum-of-products form for this function
******************
A Logic Problem (3.3) - The Solar Collector System - Still Under Construction
Here's a solar heating system representation.
The sun shines out of an intense blue sky onto a solar collector.
The solar collector heats up.
Fans can be used to move the accumulated heat in the collector to
a rock bin - to store heat - or to the house itself.
Generate a truth table for all functions. Here is a blank truth table. FA is TRUE when Fan A
is ON, and FB is TRUE when Fan B is ON.
First, we note that when there is no need for heat to the heated space (H = 0), and the
collector is warmer than the rock bin (CR = 1) we should move heat from the collector to
the rock bin. To do that, turn FAN A ON and B OFF. That allows us to fill in four parts of
the truth table.
H RH CH CR FA FB
0 0  0  0
0 0  0  1
0 0  1  0
0 0  1  1
0 1  0  0
0 1  0  1
0 1  1  0
0 1  1  1
1 0  0  0
1 0  0  1
1 0  1  0
1 0  1  1
1 1  0  0
1 1  0  1
1 1  1  0
1 1  1  1
H RH CH CR FA FB
0 0  0  0
0 0  0  1  1  0
0 0  1  0
0 0  1  1  1  0
0 1  0  0
0 1  0  1  1  0
0 1  1  0
0 1  1  1  1  0
1 0  0  0
1 0  0  1
1 0  1  0
1 0  1  1
1 1  0  0
1 1  0  1
1 1  1  0
1 1  1  1
Actually, we can note that anytime heat is needed and either the Collector or Rock
bin can supply it, we need to turn on Fan A. When we don't need heat, Fan A is
OFF. And, if we need heat (H = 1) and nothing can supply heat we won't turn on
Fan A. Let's put that into the truth table.
H RH CH CR FA FB
0 0  0  0  0  0
0 0  0  1  1  0
0 0  1  0  0  X
0 0  1  1  1  0
0 1  0  0  0
0 1  0  1  X
0 1  1  0  0
0 1  1  1  1
1 0  0  0  0
1 0  0  1  0
1 0  1  0  1
1 0  1  1  1
1 1  0  0  1
1 1  0  1  X
1 1  1  0  1
1 1  1  1  1
Be careful. There may be some Don't Care terms in the truth table. Think carefully
about which conditions are actually possible.
*********************
A Logic Problem (3.4)
Your favorite professor, the ever-ebullient Dr. Abner Mallity, is working on
a little consulting project. As usual, he is going to stiff one of his graduate
students, Willy Nilly or Millie Farad, and get one of them to do the project for
him. They could use some help; they are also working hard to finish a term
project. Here's the story.
You need to do a design for the logic circuit that adds ingredients to an
ice cream sundae. Mallity's client - the NutCase Sundae Company - wants to
W X Y Z A
0 0 0 0 0
0 0 0 1 0
0 0 1 0 0
0 0 1 1 1
0 1 0 0 1
0 1 0 1 1
0 1 1 0 1
0 1 1 1 0
1 0 0 0 1
1 0 0 1 0
1 0 1 0 1
1 0 1 1 0
1 1 0 0 1
1 1 0 1 0
1 1 1 0 1
1 1 1 1 1
Determine the Karnaugh map for the Avocado Slush function (A).
From the Karnaugh map determine the smallest sum-of-products form for the Avocado
Slush function.
covered. Then enter the newly formed number in the next column (a new table).
The new number is the old numbers combined, but where the literals differ, an x is
placed in the position of that literal.
Step 4: Using (3) above, form a second table and repeat the process again until no
further pairing is possible. (On the second repeat, compare numbers to numbers in
the next group that have the same x position.)
Step 5: Terms which were not covered are the prime implicants; these product terms
are ORed together to form the final function. Note: The procedure above gives you
the prime implicants but not the essential prime implicants.
***********************************************
Quine-McCluskey Method
When the number of variables is large, instead of using a Karnaugh map
you can use the Quine-McCluskey method (typically via a CAD tool on a
computer).
The Quine-McCluskey method reduces the minterm expansion of a
function to obtain a minimum sum-of-products form.
The procedure has two steps:
1. Eliminate as many literals as possible from each term by
repeatedly applying xy + xy' = x. The resulting terms are called
prime implicants.
2. Use a prime implicant chart to select a minimum set of
prime implicants such that, when they're ORed together, they
equal the function being simplified and contain a
minimum number of literals.
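Step 1 can be sketched in a few lines of Python (a simplified illustration, not a full Quine-McCluskey implementation; it omits the prime implicant chart of step 2). Terms are bit-strings with '-' marking an eliminated literal, and two terms combine, by xy + xy' = x, when they differ in exactly one position:

```python
from itertools import combinations

def combine(a, b):
    """If terms a and b (strings over '0', '1', '-') differ in exactly one
    non-dash position, return the merged term; otherwise return None."""
    diff = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
    if len(diff) == 1 and a[diff[0]] != '-' and b[diff[0]] != '-':
        i = diff[0]
        return a[:i] + '-' + a[i + 1:]
    return None

def prime_implicants(minterms, nbits):
    """Repeatedly combine terms; terms never combined (checked off) are
    the prime implicants."""
    terms = {format(m, '0%db' % nbits) for m in minterms}
    primes = set()
    while terms:
        merged, used = set(), set()
        for a, b in combinations(sorted(terms), 2):
            c = combine(a, b)
            if c:
                merged.add(c)        # duplicates collapse automatically
                used.update((a, b))  # "check off" combined terms
        primes |= terms - used       # uncombined terms are prime
        terms = merged
    return primes
```

For example, the function with minterms {0, 1, 2, 5, 6, 7} over three variables yields the prime implicants 00-, 0-0, -01, -10, 1-1 and 11-. The duplicate groupings mentioned above (such as (0,1 ; 8,9) versus (0,8 ; 1,9)) produce the same merged string and collapse in the set.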
Group 0, 1, ..., x (column 1) contains the minterms with 0, 1, ..., x ones in their
binary representation.
The blue column groups minterms from adjacent groups. For example, the top
cell groups together minterms from group 0 and group 1 (first two rows); the
next cell groups minterms from group 1 and group 2 (e.g. 0-10 (2,6), formed from
2: 0010 and 6: 0110). The yellow column continues grouping minterms from
the previous (blue) column.
o grouping allows one to look for combinations of minterms in adjacent
groups.
o i.e. the duplicate terms (0,1 ; 8,9) and (0,8 ; 1,9) were formed by combining the
same 4 minterms in different orders.
Finally we compare the minterms from the two groups in the last column, but
there is no more reduction, so the procedure terminates.
Each time a term combines with another term, it is checked off.
Terms which have not been checked off are prime implicants.
Since every minterm has been included in at least one of the prime implicants,
the function is equal to the sum of its prime implicants.
The last row of the table was reduced using the consensus theorem; however,
instead of using this theorem to reduce further, we can use the prime implicant
chart, to be discussed next.
**************************************************
Introduction
In order to understand the tabular method of minimization, it is best
that you understand the numerical assignment of Karnaugh map cells.
Now consider the following:
This is because the FIRST RULE of the Tabular method for two terms
to combine, and thus eliminate one variable, is that they must differ
in only one digit position.
Bear in mind that when two terms are combined, one of the combined terms has one
digit more at logic 1 than the other combined term. This indicates that the number of
1's in a term is significant and is referred to as its index.
For example: f(A, B, C, D)
0000...................Index 0
0010, 1000.............Index 1
1010, 0011, 1001.......Index 2
1110, 1011.............Index 3
1111...................Index 4
The necessary condition for combining two terms is that the indices of the two terms
differ by one, and the terms themselves differ in only one digit position.
Example 1:
Consider the function: Z = f(A,B,C) =
C+A
+A C
To make things easier, change the function into binary notation with index value and
decimal value.
Tabulate the index groups in a column and insert the decimal value alongside.
From the first list, we combine terms that differ by 1 digit only from one index group
to the next. These terms from the first list are then separated into groups in the second
list. Note that the ticks are just there to show that one term has been combined with
another term. From the second list we can see that the expression is now reduced to: Z
=
+
+ C+A
From the second list note that the term having an index of 0 can be combined with the
terms of index 1. Bear in mind that the dash indicates a missing variable
and must line up in order to get a third list. The final simplified expression is: Z =
Bear in mind that any unticked terms in any list must be included in the final
expression (none occurred here except from the last list). Note that the only prime
implicant here is Z = .
The tabular method reduces the function to a set of prime implicants.
Note that the above solution can be derived algebraically. Attempt this in your notes.
Example 2:
Consider the function f(A, B, C, D) =
in decimal form.
+ D + BD + A
+ AB
The chart is used to remove redundant prime implicants. A grid is prepared having all
the prime implicants listed at the left and all the minterms of the function along the
top. Each minterm covered by a given prime implicant is marked in the appropriate
position.
From the above chart, BD is an essential prime implicant. It is the only prime
implicant that covers the minterm decimal 15 and it also includes 5, 7 and 13.
is
also an essential prime implicant. It is the only prime implicant that covers the
minterm denoted by decimal 10 and it also includes the terms 0, 2 and 8. The other
minterms of the function are 1, 3 and 12. Minterm 1 is present in
and D.
Similarly for minterm 3. We can therefore use either of these prime implicants for
these minterms. Minterm 12 is present in A
and AB , so again either can be used.
+ BD +
+A
Problems
Z = f(A,B,C,D) = Σ(0000, 0010, 0100, 0101, 1000, 1001, 1100) (minterms in binary).
Find the minimum sum-of-products form.
**************************************************
As I scan the software testing stories of the day, I'm amazed at the frequency of
certain misconceptions. While there are too many to list, I wanted to share five of the
most common testing myths (in my brief experience). The first three I find to be
prevalent in mainstream news articles, while the other two are more common within the
tech industry in general.
Take a look and see if you agree with me.
Myth 1. Testing is boring: It's been said that "Testing is like sex. If it's not fun, then
you're doing it wrong." The myth of testing as a monotonous, boring activity is seen
frequently in mainstream media articles, which regard testers as the assembly line
workers of the software business. In reality, testing presents new and exciting
challenges every day. Here's a nice quote from Michael Bolton that pretty much sums it
up:
"Testing is something that we do with the motivation of finding new information. Testing
is a process of exploration, discovery, investigation, and learning. When we configure,
operate, and observe a product with the intention of evaluating it, or with the intention of
recognizing a problem that we hadn't anticipated, we're testing. We're testing when
we're trying to find out about the extents and limitations of the product and its design,
and when we're largely driven by questions that haven't been answered or even asked
before."
Myth 2. Testing is easy: It's often assumed testing cannot be that difficult, since
everyday users find bugs all the time. In truth, testing is a very complex craft that's not
suited for your average Joe. Here's Google's Patrick Copeland on the qualities of a
great tester:
"It's a mindset and a passion. From the 100s of interviews I've done, great boils down
to: 1) a special predisposition to finding problems and 2) a passion for testing to go
along with that predisposition. In other words, they love testing and they are good at
it. They also appreciate that the challenges of testing are, more often than not, equal to
or greater than the challenges of programming. A great career tester with the testing
gene and the right attitude will always be able to find a job. They are gold."
Myth 3. Testers only find bugs: Yes, testers do find bugs, but that's not their sole
purpose. Here's a good summary on this myth from Ankur of freesoftwaretesting.info:
"This view of the tester's role is very limited and adds no value for the customer. Testers
are experts with the system, application, or product under test. Unlike the developers,
who are responsible for a specific function or component, the tester understands how
the system works as a whole to accomplish customer goals. Testers understand the
value added by the product, the impact of the environment on the product's efficiency,
and the best ways to get the most out of the product."
Myth 4. Machines will make human testers obsolete: With advances in automated
technology, it's often assumed that computers will someday render human testers
obsolete. But since the ultimate users of an application are not robots or machines, but
rather live human beings, it stands to reason that human testing will always have an
important role to play. Here's testing author James Whittaker on the importance of
manual testing:
"Test automation is often built to solve too big a problem. This broad scope makes
automation brittle and flaky because it's trying to do too much. There are certain things
that automation is good at and certain things humans are good at, and it seems to me a
hybrid approach is better. What I want is automation that makes my job as a human
easier. Automation is good at analyzing data and noticing patterns. It is not good at
determining relevance and making judgment calls. Fortunately, humans excel at
judgment."
Myth 5. Testers don't get along with developers: It's not hard to see why this myth
persists. As testing guru James Bach once wrote: "Anyone who creates a piece of work
and submits it for judgment is going to feel judged. That's not a pleasant feeling. And the
problem is compounded by testers who glibly declare that this or that little nit or nat is a
defect, as if anything they personally don't like is a quality problem for everybody."
What's not widely known is that many testers are actually former developers (and
vice-versa), so there's a mutual understanding and appreciation for the challenges each
camp faces. This is not the case inside all companies, but to say that the majority of
testers and developers don't get along is false, in our experience.
+++++++++++++++++
Modeling
Modeling is the process of creating a representation of the domain or the
software. Various modeling approaches can be used during both requirements
engineering and design, for example to show how the system's components interact.
*************
Context models
Interaction models
Structural models
Behavioral models
Model-driven engineering
System modeling
System modeling is the process of developing abstract models of a
system, with each model presenting a different view or perspective of that
system.
Context models
Context models are used to illustrate the operational context of a system - they show
what lies outside the system boundaries.
Social and organisational concerns may affect the decision on where to position
system boundaries.
Architectural models show the system and its relationship with other systems.
==================================================
OOSE:
==================================================
NON-Functional requirement:
Operating constraints
List any run-time constraints. This could include system resources, people, needed
software,
Platform constraints
Discuss the target platform. Be as specific or general as the user requires. If the user
doesn't care, there are still platform constraints.
Accuracy and Precision
Requirements about the accuracy and precision of the data. (Do you know the
difference?) Beware of 100% requirements; they often cost too much.
Modifiability
Requirements about the effort required to make changes in the software. Often, the
measurement is personnel effort (person- months).
Portability
The effort required to move the software to a different target platform. The
measurement is most commonly person-months or % of modules that need changing.
Reliability
Requirements about how often the software fails. The measurement is often expressed
in MTBF (mean time between failures). The definition of a failure must be clear. Also,
don't confuse reliability with availability which is quite a different kind of
requirement. Be sure to specify the consequences of software failure, how to protect
from failure, a strategy for error detection, and a strategy for correction.
Security
One or more requirements about protection of your system and its data. The
measurement can be expressed in a variety of ways (effort, skill level, time, ...) to
break into the system. Do not discuss solutions (e.g. passwords) in a requirements
document.
Usability
Requirements about how difficult it will be to learn and operate the system. The
requirements are often expressed in learning time or similar metrics.
Legal
There may be legal issues involving privacy of information, intellectual property
rights, export of restricted technologies, etc.
Others
There are many others. Consult your resources.
=========================================
afterward, we did not keep track of any of them or of any other NFRs arising during the design
process." It seems natural to think a relationship exists between measurability and continual (or at
least regular) documentation updates, but confirming this link will require further studies.
So, we can see that NFRs are more tacit or even hidden than documented. When they are
documented, their accuracy and timeliness are seriously compromised. This situation can be
explained in terms of cost and benefit. Respondent C stated it plainly: "I rarely appropriately
document my projects, basically because it costs money." If practitioners don't perceive a clear
benefit from shaping NFRs into fully engineered artifacts, NFRs will remain elusive.
================================
For example, if you want to build a prototype web site you can go either way:
The Throw Away Prototype - you find a template as close as possible and hammer out
the HTML elements to make it look sort of like what you want. You hack a back-end
service with no security and minimal performance.
The result - after a very short time you can show someone what you want and (hopefully)
get the investment / a co-founder / a new girlfriend.
Then you throw the whole thing away and start developing the real thing.
The Evolutionary Prototype - you carefully weigh all the options and select the
infrastructure and architecture based on scalability, security, technological barriers, package
availability and other considerations. After that you start developing the prototype using the
selected infrastructure and architecture, and build one user story after the other, with
automatic tests for each story.
The result - after a pretty long time you can show something which can actually grow to a
market-worthy product. You can show off your craftsmanship and technological genius and
get the investment / a co-founder / a new girlfriend.
Then you can continue to build and extend the prototype or (as is the usual case) throw it
away and build a new one.
=========
Software prototyping refers to building software application prototypes
which display the functionality of the product under development but may
not actually hold the exact logic of the original software.
Software prototyping is becoming very popular as a software development
model, as it enables understanding of customer requirements at an early stage
of development. It helps get valuable feedback from the customer and helps
software designers and developers understand what exactly is expected from
the product under development.
The prototype does not always hold the exact logic used in the actual software
application, and it is an extra effort to be considered under effort estimation.
Prototyping is used to allow the users evaluate developer proposals and try them
out before implementation.
It also helps understand the requirements which are user specific and may not
have been considered by the developer during product design.
Revise and enhance the Prototype: The feedback and the review comments
are discussed during this stage, and some negotiations happen with the
customer based on factors like time and budget constraints and the technical
feasibility of the actual implementation. The changes accepted are again
incorporated in the new Prototype developed, and the cycle repeats until
customer expectations are met.
Pros
Users are involved in the product even before its implementation.
Defects and errors can be detected much earlier.
Quicker user feedback is available, leading to better solutions.
Missing functionality can be identified easily.
Whether a proposed solution is technically feasible can be identified early.
Cons
Risk of insufficient requirement analysis owing to too much dependency on the prototype.
Practically, this methodology may expand the scope of the system beyond the original plans.