b) Accessing methods
c) Degree of associativity
d) Processing alternatives for information
Linear :
The values are arranged in a linear fashion, i.e. in sequence.
Eg.- Arrays, linked lists, stacks, queues, etc.
Non-Linear :
This type is the opposite of linear: the data values are not arranged in sequence.
Eg.- Trees, Graphs, Tables, Sets, etc.
Gate- ADA & DSA by Nitesh Dubey
Other Classifications..
Homogeneous
Non-homogeneous
Dynamic
Static
Data Types
A data structure is a collection of different data types.
Array
An Array is a collection of elements having the following
properties:
1. All elements are of the same data type.
2. Elements are stored in contiguous memory locations.
3. The size is fixed when the array is declared.
4. Any element can be accessed directly through its index.
Types of array: one-dimensional and multi-dimensional.
Array..
Advantages:
Retrieval of stored elements is efficient using the index value.
The searching technique is very simple.
Disadvantages:
Insertion & deletion at a random location are complicated.
For storing data, a large continuous free block of memory is required.
Memory fragmentation occurs if elements are removed randomly.
Structure
A structure is a group of items in which each item is identified
by its own identifier, and each is known as a member of the structure.
Thus a structure is a collection of different items of various
data types under a unique name.
Syntax of structure in C is as under:
struct name
{
    type1 data1;
    type2 data2;
    ...
    typen datan;
};
Union
Unions are very similar to structures except for the way
member data is stored.
In union, the members are sharing the common memory
location. Thus, it is used to save memory.
Syntax of union in C is as under:
union name
{
    type1 data1;
    type2 data2;
    ...
    typen datan;
};
Union..
Example from DOS.h:
union REGS
{
    struct WORDREGS x;
    struct BYTEREGS h;
};
Function
A function is a set of instructions to carry out a particular
task.
After its execution it returns a single value.
Algorithm
An algorithm is a computational method for solving a
problem.
It is a sequence of steps that take us from the input to
the output.
An algorithm must be
Correct: It should provide a correct solution according to the
specifications.
Finite: It should terminate.
General: It should work for every instance of a problem
Efficient: It should use few resources (such as time or
memory).
Analysis of Algorithms
It is the study of their efficiency.
It determines the amount of resources necessary
to execute an algorithm.
Most algorithms are designed to work with
inputs of arbitrary length.
Quantifying the resources required.
Analysis of Algorithms.
Measures of resource utilization (efficiency):
Execution time
Memory space
time complexity
space complexity
Observation :
The larger the input data the more the resource
requirement:
Complexities are functions of the amount of
input data (input size).
Space Complexity
Space complexity is defined as the amount of
memory a program needs to run to completion.
Space complexity = instruction space + data space + stack space
Time Complexity
Time complexity is the amount of computer time a
program needs to run.
How do we measure?
1. Count a particular operation (operation counts)
Asymptotic Notation
Describes the behavior of the time or space
complexity for large instance characteristics.
Major notations are:
Big Oh (O) notation provides an upper bound for the
function.
Omega (Ω) notation provides a lower bound.
Theta (Θ) notation is used when an algorithm can be
bounded both from above and below by the same
function.
Big Oh (O) Examples
Bubble Sort: T(n) = O(n²).
Linear Search: T(n) = O(n).
2n² = O(n²).
Omega (Ω) Examples
Find Max / Min: T(n) = Ω(n).
Theta (Θ) Examples
Find Max / Min: T(n) = Θ(n).
Relations b/w Θ, Ω, O
f(n) = Θ(g(n)) iff f(n) = O(g(n)) and f(n) = Ω(g(n)).
In practice,
asymptotically tight bounds are obtained from
asymptotic upper and lower bounds.
Practical Complexities
n      log n   n log n   n²       n³        2ⁿ
1      0       0         1        1         2
2      1       2         4        8         4
4      2       8         16       64        16
8      3       24        64       512       256
16     4       64        256      4096      65536
32     5       160       1024     32768     4294967296
100    ~7      700       10000    1000000   1267650600228229401496703205376
Recursion
Recursive Function
A method of programming whereby a function directly or
indirectly calls itself.
Recursion is often presented as an alternative to iteration.
Example of Recursion
Recursion for finding the factorial of a given number:
int fact(int x)
{
    if (x == 0)
        return 1;
    else
        return x * fact(x - 1);
}
Recursion Tree
A tree representation of recursion calls.
A method to analyze the complexity of an
algorithm by diagramming the recursive
function calls.
Tail Recursion
Binary Recursion
Exponential Recursion
Nested Recursion
Mutual Recursion
Direct Recursion
int factorial (int x)
{
if (x==0)
return(1);
else
return (x * factorial(x-1));
}
Indirect Recursion
#include <stdio.h>
void fun2(void);   /* forward declaration */
void fun1(void)
{
    static int i = 0;
    if (i < 5)
    {
        i++;
        fun2();
    }
}
void fun2(void)
{
    printf("Recursion from fun2 to fun1, which is indirect recursion\n");
    fun1();
}
int main(void)
{
    fun1();
    return 0;
}
Linear Recursion
A linear recursive function is a function that
makes only a single call to itself each time the
function runs
(as opposed to one that would call itself multiple
times during its execution),
e.g. fact(x) above, which could equally compute factorial(n-1) * n.
Tail Recursion
A recursive procedure where the recursive call is the last action to
be taken by the function.
Tail recursive functions are generally easy to transform into
iterative functions.
Tail Recursion..
As such, tail recursive functions can often be easily
implemented in an iterative manner;
by taking out the recursive call and replacing it with a
loop, the same effect can generally be achieved.
A good compiler can recognize tail recursion and
convert it to iteration in order to optimize the
performance of the code.
Tail Recursion.
to compute the GCD, or Greatest Common Divisor, of two
numbers:
//C++
int gcd(int m, int n)
{
int r;
if (m < n)
return gcd(n,m);
r = m%n;
if (r == 0)
return(n);
else
return(gcd(n,r));
}
Tail Recursion.
Is the factorial method a tail recursive method?
int fact(int x)
{
if (x==0)
return 1;
else
return x*fact(x-1);
}
When returning back from a recursive call, there is still one
pending operation, multiplication.
Therefore, factorial is a non-tail recursive method.
Tail Recursion.
Is this method tail recursive?
void fun1(int i)
{
    if (i > 0)
    {
        printf("%d ", i);
        fun1(i - 1);
    }
}
It is tail recursive.
Tail Recursion.
Is the following program tail recursive?
void prog(int i) {
    if (i > 0) {
        prog(i - 1);
        printf("%d ", i);
        prog(i - 1);
    }
}
No, because there is a recursive call earlier than the last one.
In tail recursion, the recursive call should be the last statement, and
there should be no earlier recursive calls, whether direct or indirect.
Tail Recursion.
Advantage of Tail Recursive Method
Tail Recursive methods are easy to convert to iterative.
void tail(int i) {
    if (i > 0)
    {
        printf("%d ", i);
        tail(i - 1);
    }
}
Binary Recursion
A recursive function which calls itself twice during
the course of its execution.
The mathematical combinations operation is a good
example of a function that can quickly be
implemented as a binary recursive function. The
number of combinations, often represented as nCk,
counts the ways of choosing k elements out of a set
of n elements.
Exponential Recursion
Recursion where more than one call is made to the
function from within itself. This leads to exponential
growth in the number of recursive calls.
An exponential recursive function is one that, if you
were to draw out a representation of all the function
calls, would have an exponential number of calls in
relation to the size of the data set
(exponential meaning that for n elements there would be
O(aⁿ) function calls, where a is a positive number).
Nested Recursion
In nested recursion, one of the arguments to the
recursive function is the recursive function itself.
These functions tend to grow extremely fast.
Mutual Recursion
A recursive function doesn't necessarily need to call
itself.
Some recursive functions work in pairs or even larger
groups.
Mutual Recursion
//C++
int is_even(unsigned int n)
{
if (n==0)
return 1;
else
return(is_odd(n-1));
}
int is_odd(unsigned int n)
{
return (!is_even(n));
}
Recursions.
Recursion is a powerful problem-solving technique that
often produces very clean solutions to even the most
complex problems.
Recursive solutions can be easier to understand and to
describe than iterative solutions.
Recursion works the best when the algorithm and/or data
structure that is used naturally supports recursion.
One such data structure is the tree, One such algorithm is
the binary search algorithm.
Recursion..
Recursive solutions may involve extensive overhead
because they use function calls.
Recursion..
Therefore, if the recursion is deep, say, factorial(1000),
we may run out of memory.
Because of this, it is usually best to develop iterative
algorithms when we are working with large numbers.
PROS
Clearer logic
Often more compact code
Often easier to modify
Allows for complete analysis of runtime
performance
CONS
Overhead costs
Time Consuming
Additional Memory Requirement.
Sample Question-1
Consider following recursive function in C/ C++
int func (int n)
{
static int i=1;
if (n>=5) return n;
n=n + 1;
i++;
int x = func (n);
Stmt : return n;
}
Questionlinked- Answers
1. 2 (D)
2. 4 (C)
3. 5 (B)
4. 2 (B)
5. 5 (A)
Sample Question-2
Consider following recursive function in C/ C++
int func(int a, int b)
{
if(b==0)
return(a);
else
return(1+ func (a, b-1));
}
Questions..
1. and 2. — options for each: (A) 5  (B) 1  (C) 3  (D) 2  (E) None
Answers:
1. 5 (A)
2. 3 (C)
Sample Question-3
Find the output of following code of C/ C++
void f(void)
{
int s = 0;
s++;
if (s == 10)
return;
printf("%d ", s);
f( ); }
Output: 1 1 1 ... (infinite; s is a local variable re-initialized
to 0 on every call, so it never reaches 10)
int main(void)
{
f( );
return 0;
}
Sample Question-4
Find the output of following code of C/ C++
void f(void)
{
static int s = 0;
s++;
if (s == 10)
return;
printf("%d ", s);
f( ); }
int main(void)
{
f( );
return 0;
}
Output: 1 2 3 4 5 6 7 8 9 (s is static, so it retains its value
across calls and the recursion stops at 10)
Sample Question-5
Find the output of following code of C/ C++
void f(int i)
{
if( i < 10)
{
f( i + 1 );
printf("%d ", i);
}
}
int main(void)
{
f( 0 );
return 0;
}
Output: 9 8 7 6 5 4 3 2 1 0 (printing happens after the recursive
call, on the way back)
Sample Question-6
Find the output of following code of C++
void func6()
{
    char ch;
    cout << "Enter a character ('.' to end program): ";
    cin >> ch;
    if (ch != '.')
    {
        func6();
        cout << ch;
    }
}
int main()
{
    func6();
    cout << "\n";
}
For input chars: a b c d e f .
Output chars: fedcba
Sample Question-7
The minimum number of moves needed to solve the Towers of Hanoi
puzzle with 5 disks is:
A. 27   B. 32   C. 31   D. 28
Ans:- 31 (C), since T(n) = 2ⁿ − 1 = 2⁵ − 1 = 31.
(In the legend, the priests could only move one disk at a time, and
could never put a larger disk on top of a smaller one. When they
completed this task, the world would end!)
Linear Search
Searching is the process of determining whether or not a
given value exists in a data structure or a storage media.
We discuss two searching methods on one-dimensional
arrays: linear search and binary search.
The linear (or sequential) search algorithm on an array is:
Sequentially scan the array, comparing each array item with the searched value.
If a match is found, return the index of the matched element; otherwise return -1.
Linear Search..
//Function
public static int linearSearch(Object[] array, Object key)
{
for(int k = 0; k < array.length; k++)
if(array[k].equals(key))
return k;
return -1;
}
Linear Search
/* Linear search */
for (i = 0; i < N; i++)
{
    if (keynum == array[i])
    {
        found = 1;
        break;
    }
}
if (found == 1)
    printf("SUCCESSFUL SEARCH\n");
else
    printf("Search is FAILED\n");
Linear Search
Data structure - Array
Worst case performance O(n); best case performance O(1)
Bubble Sort
Sorting takes an unordered collection and makes it an ordered one.
Bubble sort algorithm:
Compare adjacent elements. If the first is greater than the second, swap
them.
Do this for each pair of adjacent elements, starting with the first two and
ending with the last two. At this point the last element should be the
greatest.
Repeat the steps for all elements except the last one.
Keep repeating for one fewer element each time, until there are no
more pairs to compare.
Bubble Sort..
for i = 1:n,
    swapped = false
    for j = n:i+1,
        if a[j] < a[j-1],
            swap a[j, j-1]
            swapped = true
    break if not swapped
end
Bubble Sort
Data structure -Array
Worst case performance O(n²)
Best case performance O(n)
Bubble Sort
#include <stdio.h>
#define MAXSIZE 10
int main()
{
    int array[MAXSIZE];
    int i, j, N, temp;
    printf("Enter the value of N\n");
    scanf("%d", &N);
    printf("Enter the elements one by one\n");
    for (i = 0; i < N; i++)
        scanf("%d", &array[i]);
    printf("Input array is\n");
    for (i = 0; i < N; i++)
        printf("%d\n", array[i]);
    /* bubble sort */
    for (i = 0; i < N - 1; i++)
        for (j = 0; j < N - i - 1; j++)
            if (array[j] > array[j + 1])
            {
                temp = array[j];
                array[j] = array[j + 1];
                array[j + 1] = temp;
            }
    printf("Sorted array is\n");
    for (i = 0; i < N; i++)
        printf("%d\n", array[i]);
    return 0;
}
Selection Sort
Algorithm:
Pass through the elements sequentially;
In the i-th pass, we select the element with the smallest value
among a[i..n] and swap it with a[i].
Selection Sort..
for i = 1:n,
    k = i
    for j = i+1:n,
        if a[j] < a[k],
            k = j
    invariant: a[k] smallest of a[i..n]
    swap a[i,k]
    invariant: a[1..i] in final position
end
Selection Sort..
Not stable
Θ(n) swaps
Not adaptive
Insertion Sort
Algorithm:
Start with the result being the first element of the input;
Loop over the input array until it is empty, "removing" the first
remaining (leftmost) element;
Compare the removed element against the current result, starting from
the highest (rightmost) element, and working left towards the lowest
element;
If the removed input element is lower than the current result element,
copy that value into the following element to make room for the new
element below, and repeat with the next lowest result element;
Otherwise, the new element is in the correct location; save it in the cell
left by copying the last examined result up, and start again from step 2
with the next input element.
Gate- ADA & DSA by Nitesh Dubey
Insertion Sort
for i = 2:n,
for (k = i; k > 1 and a[k] < a[k-1]; k--)
swap a[k,k-1]
invariant: a[1..i] is sorted
end
Insertion Sort.
Stable
O(1) extra space
O(n²) comparisons and swaps
Adaptive: O(n) time when nearly sorted
Very low overhead
Divide-and-Conquer
Divide-and-Conquer
The divide-and-conquer strategy solves a problem by:
1. Breaking it into sub-problems that are themselves smaller
instances of the same type of problem.
2. Recursively solving these sub-problems
3. Appropriately combining their answers
The name "divide and conquer" is sometimes applied also to
algorithms that reduce each problem to only one sub problem,
such as the binary search algorithm.
Divide-and-Conquer - Advantages & Implementation Issues
Solving difficult problems
Algorithm efficiency
Parallelism
Memory access
Roundoff control
Explicit stack
Stack size
Choosing the base cases
Sharing Repeated subproblems
Divide-and-Conquer- Examples
Problems Solved by Divide & Conquer :
Binary Search
Merge Sort
Quick Sort
Max-Min Problem
Matrix Multiplication
..etc.
T(n) = T(1),             n = 1
T(n) = aT(n/b) + f(n),   n > 1
Some Examples
Max-Min Problem
Recurrence Relation :
T(n) = 2T(n/2) + 2, where T(1) = 0
Binary Search
Recurrence Relation :
T(n) = T(n/2) + O(1)
Merge Sort (Divide and Conquer)
T(n) = Θ(1),            if n = 1
T(n) = 2T(n/2) + Θ(n),  if n > 1
i.e. Complexity = O(n log₂ n)
Quick Sort
It works as under:
1. Select a value as a pivot element from the given array
   A[1], . . . , A[n].
2. Divide the list into 2 halves based on the pivot value such that
   (elements of first half) < pivot < (elements of second half).
3. Repeat from step 1 for both halves while they are still divisible
   into two.
We hope the pivot is near the median key value in the array, so
that it produces nearly equal halves.
Quick Sort..
Simple steps: partition the numbers into those less than the pivot p
(left part) and those greater (right part).
Best case: each partition splits the array roughly in half
(n → n/2 → n/4 → n/8 → ...), giving O(n log n).
Way of Partitioning
Worst case: each partition removes only one element
(n → n−1 → n−2 → n−3 → ...), giving O(n²).
Therefore,
C11 = a11b11 + a12b21
C12 = a11b12 + a12b22
C21 = a21b11 + a22b21
C22 = a21b12 + a22b22
Matrix Multiplication
Algorithm
for (i = 1; i <= N; i++)
    for (j = 1; j <= N; j++)
    {
        C[i][j] = 0;
        for (k = 1; k <= N; k++)
            C[i][j] = C[i][j] + A[i][k] * B[k][j];
    }
Time analysis
C(i,j) = Σ (k = 1..N) a(i,k) · b(k,j)
Thus T(N) = Σ (i = 1..N) Σ (j = 1..N) Σ (k = 1..N) c = cN³ = O(N³)
Partition each matrix into four blocks:
A = [ A0  A1 ]      B = [ B0  B1 ]
    [ A2  A3 ]          [ B2  B3 ]
R = [ A0·B0 + A1·B2    A0·B1 + A1·B3 ]
    [ A2·B0 + A3·B2    A2·B1 + A3·B3 ]
T(n) = b,              if n = 2
T(n) = 7T(n/2) + an²,  if n > 2   (Strassen's method)
Where,
C11 = P1 + P4 - P5 + P7
C12 = P3 + P5
C21 = P2 + P4
C22 = P1 + P3 - P2 + P6
And,
P1 = (A11+ A22)(B11+B22)
P2 = (A21 + A22) * B11
P3 = A11 * (B12 - B22)
P4 = A22 * (B21 - B11)
P5 = (A11 + A12) * B22
P6 = (A21 - A11) * (B11 + B12)
P7 = (A12 - A22) * (B21 + B22)
Hence T(n) = 7T(n/2) + an² = O(n^log₂7) ≈ O(n^2.81)
Towers of Hanoi
T(N) = O(2^N)
Hashing
Hashing
Hashing is a function that maps each key to a location
in memory.
A key's location does not depend on other elements,
and does not change after insertion
(unlike a sorted list).
Hash Tables
A hash table is an array of size Tsize,
with index positions 0 .. Tsize−1.
Hash Function
A hash function is used to put data in the hash table,
i.e. to implement the hash table.
The integer returned by the hash function is called the
hash key.
Each position of the hash table is called a bucket.
The home bucket is the actual bucket of a value.
By applying the hash function to the key we perform
insert,
retrieve,
update,
and delete operations.
Digit Folding
Shift folding
Boundary folding (reverse the ith part)
Digit Analysis
when all identifiers are known in advance
Some insertions:
K1 --> 3
K2 --> 5
K3 --> 2
key / value table (indices 0..6):
bucket 2: (K3, K3info)
bucket 3: (K1, K1info)
bucket 5: (K2, K2info)
Hash Collisions
Collisions occur when different elements are mapped to the
same cell.
After resolving collisions (indices 0..6):
bucket 0: (K6, K6info)
bucket 2: (K3, K3info)
bucket 3: (K1, K1info)
bucket 4: (K4, K4info)
bucket 5: (K2, K2info)
bucket 6: (K5, K5info)
Linear probing
Linear probing with Chaining using array
Linear probing with Chaining (with replacement)
Quadratic probing
Double Hashing
Search Performance
(linear probing; table as above: 0:K6, 2:K3, 3:K1, 4:K4, 5:K2, 6:K5)
key   hash(K)   #probes
K1    3         1
K2    5         1
K3    2         1
K4    3         2
K5    2         5
K6    4         4
Chaining
insert keys:
K1 --> 3
K2 --> 5
K3 --> 2
K4 --> 3
K5 --> 2
K6 --> 4
bucket 2: (K3, K3info) -> (K5, K5info)
bucket 3: (K1, K1info) -> (K4, K4info)
bucket 4: (K6, K6info)
bucket 5: (K2, K2info)
Insertion: insert 53
53 = 4 x 11 + 9
53 mod 11 = 9
so 53 is added to the chain at bucket 9 of the size-11 table.
Search Performance (chaining)
key   hash(K)   #probes
K1    3         1
K2    5         1
K3    2         1
K4    3         2
K5    2         2
K6    4         1
Quadratic Probing
Solves the clustering problem in linear probing.
Check H(x);
if a collision occurs check H(x) + 1,
then H(x) + 4, then H(x) + 9, then H(x) + 16, ...
i.e. probe H(x) + i².
Double Hashing
When a collision occurs, use a second hash function:
Hash2(x) = R − (x mod R)
R: greatest prime number smaller than the table size
Inserting 12:
H2(12) = 7 − (12 mod 7) = 7 − 5 = 2
Check H(x);
if a collision occurs check H(x) + 2,
then H(x) + 4, then H(x) + 6, then H(x) + 8, ...
i.e. probe H(x) + i · H2(x).
Traversal
Visit each item in the hash table
Rehashing
If table gets too full, operations will take too long.
Build another table, twice as big (and prime).
Eg.- Next prime number after 11 x 2 is 23
Question-1
An advantage of a chained hash table (external hashing)
over the open addressing scheme is:
A. Worst case complexity of search operations is less
B. Space used is less
C. Deletion is easier
D. None of the above
Ans:- (C)
Question-2
A chained hash table has an array size of 512. What is
the maximum number of entries that can be placed in
the table?
A. 256
B. 511
C. 512
D. 1024
E. There is no maximum.
Ans:- (E)
Question-3
The hash function hash = key mod size, and linear
probing, are used to insert the keys 37, 38, 72, 48, 98, 11, 56
into a hash table with indices 0 ... 6. The order of the
keys in the array is given by:
A. 72, 11, 37, 38, 56, 98, 48
B. 11, 48, 37, 38, 72, 98, 56
C. 98, 11, 37, 38, 72, 56, 48
D. 98, 56, 37, 38, 72, 11, 48
E. 11, 37, 48, 38, 72, 98, 56
Ans:- (D)
Question-4
An internal hash table has 5 buckets, numbered 0, 1, 2,
3, 4. Keys are integers, and the hash function
h(i) = i mod 5
is used, with linear resolution of collisions. If elements
with keys 13, 8, 24, 10, and 3 are inserted, in that
order, into an initially blank hash table, then the
content of the bucket numbered 2 is:
A. 3
B. 8
C. 10
D. 13
E. 24
Ans:- (A)
Question-5
Suppose there is an open (external) hash table with four buckets,
numbered 0,1,2,3, and integers are hashed into these buckets
using hash function h(x) = x mod 4. If the sequence of perfect
squares 1, 4, 9, ..., l², ... is hashed into the table, then, as the total
number of entries in the table grows, what will happen?
A. Two of the buckets will each get approximately half the entries,
and the other two will remain empty.
B. All buckets will receive approximately the same number of
entries.
C. All entries will go into one particular bucket.
D. All buckets will receive entries, but the difference between the
buckets with smallest and largest number of entries will grow.
E. Three of the buckets will each get approximately one-third of the
entries, and the fourth bucket will remain empty.
Ans:- (A)
Question-6
Insert the characters of the string K R P C S N Y T J M into a
hash table of size 10. Use the hash function
h(x) = (ord(x) − ord('A') + 1) mod 10.
Use linear probing to resolve collisions. Answer the following:
A. Which insertions cause collisions?
B. Display the final hash table.
Ans:-
Collisions at J, M
Table contents: T K J C N Y P M R S
Question-7
A hash table implementation uses the function (key % 7) and
linear probing to resolve collisions. What is the ratio of
numbers in the following series without collision to those
with collision, if 7 buckets are used?
32, 56, 87, 23, 65, 26, 93
A. 2/5
B. 3/4
C. 4/3
D. 5/2
Ans:- (C)
Asymptotic Notation
little oh (o) : o-notation is used to denote an upper
bound that is not asymptotically tight.
f(n) = o(g(n)) if,
for every constant c > 0, there exists a constant n0 > 0 such that
0 ≤ f(n) < c·g(n) for all n ≥ n0.
The function f(n) becomes insignificant relative to g(n) as
n approaches infinity, i.e. lim (n→∞) f(n)/g(n) = 0.
Asymptotic Notation
little omega (ω) : ω-notation is used to denote a lower
bound that is not asymptotically tight.
f(n) = ω(g(n)) if,
for every constant c > 0, there exists a constant n0 > 0 such that
0 ≤ c·g(n) < f(n) for all n ≥ n0.
The function f(n) tends to infinity relative to g(n)
as n approaches infinity, i.e. lim (n→∞) f(n)/g(n) = ∞.
If f(n) = Θ(g(n)), then f(n) and g(n) grow at the same rate.
Dynamic programming,
Backtracking
Branch n Bound
Lower Bound Theory
Parallel Computing
Dynamic programming is an optimization technique.
Greedy Approach
Greedy approach
Optimization technique.
Greedy Algorithm
Problems exhibit optimal substructure.
Greedy Algorithm..
We need to find a feasible solution that either maximizes or
minimizes a given objective function.
A feasible solution that does this is called an optimal solution.
A feasible solution is not necessarily an optimal solution.
Working:
The greedy method devises an algorithm that works in stages,
considering one input at a time. At each stage, the next input is
examined; if its inclusion would make the partial solution
infeasible, it is discarded. Otherwise it is added.
Greedy Algorithm
The selection procedure itself is based on some optimization
measure.
This measure may be the objective function.
Subset Paradigm : Generally we get an algorithm that generate
suboptimal solutions. This version of the greedy technique is
called subset paradigm.
Ordering Paradigm : Some problems do not call for optimization
over a subset. The greedy method makes decisions by considering the
inputs in some order. Each decision is made using an
optimization criterion that can be computed from decisions
already made. This version of the greedy method is called the
ordering paradigm.
Some Problems
Goal: Choose items with maximum total benefit but with weight
at most W.
Maximize   Σ (i ∈ S)  bi · (xi / wi)
Subject to:  Σ (i ∈ S)  xi ≤ W
Knapsack (capacity 10 ml)
Solution:
Items:    1      2      3      4      5
Weight:   4 ml   8 ml   2 ml   6 ml   1 ml
Profit:   $12    $32    $40    $30    $50
Value:    3      4      20     5      50   ($ per ml)
Greedy by value: take 1 ml of item 5, 2 ml of item 3,
6 ml of item 4, and 1 ml of item 2 — 10 ml in total.
Algorithm
fractionalKnapsack(S, W)
Input: set S of items with benefit bi and weight wi; max. weight W
Complexity: O(n log n), dominated by sorting the items by bi/wi.
Step 1: Sort all jobs in decreasing order of profit (p1 ≥ p2 ≥ p3 ≥ ...).
Step 2: Add the next job i to the solution set if i can be completed
by its deadline.
Assign i to time slot [r-1, r], where r is the largest integer
such that 1 r di and [r-1, r] is free.
Step 3: Stop if all jobs are examined. Otherwise, go to step 2.
Time complexity: O(n²)
Example (profit pi, deadline di):
job with profit 20: assign to [1, 2]
job with profit 15: assign to [0, 1]
job with profit 10: reject
next job: assign to [2, 3]
last job: reject
Time complexity: O(n²)
A spanning tree of G is a free tree (i.e., a tree with no root) with |V| − 1 edges that connects all the vertices of the graph.
(c) Depth-first
spanning tree of
G rooted at c
The graph has two minimum-cost spanning trees, each with a cost of 6:
Prim's Algorithm
(figure: panels (a)–(f) show the tree growing one vertex at a time,
always adding the minimum-cost edge that connects a tree vertex to a
non-tree vertex)
Example
Trace Prim's algorithm starting at vertex a:
(figure: panels (a)–(h) show the spanning tree growing edge by edge)
weight   result
10       added to tree
12       added
14       added
16       added
18       discarded
22       added
24       discarded
25       added
28       not considered
(figures (a)–(h) show the intermediate states)
Time complexity: with an efficient sorting method, O(n log n).
Huffman Coding
Take the characters and their frequencies, and sort this
list by increasing frequency.
Single-Source Shortest Paths (Dijkstra's Algorithm)
Problem Statement :
Find the shortest path from v0 to all other nodes in G.
Shortest paths are generated in increasing order: 1, 2, 3, ...
Maintains a set S of vertices whose shortest path from s has
been determined.
Repeatedly selects u in V − S with the minimum shortest-path
estimate (greedy choice).
Stores V − S in a priority queue Q.
T(V) = O(V²)
Dynamic Programming
Dynamic Programming
Dynamic programming is an optimization technique.
Dynamic Programming..
Dynamic programming is a method for efficiently solving a
broad range of search and optimization problems which
exhibit the characteristics of overlapping sub problems
and optimal substructure.
Richard Bellman described the way of solving problems
where you need to find the best decisions one after
another.
In 1982 David Kohler used dynamic programming to
analyse the best way to play the game of darts.
When applicable, the method takes much less time than
naive methods.
Dynamic Programming
Top-down dynamic programming simply means storing
the results of certain calculations, which are then re-used
later because the same calculation is a sub-problem in a
larger calculation.
The top-down method is often called memoization.
Bottom-up dynamic programming involves formulating a
complex calculation as a recursive series of simpler
calculations.
In bottom-up dynamic programming, we start from the
smaller cases and store the calculated values in a table
for future use, an effective strategy for most
dependency-based problems.
This avoids calculating the same subproblem twice.
Divide-and-conquer algorithms can be thought of as top-down
algorithms.
Dynamic Programming
Dynamic programming splits a problem into subproblems, some of
which are common, solves the subproblems, and combines the
results for a solution to the original problem.
Example: Multistage graph, Matrix Chain Multiplication.
Dynamic programming can be thought of as bottom-up.
Dynamic Programming
In dynamic programming, subproblems are not independent.
Dynamic programming solutions can often be quite complex and
tricky.
Dynamic programming is generally used for optimization problems.
Many decision sequences may be generated.
Some Problems
Multistage Graph: Forward Method and Backward Method
Example path cost: 1 + 4 + 18 = 23.
Forward method:
cost(i, j) = min over l in V(i+1) with <j,l> in E of
             { c(j, l) + cost(i+1, l) }
Backward Method:
bcost(i, j)
(figure: distances to the sink T, e.g. d(A,T), d(B,T), d(C,T),
d(D,T), d(E,T))
forward approach
Solving:
cost(k-1, j) = c(j, t) if <j,t> ∈ E, otherwise ∞
Then compute cost(k-2, j) for all j ∈ V(k-2),
then cost(k-3, j) for all j ∈ V(k-3), and so on.
Example:
k=5
Stage 5
cost(5,12) = 0.0
Stage 4
cost(4,9) = min {4+cost(5,12)} = 4
cost(4,10) = min {2+cost(5,12)} = 2
cost(4,11) = min {5+cost(5,12)} = 5
Stage 3
cost(3,6) = min {6+cost(4,9), 5+cost(4,10)} = 7
cost(3,7) = min {4+cost(4,9), 3+cost(4,10)} = 5
cost(3,8) = min {5+cost(4,10), 6+cost(4,11)} = 7
forward approach
Stage 2
cost(2,2) = min {4+cost(3,6), 2+cost(3,7), 1+cost(3,8)} = 7
cost(2,3) = min {2+cost(3,6), 7+cost(3,7)} = 9
cost(2,4) = min {11+cost(3,8)} = 18
cost(2,5) = min {11+cost(3,7), 8+cost(3,8)} = 15
Stage 1
cost(1,1) = min {9+cost(2,2), 7+cost(2,3), 3+cost(2,4),
                 2+cost(2,5)} = 16
forward approach
Decisions: v2 = d(1,1) = 2
v3 = d(2, d(1,1)) = d(2,2) = 7
v4 = d(3, d(2, d(1,1))) = d(3,7) = 10
So the solution (minimum-cost path) is 1 → 2 → 7 → 10 → 12 and
its cost is 16.
Time complexity:
O(|V|+|E|)
Reliability Design
Reliability Design
Want to design a system that is composed of
several devices Di connected in series, where 1 ≤ i ≤ n.
Given the reliability ri of device Di, where 0 ≤ ri ≤ 1:
Multiple copies of the same device type are connected in parallel.
Cost of a single unit of device Di is ci.
Maximum allowable cost of the entire system is c.
Reliability Design
Reliability Design
If stage i contains mi copies of device Di, then the reliability of
stage i becomes 1 − (1 − ri)^mi.
Solution :
Find the maximum possible instances ui of each device with
reference to the overall system cost (c),
where 1 ≤ mi ≤ ui.
Reliability Design
Solution :
All-Pairs Shortest Paths
For the following given graph, find the all-pairs shortest paths.
Goal: Choose items with maximum total benefit but with weight
at most W.
Here we are not allowed to select items fractionally,
i.e. xi = 0 or xi = 1 for all items.
Si = { (fi(yj), yj) : 1 ≤ j ≤ k }
Si is a set of pairs (P, W), where P = fi(yj) and W = yj.
If Si+1 contains two pairs (Pj, Wj) and (Pk, Wk) with
the property that Pj ≤ Pk and Wj ≥ Wk, then the pair
(Pj, Wj) can be discarded.
This is also called the purging rule.
Bellman-Ford Algorithm
Recurrence Relation
Recurrence Relation
A recurrence relation for the sequence {an} is an
equation that expresses an in terms of one or more of the
previous terms a0, a1, ..., an−1, for all integers n with
n ≥ n0.
T(n) = T(1),             n = 1
T(n) = aT(n/b) + f(n),   n > 1
where n is a power of b (i.e. n = b^k)
Recurrence Relations
In other words, a recurrence relation is like a
recursively defined sequence, but without specifying
any initial values (initial conditions).
Therefore, the same recurrence relation can have (and
usually has) multiple solutions.
If both the initial conditions and the recurrence
relation are specified, then the sequence is uniquely
determined.
Recurrence Relation..
The initial conditions for a sequence specify the terms
before n0 (before the recurrence relation takes effect).
The recurrence relations together with the initial
conditions uniquely determines the sequence.
For the example above, the initial conditions are: a0 = 0,
a1 = 3; and a0 = 5, a1 = 5; respectively.
Recurrence Relation..
Some more examples of recurrence relation are:
tn = tn-1 + 1 (Sequential search)
tn = tn-1 + n (Selection sort)
Pn = 1.05·Pn−1 = (1.05)ⁿ·P0
We now have a formula to calculate Pn for any natural
number n and can avoid the iteration.
Master Theorem
Consider following R.R
T(n) = a T(n/b) + f(n)
(ignore floors and ceilings for n/b;
constants a ≥ 1 and b > 1)
Let e = log_b(a).
CASE 1:
If f(n) = O(n^(e−c)) for some constant c > 0, then
T(n) = Θ(n^e)
CASE 2:
If f(n) = Θ(n^e), then
T(n) = Θ(n^e · log n)
CASE 3:
If f(n) = Ω(n^(e+c)) for some constant c > 0
(and a·f(n/b) ≤ d·f(n) for some constant d < 1), then
T(n) = Θ(f(n))
Master Theorem.
Example:
T(n) = 4 T(n/2) + n
a = 4
b = 2
n^(log_b a) = n^(log₂ 4) = n²
f(n) = n = O(n^(2−1)), so CASE 1 applies and T(n) = Θ(n²)
End of Part -I