
Divide and conquer method

General characteristics:
1. The problem is divided into sub-problems.
2. This process is repeated until the sub-problems cannot be divided further.
3. Recurrence relations are used to analyze algorithms that use the divide and conquer technique.
4. Divide and conquer (D and C) algorithms are recursive in nature.

Examples: Binary search, Finding MAXMIN, Merge sort, Quick sort, Selection sort, Strassen's matrix multiplication

General algorithm
Algorithm DandC(P)
{
    if Small(P) then
        return S(P);
    else
    {
        divide P into smaller instances P1, P2, ..., Pk, k >= 1;
        apply DandC to each of these sub-problems;
        return Combine(DandC(P1), DandC(P2), ..., DandC(Pk));
    }
}
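As a concrete instance of this skeleton, merge sort (one of the examples listed above) can be sketched in Python as follows: the base case plays the role of Small(P)/S(P), the two recursive calls are the divide step, and the merge loop is Combine.

```python
def merge_sort(a):
    """Divide and conquer: split the list, sort halves recursively, combine."""
    if len(a) <= 1:              # Small(P): a list of 0 or 1 elements is sorted
        return a
    mid = len(a) // 2            # divide P into sub-problems p1, p2
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    # Combine step: merge the two sorted halves
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```

The recurrence for this sketch is T(n) = 2T(n/2) + O(n), which solves to O(n log n), illustrating point 3 above.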

Greedy method
General characteristics:
1. Problems have n inputs.
2. From the input set, find a subset that satisfies some constraints. This subset is called a feasible solution.
3. A feasible solution that either maximizes or minimizes a given objective function is called an optimal solution.
4. A greedy algorithm works in stages, considering one input at a time.
5. At each stage, a decision is made regarding whether the input considered can lead to an optimal solution. If so, the input is accepted; otherwise, it is rejected.
6. Inputs are considered by a selection procedure. The selection procedure is based on some optimization measure, which may be the objective function itself.
7. If the inclusion of the next input into the partially constructed solution results in an infeasible solution, this input is not added to the partial solution; otherwise, it is added.

Examples: Knapsack problem, Tree vertex splitting, Job sequencing with deadlines, MST, Single-source shortest path, Optimal storage on tapes, Optimal merge patterns

General algorithm
Algorithm Greedy(a, n)
{
    solution := ∅;
    for i := 1 to n do
    {
        x := Select(a);
        if Feasible(solution, x) then
            solution := Union(solution, x);
    }
    return solution;
}

Knapsack problem
Statement: Assume there are n objects and a knapsack (bag) of capacity m. Object i has weight Wi and an associated profit Pi. Let Xi be the fraction of object i placed in the knapsack; if a fraction Xi of object i is placed in the knapsack, then the profit PiXi is earned. The objective is to fill the knapsack so as to maximize the total profit.
Maximize sum(i=1 to n) PiXi ------------------- (1)
Subject to sum(i=1 to n) WiXi <= m ------------ (2)
0 <= Xi <= 1, where profits and weights are positive numbers.
A feasible solution is any set (X1, ..., Xn) satisfying equation (2). An optimal solution is a feasible solution for which equation (1) is maximized.

Knapsack algorithm
Algorithm GreedyKnapsack(m, n)
{
    for i := 1 to n do
        X[i] := 0.0;
    U := m;
    for i := 1 to n do
    {
        if (W[i] > U) then break;
        X[i] := 1.0;
        U := U - W[i];
    }
    if (i <= n) then
        X[i] := U / W[i];
}
m = capacity of the bag
n = number of objects
W[1..n] = weights of the objects
P[1..n] = profits of the objects

Selection procedure: Objects are ordered such that (P[i]/W[i]) >= (P[i+1]/W[i+1])
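The algorithm together with its selection procedure can be sketched in Python. Instead of assuming the objects arrive pre-ordered, this sketch sorts them by profit density P[i]/W[i] first, which is exactly the selection procedure stated above.

```python
def greedy_knapsack(p, w, m):
    """Fractional knapsack: take objects in decreasing p[i]/w[i] order,
    whole while they fit, then a fraction of the first one that does not."""
    # Selection procedure: order objects so p[i]/w[i] is non-increasing
    order = sorted(range(len(p)), key=lambda i: p[i] / w[i], reverse=True)
    x = [0.0] * len(p)           # x[i] = fraction of object i taken
    u = m                        # remaining capacity
    for i in order:
        if w[i] > u:             # object i no longer fits whole
            x[i] = u / w[i]      # take the fraction that fills the bag
            break
        x[i] = 1.0
        u -= w[i]
    return x
```

For example, with profits (25, 24, 15), weights (18, 15, 10), and m = 20, the densities are roughly 1.39, 1.6, and 1.5, so the second object is taken whole and half of the third, for a total profit of 31.5.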

Minimum spanning tree


Given a connected, undirected, weighted graph, a spanning tree of that graph is a subgraph that is a tree and connects all the vertices together. A graph can have many different spanning trees. When weights are assigned, the minimum spanning tree (MST) is the spanning tree with the lowest total cost.

APPLICATIONS OF MST: Electric networks: a spanning subgraph G' of G with V(G') = V(G). Link/communication problems: connect all cities with minimum cost.

Prim's Algorithm
Algorithm:
Procedure Prim(G: weighted connected graph with n vertices)
{
    T := a minimum-weight edge of G;
    for i := 1 to n - 2 do
    {
        e := an edge of minimum weight incident to a vertex in T
             and not forming a cycle in T if added to T;
        T := T with e added;
    }
    return T;
}
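A minimal Python sketch of this procedure, using the standard library's `heapq` as the binary-heap priority queue discussed in the analysis below. It assumes the graph is given as an adjacency list `adj`, where `adj[u]` is a list of `(weight, v)` pairs; this representation is a choice for the sketch, not something fixed by the text.

```python
import heapq

def prim(n, adj):
    """Prim's MST over vertices 0..n-1.
    adj[u] = list of (weight, v) pairs. Returns (total_cost, mst_edges)."""
    visited = [False] * n
    heap = [(0, 0, -1)]          # (weight, vertex, parent); grow T from vertex 0
    cost, edges = 0, []
    while heap:
        wgt, u, parent = heapq.heappop(heap)   # minimum-weight candidate edge
        if visited[u]:
            continue             # would form a cycle in T; discard
        visited[u] = True
        if parent != -1:
            cost += wgt
            edges.append((parent, u))
        for w, v in adj[u]:      # new candidate edges incident to T
            if not visited[v]:
                heapq.heappush(heap, (w, v, u))
    return cost, edges
```

Each edge is pushed and popped at most once at O(log V) apiece, matching the O(E log V) bound derived in the analysis.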

Analysis of Prim's Algorithm


The minimum edge is selected by storing the edges in a priority queue. The priority queue can be implemented as:
- Cost adjacency matrix: O(V^2)
- Binary heap: O(E log V)
- Fibonacci heap: O(V log V + E)

Implementation of the priority queue as a binary heap:
1. Construction of the heap is O(V), and extracting the minimum is O(log V), so building and emptying the heap over the vertices costs O(V log V). ---------------- (1)
2. The loop executes at most n - 1 times, and across all iterations at most E edges are examined. Each edge selection and MST update through the heap takes O(log V), for a total of O(E log V). ---------------- (2)
Therefore, from (1) + (2), the total time for Prim's algorithm using a binary heap is O(V log V + E log V) = O(E log V).

Kruskal's Algorithm
Algorithm Kruskal(G: weighted connected graph with n vertices)
{
    T := ∅;
    while (T has fewer than n - 1 edges) and (E ≠ ∅) do
    {
        choose an edge (v, w) from E of lowest cost;
        delete (v, w) from E;
        if (v, w) does not create a cycle in T then
            add (v, w) to T;
        else
            discard (v, w);
    }
}
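The algorithm above can be sketched in Python. The cycle test is done with a union-find (disjoint-set) structure, a standard choice not spelled out in the pseudocode; the edge set is assumed to be a list of `(weight, v, w)` tuples, and sorting it plays the role of repeatedly choosing the lowest-cost edge.

```python
def kruskal(n, edge_list):
    """Kruskal's MST over vertices 0..n-1.
    edge_list = list of (weight, v, w) tuples. Returns (total_cost, mst_edges)."""
    parent = list(range(n))      # union-find forest for cycle detection

    def find(x):                 # root of x's component, with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    t, cost = [], 0
    for wgt, v, w in sorted(edge_list):        # edges in increasing cost
        rv, rw = find(v), find(w)
        if rv != rw:             # (v, w) does not create a cycle in T
            parent[rv] = rw      # merge the two components
            t.append((v, w))
            cost += wgt
        if len(t) == n - 1:      # T is a spanning tree; stop early
            break
    return cost, t
```

Sorting the edges dominates, giving the O(E log E) = O(E log V) bound derived in the analysis below.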

Analysis of Kruskal's algorithm


ANALYSIS: The minimum-weight edge is selected by storing the edges in a priority queue, which can be implemented as a binary heap.
1. Outside the while loop, the heap is built over the E edges; construction of the heap takes O(E) time. ---------- (1)
2. The while loop executes at most E times. Inside the loop, two computations are carried out: retrieval of the minimum edge from the heap, which is O(log E), and the cycle check / update of T. Therefore the while loop contributes O(E log E). ---------- (2)
So the total analysis is (1) + (2): O(E + E log E) = O(E log V), since log E = O(log V).

Dynamic programming
Characteristics:
1. The problem can be divided into stages, with a decision required at each stage.
2. Each stage has a number of states associated with it.
3. A decision at one stage transforms one state into a state in the next stage.
4. Given the current state, the optimal decision for each of the remaining stages does not depend on the previous states or decisions.
5. There exists a recursive relationship that identifies the optimal decision for stage j, given the optimal decisions for stage j + 1.
6. Optimal solutions to the sub-problems are retained so that recomputation is avoided.
Examples:
1. Multistage graph
2. All-pairs shortest path
3. Single-source shortest path
4. OBST (selection problem)
5. TSP (permutation problem)

Dynamic programming
Principle of optimality: The principle of optimality states that an optimal sequence of decisions has the property that whatever the initial state and decisions are, the remaining decisions must constitute an optimal decision sequence with regard to the state resulting from the first decision.

EXAMPLES: BST
Two different binary search trees (Tree 1 and Tree 2) built on the identifier set {do, for, if, int, while}, showing that the same keys can be arranged into BSTs of different shapes, and hence with different search costs. [Tree diagrams omitted.]

OBST
Characteristics of OBST:
1. Every element has a key, and no two elements have the same key.
2. The keys in the left subtree are smaller than the key in the root.
3. The keys in the right subtree are larger than the key in the root.
4. The left and right subtrees are also BSTs.
5. Let the identifiers be denoted as {a1, a2, ..., an} such that a1 < a2 < ... < an.
6. p(i) is the probability of searching for ai, where 1 <= i <= n.
7. q(i) is the probability of an unsuccessful search, i.e. x is not present and ai < x < ai+1, where 0 <= i <= n (x can be less than a1, so the minimum value of i is 0).
8. The BST contains n internal nodes and n + 1 external nodes.
9. A successful search terminates at an internal node; an unsuccessful search terminates at an external node.
10. If a successful search terminates at an internal node at level l, its cost is p(i) * level(ai) (i.e. x = ai).
11. An unsuccessful search terminates at an external node, with cost q(i) * (level(Ei) - 1). The identifiers not in the BST can be partitioned into n + 1 classes Ei, 0 <= i <= n: E0 contains all identifiers x such that x < a1, Ei contains all x such that ai < x < ai+1, and En contains all x such that x > an.
12. The expected cost of a BST is sum(1<=i<=n) p(i) * level(ai) + sum(0<=i<=n) q(i) * (level(Ei) - 1); the OBST is the BST that minimizes this cost.

Dynamic programming approach OBST


C(i, j) represents the cost of an OBST. If the root of the whole tree is ak, Cost(l) can be represented as C(0, k-1) and Cost(r) as C(k, n). In general:
C(i, j) = min over i < k <= j of { C(i, k-1) + C(k, j) } + w(i, j) ------- (1)
w(i, j) = p(j) + q(j) + w(i, j-1)
C(i, i) = 0, w(i, i) = q(i), r(i, i) = 0
r(i, j) = k (the value of k which minimizes equation (1))
C(0, n) is solved by first computing all C(i, j) such that j - i = 1, then j - i = 2, and so on until j - i = n. The w(i, j) values are computed alongside, and each r(i, j) is also recorded.
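The recurrence above translates directly into a Python sketch. To keep the indices matching the text, `p` is assumed to carry a dummy entry at index 0, so `p[1..n]` are the success probabilities and `q[0..n]` the failure probabilities.

```python
def obst(p, q):
    """O(n^3) DP for the optimal BST cost, following the recurrence above.
    p = [0, p1, ..., pn] (dummy p[0]), q = [q0, q1, ..., qn].
    Returns (C(0, n), root table r)."""
    n = len(p) - 1
    w = [[0.0] * (n + 1) for _ in range(n + 1)]
    c = [[0.0] * (n + 1) for _ in range(n + 1)]
    r = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        w[i][i] = q[i]                       # w(i,i) = q(i); C(i,i) = 0
    for d in range(1, n + 1):                # solve all j - i = d, d = 1..n
        for i in range(n - d + 1):
            j = i + d
            w[i][j] = w[i][j - 1] + p[j] + q[j]
            # C(i,j) = min over roots k in (i, j] of C(i,k-1) + C(k,j), + w(i,j)
            best_cost, best_k = min(
                (c[i][k - 1] + c[k][j], k) for k in range(i + 1, j + 1))
            c[i][j] = best_cost + w[i][j]
            r[i][j] = best_k                 # record the minimizing root
    return c[0][n], r
```

The root table `r` can then be read off recursively (root of the whole tree is r(0, n), its left subtree's root is r(0, r(0, n) - 1), and so on) to construct the actual OBST.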

OBST ANALYSIS
Analysis:
1. Computing one C(i, j) takes O(n) time (the minimum ranges over up to n values of k), and there are O(n^2) entries C(i, j) to fill (all pairs with j - i = 1, 2, ..., n).
2. Computing each w(i, j) and r(i, j) alongside is O(1) extra work per entry.
3. Therefore, the total complexity is O(n^2) * O(n) = O(n^3).

TSP
Problem description: Every tour starts and ends at vertex 1, and each vertex is visited exactly once; find a tour of minimum cost. Let g(i, S) be the length of a shortest path starting at vertex i, going through all vertices in S, and terminating at vertex 1.
g(i, S) = min over j in S of { Cij + g(j, S - {j}) }
Also, g(i, ∅) = Ci1.
The equation is solved for |S| = 1, |S| = 2, ..., |S| = n - 1, with S ⊆ V - {1}; the optimal tour length is g(1, V - {1}).
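This recurrence (the Held-Karp dynamic program) can be sketched in Python. Vertices are renumbered 0..n-1 with the tour starting at vertex 0, and subsets S are represented as frozensets so they can key a dictionary; both are representation choices for the sketch.

```python
from itertools import combinations

def tsp(cost):
    """Held-Karp DP: g(i, S) = min over j in S of cost[i][j] + g(j, S - {j}),
    with g(i, {}) = cost[i][0]. Returns the minimum tour length from vertex 0."""
    n = len(cost)
    g = {}                                   # g[(i, S)] = shortest i -> 0 path via S
    for i in range(1, n):
        g[(i, frozenset())] = cost[i][0]     # base case: g(i, {}) = c[i][0]
    others = range(1, n)
    for size in range(1, n - 1):             # solve |S| = 1, 2, ..., n - 2
        for s in combinations(others, size):
            fs = frozenset(s)
            for i in others:
                if i in fs:
                    continue
                g[(i, fs)] = min(cost[i][j] + g[(j, fs - {j})] for j in fs)
    full = frozenset(others)                 # finally |S| = n - 1, from vertex 0
    return min(cost[0][j] + g[(j, full - {j})] for j in full)
```

On the classic 4-vertex instance with cost matrix [[0,10,15,20],[5,0,9,10],[6,13,0,12],[8,8,9,0]], this returns 35, the tour 0 → 1 → 3 → 2 → 0.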

TSP Analysis

Analysis: Let N be the number of g(i, S) values that have to be computed. N = (n - 1) * 2^(n - 2). Each value takes O(n) time to compute (the minimum ranges over the members of S), so an algorithm based on this recurrence has complexity O(n^2 * 2^n).
