

Name: CS41 - Design and Analysis of Algorithms
Year & Branch: II - CSE    Staff In-charge: Ms. S. Vijaya Iswariya Lakshmi

UNIT-1 Algorithm Analysis

1. Why is there a need to study algorithms?
From a practical standpoint, a standard set of algorithms from different areas of computing must be known, in addition to being able to design new algorithms and analyze their efficiency. From a theoretical standpoint, the study of algorithms is the cornerstone of computer science.

2. What is algorithmics?
The study of algorithms is called algorithmics. It is more than a branch of computer science: it is the core of computer science and is said to be relevant to most of science, business and technology.

3. What is an algorithm? (Nov/Dec 2008), (May/June 2007)
An algorithm is a sequence of unambiguous instructions for solving a problem, i.e., for obtaining a required output for any legitimate input in a finite amount of time.

4. Formally define the notion of algorithm with diagram (Nov/Dec 2009)
An algorithm is a finite sequence of unambiguous instructions that, when executed on a computing device for any legitimate input, produces the required output in a finite amount of time. The notion can be pictured as:
problem -> algorithm -> input -> [computer] -> output

5. How is the efficiency of an algorithm defined? (Nov/Dec 2007)
The efficiency of an algorithm is defined by means of:
Time complexity - indicates how fast the algorithm runs.
Space complexity - indicates how much extra memory the algorithm needs.

6. What are the fundamental steps involved in algorithmic problem solving?
The fundamental steps are:
Understanding the problem
Ascertaining the capabilities of the computational device
Choosing between exact and approximate problem solving
Deciding on appropriate data structures
Choosing an algorithm design technique
Methods for specifying the algorithm
Proving the algorithm's correctness
Analyzing the algorithm
Coding the algorithm

7. What is an algorithm design technique?
An algorithm design technique is a general approach to solving problems algorithmically that is applicable to a variety of problems from different areas of computing.

8. What are the steps involved in the analysis framework?
The various steps are as follows:
Measuring the input's size
Units for measuring running time
Orders of growth
Worst-case, best-case and average-case efficiencies

9. What is the basic operation of an algorithm and how is it identified?
The most important operation of an algorithm is called its basic operation: the operation that contributes the most to the total running time. It can usually be identified easily because it is the most time-consuming operation in the algorithm's innermost loop.

10. What is the running time of a program implementing the algorithm?
The running time T(n) is estimated by the following formula:
T(n) ≈ cop C(n)
where cop is the time of execution of the algorithm's basic operation on a particular computer and C(n) is the number of times this operation needs to be executed for that algorithm.

11. What are exponential growth functions?
The functions 2^n and n! are exponential growth functions, because these two functions grow so fast that their values become astronomically large even for rather small values of n.

12. What is worst-case efficiency?
The worst-case efficiency of an algorithm is its efficiency for the worst-case input of size n, which is an input or inputs of size n for which the algorithm runs the longest among all possible inputs of that size.
13. What is best-case efficiency? The best-case efficiency of an algorithm is its efficiency for the best-case input of size n, which is an input or inputs for which the algorithm runs the fastest among all possible inputs of that size.

14. What is average-case efficiency?
The average-case efficiency of an algorithm is its efficiency for an average-case input of size n. It provides information about an algorithm's behavior on a typical or random input.

15. What is amortized efficiency?
In some situations a single operation can be expensive, but the total time for an entire sequence of n such operations is always significantly better than the worst-case efficiency of that single operation multiplied by n. This is called amortized efficiency.

16. Define O-notation.
A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded above by some constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n0 such that t(n) ≤ c g(n) for all n ≥ n0.

17. Define Ω-notation.
A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n) is bounded below by some constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n0 such that t(n) ≥ c g(n) for all n ≥ n0.

18. Define Θ-notation.
A function t(n) is said to be in Θ(g(n)), denoted t(n) ∈ Θ(g(n)), if t(n) is bounded both above and below by some constant multiples of g(n) for all large n, i.e., if there exist some positive constants c1 and c2 and some nonnegative integer n0 such that c2 g(n) ≤ t(n) ≤ c1 g(n) for all n ≥ n0.

20. Mention the useful property which can be applied to the asymptotic notations and its use.
If t1(n) ∈ O(g1(n)) and t2(n) ∈ O(g2(n)), then t1(n) + t2(n) ∈ O(max{g1(n), g2(n)}). This property is also true for the Ω and Θ notations. It is useful in analyzing algorithms that comprise two consecutive executable parts.

21. What are the basic asymptotic efficiency classes?
The basic efficiency classes are:
Constant: 1
Logarithmic: log n
Linear: n
N-log-n: n log n
Quadratic: n^2
Cubic: n^3
Exponential: 2^n
Factorial: n!

22. Design an algorithm for checking whether the given word is a palindrome or not, i.e., whether the word is the same even when reversed. E.g., MADAM is a palindrome. (Nov/Dec 2008)
Algorithm:
boolean isPalindrome(String testString) {
    int length = testString.length();
    for (int i = 0; i < length/2; i++)
        if (testString.charAt(i) != testString.charAt(length - 1 - i))

        { return false; }
    // else it is a palindrome
    return true;
}

23. Define worst-case complexity of an algorithm. (Apr/May 2008)
The worst-case complexity is the complexity of an algorithm when the input is the worst possible with respect to complexity.

24. Give the smoothness rule applied for recurrence relations. (Nov/Dec 2007)
Let T(n) be an eventually nondecreasing function and f(n) be a smooth function. If T(n) ∈ Θ(f(n)) for values of n that are powers of b, where b >= 2, then T(n) ∈ Θ(f(n)) for any n.

25. What is a recursive call? (May/June 2007)
A function calling itself is called a recursive call.

UNIT-2 Divide and Conquer & Greedy Algorithms

1. What is the divide-and-conquer strategy?
The divide-and-conquer strategy suggests splitting the n inputs into k distinct subsets, yielding k subproblems. These subproblems must be solved, and a method must be found to combine the subsolutions into a solution of the whole.
What is the complexity of a divide-and-conquer algorithm?
T(n) = T(1)             for n = 1
T(n) = aT(n/b) + f(n)   for n > 1

2. Give the general plan for divide-and-conquer algorithms.
The general plan is as follows:
A problem instance is divided into several smaller instances of the same problem, ideally of about the same size.
The smaller instances are solved, typically recursively.
If necessary, the solutions obtained are combined to get the solution of the original problem.

3. Write a pseudo code for a divide and conquer algorithm for finding the position of the largest element in an array of n numbers. (Nov/Dec 2008)
// a[1..n] is a global array; returns the position of the largest
// element in a[i..j], where 1 <= i <= j <= n.
int maxPosition(int i, int j)
{
    if (i == j)
        return i;                              // single element
    else if (i == j - 1)                       // two elements
        return (a[i] < a[j]) ? j : i;
    else
    {
        int mid = (i + j) / 2;
        int pos1 = maxPosition(i, mid);        // max position in left half
        int pos2 = maxPosition(mid + 1, j);    // max position in right half
        return (a[pos1] < a[pos2]) ? pos2 : pos1;
    }
}

4. What is the general divide-and-conquer recurrence relation?
An instance of size n can be divided into several instances of size n/b, with a of them needing to be solved. Assuming that n is a power of b, to simplify the analysis, the following recurrence for the running time is obtained:
T(n) = aT(n/b) + f(n)
where f(n) is a function that accounts for the time spent on dividing the problem into smaller ones and on combining their solutions.

5. What is the difference between the greedy method and dynamic programming?
Greedy Method:
Only one sequence of decisions is generated.
It does not guarantee to give an optimal solution always.
Dynamic Programming:
Many sequences of decisions are generated.
It definitely gives an optimal solution always.

6. What is binary search?
Binary search is a remarkably efficient algorithm for searching in a sorted array. It works by comparing a search key K with the array's middle element A[m]. If they match, the algorithm stops; otherwise the same operation is repeated recursively for the first half of the array if K < A[m] and for the second half if K > A[m].
A[0] ... A[m-1]    A[m]    A[m+1] ... A[n-1]
search here if K < A[m]    search here if K > A[m]
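The procedure above can be sketched as a short iterative routine; this Python version is an illustration, not part of the original notes:

```python
def binary_search(a, key):
    """Return the index of key in sorted list a, or -1 if absent."""
    low, high = 0, len(a) - 1
    while low <= high:
        mid = (low + high) // 2      # index of the middle element A[m]
        if a[mid] == key:
            return mid               # match: stop
        elif key < a[mid]:
            high = mid - 1           # continue in the first half
        else:
            low = mid + 1            # continue in the second half
    return -1                        # key is not in the array
```

Each iteration halves the search range, which is where the logarithmic running time comes from.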

7. List out any two drawbacks of the binary search tree (Nov/Dec 2007)
The disadvantage of a binary search tree is that its height can be as large as N-1.
This means that the time needed to perform insertion, deletion and many other operations can be O(N) in the worst case.
A binary tree with N nodes has height at least log N.

8. Differentiate sequential from binary search technique (May/June 2009)
Sequential search:
1. This is the simple technique of searching for an element.
2. This technique does not require the list to be sorted.
3. Every element in the list may get compared with the key element.
4. The worst-case time complexity of this technique is O(n).
Binary search:
1. This is an efficient technique of searching for an element.
2. This technique requires the list to be sorted.
3. Only the middle element of the list is compared with the key element.
4. The worst-case time complexity of this technique is O(log n).

9. Define merge sort.
Merge sort sorts a given array A[0..n-1] by dividing it into two halves A[0..(n/2)-1] and A[(n/2)..n-1], sorting each of them recursively, and then merging the two smaller sorted arrays into a single sorted one.

11. Give the time efficiency and drawbacks of the merge sort algorithm. (Nov/Dec 2005)
Time efficiency: the best, worst and average case time complexity of merge sort is O(n log n).
Drawbacks:
The algorithm requires extra storage to execute this method.
This method is slower than the quick sort method.
The method is complicated to code.

12. Give the algorithm for merge sort.
ALGORITHM Mergesort(A[0..n-1])
// Sorts an array A[0..n-1] by recursive merge sort
// Input: An array A[0..n-1] of orderable elements
// Output: Array A[0..n-1] sorted in nondecreasing order
if n > 1
    copy A[0..(n/2)-1] to B[0..(n/2)-1]
    copy A[(n/2)..n-1] to C[0..(n/2)-1]
    Mergesort(B[0..(n/2)-1])
    Mergesort(C[0..(n/2)-1])
    Merge(B, C, A)
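The split-sort-merge scheme above can be sketched in Python; an illustrative version, not the syllabus pseudocode:

```python
def merge_sort(a):
    """Sort list a in nondecreasing order by recursive merge sort."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    b = merge_sort(a[:mid])      # sort the first half
    c = merge_sort(a[mid:])      # sort the second half
    return merge(b, c)           # merge the two sorted halves

def merge(b, c):
    """Merge two sorted lists into one sorted list."""
    result, i, j = [], 0, 0
    while i < len(b) and j < len(c):
        if b[i] <= c[j]:
            result.append(b[i]); i += 1
        else:
            result.append(c[j]); j += 1
    result.extend(b[i:])         # copy any leftover elements of B
    result.extend(c[j:])         # copy any leftover elements of C
    return result
```

Note the extra storage for B, C and the result list, which is the drawback mentioned in question 11.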

13. Give the algorithm to merge two sorted arrays into one.
ALGORITHM Merge(B[0..p-1], C[0..q-1], A[0..p+q-1])
// Merges two sorted arrays into one sorted array
// Input: Arrays B[0..p-1] and C[0..q-1], both sorted
// Output: Sorted array A[0..p+q-1] of the elements of B and C
i <- 0; j <- 0; k <- 0
while i < p and j < q do
    if B[i] <= C[j]
        A[k] <- B[i]; i <- i+1
    else
        A[k] <- C[j]; j <- j+1
    k <- k+1
if i = p
    copy C[j..q-1] to A[k..p+q-1]
else
    copy B[i..p-1] to A[k..p+q-1]

14. What is the greedy technique?
The greedy technique suggests a greedy grab of the best alternative available in the hope that a sequence of locally optimal choices will yield a globally optimal solution to the entire problem. Each choice must be:
Feasible: it has to satisfy the problem's constraints.
Locally optimal: it has to be the best local choice among all feasible choices available on that step.
Irrevocable: once made, it cannot be changed on a subsequent step of the algorithm.

15. What are the steps required to develop a greedy algorithm?
The greedy method is an important design technique, which makes the choice that looks best at the moment. Given n inputs, we are required to obtain a subset that satisfies some constraints, i.e., a feasible solution. The greedy method suggests devising an algorithm that works in stages, considering one input at a time.

16. What are the steps required to develop a greedy algorithm?
Determine the optimal substructure of the problem.
Develop a recursive solution.
Prove that at any stage of the recursion one of the optimal choices is the greedy choice, so it is always safe to make the greedy choice.
Show that all but one of the subproblems induced by having made the greedy choice are empty.
Develop the recursive algorithm and convert it into an iterative algorithm.

17. Write some applications of the greedy method.
Knapsack problem
Prim's algorithm for finding a minimum spanning tree
Kruskal's algorithm for a minimum spanning tree
Finding shortest paths
Job sequencing with deadlines
Optimal storage on tapes
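As one illustration of the greedy method applied to the knapsack problem above, a sketch of the fractional variant, where the greedy choice is the best value-to-weight ratio (the function name and data layout are assumptions for illustration):

```python
def fractional_knapsack(items, capacity):
    """Greedy fractional knapsack: items is a list of (value, weight) pairs.
    Returns the maximum total value when fractions of items may be taken."""
    # Greedy choice: consider items in decreasing value-to-weight ratio.
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)     # take as much of this item as fits
        total += value * take / weight   # proportional value of the fraction
        capacity -= take
    return total
```

For the fractional problem this greedy strategy is optimal; for the 0/1 problem it is only a heuristic.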

18. What is the container loading problem?
In this problem a ship is loaded in stages; at each stage one container is loaded. The total weight of all the loaded containers must be less than or equal to the ship's capacity. The greedy approach is applied to solve the problem.

19. Define the knapsack problem. (Nov/Dec 2008)
Given items of different values and volumes, find the most valuable set of items that fits in a knapsack of fixed volume. We have n kinds of items, 1 through n. Each kind of item j has a value pj and a weight wj. We usually assume that all values and weights are nonnegative. The maximum weight that we can carry in the bag is W. The most common formulation of the problem is the 0-1 knapsack problem, which restricts the number xj of copies of each kind of item to zero or one. Mathematically the 0-1 knapsack problem can be formulated as:
maximize Σ (j=1..n) pj xj
subject to Σ (j=1..n) wj xj <= W,  xj ∈ {0, 1}

20. What is meant by an optimal solution? (May/June 2007)
A solution to an optimization problem which minimizes (or maximizes) the objective function.

UNIT-3 Dynamic Programming

1. What is dynamic programming and who discovered it?
Dynamic programming is a technique for solving problems with overlapping subproblems. These subproblems arise from a recurrence relating a solution of a given problem to solutions of its smaller subproblems. Each smaller subproblem is solved only once, and the results are recorded in a table from which the solution to the original problem is obtained. It was invented by a prominent U.S. mathematician, Richard Bellman, in the 1950s.

2. Define dynamic programming.
Dynamic programming is an algorithm design technique that can be used when the solution to a problem can be viewed as the result of a sequence of decisions.
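The overlapping-subproblems idea in question 1 can be illustrated with a small generic example (not from the notes): computing Fibonacci numbers bottom-up with a table.

```python
def fib(n):
    """Bottom-up dynamic programming: each subproblem F(i) is solved once
    and recorded in a table, avoiding the exponential recursion tree."""
    table = [0, 1]                                   # F(0), F(1)
    for i in range(2, n + 1):
        table.append(table[i - 1] + table[i - 2])    # reuse recorded results
    return table[n]
```

The naive recursive version recomputes the same subproblems repeatedly; the table reduces the work to O(n) additions.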

3. What is the general method of dynamic programming?
Dynamic programming is chiefly applied to optimization problems. For a given problem we may get a number of solutions, among which we seek an optimal solution (i.e., a minimum-value or maximum-value solution), and such an optimal solution becomes the solution to the given problem.

4. Compare divide and conquer and dynamic programming.
Divide and Conquer:
1. The problem is divided into small subproblems. These subproblems are solved independently. Finally all the solutions of the subproblems are collected together to get the solution to the given problem.
2. In this method duplications in subsolutions are neglected, i.e., duplicate subsolutions may be obtained.
3. Divide and conquer is less efficient because of rework on solutions.
4. Divide and conquer uses a top-down approach to problem solving (recursive methods).
5. Divide and conquer splits its input at specific deterministic points, usually in the middle.
Dynamic Programming:
1. In dynamic programming many decision sequences are generated and all the overlapping subinstances are considered.
2. In dynamic programming duplication in solutions is avoided totally.
3. Dynamic programming is more efficient than the divide-and-conquer strategy.
4. Dynamic programming uses a bottom-up approach to problem solving (iterative methods).
5. Dynamic programming splits its input at every possible split point; it determines which split point is optimal.

5. What are the features of dynamic programming?
Optimal solutions to the subproblems are retained so as to avoid recomputing their values.
Decision sequences containing subsequences that are suboptimal are not considered.
It definitely gives the optimal solution always.

6. What is the principle of optimality?
The principle of optimality states that in an optimal sequence of decisions or choices, each subsequence must also be optimal.

7. What are the applications of dynamic programming?
Multistage graphs
Shortest paths
Optimal binary search tree
0/1 knapsack problem
Traveling salesman problem

8. Define a multistage graph.
A multistage graph G = (V, E) is a directed graph in which all the vertices are partitioned into k stages, where k >= 2. In the multistage graph problem we have to find the shortest path from source to sink. The cost of each path is calculated using the weights along its edges.

9. What are the types of shortest path problems?
dist(a, b) = min{cost of a path from a to b}, if a path exists between a and b; ∞ otherwise.
Single-pair shortest path: in these problems we have to find the shortest path from a to b for given vertices a and b.
All-pairs shortest paths: in these problems we have to find a shortest path for every pair of vertices a and b.

10. State the shortest path problem. (Apr/May 2008)
The shortest path problem is the problem of finding a path between two vertices (or nodes) such that the sum of the weights of its constituent edges is minimized.

11. What is the use of Dijkstra's algorithm?
Dijkstra's algorithm is used to solve the single-source shortest-paths problem: for a given vertex called the source in a weighted connected graph, find the shortest paths to all its other vertices. The single-source shortest-paths problem asks for a family of paths, each leading from the source to a different vertex in the graph, though some paths may have edges in common.

12. Define optimal binary search tree.
In an optimal binary search tree (OBST) all the nodes are arranged in such a manner that the searching cost for any node is optimal. An OBST is one special kind of advanced tree. It focuses on how to reduce the cost of searching a BST. It may not have the lowest height! It needs 3 tables to record probabilities, costs, and roots.

13. Define the 0/1 knapsack problem.
The 0/1 knapsack problem is based on the concept of filling up the knapsack using objects of least weight but maximum profit.

14. Define the Traveling Salesman Problem. (May/June 2009)
The goal of the Traveling Salesman Problem (TSP) is to find the cheapest tour of a selected number of cities with the following restrictions:
You must visit each city once and only once.
You must return to the original starting point.
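Dijkstra's algorithm as described in question 11 can be sketched with a priority queue; the adjacency-dict representation here is an assumption for illustration:

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths on a weighted graph given as
    {vertex: [(neighbor, weight), ...]}. Returns {vertex: distance}."""
    dist = {source: 0}
    heap = [(0, source)]                  # (distance, vertex) priority queue
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue                      # stale queue entry; skip it
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd              # relax edge (u, v)
                heapq.heappush(heap, (nd, v))
    return dist
```

With a binary heap and adjacency lists this runs in O(E log V), matching the efficiency quoted for Prim's algorithm in Unit 5, question 1, which has the same structure.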

15. Define transitive closure.
The transitive closure of a directed graph with n vertices is defined as the n-by-n Boolean matrix T = {tij}, in which the element in the ith row (1 <= i <= n) and the jth column (1 <= j <= n) is 1 if there exists a nontrivial directed path from the ith vertex to the jth vertex; otherwise, tij is 0.
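The transitive closure can be computed with Warshall's algorithm; a brief sketch, assuming the graph is given as a 0/1 adjacency matrix:

```python
def warshall(adj):
    """Warshall's algorithm: adj is an n-by-n Boolean (0/1) adjacency matrix
    given as a list of lists. Returns the transitive closure matrix."""
    n = len(adj)
    t = [row[:] for row in adj]           # start from the adjacency matrix
    for k in range(n):                    # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                # i reaches j directly, or via k
                t[i][j] = t[i][j] or (t[i][k] and t[k][j])
    return t
```

The three nested loops give the O(n^3) running time typical of this family of dynamic programming algorithms.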

16. Compare a feasible solution with an optimal solution. (Apr/May 2008)
While solving a problem using the greedy approach, solutions are obtained in a number of stages. Solutions that satisfy the problem's constraints are called feasible solutions. Among the feasible solutions, if the best solution (either with minimum or maximum value) is chosen, then that solution is called the optimal solution.

Construction of an optimal binary search tree

ALGORITHM OptimalBST(P[1..n])
// Finds an optimal binary search tree by dynamic programming
// Input: An array P[1..n] of search probabilities for a sorted list of n keys
// (pi is the probability of a successful search for key i, qi the probability
// of an unsuccessful search falling between keys i and i+1)
// Output: Average number of comparisons in successful searches in the optimal
// BST and table R of subtree roots in the optimal BST
for i := 0 to n do
    w[i,i] := q[i]; c[i,i] := 0; r[i,i] := 0
for length := 1 to n do
    for i := 0 to n - length do
        j := i + length
        w[i,j] := w[i,j-1] + p[j] + q[j]
        m := value of k (with i < k <= j) which minimizes c[i,k-1] + c[k,j]
        c[i,j] := w[i,j] + c[i,m-1] + c[m,j]
        r[i,j] := m
        Leftson(r[i,j]) := r[i,m-1]
        Rightson(r[i,j]) := r[m,j]

The time complexity of this algorithm is O(n^3).
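Before leaving Unit 3: the 0/1 knapsack problem defined in question 13 is the other standard dynamic programming exercise. A sketch of the usual table recurrence (function name and data layout are illustrative):

```python
def knapsack_01(values, weights, W):
    """0/1 knapsack by dynamic programming.
    table[i][w] = best value using the first i items with capacity w."""
    n = len(values)
    table = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            table[i][w] = table[i - 1][w]        # option 1: skip item i
            if weights[i - 1] <= w:              # option 2: take item i
                table[i][w] = max(table[i][w],
                                  table[i - 1][w - weights[i - 1]] + values[i - 1])
    return table[n][W]
```

Each table cell is filled once, so the running time is O(nW).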

UNIT-4 Backtracking

1. State the principle of backtracking. (Nov/Dec 2005)
Backtracking is a method in which the desired solution is expressed as an n-tuple (x1, x2, ..., xn) chosen from the solution space using the backtrack formulation. The solution obtained, i.e., (x1, x2, ..., xn), either minimizes, maximizes or satisfies the criterion function.

2. What is a state space tree? (Nov/Dec 2008)
A state space tree is a rooted tree whose nodes represent partially constructed solutions to the given problem. In the backtracking method the state space tree is built for finding the solution. This tree is built in a depth-first-search fashion.

3. Give a formal definition of the n-queens problem.
Consider an n x n chessboard on which we have to place n queens such that no two queens can attack each other by being in the same row, the same column or the same diagonal. Diagonal conflicts can be checked using the following formula. Let P1 = (i, j) and P2 = (k, l) be two positions.
Then P1 and P2 are positions on the same diagonal if i + j = k + l or i - j = k - l.
4. What are the tricks used to reduce the size of the state-space tree?
The various tricks are:
Exploit the symmetry often present in combinatorial problems, so that some solutions can be obtained by reflection of others. This cuts the size of the tree by about half.
Preassign values to one or more components of a solution.
Rearrange the data of a given instance.

5. What is the method used to find the solution in the n-queens problem by symmetry?
The board of the n-queens problem has several symmetries, so that some solutions can be obtained from others by reflection. Placements in the last n/2 columns need not be considered, because any solution with the first queen in square (1, i), n/2 <= i <= n, can be obtained by reflection from a solution with the first queen in square (1, n-i+1).

6. What are the additional items required for branch and bound compared to the backtracking technique? (Nov/Dec 2006)
In this method it is not necessary to use DFS for obtaining the solution; even BFS or best-first search can be applied. Typically, optimization problems are solved using branch and bound. It proceeds on better solutions, so there cannot be a bad solution. The state space tree needs to be searched completely, as there may be a chance of an optimum solution being anywhere in the state space tree.
Applications: job sequencing.

7. What is a promising node in the state-space tree?
A node in a state-space tree is said to be promising if it corresponds to a partially constructed solution that may still lead to a complete solution.

8. What is a non-promising node in the state-space tree?
A node in a state-space tree is said to be non-promising if it corresponds to a partially constructed solution that cannot lead to a complete solution.

9. What do leaves in the state space tree represent?
Leaves in the state-space tree represent either non-promising dead ends or complete solutions found by the algorithm.

10. What is the manner in which the state-space tree for a backtracking algorithm is constructed?
In the majority of cases, a state-space tree for a backtracking algorithm is constructed in the manner of depth-first search. If the current node is promising, its child is generated by adding the first remaining legitimate option for the next component of a solution, and the processing moves to this child. If the current node turns out to be non-promising, the algorithm backtracks to the node's parent to consider the next possible option for its last component. Finally, if the algorithm reaches a complete solution to the problem, it either stops or backtracks to continue searching for other possible solutions.

11. Draw the solution for the 4-queens problem.
. Q . .
. . . Q
Q . . .
. . Q .
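The depth-first construction described in question 10 can be sketched for the n-queens problem, using the diagonal test from question 3 (an illustrative version, not from the notes):

```python
def solve_n_queens(n):
    """Return one placement of n queens as a list of column indices per row,
    or None if no solution exists."""
    cols = []                              # cols[r] = column of queen in row r

    def safe(r, c):
        # A queen at (r, c) conflicts with (i, cols[i]) if same column,
        # or same diagonal: i - cols[i] == r - c or i + cols[i] == r + c.
        return all(cols[i] != c and i - cols[i] != r - c
                   and i + cols[i] != r + c for i in range(r))

    def place(r):
        if r == n:
            return True                    # all queens placed: a solution
        for c in range(n):
            if safe(r, c):                 # promising: extend the solution
                cols.append(c)
                if place(r + 1):
                    return True
                cols.pop()                 # non-promising: backtrack
        return False

    return cols if place(0) else None
```

For n = 4 this finds the placement drawn in question 11 (queens in columns 2, 4, 1, 3 of rows 1-4).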


12. How can the output of a backtracking algorithm be thought of?
The output of a backtracking algorithm can be thought of as an n-tuple (x1, ..., xn) where each coordinate xi is an element of some finite linearly ordered set Si. If such a tuple (x1, ..., xi) is not a solution, the algorithm finds the next element in Si+1 that is consistent with the values of (x1, ..., xi) and the problem's constraints and adds it to the tuple as its (i+1)st coordinate. If such an element does not exist, the algorithm backtracks to consider the next value of xi, and so on.

13. Give a template for a generic backtracking algorithm.
ALGORITHM Backtrack(X[1..i])
// Gives a template of a generic backtracking algorithm
// Input: X[1..i] specifies the first i promising components of a solution
// Output: All the tuples representing the problem's solutions
if X[1..i] is a solution
    write X[1..i]
else
    for each element x ∈ Si+1 consistent with X[1..i] and the constraints do
        X[i+1] := x
        Backtrack(X[1..i+1])

14. Define a Hamiltonian circuit.
A Hamiltonian circuit is defined as a cycle that passes through all the vertices of the graph exactly once. It is named after the Irish mathematician Sir William Rowan Hamilton (1805-1865). It is a sequence of n+1 adjacent vertices vi0, vi1, ..., vin-1, vi0, where the first vertex of the sequence is the same as the last one while all the other n-1 vertices are distinct.

15. What is the subset-sum problem?
Find a subset of a given set S = {s1, ..., sn} of n positive integers whose sum is equal to a given positive integer d.

16. When can a node be terminated in the subset-sum problem?
The sum s of the numbers included so far is recorded at the node. The node can be terminated as a non-promising node if either of the two inequalities holds:
s + s(i+1) > d    (the sum s is too large)
s + Σ (j=i+1..n) sj < d    (the sum s is too small)

16. What is graph coloring?

Graph coloring is the problem of coloring each vertex in a graph such that no two adjacent vertices have the same color. Some direct examples:
Map coloring
Register assignment
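A simple greedy heuristic illustrates the problem (a sketch only; it does not guarantee the minimum number of colors):

```python
def greedy_coloring(graph):
    """Assign each vertex the smallest color not used by its neighbors.
    graph is {vertex: set_of_neighbors}. Returns {vertex: color}."""
    color = {}
    for v in graph:
        used = {color[u] for u in graph[v] if u in color}   # neighbors' colors
        c = 0
        while c in used:                                    # smallest free color
            c += 1
        color[v] = c
    return color
```

In the register-assignment example, vertices are variables, edges join variables that are live at the same time, and colors are registers.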

17. Give an application of graph coloring.
Twelve faculty members in a mathematics department serve on the following committees:
Undergraduate education: Sineman, Limitson, Axiomus, Functionini
Graduate education: Graphian, Vectorades, Functionini, Infinitescu
Colloquium: Lemmeau, Randomov, Proofizaki
Library: Van Sum, Sineman, Lemmeau
Staffing: Graphian, Randomov, Vectorades, Limitson
Promotion: Vectorades, Van Sum, Parabolton
Scheduling the committee meetings so that no member must attend two meetings at once is a graph coloring problem: committees are vertices, committees sharing a member are adjacent, and colors are time slots.

UNIT-5 Graph Algorithms & Branch and Bound

1. How efficient is Prim's algorithm? (Nov/Dec 2006), (May/June 2009)
If Prim's algorithm is implemented using an adjacency matrix, then the time complexity is O(V^2), where V is the total number of vertices in the given graph. If Prim's algorithm is implemented using a binary heap with the graph represented as adjacency lists, then the time complexity is O(E log V), where V is the total number of vertices and E is the total number of edges.

2. What is a minimum cost spanning tree? (May/June 2007)
A minimum cost spanning tree is a spanning tree with weight less than or equal to the weight of every other spanning tree.

3. Give the formula used to find the upper bound for the knapsack problem.
A simple way to find the upper bound ub is to add to v, the total value of the items already selected, the product of the remaining capacity of the knapsack W-w and the best per-unit payoff among the remaining items, which is vi+1/wi+1:
ub = v + (W-w)(vi+1/wi+1)

4. Define articulation point. (Nov/Dec 2009), (Apr/May 2008)
A vertex v in a connected graph G is an articulation point if the deletion of vertex v together with all edges incident to v disconnects the graph into two or more nonempty components.

5. When do you terminate the search path in a state space tree of a branch and bound algorithm?
(Nov/Dec 2008)
The algorithm always keeps the list of live nodes. When all the children of an E-node E have been generated, E becomes a dead node; this happens only if none of E's children is an answer node. If there are no live nodes left, the algorithm terminates; otherwise, Least() correctly chooses the next E-node and the search continues.

6. State the property of NP-hard problems. (Apr/May 2008)
A problem H is NP-hard if and only if there is an NP-complete problem L that is polynomial-time reducible to H.

7. When can a search path be terminated in a branch-and-bound algorithm?
A search path at the current node in a state-space tree of a branch-and-bound algorithm can be terminated if

The value of the node's bound is not better than the value of the best solution seen so far.
The node represents no feasible solution because the constraints of the problem are already violated.
The subset of feasible solutions represented by the node consists of a single point; in this case compare the value of the objective function for this feasible solution with that of the best solution seen so far, and update the latter with the former if the new solution is better.

8. What are the strengths of backtracking and branch-and-bound?
The strengths are as follows:
They are typically applied to difficult combinatorial problems for which no efficient algorithm for finding an exact solution possibly exists.
They hold hope for solving some instances of nontrivial sizes in an acceptable amount of time.
Even if they do not eliminate any elements of a problem's state space and end up generating all its elements, they provide a specific technique for doing so, which can be of some value.

9. What is best-first branch-and-bound?
It is sensible to consider a node with the best bound as the most promising, although this does not preclude the possibility that an optimal solution will ultimately belong to a different branch of the state-space tree. This strategy is called best-first branch-and-bound.
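The upper-bound formula from question 3 can be turned into a small helper used at each node of a best-first branch-and-bound knapsack search; the signature and item layout here are assumptions for illustration:

```python
def knapsack_upper_bound(v, w, W, items, i):
    """Upper bound ub = v + (W - w)(v_{i+1} / w_{i+1}) for a knapsack node.
    v, w: total value and weight of items already selected; W: capacity;
    items: list of (value, weight) sorted by value-to-weight ratio, descending;
    i: index of the last item considered (items[i+1] is the next candidate)."""
    if i + 1 >= len(items):
        return v                      # no items remain: the node's value is final
    nv, nw = items[i + 1]
    return v + (W - w) * (nv / nw)    # remaining capacity at the best unit payoff
```

A node whose bound does not exceed the best solution found so far can be terminated, as listed in question 7.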
10. Give the formula used to find the upper bound for the knapsack problem.
A simple way to find the upper bound ub is to add to v, the total value of the items already selected, the product of the remaining capacity of the knapsack W-w and the best per-unit payoff among the remaining items, which is vi+1/wi+1:
ub = v + (W-w)(vi+1/wi+1)

11. What is the knapsack problem?
Given n items of known weights wi and values vi, i = 1, 2, ..., n, and a knapsack of capacity W, find the most valuable subset of the items that fits in the knapsack. It is convenient to order the items of a given instance in descending order by their value-to-weight ratios. Then the first item gives the best payoff per weight unit and the last one gives the worst payoff per weight unit.

12. Compare backtracking and branch-and-bound.
Backtracking:
1. The state-space tree is constructed using depth-first search.
2. It finds solutions for combinatorial non-optimization problems.
3. No bounds are associated with the nodes in the state-space tree.
Branch-and-bound:
1. The state-space tree is constructed using best-first search.
2. It finds solutions for combinatorial optimization problems.
3. Bounds are associated with each and every node in the state-space tree.