Hence, many solution algorithms can be derived for a given problem. The next step is to analyze the
proposed solution algorithms and implement the most suitable one.
Q. Define algorithm and describe its complexities.
An algorithm is a finite set of instructions or logic, written in order, to accomplish a certain
predefined task. An algorithm is not the complete code or program; it is just the core logic (solution)
of a problem, which can be expressed either as an informal high-level description, as pseudocode, or
using a flowchart.
An algorithm is said to be efficient and fast if it takes less time to execute and consumes less
memory space. The performance of an algorithm is measured on the basis of the following properties:
1. Time Complexity
2. Space Complexity
Space Complexity: It is the amount of memory space required by the algorithm during the course
of its execution. Space complexity must be taken seriously for multi-user systems and in situations
where limited memory is available.
An algorithm generally requires space for the following components:
Instruction Space: The space required to store the executable version of the program.
This space is fixed, but varies with the number of lines of code in the program.
Data Space: The space required to store all constant and variable values.
Environment Space: The space required to store the environment information needed to
resume a suspended function.
Time Complexity: Time complexity is a way to represent the amount of time needed by the
program to run to completion. We will study this in detail in a later section.
Q. Describe the Types of Notations for Time Complexity
Types of Notations for Time Complexity:
Now we will discuss and understand the various notations used for Time Complexity.
1. Big Oh denotes "fewer than or the same as" <expression> iterations.
2. Big Omega denotes "more than or the same as" <expression> iterations.
3. Big Theta denotes "the same as" <expression> iterations.
4. Little Oh denotes "fewer than" <expression> iterations.
5. Little Omega denotes "more than" <expression> iterations.
Understanding Notations of Time Complexity with Example
O (expression) is the set of functions that grow slower than or at the same rate as expression.
Omega (expression) is the set of functions that grow faster than or at the same rate as expression.
Theta (expression) consist of all the functions that lie in both O(expression) and Omega
(expression).
Suppose you've calculated that an algorithm takes f(n) operations, where,
f(n) = 3*n^2 + 2*n + 4.
// n^2 means square of n
Since this polynomial grows at the same rate as n^2, you can say that the function f lies in
the set Theta(n^2). (It also lies in the sets O(n^2) and Omega(n^2) for the same reason.)
The simplest explanation is that Theta denotes the same rate of growth as the expression. Hence, since
f(n) grows like n^2, its time complexity is best represented as Theta(n^2).
Q. Mention the criteria upon which we can judge an algorithm.
An algorithm must satisfy the following criteria:
1. Input- These are the values that are supplied externally to the algorithm.
2. Output- These are the results that are produced by the algorithm.
3. Definiteness- Each step must be clear and unambiguous.
4. Finiteness- The algorithm must terminate after a finite number of steps.
5. Effectiveness- Each step must be feasible i.e. it should be practically possible to perform the
step.
Q. Distinguish between algorithm and method or technique.
Algorithm: An algorithm is a well-defined, formalized approach to a particular problem where the
input and the desired output are formally specified. An algorithm is a complete description of how
to correctly produce that output from the input.
If you need to sort a group of data, you have a lot of different algorithms to choose from. You
could use the Bubble Sort algorithm, the Insertion Sort algorithm, etc.
No matter which sorting algorithm you choose, it is just going to be some lines of code that sorts the
group of data; you could keep all of it inside your main class without having a single function.
Method: A method simply encapsulates any amount of code into a reusable container.
A technique is a broad word for any kind of general approach that may be used to make progress on
problems. Being general, it is usually not specific enough to the details of any given problem to be
a one-stop solution. It may also be the case that coming up with an algorithm to a problem requires
combining multiple techniques.
If I wanted to print a really long message multiple times throughout my code, I could create a
method that prints that message; then, every time I wanted to print the message, I would
simply call the method and it would take care of everything. I only had to write the message once
rather than writing it each time I wanted to print it.
Q. Why Study Algorithms?
Computer scientists learn by experience. We learn by seeing others solve problems and by solving
problems by ourselves. Being exposed to different problem-solving techniques and seeing how
different algorithms are designed helps us to take on the next challenging problem that we are
given. By considering a number of different algorithms, we can begin to develop pattern
recognition so that the next time a similar problem arises, we are better able to solve it.
Algorithms are often quite different from one another. Consider the example of sqrt seen earlier.
It is entirely possible that there are many different ways to implement the details to compute the
square root function. One algorithm may use many fewer resources than another.
One algorithm might take 10 times as long to return the result as the other. We would like to have
some way to compare these two solutions. Even though they both work, one is perhaps better
than the other. We might suggest that one is more efficient or that one simply works faster or uses
less memory. As we study algorithms, we can learn analysis techniques that allow us to compare
and contrast solutions based solely on their own characteristics, not the characteristics of the
program or computer used to implement them.
In the worst case scenario, we may have a problem that is intractable, meaning that there is no
algorithm that can solve the problem in a realistic amount of time. It is important to be able to
distinguish between those problems that have solutions, those that do not, and those where
solutions exist but require too much time or other resources to work reasonably.
There will often be trade-offs that we will need to identify and decide upon. As computer scientists,
in addition to our ability to solve problems, we will also need to know and understand solution
evaluation techniques. In the end, there are often many ways to solve a problem.
Finding a solution and then deciding whether it is a good one are tasks that we will do over and
over again.
Q. Write a selection sort algorithm to sort n numbers.
Algorithm
Step 1 - Set MIN to location 0
Step 2 - Search for the minimum element in the list
Step 3 - Swap it with the value at location MIN
Step 4 - Increment MIN to point to the next element
Step 5 - Repeat until the list is sorted
The step-count (frequency) table for a simple summation algorithm illustrates how the total step count is obtained:

Statement                 S/E    Freq.    Total
Algorithm Sum(a[], n)      0       -        0
{                          0       -        0
    s = 0.0;               1       1        1
    for i = 1 to n do      1      n+1      n+1
        s = s + a[i];      1       n        n
    return s;              1       1        1
}                          0       -        0
Total                                     2n + 3
Algorithms have many practical applications. For example:
The Internet enables people around the world to quickly access and retrieve large amounts of
information.
Electronic commerce enables goods and services to be exchanged electronically.
Manufacturing and other commercial enterprises need to allocate resources in the most beneficial
way.
Q. Explain the complexity.
Algorithm Complexity: Suppose X is an algorithm and n is the size of input data, the time and
space used by the Algorithm X are the two main factors which decide the efficiency of X.
Time Factor: Time is measured by counting the number of key operations, such as
comparisons in a sorting algorithm.
Space Factor: Space is measured by counting the maximum memory space required
by the algorithm.
The complexity of an algorithm f(n) gives the running time and / or storage space required by the
algorithm in terms of n as the size of input data.
Space Complexity: Space complexity of an algorithm represents the amount of memory space
required by the algorithm in its life cycle. Space required by an algorithm is equal to the sum of the
following two components
A fixed part that is a space required to store certain data and variables, that are independent
of the size of the problem. For example simple variables & constant used, program size etc.
A variable part is a space required by variables, whose size depends on the size of the
problem. For example dynamic memory allocation, recursion stack space etc.
Space complexity S(P) of any algorithm P is S(P) = C + SP(I), where C is the fixed part and SP(I) is
the variable part of the algorithm, which depends on instance characteristic I. Following is a simple
example that tries to explain the concept:
Algorithm: SUM(A, B)
Step 1 - START
Step 2 - C <- A + B + 10
Step 3 - Stop
Here we have three variables A, B and C and one constant. Hence S(P) = 1+3. Now space depends
on data types of given variables and constant types and it will be multiplied accordingly.
Time Complexity
Time Complexity of an algorithm represents the amount of time required by the algorithm to run to
completion. Time requirements can be defined as a numerical function T(n), where T(n) can be
measured as the number of steps, provided each step consumes constant time.
For example, addition of two n-bit integers takes n steps. Consequently, the total computational
time is T(n) = c*n, where c is the time taken for addition of two bits. Here, we observe that T(n)
grows linearly as input size increases.
Q. Define asymptotic analysis and expression.
Asymptotic analysis: Asymptotic analysis of an algorithm refers to defining the mathematical
bounds of its run-time performance. Using asymptotic analysis, we can
conclude the best-case, average-case and worst-case behavior of an algorithm.
Asymptotic analysis is input bound, i.e., if there is no input to the algorithm it is concluded to work
in constant time. Other than the "input", all other factors are considered constant.
Asymptotic analysis refers to computing the running time of any operation in mathematical units of
computation. For example, the running time of one operation may be computed as f(n), while for
another operation it is computed as g(n^2). This means the running time of the first operation will increase
linearly with the increase in n, while the running time of the second operation will increase quadratically
when n increases. Similarly, the running times of both operations will be nearly the same if n is
significantly small.
Usually, the time required by an algorithm falls under three types of notation:
Theta Notation
Big O Notation
Omega Notation
1) Theta Notation: The Theta notation bounds a function from above and below, so it defines exact
asymptotic behavior.
A simple way to get the Theta notation of an expression is to drop low-order terms and ignore leading
constants.
For example, consider the following expression.
3n^3 + 6n^2 + 6000 = Theta(n^3)
Dropping lower-order terms is always fine because there will always be an n0 after which n^3 beats
6n^2, irrespective of the constants involved.
Theta(g(n)) = { f(n): there exist positive constants c1, c2 and
n0 such that 0 <= c1*g(n) <= f(n) <= c2*g(n) for
all n >= n0 }
This definition means that if f(n) is Theta of g(n), then the value f(n) is always between c1*g(n)
and c2*g(n) for large values of n (n >= n0). The definition of Theta also requires that f(n) must be
non-negative for values of n greater than n0.
2) Big O Notation: The Big O notation defines an upper bound of an algorithm; it bounds a
function only from above. For example, consider the case of Insertion Sort. It takes linear time in
the best case and quadratic time in the worst case. We can safely say that the time complexity of
Insertion Sort is O(n^2). Note that O(n^2) also covers linear time.
If we use Theta notation to represent the time complexity of Insertion Sort, we have to use two statements
for best and worst cases:
1. The worst-case time complexity of Insertion Sort is Theta(n^2).
2. The best-case time complexity of Insertion Sort is Theta(n).
The Big O notation is useful when we only have upper bound on time complexity of an algorithm.
Many times we easily find an upper bound by simply looking at the algorithm.
O(g(n)) = { f(n): there exist positive constants c and
n0 such that 0 <= f(n) <= c*g(n) for
all n >= n0 }
3) Omega Notation: Just as Big O notation provides an asymptotic upper bound on a function, Omega
notation provides an asymptotic lower bound. Let us consider the same Insertion Sort example here. The
time complexity of Insertion Sort can be written as Omega(n), but this is not very useful information
about Insertion Sort, as we are generally interested in the worst case and sometimes in the average case.
Q. What is a Magic Square? How to Solve a Magic Square?
Magic Square: A magic square is an arrangement of the numbers from 1 to n2 (n-squared) in an
nxn matrix, with each number occurring exactly once, and such that the sum of the entries of any
row, any column, or any main diagonal is the same. It is not hard to show that this sum must be
n(n2+1)/2.
The simplest magic square is the 1x1 magic square, whose only entry is the number 1. The smallest
nontrivial case is the 3x3 magic square (for example, with rows 8 1 6, 3 5 7, 4 9 2)
and those derived from it by symmetries of the square. This 3x3 square is definitely magic and
satisfies the definition given above.
How to solve:
Magic squares have grown in popularity with the advent of mathematics-based games like Sudoku.
A magic square is an arrangement of numbers in a square in such a way that the sum of each row,
column, and diagonal is one constant number, the so-called "magic constant." This article will tell
you how to solve any type of magic square, whether odd-numbered, singly even-numbered, or
doubly-even numbered.
Method 1: Solving an Odd-Numbered Magic Square
1. Calculate the magic constant: You can find this number by using a simple math formula, where
n = the number of rows or columns in your magic square. So, for example, in a 3x3 magic square, n
= 3. The magic constant = [n * (n^2 + 1)] / 2. So, in the example of the 3x3 square:
sum = [3 * (9 + 1)] / 2
sum = (3 * 10) / 2
sum = 30 / 2
sum = 15
2. Place the number 1 in the center box on the top row. This is always where you begin when
your magic square has odd-numbered sides, regardless of how large or small that number is. So, if
you have a 3x3 square, place the number 1 in Box 2; in a 15x15 square, place the number 1 in Box
8.
3. Fill in the remaining numbers using an up-one, right-one pattern. You will always fill in the
numbers sequentially (1, 2, 3, 4, etc.) by moving up one row, then one column to the right. You'll
notice immediately that in order to place the number 2, you'll move above the top row, off the
magic square. That's okay: although you always work in this up-one, right-one manner, there are
three exceptions that also have patterned, predictable rules:
If the movement takes you to a box above the magic square's top row, remain in that
box's column, but place the number in the bottom row of that column.
If the movement takes you to a box to the right of the magic square's right column,
remain in that box's row, but place the number in the furthest left column of that row.
If the movement takes you to a box that is already occupied, go back to the last box that has
been filled in, and place the next number directly below it.
Method 2: Solving a Singly Even-Numbered Magic Square
1. Understand the size. A singly even square has a number of boxes per side that is divisible by 2,
but not 4. The smallest possible singly even magic square is 6x6, since 2x2 magic squares can't be
made.
2. Calculate the magic constant. Use the same method as you would with odd magic squares: the
magic constant = [n * (n^2 + 1)] / 2, where n = the number of boxes per side. So, in the example of a
6x6 square:
sum = (6 * 37) / 2
sum = 222 / 2
sum = 111
3. Divide the magic square into four quadrants of equal size. Label them A (top left), C (top
right), D (bottom left) and B (bottom right). To figure out how large each quadrant should be, simply
divide the number of boxes in each row or column in half.
4. Assign each quadrant a number range. Quadrant A gets the first quarter of numbers; Quadrant
B the second quarter; Quadrant C the third quarter; and Quadrant D the final quarter of the total
number range for the 6x6 magic square.
In the example of a 6x6 square, Quadrant A would be solved with the numbers from 1-9;
Quadrant B with 10-18; Quadrant C with 19-27; and Quadrant D with 28-36.
5. Solve each quadrant using the methodology for odd-numbered magic squares. Quadrant A
will be simple to fill out, as it starts with the number 1, as magic squares usually do. Quadrants B-D,
however, will start with strange numbers: 10, 19, and 28, respectively, in our example.
Treat the first number of each quadrant as though it is the number one. Place it in the center
box on the top row of each quadrant.
Treat each quadrant like its own magic square. Even if a box is available in an adjacent
quadrant, ignore it and jump to the exception rule that fits your situation.
6. Create Highlights A and D. If you tried to add up your columns, rows, and diagonals right now,
you'd notice that they don't yet add up to your magic constant. You have to swap some boxes
between the top left and bottom left quadrants to finish your magic square. We'll call those
swapped areas Highlight A and Highlight D.
Using a pencil, mark all the squares in the top row until you reach the median box position of
Quadrant A. So, in a 6x6 square, you would only mark Box 1 (which would have the number 8
in it), but in a 10x10 square, you would mark Boxes 1 and 2 (which would have the numbers 17
and 24 in them, respectively).
Mark out a square using the boxes you just marked as the top row. If you only marked one
box, your square is just that one box. We'll call this area Highlight A-1.
So, in a 10x10 magic square, Highlight A-1 would consist of Boxes 1 and 2 in Rows 1 and
2, creating a 2x2 square in the top left of the quadrant.
In the row directly below Highlight A-1, skip the number in the first column, then mark as
many boxes across as you marked in Highlight A-1. We'll call this middle row Highlight A-2.
Highlight A-3 is a box identical to A-1, but placed in the bottom left corner of the quadrant.
Highlights A-1, A-2, and A-3 together comprise Highlight A.
Repeat this process in Quadrant D, creating an identical highlighted area called Highlight D.
7. Swap Highlights A and D. This is a one-to-one swap; simply lift and replace the boxes between
Quadrant A and Quadrant D without changing their order at all. Once you've done this, all the
rows, columns, and diagonals in your magic square should add up to the magic constant you
calculated.
Method 3: Solving a Doubly Even-Numbered Magic Square
1. Understand the size. A doubly even square has a number of boxes per side that is divisible by 4.
The smallest such square is 4x4.
2. Calculate the magic constant. Use the same method as you would with odd-numbered or
singly even magic squares: the magic constant = [n * (n^2 + 1)] / 2, where n = the number of boxes
per side. So, in the example of a 4x4 square:
sum = (4 * 17) / 2
sum = 68 / 2
sum = 34
3. Create Highlights A-D. In each corner of the magic square, mark a mini-square with sides a
length of n/4, where n = the length of a side of the whole magic square. Label them Highlights A,
B, C, and D in a counter-clockwise manner.
In a 4x4 square, you would simply mark the four corner boxes.
In a 12x12 square, each Highlight would be a 3x3 area in the corners, and so on.
4. Create the Central Highlight. Mark all the boxes in the center of the magic square in a square
area of length n/2, where n = the length of a side of the whole magic square. The Central Highlight
should not overlap with Highlights A-D at all, but touch each of them at the corners.
In a 4x4 square, the Central Highlight would be a 2x2 area in the center.
In an 8x8 square, the Central Highlight would be a 4x4 area in the center, and so on.
5. Fill in the magic square, but only in Highlighted areas. Begin filling in the numbers of your
magic square from left to right, but only write in the number if the box falls into a Highlight. So, in
a 4x4 box, you would fill in the following boxes:
6. Fill in the rest of the magic square by counting backwards. This is essentially the inverse of the
previous step. Begin again with the top left box, but this time, skip all boxes that fall in a Highlighted
area, and fill in the non-highlighted boxes by counting backwards, beginning with the largest number in
your number range. So, in a 4x4 magic square, you would fill in the following:
Q. Explain the divide-and-conquer approach.
The divide-and-conquer approach involves:
Breaking the problem into several sub-problems that are similar to the original problem but
smaller in size,
Solving these sub-problems recursively, and then
Combining these solutions to the sub-problems to create a solution to the original problem.
Broadly, we can understand the divide-and-conquer approach as a three-step process.
Divide/Break
This step involves breaking the problem into smaller sub-problems. Sub-problems should
represent a part of the original problem. This step generally takes a recursive approach to divide
the problem until no sub-problem is further divisible. At this stage, sub-problems become
atomic in nature but still represent some part of the actual problem.
Conquer/Solve
This step receives a lot of smaller sub-problems to be solved. Generally, at this level, the
problems are considered 'solved' on their own.
Merge/Combine
When the smaller sub-problems are solved, this stage recursively combines them until they
formulate a solution to the original problem.
This algorithmic approach works recursively, and the conquer and merge steps work so closely that
they appear as one.
Binary search is a classic example. Suppose we search a sorted array of 10 items (locations 0 to 9)
for the target value 31. First, we determine the middle of the array by using this formula:
mid = low + (high - low) / 2
With low = 0 and high = 9, mid is 4. Now we compare the value stored at location 4 with the value
being searched, i.e. 31. We find that the value at location 4 is 27, which is not a match. Because the
target value is greater than 27 and we have a sorted array, we also know that the target value must
be in the upper portion of the array.
We change our low to mid + 1 and find the new mid value again.
low = mid + 1
mid = low + (high - low) / 2
Our new mid is now 7. We compare the value stored at location 7 with our target value 31.
The value stored at location 7 is not a match; rather, it is greater than what we are looking for. So the
target must be in the lower part from this location. We therefore set high = mid - 1 and compute mid
again, which gives 5.
We compare the value stored at location 5 with our target value. We find that it is a match.
Q. Define divide and conquer method. Write the control abstraction for divide and conquer
method.
Divide and conquer (D&C): Divide and conquer is an algorithm design paradigm based on multi-branched
recursion. A divide-and-conquer algorithm works by recursively breaking down a problem
into two or more sub-problems of the same or related type, until these become simple enough to be
solved directly. The solutions to the sub-problems are then combined to give a solution to the
original problem.
Control Abstraction for Divide and Conquer: A control abstraction is a procedure that reflects the
way an actual program based on DAndC will look. A control abstraction shows clearly the flow of
control, but the primary operations are specified by other procedures. The control abstraction can be
written either iteratively or recursively.
If we are given a problem with n inputs and it is possible to split the n inputs into k
subsets, where each subset represents a sub-problem similar to the main problem, then it can be
solved by using the divide-and-conquer strategy.
If the sub-problems are relatively large, the divide-and-conquer strategy is reapplied. The sub-problems
resulting from the divide-and-conquer design are of the same type as the original problem.
Generally, a divide-and-conquer problem is expressed using recursive formulas and functions.
A general divide-and-conquer design strategy (control abstraction) is illustrated below:
Algorithm DAndC(P)
{
    if Small(P) then return S(P); // termination condition
    else
    {
        Divide P into smaller instances P1, P2, ..., Pk, k >= 1;
        Apply DAndC to each of these sub-problems;
        return Combine(DAndC(P1), DAndC(P2), ..., DAndC(Pk));
    }
}
The above block of code represents a control abstraction for the divide-and-conquer strategy. Small(P)
is a Boolean-valued function that determines whether the input size is small enough that the answer
can be computed without splitting. If Small(P) is true, then function S is invoked. Otherwise the
problem P is divided into sub-problems. These sub-problems are solved by recursive application
of divide and conquer. Finally, the solutions of the k sub-problems are combined to obtain the solution
of the given problem.
If the size of P is n and the sizes of the k sub-problems are n1, n2, ..., nk
respectively, then the computing time of DAndC is described by the recurrence relation
T(n) = g(n)                                   when n is small
T(n) = T(n1) + T(n2) + ... + T(nk) + f(n)     otherwise,
where T(n) denotes the time for DAndC on any input of size n,
g(n) is the time to compute the answer directly for small inputs, and
f(n) is the time for dividing P and combining the solutions of the sub-problems.
Q. What is greedy method? Classify the paradigms for greedy method. Write the control
abstraction for the subset paradigm.
Greedy method: A greedy algorithm is an algorithmic paradigm that follows the problem solving
heuristic of making the locally optimal choice at each stage with the hope of finding a global
optimum. In many problems, a greedy strategy does not in general produce an optimal solution, but
nonetheless a greedy heuristic may yield locally optimal solutions that approximate a global
optimal solution in a reasonable time.
For example, a greedy strategy for the traveling salesman problem (which is of a high
computational complexity) is the following heuristic: "At each stage visit an unvisited city nearest
to the current city". This heuristic need not find a best solution, but terminates in a reasonable
number of steps; finding an optimal solution typically requires unreasonably many steps. In
mathematical optimization, greedy algorithms solve combinatorial problems having the properties
of matroids.
Q. Briefly describe optimal substructure property with appropriate example.
Optimal Substructure: A given problem has the Optimal Substructure Property if an optimal solution of
the given problem can be obtained by using optimal solutions of its subproblems.
For example, the shortest path problem has the following optimal substructure property: if a node x lies
on the shortest path from a source node u to a destination node v, then the shortest path from u to v is
the combination of the shortest path from u to x and the shortest path from x to v. The standard All-Pairs
Shortest Path algorithms like Floyd-Warshall and Bellman-Ford are typical examples of Dynamic
Programming.
On the other hand, the Longest Path problem doesn't have the Optimal Substructure property. Here
by Longest Path we mean the longest simple path (path without a cycle) between two nodes. Consider
the following unweighted graph given in the CLRS book. There are two longest paths from q to t:
q->r->t and q->s->t. Unlike shortest paths, these longest paths do not have the optimal substructure
property. For example, the longest path q->r->t is not a combination of the longest path from q to r and
the longest path from r to t, because the longest path from q to r is q->s->t->r.
One more example: Consider finding a shortest path for travelling between two cities by car, as
illustrated in Figure 1. Such an example is likely to exhibit optimal substructure. That is, if the
shortest route from Seattle to Los Angeles passes through Portland and then Sacramento, then the
shortest route from Portland to Los Angeles must pass through Sacramento too. That is, the
problem of how to get from Portland to Los Angeles is nested inside the problem of how to get
from Seattle to Los Angeles. (The wavy lines in the graph represent solutions to the subproblems.)
As an example of a problem that is unlikely to exhibit optimal substructure, consider the problem
of finding the cheapest airline ticket from Buenos Aires to Kyiv. Even if that ticket involves stops
in Miami and then London, we can't conclude that the cheapest ticket from Miami to Kyiv stops in
London, because the price at which an airline sells a multi-flight trip is usually not the sum of the
prices at which it would sell the individual flights in the trip.
Figure 1. Finding the shortest path using optimal substructure. Numbers represent the length of the
path; straight lines indicate single edges, wavy lines indicate shortest paths, i.e., there might be
other vertices that are not shown here.
Q. What is the difference between dynamic programming and divide and conquer? Write an
algorithm for the longest common subsequence problem.
Divide and conquer:
The divide-and-conquer paradigm involves three steps at each level of the recursion:
1. Divide the problem into a number of sub-problems.
2. Conquer the sub-problems by solving them recursively. If the sub-problem sizes are small
enough, however, just solve the sub-problems in a straightforward manner.
3. Combine the solutions to the sub-problems into the solution for the original problem.
Divide-and-conquer algorithms call themselves recursively one or more times to deal with
closely related sub-problems.
D&C does more work on the sub-problems and hence has more time consumption.
In D&C the sub-problems are independent of each other.
Example: Merge Sort, Binary Search.

Dynamic programming:
The development of a dynamic-programming algorithm can be broken into a sequence of four steps:
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution in a bottom-up fashion.
4. Construct an optimal solution from computed information.
Dynamic programming is not recursive.
DP solves each sub-problem only once and then stores the result in a table.
In DP the sub-problems are not independent.
Example: Matrix chain multiplication.
C Program for Longest Common Subsequence Problem
#include<stdio.h>
#include<string.h>
int i,j,m,n,c[20][20];
char x[20],y[20],b[20][20];
void print(int i,int j)
{
if(i==0 || j==0)
return;
if(b[i][j]=='c')
{
print(i-1,j-1);
printf("%c",x[i-1]);
}
else if(b[i][j]=='u')
print(i-1,j);
else
print(i,j-1);
}
void lcs()
{
    m=strlen(x);
    n=strlen(y);
    for(i=0;i<=m;i++)
        c[i][0]=0;
    for(i=0;i<=n;i++)
        c[0][i]=0;
    /* c[i][j] = length of LCS of x[0..i-1] and y[0..j-1];
       b[i][j] records the direction used by print():
       'c' = diagonal (characters match), 'u' = up, 'l' = left */
    for(i=1;i<=m;i++)
        for(j=1;j<=n;j++)
        {
            if(x[i-1]==y[j-1])
            {
                c[i][j]=c[i-1][j-1]+1;
                b[i][j]='c';
            }
            else if(c[i-1][j]>=c[i][j-1])
            {
                c[i][j]=c[i-1][j];
                b[i][j]='u';
            }
            else
            {
                c[i][j]=c[i][j-1];
                b[i][j]='l';
            }
        }
}
int main()
{
printf("Enter 1st sequence:");
scanf("%s",x);
printf("Enter 2nd sequence:");
scanf("%s",y);
printf("\nThe Longest Common Subsequence is ");
lcs();
print(m,n);
return 0;
}
Q. Why is it necessary to have the auxiliary array in function merge? What is the best-case
time of procedure Merge Sort?
Abstract in-place merge: The method merge(a, lo, mid, hi) in Merge.java puts the
results of merging the subarrays a[lo..mid] with a[mid+1..hi] into a single ordered
array, leaving the result in a[lo..hi]. While it would be desirable to implement this method
without using a significant amount of extra space, such solutions are remarkably complicated.
Instead, merge() copies everything to an auxiliary array and then merges back to the original.
The best-case running time of Merge Sort is still of order n log n: even when the array is already
sorted, every merge must still examine elements from both halves, so the total work remains
proportional to n log n (with roughly half as many compares as in the worst case).
Q. Define the following items:
(i) Live Node (ii) Dead Node
(iii) E(Expanded) Node
Live node: Live node is a node that has been generated but whose children have not yet been
generated.
E-node: An E-node is a live node whose children are currently being explored. In other words, an E-node is a node currently being expanded.
Dead node: Dead node is a generated node that is not to be expanded or explored any further. All
children of a dead node have already been expanded.
Branch-and-bound: Branch-and-bound refers to all state space search methods in which all
children of an E-node are generated before any other live node can become the E-node.
Q. What is complexity? Explain its types.
Complexity: Complexity of an algorithm is a measure of the amount of time and/or space required
by an algorithm for an input of a given size (n). There are two main complexity measures of the
efficiency of an algorithm:
1. Time complexity
2. Space complexity
Time complexity: Time complexity of an algorithm signifies the total time required by the
program to run to completion. The time complexity of algorithms is most commonly expressed
using big O notation. It depends on two components:
Fixed Part - Compile time.
Variable Part - Run time, dependent on the problem instance. Usually only run time is considered
and compile time is ignored.
Ways to measure time complexity:
1. Use a stop watch; time is obtained in seconds or milliseconds.
2. Step count: count the number of program steps.
   - Comments are not evaluated, so they are not counted as program steps.
   - In a while loop, the steps equal the number of times the loop gets executed.
   - In a for loop, the steps equal the number of times the condition expression is
     checked.
   - A single expression is considered a single step. For example, a+f+r+w+w/q-d-f is
     one step.
3. Rate of growth (asymptotic notations).
Space complexity: Space complexity signifies the total memory required by the program to run
to completion. It also has two components:
Fixed part: needed for instruction space (i.e., byte code), space for variables, constants, etc.
Variable part: space that depends on the instance of the input and output data.
Space(S) = Fixed Part + Variable Part
Example 1:
Algorithm sum(x, y)
{
    return x + y;
}
Two computer words are required to store the variables x and y, so
Space Complexity (S) = 2.
Example 2:
Algorithm sum(a[], n)
{
    sum = 0;
    for (i = 0; i <= n; i++)
        sum = sum + a[i];
    return sum;
}
The array a[] needs n words, and one word each is needed for n, i and sum, so
Space Complexity (S) = n + 3.
problems whose solutions can be used to solve NP problems, and NP-Complete is the intersection of
NP and NP-Hard. Proving that circuit satisfiability is NP-Complete is done in the Cook-Levin theorem,
which is rather complicated. The basic idea of the proof is that circuits can simulate nondeterministic
Turing machines efficiently, so circuit satisfiability can be used to solve NP problems.
Q. What is P versus NP problem? Describe with suitable example.
P vs. NP
The P vs. NP problem asks whether every problem whose solution can be quickly verified by a
computer can also be quickly solved by a computer.
So let's figure out what we mean by P and NP.
P problems are easily solved by computers, while NP problems are not necessarily easy to solve;
but if you present a potential solution, it's easy to verify whether it's correct or not.
All P problems are also NP problems: if a problem is easy for the computer to solve, it's easy to
verify the solution. So the P vs. NP problem is just asking whether these two problem types are
the same, or whether they are different, i.e. whether there are some problems that are easily
verified but not easily solved.
It currently appears that P != NP, meaning we have plenty of examples of problems that we can
quickly verify potential answers to, but that we can't solve quickly. Let's look at a few examples:
A traveling salesman wants to visit 100 different cities by driving, starting and ending his
trip at home. He has a limited supply of gasoline, so he can only drive a total of 10,000
kilometers. He wants to know if he can visit all of the cities without running out of gasoline.
A farmer wants to take 100 watermelons of different masses to the market. She needs to
pack the watermelons into boxes. Each box can only hold 20 kilograms without breaking. The
farmer needs to know if 10 boxes will be enough for her to carry all 100 watermelons to
market.
Both of these problems share a common characteristic that is the key to understanding the intrigue of
P versus NP: in order to solve them, it seems you have to try all combinations.
Q. Write Grahams scan algorithm to compute convex hull.
GRAHAM_SCAN(Q)
1. Find p0 in Q with minimum y-coordinate (and minimum x-coordinate if there are
ties).
2. Sort the remaining points of Q (that is, Q - {p0}) by polar angle in
counterclockwise order with respect to p0.
3. TOP[S] = 0
   (Lines 3-6 initialize the stack to contain, from bottom to top, the first three points.)
4. PUSH(p0, S)
5. PUSH(p1, S)
6. PUSH(p2, S)
7. for i = 3 to m
8.     while the angle formed by points NEXT-TO-TOP(S), TOP(S), and pi makes a nonleft turn
9.         POP(S)
10.    PUSH(pi, S)
11. return S
The Traveling Salesman Problem (TSP) is possibly the classic discrete optimization problem. The
TSP is fairly easy to describe:
Q. Describe convex hull with an example. Write the Quick hull algorithm and show every step
to find a Convex Hull using Quick Hull method.
Definitions: Given a set of points in the plane. The convex hull of the set is the smallest convex
polygon that contains all the points of it.
The idea of Jarvis's algorithm is simple: we start from the leftmost point (or the point with minimum
x-coordinate value) and we keep wrapping points in counterclockwise direction. The big question is,
given a point p as the current point, how do we find the next point in the output? The idea is to use
an orientation() test here. The next point is selected as the point that beats all other points at
counterclockwise orientation, i.e., the next point is q if, for any other point r, we have
orientation(p, r, q) = counterclockwise. Following is the detailed algorithm.
1) Initialize p as the leftmost point.
2) Do the following while we don't come back to the first (or leftmost) point:
   a) The next point q is the point such that the triplet (p, q, r) is counterclockwise for any other
      point r.
   b) next[p] = q (store q as next of p in the output convex hull).
   c) p = q (set p as q for the next iteration).