ADA-ASSIGNMENT
Submitted by,
Rizawan N Shaikh
Research Scholar
RU101CS062
1. What are stacks and queues? Explain how to implement a stack using arrays.
STACK
A stack is a container of objects that are inserted and removed according to the
last-in first-out (LIFO) principle. In a pushdown stack only two operations are
allowed: push an item onto the stack, and pop an item off the stack. A stack is a
limited-access data structure - elements can be added and removed only at the
top. Push adds an item to the top of the stack; pop removes the item from the top.
QUEUE
A queue is a container of objects that are inserted and removed according to the
first-in first-out (FIFO) principle: items are inserted at the rear and removed
from the front, so the item that has waited longest is removed first.
Implementing a stack using arrays
A stack can be stored in a fixed-size array stk together with an index top of the
current top element, with top = -1 denoting an empty stack. Push stores the new
item at stk[top + 1] and increments top; pop, shown below, returns stk[top] and
decrements top:
void pop()
{
    int num;
    if (s.top == -1)
    {
        printf("Error: Stack Empty\n");
    }
    else
    {
        num = s.stk[s.top];
        printf("popped element is = %d\n", num);
        s.top = s.top - 1;
    }
}
2. Give the definition of Abstract Data Type. Explain the List ADT and the set of
operations performed on a List.
Definition
An abstract data type is defined as a mathematical model of the data objects that make up
a data type as well as the functions that operate on these objects. There are no standard
conventions for defining them. A broad division may be drawn between "imperative" and
"functional" definition styles.
Operations performed on the List ADT
INSERT(x, p, L). Insert x at position p in list L, moving elements at p and following
positions to the next higher position. That is, if L is a1, a2, ..., an, then L becomes
a1, a2, ..., a(p-1), x, ap, ..., an. If p is END(L), then L becomes a1, a2, ..., an, x.
If list L has no position p, the result is undefined.
LOCATE(x, L). This function returns the position of x on list L. If x appears more than
once, then the position of the first occurrence is returned. If x does not appear at all,
then END(L) is returned.
RETRIEVE(p, L). This function returns the element at position p on list L. The result is
undefined if p = END(L) or if L has no position p. Note that the elements must be of a
type that can be returned by a function if RETRIEVE is used.
DELETE(p, L). Delete the element at position p of list L. If L is a1, a2, ..., an, then L
becomes a1, a2, ..., a(p-1), a(p+1), ..., an. The result is undefined if L has no position p
or if p = END(L).
NEXT(p, L) and PREVIOUS(p, L) return the positions following and preceding position
p on list L. If p is the last position on L, then NEXT(p, L) = END(L). NEXT is undefined
if p is END(L). PREVIOUS is undefined if p is 1. Both functions are undefined if L has
no position p.
MAKENULL(L). This function causes L to become an empty list and returns position
END(L).
FIRST(L). This function returns the first position on list L. If L is empty, the position
returned is END(L).
Examples of the brute-force approach:
Computing n!
Selection sort
Bubble sort
Sequential search
Exhaustive search: Traveling Salesman Problem, Knapsack problem.
The worst-case running time of an algorithm gives us an upper bound on the running
time for any input. Knowing it provides a guarantee that the algorithm will never take any
longer. We need not make some educated guess about the running time and hope that it never
gets much worse.
The best-case efficiency of an algorithm is its efficiency for the best-case input of size n,
which is an input (or inputs) of size n for which the algorithm runs the fastest among all
possible inputs of that size. Accordingly, we can analyze the best-case efficiency as follows.
First, we determine the kind of inputs for which the count C(n) will be the smallest among all
possible inputs of size n. Then we ascertain the value of C(n) on these most convenient
inputs. For example, the best-case inputs for sequential search are lists of size n with their
first element equal to the search key; accordingly, Cbest(n) = 1 for this algorithm.
Conquer Step
Conquer the sub problems by solving them recursively. If the sub problem sizes are small
enough, however, just solve the sub problems in a straightforward manner. For merge sort,
conquer by recursively sorting the two sub arrays A[p .. q] and A[q + 1 .. r].
Combine Step
Combine the elements back in A[p .. r] by merging the two sorted sub
arrays A[p .. q] and A[q + 1 .. r] into a sorted sequence. To accomplish this step, we
will define a procedure MERGE (A, p, q, r).
Merge Sort Recurrence: Here is the recurrence derived for Merge Sort. Recall that T(n)
is the time to run Merge Sort on a list of size n. If the list is of length 1, the total
sorting time is a constant, Θ(1). If n > 1, then we must recursively sort two sub lists, one
of size ⌈n/2⌉ and the other of size ⌊n/2⌋; splitting the list takes constant time and the
non-recursive merge takes Θ(n) time, giving T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + Θ(n), which
solves to T(n) = Θ(n log n).
ALGORITHM Merge(B[0..p-1], C[0..q-1], A[0..p+q-1])
//Merges two sorted arrays into one sorted array
//Input: Arrays B[0..p-1] and C[0..q-1] both sorted
//Output: Sorted array A[0..p+q-1] of the elements of B and C
i ← 0; j ← 0; k ← 0
while i < p and j < q do
    if B[i] ≤ C[j]
        A[k] ← B[i]; i ← i + 1
    else A[k] ← C[j]; j ← j + 1
    k ← k + 1
if i = p
    copy C[j..q-1] to A[k..p+q-1]
else copy B[i..p-1] to A[k..p+q-1]
6. Write an algorithm for binary search and analyze its running time. Given the
following sorted numbers, explain how to use the search to locate 100.
12 20 24 30 35 40 42 78 100 110
ALGORITHM BinarySearch(A[0..n-1], K)
//Implements nonrecursive binary search
//Input: An array A[0..n-1] sorted in ascending order and
//       a search key K
//Output: An index of the array's element that is equal to K
//        or -1 if there is no such element
l ← 0; r ← n - 1
while l ≤ r do
    m ← ⌊(l + r)/2⌋
    if K = A[m] return m
    else if K < A[m] r ← m - 1
    else l ← m + 1
return -1
let a[] = 12 20 24 30 35 40 42 78 100 110
l = 0; r = 9
mid = (0 + 9)/2 = 4, a[4] = 35, 100 > 35, so l = 5, r = 9
mid = (5 + 9)/2 = 7, a[7] = 78, 100 > 78, so l = 8, r = 9
mid = (8 + 9)/2 = 8, a[8] = 100, found
Each iteration halves the search interval, so binary search makes at most
⌊log2 n⌋ + 1 comparisons; its running time is therefore O(log n). Here 100 is
located after only three probes of the ten elements.
7. Write and explain insertion sort algorithm. What is the time complexity of insertion
sort?
Insertion sort is an efficient algorithm for sorting a small number of elements. Assuming
A[0..n-2] is already sorted, all we need is to find an appropriate position for A[n-1]
among the sorted elements and insert it there. This is usually done by scanning the sorted
sub array from right to left until the first element smaller than or equal to A[n-1] is
encountered, and inserting A[n-1] right after that element. The resulting algorithm is
called straight insertion sort or simply insertion sort. Its worst-case running time is
Θ(n^2) (input sorted in decreasing order), its best case is Θ(n) (input already sorted),
and its average case is also Θ(n^2).
Example of sorting with insertion sort
8. Write and explain quick sort algorithm. Compare its performance with insertion
sort.
Quick sort is the other important sorting algorithm that is based on the divide-and-conquer
approach. Unlike merge sort, which divides its input elements according to their
position in the array, quick sort divides them according to their value. A partition is an
arrangement of the array's elements so that all the elements to the left of some element
A[s] are less than or equal to A[s], and all the elements to the right of A[s] are greater
than or equal to it. Compared with insertion sort, quick sort runs in Θ(n log n) time on
average, versus insertion sort's Θ(n^2), although its worst case, like insertion sort's,
is Θ(n^2).
ALGORITHM Quicksort(A[l..r])
//Sorts a subarray by quicksort
//Input: Subarray of array A[0..n-1], defined by its left and right
//       indices l and r
//Output: Subarray A[l..r] sorted in nondecreasing order
if l < r
    s ← Partition(A[l..r]) //s is a split position
    Quicksort(A[l..s-1])
    Quicksort(A[s+1..r])
ALGORITHM HoarePartition(A[l..r])
//Partitions a subarray by Hoare's algorithm, using the first element
//as a pivot
//Input: Subarray of array A[0..n-1], defined by its left and right
//       indices l and r (l < r)
//Output: Partition of A[l..r], with the split position returned as
//        this function's value
p ← A[l]
i ← l; j ← r + 1
repeat
    repeat i ← i + 1 until A[i] ≥ p
    repeat j ← j - 1 until A[j] ≤ p
    swap(A[i], A[j])
until i ≥ j
swap(A[i], A[j]) //undo last swap when i ≥ j
swap(A[l], A[j])
return j
9. What is the greedy strategy? What are the elements of the greedy strategy? Explain.
Greedy algorithms are simple and straightforward. They are shortsighted in their
approach in the sense that they make decisions on the basis of information at hand without
worrying about the effect these decisions may have in the future. They are easy to invent,
easy to implement and most of the time quite efficient. Many problems cannot be solved
correctly by the greedy approach. Greedy algorithms are used to solve optimization problems.
The greedy algorithm consists of four functions.
1. A function that checks whether chosen set of items provide a solution.
2. A function that checks the feasibility of a set.
3. The selection function tells which of the candidates is the most promising.
4. An objective function, which does not appear explicitly, gives the value of a solution.
10. What is dynamic programming? Discuss any one algorithm that uses dynamic
programming technique
Dynamic programming works by solving sub problems and using the results of those sub
problems to more quickly calculate the solution to a larger problem. Unlike the
divide-and-conquer paradigm (which also uses the idea of solving sub problems), dynamic
programming typically involves solving all possible sub problems rather than a small portion.
No optimal solution exists because the length of feasible paths from city 1 to city n is
unbounded from below.
Ideally, then, algorithms designed for the solution of shortest-path problems should be
capable of handling these three cases.
Figure 1 depicts three instances illustrating these cases. The cities are represented by the nodes
and the distances are displayed on the directed arcs of the graphs. In all three cases n=4. The
respective distance matrices are also provided. The symbol "*" represents infinity so the
implication of D(i,j)="*" is that there is no direct link connecting city i to city j
By inspection we see that the problem depicted in Figure 1(a) has a unique optimal path,
that is x=(1,2,3,4), whose length is equal to 6. The problem depicted in Figure 1(b) does not have
a feasible - hence optimal - solution. Figure 1(c) depicts a problem where there is no optimal
solution because the length of a path from node 1 to node 4 can be made arbitrarily small by
cycling through nodes 1,2 and 3. Every additional cycle will decrease the length of the path by 1.
Observe that if we require the feasible paths to be simple, namely not to include cycles, then the
problem depicted in Figure 1(c) would be bounded. Indeed, it would have a unique optimal path
x=(1,2,3,4) whose length is equal to 6. In our discussion we do not impose this condition on the
problem formulation, namely we admit cyclic paths as feasible solutions provided that they
satisfy the precedence constraints. Thus, x'=(1,2,3,1,2,3,4) and x"=(1,2,3,1,2,3,1,2,3,4) are
feasible solutions for the problem depicted in Figure 1(c). This is why in the context of our
discussion this problem does not have an optimal solution.