
COURSE-02

ADA-ASSIGNMENT
Submitted by,

Rizawan N Shaikh
Research Scholar
RU101CS062

PhD Course Work Engineering and Technology

1. What are stacks and queues? Explain how you would implement a stack using arrays.
STACK

A stack is a container of objects that are inserted and removed according to the
last-in first-out (LIFO) principle. In a pushdown stack only two operations are
allowed: push an item onto the stack, and pop an item off the stack. A stack is a
limited-access data structure: elements can be added to and removed from the stack only at
the top. Push adds an item to the top of the stack; pop removes the item from the top.
QUEUE

A queue is a container of objects (a linear collection) that are inserted and
removed according to the first-in first-out (FIFO) principle.
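Since the question asks about queues as well, a minimal array-based queue in C may help; this is an illustrative sketch (hypothetical names, fixed capacity), not part of the original answer.

#define QMAX 100

/* Circular-buffer queue: front indexes the oldest element,
   count tracks how many elements are currently stored */
struct queue {
    int items[QMAX];
    int front;
    int count;
} q;    /* a zero-initialized global is a valid empty queue */

/* Insert at the rear (FIFO); returns -1 on overflow */
int enqueue(int x)
{
    if (q.count == QMAX)
        return -1;
    q.items[(q.front + q.count) % QMAX] = x;
    q.count = q.count + 1;
    return 0;
}

/* Remove from the front (FIFO); returns -1 on underflow */
int dequeue(int *x)
{
    if (q.count == 0)
        return -1;
    *x = q.items[q.front];
    q.front = (q.front + 1) % QMAX;
    q.count = q.count - 1;
    return 0;
}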
IMPLEMENTING A STACK USING ARRAYS
#include <stdio.h>

#define MAXSIZE 100

/* Stack represented as a fixed-size array with a top index;
   top is -1 when the stack is empty */
struct stack {
    int stk[MAXSIZE];
    int top;
} s = { .top = -1 };

/* Function to add an element to the stack */
void push(void)
{
    int num;
    if (s.top == MAXSIZE - 1) {
        printf("Error: Overflow\n");
    } else {
        printf("Enter the element to be pushed\n");
        scanf("%d", &num);
        s.top = s.top + 1;
        s.stk[s.top] = num;
    }
}

/* Function to delete an element from the stack */
void pop(void)
{
    int num;
    if (s.top == -1) {
        printf("Error: Stack Empty\n");
    } else {
        num = s.stk[s.top];
        printf("Popped element is = %d\n", num);
        s.top = s.top - 1;
    }
}
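A minimal driver to exercise these functions (an illustrative sketch, assuming it is compiled together with the stack definitions above):

int main(void)
{
    push();  /* reads one value from the user and pushes it */
    push();
    pop();   /* prints and removes the most recently pushed value */
    pop();
    pop();   /* the stack is now empty, so this prints the error */
    return 0;
}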

2. Give the definition of Abstract Data Type. Explain the List ADT and the set of operations
performed on a List.
Definition
An abstract data type is defined as a mathematical model of the data objects that make up
a data type, together with the functions that operate on these objects. There are no standard
conventions for defining them; a broad division may be drawn between "imperative" and
"functional" definition styles.
Operations performed on the List ADT
INSERT(x, p, L). Insert x at position p in list L, moving elements at p and following
positions to the next higher position. That is, if L is a1, a2, . . . , an, then L becomes a1,
a2, . . . , ap−1, x, ap, . . . , an. If p is END(L), then L becomes a1, a2, . . . , an, x. If list L
has no position p, the result is undefined.

LOCATE(x, L). This function returns the position of x on list L. If x appears more than
once, then the position of the first occurrence is returned. If x does not appear at all, then
END(L) is returned.

RETRIEVE(p, L). This function returns the element at position p on list L. The result is
undefined if p = END(L) or if L has no position p. Note that the elements must be of a
type that can be returned by a function if RETRIEVE is used. In practice, however, we
can always modify RETRIEVE to return a pointer to an object of type element type.

DELETE(p, L). Delete the element at position p of list L. If L is a1, a2, . . . , an, then L
becomes a1, a2, . . . , ap−1, ap+1, . . . , an. The result is undefined if L has no position p or if
p = END(L).

NEXT(p, L) and PREVIOUS(p, L) return the positions following and preceding position
p on list L. If p is the last position on L, then NEXT(p, L) = END(L). NEXT is undefined
if p is END(L). PREVIOUS is undefined if p is 1. Both functions are undefined if L has
no position p.

MAKENULL(L). This function causes L to become an empty list and returns position
END(L).

FIRST(L). This function returns the first position on list L. If L is empty, the position
returned is END(L).

PRINTLIST(L). Print the elements of L in the order of occurrence.
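A minimal sketch of how this List ADT could be declared in C (hypothetical names; positions are represented as integer indices, with END(L) equal to the list's current length; bounds checking is omitted):

#define MAXLEN 100

typedef int ElementType;

typedef struct {
    ElementType items[MAXLEN];
    int length;                 /* END(L): one past the last element */
} List;

/* MAKENULL(L): make L the empty list and return END(L) */
int MakeNull(List *L) { L->length = 0; return L->length; }

/* END(L) */
int End(const List *L) { return L->length; }

/* FIRST(L): the first position; equals END(L) when L is empty */
int First(const List *L) { return L->length > 0 ? 0 : L->length; }

/* INSERT(x, p, L): shift positions p..END(L)-1 up and place x at p */
void Insert(ElementType x, int p, List *L)
{
    for (int i = L->length; i > p; i--)
        L->items[i] = L->items[i - 1];
    L->items[p] = x;
    L->length++;
}

/* LOCATE(x, L): position of the first occurrence of x, or END(L) */
int Locate(ElementType x, const List *L)
{
    for (int i = 0; i < L->length; i++)
        if (L->items[i] == x)
            return i;
    return L->length;
}

/* RETRIEVE(p, L): element at position p (undefined if p = END(L)) */
ElementType Retrieve(int p, const List *L) { return L->items[p]; }

/* DELETE(p, L): remove the element at p, shifting the rest down */
void Delete(int p, List *L)
{
    for (int i = p; i < L->length - 1; i++)
        L->items[i] = L->items[i + 1];
    L->length--;
}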

3. Write a note on Paradigms for Algorithm Design?


Algorithm Design Paradigms: General approaches to the construction of efficient solutions to
problems.
Such methods are of interest because:
They provide templates suited to solving a broad range of diverse problems.
They can be translated into the common control and data structures provided by most high-level languages.
The temporal and spatial requirements of the resulting algorithms can be precisely
analyzed.
Although more than one technique may be applicable to a specific problem, it is often the
case that an algorithm constructed by one approach is clearly superior to equivalent solutions
built using alternative techniques.
Brute Force

Brute force is a straightforward approach to solving a problem based on the problem
statement and the definitions of the concepts involved. It is considered one of the easiest
approaches to apply and is useful for solving small-size instances of a problem; a brute-force
sequential search is sketched after the examples below.
Some examples of brute force algorithms are:

Computing n!
Selection sort
Bubble sort
Sequential search
Exhaustive search: Traveling Salesman Problem, Knapsack problem.
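As a concrete illustration of the brute-force style, here is a minimal C sketch of sequential search (an illustrative helper, not from the source): the algorithm simply follows the problem statement, comparing the key with every element in turn.

/* Brute-force sequential search: scan left to right, comparing
   every element with the key; return the index of the first
   match, or -1 when the key is not present */
int sequential_search(const int a[], int n, int key)
{
    for (int i = 0; i < n; i++)
        if (a[i] == key)
            return i;
    return -1;
}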
Greedy Algorithms:

The solution is constructed through a sequence of steps, each expanding the partially
constructed solution obtained so far. At each step the choice must be locally optimal; this is the
central point of this technique.
Examples: minimal spanning tree, shortest distances in graphs, and the greedy algorithm for the
Knapsack problem.
Greedy techniques are mainly used to solve optimization problems; they do not always
give the best solution.
Divide-and-Conquer

These are methods of designing algorithms that (informally) proceed as follows:
given an instance of the problem to be solved, split this into several smaller sub-instances (of the
same problem), independently solve each of the sub-instances, and then combine the sub-instance
solutions so as to yield a solution for the original instance. With the divide-and-conquer method
the size of the problem instance is reduced by a factor (e.g. half the input size), while with the
decrease-and-conquer method the size is reduced by a constant.
4. Write a note on amortized analysis. What do you mean by worst case and average case
analysis? Explain with an example
In an amortized analysis, we average the time required to perform a sequence of
data-structure operations over all the operations performed. With amortized analysis, we can show
that the average cost of an operation is small if we average over a sequence of operations,
even though a single operation within the sequence might be expensive. Amortized analysis
differs from average-case analysis in that probability is not involved; an amortized analysis
guarantees the average performance of each operation in the worst case.
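The classic illustration is a dynamic array that doubles its capacity when full: a single append may cost Θ(n) when it triggers a copy, yet any sequence of n appends performs at most about 2n element moves in total, so the amortized cost per append is O(1) even though the worst single operation is expensive. A minimal C sketch (hypothetical names, error checking omitted):

#include <stdlib.h>

typedef struct {
    int *data;      /* heap-allocated storage                */
    int size;       /* number of elements currently stored   */
    int capacity;   /* number of allocated slots             */
} DynArray;         /* a zero-initialized DynArray is empty  */

/* Append x, doubling the capacity when the array is full; the
   occasional O(n) copy inside realloc is "paid for" by the many
   cheap appends that precede it */
void append(DynArray *a, int x)
{
    if (a->size == a->capacity) {
        int newcap = a->capacity ? 2 * a->capacity : 1;
        a->data = realloc(a->data, newcap * sizeof(int));
        a->capacity = newcap;
    }
    a->data[a->size] = x;
    a->size = a->size + 1;
}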

The worst-case running time of an algorithm gives us an upper bound on the running
time for any input. Knowing it provides a guarantee that the algorithm will never take any

longer. We need not make some educated guess about the running time and hope that it never
gets much worse.

The best-case efficiency of an algorithm is its efficiency for the best-case input of size n,
which is an input (or inputs) of size n for which the algorithm runs the fastest among all
possible inputs of that size. Accordingly, we can analyze the best case efficiency as follows.
First, we determine the kind of inputs for which the count C(n) will be the smallest among all
possible inputs of size n. Then we ascertain the value of C(n) on these most convenient
inputs. For example, the best-case inputs for sequential search are lists of size n with their
first element equal to a search key; accordingly, Cbest(n) = 1 for this algorithm.

Average case analysis


In average case analysis, we take all possible inputs, calculate the computing time for
each of them, sum all the calculated values, and divide the sum by the total number of inputs. We
must know (or predict) the distribution of cases. For the linear search problem, let us assume that
all cases are uniformly distributed (including the case of x not being present in the array). We
then sum the costs of all the cases and divide the sum by (n + 1).
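As a worked sketch under this uniform assumption (standard, though not spelled out above): a successful search for a key in position i costs i comparisons (i = 1, ..., n) and an unsuccessful search costs n, so the average number of comparisons is Cavg(n) = (1 + 2 + ... + n + n)/(n + 1) = (n(n + 1)/2 + n)/(n + 1), which is roughly n/2 for large n. The average case of linear search is therefore Θ(n), like the worst case, but with about half the constant factor.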
5. What are the steps in Divide and conquer strategies? Explain how merge sort uses this
technique to sort N numbers with an example. Write the recursive procedure for merge
sort.
Step 1: If the problem size is small, solve this problem directly; otherwise, split the
original problem into 2 sub-problems with equal sizes.
Step 2: Recursively solve these 2 sub-problems by applying this algorithm.
Step 3: Merge the solutions of the 2 sub-problems into a solution of the original problem
Divide Step
If a given array A has zero or one element, simply return; it is already sorted. Otherwise,
split A[p .. r] into two sub arrays A[p .. q] and A[q + 1 .. r], each containing about half of the
elements of A[p .. r]. That is, q is the halfway point of A[p .. r].

Conquer Step
Conquer the sub problems by solving them recursively. If the sub problem sizes are small
enough, however, just solve the sub problems in a straightforward manner. Here, we conquer by
recursively sorting the two sub arrays A[p .. q] and A[q + 1 .. r].
Combine Step
Combine the elements back in A[p .. r] by merging the two sorted sub
arrays A[p .. q] and A[q + 1 .. r] into a sorted sequence. To accomplish this step, we
will define a procedure MERGE (A, p, q, r).

Merge Sort Recurrence: Recall that T(n) is the time to run Merge Sort on a list of size n.
If the list is of length 1, then the total sorting time is a constant, Θ(1). If n > 1, then we must
recursively sort two sub lists, one of size ⌈n/2⌉ and the other of size ⌊n/2⌋, and the
non-recursive part takes Θ(n) time (constant time for splitting the list plus Θ(n) for merging).
The recurrence is therefore T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + Θ(n) with T(1) = Θ(1), which
solves to T(n) ∈ Θ(n log n).
ALGORITHM Merge(B[0..p − 1], C[0..q − 1], A[0..p + q − 1])
//Merges two sorted arrays into one sorted array
//Input: Arrays B[0..p − 1] and C[0..q − 1], both sorted
//Output: Sorted array A[0..p + q − 1] of the elements of B and C
i ← 0; j ← 0; k ← 0
while i < p and j < q do
    if B[i] ≤ C[j]
        A[k] ← B[i]; i ← i + 1
    else A[k] ← C[j]; j ← j + 1
    k ← k + 1
if i = p
    copy C[j..q − 1] to A[k..p + q − 1]
else copy B[i..p − 1] to A[k..p + q − 1]
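The recursive procedure itself is not written out above; a sketch in the same pseudocode style (matching the presentation from which the Merge routine is drawn) is:

ALGORITHM Mergesort(A[0..n − 1])
//Sorts array A[0..n − 1] by recursive mergesort
//Input: An array A[0..n − 1] of orderable elements
//Output: Array A[0..n − 1] sorted in nondecreasing order
if n > 1
    copy A[0..⌊n/2⌋ − 1] to B[0..⌊n/2⌋ − 1]
    copy A[⌊n/2⌋..n − 1] to C[0..⌈n/2⌉ − 1]
    Mergesort(B[0..⌊n/2⌋ − 1])
    Mergesort(C[0..⌈n/2⌉ − 1])
    Merge(B, C, A)

For example, to sort the list 8, 3, 2, 9, 7, 1, 5, 4, the array is first split into 8, 3, 2, 9 and 7, 1, 5, 4; each half is recursively split and sorted into 2, 3, 8, 9 and 1, 4, 5, 7; a final merge then repeatedly takes the smaller of the two front elements, producing 1, 2, 3, 4, 5, 7, 8, 9.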
6. Write an algorithm for binary search and analyze its running time. Given the
following sorted numbers, explain how you would use the search to locate 100.
12 20 24 30 35 40 42 78 100 110
ALGORITHM BinarySearch(A[0..n − 1], K)
//Implements nonrecursive binary search
//Input: An array A[0..n − 1] sorted in ascending order and
// a search key K
//Output: An index of the array's element that is equal to K
// or −1 if there is no such element
l ← 0; r ← n − 1
while l ≤ r do
    m ← ⌊(l + r)/2⌋
    if K = A[m] return m
    else if K < A[m] r ← m − 1
    else l ← m + 1
return −1
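Running time (a brief analysis, since the question asks for one): every iteration of the while loop performs at most two key comparisons and then either returns or halves the remaining subarray, so the loop executes at most ⌊log2 n⌋ + 1 times. The worst-case running time is therefore Θ(log n); the best case, where K matches the first middle element examined, is Θ(1).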
let a[] = 12 20 24 30 35 40 42 78 100 110
l = 0; r = 9
mid = ⌊(0 + 9)/2⌋ = 4, a[4] = 35, 100 > 35, so l = 5, r = 9
mid = ⌊(5 + 9)/2⌋ = 7, a[7] = 78, 100 > 78, so l = 8, r = 9
mid = ⌊(8 + 9)/2⌋ = 8, a[8] = 100, found at index 8
7. Write and explain insertion sort algorithm. What is the time complexity of insertion
sort?
Insertion sort is an efficient algorithm for sorting a small number of elements. Assuming
the subarray A[0..n − 2] has already been sorted, all we need is to find an appropriate position
for A[n − 1] among the sorted elements and insert it there. This is usually done by scanning the
sorted subarray from right to left until the first element smaller than or equal to A[n − 1] is
encountered, and then inserting A[n − 1] right after that element. The resulting algorithm is
called straight insertion sort or simply insertion sort.
Example of sorting with insertion sort

ALGORITHM InsertionSort(A[0..n − 1])
//Sorts a given array by insertion sort
//Input: An array A[0..n − 1] of n orderable elements
//Output: Array A[0..n − 1] sorted in nondecreasing order
for i ← 1 to n − 1 do
    v ← A[i]
    j ← i − 1
    while j ≥ 0 and A[j] > v do
        A[j + 1] ← A[j]
        j ← j − 1
    A[j + 1] ← v
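The time complexity the question asks about, briefly (this is the standard analysis): the basic operation is the key comparison A[j] > v. In the worst case, an array of strictly decreasing values, the comparison is performed i times for every i, giving Cworst(n) = 1 + 2 + ... + (n − 1) = n(n − 1)/2 ∈ Θ(n²). In the best case, an already sorted array, the comparison fails on the first try for every i, giving Cbest(n) = n − 1 ∈ Θ(n). On random inputs the expected number of comparisons is about n²/4, so the average case is also Θ(n²).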

8. Write and explain quick sort algorithm. Compare its performance with insertion
sort.
Quick sort is another important sorting algorithm based on the divide-and-conquer
approach. Unlike merge sort, which divides its input elements according to their
position in the array, quick sort divides them according to their value. A partition is an
arrangement of the array's elements so that all the elements to the left of some element
A[s] are less than or equal to A[s], and all the elements to the right of A[s] are greater
than or equal to it:

ALGORITHM Quicksort(A[l..r])
//Sorts a subarray by quicksort
//Input: Subarray of array A[0..n − 1], defined by its left and right
// indices l and r
//Output: Subarray A[l..r] sorted in nondecreasing order
if l < r
    s ← Partition(A[l..r]) //s is a split position
    Quicksort(A[l..s − 1])
    Quicksort(A[s + 1..r])

ALGORITHM HoarePartition(A[l..r])
//Partitions a subarray by Hoare's algorithm, using the first element
// as a pivot
//Input: Subarray of array A[0..n − 1], defined by its left and right
// indices l and r (l < r)
//Output: Partition of A[l..r], with the split position returned as
// this function's value
p ← A[l]
i ← l; j ← r + 1
repeat
    repeat i ← i + 1 until A[i] ≥ p
    repeat j ← j − 1 until A[j] ≤ p
    swap(A[i], A[j])
until i ≥ j
swap(A[i], A[j]) //undo last swap when i ≥ j
swap(A[l], A[j])
return j
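A brief comparison with insertion sort, since the question asks for one: quicksort runs in Θ(n log n) time in the best and average cases but degrades to Θ(n²) in the worst case, for example on an already sorted array when the first element is used as the pivot. Insertion sort is Θ(n²) in the worst and average cases but Θ(n) in the best case of an already (or nearly) sorted array. For large random inputs quicksort is usually much faster in practice because of its tight inner loop; insertion sort wins on small or nearly sorted arrays, which is why practical quicksort implementations often switch to insertion sort for small subarrays.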

9. What is greedy strategy? What are the elements of the greedy strategy? Explain
Greedy algorithms are simple and straightforward. They are shortsighted in the sense
that they take decisions on the basis of the information at hand, without worrying about the
effect these decisions may have in the future. They are easy to invent, easy to implement,
and most of the time quite efficient. Many problems, however, cannot be solved correctly by
the greedy approach. Greedy algorithms are used to solve optimization problems.
The greedy algorithm consists of four functions (a small worked sketch follows the list).
1. A function that checks whether a chosen set of items provides a solution.
2. A function that checks the feasibility of a set.
3. The selection function, which tells which of the candidates is the most promising.
4. An objective function, which does not appear explicitly, giving the value of a solution.
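As a small concrete illustration (a hypothetical sketch, not part of the original answer), consider greedy change-making with coin denominations 25, 10, 5 and 1: the selection function always picks the largest coin that still fits the remaining amount, the feasibility check is that the chosen coins never exceed the amount, and the solution check is that the remainder reaches zero. This greedy choice happens to be optimal for this coin system, though not for every coin system.

#include <stdio.h>

/* Greedy change-making: repeatedly select the largest coin
   not exceeding the remaining amount */
int main(void)
{
    int coins[] = {25, 10, 5, 1};
    int n = 4, amount = 67;

    for (int i = 0; i < n; i++) {
        int count = amount / coins[i];   /* locally optimal choice */
        amount -= count * coins[i];
        if (count > 0)
            printf("%d x %d\n", count, coins[i]);
    }
    /* for 67 this prints: 2 x 25, 1 x 10, 1 x 5, 2 x 1 */
    return 0;
}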

10. What is dynamic programming? Discuss any one algorithm that uses dynamic
programming technique
Dynamic programming works by solving sub problems and using the results of those sub
problems to more quickly calculate the solution to a larger problem. Unlike the
divide-and-conquer paradigm (which also uses the idea of solving sub problems), dynamic programming
typically involves solving all possible sub problems rather than a small portion of them.

Shortest path problems


Consider then the problem consisting of n > 1 cities {1,2,...,n} and a matrix D
representing the length of the direct links between the cities, so that D(i,j) denotes the length of
the direct link connecting city i to city j. This means that there is at most one direct link between
any pair of cities. The distances are not assumed to be symmetric, so D(i,j) is not necessarily
equal to D(j,i). The objective is to find the shortest path from a given city h, called home, to a
given city d, called destination. The length of a path is assumed to be equal to the sum of the
lengths of the links between consecutive cities on the path. With no loss of generality we assume
that h=1 and d=n. So the basic question is: what is the shortest path from city 1 to city n?
To be able to cope easily with situations where the problem is not feasible (there is no path from
city 1 to city n) we deploy the convention that if there is no direct link from city i to city j then
D(i,j) is equal to infinity. Accordingly, should we conclude that the length of the shortest path
from node i to node j is equal to infinity, the implication would be that there is no (feasible) path
from node i to node j.
Observe that subject to these conventions, an instance of the shortest path problem is uniquely
specified by its distance matrix D. Thus, this matrix can be regarded as a complete model of the
problem.
As far as optimal solutions (paths) are concerned, we have to distinguish between three basic
situations:

An optimal solution exists.

No optimal solution exists because there are no feasible solutions.

No optimal solution exists because the length of feasible paths from city 1 to city n is
unbounded from below.

Ideally then, algorithms designed for the solution of shortest path problems should be capable of
handling these three cases.

Figure 1 depicts three instances illustrating these cases. The cities are represented by the nodes
and the distances are displayed on the directed arcs of the graphs. In all three cases n=4. The
respective distance matrices are also provided. The symbol "*" represents infinity, so the
implication of D(i,j) = "*" is that there is no direct link connecting city i to city j.

By inspection we see that the problem depicted in Figure 1(a) has a unique optimal path,
that is x=(1,2,3,4), whose length is equal to 6. The problem depicted in Figure 1(b) does not have
a feasible - hence optimal - solution. Figure 1(c) depicts a problem where there is no optimal
solution because the length of a path from node 1 to node 4 can be made arbitrarily small by
cycling through nodes 1,2 and 3. Every additional cycle will decrease the length of the path by 1.
Observe that if we require the feasible paths to be simple, namely not to include cycles, then the
problem depicted in Figure 1(c) would be bounded. Indeed, it would have a unique optimal path
x=(1,2,3,4) whose length is equal to 6. In our discussion we do not impose this condition on the
problem formulation, namely we admit cyclic paths as feasible solutions provided that they
satisfy the precedence constraints. Thus, x'=(1,2,3,1,2,3,4) and x"=(1,2,3,1,2,3,1,2,3,4) are
feasible solutions for the problem depicted in Figure 1(c). This is why in the context of our
discussion this problem does not have an optimal solution.
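To make the dynamic-programming idea concrete, here is a minimal C sketch for this shortest path problem (hypothetical code and distance matrix, not the instances of Figure 1): let f(j) be the length of a shortest path from city 1 to city j using at most k links; each pass of the outer loop refines f by the recurrence f(j) = min(f(j), f(i) + D(i,j)) over all i, which is the Bellman-Ford scheme. After n − 1 passes, f(n) is optimal whenever an optimal solution exists; a further pass that still finds an improvement (omitted here) would signal the unbounded case discussed above.

#include <stdio.h>

#define N 4
#define INF 1000000   /* stands in for "*" (no direct link) */

int main(void)
{
    /* A hypothetical 4-city distance matrix D; D[i][j] = INF
       means there is no direct link from city i+1 to city j+1 */
    int D[N][N] = {
        { INF, 1,   INF, INF },
        { INF, INF, 2,   INF },
        { INF, INF, INF, 3   },
        { INF, INF, INF, INF }
    };

    int f[N];   /* f[j] = shortest known length from city 1 to city j+1 */
    for (int j = 0; j < N; j++)
        f[j] = INF;
    f[0] = 0;

    /* Dynamic programming: after pass k, f[j] is the length of a
       shortest path from city 1 that uses at most k links */
    for (int k = 1; k < N; k++)
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                if (f[i] < INF && D[i][j] < INF && f[i] + D[i][j] < f[j])
                    f[j] = f[i] + D[i][j];

    if (f[N - 1] >= INF)
        printf("No feasible path from city 1 to city %d\n", N);
    else
        printf("Shortest path from city 1 to city %d has length %d\n",
               N, f[N - 1]);   /* prints length 6 for this matrix */
    return 0;
}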
