12/08/21 1
In addition, all algorithms must satisfy the
following criteria:
Input: Zero or more quantities are externally
supplied.
Output: At least one quantity is produced.
Definiteness: Each instruction must be clear
and unambiguous.
Finiteness: An algorithm must terminate
after a finite number of steps.
Effectiveness: Every instruction must be
basic enough that it can be carried out.
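As a small sketch of these criteria (the function and example values are illustrative, not from the slides), Euclid's gcd algorithm satisfies all five:

```python
def gcd(a, b):
    """Greatest common divisor of two non-negative integers."""
    # Input: two externally supplied quantities a and b.
    # Each instruction is definite (unambiguous) and effective
    # (basic arithmetic a machine can carry out).
    while b != 0:
        a, b = b, a % b   # b strictly decreases, so the loop is finite
    return a              # Output: at least one quantity is produced

print(gcd(48, 18))  # 6
```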
Analyzing Algorithms
Analysis of algorithms
What’s more important than performance?
• modularity
• correctness
• maintainability
• functionality
• robustness
• user-friendliness
• programmer time
• simplicity
• extensibility
• reliability
Why study algorithms and performance?
Running time & Input size
The running time of an algorithm on a
particular input is the number of primitive
operations or “steps” executed.
The best measure of "input size" depends on
the problem being studied. For example:
In sorting or computing discrete Fourier
transforms, the most natural measure is the
number of items in the input, for example the
array size n for sorting.
On the other hand, in multiplying two integers,
the best measure is the total number of bits
needed to represent the input in ordinary
binary notation.
• The running time depends on the
input: an already sorted sequence is
easier to sort.
• Parameterize the running time by the
size of the input, since short
sequences are easier to sort than
long ones.
• Generally, we seek upper bounds on
the running time, because everybody
likes a guarantee.
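A rough sketch of both points above (the step-counting convention is illustrative): counting the "primitive operations" insertion sort performs shows that an already sorted input of size n is far easier than a reverse-sorted one of the same size.

```python
def insertion_sort_steps(a):
    """Sort a copy of a and return a count of primitive steps."""
    a = list(a)
    steps = 0
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]   # shift a larger element one slot right
            i -= 1
            steps += 1        # one comparison-and-shift counted
        a[i + 1] = key
        steps += 1            # cost of placing the key
    return steps

n = 100
print(insertion_sort_steps(range(n)))          # already sorted: about n steps
print(insertion_sort_steps(range(n, 0, -1)))   # reversed: about n^2/2 steps
```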
Complexity of Algorithm
The complexity of an algorithm M is the
function f(n) which gives the running time
and/or storage space requirement of the
algorithm in terms of the size n of the input
data. Frequently the storage space required
by an algorithm is simply a multiple of the
data size n. Accordingly, unless otherwise
stated, the term "complexity" will refer to
the running time of an algorithm.
Cases for complexity function
Worst case: the maximum value of
f(n) over all possible inputs.
Average case: the expected value of
f(n).
Best case: sometimes we consider the
minimum possible value of f(n),
called the best case.
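These three cases can be seen concretely (a sketch; linear search is used here only as a simple illustration) by counting the comparisons f(n) that linear search makes on different inputs of the same size:

```python
def linear_search_comparisons(a, target):
    """Return how many comparisons it takes to find target in a."""
    comparisons = 0
    for x in a:
        comparisons += 1
        if x == target:
            break
    return comparisons

a = list(range(10))                       # n = 10
print(linear_search_comparisons(a, 0))    # best case: 1 comparison
print(linear_search_comparisons(a, 9))    # worst case: n comparisons
# average case over all possible targets: about (n + 1) / 2
avg = sum(linear_search_comparisons(a, t) for t in a) / len(a)
print(avg)  # 5.5
```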
Asymptotic notation
Theta notation (Θ)
Big-oh notation (O)
Small-oh notation (o)
Omega notation (Ω)
Little-omega notation (ω)
Theta notation (Θ)
For a given function g(n), Θ(g(n)) is the set of functions
Θ(g(n)) = {f(n): there exist positive constants c1, c2 and n0 such that 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0}
f(n) = Θ(g(n))
[Figure: f(n) sandwiched between c1·g(n) and c2·g(n) for all n ≥ n0]
Big oh notation(O)
For a given function g(n), O(g(n)) is the set of functions
O(g(n)) = {f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0}
f(n) = O(g(n))
[Figure: f(n) lies below c·g(n) for all n ≥ n0]
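A quick numeric sanity check of the O-notation definition (the constants c and n0 here are chosen by hand for illustration): f(n) = 3n + 10 is O(n) because f(n) ≤ c·g(n) holds for c = 4 once n ≥ n0 = 10.

```python
def f(n):
    return 3 * n + 10   # the function being bounded

def g(n):
    return n            # the bounding function

c, n0 = 4, 10           # witnesses for the definition (hand-picked)
# 3n + 10 <= 4n exactly when n >= 10, so the check passes
print(all(f(n) <= c * g(n) for n in range(n0, 1000)))  # True
```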
Omega notation (Ω)
Just as O-notation provides an
asymptotic upper bound on a
function, Ω-notation provides an
asymptotic lower bound.
For a given function g(n), we denote
by Ω(g(n)) the set of functions
Ω(g(n)) = {f(n): there exist positive
constants c and n0 such that
0 ≤ c·g(n) ≤ f(n) for all n ≥ n0}
f(n) = Ω(g(n))
[Figure: f(n) lies above c·g(n) for all n ≥ n0]
Small oh notation(o)
The asymptotic upper bound provided by O-
notation may or may not be asymptotically
tight. We use o-notation to denote an upper
bound that is not asymptotically tight.
We formally define o(g(n)) as the set
o(g(n)) = {f(n): for any positive constant c > 0, there exists a constant n0 > 0 such that 0 ≤ f(n) < c·g(n) for all n ≥ n0}
Little omega notation (ω)
By analogy, ω-notation is to Ω-notation as
o-notation is to O-notation.
We use ω-notation to denote a lower bound
that is not asymptotically tight.
It is defined by:
f(n) ∈ ω(g(n)) iff g(n) ∈ o(f(n))
Formally, we define ω(g(n)) as the set
ω(g(n)) = {f(n): for any positive constant c > 0, there exists a constant n0 > 0 such that 0 ≤ c·g(n) < f(n) for all n ≥ n0}
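One intuitive sketch of "not asymptotically tight" (the specific functions are illustrative): f(n) = o(g(n)) means the ratio f(n)/g(n) shrinks toward 0 as n grows, while g(n) = ω(f(n)) means the inverse ratio grows without bound.

```python
# n = o(n^2): the ratio n / n^2 tends to 0 as n grows
ratios = [n / n**2 for n in (10, 100, 1000, 10000)]
print(ratios)  # each ratio is 10x smaller than the last
# Equivalently n^2 = ω(n): the inverse ratio n^2 / n grows without bound.
```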
Reflexivity
f(n) = Θ(f(n))
f(n) = O(f(n))
f(n) = Ω(f(n))
Symmetry
f(n) = Θ(g(n)) if and only if g(n) = Θ(f(n))
Transitivity
f(n) = Θ(g(n)) and g(n) = Θ(h(n)) imply f(n) = Θ(h(n)) (and similarly for O, Ω, o and ω)
Transpose symmetry
f(n) = O(g(n)) if and only if g(n) = Ω(f(n))
f(n) = o(g(n)) if and only if g(n) = ω(f(n))
INSERTION SORT
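The slide's pseudocode did not survive the conversion; the following is a standard CLRS-style insertion sort in Python for reference, with the loop whose trip count t_j drives the analysis below marked in a comment.

```python
def insertion_sort(a):
    """Sort list a in place and return it."""
    for j in range(1, len(a)):
        key = a[j]                    # element to insert
        i = j - 1
        while i >= 0 and a[i] > key:  # executed t_j times for this j
            a[i + 1] = a[i]           # shift larger elements right
            i -= 1
        a[i + 1] = key                # drop key into its slot
    return a

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```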
Running time of insertion sort
T(n) = c1·n + c2(n-1) + c4(n-1) + c5·Σ(j=2..n) tj + c6·Σ(j=2..n) (tj - 1) + c7·Σ(j=2..n) (tj - 1) + c8(n-1)
Best case
If the array is already sorted, tj = 1 for
each j, and the running time can be expressed
as a·n + b for constants a and b that
depend on the statement costs ci;
it is thus a linear function of n.
Worst case
If the array is in reverse sorted order, then
we must compare each element A[j] with
every element in the entire sorted subarray
A[1..j-1], and so tj = j for j = 2, 3, ..., n.
Σ(j=2..n) tj = n(n+1)/2 - 1
Σ(j=2..n) (tj - 1) = n(n-1)/2
T(n) = c1·n + c2(n-1) + c4(n-1) + c5(n(n+1)/2 - 1) + c6(n(n-1)/2) + c7(n(n-1)/2) + c8(n-1)
     = (c5/2 + c6/2 + c7/2)·n² + (c1 + c2 + c4 + c5/2 - c6/2 - c7/2 + c8)·n - (c2 + c4 + c5 + c8)
This worst-case running time can be
expressed as a·n² + b·n + c.
Worst case and average case
The average case is often roughly as bad as
the worst case.
On average, half the elements in A[1..j-1]
are less than A[j] and half are greater,
so on average we check half the subarray
and tj = j/2.
If we work out the resulting average-case
running time, it turns out to be a quadratic
function of the input size, just like the
worst-case running time.
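A rough experiment supporting this (the sizes and trial count are arbitrary): on random input, insertion sort performs about n²/4 shifts, half the worst case's n²/2 but still a quadratic function of n.

```python
import random

def count_shifts(a):
    """Insertion-sort a copy of a, returning the number of shifts."""
    a = list(a)
    shifts = 0
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]
            i -= 1
            shifts += 1
        a[i + 1] = key
    return shifts

random.seed(0)          # deterministic for reproducibility
n, trials = 200, 50
avg = sum(count_shifts(random.sample(range(n), n))
          for _ in range(trials)) / trials
print(avg, n * n / 4)   # the two numbers are of the same order
```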
Example of insertion sort
Order of growth
We have already ignored the actual
statement costs, abstracting them as the
costs ci.
We shall now make one more simplifying abstraction:
the rate of growth, or order of growth, of the running time.
We therefore consider only the leading term of a formula
(e.g. a·n²).
We ignore the lower-order terms and the constant term
because they are insignificant for large n.
We also ignore the constant coefficient of the leading term.
Thus we write, for example, that insertion sort has a
worst-case running time of Θ(n²).
Designing Algorithms
There are many ways to design algorithms; here
we mainly use two methods.
Incremental approach:
Insertion sort uses the incremental approach: having
sorted the subarray A[1..j-1], we insert the
single element A[j] into its proper place, yielding
the sorted subarray A[1..j].
Divide-and-conquer approach:
Many useful algorithms are recursive in structure;
these algorithms typically follow a divide-and-conquer
approach. They break the problem into several
subproblems that are similar to the original problem but
smaller in size, solve the subproblems recursively,
and combine these solutions to create a solution to
the original problem.
Divide and conquer approach
Divide and conquer approach in merge sort
[Figure: the initial sequence is repeatedly halved, the halves are sorted recursively, and the sorted halves are merged into the sorted sequence]
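A minimal sketch of the merge sort pictured above: divide the sequence in half, conquer each half recursively, and combine the sorted halves with a linear-time merge.

```python
def merge_sort(a):
    """Return a sorted copy of list a using divide and conquer."""
    if len(a) <= 1:               # base case: trivially sorted
        return a
    mid = len(a) // 2             # divide into two subproblems (a = b = 2)
    left = merge_sort(a[:mid])    # conquer each half recursively
    right = merge_sort(a[mid:])
    # combine: merge the two sorted halves in linear time
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

print(merge_sort([5, 2, 4, 7, 1, 3, 2, 6]))  # [1, 2, 2, 3, 4, 5, 6, 7]
```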
Analyzing divide and conquer algorithm
Let T(n) be the running time of a
divide-and-conquer algorithm on a problem
of size n.
If the problem size is small enough, say
n ≤ c for some constant c, the
straightforward solution takes constant
time, Θ(1).
If our division of the problem yields a
subproblems, each of which is 1/b the size
of the original, we get a recurrence for T(n).
For merge sort, both a and b are 2.
If we take D(n) time to divide the problem into
subproblems and C(n) time to combine the
solutions to the subproblems into a solution to
the original problem, we get the recurrence
T(n) = Θ(1)                     if n ≤ c
T(n) = a·T(n/b) + D(n) + C(n)   otherwise
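This recurrence can be unrolled numerically (a sketch, with T(1) = 1 and D(n) + C(n) = n as in merge sort): the result equals n·log2(n) + n when n is a power of two.

```python
import math

def T(n):
    """Merge-sort recurrence T(n) = 2*T(n/2) + n with T(1) = 1."""
    if n <= 1:
        return 1                # base case: constant time
    return 2 * T(n // 2) + n    # a = b = 2, divide/combine cost D(n)+C(n) = n

for n in (8, 64, 1024):
    # for powers of two, T(n) matches n*log2(n) + n exactly
    print(n, T(n), n * math.log2(n) + n)
```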