
Chapter 2

Analysis Framework 2.1 2.2

Homework

Remember questions due Wed.


Read sections 2.1 and 2.2
pages 41-50 and 52-59

Agenda: Analysis of
Algorithms

Blackboard
Issues:

Correctness
Time efficiency
Space efficiency
Optimality

Approaches:
Theoretical analysis
Empirical analysis

Time Efficiency
Time efficiency is analyzed by determining the
number of repetitions
of the basic operation
as a function of input size

Theoretical analysis of
time efficiency

Basic operation: the operation that contributes most towards the running time of the algorithm.

T(n) ≈ c_op · C(n)

where
  n    = input size
  T(n) = running time
  c_op = execution time for the basic operation
  C(n) = number of times the basic operation is executed

Input size and basic operation examples

Problem: Search for key in list of n items
  Input size measure: number of items in list, n
  Basic operation: key comparison

Problem: Multiply two matrices of floating-point numbers
  Input size measure: dimensions of matrices
  Basic operation: floating-point multiplication

Problem: Compute a^n
  Input size measure: n
  Basic operation: floating-point multiplication

Problem: Graph problem
  Input size measure: #vertices and/or edges
  Basic operation: visiting a vertex or traversing an edge

Empirical analysis of time efficiency

Select a specific (typical) sample of inputs
Use physical unit of time (e.g., milliseconds)
OR
Count actual number of basic operations
Analyze the empirical data

Best-case, average-case, worst-case

For some algorithms efficiency depends on the type of input:
Worst case: W(n), maximum over inputs of size n
Best case: B(n), minimum over inputs of size n
Average case: A(n), average over inputs of size n
  Number of times the basic operation will be executed on typical input
  NOT the average of the worst and best case
  Expected number of basic-operation repetitions, treated as a random variable under some assumption about the probability distribution of all possible inputs of size n

Example: Sequential
search

Problem: Given a list of n elements and a search


key K, find an element equal to K, if any.
Algorithm: Scan the list and compare its successive
elements with K until either a matching element is
found (successful search) or the list is exhausted
(unsuccessful search)
Worst case
Best case
Average case
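A minimal C version of this algorithm (function and variable names are mine, for illustration): the key comparison is the basic operation, so the worst case makes n comparisons (key last or absent) and the best case makes 1.

```c
#include <stddef.h>

/* Sequential search: return the index of the first element equal to
   key, or -1 if the list is exhausted (unsuccessful search). */
int seq_search(const int a[], size_t n, int key) {
    for (size_t i = 0; i < n; i++)      /* the comparison below is   */
        if (a[i] == key)                /* the basic operation       */
            return (int)i;              /* successful search */
    return -1;                          /* unsuccessful search */
}
```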

Types of formulas for basic operation count

Exact formula
  e.g., C(n) = n(n-1)/2
Formula indicating order of growth with specific multiplicative constant
  e.g., C(n) ≈ 0.5 n²
Formula indicating order of growth with unknown multiplicative constant
  e.g., C(n) ≈ cn²

Order of growth

Most important: order of growth within a constant multiple as n → ∞
Example:
How much faster will the algorithm run on a computer that is twice as fast?
How much longer does it take to solve a problem of double the input size?

See table 2.1

Table 2.1

Day 4: Agenda

O, Ω, Θ
Limits
Definitions
Examples
Code O(?)

Homework

Due on Monday 11:59PM


Electronic submission: see website.
Try to log into Blackboard
Finish reading 2.1 and 2.2
pages 60-61, questions 2, 3, 4, 5, & 9

Asymptotic growth rate

A way of comparing functions that ignores constant factors and small input sizes
O(g(n)): class of functions t(n) that grow no faster than g(n)
Θ(g(n)): class of functions t(n) that grow at the same rate as g(n)
Ω(g(n)): class of functions t(n) that grow at least as fast as g(n)
see figures 2.1, 2.2, 2.3

Big-oh

Big-omega

Big-theta

Using Limits

lim (n→∞) t(n)/g(n) = 0      if t(n) grows slower than g(n)
lim (n→∞) t(n)/g(n) = c > 0  if t(n) grows at the same rate as g(n)
lim (n→∞) t(n)/g(n) = ∞      if t(n) grows faster than g(n)

L'Hôpital's Rule

lim (n→∞) t(n)/g(n) = lim (n→∞) t'(n)/g'(n)

Examples:

10n       vs. 2n²
n(n+1)/2  vs. n²
log_b n   vs. log_c n

Definition
f(n) = O( g(n) ) if there exist
  a positive constant c and
  a non-negative integer n₀
such that
  f(n) ≤ c·g(n)
for every n > n₀
Examples:
  10n is O(2n²)
  5n+20 is O(10n)
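The second example can be checked mechanically against the definition: for "5n+20 is O(10n)", the constants c = 1 and n₀ = 4 work, since 5n + 20 ≤ 10n for every n > 4. A small C sketch (function names are mine, and the check runs over a finite range for illustration):

```c
/* Check f(n) <= c*g(n) for every n with n0 < n <= limit.
   This verifies the Big-O definition over a finite range. */
int bounded_by(long (*f)(long), long (*g)(long),
               long c, long n0, long limit) {
    for (long n = n0 + 1; n <= limit; n++)
        if (f(n) > c * g(n))
            return 0;   /* definition violated at this n */
    return 1;
}

long f_ex(long n) { return 5 * n + 20; }  /* f(n) = 5n + 20 */
long g_ex(long n) { return 10 * n; }      /* g(n) = 10n */
```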

Basic Asymptotic
Efficiency classes
1
log n

n
n log n
n2
n3
2n

n!

constant
logarithmic
linear
n log n
quadratic
cubic
exponential
factorial

Non-recursive
algorithm analysis
Analysis Steps:
Decide on a parameter n indicating input size
Identify the algorithm's basic operation
Determine worst, average, and best case for input of size n
Set up a summation for C(n) reflecting the algorithm's loop structure
Simplify the summation using standard formulas

Example
for (x = 0; x < n; x++)
    a[x] = max(b[x], c[x]);

Example
for (x = 0; x < n; x++)
    for (y = x; y < n; y++)
        a[x][y] = max(b[x], c[y]);

Example
for (x = 0; x < n; x++)
    for (y = 0; y < n/2; y++)
        for (z = 0; z < n/3; z++)
            a[z] = max(a[x], c[y]);
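Counting the basic operation in this triple loop confirms C(n) = n · ⌊n/2⌋ · ⌊n/3⌋ ≈ n³/6, which is still O(n³). A sketch (the function name is mine):

```c
/* Count how many times the innermost statement of the triple loop runs. */
long triple_loop_count(long n) {
    long count = 0;
    for (long x = 0; x < n; x++)
        for (long y = 0; y < n / 2; y++)
            for (long z = 0; z < n / 3; z++)
                count++;    /* stands in for the basic operation */
    return count;
}
```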

Example
y = n;
while (y > 0) {
    y--;                /* decrement once, before the accesses */
    if (a[y] == b[y])
        break;
}

Day 5: Agenda

Go over the answer to hw2


Try electronic submission
Do some problems on the board
Time permitting: Recursive algorithm
analysis

Homework

Remember to electronically submit


hw3 before Tues. morning
Read section 2.3 thoroughly!
pages 61-67

Day 6: Agenda

First, what have we learned so far about non-recursive algorithm analysis?
Second, log_b n and bⁿ. The enigma is solved.
Third, what is the deal with Abercrombie & Fitch?
Fourth, the recursive analysis tool kit.

Homework 4 and Exam 1

Last homework before Exam 1


Due on Friday 2/6 (electronic?)
Will be returned on Monday 2/9
All solutions will be posted next
Monday 2/9
Exam 1 Wed. 2/11

Homework 4

Page 68 questions 4, 5 and 6


Pages 77 and 78 questions 8 and 9
We will do similar example questions
all day on Wed 2/4

What have we learned?

Non-recursive algorithm analysis


First, Identify problem size.
Typically,
a loop counter
an array size
a set size
the size of a value

Basic operations

Second, identify the basic operation


Usually a small block of code or even a
single statement that is executed over and
over.
Sometimes the basic operation is a
comparison that is hidden inside of the loop
Example:
while (target != a[x])
    x++;

Single loops

One loop from 1 to n: O(n)
Be aware this is the same as
  2 independent loops from 1 to n
  c independent loops from 1 to n
  a loop from 5 to n-1
  a loop from 0 to n/2
  a loop from 0 to n/c

Nested loops
for (x = 0; x < n; x++)
    for (y = 0; y < n; y++)
O(n²)
for (x = 0; x < n; x++)
    for (y = 0; y < n; y++)
        for (z = 0; z < n; z++)
O(n³)
But remember: we can have c independent nested loops, or the loops can be terminated early at n/c.

Most non-recursive algorithms reduce to one of these efficiency classes

1        constant
log n    logarithmic
n        linear
n log n  n log n
n²       quadratic
n³       cubic
2ⁿ       exponential

What else?

Best cases often arise when loops terminate early for specific inputs.
For worst cases, consider the following: is it possible that a loop will NOT terminate early?
Average cases are not the midpoint or average of the best and worst case.
Average cases require us to consider a set of typical inputs.
We won't really worry too much about the average case until we hit more difficult problems.

Anything else?

Important questions you should be asking:
How does log₂n arise in non-recursive algorithms?
How does 2ⁿ arise?
"Dr. B. Please, show me the actual loops that cause this!"

Log₂n
for (x = 1; x < n; x = x*2) {
    Basic operation;
}
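Counting the iterations shows why such a doubling loop is log₂n: x takes the values 1, 2, 4, …, so the loop body runs about log₂n times (a sketch; the function name is mine):

```c
/* Count iterations of a doubling loop: x = 1, 2, 4, ... while x < n. */
int doubling_iterations(long n) {
    int count = 0;
    for (long x = 1; x < n; x = x * 2)
        count++;    /* the basic operation would run here */
    return count;
}
```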

Note: these types of log₂n loops can be nested inside of O(n), O(n²), or O(nᵏ) loops, which leads to
  n log n
  n² log n
  nᵏ log n

for every item in the list do
    Basic operation;
    eliminate or discard half the items;
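Discarding half the items on every step is exactly the pattern of binary search; a standard C sketch, assuming a sorted array (names are mine):

```c
#include <stddef.h>

/* Binary search on a sorted array: each comparison discards half the
   remaining items, so the key comparison runs O(log2 n) times. */
int binary_search(const int a[], size_t n, int key) {
    size_t lo = 0, hi = n;              /* search within [lo, hi) */
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (a[mid] == key)
            return (int)mid;
        if (a[mid] < key)
            lo = mid + 1;               /* discard the left half */
        else
            hi = mid;                   /* discard the right half */
    }
    return -1;                          /* key not present */
}
```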

Log₂n

Which, by the way, is pretty much equivalent to log_b n, ln n, or the magical lg(n).
But don't be mistaken:
O(n log n) is different from O(n), even though log n grows so slowly.

Last thing: 2ⁿ

How does 2ⁿ arise in real algorithms?
Let's consider nⁿ:
How can you program n nested loops?
It's impossible, right?
So, how the heck does it happen in practice?

Think

Write an algorithm that prints * exactly 2ⁿ times.
Is it hard?
It depends on how you think.

Recursive way
fun(int x) {
    if (x > 0) {
        print "*";
        fun(x-1);
        fun(x-1);
    }
}
Then call the function: fun(n);
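Counting the print statements confirms the order of growth: the number of stars printed satisfies P(x) = 1 + 2P(x-1) with P(0) = 0, which solves to 2ˣ - 1, so calling fun(n) prints Θ(2ⁿ) stars. A C counter (the name is mine):

```c
/* Count the print operations of the recursive function:
   one print, then two recursive calls on x-1. */
long star_count(int x) {
    if (x <= 0)
        return 0;                       /* base case: nothing printed */
    return 1 + star_count(x - 1) + star_count(x - 1);
}
```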

Non-recursive way
for (x = 0; x < pow(2,n); x++)
    print "*";
WTF?
Be very wary of your input size.
If your input size is truly n, then this is truly O(2ⁿ) with just one loop.

That's it!

That's really it.
That's everything I want you to know about non-recursive algorithm analysis.
Well, for now.

Enigma

We know that log₂n = Θ(log₃n).
But what about 2ⁿ = Θ(3ⁿ)?
Here is how I will disprove it.
This gives you an idea of how I like to see questions answered.

Algorithm Analysis
Recursive Algorithm Toolkit

Example: Recursive evaluation of n!

Definition: n! = 1·2·…·(n-1)·n
Recursive definition of n!:
if n = 0 then F(n) := 1
else F(n) := F(n-1) * n
return F(n)

Recurrence for the number of multiplications to compute n!:
M(n) = M(n-1) + 1, M(0) = 0

Important recurrence types:

One (constant) operation reduces problem size by one.
T(n) = T(n-1) + c, T(1) = d
Solution: T(n) = (n-1)c + d    [linear]

Important recurrence types:

A pass through the input reduces problem size by one.
T(n) = T(n-1) + cn, T(1) = d
Solution: T(n) = [n(n+1)/2 - 1]c + d    [quadratic]

Important recurrence types:

One (constant) operation reduces problem size by half.
T(n) = T(n/2) + c, T(1) = d
Solution: T(n) = c lg n + d    [logarithmic]

Important recurrence types:

A pass through the input reduces problem size by half.
T(n) = 2T(n/2) + cn, T(1) = d
Solution: T(n) = cn lg n + dn    [n log n]
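For powers of two, this solution can be spot-checked by evaluating the recurrence directly and comparing it with cn lg n + dn (the constants in the tests and the function names below are illustrative):

```c
/* T(n) = 2T(n/2) + c*n with T(1) = d, for n a power of two. */
long T_rec(long n, long c, long d) {
    if (n == 1)
        return d;
    return 2 * T_rec(n / 2, c, d) + c * n;
}

/* Closed form: T(n) = c*n*lg(n) + d*n. */
long T_closed(long n, long c, long d) {
    long lg = 0;
    for (long m = n; m > 1; m /= 2)
        lg++;                           /* lg = log2(n) */
    return c * n * lg + d * n;
}
```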

A general divide-and-conquer recurrence

T(n) = aT(n/b) + f(n), where f(n) = Θ(nᵏ)

1. a < bᵏ:  T(n) = Θ(nᵏ)
2. a = bᵏ:  T(n) = Θ(nᵏ lg n)
3. a > bᵏ:  T(n) = Θ(n^(log_b a))

Note: the same results hold with O instead of Θ.
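A small helper can classify such a recurrence by comparing a with bᵏ (a convenience sketch, not from the text). For example, mergesort's T(n) = 2T(n/2) + Θ(n) has a = 2, b = 2, k = 1, so a = bᵏ and T(n) = Θ(n lg n); binary search (a = 1, b = 2, k = 0) also lands in the middle case, giving Θ(lg n).

```c
/* Classify T(n) = a*T(n/b) + Theta(n^k) by the divide-and-conquer
   recurrence cases. Returns the case number: 1 if a < b^k,
   2 if a == b^k, 3 if a > b^k. */
int master_case(long a, long b, long k) {
    long bk = 1;
    for (long i = 0; i < k; i++)
        bk *= b;                        /* bk = b^k */
    if (a < bk) return 1;               /* T(n) = Theta(n^k) */
    if (a == bk) return 2;              /* T(n) = Theta(n^k lg n) */
    return 3;                           /* T(n) = Theta(n^(log_b a)) */
}
```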

Recursive Algorithm
Analysis
Input: an array of floats a[0..n-1] and an integer counter x
Fun(int x, float a[]) {
if (x == 0)
return a[0];
if (a[0] > a[x])
swap(a[0], a[x]);
Fun(x-1, a);
}

Example

Fun(int x, float a[]) {


if (x == 0)
return a[0];
if (a[0] > a[x])
swap(a[0], a[x]);
Fun(x-1, a);
}

9 5 1 6 3 8    Fun(5, a)
8 5 1 6 3 9    Fun(4, a)
3 5 1 6 8 9    Fun(3, a)
3 5 1 6 8 9    Fun(2, a)
1 5 3 6 8 9    Fun(1, a)
1 5 3 6 8 9    Fun(0, a)
return a[0] = 1

Analysis

First, identify the input size.
The running time seems to depend on the value of x.

Fun(int x, float a[]) {


if (x == 0)
return a[0];
if (a[0] > a[x])
swap(a[0], a[x]);
Fun(x-1, a);
}

Analysis

Second, identify
what terminates
the recursion
Think of the
running time as
function Fun(x)
Fun(0) is the base
case

Fun(int x, float a[]) {


if (x == 0)
return a[0];
if (a[0] > a[x])
swap(a[0], a[x]);
Fun(x-1, a);
}

Analysis

Third, identify the basic operation.
The basic operation could be a constant operation,
or it could be embedded in a loop that depends on the input size.

Fun(int x, float a[]) {


if (x == 0)
return a[0];
if (a[0] > a[x])
swap(a[0], a[x]);
Fun(x-1, a);
}

Analysis

Fourth, identify the recursive call and how the input size is changed.
Warning: the input size reduction may not be part of the recursive call.

Fun(int x, float a[]) {


if (x == 0)
return a[0];
if (a[0] > a[x])
swap(a[0], a[x]);
Fun(x-1, a);
}

Analysis

Finally, put all the pieces together.
Base case: Fun(0)
Recursive structure: Fun(x) = Fun(x-1)
Interior complexity: Fun(x) = Fun(x-1) + O(1)
Recursive algorithms usually fit one of the 4 basic models.

Fun(int x, float a[]) {


if (x == 0)
return a[0];
if (a[0] > a[x])
swap(a[0], a[x]);
Fun(x-1, a);
}

Recursive Algorithm Analysis


Input: vector of integers v
Fun(v[1..n]) {
if size of v is 1 return 1
else
q = Fun(v[1..n-1]);
if (v[q] > v[n])
swap(v[q], v[n]);
temp = 0;
for x = 1 to n
if (v[x] > temp)
temp = v[x];
p = x;
return p;
}

Example

Fun(v[1..n]) {
if size of v is 1 return 1
else
q = Fun(v[1..n-1]);
if (v[q] > v[n])
swap(v[q], v[n]);
temp = 0;
for x = 1 to n
if (v[x] > temp)
temp = v[x];
p = x;
return p;
}

Fun(3 1 4 2 5):
q = Fun(3 1 4 2)
Fun(3 1 4 2):
q = Fun(3 1 4)
Fun(3 1 4):
q = Fun(3 1)
Fun(3 1):
q = Fun(3)
Fun(3):
return 1;
Fun(3 1):
v[q] = 3; v[n] = 1; v: 1 3 4 2 5; return 2

Fun(3 1 4):
v[q] = 3; v[n] = 4; v: 1 3 4 2 5; return 3

Fun(3 1 4 2):
v[q] = 4; v[n] = 2; v: 1 3 2 4 5; return 4

Fun(3 1 4 2 5):
v[q] = 4; v[n] = 5; v: 1 3 2 4 5; return 5

Example

First, the input size is the vector size.

Fun(v[1..n]) {
if size of v is 1 return 1
else
q = Fun(v[1..n-1]);
if (v[q] > v[n])
swap(v[q], v[n]);
temp = 0;
for x = 1 to n
if (v[x] > temp)
temp = v[x];
p = x;
return p;
}

Example

Second, when
the size of the
vector is 1 the
recursion
terminates

Fun(v[1..n]) {
if size of v is 1 return 1
else
q = Fun(v[1..n-1]);
if (v[q] > v[n])
swap(v[q], v[n]);
temp = 0;
for x = 1 to n
if (v[x] > temp)
temp = v[x];
p = x;
return p;
}

Example

Third, the basic operations are not simple.

Fun(v[1..n]) {
if size of v is 1 return 1
else
q = Fun(v[1..n-1]);
if (v[q] > v[n])
swap(v[q], v[n]);
temp = 0;
for x = 1 to n
if (v[x] > temp)
temp = v[x];
p = x;
return p;
}

Example

There is an O(1) compare and swap.

Fun(v[1..n]) {
if size of v is 1 return 1
else
q = Fun(v[1..n-1]);
if (v[q] > v[n])
swap(v[q], v[n]);
temp = 0;
for x = 1 to n
if (v[x] > temp)
temp = v[x];
p = x;
return p;
}

Example

There is an O(n) loop with a constant number of operations inside.

Fun(v[1..n]) {
if size of v is 1 return 1
else
q = Fun(v[1..n-1]);
if (v[q] > v[n])
swap(v[q], v[n]);
temp = 0;
for x = 1 to n
if (v[x] > temp)
temp = v[x];
p = x;
return p;
}

Example

Fourth, the recursive call decreases the vector size (n) by one.

Fun(v[1..n]) {
if size of v is 1 return 1
else
q = Fun(v[1..n-1]);
if (v[q] > v[n])
swap(v[q], v[n]);
temp = 0;
for x = 1 to n
if (v[x] > temp)
temp = v[x];
p = x;
return p;
}

Example

Finally, we can describe the running time as
T(n) = T(n-1) + O(n)
which is the quadratic basic form.

Fun(v[1..n]) {
if size of v is 1 return 1
else
q = Fun(v[1..n-1]);
if (v[q] > v[n])
swap(v[q], v[n]);
temp = 0;
for x = 1 to n
if (v[x] > temp)
temp = v[x];
p = x;
return p;
}

Summary of simple recursive algorithms

1. How does the input size (i.e., the terminating variable) change for each recursive function call?
2. In practice there are two basic options:
   n → n-1
   n → n/2

n → n-1
This means the recursion is simply an O(n) loop.
This leads to two common cases:

Case #1:
fun(n) {
if (n==1) quit
O(1)
fun(n-1)
} O(n)

Case #2:
fun(n) {
if (n==1) quit
O(n)
fun(n-1)
} O(n2)

n → n-1
Generic case
fun(n) {
if (n==1) quit
O(nk)
fun(n-1)
} O(nk+1)

n → n/2
This means the recursion is simply an O(log n) loop.
This leads to three common cases:

Case #3:
fun(n) {
if (n==1) quit
O(1)
fun(n/2)
} O(log n)

Case #4:
fun(n) {
if (n==1) quit
O(n)
fun(n/2)
fun(n/2)
} O(n log n)

Case #5:
fun(n) {
if (n==1) quit
O(n)
fun(n/2)
} O(n)

Generic Case:
T(n) = aT(n/b) + f(n), where f(n) = Θ(nᵏ)

1. a < bᵏ:  T(n) = Θ(nᵏ)
2. a = bᵏ:  T(n) = Θ(nᵏ lg n)
3. a > bᵏ:  T(n) = Θ(n^(log_b a))

1/b is the reduction factor


a is the number of recursive calls
f(n) is the interior complexity of the recursive
function.

Multiple Recursive Calls


O(2ⁿ) arises from multiple recursive function calls where the input size is not reduced by a constant factor (n → n-1 rather than n → n/2):
fun(n) {
if (n==1) quit
O(1)
fun(n-1)
fun(n-1)
} O(2n)

Multiple Recursive Calls


fun(n) {
if (n==1) quit
O(1)
fun(n-1)
fun(n-1)
} O(2n)
Here the recursion acts as an O(n) loop, but at each level the number of recursive calls is doubled.
Number of operations = 1 + 2 + 4 + 8 + 16 + … + 2ⁿ = (2ⁿ - 1) + 2ⁿ = O(2ⁿ)

Multiple Recursive Calls


fun(n) {
if (n==1) quit
O(n)
fun(n-1)
fun(n-1)
} O(???)

Multiple Recursive Calls


fun(n) {
if (n==1) quit
O(1)
fun(n-1)
fun(n-1)
fun(n-1)
} O(???)

Day 10 Agenda

(can you believe it's day 10?)

The last algorithm analysis enigma.
Summary of enigmas.
Can things like 1.5ⁿ or 1.375ⁿ arise?
Homework 4 solutions
Exam review

Last Enigma

fun1(n) {
if (n==1) quit
O(1)
fun1(n-1)
fun1(n-1)
} O(2ⁿ)

fun2(n) {
if (n==1) quit
O(n)
fun2(n-1)
fun2(n-1)
} O(2ⁿ)

fun3(n,m) {
if (n==1) quit
O(m)
fun3(n-1,m)
fun3(n-1,m)
} O(n·2ⁿ) if we call fun3(n,n)

Last Enigma

(recursion tree for fun1: each level doubles the number of calls, contributing 1, 2, 4, 8, …, 2ⁿ operations)

fun1(n) {
if (n==1) quit
O(1)
fun1(n-1)
fun1(n-1)
} O(2ⁿ)

= 1 + 2 + 4 + … + 2ⁿ
= (2ⁿ - 1) + 2ⁿ
= 2(2ⁿ) - 1
= O(2ⁿ)

Last Enigma

(recursion tree for fun2: the levels cost 1(n), 2(n-1), 4(n-2), 8(n-3), …)

fun2(n) {
if (n==1) quit
O(n)
fun2(n-1)
fun2(n-1)
} O(2ⁿ)

= 1(n) + 2(n-1) + 4(n-2) + 8(n-3) + … = O(2ⁿ)
You have to see my program to believe it!

Last Enigma

(recursion tree for fun3(n,n): each call costs n, the levels contributing 1n, 2n, 4n, 8n, …, 2ⁿn)

fun3(n,m) {
if (n==1) quit
O(m)
fun3(n-1,m)
fun3(n-1,m)
} O(n·2ⁿ) if we call fun3(n,n)

= (1 + 2 + 4 + … + 2ⁿ)n
= ((2ⁿ - 1) + 2ⁿ)n
= (2(2ⁿ) - 1)n
= O(n·2ⁿ)

Summary of Enigmas

log grows so slowly, the base makes no difference.
log₃n = Θ(log₂n)
log₁₀₀n = Θ(log₂n)
logₑn = Θ(log₂n)

Summary of Enigmas

Exponentials grow so fast that the base makes a huge difference.
2ⁿ ≠ Θ(3ⁿ)
(2 + 0.01)ⁿ ≠ Θ(2ⁿ)
However,
2ⁿ = O(3ⁿ)
3ⁿ = Ω(2ⁿ)
2ⁿ⁺¹ = 2·2ⁿ = Θ(2ⁿ)

Summary of Enigmas

Cutting the input size in half recursively creates a log n loop that does NOT increase the efficiency class (except for Case #3, with O(1) internal complexity).

Case #3:
fun(n) {
if (n==1) quit
O(1)
fun(n/2)
} O(log n)

Case #5:
fun(n) {
if (n==1) quit
O(n)
fun(n/2)
} O(n)

Generic case:
fun(n) {
if (n==1) quit
O(nk)
fun(n/2)
} O(nk)

Summary of Enigmas

Here we cut the input size in half (log n levels), but we spawn two recursive calls at each level. This adds a factor of log n to the internal complexity.
However, it adds a factor of n if the internal complexity is O(1).

Exception:
fun(n) {
if (n==1) quit
O(1)
fun(n/2)
fun(n/2)
} O(n)

Case #4:
fun(n) {
if (n==1) quit
O(n)
fun(n/2)
fun(n/2)
} O(n log n)

Generic case
fun(n) {
if (n==1) quit
O(nk)
fun(n/2)
fun(n/2)
} O(nk)

Fibonacci numbers

(here is where things get difficult to analyze)

The Fibonacci sequence:
0, 1, 1, 2, 3, 5, 8, 13, 21, …
Fibonacci recurrence:
F(n) = F(n-1) + F(n-2)
F(0) = 0
F(1) = 1
A 2nd-order linear homogeneous recurrence relation with constant coefficients.
Another example:
A(n) = 3A(n-1) - 2A(n-2)
A(0) = 1, A(1) = 3

How do we handle things


like this?
fun(n) {
if (n==1) quit
O(1)
fun(n-1)
fun(n-2)
} O(???)
The simple way is to draw a picture.
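Drawing the picture amounts to counting calls: the total number of calls satisfies C(n) = C(n-1) + C(n-2) + 1, which grows like the Fibonacci numbers themselves, roughly 1.618ⁿ rather than 2ⁿ. A C counter (the name is mine):

```c
/* Count the total number of calls made by the recursion
   fun(n-1); fun(n-2); with an O(1) body. */
long call_count(int n) {
    if (n <= 1)
        return 1;                       /* the base-case call itself */
    return 1 + call_count(n - 1) + call_count(n - 2);
}
```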

Exam 1

Chapters 1 and 2 only


Review hw solutions
Go through powerpoint slides and
make your cheat sheet.

Homework 4

BTW, hw serves three purposes:


1. It helps me gauge if I'm going too fast.
2. It helps improve the grades of those
who put forth effort but may have test
anxiety
3. It helps prepare you for the exams

hw1, hw2, hw3 & hw4 = 9 points


exam1 = 10 points
