
Two main methods of analyzing running time:

Direct counting: the running time is the sum of the cost of each step times the number of times it is executed, T(n) = Σ cᵢ·tᵢ. Best for repeated iterations (loops).

Recurrence equations: formulating recurrence equations; solving recurrence equations; the master theorem (simple and extended versions). Examples: Merge-Sort and Quick-Sort.

Recurrence equation: an equality or inequality describing a function in terms of its behavior on smaller inputs, e.g. T(n) = T(n−1) + cn, T(1) = 1; the solution of this equation is T(n) = O(n²). Best for recursive functions and structures.
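The claimed solution can be checked numerically; a minimal Python sketch (an illustration, not from the slides), taking c = 1 so that the exact solution is n(n+1)/2 = O(n²):

```python
def t(n, c=1):
    """Unfold T(n) = T(n-1) + c*n with T(1) = 1 iteratively."""
    total = 1
    for k in range(2, n + 1):
        total += c * k
    return total

# With c = 1 the exact solution is n(n+1)/2, which is O(n^2).
for n in [1, 10, 100]:
    assert t(n) == n * (n + 1) // 2
    assert t(n) <= n * n
```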


Merge sort:
1. Divide the data into two parts
2. Sort both parts (with merge sort)
3. Merge the sorted parts


W(n) = W(n₁) + W(n₂) + (n₁ + n₂ − 1)    Sort(p), Sort(q), Merge(p, q)

W(n) = 2W(n/2) + (n − 1)                if n = 2^k
W(n) = W(⌈n/2⌉) + W(⌊n/2⌋) + (n − 1)    if n ≠ 2^k

W(n) ∈ O(?)
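The recurrence W(n) = 2W(n/2) + (n − 1) bounds the number of element comparisons merge sort makes. A minimal Python sketch (an illustration, not the course's code) that counts comparisons and checks them against the recurrence for n = 2^k:

```python
import random

def merge_sort(a, counter):
    """Merge sort that counts element comparisons in counter[0]."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid], counter)
    right = merge_sort(a[mid:], counter)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        counter[0] += 1                    # one comparison per loop iteration
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]   # leftovers need no comparisons

def w(n):
    """Worst-case comparisons: W(n) = 2W(n/2) + (n - 1), W(1) = 0."""
    return 0 if n == 1 else 2 * w(n // 2) + (n - 1)

counter = [0]
data = list(range(64)); random.shuffle(data)
assert merge_sort(data, counter) == sorted(data)
assert counter[0] <= w(64)   # at most W(n) comparisons for n = 2^k
```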


Design strategy: divide and conquer. Often recursive, at least in definition.

Simplifying assumptions:
- n is sufficiently large; T(n) = Θ(1) for sufficiently small n. The boundary value changes the solution of the equation, but usually only by a constant factor, so the order of growth is unchanged.
- Choose n according to boundary conditions: n is even (n = 2k), or a power of two (n = 2^k), where k > 0 is an integer.

Can be implemented as an iterative algorithm.

Strategy:
1. Break the problem into one or more smaller subproblems that are identical in nature to the original problem
2. Solve these subproblems (recursively)
3. Combine the results of the subproblems (somehow) to produce a solution to the original problem

Formulation: be very careful with the constants! T(n) is not the same as T(n/2)!

Note the assumption:

We can solve the original problem given the solutions to the subproblems


T(n) = T(n−1) + O(1)    → O(n)        Ex: sequential search
T(n) = T(n−1) + O(n)    → O(n²)       Ex: insertion sort, bubble sort
T(n) = T(n/2) + O(1)    → O(lg n)     Ex: binary search
T(n) = 2T(n/2) + O(1)   → O(n)        Ex: binary tree traversal
T(n) = 2T(n/2) + O(n)   → O(n lg n)   Ex: merge sort

Consider:
- into how many subproblems the problem is split
- what the size of each subproblem is
- how much work is required to split the problem
- how much work is required to combine the results of the subproblems

Recursion tree (figure): the root costs n, its two children cost n/2 each, the four grandchildren n/4 each, and so on.

Examples:
- Factorial: multiply n by (n−1)!: T(n) = T(n−1) + O(1) → O(n)
- Sequential search: see if the first element is the one we are looking for, and if not, recursively call with one element less: T(n) = T(n−1) + O(1) → O(n)
- Insertion sort: find the place of the first element in the sorted list, and recursively call with one element less: T(n) = T(n−1) + O(n) → O(n²)
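The sequential-search formulation above can be sketched recursively (an illustration, not the course's code); note that the slice xs[1:] copies the list, so this Python version does extra O(n) work per call, while the recurrence counts only comparisons:

```python
def seq_search(xs, target):
    """Recursive sequential search: T(n) = T(n-1) + O(1) comparisons -> O(n)."""
    if not xs:                 # empty list: not found
        return False
    if xs[0] == target:        # check the first element
        return True
    return seq_search(xs[1:], target)  # recurse on one element less

assert seq_search([3, 1, 4, 1, 5], 4) is True
assert seq_search([3, 1, 4, 1, 5], 9) is False
```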

" !
Binary search: see if the root of the tree is the one we are looking for, and if not, recursively call with either the left or right subtree, which has half the elements T(n) = T(n/2) + O(1) O(lg n) Binary tree traversal: visit all the nodes of a tree by recursively visiting the nodes of the left and right tree: T(n) = 2T(n/2) + O(1) O(n) Merge Sort: split the list into two equal-sized parts, recursively sort each, and merge the resulting lists: T(n) = 2T(n/2) + O(n) O(n lg n)
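The binary-search recurrence can be sketched on a sorted list (an illustration; the helper binary_search and its index-based arguments are not from the slides):

```python
def binary_search(xs, target, lo=0, hi=None):
    """Recursive binary search on a sorted list: T(n) = T(n/2) + O(1) -> O(lg n)."""
    if hi is None:
        hi = len(xs)
    if lo >= hi:               # empty range: not found
        return -1
    mid = (lo + hi) // 2
    if xs[mid] == target:
        return mid
    if target < xs[mid]:       # recurse into the half that can contain target
        return binary_search(xs, target, lo, mid)
    return binary_search(xs, target, mid + 1, hi)

xs = [2, 3, 5, 7, 11, 13]
assert binary_search(xs, 7) == 3
assert binary_search(xs, 4) == -1
```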


Solving recurrence equations, three methods:
- Substitution: guess a bound and use mathematical induction to prove the guess correct.
- Recursion tree: convert the recurrence into a tree whose nodes represent the costs at each level, and use bounding summations to solve the recurrence.
- Master method: apply a theorem for recurrences of the form T(n) = aT(n/b) + f(n), where a and b are constants and f(n) is a function.

Substitution example. Equation: T(n) = 2T(n/2) + n. We guess that the solution is O(n lg n) for n ≥ 2; assume T(1) = 1.
Prove: T(n) ≤ c·n lg n for some c ≥ 2.
Base case: T(2) ≤ c·2 lg 2 = 2c, which holds for c ≥ 2 since T(2) = 2T(1) + 2 = 4.
General case: assume that the bound holds for n/2, that is, T(n/2) ≤ c(n/2) lg(n/2). Substitute into the recurrence and prove it for n:
T(n) ≤ 2(c(n/2) lg(n/2)) + n = cn lg(n/2) + n = cn lg n − cn lg 2 + n = cn lg n − cn + n ≤ cn lg n for c ≥ 1.
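The guessed bound can be sanity-checked numerically for powers of two (a sketch, not part of the proof); with T(1) = 1 the unfolded values satisfy T(n) = n lg n + n exactly:

```python
import math

def t(n):
    """Unfold T(n) = 2T(n/2) + n with T(1) = 1, for n a power of two."""
    return 1 if n == 1 else 2 * t(n // 2) + n

# The guessed bound T(n) <= c*n*lg(n) with c = 2 holds for all n = 2^k, k >= 1.
for k in range(1, 11):
    n = 2 ** k
    assert t(n) <= 2 * n * math.log2(n)
```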


Iteration method: write out several levels of the recursion and look for a pattern; once you find the pattern, prove it correct by substitution (induction).

T(n) = T(n−1) + n
T(n−1) = T(n−2) + (n−1)
T(n−2) = T(n−3) + (n−2)
T(n−3) = T(n−4) + (n−3)

Now substitute:
T(n) = T(n−1) + n
     = [T(n−2) + (n−1)] + n
     = [[T(n−3) + (n−2)] + (n−1)] + n
     = [[[T(n−4) + (n−3)] + (n−2)] + (n−1)] + n
     = T(n−k) + Σ_{i=1}^{k} (n − i + 1) = T(n−k) + kn − ((k−1)k)/2

At the end of the recursion k = n−1 and T(1) = 1, so we get:
T(n) = 1 + n² − n − (n²/2 − 3n/2 + 1) = n²/2 + n/2 = O(n²)

So the guess is that O(n²) is the solution to the recurrence T(n) = T(n−1) + n.
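The closed form found by iteration can be verified directly (a sketch, not from the slides):

```python
def t(n):
    """T(n) = T(n-1) + n with T(1) = 1, computed by direct unfolding."""
    total = 1
    for k in range(2, n + 1):
        total += k
    return total

# Closed form from the iteration method: n^2/2 + n/2
for n in [1, 2, 10, 100]:
    assert t(n) == (n * n + n) // 2
```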


Master theorem (simple version). Unfold the recurrence:
T(n) = aT(n/b) + n^c
T(n/b) = aT(n/b²) + (n/b)^c
T(n/b²) = aT(n/b³) + (n/b²)^c
T(n/b³) = aT(n/b⁴) + (n/b³)^c
Now substitute:
T(n) = aT(n/b) + n^c
     = a[aT(n/b²) + (n/b)^c] + n^c
     = a[a[aT(n/b³) + (n/b²)^c] + (n/b)^c] + n^c
     = a^k T(n/b^k) + Σ_{i=0}^{k−1} a^i (n/b^i)^c
where k = log_b n is the depth of the recursion.

Theorem. Let a ≥ 1, b > 1, c ≥ 0 be constants, and let T(n) be defined on the non-negative integers by the recurrence T(n) = aT(n/b) + n^c. Then:
1. T(n) = Θ(n^c)          when a/b^c < 1 (log_b a < c)
2. T(n) = Θ(n^c log_b n)  when a/b^c = 1 (log_b a = c)
3. T(n) = Θ(n^(log_b a))  when a/b^c > 1 (log_b a > c)
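The three cases can be packaged into a small classifier (a hypothetical helper, not from the slides; the string output format is an arbitrary choice):

```python
import math

def simple_master(a, b, c):
    """Asymptotic solution of T(n) = a*T(n/b) + n^c by the simple master theorem."""
    ratio = a / b ** c
    if ratio < 1:                          # log_b(a) < c: top level dominates
        return f"Theta(n^{c})"
    if ratio == 1:                         # log_b(a) = c: all levels equal
        return f"Theta(n^{c} log n)"
    return f"Theta(n^{math.log(a, b):g})"  # log_b(a) > c: leaves dominate

assert simple_master(2, 2, 1) == "Theta(n^1 log n)"   # merge sort
assert simple_master(1, 2, 0) == "Theta(n^0 log n)"   # binary search
assert simple_master(9, 3, 1) == "Theta(n^2)"         # T(n) = 9T(n/3) + n
```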

Recursion tree (figure): at level i there are a^i nodes, each costing (n/b^i)^c; the leaves, at level k = log_b n, contribute Θ(n^(log_b a)) in total.

The total number of operations is:
T(n) = Θ(n^(log_b a)) + Σ_{i=0}^{log_b n − 1} a^i (n/b^i)^c = Θ(n^(log_b a)) + n^c Σ_{i=0}^{log_b n − 1} (a/b^c)^i
which depends on the value of a/b^c.

Case 1: a/b^c < 1. The geometric sum is bounded by a constant:
Σ_{i=0}^{log_b n − 1} (a/b^c)^i < Σ_{i=0}^{∞} (a/b^c)^i = 1/(1 − a/b^c) < const.
Therefore, T(n) = Θ(n^c).

Case 2: a/b^c = 1. Every term of the sum equals 1:
Σ_{i=0}^{log_b n − 1} (a/b^c)^i = Σ_{i=0}^{log_b n − 1} 1 = log_b n
Therefore, T(n) = Θ(n^c log_b n).

Case 3: a/b^c > 1. The geometric sum is dominated by its last term:
Σ_{i=0}^{log_b n − 1} (a/b^c)^i = Θ((a/b^c)^(log_b n))
and
n^c (a/b^c)^(log_b n) = n^c · a^(log_b n) / (b^(log_b n))^c = n^c · n^(log_b a) / n^c = n^(log_b a)
Therefore, T(n) = Θ(n^(log_b a)).

Example. The recurrence equation is T(n) = 2T(n/2) + n. Here a = 2, b = 2, f(n) = n, and c = 1, so Case 2 applies.
Conclusion: T(n) = Θ(n^(log_2 2) lg n) = Θ(n lg n).

Master theorem (general version). Let a ≥ 1, b > 1 be constants, let f(n) be a function, and let T(n) be defined on the non-negative integers by the recurrence T(n) = aT(n/b) + f(n), where n/b means either ⌊n/b⌋ or ⌈n/b⌉. Then:
1. If f(n) = O(n^(log_b a − ε)) for some ε > 0, then T(n) = Θ(n^(log_b a)).
2. If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) lg n).
3. If f(n) = Ω(n^(log_b a + ε)) for some ε > 0, and if a·f(n/b) ≤ c·f(n) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).

Recursion tree (figure): level i has a^i nodes, each costing f(n/b^i); the Θ(1) leaves together contribute Θ(n^(log_b a)). In total:
T(n) = Θ(n^(log_b a)) + Σ_{i=0}^{log_b n − 1} a^i f(n/b^i)
where k = log_b n is the depth of the recursion.

The theorem compares two terms, n^(log_b a) and f(n):
1. When n^(log_b a) dominates, the complexity is T(n) = Θ(n^(log_b a)).
2. When f(n) dominates, the complexity is T(n) = Θ(f(n)).
3. When they are comparable, there is a lg n penalty: T(n) = Θ(n^(log_b a) lg n) = Θ(f(n) lg n).
See the book for the formal proof!

Example: solve the recurrence T(n) = 9T(n/3) + n. Here a = 9, b = 3, f(n) = n, and n^(log_b a) = n^(log_3 9) = n². Since f(n) = O(n^(2−ε)) with ε = 1, Case 1 applies.
Conclusion: T(n) = Θ(n²).
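The conclusion can be sanity-checked numerically (a sketch, not from the slides): unfolding shows the ratio T(n)/n² converging to 1.5, consistent with Θ(n²):

```python
def t(n):
    """T(n) = 9T(n/3) + n with T(1) = 1, for n a power of three."""
    return 1 if n == 1 else 9 * t(n // 3) + n

# T(3^k)/(3^k)^2 = 1 + sum_{i=1..k} 3^(-i), which converges to 1.5,
# confirming T(n) = Theta(n^2).
n = 3 ** 10
ratio = t(n) / (n * n)
assert 1.4 < ratio < 1.5
```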

"
Solve the recurrence Solve the recurrence

&
T (n) = 3T (n / 4) + n lg n
Here a = 3, b = 4, f (n) = n lg n and

T ( n ) = T ( 2 n / 3) + 1
Here a = 1, b = 3/2, f (n) = 1 and Case 2 applies: Conclusion:

n log

= n log
0

3/2

= n0 = 1

Case 3 applies: Conclusion:

n log

= n log

= O (n 0 .793 )

f ( n ) = (1)

T (n) = n lg n = (lg n )

f (n) = n 0.792+ , > 0

3 ( n / 4 ) lg( n / 4 ) ( 3 / 4 ) n lg n

T ( n) = (n lg n )
31/35

32/35

Example: solve the recurrence T(n) = 2T(n/2) + n lg n. Here a = 2, b = 2, f(n) = n lg n, and n^(log_b a) = n^(log_2 2) = n. None of the three cases apply! f(n) = n lg n grows faster than n, but not polynomially faster: f(n) ≠ Ω(n^(1+ε)) for any ε > 0, and the Case 3 condition 2(n/2) lg(n/2) ≤ c·n lg n for some constant c < 1 does not hold.
Conclusion: the master theorem cannot be used :-(

Other recurrence shapes ("clip and be conquered", "divide and be conquered"):
- Many subproblems, each smaller by a constant: T(n) = c·T(n−d) + f(n). Fibonacci: add fibonacci(n−1) and fibonacci(n−2): T(n) = T(n−1) + T(n−2) + O(1) → O(F(n)).
- Almost n subproblems of size n/e: T(n) = k(n)·T(n/e) + f(n), with k(n) = O(n).
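The Fibonacci recurrence above can be checked by counting calls of the naive implementation (a sketch, not from the slides); the call count C(n) = C(n−1) + C(n−2) + 1 itself satisfies C(n) = 2F(n+1) − 1, so it grows like F(n):

```python
def fib_calls(n, counter):
    """Naive Fibonacci; counter[0] counts calls: C(n) = C(n-1) + C(n-2) + 1."""
    counter[0] += 1
    if n < 2:
        return n
    return fib_calls(n - 1, counter) + fib_calls(n - 2, counter)

def fib(n):
    """Iterative Fibonacci with F(0) = 0, F(1) = 1, used as a reference."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

counter = [0]
assert fib_calls(20, counter) == fib(20)
assert counter[0] == 2 * fib(21) - 1   # call count grows like F(n)
```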

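Although the master theorem fails for T(n) = 2T(n/2) + n lg n, unfolding the recurrence still gives a closed form: for n = 2^k with T(1) = 1, T(n) = n(1 + k(k+1)/2), i.e. Θ(n lg² n). A quick numeric check (an addition, not on the slides):

```python
import math

def t(n):
    """T(n) = 2T(n/2) + n*lg(n) with T(1) = 1, for n a power of two."""
    if n == 1:
        return 1
    k = round(math.log2(n))          # lg(n) is an integer for n = 2^k
    return 2 * t(n // 2) + n * k

# Closed form: T(2^k) = 2^k * (1 + k(k+1)/2), i.e. Theta(n lg^2 n).
for k in range(0, 12):
    n = 2 ** k
    assert t(n) == n * (1 + k * (k + 1) // 2)
```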

Summary:
T(n) = T(n−1) + O(1)    → O(n)
T(n) = T(n−1) + O(n)    → O(n²)
T(n) = T(n/2) + O(1)    → O(lg n)
T(n) = 2T(n/2) + O(1)   → O(n)
T(n) = 2T(n/2) + O(n)   → O(n lg n)
