
QuickSort Algorithm

Using Divide and Conquer for Sorting


QuickSort 2
Topics Covered
QuickSort algorithm
analysis
Randomized Quick Sort
A Lower Bound on Comparison-Based Sorting
Quick Sort
Divide and conquer idea: divide the problem into two
smaller sorting problems.

Divide:
Select a splitting element (pivot)
Rearrange the array (sequence/list)
Quick Sort
Result:
All elements to the left of the pivot are less than
or equal to the pivot, and
all elements to the right of the pivot are greater than
or equal to the pivot;
the pivot is in its correct place in the sorted array/list

Need: Clever split procedure (Hoare)
Quick Sort
Divide: Partition into subarrays (sub-lists)

Conquer: Recursively sort 2 subarrays

Combine: Trivial
QuickSort (Hoare 1962)
Problem: Sort n keys in nondecreasing order
Inputs: Positive integer n, array of keys S indexed
from 1 to n
Output: The array S containing the keys in
nondecreasing order.
quicksort ( low, high )
1. if high > low
2. then partition(low, high, pivotIndex)
3. quicksort(low, pivotIndex -1)
4. quicksort(pivotIndex +1, high)

Partition array for Quicksort
partition(low, high, pivotIndex)
1. pivotitem = S[low]
2. k = low
3. for j = low + 1 to high
4.     do if S[j] < pivotitem
5.         then k = k + 1
6.              exchange S[j] and S[k]
7. pivotIndex = k
8. exchange S[low] and S[pivotIndex]
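The pseudocode above translates almost line-for-line into runnable Python. A sketch (0-based indexing, whereas the slides index S from 1 to n):

```python
def partition(S, low, high):
    """Partition S[low..high] around pivotitem = S[low].

    k marks the boundary of the items found so far that are smaller
    than the pivot.  Returns the pivot's final (sorted) index.
    """
    pivotitem = S[low]
    k = low
    for j in range(low + 1, high + 1):
        if S[j] < pivotitem:
            k += 1
            S[j], S[k] = S[k], S[j]
    S[low], S[k] = S[k], S[low]   # put the pivot into its sorted position
    return k

def quicksort(S, low, high):
    if high > low:
        pivot_index = partition(S, low, high)
        quicksort(S, low, pivot_index - 1)
        quicksort(S, pivot_index + 1, high)

data = [5, 3, 6, 2]
quicksort(data, 0, len(data) - 1)
print(data)  # [2, 3, 5, 6]
```

Here `partition` returns the pivot index instead of filling an output parameter, which is the idiomatic Python equivalent of the slides' call-by-reference `pivotIndex`.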


Input low = 1, high = 4
pivotitem = S[1] = 5, S = [5, 3, 6, 2]
[Figure: step-by-step trace showing the positions of j and k after lines 3, 5, and 6 of each iteration; after the loop the array is 5 3 2 6 with k = 3, and the final exchange puts the pivot in place: 2 3 5 6.]
Partition on a sorted list
[Figure: trace of partition on the already-sorted list 3 4 6. No element is smaller than the pivot 3, so k never moves past the pivot's own position and one sublist is empty.]
How does partition work for
S = 7, 5, 3, 1 ?
S = 4, 2, 3, 1, 6, 7, 5 ?
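The two questions can be answered mechanically by running the slide's partition procedure. A self-contained sketch (0-based indexing):

```python
def partition(S, low, high):
    # Same procedure as in the slides: pivotitem = S[low].
    pivotitem = S[low]
    k = low
    for j in range(low + 1, high + 1):
        if S[j] < pivotitem:
            k += 1
            S[j], S[k] = S[k], S[j]
    S[low], S[k] = S[k], S[low]
    return k

S1 = [7, 5, 3, 1]
p1 = partition(S1, 0, len(S1) - 1)
print(S1, "pivot index", p1)   # [1, 5, 3, 7] pivot index 3

S2 = [4, 2, 3, 1, 6, 7, 5]
p2 = partition(S2, 0, len(S2) - 1)
print(S2, "pivot index", p2)   # [1, 2, 3, 4, 6, 7, 5] pivot index 3
```

For S1 every element is smaller than the pivot 7, so the pivot ends up last and the recursion gets a subproblem of size n − 1: the worst-case pattern for reverse-sorted input.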
Worst Case Call Tree (N = 4)
[Figure: call tree for the sorted input S = [1, 3, 5, 7]. Each partition leaves the left side empty: Q(1,4) with pivotitem 1 spawns Q(1,0) and Q(2,4) on S = [3, 5, 7]; Q(2,4) with pivotitem 3 spawns Q(2,1) and Q(3,4) on S = [5, 7]; Q(3,4) with pivotitem 5 spawns Q(3,2) and Q(4,4) on S = [7]; Q(4,4) spawns Q(4,3) and Q(5,4).]
Worst Case Intuition
[Figure: each call splits its array into an empty part (size 0) and a part one smaller, so the partition sizes down the recursion are n-1, n-2, n-3, n-4, ..., 1, 0.]

t(n) = Σ_{k=1}^{n} k = n(n+1)/2

Total = O(n²)
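Counting one key comparison per inner-loop step, the sorted input costs (n−1) + (n−2) + ... + 1 = n(n−1)/2 comparisons, which matches the quadratic total above. A quick empirical check (a sketch reusing the partition from the earlier slides, with a counter bolted on):

```python
comparisons = 0

def partition(S, low, high):
    global comparisons
    pivotitem = S[low]
    k = low
    for j in range(low + 1, high + 1):
        comparisons += 1          # one key comparison per loop iteration
        if S[j] < pivotitem:
            k += 1
            S[j], S[k] = S[k], S[j]
    S[low], S[k] = S[k], S[low]
    return k

def quicksort(S, low, high):
    if high > low:
        p = partition(S, low, high)
        quicksort(S, low, p - 1)
        quicksort(S, p + 1, high)

n = 100
S = list(range(n))                # already sorted: the worst case
quicksort(S, 0, n - 1)
print(comparisons, n * (n - 1) // 2)   # 4950 4950
```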
Recursion Tree for Best Case
[Figure: recursion tree with n at the root, two children of size n/2, four nodes of size n/4, eight of size n/8, and so on. Nodes contain problem sizes; the partition comparisons on each level sum to n, and there are about lg n levels.]
Sum = O(n lg n)
Another Example of O(n lg n)
Comparisons
Assume each application of partition() partitions the list
so that ~(n/9) elements remain on the left side of the
pivot and ~(8n/9) elements remain on the right side of
the pivot.

We will show that the longest path of calls to Quicksort
is proportional to lg n and not n.
The longest path has k + 1 = 1 + ⌈log_{9/8} n⌉ ≤ 1 + ⌈lg n / lg(9/8)⌉ ≈ 1 + 6 lg n calls to Quicksort.
Let n = 1,000,000. The longest path has
1 + 6 lg n = 1 + 6·20 = 121 << 1,000,000
calls to Quicksort.
Note: the shortest path has about 1 + ⌈log₉ n⌉ = 1 + 7 = 8 calls.
Recursion Tree for a
Magic pivot function that partitions a list
into 1/9 and 8/9 lists
[Figure: recursion tree with n at the root and children n/9 and 8n/9; grandchildren n/81, 8n/81, 8n/81, 64n/81; and so on down to subproblems of size 0 or 1. Full levels sum to n; after about log₉ n levels the shallow branches die out and levels sum to less than n; the deepest branch ends after about log_{9/8} n levels.]
Intuition for the Average case
[Figure: the worst partition followed by the best partition — n splits into n−1 and 1, and the n−1 part then splits into two parts of size (n−1)/2 — versus a single split of n into 1 + (n−1)/2 and (n−1)/2.]
This shows a bad split can be absorbed by a good split.
Therefore we expect the running time for the average case to be
O(n lg n)
Recurrence equations:
Worst case:   T(n) = max_{1 ≤ q ≤ n} ( T(q−1) + T(n−q) ) + O(n)
Average case: A(n) = (1/n) Σ_{q=1}^{n} ( A(q−1) + A(n−q) ) + O(n)
Sorts and extra memory
When a sorting algorithm does not require more than
O(1) extra memory we say that the algorithm sorts in-place.

The textbook implementation of Mergesort requires
O(n) extra space
The textbook implementation of Heapsort is
in-place.
Our implementation of Quicksort is in-place except for the
stack.
Quicksort - enhancements
Choose a good pivot (random, or the median of the
first, last, and middle elements)

When the remaining subarray is small, use insertion sort

Randomized algorithms
Uses a randomizer (such as a random number
generator)

Some of the decisions made in the algorithm are
based on the output of the randomizer

The output of a randomized algorithm could
change from run to run for the same input

The execution time of the algorithm could also
vary from run to run for the same input
Randomized Quicksort
Choose the pivot randomly (or randomly
permute the input array before sorting).

The running time of the algorithm is
independent of input ordering.

No specific input elicits worst case behavior.
The worst case depends on the random number
generator.

We assume a random number generator
Random. A call to Random(a, b) returns a
random number between a and b.
RQuicksort - main procedure
// S is an instance "array/sequence"
quicksort(low, high)
1.  if high > low                  // otherwise terminate the recursion
2a.     then i = random(low, high)
2b.          swap(S[low], S[i])    // move the random pivot to S[low], where partition expects it
2c.          partition(low, high, pivotIndex)
3.          quicksort(low, pivotIndex - 1)
4.          quicksort(pivotIndex + 1, high)



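One possible Python rendering of the randomized variant (a sketch; `partition` is repeated so the block is self-contained, and the random pivot is swapped to `S[low]` because that is where this partition looks for it):

```python
import random

def partition(S, low, high):
    pivotitem = S[low]
    k = low
    for j in range(low + 1, high + 1):
        if S[j] < pivotitem:
            k += 1
            S[j], S[k] = S[k], S[j]
    S[low], S[k] = S[k], S[low]
    return k

def rquicksort(S, low, high):
    if high > low:
        i = random.randint(low, high)   # randomizer: pick the pivot position
        S[low], S[i] = S[i], S[low]     # move it to S[low]
        p = partition(S, low, high)
        rquicksort(S, low, p - 1)
        rquicksort(S, p + 1, high)

S = [9, 1, 8, 2, 7, 3]
rquicksort(S, 0, len(S) - 1)
print(S)  # [1, 2, 3, 7, 8, 9]
```

The sorted result never changes from run to run, but the sequence of recursive calls (and hence the running time) does, which is exactly the point of the next slide's analysis.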
Randomized Quicksort Analysis
We assume that all elements are distinct
(to make analysis simpler).

We partition around a random element; all splits
from 0 : n−1 to n−1 : 0 are equally
likely

Probability of each partition is 1/n.
Average case time complexity

T(n) = (1/n) Σ_{k=0}^{n-1} ( T(k) + T(n-1-k) ) + Θ(n)
     = (1/n) ( T(0) + T(1) + ... + T(n-1) + T(n-1) + ... + T(1) + T(0) ) + Θ(n)
     = (2/n) Σ_{k=0}^{n-1} T(k) + Θ(n)

which solves to T(n) ∈ O(n log n)
Summary of Worst Case Runtime
exchange/insertion/selection sort = O(n²)

mergesort = O(n lg n)

quicksort = O(n²)
average case quicksort = O(n lg n)

heapsort = O(n lg n)
NP Completeness
Yogeshri Gaidhani
M.S. (Software Eng.), Tech Mahindra, Pune.
8/11/2013
NP-Completeness
Some problems are intractable:
as they grow large, we are unable to solve
them in reasonable time
What constitutes reasonable time? Standard
working definition: polynomial time
On an input of size n the worst-case running time
is O(n^k) for some constant k
Polynomial time: O(n²), O(n³), O(1), O(n lg n)
Not in polynomial time: O(2ⁿ), O(nⁿ), O(n!)

Polynomial-Time Algorithms
Are some problems solvable in polynomial time?
Of course: every algorithm we've studied provides a
polynomial-time solution to some problem
We define P to be the class of problems solvable in
polynomial time
Are all problems solvable in polynomial time?
No: Turing's Halting Problem is not solvable by any
computer, no matter how much time is given
Such problems are clearly intractable, not in P
NP-Complete Problems
The NP-Complete problems are an interesting
class of problems whose status is unknown
No polynomial-time algorithm has been
discovered for an NP-Complete problem
No superpolynomial lower bound has been proved
for any NP-Complete problem, either
We call this the P = NP question
The biggest open problem in CS
An NP-Complete Problem:
Hamiltonian Cycles
An example of an NP-Complete problem:
A hamiltonian cycle of an undirected graph is a
simple cycle that contains every vertex
The hamiltonian-cycle problem: given a graph G,
does it have a hamiltonian cycle?
Describe a naive algorithm for solving the
hamiltonian-cycle problem. Running time?

P and NP
As mentioned, P is set of problems that can be
solved in polynomial time
NP (nondeterministic polynomial time) is the
set of problems that can be solved in
polynomial time by a nondeterministic
computer
What the hell is that?

Nondeterminism
Think of a non-deterministic computer as a
computer that magically guesses a solution,
then has to verify that it is correct
If a solution exists, computer always guesses it
One way to imagine it: a parallel computer that can
freely spawn an infinite number of processes
Have one processor work on each possible solution
All processors attempt to verify that their solution works
If a processor finds it has a working solution, it reports success
So: NP = problems verifiable in polynomial time
Class P
A decision problem P is in class P, if there is an algorithm that
solves any instance of problem P in polynomial time (with
respect to the size of instance).
Class NP
A decision problem P is in class NP, if there is a
non-deterministic algorithm that solves any instance of
problem P in polynomial time (with respect to the size of
instance).


What does "non-deterministic algorithm" mean?
Nondeterministic algorithms
We can consider a non-deterministic algorithm as a
program which in addition may contain statements

goto {L1, ..., Ln}

which basically means that after the execution of such a
statement the program arbitrarily jumps to any of the labels
L1, ..., Ln.
Nondeterministic algorithms
Non-deterministic algorithm - a program with goto {L1, ..., Ln}

Depending on the labels chosen, a non-deterministic
algorithm A works differently, and may produce different
results. We call each of the possible executions of algorithm A
a realisation of A.
Nondeterministic algorithms
A non-deterministic algorithm A computes a function
f : N → {0, 1} if and only if

for all a ∈ N such that f(a) = 1, all realisations of A terminate,
and there exists a realisation of A that outputs 1;

for all a ∈ N such that f(a) = 0, all realisations of A terminate
and output 0.
Nondeterministic algorithms examples
There exist non-deterministic polynomial time algorithms
for the problems Complete subgraph, Hamiltonian cycle, Euler
cycle, SAT and k-SAT.

There also exists a deterministic polynomial time algorithm
for the Euler cycle problem.
P and NP
Summary so far:
P = problems that can be solved in polynomial
time
NP = problems for which a solution can be verified
in polynomial time
Unknown whether P = NP (most suspect not)
Hamiltonian-cycle problem is in NP:
Not known to be solvable in polynomial time
Easy to verify a solution in polynomial time (How?)
NP-Complete Problems
We will see that NP-Complete problems are
the hardest problems in NP:
If any one NP-Complete problem can be solved in
polynomial time
then every NP-Complete problem can be solved
in polynomial time
and in fact every problem in NP can be solved in
polynomial time (which would show P = NP)
Thus: solve hamiltonian-cycle in O(n^100) time,
you've proved that P = NP. Retire rich & famous.
Reduction
The crux of NP-Completeness is reducibility
Informally, a problem P can be reduced to another
problem Q if any instance of P can be easily
rephrased as an instance of Q, the solution to
which provides a solution to the instance of P
What do you suppose "easily" means?
This rephrasing is called transformation
Intuitively: If P reduces to Q, P is no harder to
solve than Q
Reducibility
An example:
P: Given a set of Booleans, is at least one TRUE?
Q: Given a set of integers, is their sum positive?
Transformation: (x1, x2, ..., xn) → (y1, y2, ..., yn)
where yi = 1 if xi = TRUE, yi = 0 if xi = FALSE
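The Booleans-to-integers transformation above is small enough to execute directly. A sketch (function names are my own illustration):

```python
def any_true(bools):
    """P: is at least one Boolean TRUE?"""
    return any(bools)

def positive_sum(ints):
    """Q: is the sum of the integers positive?"""
    return sum(ints) > 0

def transform(bools):
    """The reduction: x_i = TRUE -> y_i = 1, x_i = FALSE -> y_i = 0."""
    return [1 if x else 0 for x in bools]

# The answer to P on an instance equals the answer to Q on its transform.
for xs in ([False, True, False], [False, False], [True]):
    assert any_true(xs) == positive_sum(transform(xs))
print("reduction agrees on all tested instances")
```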
Another example:
Solving linear equations is reducible to solving
quadratic equations
How can we easily use a quadratic-equation solver to
solve linear equations?
Using Reductions
If P is polynomial-time reducible to Q, we
denote this P ≤p Q
Definition of NP-Complete:
If P is NP-Complete, then P ∈ NP and all problems
R ∈ NP are reducible to P
Formally: R ≤p P for every R ∈ NP
If P ≤p Q and P is NP-Complete, Q is also
NP-Complete
This is the key idea you should take away today
Encodings
An encoding of a set is a mapping from that set to a set of
binary strings.
e.g. N = {1, 2, 3, 4, ...} → {0, 1, 10, 11, 100, ...}
ASCII
Concrete problem
P = {set of concrete problems solvable in polynomial time}
Want the definition to be independent of the particular encoding.
But in practice, if expensive encodings are ruled out, then the
actual encoding of a problem makes little difference.
f : {0,1}* → {0,1}* is polynomial-time
computable if there exists a polynomial time
algorithm A that, given any input x ∈ {0,1}*,
produces as output f(x).
Two encodings e1, e2 are polynomially related
if there exist two polynomial-time computable
functions f12, f21 such that for any i ∈ I,
f12(e1(i)) = e2(i) ; f21(e2(i)) = e1(i)
Formal Language Framework
= Finite set of symbols Alphabet
L = Set of strings of symbols from
Language
Empty string =
Empty language =
* = Language of all strings over .
e.g. =,0,1- ; * = ,,0,1,00,01,10,-

Formal Language Framework
Operations on L: union, intersection,
complement, concatenation, etc.
Closure / Kleene star of L:
L* = {ε} ∪ L ∪ L² ∪ L³ ∪ ...

Algorithm A accepts a string x if, given input x,
algorithm A's output A(x) = 1
The language accepted by A is
L = {x ∈ {0,1}* : A(x) = 1}
An algorithm rejects a string x if A(x) = 0
Is it necessary that if a string is not accepted
by A, then it is rejected?
A language L is decided by an algorithm A if
every binary string in L is accepted by A and
every binary string not in L is rejected by A.
P = {L : L is accepted by a polynomial-time
algorithm}
Polynomial Time Verification
Consider the decision problem PATH
PATH = {<G, u, v, k> : G = (V, E) is an undirected graph;
u, v ∈ V;
k ≥ 0; and
there exists a u-v path in
G of length ≤ k}
Polynomial Time Verification
Hamiltonian Cycle problem:
A bipartite graph with an odd number of vertices is not
Hamiltonian
HAM-CYCLE = {<G> : G is Hamiltonian}
Running time of the naive algorithm: Ω(2ⁿ)
Verification problem:
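Although deciding HAM-CYCLE seems to need exponential time, verifying a proposed cycle is fast. A sketch of such a verifier (edge representation is my own choice):

```python
def verify_ham_cycle(n, edges, cycle):
    """Check a claimed Hamiltonian cycle in time linear in n
    (with the edge set held in a hash set).

    `cycle` is an ordering of the vertices 0..n-1; it must list every
    vertex exactly once, with consecutive vertices (and last-to-first)
    adjacent in the graph.
    """
    E = {frozenset(e) for e in edges}
    if sorted(cycle) != list(range(n)):          # each vertex exactly once
        return False
    return all(frozenset((cycle[i], cycle[(i + 1) % n])) in E
               for i in range(n))

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
print(verify_ham_cycle(4, edges, [0, 1, 2, 3]))   # True
print(verify_ham_cycle(4, edges, [0, 2, 1, 3]))   # False: (1,3) is not an edge
```

This polynomial-time certificate check is exactly what places HAM-CYCLE in NP.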
P versus NP
[Figure: Venn diagrams for the two possibilities. If P ≠ NP, then P sits strictly inside NP and the NP-complete problems form a separate region of NP. If P = NP, the classes collapse together.]
Review: Reduction
A problem P can be reduced to another
problem Q if any instance of P can be rephrased
to an instance of Q, the solution to which
provides a solution to the instance of P
This rephrasing is called a transformation
Intuitively: If P reduces in polynomial time to Q,
P is no harder to solve than Q
An Aside: Terminology
What is the difference between a problem and an
instance of that problem?
To formalize things, we will express instances of
problems as strings
How can we express an instance of the hamiltonian
cycle problem as a string?
To simplify things, we will worry only about
decision problems with a yes/no answer
Many problems are optimization problems, but we
can often re-cast those as decision problems
NP-Hard and NP-Complete
If P is polynomial-time reducible to Q, we denote
this P ≤p Q
Definition of NP-Hard and NP-Complete:
If all problems R ∈ NP are reducible to P, then P is NP-Hard
We say P is NP-Complete if P is NP-Hard
and P ∈ NP
Note: I got this slightly wrong Friday
If P ≤p Q and P is NP-Complete, Q is also
NP-Complete
Why Prove NP-Completeness?
Though nobody has proven that P != NP, if you
prove a problem NP-Complete, most people
accept that it is probably intractable
Therefore it can be important to prove that a
problem is NP-Complete
Don't need to come up with an efficient algorithm
Can instead work on approximation algorithms
Proving NP-Completeness
What steps do we have to take to prove a
problem P is NP-Complete?
Pick a known NP-Complete problem Q
Reduce Q to P
Describe a transformation that maps instances of Q to
instances of P, s.t. "yes" for P = "yes" for Q
Prove the transformation works
Prove it runs in polynomial time
Oh yeah, prove P ∈ NP (What if you can't?)
The SAT Problem
One of the first problems to be proved NP-
Complete was satisfiability (SAT):
Given a Boolean expression on n variables, can we
assign values such that the expression is TRUE?
Ex: ((x1 → x2) ∨ ¬((¬x1 ↔ x3) ∨ x4)) ∧ ¬x2
Cook's Theorem: The satisfiability problem is NP-
Complete
Note: Argue from first principles, not reduction
Proof: not here
Conjunctive Normal Form
Even if the form of the Boolean expression is
simplified, the problem may be NP-Complete
Literal: an occurrence of a Boolean variable or its negation
A Boolean formula is in conjunctive normal form, or
CNF, if it is an AND of clauses, each of which is an OR of
literals
Ex: (x1 ∨ ¬x2) ∧ (¬x1 ∨ x3 ∨ x4) ∧ (¬x5)
3-CNF: each clause has exactly 3 distinct literals
Ex: (x1 ∨ ¬x2 ∨ ¬x3) ∧ (¬x1 ∨ x3 ∨ x4) ∧ (x5 ∨ ¬x3 ∨ ¬x4)
Notice: true if at least one literal in each clause is true

The 3-CNF Problem
Thm 36.10: Satisfiability of Boolean formulas
in 3-CNF form (the 3-CNF Problem) is NP-Complete
Proof: Nope
The reason we care about the 3-CNF problem
is that it is relatively easy to reduce to others
Thus by proving 3-CNF NP-Complete we can prove
many seemingly unrelated problems NP-Complete
3-CNF → Clique
What is a clique of a graph G?
A: a subset of vertices fully connected to each
other, i.e. a complete subgraph of G
The clique problem: how large is the
maximum-size clique in a graph?
Can we turn this into a decision problem?
A: Yes, we call this the k-clique problem
Is the k-clique problem within NP?
3-CNF → Clique
What should the reduction do?
A: Transform a 3-CNF formula to a graph, for
which a k-clique will exist (for some k) iff the
3-CNF formula is satisfiable
3-CNF → Clique
The reduction:
Let B = C1 ∧ C2 ∧ ... ∧ Ck be a 3-CNF formula with k
clauses, each of which has 3 distinct literals
For each clause put a triple of vertices in the
graph, one for each literal
Put an edge between two vertices if they are in
different triples and their literals are consistent,
meaning not each other's negation
Run an example:
B = (x ∨ ¬y ∨ ¬z) ∧ (¬x ∨ y ∨ z) ∧ (x ∨ y ∨ z)
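The reduction described above can be sketched in a few lines of Python. The clause encoding (+i for x_i, −i for ¬x_i) and the brute-force clique check are my own illustration; the checker is only feasible for tiny instances, which is fine for seeing the iff in action:

```python
from itertools import combinations, product

def reduce_3cnf_to_clique(clauses):
    """One vertex per literal occurrence; an edge joins two vertices
    when they lie in different triples and are not each other's negation."""
    vertices = [(c, lit) for c, clause in enumerate(clauses) for lit in clause]
    edges = {(u, v) for u, v in combinations(vertices, 2)
             if u[0] != v[0] and u[1] != -v[1]}
    return vertices, edges

def has_k_clique(clauses, edges):
    """A k-clique must take exactly one vertex per triple (clause);
    try every such choice."""
    for choice in product(*[[(c, lit) for lit in cl]
                            for c, cl in enumerate(clauses)]):
        if all((u, v) in edges or (v, u) in edges
               for u, v in combinations(choice, 2)):
            return True
    return False

# A satisfiable 3-variable formula: graph should contain a 3-clique.
clauses = [[1, -2, -3], [-1, 2, 3], [1, 2, 3]]
_, edges = reduce_3cnf_to_clique(clauses)
print(has_k_clique(clauses, edges))   # True

# (x) AND (NOT x): no consistent pair of literals, hence no 2-clique.
_, e2 = reduce_3cnf_to_clique([[1], [-1]])
print(has_k_clique([[1], [-1]], e2))  # False
```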
3-CNF → Clique
Prove the reduction works:
If B has a satisfying assignment, then each clause
has at least one literal (vertex) that evaluates to 1
Picking one such true literal from each clause
gives a set V' of k vertices; V' is a clique (Why?)
If G has a clique V' of size k, it must contain one
vertex in each triple (clause) (Why?)
We can assign 1 to each literal corresponding with
a vertex in V', without fear of contradiction
Clique → Vertex Cover
A vertex cover for a graph G is a set of vertices
incident to every edge in G
The vertex cover problem: what is the
minimum size vertex cover in G?
Restated as a decision problem: does a vertex
cover of size k exist in G?
Thm 36.12: vertex cover is NP-Complete
Clique → Vertex Cover
First, show vertex cover in NP (How?)
Next, reduce k-clique to vertex cover
The complement G^C of a graph G contains exactly
those edges not in G
Compute G^C in polynomial time
G has a clique of size k iff G^C has a vertex cover of
size |V| - k
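The complement construction and the size bookkeeping are easy to check on a small instance. A sketch (graph representation is my own choice):

```python
from itertools import combinations

def complement(V, E):
    """Edges of G^C: exactly the vertex pairs that are NOT edges of G."""
    E = {frozenset(e) for e in E}
    return {frozenset((u, v)) for u, v in combinations(V, 2)
            if frozenset((u, v)) not in E}

def is_vertex_cover(cover, E):
    """Every edge must have at least one endpoint in the cover."""
    return all(any(v in cover for v in e) for e in E)

# G: a 3-clique {1, 2, 3} plus an extra vertex 4 attached to 1.
V = [1, 2, 3, 4]
E = [(1, 2), (1, 3), (2, 3), (1, 4)]
Ec = complement(V, E)              # G^C has edges (2,4) and (3,4)
k = 3                              # {1, 2, 3} is a clique of size k in G
print(is_vertex_cover({4}, Ec))    # True: V - V' = {4}, size |V| - k = 1
print(is_vertex_cover(set(), Ec))  # False: the empty set covers nothing
```

The cover is exactly V − V' for the clique V' = {1, 2, 3}, matching the claim on this slide.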
Clique → Vertex Cover
Claim: If G has a clique of size k, G^C has a
vertex cover of size |V| - k
Let V' be the k-clique
Then V - V' is a vertex cover in G^C:
Let (u,v) be any edge in G^C
Then u and v cannot both be in V' (Why?)
Thus at least one of u or v is in V - V' (why?), so
edge (u,v) is covered by V - V'
Since this holds for any edge in G^C, V - V' is a vertex cover
Clique → Vertex Cover
Claim: If G^C has a vertex cover V' ⊆ V, with
|V'| = |V| - k, then G has a clique of size k
For all u, v ∈ V, if (u,v) ∈ G^C then u ∈ V' or
v ∈ V' or both (Why?)
Contrapositive: if u ∉ V' and v ∉ V', then
(u,v) ∈ E
In other words, all vertices in V - V' are connected
by an edge, thus V - V' is a clique
Since |V| - |V'| = k, the size of the clique is k
General Comments
Literally hundreds of problems have been
shown to be NP-Complete
Some reductions are profound, some are
comparatively easy, many are easy once the
key insight is given
You can expect a simple NP-Completeness
proof on the final
Other NP-Complete Problems
Subset-sum: Given a set of integers, does
there exist a subset that adds up to some
target T?
0-1 knapsack: when weights not just integers
Hamiltonian path: Obvious
Graph coloring: can a given graph be colored
with k colors such that no adjacent vertices
are the same color?
Etc
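Subset-sum, like the other problems on this slide, is easy to state and easy to verify but has no known polynomial-time algorithm. A brute-force sketch (feasible only for tiny inputs, which is the point):

```python
from itertools import combinations

def subset_sum(nums, target):
    """Decide Subset-Sum by trying all 2^n subsets -- exponential time,
    which is exactly why the problem's NP-completeness matters."""
    return any(sum(c) == target
               for r in range(len(nums) + 1)
               for c in combinations(nums, r))

print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # True: 4 + 5
print(subset_sum([3, 34, 4, 12, 5, 2], 30))  # False
```

Verifying a proposed subset, by contrast, is a single O(n) summation, so the problem is clearly in NP.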

Coming Up
Given one NP-Complete problem, we can prove
many interesting problems NP-Complete
Graph coloring (= register allocation)
Hamiltonian cycle
Hamiltonian path
Knapsack problem
Traveling salesman
Job scheduling with penalties
Many, many more
The End

Design and Analysis of Algorithms
NP Completeness Continued
Review: P and NP
What do we mean when we say a problem
is in P?
What do we mean when we say a problem
is in NP?
What is the relation between P and NP?

Review: P and NP
What do we mean when we say a problem
is in P?
A: A solution can be found in polynomial time
What do we mean when we say a problem
is in NP?
A: A solution can be verified in polynomial time
What is the relation between P and NP?
A: P ⊆ NP, but no one knows whether P = NP

Review: NP-Complete
What, intuitively, does it mean if we can
reduce problem P to problem Q?
How do we reduce P to Q?
What does it mean if Q is NP-Hard?
What does it mean if Q is NP-Complete?
Review: NP-Complete
What, intuitively, does it mean if we can reduce
problem P to problem Q?
P is no harder than Q
How do we reduce P to Q?
Transform instances of P to instances of Q in
polynomial time s.t. Q: yes iff P: yes
What does it mean if Q is NP-Hard?
Every problem P ∈ NP satisfies P ≤p Q
What does it mean if Q is NP-Complete?
Q is NP-Hard and Q e NP
Review:
Proving Problems NP-Complete
How do we usually prove that a problem R
is NP-Complete?
A: Show R ∈ NP, and reduce a known
NP-Complete problem Q to R
How did we prove that CIRCUIT-SAT problem is
NP-Complete?
NP-Completeness Proofs
If L is a language such that L' ≤p L for some L'
in NPC, then L is NP-Hard. If, in addition,
L ∈ NP, then L ∈ NPC
Why Prove NP-Completeness?
Though nobody has proven that P != NP, if you
prove a problem NP-Complete, most people
accept that it is probably intractable
Therefore it can be important to prove that a
problem is NP-Complete
Don't need to come up with an efficient algorithm
Can instead work on approximation algorithms
Proving NP-Completeness
What steps do we have to take to prove a
problem P is NP-Complete?
Pick a known NP-Complete problem Q
Reduce Q to P
Describe a transformation that maps instances of Q to
instances of P, s.t. "yes" for P = "yes" for Q
Prove the transformation works
Prove it runs in polynomial time
Oh yeah, prove P ∈ NP (What if you can't?)
Method to prove NP-completeness
1. Prove L ∈ NP
2. Select a known NPC language L'
3. Find a function f that maps every instance
x ∈ {0,1}* of L' to an instance f(x) of L
4. Prove that f satisfies x ∈ L' iff f(x) ∈ L
5. Prove that the algorithm computing f runs in
polynomial time

Formula Satisfiability
A Boolean formula φ consists of Boolean variables,
connectives (AND, OR, NOT, ...), and parentheses
A formula with a satisfying assignment is a satisfiable
formula
SAT = {<φ> : φ is a satisfiable Boolean formula}
Naive algorithm: Ω(2ⁿ)
Does there exist a polynomial time algorithm?
Unlikely, as the following proof shows
Theorem: Satisfiability of formulae is NP-complete
Claim: SAT is in NP
Need to show that a certificate can be verified
in polynomial time.
Certificate: a satisfying assignment for the input
formula
Replace each variable by the corresponding value
from the certificate and evaluate the
expression
A polynomial time task


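The certificate check described above can be sketched directly. For simplicity this example restricts the formula to CNF (a list of clauses); the literal encoding (+i for x_i, −i for ¬x_i) is my own convention:

```python
def verify_sat(clauses, assignment):
    """Plug a certificate (truth assignment) into a CNF formula and
    evaluate it -- a polynomial-time check, as the slide argues."""
    def lit_true(lit):
        return assignment[abs(lit)] if lit > 0 else not assignment[abs(lit)]
    # The formula is an AND of clauses, each an OR of literals.
    return all(any(lit_true(l) for l in clause) for clause in clauses)

# (x1 OR NOT x2) AND (x2 OR x3)
clauses = [[1, -2], [2, 3]]
print(verify_sat(clauses, {1: True, 2: False, 3: True}))    # True
print(verify_sat(clauses, {1: False, 2: False, 3: False}))  # False
```

The check is one pass over the formula, so it runs in time linear in the formula's size.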
Theorem: Satisfiability of formulae is NP-complete
Claim: SAT is NP-hard
We know that CIRCUIT-SAT is NPC
Now, to show that: CIRCUIT-SAT ≤p SAT
? Express the input of every gate as a formula and find the final output:
not a polynomial-time expression
Instead: for each wire xi in the circuit C, there is a variable xi in the
formula
Clause: each gate operates as a small formula involving the
variables of its incident wires
The formula is the AND of these clauses
The circuit is satisfiable iff the formula is satisfiable
Hence, CIRCUIT-SAT ≤p SAT
Conjunctive Normal Form
Even if the form of the Boolean expression is
simplified, the problem may be NP-Complete
Literal: an occurrence of a Boolean variable or its negation
A Boolean formula is in conjunctive normal form, or
CNF, if it is an AND of clauses, each of which is an OR of
literals
Ex: (x1 ∨ ¬x2) ∧ (¬x1 ∨ x3 ∨ x4) ∧ (¬x5)
3-CNF: each clause has exactly 3 distinct literals
Ex: (x1 ∨ ¬x2 ∨ ¬x3) ∧ (¬x1 ∨ x3 ∨ x4) ∧ (x5 ∨ ¬x3 ∨ ¬x4)
Notice: true if at least one literal in each clause is true

The 3-CNF Problem
Thm 34.10: Satisfiability of Boolean formulas
in 3-CNF form (the 3-CNF Problem) is NP-Complete
The reason we care about the 3-CNF problem
is that it is relatively easy to reduce to others
Thus by proving 3-CNF NP-Complete we can prove
many seemingly unrelated problems NP-Complete
3-CNF → Clique
What is a clique of a graph G?
A: a subset of vertices fully connected to each
other, i.e. a complete subgraph of G
The clique problem: how large is the
maximum-size clique in a graph?
Can we turn this into a decision problem?
A: Yes, we call this the k-clique problem
Is the k-clique problem within NP?
3-CNF → Clique
CLIQUE = {<G, k> : G is a graph containing a clique of
size k}
Naive algorithm: generate all k-subsets of V, check if any
of them is a clique
Running time: Ω(k² · C(|V|, k))
What should the reduction do?
A: Transform a 3-CNF formula to a graph, for which a
k-clique will exist (for some k) iff the 3-CNF formula is
satisfiable
3-CNF → Clique
Thm 34.11: The Clique Problem is NP-Complete
Proof: Claim: CLIQUE ∈ NP
Certificate: V', the vertices of the clique, a subset of V
Check that for each pair u, v in V', the edge (u,v) belongs to E
Claim: 3-CNF-SAT ≤p CLIQUE
The reduction algorithm runs in polynomial time

3-CNF → Clique
The reduction:
Let B = C1 ∧ C2 ∧ ... ∧ Ck be a 3-CNF formula with k
clauses, each of which has 3 distinct literals
For each clause put a triple of vertices in the
graph, one for each literal
Put an edge between two vertices if they are in
different triples and their literals are consistent,
meaning not each other's negation
Run an example:
B = (x ∨ ¬y ∨ ¬z) ∧ (¬x ∨ y ∨ z) ∧ (x ∨ y ∨ z)
3-CNF → Clique
Prove the reduction works:
If B has a satisfying assignment, then each clause
has at least one literal (vertex) that evaluates to 1
Picking one such true literal from each clause
gives a set V' of k vertices; V' is a clique (Why?)
If G has a clique V' of size k, it must contain one
vertex in each triple (clause) (Why?)
We can assign 1 to each literal corresponding with
a vertex in V', without fear of contradiction
Clique → Vertex Cover
A vertex cover for a graph G is a set of vertices
incident to every edge in G

Clique → Vertex Cover
The vertex cover problem: what is the
minimum size vertex cover in G?
Restated as a decision problem: does a vertex
cover of size k exist in G?
Thm 34.12: vertex cover is NP-Complete
Clique → Vertex Cover
First, show vertex cover in NP (How?)
Next, reduce k-clique to vertex cover
The complement G^C of a graph G contains exactly
those edges not in G
Compute G^C in polynomial time
G has a clique of size k iff G^C has a vertex cover of
size |V| - k
Clique → Vertex Cover
Claim: If G has a clique of size k, G^C has a
vertex cover of size |V| - k
Let V' be the k-clique
Then V - V' is a vertex cover in G^C:
Let (u,v) be any edge in G^C
Then u and v cannot both be in V' (Why?)
Thus at least one of u or v is in V - V' (why?), so
edge (u,v) is covered by V - V'
Since this holds for any edge in G^C, V - V' is a vertex cover
Clique → Vertex Cover
Claim: If G^C has a vertex cover V' ⊆ V, with
|V'| = |V| - k, then G has a clique of size k
For all u, v ∈ V, if (u,v) ∈ G^C then u ∈ V' or
v ∈ V' or both (Why?)
Contrapositive: if u ∉ V' and v ∉ V', then
(u,v) ∈ E
In other words, all vertices in V - V' are connected
by an edge, thus V - V' is a clique
Since |V| - |V'| = k, the size of the clique is k
Directed Hamiltonian Cycle →
Undirected Hamiltonian Cycle
What was the hamiltonian cycle problem
again?
For my next trick, I will reduce the directed
hamiltonian cycle problem to the undirected
hamiltonian cycle problem before your eyes
Which variant am I proving NP-Complete?
Draw a directed example on the board
What transformation do I need to effect?
Transformation:
Directed → Undirected Ham. Cycle
Transform graph G = (V, E) into G' = (V', E'):
Every vertex v in V transforms into 3 vertices
v1, v2, v3 in V' with edges (v1, v2) and (v2, v3) in E'
Every directed edge (v, w) in E transforms into the
undirected edge (v3, w1) in E' (draw it)
Can this be implemented in polynomial time?
Argue that a directed hamiltonian cycle in G
implies an undirected hamiltonian cycle in G'
Argue that an undirected hamiltonian cycle in G'
implies a directed hamiltonian cycle in G
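The transformation itself is mechanical and clearly polynomial. A sketch (vertex naming `(v, i)` is my own convention for the slides' v1, v2, v3):

```python
def directed_to_undirected(V, E):
    """Each vertex v becomes v1, v2, v3 joined in a path; each directed
    edge (v, w) becomes the single undirected edge (v3, w1)."""
    V2 = [(v, i) for v in V for i in (1, 2, 3)]
    E2 = [((v, 1), (v, 2)) for v in V] + [((v, 2), (v, 3)) for v in V]
    E2 += [((v, 3), (w, 1)) for (v, w) in E]
    return V2, E2

V = ['a', 'b', 'c']
E = [('a', 'b'), ('b', 'c'), ('c', 'a')]   # a directed 3-cycle
V2, E2 = directed_to_undirected(V, E)
print(len(V2), len(E2))   # 9 9   (3|V| vertices, 2|V| + |E| edges)
```

Both output sizes are linear in the input, so the reduction runs in polynomial time.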
Undirected Hamiltonian Cycle
Thus we can reduce the directed problem to
the undirected problem
What's left to prove the undirected
hamiltonian cycle problem NP-Complete?
Argue that the problem is in NP

Hamiltonian Cycle → TSP
The well-known traveling salesman problem:
Optimization variant: a salesman must travel to n
cities, visiting each city exactly once and finishing
where he begins. How to minimize travel time?
Model as a complete graph with cost c(i,j) to go from
city i to city j
How would we turn this into a decision
problem?
A: ask if there exists a TSP tour with cost < k
Hamiltonian Cycle → TSP
The steps to prove TSP is NP-Complete:
Prove that TSP ∈ NP (Argue this)
Reduce the undirected hamiltonian cycle problem
to the TSP
So if we had a TSP-solver, we could use it to solve the
hamilitonian cycle problem in polynomial time
How can we transform an instance of the hamiltonian
cycle problem to an instance of the TSP?
Can we do this in polynomial time?
The TSP
Random asides:
TSPs (and variants) have enormous practical
importance
E.g., for shipping and freighting companies
Lots of research into good approximation algorithms
Recently made famous as a DNA computing
problem
Review:
Directed → Undirected Ham. Cycle
Given: directed hamiltonian cycle is
NP-Complete (draw the example)
Transform graph G = (V, E) into G' = (V', E'):
Every vertex v in V transforms into 3 vertices
v1, v2, v3 in V' with edges (v1, v2) and (v2, v3) in E'
Every directed edge (v, w) in E transforms into the
undirected edge (v3, w1) in E' (draw it)

Review:
Directed → Undirected Ham. Cycle
Prove the transformation correct:
If G has a directed hamiltonian cycle, G' will have an
undirected cycle (straightforward)
If G' has an undirected hamiltonian cycle, G will
have a directed hamiltonian cycle:
The three vertices that correspond to a vertex v in G
must be traversed in order v1, v2, v3 or v3, v2, v1, since v2
cannot be reached from any other vertex in G'
Since 1's are connected to 3's, the order is the same for
all triples. Assume w.l.o.g. the order is v1, v2, v3.
Then G has a corresponding directed hamiltonian cycle

Review: Hamiltonian Cycle → TSP
The well-known traveling salesman problem:
Complete graph with cost c(i,j) from city i to city j
Is there a simple cycle over the cities with cost < k?
How can we prove the TSP is NP-Complete?
A: Prove TSP ∈ NP; reduce the undirected
hamiltonian cycle problem to TSP
TSP ∈ NP: straightforward
Reduction: need to show that if we can solve TSP
we can solve the ham. cycle problem
Review: Hamiltonian Cycle → TSP
To transform the ham. cycle problem on graph
G = (V,E) to TSP, create graph G' = (V,E'):
G' is a complete graph
Edges in E' that are also in E have weight 0
All other edges in E' have weight 1
TSP: is there a tour of G' with weight 0?
If G has a hamiltonian cycle, G' has a cycle w/ weight 0
If G' has a cycle w/ weight 0, every edge of that cycle has
weight 0 and is thus in G. Thus G has a ham. cycle
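This weight assignment can be sketched directly in Python (identifiers are illustrative, not from the slides):

```python
from itertools import combinations

def ham_cycle_to_tsp(n, edges):
    """Given an undirected graph G on vertices 0..n-1, build the cost
    function of the complete graph G': weight 0 for edges of G, weight 1
    for all other pairs. G has a Hamiltonian cycle iff G' has a tour of
    total weight 0."""
    present = {frozenset(e) for e in edges}
    return {(i, j): (0 if frozenset((i, j)) in present else 1)
            for i, j in combinations(range(n), 2)}
```

On the 4-cycle 0-1-2-3-0, the four cycle edges get weight 0 and the two chords get weight 1, so the only weight-0 tour is the Hamiltonian cycle itself.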
General Comments
Literally hundreds of problems have been
shown to be NP-Complete
Some reductions are profound, some are
comparatively easy, many are easy once the
key insight is given
You can expect a simple NP-Completeness
proof on the final
Other NP-Complete Problems
Subset-sum: Given a set of integers, does
there exist a subset that adds up to some
target T?
0-1 knapsack: when weights not just integers
Hamiltonian path: Obvious
Graph coloring: can a given graph be colored
with k colors such that no adjacent vertices
are the same color?
Etc

Backtracking
Sum of Subsets
and
Knapsack

Backtracking 112
Backtracking
Two versions of backtracking algorithms
Solution needs only to be feasible (satisfy the
problem's constraints)
sum of subsets
Solution needs also to be optimal
knapsack
The backtracking method
A given problem has a set of constraints
and possibly an objective function
The solution optimizes an objective
function, and/or is feasible.
We can represent the solution space for
the problem using a state space tree
The root of the tree represents 0 choices,
Nodes at depth 1 represent the first choice
Nodes at depth 2 represent the second choice,
etc.
In this tree a path from the root to a leaf represents
a candidate solution
Backtracking
Problem:
Find out all 3-bit binary numbers for which the
sum of the 1's is greater than or equal to 2.

The only way to solve this problem is to check all
the possibilities: (000, 001, 010, ....,111)

The 8 possibilities are called the search space of
the problem. They can be organized into a tree.
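Brute-force enumeration of this search space is a few lines of Python (a sketch; the function name is ours):

```python
from itertools import product

def bit_strings_with_enough_ones(bits=3, min_ones=2):
    """Enumerate all `bits`-bit binary numbers whose count of 1's is at
    least `min_ones` -- every point of the search space is checked."""
    return [''.join(map(str, b))
            for b in product((0, 1), repeat=bits)
            if sum(b) >= min_ones]
```

For 3 bits and at least two 1's this returns 011, 101, 110, 111.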
Backtracking: Illustration

The search space as a tree; an underscore marks a bit not yet chosen:

                      _ _ _
            0 _ _               1 _ _
        0 0 _   0 1 _       1 0 _   1 1 _
      000  001 010  011   100  101 110  111
Sum of subsets
Problem: Given n positive integers w1, ..., wn
and a positive integer S, find all subsets
of w1, ..., wn that sum to S.
Example:
n=3, S=6, and w1=2, w2=4, w3=6

Solutions:
{2,4} and {6}
Sum of subsets
We will assume a binary state space tree.

The nodes at depth 1 are for including (yes,
no) item 1, the nodes at depth 2 are for item
2, etc.

The left branch includes wi, and the right
branch excludes wi.
The nodes contain the sum of the weights
included so far
Sum of subset Problem:
State Space Tree for 3 items
w1 = 2, w2 = 4, w3 = 6 and S = 6
[Figure: binary tree with a yes/no branch for each item i1, i2, i3;
each node stores the sum of the included integers so far.
Leaf sums, left to right: 12, 6, 8, 2, 10, 4, 6, 0 —
the two leaves with sum 6 correspond to the solutions {2,4} and {6}.]
A Depth First Search solution
Problems can be solved using depth first search
of the (implicit) state space tree.

Each node will save its depth and its (possibly
partial) current solution

DFS can check whether node v is a leaf.
If it is a leaf then check if the current solution
satisfies the constraints
Code can be added to find the optimal solution
A DFS solution
Such a DFS algorithm will be very slow.

It visits every node of the state space tree:
it does not detect early whether a partial
solution can still lead to a feasible solution

Is there a more efficient solution?
Backtracking
Definition: We call a node nonpromising
if it cannot lead to a feasible (or optimal)
solution; otherwise it is promising

Main idea: Backtracking consists of
doing a DFS of the state space tree,
checking whether each node is promising
and, if the node is nonpromising,
backtracking to the node's parent
Backtracking
The state space tree consisting of
expanded nodes only is called the pruned
state space tree
The following slide shows the pruned state
space tree for the sum of subsets example
There are only 15 nodes in the pruned
state space tree
The full state space tree has 31 nodes
A Pruned State Space Tree (find all solutions)
w1 = 3, w2 = 4, w3 = 5, w4 = 6; S = 13
[Figure: the pruned state space tree; each node is labeled with the
sum of the weights included so far (0, 3, 7, 12, 8, 4, 9, 5, ...).
The sum 13 is reached on the path that includes w1 = 3, w2 = 4 and w4 = 6.]
Sum of subsets problem
Backtracking algorithm
void checknode (node v) {
node u

if (promising ( v ))
if (aSolutionAt( v ))
write the solution
else //expand the node
for ( each child u of v )
checknode ( u )
}
Backtracking and Recursion
Backtracking is easily implemented with
recursion because:

The run-time stack takes care of keeping track
of the choices that got us to a given point.

Upon failure we can get to the previous choice
simply by returning a failure code from the
recursive call.
Improving Backtracking: Search Pruning
Search pruning will help us to reduce the search space and hence get a solution faster.

The idea is to avoid those paths that may not lead to a solution as early as possible, by
finding contradictions so that we can backtrack immediately without the need to build a
hopeless solution vector.
Checknode
Checknode uses the functions:

promising(v) which checks that the partial solution
represented by v can lead to the required solution

aSolutionAt(v) which checks whether the partial
solution represented by node v solves the
problem.
Sum of subsets: when is a node
promising?
Consider a node at depth i
weightSoFar = weight of node, i.e., sum of numbers
included in the partial solution the node represents

totalPossibleLeft = weight of the remaining items
i+1 to n (for a node at depth i)
A node at depth i is nonpromising
if (weightSoFar + totalPossibleLeft < S )
or (weightSoFar + w[i+1] > S )
To be able to use this promising function, the wi
must be sorted in nondecreasing order
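The two tests above translate directly into code. A Python sketch with 0-based indexing (so the node at depth i has decided items w[0..i-1], and w[i] is the next item to try):

```python
def promising(i, weight_so_far, total_possible_left, w, S):
    """Node at depth i is promising unless (a) even taking every
    remaining item falls short of S, or (b) S is not yet reached and
    adding the next (smallest remaining) item would overshoot it.
    Requires w sorted in nondecreasing order."""
    if weight_so_far + total_possible_left < S:
        return False                     # can never reach S from here
    if weight_so_far == S:
        return True                      # already a solution
    return i < len(w) and weight_so_far + w[i] <= S
```

With w = [3, 4, 5, 6] and S = 13, the node with weightSoFar = 12 at depth 3 is nonpromising: adding the remaining 6 overshoots, and stopping falls short.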
A Pruned State Space Tree
w1 = 3, w2 = 4, w3 = 5, w4 = 6; S = 13
[Figure: the same pruned tree with its 15 nodes numbered 1-15 in the
order the recursive calls are made; marks show where the search backtracks.
The solution 13 = 3 + 4 + 6 is found partway through the search.]
Nodes numbered in call order
sumOfSubsets ( i, weightSoFar, totalPossibleLeft )
1) if (promising ( i )) //may lead to solution
2) then if ( weightSoFar == S )
3) then print include[ 1 ] to include[ i ] //found solution
4) else //expand the node when weightSoFar < S
5) include[ i + 1 ] = "yes" //try including
6) sumOfSubsets ( i + 1,
weightSoFar + w[i + 1],
totalPossibleLeft - w[i + 1] )
7) include[ i + 1 ] = "no" //try excluding
8) sumOfSubsets ( i + 1, weightSoFar,
totalPossibleLeft - w[i + 1] )

boolean promising ( i )
1) return ( weightSoFar + totalPossibleLeft >= S ) &&
( weightSoFar == S || weightSoFar + w[i + 1] <= S )
Prints all solutions!
Initial call: sumOfSubsets( 0, 0, Σ wi for i = 1..n )
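A runnable version of this pseudocode, as a Python sketch with 0-based indexing (the pseudocode's sumOfSubsets and promising folded into one function; variable names otherwise follow the slides):

```python
def sum_of_subsets(w, S):
    """Backtracking sum-of-subsets: returns all subsets of w (sorted
    nondecreasing) that sum to S, in the order the search finds them."""
    w = sorted(w)
    found, include = [], [False] * len(w)

    def promising(i, weight, left):
        return (weight + left >= S and
                (weight == S or (i < len(w) and weight + w[i] <= S)))

    def rec(i, weight, left):
        if not promising(i, weight, left):
            return                                    # prune this node
        if weight == S:
            found.append([w[j] for j in range(i) if include[j]])
            return                                    # solution found
        include[i] = True                             # try including w[i]
        rec(i + 1, weight + w[i], left - w[i])
        include[i] = False                            # try excluding w[i]
        rec(i + 1, weight, left - w[i])

    rec(0, 0, sum(w))
    return found
```

On the earlier example (w = 2, 4, 6 and S = 6) it returns {2, 4} and {6}; on w = 3, 4, 5, 6 with S = 13 it returns only {3, 4, 6}.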
Eight Queen Problem
Attempts to place 8 queens on a
chessboard in such a way that no queen can
attack any other.

A queen can attack another queen if they
are in the same row, column or diagonal.

This problem can be solved by trying to
place the first queen, then the second
queen so that it cannot attack the first, and
then the third so that it does not conflict
with previously placed queens.
Eight Queen Problem
The solution is a vector of length 8
(a(1), a(2), a(3), ...., a(8)).

a(i) corresponds to the column where we should place the i-th queen.

The solution is built as a partial solution, element by element, until it is complete.

We backtrack whenever we reach a partial solution of length k that cannot be
expanded any more.
Eight Queen Problem: Algorithm
putQueen(row)
{
for every position col on the same row
if position col is available
place the next queen in position col
if (row<8)
putQueen(row+1);
else success;
remove the queen from position col
}
Eight Queen Problem: Implementation
Define an 8 by 8 array of 1s and 0s to represent the chessboard

The array is initialized to 1s, and when a queen is put in position (c,r), board[r][c] is set to
zero

Note that the search space is very huge:
8^8 = 16,777,216 possibilities.

Is there a way to reduce the search space?
Yes: search pruning.
Eight Queen Problem: Implementation
We know that for queens:
each row will have exactly one queen
each column will have exactly one queen
each diagonal will have at most one queen

This will help us to model the chessboard not as a 2-D array, but as a set of rows, columns
and diagonals.

To simplify the presentation, study for smaller chessboard, 4 by 4
Implementing the Chessboard
First: we need an array to store the columns of the queens placed so far
positionInRow = [1, 3, 0, 2]
Implementing the Chessboard, Cont'd
We need an array to keep track of the availability status of the columns when we assign
queens.
column = [F, T, F, T]
(suppose that we have placed two queens)
Implementing the Chessboard, Cont'd
We have 7 left diagonals; we want to keep track of the diagonals still available after the
queens allocated so far
leftDiagonal = [T, T, F, T, F, T, T]
Implementing the Chessboard, Cont'd
Likewise for the 7 right diagonals, we keep track of those still available after the
queens allocated so far
rightDiagonal = [T, F, T, T, F, T, T]
Backtracking
A 4-queens solution found by backtracking, as (row, column) pairs:
[1,2], [2,4], [3,1], [4,3]
The putQueen Recursive Method
static void putQueen(int row){
  for (int col=0; col<squares; col++)
    if (column[col]==available && leftDiagonal[row+col]==available &&
        rightDiagonal[row-col+norm]==available)
    {
      positionInRow[row]=col;                // record the queen's column
      column[col]=!available;                // mark column and diagonals taken
      leftDiagonal[row+col]=!available;
      rightDiagonal[row-col+norm]=!available;
      if (row < squares-1)
        putQueen(row+1);                     // place the next queen
      else
        System.out.println(" solution found");
      column[col]=available;                 // undo the move (backtrack)
      leftDiagonal[row+col]=available;
      rightDiagonal[row-col+norm]=available;
    }
}
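The same routine in self-contained runnable form, here as a Python sketch that counts solutions instead of printing them (array names follow the Java version; `norm = n - 1` shifts row - col into a valid index):

```python
def count_queens(n):
    """Backtracking n-queens using the column / left-diagonal /
    right-diagonal availability arrays from the slides."""
    norm = n - 1
    column = [True] * n
    left_diag = [True] * (2 * n - 1)    # indexed by row + col
    right_diag = [True] * (2 * n - 1)   # indexed by row - col + norm
    count = 0

    def put_queen(row):
        nonlocal count
        for col in range(n):
            if column[col] and left_diag[row + col] and right_diag[row - col + norm]:
                column[col] = left_diag[row + col] = right_diag[row - col + norm] = False
                if row == n - 1:
                    count += 1              # all n queens placed
                else:
                    put_queen(row + 1)
                # undo the move and keep searching (backtrack)
                column[col] = left_diag[row + col] = right_diag[row - col + norm] = True

    put_queen(0)
    return count
```

A 4x4 board has 2 solutions; the standard 8x8 board has 92.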
Backtracking for optimization
problems
To deal with optimization we compute:
best - value of best solution achieved so far
value(v) - the value of the solution at node v
Modify promising(v)

Best is initialized to a value that is equal to a candidate solution or
worse than any possible solution.
Best is updated to value(v) if the solution at v is better

By better we mean:
larger in the case of maximization and
smaller in the case of minimization
Modifying promising
A node is promising when
it is feasible and can lead to a feasible solution and
there is a chance that a better solution than best can
be achieved by expanding it
Otherwise it is nonpromising

How is that chance determined?
A bound on the best solution that can be achieved
by expanding the node is computed and compared
to best
If the bound > best for maximization (< best for
minimization), the node is promising
Modifying promising for
Maximization Problems
For a maximization problem the bound is
an upper bound:
the largest possible solution that can be
achieved by expanding the node is less than or
equal to the upper bound
If the upper bound > best so far, a better
solution may be found by expanding the
node, and the feasible node is promising
Modifying promising for
Minimization Problems
For minimization the bound is a lower bound:
the smallest possible solution that can be
achieved by expanding the node is greater than
or equal to the lower bound

If the lower bound < best, a better solution may
be found and the feasible node is promising
Template for backtracking in the
case of optimization problems.
Procedure checknode (node v ) {
node u ;

if ( value(v) is better than best )
best = value(v);
if (promising (v) )
for (each child u of v)
checknode (u );
}
best is the best value so far
and is initialized to a value
that is equal or worse than
any possible solution.

value(v) is the value of the
solution at the node.
Notation for knapsack
We use maxprofit to denote best
profit(v) to denote value(v)
The state space tree for knapsack
Each node v will include 3 values:
profit(v) = sum of the profits of all items included in
the knapsack (on the path from the root to v)
weight(v) = sum of the weights of all items
included in the knapsack (on the path from the root to v)
upperBound(v) = a value greater than or equal
to the maximum profit that can be found by
expanding the whole subtree of the state space
tree with root v
The nodes are numbered in the order of
expansion
Promising nodes for 0/1 knapsack
Node v is promising if weight(v) < C and
upperBound(v) > maxprofit
Otherwise it is nonpromising
Note that when weight(v) = C or maxprofit =
upperBound(v), the node is nonpromising
Main idea for upper bound
Theorem: The optimal profit for 0/1 knapsack ≤ the
optimal profit for KWF (the knapsack problem with fractions)
Proof:
Clearly the optimal solution to 0/1 knapsack is
a possible solution to KWF. So the optimal
profit of KWF is greater than or equal to that of 0/1
knapsack.
Main idea: KWF can be used for computing the
upper bounds
Computing the upper bound for 0/1
knapsack
Given node v at depth i:
UpperBound(v) =
KWF2(i+1, weight(v), profit(v), w, p, C, n)
KWF2 requires that the items be ordered by
nonincreasing pi / wi, so if we arrange the
items in this order before applying the
backtracking algorithm, KWF2 will pick the
remaining items in the required order.
KWF2(i, weight, profit, w, p, C, n)
1. bound = profit
2. for j = i to n
3.   x[j] = 0 //initialize variables to 0
4. while (weight < C) && (i <= n) //not full and more items left
5.   if weight + w[i] <= C //room for next item
6.     x[i] = 1 //item i is added to the knapsack
7.     weight = weight + w[i]; bound = bound + p[i]
8.   else
9.     x[i] = (C - weight)/w[i] //fraction of item i added to the knapsack
10.    weight = C; bound = bound + p[i]*x[i]
11.  i = i + 1 //next item
12. return bound
KWF2 is in O(n) (assuming the items are sorted before applying backtracking)
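A runnable version of KWF2 as a Python sketch (0-based item indexing; the x[] array is dropped since only the bound is needed):

```python
def kwf2(i, weight, profit, w, p, C):
    """Greedy fractional-knapsack bound starting from item i, given the
    weight and profit already committed. Items must be sorted by
    nonincreasing p[i]/w[i]."""
    bound, n = profit, len(w)
    while weight < C and i < n:
        if weight + w[i] <= C:
            weight += w[i]                       # whole item fits
            bound += p[i]
        else:
            bound += p[i] * (C - weight) / w[i]  # take a fraction, knapsack full
            weight = C
        i += 1
    return bound
```

For the example worked later in these slides (p = [40, 30, 50, 10], w = [2, 5, 10, 5], C = 16) this reproduces the bounds 115 and 82 computed for nodes 1 and 13.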
C++ version
The arrays w, p, include and bestset have size
n+1.
Location 0 is not used
include contains the current solution
bestset the best solution so far
Before calling Knapsack
numbest = 0; //number of items considered
maxprofit = 0;
knapsack(0,0,0);
cout << maxprofit;
for (i=1; i<=numbest; i++)
cout << bestset[i]; //the best solution

maxprofit is initialized to $0, which is the
worst profit that can be achieved with positive
pi's
In Knapsack, before determining if node v is
promising, maxprofit and bestset are updated
knapsack(i, profit, weight)
if ( weight <= C && profit > maxprofit )
// save better solution
maxprofit = profit //save new profit
numbest = i; bestset = include //save solution
if promising(i)
include[ i + 1 ] = "yes"
knapsack(i+1, profit+p[i+1], weight+w[i+1])
include[ i + 1 ] = "no"
knapsack(i+1, profit, weight)
Promising(i)
promising(i)
//Cannot get a solution by expanding node
if weight >= C return false
//Compute upper bound
bound = KWF2(i+1, weight, profit, w, p, C, n)
return (bound>maxprofit)
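Putting knapsack and promising together, a self-contained Python sketch (0-based indices; it returns the best profit and the chosen item indices rather than using the globals of the C++ version):

```python
def knapsack_bt(p, w, C):
    """0/1 knapsack by backtracking, pruning with the KWF2-style
    fractional bound. Items must be sorted by nonincreasing p[i]/w[i]."""
    n = len(p)
    best_profit, best_set = 0, []
    include = [False] * n

    def bound(i, weight, profit):
        # greedy fractional bound over the remaining items i..n-1
        b = profit
        while weight < C and i < n:
            if weight + w[i] <= C:
                weight += w[i]; b += p[i]
            else:
                b += p[i] * (C - weight) / w[i]; weight = C
            i += 1
        return b

    def rec(i, profit, weight):
        nonlocal best_profit, best_set
        if weight <= C and profit > best_profit:
            best_profit = profit                 # record the improvement
            best_set = [j for j in range(i) if include[j]]
        # promising: room left and the bound can still beat best_profit
        if i < n and weight < C and bound(i, weight, profit) > best_profit:
            include[i] = True
            rec(i + 1, profit + p[i], weight + w[i])
            include[i] = False
            rec(i + 1, profit, weight)

    rec(0, 0, 0)
    return best_profit, best_set
```

On the slides' example (p = [40, 30, 50, 10], w = [2, 5, 10, 5], C = 16) this finds maxprofit = 90 using items 1 and 3, and prunes the node with bound 82 once maxprofit has reached 90, matching the worked calculation for node 13.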
Example
Suppose n = 4, C = 16, and we have the following:

i   pi   wi   pi/wi
1   40    2    20
2   30    5     6
3   50   10     5
4   10    5     2

Note that the items are in the correct order needed
by KWF2
The calculation for node 1
maxprofit = 0 (n = 4, C = 16)
Node 1
a) profit = 0
weight = 0

b) bound = profit + p1 + p2 + (C - 7) × p3/w3
= 0 + 40 + 30 + (16 - 7) × 50/10 = $115

c) Node 1 is promising because its weight 0 < C = 16
and its bound 115 > 0, the value of maxprofit.
The calculation for node 2
Item 1 with profit 40 and weight 2 is included;
maxprofit = 40
a) profit = 40
weight = 2

b) bound = profit + p2 + (C - 7) × p3/w3
= 40 + 30 + (16 - 7) × 50/10 = 115

c) Node 2 is promising because its weight 2 < C = 16
and its bound 115 > 40, the value of maxprofit.
The calculation for node 13
Item 1 with profit 40 and weight 2 is not
included
At this point maxprofit = 90 and is not changed
a) profit = 0
weight = 0

b) bound = profit + p2 + p3 + (C - 15) × p4/w4
= 0 + 30 + 50 + (16 - 15) × 10/5 = 82

c) Node 13 is nonpromising because its bound 82
< 90, the value of maxprofit.
Example
[Figure: the pruned state space tree for the example. Each node lists its
profit, weight and bound; the items, in order, are Item 1 [$40, 2],
Item 2 [$30, 5], Item 3 [$50, 10], Item 4 [$10, 5]. maxprofit grows
0 → 40 → 70 → 80 → 90 during the search; the optimal node has profit $90
and weight 12. Legend: F - not feasible (e.g. weight 17 > 16),
N - not optimal, B - bound cannot lead to the best solution
(e.g. 82 < 90, 50 < 90). Node 13, with bound $82, is pruned.]
