
Set cover problem

From Wikipedia, the free encyclopedia



The set covering problem (SCP) is a classical question in combinatorics, computer science and complexity theory.
It is a problem "whose study has led to the development of fundamental techniques for the entire field" of approximation algorithms.[1] It was also one of Karp's 21 NP-complete problems, shown to be NP-complete in 1972.
Given a set of elements U = {1, 2, ..., m} (called the universe) and a set S of n sets whose union equals the universe, the set cover problem is to identify the smallest subset of S whose union equals the universe. For example, consider the universe U = {1, 2, 3, 4, 5} and the set of sets S = {{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}}. Clearly the union of S is U. However, we can cover all of the elements with the following, smaller number of sets: {{1, 2, 3}, {4, 5}}.
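For instance, a tiny brute-force sketch in Python that recovers such a minimum cover for the example above (the names universe, subsets and smallest_cover are illustrative, and exhaustive search is only viable for very small instances):

from itertools import combinations

# Illustrative instance from the example above.
universe = {1, 2, 3, 4, 5}
subsets = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]

def smallest_cover(universe, subsets):
    # Try sub-collections in increasing size; the first one whose union
    # equals the universe is a minimum set cover.
    for k in range(1, len(subsets) + 1):
        for combo in combinations(subsets, k):
            if set().union(*combo) == universe:
                return list(combo)
    return None  # no cover exists

print(smallest_cover(universe, subsets))  # [{1, 2, 3}, {4, 5}]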

Simply put, the algorithm initializes the distance to the source to 0 and all other nodes to infinity. Then for all edges, if the distance to the destination can be shortened by taking the edge, the distance is updated to the new lower value. At each iteration i that the edges are scanned, the algorithm finds all shortest paths of at most length i edges. Since the longest possible path without a cycle can be |V| - 1 edges, the edges must be scanned |V| - 1 times to ensure the shortest path has been found for all nodes. A final scan of all the edges is performed and if any distance is updated, then a path of length |V| edges has been found, which can only occur if at least one negative cycle exists in the graph.
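A minimal Python sketch of this procedure (the edge-list representation, 0-indexed nodes, and the function name are illustrative choices, not taken from the text):

def bellman_ford(num_nodes, edges, source):
    """edges: list of (u, v, weight) tuples; nodes are 0..num_nodes-1."""
    INF = float("inf")
    dist = [INF] * num_nodes
    dist[source] = 0  # distance to the source is 0, all others infinity

    # Scan all edges |V| - 1 times; pass i finds all shortest paths of at most i edges.
    for _ in range(num_nodes - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w

    # Final scan: if any distance still improves, a negative cycle is reachable.
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            raise ValueError("graph contains a negative cycle")
    return dist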

3.
a) Len(x, i) = max( Len(x_j, j) for every j < i with x_j < x, together with 0 ) + 1, i.e. the longest ascending subsequence ending at position i extends the best earlier subsequence whose last element is less than x.
b) You could use the exact same recurrence but this time add a third parameter (a vector containing the longest ascending subsequence ending with x_i). To compute it for x_i, simply take the largest vector previously computed whose tail is less than x_i and then append x_i (see the sketch below).
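A hedged O(n^2) Python sketch of (a) and (b); instead of storing whole vectors as in (b), it keeps parent pointers (an illustrative substitution that rebuilds the same answer with less copying):

def longest_ascending_subsequence(x):
    n = len(x)
    if n == 0:
        return []
    length = [1] * n      # length[i] = Len(x_i, i): longest ascending subsequence ending at i
    parent = [-1] * n     # index of the previous element in that subsequence

    for i in range(n):
        for j in range(i):
            if x[j] < x[i] and length[j] + 1 > length[i]:
                length[i] = length[j] + 1
                parent[i] = j

    # Reconstruct the subsequence ending at the position with the largest length.
    best = max(range(n), key=lambda i: length[i])
    seq = []
    while best != -1:
        seq.append(x[best])
        best = parent[best]
    return seq[::-1]

print(longest_ascending_subsequence([3, 1, 4, 1, 5, 9, 2, 6]))  # one longest: [3, 4, 5, 9]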

This can be solved in O(n^2) using dynamic programming. Basically, the problem is about building the longest palindromic subsequence of x[i...j] from the longest subsequences of x[i+1...j], x[i...j-1] and x[i+1...j-1] (the last one if the first and last letters are the same).
Firstly, the empty string and any single-character string are trivially palindromes. Notice that for a substring x[i...j], if x[i] == x[j], the length of the longest palindrome is the longest palindrome over x[i+1...j-1] plus 2. If they don't match, the longest palindrome is the maximum of that of x[i+1...j] and x[i...j-1].
This gives us the function:
longest(i,j) = j-i+1                                 if j-i <= 0,
               2 + longest(i+1,j-1)                  if x[i] == x[j],
               max(longest(i+1,j), longest(i,j-1))   otherwise

You can simply implement a memoized version of that function, or code a table of
longest[i][j] bottom up.
This gives you only the length of the longest subsequence, not the actual
subsequence itself. But it can easily be extended to do that as well.
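For instance, a bottom-up Python sketch of the longest[i][j] table described above; the walk at the end is one way to extend it to return the subsequence itself (all names are illustrative):

def longest_palindromic_subsequence(x):
    n = len(x)
    if n == 0:
        return 0, ""
    # longest[i][j] = length of the longest palindromic subsequence of x[i..j]
    longest = [[0] * n for _ in range(n)]
    for i in range(n):
        longest[i][i] = 1

    for size in range(2, n + 1):            # substring length
        for i in range(n - size + 1):
            j = i + size - 1
            if x[i] == x[j]:
                longest[i][j] = 2 + (longest[i + 1][j - 1] if size > 2 else 0)
            else:
                longest[i][j] = max(longest[i + 1][j], longest[i][j - 1])

    # Walk the table to recover one longest palindromic subsequence.
    left, right = [], []
    i, j = 0, n - 1
    while i < j:
        if x[i] == x[j]:
            left.append(x[i]); right.append(x[j])
            i += 1; j -= 1
        elif longest[i + 1][j] >= longest[i][j - 1]:
            i += 1
        else:
            j -= 1
    middle = x[i] if i == j else ""
    return longest[0][n - 1], "".join(left) + middle + "".join(reversed(right))

print(longest_palindromic_subsequence("character"))  # (5, 'carac')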

1. Dynamic Programming ([KT] Ch.6: 6.2, 6.4, 6.8, 6.10)
2. Network Flow ([KT] Ch.7: 7.1, 7.2, 7.3, 7.5, 7.6, 7.7)
3. Computational Intractability ([KT] Ch.8: 8.1, 8.2, 8.3, 8.4, 8.5, 8.8)

In computer science, the subset sum problem is an important problem in complexity theory and cryptography. The problem is this: given a set (or multiset) of integers, is there a non-empty subset whose sum is zero? For example, given the set {-7, -3, -2, 5, 8}, the answer is yes because the subset {-3, -2, 5} sums to zero. The problem is NP-complete.
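A brute-force Python sketch that checks this example (exponential in the size of the set, so it only illustrates the definition, not a practical algorithm):

from itertools import combinations

def has_zero_subset(nums):
    # Check every non-empty subset for a zero sum (exponential; illustration only).
    return any(sum(c) == 0
               for r in range(1, len(nums) + 1)
               for c in combinations(nums, r))

print(has_zero_subset([-7, -3, -2, 5, 8]))  # True: {-3, -2, 5} sums to zero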

The knapsack problem is interesting from the perspective of computer science for many reasons:

The decision problem form of the knapsack problem (Can a value of at least V be achieved
without exceeding the weight W?) is NP-complete, thus there is no possible algorithm both correct
and fast (polynomial-time) on all cases, unless P=NP.

While the decision problem is NP-complete, the optimization problem is NP-hard: its resolution is at least as difficult as the decision problem, and there is no known polynomial algorithm which can tell, given a solution, whether it is optimal (which would mean that there is no solution with a larger V, thus solving the NP-complete decision problem).

There is a pseudo-polynomial time algorithm using dynamic programming.

There is a fully polynomial-time approximation scheme, which uses the pseudo-polynomial time algorithm as a subroutine, described below.

Many cases that arise in practice, and "random instances" from some distributions, can
nonetheless be solved exactly.

Meet-in-the-middle algorithm
input:  a set of items with weights and values
output: the greatest combined value of a subset

partition the set {1...n} into two sets A and B of approximately equal size
compute the weights and values of all subsets of each set
for each subset of A
    find the subset of B of greatest value such that the combined weight is less than W
keep track of the greatest combined value seen so far
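A hedged Python sketch of this outline; sorting B's subsets by weight, keeping a running maximum of value, and binary searching is one common way to implement the "find the subset of B of greatest value" step, and all names and the sample items are illustrative:

from bisect import bisect_right
from itertools import combinations

def subset_sums(items):
    """All (weight, value) pairs of subsets of items (including the empty subset)."""
    sums = []
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            sums.append((sum(w for w, _ in combo), sum(v for _, v in combo)))
    return sums

def knapsack_meet_in_the_middle(items, W):
    """items: list of (weight, value); returns the greatest value with weight <= W."""
    half = len(items) // 2
    A, B = items[:half], items[half:]

    # Sort B's subsets by weight and keep a running maximum of value, so the best
    # B-subset fitting any remaining capacity can be found by binary search.
    b_sums = sorted(subset_sums(B))
    b_weights, best_value = [], []
    running = 0
    for w, v in b_sums:
        running = max(running, v)
        b_weights.append(w)
        best_value.append(running)

    best = 0
    for wa, va in subset_sums(A):
        if wa > W:
            continue
        idx = bisect_right(b_weights, W - wa) - 1  # heaviest B-subset that still fits
        if idx >= 0:
            best = max(best, va + best_value[idx])
    return best

items = [(3, 4), (4, 5), (2, 3), (5, 8)]          # (weight, value) pairs, illustrative
print(knapsack_meet_in_the_middle(items, 7))      # 11: the items of weight 2 and 5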

Unbounded knapsack problem


If all weights (w_1, ..., w_n) are nonnegative integers, the knapsack problem can be solved in pseudo-polynomial time using dynamic programming. The following describes a dynamic programming solution for the unbounded knapsack problem.

To simplify things, assume all weights are strictly positive (w_i > 0). We wish to maximize total value subject to the constraint that total weight is less than or equal to W. Then for each w <= W, define m[w] to be the maximum value that can be attained with total weight less than or equal to w. m[W] then is the solution to the problem.
Observe that m[w] has the following properties:

m[0] = 0 (the sum of zero items, i.e., the summation of the empty set)

m[w] = max(v_i + m[w - w_i]) over all items i with w_i <= w,

where v_i is the value of the i-th kind of item.

(To formulate the equation above, the idea used is that the solution for a knapsack is the same as the value of one correct item plus the solution for a knapsack with smaller capacity, specifically one with the capacity reduced by the weight of that chosen item.)
Here the maximum of the empty set is taken to be zero. Tabulating the results from m[0] up through m[W] gives the solution. Since the calculation of each m[w] involves examining n items, and there are W values of m[w] to calculate, the running time of the dynamic programming solution is O(nW). Dividing w_1, ..., w_n and W by their greatest common divisor is a way to improve the running time.
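A short Python sketch of this tabulation (function and variable names are illustrative):

def unbounded_knapsack(weights, values, W):
    """m[w] = maximum value attainable with total weight <= w, items reusable."""
    m = [0] * (W + 1)                      # m[0] = 0: the empty knapsack has value 0
    for w in range(1, W + 1):
        # Maximum over every item that still fits; max of the empty set is 0.
        m[w] = max((values[i] + m[w - weights[i]]
                    for i in range(len(weights)) if weights[i] <= w),
                   default=0)
    return m[W]

print(unbounded_knapsack([2, 3, 4], [3, 5, 7], 7))  # 12: one item of weight 3, one of weight 4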


The O(nW) complexity does not contradict the fact that the knapsack problem is NP-complete, since W, unlike n, is not polynomial in the length of the input to the problem. The length of the input to the problem is proportional to the number of bits in W, not to W itself.

0/1 knapsack problem

A similar dynamic programming solution for the 0/1 knapsack problem also runs in pseudo-polynomial time. Assume w_1, w_2, ..., w_n, W are strictly positive integers. Define m[i, w] to be the maximum value that can be attained with weight less than or equal to w using items up to i (first i items).

We can define m[i, w] recursively as follows:

m[0, w] = 0

m[i, w] = m[i-1, w] if w_i > w (the new item is more than the current weight limit)

m[i, w] = max(m[i-1, w], m[i-1, w - w_i] + v_i) if w_i <= w.

The solution can then be found by calculating m[n, W]. To do this efficiently we can use a table to store previous computations.
The following is pseudo code for the dynamic program:

// Input:
// Values (stored in array v)
// Weights (stored in array w)
// Number of distinct items (n)
// Knapsack capacity (W)

for j from 0 to W do
    m[0, j] := 0
end for

for i from 1 to n do
    for j from 0 to W do
        if w[i] <= j then
            m[i, j] := max(m[i-1, j], m[i-1, j - w[i]] + v[i])
        else
            m[i, j] := m[i-1, j]
        end if
    end for
end for
This solution will therefore run in O(nW) time and O(nW) space. Additionally, if we use only a 1-dimensional array m[w] to store the current optimal values and pass over this array n times (once per item i), rewriting from m[W] down to m[w_i] every time, we get the same result for only O(W) space.
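A Python sketch of the space-optimized variant just described, using a single 1-dimensional array (names are illustrative; weights and values are 0-indexed here):

def knapsack_01(weights, values, W):
    """Maximum value of a subset of items with total weight <= W (each item used at most once)."""
    m = [0] * (W + 1)
    for i in range(len(weights)):
        # Rewrite from m[W] down to m[weights[i]] so each item is counted at most once.
        for j in range(W, weights[i] - 1, -1):
            m[j] = max(m[j], m[j - weights[i]] + values[i])
    return m[W]

print(knapsack_01([3, 4, 2, 5], [4, 5, 3, 8], 7))  # 11: the items of weight 2 and 5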

Note: Algorithm V should work for EVERY G, for SOME C. For every input that is in the problem there should EXIST information (a certificate C) that lets us verify that it is in the problem. That is, there should not be such an input for which the information doesn't exist.
2) Prove it's NP-hard.
This involves taking a known NP-complete problem like SAT (the set of boolean expressions of the form "(A OR B OR C) AND (D OR E OR F) AND ..." that are satisfiable, i.e. there exists some setting of these booleans which makes the expression true).
Then reduce the NP-complete problem to your problem in polynomial time.
That is, given some input X for SAT (or whatever NP-complete problem you are
using), create some input Y for your problem such that X is in SAT if and only if Y is in
your problem. The function f:X -> Y must run in polynomial time.
In the example above the input Y would be the graph G and the size of the vertex
cover k.
For a full proof, you'd have to prove both:

that X is in SAT => Y in your problem

and Y in your problem => X in SAT.
