
Single-Source Shortest Paths

Bellman-Ford Algorithm; Shortest Paths in DAGs

All-Pairs Shortest Paths: Warshall
Negative-Weight Edges

What if we have negative-weight edges?

[Figure: graph with source s and vertices a, b, c, d, e, f, g, h, i, j; the
shortest-path weight δ(s, v) is shown inside each vertex.]

s → a: only one path
    δ(s, a) = w(s, a) = 3

s → b: only one path
    δ(s, b) = w(s, a) + w(a, b) = 3 + (-4) = -1

s → c: infinitely many paths
    ⟨s, c⟩, ⟨s, c, d, c⟩, ⟨s, c, d, c, d, c⟩, ...
    the cycle ⟨c, d, c⟩ has positive weight (6 - 3 = 3), so
    ⟨s, c⟩ is the shortest path, with weight δ(s, c) = w(s, c) = 5
Negative-Weight Edges

s → e: infinitely many paths
    ⟨s, e⟩, ⟨s, e, f, e⟩, ⟨s, e, f, e, f, e⟩, ...
    the cycle ⟨e, f, e⟩ has negative weight: 3 + (-6) = -3
    we can find paths from s to e with arbitrarily large negative weights
    δ(s, e) = -∞ → no shortest path exists between s and e

Similarly: δ(s, f) = -∞ and δ(s, g) = -∞

h, i, j are not reachable from s:
    δ(s, h) = δ(s, i) = δ(s, j) = ∞
Negative-Weight Edges

Negative-weight edges may form negative-weight cycles.

If such cycles are reachable from the source, δ(s, v) is not properly
defined: keep going around the cycle, and the path weight decreases without
bound, so δ(s, v) = -∞ for all v on the cycle.
Cycles

Can shortest paths contain cycles?
    Negative-weight cycles: No!
    Positive-weight cycles: No!
        By removing the cycle we can get a shorter path.
We will assume that when we are finding shortest paths, the paths have no
cycles.
Shortest-Path Representation

[Figure: graph with source s and vertices t, x, y, z; d[s] = 0, d[t] = 3,
d[x] = 9, d[y] = 5, d[z] = 11; the predecessor pointers form the
shortest-path tree.]

For each vertex v ∈ V:
    d[v]: a shortest-path estimate of δ(s, v)
        Initially, d[v] = ∞
        Reduces as the algorithms progress
    π[v]: predecessor of v on a shortest path from s
        If no predecessor, π[v] = NIL
    π induces a tree: the shortest-path tree
Initialization

Alg.: INITIALIZE-SINGLE-SOURCE(V, s)
1. for each v ∈ V
2.     do d[v] ← ∞
3.        π[v] ← NIL
4. d[s] ← 0

All the shortest-paths algorithms start with INITIALIZE-SINGLE-SOURCE.
Relaxation

Relaxing an edge (u, v) = testing whether we can improve the shortest path
to v found so far by going through u:
    If d[v] > d[u] + w(u, v)
        we can improve the shortest path to v
        → update d[v] and π[v]

[Figure: two cases of RELAX(u, v, w) with w(u, v) = 2. If d[u] = 5 and
d[v] = 9, relaxation lowers d[v] to 7; if d[u] = 5 and d[v] = 6, d[v] is
unchanged. After relaxation: d[v] ≤ d[u] + w(u, v).]

RELAX(u, v, w)
1. if d[v] > d[u] + w(u, v)
2.    then d[v] ← d[u] + w(u, v)
3.         π[v] ← u

All the single-source shortest-paths algorithms start by calling
INIT-SINGLE-SOURCE and then relax edges. The algorithms differ in the order
and in how many times they relax each edge.
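The two primitives above can be sketched in Python. This is a hypothetical minimal version: the dict-based graph representation (a weight table keyed by edge) is an assumption for illustration, not something from the slides.

```python
import math

def initialize_single_source(vertices, s):
    """INITIALIZE-SINGLE-SOURCE: d[v] = infinity, pi[v] = NIL, d[s] = 0."""
    d = {v: math.inf for v in vertices}
    pi = {v: None for v in vertices}
    d[s] = 0
    return d, pi

def relax(u, v, w, d, pi):
    """RELAX(u, v, w): improve the estimate for v via u if possible."""
    if d[v] > d[u] + w[(u, v)]:
        d[v] = d[u] + w[(u, v)]
        pi[v] = u

# The first case from the figure: d[u] = 5, d[v] = 9, w(u, v) = 2.
d, pi = initialize_single_source(['s', 'u', 'v'], 's')
d['u'], d['v'] = 5, 9
relax('u', 'v', {('u', 'v'): 2}, d, pi)
print(d['v'], pi['v'])  # 7 u
```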
Bellman-Ford Algorithm

Single-source shortest-paths problem:
    Computes d[v] and π[v] for all v ∈ V
    Allows negative edge weights
Returns:
    TRUE if no negative-weight cycles are reachable from the source s
    FALSE otherwise → no solution exists
Idea:
    Make |V| - 1 passes over all the edges, each time performing a
    relaxation step on each edge.
BELLMAN-FORD(V, E, w, s)
1. INITIALIZE-SINGLE-SOURCE(V, s)
2. for i ← 1 to |V| - 1
3.     do for each edge (u, v) ∈ E
4.            do RELAX(u, v, w)
5. for each edge (u, v) ∈ E
6.     do if d[v] > d[u] + w(u, v)
7.           then return FALSE
8. return TRUE

[Figure: example graph with source s and vertices t, x, y, z; the edges are
relaxed in the order
E: (t, x), (t, y), (t, z), (x, t), (y, x), (y, z), (z, x), (z, s), (s, t), (s, y)]
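A direct Python transcription of the pseudocode might look as follows. This is a sketch; the edge weights are read off the figure as best as they can be recovered, so treat them as assumptions.

```python
import math

def bellman_ford(vertices, edges, w, s):
    # INITIALIZE-SINGLE-SOURCE
    d = {v: math.inf for v in vertices}
    pi = {v: None for v in vertices}
    d[s] = 0
    # |V| - 1 passes, relaxing every edge in each pass
    for _ in range(len(vertices) - 1):
        for (u, v) in edges:
            if d[u] + w[(u, v)] < d[v]:
                d[v] = d[u] + w[(u, v)]
                pi[v] = u
    # one extra pass: if any edge still relaxes,
    # a negative-weight cycle is reachable from s
    for (u, v) in edges:
        if d[u] + w[(u, v)] < d[v]:
            return False, d, pi
    return True, d, pi

# Edge list in the order given on the slide; weights as recovered from the figure.
w = {('t', 'x'): 5, ('t', 'y'): 8, ('t', 'z'): -4, ('x', 't'): -2,
     ('y', 'x'): -3, ('y', 'z'): 9, ('z', 'x'): 7, ('z', 's'): 2,
     ('s', 't'): 6, ('s', 'y'): 7}
ok, d, pi = bellman_ford(['s', 't', 'x', 'y', 'z'], list(w), w, 's')
# ok is True; d == {'s': 0, 't': 2, 'x': 4, 'y': 7, 'z': -2}
```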
Example: relax the edges in the order
(t, x), (t, y), (t, z), (x, t), (y, x), (y, z), (z, x), (z, s), (s, t), (s, y)

[Figure: four passes of Bellman-Ford over the example graph, one panel per
pass; the estimates d[v] shrink pass by pass until, after pass 4,
d[s] = 0, d[t] = 2, d[x] = 4, d[y] = 7, d[z] = -2.]
Bellman-Ford algorithm

All 0-edge shortest paths:

[Figure: graph with vertices 1-5 and source 1; initially d[1] = 0 and
d[2] = d[3] = d[4] = d[5] = ∞.]
Bellman-Ford algorithm

Calculate all shortest paths of at most 1 edge:

[Figure: d[1] = 0, d[2] = ∞ → 6, d[4] = ∞ → 7; d[3] and d[5] remain ∞.]
Bellman-Ford algorithm

Calculate all shortest paths of at most 2 edges:

[Figure: d[1] = 0, d[2] = 6, d[3] = ∞ → 4, d[4] = 7, d[5] = ∞ → 2.]
Bellman-Ford algorithm

Calculate all shortest paths of at most 3 edges:

[Figure: d[1] = 0, d[2] = 6 → 2, d[3] = 4, d[4] = 7, d[5] = 2.]
Bellman-Ford algorithm

Calculate all shortest paths of at most 4 edges:

[Figure: d[1] = 0, d[2] = 2, d[3] = 4, d[4] = 7, d[5] = 2 → -2.]
Bellman-Ford algorithm

Final result:

[Figure: d[1] = 0, d[2] = 2, d[3] = 4, d[4] = 7, d[5] = -2.]

What is the shortest path from 1 to 5?  ⟨1, 4, 3, 2, 5⟩
What is the weight of this path?  -2
What is the shortest path from 1 to 2, 3, and 4?
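The slide-by-slide "at most k edges" computation above can be reproduced with a short script. This is a sketch: the vertex numbering and edge weights are reconstructed from the figures and should be treated as assumptions.

```python
import math

# Assumed weights of the 5-vertex example graph, with source vertex 1.
w = {(1, 2): 6, (1, 4): 7, (2, 3): 5, (2, 4): 8, (2, 5): -4,
     (3, 2): -2, (4, 3): -3, (4, 5): 9, (5, 3): 7, (5, 1): 2}
V = [1, 2, 3, 4, 5]

# d[v] = weight of the best path from 1 to v using at most k edges.
d = {v: math.inf for v in V}
d[1] = 0
for k in range(1, len(V)):
    nd = dict(d)                         # start from the k-1 layer
    for (u, v), weight in w.items():
        nd[v] = min(nd[v], d[u] + weight)
    d = nd
    print(k, d)
# last line printed: 4 {1: 0, 2: 2, 3: 4, 4: 7, 5: -2}
```

This reproduces the final d values of the slides, including d[5] = -2 along the path ⟨1, 4, 3, 2, 5⟩.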
Detecting Negative Cycles

After the |V| - 1 relaxation passes, check every edge once more:

for each edge (u, v) ∈ E
    do if d[v] > d[u] + w(u, v)
          then return FALSE
return TRUE

[Figure: 3-vertex graph s, b, c forming a negative-weight cycle; the d
values after each of the first two passes are shown.]

Look at edge (s, b):
    d[b] = -1
    d[s] + w(s, b) = -4
    d[b] > d[s] + w(s, b) → return FALSE
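The check can be demonstrated on the slide's three-vertex graph. This is a sketch; the weights w(s, b) = 2, w(b, c) = 3, w(c, s) = -8 (a cycle of weight -3) are reconstructed from the figure and should be treated as assumptions.

```python
import math

# Assumed weights: the cycle s -> b -> c -> s has weight 2 + 3 - 8 = -3.
w = {('s', 'b'): 2, ('b', 'c'): 3, ('c', 's'): -8}
V = ['s', 'b', 'c']

d = {v: math.inf for v in V}
d['s'] = 0
for _ in range(len(V) - 1):              # |V| - 1 = 2 relaxation passes
    for (u, v), weight in w.items():
        if d[u] + weight < d[v]:
            d[v] = d[u] + weight

# Check pass: if some edge still relaxes, a negative cycle was detected.
has_no_negative_cycle = all(d[u] + weight >= d[v]
                            for (u, v), weight in w.items())
print(has_no_negative_cycle)  # False
```

With these weights the run reproduces the slide's numbers: after two passes d[b] = -1 while d[s] + w(s, b) = -6 + 2 = -4, so edge (s, b) still relaxes and the algorithm returns FALSE.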
BELLMAN-FORD(V, E, w, s)
1. INITIALIZE-SINGLE-SOURCE(V, s)        Θ(V)
2. for i ← 1 to |V| - 1                  O(V)
3.     do for each edge (u, v) ∈ E       O(E)    → O(VE) for lines 2-4
4.            do RELAX(u, v, w)
5. for each edge (u, v) ∈ E              O(E)
6.     do if d[v] > d[u] + w(u, v)
7.           then return FALSE
8. return TRUE

Running time: O(VE)
Single-source shortest paths

Two classic algorithms solve the single-source shortest-paths problem:
    Bellman-Ford algorithm
        A dynamic-programming algorithm
        Works when some weights are negative
    Dijkstra's algorithm
        A greedy algorithm
        Faster than Bellman-Ford
        Works when weights are all non-negative
Greedy algorithm vs Dynamic programming

Both classes of algorithms try to find the best strategy from the current
state. What is the key difference? Is a greedy algorithm a special case of
dynamic programming?

Greedy algorithm: at each point in time, makes a locally optimal choice.
Dynamic programming: smart recursion; it breaks a problem down into smaller
subproblems whose answers can be cached.

In dynamic programming we examine a set of solutions to smaller problems and
pick the best among them. In a greedy algorithm, we commit to one of the
solutions to smaller problems as the optimal one (justified by certain
properties of the problem) and don't examine the other solutions.

Sometimes it is easy to see a DP solution to a problem, but some observation
is needed to find a greedy solution, which reduces time and space complexity.

A good example of dynamic programming is the simple function int fib(int x),
which takes a number x and returns the x-th Fibonacci number. A very naive
recursive solution takes O(fib(x)) time due to repeated recursive calls, but
with dynamic programming it is possible to cache previous answers and find
the x-th Fibonacci number efficiently (in linear time).

So: greedy algorithms are indeed a special case of dynamic programming. Wait
until Lecture 14 for more detail.
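The fib example above can be sketched in Python; `functools.lru_cache` does exactly the caching of subproblem answers that the text describes.

```python
from functools import lru_cache

def fib_naive(x: int) -> int:
    # O(fib(x)) time: recomputes the same subproblems over and over
    return x if x < 2 else fib_naive(x - 1) + fib_naive(x - 2)

@lru_cache(maxsize=None)
def fib(x: int) -> int:
    # O(x) time: each subproblem is computed once and then cached
    return x if x < 2 else fib(x - 1) + fib(x - 2)

print(fib(30))  # 832040
```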
Shortest Path Properties

Triangle inequality
    For all (u, v) ∈ E, we have:
        δ(s, v) ≤ δ(s, u) + w(u, v)
    If u is on a shortest path to v, we have equality.

[Figure: two cases with δ(s, u) = 5 and w(u, v) = 2: δ(s, v) = 7 when u is
on a shortest path to v; δ(s, v) = 6 when a shorter path to v avoids u.]
Shortest Path Properties

Upper-bound property
    We always have d[v] ≥ δ(s, v) for all v.
    Once d[v] = δ(s, v), it never changes.
    The estimate never goes up: relaxation only lowers the estimate.

[Figure: relaxing edge (x, v) in the example graph lowers d[v] but never
below δ(s, v).]
Shortest Path Properties

No-path property
    If there is no path from s to v, then d[v] = ∞ always.
    δ(s, h) = ∞ and d[h] ≥ δ(s, h) ⟹ d[h] = ∞

[Figure: the negative-weight-edges example graph; h, i, j are not reachable
from s, so δ(s, h) = δ(s, i) = δ(s, j) = ∞.]
Shortest Path Properties

Convergence property
    If s ⇝ u → v is a shortest path, and if d[u] = δ(s, u) at any time
    prior to relaxing edge (u, v), then d[v] = δ(s, v) at all times
    afterward.

[Figure: d[u] = 5 = δ(s, u) and w(u, v) = 2. If d[v] > δ(s, v) before the
relaxation, it becomes d[v] = d[u] + w(u, v) = 5 + 2 = 7; otherwise the
value remains unchanged, because it must already have been the
shortest-path weight.]
Shortest Path Properties

Path-relaxation property
    Let p = ⟨v0, v1, . . . , vk⟩ be a shortest path from s = v0 to vk. If we
    relax, in order, (v0, v1), (v1, v2), . . . , (vk-1, vk), even intermixed
    with other relaxations, then d[vk] = δ(s, vk).

[Figure: relaxing the edges of a shortest path ⟨s, v1, v2, v3, v4⟩ in order
sets d[v1] = δ(s, v1), then d[v2] = δ(s, v2), then d[v3] = δ(s, v3), and
finally d[v4] = δ(s, v4).]
Single-Source Shortest Paths in DAGs

Key Property in DAGs

A directed graph with no cycles is called a DAG (directed acyclic graph).

In a DAG, the vertices can be sorted into a linear order such that all edges
are forward edges: a topological sort.
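One standard way to compute such an order is Kahn's algorithm, sketched below; the small example graph here is made up for illustration and is not one of the slides' figures.

```python
from collections import deque

def topological_sort(adj):
    """Kahn's algorithm over an adjacency-list DAG."""
    indegree = {u: 0 for u in adj}
    for u in adj:
        for v in adj[u]:
            indegree[v] += 1
    queue = deque(u for u in adj if indegree[u] == 0)  # sources first
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:                 # removing u frees its successors
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)
    return order  # every edge (u, v) has u before v in the result

adj = {'r': ['s', 't'], 's': ['t'], 't': ['u'], 'u': []}
print(topological_sort(adj))  # ['r', 's', 't', 'u']
```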
Single-Source Shortest Paths in DAGs

Shortest paths are always well defined in DAGs:
    no cycles ⟹ no negative-weight cycles, even if there are
    negative-weight edges

Idea: if we were lucky enough to process the vertices of each shortest path
from left to right, we would be done in one pass.
Single-Source Shortest Paths in DAGs

DAG-SHORTEST-PATHS(G, s)
    TOPOLOGICALLY-SORT the vertices of G
    INIT(G, s)
    for each vertex u, taken in topologically sorted order do
        for each v in Adj[u] do
            RELAX(u, v)
Example

[Figure: DAG with vertices r, s, t, u, v, w in topological order; source s.
The d value below each vertex is updated as the vertices are processed in
topological order.]

    Initially: d[s] = 0, all other estimates are ∞
    Processing r changes nothing (d[r] = ∞, since r is unreachable from s)
    After processing s: d[t] = 2, d[u] = 6
    After processing t: d[u] = 6, d[v] = 6, d[w] = 4
    After processing u: d[v] = 5, d[w] = 4
    After processing v: d[w] = 3
    After processing w (no outgoing edges), we are done:
        d[r] = ∞, d[s] = 0, d[t] = 2, d[u] = 6, d[v] = 5, d[w] = 3
Single-Source Shortest Paths in DAGs: Analysis

DAG-SHORTEST-PATHS(G, s)
    TOPOLOGICALLY-SORT the vertices of G       O(V + E)
    INIT(G, s)                                 O(V)
    for each vertex u, taken in topologically sorted order do
        for each v in Adj[u] do
            RELAX(u, v)                        total O(E)

Time complexity: O(V + E)
DAG-SHORTEST-PATHS(G, w, s)
1. topologically sort the vertices of G        Θ(V + E)
2. INITIALIZE-SINGLE-SOURCE(V, s)              Θ(V)
3. for each vertex u, taken in topologically   Θ(V)
   sorted order
4.     do for each vertex v ∈ Adj[u]           Θ(E) total
5.            do RELAX(u, v, w)

Running time: Θ(V + E)
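Putting the pieces together in Python might look as follows. This is a sketch: the example graph mirrors the slides' r, s, t, u, v, w figure, but its edge weights are reconstructed from that figure and should be treated as assumptions.

```python
import math

def dag_shortest_paths(order, adj, w, s):
    """order: vertices in topologically sorted order; adj: adjacency lists;
    w: (u, v) -> weight. Returns (d, pi)."""
    d = {v: math.inf for v in order}
    pi = {v: None for v in order}
    d[s] = 0
    for u in order:                       # a single pass, in topological order
        for v in adj[u]:
            if d[u] + w[(u, v)] < d[v]:   # RELAX(u, v, w)
                d[v] = d[u] + w[(u, v)]
                pi[v] = u
    return d, pi

order = ['r', 's', 't', 'u', 'v', 'w']
adj = {'r': ['s', 't'], 's': ['t', 'u'], 't': ['u', 'v', 'w'],
       'u': ['v', 'w'], 'v': ['w'], 'w': []}
w = {('r', 's'): 5, ('r', 't'): 3, ('s', 't'): 2, ('s', 'u'): 6,
     ('t', 'u'): 7, ('t', 'v'): 4, ('t', 'w'): 2,
     ('u', 'v'): -1, ('u', 'w'): 1, ('v', 'w'): -2}
d, pi = dag_shortest_paths(order, adj, w, 's')
# d: r = inf (not reachable), s = 0, t = 2, u = 6, v = 5, w = 3
```

With these weights the run reproduces the example's final estimates, including d[w] = 3 via the predecessor chain s → t, t → u, u → v, v → w.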
