
1. INTRODUCTION

In graph theory, the shortest path problem is the problem of finding a path between
two vertices (or nodes) in a graph such that the sum of the weights of its constituent edges is
minimized. The problem of finding the shortest path between two intersections on a road map
(the graph's vertices correspond to intersections and the edges correspond to road segments,
each weighted by the length of its road segment) may be modeled by a special case of the
shortest path problem in graphs.
The shortest path problem can be defined for graphs whether undirected, directed,
or mixed. It is defined here for undirected graphs; for directed graphs the definition of path
requires that consecutive vertices be connected by an appropriate directed edge.
Two vertices are adjacent when they are both incident to a common edge. A path in an
undirected graph is a sequence of vertices P = (v1, v2, ..., vn) such that vi is adjacent
to v(i+1) for 1 <= i < n. Such a path P is called a path of length n - 1 from v1 to vn.
(The vi are variables; their numbering here relates to their position in the sequence and
need not relate to any canonical labeling of the vertices.)
Let e(i,i+1) be the edge incident to both vi and v(i+1). Given a real-valued weight
function f : E -> R and an undirected (simple) graph G, the shortest path from v to v' is
the path P = (v1, v2, ..., vn), with v1 = v and vn = v', that over all possible n minimizes
the sum

f(e(1,2)) + f(e(2,3)) + ... + f(e(n-1,n)).

When each edge in the graph has unit weight, i.e. f : E -> {1}, this is equivalent to
finding the path with fewest edges.

The problem is also sometimes called the single-pair shortest path problem, to distinguish it
from the following variations:

The single-source shortest path problem, in which we have to find shortest paths from a
source vertex v to all other vertices in the graph.

The single-destination shortest path problem, in which we have to find shortest paths
from all vertices in the directed graph to a single destination vertex v. This can be
reduced to the single-source shortest path problem by reversing the arcs in the directed
graph.

The all-pairs shortest path problem, in which we have to find shortest paths between
every pair of vertices v, v' in the graph.
SINGLE-SOURCE SHORTEST PATHS

Suppose G is a weighted directed graph with a real number w(u, v) associated with each
edge (u, v) in E, called the weight of edge (u, v). These weights represent the cost to
traverse the edge. A path from vertex u to vertex v is a sequence of one or more edges
<(v1, v2), (v2, v3), ..., (v(n-1), vn)> in E[G], where u = v1 and v = vn.
The cost (or length, or weight) of the path P is the sum of the weights of the edges in the
sequence. The shortest-path weight from a vertex u in V to a vertex v in V in the weighted
graph is the minimum cost over all paths from u to v. If there exists no path from vertex u
to vertex v, then the weight of the shortest path is infinity.
Variants of the single-source shortest path problem

Given a source vertex s in the weighted digraph, find the shortest path weights to all
other vertices in the digraph.

Given a destination vertex t in the weighted digraph, find the shortest path weights
from all other vertices in the digraph.

Given any two vertices u and v in the weighted digraph, find the shortest path (from u
to v or v to u).

Negative-Weight Edges
A negative-weight cycle is a cycle whose total weight is negative. No path from a starting
vertex S to a vertex on such a cycle can be a shortest path, since a path can run around the
cycle arbitrarily many times and achieve any negative cost desired. In other words, a
negative cycle invalidates the notion of distance based on edge weights.

Relaxation Technique
This technique consists of testing whether the shortest path found so far can be improved,
and if so, updating the shortest-path estimate. A relaxation step may or may not decrease
the value of the shortest-path estimate.
The following pseudocode performs a relaxation step on edge (u, v):
Relax(u, v, w)
  if d[u] + w(u, v) < d[v]
    then d[v] <- d[u] + w(u, v)
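As a sketch, the relaxation step above can be written in Python; the dist and parent
dictionaries and all names here are illustrative assumptions, not from the text.

```python
# A minimal sketch of the relaxation step. `dist` holds shortest-path
# estimates and `parent` holds predecessors for path recovery.
INF = float("inf")

def relax(u, v, w, dist, parent):
    """Try to improve the estimate for v via the edge (u, v) of weight w."""
    if dist[u] + w < dist[v]:
        dist[v] = dist[u] + w   # a shorter path to v through u was found
        parent[v] = u           # remember the predecessor
        return True             # the estimate decreased
    return False                # relaxation changed nothing

dist = {"s": 0, "v": INF}
parent = {"s": None, "v": None}
relax("s", "v", 3, dist, parent)
print(dist["v"])  # 3
```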

SINGLE SOURCE SHORTEST PROBLEM


Given a weighted graph G, find a shortest path from a given vertex to each other vertex in G.
Note that we can solve this problem quite easily with the BFS traversal algorithm in the
special case when all weights are 1. The greedy approach to this problem is to repeatedly
select the best choice from those available at that time.

Undirected graphs

Weights   Time complexity   Author
R+        O(V^2)            Dijkstra 1959
R+        O(E + V log V)    Fredman & Tarjan 1984 (Fibonacci heap)
R+        O(E)              Thorup 1999 (requires constant-time multiplication)

Unweighted graphs

Algorithm              Time complexity   Author
Breadth-first search   O(E + V)

Directed acyclic graphs


An algorithm using topological sorting can solve the single-source shortest path problem in
linear time, O(E + V), in weighted DAGs.
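The linear-time computation for DAGs can be sketched as follows: obtain a topological
order, then relax each edge exactly once in that order. The adjacency-dict representation
(vertex -> list of (neighbor, weight) pairs) is an assumption for illustration.

```python
# Sketch: single-source shortest paths in a weighted DAG in O(E + V).
from collections import deque

def dag_shortest_paths(graph, source):
    # Kahn's algorithm for a topological order.
    indeg = {v: 0 for v in graph}
    for u in graph:
        for v, _ in graph[u]:
            indeg[v] += 1
    queue = deque(v for v in graph if indeg[v] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v, _ in graph[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    # Relax every outgoing edge once, in topological order.
    dist = {v: float("inf") for v in graph}
    dist[source] = 0
    for u in order:
        for v, w in graph[u]:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist

g = {"a": [("b", 2), ("c", 6)], "b": [("c", 3)], "c": []}
print(dag_shortest_paths(g, "a"))  # {'a': 0, 'b': 2, 'c': 5}
```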

Directed graphs with non-negative weights

The following table is taken from Schrijver (2004). (In the original, a green background
indicates an asymptotically best bound in the table.)

Algorithm                                        Time complexity       Author
                                                 O(V^2 E L)            Ford 1956
Bellman–Ford algorithm                           O(VE)                 Bellman 1958, Moore 1959
                                                 O(V^2 log V)          Dantzig 1958, Dantzig 1960, Minty
                                                                       (cf. Pollack & Wiebenson 1960),
                                                                       Whiting & Hillier 1960
Dijkstra's algorithm with list                   O(V^2)                Leyzorek et al. 1957, Dijkstra 1959
Dijkstra's algorithm with modified binary heap   O((E + V) log V)      ...
Dijkstra's algorithm with Fibonacci heap         O(E + V log V)        Fredman & Tarjan 1984,
                                                                       Fredman & Tarjan 1987
                                                 O(E log log L)        Johnson 1981, Karlsson & Poblete 1983
Gabow's algorithm                                O(E log_(E/V) L)      Gabow 1983, Gabow 1985
                                                 O(E + V sqrt(log L))  Ahuja et al. 1990

ALL-PAIRS SHORTEST PATHS

The all-pairs shortest path problem finds the shortest paths between every pair of
vertices v, v' in the graph. The all-pairs shortest paths problem for unweighted directed graphs
was introduced by Shimbel (1953), who observed that it could be solved by a linear number of
matrix multiplications, taking a total time of O(V^4).

2. DIFFERENT TYPES OF ALGORITHMS

Algorithm Classification

Algorithms that use a similar problem-solving approach can be grouped together

This classification scheme is neither exhaustive nor disjoint

The purpose is not to be able to classify an algorithm as one type or another, but to
highlight the various ways in which a problem can be attacked

A Short List of Categories


Algorithm types we will consider include:

Simple recursive algorithms

Backtracking algorithms

Divide and conquer algorithms

Dynamic programming algorithms

Greedy algorithms

Branch and bound algorithms

Brute force algorithms

Randomized algorithms

Simple Recursive Algorithms

A simple recursive algorithm:

Solves the base cases directly

Recurs with a simpler subproblem

Does some extra work to convert the solution to the simpler subproblem into a
solution to the given problem

I call these simple because several of the other algorithm types are inherently
recursive

Example Recursive Algorithms

To count the number of elements in a list:

If the list is empty, return zero; otherwise,

Step past the first element, and count the remaining elements in the list

Add one to the result

To test if a value occurs in a list:

If the list is empty, return false; otherwise,

If the first thing in the list is the given value, return true; otherwise

Step past the first element, and test whether the value occurs in the remainder of
the list
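The two list examples above can be sketched as simple recursive Python functions; list
slicing copies the tail, so this illustrates the recursion rather than an efficient
implementation.

```python
# Recursive count: base case on the empty list, then step past the head.
def count(lst):
    if not lst:                       # empty list: zero elements
        return 0
    return 1 + count(lst[1:])         # count the remaining elements, add one

# Recursive membership test: check the head, then recur on the rest.
def occurs(value, lst):
    if not lst:                       # empty list: value not found
        return False
    if lst[0] == value:               # head matches the given value
        return True
    return occurs(value, lst[1:])     # test the remainder of the list

print(count([4, 8, 15, 16]))   # 4
print(occurs(15, [4, 8, 15]))  # True
```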

Backtracking Algorithms

Backtracking algorithms are based on a depth-first recursive search

A backtracking algorithm:
o Tests to see if a solution has been found, and if so, returns it; otherwise
o For each choice that can be made at this point,

Make that choice

Recur

If the recursion returns a solution, return it

o If no choices remain, return failure

Example backtracking algorithm

To color a map with no more than four colors:

o color(Country n)

If all countries have been colored (n > number of countries), return success; otherwise,

For each color c of the four colors,

If country n is not adjacent to a country that has been colored c:
o Color country n with color c
o Recursively color country n + 1
o If successful, return success

Return failure (if the loop exits)
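The map-coloring scheme above can be sketched in Python. The adjacency-dict
representation of the map and all names here are illustrative assumptions.

```python
# Backtracking four-coloring: choose a color, recur, undo on failure.
def color_map(adj, colors=("red", "green", "blue", "yellow")):
    countries = list(adj)
    assignment = {}

    def color(i):
        if i == len(countries):          # all countries colored: success
            return True
        n = countries[i]
        for c in colors:
            # make the choice only if no colored neighbor already has c
            if all(assignment.get(m) != c for m in adj[n]):
                assignment[n] = c        # make that choice
                if color(i + 1):         # recur on the next country
                    return True
                del assignment[n]        # undo the choice and try the next
        return False                     # no choices remain: backtrack

    return assignment if color(0) else None

adj = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B"}}
result = color_map(adj)
print(result is not None)  # True: three mutual neighbors need three colors
```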

Divide and Conquer

A divide and conquer algorithm consists of two parts:

Divide the problem into smaller subproblems of the same type, and solve these
subproblems recursively

Combine the solutions to the subproblems into a solution to the original problem

Traditionally, an algorithm is only called divide and conquer if it contains two or more
recursive calls

Examples

Quicksort:
o Partition the array into two parts, and quicksort each of the parts
o No additional work is required to combine the two sorted parts

Mergesort:
o Cut the array in half, and mergesort each half
o Combine the two sorted arrays into a single sorted array by merging them
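The mergesort description above can be sketched in Python: two recursive calls on the
halves, followed by a merge step that combines the sorted runs.

```python
# Divide and conquer: sort each half recursively, then merge.
def mergesort(a):
    if len(a) <= 1:
        return a                         # base case: already sorted
    mid = len(a) // 2
    left = mergesort(a[:mid])            # first recursive call
    right = mergesort(a[mid:])           # second recursive call
    # Combine: merge the two sorted halves into one sorted list.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]    # append whichever run remains

print(mergesort([5, 2, 9, 1, 5]))  # [1, 2, 5, 5, 9]
```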

Binary Tree Lookup

Here's how to look up something in a sorted binary tree:


o Compare the key to the value in the root

If the two values are equal, report success

If the key is less, search the left subtree

If the key is greater, search the right subtree

This is not a divide and conquer algorithm because, although there are two recursive
calls, only one is used at each level of the recursion

Fibonacci numbers

To find the nth Fibonacci number:

o If n is zero or one, return one; otherwise,


o Compute fibonacci(n-1) and fibonacci(n-2)
o Return the sum of these two numbers

This is an expensive algorithm

o It requires O(fibonacci(n)) time
o This is exponential time, that is, O(2^n)

Dynamic programming algorithms

A dynamic programming algorithm remembers past results and uses them to find new
results

Dynamic programming is generally used for optimization problems


o Multiple solutions exist, need to find the best one
o Requires optimal substructure and overlapping subproblems

Optimal substructure: Optimal solution contains optimal solutions to subproblems

Overlapping subproblems: Solutions to subproblems can be stored and reused in a
bottom-up fashion

This differs from Divide and Conquer, where subproblems generally need not overlap

Fibonacci numbers again

To find the nth Fibonacci number:


o If n is zero or one, return one; otherwise,


o Compute, or look up in a table, fibonacci(n-1) and fibonacci(n-2)
o Find the sum of these two numbers
o Store the result in a table and return it

Since finding the nth Fibonacci number involves finding all smaller Fibonacci numbers,
the second recursive call has little work to do

The table may be preserved and used again later
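The table-driven version above can be sketched in Python, following the text's
convention that the zeroth and first Fibonacci numbers are both 1. The module-level
table is an illustrative choice.

```python
# Memoized Fibonacci: look results up in a table before recomputing.
table = {}

def fibonacci(n):
    if n in table:               # look up a previously stored result
        return table[n]
    if n <= 1:
        result = 1               # base cases, per the text's convention
    else:
        result = fibonacci(n - 1) + fibonacci(n - 2)
    table[n] = result            # store the result for later reuse
    return result

print(fibonacci(10))  # 89
```

Because the table persists between calls, later calls reuse earlier work, which is the
point of the dynamic-programming formulation.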

Greedy algorithms

An optimization problem is one in which you want to find, not just a solution, but the
best solution

A greedy algorithm sometimes works well for optimization problems

A greedy algorithm works in phases. At each phase:


o You take the best you can get right now, without regard for future consequences
o You hope that by choosing a local optimum at each step, you will end up at a
global optimum

Example: Counting money

Suppose you want to count out a certain amount of money, using the fewest possible
bills and coins

A greedy algorithm to do this would be:

At each step, take the largest possible bill or coin that does not overshoot
o Example: To make $6.39, you can choose:

a $5 bill

a $1 bill, to make $6

a 25¢ coin, to make $6.25

a 10¢ coin, to make $6.35

four 1¢ coins, to make $6.39

A failure of the greedy algorithm

In some (fictional) monetary system, krons come in 1 kron, 7 kron, and 10 kron coins

Using a greedy algorithm to count out 15 krons, you would get


o A 10 kron piece
o Five 1 kron pieces, for a total of 15 krons
o This requires six coins

A better solution would be to use two 7 kron pieces and one 1 kron piece
o This only requires three coins

The greedy algorithm results in a solution, but not in an optimal solution
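The coin-counting greedy algorithm can be sketched in Python; on the fictional kron
system above it produces the six-coin answer rather than the optimal three-coin one.

```python
# Greedy coin counting: repeatedly take the largest coin that fits.
def greedy_coins(amount, denominations):
    coins = []
    for d in sorted(denominations, reverse=True):
        while amount >= d:               # largest coin that does not overshoot
            coins.append(d)
            amount -= d
    return coins

# 15 krons with coins {1, 7, 10}: greedy picks 10 then five 1s (six coins),
# while two 7s and one 1 (three coins) would be optimal.
print(greedy_coins(15, [1, 7, 10]))  # [10, 1, 1, 1, 1, 1]
```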

Branch and bound algorithms

Branch and bound algorithms are generally used for optimization problems
o As the algorithm progresses, a tree of subproblems is formed
o The original problem is considered the root problem
o A method is used to construct an upper and lower bound for a given problem

o At each node, apply the bounding methods

If the bounds match, it is deemed a feasible solution to that particular subproblem

If bounds do not match, partition the problem represented by that node, and make
the two subproblems into children nodes

o Continue, using the best known feasible solution to trim sections of the tree,
until all nodes have been solved or trimmed.

Brute Force Algorithm

A brute force algorithm simply tries all possibilities until a satisfactory solution is found

Such an algorithm can be:

Optimizing: Find the best solution. This may require finding all solutions, or if a value
for the best solution is known, it may stop when any best solution is found

Example: Finding the best path for a travelling salesman

Satisficing: Stop as soon as a solution is found that is good enough

Example: Finding a travelling salesman path that is within 10% of optimal

Improving brute force algorithms

Often, brute force algorithms require exponential time

Various heuristics and optimizations can be used

Heuristic: A rule of thumb that helps you decide which possibilities to look at
first

Optimization: In this case, a way to eliminate certain possibilities without fully
exploring them

Randomized Algorithms

A randomized algorithm uses a random number at least once during the computation to
make a decision
o Example: In Quicksort, using a random number to choose a pivot
o Example: Trying to factor a large number by choosing random numbers as
possible divisors.

3. FLOYD–WARSHALL ALGORITHM

Algorithms
The most important algorithms for solving this problem are:

Dijkstra's algorithm solves the single-source shortest path problem.

Bellman–Ford algorithm solves the single-source problem if edge weights may be
negative.

A* search algorithm solves for single pair shortest path using heuristics to try to speed
up the search.

Floyd–Warshall algorithm solves all pairs shortest paths.



Johnson's algorithm solves all pairs shortest paths, and may be faster than
Floyd–Warshall on sparse graphs.

Viterbi algorithm solves the shortest stochastic path problem with an additional
probabilistic weight on each node.

Additional algorithms and associated evaluations may be found in Cherkassky,
Goldberg & Radzik (1996).

Floyd–Warshall Algorithm
In computer science, the Floyd–Warshall algorithm is an algorithm for finding shortest paths in
a weighted graph with positive or negative edge weights (but with no negative cycles). A single
execution of the algorithm will find the lengths (summed weights) of the shortest paths
between all pairs of vertices, though it does not return details of the paths themselves. Versions
of the algorithm can also be used for finding the transitive closure of a relation, or (in
connection with the Schulze voting system) widest paths between all pairs of vertices in a
weighted graph.
Contents

History and naming

Algorithm

Example

Analysis

Applications and generalizations

History And Naming


The Floyd–Warshall algorithm is an example of dynamic programming, and was published in
its currently recognized form by Robert Floyd in 1962. However, it is essentially the same as
algorithms previously published by Bernard Roy in 1959 and also by Stephen Warshall in
1962 for finding the transitive closure of a graph, and is closely related to Kleene's
algorithm (published in 1956) for converting a deterministic finite automaton into a regular
expression. The modern formulation of the algorithm as three nested for-loops was first
described by Peter Ingerman, also in 1962.
The algorithm is also known as Floyd's algorithm, the Roy–Warshall algorithm, the Roy–Floyd
algorithm, or the WFI algorithm.

Algorithm

The Floyd–Warshall algorithm compares all possible paths through the graph between
each pair of vertices. It is able to do this with Θ(|V|^3) comparisons in a graph. This is
remarkable considering that there may be up to Θ(|V|^2) edges in the graph, and every
combination of edges is tested. It does so by incrementally improving an estimate on
the shortest path between two vertices, until the estimate is optimal.

Consider a graph G with vertices V numbered 1 through N. Further consider a


function shortestPath(i, j, k) that returns the shortest possible path from i to j using
vertices only from the set {1,2,...,k} as intermediate points along the way. Now, given
this function, our goal is to find the shortest path from each i to each j using only
vertices 1 to k + 1.
o For each of these pairs of vertices, the true shortest path could be either
o a path that only uses vertices in the set {1, ..., k}, or
o a path that goes from i to k + 1 and then from k + 1 to j.

We know that the best path from i to j that only uses vertices 1 through k is defined
by shortestPath(i, j, k), and it is clear that if there were a better path from i to k + 1 to j,
then the length of this path would be the concatenation of the shortest path
from i to k + 1 (using vertices in {1, ..., k}) and the shortest path from k + 1 to j (also
using vertices in {1, ..., k}).

If w(i, j) is the weight of the edge between vertices i and j, we can
define shortestPath(i, j, k + 1) in terms of the following recursive formula: the base case
is
o shortestPath(i, j, 0) = w(i, j)
and the recursive case is
o shortestPath(i, j, k + 1) = min(shortestPath(i, j, k),
      shortestPath(i, k + 1, k) + shortestPath(k + 1, j, k))

This formula is the heart of the Floyd–Warshall algorithm. The algorithm works by first
computing shortestPath(i, j, k) for all (i, j) pairs for k = 1, then k = 2, etc. This process
continues until k = N, and we have found the shortest path for all (i, j) pairs using any
intermediate vertices.
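The three nested loops implied by this recurrence can be sketched in Python; the
weight-matrix input (INF where no edge exists, 0 on the diagonal) is an assumed
representation.

```python
# Floyd-Warshall over an n x n weight matrix: dist starts as
# shortestPath(i, j, 0) = w(i, j) and each pass allows vertex k
# as an intermediate point.
INF = float("inf")

def floyd_warshall(w):
    n = len(w)
    dist = [row[:] for row in w]          # copy: the k = 0 base case
    for k in range(n):                    # grow the allowed intermediate set
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

w = [[0, 3, INF],
     [INF, 0, 1],
     [2, INF, 0]]
print(floyd_warshall(w))  # [[0, 3, 4], [3, 0, 1], [2, 5, 0]]
```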

Example
The algorithm above is executed on the graph on the left below:

Prior to the first iteration of the outer loop, labeled k=0 above, the only known paths
correspond to the single edges in the graph. At k=1, paths that go through the vertex 1 are
found: in particular, the path [2,1,3] is found, replacing the path [2,3] which has fewer edges
but is longer (in terms of weight). At k=2, paths going through the vertices {1,2} are found.
The red and blue boxes show how the path [4,2,1,3] is assembled from the two known paths
[4,2] and [2,1,3] encountered in previous iterations, with 2 in the intersection. The path [4,2,3]
is not considered, because [2,1,3] is the shortest path encountered so far from 2 to 3. At k=3,
paths going through the vertices {1,2,3} are found. Finally, at k=4, all shortest paths are found.
Analysis

Let n be |V|, the number of vertices. To find all n^2 values of shortestPath(i, j, k) (for
all i and j) from those of shortestPath(i, j, k - 1) requires 2n^2 operations. Since we
begin with shortestPath(i, j, 0) = edgeCost(i, j) and compute the sequence of n matrices
shortestPath(i, j, 1), shortestPath(i, j, 2), ..., shortestPath(i, j, n), the total number of
operations used is n · 2n^2 = 2n^3. Therefore, the complexity of the algorithm is Θ(n^3).

Applications And Generalizations

The Floyd–Warshall algorithm can be used to solve the following problems, among others:

Shortest paths in directed graphs (Floyd's algorithm)

Transitive closure of directed graphs (Warshall's algorithm). In Warshall's original


formulation of the algorithm, the graph is unweighted and represented by a Boolean
adjacency matrix. Then the addition operation is replaced by logical conjunction (AND)
and the minimum operation by logical disjunction (OR).

Finding a regular expression denoting the regular language accepted by a finite


automaton (Kleene's algorithm, a closely related generalization of the Floyd–Warshall
algorithm)

Inversion of real matrices (Gauss–Jordan algorithm)

Optimal routing. In this application one is interested in finding the path with the
maximum flow between two vertices. This means that, rather than taking minima as in
the pseudocode above, one instead takes maxima.

4. BREADTH FIRST SEARCH ALGORITHM


Breadth-first search (BFS) is an algorithm for traversing or searching tree or graph data
structures. It starts at the tree root (or some arbitrary node of a graph, sometimes referred to as
a 'search key') and explores the neighbor nodes first, before moving to the next-level neighbors.
BFS was invented in the late 1950s by E. F. Moore, who used it to find the shortest path out of
a maze, and discovered independently by C. Y. Lee as a wire routing algorithm (published
1961).

Contents

More details

Example

Analysis

Time and space complexity

Completeness and optimality

Applications

Testing bipartiteness

More Details

This non-recursive implementation is similar to the non-recursive implementation


of depth-first search, but differs from it in two ways:

it uses a queue instead of a stack and

it checks whether a vertex has been discovered before enqueueing the vertex rather than
delaying this check until the vertex is dequeued from the queue.

The distance attribute of each vertex (or node) is needed for example when searching
for the shortest path between nodes in a graph. At the beginning of the algorithm, the
distance of each vertex is set to INFINITY, which is just a word that represents the fact
that a node has not been reached yet, and therefore it has no distance from the starting
vertex. We could have used other symbols, such as -1, to represent this concept.

The parent attribute of each vertex can also be useful to access the nodes in a shortest
path, for example by backtracking from the destination node up to the starting node,
once the BFS has been run and the predecessor nodes have been set.
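The queue-based scheme with distance and parent attributes can be sketched in Python;
the adjacency-dict graph representation is an assumption for illustration.

```python
# BFS with distance and parent attributes. Discovery is checked before
# enqueueing, as described above; absence from `dist` plays the role of
# the INFINITY marker.
from collections import deque

def bfs(graph, start):
    dist = {start: 0}
    parent = {start: None}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:            # not yet discovered
                dist[v] = dist[u] + 1
                parent[v] = u            # predecessor for path recovery
                queue.append(v)
    return dist, parent

g = {1: [2, 3], 2: [4], 3: [4], 4: []}
dist, parent = bfs(g, 1)
print(dist[4])  # 2
```

Backtracking through parent from any reached node yields a shortest path (in edge
count) to the start.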

Example

The following is an example of the breadth-first tree obtained by running a BFS starting
from Frankfurt:


An example map of Germany with some connections between cities

The breadth-first tree obtained when running BFS on the given map and starting in Frankfurt

Time and space complexity



The time complexity can be expressed as O(|V| + |E|), since every vertex and
every edge will be explored in the worst case. |V| is the number of vertices and
|E| is the number of edges in the graph. Note that |E| may vary between O(1)
and O(|V|^2), depending on how sparse the input graph is.

When the number of vertices in the graph is known ahead of time, and additional
data structures are used to determine which vertices have already been added to the
queue, the space complexity can be expressed as O(|V|), where |V| is
the cardinality of the set of vertices (as said before). If the graph is represented by
an adjacency list it occupies O(|V| + |E|) space in memory, while an adjacency
matrix representation occupies O(|V|^2).

When working with graphs that are too large to store explicitly (or infinite), it is
more practical to describe the complexity of breadth-first search in different terms:
to find the nodes that are at distance d from the start node (measured in number of
edge traversals), BFS takes O(b^(d+1)) time and memory, where b is the "branching
factor" of the graph (the average out-degree).

Completeness And Optimality

In the analysis of algorithms, the input to breadth-first search is assumed to be a finite graph,
represented explicitly as an adjacency list or similar representation. However, in the application
of graph traversal methods in artificial intelligence the input may be an implicit
representation of an infinite graph. In this context, a search method is described as being
complete if it is guaranteed to find a goal state if one exists. Breadth-first search is complete,
but depth-first search is not: when applied to infinite graphs represented implicitly, it may get
lost in parts of the graph that have no goal state and never return.

Applications

Breadth-first search can be used to solve many problems in graph theory, for example:

Copying garbage collection, Cheney's algorithm

Finding the shortest path between two nodes u and v, with path length measured by
number of edges (an advantage over depth-first search)[10]

Testing a graph for bipartiteness

(Reverse) Cuthill–McKee mesh numbering

FordFulkerson method for computing the maximum flow in a flow network

Serialization/deserialization of a binary tree (vs. serialization in sorted order) allows the
tree to be reconstructed in an efficient manner.

Construction of the failure function of the Aho-Corasick pattern matcher.

Testing bipartiteness

BFS can be used to test bipartiteness, by starting the search at any vertex and giving alternating
labels to the vertices visited during the search. That is, give label 0 to the starting vertex, 1 to
all its neighbors, 0 to those neighbors' neighbors, and so on. If at any step a vertex has (visited)
neighbors with the same label as itself, then the graph is not bipartite. If the search ends
without such a situation occurring, then the graph is bipartite.
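The alternating-label test above can be sketched in Python; the undirected
adjacency-dict representation is an assumption, and the loop over all vertices covers
disconnected components.

```python
# Bipartiteness via BFS: label vertices 0/1 level by level and fail
# if any edge joins two vertices with the same label.
from collections import deque

def is_bipartite(graph):
    label = {}
    for start in graph:                  # handle disconnected components
        if start in label:
            continue
        label[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in graph[u]:
                if v not in label:
                    label[v] = 1 - label[u]   # give the alternate label
                    queue.append(v)
                elif label[v] == label[u]:    # neighbor with the same label
                    return False
    return True

square = {1: [2, 4], 2: [1, 3], 3: [2, 4], 4: [1, 3]}
triangle = {1: [2, 3], 2: [1, 3], 3: [1, 2]}
print(is_bipartite(square), is_bipartite(triangle))  # True False
```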


5. BELLMAN–FORD ALGORITHM

The Bellman–Ford algorithm is an algorithm that computes shortest paths from a single
source vertex to all of the other vertices in a weighted digraph. It is slower than Dijkstra's
algorithm for the same problem, but more versatile, as it is capable of handling graphs in which
some of the edge weights are negative numbers. The algorithm is named after two of its
developers, Richard Bellman and Lester Ford, Jr., who published it in 1958 and 1956,
respectively; however, Edward F. Moore also published the same algorithm in 1957, and for
this reason it is also sometimes called the Bellman–Ford–Moore algorithm.
Negative edge weights are found in various applications of graphs, hence the
usefulness of this algorithm. If a graph contains a "negative cycle" (i.e. a cycle whose edges
sum to a negative value) that is reachable from the source, then there is no cheapest path: any
path can be made cheaper by one more walk around the negative cycle. In such a case, the
Bellman–Ford algorithm can detect negative cycles and report their existence.

Contents

Algorithm

Applications in routing

Algorithm
Like Dijkstra's algorithm, Bellman–Ford is based on the principle of relaxation, in which an
approximation to the correct distance is gradually replaced by more accurate values until
eventually reaching the optimum solution. In both algorithms, the approximate distance to each
vertex is always an overestimate of the true distance, and is replaced by the minimum of its old
value with the length of a newly found path. However, Dijkstra's algorithm uses a priority
queue to greedily select the closest vertex that has not yet been processed, and performs this
relaxation process on all of its outgoing edges; by contrast, the Bellman–Ford algorithm simply
relaxes all the edges, and does this |V| - 1 times, where |V| is the number of vertices in the
graph. In each of these repetitions, the number of vertices with correctly calculated distances
grows, from which it follows that eventually all vertices will have their correct distances. This
method allows the Bellman–Ford algorithm to be applied to a wider class of inputs than
Dijkstra's.

In this example graph, assuming that A is the source and edges are processed in the worst order,
from right to left, it requires the full |V| - 1 = 4 iterations for the distance estimates to converge.
Conversely, if the edges are processed in the best order, from left to right, the algorithm
converges in a single iteration.
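The |V| - 1 rounds of edge relaxation, plus the extra pass for negative-cycle detection,
can be sketched in Python; the edge-list input format is an assumption for illustration.

```python
# Bellman-Ford: relax every edge |V| - 1 times; if a further pass can
# still improve an estimate, a negative cycle is reachable.
INF = float("inf")

def bellman_ford(vertices, edges, source):
    dist = {v: INF for v in vertices}
    dist[source] = 0
    for _ in range(len(vertices) - 1):    # |V| - 1 rounds of relaxation
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                 # extra pass: any improvement now
        if dist[u] + w < dist[v]:         # signals a negative cycle
            raise ValueError("negative cycle reachable from source")
    return dist

vertices = ["s", "a", "b"]
edges = [("s", "a", 4), ("s", "b", 5), ("a", "b", -3)]
print(bellman_ford(vertices, edges, "s"))  # {'s': 0, 'a': 4, 'b': 1}
```

Note the negative edge (a, b, -3) is handled correctly, which Dijkstra's algorithm would
not guarantee.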

Applications in routing
A distributed variant of the Bellman–Ford algorithm is used in distance-vector routing
protocols, for example the Routing Information Protocol (RIP). The algorithm is distributed
because it involves a number of nodes (routers) within an Autonomous system, a collection of
IP networks typically owned by an ISP. It consists of the following steps:

Each node calculates the distances between itself and all other nodes within the AS and
stores this information as a table.

Each node sends its table to all neighboring nodes.

When a node receives distance tables from its neighbors, it calculates the shortest routes to
all other nodes and updates its own table to reflect any changes.

The main disadvantages of the Bellman–Ford algorithm in this setting are as follows:

It does not scale well.

Changes in network topology are not reflected quickly since updates are spread node-by-node.

Count to infinity: if link or node failures render a node unreachable from some set of
other nodes, those nodes may spend forever gradually increasing their estimates of the
distance to it, and in the meantime there may be routing loops.

6. DIJKSTRA'S ALGORITHM

Dijkstra's algorithm is an algorithm for finding the shortest paths between nodes in a graph,
which may represent, for example, road networks. It was conceived by computer
scientist Edsger W. Dijkstra in 1956 and published three years later.
The algorithm exists in many variants; Dijkstra's original variant found the shortest path
between two nodes, but a more common variant fixes a single node as the "source" node and
finds shortest paths from the source to all other nodes in the graph, producing a shortest-path
tree.
For a given source node in the graph, the algorithm finds the shortest path between that node
and every other. It can also be used for finding the shortest paths from a single node to a single
destination node by stopping the algorithm once the shortest path to the destination node has
been determined. For example, if the nodes of the graph represent cities and edge path costs
represent driving distances between pairs of cities connected by a direct road, Dijkstra's
algorithm can be used to find the shortest route between one city and all other cities. As a
result, the shortest path algorithm is widely used in network routing protocols, most notably IS-IS and Open Shortest Path First (OSPF). It is also employed as a subroutine in other algorithms
such as Johnson's.
Dijkstra's original algorithm does not use a min-priority queue and runs
in time O(|V|^2) (where |V| is the number of nodes). The idea of this algorithm is also given
in (Leyzorek et al. 1957). The implementation based on a min-priority queue implemented by
a Fibonacci heap and running in O(|E| + |V| log |V|) (where |E| is the number of edges)
is due to (Fredman & Tarjan 1984). This is asymptotically the fastest known single-source shortest-path algorithm for arbitrary directed graphs with unbounded non-negative
weights. However, specialized cases (such as bounded/integer weights, directed acyclic graphs,
etc.) can indeed be improved further, as detailed in Specialized variants.
In some fields, artificial intelligence in particular, Dijkstra's algorithm or a variant of it is
known as uniform-cost search and formulated as an instance of the more general idea of best-first search.

Contents

Algorithm

Description

Algorithm
Let the node at which we are starting be called the initial node. Let the distance of node Y be
the distance from the initial node to Y. Dijkstra's algorithm will assign some initial distance
values and will try to improve them step by step.

Assign to every node a tentative distance value: set it to zero for our initial node and to
infinity for all other nodes.

Set the initial node as current. Mark all other nodes unvisited. Create a set of all the
unvisited nodes called the unvisited set.

For the current node, consider all of its unvisited neighbors and calculate
their tentative distances. Compare the newly calculated tentative distance to the current
assigned value and assign the smaller one. For example, if the current node A is marked
with a distance of 6, and the edge connecting it with a neighbor B has length 2, then the
distance to B (through A) will be 6 + 2 = 8. If B was previously marked with a distance
greater than 8 then change it to 8. Otherwise, keep the current value.

When we are done considering all of the neighbors of the current node, mark the current
node as visited and remove it from the unvisited set. A visited node will never be
checked again.

If the destination node has been marked visited (when planning a route between two
specific nodes) or if the smallest tentative distance among the nodes in the unvisited
set is infinity (when planning a complete traversal; occurs when there is no connection
between the initial node and remaining unvisited nodes), then stop. The algorithm has
finished.
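The steps above can be sketched in Python. This is only an illustrative sketch, using a binary heap as the priority structure; the graph representation (a dict mapping each node to a list of (neighbor, weight) pairs) is an assumption made for the example, not part of the original text.

```python
import heapq

def dijkstra(graph, source):
    """Tentative distances start at infinity (zero for the source) and are
    lowered whenever a shorter route through a visited node is found."""
    dist = {node: float("inf") for node in graph}
    dist[source] = 0
    visited = set()
    heap = [(0, source)]  # entries are (tentative distance, node)
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue  # stale entry; this node was already finalized
        visited.add(node)  # a visited node is never checked again
        for neighbor, weight in graph[node]:
            new_dist = d + weight
            if new_dist < dist[neighbor]:  # keep the smaller value
                dist[neighbor] = new_dist
                heapq.heappush(heap, (new_dist, neighbor))
    return dist
```

With the example from the text, a node A at distance 6 whose edge to B has length 2 relabels B to 6 + 2 = 8 if B's previous label was larger.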

Description
Suppose you would like to find the shortest path between two intersections on a city map:
a starting point and a destination. Dijkstra's algorithm initially marks the distance (from the
starting point) to every other intersection on the map with infinity. This is done not to imply
there is an infinite distance, but to note that those intersections have not yet been visited; some
variants of this method simply leave the intersections' distances unlabeled. Now, at each
iteration, select the current intersection. For the first iteration, the current intersection will be
the starting point, and the distance to it (the intersection's label) will be zero. For subsequent
iterations (after the first), the current intersection will be the closest unvisited intersection to the
starting point (this will be easy to find).
From the current intersection, update the distance to every unvisited intersection that is directly
connected to it. This is done by determining the sum of the distance between an unvisited
intersection and the value of the current intersection, and relabeling the unvisited intersection
with this value (the sum), if it is less than its current value. In effect, the intersection is
relabeled if the path to it through the current intersection is shorter than the previously known
paths. To facilitate shortest path identification, in pencil, mark the road with an arrow pointing
to the relabeled intersection if you label/relabel it, and erase all others pointing to it. After you
have updated the distances to each neighboring intersection, mark the current intersection
as visited, and select the unvisited intersection with lowest distance (from the starting point)
or the lowest label as the current intersection. Nodes marked as visited are labeled with the
shortest path from the starting point to it and will not be revisited or returned to.
Continue this process of updating the neighboring
intersections with the shortest distances, then marking the current intersection as visited and
moving onto the closest unvisited intersection until you have marked the destination as visited.
Once you have marked the destination as visited (as is the case with any visited intersection)

you have determined the shortest path to it, from the starting point, and can trace your way
back, following the arrows in reverse; in the algorithm's implementations, this is usually done
(after the algorithm has reached the destination node) by following the nodes' parents from the
destination node up to the starting node; that's why we keep also track of each node's parent.
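The parent bookkeeping described above (the penciled arrows) can be sketched as follows. The function name and adjacency-dict format are assumptions for illustration; the sketch assumes the target is reachable from the source.

```python
import heapq

def shortest_path(graph, source, target):
    """Dijkstra with parent pointers; the path is recovered by following
    parents from the target back to the source (the arrows in reverse)."""
    dist = {node: float("inf") for node in graph}
    dist[source] = 0
    parent = {source: None}  # each node's predecessor on its best known path
    visited = set()
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == target:
            break  # destination marked visited: its label is final
        for neighbor, weight in graph[node]:
            if d + weight < dist[neighbor]:
                dist[neighbor] = d + weight
                parent[neighbor] = node  # redraw the arrow to point here
                heapq.heappush(heap, (d + weight, neighbor))
    # follow the nodes' parents from the destination up to the start
    path, node = [], target
    while node is not None:
        path.append(node)
        node = parent[node]
    return dist[target], path[::-1]
```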
This algorithm makes no attempt to direct "exploration" towards the destination as one might
expect. Rather, the sole consideration in determining the next "current" intersection is its
distance from the starting point. This algorithm therefore expands outward from the starting
point, iteratively considering every node that is closer in terms of shortest path distance until
it reaches the destination. When understood in this way, it is clear how the algorithm
necessarily finds the shortest path. However, it may also reveal one of the algorithm's
weaknesses: its relative slowness in some topologies.

7. COMPARISON OF DIFFERENT TYPES OF ALGORITHMS

Comparison With Other Shortest Path Algorithms


The Floyd-Warshall algorithm is a good choice for computing paths between all pairs of
vertices in dense graphs, in which most or all pairs of vertices are connected by edges.
For sparse graphs with non-negative edge weights, a better choice is to use Dijkstra's
algorithm from each possible starting vertex, since the running time of repeated Dijkstra
(O(|V||E| log |V|) using binary heaps) is better than the O(|V|^3) running time of the
Floyd-Warshall algorithm when |E| is significantly smaller than |V|^2. For sparse graphs with
negative edges but no negative cycles, Johnson's algorithm can be used, with the same
asymptotic running time as the repeated Dijkstra approach.
There are also known algorithms using fast matrix multiplication to speed up all-pairs shortest
path computation in dense graphs, but these typically make extra assumptions on the edge
weights (such as requiring them to be small integers). [13][14] In addition, because of the high
constant factors in their running time, they would only provide a speedup over the Floyd-Warshall algorithm for very large graphs.
Comparison of Floyd's algorithm with Dijkstra's algorithm

This post will compare two of the popular algorithms used to attack this shortest-path
problem in graph theory, Floyd's algorithm and Dijkstra's algorithm. I will briefly
describe each algorithm, compare the two, and then provide guidelines for choosing
between them. In addition, I will describe the results of test cases I ran on both
algorithms.

The shortest path problem is the problem of finding a path between two vertices of a
graph with a minimum total edge weight. The application of shortest path algorithms is
useful in solving many real-world problems. They can be applied to network and
telecommunications problems in order to find paths with the lowest delay. Additionally,
mapping software utilizes these types of algorithms to find the optimum route between
two cities.

Floyd's algorithm is an example of dynamic programming, a technique for solving
problems by breaking them into smaller sub-problems. Each sub-problem is solved only
once and the results are stored for later use. Dynamic programming algorithms are more
resource intensive than their non-dynamic counterparts, which gives them a higher space
complexity. (Dijkstra's algorithm, by contrast, is usually classed as a greedy
algorithm, as discussed below.)

Floyd's Algorithm

Stephen Warshall and Robert Floyd independently discovered Floyd's algorithm in 1962. In
addition, Bernard Roy discovered this algorithm in 1959. This algorithm is sometimes referred
to as the Warshall-Floyd algorithm or the Roy-Floyd algorithm. The algorithm solves a type of
problem called the all-pairs shortest-path problem, meaning that it finds the shortest path
between all the vertices of a given graph. Actually, the Warshall version of the algorithm finds
the transitive closure of a graph, but it does not use weights when finding a path. The Floyd
algorithm is essentially the same as the Warshall algorithm except it adds weights to the
distance calculation.
This algorithm works by estimating the shortest path between two
vertices and further improving that estimate until it is optimum. Consider a graph G, with
vertices V numbered 1 to n. The algorithm first finds the shortest path from i to j using only
vertices 1 to k as intermediates, where k <= n. Next, using the previous result, the algorithm
finds the shortest path from i to j using vertices 1 to k+1. We continue using this method until
k = n, at which time we have the shortest path between all vertices. This algorithm has a time
complexity of O(n^3), where n is the number of vertices in the graph. This is noteworthy
because we must test up to n^2 vertex pairs.
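As a sketch, the k-indexed improvement just described corresponds to the classic triple loop. The matrix representation (an n-by-n list of lists with float("inf") for missing edges and 0 on the diagonal) is an assumption made for this illustration.

```python
def floyd_warshall(dist):
    """dist[i][j] is the direct edge weight from i to j (inf if absent,
    0 on the diagonal). The k-th pass allows vertex k as an intermediate
    point, improving the shortest-path estimates; three nested loops over
    n vertices give the O(n^3) running time noted above."""
    n = len(dist)
    d = [row[:] for row in dist]  # copy, to avoid mutating the caller's matrix
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d
```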
Dijkstra's Algorithm

Edsger Dijkstra discovered Dijkstra's algorithm in 1959. This algorithm solves the single-source shortest-path problem by finding the shortest path between a given source vertex and all
other vertices. This algorithm works by first finding the path with the lowest total weight
between two vertices, j and k. It then uses the fact that if a node r lies on the shortest path
from j to k, the sub-path from j to r is itself a minimal path. In the end, we will have the
shortest path from a given vertex to all
other vertices in the graph. Dijkstra's algorithm belongs to a class of algorithms known as
greedy algorithms. A greedy algorithm makes the decision that seems the most promising at a
given time and then never reconsiders that decision.
The time complexity of Dijkstra's algorithm is dependent upon the internal data
structures used for implementing the queue and representing the graph. When using an
adjacency list to represent the graph and an unordered array to implement the queue, the time
complexity is O(n^2), where n is the number of vertices in the graph. However, using an
adjacency list to represent the graph and a min-heap to represent the queue, the time
complexity can go as low as O(e log n), where e is the number of edges. It is possible to get an
even lower time complexity by using more complicated and memory-intensive internal data
structures, but that is beyond the scope of this paper.


Comparison and Experiment

When using a naive implementation of Dijkstra's algorithm the time complexity is
quadratic, which is much better than the cubic time complexity of Floyd's
algorithm. However, Dijkstra's algorithm returns only a subset of Floyd's algorithm's result.
Specifically, it returns the shortest path between a given vertex and all other vertices,
while Floyd's algorithm returns the shortest path between all vertices. It is
interesting to note that if you run Dijkstra's algorithm n times, on n different vertices,
you will have a theoretical time complexity of O(n * n^2) = O(n^3). In other words, if you
use Dijkstra's algorithm to find a path from every vertex to every other vertex you will
have the same efficiency and result as using Floyd's algorithm.
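This equivalence can be sketched directly: run a single-source Dijkstra once per vertex and collect the results into an all-pairs table. The compact helper below repeats the heap-based Dijkstra sketch; function names and the adjacency-dict format are assumptions for illustration.

```python
import heapq

def dijkstra(adj, src):
    """Single-source shortest distances via a binary heap."""
    dist = {v: float("inf") for v in adj}
    dist[src] = 0
    heap, seen = [(0, src)], set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def all_pairs_dijkstra(adj):
    """n runs of Dijkstra, one per source vertex: O(n * n^2) with a naive
    queue, matching Floyd's O(n^3) but producing the same all-pairs table."""
    return {src: dijkstra(adj, src) for src in adj}
```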

In order to test the efficiency of these algorithms I ran several test cases. I implemented
Dijkstras algorithm using a priority queue and I ran each test case 1,000 times. All of
the results are aggregations of the 1,000 runs, which gives me a larger, more
manageable number. I ran six test cases, for each algorithm, varying the number of
vertices in the graph. I used an automated method for creating edges, so the sparseness
of each graph is always the same.

In my implementation, the Floyd algorithm is actually faster when the number of
vertices is small. Only after the number of vertices grows to more than ten does the
Dijkstra algorithm become faster. When running Dijkstra's algorithm n times (to get
all-pairs shortest paths) the time complexity quickly grows greater than Floyd's
algorithm's. Additional tests show that when running Dijkstra's algorithm for more than
a quarter of the vertices, the time complexity exceeds that of Floyd's algorithm.

The chart in figure 1 shows the total number of seconds for various values of n (for
1,000 iterations of each algorithm). As the number of vertices doubles from 80 to 160,
the time increases by a factor of 8, which is consistent with cubic time complexity. There is
only a small increase in time for Dijkstra's algorithm over the same values of n. In fact,
the time for Dijkstra's algorithm increases by a factor of about 2.3 as the value of n doubles,
which is consistent with near-linear (n log n) time complexity.


However, we must keep in mind that Floyd's algorithm finds the shortest path between all
vertices, while Dijkstra's algorithm finds the shortest path from a single vertex to all other
vertices. Therefore, the comparison between the two is not necessarily valid. If we want to use
Dijkstra's algorithm to find the shortest path for all vertices we must run it n times, once for
each vertex.
The chart in figure 2 shows the total number of seconds for the same values of n, but with each
iteration of Dijkstra's algorithm being repeated n times. The result is that Dijkstra's algorithm
has also found the shortest path between all vertices, but the time required increases by a
factor of about 6 when the value of n doubles.

In order to find the point at which Floyd's algorithm is more efficient than Dijkstra's algorithm
I ran an additional test, using only the graphs with 80 nodes. The chart in figure 3 shows the
results for finding the paths for a varying number of vertices. It shows that finding the shortest
paths for 21 vertices takes about 13 seconds, which is higher than the 12.65 seconds Floyd's
algorithm needs to find all paths.


Table 1 below gives the raw data used for the charts in figures 1 and 2. Table 2 shows the raw
data for Dijkstra's algorithm when called for a different number of vertices.

Conclusion

Both Floyd's and Dijkstra's algorithms may be used for finding the shortest path between
vertices. The biggest difference is that Floyd's algorithm finds the shortest path between all
vertices and Dijkstra's algorithm finds the shortest path between a single vertex and all other
vertices. The space overhead for Dijkstra's algorithm is considerably more than that for
Floyd's algorithm. In addition, Floyd's algorithm is much easier to implement.
In most cases, for a small number of vertices, the savings of using Dijkstra's algorithm
are negligible and probably not worth the effort and overhead required. However, when the
number of vertices increases the performance of Floyd's algorithm drops quickly. Therefore,
the use of Dijkstra's algorithm can provide a solution when performance is a factor. On the
other hand, if you will need the shortest path between several vertices on the same graph you
may want to consider Floyd's algorithm. In the test case, running Dijkstra's algorithm for more
than a quarter of the vertices decreased performance below that of running Floyd's algorithm.

8. APPLICATIONS

Application
Shortest-paths is a broadly useful problem-solving model

Maps

Robot navigation.

Texture mapping.

Typesetting in TeX.

Urban traffic planning.

Optimal pipelining of VLSI chips.

Subroutine in advanced algorithms.

Telemarketer operator scheduling.

Routing of telecommunications messages.

Approximating piecewise linear functions.

Network routing protocols (OSPF, BGP, RIP).



Exploiting arbitrage opportunities in currency exchange.

Optimal truck routing through given traffic congestion pattern.

Application of Different Shortest Path Algorithms In Daily Life

The shortest path problem is the problem of finding a path between two vertices (or nodes) in a
graph such that the sum of the weights of its constituent edges is minimized. (Shortest path
problem - Wikipedia, the free encyclopedia, 2011) In other words, when we have to find a path
with minimum cost to go from a place to another place which there are a number of
intermediate points in between to travel to with different costs, we are dealing with the shortest
path problems. It should be noted that the phrase shortest path here does not necessarily
mean physically shortest distance, but a path with minimum weight which can be measured in,
say, time or monetary cost. Actually, the shortest path problems are closely related to our daily
life. For example, we all have to travel within the campus when we attend different lectures,
such as going from Meng Wah Complex (MW) to the Main Building (MB).

Graph 1: The map of the University of Hong Kong

In fact, we should wisely choose our path so we can travel with the least amount of time in
order to arrive on time, especially when the next lesson starts only 5 minutes later. This will be
an example of a shortest path problem and the weight will be the time cost. Certainly different
routes will involve different buildings and pathways, some of which are less time-consuming. For

instance, we may go via Chong Yuet Ming Amenities Centre (CYA) and K K Leung Building
(KK), or we may walk on the University Drive to Haking Wong Building (HW) first and pass
through the podium of Kadoorie Biological Sciences Building (KBS) and reach MB. However,
there are many other possible routes and it is quite impossible for us to try out all possible
routes to find the one with the least time needed. Therefore, we need a more effective method
to find such a path, and we may apply Dijkstra's algorithm to solve this problem.

Graph 2: A simplified map of the University of Hong Kong with checkpoint marked

Dijkstra's algorithm is an algorithm that can find the shortest paths from a single source in a
graph. It adopts the concept of the greedy approach. First, we have to initialize the tentative
estimated time needed from the starting point (the source, which is MW in this case) to every
checkpoint in the University of Hong Kong (the nodes, e.g. CYA). Obviously, the time needed
to go to the starting point from the starting point is 0. However, we do not know the time
needed to go to the other checkpoints from the starting point yet, so we may take them as
infinity. Then we choose a checkpoint which takes the minimum time to reach from the
starting point (choose any one if more than one checkpoint needs the same minimum time) and
update the neighboring checkpoints' estimated times, i.e. if the time needed to go to the next
checkpoint (CPnext) via the chosen checkpoint (CPchosen) is less, we will go to CPnext via
CPchosen instead and replace the time needed to go to CPnext by the sum of the time needed
to go to CPchosen from the starting point and the time needed to go to CPnext from CPchosen.
We repeat the above process until all checkpoints are considered.
So, we know that Dijkstra's algorithm is useful in solving this kind of shortest path
problem. Then can we apply Dijkstra's algorithm to every situation to solve shortest
path problems? Unfortunately, the answer is no. Dijkstra's algorithm will fail to find the
path with minimum weight if the graph contains some negative weights. At first glance, this
situation seems impossible to happen, as you may wonder since there will never be a path that
will reduce the time needed for the journey. Yet, according to Robert Sedgewick, "Negative
weights are not merely a mathematical curiosity; [...] [they] arise in a natural way when we
reduce other problems to shortest-paths problems". (Sedgewick, 2003) (Bellman-Ford
algorithm - Wikipedia, the free encyclopedia, 2011) Besides, the weight need not be
time only. Consider the same situation that you have to go from one place to another in the
University of Hong Kong, but you have to pay if you go through some places (e.g. the Run Run
Shaw Podium (RR)) and some other places may pay you back instead (e.g. the Starbucks may
be giving out free coffee at the Sun-Yat-Sen Place). For simplicity, we may consider going from
MW to MB again. Suppose you can make a handsome profit (e.g. +$1000) going only from
the Main Library New Wing (LBN) to KK, but going from RR to LBN costs you a lot (e.g.
-$300), and assume travelling between other checkpoints costs nothing. Then according to
Dijkstra's algorithm, you will choose the LBN checkpoint last, since the only incoming edge is
from RR to LBN, which lowers the priority of LBN. And even if you update the neighboring
checkpoints of LBN, the cost of MB cannot be changed since all other checkpoints are already
visited. So Dijkstra's algorithm cannot give an optimal solution.
We need another algorithm to deal with this kind of problem, and we may apply another
method -- the Bellman-Ford algorithm.
Bellman-Ford is in its basic structure very similar to Dijkstra's algorithm, but instead of
greedily selecting the minimum-weight node not yet processed to relax, it simply relaxes all the
edges, and does this |V| - 1 times, where |V| is the number of vertices in the graph. (Bellman-Ford
algorithm - Wikipedia, the free encyclopedia, 2011) This means that we will repeat the
updating process of Dijkstra's algorithm for all checkpoints instead of the neighboring
checkpoints only, and repeat this for every checkpoint. This guarantees that every checkpoint
is updated every time and promises the optimality of the path even if there are negative
weights. Nonetheless, this will result in many repeated procedures (overlapping sub-problems)
and we may apply the concept of Dynamic Programming in order to speed up the Bellman-Ford
algorithm, i.e. using the previously updated checkpoints' information to update the next
checkpoint.
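The relax-all-edges idea can be sketched as follows. The checkpoint names and dollar weights in the usage example are illustrative assumptions loosely based on the campus scenario above, not figures from the text.

```python
def bellman_ford(vertices, edges, source):
    """Relax every edge |V| - 1 times. Unlike Dijkstra's algorithm,
    negative edge weights are handled correctly (detecting negative
    cycles would need one extra pass, omitted here)."""
    dist = {v: float("inf") for v in vertices}
    dist[source] = 0
    for _ in range(len(vertices) - 1):
        for u, v, w in edges:  # relax all edges, not just a node's neighbors
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist
```

For example, with a profitable LBN-to-KK edge of weight -1000 and an expensive RR-to-LBN edge of weight 300 (other weights invented for the sketch), the algorithm still finds the cheapest route from MW to MB.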

All in all, shortest path problems are closely connected to our daily life. This is
because many problems can be transformed into shortest path problems, though they may not
seem to be shortest path problems at first glance, if we analyze the problems carefully. I think
this leads to another observation: we should, in terms of both algorithms and problem solving
skills, try to think in multiple dimensions and should not simply limit ourselves to one
solution only. For example, there are different algorithms to solve shortest path problems,
such as Dijkstra's algorithm and the Bellman-Ford algorithm covered in this survey and many
others not discussed in this survey. However, different algorithms may have their own strengths
and drawbacks (e.g. Dijkstra's algorithm is faster and simpler but cannot deal with graphs with
negative weights) and we should not be constrained to using only one in all situations.

9. CONCLUSION

The A* algorithm can achieve better running time by using a Euclidean heuristic function,
although its theoretical time complexity is still the same as Dijkstra's. It can also guarantee to
find the shortest path. The restricted algorithm can find the optimal path within linear time, but
the restricted area has to be carefully selected. The selection actually depends on the graph
itself. A smaller selected area gives less search time, but the tradeoff is that it may not find
the shortest path or it may not find any path. This algorithm can be used in a way that allows
searching again with an increased factor if the first search fails.
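The Euclidean-heuristic idea mentioned above can be sketched as Dijkstra's algorithm with the priority biased by straight-line distance to the target. The coordinate dict and function names are assumptions for illustration; with non-negative weights and this admissible heuristic, the returned distance equals the true shortest-path length.

```python
import heapq
import math

def a_star(adj, coords, source, target):
    """A* search: like Dijkstra, but nodes are popped in order of
    dist[v] + h(v), where h(v) is the Euclidean distance to the target."""
    def h(v):
        (x1, y1), (x2, y2) = coords[v], coords[target]
        return math.hypot(x1 - x2, y1 - y2)
    dist = {v: float("inf") for v in adj}
    dist[source] = 0
    seen = set()
    heap = [(h(source), source)]
    while heap:
        _, u = heapq.heappop(heap)
        if u == target:
            return dist[u]  # target popped: its distance is final
        if u in seen:
            continue
        seen.add(u)
        for v, w in adj[u]:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                heapq.heappush(heap, (dist[v] + h(v), v))
    return float("inf")  # target unreachable
```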

10. REFERENCES

Sanders, Peter (March 23, 2009). "Fast route planning". Google Tech Talk.

Chen, Danny Z. (December 1996). "Developing algorithms and software for geometric path planning problems". ACM Computing Surveys 28 (4es): 18. doi:10.1145/242224.242246.

Abraham, Ittai; Fiat, Amos; Goldberg, Andrew V.; Werneck, Renato F. "Highway Dimension, Shortest Paths, and Provably Efficient Algorithms". ACM-SIAM Symposium on Discrete Algorithms, pages 782-793, 2010.

Abraham, Ittai; Delling, Daniel; Goldberg, Andrew V.; Werneck, Renato F. "A Hub-Based Labeling Algorithm for Shortest Paths on Road Networks". Symposium on Experimental Algorithms, pages 230-241, 2011. research.microsoft.com/pubs/142356/HL-TR.pdf

Kroger, Martin (2005). "Shortest multiple disconnected path for the analysis of entanglements in two- and three-dimensional polymeric systems". Computer Physics Communications 168: 209-232. doi:10.1016/j.cpc.2005.01.020.

Ahuja, Ravindra K.; Magnanti, Thomas L.; Orlin, James B. (1993). Network Flows: Theory, Algorithms and Applications. Prentice Hall. ISBN 0-13-617549-X.

Baras, John; Theodorakopoulos, George (4 April 2010). Path Problems in Networks. Morgan & Claypool Publishers. p. 9. ISBN 978-1-59829-924-3.
