^"V.

^^

Dewey

ALFRED

P.

WORKING PAPER SLOAN SCHOOL OF MANAGEMENT

NETWORK FLOWS
Ravindra K. Ahuja Thomas L. Magnanti James B. Orlin

Sloan W.P. No. 2059-88

August 1988 Revised: December, 1988

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 50 MEMORIAL DRIVE CAMBRIDGE, MASSACHUSETTS 02139


NETWORK FLOWS

Ravindra K. Ahuja*, Thomas L. Magnanti, and James B. Orlin
Sloan School of Management
Massachusetts Institute of Technology
Cambridge, MA 02139

*On leave from Indian Institute of Technology, Kanpur - 208016, INDIA


NETWORK FLOWS

OVERVIEW

Introduction
1.1 Applications
1.2 Complexity Analysis
1.3 Notation and Definitions
1.4 Network Representations
1.5 Search Algorithms
1.6 Developing Polynomial-Time Algorithms

Basic Properties of Network Flows
2.1 Flow Decomposition Properties and Optimality Conditions
2.2 Cycle Free and Spanning Tree Solutions
2.3 Networks, Linear and Integer Programming
2.4 Network Transformations

Shortest Paths
3.1 Dijkstra's Algorithm
3.2 Dial's Implementation
3.3 R-Heap Implementation
3.4 Label Correcting Algorithms
3.5 All Pairs Shortest Path Algorithm

Maximum Flows
4.1 Labeling Algorithm and the Max-Flow Min-Cut Theorem
4.2 Decreasing the Number of Augmentations
4.3 Shortest Augmenting Path Algorithm
4.4 Preflow-Push Algorithms
4.5 Excess-Scaling Algorithm

Minimum Cost Flows
5.1 Duality and Optimality Conditions
5.2 Relationship to Shortest Path and Maximum Flow Problems
5.3 Negative Cycle Algorithm
5.4 Successive Shortest Path Algorithm
5.5 Primal-Dual and Out-of-Kilter Algorithms
5.6 Network Simplex Algorithm
5.7 Right-Hand-Side Scaling Algorithm
5.8 Cost Scaling Algorithm
5.9 Double Scaling Algorithm
5.10 Sensitivity Analysis
5.11 Assignment Problem

Reference Notes

References


NETWORK FLOWS

Perhaps no subfield of mathematical programming is more alluring than network optimization. Highway, rail, electrical, communication and many other physical networks pervade our everyday lives. As a consequence, even non-specialists recognize the practical importance and the wide ranging applicability of networks. Moreover, because the physical operating characteristics of networks (e.g., flows on arcs and mass balance at nodes) have natural mathematical representations, practitioners and non-specialists can readily understand the mathematical descriptions of network optimization problems and the basic nature of techniques used to solve these problems. This combination of widespread applicability and ease of assimilation has undoubtedly been instrumental in the evolution of network planning models as one of the most widely used modeling techniques in all of operations research and applied mathematics.

Network optimization is also alluring to methodologists. Networks provide a concrete setting for testing and devising new theories. Indeed, network optimization has inspired many of the most fundamental results in all of optimization. For example, price directive decomposition algorithms for both linear programming and combinatorial optimization had their origins in network optimization. So did cutting plane methods and branch and bound procedures of integer programming, primal-dual methods of linear and nonlinear programming, and polyhedral methods of combinatorial optimization. In addition, networks have served as the major prototype for several theoretical domains (for example, the field of matroids) and as the core model for a wide variety of min/max duality results in discrete mathematics.

Moreover, network optimization has served as a fertile meeting ground for ideas from optimization and computer science. Many results in network optimization are routinely used to design and evaluate computer systems, and ideas from computer science concerning data structures and efficient data manipulation have had a major impact on the design and implementation of many network optimization algorithms.

The aim of this paper is to summarize many of the fundamental ideas of network optimization. In particular, we concentrate on network flow problems and highlight a number of recent theoretical and algorithmic advances. We have divided the discussion into the following broad major topics:

Applications
Basic Properties of Network Flows
Shortest Path Problems
Maximum Flow Problems
Minimum Cost Flow Problems
Assignment Problems

Much of our discussion focuses on the design of provably good (e.g., polynomial-time) algorithms. Among good algorithms, we have presented those that are simple and are likely to be efficient in practice. We have attempted to structure our discussion so that it not only provides a survey of the field for specialists, but also serves as an introduction and summary to non-specialists who have a basic working knowledge of the rudiments of optimization, particularly linear programming.

In this chapter, we limit our discussion to the problems listed above. Some important generalizations of these problems, such as (i) the generalized network flows, (ii) the multicommodity flows, and (iii) the network design, will not be covered in our survey. We will, however, briefly describe these problems in Section 6.6 and provide some important references.

As a prelude to the remainder of our discussion, in this section we present several important preliminaries. We discuss (i) different ways to measure the performance of algorithms; (ii) graph notation and various ways to represent networks quantitatively; (iii) a few basic ideas from computer science that underlie the design of many algorithms; and (iv) two generic proof techniques that have proven to be useful in designing polynomial-time algorithms.

1.1 Applications

Networks arise in numerous application settings and in a variety of guises. In this section, we briefly describe a few prototypical applications. Our discussion is intended to illustrate a range of applications and to be suggestive of how network flow problems arise in practice; a more extensive survey would take us far beyond the scope of our discussion. To illustrate the breadth of network applications, we will consider four different types of networks arising in practice:

• Physical networks (streets, railbeds, pipelines, wires)
• Route networks
• Space-time networks (scheduling networks)
• Derived networks (through problem transformations)

These four categories are not exhaustive and overlap in coverage. Nevertheless, they provide a useful taxonomy for summarizing a variety of applications. Network flow models are also used for several purposes:

• Descriptive modeling (answering "what is?" questions)
• Predictive modeling (answering "what will be?" questions)
• Normative modeling (answering "what should be?" questions, that is, performing optimization)

We will illustrate models in each of these categories. We first introduce the basic underlying network flow model and some useful notation.

The Network Flow Model

Let G = (N, A) be a directed network with a cost c_{ij}, a lower bound l_{ij}, and a capacity u_{ij} associated with every arc (i, j) \in A. We associate with each node i \in N an integer number b(i) representing its supply or demand. If b(i) > 0, then node i is a supply node; if b(i) < 0, then node i is a demand node; and if b(i) = 0, then node i is a transshipment node. Let n = |N| and m = |A|. The minimum cost network flow problem can be formulated as follows:

Minimize \sum_{(i,j) \in A} c_{ij} x_{ij}     (1.1a)

subject to

\sum_{\{j : (i,j) \in A\}} x_{ij} - \sum_{\{j : (j,i) \in A\}} x_{ji} = b(i),  for all i \in N,     (1.1b)

l_{ij} \le x_{ij} \le u_{ij},  for all (i,j) \in A.     (1.1c)

We refer to the vector x = (x_{ij}) as the flow in the network. The constraint (1.1b) implies that the total flow out of a node minus the total flow into that node must equal the net supply/demand of the node.

We henceforth refer to this constraint as the mass balance constraint. The flow must also satisfy the lower bound and capacity constraints (1.1c), which we refer to as the flow bound constraints. The flow bounds might model physical capacities, contractual obligations or simply operating ranges of interest. Frequently, the given lower bounds l_{ij} are all zero; we show later that they can be made zero without any loss of generality.

In matrix notation, we represent the minimum cost flow problem as

minimize { cx : Nx = b and l \le x \le u },     (1.2)

in terms of a node-arc incidence matrix N. The matrix N has one row for each node of the network and one column for each arc. We let N_{ij} represent the column of N corresponding to arc (i, j), and let e_j denote the j-th unit vector, which is a column vector of size n whose entries are all zeros except for the j-th entry, which is 1. Note that each flow variable x_{ij} appears in two mass balance equations: as an outflow from node i with a +1 coefficient, and as an inflow to node j with a -1 coefficient. Therefore the column N_{ij} = e_i - e_j.

The matrix N has very special structure: only 2m out of its nm total entries are nonzero, all of its nonzero entries are +1 or -1, and each column has exactly one +1 and one -1. Figure 1.1 gives an example of the node-arc incidence matrix. Later in Sections 2 and 3 we consider some of the consequences of this special structure. For now, we make two observations.

(i) Summing all the mass balance constraints eliminates all the flow variables and gives

\sum_{i \in N} b(i) = 0, or equivalently, \sum_{\{i \in N : b(i) > 0\}} b(i) = - \sum_{\{i \in N : b(i) < 0\}} b(i).

Consequently, total supply must equal total demand if the mass balance constraints are to have any feasible solution.

(ii) If the total supply does equal the total demand, then summing all the mass balance equations gives the zero equation 0x = 0; equivalently, any equation is equal to minus the sum of all other equations, and hence is redundant.
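To make these observations concrete, we give a short sketch below. It is our own illustration, not part of the original development: it builds the node-arc incidence matrix from an arc list and checks that every column has the form e_i - e_j and that the rows of N sum to the zero vector.

```python
# A minimal sketch (illustration only): build the node-arc incidence
# matrix N of a directed network and check the two observations above.

def incidence_matrix(n, arcs):
    """n = number of nodes (numbered 1..n); arcs = list of (tail, head).
    Returns an n x m matrix whose column k equals e_i - e_j for the
    k-th arc (i, j)."""
    N = [[0] * len(arcs) for _ in range(n)]
    for k, (i, j) in enumerate(arcs):
        N[i - 1][k] = +1     # outflow from the tail i
        N[j - 1][k] = -1     # inflow into the head j
    return N

N = incidence_matrix(4, [(1, 2), (1, 3), (2, 3), (3, 4), (2, 4)])

# Each column has exactly one +1 and one -1; only 2m of the nm entries
# are nonzero.
assert all(sorted(col) == [-1, 0, 0, 1] for col in zip(*N))

# Summing the rows of N gives the zero vector, so Nx = b can hold only
# if total supply equals total demand: the b(i) must sum to zero.
assert [sum(row[k] for row in N) for k in range(5)] == [0] * 5
```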

Figure 1.1. (a) An example network; (b) its node-arc incidence matrix.

The following special cases of the minimum cost flow problem play a central role in the theory and applications of network flows. The assignment problem is one example: it is a minimum cost flow problem on a network G = (N_1 \cup N_2, A), with A \subseteq N_1 \times N_2 representing possible person-to-object assignments, b(i) = 1 for all i \in N_1, and b(i) = -1 for all i \in N_2 (we set l_{ij} = 0 and u_{ij} = 1 for all (i, j) \in A); a cost c_{ij} is associated with each element (i, j) of A. The objective is to assign each person to exactly one object in a way that minimizes the cost of the assignment.

Physical Networks

The familiar city street map is perhaps the prototypical physical network, and the one that most readily comes to mind when we envision a network. Many network planning problems arise in this problem context. As one illustration, consider the problem of managing, or designing, a street network to decide upon such issues as speed limits, one way street assignments, or whether or not to construct a new road or bridge. In order to make these decisions intelligently, we need a descriptive model that tells us how to model traffic flows and measure the performance of any design, as well as a predictive model for measuring the effect of any change in the system. We can then use these models to answer a variety of "what if" planning questions.

The following type of equilibrium network flow model permits us to answer these types of questions. Each link of the network has an associated delay function that specifies how long it takes to traverse the link. The time to traverse a link depends upon traffic conditions: the more traffic that flows on the link, the longer is the travel time to traverse it. Now suppose that each user of the system has a point of origin (e.g., his or her home) and a point of destination (e.g., his or her workplace in the central business district). Each of these users must choose a route through the network. Note, however, that these route choices affect each other: if two users traverse the same link, they add to each other's travel time because of the added congestion on the link. Now let us make the behavioral assumption that each user wishes to travel between his or her origin and destination as quickly as possible, that is, along a shortest travel time path. This situation leads to the following equilibrium problem with an embedded set of network optimization problems (shortest path problems): is there a flow pattern in the network with the property that no user can unilaterally change his (or her) choice of origin to destination path (that is, assuming all other users continue to use their specified paths in the equilibrium solution) to reduce his travel time? Operations researchers have developed a set of sophisticated models for this problem setting, as well as related theory (concerning, for example, the existence and uniqueness of equilibrium solutions) and algorithms for computing equilibrium solutions.

Used in the mode of "what if" scenario analysis, these models permit analysts to answer the types of questions we posed previously. These models are actively used in practice. Indeed, the Urban Mass Transit Authority in the United States requires that communities perform a network equilibrium impact analysis as part of the process for obtaining federal funds for highway construction or improvement.

The basic equilibrium model of electrical networks is another example. In this setting, Ohm's Law serves as the analog of the congestion function for the traffic equilibrium problem, and Kirchhoff's Law represents the network mass balance equations.

Another type of physical network is a very large-scale integrated circuit (VLSI circuit). In this setting the nodes of the network correspond to electrical components and the links correspond to wires that connect these components. Numerous network planning problems arise in this problem context: for example, how can we lay out, or design, the smallest possible integrated circuit to make the necessary connections between its components and maintain necessary separations between the wires (to avoid electrical interference)?

Similar types of models arise in many other problem contexts. For example, a network equilibrium model forms the heart of the Project Independence Energy Systems (PIES) model developed by the U.S. Department of Energy as an analysis tool for guiding public policy on energy.

Route Networks

Route networks, which are one level of abstraction removed from physical networks, are familiar to most students of operations research and management science. The traditional operations research transportation problem is illustrative. A shipper with supplies at its plants must ship to geographically dispersed retail centers, each with a given customer demand. Each arc connecting a supply point to a retail center incurs costs based upon some physical network, in this case the transportation network. Rather than solving the problem directly on the physical network, we preprocess the data and construct transportation routes. Consequently, an arc connecting a supply point and a retail center might correspond to a complex four leg distribution channel with legs (i) from a plant (by truck) to a rail station, (ii) from the rail station to a rail head elsewhere in the system, (iii) from the rail head (by truck) to a distribution center, and (iv) from the distribution center (on a local delivery truck) to the final customer (or in some cases just to the distribution center).

If we assign to each arc the composite distribution cost of all the intermediary legs, this classic problem becomes a network transportation model: find the flows from plants to customers that minimize overall costs. This type of model is used in numerous applications. As but one illustration, a prize winning practice paper written several years ago described an application of such a network planning system by the Cahill May Roberts Pharmaceutical Company (of Ireland) to reduce overall distribution costs by 20%, while improving customer service as well.

Many related problems arise in this type of problem setting, for instance, the design issue of deciding upon the location of the distribution centers. It is possible to address this type of decision problem by using integer programming methodology to choose the distribution sites and network flows to cost out (or optimize flows) for any given choice of sites. In this application context, a noted study conducted several years ago using this approach permitted Hunt Wesson Foods Corporation to save over $1 million annually.

One special case of the transportation problem merits note: the assignment problem that we introduced previously in this section. This problem has numerous applications, particularly in problem contexts such as machine scheduling. In this problem context, we would identify the supply points with jobs to be performed, the demand points with available machines, and the cost associated with arc (i, j) as the cost of completing job i on machine j. The solution to the problem specifies the minimum cost assignment of the jobs to the machines, assuming that each machine has the capacity to perform only one job.

Space Time Networks

Frequently in practice, we wish to schedule some production or service activity over time. In these instances it is often convenient to formulate a network flow problem on a "space-time network" with several nodes representing a particular facility (a machine, a warehouse, an airport) at different points in time.

Figure 1.2, which represents a core planning model in production planning, the economic lot size problem, is an important example. In this problem context, we wish to meet prescribed demands d_t for a product in each of the T time periods. In each period, we can produce at level x_t and/or we can meet the demand by drawing upon inventory I_t from the previous period. The network representing this problem has T + 1 nodes: one node t = 1, 2, ..., T represents each of the planning periods, and one node 0 represents the "source" of all production.

The flow on arc (0, t) prescribes the production level x_t in period t, and the flow on arc (t, t + 1) represents the inventory level I_t to be carried from period t to period t + 1. The mass balance equation for each period t models the basic accounting equation: incoming inventory plus production in that period must equal demand plus final inventory. The mass balance equation for node 0 indicates that all demand (assuming zero beginning inventory and zero final inventory over the entire planning period) must be produced in some period t = 1, 2, ..., T. Whenever the production and holding costs are linear, this problem is easily solved as a shortest path problem: for each demand period, we must find the minimum cost path of production and inventory arcs from node 0 to that demand point. If we impose capacities on production or inventory, the problem becomes a minimum cost network flow problem.

Figure 1.2. Network flow model of the economic lot size problem.

One extension of this economic lot sizing problem arises frequently in practice. Assume that production x_t in any period incurs a fixed cost: that is, whenever we produce in period t (i.e., x_t > 0), we incur a fixed cost h_t, no matter how much or how little we produce. In addition we may incur a per unit production cost c_t in period t and a per unit inventory cost for carrying any unit of inventory from period t to period t + 1. Hence, the cost on each arc for this problem is either linear (for inventory carrying arcs) or linear plus a fixed cost (for production arcs). Consequently, the objective function for the problem is concave. As we indicate in Section 2, any such concave cost network flow problem always has a special type of optimum solution known as a spanning tree solution. For this problem, a spanning tree solution decomposes into disjoint directed paths; the first arc on each path is a production arc (of the form (0, t)) and each other arc is an inventory carrying arc. This observation implies the following production property: each time we produce, we produce enough to meet the demand for an integral number of contiguous periods. Moreover, in no period do we both carry inventory from the previous period and produce.

The production property permits us to solve the problem very efficiently as a shortest path problem on an auxiliary network G' defined as follows. The network G' consists of nodes 1 to T + 1, and for every pair of nodes i and j with i < j, it contains an arc (i, j). The length of arc (i, j) is equal to the production and inventory cost of satisfying the demand of the periods from i to j - 1. Observe that for every production schedule satisfying the production property, G' contains a directed path from node 1 to node T + 1 of the same objective function value, and vice-versa. Hence we can obtain the optimum production schedule by solving a shortest path problem.
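The following sketch carries out this construction for the fixed-charge lot size problem. The data arrays and function names are our own hypothetical illustration; since G' is acyclic, with every arc pointing from a lower to a higher numbered node, a single dynamic programming pass computes the shortest path from node 1 to node T + 1.

```python
# A sketch (our illustration) of the auxiliary shortest path network G'.
# d[t] = demand, h[t] = fixed production cost, c[t] = unit production
# cost, g[t] = unit cost of holding inventory from period t to t+1.
# All arrays are indexed 1..T (index 0 unused).

def optimum_lot_sizing_cost(T, d, h, c, g):
    def arc_cost(i, j):
        # Produce in period i to satisfy the demands of periods i..j-1.
        cost = h[i] + c[i] * sum(d[t] for t in range(i, j))
        carried = 0
        for t in range(j - 1, i, -1):    # units held through period t-1
            carried += d[t]
            cost += g[t - 1] * carried
        return cost

    INF = float('inf')
    dist = [INF] * (T + 2)               # dist[j] = shortest path to node j
    dist[1] = 0
    for j in range(2, T + 2):
        dist[j] = min(dist[i] + arc_cost(i, j) for i in range(1, j))
    return dist[T + 1]

# Three periods with hypothetical data; the optimum schedule satisfies
# the production property: each production run covers the demand of a
# contiguous block of periods.
cost = optimum_lot_sizing_cost(3, d=[0, 2, 3, 2], h=[0, 10, 10, 10],
                               c=[0, 1, 1, 1], g=[0, 1, 1, 1])
```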

Many enhancements of the model are possible: for example, (i) the production facility might have limited production capacity or limited storage for inventory, or (ii) the facility might be producing several products that are linked by shared production costs or by changeover costs (for example, we may need to change dies in an automobile stamping plant when making different types of fenders). In most cases, these enhanced models are quite difficult to solve (they are NP-complete), though the embedded network structure often proves to be useful in designing either heuristic or optimization methods.

Another classical network flow scheduling problem is the airline scheduling problem used to identify a flight schedule for an airline. In this application setting, each node represents both a geographical location (e.g., an airport) and a point in time (e.g., New York at 10 A.M.). The arcs are of two types: (i) service arcs connecting two airports, for example New York at 10 A.M. to Boston at 11 A.M.; (ii) layover arcs that permit a plane to stay at New York from 10 A.M. until 11 A.M. to wait for a later flight, or to wait overnight at New York from 11 P.M. until 6 A.M. the next morning. If we identify revenues with each service leg, then a flow in this network (with no external supply or demand) will specify a set of flight plans: a circulation of airplanes through the airline's fleet network. A flow that maximizes revenue will prescribe a schedule for the airline's fleet of planes.

Figure 1.3. Possible duties for the drivers of a bus company, tabulated by time period and duty number.

The same type of network representation arises in many other dynamic scheduling applications.

Derived Networks

This category is a "grab bag" of specialized applications; it illustrates that network flow problems sometimes arise in surprising ways from problems that on the surface might not appear to involve networks. The following examples illustrate this point.

Single Duty Crew Scheduling. Figure 1.3 illustrates a number of possible duties for the drivers of a bus company. Suppose we must cover each hour of the day with one driver, choosing among these duties at minimum cost; we can write this requirement as a system Ax = b with each x_j binary. In this formulation the binary variable x_j indicates whether (x_j = 1) or not (x_j = 0) we select the j-th duty; the matrix A represents the matrix of duties, and b is a column vector whose components are all 1's. Observe that the ones in each column of A occur in consecutive rows, because each driver's duty contains a single work shift (no split shifts or work breaks).

We show that this problem is a shortest path problem. To make this identification, we perform the following operations on the constraints Ax = b: subtract each equation from the equation below it, and then add a redundant equation equal to minus the sum of all the equations of the revised system. This transformation does not change the solution to the system. Because of the structure of A, each column of the revised system will have a single +1 (corresponding to the first hour of the duty in that column of A) and a single -1 (corresponding to the row just below the last row of the duty in that column of A, or to the added row). Moreover, the revised right hand side vector of the problem will have a +1 in row 1 and a -1 in the last (the appended) row. Therefore, the problem is to ship one unit of flow from node 1 to node 9 at minimum cost in the network given in Figure 1.4, which is an instance of the shortest path problem.

If instead of requiring a single driver to be on duty in each period we specify a number of drivers to be on duty in each period, the same transformation would again produce a flow problem, but in this case the right hand side coefficients (supplies and demands) could be arbitrary. Therefore, the transformed problem would be a general minimum cost network flow problem, rather than a shortest path problem.

Figure 1.4. Shortest path formulation of the single duty scheduling problem.
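The transformation just described is purely mechanical, as the sketch below shows; the small duty matrix is a hypothetical illustration rather than the data of Figure 1.3. After the row operations, every column of the constraint matrix has one +1 and one -1, so it is the node-arc incidence matrix of a network, and the right hand side asks for one unit of flow from the first node to the last.

```python
# Sketch: subtract each equation of Ax = b from the one below it, then
# append the redundant equation equal to minus the sum of the result.

def interval_to_incidence(A, b):
    rows, cols = len(A), len(A[0])
    A2 = [A[0][:]] + [[A[r][k] - A[r - 1][k] for k in range(cols)]
                      for r in range(1, rows)]
    A2.append([-sum(A2[r][k] for r in range(rows)) for k in range(cols)])
    b2 = [b[0]] + [b[r] - b[r - 1] for r in range(1, rows)]
    b2.append(-sum(b2))
    return A2, b2

# Hypothetical duty matrix: the ones in each column are consecutive,
# because each duty is a single unbroken shift.
A = [[1, 0, 0],
     [1, 1, 0],
     [0, 1, 1],
     [0, 0, 1]]
A2, b2 = interval_to_incidence(A, [1, 1, 1, 1])

# Each transformed column is an arc: exactly one +1 and one -1.
assert all(sorted(col)[0] == -1 and sorted(col)[-1] == 1 and sum(col) == 0
           for col in zip(*A2))
# Ship one unit from node 1 to the appended last node.
assert b2[0] == 1 and b2[-1] == -1 and sum(b2) == 0
```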

Critical Path Scheduling and Networks Derived from Precedence Conditions

In construction and many other project planning applications, workers need to complete a variety of tasks that are related by precedence conditions; for example, in constructing a house, a builder must pour the foundation before framing the house, and complete the framing before beginning to install either electrical or plumbing fixtures.

This type of application can be formulated mathematically as follows. Suppose we need to complete J jobs and that job j (j = 1, 2, ..., J) requires t_j days to complete. We are to choose the start time s_j of each job j so that we honor a set of specified precedence constraints and complete the overall project as quickly as possible. For convenience of notation, we add two dummy jobs, both with zero processing time: a "start" job 0 to be completed before any other job can begin, and a "completion" job J + 1 that cannot be initiated until we have completed all other jobs. Let G = (N, A) represent the network corresponding to this augmented project: we represent the jobs by nodes, and the network contains an arc (i, j) whenever job j cannot start until job i has been completed. Then we wish to solve the following optimization problem:

minimize s_{J+1} - s_0

subject to

s_j \ge s_i + t_i,  for each arc (i, j) \in A.

On the surface, this problem, which is a linear program in the variables s_j, seems to bear no resemblance to network optimization. Note, however, that if we move the variable s_i to the left hand side of each constraint, then each constraint contains exactly two variables, one with a plus one coefficient and one with a minus one coefficient. The linear programming dual of this problem has a familiar structure: if we associate a dual variable x_{ij} with each arc (i, j), then the dual of this problem is

maximize \sum_{(i,j) \in A} t_i x_{ij}

subject to

\sum_{\{j : (i,j) \in A\}} x_{ij} - \sum_{\{j : (j,i) \in A\}} x_{ji} = 1 if i = 0; -1 if i = J + 1; and 0 otherwise,  for all i \in N,

x_{ij} \ge 0,  for all (i, j) \in A.

This problem requires us to determine the longest path in the network G from node 0 to node J + 1, with t_i as the length of each arc (i, j). This longest path has the following interpretation: it is the longest sequence of jobs needed to fulfill the specified precedence conditions. Since delaying any job in this sequence must necessarily delay the completion of the overall project, this path has become known as the critical path, and the problem has become known as the critical path problem. This model has become a principal tool in project management, particularly for managing large-scale construction projects. The critical path itself is important because it identifies those jobs that require managerial attention in order to complete the project as quickly as possible.

Researchers and practitioners have enhanced this basic model in several ways. For example, if resources are available for expediting individual jobs, we could consider the most efficient use of these resources to complete the overall project as quickly as possible. Certain versions of this problem can be formulated as minimum cost flow problems.
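Since the precedence network is acyclic, the longest path, and with it every job's earliest start time, can be computed in O(m) time by scanning the jobs in topological order. The sketch below is our own illustration of this computation on hypothetical job data.

```python
# Sketch: earliest start times via a longest path computation in the
# acyclic precedence network. Jobs are 0..J+1, where 0 is the dummy
# start job and J+1 the dummy completion job (both of zero duration).

def earliest_start_times(t, prec):
    n = len(t)                       # t[j] = duration of job j
    succ = [[] for _ in range(n)]
    indeg = [0] * n
    for i, j in prec:                # arc (i, j): job j waits for job i
        succ[i].append(j)
        indeg[j] += 1
    s = [0] * n
    order = [j for j in range(n) if indeg[j] == 0]
    for i in order:                  # topological sweep
        for j in succ[i]:
            s[j] = max(s[j], s[i] + t[i])   # enforce s_j >= s_i + t_i
            indeg[j] -= 1
            if indeg[j] == 0:
                order.append(j)
    return s                         # s[J+1] = minimum project duration

# Hypothetical project: two jobs in series with one parallel job.
t = [0, 3, 5, 2, 0]                  # jobs 0..4; jobs 0 and 4 are dummies
s = earliest_start_times(t, [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)])
assert s[4] == 7                     # critical path 0 - 2 - 3 - 4
```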


The open pit mining problem is another network flow problem that arises from precedence conditions. Consider the open pit mine shown in Figure 1.5. As shown in this figure, we have divided the region to be mined into blocks. The provisions of any given mining technology, and perhaps the geography of the mine, impose restrictions on how we can remove the blocks: for example, we can never remove a block until we have removed any block that lies immediately above it, and restrictions on the "angle" of mining the blocks might impose similar precedence conditions. Suppose now that each block j has an associated revenue r_j (e.g., the value of the ore in the block minus the cost for extracting the block) and we wish to extract blocks to maximize overall revenue. If we let y_j be a zero-one variable indicating whether (y_j = 1) or not (y_j = 0) we extract block j, the problem will contain (i) a constraint y_j \le y_i (or, y_j - y_i \le 0) whenever we need to mine block i before block j, and (ii) an objective function specifying that we wish to maximize the total revenue \sum_j r_j y_j, summed over all blocks j.

The dual of the linear programming version of this problem (with the constraints 0 \le y_j \le 1 replacing y_j = 0 or 1) will be a network flow problem with a node for each block, a variable for each precedence constraint, and the revenue r_j as the demand at node j. This network will also have a dummy "collection node" whose demand equals minus the sum of the r_j's, and an arc connecting it to each node j (that is, to each block j).

Each such arc corresponds to the upper bound constraint y_j \le 1 in the original linear program. The critical path scheduling problem and the open pit mining problem illustrate one way that network flow problems arise indirectly. Whenever two variables in a linear program are related by a precedence condition, the variable corresponding to this precedence constraint in the dual linear program will have network flow structure. If the only constraints in the problem are precedence constraints, then the dual linear program will be a network flow problem.

Matrix Rounding of Census Information

The United States Census Bureau uses census information to construct millions of tables for a wide variety of purposes. By law, the Bureau has an obligation to protect the source of its information and not disclose statistics that can be attributed to any particular individual. It can attempt to do so by rounding the census information contained in any table. Consider, for example, the data shown in Figure 1.6(a). Since the upper leftmost entry in this table is a 1, the tabulated information might disclose information about a particular individual. We might disguise the information in this table as follows: round each entry in the table, including the row and column sums, either up or down to a multiple of three, so that the entries in the table continue to add to the (rounded) row and column sums, and the overall sum of the entries in the new table adds to a rounded version of the overall sum in the original table. Figure 1.6(b) shows a rounded version of the data that meets this criterion.

The problem can be cast as finding a feasible flow in a network and can be solved by an application of the maximum flow algorithm. The network contains a node for each row of the table and a node for each column. It contains an arc connecting each row node i and each column node j: the flow on this arc should be the ij-th entry in the prescribed table, rounded either up or down. In addition, we add a supersource s to the network, connected to each row node i: the flow on this arc must be the i-th row sum, rounded up or down. Similarly, we add a supersink t, with an arc connecting each column node j to this node: the flow on this arc must be the j-th column sum, rounded up or down. We also add an arc connecting node t and node s: the flow on this arc must be the sum of all entries in the table, rounded up or down. Figure 1.7 illustrates the network flow problem corresponding to the census data specified in Figure 1.6.

Figure 1.6. (a) Census data tabulating time in service (hours: less than 1, 1-5, more than 5) against income (less than $10,000; $10,000-$30,000; $30,000-$50,000; more than $50,000), with row and column totals. (b) The same data rounded to multiples of 3.
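To illustrate the construction (with our own function names and hypothetical data, not the census figures above), the sketch below generates the arc list of the rounding network. Each arc carries a lower and an upper flow bound, namely the corresponding quantity rounded down and up to the nearest multiple of the rounding base; any feasible flow meeting these bounds yields a consistent rounded table.

```python
# Sketch: arc list of the matrix rounding network. Nodes are 's', 't',
# ('row', i) and ('col', j); each arc gets bounds (value rounded down,
# value rounded up) to the nearest multiple of `base`.

def rounding_network(table, base):
    def bounds(v):
        lo = (v // base) * base
        return (lo, lo) if v % base == 0 else (lo, lo + base)

    rows, cols = len(table), len(table[0])
    arcs = []                                    # (tail, head, low, high)
    for i in range(rows):
        arcs.append(('s', ('row', i)) + bounds(sum(table[i])))
        for j in range(cols):
            arcs.append((('row', i), ('col', j)) + bounds(table[i][j]))
    for j in range(cols):
        arcs.append((('col', j), 't') + bounds(sum(r[j] for r in table)))
    arcs.append(('t', 's') + bounds(sum(map(sum, table))))  # overall sum
    return arcs

# A feasible flow on this network, read off arc by arc, is exactly a
# rounded table that still adds to the rounded row and column sums.
arcs = rounding_network([[1, 4, 7], [2, 5, 8]], base=3)
```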

If we rescale all the flows, measuring them in integral units of the rounding base (multiples of 3 in our example), then the flow on each arc must be integral, at one of two consecutive integral values. The formulation of a more general version of this problem, corresponding to tables with more than two dimensions, will not be a network flow problem. Nevertheless, these problems have an imbedded network structure (corresponding to 2-dimensional "cuts" in the table) that we can exploit in devising algorithms to find rounded versions of the tables.

1.2 Complexity Analysis

There are three basic approaches for measuring the performance of an algorithm: empirical analysis, worst-case analysis, and average-case analysis. The objective of empirical analysis is to estimate how algorithms behave in practice. Empirical analysis typically measures the computational time of an algorithm using statistical sampling on a distribution (or several distributions) of problem instances. The major objective of worst-case analysis is to provide upper bounds on the number of steps that a given algorithm can take on any problem instance; therefore, this type of analysis provides performance guarantees. The objective of average-case analysis is to estimate the expected number of steps taken by an algorithm. Average-case analysis differs from empirical analysis because it provides rigorous mathematical proofs of average-case performance, rather than statistical estimates.

Each of these three performance measures has its relative merits and is appropriate for certain purposes. Nevertheless, this chapter will focus primarily on worst-case analysis, and only secondarily on empirical behavior. Researchers have designed many of the algorithms described in this chapter specifically to improve worst-case complexity while simultaneously maintaining good empirical behavior. Thus, for the algorithms we present, worst-case analysis is the primary measure of performance.

Worst-Case Analysis

For worst-case analysis, we bound the running time of network algorithms in terms of several basic problem parameters: the number of nodes (n), the number of arcs (m), and upper bounds C and U on the cost coefficients and the arc capacities. Whenever C (or U) appears in a complexity analysis, we assume that each cost (or capacity) is integer valued. As an example of a worst-case result within this chapter, we will prove that the number of steps for the label correcting algorithm to solve the shortest path problem is less than pnm for some sufficiently large constant p.

To avoid the need to compute or mention the constant p, researchers typically use a "big O" notation, replacing an expression like "the label correcting algorithm requires pnm steps for some constant p" with the equivalent expression "the running time of the label correcting algorithm is O(nm)." The O( ) notation avoids the need to state a specific constant; instead, it indicates only the dominant terms of the running time. By dominant, we mean the term that would dominate all other terms for sufficiently large values of n and m. Therefore, the time bounds are called asymptotic running times. For example, if the actual running time is 10nm^2 + 2^100 n^2 m, then we would state that the running time is O(nm^2), assuming that m \ge n. Observe that this statement indicates that the 10nm^2 term is dominant, even though for most practical values of n and m the 2^100 n^2 m term would dominate. Although ignoring the constant terms may have this undesirable feature, researchers have widely adopted the O( ) notation for several reasons:

1. Ignoring the constants greatly simplifies the analysis. Consequently, the use of the O( ) notation typically has permitted analysts to avoid the prohibitively difficult analysis required to compute the leading constants, which, in turn, has led to a flourishing of research on the worst-case performance of algorithms.

2. Estimating the constants correctly is fundamentally difficult. The least value of the constants is determined not solely by the algorithm; it is also highly sensitive to the choice of the computer language, and even to the choice of the computer.

3. For all of the algorithms that we present, the constant terms are relatively small integers for all the terms in the complexity bound.

4. For large practical problems, the constant factors do not contribute nearly as much to the running time as do the factors involving n, m, C or U.

Counting Steps

The running time of a network algorithm is determined by counting the number of steps it performs. The counting of steps relies on a number of assumptions, most of which are quite appropriate for most of today's computers.

A1.1 The computer carries out instructions sequentially, with at most one instruction being executed at a time.

A1.2 Each comparison and basic arithmetic operation counts as one step.

By invoking A1.1, we are adhering to a sequential model of computations; we will not discuss parallel implementations of network flow algorithms. Assumption A1.2 implicitly assumes that the only operations to be counted are comparisons and arithmetic operations. On today's computers we would obtain the same asymptotic worst-case results for the algorithms that we present even by counting all other computer operations. Our assumption that each arithmetic operation takes one step is justified by the fact that O( ) notation ignores differences in running times of at most a constant factor, which is the time difference between an addition and a multiplication on essentially all modern computers.

On the other hand, the assumption that each arithmetic operation takes one step may lead us to underestimate the asymptotic running time of arithmetic operations involving very large numbers on real computers since, in practice, a computer must store such numbers in several words of its memory. Therefore, to perform each operation on very large numbers, a computer must access a number of words of data and thus takes more than a constant number of steps. To avoid a systematic underestimation of the running time, in comparing two running times we will typically assume that both C and U are polynomially bounded in n, i.e., C = O(n^k) and U = O(n^k), for some constant k. This assumption, known as the similarity assumption, is quite reasonable in practice. For example, if we were to restrict costs to be less than 100n^3, we would allow costs to be as large as 100,000,000,000 for networks with 1000 nodes.

Polynomial-Time Algorithms

An algorithm is said to be a polynomial-time algorithm if its running time is bounded by a polynomial function of the input length. The input length of a problem is the number of bits needed to represent that problem. For a network problem, the input length is a low order polynomial function of n, m, log C and log U. Consequently, researchers refer to a network algorithm as a polynomial-time algorithm if its running time is bounded by a polynomial function in n, m, log C and log U. For example, the running time of one of the polynomial-time maximum flow algorithms we consider is O(nm + n^2 log U).

Other instances of polynomial time bounds are O(n^2 m) and O(n log n). A polynomial-time algorithm is said to be a strongly polynomial-time algorithm if its running time is bounded by a polynomial function in only n and m, and does not involve log C or log U. The maximum flow algorithm alluded to above is, therefore, not a strongly polynomial-time algorithm. The interest in strongly polynomial-time algorithms is primarily theoretical: if we invoke the similarity assumption, all polynomial-time algorithms are strongly polynomial-time, because log C = O(log n) and log U = O(log n).

An algorithm is said to be an exponential-time algorithm if its running time grows as a function that cannot be polynomially bounded. Some examples of exponential time bounds are O(nC), O(2^n), O(n!) and O(n^{log n}). (Observe that nC cannot be bounded by a polynomial function of n and log C.) We say that an algorithm is a pseudopolynomial-time algorithm if its running time is polynomially bounded in n, m, C and U. The class of pseudopolynomial-time algorithms is an important subclass of the exponential-time algorithms. Some instances of pseudopolynomial-time bounds are O(m + nC) and O(mC). For problems that satisfy the similarity assumption, pseudopolynomial-time algorithms become polynomial-time algorithms, but the algorithms will not be attractive if C and U are high degree polynomials in n.

There are two major reasons for preferring polynomial-time algorithms to exponential-time algorithms. First, any polynomial-time algorithm is asymptotically superior to any exponential-time algorithm, and this is true even in extreme cases: a polynomial bound of very high degree is eventually smaller than an exponential bound with even a tiny growth constant, although the value of n at which it becomes smaller may be enormous. Figure 1.8 illustrates the asymptotic superiority of polynomial-time algorithms. The second reason is more pragmatic: much practical experience has shown that, as a rule, polynomial-time algorithms perform better than exponential-time algorithms. Moreover, the polynomials encountered in practice are typically of small degree.

Figure 1.8. Approximate values of typical polynomial and exponential time bounds as the problem size grows.

1.3 Notation and Definitions

We now collect together several basic definitions from graph theory and state some notational conventions. Let G = (N, A) be a directed network, and let n = |N| and m = |A|. As before, we associate with each arc (i, j) \in A a cost c_{ij} and a capacity u_{ij}. Frequently, we distinguish two special nodes in a graph: the source s and the sink t.

An arc (i, j) has two end points, i and j. We refer to node i as the tail and node j as the head of arc (i, j), and say that the arc (i, j) emanates from node i. The arc (i, j) is an outgoing arc of node i and an incoming arc of node j, and it is incident to nodes i and j. The arc adjacency list of node i, A(i) = {(i, j) \in A : j \in N}, is defined as the set of arcs emanating from node i. The degree of a node is the number of incoming and outgoing arcs incident to that node.

A directed path in G = (N, A) is a sequence of distinct nodes and arcs i_1, (i_1, i_2), i_2, (i_2, i_3), i_3, ..., (i_{r-1}, i_r), i_r satisfying the property that (i_k, i_{k+1}) \in A for each k = 1, ..., r-1. An undirected path is defined similarly, except that for any two consecutive nodes i_k and i_{k+1} on the path, the path contains either arc (i_k, i_{k+1}) or arc (i_{k+1}, i_k). We refer to the nodes i_2, i_3, ..., i_{r-1} as the internal nodes of the path. A directed cycle is a directed path together with the arc (i_r, i_1), and an undirected cycle is an undirected path together with the arc (i_r, i_1) or (i_1, i_r).

For simplicity of notation, we shall often refer to a path as a sequence of nodes i_1 - i_2 - ... - i_k when its arcs are apparent from the problem context. Alternatively, we shall sometimes refer to a path as a set of (or a sequence of) arcs without mention of the nodes. We shall use the terminology path to designate either a directed or an undirected path, whichever is appropriate from context; if any ambiguity might arise, we shall explicitly state directed or undirected path. We shall use similar conventions for representing cycles.

Two nodes i and j are said to be connected if the graph contains at least one undirected path from i to j. A graph is said to be connected if all pairs of its nodes are connected; otherwise, it is disconnected. In this chapter, we always assume that the graph G is connected. We refer to any set Q \subseteq A with the property that the graph G' = (N, A - Q) is disconnected, and no proper subset of Q has this property, as a cutset of G. A cutset partitions the graph into two sets of nodes, X and N - X. We shall alternatively represent the cutset Q as the node partition (X, N - X).

A graph G' = (N', A') is a subgraph of G = (N, A) if N' \subseteq N and A' \subseteq A. A graph G' = (N', A') is a spanning subgraph of G = (N, A) if N' = N and A' \subseteq A.

A graph G = (N, A) is called a bipartite graph if its node set N can be partitioned into two subsets N_1 and N_2 so that for each arc (i, j) in A, i \in N_1 and j \in N_2.

A graph is acyclic if it contains no cycle. A tree is a connected acyclic graph. A subtree of a tree T is a connected subgraph of T. A tree T is said to be a spanning tree of G if T is a spanning subgraph of G. Each tree with at least two nodes has at least two leaf nodes, where a leaf node is a node with degree equal to one. A spanning tree contains a unique path between any two nodes, and a spanning tree of G = (N, A) has exactly n - 1 tree arcs. Arcs belonging to a spanning tree T are called tree arcs, and arcs not belonging to T are called nontree arcs. The addition of any nontree arc to a spanning tree creates exactly one cycle, and removing any arc in this cycle again creates a spanning tree. Removing any tree-arc creates two subtrees; arcs whose end points belong to the two different subtrees constitute a cutset, and if any arc belonging to this cutset is added to the subtrees, the resulting graph is again a spanning tree.

In this chapter, we assume that logarithms are of base 2 unless we state otherwise. We represent the logarithm of any number b by log b.

1.4 Network Representations

The complexity of a network algorithm depends not only on the algorithm, but also upon the manner used to represent the network within a computer and the storage scheme used for maintaining and updating the intermediate results. The running time of an algorithm (either worst-case or empirical) can often be improved by representing the network more cleverly and by using improved data structures. In this section, we discuss some popular ways of representing a network.

In Section 1.1, we have already described the node-arc incidence matrix representation of a network. This scheme requires nm words to store a network, of which only 2m words have nonzero values. Clearly, this representation is not space efficient. Another popular way to represent a network is the node-node adjacency matrix representation. This representation stores an n x n matrix I with the property that the element I_{ij} = 1 if arc (i, j) \in A, and I_{ij} = 0 otherwise. The arc costs and capacities are also stored in n x n matrices. This representation is adequate for very dense networks, but it is not attractive for storing a sparse network.

The forward star and reverse star representations are probably the most popular ways to represent networks, both sparse and dense. (These representations are also known as incidence list representations in the computer science literature.) The forward star representation numbers the arcs in a certain order: we first number the arcs emanating from node 1, then the arcs emanating from node 2, and so on; arcs emanating from the same node can be numbered arbitrarily. We then sequentially store the (tail, head) and the cost of arcs in this order. We also maintain a pointer with each node i, denoted by point(i), that indicates the smallest arc number of an arc emanating from node i. Hence the outgoing arcs of node i are stored at positions point(i) to (point(i+1) - 1) in the arc list; if point(i) > point(i+1) - 1, then node i has no outgoing arc. For consistency, we set point(1) = 1 and point(n+1) = m+1. Figure 1.9(b) specifies the forward star representation of the network given in Figure 1.9(a).

Figure 1.9. (a) A network example. (b) The forward star representation. (c) The reverse star representation. (d) The compact forward and reverse star representations.

The forward star representation allows us to determine efficiently the set of outgoing arcs at any node. To determine, simultaneously, the set of incoming arcs at any node efficiently, we need an additional data structure known as the reverse star representation. Starting from a forward star representation, we can create a reverse star representation as follows. We examine the nodes j = 1 to n in order and sequentially store the (tail, head) and the cost of the incoming arcs of node j. We also maintain a reverse pointer with each node i, denoted by rpoint(i), which denotes the first position in these arrays that contains information about an incoming arc at node i. For the sake of consistency, we set rpoint(1) = 1 and rpoint(n+1) = m+1. As earlier, we store the incoming arcs of node i at positions rpoint(i) to (rpoint(i+1) - 1). This data structure gives us the representation shown in Figure 1.9(c).

Observe that by storing both the forward and reverse star representations, we will maintain a significant amount of duplicate information. We can avoid this duplication by storing arc numbers instead of the (tail, head) and the cost of the arcs. For example, in Figure 1.9, arc (3, 2) has arc number 4 in the forward star representation, so instead of storing its (tail, head) and cost a second time we can simply store the arc number 4; once we know the arc numbers, we can always retrieve the associated information from the forward star representation. We store these arc numbers in an m-array trace. Figure 1.9(d) gives the complete trace array.
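The sketch below (our own illustration) builds these arrays: the arc list and point array of the forward star representation, and the rpoint and trace arrays of the compact reverse star representation. All arrays are 1-based, matching the text.

```python
# Sketch: forward star and compact reverse star representations.
# Nodes are numbered 1..n; arcs is a list of (tail, head, cost) triples.

def star_representations(n, arcs):
    m = len(arcs)
    arc_list = sorted(arcs)           # arcs numbered 1..m, grouped by tail

    point = [1] * (n + 2)             # point[i] = first arc out of node i
    for tail, _, _ in arc_list:
        point[tail + 1] += 1
    for i in range(1, n + 1):
        point[i + 1] += point[i] - 1  # prefix sums; point[n+1] = m+1

    rpoint = [1] * (n + 2)            # rpoint[i] = first incoming position
    for _, head, _ in arc_list:
        rpoint[head + 1] += 1
    for i in range(1, n + 1):
        rpoint[i + 1] += rpoint[i] - 1

    # trace: arc numbers of the incoming arcs, grouped by head node.
    trace = sorted(range(1, m + 1), key=lambda a: arc_list[a - 1][1])
    return arc_list, point, rpoint, trace

arc_list, point, rpoint, trace = star_representations(
    3, [(1, 2, 5), (1, 3, 2), (2, 3, 4), (3, 1, 1)])

# Outgoing arcs of node i occupy arc numbers point[i] .. point[i+1]-1.
assert arc_list[point[2] - 1 : point[3] - 1] == [(2, 3, 4)]
# Incoming arc numbers of node i are trace[rpoint[i]-1 : rpoint[i+1]-1];
# their (tail, head, cost) data is retrieved from the forward star list.
assert [arc_list[a - 1][1] for a in trace[rpoint[3] - 1 : rpoint[4] - 1]] == [3, 3]
```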

1.5 Search Algorithms

Search algorithms are fundamental graph techniques; different variants of search lie at the heart of many network algorithms. In this section, we discuss two of the most commonly used search techniques: breadth-first search and depth-first search.

Search algorithms attempt to find all nodes in a network that satisfy a particular property. For purposes of illustration, let us suppose that we wish to find all the nodes in a graph G = (N, A) that are reachable through directed paths from a distinguished node s, called the source. At every point in the search procedure, all nodes in the network are in one of two states: marked or unmarked. The marked nodes are known to be reachable from the source, and the status of unmarked nodes is yet to be determined. We call an arc (i, j) admissible if node i is marked and node j is unmarked, and inadmissible otherwise. Initially, only the source node is marked. Subsequently, by examining admissible arcs, the search algorithm will mark more nodes. Whenever the procedure marks a new node j by examining an admissible arc (i, j), we say that node i is a predecessor of node j, i.e., pred(j) = i. The algorithm terminates when the graph contains no admissible arcs. The following algorithm summarizes the basic iterative steps.

algorithm SEARCH;
begin
    unmark all nodes in N;
    mark node s;
    LIST := {s};
    while LIST ≠ ∅ do
    begin
        select a node i in LIST;
        if node i is incident to an admissible arc (i, j) then
        begin
            mark node j;
            pred(j) := i;
            add node j to LIST;
        end
        else delete node i from LIST;
    end;
end;

When this algorithm terminates, it has marked all nodes in G that are reachable from s via a directed path. The predecessor indices define a tree consisting of the marked nodes.

We use the following data structure to identify admissible arcs. (The same data structure is also used in the maximum flow and minimum cost flow algorithms discussed in later sections.) We maintain with each node i the list A(i) of arcs emanating from it; arcs in each list can be arranged arbitrarily. Each node has a current arc (i, j), which is the current candidate for being examined next. Initially, the current arc of node i is the first arc in A(i). The search algorithm examines this list sequentially: whenever the current arc is inadmissible, it makes the next arc in the arc list the current arc, and when the algorithm reaches the end of the arc list, it declares that the node has no admissible arc.

It is easy to show that the search algorithm runs in O(m + n) = O(m) time. Each iteration of the while loop either finds an admissible arc or does not. In the former case, the algorithm marks a new node and adds it to LIST, and in the latter case it deletes a marked node from LIST. Since the algorithm marks any node at most once, it executes the while loop at most 2n times. Now consider the effort spent in identifying the admissible arcs. For each node i, we scan the arcs in A(i) at most once. Therefore, the search algorithm examines a total of \sum_{i \in N} |A(i)| = m arcs, and thus terminates in O(m) time.
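For concreteness, here is a compact implementation of the search procedure. It is our own sketch, and it examines all arcs of the selected node at once rather than one per iteration, which changes nothing asymptotically. Selecting nodes from a queue or from a stack yields the breadth-first and depth-first variants discussed next.

```python
from collections import deque

# Sketch: generic search from source s. adj[i] lists the heads j of the
# arcs (i, j) emanating from node i; nodes are numbered 1..n.

def search(n, adj, s, breadth_first=True):
    marked = [False] * (n + 1)
    pred = [0] * (n + 1)
    marked[s] = True
    LIST = deque([s])
    while LIST:
        i = LIST.popleft() if breadth_first else LIST.pop()
        for j in adj[i]:
            if not marked[j]:          # arc (i, j) is admissible
                marked[j] = True
                pred[j] = i            # the predecessor indices form a tree
                LIST.append(j)
    return marked, pred

adj = {1: [2, 3], 2: [4], 3: [4], 4: [], 5: [1]}
marked, _ = search(5, adj, 1)
assert marked[4] and not marked[5]     # node 5 is not reachable from 1
```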

The algorithm, as described, does not specify the order for examining and adding nodes to LIST. Different rules give rise to different search techniques. If the set LIST is maintained as a queue, i.e., nodes are always selected from the front and added to the rear, then the search algorithm selects the marked nodes in first-in, first-out order. This kind of search amounts to visiting the nodes in order of increasing distance from s; therefore, this version of search is called a breadth-first search. It marks nodes in nondecreasing order of their distance from s, with the distance from s to i measured as the minimum number of arcs in a directed path from s to i.

Another popular method is to maintain the set LIST as a stack, i.e., nodes are always selected from the front and added to the front; in this instance, the search algorithm selects the marked nodes in last-in, first-out order. This algorithm performs a deep probe, creating a path as long as possible, and backs up one node to initiate a new probe when it can mark no new nodes from the tip of the path. Hence, this version of search is called a depth-first search.

1.6 Developing Polynomial-Time Algorithms

Researchers frequently employ two important approaches to obtain polynomial-time algorithms for network flow problems: the geometric improvement (or linear convergence) approach, and the scaling approach. In this section, we briefly outline the basic ideas underlying these two approaches. We will assume, as usual, that all data are integral and that the algorithms maintain integer solutions at intermediate stages of computations.

Geometric Improvement Approach

The geometric improvement approach shows that an algorithm runs in polynomial time if at every iteration it makes an improvement proportional to the difference between the objective function values of the current and optimum solutions. Let H be an upper bound on the difference in objective function values between any two feasible solutions. For most network problems, H is a function of n, m, C, and U. For instance, in the maximum flow problem H = mU, and in the minimum cost flow problem H = mCU.

Lemma 1.1. Suppose z^k is the objective function value of some solution of a minimization problem at the k-th iteration of an algorithm and z* is the minimum objective function value. Furthermore, suppose that the algorithm guarantees that

(z^k - z^{k+1}) \ge \alpha (z^k - z*)     (1.3)

for some constant \alpha with 0 < \alpha < 1 (i.e., the improvement at iteration k+1 is at least \alpha times the total possible improvement). Then the algorithm terminates in O((log H)/\alpha) iterations.

Proof. The quantity (z^k - z*) represents the total possible improvement in the objective function value after the k-th iteration. Consider a consecutive sequence of 2/\alpha iterations starting from iteration k. If in each of these iterations the algorithm improves the objective function value by at least \alpha (z^k - z*)/2 units, then it determines an optimum solution within these 2/\alpha iterations. On the other hand, if in some iteration q the algorithm improves the objective function value by no more than \alpha (z^k - z*)/2 units, then (1.3) implies that

\alpha (z^k - z*)/2 \ge z^q - z^{q+1} \ge \alpha (z^q - z*),

and hence (z^q - z*) \le (z^k - z*)/2; that is, the algorithm has reduced the total possible improvement (z^k - z*) by a factor of at least 2 within these 2/\alpha iterations. Since H is the maximum possible improvement and every objective function value is an integer, the algorithm must terminate within O((log H)/\alpha) iterations. We have stated this result for minimization versions of optimization problems; a similar result applies to maximization versions.

The geometric improvement approach might be summarized by the statement "network algorithms that have a geometric convergence rate are polynomial-time algorithms." In order to develop polynomial-time algorithms using this approach, we can look for local improvement techniques that lead to large (i.e., fixed percentage) improvements in the objective function. The maximum augmenting path algorithm for the maximum flow problem and the maximum improvement algorithm for the minimum cost flow problem are two examples of this approach (see Sections 4.2 and 5.3).

Scaling Approach

Researchers have extensively used an approach called scaling to derive polynomial-time algorithms for a wide variety of network and combinatorial optimization problems. In this discussion, we describe the simplest form of scaling, which we call bit-scaling. Section 5.11 presents an example of a bit-scaling algorithm for the assignment problem. Sections 4 and 5, using more refined versions of scaling, describe polynomial-time algorithms for the maximum flow and minimum cost flow problems.

the assignment problem. Sections 4 and 5, using more refined versions of scaling, describe polynomial-time algorithms for the maximum flow and minimum cost flow problems.

Using the bit-scaling technique, we solve a problem P parametrically as a sequence of problems P_1, P_2, ..., P_K: the problem P_1 approximates the data to the first bit, the problem P_2 approximates the data to the second bit, and each successive problem is a better approximation until P_K = P. Further, for each k = 2, ..., K, the optimum solution of problem P_(k-1) serves as the starting solution for problem P_k. The scaling technique is useful whenever reoptimization from a good starting solution is more efficient than solving the problem from scratch.

For example, consider a network flow problem whose largest arc capacity has value U. Let K = ⌈log U⌉ and suppose that we represent each arc capacity as a K bit binary number, adding leading zeros if necessary to make each capacity K bits long. Then the problem P_k would consider the capacity of each arc as the k leading bits in its binary representation. Figure 1.10 illustrates an example of this type of scaling. The manner of defining arc capacities easily implies the following observation.

Observation. The capacity of an arc in P_k is twice that in P_(k-1) plus 0 or 1.

Figure 1.10. Example of a bit-scaling technique. (a) Network with arc capacities. (b) Network with binary expansion of arc capacities. (c) The problems P_1, P_2, and P_3.

The following algorithm encodes a generic version of the bit-scaling technique.

algorithm BIT-SCALING;
begin
    obtain an optimum solution of P_1;
    for k : = 2 to K do
    begin
        reoptimize using the optimum solution of P_(k-1) to
        obtain an optimum solution of P_k;
    end;
end;

This approach is very robust; variants of it have led to improved algorithms for both the maximum flow and minimum cost flow problems. The approach works well for these applications, in part, for the following reasons:

(i) The problem P_1 is generally easy to solve.

(ii) The optimal solution of problem P_(k-1) is an excellent starting solution for problem P_k, since P_(k-1) and P_k are quite similar; hence, the optimum solution of P_(k-1) can be easily reoptimized to obtain an optimum solution of P_k.

(iii) For problems that satisfy the similarity assumption, the number of problems solved is O(log n). Thus, for this approach to work, reoptimization needs to be only a little more efficient (i.e., by a factor of log n) than optimization.

Consider, for example, the maximum flow problem. Let v_k denote the maximum flow value for problem P_k and let x_k denote an arc flow corresponding to v_k. By the observation above, in the problem P_k the capacity of an arc is twice its capacity in P_(k-1) plus 0 or 1. If we multiply the optimum flow x_(k-1) for P_(k-1) by 2, we obtain a feasible flow for P_k. Moreover, v_k - 2v_(k-1) ≤ m, because multiplying the flow x_(k-1) by 2 takes care of the doubling of the capacities, and the additional 1's can increase the maximum flow value by at most m units (if we add 1 to the capacity of any arc, then we increase the maximum flow from source to sink by at most 1). Therefore, the optimum solution of P_(k-1) can easily be reoptimized to obtain an optimum solution of P_k: the classical labeling algorithm, as discussed in Section 4.1, would perform the reoptimization in at most m augmentations, taking O(m^2) time. Hence the scaling version of the labeling algorithm runs in O(m^2 log U) time, whereas the non-scaling version runs in O(nmU) time. The former time bound is polynomial and the latter bound is only pseudopolynomial. Thus this simple scaling algorithm improves the running time dramatically.
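The following Python sketch (ours) illustrates the scheme for the maximum flow problem under the stated assumptions (integral capacities, no parallel arcs); a plain breadth-first labeling routine stands in for whichever reoptimization subroutine one prefers.

from collections import deque

def bit_scaling_max_flow(arcs, s, t):
    # arcs: list of (i, j, cap) with integer capacities (no parallel arcs).
    # Problem P_k gives each arc the k leading bits of its capacity; the
    # optimal flow of P_(k-1), doubled, starts the reoptimization for P_k.
    U = max(c for _, _, c in arcs)
    K = U.bit_length()
    flow = {(i, j): 0 for i, j, _ in arcs}
    adj = {}
    for i, j, _ in arcs:                  # each arc is scanned from both ends
        adj.setdefault(i, []).append((i, j))
        adj.setdefault(j, []).append((i, j))
    for k in range(1, K + 1):
        cap = {(i, j): c >> (K - k) for i, j, c in arcs}   # capacities of P_k
        for a in flow:
            flow[a] *= 2                  # doubled flow is feasible for P_k
        while True:                       # labeling: at most m augmentations
            pred, queue = {s: None}, deque([s])
            while queue and t not in pred:
                u = queue.popleft()
                for (i, j) in adj.get(u, []):
                    if i == u and j not in pred and flow[(i, j)] < cap[(i, j)]:
                        pred[j] = (i, j, +1)      # forward residual arc
                        queue.append(j)
                    if j == u and i not in pred and flow[(i, j)] > 0:
                        pred[i] = (i, j, -1)      # backward residual arc
                        queue.append(i)
            if t not in pred:
                break                     # P_k is solved; move to the next bit
            path, v = [], t
            while v != s:                 # trace the augmenting path back to s
                i, j, sign = pred[v]
                path.append((i, j, sign))
                v = i if sign == +1 else j
            delta = min(cap[(i, j)] - flow[(i, j)] if sign == +1 else flow[(i, j)]
                        for i, j, sign in path)
            for i, j, sign in path:
                flow[(i, j)] += sign * delta
    value = sum(f for (i, j), f in flow.items() if i == s) - \
            sum(f for (i, j), f in flow.items() if j == s)
    return flow, value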

2. BASIC PROPERTIES OF NETWORK FLOWS

As a prelude to the rest of this chapter, in this section we describe several basic properties of network flows. We begin by showing how network flow problems can be modeled in either of two equivalent ways: as flows on arcs, as in our formulation in Section 1.1, or as flows on paths and cycles. Then we partially characterize optimal solutions to network flow problems and demonstrate that these problems always have certain special types of optimal solutions (so-called cycle free and spanning tree solutions); consequently, in designing algorithms, we need only consider these special types of solutions. We next establish several important connections between network flows and linear and integer programming. Finally, we discuss a few useful transformations of network flow problems.

2.1 Flow Decomposition Properties and Optimality Conditions

It is natural to view network flow problems in either of two ways: as flows on arcs or as flows on paths and cycles. In the context of developing underlying theory, models, or algorithms, each view has its own advantages. Therefore, as the first step in our discussion, it is worthwhile to develop several connections between these alternate formulations.

In the arc formulation (1.1), the basic decision variables are flows x_ij on arcs (i, j). The path and cycle formulation starts with an enumeration of the paths P and cycles Q of the network. Its decision variables are h(p), the flow on path p, and f(q), the flow on cycle q, which are defined for every directed path p in P and every directed cycle q in Q.

Notice that every set of path and cycle flows uniquely determines arc flows in a natural way: the flow x_ij on arc (i, j) equals the sum of the flows h(p) and f(q) for all paths p and cycles q that contain this arc. We formalize this observation by defining some new notation: δ_ij(p) equals 1 if arc (i, j) is contained in path p and 0 otherwise; similarly, δ_ij(q) equals 1 if arc (i, j) is contained in cycle q and 0 otherwise. Then

x_ij = Σ_(p ∈ P) δ_ij(p) h(p) + Σ_(q ∈ Q) δ_ij(q) f(q).

If the flow vector x is expressed in this way, we say that the flow is represented as path flows and cycle flows, and that the path flow vector h and cycle flow vector f is a path and cycle flow representation of the flow. Can we reverse this process? That is, can we decompose any arc flow into (i.e., represent it as) path and cycle flows? The following result provides an affirmative answer to this question.

Theorem 2.1: Flow Decomposition Property (Directed Case). Every directed path and cycle flow has a unique representation as nonnegative arc flows. Conversely, every nonnegative arc flow x can be represented as a directed path and cycle flow (though not necessarily uniquely) with the following two properties:

C2.1. Every path with positive flow connects a supply node of x to a demand node of x.

C2.2. At most n+m paths and cycles have nonzero flow; out of these, at most m cycles have nonzero flow.

Proof. In the light of our previous observations, we need to establish only the converse assertions. We give an algorithmic proof to show that any feasible arc flow x can be decomposed into path and cycle flows. Suppose i_0 is a supply node. Then some arc (i_0, i_1) carries a positive flow. If i_1 is a demand node, then we stop; otherwise the mass balance constraint (1.1b) of node i_1 implies that some other arc (i_1, i_2) carries positive flow. We repeat this argument until either we encounter a demand node or we revisit a previously examined node; one of these cases will occur within n steps. In the former case we obtain a directed path p from the supply node i_0 to some demand node i_k consisting solely of arcs with positive flow, and in the latter case we obtain a directed cycle q. If we obtain a directed path, we let h(p) = min [b(i_0), -b(i_k), min {x_ij : (i, j) ∈ p}], and redefine b(i_0) = b(i_0) - h(p), b(i_k) = b(i_k) + h(p), and x_ij = x_ij - h(p) for each arc (i, j) in p. If we obtain a cycle q, we let f(q) = min {x_ij : (i, j) ∈ q} and redefine x_ij = x_ij - f(q) for each arc (i, j) in q.

We repeat this process with the redefined problem until the network contains no supply node (and hence no demand node). Then we select a transhipment node with at least one outgoing arc with positive flow as the starting node and repeat the procedure, which in this case must find a cycle. We terminate when x = 0 for the redefined problem. Clearly, the original flow is the sum of the flows on the paths and cycles identified by the procedure. Now observe that each time we identify a path, we reduce the supply/demand of some node or the flow on some arc to zero, and each time we identify a cycle, we reduce the flow on some arc to zero. Consequently, the path and cycle representation of the given flow x contains at most (n + m) total paths and cycles, of which at most m are cycles.
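The proof is constructive, and the following Python sketch (ours; it assumes the input is a feasible nonnegative flow, so every walk can be extended exactly as the proof requires) traces it directly.

def decompose(b, x):
    # b: dict of node imbalances (b[i] > 0 for supply nodes);
    # x: dict of nonnegative arc flows keyed by (i, j).
    b, x = dict(b), {a: f for a, f in x.items() if f > 0}
    paths, cycles = [], []

    def walk(start, stop_at_demand):
        # Follow arcs with positive flow until a demand node is reached
        # (if requested) or a node repeats, which closes a cycle.
        seq, seen = [start], {start}
        while True:
            i = seq[-1]
            if stop_at_demand and b.get(i, 0) < 0:
                return seq, None
            j = next(j2 for (i2, j2) in x if i2 == i)   # some flow-carrying arc
            if j in seen:
                return seq, j                           # cycle detected at j
            seq.append(j)
            seen.add(j)

    def peel(nodes, amount):
        for i, j in zip(nodes, nodes[1:]):
            x[(i, j)] -= amount
            if x[(i, j)] == 0:
                del x[(i, j)]

    while any(v > 0 for v in b.values()):               # peel off paths
        s = next(i for i, v in b.items() if v > 0)
        seq, back = walk(s, True)
        if back is None:                                # reached a demand node
            t = seq[-1]
            h = min(b[s], -b[t], min(x[(i, j)] for i, j in zip(seq, seq[1:])))
            b[s] -= h
            b[t] += h
            peel(seq, h)
            paths.append((seq, h))
        else:                                           # closed a cycle first
            cyc = seq[seq.index(back):] + [back]
            f = min(x[(i, j)] for i, j in zip(cyc, cyc[1:]))
            peel(cyc, f)
            cycles.append((cyc, f))
    while x:                                            # remaining flow: cycles
        s = next(iter(x))[0]
        seq, back = walk(s, False)
        cyc = seq[seq.index(back):] + [back]
        f = min(x[(i, j)] for i, j in zip(cyc, cyc[1:]))
        peel(cyc, f)
        cycles.append((cyc, f))
    return paths, cycles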

It is possible to state the decomposition property in a somewhat more general form that permits arc flows x_ij to be negative. In this case, even though the underlying network is directed, the paths and cycles can be undirected and can contain arcs with negative flows. Each undirected path p, which has an orientation from its initial to its final node, has forward arcs and backward arcs, defined as arcs along and opposite to the path's orientation. A path flow will be defined on p as a flow with value h(p) on each forward arc and -h(p) on each backward arc. We define a cycle flow in the same way. In this more general setting, our representation using the notation δ_ij(p) and δ_ij(q) is still valid with the following provision: we now define δ_ij(p) and δ_ij(q) to be -1 if arc (i, j) is a backward arc of the path or cycle.

Theorem 2.2: Flow Decomposition Property (Undirected Case). Every path and cycle flow has a unique representation as arc flows. Conversely, every arc flow x can be represented as an (undirected) path and cycle flow (though not necessarily uniquely) with the following three properties:

C2.3. Every path with positive flow connects a source node of x to a sink node of x.

C2.4. For every path and cycle, any arc with positive flow occurs as a forward arc and any arc with negative flow occurs as a backward arc.

C2.5. At most n+m paths and cycles have nonzero flow; out of these, at most m cycles have nonzero flow.

Proof. This proof is similar to that of Theorem 2.1. The major modification is that we extend the path at some node i_(k-1) by adding an arc (i_(k-1), i_k) with positive flow or an arc (i_k, i_(k-1)) with negative flow. The other steps can be modified accordingly.

The flow decomposition property has a number of important consequences. As one example, it enables us to compare any two solutions of a network flow problem in a particularly convenient way and to show how we can build one solution from another by a sequence of simple operations.

We need the concept of augmenting cycles with respect to a flow x. A cycle q with flow f(q) > 0 is called an augmenting cycle with respect to a flow x if

0 ≤ x_ij + δ_ij(q) f(q) ≤ u_ij, for each arc (i, j) ∈ q.

In other words, the flow remains feasible if some positive amount of flow, namely f(q), is augmented around the cycle q. We define the cost of an augmenting cycle q as c(q) = Σ_((i,j) ∈ q) c_ij δ_ij(q). The cost of an augmenting cycle represents the change in the cost of a feasible solution if we augment one unit of flow along the cycle; the change in flow cost for augmenting around cycle q with flow f(q) is c(q) f(q).

Suppose that 0 ≤ x ≤ u and 0 ≤ y ≤ u are any two solutions to a network flow problem, i.e., Nx = b and Ny = b. Then the difference vector z = y - x satisfies the homogeneous equations Nz = Ny - Nx = 0. Consequently, flow decomposition implies that z can be represented as cycle flows: we can find at most r ≤ m cycle flows f(q_1), f(q_2), ..., f(q_r) satisfying the property that for each arc (i, j) of A,

z_ij = δ_ij(q_1) f(q_1) + δ_ij(q_2) f(q_2) + ... + δ_ij(q_r) f(q_r).

Since y = x + z, for any arc (i, j) we have

0 ≤ x_ij + δ_ij(q_1) f(q_1) + δ_ij(q_2) f(q_2) + ... + δ_ij(q_r) f(q_r) ≤ u_ij.

Now, by condition C2.4 of the flow decomposition property, arc (i, j) is either a forward arc on each cycle q_1, q_2, ..., q_r that contains it or a backward arc on each cycle that contains it, so the terms δ_ij(q_k) f(q_k) in this expression all have the same sign. Consequently, if we add any one of these cycle flows q_k to x, the resulting solution remains feasible on each arc (i, j). Hence, each cycle q_k is an augmenting cycle with respect to the flow x.

We have thus established the following important result.

Theorem 2.3: Augmenting Cycle Property. Let x and y be any two feasible solutions of a network flow problem. Then y equals x plus the flow on at most m augmenting cycles with respect to x. Further, the cost of y equals the cost of x plus the cost of flow on the augmenting cycles.

The augmenting cycle property permits us to formulate optimality conditions characterizing the optimum solution of the minimum cost flow problem. Suppose that x is any feasible solution, that x* is an optimum solution of the minimum cost flow problem, and that x ≠ x*. The augmenting cycle property implies that the difference vector x* - x can be decomposed into at most m augmenting cycles and the sum of the costs of these cycles equals cx* - cx. If cx* < cx, then one of these cycles must have a negative cost. On the other hand, if every augmenting cycle in the decomposition of x* - x has a nonnegative cost, then cx* - cx ≥ 0; since x* is an optimum flow, cx* = cx and x is also an optimum flow. We have thus obtained the following result:

Theorem 2.4: Optimality Conditions. A feasible flow x is an optimum flow if and only if it admits no negative cost augmenting cycle.

2.2 Cycle Free and Spanning Tree Solutions

We start by assuming that x is a feasible solution to the network flow problem

minimize { cx : Nx = b and l ≤ x ≤ u },

and that l = 0. Much of the underlying theory of network flows stems from a simple observation concerning the example in Figure 2.1. In the example, the arc flows and costs are given beside each arc.

Figure 2.1. Improving flow around a cycle.

The network in this figure contains flow around an undirected cycle. Let us assume for the time being that all arcs are uncapacitated. Note that adding a given amount of flow θ to all the arcs pointing in a clockwise direction and subtracting this flow from all arcs pointing in the counterclockwise direction preserves the mass balance at each of the nodes. Also note that the per unit incremental cost of this flow change is the sum of the costs of the clockwise arcs minus the sum of the costs of the counterclockwise arcs:

Per unit change in cost = Δ = $2 + $1 + $3 - $4 - $3 = -$1.

Let us refer to this incremental cost Δ as the cycle cost, and say that the cycle is a negative, positive or zero cost cycle depending upon the sign of Δ. To preserve nonnegativity of all arc flows, we must select θ in the interval -2 ≤ θ ≤ 3 (that is, 3 - θ ≥ 0, and 2 + θ ≥ 0, 4 + θ ≥ 0, 5 + θ ≥ 0). Since the cycle cost in our example is negative, to minimize cost we set θ as large as possible, i.e., θ = 3; note that in the new solution (at θ = 3) we no longer have positive flow on all arcs in the cycle. Similarly, if the cycle cost were positive (e.g., if we were to change c_12 from 2 to 4), then we would decrease θ as much as possible (i.e., θ = -2) and again find a lower cost solution with the flow on at least one arc in the cycle at value zero. Since the objective function depends linearly on θ, we optimize it by selecting θ = 3 or θ = -2, at which point one arc in the cycle has a flow value of zero.

We can extend this observation in several ways:

(i) If the per unit cycle cost Δ = 0, then we are indifferent to all solutions in the interval -2 ≤ θ ≤ 3 and therefore can again choose a solution as good as the original one, but with the flow of at least one arc in the cycle at value zero.

(ii) If we impose upper bounds on the flow, such as 6 units on all arcs, then the range of flows that preserves feasibility (mass balances, lower and upper bounds on flows) is again an interval, in this case -2 ≤ θ ≤ 1, and we can find a solution as good as the original one by choosing θ = -2 or θ = 1. At these values of θ the solution is cycle free; that is, for some arc on the cycle, either the flow is zero (at its lower bound) or the flow is at its upper bound (x_12 = 6 at θ = 1).

In general, our prior observations apply to any cycle in a network. Therefore, given any initial flow, we can apply our previous argument repeatedly, one cycle at a time, and establish the following fundamental result:

Theorem 2.5: Cycle Free Property. If the objective function value of the network optimization problem minimize { cx : Nx = b, l ≤ x ≤ u } is bounded from below on the feasible region and the problem has a feasible solution, then at least one cycle free solution solves the problem.

Note that the lower bound assumption imposed upon the objective value is necessary to rule out situations in which the flow change variable θ in our prior argument can be made arbitrarily large in a negative cost cycle, or arbitrarily small (negative) in a positive cost cycle; for example, this condition rules out any negative cost directed cycle with no upper bounds on its arc flows.

Some additional notation will be helpful in encapsulating and summarizing our observations up to this point. Let us say that an arc (i, j) is a free arc with respect to a given feasible flow x if x_ij lies strictly between the lower and upper bounds imposed upon it. We will also say that arc (i, j) is restricted if its flow x_ij equals either its lower or upper bound. In this terminology, a solution x has the "cycle free property" if the network contains no cycle made up entirely of free arcs.
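A small Python sketch (ours) of this single-cycle improvement step may be helpful; the representation of a cycle as (arc, sign) pairs is an assumption of the illustration.

def improve_around_cycle(cycle, flow, cost, cap):
    # cycle: list of (arc, sign) pairs, sign +1 for arcs oriented with the
    # cycle (flow increases by theta) and -1 for arcs oriented against it
    # (flow decreases by theta). Each flow must stay within [0, cap[arc]].
    delta = sum(sign * cost[a] for a, sign in cycle)            # cycle cost
    lo = max(-flow[a] if sign == +1 else flow[a] - cap[a] for a, sign in cycle)
    hi = min(cap[a] - flow[a] if sign == +1 else flow[a] for a, sign in cycle)
    # The objective changes linearly (delta per unit of theta), so an
    # endpoint of the feasible interval [lo, hi] is optimal; there, some
    # arc hits its lower or upper bound and the cycle is no longer free.
    theta = lo if delta >= 0 else hi
    for a, sign in cycle:
        flow[a] += sign * theta
    return theta, delta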

It is useful to interpret the cycle free property in another way. Suppose that the network is connected (i.e., there is an undirected path connecting every two pairs of nodes). Then either a given cycle free solution x contains a free arc that is incident to each node in the network, or we can add to the free arcs some restricted arcs so that the resulting set S of arcs has the following three properties:

(i) S contains all the free arcs in the current solution,

(ii) S contains no undirected cycles, and

(iii) no superset of S satisfies properties (i) and (ii).

We will refer to any set S of arcs satisfying (i) through (iii) as a spanning tree of the network, and to any feasible solution x for the network together with a spanning tree S that contains all free arcs as a spanning tree solution. (At times we will also refer to a given cycle free solution x as a spanning tree solution, with the understanding that restricted arcs may be needed to form the spanning tree S.)

Figure 2.2 illustrates a spanning tree corresponding to a cycle free solution. Note that it may be possible (and often is) to complete the set of free arcs into a spanning tree in several ways (e.g., replace arc (2, 4) with arc (3, 5) in Figure 2.2(c)); therefore, a given cycle free solution can correspond to several spanning trees S.

We will say that a spanning tree solution x is nondegenerate if the set of free arcs forms a spanning tree. In this case, the spanning tree S corresponding to the flow x is unique. If the free arcs do not span (i.e., are not incident to) all the nodes, then any spanning tree corresponding to this solution will contain at least one arc whose flow equals the arc's lower or upper bound. In this case, we will say that the spanning tree is degenerate.

Figure 2.2. Converting a cycle free solution to a spanning tree solution. (a) An example network with arc flows and capacities represented as (x_ij, u_ij). (b) A cycle free solution. (c) A spanning tree solution.

When restated in the terminology of spanning trees, the cycle free property becomes another fundamental result in network flow theory.

Theorem 2.6: Spanning Tree Property. If the objective function value of the network optimization problem minimize { cx : Nx = b, l ≤ x ≤ u } is bounded from below on the feasible region and the problem has a feasible solution, then at least one spanning tree solution solves the problem.

We might note that the spanning tree property is valid for concave cost versions of the flow problem as well, i.e., those versions where the objective function is a concave function of the flow vector x. This extended version of the spanning tree property is valid because if the incremental cost of a cycle is negative at some point, then the incremental cost remains negative (by concavity) as we augment a positive amount of flow around the cycle. Hence, we can increase flow in a negative cost cycle until at least one arc reaches its lower or upper bound.

2.3 Networks, Linear and Integer Programming

The cycle free property and the spanning tree property have many other important consequences. In particular, these two properties imply that network flow theory lies at the cusp between two large and important subfields of optimization: linear and integer programming. This positioning may, to a large extent, account for the emergence of network flow theory as a cornerstone of mathematical programming.

Triangularity Property

Before establishing our first results relating network flows to linear and integer programming, we first make a few observations. Note that any spanning tree S has at least one (actually at least two) leaf nodes, that is, a node that is incident to only one arc in the spanning tree. Consequently, if we rearrange the rows and columns of the node-arc incidence matrix of S so that the leaf node is row 1 and its incident arc is column 1, then row 1 has only a single nonzero entry, a +1 or a -1, which lies on the diagonal of the node-arc incidence matrix. If we now remove this leaf node and its incident arc from S, the resulting network is a spanning tree on the remaining nodes. Consequently, by rearranging all but row and column 1 of the node-arc incidence matrix for the spanning tree, we can now assume that row 2 has a +1 or -1 element on the diagonal and zeros to the right of the diagonal. Continuing in this way permits us to rearrange the node-arc incidence matrix of the spanning tree so that its first n-1 rows are lower triangular; we refer to this as the triangularity property. Figure 2.3 shows the resulting lower triangular form (actually, one of several possibilities) for the spanning tree in Figure 2.2(c).

Figure 2.3. The lower triangular form L of the node-arc incidence matrix corresponding to the spanning tree of Figure 2.2(c).

Now further suppose that the supply/demand vector b and the lower and upper bound vectors l and u have all integer components. Let x^1 denote the flows on the spanning tree arcs and x^2 the flows on the remaining arcs, so that the mass balance constraints become

L x^1 = b - M x^2,     (2.1)

in which L is the lower triangular matrix just described and M is the submatrix of columns corresponding to the nontree arcs. In a spanning tree solution, every component of x^2 equals an arc lower or upper bound, so the right hand side b - M x^2 has integer components. But this observation implies that the components of x^1 are integral as well: since the first diagonal element of L equals +1 or -1, the first equation in (2.1) implies that x^1_1 is integral; now if we move x^1_1 to the right of the equality in (2.1), the right hand side remains integral and we can solve for x^1_2 from the second equation. Continuing this forward substitution by successively solving for one variable at a time shows that x^1 is integral. This argument shows that, for problems with integral data, every spanning tree solution is integral. Since the spanning tree property ensures that network flow problems always have spanning tree solutions, we have established the following fundamental result.

Theorem 2.8: Integrality Property. If the objective value of the network optimization problem minimize { cx : Nx = b, l ≤ x ≤ u } is bounded from below on the feasible region, the problem has a feasible solution, and the vectors b, l, and u are integer, then the problem has at least one integer optimum solution.
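The forward substitution argument above is easy to render in code; the following Python sketch (ours, purely illustrative) shows that with +1 or -1 diagonal elements no division ever occurs, so integer right-hand sides yield integer solutions.

def forward_substitute(L, b):
    # Solve L x = b by forward substitution, where L is lower triangular
    # with diagonal entries +1 or -1 (as in the triangularity property).
    # Every step is an integer add/subtract/multiply: dividing by +1 or -1
    # is the same as multiplying, which is the heart of the argument.
    n = len(b)
    x = [0] * n
    for i in range(n):
        s = sum(L[i][j] * x[j] for j in range(i))   # move known terms right
        x[i] = (b[i] - s) * L[i][i]
    return x

# A hypothetical 3-equation example: x1 = 5, -x1 + x2 = -2, x2 - x3 = 1.
print(forward_substitute([[1, 0, 0], [-1, 1, 0], [0, 1, -1]], [5, -2, 1]))
# prints [5, 3, 2], an all-integer solution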

Our observation at the end of Section 2.2 shows that this integrality property is also valid in the more general situation in which the objective function is concave.

Relationship to Linear Programming

The network flow problem with the objective function cx is a linear program which, as the last result shows, always has an integer optimum solution. Network flow problems are distinguished as the most important large class of problems with this property. Linear programs, or generalizations with concave cost objective functions, also satisfy another well-known property: they always have, in the parlance of convex analysis, extreme point solutions, that is, solutions x with the property that x cannot be expressed as a weighted combination of two other feasible solutions y and z, as x = αy + (1-α)z for some weight 0 < α < 1. Since, as we have seen, network flow problems always have cycle free solutions, we might expect to discover that extreme point solutions and cycle free solutions are closely related, and indeed they are, as shown by the next result.

Theorem 2.9: Extreme Point Property. If the objective value of the network optimization problem minimize { cx : Nx = b, l ≤ x ≤ u } is bounded from below on the feasible region and the problem has a feasible solution, then the problem has an extreme point solution. Further, every cycle free solution is an extreme point and, conversely, every extreme point is a cycle free solution.

Proof. With the background developed already, this result is easy to establish. First, if x is not a cycle free solution, then it cannot be an extreme point, since by perturbing the flow by a small amount θ and by a small amount -θ around a cycle with free arcs, as in our discussion of Figure 2.1, we define two feasible solutions y and z with the property that x = (1/2)y + (1/2)z. Conversely, suppose that x is not an extreme point and is represented as x = αy + (1-α)z with 0 < α < 1. Let y' and z' be the components of the vectors y and z for which y_ij ≠ z_ij, and let N' denote the submatrix of N corresponding to these arcs. Then N'(z' - y') = 0, which implies, by flow decomposition, that the network contains an undirected cycle on which y_ij is not equal to z_ij for any arc. But for each arc (i, j) on this cycle, either l_ij ≤ y_ij < x_ij < z_ij ≤ u_ij or l_ij ≤ z_ij < x_ij < y_ij ≤ u_ij; therefore, this cycle contains only free arcs in the solution x, and x is not a cycle free solution.

Let us now make one final connection between networks and linear and integer programming, namely, between bases of the linear program and the integrality property. In linear programming, extreme points are usually represented algebraically as basic solutions; for these special solutions, the columns B of the constraint matrix of a linear program corresponding to variables strictly between their lower and upper bounds are linearly independent. We can extend B to a basis of the constraint matrix by adding a maximal number of additional columns. Just as cycle free solutions for network flow problems correspond to extreme points, spanning tree solutions correspond to basic solutions.

Theorem 2.10: Basis Property. Every spanning tree solution to a network flow problem is a basic solution and, conversely, every basic solution is a spanning tree solution.

Consider the system Nx = b, and suppose that N = [B, M] for some basis B and that x = (x^1, x^2) is a compatible partitioning of x. Also suppose that we eliminate the redundant row so that B is a nonsingular matrix. Then

Bx^1 = b - Mx^2, or x^1 = B^(-1)(b - Mx^2).

Now, by Cramer's rule from linear algebra, it is possible to find each component of x^1 as sums and multiples of components of b' = b - Mx^2, divided by det(B). Therefore, if the determinant of B equals +1 or -1, then x^1 is an integer vector whenever x^2, b, and M are composed of all integers. Let us call a matrix A unimodular if all of its bases have determinants either +1 or -1, and call it totally unimodular if all of its square submatrices have determinant equal to either 0, +1, or -1.

How are these notions related to network flows and the integrality property? Since the bases of N correspond to spanning trees, the triangularity property shows that the determinant of any basis (excluding the redundant row now) equals the product of the diagonal elements in the triangular representation of the basis, and therefore equals +1 or -1. Consequently, a node-arc incidence matrix is unimodular. Even more, it is totally unimodular. For let S be any square submatrix of N. If S is singular, it has determinant 0. Otherwise, it must correspond to a cycle free solution, which is a spanning tree on each of its connected components. But then, it is easy to see that the determinant of S equals the product of the determinants of the spanning trees, and therefore it must be +1 or -1. (An induction argument, using an expansion of determinants by minors, provides an alternate proof of this totally unimodular property.)

Theorem 2.11: Total Unimodularity Property. The constraint matrix of a minimum cost network flow problem is totally unimodular.

2.4 Network Transformations

Frequently, analysts use network transformations to simplify a network problem, to show equivalences of different network problems, or to put a network problem into a standard form required by a computer code. In this subsection, we describe some of these important transformations.

T1. (Removing Nonzero Lower Bounds). If an arc (i, j) has a positive lower bound l_ij, then we can replace x_ij by x_ij + l_ij in the problem formulation. As measured by the new variable x_ij, the flow on arc (i, j) will have a lower bound of 0. This transformation has a

simple network interpretation: we begin by sending l_ij units of flow on the arc and then measure incremental flow above l_ij.

Figure 2.4. Removing the lower bound of arc (i, j): the supplies become b(i) - l_ij and b(j) + l_ij, and the arc data (c_ij, u_ij) become (c_ij, u_ij - l_ij).

T2. (Removing Capacities). If an arc (i, j) has a positive capacity u_ij, then we can remove the capacity, making the arc uncapacitated, using the following ideas. The capacity constraint of arc (i, j) can be written as x_ij + s_ij = u_ij, if we introduce a slack variable s_ij ≥ 0. Multiplying both sides by -1, we obtain

-x_ij - s_ij = -u_ij.     (2.2)

This transformation is tantamount to turning the slack variable into an additional node k, with equation (2.2) as the mass balance constraint for that node. Observe that the variable x_ij now appears in three mass balance constraints and s_ij in only one. By subtracting (2.2) from the mass balance constraint of node j, we assure that each of x_ij and s_ij appears in exactly two constraints, in one with a positive sign and in the other with a negative sign. These algebraic manipulations correspond to the following network transformation.

Figure 2.5. Removing the capacity of arc (i, j): the new node k receives supply -u_ij, node j receives supply b(j) + u_ij, and the capacitated arc (i, j) becomes the uncapacitated arcs (i, k), with cost c_ij, and (j, k), with cost 0.

In the network context, this transformation implies the following: if x_ij is a flow on arc (i, j) in the original network, the corresponding flow in the transformed network is x_ik = x_ij and x_jk = u_ij - x_ij; both flows have the same cost. Likewise, a flow x_ik, x_jk in the transformed network yields a flow x_ij = x_ik of the same cost in the original network. Further, since x_ik and x_jk are both nonnegative and satisfy x_ik + x_jk = u_ij, the flow x_ij = x_ik satisfies 0 ≤ x_ij ≤ u_ij; consequently, the transformation is valid.
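The algebra of T1 and T2 is mechanical, as the following Python sketch (ours) illustrates; it assumes integer node names and returns an equivalent uncapacitated problem (the objective changes only by the constant Σ c_ij l_ij absorbed by T1).

def remove_lower_bounds_and_caps(b, arcs):
    # b: dict of node supplies; arcs: list of (i, j, cost, lower, cap).
    # Returns new supplies and uncapacitated, zero-lower-bound arcs
    # (i, j, cost) of an equivalent minimum cost flow problem.
    b = dict(b)
    new_arcs = []
    next_node = max(b) + 1
    for i, j, c, low, u in arcs:
        b[i] -= low                      # T1: send low units along (i, j) ...
        b[j] += low
        u -= low                         # ... and measure flow above low
        k = next_node                    # T2: the slack of (i, j) becomes
        next_node += 1                   #     a new node k with supply -u
        b[k] = -u
        b[j] += u
        new_arcs.append((i, k, c))       # carries x_ik = x_ij, same cost
        new_arcs.append((j, k, 0))       # carries x_jk = u - x_ij, cost zero
    return b, new_arcs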

T3. (Arc Reversal). Let u_ij represent the capacity of the arc (i, j), or an upper bound on the arc's flow if the arc is uncapacitated. This transformation is a change in the flow variable: replace x_ij by u_ij - x_ji in the problem formulation. Doing so replaces the arc (i, j), with its associated cost c_ij, by the arc (j, i) with cost -c_ij. The transformation has the following network interpretation: send u_ij units of flow on the arc and then replace arc (i, j) by arc (j, i) with cost -c_ij. The new flow x_ji measures the amount of flow we "remove" from the "full capacity" flow of u_ij. Since 0 ≤ x_ji ≤ u_ij, the transformation is valid; moreover, it permits us to remove arcs with negative costs.

Figure 2.6. An example of arc reversal.

T4. (Node Splitting). This transformation splits each node i into two nodes i and i', replaces each original arc (i, j) by an arc (i', j) of the same cost and capacity, and leaves each arc (k, i) entering node i unchanged. We also add an arc (i, i') of cost zero for each i. Figure 2.7 illustrates the resulting network when we carry out the node splitting transformation for all the nodes of a network.

Figure 2.7. Splitting all the nodes of a network. (a) The original network. (b) The transformed network.

This transformation is also used in practice for representing node activities and node data in the standard "arc flow" form of the network flow problem: we simply associate the cost or capacity for the throughput of node i with the new arc (i, i'). We shall see the usefulness of this transformation in Section 5.11 when we use it to reduce a shortest path problem with arbitrary arc lengths to an assignment problem.

3. SHORTEST PATHS

Shortest path problems are the most fundamental and also the most commonly encountered problems in the study of transportation and communication networks. The shortest path problem arises when trying to determine the shortest, cheapest, or most reliable path between one or many pairs of nodes in a network. More importantly, algorithms for a wide variety of combinatorial optimization problems such as vehicle routing and network design often call for the solution of a large number of shortest path problems as subroutines. Consequently, designing and testing efficient algorithms for the shortest path problem has been a major area of research in network optimization.

Researchers have studied several different (directed) shortest path models. The major types of shortest path problems, in increasing order of solution difficulty, are (i) finding shortest paths from one node to all other nodes when arc lengths are nonnegative; (ii) finding shortest paths from one node to all other nodes for networks with arbitrary arc lengths; (iii) finding shortest paths from every node to every other node; and (iv) finding various types of constrained shortest paths between nodes (e.g., shortest paths with turn penalties, shortest paths visiting specified nodes, the k-th shortest path).

In this section, we discuss problem types (i), (ii) and (iii). The algorithmic approaches for solving problem types (i) and (ii) can be classified into two groups: label setting and label correcting. The label setting methods are applicable to networks with nonnegative arc lengths, whereas label correcting methods apply to networks with negative arc lengths as well. Each approach assigns tentative distance labels (shortest path distances) to nodes at each step. Label setting methods designate one or more labels as permanent (optimum) at each iteration. Label correcting methods consider all labels as temporary until the final step, when they all become permanent. We will show that label setting methods have the most attractive worst-case performance; nevertheless, practical experience has shown the label correcting methods to be modestly more efficient.

Dijkstra's algorithm is the most popular label setting method. In this section, we first discuss a simple implementation of this algorithm that achieves a time bound of O(n^2). We then describe two more sophisticated implementations that achieve improved running times in practice and in theory. Next, we consider a generic version of the label correcting method, outlining one special implementation of this general approach that runs in polynomial time and another implementation that performs very

well in practice. Finally, we discuss a method to solve the all pairs shortest path problem.

3.1 Dijkstra's Algorithm

We consider a network G = (N, A) with an arc length c_ij associated with each arc (i, j) ∈ A. Let A(i) represent the set of arcs emanating from node i ∈ N, and let C = max {c_ij : (i, j) ∈ A}. In this section, we assume that arc lengths are integer numbers, and in this section as well as in Sections 3.2 and 3.3 we further assume that arc lengths are nonnegative. We suppose that node s is a specially designated node, and assume without any loss of generality that the network G contains a directed path from s to every other node. We can ensure this condition by adding an artificial arc (s, j) with a suitably large arc length for each node j. We invoke this connectivity assumption throughout this section.

Dijkstra's algorithm finds shortest paths from the source node s to all other nodes. The basic idea of the algorithm is to fan out from node s and label nodes in order of their distances from s. Each node i has a label, denoted by d(i): the label is permanent once we know that it represents the shortest distance from s to i, and temporary otherwise. Initially, we give node s a permanent label of zero, and each other node j a temporary label equal to c_sj if (s, j) ∈ A, and ∞ otherwise. At each iteration, the label of a node i is its shortest distance from the source node along a path whose internal nodes are all permanently labeled. The algorithm selects a node i with the minimum temporary label, makes it permanent, and scans the arcs in A(i) to update the distance labels of adjacent nodes. The algorithm terminates when it has designated all nodes as permanently labeled. The correctness of the algorithm relies on the key observation (which we prove later) that it is always possible to designate the node with the minimum temporary label as permanent. The following is a

basic algorithmic representation of Dijkstra's algorithm.

algorithm DIJKSTRA;
begin
    P : = {s}; T : = N - {s};
    d(s) : = 0 and pred(s) : = 0;
    d(j) : = c_sj and pred(j) : = s if (s, j) ∈ A, and d(j) : = ∞ otherwise;
    while P ≠ N do
    begin
        (node selection) let i ∈ T be a node for which d(i) = min {d(j) : j ∈ T};
        P : = P ∪ {i}; T : = T - {i};
        (distance update) for each (i, j) ∈ A(i) do
            if d(j) > d(i) + c_ij then d(j) : = d(i) + c_ij and pred(j) : = i;
    end;
end;

The algorithm associates a predecessor index, denoted by pred(i), with each node i ∈ N. The algorithm updates these indices to ensure that pred(i) is the last node prior to i on the (tentative) shortest path from node s to node i. At termination, these indices allow us to trace back along a shortest path from each node to the source.

To establish the validity of Dijkstra's algorithm, we use an inductive argument. At each point in the algorithm, the nodes are partitioned into two sets, P and T. Assume that the label of each node in P is the length of a shortest path from the source, whereas the label of each node j in T is the length of a shortest path subject to the restriction that each node in the path (except j) belongs to P. Then it is possible to transfer the node i in T with the smallest label d(i) to P for the following reason: any path from the source to node i must contain a first node k that is in T. However, node k must be at least as far away from the source as node i, since its label is at least that of node i; furthermore, the segment of the path between node k and node i has a nonnegative length because arc lengths are nonnegative. This observation shows that the length of the path is at least d(i), and hence it is valid to permanently label node i. After the algorithm has permanently labeled node i, the temporary labels of some nodes in T - {i} might decrease, because node i could become an internal node in the tentative shortest paths to these nodes. We must thus scan all of the arcs (i, j) in A(i); if d(j) > d(i) + c_ij, then setting d(j) = d(i) + c_ij updates the labels of nodes in T - {i}.
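A direct Python rendering of this O(n^2) implementation (our sketch; dictionaries stand in for the arrays one would use in practice):

def dijkstra(N, A, c, s):
    # N: set of nodes; A: dict mapping i to the heads of the arcs in A(i);
    # c: dict of nonnegative arc lengths keyed by (i, j); s: source node.
    INF = float('inf')
    d = {j: INF for j in N}
    pred = {j: None for j in N}
    d[s] = 0
    T = set(N)
    while T:
        i = min(T, key=lambda j: d[j])     # node selection: an O(n) scan
        T.remove(i)                        # the label of i becomes permanent
        for j in A.get(i, []):             # distance update over A(i)
            if d[j] > d[i] + c[(i, j)]:
                d[j] = d[i] + c[(i, j)]
                pred[j] = i
    return d, pred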

The computational time for this algorithm can be split into the time required by its two basic operations: selecting nodes and updating distances. In an iteration, the algorithm requires O(n) time to identify the node with the minimum temporary label and takes O(|A(i)|) time to update the distance labels of adjacent nodes. Thus, overall, the algorithm requires O(n^2) time for selecting nodes and O(Σ_(i ∈ N) |A(i)|) = O(m) time for updating distances. This implementation thus runs in O(n^2) time.

Dijkstra's algorithm has been a subject of much research. Researchers have attempted to reduce the node selection time without substantially increasing the time for updating distances. Consequently, using clever data structures, they have suggested several implementations of the algorithm. These implementations have either dramatically reduced the running time of the algorithm in practice or improved its worst case complexity. In the following discussion, we describe Dial's algorithm, which is currently comparable to the best label setting algorithm in practice. Subsequently we describe an implementation using R-heaps, which is nearly the best known implementation of Dijkstra's algorithm from the perspective of worst-case analysis. (A more complex version of R-heaps gives the best worst-case performance for all choices of the parameters n, m, and C.)

3.2 Dial's Implementation

The bottleneck operation in Dijkstra's algorithm is node selection. To improve the algorithm's performance, we must ask the following question: instead of scanning all temporarily labeled nodes at each iteration to find the one with the minimum distance label, can we reduce the computation time by maintaining distances in a sorted fashion? Dial's algorithm tries to accomplish this objective, and reduces the algorithm's computation time in practice, using the following fact:

FACT 3.1. The distance labels that Dijkstra's algorithm designates as permanent are nondecreasing.

This fact follows from the observation that the algorithm permanently labels a node i with the smallest temporary label d(i), and, while scanning arcs in A(i) during the distance update step, never decreases the distance label of any permanently labeled node, since arc lengths are nonnegative. FACT 3.1 suggests the following scheme for node selection. We maintain nC+1 buckets numbered 0, 1, 2, ..., nC. Bucket k stores each node whose temporary distance label is k. Recall that C represents the largest arc length in the network; hence nC is an upper bound on the distance labels of all the nodes. In the node selection step, we scan the buckets in increasing order until we identify the first nonempty bucket. The distance label of each node in this bucket is minimum. One by

one, we delete these nodes from the bucket, making them permanent and scanning their arc lists to update the distance labels of adjacent nodes. We then resume the scanning of higher numbered buckets in increasing order to select the next nonempty bucket.

By storing the content of these buckets carefully, it is possible to add, delete, and select the next element of any bucket in a time bounded by some constant. One implementation uses a data structure known as a doubly linked list. In this data structure, we order the content of each bucket arbitrarily, storing two pointers for each entry: one pointer to its immediate predecessor and one to its immediate successor. Doing so permits us, by rearranging the pointers, to select easily the topmost node from the list, add a bottommost node, or delete a node. Consequently, it is possible to add, delete, or select a node from a bucket in O(1) time. Now, as we relabel nodes and decrease any node's temporary distance label, we move it from a higher index bucket to a lower index bucket; this transfer requires O(1) time. Consequently, this algorithm runs in O(m + nC) time and uses nC+1 buckets. The following fact allows us to reduce the number of buckets to C+1.

FACT 3.2. If d(i) is the distance label that the algorithm designates as permanent at the beginning of an iteration, then at the end of that iteration d(j) ≤ d(i) + C for each finitely labeled node j in T.

This fact follows by noting that (i) d(k) ≤ d(i) for each k ∈ P (by FACT 3.1), and (ii) for each finitely labeled node j in T, d(j) = d(k) + c_kj for some k ∈ P (by the property of distance updates). Hence, d(j) ≤ d(i) + c_kj ≤ d(i) + C. In other words, all finite temporary labels are bracketed from below by d(i) and from above by d(i) + C. Consequently, C+1 buckets suffice to store nodes with finite temporary distance labels. Dial's algorithm uses C+1 buckets numbered 0, 1, 2, ..., C, which can be viewed as arranged in a circle as in Figure 3.1. This implementation stores a temporarily labeled node j with distance label d(j) in the bucket d(j) mod (C+1). Consequently, during the entire execution of the algorithm, bucket k stores temporarily labeled nodes with distance labels k, k+(C+1), k+2(C+1), and so forth; however, because of FACT 3.2, at any point in time this bucket will hold only nodes with the same distance label. This storage scheme also implies that if bucket k contains a node with the minimum distance label, then buckets k+1, k+2, ..., C, 0, 1, 2, ..., k-1 store nodes in increasing values of the distance labels.

We need not store the nodes with infinite temporary distance labels in any of the buckets; we can add them to a bucket when they first receive a finite distance label.
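The following Python sketch (ours) captures Dial's scheme; Python sets stand in for the doubly linked lists, which is enough to exhibit the wrap-around scan and the O(1) bucket moves.

def dial(N, A, c, s, C):
    # N: set of nodes; A: dict of adjacency lists; c: dict of arc lengths
    # in {0, ..., C}; s: source. Bucket d(j) mod (C+1) stores node j;
    # FACT 3.2 guarantees each bucket holds one distance value at a time.
    INF = float('inf')
    d = {j: INF for j in N}
    d[s] = 0
    buckets = [set() for _ in range(C + 1)]
    buckets[0].add(s)
    permanent = set()
    k, labeled = 0, 1                     # labeled = nodes currently bucketed
    while len(permanent) < len(N) and labeled > 0:
        while not buckets[k % (C + 1)]:   # wrap-around scan for nonempty bucket
            k += 1
        bucket = buckets[k % (C + 1)]
        while bucket:
            i = bucket.pop()
            labeled -= 1
            permanent.add(i)
            for j in A.get(i, []):
                nd = d[i] + c[(i, j)]
                if nd < d[j]:
                    if d[j] < INF:        # move j to a lower index bucket
                        buckets[d[j] % (C + 1)].discard(j)
                    else:                 # j gets its first finite label
                        labeled += 1
                    d[j] = nd
                    buckets[nd % (C + 1)].add(j)
    return d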

Figure 3.1. Bucket arrangement in Dial's algorithm.

Dial's algorithm examines the buckets sequentially, in a wrap-around fashion, to identify the first nonempty bucket. In the next iteration, it reexamines the buckets starting at the place where it left off earlier. A potential disadvantage of this scheme, as compared to the original algorithm, is that C may be very large, necessitating large storage and increased computational time. In addition, the algorithm may wrap around as many as n-1 times, resulting in a large computation time. The algorithm runs in O(m + nC) time, which is not even polynomial; rather, it is pseudopolynomial. For example, if C = n^4, then the algorithm runs in O(n^5) time, and if C = 2^n, the algorithm takes exponential time in the worst case. The algorithm, however, typically does not encounter these difficulties in practice: for most applications C is not very large, and the number of passes through all of the buckets is much less than n. Dial's algorithm is, nevertheless, not attractive theoretically.

The search for the theoretically fastest implementations of Dijkstra's algorithm has led researchers to develop several new data structures for sparse networks. In the next section, we consider an implementation using a data structure called a redistributive heap (R-heap) that runs in O(m + n log nC) time. The discussion of this implementation is of a more advanced nature than the previous sections, and the reader can skip it without any loss of continuity.

3.3 R-Heap Implementation

Our first, O(n^2), implementation of Dijkstra's algorithm and Dial's implementation represent two extremes. The first implementation considers all the

temporarily labeled nodes together (in one large bucket, so to speak) and searches for a node with the smallest label. Dial's algorithm separates nodes by storing any two nodes with different labels in different buckets. Could we improve upon these methods by adopting an intermediate approach, perhaps by storing many, but not all, labels in a bucket? For example, instead of storing only nodes with a temporary label of k in the k-th bucket, we could store temporary labels from 100k to 100k+99 in bucket k. The temporary labels that can be stored in a bucket make up the range of the bucket; the cardinality of the range is called its width. For the preceding example, the range of bucket k is [100k .. 100k+99] and its width is 100.

Using widths of size k permits us to reduce the number of buckets needed by a factor of k. But in order to find the smallest distance label, we need to search all of the elements in the smallest indexed nonempty bucket. If we could devise a variable width scheme, with a width of one for the lowest numbered bucket, we could conceivably retain the advantages of both the wide bucket and narrow bucket approaches.

The R-heap algorithm we consider next uses variable length widths and changes the ranges dynamically. In the version of redistributive heaps that we present, the widths of the buckets are 1, 1, 2, 4, 8, 16, ..., so that the number of buckets needed is only O(log nC). Moreover, we dynamically modify the ranges of numbers stored in each bucket, and we reallocate nodes with temporary distance labels in a way that stores the minimum distance label in a bucket whose width is 1. In this way, as in the previous algorithm, we avoid the need to search the entire bucket to find the minimum. In fact, the running time of this version of the R-heap algorithm is O(m + n log nC).

We now describe the R-heap in more detail. For a given shortest path problem, the R-heap consists of 1 + ⌈log nC⌉ buckets, numbered 0, 1, 2, ..., K = ⌈log nC⌉. We represent the range of bucket k by range(k), which is a (possibly empty) closed interval of integers. We store a temporary node i in bucket k if d(i) ∈ range(k); we do not store permanent nodes. The nodes in bucket k are denoted by the set CONTENT(k). The algorithm will change the ranges of the buckets dynamically, and each time it changes the ranges, it redistributes the nodes in the buckets.

Initially, the buckets have the following ranges:

range(0) = [0];
range(1) = [1];
range(2) = [2 .. 3];
range(3) = [4 .. 7];
range(4) = [8 .. 15];
 . . .
range(K) = [2^(K-1) .. 2^K - 1].

These ranges will change dynamically; however, the widths of the buckets will not increase beyond their initial widths. Suppose, for example, that the initial minimum distance label is quickly determined to be in the range [8 .. 15]. We could verify this fact by verifying that buckets 0 through 3 are empty and bucket 4 is nonempty. At this point, we could not identify the minimum distance label without searching all nodes in bucket 4.

Since the minimum index nonempty bucket is the bucket whose range is [8 .. 15], we know that no temporary label will ever again be less than 8, and hence buckets 0 to 3 will never be needed again. Rather than leaving these buckets idle, we can redistribute the range of bucket 4 (whose width is 8) to the previous buckets (whose combined width is 8), resulting in the ranges [8], [9], [10 .. 11], [12 .. 15]. We then set the range of bucket 4 to the empty set and shift (or redistribute) its temporarily labeled nodes into the appropriate buckets (0 to 3). Thus, each of the elements of bucket 4 moves to a lower indexed bucket.

Essentially, we have replaced the node selection step (i.e., finding a node with the smallest temporary distance label) by a sequence of redistribution steps in which we shift nodes constantly to lower indexed buckets. Roughly speaking, the redistribution time is O(n log nC) in total, since each node can be shifted at most K = 1 + ⌈log nC⌉ times. Eventually, the minimum temporary label is in a bucket with width one, and the algorithm selects it in an additional O(1) time.

Actually, we carry out these operations a bit differently. Since we will be scanning all of the elements of bucket 4 in the redistribute step, it makes sense to first find the minimum temporary label in the bucket. Suppose, for example, that the minimum temporary label is 11. Then, rather than redistributing the whole range [8 .. 15], we need only redistribute the subrange [11 .. 15]. In this case the resulting ranges of buckets 0 to 3 would be [11], [12], [13 .. 14], [15], and the range of bucket 4 would become empty. Moreover, at the end of this redistribution, we are guaranteed that the minimum temporary label is stored in bucket 0, whose width is 1.
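The range bookkeeping is the only delicate part of the redistribution; this small Python sketch (ours) reproduces the computation for the example just described.

def redistribute(d_min, u):
    # Spread the useful range [d_min .. u] over buckets 0, 1, ..., using
    # the widths 1, 1, 2, 4, ...: bucket 0 gets the first integer, bucket 1
    # the next, bucket 2 the next two, and so on, truncating at u.
    ranges, lo, bucket = [], d_min, 0
    while lo <= u:
        width = 1 if bucket <= 1 else 2 ** (bucket - 1)
        hi = min(lo + width - 1, u)
        ranges.append((lo, hi))
        lo, bucket = hi + 1, bucket + 1
    return ranges

print(redistribute(11, 15))   # [(11, 11), (12, 12), (13, 14), (15, 15)]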

To reiterate, we do not carry out the actual node selection step until the minimum nonempty bucket is a bucket whose width is 1. If the minimum nonempty bucket is a bucket k whose width is greater than 1, we redistribute the range of bucket k into buckets 0 to k-1, and then we reassign the content of bucket k to buckets 0 to k-1. The redistribution time is O(n log nC), and the running time of the algorithm is O(m + n log nC).

To select the node with the smallest distance label, we scan the buckets 0, 1, ..., K to find the first nonempty bucket. Since bucket 0 has width 1, if bucket 0 is nonempty, then every node in this bucket has the same (minimum) distance label.

We now illustrate R-heaps on the shortest path example given in Figure 3.2. In the figure, the number beside each arc indicates its length. For this problem, C = 20, nC = 120, and K = ⌈log 120⌉ = 7.

Figure 3.2. The shortest path example.

Figure 3.3 specifies the starting solution of Dijkstra's algorithm and the initial R-heap (node labels and bucket contents for this example; the bucket ranges are [0], [1], [2 .. 3], [4 .. 7], [8 .. 15], [16 .. 31], [32 .. 63], and [64 .. 127]).

Figure 3.3. The initial R-heap.

The algorithm designates node 3 as permanent, deletes node 3 from the R-heap, and scans the arc (3, 5), changing the distance label of node 5 from 20 to 9. We check whether the new distance label of node 5 is contained in the range of its present bucket, which is bucket 5. It isn't. Since its distance label has decreased, node 5 should move to a lower index bucket. So, starting at bucket 4, we sequentially scan the buckets from right to left to identify the first bucket whose range contains the number 9; node 5 moves to bucket 4. Figure 3.4 shows the new R-heap.

Figure 3.4. The new R-heap.

To select a node in the next iteration, we again scan the buckets from left to right; bucket 4 is the first nonempty bucket. Its range is [8 .. 15], and the smallest distance label among its nodes is d(5) = 9, so the useful range of the bucket is [9 .. 15]. We redistribute this useful range over the lower indexed buckets, reinsert the nodes accordingly, and obtain CONTENT(0) = {5}, CONTENT(1) = ∅, CONTENT(2) = ∅, CONTENT(3) = {2, 4}, and CONTENT(4) = ∅. The redistribution necessarily empties bucket 4 and moves the node with the smallest distance label to bucket 0.

We are now in a position to outline the general algorithm and analyze its complexity. Suppose that j ∈ CONTENT(k) and that d(j) decreases. If the modified d(j) ∉ range(k), then we sequentially scan lower numbered buckets from right to left and add the node to the appropriate bucket. Overall, this operation takes O(m + nK) time. The term m reflects the number of distance updates, and the term nK arises because every time a node moves, it moves to a lower indexed bucket; since there are K+1 buckets, each node can move at most K times, so the nodes move a total of at most nK times.

Next we consider the node selection step. Node selection begins by scanning the buckets from left to right to identify the first nonempty bucket, say bucket k. This operation takes O(K) time per iteration and O(nK) time in total. If k = 0 or k = 1, then any node in the selected bucket has the minimum distance label. If k ≥ 2, then we redistribute the "useful" range of bucket k into the buckets 0, 1, ..., k-1 and reinsert its content into those buckets. If the range of bucket k is [l .. u] and the smallest distance label of a node in the bucket is d_min, then the useful range of the bucket is [d_min .. u]. The algorithm redistributes the useful range in the following manner: we assign the first integer to bucket 0, the next integer to bucket 1, the next two integers to bucket 2, the next four integers to bucket 3, and so on. Since bucket k has width at most 2^(k-1), and since the widths of the first k buckets can be as large as 1, 1, 2, ..., 2^(k-2), for a total potential width of 2^(k-1), we can redistribute the useful range of bucket k over the buckets 0, 1, ..., k-1 in the manner described. This redistribution of ranges and the subsequent reinsertions of nodes empties bucket k and moves the nodes with the smallest distance labels to bucket 0. Whenever we examine the nodes in a nonempty bucket k with k ≥ 2, we move them to lower indexed buckets; as before, each node can move at most K times, so the node selection steps take O(nK) total time. Since K = ⌈log nC⌉, the algorithm runs in O(m + n log nC) time. We now summarize our discussion.

Theorem 3.1. The R-heap implementation of Dijkstra's algorithm solves the shortest path problem in O(m + n log nC) time.

This algorithm requires 1 + ⌈log nC⌉ buckets. FACT 3.2 permits us to reduce the number of buckets to 1 + ⌈log C⌉; this refined implementation of the algorithm runs in O(m + n log C) time. For problems that satisfy the similarity assumption (see Section 1.2), this bound becomes O(m + n log n). Using substantially more sophisticated data structures, it is possible to reduce this bound further to O(m + n √(log n)), which is a linear time algorithm for all but the sparsest classes of shortest path problems.

3.4 Label Correcting Algorithms

Label correcting algorithms, as the name implies, maintain tentative distance labels for nodes and correct the labels at every iteration. Unlike label setting algorithms, these algorithms maintain all distance labels as temporary until the end, when they all become permanent simultaneously. The label correcting algorithms are conceptually more general than the label setting algorithms and are applicable to more general situations, for example, to networks containing negative length arcs. To produce shortest paths, these algorithms typically require that the network does not contain any negative directed cycle, i.e., a directed cycle whose arc lengths sum to a negative value. Most label correcting algorithms have the capability to detect the presence of negative cycles.

Label correcting algorithms can be viewed as a procedure for solving the following recursive equations:

d(s) = 0,     (3.1)

d(j) = min {d(i) + c_ij : (i, j) ∈ A}, for each j ∈ N - {s}.     (3.2)

As usual, d(j) denotes the length of a shortest path from the source node to node j. These equations are known as Bellman's equations and represent necessary conditions for optimality of the shortest path problem. These conditions are also sufficient if every cycle in the network has a positive length. We will prove an alternate version of these conditions, which is more suitable from the viewpoint of label correcting algorithms.

Theorem 3.2. Let d(i) for i ∈ N be a set of labels. If d(s) = 0 and if, in addition, the labels satisfy the following conditions, then they represent the shortest path lengths from the source node:

C3.1. d(i) is the length of some path from the source node to node i; and

C3.2. d(j) ≤ d(i) + cij for all (i, j) ∈ A.

Proof. Since d(i) is the length of some path from the source to node i, it is an upper bound on the shortest path length. We show that if the labels d(i) satisfy C3.2, then they are also lower bounds on the shortest path lengths. Consider any directed path P from the source to node j, consisting of the nodes s = i1 - i2 - i3 - ... - ik = j. Condition C3.2 implies that d(i2) ≤ d(i1) + ci1i2 = ci1i2, d(i3) ≤ d(i2) + ci2i3, ..., d(ik) ≤ d(ik-1) + cik-1ik. Adding these inequalities yields d(j) = d(ik) ≤ Σ_(i,j) ∈ P cij, since the labels of the intermediate nodes cancel. Therefore d(j) is a lower bound on the length of any directed path from the source to node j, including a shortest path from s to j, which implies the conclusion of the theorem.

We note that if the network contains a negative cycle, then no set of labels d(i) satisfies C3.2. For suppose that the network did contain a negative cycle W and some labels d(i) satisfy C3.2. These inequalities imply that Σ_(i,j) ∈ W (d(i) - d(j) + cij) ≥ 0. Since the labels d(i) cancel out in the summation, Σ_(i,j) ∈ W cij ≥ 0. This conclusion contradicts our assumption that W is a negative cycle.

Conditions C3.1 in Theorem 3.2 correspond to primal feasibility for the linear programming formulation of the shortest path problem, and conditions C3.2 correspond to dual feasibility. From this perspective, we might view label correcting algorithms as methods that always maintain primal feasibility and try to achieve dual feasibility.

The generic label correcting algorithm that we consider first is a general procedure for successively updating the distance labels d(i) until they satisfy the conditions C3.2. The algorithm is based upon the simple observation that whenever d(j) > d(i) + cij, the current path from the source to node i, of length d(i), together with the arc (i, j) is a shorter path to node j than the current path of length d(j). At any point in the algorithm, the label d(i) is either ∞, indicating that we have yet to discover any path from the source to node i, or the length of some path from the source to node i.

algorithm LABEL CORRECTING;
begin
  d(s) : = 0 and pred(s) : = 0;
  d(j) : = ∞ for each j ∈ N - {s};
  while some arc (i, j) satisfies d(j) > d(i) + cij do
  begin
    d(j) : = d(i) + cij;
    pred(j) : = i;
  end;
end;

The correctness of the label correcting algorithm follows from Theorem 3.2: at termination, the labels d(i) satisfy d(j) ≤ d(i) + cij for all (i, j) ∈ A, and hence represent the shortest path lengths. We now note that this algorithm is finite if there are no negative cost cycles and if the data are integral. Since each d(j) is bounded from above by nC and below by -nC, the algorithm updates d(j) at most 2nC times. Thus, when the data are integral, the total number of distance updates is O(n²C), and hence the algorithm runs in pseudopolynomial time.

A nice feature of this label correcting algorithm is its flexibility: we can select the arcs that do not satisfy conditions C3.2 in any order and still assure finite convergence. One drawback of the method, however, is that without a further restriction on the choice of arcs, the label correcting algorithm does not necessarily run in polynomial time. Indeed, if we start with pathological instances of the problem and make a poor choice of arcs at every iteration, then the number of steps can grow exponentially with n. (Since the algorithm is pseudopolynomial time, these instances do have exponentially large values of C.) To obtain a polynomial time bound for the algorithm, we can organize the computations carefully in the following manner.

Arrange the arcs in A in some (possibly arbitrary) order. Now make passes through A. In each pass, scan the arcs in A in order and check the condition d(j) > d(i) + cij; if the arc satisfies this condition, then update d(j) : = d(i) + cij and pred(j) : = i. Terminate the algorithm if no distance label changes during an entire pass. We call this algorithm the modified label correcting algorithm.
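The pass-based method is exactly what is usually coded as the Bellman-Ford algorithm. A minimal Python sketch, assuming the network is given as a list of arcs (i, j, cij) with nodes numbered 0, ..., n-1 (the names are illustrative):

INF = float('inf')

def modified_label_correcting(n, arcs, s):
    """Pass-based label correcting: make passes over the arc list
    until no label changes; at most n-1 passes suffice when the
    network contains no negative cycle."""
    d = [INF] * n
    pred = [None] * n
    d[s] = 0
    for _ in range(n - 1):
        changed = False
        for i, j, cij in arcs:           # scan arcs in a fixed order
            if d[i] + cij < d[j]:
                d[j] = d[i] + cij        # distance update
                pred[j] = i
                changed = True
        if not changed:                  # labels satisfy C3.2: stop early
            break
    return d, pred

An additional n-th pass that still changes some label would certify a negative cycle, as discussed below.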

Theorem 3.3. When applied to a network containing no negative cycles, the modified label correcting algorithm requires O(nm) time to determine shortest paths from the source to every other node.

Proof. We show that the algorithm performs at most n-1 passes through the arc list. Since each pass requires O(1) computations for each arc, this conclusion implies the O(nm) bound. Let d^r(j) denote the length of the shortest path from the source to node j consisting of r or fewer arcs, and let D^r(j) represent the distance label of node j after r passes through the arc list. We claim, inductively, that D^r(j) ≤ d^r(j) for each j ∈ N and each r = 1, ..., n-1.

We perform induction on the value of r. Suppose D^(r-1)(j) ≤ d^(r-1)(j) for each j ∈ N. The provisions of the modified labeling algorithm imply that

D^r(j) ≤ min {D^(r-1)(j), min_(i≠j) {D^(r-1)(i) + cij}} ≤ min {d^(r-1)(j), min_(i≠j) {d^(r-1)(i) + cij}};

the second inequality follows from the induction hypothesis. Next note that the shortest path to node j containing no more than r arcs either (i) has no more than r-1 arcs, in which case d^r(j) = d^(r-1)(j), or (ii) contains exactly r arcs, in which case d^r(j) = min_(i≠j) {d^(r-1)(i) + cij}. Thus d^r(j) = min {d^(r-1)(j), min_(i≠j) {d^(r-1)(i) + cij}}, and hence D^r(j) ≤ d^r(j). Finally, we note that the shortest path from the source to any node consists of at most n-1 arcs. Therefore, after at most n-1 passes, the algorithm terminates with the shortest path lengths.

The modified label correcting algorithm is also capable of detecting the presence of negative cycles in the network. If the algorithm does not update any distance label during an entire pass, up to the (n-1)-th pass, then it has a set of labels d(j) satisfying C3.2, it terminates with the shortest path distances, and the network does not contain any negative cycle. On the other hand, if we make one more pass, i.e., the n-th pass, and the distance label of some node i changes, then the network contains a directed walk (a path together with a cycle that have one or more nodes in common) from the source node to node i, of more than n-1 arcs, that has smaller length than all paths from the source node to i. This situation cannot occur unless the network contains a negative cost cycle.

Practical Improvements

As stated so far, the modified label correcting algorithm considers every arc of the network during every pass through the arc list. It need not do so. Suppose we order the arcs in the arc list by their tail nodes so that all arcs with the same tail node appear consecutively on the list. Thus, while scanning the arcs, we consider one node i at a time, scanning the arcs in A(i) and testing the optimality conditions. Now suppose that during one pass through the arc list, the algorithm does not change the distance label of a node i. Then, during the next pass, d(j) ≤ d(i) + cij for every (i, j) ∈ A(i), and the algorithm need not test these conditions.

To achieve this savings, the algorithm can maintain a list of nodes whose distance labels have changed since it last examined them. It scans this list in the first-in, first-out order to assure that it performs passes through the arc list A and, consequently, terminates in O(nm) time. The following procedure is a formal description of this further modification of the modified label correcting method.

algorithm MODIFIED LABEL CORRECTING;
begin
  d(s) : = 0 and pred(s) : = 0;
  d(j) : = ∞ for each j ∈ N - {s};
  LIST : = {s};
  while LIST ≠ ∅ do
  begin
    select the first element i of LIST;
    delete i from LIST;
    for each (i, j) ∈ A(i) do
      if d(j) > d(i) + cij then
      begin
        d(j) : = d(i) + cij;
        pred(j) : = i;
        if j ∉ LIST then add j to the end of LIST;
      end;
  end;
end;

Another modification of this algorithm sacrifices its polynomial time behavior in the worst case, but greatly improves its running time in practice. The modification alters the manner in which the algorithm adds nodes to LIST. While adding a node j to LIST, we check to see whether it has already appeared in the LIST. If yes, then we add it to the beginning of LIST; otherwise we add it to the end of LIST. This heuristic rule has the following plausible justification. If the node j has previously appeared on the LIST, then some nodes may have j as a predecessor. It is advantageous to update the distances for these nodes immediately, rather than update them from other nodes and then update them again when we consider node j alone. Empirical studies indicate that with this change alone, the algorithm is several times faster for many reasonable problem classes. Though this change makes the algorithm very attractive in practice, its worst-case running time is exponential.
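A Python sketch of this list-based variant, including the front-or-back insertion heuristic, under the assumption that the network is given as adjacency lists A[i] of (j, cij) pairs (names illustrative):

from collections import deque

INF = float('inf')

def label_correcting_list(n, A, s, front_insertion=False):
    """FIFO label correcting on adjacency lists A[i] = [(j, cij), ...].
    With front_insertion=True, a node that has been on LIST before
    re-enters at the front -- faster in practice, but exponential in
    the worst case."""
    d = [INF] * n
    pred = [None] * n
    d[s] = 0
    LIST = deque([s])
    on_list = [False] * n
    seen = [False] * n                  # has the node ever been on LIST?
    on_list[s] = seen[s] = True
    while LIST:
        i = LIST.popleft()
        on_list[i] = False
        for j, cij in A[i]:
            if d[i] + cij < d[j]:
                d[j] = d[i] + cij
                pred[j] = i
                if not on_list[j]:
                    if front_insertion and seen[j]:
                        LIST.appendleft(j)   # add to the beginning
                    else:
                        LIST.append(j)       # add to the end
                    on_list[j] = seen[j] = True
    return d, pred

With front_insertion=False this is the polynomial O(nm) method; with front_insertion=True it follows the heuristic rule described above.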

Nevertheless, this version of the label correcting algorithm is the fastest algorithm in practice for finding the shortest path from a single source to all nodes in non-dense networks. (For the problem of finding a shortest path from a single source node to a single sink, certain variants of the label setting algorithm are more efficient in practice.)

3.5 All Pairs Shortest Path Algorithm

In certain applications of the shortest path problem, we need to determine shortest path distances between all pairs of nodes. In this section we describe two algorithms to solve this problem. The first algorithm is well suited for sparse graphs; it combines the modified label correcting algorithm and Dijkstra's algorithm. The second algorithm is better suited for dense graphs; it is based on dynamic programming.

If the network has nonnegative arc lengths, then we can solve the all pairs shortest path problem by applying Dijkstra's algorithm n times, considering each node as the source node once. If the network contains arcs with negative arc lengths, then we can first transform the network to one with nonnegative arc lengths as follows. Let s be a node from which all nodes in the network are reachable, i.e., connected by directed paths. We use the modified label correcting algorithm to compute the shortest path distances from s to all other nodes. The algorithm either terminates with the shortest path distances d(j) or indicates the presence of a negative cycle. In the former case, we define the new length of the arc (i, j) as c̄ij = cij + d(i) - d(j) for each (i, j) ∈ A. Condition C3.2 implies that c̄ij ≥ 0 for all (i, j) ∈ A. Further, note that for any path P from node k to node l,

Σ_(i,j) ∈ P c̄ij = Σ_(i,j) ∈ P cij + d(k) - d(l),

since the intermediate labels d(j) cancel out in the summation. This transformation thus changes the length of all paths between a pair of nodes by a constant amount (depending on the pair) and consequently preserves shortest paths. Since arc lengths become nonnegative after the transformation, we can apply Dijkstra's algorithm n-1 additional times to determine the shortest path distances between all pairs of nodes in the transformed network. We then obtain the shortest path distance between nodes k and l in the original network by adding d(l) - d(k) to the corresponding shortest path distance in the transformed network. This approach requires O(nm) time to solve the first shortest path problem, and if the network contains no negative cost cycle, the method takes an extra O(n · S(n,m,C)) time to compute the remaining shortest path distances. In this expression, S(n,m,C) is the time needed to solve a shortest path problem with nonnegative arc lengths; for the R-heap implementation of Dijkstra's algorithm we considered previously, S(n,m,C) = m + n log nC.
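The reweighting step described above is a single pass over the arcs; a short Python sketch (assuming the labels d come from the label correcting run just described, and arcs is a list of (i, j, cij) triples):

def transform_lengths(arcs, d):
    """Reweight arcs: cbar_ij = cij + d(i) - d(j) >= 0 by C3.2."""
    return [(i, j, cij + d[i] - d[j]) for i, j, cij in arcs]

# A shortest k-l distance D(k, l) computed in the reweighted network
# converts back to the original lengths as D(k, l) + d(l) - d(k).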

Another way to solve the all pairs shortest path problem is by dynamic programming. The approach we present is known as Floyd's algorithm. We define the variables d^r(i, j) as follows: d^r(i, j) is the length of a shortest path from node i to node j subject to the condition that the path uses only the nodes 1, 2, ..., r-1 (and i and j) as internal nodes. Let d(i, j) denote the actual shortest path distance. To compute d^(r+1)(i, j), we first observe that a shortest path from node i to node j that passes through the nodes 1, 2, ..., r either (i) does not pass through the node r, in which case d^(r+1)(i, j) = d^r(i, j), or (ii) does pass through the node r, in which case d^(r+1)(i, j) = d^r(i, r) + d^r(r, j). Thus we have

d^1(i, j) = cij,

and

d^(r+1)(i, j) = min {d^r(i, j), d^r(i, r) + d^r(r, j)}.

It is possible to solve the previous equations recursively for increasing values of r, and by varying the node pairs over N × N for a fixed value of r. We assume that cij = ∞ for all node pairs (i, j) ∉ A. The following procedure is a formal description of this algorithm.

pred(i. = Cj. j) : = pred(r. (Hi) < d(i. for each for each A to do d(i. j). d(i. i) = for all i. = <« j) : and = i. ))•. end. r) do + d(r. d(i. j) : = 0.ork contains a path from node to node j of length d(i. : < > begin d(i. . pred(i. Floyd's algorithm jilgorithm. j) > then . j) -: j T • if d(i. e NxN d(i. end. netw. when < 0. For fixed i. j) € NxN j) : do d(i. j). last node prior to node j in the tentative shortest path from the node i to node The algorithm maintains the property i that for each finite d(i. Floyd's algorithm uses predecessor indices. j) is the d(i. j). and in each iteration it performs 0(1) computations for each node pair. j) : = d(i. predd. j) for each . then they represent the shortest path distances: (i) (ii) d(i. and pred(i. i) Hence. j) e N N satisfy the following conditions. r : = n do (i. the union of the r to tentative shortest paths to node r and from node node i contains a negative cycle. This algorithm performs n iterations. j) for more transparent from x the followang theorem. if i = j and < then the network contains a negative cycle. p. r) i + d(r. node (i.4 If d(i. j). j) pairs € 1 (i. STOP.2. + i) d(r. This cycle can be obtained by using the predecessor indices. The algorithm for either terminates vdth the shortest path distances or stops i. This path can be obtained by tracing the predecessor indices. and j. r. for some node from node r.67 algorithm begin for all ALL PAIRS SHORTEST PATHS. Theorem (i. The index pred(i. is in many respects similar to the modified label correcting This relationship becomes 3. r) d(i.*i. Consequently. for each node pair (i. r) + c^: for all i. this theorem is a consequence of Theorem 3. d(i. j) denotes the j. j) length of some path from node i to node j. Proof. j). i) < some node In the latter case. it runs in OCn-') time.

4. MAXIMUM FLOWS

An important characteristic of a network is its capacity to carry flow. What, given capacities on the arcs, is the maximum flow that can be sent between any two nodes? The resolution of this question determines the "best" use of arc capacities and establishes a reference point against which to compare other ways of using the network. Moreover, the solution of the maximum flow problem with capacity data chosen judiciously establishes other performance measures for a network. For example, what is the minimum number of nodes whose removal from the network destroys all paths joining a particular pair of nodes? Or, what is the maximum number of node disjoint paths that join this pair of nodes? These and similar reliability measures indicate the robustness of the network to failure of its components.

In this section, we discuss several algorithms for computing the maximum flow between two nodes in a network. We begin by introducing a basic labeling algorithm for solving the maximum flow problem. The validity of these algorithms rests upon the celebrated max-flow min-cut theorem of network flows. This remarkable theorem has a number of surprising implications in machine and vehicle scheduling, communication systems planning and several other application domains. We then consider improved versions of the basic labeling algorithm with better theoretical performance guarantees. Finally, we describe preflow-push algorithms that have recently emerged as the most powerful techniques for solving the maximum flow problem, both theoretically and computationally.

We consider a capacitated network G = (N, A) with a nonnegative integer capacity uij for any arc (i, j) ∈ A. The source s and sink t are two distinguished nodes of the network. We assume that for every arc (i, j) in A, (j, i) is also in A. There is no loss of generality in making this assumption since we allow zero capacity arcs. We also assume without any loss of generality that all arc capacities are finite (since we can set the capacity of any uncapacitated arc equal to the sum of the capacities of all capacitated arcs). Let U = max {uij : (i, j) ∈ A}. As earlier, the arc adjacency list, defined as A(i) = {(i, k) : (i, k) ∈ A}, designates the arcs emanating from node i.

In the maximum flow problem, we wish to find the maximum flow from the source node s to the sink node t that satisfies the arc capacities. Formally, the problem is to

algorithm proceeds by identifying directed paths from the source to the sink in the residual network and augmenting flows on these paths. rj. Algorithms whose complexity bounds involve U assume integrality of data. that rational arc capacities can always be transformed to integer arc capacities by appropriately scaling the data. is crucial to the algorithms (i. ifi 0. of any arc i e j A represents the (i. + xij .1c) It is possible to relax the integrality assumption on arc capacities for is some algorithms. The following high-level (and flexible) description of the algorithm summarizes the basic iterative steps. j) maximum (j. the current flow rj. (4.. positive residual capacities the residual represent it network (with respect to the flow and as G(x). though this assumption necessary for others.t.69 Maximize v subject to V. 4. for each (i. Note. j) (4.j on arc x^: (j. x. without specifying any particular algorithmic strategy for how to determine augmenting paths.1 illustrates an example of a residual network.1b) (i. i) Xjj = \ ^ ifi*s.foraUiG N. additional flow that can be sent from node (i) to u^: node - using the arcs and of arc i). The and j. j). (4.1a) r = s. until the residual network contains no such path. .1 Labeling Algorithm and the Max-Flow Min-Cut Theorem One of the simplest is and most path intuitive algorithms for solving the maximum The flow problem the augmenting algorithm due to Ford and Fulkerson. i) which can be cancelled flow to node Consequently. Figure 4. "V' ^ > = ^' < Xj: < Ujj . the integrality assumption of residual network is not a restrictive assumption in practice. Given a the residual capacity. however. Thus. the unused capacity to increase (i. j) € A) € A) e A. j) we consider. The concept flow x. = Uj. y {j : Xjj {) : y (j. We call the network consisting of the arcs with x). . residual capacity has two components: (ii) x^.

4.1 Labeling Algorithm and the Max-Flow Min-Cut Theorem

One of the simplest and most intuitive algorithms for solving the maximum flow problem is the augmenting path algorithm due to Ford and Fulkerson. The algorithm proceeds by identifying directed paths from the source to the sink in the residual network and augmenting flows on these paths, until the residual network contains no such path. The following high-level (and flexible) description of the algorithm summarizes the basic iterative steps, without specifying any particular algorithmic strategy for how to determine augmenting paths.

algorithm AUGMENTING PATH;
begin
  x : = 0;
  while there is a path P from s to t in G(x) do
  begin
    Δ : = min {rij : (i, j) ∈ P};
    augment Δ units of flow along P and update G(x);
  end;
end;

We now discuss this algorithm in more detail. First, we need a method to identify a directed path from the source to the sink in the residual network, or to show that the network contains no such path. Second, we need to show that the algorithm terminates finitely. Finally, we must establish that the algorithm terminates with a maximum flow. The last result follows from the proof of the max-flow min-cut theorem.

A directed path from the source to the sink in the residual network is also called an augmenting path. The residual capacity of an augmenting path is the minimum residual capacity of any arc on the path. The definition of the residual capacity implies that an additional flow of Δ units on arc (i, j) of the residual network corresponds to (i) an increase in xij by Δ in the original network, or (ii) a decrease in xji by Δ in the original network, or (iii) a convex combination of (i) and (ii). For our purposes, it is easier to work directly with residual capacities and to compute the flows only when the algorithm terminates. Augmenting Δ units of flow along P decreases rij by Δ and increases rji by Δ for each arc (i, j) ∈ P.

The labeling algorithm performs a search of the residual network to find a directed path from s to t. It does so by fanning out from the source node s to find a directed tree containing nodes that are reachable from the source along a directed path in the residual network. At any step, we refer to the nodes in the tree as labeled and those not in the tree as unlabeled. The algorithm selects a labeled node and scans its arc adjacency list (in the residual network) to label more unlabeled nodes. Eventually, the sink becomes labeled and the algorithm sends the maximum possible flow on the path from s to t. It then erases the labels and repeats this process. The algorithm terminates when it has scanned all labeled nodes and the sink remains unlabeled. The following algorithmic description specifies the steps of the labeling algorithm in detail. The

Figure 4.) Network with a flow x. (Arcs not shown have zero capacities.1 Example of a residua] network. Node 1 is the source and node 4 is the sink.71 Network with arc capacities. c The residual network with residual arc capacities. .

algorithm maintains a predecessor index, pred(i), for each labeled node i indicating the node that caused node i to be labeled. The predecessor indices allow us to trace back along the path from a node to the source.

algorithm LABELING;
begin
  loop
    pred(j) : = 0 for each j ∈ N;
    L : = {s};
    while L ≠ ∅ and t is unlabeled do
    begin
      select a node i ∈ L;
      for each (i, j) ∈ A(i) do
        if j is unlabeled and rij > 0 then
        begin
          pred(j) : = i;
          mark j as labeled and add this node to L;
        end;
    end;
    if t is labeled then
    begin
      use the predecessor labels to trace back to obtain the augmenting path P from s to t;
      Δ : = min {rij : (i, j) ∈ P};
      augment Δ units of flow along P;
      erase all labels and go to loop;
    end
    else quit the loop;
  end; (loop)
end;

The final residual capacities r can be used to obtain the arc flows as follows. Since the arc flows satisfy xij - xji = uij - rij, if uij > rij we can set xij = uij - rij and xji = 0; otherwise we set xij = 0 and xji = rij - uij.
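A compact Python rendering of the labeling algorithm, working directly with residual capacities as suggested above (a sketch under the stated assumptions: every arc appears together with its reversal, r holds a residual capacity for each, and capacities are integral; names illustrative):

def labeling_max_flow(A, r, s, t):
    """A: adjacency lists; r: dict of residual capacities, mutated in
    place.  Returns the maximum flow value."""
    v = 0
    while True:
        pred = {s: None}                 # search phase: fan out from s
        L = [s]
        while L and t not in pred:
            i = L.pop()                  # select any labeled node
            for j in A[i]:
                if j not in pred and r[(i, j)] > 0:
                    pred[j] = i          # label node j
                    L.append(j)
        if t not in pred:                # sink unlabeled: flow is maximum
            return v
        path, j = [], t                  # trace back the augmenting path
        while pred[j] is not None:
            path.append((pred[j], j))
            j = pred[j]
        delta = min(r[a] for a in path)
        for i, j in path:                # augment: update residuals only
            r[(i, j)] -= delta
            r[(j, i)] += delta
        v += delta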

In order to show that the algorithm obtains a maximum flow, we introduce some new definitions and notation. Recall from Section 1.3 that a set Q ⊆ A is a cutset if the subnetwork G' = (N, A - Q) is disconnected and no superset of Q has this property. A cutset partitions the node set N into two subsets. A cutset is called an s-t cutset if the source and the sink nodes are contained in different subsets of nodes S and S̄ = N - S: S is the set of nodes connected to s. Conversely, any partition of the node set as S and S̄ with s ∈ S and t ∈ S̄ defines an s-t cutset, and consequently we alternatively designate an s-t cutset as (S, S̄). An arc (i, j) with i ∈ S and j ∈ S̄ is called a forward arc, and an arc (i, j) with i ∈ S̄ and j ∈ S is called a backward arc in the cutset (S, S̄).

Let x be a flow vector satisfying the flow conservation and capacity constraints of (4.1). For this flow vector x, let v be the amount of flow leaving the source. We refer to v as the value of the flow. The flow x determines the net flow across an s-t cutset (S, S̄) as

Fx(S, S̄) = Σ_{i ∈ S} Σ_{j ∈ S̄} xij - Σ_{i ∈ S̄} Σ_{j ∈ S} xij. (4.2)

Define the capacity C(S, S̄) of an s-t cutset (S, S̄) as

C(S, S̄) = Σ_{i ∈ S} Σ_{j ∈ S̄} uij. (4.3)

We claim that the flow across any s-t cutset equals the value of the flow and does not exceed the cutset capacity. Adding the flow conservation constraints (4.1b) for the nodes in S, and noting that when nodes i and j both belong to S, xij in the equation for node j cancels -xij in the equation for node i, we obtain

v = Σ_{i ∈ S} Σ_{j ∈ S̄} xij - Σ_{i ∈ S̄} Σ_{j ∈ S} xij = Fx(S, S̄). (4.4)

Substituting xij ≤ uij in the first summation and xij ≥ 0 in the second summation shows that

Fx(S, S̄) ≤ Σ_{i ∈ S} Σ_{j ∈ S̄} uij = C(S, S̄). (4.5)

This result is the weak duality property of the maximum flow problem when viewed as a linear program. Like most weak duality results, it is the "easy" half of the duality theory. The more substantive strong duality property asserts that (4.5) holds as an equality for some choice of x and some choice of an s-t cutset (S, S̄). This strong duality property is the max-flow min-cut theorem.

Theorem 4.1. (Max-Flow Min-Cut Theorem) The maximum value of flow from s to t equals the minimum capacity of all s-t cuts.

Proof. Let x denote a maximum flow vector and v denote the maximum flow value. (Linear programming theory, or our subsequent algorithmic developments, guarantee that the problem always has a maximum flow as long as some cutset has finite capacity.) Apply the labeling algorithm with the initial flow x, and define S to be the set of labeled nodes in the residual network G(x) when the algorithm terminates. Let S̄ = N - S. Clearly, s ∈ S, and t ∈ S̄ since x is a maximum flow. Note that rij = 0 for each arc (i, j) ∈ (S, S̄), for otherwise the nodes in S̄ could be labeled from the nodes in S. Since rij = uij - xij + xji, the conditions xij ≤ uij and xji ≥ 0 imply that xij = uij for each forward arc in the cutset (S, S̄) and xij = 0 for each backward arc in the cutset. Making these substitutions in (4.4) yields

v = Fx(S, S̄) = Σ_{i ∈ S} Σ_{j ∈ S̄} uij = C(S, S̄). (4.6)

But we have observed earlier that v is a lower bound on the capacity of any s-t cutset. Consequently, the cutset (S, S̄) is a minimum capacity cutset and its capacity equals the maximum flow value v. We thus have established the theorem.

The proof of this theorem not only establishes the max-flow min-cut property, but the same argument shows that when the labeling algorithm terminates, it has at hand both the maximum flow value (and a maximum flow vector) and a minimum capacity s-t cutset.

But does the labeling algorithm terminate finitely? Each labeling iteration of the algorithm scans any node at most once, inspecting each arc in A(i), and hence requires O(m) computations. If all arc capacities are integral and bounded by a finite number U, then the capacity of the cutset (s, N - {s}) is at most nU. Since the labeling algorithm increases the flow value by at least one unit in any iteration, it terminates within nU iterations. This bound on the number of iterations is not entirely satisfactory for large values of U; if U = 2ⁿ, the bound

is exponential in the number of nodes. Moreover, the algorithm can indeed perform that many iterations, as the example given in Figure 4.2 illustrates. In addition, if the capacities are irrational, the algorithm may not terminate: although the successive flow values converge, they may not converge to the maximum flow value. Thus if the method is to be effective, we must select the augmenting paths carefully. Several refinements of the algorithm, including those we consider in Section 4.4, overcome this difficulty and obtain an optimum flow even if the capacities are irrational. (Note that the max-flow min-cut theorem, and our proof of Theorem 4.1, is true even if the data are irrational.)

A second drawback of the labeling algorithm is its "forgetfulness". At each iteration, the algorithm generates node labels that contain information about augmenting paths from the source to other nodes. The implementation we have described erases the labels when it proceeds from one iteration to the next, even though much of this information may be valid in the next residual network. Erasing the labels therefore destroys potentially useful information. Ideally, we should retain a label when it can be used profitably in later computations.

4.2 Decreasing the Number of Augmentations

The bound of nU on the number of augmentations in the labeling algorithm is not satisfactory from a theoretical perspective. Furthermore, without further modifications, the augmenting path algorithm may take Ω(nU) augmentations, as the example given in Figure 4.2 illustrates.

Flow decomposition shows that, in principle, augmenting path algorithms should be able to find a maximum flow in no more than m augmentations. For suppose x is an optimum flow and y is any flow (possibly zero). By the flow decomposition property, it is possible to obtain x from y by a sequence of at most m augmentations on augmenting paths from s to t plus flows around augmenting cycles. If we define x' as the flow vector obtained from y by applying only the augmenting paths, then x' also is a maximum flow (flows around cycles do not change the flow value). This result shows that it is, in theory, possible to find a maximum flow using at most m augmentations. Unfortunately, to apply this flow decomposition argument, we need to know a maximum flow. No algorithm developed in the literature comes close to achieving this theoretical bound. Nevertheless, it is possible to improve considerably on the bound of O(nU) augmentations of the basic labeling algorithm.

Figure 4.2 A pathological example for the labeling algorithm: (a) the input network with arc capacities; (b) after augmenting along the path s-a-b-t; (c) after augmenting along the path s-b-a-t. Arc flow is indicated beside the arc capacity. After 2 × 10^6 augmentations, alternately along s-a-b-t and s-b-a-t, the flow is maximum.

One natural specialization of the augmenting path algorithm is to augment flow along a "shortest path" from the source to the sink, defined as a path consisting of the least number of arcs. If we augment flow along a shortest path, then the length of any shortest path either stays the same or increases. Moreover, within m augmentations, the length of the shortest path is guaranteed to increase. (We will prove these results in the next section.) Since no path contains more than n-1 arcs, this rule guarantees that the number of augmentations is at most (n-1)m.

An alternative is to augment flow along a path of maximum residual capacity. This specialization also leads to improved complexity. Let v be any flow value and v* be the maximum flow value. By flow decomposition, the network contains at most m augmenting paths whose residual capacities sum to (v* - v). Thus the maximum capacity augmenting path has residual capacity at least (v* - v)/m. Now consider a sequence of 2m consecutive maximum capacity augmentations, starting with flow value v. At least one of these augmentations must augment the flow by an amount (v* - v)/2m or less, for otherwise we will have a maximum flow. Thus, after 2m or fewer maximum capacity augmentations, the algorithm would reduce the capacity of a maximum capacity augmenting path by a factor of at least two. Since this capacity is initially at most U and must be at least 1 until the flow is maximum, after O(m log U) maximum capacity augmentations, the flow must be maximum. (Note that we are essentially repeating the argument used to establish the geometric improvement approach discussed in Section 1.6.)

In the following section, we consider another algorithm for reducing the number of augmentations.

4.3 Shortest Augmenting Path Algorithm

A natural approach to augmenting along shortest paths would be to successively look for shortest paths by performing a breadth first search in the residual network. If the labeling algorithm maintains the set L of labeled nodes as a queue, i.e., in a first-in, first-out order, then it would obtain a shortest path in the residual network. Each of these iterations would take O(m) steps both in the worst case and in practice, and (by our subsequent observations) the resulting computation time would be O(nm²). Unfortunately, this computation time is excessive. We can improve this running time by exploiting the fact that the minimum distance from any

node i to the sink node t is monotonically nondecreasing over all augmentations. By fully exploiting this property, we can reduce the average time per augmentation to O(n).

The Algorithm

The concept of distance labels will prove to be an important construct in the maximum flow algorithms that we discuss in this section and in Sections 4.4 and 4.5. A distance function d : N → Z⁺ with respect to the residual capacities rij is a function from the set of nodes to the nonnegative integers. We say that a distance function is valid if it satisfies the following two conditions:

C4.1. d(t) = 0;

C4.2. d(i) ≤ d(j) + 1 for every arc (i, j) ∈ A with rij > 0.

We refer to d(i) as the distance label of node i and to conditions C4.1 and C4.2 as the validity conditions. It is easy to demonstrate that d(i) is a lower bound on the length of the shortest directed path from node i to t in the residual network. Let i = i1 - i2 - i3 - ... - ik - t be any path of length k in the residual network from node i to t. Then, from C4.2 we have d(i) = d(i1) ≤ d(i2) + 1, d(i2) ≤ d(i3) + 1, ..., d(ik) ≤ d(t) + 1 = 1. These inequalities imply that d(i) ≤ k for any path of length k in the residual network and, hence, any shortest path from node i to t contains at least d(i) arcs. If, for each node i, the distance label d(i) equals the length of the shortest path from i to t in the residual network, then we call the distance labels exact. For example, in Figure 4.1(c), d = (0, 0, 0, 0) is a valid distance label, though d = (3, 1, 2, 0) represents the exact distance labels. There is no particular urgency to compute these distances exactly: by allowing the distance label of node i to be less than the distance from i to t, we maintain flexibility in the algorithm without incurring any significant cost. It suffices to have valid distances, which are lower bounds on the exact distances.

We now define some additional notation. An arc (i, j) in the residual network is admissible if it satisfies d(i) = d(j) + 1; other arcs are inadmissible. A path from s to t consisting entirely of admissible arcs is an admissible path. For any admissible path of length k, d(s) = k. Since d(s) is a lower bound on the length of any path from the source to the sink in the residual network, whenever we augment along an admissible path, it is a shortest path. The algorithm we describe next repeatedly augments flow along admissible paths. Because the algorithm augments flows along shortest paths in the residual network, we refer to it as the shortest augmenting path algorithm.
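Verifying the validity conditions is a single pass over the residual arcs, which can serve as a useful invariant check in an implementation; a Python sketch:

def is_valid_distance(d, arcs, r, t):
    """C4.1: d(t) = 0; C4.2: d(i) <= d(j) + 1 on every residual arc."""
    if d[t] != 0:
        return False
    return all(d[i] <= d[j] + 1 for (i, j) in arcs if r[(i, j)] > 0)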

We can compute the initial distance labels by performing a backward breadth first search of the residual network, starting at the sink node. The algorithm generates an admissible path by adding admissible arcs, one at a time, as follows. It maintains a path from the source node to some node i*, called the current node, consisting entirely of admissible arcs. We call this path a partial admissible path and store it using the predecessor indices pred(j) for each arc (i, j) on the path. The algorithm performs one of two steps at the current node: advance or retreat.

The advance step identifies some admissible arc (i*, j*) emanating from the current node i*, adds it to the partial admissible path, and designates j* as the new current node. If no admissible arc emanates from node i*, then the algorithm performs the retreat step. This step increases the distance label of node i* so that at least one admissible arc emanates from it (we refer to this step as a relabel operation). Increasing d(i*) makes the arc (pred(i*), i*) inadmissible (assuming i* ≠ s); consequently, we delete (pred(i*), i*) from the partial admissible path and node pred(i*) becomes the new current node.

Whenever the partial admissible path becomes an admissible path (i.e., contains node t), the algorithm makes a maximum possible augmentation on this path and begins again with the source as the current node. The algorithm terminates when d(s) ≥ n, indicating that the network contains no augmenting path from the source to the sink. We next describe the algorithm formally.

algorithm SHORTEST AUGMENTING PATH;
begin
  perform a backward breadth first search of the residual network from node t to obtain the distance labels d(i);
  i* : = s;
  while d(s) < n do
  begin
    if i* has an admissible arc then ADVANCE(i*) else RETREAT(i*);
    if i* = t then AUGMENT and set i* : = s;
  end;
end;

procedure ADVANCE(i*);
begin
  let (i*, j*) be an admissible arc in A(i*);
  pred(j*) : = i* and i* : = j*;
end;

procedure RETREAT(i*);
begin
  d(i*) : = min {d(j) + 1 : (i*, j) ∈ A(i*) and ri*j > 0};
  if i* ≠ s then i* : = pred(i*);
end;

procedure AUGMENT;
begin
  using the predecessor indices, identify an augmenting path P from the source to the sink;
  Δ : = min {rij : (i, j) ∈ P};
  augment Δ units of flow along path P;
end;
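A Python sketch of the whole procedure (the current-arc data structure described below is omitted, so arc lists are rescanned from the start; this affects only the constant factors, not correctness; names illustrative, and every arc is assumed to appear together with its reversal):

from collections import deque

def shortest_augmenting_path(N, A, r, s, t):
    """N: node list; A: adjacency lists; r: dict of residual
    capacities, mutated in place.  Returns the maximum flow value."""
    n = len(N)
    d = {i: n for i in N}                # backward BFS from t: exact labels
    d[t] = 0
    Q = deque([t])
    while Q:
        j = Q.popleft()
        for i in A[j]:
            if d[i] == n and r[(i, j)] > 0:
                d[i] = d[j] + 1
                Q.append(i)
    v, i, pred = 0, s, {}
    while d[s] < n:
        adm = next((j for j in A[i]
                    if r[(i, j)] > 0 and d[i] == d[j] + 1), None)
        if adm is not None:              # advance
            pred[adm] = i
            i = adm
            if i == t:                   # augment along the admissible path
                path, j = [], t
                while j != s:
                    path.append((pred[j], j))
                    j = pred[j]
                delta = min(r[a] for a in path)
                for a in path:
                    r[a] -= delta
                    r[(a[1], a[0])] += delta
                v += delta
                i = s                    # restart at the source
        else:                            # retreat: relabel, then back up
            labels = [d[j] + 1 for j in A[i] if r[(i, j)] > 0]
            d[i] = min(labels) if labels else n
            if i != s:
                i = pred[i]
    return v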

We use the following data structure to select an admissible arc emanating from a node. We maintain the arc list A(i) of arcs emanating from each node i. The arcs in each list can be arranged arbitrarily, but the order, once decided, remains unchanged throughout the algorithm. Each node i has a current-arc (i, j), which is the current candidate for the next advance step. Initially, the current-arc of node i is the first arc in its arc list. The algorithm examines this list sequentially: whenever the current arc is inadmissible, it makes the next arc in the arc list the current arc. When the algorithm has examined all arcs in A(i), it updates the distance label of node i, and the current arc once again becomes the first arc in its arc list. In our subsequent discussion we shall always assume that the algorithms select admissible arcs using this technique.

Correctness of the Algorithm

We first show that the shortest augmenting path algorithm correctly solves the maximum flow problem.

Lemma 4.1. The shortest augmenting path algorithm maintains valid distance labels at each step. Moreover, each relabel step strictly increases the distance label of a node.

Proof. We show that the algorithm maintains valid distance labels at every step by performing induction on the number of augment and relabel steps. Initially, the algorithm constructs valid distance labels. Assume, inductively, that the distance function is valid prior to a step, i.e., satisfies the validity condition C4.2. We need to check whether these conditions remain valid (i) after an augment step (when the residual graph changes), and (ii) after a relabel step.

(i) A flow augmentation on arc (i, j) might delete this arc from the residual network, but this modification to the residual network does not affect the validity of the distance function for this arc. Augmentation on arc (i, j) might, however, create an additional arc (j, i) with rji > 0 and, therefore, also create an additional condition d(j) ≤ d(i) + 1 that needs to be satisfied. The distance labels satisfy this validity condition, though, since d(i) = d(j) + 1 by the admissibility property of the augmenting path.

(ii) The algorithm performs a relabel step at node i when the current arc reaches the end of the arc list A(i). Observe that if an arc (i, j) is inadmissible at some stage, then it remains inadmissible until d(i) increases, because of our inductive hypothesis that distance labels are nondecreasing. Thus, when the current arc reaches the end of the arc list A(i), no arc (i, j) ∈ A(i) with rij > 0 satisfies d(i) = d(j) + 1. Hence, d(i) < min {d(j) + 1 : (i, j) ∈ A(i) and rij > 0} = d'(i), thereby establishing the second part of the lemma. Finally, the choice for changing d(i) ensures that the condition d(i) ≤ d(j) + 1 remains valid for all (i, j) in the residual network; in addition, since d(i) increases, the conditions d(k) ≤ d(i) + 1 remain valid for all arcs (k, i) in the residual network.

Theorem 4.2. The shortest augmenting path algorithm correctly computes a maximum flow.

Proof. The algorithm terminates when d(s) ≥ n. Since d(s) is a lower bound on the length of the shortest augmenting path from s to t, this condition implies that the network contains no augmenting path from the source to the sink, which is the termination criterion for the generic augmenting path algorithm; hence the current flow is maximum.

At termination of the algorithm, we can also obtain a minimum s-t cutset as follows. For 0 ≤ k < n, let αk denote the number of nodes with distance label equal to k. Note that αk* must be zero for some k* < n, since Σ αk ≤ n - 1 (recall that d(s) ≥ n). Let S = {i ∈ N : d(i) > k*} and S̄ = N - S. By construction, s ∈ S and t ∈ S̄, and both the sets S and S̄ are nonempty. Consider an arc (i, j) ∈ (S, S̄): since d(i) > k* and d(j) < k*, we have d(i) > d(j) + 1, and the validity condition C4.2 implies that rij = 0 for each (i, j) ∈ (S, S̄). Hence, (S, S̄) is a minimum cutset and the current flow is maximum.

4. n^m) advance steps. at least one arc. From this point on. decreases its residual capacity to Suppose that the arc (i. its and each retreat step decrecises length by one. Hence.3. each I execution requiring 0( A(i) I ) time.e. the algorithm total reaches the end of the arc and relabels node Thus the time spent in all . Cortsequently. The first term comes from the number of of augmentations. j) becomes saturated sent at some iteration (at is which from d(i) j = i d(j) + 1). + 1 ^ d(i) + = d(j) + 2). the algorithm performs the relabel operation 0(n) times. After having performed list I A(i) i. The algorithm performs 0(nm) flow augmentations and each augmentation takes in 0(n) time. Each augment step saturates zero. j) until flow sent back to (at which point = d'(i) . d(k) < d(s) < n. resulting O(n^m) total effort in the augmentation steps. j) d(j) increases by at least 2 units. (b) The number of augment steps at most nrnfl. at most n/2 times and the number of arc saturations is no more Theorem Proof. the algorithm never node again during an advance step since for every node k in the current path. The total time spent in all relabel operations is V i€ n I A(i) I = 0(nm). of relabel steps is Thus the algorithm relabels a node at most n times and the total number bounded by n'^. the algorithm requires at most 0(n^ + retreat (relabel) steps. total any arc (i.82 Complexity of the Algorithm We Lemma number Proof. Finally. The shortest augmenting path algorithm runs in O(n^m) time. Consequently. node I i is 0(1) plus the time sf)ent in scanning arcs in A(i). S n. i. since each partial admissible path has length at most n. (a) Each distance is label increases at most n times. Each advance step increases the length of the partial admissible path by one. Then no more flow can be d'(j) on 1 (i. the total is of relabel steps at most n^ . After the algorithm has relabeled selects node i i at most n times. and the second term from the number the previous lemma. next show that the algorithm computes a maximvun flow in O(n^m) time. j) can become saturated than nm/2. such scannings. Each relabel step at node i increeises d(i) d(i) by at least one. 4. between two consecutive saturations of arc (i.. which are bounded by nm/2 by For each node i.2. we consider the time spent in identifying admissible N The time taken to identify the admissible arc of arcs.

scannings is O(Σ_{i ∈ N} n |A(i)|) = O(nm). The combination of these time bounds establishes the theorem.

The proof of Theorem 4.3 also suggests an alternative termination condition for the shortest augmenting path algorithm. The termination criterion of d(s) ≥ n is satisfactory for a worst-case analysis, but may not be efficient in practice. Researchers have observed empirically that the algorithm spends too much time in relabeling, a major portion of which is done after it has already found a maximum flow. The algorithm can be improved by detecting the presence of a minimum cutset prior to performing these relabeling operations. We can do so by maintaining the number of nodes αk with distance label equal to k, for 0 ≤ k < n. The algorithm updates this array after every relabel operation and terminates whenever it first finds a gap in the α array, i.e., αk* = 0 for some k* < n. As we have seen earlier, if S = {i : d(i) > k*}, then (S, S̄) denotes a minimum cutset.

The idea of augmenting flows along shortest paths is intuitively appealing and easy to implement in practice. The resulting algorithms identify at most O(nm) augmenting paths, and this bound is tight, i.e., on particular examples these algorithms perform Ω(nm) augmentations. The only way to improve the running time of the shortest augmenting path algorithm is to perform fewer computations per augmentation. The use of a sophisticated data structure, called dynamic trees, reduces the average time for each augmentation from O(n) to O(log n). This implementation of the maximum flow algorithm runs in O(nm log n) time, and obtaining further improvements appears quite difficult, except in very dense networks. These implementations with sophisticated data structures appear to be primarily of theoretical interest, however, because maintaining the data structures requires substantial overhead that tends to increase rather than reduce the computational times in practice. A detailed discussion of dynamic trees is beyond the scope of this chapter.

Potential Functions and an Alternate Proof of Lemma 4.2(b)

A powerful method for proving computational time bounds is to use potential functions. Potential function techniques are general purpose techniques for proving the complexity of an algorithm by analyzing the effects of different steps on an appropriately defined function. The use of potential functions enables us to define an "accounting" relationship between the occurrences of various steps of an algorithm that can be used to

Since the initial value of F is at most is m more than terminal value. Thus the number of augmentations most m + nm was = 0(nm). We shall refer to each A path augmentation has one advantage over a single push: at all it maintains conservation of flow nodes. and increases F by the all same amount. This basic decomposes into the more elementary operation of sending flow along an Thus sending a flow of A A units along a path of k arcs units along an arc of the path. we of bound number of steps of one type in terms of knovm boiands on the number steps of other types. we illustrate the technique by showing is that the number of augmentations in the shortest augmenting path algorithm 0(nm). Let the algorithm perform 0. . representative of the potential function argument.84 obtain a bound on the steps that might be difficult to obtain using other arguments. since the algorithm any node at most n times (as a consequence of Lemma its 4. and thus we can bound In general. arcs at the number of admissible eis end of the k-th step. a path.4 Freflow-Push Algorithms Augmenting path algorithms send flow by augmenting along step further arc. relabel operation. the push-based algorithms such as those we develop in this and the following sections necessarily violate conservation of flow. 4. Rather than formally introducing potential functions. K steps before it Clearly.1) and V i€ n I A(i) I = N nm. potential increases only The when the algorithm relabels distances. for the purpose of this argument. relabeling of Each node i creates as cis I A(i) I new admissible arcs. Suppose in the shortest augmenting path algorithm we kept track of the number Let F(k) denote the of admissible arcs in the residual network. This relabels increase in F is at most nm over relabelings. we count a step either an augmentation or as a terminates. the total decrease in F due to is at all augmentations m + nm. the number the of augmentations using bounds on the number of relabels. decomposes into k basic of operations of sending a flow of these basic operations as a push. This argument objective to is fairly Our bound the number We did so by defining a potential function that decreases whenever the algorithm performs an augmentation. In fact. of augmentations. F(0) < m and many F(K) ^ Each augmentation decreases the residual capacity of at least one arc to zero and hence reduces F by at least one unit.

Rather, these algorithms permit the flow into a node to exceed the flow out of this node. We will refer to any such flows as preflows. Preflow-push algorithms have several advantages over augmentation based algorithms. First, they are more general and more flexible. Second, they can push flow closer to the sink before identifying augmenting paths. Third, they are better suited for distributed or parallel computation. Fourth, the best preflow-push algorithms currently outperform the best augmenting path algorithms in theory as well as in practice.

The Generic Algorithm

A preflow x is a function x : A → R that satisfies (4.1c) and the following relaxation of (4.1b):

Σ_{j : (j, i) ∈ A} xji - Σ_{j : (i, j) ∈ A} xij ≥ 0, for all i ∈ N - {s, t}.

The preflow-push algorithms maintain a preflow at each intermediate stage. For a given preflow x, we define the excess of each node i ∈ N - {s, t} as

e(i) = Σ_{j : (j, i) ∈ A} xji - Σ_{j : (i, j) ∈ A} xij.

We refer to a node with positive excess as an active node, and adopt the convention that the source and sink nodes are never active. At each iteration of the algorithm (except at its initialization and at its termination), the network contains at least one active node, i.e., a node i ∈ N - {s, t} with e(i) > 0. The goal of each iterative step is to choose some active node and to send its excess closer to the sink, closeness being measured with respect to the current distance labels. The algorithm terminates when the network contains no active nodes.

As in the shortest augmenting path algorithm described in the last section, we define the distance labels and admissible arcs as in the previous section, and we send flow only on admissible arcs. The preflow-push algorithms perform all operations using only local information. If the method cannot send excess from an active node to nodes with smaller distance labels, then it increases the distance label of the node so that it creates at least one new admissible arc.

The two basic operations of the generic preflow-push method are (i) pushing the flow on an admissible arc, and (ii) updating a distance label. The preflow-push algorithm uses the following subroutines:

procedure PREPROCESS;
begin
  x : = 0;
  perform a backward breadth first search of the residual network, starting at node t, to determine initial distance labels d(i);
  xsj : = usj for each arc (s, j) ∈ A(s) and d(s) : = n;
end;

procedure PUSH/RELABEL(i);
begin
  if the network contains an admissible arc (i, j) then
    push δ : = min {e(i), rij} units of flow from node i to node j
  else
    replace d(i) by min {d(j) + 1 : (i, j) ∈ A(i) and rij > 0};
end;

A push of δ units from node i to node j decreases both e(i) and rij by δ units and increases both e(j) and rji by δ units. We say that a push of δ units of flow on arc (i, j) is saturating if δ = rij and nonsaturating otherwise. We refer to the process of increasing the distance label of a node as a relabel operation. The purpose of the relabel operation is to create at least one admissible arc on which the algorithm can perform further pushes.

The following generic version of the preflow-push algorithm combines the subroutines just described.

algorithm PREFLOW-PUSH;
begin
  PREPROCESS;
  while the network contains an active node do
  begin
    select an active node i;
    PUSH/RELABEL(i);
  end;
end;
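A Python sketch of the generic method; the FIFO discharge of active nodes used here is one concrete selection rule, chosen purely for definiteness, since the generic algorithm leaves the rule open (names illustrative; every arc is assumed to appear in u together with its, possibly zero capacity, reversal):

from collections import deque

def preflow_push(N, A, u, s, t):
    """Generic preflow-push with FIFO discharge.  Returns the
    maximum flow value e(t)."""
    n = len(N)
    r = dict(u)                          # residual capacities
    e = {i: 0 for i in N}                # node excesses
    d = {i: n for i in N}                # PREPROCESS: backward BFS from t
    d[t] = 0
    Q = deque([t])
    while Q:
        j = Q.popleft()
        for i in A[j]:
            if i != s and d[i] == n and r[(i, j)] > 0:
                d[i] = d[j] + 1
                Q.append(i)
    d[s] = n
    active = deque()
    for j in A[s]:                       # saturate the arcs leaving s
        if u[(s, j)] > 0:
            r[(s, j)] = 0
            r[(j, s)] += u[(s, j)]
            e[j] += u[(s, j)]
            if j != t:
                active.append(j)
    while active:                        # PUSH/RELABEL until no active node
        i = active.popleft()
        while e[i] > 0:
            adm = next((j for j in A[i]
                        if r[(i, j)] > 0 and d[i] == d[j] + 1), None)
            if adm is None:              # relabel
                d[i] = min(d[j] + 1 for j in A[i] if r[(i, j)] > 0)
            else:                        # push
                delta = min(e[i], r[(i, adm)])
                r[(i, adm)] -= delta
                r[(adm, i)] += delta
                e[i] -= delta
                e[adm] += delta
                if adm not in (s, t) and e[adm] == delta:
                    active.append(adm)   # adm has just become active
    return e[t]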

It might be instructive to visualize the generic preflow-push algorithm in terms of a physical network: arcs represent flexible water pipes, nodes represent joints, and the distance function measures how far nodes are above the ground. In this network, we wish to send water from the source to the sink. In addition, we visualize flow in an admissible arc as water flowing downhill. Initially, we move the source node upward, and water flows to its neighbors. In general, water flows downhill towards the sink; occasionally, however, flow becomes trapped locally at a node that has no downhill neighbors. At this point, we move the node upward, and again water flows downhill towards the sink. Eventually, no more flow can reach the sink. As we continue to move nodes upwards, the remaining excess flow eventually flows back towards the source. The algorithm terminates when all the water flows either into the sink or into the source.

The preprocessing step accomplishes several important tasks. First, it gives each node adjacent to node s a positive excess, so that the algorithm can begin by selecting some node with positive excess. Second, since the preprocessing step saturates all arcs incident to node s, none of these arcs is admissible, and setting d(s) = n will satisfy the validity condition C4.2. Third, since d(s) = n is a lower bound on the length of any shortest path from s to t, the residual network contains no path from s to t. Since distances in d are nondecreasing, we are also guaranteed that in subsequent iterations the residual network will never contain a directed path from s to t, and so there will never be any need to push flow from s again.

In the push/relabel(i) step, we identify an admissible arc in A(i) using the same data structure we used in the shortest augmenting path algorithm. We maintain with each node i a current arc (i, j), which is the current candidate for the push operation. We choose the current arc by sequentially scanning the arc list. We have seen earlier that scanning the arc lists takes O(nm) total time if the algorithm relabels each node O(n) times.

Figure 4.3 illustrates the push/relabel steps applied to the example given in Figure 4.1(a). Figure 4.3(a) specifies the preflow determined by the preprocess step. Suppose the select step examines node 2. Since arc (2, 4) has residual capacity r24 = 1 and d(2) = d(4) + 1, the algorithm performs a (saturating) push of value δ = min {2, 1} = 1 unit. The push reduces the excess of node 2 to 1. Arc (2, 4) is deleted from the residual network and arc (4, 2) is added to the residual network. Since node 2 is still an active node, it can be selected again for further pushes. The arcs (2, 3) and (2, 1) have positive residual capacities, but they do not satisfy the distance condition. Hence, the algorithm performs a relabel operation and gives node 2 the new distance label d'(2) = min {d(3) + 1, d(1) + 1} = min {2, 5} = 2.

Figure 4.3 An illustration of push and relabel steps: (a) the residual network after the preprocessing step; (b) after the execution of step PUSH(2); (c) after the execution of step RELABEL(2).

Assuming that the generic preflow-push algorithm terminates, we can easily show that it finds a maximum flow. The algorithm terminates when the excess resides either at the source or at the sink, implying that the current preflow is a flow. Since d(s) = n, the residual network contains no path from the source to the sink. This condition is the termination criterion of the augmenting path algorithm, and thus the total flow on the arcs directed into the sink is the maximum flow value.

Complexity of the Algorithm

We now analyze the complexity of the algorithm. We begin by establishing one important result: that distance labels are always valid and do not increase too many times. The first of these conclusions follows from Lemma 4.1, because as in the shortest augmenting path algorithm, the preflow-push algorithm pushes flow only on admissible arcs and relabels a node only when no admissible arc emanates from it. The second conclusion follows from the following lemma.

Lemma 4.3. At any stage of the preflow-push algorithm, each node i with positive excess is connected to node s by a directed path from i to s in the residual network.

Proof. By the flow decomposition theory, any preflow x can be decomposed with respect to the original network G into nonnegative flows along (i) paths from the source s to t, (ii) paths from s to active nodes, and (iii) flows around directed cycles. Let i be an

6. j) it performs a saturating or a nonsaturating push. new excess at node d(j). Case The <ilgorithm is unable to find an admissible arc along which it can push flow. Lemma Proof. that during a relabel step.2. Let III We prove the lemma using an argument based on potential functions.4. Proof. dii) < 2n. I denote the set of active nodes. j) over all saturating pushes. does not . and so (i. Each distance is label increases at . V i€ I d(i). it had a positive excess. 4. and hence a directed path from i to s. the initial value of F (after the preprocessing step) step. create a A saturating push on arc might 1. 2n. The number of nonsaturating pushes is O(n^m). The proof is ver>' much similar to that of Lemma 4.5. During the push/ relabel (i) one of the following two must apply: 1. Then there t must be a path P from s to i in the flow decomposition of since paths from s to i. i and hence s. The last time the algorithm relabeled node i.2 imply that (a) d(i) < d(s) + n - 1 < 2n. the residual network contained a path of length at most n-1 from node fact that d(s) to node The = n and condition C4. In this case the distance label of node i increases by e ^ 1 units. Lemma number 4. This lemma imples set. the algorithm does not minimize over an empty Lemma Proof. and d(i) < 2n for all i e is I. F cases zero. and flows around cycles do not P contribute to the excess at node Then the residual network contains the reversal of O' with the orientation of each arc reversed). The algorithm able to identify an arc on which it can push flow. Since < n. the total increase in F due to increases in bounded by is Case 2. Consequently. j. and hence 2n'^m Next note that a nonsaturating push on arc (i. At termination. the total is of relabel steps at most 2n^ (b) The number of saturating pushes at most nm. most 2n times. Cor^ider the potential function F = . This operation increases F by at most e units.90 active node relative to the preflou' x in G. 4. thereby increasing the number of active nodes by and increasing F by which may be as much as 2n per saturating push. is at most 2n^. x. For each node i e N. Since the total increase in d(i) throughout the running time of the i algorithm for each node distance labels is is bounded by 2n''.

Finally, we indicate how the algorithm keeps track of active nodes for the push/relabel steps. The algorithm maintains a set S of active nodes: it adds to S nodes that become active following a push and are not already in S, and it deletes from S nodes that become inactive following a nonsaturating push. Several data structures (for example, doubly linked lists) are available for storing S so that the algorithm can add, delete, or select elements from it in O(1) time. Consequently, it is easy to implement the preflow-push algorithm in O(n²m) time. We have thus established the following theorem:

Theorem 4.4. The generic preflow-push algorithm runs in O(n²m) time.

A Specialization of the Generic Algorithm

The running time of the generic preflow-push algorithm is comparable to the bound of the shortest augmenting path algorithm. However, the preflow-push algorithm has several nice features, in particular, its flexibility and its potential for further improvements. By specifying different rules for selecting nodes for the push/relabel operations, we can derive many different algorithms from the generic version. For example, suppose that we always select an active node with the highest distance label for the push/relabel step. Let h* = max {d(i) : e(i) > 0, i ∈ N} at some point of the algorithm. Then nodes with distance label h* push flow to nodes with distance label h* - 1, and these nodes, in turn, push flow to nodes with distance label h* - 2, and so on. Note that if a node is relabeled, then excess moves up and then gradually comes down. If the algorithm relabels no node during n consecutive node examinations, then all excess reaches the sink node and the algorithm terminates. Since the algorithm requires O(n²) relabel operations, we immediately obtain a bound of O(n³) on the number of node examinations. Each node examination entails at most one nonsaturating push. Consequently, this algorithm performs O(n³) nonsaturating pushes.

To implement this highest-label selection rule, we maintain the lists LIST(r) = {i ∈ N : e(i) > 0 and d(i) = r}, and a variable level which is an upper bound on the highest index r for which LIST(r) is nonempty. We can store these lists as doubly linked lists so that adding, deleting, or selecting an element takes O(1) time. We identify the highest indexed nonempty list by starting at LIST(level) and sequentially scanning the lower indexed lists. We leave it as an exercise to show that the overall effort needed to scan the lists is bounded by n plus the total increase in the distance labels, which is O(n²).
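The LIST(r) and level bookkeeping just described might be realized as follows; this is a minimal Python sketch under our own naming conventions, with ordinary Python lists standing in for the doubly linked lists.

   # Hypothetical bucket structure for highest-label node selection.
   # LIST[r] holds the active nodes with distance label r, and level is an
   # upper bound on the highest index r for which LIST[r] is nonempty.
   class HighestLabelBuckets:
       def __init__(self, n):
           self.LIST = [[] for _ in range(2 * n)]   # d(i) < 2n by Lemma 4.4
           self.level = 0

       def add(self, i, d_i):                       # node i became active
           self.LIST[d_i].append(i)
           self.level = max(self.level, d_i)

       def select(self):
           # Scan downward from level for the highest nonempty list.
           while self.level > 0 and not self.LIST[self.level]:
               self.level -= 1
           bucket = self.LIST[self.level]
           return bucket.pop() if bucket else None  # None: no active node

The total work spent in the downward scans is exactly the quantity bounded in the exercise above: n plus the total increase in the distance labels.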

The following theorem is now evident.

Theorem 4.5. The preflow-push algorithm that always pushes flow from an active node with the highest distance label runs in O(n³) time.

The O(n³) bound for the highest label preflow-push algorithm is straightforward, and it can be improved. Researchers have shown, using a more clever analysis, that the highest label preflow-push algorithm in fact runs in O(n²√m) time.

4.5 Excess-Scaling Algorithm

The generic preflow-push algorithm allows flows at each intermediate step to violate the mass balance equations. By pushing flows from active nodes, the algorithm attempts to satisfy the mass balance equations. The function e_max = max {e(i) : i is an active node} is one measure of the infeasibility of a preflow. During the execution of the generic algorithm, we would observe no particular pattern in e_max, except that e_max eventually decreases to the value 0. In this section, we describe another implementation of the generic preflow-push algorithm that systematically reduces e_max to 0 and thereby dramatically reduces the number of nonsaturating pushes, from O(n²m) to O(n² log U). Recall that U represents the largest arc capacity in the network. We refer to this algorithm as the excess-scaling algorithm, since it is based on scaling the node excesses.

The excess-scaling algorithm is based on the following ideas. Let Δ denote an upper bound on e_max; we refer to this bound as the excess-dominator. The algorithm pushes flow from nodes whose excess exceeds Δ/2 ≥ e_max/2. This choice assures that during nonsaturating pushes the algorithm sends relatively large excesses closer to the sink; pushes carrying small amounts of flow are of little benefit and can cause bottlenecks that retard the algorithm's progress.

The algorithm also does not allow the maximum excess to increase beyond Δ. This strategy may prove to be useful for the following reason. Suppose several nodes send flow to a single node j, creating a very large excess. It is likely that node j could not send the accumulated flow closer to the sink; the algorithm would then need to increase the distance label of node j and return much of its excess back toward the source. Thus, pushing too much flow to any node is likely to be a wasted effort.

The excess-scaling algorithm has the following algorithmic description.

algorithm EXCESS-SCALING;
begin
PREPROCESS;
K := ⌈log U⌉;
for k := K down to 0 do
begin {Δ-scaling phase}
Δ := 2^k;
while the network contains a node i with e(i) > Δ/2 do
perform push/relabel(i) while ensuring that no node excess exceeds Δ;
end;
end;

The algorithm performs a number of scaling phases, with the value of the excess-dominator Δ decreasing from phase to phase; we refer to a specific scaling phase with a certain value of Δ as the Δ-scaling phase. Initially, Δ = 2^⌈log U⌉, so that U ≤ Δ < 2U (logarithms have base 2). During the Δ-scaling phase, Δ/2 < e_max ≤ Δ, and e_max may vary up and down during the phase; when e_max ≤ Δ/2, a new scaling phase begins. After the algorithm has performed ⌈log U⌉ + 1 scaling phases, e_max decreases to the value 0 and we obtain a maximum flow.

The excess-scaling algorithm uses the same step push/relabel(i) as the generic preflow-push algorithm, but with one slight difference: instead of pushing δ = min {e(i), r_ij} units of flow, it pushes δ = min {e(i), r_ij, Δ - e(j)} units. This change ensures that the algorithm permits no excess to exceed Δ. The algorithm uses the following node selection rule to determine which active node to examine.

Selection Rule. Among all nodes with excess of more than Δ/2, select a node with minimum distance label (breaking ties arbitrarily).
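A compact Python sketch of the scaling loop may also help; it is our own illustration, not the paper's code, and it reuses the conventions of the earlier push_relabel sketch, modified so that a push never lifts a node excess above Δ and nodes are chosen by the selection rule just stated.

   import math

   # Hypothetical driver for the excess-scaling algorithm; nodes excludes
   # s and t, while r, d, e, adj follow the earlier sketch's conventions
   # (e must also carry entries for s and t).
   def excess_scaling(nodes, r, d, e, adj, U):
       K = math.ceil(math.log2(U))
       for k in range(K, -1, -1):
           Delta = 2 ** k                  # the Delta-scaling phase
           while True:
               large = [i for i in nodes if e[i] > Delta / 2]
               if not large:
                   break                   # phase ends: e_max <= Delta/2
               i = min(large, key=lambda v: d[v])   # minimum distance label
               for j in adj[i]:
                   if r[i, j] > 0 and d[i] == d[j] + 1:
                       # The third term keeps e(j) from exceeding Delta; by
                       # the selection rule it is at least Delta/2 here.
                       delta = min(e[i], r[i, j], Delta - e[j])
                       r[i, j] -= delta; r[j, i] += delta
                       e[i] -= delta; e[j] += delta
                       break
               else:                       # no admissible arc: relabel i
                   d[i] = 1 + min(d[j] for j in adj[i] if r[i, j] > 0)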

Lemma 4.7. The excess-scaling algorithm satisfies the following two conditions:

C4.3. Each nonsaturating push sends at least Δ/2 units of flow.

C4.4. No excess ever exceeds Δ.

Proof. Consider a push on arc (i, j). Since node i is a node with the smallest distance label among nodes whose excess is more than Δ/2, and since d(j) = d(i) - 1 < d(i) because the arc (i, j) is admissible, we have e(i) > Δ/2 and e(j) ≤ Δ/2. Hence a nonsaturating push sends min {e(i), Δ - e(j)} ≥ min {Δ/2, Δ/2} = Δ/2 units of flow, establishing C4.3. Further, the push operation increases only e(j). Let e'(j) denote the excess at node j after the push. Then e'(j) = e(j) + min {e(i), r_ij, Δ - e(j)} ≤ e(j) + Δ - e(j) = Δ. All node excesses thus remain less than or equal to Δ, establishing C4.4.

Lemma 4.8. The excess-scaling algorithm performs O(n²) nonsaturating pushes per scaling phase and O(n² log U) pushes in total.

Proof. Consider the potential function F = Σ_{i ∈ N} e(i) d(i)/Δ. Using this potential function, we will establish the first assertion of the lemma; since the algorithm has O(log U) scaling phases, the second assertion is a consequence of the first. The initial value of F at the beginning of the Δ-scaling phase is bounded by 2n², because e(i) is bounded by Δ and d(i) is bounded by 2n. During the push/relabel(i) step, one of the following two cases must apply:

Case 1. The algorithm is unable to find an admissible arc along which it can push flow. In this case the distance label of node i increases by ε ≥ 1 units. This relabeling operation increases F by at most ε units, because e(i) ≤ Δ. Since the total increase in d(i) for each node i throughout the running of the algorithm is bounded by 2n (by Lemma 4.4), the total increase in F due to the relabeling of nodes is bounded by 2n² in the Δ-scaling phase (in fact, it is at most 2n² over all scaling phases).

Case 2. The algorithm is able to identify an arc on which it can push flow, and so it performs either a saturating or a nonsaturating push. In either case, F decreases. A nonsaturating push on arc (i, j) sends at least Δ/2 units of flow from node i to node j and, since d(j) = d(i) - 1, after this operation F decreases by at least 1/2 unit. Since the initial value of F at the beginning of a Δ-scaling phase is at most 2n² and the increases in F during this phase sum to at most 2n² (from Case 1), the number of nonsaturating pushes is bounded by 8n².

This lemma implies a bound of O(nm + n² log U) for the excess-scaling algorithm, since we have already seen that all other operations (such as saturating pushes, relabel operations, and finding admissible arcs) require O(nm) time. Up to this point, we have ignored the method needed to identify a node with the minimum distance label among nodes with excess more than Δ/2. Making this identification is easy if we use a scheme similar to the one used in the preflow-push method in Section 4.4 to find a node with the highest distance label. We maintain the lists LIST(r) = {i ∈ N : e(i) > Δ/2 and d(i) = r}, and a variable level which is a lower bound on the smallest index r for which LIST(r) is nonempty. We identify the lowest indexed nonempty list by starting at LIST(level) and sequentially scanning the higher indexed lists. We leave it as an exercise to show that the overall effort needed to scan the lists is bounded by the number of pushes performed by the algorithm plus O(n log U) and, hence, is not a bottleneck operation. With this observation, we can summarize our discussion by the following result.

Theorem 4.6. The preflow-push algorithm with excess-scaling runs in O(nm + n² log U) time.

Networks with Lower Bounds on Flows

To conclude this section, we show how to solve maximum flow problems with nonnegative lower bounds on flows. Let l_ij ≥ 0 denote the lower bound for the flow on an arc (i, j) ∈ A. Although the maximum flow problem with zero lower bounds always has a feasible solution, the problem with nonnegative lower bounds could be infeasible. We can, however, determine the feasibility of this problem by solving a maximum flow problem with zero lower bounds, as follows.

We set x_ij = l_ij for each arc (i, j) ∈ A. This choice gives us a pseudoflow in which e(i) represents the excess or deficit of node i ∈ N. (We refer the reader to Section 5.4 for the definition of a pseudoflow with both excesses and deficits.) We introduce a super source, node s*, and a super sink, node t*. For each node i with e(i) > 0, we add an arc (s*, i) with capacity e(i), and for each node i with e(i) < 0, we add an arc (i, t*) with capacity -e(i). We then solve a maximum flow problem from s* to t* in this transformed network. Let x* denote the maximum flow and v* the maximum flow value. If v* = Σ_{i : e(i) > 0} e(i), then the original problem is feasible, and choosing the flow on each arc (i, j) as x*_ij + l_ij gives a feasible flow; otherwise, the problem is infeasible.

Once we have found a feasible flow, we apply any of the maximum flow algorithms with only one change: we define the residual capacity of an arc (i, j) as r_ij = (u_ij - x_ij) + (x_ji - l_ji). The first and second terms in this expression denote, respectively, the residual capacity for increasing the flow on arc (i, j) and for decreasing the flow on arc (j, i). These observations show that it is possible to solve the maximum flow problem with nonnegative lower bounds by two applications of the maximum flow algorithms we have already discussed. It is possible to establish the optimality of the solution generated by the algorithm by generalizing the max-flow min-cut theorem to accommodate situations with lower bounds.
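The feasibility construction above is mechanical enough to state as code. The following is a minimal Python sketch under our own conventions: arcs are (i, j, lower, upper) tuples, and max_flow is a placeholder for any of the maximum flow algorithms discussed in this section.

   # Hypothetical feasibility check for flows with lower bounds. For a
   # maximum flow instance one would first add an uncapacitated arc (t, s)
   # so that the source and sink need not balance; that standard detail is
   # omitted here.
   def feasible_flow_exists(nodes, arcs, max_flow):
       e = {i: 0 for i in nodes}        # imbalances after setting x = l
       caps = {}
       for (i, j, low, up) in arcs:
           caps[i, j] = up - low        # residual room above the lower bound
           e[i] -= low
           e[j] += low
       S, T = 's*', 't*'                # super source and super sink
       supply = 0
       for i in nodes:
           if e[i] > 0:
               caps[S, i] = e[i]        # arc (s*, i) with capacity e(i)
               supply += e[i]
           elif e[i] < 0:
               caps[i, T] = -e[i]       # arc (i, t*) with capacity -e(i)
       v = max_flow(nodes + [S, T], caps, S, T)
       return v == supply               # feasible iff all excess is routed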

5. MINIMUM COST FLOWS

In this section, we consider algorithmic approaches for the minimum cost flow problem. We consider the following node-arc formulation of the problem:

Minimize Σ_{(i,j) ∈ A} c_ij x_ij   (5.1a)

subject to

Σ_{j : (i,j) ∈ A} x_ij - Σ_{j : (j,i) ∈ A} x_ji = b(i), for all i ∈ N,   (5.1b)

0 ≤ x_ij ≤ u_ij, for each (i, j) ∈ A.   (5.1c)

We assume that the lower bounds l_ij on the arc flows are all zero and that the arc costs are nonnegative. Let C = max {c_ij : (i, j) ∈ A} and U = max [max {u_ij : (i, j) ∈ A}, max {|b(i)| : i ∈ N}]. The transformations T1 and T3 in Section 2.4 imply that these assumptions do not impose any loss of generality. We remind the reader of our blanket assumption that all data (cost, supply/demand and capacity) are integral. We also assume that the minimum cost flow problem satisfies the following two conditions.

A5.1. Feasibility Assumption. We assume that Σ_{i ∈ N} b(i) = 0 and that the minimum cost flow problem has a feasible solution. We can ascertain the feasibility of the minimum cost flow problem by solving a maximum flow problem as follows. Introduce a super source node s* and a super sink node t*. For each node i with b(i) > 0, add an arc (s*, i) with capacity b(i), and for each node i with b(i) < 0, add an arc (i, t*) with capacity -b(i). Now solve a maximum flow problem from s* to t*. If the maximum flow value equals Σ_{i : b(i) > 0} b(i), then the minimum cost flow problem is feasible; otherwise, it is infeasible.

A5.2. Connectedness Assumption. We assume that the network G contains an uncapacitated directed path (i.e., each arc in the path has infinite capacity) between every pair of nodes. We impose this condition, if necessary, by adding artificial arcs (1, j) and (j, 1) for each j ∈ N and assigning a large cost and a very large capacity to each of these arcs.

No such arc would appear in a minimum cost solution unless the problem contains no feasible solution without artificial arcs.

Our notation for arcs assumes that at most one arc joins one node to any other node. By using more complex notation, we could easily treat the more general case. However, rather than changing our notation, we will assume that parallel arcs never arise; by inserting extra nodes on parallel arcs, we can always produce a network without any parallel arcs.

Our algorithms rely on the concept of residual networks. The residual network G(x) corresponding to a flow x is defined as follows. We replace each arc (i, j) ∈ A by two arcs, (i, j) and (j, i): the arc (i, j) has cost c_ij and residual capacity r_ij = u_ij - x_ij, and the arc (j, i) has cost -c_ij and residual capacity r_ji = x_ij. The residual network consists only of arcs with positive residual capacity. The concept of residual networks does pose some notational difficulties: if the original network contains both the arcs (i, j) and (j, i), then the residual network may contain two arcs from node i to node j and/or two arcs from node j to node i with possibly different costs. Our assumption that the network contains no parallel arcs sidesteps this difficulty.
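Since several of the algorithms below operate directly on G(x), it may help to see how the residual network can be materialized. The following minimal Python sketch uses our own conventions: costs, capacities and flows are dictionaries keyed by arc, and the network has no parallel arcs, as assumed above.

   # Hypothetical construction of the residual network G(x): each residual
   # arc carries a cost and a positive residual capacity.
   def residual_network(arcs, cost, cap, x):
       residual = {}                    # (i, j) -> (cost, residual capacity)
       for (i, j) in arcs:
           if cap[i, j] - x[i, j] > 0:
               residual[i, j] = (cost[i, j], cap[i, j] - x[i, j])
           if x[i, j] > 0:
               residual[j, i] = (-cost[i, j], x[i, j])   # reversal arc
       return residual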

The minimum cost flow problem has a number of important theoretical properties. From a linear programming point of view, the minimum cost flow problem and its dual have, due to their special structure, rather simple complementary slackness conditions. Moreover, any directed cycle in the residual network G(x) is an augmenting cycle with respect to the flow x, and vice-versa (see Section 2.1 for the definition of an augmenting cycle). This equivalence permits us to state the optimality condition for the minimum cost flow problem in terms of the residual network.

Theorem 5.1. A feasible flow x is an optimum flow if and only if the residual network G(x) contains no negative cost directed cycle.

5.1 Duality and Optimality Conditions

As we have seen in Section 1.2, the minimum cost flow problem, due to its special structure, has a particularly simple linear programming dual. In this section, we formally state the dual problem and derive the complementary slackness conditions.

We associate a dual variable π(i) with the mass balance constraint (5.1b) of node i. Since one of the constraints in (5.1b) is redundant, we can set one of these dual variables to an arbitrary value; we therefore assume that π(1) = 0. (It is possible to show that this assumption imposes no loss of generality.) We also associate a dual variable δ_ij with the upper bound constraint (5.1c) of arc (i, j). The dual problem to (5.1) is:

Maximize Σ_{i ∈ N} b(i) π(i) - Σ_{(i,j) ∈ A} u_ij δ_ij   (5.2a)

subject to

π(i) - π(j) - δ_ij ≤ c_ij, for all (i, j) ∈ A,   (5.2b)

δ_ij ≥ 0 for all (i, j) ∈ A, and all π(i) unrestricted.   (5.2c)

The complementary slackness conditions for this primal-dual pair are:

x_ij > 0 ⟹ π(i) - π(j) - δ_ij = c_ij,   (5.3)

δ_ij > 0 ⟹ x_ij = u_ij.   (5.4)

These conditions are equivalent to the following optimality conditions:

x_ij = 0 ⟹ π(i) - π(j) ≤ c_ij,   (5.5)

0 < x_ij < u_ij ⟹ π(i) - π(j) = c_ij,   (5.6)

x_ij = u_ij ⟹ π(i) - π(j) ≥ c_ij.   (5.7)

To see this equivalence, first suppose that 0 < x_ij < u_ij for some arc (i, j). The condition (5.3) implies that π(i) - π(j) - δ_ij = c_ij; since x_ij < u_ij, (5.4) implies that δ_ij = 0, and substituting this result in the previous equation yields (5.6). Next suppose that x_ij = 0; then x_ij < u_ij and, again by (5.4), δ_ij = 0.

Substituting δ_ij = 0 in (5.2b) gives π(i) - π(j) ≤ c_ij, which is (5.5). Finally, if x_ij = u_ij > 0, then (5.3) and δ_ij ≥ 0 give π(i) - π(j) = c_ij + δ_ij ≥ c_ij, which is (5.7).

We define the reduced cost of an arc (i, j) as c̄_ij = c_ij - π(i) + π(j). The conditions (5.5)-(5.7) imply that a pair x, π of flows and node potentials is optimal if it satisfies the following conditions:

C5.1 (Primal feasibility) x is feasible.

C5.2 If c̄_ij > 0, then x_ij = 0.

C5.3 If 0 < x_ij < u_ij, then c̄_ij = 0.

C5.4 If c̄_ij < 0, then x_ij = u_ij.

Observe that the condition C5.3 follows from the conditions C5.2 and C5.4; we retain it for the sake of completeness. These conditions, when stated in terms of the residual network, simplify to:

C5.5 (Primal feasibility) x is feasible.

C5.6 (Dual feasibility) c̄_ij ≥ 0 for each arc (i, j) in the residual network G(x).

Note that the condition C5.6 subsumes C5.2, C5.3 and C5.4. To see this, suppose that c̄_ij > 0 and x_ij > 0 for some arc (i, j). Then the residual network would contain the arc (j, i) with c̄_ji = -c̄_ij < 0, contradicting C5.6; hence x_ij = 0. A similar contradiction arises if c̄_ij < 0 and x_ij < u_ij.

It is easy to establish the equivalence between these optimality conditions and the condition stated in Theorem 5.1. Consider any pair x, π of flows and node potentials satisfying C5.5 and C5.6, and let W be any directed cycle in the residual network. The condition C5.6 implies that Σ_{(i,j) ∈ W} c̄_ij ≥ 0. Further,

Σ_{(i,j) ∈ W} c̄_ij = Σ_{(i,j) ∈ W} c_ij + Σ_{(i,j) ∈ W} (-π(i) + π(j)) = Σ_{(i,j) ∈ W} c_ij,

since the node potentials cancel around a cycle. Hence Σ_{(i,j) ∈ W} c_ij ≥ 0, and the residual network contains no negative cost cycle.

To see the converse, suppose that x is feasible and G(x) does not contain a negative cycle. Then the shortest distances from node 1 in the residual network, with respect to the arc lengths c_ij, are well defined. Let d(i) denote the shortest distance from node 1 to node i. The shortest path optimality condition C3.2 implies that d(j) ≤ d(i) + c_ij for all (i, j) in G(x).
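Conditions C5.5 and C5.6 suggest a direct optimality check. The short Python sketch below, our own illustration built on the residual_network helper given earlier, verifies dual feasibility of a pair (x, π).

   # Hypothetical check of C5.5-C5.6: every arc of G(x) must have a
   # nonnegative reduced cost c_ij - pi(i) + pi(j). Primal feasibility of
   # x is assumed to have been checked separately.
   def satisfies_optimality(arcs, cost, cap, x, pi):
       residual = residual_network(arcs, cost, cap, x)
       return all(c - pi[i] + pi[j] >= 0
                  for (i, j), (c, _) in residual.items())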

Hence, c_ij + d(i) - d(j) ≥ 0 for all (i, j) in G(x). Let π = -d. Then 0 ≤ c_ij - π(i) + π(j) = c̄_ij for all (i, j) in G(x), so the pair x, π satisfies C5.5 and C5.6.

5.2 Relationship to Shortest Path and Maximum Flow Problems

The minimum cost flow problem generalizes both the shortest path and maximum flow problems. The shortest path problem from node s to all other nodes can be formulated as a minimum cost flow problem by setting b(s) = n - 1, b(i) = -1 for all i ≠ s, and u_ij = ∞ for each (i, j) ∈ A (in fact, setting u_ij equal to any integer greater than n - 1 will suffice if we wish to maintain finite capacities). Similarly, the maximum flow problem from node s to node t can be transformed to the minimum cost flow problem by setting c_ij = 0 for each (i, j) ∈ A and introducing an additional arc (t, s) with c_ts = -1 and u_ts = ∞ (in fact, u_ts = m · max {u_ij : (i, j) ∈ A} would suffice). Thus, algorithms for the minimum cost flow problem solve both the shortest path and maximum flow problems as special cases.

Conversely, algorithms for the shortest path and maximum flow problems are of great use in solving the minimum cost flow problem. Many of the algorithms for the minimum cost flow problem use shortest path and/or maximum flow algorithms as subroutines, either explicitly or implicitly; consequently, improved algorithms for these two problems have led to improved algorithms for the minimum cost flow problem. This relationship will be more transparent when we discuss algorithms for the minimum cost flow problem in the sections that follow.

We have already shown in Section 5.1 how to obtain an optimum dual solution from an optimum primal solution by solving a single shortest path problem. We now show how to obtain an optimum primal solution from an optimum dual solution by solving a single maximum flow problem. Suppose that π is an optimal dual solution and c̄ is the vector of reduced costs. We define the cost-residual network G* = (N, A*) as follows. The nodes in G* have the same supply/demand as the nodes in G. Each arc (i, j) ∈ A* has an upper bound u*_ij as well as a lower bound l*_ij, defined as follows:

(i) For each (i, j) ∈ A with c̄_ij > 0, A* contains an arc (i, j) with u*_ij = l*_ij = 0.

(ii) For each (i, j) ∈ A with c̄_ij < 0, A* contains an arc (i, j) with u*_ij = l*_ij = u_ij.

(iii) For each (i, j) ∈ A with c̄_ij = 0, A* contains an arc (i, j) with u*_ij = u_ij and l*_ij = 0.

The lower and upper bounds on the arcs in the cost-residual network G* are defined so that any flow in G* satisfies the optimality conditions C5.2-C5.4. If c̄_ij > 0 for some (i, j) ∈ A, then condition C5.2 dictates that x_ij = 0 in the optimum flow. Similarly, if c̄_ij < 0 for some (i, j) ∈ A, then C5.4 implies that the flow on arc (i, j) must be at the arc's upper bound in the optimum flow. If c̄_ij = 0, then any flow value between 0 and u_ij will satisfy the condition C5.3.

The problem is now reduced to finding a feasible flow in the cost-residual network that satisfies the lower and upper bound restrictions of the arcs and, at the same time, meets the supply/demand constraints of the nodes. We first eliminate the lower bounds of the arcs, as described in Section 2.4, and then transform this problem to a maximum flow problem, as described in assumption A5.1. Let x* denote the maximum flow in the transformed network. Then x* + l* is an optimum solution of the minimum cost flow problem in G.

5.3 Negative Cycle Algorithm

Operations researchers, computer scientists, electrical engineers and many others have extensively studied the minimum cost flow problem and have proposed a number of different algorithms to solve it. Notable examples are the negative cycle, successive shortest path, primal-dual, out-of-kilter, primal simplex and scaling-based algorithms. In this and the following sections, we discuss most of these important algorithms for the minimum cost flow problem and point out relationships among them.

We first consider the negative cycle algorithm. The negative cycle algorithm maintains a primal feasible solution x and strives to attain dual feasibility. It does so by identifying negative cost directed cycles in the residual network G(x) and augmenting flows along these cycles. The algorithm terminates when the residual network contains no negative cost cycle; Theorem 5.1 then implies that it has found a minimum cost flow.

algorithm NEGATIVE CYCLE;
begin
establish a feasible flow x in the network;
while G(x) contains a negative cycle do
begin
use some algorithm to identify a negative cycle W;
δ := min {r_ij : (i, j) ∈ W};
augment δ units of flow along the cycle W and update G(x);
end;
end;

A feasible flow in the network can be found by solving a maximum flow problem, as explained just after assumption A5.1. One algorithm for identifying a negative cost cycle is the label correcting algorithm for the shortest path problem, described in Section 3.4, which requires O(nm) time. Every iteration reduces the flow cost by at least one unit. Since mCU is an upper bound on the initial flow cost and zero is a lower bound on the optimum flow cost, the algorithm terminates after at most O(mCU) iterations and requires O(nm²CU) time in total.
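For concreteness, the following Python sketch detects a negative cost cycle in the residual network by a Bellman-Ford style label correcting computation; it is our own minimal illustration of such a subroutine, not the paper's implementation, and it uses the residual network convention of the earlier sketches.

   # Hypothetical negative cycle detector: n passes of label correcting
   # from zero labels; if the n-th pass still relaxes an arc, a negative
   # cycle exists and can be recovered through the predecessor indices.
   def find_negative_cycle(nodes, residual):
       dist = {i: 0 for i in nodes}
       pred = {i: None for i in nodes}
       changed = None
       for _ in range(len(nodes)):
           changed = None
           for (i, j), (c, _) in residual.items():
               if dist[i] + c < dist[j]:
                   dist[j] = dist[i] + c
                   pred[j] = i
                   changed = j
           if changed is None:
               return None              # labels converged: no negative cycle
       v = changed
       for _ in range(len(nodes)):      # walk back n steps to enter the cycle
           v = pred[v]
       cycle, u = [v], pred[v]
       while u != v:
           cycle.append(u)
           u = pred[u]
       cycle.reverse()
       return cycle                     # node sequence of a negative cycle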

The negative cycle algorithm can be improved in the following three ways, which we briefly summarize.

(i) Identifying a negative cost cycle in much less than O(nm) time. The simplex algorithm (to be discussed later) nearly achieves this objective: it maintains a tree solution and node potentials that enable it to identify a negative cost cycle in O(m) effort. However, due to degeneracy, the simplex algorithm cannot necessarily send a positive amount of flow along this cycle.

(ii) Identifying a negative cost cycle with maximum improvement in the objective function value. The improvement in the objective function due to an augmentation along a cycle W is (min {r_ij : (i, j) ∈ W}) · |Σ_{(i,j) ∈ W} c_ij|. Let x be some flow and x* an optimum flow. The augmenting cycle theorem (Theorem 2.3) implies that x* equals x plus the flow on at most m augmenting cycles with respect to x, and the improvements in cost due to flow augmentations on these cycles sum to cx - cx*. Consequently, at least one augmenting cycle with respect to x must decrease the objective function by at least (cx - cx*)/m. Hence, if the algorithm always augments flow along a cycle with maximum improvement, then Lemma 1.1 implies that it would obtain an optimum flow within O(m log mCU) iterations. Finding a maximum improvement cycle is a difficult problem, but a modest variation of this approach yields a polynomial time algorithm for the minimum cost flow problem.

(iii) Identifying a negative cost cycle with as small a mean cost as possible. We define the mean cost of a cycle as its cost divided by the number of arcs it contains; a minimum mean cycle is a cycle whose mean cost is as small as possible. It is possible to identify a minimum mean cycle in O(nm) or O(√n m log nC) time. Recently, researchers have shown that if the negative cycle algorithm always augments the flow along a minimum mean cycle, then from one iteration to the next the mean cost of the minimum mean cycle is nondecreasing; moreover, its absolute value decreases by a factor of 1 - (1/n) within m iterations. Since the mean cost of the minimum mean (negative) cycle is bounded from below by -C and bounded from above by -1/n, Lemma 1.1 implies that this algorithm terminates in O(nm log nC) iterations.

5.4 Successive Shortest Path Algorithm

The negative cycle algorithm maintains primal feasibility of the solution at every step and attempts to achieve dual feasibility. In contrast, the successive shortest path algorithm maintains dual feasibility of the solution at every step and strives to attain primal feasibility. It maintains a solution x that satisfies the nonnegativity and capacity constraints, but violates the supply/demand constraints of the nodes. At each step, the algorithm selects a node i with extra supply and a node j with unfulfilled demand, and sends flow from i to j along a shortest path in the residual network. The algorithm terminates when the current solution satisfies all of the supply/demand constraints.

A pseudoflow is a function x : A → R satisfying only the capacity and nonnegativity constraints. For any pseudoflow x, we define the imbalance of node i as

e(i) = b(i) + Σ_{j : (j,i) ∈ A} x_ji - Σ_{j : (i,j) ∈ A} x_ij, for all i ∈ N.

If e(i) > 0 for some node i, then e(i) is called the excess of node i; if e(i) < 0, then -e(i) is called the deficit of node i. A node i with e(i) = 0 is called balanced.

Let S and T denote the sets of excess and deficit nodes, respectively. The residual network corresponding to a pseudoflow is defined in the same way that we define the residual network for a flow.

The successive shortest path algorithm successively augments flow along shortest paths computed with respect to the reduced costs c̄_ij. Observe that for any directed path P from a node k to a node l,

Σ_{(i,j) ∈ P} c̄_ij = Σ_{(i,j) ∈ P} c_ij - π(k) + π(l).

Hence, the node potentials change all path lengths between a specific pair of nodes by a constant amount, and a shortest path with respect to the lengths c̄_ij is also a shortest path with respect to the lengths c_ij. The correctness of the successive shortest path algorithm rests on the following result.

Lemma 5.1. Suppose a pseudoflow x satisfies the dual feasibility condition C5.6 with respect to the node potentials π. Furthermore, suppose that x' is obtained from x by sending flow along a shortest path from a node k to a node l in G(x). Then x' also satisfies the dual feasibility conditions with respect to some node potentials.

Proof. Since x satisfies the dual feasibility conditions with respect to the node potentials π, we have c̄_ij ≥ 0 for every arc (i, j) in G(x). Let d(v) denote the shortest path distance from node k to node v in G(x) with respect to the arc lengths c̄_ij. We claim that x also satisfies the dual feasibility conditions with respect to the potentials π' = π - d. The shortest path optimality conditions (i.e., C3.2) imply that

d(j) ≤ d(i) + c̄_ij, for all (i, j) in G(x).

Substituting c̄_ij = c_ij - π(i) + π(j) in these conditions and using π'(i) = π(i) - d(i) yields

c̄'_ij = c_ij - π'(i) + π'(j) ≥ 0, for all (i, j) in G(x),

which establishes the claim. We are now in a position to prove the lemma. Note that c̄'_ij = 0 for every arc (i, j) on the shortest path P from node k to node l, since d(j) = d(i) + c̄_ij for every arc (i, j) ∈ P. Augmenting flow along any arc of P maintains the dual feasibility condition C5.6 with respect to the potentials π', for the following reason: augmenting flow on an arc (i, j) ∈ P may add its reversal (j, i) to the residual network; but since c̄'_ij = 0 for each arc (i, j) ∈ P, we have c̄'_ji = -c̄'_ij = 0, and so the arc (j, i) also satisfies C5.6.

The node potentials play a very important role in this algorithm. Besides using them to prove the correctness of the algorithm, we use them to ensure that the arc lengths remain nonnegative, thus enabling us to solve the shortest path subproblems more efficiently. The following formal statement summarizes the steps of this method.

algorithm SUCCESSIVE SHORTEST PATH;
begin
x := 0 and π := 0;
compute the imbalances e(i) and initialize the sets S and T;
while S ≠ ∅ do
begin
select a node k ∈ S and a node l ∈ T;
determine the shortest path distances d(j) from node k to all other nodes in G(x) with respect to the reduced costs c̄_ij, and let P denote a shortest path from k to l;
update π := π - d;
δ := min [e(k), -e(l), min {r_ij : (i, j) ∈ P}];
augment δ units of flow along the path P;
update x, S and T;
end;
end;

To initialize the algorithm, we set x = 0, which is a feasible pseudoflow; it satisfies C5.6 with respect to the node potentials π = 0 since, by assumption, all arc costs are nonnegative. Observe that whenever S ≠ ∅, T ≠ ∅ as well, because the sum of the excesses always equals the sum of the deficits. Further, the connectedness assumption implies that the residual network G(x) contains a directed path from node k to node l. Each iteration of the algorithm solves a shortest path problem with nonnegative arc lengths and reduces the excess of some node by at least one unit. Consequently, if U is an upper bound on the largest supply of any node, the algorithm terminates in at most nU iterations. Since the arc lengths c̄_ij are nonnegative, the shortest path problem at each iteration can be solved using Dijkstra's algorithm, and so the overall complexity of this algorithm is O(nU · S(n, m, C)), where S(n, m, C) denotes the time taken by Dijkstra's algorithm.
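As a minimal illustration of this loop, the Python sketch below (our own, with dijkstra left as a placeholder that returns shortest distances and predecessor indices with respect to the reduced costs, ignoring zero-capacity arcs) repeatedly sends flow from an excess node to a deficit node.

   # Hypothetical successive shortest path driver. residual maps
   # (i, j) -> (cost, capacity); Lemma 5.1 keeps the reduced costs
   # c_ij - pi(i) + pi(j) nonnegative, so Dijkstra's algorithm applies.
   def successive_shortest_paths(nodes, residual, e, pi, dijkstra):
       while any(e[i] > 0 for i in nodes):
           k = next(i for i in nodes if e[i] > 0)    # a node with excess
           l = next(i for i in nodes if e[i] < 0)    # a node with deficit
           d, pred = dijkstra(residual, pi, source=k)
           for i in nodes:
               pi[i] -= d[i]                         # update the potentials
           path, v = [], l                           # trace P from k to l
           while v != k:
               path.append((pred[v], v))
               v = pred[v]
           delta = min(e[k], -e[l], min(residual[a][1] for a in path))
           for (i, j) in path:                       # augment delta units
               c, cap = residual[i, j]
               residual[i, j] = (c, cap - delta)
               rc, rcap = residual.get((j, i), (-c, 0))
               residual[j, i] = (rc, rcap + delta)
           e[k] -= delta
           e[l] += delta
       return pi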

Currently, the best strongly polynomial bound for implementing Dijkstra's algorithm is O(m + n log n), and the best (weakly) polynomial bound is O(min {m log log C, m + n√(log C)}). The successive shortest path algorithm is pseudopolynomial, since its running time is polynomial in n, m and the largest supply U. It is, however, polynomial time for the assignment problem, a special case of the minimum cost flow problem for which U = 1. In Section 5.7, we will develop a polynomial time algorithm for the minimum cost flow problem that uses the successive shortest path algorithm in conjunction with scaling.

5.5 Primal-Dual and Out-of-Kilter Algorithms

The primal-dual algorithm is very similar to the successive shortest path algorithm, except that instead of sending flow on only one path during an iteration, it might send flow along many paths. To explain the primal-dual algorithm, we transform the minimum cost flow problem into a single-source, single-sink problem (possibly by adding nodes and arcs, as in assumption A5.1). At every iteration, the primal-dual algorithm solves a shortest path problem from the source to update the node potentials (i.e., each π(j) becomes π(j) - d(j)), and then solves a maximum flow problem to send the maximum possible flow from the source to the sink using only arcs with zero reduced cost. The algorithm guarantees that the excess of some node strictly decreases at each iteration, and it also assures that the node potential of the sink strictly decreases. The latter observation follows from the fact that once we have solved the maximum flow problem, the residual network contains no path from the source to the sink consisting entirely of arcs with zero reduced costs; consequently, in the next iteration d(t) ≥ 1. These observations give a bound of min {nU, nC} on the number of iterations, since the magnitude of each node potential is bounded by nC. This bound is better than that of the successive shortest path algorithm, but, of course, the algorithm incurs the additional expense of solving a maximum flow problem at each iteration. Thus, the algorithm has an overall complexity of O(min {nU · S(n, m, C), nC · M(n, m, U)}), where S(n, m, C) and M(n, m, U) respectively denote the solution times of shortest path and maximum flow algorithms.

The successive shortest path and primal-dual algorithms maintain a solution that satisfies the dual feasibility conditions and the flow bound constraints, but that violates the mass balance constraints. These algorithms iteratively modify the flow and the potentials so that the flow at each step comes closer to satisfying the mass balance constraints. However, we could just as well have violated other constraints at intermediate steps. The out-of-kilter algorithm satisfies only the mass balance constraints, and may violate the dual feasibility conditions and the flow bound restrictions. The basic idea is to drive the flow on an arc (i, j) to u_ij if c̄_ij < 0, to drive the flow to zero if c̄_ij > 0, and to permit any flow between 0 and u_ij if c̄_ij = 0.

The kilter number k_ij of an arc (i, j) is defined as the minimum increase or decrease in the flow x_ij necessary to satisfy the arc's flow bound constraint and dual feasibility condition. For example, for an arc (i, j) with c̄_ij > 0, k_ij = |x_ij|, and for an arc (i, j) with c̄_ij < 0, k_ij = |u_ij - x_ij|. An arc with k_ij = 0 is said to be in-kilter. At each iteration, the out-of-kilter algorithm reduces the kilter number of at least one arc, and it terminates when all arcs are in-kilter. Suppose, for instance, that the kilter number of an arc (i, j) would decrease by increasing the flow on the arc. The algorithm would then obtain a shortest path P from node j to node i in the residual network and augment at least one unit of flow around the cycle P ∪ {(i, j)}. The proof of the correctness of this algorithm is similar to, but more detailed than, that of the successive shortest path algorithm.

5.6 Network Simplex Algorithm

The network simplex algorithm for the minimum cost flow problem is a specialization of the bounded variable primal simplex algorithm for linear programming. The special structure of the minimum cost flow problem offers several benefits, particularly the streamlining of the simplex computations and the elimination of the need to explicitly maintain the simplex tableau. The tree structure of the basis (see Section 2.3) permits the algorithm to achieve these efficiencies. The advances made in the last two decades for maintaining and updating the tree structure efficiently have substantially improved the speed of the algorithm, and through extensive empirical testing, researchers have also improved the performance of the simplex algorithm by developing various heuristic rules for identifying entering variables. Though no version of the primal network simplex algorithm is known to run in polynomial time, its best implementations are empirically comparable to or better than other minimum cost flow algorithms.

In this section, we describe the network simplex algorithm in detail. We first define the concept of a basis structure and describe a data structure to store and to manipulate the basis, which is a spanning tree. We then show how to compute arc flows and node potentials for any basis structure. We next discuss how to perform various simplex operations, such as the selection of entering arcs, leaving arcs and pivots, using the tree data structure. Finally, we show how to guarantee the finiteness of the network simplex algorithm.

. / (5.1c). j) A basis xj: structure (B. Then. The condition (5.9) 1 tree path in B from node to node j. L and tree. B denotes the set of basic arcs. for each (i. then equations (5. arcs of a spanrung U by respectively denote the sets of nonbasic arcs at their lower and upper U) is j) g U. U p>artition and L and the arc set A. = Cj. j) (5. L. B.10) . possible to obtain a set of node potentials n so that the reduced costs defined by = Cj. j) € U.11) has a similar interpretation. the problem has a feasible solution satisfying (5. We refer to the triple (B. (i. (i. = each (i.9) Cij . The following algorithmic description specifies the essential steps of the procedure.109 The network simplex algorithm maintains a basic feasible solution at is each stage U). and setting (5. for each for each (i. (B. if U) as a basis structure. - nii) n(j) satisfy the following optimality conditions: Cjj = S < . j) e B. A + feasible basis structure U) is called an optimum basis structure if it is Cj. € L. L. bounds. (5. The network simplex algorithm maintains iteration a feasible basis structure at each until it and successively improves the basis structure becomes an optimum basic structure. little later We shall see a that if nil) = 0. L.jc(i) + 7t(j) for a nonbeisic arc (i. j). p in L denotes the change in the cost of flow achieved by sending one unit of flow through the tree path from node 1 to node j i. and then returning the flow (5. The condition not profitable for any nonbasic arc in L. through the arc (i. = for each e L.e.11) These optimality conditions have a nice economic interpretation. imply that -7t(j) denotes the length of the cj.10) implies that this along the tree path from node circulation of flow is to node 1. A basic solution of the minimum The cost flow set problem defined by a triple i. . Cjj . L. u^: for called feasible setting Xj.1b) and (B. .

algorithm NETWORK SIMPLEX;
begin
determine an initial basic feasible flow x and the corresponding basis structure (B, L, U);
compute the node potentials for this basis structure;
while some arc violates the optimality conditions do
begin
select an entering arc (k, l) violating the optimality conditions;
add arc (k, l) to the spanning tree corresponding to the basis, forming a cycle, and augment the maximum possible flow in this cycle;
determine the leaving arc (p, q);
perform a basis exchange and update the node potentials;
end;
end;

In the following discussion, we describe the various steps performed by the network simplex algorithm in greater detail.

Obtaining an Initial Basis Structure

Our connectedness assumption A5.2 provides one way of obtaining an initial basic feasible solution. We have assumed that for every node j ∈ N - {1}, the network contains arcs (1, j) and (j, 1) with sufficiently large costs and capacities. The initial basis B includes the arc (1, j) with flow -b(j) if b(j) < 0, and the arc (j, 1) with flow b(j) if b(j) ≥ 0. The set L consists of the remaining arcs, and the set U is empty.

Maintaining the Tree Structure

The specialized network simplex algorithm is possible because of the spanning tree property of the basis. The algorithm requires the tree to be represented so that the simplex algorithm can perform operations efficiently and update the representation quickly when the basis changes. We next describe one such tree representation. We consider the tree as "hanging" from a specially designated node, called the root; we assume throughout that node 1 is the root node. See Figure 5.1 for an example of the tree. We associate three indices with each node i in the tree: a predecessor index, pred(i); a depth index, depth(i); and a thread index, thread(i).

Each node i has a unique path connecting it to the root. The index pred(i) stores the first node in that path (other than node i itself), and the index depth(i) stores the number of arcs in the path; for the root node, these indices are zero. Figure 5.1 shows an example of these indices. Note that by iteratively using the predecessor indices, we can enumerate the path from any node to the root node. We say that pred(i) is the predecessor of node i, and that i is a successor of node pred(i). The descendants of a node i consist of the node i itself, its successors, the successors of its successors, and so on. For example, in Figure 5.1 the node set {5, 6, 7, 8, 9} contains the descendants of node 5. A node with no successors is called a leaf node; in Figure 5.1, nodes 4, 6, 7, 8, and 9 are leaf nodes.

The thread indices define a traversal of the tree, a sequence of nodes that walks or "threads" its way through the nodes of the tree, starting at the root, visiting the nodes in a "top to bottom" and "left to right" order, and finally returning to the root. The thread indices can be formed by performing a depth first search of the tree, as described in Section 1.5, and setting the thread of a node to be the node encountered immediately after the node itself in the depth first search. For our example, this sequence would read 1-2-5-6-8-9-7-3-4-1 (see the dotted lines in Figure 5.1); for each node i, thread(i) specifies the next node in the traversal visited after node i. This traversal satisfies the following two properties: (i) the predecessor of each node appears in the sequence before the node itself; and (ii) the descendants of any node are consecutive elements in the traversal.

The thread indices provide a particularly convenient means for visiting (or finding) all descendants of a node i: we simply follow the thread from node i, recording the nodes visited, until the depth of the visited node becomes at least as large as that of node i. For example, starting at node 5, we successively visit nodes 6, 8, 9, and 7, which are the descendants of node 5, and then reach node 3. Since node 3's depth equals that of node 5, we know that we have left the "descendant tree" lying below node 5. As we will see, finding the descendant tree of a node efficiently adds significantly to the efficiency of the simplex method.
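The descendant scan just described translates directly into code. Below is a minimal Python sketch using arrays thread and depth indexed by node; this is our own rendering of the rule, mirroring the indices of Figure 5.1.

   # Hypothetical enumeration of the descendants of node i: follow the
   # thread until the depth falls back to depth(i) or less.
   def descendants(i, thread, depth):
       result = [i]
       j = thread[i]
       while depth[j] > depth[i]:
           result.append(j)
           j = thread[j]
       return result

With the indices of Figure 5.1, descendants(5, thread, depth) would return the node list [5, 6, 8, 9, 7].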

Computing Node Potentials and Flows for a Given Basis Structure

The simplex method has two basic steps: (i) determining the node potentials for a given basis structure; and (ii) computing the arc flows for a given basis structure. We now describe how to perform these steps efficiently using the tree indices.

We first consider the problem of computing the node potentials π for a given basis structure (B, L, U). We assume that π(1) = 0; note that the value of one node potential can be set arbitrarily, since one constraint in (5.1b) is redundant. We compute the remaining node potentials using the conditions that c̄_ij = 0 for each arc (i, j) in B. These conditions can alternatively be stated as

π(j) = π(i) - c_ij, for every arc (i, j) ∈ B.   (5.12)

The basic idea is to start at node 1 and fan out along the tree arcs, using the thread indices to compute the other node potentials. The traversal assures that whenever the fanning-out procedure visits a node j, it has already evaluated the potential of its predecessor, say node i; hence, the procedure can compute π(j) using (5.12). The thread indices allow us to compute all node potentials in O(n) time using the following method.

procedure COMPUTE POTENTIALS;
begin
π(1) := 0;
j := thread(1);
while j ≠ 1 do
begin
i := pred(j);
if (i, j) ∈ A then π(j) := π(i) - c_ij;
if (j, i) ∈ A then π(j) := π(i) + c_ji;
j := thread(j);
end;
end;
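In Python, the same O(n) fan-out might be rendered as follows; this is a sketch under our own conventions, with cost[i, j] defined exactly for the tree arcs as they are oriented in A.

   # Hypothetical rendering of COMPUTE POTENTIALS: one pass down the
   # thread; each node's predecessor is visited before the node itself.
   def compute_potentials(thread, pred, cost):
       pi = {1: 0}                      # the root potential is fixed at zero
       j = thread[1]
       while j != 1:
           i = pred[j]
           if (i, j) in cost:           # tree arc oriented (i, j)
               pi[j] = pi[i] - cost[i, j]
           else:                        # tree arc oriented (j, i)
               pi[j] = pi[i] + cost[j, i]
           j = thread[j]
       return pi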

A similar procedure permits us to compute the flows on the basic arcs for a given basis structure (B, L, U). We proceed, however, in the reverse order: we start at the leaf nodes and move in toward the root using the predecessor indices, computing flows on the arcs encountered along the way. The following procedure accomplishes this task.

procedure COMPUTE FLOWS;
begin
e(i) := b(i) for all i ∈ N;
let T be the basis tree;
for each (i, j) ∈ U do
set x_ij := u_ij, subtract u_ij from e(i) and add u_ij to e(j);
while T ≠ {1} do
begin
select a leaf node j in the subtree T;
i := pred(j);
if (i, j) ∈ T then x_ij := -e(j)
else x_ji := e(j);
add e(j) to e(i);
delete node j and the arc incident to it from T;
end;
end;

One way of identifying leaf nodes in T is to select nodes in the reverse order of the thread indices. A simple procedure completes this task in O(n) time: push all the nodes onto a stack in order of their appearance on the thread, and then take them out from the top one at a time. Note that in the thread traversal, each node appears prior to its descendants; hence, the reverse thread traversal examines each node only after examining all of its descendants.

The arcs in the set U must carry flow equal to their capacity, so we set x_ij = u_ij for these arcs. This assignment creates an additional demand of u_ij units at node i and makes the same amount available at node j, which explains the initial adjustments in the supply/demand of the nodes. The manner of updating e(j) implies that each e(j) represents the sum of the adjusted supply/demand of the nodes in the subtree hanging from node j. Since this subtree is connected to the rest of the tree only by the arc (i, j) (or (j, i)), this arc must carry -e(j) (or e(j)) units of flow in order to satisfy the adjusted supply/demand of the nodes in the subtree.

The procedure Compute Flows essentially solves the system of equations Bx = b, in which B represents the columns of the node-arc incidence matrix N corresponding to the spanning tree T. Since B is a lower triangular matrix (see Theorem 2.6 in Section 2.3), it is possible to solve these equations by forward substitution, which is precisely what the algorithm does. Similarly, the procedure Compute Potentials solves the system of equations πB = c by back substitution.
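A Python sketch of the same back-to-front computation follows; it is our own rendering, in which scanning the thread order in reverse stands in for repeatedly selecting a leaf of the shrinking tree.

   # Hypothetical rendering of COMPUTE FLOWS. order is the thread
   # traversal beginning at the root, so its reverse examines every node
   # after all of its descendants.
   def compute_flows(b, U_arcs, u, pred, order, tree_arcs):
       e = dict(b)                      # adjusted supply/demand
       x = {}
       for (i, j) in U_arcs:            # nonbasic arcs at their upper bound
           x[i, j] = u[i, j]
           e[i] -= u[i, j]
           e[j] += u[i, j]
       for j in reversed(order[1:]):    # reverse thread order, root omitted
           i = pred[j]
           if (i, j) in tree_arcs:
               x[i, j] = -e[j]          # arc points into the subtree of j
           else:
               x[j, i] = e[j]           # arc points out of the subtree of j
           e[i] += e[j]
       return x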

Compute Potentials solves the system B = c by back Entering Arc types of arcs are eligible to enter the basis: a negative is Two bound with aiiy nonbasic arc at its lower a reduced cost or any nonbasic arc eligible to enter the basis. The algorithm maintains a candidate list of arcs violating the optimality conditions. examining the arc optimality condition large cyclically and selecting the first arc that violates the would quickly find the entering arc. that As we scan the arcs.. These arcs violate condition (5. we construct the candidate list. Once minor the list becomes empty or we have reached a specified be performed at on the list number of iterations to iteration. One most successful implementations uses a candidate approach that strikes an effective compromise between these two strategies. one node emanating from node i at a time.11). An i. at its upper bound with positive reduced cost. each major iteration. but must examine each arc at each iteration.115 what the algorithm does. (5. until either we have examined all nodes or the has reached its maximum allowable size. In other words. The next major iteration begins with the node where the previous major nodes cyclically as it iteration ended. it performs list iterations. This approach also offers sufficient flexibility for fine tuning to special problem classes. We repeat this list selection process for nodes i+1. . the algorithm examines to the candidate list.10) or The method used for selecting an entering arc among these eligible arcs has a inajor effect selects I on the performance of the simplex algorithm. selecting arcs in a two-phase procedure cor«isting of major iterations and minor iterations.. We examine arcs emanating from nodes. adds arcs emanating from them Once minor the algorithm has formed the candidate list in a major iteration.. but might require a of the relatively number of iterations due to the list poor arc choice. adding to the candidate list the arcs (if any) that violate the optimality condition. of equations n Similarly. list which is very time<onsuming. we rebuild the with another major . On the other hand. has the largest value of Cjj I among such arcs. the procedure substitution.e. In a major iteration. scanning all candidate arcs and choosing a nonbasic arc from this that violates the optimality condition the most to enter the basis. might require the fewest number of iterations in practice. i+2. we ufxiate the candidate list by removing those arcs no longer violate the optimality limit conditions. implementation that an arc that violates the optimality condition the most.

Leaving Arc

Suppose we select the arc (k, l) as the entering arc. The addition of this arc to the basis B forms exactly one (undirected) cycle W, which is sometimes referred to as the pivot cycle. We define the orientation of W to be the same as that of (k, l) if (k, l) ∈ L, and opposite to the orientation of (k, l) if (k, l) ∈ U. Let W⁺ and W⁻ respectively denote the sets of arcs in W along and opposite to the cycle's orientation. Sending additional flow around the pivot cycle W in the direction of its orientation strictly decreases the cost of the current solution. We change the flow as much as possible, until one of the arcs in the cycle W reaches its lower or upper bound. The maximum flow change δ_ij on an arc (i, j) ∈ W that satisfies the flow bound constraints is

δ_ij = u_ij - x_ij if (i, j) ∈ W⁺, and δ_ij = x_ij if (i, j) ∈ W⁻.

We send δ = min {δ_ij : (i, j) ∈ W} units of flow around W, and select an arc (p, q) with δ_pq = δ as the leaving arc.

The crucial operation in this step is to identify the cycle W. If P(i) denotes the unique path in the basis from any node i to the root node, then this cycle consists of the arcs {(k, l)} ∪ P(k) ∪ P(l) - (P(k) ∩ P(l)). In other words, W consists of the arc (k, l) and the disjoint portions of P(k) and P(l). Using predecessor indices alone permits us to identify the cycle W as follows. Start at node k and, using the predecessor indices, trace the path from this node to the root, labeling all the nodes in this path. Repeat the same operation for node l until we encounter a node already labeled, say node w. Node w, which we might refer to as the apex, is the first common ancestor of nodes k and l. The cycle W contains the portions of the paths P(k) and P(l) up to node w, along with the arc (k, l). This method is efficient, but it has the drawback of backtracking along some arcs that are not in W, namely, those in the portion of the path P(k) lying between the apex w and the root. The simultaneous use of the depth and predecessor indices, as indicated in the following procedure, eliminates this extra work.

procedure IDENTIFY CYCLE;
begin
i := k and j := l;
while i ≠ j do
begin
if depth(i) > depth(j) then i := pred(i)
else if depth(j) > depth(i) then j := pred(j)
else i := pred(i) and j := pred(j);
end;
w := i;
end;

A simple modification of this procedure permits it to determine the flow δ that can be augmented along W as it determines the first common ancestor w of nodes k and l. Using the predecessor indices to again traverse the cycle W, the algorithm can then update the flows on the arcs. The entire flow change operation takes O(n) time in the worst case, but it typically examines only a small subset of the nodes.
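The procedure translates almost verbatim into Python. The sketch below, our own slight extension of the procedure under the same index conventions, also accumulates the two path segments so that the caller can traverse the cycle W.

   # Hypothetical rendering of IDENTIFY CYCLE: repeatedly move the deeper
   # endpoint toward the root; the two walks meet at the apex w.
   def identify_cycle(k, l, pred, depth):
       i, j = k, l
       path_k, path_l = [k], [l]        # nodes of P(k) and P(l) up to w
       while i != j:
           if depth[i] > depth[j]:
               i = pred[i]; path_k.append(i)
           elif depth[j] > depth[i]:
               j = pred[j]; path_l.append(j)
           else:
               i = pred[i]; path_k.append(i)
               j = pred[j]; path_l.append(j)
       return i, path_k, path_l         # i == j == w, the apex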

Basis Exchange

In the terminology of the simplex method, a basis exchange is a pivot operation. If δ = 0, then the pivot is said to be degenerate; otherwise it is nondegenerate. A basis is called degenerate if the flow on some basic arc equals its lower or upper bound, and nondegenerate otherwise. Observe that a degenerate pivot occurs only in a degenerate basis.

Each time the method exchanges an entering arc (k, l) for a leaving arc (p, q), it must update the basis structure. If the leaving arc is the same as the entering arc, which would happen when δ = δ_kl, the basis does not change: the arc (k, l) merely moves from the set L to the set U, or vice versa. If the leaving arc differs from the entering arc, then more extensive changes are needed. In this instance, the arc (p, q) becomes a nonbasic arc at its lower or upper bound, depending upon whether x_pq = 0 or x_pq = u_pq. Adding the arc (k, l) to, and deleting the arc (p, q) from, the previous basis yields a new basis that is again a spanning tree. The node potentials also change, and can be updated as follows. The deletion of the arc (p, q) from the previous basis partitions the set of nodes into two subtrees: one, T1, containing the root node, and the other, T2, not containing the root node. Note that the subtree T2 hangs from node p or node q. The arc (k, l) has one endpoint in T1 and the other in T2. As is easy to verify, the conditions π(1) = 0 and c_ij - π(i) + π(j) = 0 for all arcs in the new basis imply that the potentials of the nodes in the subtree T1 remain unchanged, while the potentials of the nodes in the subtree T2 all change by a constant amount: if k ∈ T1 and l ∈ T2, the potentials of the nodes in T2 change by -c̄_kl, and if l ∈ T1 and k ∈ T2, they change by +c̄_kl. The following method, using the thread and depth indices, updates the node potentials quickly.

procedure UPDATE POTENTIALS;
begin
if q ∈ T2 then y := q else y := p;
if k ∈ T1 then change := -c̄_kl else change := c̄_kl;
π(y) := π(y) + change;
z := thread(y);
while depth(z) > depth(y) do
begin
π(z) := π(z) + change;
z := thread(z);
end;
end;

The final step in the basis exchange is to update the various tree indices. This step is rather involved, and we refer the reader to the reference material cited in Section 6.4 for the details. We do note, however, that it is possible to update the tree indices in O(n) time.
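In code, the update is a single thread scan over the subtree T2. The sketch below is our own rendering under the earlier conventions, where y is the node (p or q) from which T2 hangs, k_in_T1 records whether k lies in T1, and cbar_kl is the reduced cost of the entering arc.

   # Hypothetical rendering of UPDATE POTENTIALS: add a constant to the
   # potential of every node in the subtree hanging from node y.
   def update_potentials(y, k_in_T1, cbar_kl, pi, thread, depth):
       change = -cbar_kl if k_in_T1 else cbar_kl
       pi[y] += change
       z = thread[y]
       while depth[z] > depth[y]:       # exactly the descendants of y
           pi[z] += change
           z = thread[z]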

Termination

The network simplex algorithm, as just described, moves from one basis structure to another until it obtains a basis structure that satisfies the optimality conditions (5.9)-(5.11). It is easy to show that the algorithm terminates in a finite number of steps if each pivot operation is nondegenerate. Recall that |c̄_kl| represents the net decrease in cost per unit of flow sent around the pivot cycle W. During a nondegenerate pivot (in which δ > 0), the new basis structure has a cost that is δ|c̄_kl| units lower than the cost of the previous basis structure. Since there are a finite number of basis structures, and every basis structure has a unique associated cost, the network simplex algorithm terminates finitely, assuming nondegeneracy. Degenerate pivots, however, pose theoretical difficulties that we address next.

Strongly Feasible Bases

The network simplex algorithm does not necessarily terminate in a finite number of iterations unless we impose an additional restriction on the choice of the entering and leaving arcs. Researchers have constructed very small network examples for which poor choices lead to cycling, i.e., to an infinite repetitive sequence of degenerate pivots. Degeneracy in network problems is not only a theoretical issue but a practical one as well: computational studies have shown that as many as 90% of the pivot operations in common networks can be degenerate. As we show next, by maintaining a special type of basis, called a strongly feasible basis, the simplex algorithm terminates finitely; moreover, it runs faster in practice as well.

Let (B, L, U) be a basis structure of the minimum cost flow problem with integral data. As earlier, we conceive of the basis tree as a tree hanging from the root node. The tree arcs are either upward pointing (towards the root) or downward pointing (away from the root). We say that a basis structure (B, L, U) is strongly feasible if we can send a positive amount of flow from any node in the tree to the root along arcs in the tree without violating any of the flow bounds. See Figure 5.2 for an example of a strongly feasible basis. Observe that this definition implies that no upward pointing arc can be at its upper bound and no downward pointing arc can be at its lower bound.

The perturbation technique is a well-known method for avoiding cycling in the simplex algorithm for linear programming. This technique slightly perturbs the right-hand-side vector so that every feasible basis is nondegenerate, and so that it is easy to convert an optimum solution of the perturbed problem into an optimum solution of the original problem. We show that a particular perturbation technique for the network simplex method is equivalent to the combinatorial rule known as the strongly feasible basis technique.

The minimum cost flow problem can be perturbed by changing the supply/demand vector b to b + ε. We say that ε = (ε_1, ε_2, ..., ε_n) is a feasible perturbation if it satisfies the following conditions:

(i) ε_i > 0 for all i = 2, 3, ..., n;

(ii) Σ_{i=2}^n ε_i < 1; and

One possible choice for a feasible perturbation is ε_i = 1/n for i = 2, ..., n (and thus ε_1 = -(n-1)/n). Another choice is ε_i = α^i for i = 2, ..., n, with α chosen as a very small positive number. The perturbation changes the flows on the basic arcs. The procedure we gave earlier in this section, Compute-Flows, implies that the perturbation of b by ε changes the flow on basic arcs in the following manner:

1. If (i, j) is a downward pointing arc of tree B and D(j) is the set of descendants of node j, then the perturbation decreases the flow in arc (i, j) by Σ_{k ∈ D(j)} ε_k. Since 0 < Σ_{k ∈ D(j)} ε_k < 1, the resulting flow is nonintegral and thus nonzero.

2. If (i, j) is an upward pointing arc of tree B and D(i) is the set of descendants of node i, then the perturbation increases the flow in arc (i, j) by Σ_{k ∈ D(i)} ε_k. Since 0 < Σ_{k ∈ D(i)} ε_k < 1, the resulting flow is nonintegral and thus nonzero.

Theorem 5.5. For any basis structure (B, L, U) of the minimum cost flow problem, the following statements are equivalent:

(i) (B, L, U) is strongly feasible.

(ii) No upward pointing arc of the basis is at its upper bound, and no downward pointing arc of the basis is at its lower bound.

(iii) (B, L, U) is feasible if we replace b by b+ε, for the perturbation ε = (-(n-1)/n, 1/n, 1/n, ..., 1/n).

(iv) (B, L, U) is feasible if we replace b by b+ε, for any feasible perturbation ε.

Proof. (i) ⟹ (ii). Suppose an upward pointing arc (i, j) is at its upper bound. Then node i cannot send any flow to the root, violating the definition of a strongly feasible basis. For the same reason, no downward pointing arc can be at its lower bound.

(ii) ⟹ (iii). As noted earlier, the perturbation increases the flow on an upward pointing arc by an amount strictly between 0 and 1. Since the flow on an upward pointing arc is integral and strictly less than its (integral) upper bound, the perturbed solution remains feasible. Similar reasoning shows that after we have perturbed the problem, downward pointing arcs also remain feasible.

(iii) ⟹ (iv). Consider the feasible basis structure (B, L, U) of the perturbed problem with the perturbation ε = (-(n-1)/n, 1/n, ..., 1/n). Each arc in the basis B has a positive nonintegral flow. Now consider the same basis tree for the original problem (i.e., replace b+ε by b). The flows on the downward pointing arcs increase, the flows on the upward pointing arcs decrease, and the resulting flows are integral and feasible. Since any feasible perturbation changes the flow on a basic arc by an amount strictly between 0 and 1, the basis structure remains feasible for any feasible perturbation ε.

(iv) ⟹ (i). Follows directly because ε = (-(n-1)/n, 1/n, ..., 1/n) is a feasible perturbation.

This theorem shows that maintaining a strongly feasible basis is equivalent to applying the ordinary simplex algorithm to the perturbed problem. We could thus maintain strong feasibility by perturbing b by a suitable perturbation; however, there is no need to actually perform the perturbation. Instead, we can maintain strong feasibility using a "combinatorial rule" that is equivalent to applying the original simplex method after we have imposed the perturbation. Even though this rule permits degenerate pivots, it is guaranteed to converge. This equivalence implies that both approaches obtain exactly the same sequence of basis structures if they use the same rule to select the entering arcs; as a corollary, any implementation of the simplex algorithm that maintains a strongly feasible basis performs at most nmCU pivots.

To establish this result, consider the perturbed problem with the perturbation ε = (-(n-1)/n, 1/n, ..., 1/n). With this perturbation, the flow on every arc is a multiple of 1/n. Consequently, every pivot operation augments at least 1/n units of flow and therefore decreases the objective function value by at least 1/n units. Since mCU is an upper bound on the objective function value of the starting solution and zero is a lower bound on the minimum objective function value, the algorithm will terminate in at most nmCU iterations. Hence, any implementation of the simplex algorithm that maintains a strongly feasible basis runs in pseudopolynomial time.

Combinatorial Version of Perturbation

The network simplex algorithm starts with a strongly feasible basis; the method described earlier to construct the initial basis always gives such a basis. The algorithm then selects the leaving arc in a degenerate pivot carefully, so that the next basis is also strongly feasible. Figure 5.2 will illustrate our discussion of this method.

Suppose that the entering arc (k, l) is at its lower bound and that w, the common ancestor of nodes k and l, is the apex of the pivot cycle. Let W be the cycle formed by adding arc (k, l) to the basis tree; we define the orientation of the cycle as the same as that of arc (k, l). After updating the flow, the algorithm identifies the blocking arcs, i.e., those arcs (i, j) in W whose residual capacity in the direction of the cycle equals δ. If the blocking arc is unique, then it leaves the basis. If the cycle contains more than one blocking arc, then the next basis will be degenerate, i.e., some basic arcs will be at their lower or upper bounds. In this case, the algorithm selects the leaving arc in accordance with the following rule:

Combinatorial Pivot Rule. When introducing an arc (k, l) into the basis, select the leaving arc as the last blocking arc, say arc (p, q), encountered in traversing the pivot cycle W along its orientation starting at the apex w.

We show that this rule guarantees that the next basis is strongly feasible. Let W1 be the segment of the cycle W between the apex w and arc (p, q) when we traverse the cycle along its orientation, and let W2 = W - W1 - {(p, q)}. Define the orientation of segments W1 and W2 to be compatible with the orientation of W; see Figure 5.2 for an illustration of the segments W1 and W2 in our example. To show that the next basis is strongly feasible, we show that every node in the cycle W can send positive flow to the root node after the pivot. Since arc (p, q) is the last blocking arc, no arc in W2 is blocking; hence, every node contained in the segment W2 can send positive flow to the root opposite to the orientation of W2 and via node w. Now consider the nodes contained in the segment W1. If the current pivot was a nondegenerate pivot, then the pivot augmented a positive amount of flow along the arcs in W1; hence, after the pivot, every node in W1 can send positive flow back to the root opposite to the orientation of W1 via node w. If the current pivot was a degenerate pivot, then W1 must be contained in the segment of W between node w and node k, because, by the property of strong feasibility, every node on the path from node k to node w could send a positive amount of flow to the root before the pivot, and thus no arc on this path can be a blocking arc in a degenerate pivot. Since a degenerate pivot does not change flow values, every node in W1 could send positive flow to the root before the pivot and must be able to do so after the pivot as well. This conclusion completes the proof that the next basis is strongly feasible.
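The rule itself reduces to a single scan of the pivot cycle. The following Python sketch is ours, under stated assumptions: W is the list of cycle arcs in cycle orientation starting at the apex w, and res(arc) gives each arc's residual capacity in that orientation (u_ij - x_ij for a forward arc, x_ij for a backward arc).

def last_blocking_arc(W, res):
    delta = min(res(arc) for arc in W)   # the maximum flow change
    leaving = None
    for arc in W:                        # traverse W from the apex w
        if res(arc) == delta:            # arc is blocking
            leaving = arc                # remember the last one seen
    return leaving, delta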

We next study the effect of the basis change on node potentials during a degenerate pivot. Since the entering arc (k, l) is at its lower bound, c̄_kl < 0. The leaving arc belongs to the path from node k to node w; hence, node k lies in the subtree T2, and the potentials of all nodes in T2 change by the amount -c̄_kl > 0. Consequently, this degenerate pivot strictly increases the sum of all node potentials (which, by our prior assumptions, is integral). Since the sum of all node potentials is bounded, the number of successive degenerate pivots is finite.

So far we have assumed that the entering arc is at its lower bound. If the entering arc (k, l) is at its upper bound, then we define the orientation of the cycle W as opposite to the orientation of arc (k, l). The criterion for selecting the leaving arc remains unchanged: the leaving arc is the last blocking arc encountered in traversing W along its orientation starting at node w. In this case, node l is contained in the subtree T2 and, after the pivot, the potentials of all nodes in T2 again increase by the amount c̄_kl; consequently, the pivot again increases the sum of the node potentials.

Complexity Results

The strongly feasible basis technique implies some nice theoretical results about the network simplex algorithm implemented using Dantzig's pivot rule, that is, pivoting in the arc (k, l) with the largest value of |c̄_kl| among all arcs that violate the optimality conditions. This technique also yields polynomial time simplex algorithms for the shortest path and assignment problems.

We have already shown that any version of the network simplex algorithm that maintains a strongly feasible basis performs O(nmCU) pivots. Using Dantzig's pivot rule and geometric improvement arguments, we can reduce the number of pivots to O(nmU log H), with H defined as H = mCU.

As earlier, we consider the perturbed problem with the perturbation ε = (-(n-1)/n, 1/n, ..., 1/n). Let z^k denote the objective function value of the perturbed minimum cost flow problem at the k-th iteration of the simplex algorithm, let x denote the current flow, and let (B, L, U) denote the current basis structure. Let Δ > 0 denote the maximum violation of the optimality condition of any nonbasic arc. If the algorithm next pivots in a nonbasic arc corresponding to the maximum violation, then the objective function value decreases by at least Δ/n units. Hence,

z^k - z^{k+1} ≥ Δ/n.    (5.13)

We now need an upper bound on the total possible improvement in the objective function after the k-th iteration.

Figure 5.2. A strongly feasible basis. The figure shows the flows and capacities represented as (x_ij, u_ij), with the apex w marked. The entering arc is (9, 10); the blocking arcs are (2, 3) and (7, 5); the leaving arc is (7, 5). This pivot is a degenerate pivot. The segments W1 and W2 are as shown.

It is easy to show that the total improvement with respect to the objective function Σ_{(i,j) ∈ A} c_ij x_ij is equal to the total improvement with respect to the objective function Σ_{(i,j) ∈ A} c̄_ij x_ij, because

Σ_{(i,j) ∈ A} c̄_ij x_ij = Σ_{(i,j) ∈ A} c_ij x_ij - Σ_{i ∈ N} π(i) b(i),

and the rightmost term in this expression is a constant for fixed values of the node potentials. Further, the total improvement with respect to the objective function Σ_{(i,j) ∈ A} c̄_ij x_ij is bounded by the total improvement in the following relaxed problem:

minimize Σ_{(i,j) ∈ A} c̄_ij x_ij,    (5.14a)

subject to

0 ≤ x_ij ≤ u_ij, for all (i, j) ∈ A.    (5.14b)

For a given basis structure (B, L, U), we construct an optimum solution of (5.14) by setting x_ij = u_ij for all arcs (i, j) ∈ L with c̄_ij < 0, by setting x_ij = 0 for all arcs (i, j) ∈ U with c̄_ij > 0, and by leaving the flow on the basic arcs unchanged. This readjustment of flow decreases the objective function by at most mΔU. We have thus shown that

z^k - z* ≤ mΔU.    (5.15)

Combining (5.13) and (5.15), we obtain

z^k - z^{k+1} ≥ (z^k - z*)/nmU.

By Lemma 1.1, if H = mCU, the network simplex algorithm terminates in O(nmU log H) iterations. We summarize our discussion as follows.

Theorem 5.6. The network simplex algorithm that maintains a strongly feasible basis and uses Dantzig's pivot rule performs O(nmU log H) pivots, with H = mCU.
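For concreteness, the entering-arc selection under Dantzig's rule can be sketched in a few lines of Python; this fragment is illustrative only, assuming a function reduced_cost and the nonbasic arc sets L and U.

def dantzig_entering_arc(L, U, reduced_cost):
    best, violation = None, 0
    for arc in L:                   # at lower bound: violated if c̄_ij < 0
        if -reduced_cost(arc) > violation:
            best, violation = arc, -reduced_cost(arc)
    for arc in U:                   # at upper bound: violated if c̄_ij > 0
        if reduced_cost(arc) > violation:
            best, violation = arc, reduced_cost(arc)
    return best                     # None signals that the basis is optimal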

This result gives polynomial time bounds for the shortest path and assignment problems, since both can be formulated as minimum cost flow problems with U = n and U = 1 respectively. In fact, it is possible to modify the algorithm and use the previous arguments to show that the simplex algorithm solves these problems in O(n² log C) pivots and runs in O(nm log C) total time. These results can be found in the references cited in Section 6.4.

5.7 Right-Hand-Side Scaling Algorithm

Scaling techniques are among the most effective algorithmic strategies for designing polynomial time algorithms for the minimum cost flow problem. In this section, we describe an algorithm based on a right-hand-side scaling (RHS-scaling) technique. The next two sections present polynomial time algorithms based upon cost scaling and upon simultaneous right-hand-side and cost scaling.

The RHS-scaling algorithm is an improved version of the successive shortest path algorithm. The inherent drawback in the successive shortest path algorithm is that augmentations may carry relatively small amounts of flow, resulting in a fairly large number of augmentations in the worst case. The RHS-scaling algorithm guarantees that each augmentation carries sufficiently large flow and thereby reduces the number of augmentations substantially. We shall illustrate RHS-scaling on the uncapacitated minimum cost flow problem, i.e., a problem with u_ij = ∞ for each (i, j) ∈ A. This algorithm can be applied to the capacitated minimum cost flow problem after it has been converted into an uncapacitated problem (as described in Section 2.4).

The algorithm uses the pseudoflow x and the imbalances e(i) as defined in Section 5.4. It performs a number of scaling phases. Much as we did in the excess scaling algorithm for the maximum flow problem, we let Δ be the least power of 2 satisfying either (i) e(i) < 2Δ for all i, or (ii) e(i) > -2Δ for all i, but not necessarily both. Initially, Δ = 2^⌈log U⌉. Let S(Δ) = {i : e(i) ≥ Δ} and let T(Δ) = {j : e(j) ≤ -Δ}. Then at the beginning of the Δ-scaling phase, either S(2Δ) = ∅ or T(2Δ) = ∅. In the given Δ-scaling phase, we perform a number of augmentations, each from a node k ∈ S(Δ) to a node l ∈ T(Δ), and each of these augmentations carries Δ units of flow. The definition of Δ implies that the sum of excesses (whose magnitude is equal to the sum of deficits) is bounded by 2nΔ. Hence, within n augmentations the algorithm will decrease Δ by a factor of at least 2; at this point, we begin a new scaling phase. By the integrality of data, within O(log U)

phases, Δ < 1, all imbalances are zero, and the algorithm has found an optimum flow. The RHS-scaling algorithm correctly solves the problem because during the Δ-scaling phase it is able to send Δ units of flow on the shortest path from a node k ∈ S(Δ) to a node l ∈ T(Δ). This flow invariant property and the connectedness assumption (A5.2) ensure that we can always send Δ units of flow from a node in S(Δ) to a node in T(Δ). The following algorithmic description is a formal statement of the RHS-scaling algorithm.

algorithm RHS-SCALING;
begin
  x := 0; e := b;
  let π be the shortest path distances in G(0);
  Δ := 2^⌈log U⌉;
  while the network contains a node with nonzero imbalance do
  begin
    S(Δ) := {i ∈ N : e(i) ≥ Δ};
    T(Δ) := {i ∈ N : e(i) ≤ -Δ};
    while S(Δ) ≠ ∅ and T(Δ) ≠ ∅ do
    begin
      select a node k ∈ S(Δ) and a node l ∈ T(Δ);
      determine the shortest path distances d from node k to all other nodes in the residual network G(x) with respect to the reduced costs c̄_ij;
      let P denote the shortest path from node k to node l;
      update π := π - d;
      augment Δ units of flow along the path P;
      update x, S(Δ) and T(Δ);
    end;
    Δ := Δ/2;
  end;
end;

The driving force behind this scaling technique is an invariant property (which we will prove later) that each arc flow in the Δ-scaling phase is a multiple of Δ.
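The scaling loop translates almost line for line into code. The sketch below is ours and leans on two assumed helpers: shortest_paths(k, pi), returning reduced-cost distances d and a predecessor map from node k in G(x), and augment(pred, k, l, delta), which sends delta units along the resulting path.

def rhs_scaling(nodes, b, U, shortest_paths, augment):
    e = dict(b)                          # imbalances e(i)
    pi = {i: 0 for i in nodes}           # node potentials
    delta = 1
    while 2 * delta <= U:                # initial scale: a power of 2
        delta *= 2
    while delta >= 1:
        S = {i for i in nodes if e[i] >= delta}
        T = {i for i in nodes if e[i] <= -delta}
        while S and T:
            k, l = next(iter(S)), next(iter(T))
            d, pred = shortest_paths(k, pi)
            for i in nodes:              # pi := pi - d keeps all reduced
                pi[i] -= d[i]            # costs nonnegative
            augment(pred, k, l, delta)   # carries exactly delta units
            e[k] -= delta
            e[l] += delta
            S = {i for i in nodes if e[i] >= delta}
            T = {i for i in nodes if e[i] <= -delta}
        delta //= 2                      # begin a new scaling phase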

The following result establishes this invariant.

Lemma 5.2. The residual capacities of arcs in the residual network are always integer multiples of Δ.

Proof. We use induction on the number of augmentations and scaling phases. The inductive hypothesis is true initially, since the residual capacities are either 0 or ∞. Each augmentation changes the residual capacities by 0 or Δ units and so preserves the inductive hypothesis. A decrease in the scale factor by a factor of 2 also preserves the inductive hypothesis. This result implies the conclusion of the lemma.

Let S(n, m, C) denote the time to solve a shortest path problem on a network with nonnegative arc lengths.

Theorem 5.7. The RHS-scaling algorithm correctly computes a minimum cost flow and performs O(n log U) augmentations, and consequently solves the minimum cost flow problem in O(n log U · S(n, m, C)) time.

Proof. The RHS-scaling algorithm is a special case of the successive shortest path algorithm and thus terminates with a minimum cost flow. We show that the algorithm performs at most n augmentations per scaling phase; since the algorithm requires 1+⌈log U⌉ scaling phases, this fact would imply the conclusion of the theorem. At the beginning of the Δ-scaling phase, either S(2Δ) = ∅ or T(2Δ) = ∅. We consider the case when S(2Δ) = ∅; a similar proof applies when T(2Δ) = ∅. Observe that Δ ≤ e(i) < 2Δ for each node i ∈ S(Δ). Each augmentation starts at a node in S(Δ), ends at a node with a deficit, and carries Δ units of flow; it therefore decreases |S(Δ)| by one. Consequently, each scaling phase can perform at most n augmentations.

As we noted previously, one method of solving the capacitated minimum cost flow problem is to first transform the capacitated problem to an uncapacitated one using the technique described in Section 2.4, and then to apply the RHS-scaling algorithm on the transformed network. The transformed network contains n+m nodes, and each scaling phase performs at most n+m augmentations. Since the algorithm requires 1+⌈log U⌉ scaling phases, it performs O(m log U) augmentations in total. The shortest path problem on the transformed network can be solved (using some clever techniques) in S(n, m, C) time. Consequently, the RHS-scaling algorithm solves the capacitated minimum cost flow problem in O(m log U · S(n, m, C)) time.

Applying the scaling algorithm directly to the capacitated minimum cost flow problem introduces some subtlety, because Lemma 5.2 does not apply for this situation. A recently developed modest variation of the RHS-scaling algorithm, however, solves the capacitated minimum cost flow problem in O(m log n

The follovsdng facts are useful for analysing the cost scaling algorithm.: = Y C. an arc (i. Clearly.129 (m + n log n)) time.5 and C5. Cost Scaling Algorithm We now maximum describe a cost scaling algorithm for the miiumum cost flow problem.8. the residual network contaii« no negative cost cycle and from Theorem 5. and finally e < 1/n. 5.1 the flow is optimum. Any feasible flow e -optimal for ekC. ^ -e for each arc (i. This method is currently the best strongly polynomial-time algorithm for solving the minimum cost flow problem.8. The algorithm perfom\s cost scaling phases by repeatedly applying an Improve-Approximation procedure that transforms an e-optimal flow into an e/2-optimal flow. which a relaxation of the usual optimality conditions. i^ C. this result implies that (i. and iteratively obtains e-optimal flows for successively smaller values of Initially e = C. After l+Tlog nCl . This algorithm can be viewed as a generalization of the preflow-push algorithm for the flow problem. Now consider an e-optimal flow with e < /n. < for and reduce to C5.^-n£>-l. feasible.7 C5.6 when e is 0. Any e -optimal feasible flow for E<l/n is an optimum flow. The cost scaling algorithm treats e as a parameter e.8 for e feasibility conditior« ^ C. A flow x is said to be e -optimal for some conditions. j) X W' ^ 6 ^\\ 0. The e-dual imply that for all any directed cycle W in the residual network.3. j) at its upper bound. Since arc costs are integral. This algorithm relies on the concept of approximate optimality. j) in the residual network G(x). These conditions are a relaxation of the original optimality conditions e -optimality conditions permit -e < Cj. Hence. We The Cjj refer to these conditions as the e -optimality conditions. any feasible flow with zero 1 node potentials satisfies C5. (Primal feasibility) x (e -EHial feasibility) is Cj. is Lemma 5.. Proof. e > if x together with some node potentials n satisfy the following C5. j) at its lower bound and e S is > for an arc (i.

cost scaling phases, ε < 1/n, and the algorithm terminates with an optimum flow for the minimum cost flow problem. More formally, we can state the algorithm as follows.

algorithm COST SCALING;
begin
  π := 0 and ε := C;
  let x be any feasible flow;
  while ε ≥ 1/n do
  begin
    IMPROVE-APPROXIMATION-I(ε, x, π);
    ε := ε/2;
  end;
  x is an optimum flow for the minimum cost flow problem;
end;

The Improve-Approximation procedure transforms an ε-optimal flow into an ε/2-optimal flow. It does so by (i) first converting an ε-optimal flow into a 0-optimal pseudoflow (a pseudoflow x is called ε-optimal if it satisfies the ε-dual feasibility conditions C5.8), and then (ii) gradually converting the pseudoflow into a flow while always maintaining the ε/2-dual feasibility conditions. We call a node i with e(i) > 0 active, and we call an arc (i, j) in the residual network admissible if -ε/2 ≤ c̄_ij < 0. The basic operations are selecting active nodes and pushing flows on admissible arcs; we shall see later that pushing flows on admissible arcs preserves the ε/2-dual feasibility conditions. The Improve-Approximation procedure uses the following subroutine.

procedure PUSH/RELABEL(i);
begin
  if G(x) contains an admissible arc (i, j) then
    push δ := min {e(i), r_ij} units of flow from node i to node j
  else π(i) := π(i) + ε/2 + min {c̄_ij : (i, j) ∈ A(i) and r_ij > 0};
end;

Recall that r_ij denotes the residual capacity of an arc (i, j) in G(x). If δ = r_ij, then we refer to the push as saturating; otherwise it is nonsaturating. We also refer to the updating of the potential of a node as a relabel operation. The purpose of a relabel operation is to create new admissible arcs.
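In code, one push/relabel step might look as follows. This sketch is ours, with residual capacities r and costs c stored in dicts keyed by arc; adj[i] is assumed to list every arc out of i in the residual network, including reversals, with c[j, i] = -c[i, j].

def push_relabel(i, adj, r, c, pi, e, eps):
    for j in adj[i]:
        rc = c[i, j] - pi[i] + pi[j]         # reduced cost of (i, j)
        if r[i, j] > 0 and rc < 0:           # admissible arc
            delta = min(e[i], r[i, j])       # saturating if delta = r[i, j]
            r[i, j] -= delta
            r[j, i] += delta
            e[i] -= delta
            e[j] += delta
            return
    # no admissible arc: relabel, raising pi(i) just beyond the smallest
    # residual reduced cost so that a new admissible arc appears
    pi[i] += eps / 2 + min(c[i, j] - pi[i] + pi[j]
                           for j in adj[i] if r[i, j] > 0)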

As in our earlier discussion of preflow-push algorithms for the maximum flow problem, we use the same data structure to identify admissible arcs: for each node i, we maintain a current arc (i, j), which is the current candidate for pushing flow out of node i. The current arc is found by sequentially scanning the arc list A(i). The following generic version of the Improve-Approximation procedure summarizes its essential operations.

procedure IMPROVE-APPROXIMATION-I(ε, x, π);
begin
  if c̄_ij > 0 then x_ij := 0
  else if c̄_ij < 0 then x_ij := u_ij;
  compute node imbalances;
  while the network contains an active node do
  begin
    select an active node i;
    PUSH/RELABEL(i);
  end;
end;

The correctness of this procedure rests on the following result.

Lemma 5.4. The Improve-Approximation procedure always maintains ε/2-optimality of the pseudoflow, and at termination yields an ε/2-optimal flow.

Proof. This proof is similar to that of Lemma 4.1. At the beginning of the procedure, the algorithm adjusts the flows on arcs to obtain a pseudoflow that is ε/2-optimal (in fact, it is a 0-optimal pseudoflow). We use induction on the number of push/relabel steps to show that the algorithm preserves ε/2-optimality of the pseudoflow. Pushing flow on arc (i, j) might add its reversal (j, i) to the residual network. But since -ε/2 ≤ c̄_ij < 0 (by the criteria of admissibility), c̄_ji > 0 and the condition C5.8 is satisfied for arc (j, i). The algorithm relabels node i when c̄_ij ≥ 0 for every arc (i, j) with r_ij > 0. By our rule for increasing potentials, after we increase π(i) by ε/2 + min {c̄_ij : (i, j) ∈ A(i) and r_ij > 0} units, the reduced cost of every arc (i, j) with r_ij > 0 still satisfies c̄_ij ≥ -ε/2. In addition, increasing π(i) maintains the condition c̄_ki ≥ -ε/2 for all arcs (k, i) in the residual network. Therefore, the procedure preserves ε/2-optimality of the pseudoflow throughout and, at termination, yields an ε/2-optimal flow.

We will next analyze the complexity of the Improve-Approximation procedure. We show that the complexity of the generic version is O(n²m), and we then describe a specialized version running in time O(n³). These time bounds are comparable to those of the preflow-push algorithms for the maximum flow problem.

Lemma 5.5. No node potential increases more than 3n times during an execution of the Improve-Approximation procedure.

Proof. Let x be the current ε/2-optimal pseudoflow and let x' be the ε-optimal flow at the end of the previous cost scaling phase. Let π and π' be the node potentials corresponding to the pseudoflow x and the flow x' respectively. It is possible to show, using a variation of the flow decomposition properties discussed in Section 2.1, that for every node v with positive imbalance in x there exists a node w with negative imbalance in x and a path P satisfying the properties that (i) P is an augmenting path with respect to x, and (ii) its reversal P̄ is an augmenting path with respect to x'. This fact, in terms of the residual networks, implies that there exists a sequence of nodes v = v_0, v_1, ..., v_l = w with the property that P = v_0 - v_1 - ... - v_l is a path in G(x) and its reversal P̄ = v_l - v_{l-1} - ... - v_0 is a path in G(x').

Applying the ε/2-optimality conditions, c̄_ij ≥ -ε/2, to the arcs on the path P in G(x), we obtain Σ_{(i,j) ∈ P} c̄_ij ≥ -l(ε/2). Alternatively,

π(v) ≤ π(w) + l(ε/2) + Σ_{(i,j) ∈ P} c_ij.    (5.16)

Applying the ε-optimality conditions to the arcs on the path P̄ in G(x'), we obtain

π'(w) ≤ π'(v) + lε + Σ_{(j,i) ∈ P̄} c_ji = π'(v) + lε - Σ_{(i,j) ∈ P} c_ij.    (5.17)

Combining (5.16) and (5.17) gives

π(v) ≤ π'(v) + (π(w) - π'(w)) + (3/2)lε.

Now we use the facts that (i) π(w) = π'(w) (the potential of a node with a negative imbalance does not change because the algorithm never selects it for push/relabel), (ii) l ≤ n, and (iii) each increase in potential increases π(v) by at least ε/2 units. The lemma is now immediate.

Lemma 5.6. The Improve-Approximation procedure performs O(nm) saturating pushes.

Proof. This proof is similar to that of Lemma 4.5, and it amounts to showing that between two consecutive saturations of an arc (i, j), the potentials of both the nodes i and j increase at least once. Since any node potential increases O(n) times, the algorithm saturates any arc O(n) times, resulting in O(nm) total saturating pushes.

To bound the number of nonsaturating pushes, we need one more result. We define the admissible network as the network consisting solely of admissible arcs. The following result is crucial to analysing the complexity of the cost scaling algorithms.

Lemma 5.7. The admissible network is acyclic throughout the cost scaling algorithms.

Proof. We establish this result by an induction argument applied to the number of pushes and relabels. The result is true at the beginning of each cost scaling phase, because the pseudoflow is 0-optimal and the network contains no admissible arc. We always push flow on an arc (i, j) with c̄_ij < 0; hence, if the algorithm adds its reversal (j, i) to the residual network, then c̄_ji > 0. Thus pushes do not create new admissible arcs and preserve the inductive hypothesis. A relabel operation at node i may create new admissible arcs (i, j), but it also deletes all admissible arcs (k, i), because for any arc (k, i) we have c̄_ki ≥ -ε/2 before the relabel operation and c̄_ki ≥ 0 after it, since the relabel operation increases π(i) by at least ε/2 units. Therefore the algorithm can create no directed cycles.

Lemma 5.8. The Improve-Approximation procedure performs O(n²m) nonsaturating pushes.

Proof (Sketch). Let g(i) be the number of nodes that are reachable from node i in the admissible network, and let the potential function be F = Σ_{i active} g(i). The proof amounts to showing that a relabel operation or a saturating push can increase F by at most n units, and that each nonsaturating push decreases F by at least 1 unit. Since the algorithm performs at most 3n² relabel operations and O(nm) saturating pushes, by Lemmas 5.5 and 5.6, these observations yield a bound of O(n²m) on the number of nonsaturating pushes.

As in the maximum flow algorithm, the bottleneck operation in the Improve-Approximation procedure is the nonsaturating pushes, which take O(n²m) time. The algorithm takes O(nm) time to perform saturating pushes, and the same time to scan arcs while identifying admissible arcs. Since the cost scaling algorithm calls Improve-Approximation 1+⌈log nC⌉ times, we obtain the following

the Researchers have using si>ecific order. but it nodes for the push/relabel step in a specific order. to When examined in this order.7). We now describe a relabel operation. arcs.134 Theorem 5S. The wave algorithm examines each node is active. the algorithm relabels Note that after the relabel operation at node the network contains no incoming admissible i arc at node i (see the proof of Lemma 5. Observe pushes do not change the admissible network since they do not create new admissible operations. Suppose that while examining node i. which in turn push fiow to even higher so on. We then move node from its present position in . active nodes have discharged their Since the algorithm requires O(n^) relabel of OCn-^) on the operations. we immediately obtain a bound number of node examinations. algorithm. numbered nodes. the wave algorithm performs O(n^) nor\saturating pushes per Improve- Approximation. method again if examine the nodes according However. The algorithm uses the network can acyclicity of the admissible network. in 0(m) time. It is possible to determine this that ordering. active nodes push flow higher numbered nodes. called the wave algorithm. the all algorithm performs no relabel operation then excesses and the algorithm obtains a flow. nodes i of an acyclic be ordered so that for each arc (i. The wave algorithm selects active is the same as the Improve-Approximation procedure. i. and A relabel operation changes the numbering of nodes and starts to the topological order. within n cortsecutive node examinations. in the topological order and if the node then it performs a push/relabel step. maximum flow problem. and thus the to the topological order. an Improve-Approximation problem very similar to solving a Just as in the generic preflow-push algorithm for the maximum flow problem. The relabel may create new admissible arcs and consequently may affect the topological ordering of nodes. Each node examination entails at most one nonsaturating push. however. procedure for obtaining a top)ological order of nodes after each initial An topological ordering is determined using an 0(m) it. < j. j) in the network. called a topological ordering of nodes. or bottleneck operation is the number of nonsaturating pushes. Consequently. As is well known. suggested improvements based on examining nodes in some clever data structures. The cost scaling algorithm illustrates an important connection between the Solving maximum flow and the minimum is cost flow problems. We describe one such improvement . The generic cost scaling algorithm runs in 0(n^Tn log nC) time.

j). A natural alternative would be an augmenting path based method. The Improve-Approximation procedure section relied on a "pseudoflow-push" method. with Nj and N2 as the sets of supply and demand nodes respectively. This result follows from the facts arc. list) Thus the algorithm maintains an ordered and examines nodes it set of it a doubly linked in this order. approach does not seem improve the O(nTn) bound of the generic Improve-Approximation procedure. approach using the wave algorithm as a subroutine solves the log cost flow problem in 0(n^ nC) time..6. The double scaling algorithm is it the same as the cost scaling algorithm discussed in the previous section except that uses a more efficient version of the Improvein the previous to try Approximation procedure. however. a path in which each arc result in is admissible. Double Scaling Algorithm The double scaling approach combines ideas from both the RHS-scaling and cost scaling algorithms and obtains an improvement not obtained by shall describe the either algorithm alone. Notice that this altered ordering is a (i) new admissible network. We number of can. at least this one arc and. Thus. A capacitated minimum cost flow problem can be solved by first transforming the problem into an uncapacitated transportation problem (as described in Section 2. by Lemma to the algorithm requires 0(nm) arc saturations. and (iii) the rest of the admissible network does not change and so the previous order nodes (possibly relabels a eis is still valid. use ideas from the RHS-scaling algorithm to reduce the for augmentations to 0(n log U) an uncapacitated problem by ensuring that .135 the topological order to the topological ordering of the first position.e. we uncapacitated transportation network G = 0^^ u double scabng algorithm on the N2. Whenever node i. A). excess to a node with deficit over an admissible path. 5.6. node i precedes node in the order. This approach would send flow from a node with i. A natural implementation of this approach would 0(nm) augmentations since each augmentation would saturate 5. and again examines nodes in order starting node We Theorem minimum have established the following The cost scaling result. 5. For the sake of simplicity.4) and then applying the double scaling algorithm.9. the algorithm this moves at to the first place in this order i. (ii) node i has no incoming admissible j for each outgoing admissible arc (i.

each augmentation carries sufficiently large flow. This approach gives us an algorithm that does cost scaling in the outer loop and, within each cost scaling phase, performs a number of RHS-scaling phases; we call this algorithm the double scaling algorithm. The double scaling algorithm uses the following Improve-Approximation procedure.

procedure IMPROVE-APPROXIMATION-II(ε, x, π);
begin
  set x := 0 and compute node imbalances;
  π(j) := π(j) + ε, for all j ∈ N2;
  Δ := 2^⌈log U⌉;
  while the network contains an active node do
  begin
    S(Δ) := {i ∈ N1 ∪ N2 : e(i) ≥ Δ};
    while S(Δ) ≠ ∅ do   (RHS-scaling phase)
    begin
      select a node k in S(Δ) and delete it from S(Δ);
      determine an admissible path P from node k to some node l with e(l) < 0;
      augment Δ units of flow on P and update x;
    end;
    Δ := Δ/2;
  end;
end;

We shall describe a method to determine admissible paths after first commenting on the correctness of this procedure. First, observe that c̄_ij ≥ -ε for all (i, j) ∈ A at the beginning of the procedure and, hence, by adding ε to π(j) for all j ∈ N2, we obtain an ε/2-optimal (in fact, a 0-optimal) pseudoflow. The procedure always augments flow on admissible arcs and, from Lemma 5.4, this choice preserves the ε/2-optimality of the pseudoflow. Thus, at the termination of the procedure, we obtain an ε/2-optimal flow. Further, as in the RHS-scaling algorithm, the procedure maintains the invariant property that all residual capacities are integer multiples of Δ, and thus each augmentation can carry Δ units of flow.

The advantage of the double scaling algorithm, contrasted with solving a shortest path problem in the RHS-scaling algorithm, is that it identifies an augmenting path in O(n) time on average over a sequence of n augmentations. In this respect, the double scaling algorithm appears to be similar to the shortest augmenting path algorithm for the maximum flow problem, which also requires O(n) time on average to find each augmenting path. At the beginning of the Δ-scaling phase, S(2Δ) = ∅, i.e., Δ ≤ e(i) < 2Δ for each node i ∈ S(Δ). During the Δ-scaling phase,

the algorithm augments Δ units of flow from a node k in S(Δ) to a node l with e(l) < 0. This operation reduces the excess at node k to a value less than Δ and ensures that the excess at node l remains less than Δ. Consequently, each augmentation deletes a node from S(Δ), and after at most n augmentations the method begins a new scaling phase. Each execution of the procedure thus performs 1+⌈log U⌉ RHS-scaling phases and a total of O(n log U) augmentations.

We next describe a method to identify admissible paths. The algorithm identifies an admissible path by gradually building the path. It maintains a partial admissible path P using predecessor indices, i.e., if (u, v) ∈ P then pred(v) = u. At any point in the algorithm, we perform one of the following two steps, whichever is applicable, at the last node of P, say node i, terminating when the last node has a deficit:

advance(i). If the residual network contains an admissible arc (i, j), then add (i, j) to P. If e(j) < 0, then stop.

retreat(i). If the residual network does not contain an admissible arc (i, j), then update π(i) to π(i) + ε/2 + min {c̄_ij : (i, j) ∈ A(i) and r_ij > 0}. If P has at least one arc, then delete (pred(i), i) from P.

The retreat step relabels (increases the potential of) node i for the purpose of creating new admissible arcs emanating from this node; in the process, the arc (pred(i), i) becomes inadmissible. The proof of Lemma 5.4 implies that increasing the node potential in this way maintains the ε/2-optimality of the pseudoflow.
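A compact rendering of this path-building scheme, with assumed helper functions, is the following sketch of ours.

def find_admissible_path(k, has_deficit, admissible_arc, relabel):
    path = [k]
    while not has_deficit(path[-1]):
        arc = admissible_arc(path[-1])   # some admissible (i, j), or None
        if arc is not None:
            path.append(arc[1])          # advance: add (i, j) to P
        else:
            relabel(path[-1])            # retreat: raise pi(i), and
            if len(path) > 1:            # delete (pred(i), i) from P
                path.pop()
    return path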

We next consider the complexity of this implementation of the Improve-Approximation procedure. Each advance step adds an arc to the partial admissible path, and each retreat step deletes an arc from it. Thus, there are two types of advance steps: (i) those that add arcs to an admissible path on which the algorithm later performs an augmentation; and (ii) those that are later cancelled by a retreat step. Since the set of admissible arcs is acyclic (by Lemma 5.7), after at most n advance steps of the first type the algorithm will discover an admissible path and will perform an augmentation. Since the algorithm requires a total of O(n log U) augmentations, the number of advance steps of the first type is at most O(n² log U). The algorithm performs at most O(n²) advance steps of the second type, because each retreat step increases a node potential and, by Lemma 5.5, node potentials increase O(n²) times. The total number of advance steps, therefore, is O(n² log U). The amount of time needed to identify admissible arcs is O(Σ_i |A(i)| · n) = O(nm), since between two consecutive potential increases of a node i, the algorithm examines |A(i)| arcs for testing admissibility. We have therefore established the following result.

Theorem 5.10. The double scaling algorithm solves the uncapacitated transportation problem in O((nm + n² log U) log nC) time.

To solve the capacitated minimum cost flow problem, we first transform it into an uncapacitated transportation problem and then apply the double scaling algorithm. We leave it as an exercise for the reader to show how the transformation permits us to use the double scaling algorithm to solve the capacitated minimum cost flow problem in O(nm log U log nC) time. For problems that satisfy the similarity assumption, a variant of this algorithm using more sophisticated data structures is currently the fastest polynomial-time algorithm for most classes of the minimum cost flow problem. The references describe further modest improvements of the algorithm.

5.10 Sensitivity Analysis

The purpose of sensitivity analysis is to determine changes in the optimum solution of a minimum cost flow problem resulting from changes in the data (the supply/demand vector, or the capacity or cost of any arc). Traditionally, researchers and practitioners have conducted this sensitivity analysis using the primal simplex or dual simplex algorithms. There is, however, a conceptual drawback to this approach. The simplex based approach maintains a basis tree at every iteration and conducts sensitivity analysis by determining changes in the basis tree precipitated by changes in the data. The basis in the simplex algorithm is often degenerate, though, and consequently changes in the basis tree do not necessarily translate into changes in the solution. Therefore, the simplex based approach does not give information about the changes in the solution as the data changes; instead, it tells us about changes in the basis tree.

We present another approach for performing sensitivity analysis, which does not share this drawback. In a sense, this discussion is quite general: it is possible to reduce more complex changes to a sequence of the simple changes we consider. We show that sensitivity analysis for the minimum cost flow problem essentially reduces to solving shortest path or maximum flow problems. For simplicity, we limit our discussion to a unit change of only a particular type.

Let x* denote an optimum solution of a minimum cost flow problem. Let π* be the corresponding node potentials and let c̄_ij = c_ij - π*(i) + π*(j) denote the reduced costs. Further, let d(k, l) denote the shortest distance from node k to node l in the residual network with respect to the arc lengths c̄_ij. At optimality, the reduced costs c̄_ij of all arcs in the residual network are nonnegative; hence, we can compute d(k, l) for all pairs of nodes k and l by solving n single-source shortest path problems with nonnegative arc lengths. Since for any directed path P from node k to node l, Σ_{(i,j) ∈ P} c̄_ij = Σ_{(i,j) ∈ P} c_ij - π*(k) + π*(l), the quantity d(k, l) plus (π*(k) - π*(l)) equals the shortest distance from node k to node l with respect to the original arc lengths c_ij.

Supply/Demand Sensitivity Analysis

We first study a change in the supply/demand vector. Suppose that the supply/demand of a node k becomes b(k) + 1 and the supply/demand of another node l becomes b(l) - 1. (Recall from Section 1.1 that feasibility of the minimum cost flow problem dictates that Σ_{i ∈ N} b(i) = 0; hence, we must change the supply/demand values of two nodes by equal magnitudes, and must increase one value and decrease the other.) Then x* is a pseudoflow for the modified problem; moreover, this vector satisfies the dual feasibility conditions C5.6. Augmenting one unit of flow from node k to node l along the shortest path in the residual network G(x*) converts this pseudoflow into a flow. This augmentation changes the objective function value by d(k, l) units. Lemma 5.1 implies that this flow is optimum for the modified minimum cost flow problem.
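Since the residual reduced costs are nonnegative at optimality, the distances d(k, ·) can be computed by Dijkstra's algorithm. A self-contained sketch of ours, where out_arcs(i) is assumed to yield (j, rc) pairs for residual arcs (i, j) with nonnegative reduced cost rc, is:

import heapq

def residual_distances(source, nodes, out_arcs):
    d = {i: float('inf') for i in nodes}
    d[source] = 0
    heap = [(0, source)]
    while heap:
        dist, i = heapq.heappop(heap)
        if dist > d[i]:
            continue                     # stale heap entry
        for j, rc in out_arcs(i):
            if dist + rc < d[j]:
                d[j] = dist + rc
                heapq.heappush(heap, (dist + rc, j))
    return d

With d = residual_distances(k, ...), the value d[l] is exactly the quantity d(k, l) used in the discussion above.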

Arc Capacity Sensitivity Analysis

We next consider a change in an arc capacity. Suppose that the capacity of an arc (p, q) increases by one unit. The flow x* is feasible for the modified problem. In addition, if c̄_pq ≥ 0, it satisfies the optimality conditions C5.2-C5.4; hence, it is an optimum flow for the modified problem. If c̄_pq < 0, then condition C5.4 dictates that the flow on the arc must equal its capacity. We satisfy this requirement by increasing the flow on the arc (p, q) by one unit, which produces a pseudoflow with an excess of one unit at node q and a deficit of one unit at node p. We convert the pseudoflow into a flow by augmenting one unit of flow from node q to node p along the shortest path in the residual network, which changes the objective function value by an amount c_pq + d(q, p). This flow is optimum from our observations concerning supply/demand sensitivity analysis.

When the capacity of the arc (p, q) decreases by one unit and the flow on the arc is strictly less than its capacity, then x* remains feasible, and hence optimum, for the modified problem. If, however, the flow on the arc is at its capacity, we decrease the flow by one unit and augment one unit of flow from node p to node q along the shortest path in the residual network. This augmentation changes the objective function value by an amount -c_pq + d(p, q).

The preceding discussion shows how to determine changes in the optimum solution value due to unit changes of any two supply/demand values, or a unit change in any arc capacity, by solving n single-source shortest path problems. We can, however, obtain useful upper bounds on these changes by solving only two shortest path problems: determining the shortest path distances from node 1 to all other nodes, and from all other nodes to node 1, gives upper bounds on all d(k, l), using the fact that d(k, l) ≤ d(k, 1) + d(1, l) for all pairs of nodes k and l. Recent empirical studies have suggested that these upper bounds are very close to the actual values; often the upper bounds and the actual values are equal, and usually they are within 5% of each other.

Cost Sensitivity Analysis

Finally, we discuss changes in arc costs, which we assume are integral. Suppose that the cost of an arc (p, q) increases by one unit. This change increases the reduced cost of arc (p, q) by one unit as well. If c̄_pq < 0 before the change, then c̄_pq ≤ 0 after the change; similarly, if c̄_pq > 0 before the change, then c̄_pq ≥ 0 after the change. In both cases, we preserve the optimality conditions. However, if c̄_pq = 0 before the change and x_pq > 0, then after the change c̄_pq = 1 > 0 and the solution violates the optimality condition

C5.2. To restore optimality, we must either reduce the flow on arc (p, q) to zero, or change the potentials so that the reduced cost of arc (p, q) becomes zero.

We first try to reroute the flow x_pq from node p to node q without violating any of the optimality conditions. We do so by solving a maximum flow problem defined as follows: (i) we set the flow on the arc (p, q) to zero, thus creating an excess of x_pq at node p and a deficit of x_pq at node q; (ii) we define node p as the source node and node q as the sink node; and (iii) we send a maximum of x_pq units from the source to the sink. We permit the maximum flow algorithm, however, to change flows only on arcs with zero reduced costs, since otherwise it would generate a solution that violates C5.2 and C5.4. Let v° denote the flow sent from node p to node q, and let x° denote the resulting arc flow. If v° = x_pq, then x° denotes a minimum cost flow of the modified problem. In this case, the optimal objective function values of the original and modified problems are the same.

On the other hand, if v° < x_pq, then the maximum flow algorithm yields an s-t cutset (X, N-X) with the properties that p ∈ X, q ∈ N-X, and every forward arc in the cutset with zero reduced cost is capacitated. We then decrease the node potential of every node in N-X by one unit. It is easy to verify by case analysis that this change in node potentials maintains the optimality conditions and, furthermore, decreases the reduced cost of arc (p, q) to zero. Consequently, we can set the flow on arc (p, q) equal to x_pq - v° and thereby obtain a feasible minimum cost flow. In this case, the objective function value of the modified problem is x_pq - v° units more than that of the original problem.

5.11 Assignment Problem

The assignment problem is one of the best-known and most intensively studied special cases of the minimum cost network flow problem. As already indicated in Section 1.1, this problem is defined by a set N1, say of persons, a set N2, say of objects (with |N1| = |N2| = n), a collection of node pairs A ⊆ N1 × N2 representing possible person-to-object assignments, and a cost c_ij (possibly negative) associated with each element (i, j) in A. The objective is to assign each person to exactly one object, choosing the assignment with minimum possible cost.

The problem can be formulated as the following linear program:

Minimize Σ_{(i,j) ∈ A} c_ij x_ij    (5.18a)

subject to

Σ_{j : (i,j) ∈ A} x_ij = 1, for all i ∈ N1,    (5.18b)

Σ_{i : (i,j) ∈ A} x_ij = 1, for all j ∈ N2,    (5.18c)

x_ij ≥ 0, for all (i, j) ∈ A.    (5.18d)

The assignment problem is a minimum cost flow problem defined on a network G with node set N = N1 ∪ N2, arc set A, arc costs c_ij, and supply/demand specified as b(i) = 1 if i ∈ N1 and b(i) = -1 if i ∈ N2. The network G has 2n nodes and m = |A| arcs. The assignment problem is also known as the bipartite matching problem.

We use the following notation. A 0-1 solution x of (5.18) is an assignment. If x_ij = 1, then i is assigned to j and j is assigned to i. A 0-1 solution x satisfying Σ_{j : (i,j) ∈ A} x_ij ≤ 1 for all i ∈ N1 and Σ_{i : (i,j) ∈ A} x_ij ≤ 1 for all j ∈ N2 is called a partial assignment. Associated with any partial assignment x is an index set X defined as X = {(i, j) ∈ A : x_ij = 1}. A node not assigned to any other node is unassigned.

Researchers have suggested numerous algorithms for solving the assignment problem. Several of these algorithms apply, either explicitly or implicitly, the successive shortest path algorithm for the minimum cost flow problem. These algorithms typically select the initial node potentials with the following values: π(i) = 0 for all i ∈ N1 and π(j) = min {c_ij : (i, j) ∈ A} for all j ∈ N2. All reduced costs defined by these node potentials are nonnegative. The successive shortest path algorithm solves the assignment problem as a sequence of n shortest path problems with nonnegative arc lengths, and consequently runs in O(n · S(n, m, C)) time. (Note that S(n, m, C) is the time required to solve a shortest path problem with nonnegative arc lengths.)
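As a small illustration (ours, with hypothetical names), casting an instance of (5.18) as a minimum cost flow problem and building the initial potentials described in the text takes only a few lines; we assume every object has at least one incident arc.

def assignment_network(persons, objects, costs):
    # costs: dict mapping each pair (i, j) in A to c_ij
    b = {i: 1 for i in persons}              # supplies b(i) = +1
    b.update({j: -1 for j in objects})       # demands b(j) = -1
    pi = {i: 0 for i in persons}             # pi(i) = 0 on N1
    for j in objects:                        # pi(j) = min c_ij on N2
        pi[j] = min(c for (i, jj), c in costs.items() if jj == j)
    return b, pi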

The relaxation approach is another popular approach, and it is closely related to the successive shortest path algorithm. The relaxation algorithm removes, or relaxes, the constraint (5.18c), thus allowing any object to be assigned to more than one person. This relaxed problem is easy to solve: assign each person i to an object j with the smallest c_ij value. As a result, some objects may be unassigned and other objects may be overassigned. The algorithm gradually builds a feasible assignment by identifying shortest paths from overassigned objects to unassigned objects and augmenting flows on these paths. The algorithm solves at most n shortest path problems. Because this approach always maintains the optimality conditions, it can solve the shortest path problems by implementations of Dijkstra's algorithm. Consequently, this algorithm also runs in O(n · S(n, m, C)) time.

One well known solution procedure for the assignment problem, the Hungarian method, is essentially the primal-dual variant of the successive shortest path algorithm. The network simplex algorithm, with provisions for maintaining a strongly feasible basis, is another solution procedure for the assignment problem. This approach is fairly efficient in practice; moreover, some implementations of it provide polynomial time bounds. For problems that satisfy the similarity assumption, a cost scaling algorithm provides the best-known time bound for the assignment problem.

Since these algorithms are special cases of other algorithms we have described earlier, we will not specify their details. Rather, in this section, we will discuss a different type of algorithm based upon the notion of an auction. Before doing so, we show another intimate connection between the assignment problem and the shortest path problem.

Assignments and Shortest Paths

We have seen that by solving a sequence of shortest path problems, we can solve any assignment problem. Interestingly, we can also use any algorithm for the assignment problem to solve the shortest path problem with arbitrary arc lengths. To do so, we apply the assignment algorithm twice: the first application determines if the network contains a negative cycle, and, if it doesn't, the second application identifies a shortest path. Both applications use the node splitting transformation described in Section 2.4.

The node splitting transformation replaces each node i by two nodes i and i', replaces each arc (i, j) by an arc (i, j'), and adds an (artificial) zero cost arc (i, i').

We first note that the transformed network always has a feasible solution with cost zero, namely, the assignment containing all artificial arcs (i, i'). We next show that the optimal value of the assignment problem is negative if and only if the original network has a negative cost cycle.

First, suppose the original network contains a negative cost cycle, j1 - j2 - j3 - ... - jk - j1. Then the assignment {(j1, j2'), (j2, j3'), ..., (jk, j1')}, together with the artificial arcs (i, i') for the remaining nodes, is a feasible assignment with negative cost. Consequently, the cost of the optimal assignment must be negative.

Conversely, suppose the cost of an optimal assignment is negative. This solution must contain at least one arc of the form (i, j') with i ≠ j. Consequently, the assignment must contain a set of arcs of the form PA = {(j1, j2'), (j2, j3'), ..., (jk, j1')}. The cost of this "partial" assignment is nonpositive, because it can be no more expensive than the partial assignment {(j1, j1'), (j2, j2'), ..., (jk, jk')}, which has zero cost. Since the optimal assignment cost is negative, some partial assignment PA must be negative. But then, by the construction of the transformed network, the cycle j1 - j2 - ... - jk - j1 is a negative cost cycle in the original network.
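The transformation itself is mechanical; a sketch of ours that produces the bipartite cost data is:

def split_for_negative_cycle(nodes, arc_costs):
    # arc_costs: dict mapping (i, j) to c_ij in the original network
    cost = {}
    for (i, j), c in arc_costs.items():
        cost[i, (j, "prime")] = c        # arc (i, j) becomes (i, j')
    for i in nodes:
        cost[i, (i, "prime")] = 0        # artificial zero cost arc (i, i')
    return cost

By the argument just given, the original network contains a negative cost cycle if and only if an optimal assignment for this bipartite problem has negative cost.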

Figure 5.3. (a) The original network. (b) The transformed network.

If the original network contains no negative cost cycle, then we can obtain a shortest path between a specific pair of nodes, say from node 1 to node n, as follows. We consider the transformed network as described earlier and delete the nodes 1' and n and the arcs incident to these nodes. Now observe that each path from node 1 to node n in the original network has a corresponding assignment of the same cost in the transformed network, and the converse is also true. For example, the path 1-2-5 in Figure 5.3(a) has the corresponding assignment {(1, 2'), (2, 5'), (3, 3'), (4, 4')} in Figure 5.3(b), and an assignment {(1, 2'), (2, 4'), (4, 5'), (3, 3')} in Figure 5.3(b) has the corresponding path 1-2-4-5 in Figure 5.3(a). Consequently, an optimum assignment in the transformed network gives a shortest path in the original network.

The Auction Algorithm

We now describe an algorithm for the assignment problem known as the auction algorithm. We first describe a pseudopolynomial time version of the algorithm and then incorporate scaling to make the algorithm polynomial time. This scaling algorithm is an instance of the bit-scaling algorithm described in Section 1.6. To describe the auction algorithm, we consider the maximization version of the assignment problem, since this version appears more natural for interpreting the algorithm.

Suppose n persons want to buy n cars that are to be sold by auction. Each person i is interested in a subset A(i) of cars and has a nonnegative utility u_ij for car j, for each (i, j) ∈ A(i). The objective is to find an assignment with maximum total utility. We can set c_ij = -u_ij to reduce this problem to (5.18). Let C = max {|u_ij| : (i, j) ∈ A}. We assume that all utilities and prices are measured in dollars. At each stage of the algorithm, there is an asking price for car j, represented by price(j). For a given set of asking prices, the marginal utility of person i for buying car j is u_ij - price(j). At each iteration, an unassigned person bids on a car that has the highest marginal utility. We associate with each person i a number value(i), which is an upper bound on that person's highest marginal utility, i.e., value(i) ≥ max {u_ij - price(j) : (i, j) ∈ A(i)}. We call a bid (i, j) admissible if value(i) = u_ij - price(j) and inadmissible otherwise. The algorithm requires every bid in the auction to be admissible. If person i is next in turn to bid and has no admissible bid, then value(i) is too high and we decrease this value to max {u_ij - price(j) : (i, j) ∈ A(i)}.

So the algorithm proceeds by persons bidding on cars. If a person i makes a bid on car j, then the price of car j goes up by $1; therefore, subsequent bids are of higher value. Also, person i is assigned to car j. The person k who was the previous bidder for car j, if there was one, becomes unassigned; subsequently, person k must bid on another car. As the auction proceeds, the prices of cars increase and hence the marginal values to the persons decrease. The auction stops when each person is assigned a car.

We now describe this bidding procedure algorithmically. The procedure starts with some valid choices for value(i) and price(j). For example, we can set price(j) = 0 for each car j and value(i) = max {u_ij : (i, j) ∈ A(i)} for each person i. Although this initialization is sufficient for the pseudopolynomial time version, the polynomial time version requires a more clever initialization. At termination, the procedure yields an almost optimum assignment x°.

procedure BIDDING(u, x°, value, price);
begin
  let the initial assignment be a null assignment;
  while some person is unassigned do
  begin
    select an unassigned person i;
    if some bid (i, j) is admissible then
    begin
      assign person i to car j;
      price(j) := price(j) + 1;
      if person k was already assigned to car j, then person k becomes unassigned;
    end
    else update value(i) := max {u_ij - price(j) : (i, j) ∈ A(i)};
  end;
  let x° be the current assignment;
end;
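For readers who prefer code, here is an executable rendering of the bidding procedure. It is our sketch, not the paper's; it uses the simple initialization above (zero prices, value(i) = max u_ij) and assumes that every person has at least one incident car.

def auction(persons, cars, u, adj):
    price = {j: 0 for j in cars}
    value = {i: max(u[i, j] for j in adj[i]) for i in persons}
    owner = {}                           # car -> person currently assigned
    assigned = {}                        # person -> car
    unassigned = list(persons)
    while unassigned:
        i = unassigned.pop()
        # an admissible bid (i, j) satisfies value(i) = u_ij - price(j)
        j = next((j for j in adj[i] if value[i] == u[i, j] - price[j]), None)
        if j is None:
            # value(i) is too high: lower it to the best marginal utility
            value[i] = max(u[i, j] - price[j] for j in adj[i])
            unassigned.append(i)
        else:
            if j in owner:               # previous bidder becomes unassigned
                k = owner[j]
                del assigned[k]
                unassigned.append(k)
            owner[j] = i
            assigned[i] = j
            price[j] += 1                # the price of car j goes up by $1
    return assigned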

The partial assignment x° also satisfies the condition

value(i) = u_ij - price(j) + 1, for all (i, j) ∈ x°,    (5.20)

because price(j) goes up by $1 at the time of bidding and value(i) = u_ij - price(j) immediately after the bid. Applying (5.19) to the arcs of the optimum assignment x* yields

Σ_{(i,j) ∈ x*} u_ij ≤ Σ_{i ∈ N1} value(i) + Σ_{j ∈ N2} price(j).

Let UB(x°) be defined as follows:

UB(x°) = Σ_{(i,j) ∈ x°} u_ij + Σ_{i ∈ N°} value(i),    (5.21)

with N° denoting the unassigned persons in N1. Using (5.20) in (5.21) and observing that unassigned cars in N2 have zero prices, we obtain

UB(x°) ≥ Σ_{i ∈ N1} value(i) + Σ_{j ∈ N2} price(j) - n,    (5.22)

and combining this with the preceding inequality gives

UB(x°) ≥ Σ_{(i,j) ∈ x*} u_ij - n.    (5.23)

As we show in our discussion to follow, the algorithm can change the node values and prices at most a finite number of times. Since the algorithm will either modify a node value or a node price whenever x° is not an assignment, within a finite number of steps the method must terminate with a complete assignment x°. Then UB(x°) represents the utility of this assignment (since N° is empty), and by (5.23) the utility of this assignment is at most $n less than the maximum utility.

The procedure yields an assignment that is within n units of the optimum value. It is easy to modify the method, however, to obtain an optimum assignment. Suppose we multiply all utilities u_ij by (n+1) before applying the Bidding procedure. Since all utilities are now multiples of (n+1), two assignments with distinct total utility will differ by at least (n+1) units. The procedure yields an assignment that is within n units of the optimum value and, hence, must be optimal. In this modified problem, the largest utility is C' = (n+1)C.

We next discuss the complexity of the Bidding procedure as applied to the assignment problem with all utilities first multiplied by (n+1). We show below that the value of any person decreases O(nC') times.
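To make the effect of this multiplication concrete, consider the following small check (our illustration; the totals are hypothetical):

    # Hypothetical totals for two assignments before scaling (n = 3 persons).
    n = 3
    best, second = 17, 16
    gap = (n + 1) * best - (n + 1) * second   # = 4
    assert gap >= n + 1
    # After scaling, any assignment within $n (= 3) of the optimum total 68
    # must exceed 64, so it can only be the optimum assignment itself.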

Since all utilities are nonnegative, (5.23) implies that UB(x°) ≥ -n. Substituting this inequality in (5.21) yields

Σ_{i ∈ N°} value(i) ≥ -n(C' + 1).

Since value(i) decreases by at least one unit each time it changes, this inequality shows that the value of any person decreases at most O(nC') times. Since decreasing the value of a person i once takes O(|A(i)|) time, the total time needed to update the values of all persons is O(Σ_{i ∈ N1} n|A(i)|C') = O(nmC').

We next examine the number of iterations performed by the procedure. Each iteration either decreases the value of a person or assigns the person to some car j. By our previous arguments, the values change O(n²C') times in total. Further, value(i) > u_ij - price(j) after person i has been assigned to car j, since the price of car j increases by one unit at the time of bidding; consequently, a person i can be assigned at most |A(i)| times between two consecutive decreases of value(i). This observation gives us a bound of O(nmC') on the total number of times all bidders become assigned. As can be shown, using the "current arc" data structure permits us to locate admissible bids in O(nmC') time. Since C' = (n+1)C, we have established the following result.

Theorem 5.8. The auction algorithm solves the assignment problem in O(n²mC) time.

The auction algorithm is potentially very slow because it can increase prices (and thus decrease values) in small increments of $1, and the final prices can be as large as n²C (the values as small as -n²C). Using a scaling technique in the auction algorithm ensures that the prices and values do not change too many times. As in the bit-scaling technique described in Section 1.6, we decompose the original problem into a sequence of O(log nC) assignment problems and solve each problem by the auction algorithm. We use the optimum prices and values of one problem as a starting solution for the subsequent problem and show that the prices and values change only O(n) times per scaling phase. Thus, we solve each problem in O(nm) time and solve the original problem in O(nm log nC) time.

The scaling version of the auction algorithm first multiplies all utilities by (n+1) and then solves a sequence of K = ⌈log (n+1)C⌉ assignment problems P1, P2, ..., PK.

The problem Pk is an assignment problem in which the utility of arc (i, j) is the k leading bits in the binary representation of u_ij, assuming (by adding leading zeros if necessary) that each u_ij is K bits long. In other words, the problem Pk has the utilities u_ij^k = ⌊u_ij / 2^(K-k)⌋. Note that in the problem P1, all utilities are 0 or 1, and subsequently u_ij^(k+1) = 2 u_ij^k + {0 or 1}, depending upon whether the newly added bit is 0 or 1. The scaling algorithm works as follows:

algorithm ASSIGNMENT;
begin
    multiply all u_ij by (n+1);
    K := ⌈log (n+1)C⌉;
    price(j) := 0 for each car j;
    value(i) := 0 for each person i;
    for k := 1 to K do
    begin
        let u_ij^k := ⌊u_ij / 2^(K-k)⌋ for each (i, j) ∈ A;
        price(j) := 2 price(j) for each car j;
        value(i) := 2 value(i) + 1 for each person i;
        BIDDING(u^k, x°, value, price);
    end;
end;

The assignment algorithm performs a number of cost scaling phases. In the k-th scaling phase, it obtains a near-optimum solution of the problem with the utilities u_ij^k. It is easy to verify that before the algorithm invokes the Bidding procedure, the prices and values satisfy value(i) ≥ max {u_ij^k - price(j) : (i, j) ∈ A(i)} for each person i. The Bidding procedure maintains these conditions throughout its execution. In the last scaling phase, the algorithm solves the assignment problem with the original utilities and obtains an optimum solution of the original problem. Observe that in each scaling phase the algorithm starts with a null assignment; the purpose of each scaling phase is to obtain good prices and values for the subsequent scaling phase.

We next discuss the complexity of this assignment algorithm. The crucial result is that the prices and values change only O(n) times during each execution of the Bidding procedure.
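A sketch of this scaling driver in the same vein (ours, not from the original; it assumes the bidding() sketch given earlier is in scope, nonnegative integral utilities, and C ≥ 1):

    import math

    def scaled_assignment(u, A, persons, C):
        n = len(persons)
        u = {arc: (n + 1) * w for arc, w in u.items()}       # multiply utilities by n+1
        K = max(1, math.ceil(math.log2((n + 1) * C)))        # number of scaling phases
        price = {j: 0 for i in persons for j in A[i]}
        value = {i: 0 for i in persons}
        x = {}
        for k in range(1, K + 1):
            uk = {arc: w >> (K - k) for arc, w in u.items()} # k leading bits of each utility
            price = {j: 2 * p for j, p in price.items()}
            value = {i: 2 * v + 1 for i, v in value.items()}
            x, price, value = bidding(uk, A, persons, price, value)
        return x

Note how each phase warm-starts the next: the doubled prices and values remain valid upper bounds for the problem with one more bit of utility, which is what limits the work per phase.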

We define the reduced utility of an arc (i, j) in the k-th scaling phase as

ū_ij = u_ij^k - price(j) - value(i).

In this expression, price(j) and value(i) have the values computed just before calling the Bidding procedure. For any assignment x, we have

Σ_{(i,j) ∈ x} ū_ij = Σ_{(i,j) ∈ x} u_ij^k - Σ_{j ∈ N2} price(j) - Σ_{i ∈ N1} value(i).

Consequently, for a given set of prices and values, the reduced utility of an assignment differs from the utility of that assignment by a constant amount. Therefore, an assignment that maximizes the reduced utility also maximizes the utility. Since value(i) ≥ u_ij^k - price(j) for each (i, j) ∈ A, we have

ū_ij ≤ 0, for all (i, j) ∈ A.    (5.24)

Now consider the reduced utilities of arcs in the assignment x^(k-1) (the final assignment at the end of the (k-1)-st scaling phase). The equality (5.20) implies that

u_ij^(k-1) - price'(j) - value'(i) = -1, for all (i, j) ∈ x^(k-1),    (5.25)

where price'(j) and value'(i) are the corresponding values at the end of the (k-1)-st scaling phase. Just before calling the Bidding procedure, we set price(j) = 2 price'(j), value(i) = 2 value'(i) + 1, and u_ij^k = 2 u_ij^(k-1) + {0 or 1}. Substituting these relationships in (5.25) gives ū_ij = 2(u_ij^(k-1) - price'(j) - value'(i)) + {0 or 1} - 1 = -3 + {0 or 1}; hence the reduced utilities of arcs in x^(k-1) are either -2 or -3, and the optimum reduced utility is at least -3n. If x° is some partial assignment in the k-th scaling phase, then (5.23) implies that UB(x°) ≥ -4n. Using this result and (5.20) in (5.21) yields

Σ_{i ∈ N1} value(i) ≥ -4n.    (5.26)

Hence, for any person i, value(i) decreases O(n) times. Using this result in the proof of Theorem 5.8, we observe that the Bidding procedure would terminate in O(nm) time. The assignment algorithm applies the Bidding procedure O(log nC) times and, consequently, runs in O(nm log nC) time. We summarize our discussion.

Theorem 5.9. The scaling version of the auction algorithm solves the assignment problem in O(nm log nC) time.

The scaling version of the auction algorithm can be further improved to run in O(√n m log nC) time. This improvement is based on the following implication of (5.26): if we prohibit person i from bidding when value(i) ≤ -4√n, then by (5.26) the number of unassigned persons is at most √n. We therefore terminate the execution of the auction algorithm when it has assigned all but ⌈√n⌉ persons and use successive shortest path algorithms to assign these persons. It so happens that the shortest paths have length O(n), and thus Dial's algorithm, as described in Section 3.2, will find these shortest paths in O(m) time. Hence, the algorithm takes O(√n m) time to assign the first n - ⌈√n⌉ persons and O(⌈√n⌉ m) time to assign the remaining ⌈√n⌉ persons. For example, if n = 10,000, then the auction algorithm would assign 99% of the persons in 1% of the overall running time and would assign the remaining 1% of the persons in the remaining 99% of the time. Terminating the auction algorithm early therefore improves the running time substantially. This version of the auction algorithm solves a scaling phase in O(√n m) time, and its overall running time is O(√n m log nC). If we invoke the similarity assumption, then this version of the algorithm currently has the best known time bound for solving the assignment problem.

6. Reference Notes

In this section, we present reference notes on topics covered in the text. This discussion has three objectives: (i) to review important theoretical contributions on each topic, (ii) to point out inter-relationships among different algorithms, and (iii) to comment on the empirical aspects of the algorithms.

6.1 Introduction

The study of network flow models predates the development of linear programming techniques. The first studies in this problem domain, conducted by Kantorovich [1939], Hitchcock [1941], and Koopmans [1947], considered the transportation problem, a special case of the minimum cost flow problem. These studies provided some insight into the problem structure and yielded incomplete algorithms.

Interest in network problems grew with the advent of the simplex algorithm by Dantzig in 1947. Dantzig [1951] specialized the simplex algorithm for the transportation problem. He noted the triangularity of the basis and the integrality of the optimum solution. Orden [1956] generalized this work by specializing the simplex algorithm for the uncapacitated minimum cost flow problem. The network simplex algorithm for the capacitated minimum cost flow problem followed from the development of the bounded variable simplex method for linear programming by Dantzig [1955]. The book by Dantzig [1962] contains a thorough description of these contributions along with historical perspectives.

During the 1950's, researchers began to exhibit increasing interest in the minimum cost flow problem as well as its special cases (the shortest path problem, the maximum flow problem and the assignment problem), mainly because of their important applications. Soon researchers developed special purpose algorithms to solve these problems. Dantzig, Ford and Fulkerson pioneered those efforts. Whereas Dantzig focused on the primal simplex based algorithms, Ford and Fulkerson developed primal-dual type combinatorial algorithms to solve these problems. Their book, Ford and Fulkerson [1962], presents a thorough discussion of the early research conducted by them and by others. It also covers the development of flow decomposition theory, which is credited to Ford and Fulkerson.

Since these pioneering works, network flow problems and their generalizations emerged as major research topics in operations research. This research

is documented in thousands of papers and many text and reference books. We shall be surveying many important research papers in the following sections.

Several important books summarize developments in the field and serve as a guide to the literature: Ford and Fulkerson [1962] (Flows in Networks), Berge and Ghouila-Houri [1962] (Programming, Games and Transportation Networks), Iri [1969] (Network Flows, Transportation and Scheduling), Hu [1969] (Integer Programming and Network Flows), Frank and Frisch [1971] (Communication, Transmission and Transportation Networks), Potts and Oliver [1972] (Flows in Transportation Networks), Christophides [1975] (Graph Theory: An Algorithmic Approach), Murty [1976] (Linear and Combinatorial Programming), Lawler [1976] (Combinatorial Optimization: Networks and Matroids), Bazaraa and Jarvis [1978] (Linear Programming and Network Flows), Minieka [1978] (Optimization Algorithms for Networks and Graphs), Jensen and Barnes [1980] (Network Flow Programming), Kennington and Helgason [1980] (Algorithms for Network Programming), Phillips and Garcia-Diaz [1981] (Fundamentals of Network Analysis), Swamy and Thulsiraman [1981] (Graphs, Networks and Algorithms), Papadimitriou and Steiglitz [1982] (Combinatorial Optimization: Algorithms and Complexity), Smith [1982] (Network Optimization Practice), Syslo, Deo and Kowalik [1983] (Discrete Optimization Algorithms), Tarjan [1983] (Data Structures and Network Algorithms), Gondran and Minoux [1984] (Graphs and Algorithms), Rockafellar [1984] (Network Flows and Monotropic Optimization), and Derigs [1988] (Programming in Networks and Graphs). As an additional source of references, the reader might consult the bibliography on network optimization prepared by Golden and Magnanti [1977] and the extensive set of references on integer programming compiled by researchers at the University of Bonn (Kastning [1976], Hausman [1978], and Von Randow [1982, 1985]).

Since the applications of network flow models are so pervasive, no single source provides a comprehensive account of network flow models and their impact on practice. Several researchers have prepared general surveys of selected application areas. Notable among these is the paper by Glover and Klingman [1976] on the applications of minimum cost flow and generalized minimum cost flow problems. A number of books written in special problem domains also contain valuable insight about the range of applications of network flow models. Examples in this category are the paper by Bodin, Golden, Assad and Ball [1983] on vehicle routing and scheduling problems, books on communication networks by Bertsekas

and Gallager [1987] and on transportation planning by Sheffi [1985], as well as a collection of survey articles on facility location edited by Francis and Mirchandani [1988]. Golden [1988] has described the census rounding application given in Section 1.1.

General references on data structures serve as a useful backdrop for the algorithms presented in this chapter. The book by Aho, Hopcroft and Ullman [1974] is an excellent reference for simple data structures such as arrays, linked lists, doubly linked lists, queues, stacks, binary heaps or d-heaps. The book by Tarjan [1983] is another useful source of references for these topics as well as for more complex data structures such as dynamic trees.

We have mentioned the "similarity assumption" throughout the chapter. Gabow [1985] coined this term in his paper on scaling algorithms for combinatorial optimization problems. This important paper, which contains scaling algorithms for several network problems, greatly helped in popularizing scaling techniques.

6.2 Shortest Path Problem

The shortest path problem and its generalizations have a voluminous research literature. As a guide to these results, we refer the reader to the extensive bibliographies compiled by Gallo, Pallattino, Ruggen and Starchi [1982] and Deo and Pang [1984]. This section, which summarizes some of this literature, focuses especially on issues of computational complexity.

Label Setting Algorithms

The first label setting algorithm was suggested by Dijkstra [1959], and independently by Dantzig [1960] and Whiting and Hillier [1960]. The original implementation of Dijkstra's algorithm runs in O(n²) time, which is the optimal running time for fully dense networks (those with m = Ω(n²)), since any algorithm must examine every arc. However, improved running times are possible for sparse networks. The following table summarizes various implementations of Dijkstra's algorithm that have been designed to improve the running time in the worst case or in practice. In the table, d = ⌈2 + m/n⌉ represents the average degree of a node in the network plus 2.

[Table: implementations of Dijkstra's algorithm with their worst-case running times.]

Boas, Kaas and Zijlstra [1977] suggested a data structure whose analysis depends upon the largest key D stored in a heap. The initialization of this algorithm takes O(D) time and each heap operation takes O(log log D). When Dijkstra's algorithm is implemented using this data structure, it runs in O(nC + m log log nC) time. Johnson [1982] suggested an improvement of this data structure and used it to implement Dijkstra's algorithm in O(m log log C) time.

Dial [1969] suggested his implementation of Dijkstra's algorithm because of its encouraging empirical performance; this algorithm was independently discovered by Wagner [1976]. Dial, Glover, Karney and Klingman [1979] have proposed an improved version of Dial's algorithm which runs better in practice. Though Dial's algorithm is only pseudopolynomial-time, its successors have had improved worst-case behavior. Denardo and Fox [1979] suggest several such improvements. Observe that if w = max [1, min {c_ij : (i, j) ∈ A}], then we can use buckets of width w in Dial's algorithm, hence reducing the number of buckets from 1+C to 1+(C/w). The correctness of this observation follows from the fact that if d* is the current minimum temporary distance label, then the algorithm will modify no other temporary distance label in the range [d*, d* + w - 1], since each arc has length at least w. Then, using a multiple level bucket scheme, Denardo and Fox implemented the shortest path algorithm in O(max {k C^(1/k), m log (k+1), nk(1 + C^(1/k)/w)}) time for any choice of k; choosing k = log C yields a time bound of O(m log log C + n log C). Depending on n, m and C, other choices might lead to a modestly better time bound.
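For illustration, the simplest member of this family, Dial's unit-width bucket scheme, can be sketched as follows (our code, not from the cited papers; it assumes integral arc lengths between 0 and C, so distance labels lie in [0, nC]):

    def dial_shortest_paths(n, adj, source, C):
        """adj[v] = list of (w, length) pairs with integral lengths in [0, C]."""
        INF = float('inf')
        dist = [INF] * n
        dist[source] = 0
        top = n * C if C else 1
        buckets = [[] for _ in range(top + 1)]     # one bucket per possible label
        buckets[0].append(source)
        for d in range(top + 1):                   # scan buckets in increasing order
            for v in buckets[d]:
                if dist[v] != d:                   # stale entry; label improved earlier
                    continue
                for w, c in adj[v]:
                    if d + c < dist[w]:
                        dist[w] = d + c
                        buckets[d + c].append(w)   # (re)insert with its new label
        return dist

A practical refinement, which we omit, reuses only 1+C buckets cyclically, since all temporary labels lie within C of the current minimum; the width-w and multi-level schemes discussed above reduce the bucket count further.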

We shall subsequently refer to A FORTRAN listing of this . all of its previous This approach permits the selection of much larger width of buckets. algorithm. as shown by Edmonds Researchers have exploited the flexibility inherent in the generic label correcting algorithm to obtain algorithms that are very efficient in practice. and later refined and tested by Pap>e [1974]. the first label correcting algorithm for - Subsequently. for which the algorithm of Johnson [1982] appears more attractive. time. The Fibonacci heap version it of two-level R-heap is very complex. the two-level bucket system redistributes the range of a subbucket over buckets. Orlin and Tarjan [1988] suggested the Rits heap implementation and further improvements. the shortest path problem. probably the most popular. the most general form nonpolynomial-time. If we invoke the similarity aissumption.158 factor of Odog log C). By using K = L = 2 log C/log log C. studied the theoretical properties of the Bellman's [1958] algorithm can also be regarded as a label correcting Though specific implementations of label correcting algorithms run in is 0(nm) [1970]. thus reducing the number of buckets. Incorporating a generalization of the Fibonacci heap data structure in the two-level bucket system with appropriate choices of K and L further reduces the time bound to 0(m + nVlog C ). Ahuja. however. this algorithm as D'Esopo and Pape's algorithm.3 uses a single level bucket A two-level bucket system improves further on the R-heap implementation of Dijkstra's algorithm. this approach currently all classes gives the fastest worst-case implementation of Dijkstra's algorithm for of graphs except very sparse ones. as described next. in section 3. The two-level data structure consists of K (big) buckets. this two-level bucket system version of Dijkstra's algorithm runs in 0(m+n log C/log log C) time. Ouring redistribution. Mehlhom. in practice. several other researchers - Ford and Fulkerson [1962] and Moore [1957] algorithm. each bucket being further subdivided into L (small) subbuckets. and so is unlikely that this algorithm would perform well Label Correcting Algorithm Ford [1956] suggested. The modification that adds a node to the LIST (see the description of the Modified Label Correcting Algorithm given in Section 3.4. in skeleton form. The R-heap implementation described system.) at the front if the algorithm has is previously examined the node earlier and at the end otherwise. This modification was conveyed to Pollack and Wiebenson [1960] by D'Esopo.

algorithm can be found in Pape [1980]. Though this modified label correcting algorithm has excellent computational behavior, in the worst case it runs in exponential time, as shown by Kershenbaum [1981].

Glover, Klingman and Phillips [1985] proposed a generalization of the FIFO label correcting algorithm, called the partitioning shortest path (PSP) algorithm. For general networks, the PSP algorithm runs in O(nm) time, while for networks with nonnegative arc lengths it runs in O(n²) time and has excellent computational behavior. Other variants of the label correcting algorithms and their computational attributes can be found in Glover, Klingman, Phillips and Schneider [1985].

Researchers have been interested in developing polynomial-time primal simplex algorithms for the shortest path problem. Dial, Glover, Karney and Klingman [1979] and Zadeh [1979] showed that Dantzig's pivot rule (i.e., pivoting in the arc with the largest violation of the optimality condition) for the shortest path problem starting from an artificial basis leads to Dijkstra's algorithm. Thus, the number of pivots is O(n) if all arc costs are nonnegative. Primal simplex algorithms for the shortest path problem with arbitrary arc lengths are not that efficient. Akgul [1985a] developed a simplex algorithm for the shortest path problem that performs O(n²) pivots. Using simple data structures, Akgul's algorithm runs in O(n³) time, which can be reduced to O(nm + n² log n) using the Fibonacci heap data structure. Goldfarb, Hao and Kai [1986] described another simplex algorithm for the shortest path problem; the number of pivots and running times for this algorithm are comparable to those of Akgul's algorithm. Orlin [1985] showed that the simplex algorithm with Dantzig's pivot rule solves the shortest path problem in O(n² log nC) pivots. Ahuja and Orlin [1988] recently discovered a scaling variation of this approach that performs O(n² log C) pivots and runs in O(nm log C) time. This algorithm uses simple data structures, uses very natural pricing strategies, and also permits partial pricing.

All Pair Shortest Path Algorithms

Most algorithms that solve the all pair shortest path problem involve matrix manipulation. The first such algorithm appears to be a part of the folklore; Lawler [1976] describes this algorithm in his textbook. The complexity of this algorithm is O(n³ log n), which can be improved slightly by using more sophisticated matrix multiplication procedures. The algorithm we have presented is due to Floyd [1962] and is based on a theorem by Warshall [1962]. This algorithm runs in O(n³) time and

is also capable of detecting the presence of negative cycles. Dantzig [1967] devised another procedure requiring exactly the same order of calculations. The bibliography by Deo and Pang [1984] contains references for several other all pair shortest path algorithms.

From a worst-case complexity point of view, it might be desirable to solve the all pair shortest path problem as a sequence of single source shortest path problems. As pointed out in the text, this approach takes O(nm) time to construct an equivalent problem with nonnegative arc lengths and takes O(n S(n,m,C)) time to solve the n shortest path problems (recall that S(n,m,C) is the time needed to solve a shortest path problem with nonnegative arc lengths). For very dense networks, the algorithm by Fredman [1976] is faster than this approach in the worst-case complexity.

Computational Results

Researchers have extensively tested shortest path algorithms on a variety of network classes. The studies due to Gilsinn and Witzgall [1973], Pape [1974], Kelton and Law [1978], Van Vliet [1978], Dial, Glover, Karney and Klingman [1979], Denardo and Fox [1979], Imai and Iri [1984], Glover, Klingman, Phillips and Schneider [1985], and Gallo and Pallottino [1988] are representative of these contributions.

Unlike the worst-case results, the computational performance of an algorithm depends upon many factors: for example, the manner in which the program is written, the language, compiler and computer used, and the distribution of networks on which the algorithm is tested. Hence, the results of computational studies are only suggestive, rather than conclusive; the results also depend greatly upon the density of the network. These studies generally suggest that Dial's algorithm is the best label setting algorithm for the shortest path problem. It is faster than the original O(n²) implementation and the binary heap, d-heap or Fibonacci heap implementations of Dijkstra's algorithm for all network classes tested by these researchers. Denardo and Fox [1979] also find that Dial's algorithm is faster than their two-level bucket implementation for all of their test problems; extrapolating the results, however, they observe that their implementation would be faster for very large shortest path problems. Researchers have not yet tested the R-heap implementation, and so at this moment no comparison with Dial's algorithm is available.

Among the label correcting algorithms, the algorithms by D'Esopo and Pape and by Glover, Klingman, Phillips and Schneider [1985] are the two fastest. The study by Glover et al. finds that their algorithm is superior to D'Esopo and Pape's algorithm. Other researchers have also compared label setting algorithms with label correcting algorithms. Studies generally suggest that, for sparse networks, label setting algorithms are superior and, for very dense networks, label correcting algorithms perform better.

Kelton and Law [1978] have conducted a computational study of several all pair shortest path algorithms. This study indicates that Dantzig's [1967] algorithm with a modification due to Tabourier [1973] is faster (up to two times) than the Floyd-Warshall algorithm described in Section 3.5. This study also finds that matrix manipulation algorithms are faster than a successive application of a single-source shortest path algorithm for very dense networks, but slower for sparse networks.

6.3 Maximum Flow Problem

The maximum flow problem is distinguished by the long succession of research contributions that have improved upon the worst-case complexity of algorithms; some, but not all, of these improvements have produced improvements in practice.

Several researchers, namely Dantzig and Fulkerson [1956], Ford and Fulkerson [1956], and Elias, Feinstein and Shannon [1956], independently established the max-flow min-cut theorem. Fulkerson and Dantzig [1955] solved the maximum flow problem by specializing the primal simplex algorithm, whereas Ford and Fulkerson [1956] and Elias et al. [1956] solved it by augmenting path algorithms. Since then, researchers have developed a number of algorithms for this problem; Figure 6.2 summarizes the running times of some of these algorithms. In the figure, n is the number of nodes, m is the number of arcs, and U is an upper bound on the integral arc capacities. The algorithms whose time bounds involve U assume integral capacities; the bounds specified for the other algorithms apply to problems with arbitrary rational or real capacities.

#    Discoverers                                  Running Time
1    Edmonds and Karp [1972]                      O(nm²)
2    Dinic [1970]                                 O(n²m)
3    Karzanov [1974]                              O(n³)
4    Cherkasky [1977]                             O(n²√m)
5    Malhotra, Kumar and Maheshwari [1978]        O(n³)
6    Galil [1980]                                 O(n^(5/3) m^(2/3))
7    Galil and Naamad [1980]; Shiloach [1978]     O(nm log² n)
8    Shiloach and Vishkin [1982]                  O(n³)
9    Sleator and Tarjan [1983]                    O(nm log n)
10   Tarjan [1984]                                O(n³)
11   Gabow [1985]                                 O(nm log U)
12   Goldberg [1985]                              O(n³)
13   Goldberg and Tarjan [1986]                   O(nm log (n²/m))
14   Bertsekas [1986]                             O(n³)
15   Cheriyan and Maheshwari [1987]               O(n²√m)
16   Ahuja and Orlin [1987]                       O(nm + n² log U)
17   Ahuja, Orlin and Tarjan [1988]               (a) O(nm + n²√(log U))
                                                  (b) O(nm + n² (log U)/(log log U))
                                                  (c) O(nm log ((n/m)√(log U) + 2))

Table 6.2. Running times of maximum flow algorithms.

Ford and Fulkerson [1956] observed that the labeling algorithm can perform as many as O(nU) augmentations for networks with integer arc capacities. They also showed that for arbitrary irrational arc capacities, the labeling algorithm can perform an infinite sequence of augmentations and might converge to a value different from the maximum flow value. Edmonds and Karp [1972] suggested two specializations of the labeling algorithm, both with improved computational complexity. They showed that if the algorithm augments flow along a shortest path (i.e., one containing the smallest possible number of arcs) in the residual network, then the algorithm performs O(nm) augmentations. A breadth first search of the network will determine a shortest augmenting path; consequently, this version of the labeling

algorithm runs in O(nm²) time.

Edmonds and Karp's second idea was to augment flow along a path with maximum residual capacity. They proved that this algorithm performs O(m log U) augmentations. Tarjan [1986] has shown how to determine a path with maximum residual capacity in O(m) time on average; hence, this version of the labeling algorithm runs in O(m² log U) time.

Dinic [1970] independently introduced the concept of shortest path networks, called layered networks, for solving the maximum flow problem. A layered network is a subgraph of the residual network that contains only those nodes and arcs that lie on at least one shortest path from the source to the sink. The nodes in a layered network can be partitioned into layers of nodes N1, N2, ..., so that every arc (i, j) in the layered network connects nodes in adjacent layers (i.e., i ∈ Nk and j ∈ Nk+1 for some k). A blocking flow in a layered network G' = (N', A') is a flow that blocks flow augmentations in the sense that G' contains no directed path with positive residual capacity from the source node to the sink node. Dinic showed how to construct, in a total of O(nm) time, a blocking flow in a layered network by performing at most m augmentations. His algorithm constructs layered networks and establishes blocking flows in these networks. Dinic showed that after each blocking flow iteration, the length of the layered network increases, and after at most n iterations, the source is disconnected from the sink in the residual network. Consequently, his algorithm runs in O(n²m) time.

The shortest augmenting path algorithm presented in Section 4.3 achieves the same time bound as Dinic's algorithm, but instead of constructing layered networks it maintains distance labels. Goldberg [1985] introduced distance labels in the context of his preflow push algorithm. Distance labels offer several advantages: they are simpler to understand than layered networks, are easier to manipulate, and have led to more efficient algorithms. Orlin and Ahuja [1987] developed the distance label based augmenting path algorithm given in Section 4.3. They also showed that this algorithm is equivalent both to Edmonds and Karp's algorithm and to Dinic's algorithm in the sense that all three algorithms enumerate the same augmenting paths in the same sequence; the algorithms differ only in the manner in which they obtain these augmenting paths.

Several researchers have contributed improvements to the computational complexity of maximum flow algorithms by developing more efficient algorithms to establish blocking flows in layered networks. Karzanov [1974] introduced the concept

of preflows in a layered network. (See the technical report of Even [1976] for a comprehensive description of this algorithm and the paper by Tarjan [1984] for a simplified version.) Karzanov showed that an implementation that maintains preflows and pushes flows from nodes with excesses constructs a blocking flow in O(n²) time. Malhotra, Kumar and Maheshwari [1978] present a conceptually simple maximum flow algorithm that runs in O(n³) time. Cherkasky [1977] and Galil [1980] presented further improvements of Karzanov's algorithm.

The search for more efficient maximum flow algorithms has stimulated researchers to develop new data structures for implementing Dinic's algorithm. The first such data structures were suggested independently by Shiloach [1978] and Galil and Naamad [1980]. Dinic's algorithm (or the shortest augmenting path algorithm described in Section 4.3) takes O(n) time on average to identify an augmenting path and, during the augmentation, it saturates some arcs in this path. If we delete the saturated arcs from this path, we obtain a set of path fragments. The basic idea is to store these path fragments using some data structure, for example, 2-3 trees (see Aho, Hopcroft and Ullman [1974] for a discussion of 2-3 trees), and use them later to identify augmenting paths quickly. Shiloach [1978] and Galil and Naamad [1980] showed how to augment flows through path fragments in a way that finds a blocking flow in O(m (log n)²) time; hence, their implementation of Dinic's algorithm runs in O(nm (log n)²) time. Sleator and Tarjan [1983] improved this approach by using a data structure called dynamic trees to store and update path fragments. Sleator and Tarjan's algorithm establishes a blocking flow in O(m log n) time and thereby yields an O(nm log n) time bound for Dinic's algorithm.

Gabow [1985] obtained a similar time bound by applying a bit scaling approach to the maximum flow problem. As outlined in Section 1.7, this approach solves a maximum flow problem at each scaling phase with one more bit of every arc's capacity. During a scaling phase, the initial flow value differs from the maximum flow value by at most m units, and so the shortest augmenting path algorithm (and also Dinic's algorithm) performs at most m augmentations. Consequently, each scaling phase takes O(nm) time and the algorithm runs in O(nm log C) time. If we invoke the similarity assumption, this time bound is comparable to that of Sleator and Tarjan's algorithm, but the scaling algorithm is much simpler to implement. Orlin and Ahuja [1987] have presented a variation of Gabow's algorithm achieving the same time bound.

Goldberg and Tarjan [1986] developed the generic preflow push algorithm and the highest-label preflow push algorithm. Previously, Goldberg [1985] had shown that the FIFO version of the algorithm, which pushes flow from active nodes in first-in-first-out order, runs in O(n³) time. (This algorithm maintains a queue of active nodes; at each iteration, it selects a node from the front of the queue, performs a push/relabel step at this node, and adds the newly active nodes to the rear of the queue.) Using a dynamic tree data structure, Goldberg and Tarjan [1986] improved the running time of the FIFO preflow push algorithm to O(nm log (n²/m)). This algorithm currently gives the best strongly polynomial-time bound for solving the maximum flow problem. Bertsekas [1986] obtained another maximum flow algorithm by specializing his minimum cost flow algorithm; this algorithm closely resembles Goldberg's FIFO preflow push algorithm.

Recently, Cheriyan and Maheshwari [1987] showed that Goldberg and Tarjan's highest-label preflow push algorithm actually performs O(n²√m) nonsaturating pushes and hence runs in O(n²√m) time.

Ahuja and Orlin [1987] improved the Goldberg and Tarjan algorithm using the excess-scaling technique to obtain an O(nm + n² log U) time bound. If we invoke the similarity assumption, this algorithm improves Goldberg and Tarjan's O(nm log (n²/m)) algorithm by a factor of log n for networks that are both non-sparse and non-dense. Further, this algorithm does not use any complex data structures. Scaling excesses by a factor of log U/log log U and pushing flow from a large excess node with the highest distance label, Ahuja, Orlin and Tarjan [1988] reduced the number of nonsaturating pushes to O(n² log U/log log U). Ahuja, Orlin and Tarjan [1988] obtained another variation of the original excess scaling algorithm which further reduces the number of nonsaturating pushes to O(n²√(log U)).

The use of the dynamic tree data structure improves the running times of the excess-scaling algorithm and its variations, though the improvements are not as dramatic as they have been for Dinic's and the FIFO preflow push algorithms. For example, the O(nm + n²√(log U)) algorithm improves to O(nm log ((n/m)√(log U) + 2)) by using dynamic trees, as shown in Ahuja, Orlin and Tarjan [1988]. Tarjan [1987] conjectures that any preflow push algorithm that performs p nonsaturating pushes can be implemented in O(nm log (2 + p/nm)) time using dynamic trees. Although this

conjecture is true for all known preflow push algorithms, it is still open for the general case.

Developing a polynomial-time primal simplex algorithm for the maximum flow problem has been an outstanding open problem for quite some time. Recently, Goldfarb and Hao [1988] developed such an algorithm. This algorithm is based on selecting pivot arcs so that flow is augmented along a shortest path from the source to the sink. As one would expect, this algorithm performs O(nm) pivots and can be implemented in O(n²m) time. Tarjan [1988] recently showed how to implement this algorithm in O(nm log n) time using dynamic trees.

Researchers have also investigated the following special cases of the maximum flow problem: the maximum flow problem on (i) unit capacity networks (i.e., U = 1), (ii) unit capacity simple networks (i.e., U = 1 and, except for the source and sink, every node has one incoming arc or one outgoing arc), (iii) bipartite networks, and (iv) planar networks. Observe that the maximum flow value for unit capacity networks is less than n, and so the shortest augmenting path algorithm will solve these problems in O(nm) time. Thus, these problems are easier to solve than are problems with large capacities. Even and Tarjan [1975] showed that Dinic's algorithm solves the maximum flow problem on unit capacity networks in O(n^(2/3) m) time and on unit capacity simple networks in O(n^(1/2) m) time. Orlin and Ahuja [1987] have achieved the same time bounds using a modification of the shortest augmenting path algorithm. Both of these algorithms rely on ideas contained in Hopcroft and Karp's [1973] algorithm for maximum bipartite matching. Femandez-Baca and Martel [1987] have generalized these ideas for networks with small integer capacities.

Versions of the maximum flow algorithms run considerably faster on bipartite networks G = (N1 ∪ N2, A) if |N1| << |N2| (or |N2| << |N1|). Let n1 = |N1|, n2 = |N2| and n = n1 + n2, and suppose that n1 ≤ n2. Gusfield, Martel and Fernandez-Baca [1985] obtained the first such results by showing how the running times of Karzanov's and Malhotra et al.'s algorithms reduce from O(n³) to O(n1² n2) and O(n1³ + nm), respectively. Ahuja, Orlin, Stein and Tarjan [1988] improved upon these ideas by showing that it is possible to substitute n1 for n in the time bounds for all preflow push algorithms to obtain the new time bounds for bipartite networks. This result implies that the FIFO preflow push algorithm and the

original excess scaling algorithm solve the bipartite maximum flow problem in O(n1 m + n1³) and O(n1 m + n1² log U) time, respectively.

It is possible to solve the maximum flow problem on planar networks much more efficiently than on general networks. (A network is called planar if it can be drawn in a two-dimensional plane so that arcs intersect one another only at the nodes.) A planar network has at most 6n arcs; hence, the running times of the maximum flow algorithms on planar networks appear more attractive. Specialized solution techniques, which have even better running times, are quite different than those for general networks. Some important references for planar maximum flow algorithms are Itai and Shiloach [1979], Johnson and Venkatesan [1982] and Hassin and Johnson [1985].

Researchers have also investigated whether the worst-case bounds of the maximum flow algorithms are tight, i.e., whether the algorithms achieve their worst-case bounds for some families of networks. Zadeh [1972] showed that the bound of Edmonds and Karp's algorithm is tight when m = n². Even and Tarjan [1975] noted that the same examples imply that the bound of Dinic's algorithm is tight when m = n². Baratz [1977] showed that the bound on Karzanov's algorithm is tight. Galil [1981] constructed an interesting class of examples and showed that the algorithms of Edmonds and Karp, Dinic, Karzanov, Cherkasky, Galil and Malhotra et al. achieve their worst-case bounds on those examples.

Other researchers have made some progress in constructing worst-case examples for preflow push algorithms. Martel [1987] showed that the FIFO preflow push algorithm can take Ω(nm) time to solve a class of unit capacity networks. Cheriyan and Maheshwari [1987] have shown that the bound of O(n²√m) for the highest-label preflow push algorithm is tight. Cheriyan [1988] has also constructed a family of examples to show that the bound O(n³) for the FIFO preflow push algorithm and the bound O(n²m) for the generic preflow push algorithm are tight. The research community has not established similar results for other preflow push algorithms, in particular for the excess-scaling algorithms. It is worth mentioning, however, that these known worst-case examples are quite artificial and are not likely to arise in practice.

Several computational studies have assessed the empirical behavior of maximum flow algorithms. The studies performed by Hamacher [1979], Cheung

[1980], Glover, Klingman, Mote and Whitman [1979, 1984], Imai [1983] and Goldfarb and Grigoriadis [1986] are noteworthy. These studies were conducted prior to the development of algorithms that use distance labels. These studies rank Edmonds and Karp, Dinic's and Karzanov's algorithms in increasing order of performance for most classes of networks. Dinic's algorithm is competitive with Karzanov's algorithm for sparse networks, but slower for dense networks. Imai [1983] noted that Galil and Naamad's [1980] implementation of Dinic's algorithm, using sophisticated data structures, is slower than the original Dinic's algorithm. Sleator and Tarjan [1983] reported a similar finding; they observed that their implementation of Dinic's algorithm using the dynamic tree data structure is slower than the original Dinic's algorithm by a constant factor. Hence, the sophisticated data structures improve only the worst-case performance of algorithms, but are not useful empirically. Researchers have also tested the Malhotra et al. algorithm and the primal simplex algorithm due to Fulkerson and Dantzig [1955], and found these algorithms to be slower than Dinic's algorithm for most classes of networks.

A number of researchers are currently evaluating the computational performance of preflow push algorithms. Derigs and Meier [1988], Grigoriadis [1988], and Ahuja, Kodialam and Orlin [1988] have found that the preflow push algorithms are substantially (often 2 to 10 times) faster than Dinic's and Karzanov's algorithms for most classes of networks. Among all nonscaling preflow push algorithms, the highest-label preflow push algorithm runs the fastest. The excess-scaling algorithm and its variations have not been tested thoroughly. We do not anticipate that dynamic tree implementations of preflow push algorithms would be useful in practice; in this case, as in others, their contribution has been to improve the worst-case performances of algorithms.

Finally, we discuss two important generalizations of the maximum flow problem: (i) the multi-terminal flow problem, and (ii) the maximum dynamic flow problem.

In the multi-terminal flow problem, we wish to determine the maximum flow value between every pair of nodes. Gomory and Hu [1961] showed how to solve the multi-terminal flow problem on undirected networks by solving (n-1) maximum flow problems. Recently, Gusfield [1987] has suggested a simpler multi-terminal flow algorithm. These results, however, do not apply to the multi-terminal maximum flow problem on directed networks.

In the simplest version of the maximum dynamic flow problem, we associate with each arc (i, j) in the network a number t_ij denoting the time needed to traverse that arc. The objective is to send the maximum possible flow from the source node to the sink node within a given time period T. Ford and Fulkerson [1958] showed that the maximum dynamic flow problem can be solved by solving a minimum cost flow problem. (Ford and Fulkerson [1962] give a nice treatment of this problem.) Orlin [1983] has considered infinite horizon dynamic flow problems in which the objective is to minimize the average cost per period.

6.4 Minimum Cost Flow Problem

The minimum cost flow problem has a rich history. The classical transportation problem, a special case of the minimum cost flow problem, was posed and solved (though incompletely) by Kantorovich [1939], Hitchcock [1941], and Koopmans [1947]. Dantzig [1951] developed the first complete solution procedure for the transportation problem by specializing his simplex algorithm for linear programming. He observed the spanning tree property of the basis and the integrality property of the optimum solution. Later, his development of the upper bounding technique for linear programming led to an efficient specialization of the simplex algorithm for the minimum cost flow problem. Dantzig's book [1962] discusses these topics.

Ford and Fulkerson [1956, 1957] suggested the first combinatorial algorithms for the uncapacitated and capacitated transportation problem; these algorithms are known as primal-dual algorithms. Ford and Fulkerson [1962] describe the primal-dual algorithm for the minimum cost flow problem. Jewell [1958], Iri [1960] and Busaker and Gowen [1961] independently discovered the successive shortest path algorithm. These researchers showed how to solve the minimum cost flow problem as a sequence of shortest path problems with arbitrary arc lengths. Tomizava [1971] and Edmonds and Karp [1972] independently pointed out that if the computations use node potentials, then these algorithms can be implemented so that the shortest path problems have nonnegative arc lengths.

Minty [1960] and Fulkerson [1961] independently discovered the out-of-kilter algorithm. The negative cycle algorithm is credited to Klein [1967]. Helgason and Kennington [1977] and Armstrong, Klingman and Whitman [1980] describe the

specialization of the linear programming dual simplex algorithm for the minimum cost flow problem (which is not discussed in this chapter). Each of these algorithms performs iterations that can (apparently) not be polynomially bounded. Zadeh [1973a] describes one such example on which each of several algorithms, namely the primal simplex algorithm with Dantzig's pivot rule, the dual simplex algorithm, the negative cycle algorithm (which augments flow along a most negative cycle), the successive shortest path algorithm, the primal-dual algorithm, and the out-of-kilter algorithm, performs an exponential number of iterations. Zadeh [1973b] has also described more pathological examples for network algorithms. The fact that one example is bad for many network algorithms suggests an insightful inter-relationship among the algorithms. The paper by Zadeh [1979] showed this relationship by pointing out that each of the algorithms just mentioned are indeed equivalent in the sense that they perform the same sequence of augmentations provided ties are broken using the same rule. All these algorithms essentially consist of identifying shortest paths between appropriately defined nodes and augmenting flow along these paths. Further, these algorithms obtain shortest paths using a method that can be regarded as an application of Dijkstra's algorithm.

The network simplex algorithm and its practical implementations have been most popular with operations researchers. Johnson [1966] suggested the first tree manipulating data structure for implementing the simplex algorithm. The first implementations using these ideas, due to Srinivasan and Thompson [1973] and Glover, Karney, Klingman and Napier [1974], significantly reduced the running time of the simplex algorithm. Glover, Klingman and Stutz [1974], Bradley, Brown and Graves [1977], and Barr, Glover and Klingman [1979] subsequently discovered improved data structures. The book of Kennington and Helgason [1980] is an excellent source for references and background material concerning these developments.

Researchers have conducted extensive studies to determine the most effective pricing strategy, i.e., selection of the entering variable. These studies show that the choice of the pricing strategy has a significant effect on both solution time and the number of pivots required to solve minimum cost flow problems. The candidate list strategy we described is due to Mulvey [1978a]. Goldfarb and Reid [1977], Bradley, Brown and Graves [1978], Grigoriadis and Hsu [1979], Gibby, Glover, Klingman and Mead [1983] and Grigoriadis [1986] have described other strategies that have been

effective in practice. It appears that the best pricing strategy depends both upon the network structure and the network size.

Experience with solving large scale minimum cost flow problems has established that more than 90% of the pivoting steps in the simplex method can be degenerate (see Bradley, Brown and Graves [1978], Gavish, Schweitzer and Shlifer [1977] and Grigoriadis [1986]). Thus, degeneracy is both a computational and a theoretical issue. The strongly feasible basis technique, proposed by Cunningham [1976] and independently by Barr, Glover and Klingman [1977a, 1977b, 1978], has contributed on both fronts. Computational experience has shown that maintaining a strongly feasible basis substantially reduces the number of degenerate pivots. On the theoretical front, the use of this technique led to a finitely converging primal simplex algorithm. Orlin [1985] showed, using a perturbation technique, that for integer data an implementation of the primal simplex algorithm that maintains a strongly feasible basis performs O(nmCU) pivots when used with any arbitrary pricing strategy and O(nmC log (mCU)) pivots when used with Dantzig's pricing strategy.

The strongly feasible basis technique prevents cycling during a sequence of consecutive degenerate pivots, but the number of consecutive degenerate pivots may be exponential. This phenomenon is known as stalling. Cunningham [1979] described an example of stalling and suggested several rules for selecting the entering variable to avoid stalling. One such rule is the LRC (Least Recently Considered) rule, which orders the arcs in an arbitrary, but fixed, manner. The algorithm then examines the arcs in the wrap-around fashion, each iteration starting at a place where it left off earlier, and introduces the first eligible arc into the basis. Cunningham showed that this rule admits at most nm consecutive degenerate pivots. Goldfarb, Hao and Kai [1987] have described more anti-stalling pivot rules for the minimum cost flow problem.

Researchers have also been interested in developing polynomial-time simplex algorithms for the minimum cost flow problem or its special cases. Developing a polynomial-time primal simplex algorithm for the minimum cost flow problem is still open. The only polynomial time-simplex algorithm for the minimum cost flow problem is a dual simplex algorithm due to Orlin [1984]; this algorithm performs O(n³ log n) pivots for the uncapacitated minimum cost flow problem. However, researchers have developed such algorithms for the shortest path problem, the maximum flow problem, and the assignment problem: Dial et al. [1979], Zadeh

[1979], Orlin [1985], Akgul [1985a], Goldfarb, Hao and Kai [1986] and Ahuja and Orlin [1988] for the shortest path problem; Goldfarb and Hao [1988] for the maximum flow problem; and Roohy-Laleh [1980], Hung [1983], Orlin [1985], Akgul [1985b] and Ahuja and Orlin [1988] for the assignment problem.

The relaxation algorithms proposed by Bertsekas and his associates are other attractive algorithms for solving the minimum cost flow problem and its generalizations. For the minimum cost flow problem, this algorithm maintains a pseudoflow satisfying the optimality conditions. The algorithm proceeds by either (i) augmenting flow from an excess node to a deficit node along a path consisting of arcs with zero reduced cost, or (ii) changing the potentials of a subset of nodes. In the latter case, it resets flows on some arcs to their lower or upper bounds so as to satisfy the optimality conditions; however, this flow assignment might change the excesses and deficits at nodes. The algorithm operates so that each change in the node potentials increases the dual objective function value, and when it finally determines the optimum dual objective function value, it has also obtained an optimum primal solution. Bertsekas [1985] suggested the relaxation algorithm for the minimum cost flow problem (with integer data). Bertsekas and Tseng [1985] extended this approach for the minimum cost flow problem with real data, and for the generalized minimum cost flow problem (see Section 6.6 for a definition of this problem). This relaxation algorithm has exhibited nice empirical behavior, and Bertsekas and Tseng have presented computational results for the relaxation algorithm.

A number of empirical studies have extensively tested minimum cost flow algorithms for a wide variety of network structures, data distributions, and problem sizes. The most common problem generator is NETGEN, due to Klingman, Napier and Stutz [1974], which is capable of generating assignment problems and capacitated or uncapacitated transportation and minimum cost flow problems. Glover, Karney and Klingman [1974] and Aashtiani and Magnanti [1976] have tested the primal-dual and out-of-kilter algorithms. Helgason and Kennington [1977] and Armstrong, Klingman and Whitman [1980] have reported on extensive studies of the dual simplex algorithm. The primal simplex algorithm has been a subject of more rigorous investigation; studies conducted by Glover, Karney, Klingman and Napier [1974], Bradley, Brown and Graves [1977], Mulvey [1978b], Grigoriadis and Hsu [1979] and Grigoriadis [1986] are noteworthy.

new At version of primal simplex algorithm faster than the relaxation this time. that determine a By using more effective pricing strategies good entering arc without examining all arcs. respectively. Bertsekas and Tseng [1988] have reported that their relaxation algorithm substantially faster than the primal simplex algorithm. the dual simplex algorithm. and does not evolve summarizes terms containing logarithms of C The table given in Figure 6. the primal-dual algorithm. Grigoriadis [1986] finds his algorithm. we would expect that All the the primal simplex algorithm should outperform other algorithms. of nodes and arcs. the out-of-kilter algorithm. and the relaxation code RELAX developed by Bertsekas and Tseng Polynomial-Time Algorithms In the recent past.3 these theoretical developments in solving the table reports running times for minimum cost flow problem. and the integral capacities. Computer codes public domain. computational studies have verified this expectation and until very recently the all primal simplex algorithm has been a clear winner for almost classes of network is problems. supplies and demands are bounded in absolute value by U. minimum if Recall that an algorithm in the is strongly polynomial-time its running time is polynomial number or U. m' of which are in absolute capacitated. researchers have actively pursued the design of fast (weakly) polynomial and strongly polynomial-time algorithms for the cost flow problem.173 In view of Zadeh's [1979] result. it appears that the relaxation algorithm of Bertsekas and Tseng. maximum . and the primal simplex algorithm with Dantzig's pivot rule should have comparable running times. [1988]. However. and the primal simplex algorithm due to Grigoriadis are the two fastest algorithms for solving the minimum cost flow problem in practice. It cissumes that the integral cost coefficients are bounded value by C. The networks with n nodes and m arcs. for some minimum cost flow problem are available in the These include the primal simplex codes RNET and NETFLOW developed by Grigoradis and Hsu [1979] and Kennington and Helgason [1980]. The term S() is the running time for the shortest path problem and the flow term M() represents the corresponding running time to solve a problem. we would expect that the successive shortest path algorithm.

Polynomial-Time Combinatorial Algorithms

     #   Discoverers                                  Running Time
     1   Edmonds and Karp [1972]                      O((n + m') log U S(n, m, C))
     2   Rock [1980]                                  O((n + m') log U S(n, m, C))
     3   Rock [1980]                                  O(n log C M(n, m, U))
     4   Bland and Jensen [1985]                      O(n log C M(n, m, U))
     5   Goldberg and Tarjan [1988a]                  O(nm log (n^2/m) log nC)
     6   Bertsekas and Eckstein [1988]                O(n^3 log nC)
     7   Goldberg and Tarjan [1987]                   O(n^3 log nC)
     8   Gabow and Tarjan [1987]                      O(nm log n log U log nC)
     9   Goldberg and Tarjan [1987, 1988b]            O(nm log n log nC)
    10   Ahuja, Goldberg, Orlin and Tarjan [1988]     O(nm log log U log nC) and
                                                      O(nm (log U / log log U) log nC)

Strongly Polynomial-Time Combinatorial Algorithms

For the sake of comparing the polynomial and strongly polynomial-time algorithms, we invoke the similarity assumption. For problems that satisfy the similarity assumption, the best bounds for the shortest path and maximum flow problems are:

    Polynomial-Time Bounds                               Discoverers
    S(n, m, C) = min (m log log C, m + n √(log C))       Johnson [1982], and Ahuja, Mehlhorn,
                                                         Orlin and Tarjan [1988]
    M(n, m, C) = nm log (n √(log U) / m + 2)             Ahuja, Orlin and Tarjan [1988]

    Strongly Polynomial-Time Bounds                      Discoverers
    S(n, m) = m + n log n                                Fredman and Tarjan [1984]
    M(n, m) = nm log (n^2/m)                             Goldberg and Tarjan [1986]

Edmonds and Karp [1972] developed the first (weakly) polynomial-time algorithm for the minimum cost flow problem. The RHS-scaling algorithm presented in Section 5.7, which is a variant of the Edmonds-Karp algorithm, was suggested by Orlin [1988]. The scaling technique initially did not capture the interest of many researchers, since they regarded it as having little practical utility. However, researchers gradually recognized that the scaling technique has great theoretical value as well as potential practical significance. Rock [1980] developed two different bit-scaling algorithms for the minimum cost flow problem, one using capacity scaling and the other using cost scaling. The cost scaling algorithm reduces the minimum cost flow problem to a sequence of O(n log C) maximum flow problems. Bland and Jensen [1985] independently discovered a similar cost scaling algorithm.

The pseudoflow push algorithms for the minimum cost flow problem discussed in Section 5.8 use the concept of approximate optimality, introduced independently by Bertsekas [1979] and Tardos [1985]. Bertsekas [1986] developed the first pseudoflow push algorithm; this algorithm was pseudopolynomial-time. Goldberg and Tarjan [1987] used a scaling technique on a variant of this algorithm to obtain the generic pseudoflow push algorithm described in Section 5.8. Tarjan [1984] proposed a wave algorithm for the maximum flow problem.
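The heart of a bit-scaling approach such as Rock's cost scaling is the schedule by which the data are revealed one bit at a time, so that an optimal solution of one phase is nearly optimal for the next. The sketch below is our illustration of that schedule only; the per-phase reoptimization, which is where the real work happens, is omitted.

    # A sketch of the bit-scaling schedule only (our illustration; the
    # per-phase reoptimization is omitted).
    from math import ceil, log2

    def cost_scaling_phases(costs):
        """costs: dict arc -> nonnegative integer cost, bounded by C.
        Yields K = ceil(log2(C + 1)) successively refined cost functions
        c_k(e) = c(e) >> (K - k).  Since c_k = 2 * c_(k-1) + (k-th bit),
        an optimal solution for phase k - 1 is nearly optimal for phase k."""
        C = max(costs.values())
        K = max(1, ceil(log2(C + 1)))
        for k in range(1, K + 1):
            yield {e: c >> (K - k) for e, c in costs.items()}

    # Example: costs 5 (binary 101) and 3 (011) are revealed bit by bit.
    for phase in cost_scaling_phases({('s', 'a'): 5, ('a', 't'): 3}):
        print(phase)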

The wave algorithm for the minimum cost flow problem described in Section 5.8, which was developed independently by Goldberg and Tarjan [1987] and Bertsekas and Eckstein [1988], relies upon similar ideas. Although the wave algorithm is very practical, its worst-case running time is not very attractive. Using a dynamic tree data structure in the generic pseudoflow push algorithm, Goldberg and Tarjan [1987] obtained a computational time bound of O(nm log n log nC). They also showed that the minimum cost flow problem can be solved using O(n log nC) blocking flow computations. (The description of Dinic's algorithm in Section 6.3 contains the definition of a blocking flow.) Using both finger tree (see Mehlhorn [1984]) and dynamic tree data structures, Goldberg and Tarjan [1988a] obtained an O(nm log (n^2/m) log nC) bound for the wave algorithm.

These algorithms, except the wave algorithm, require sophisticated data structures that impose a very high computational overhead. This situation has prompted researchers to investigate the possibility of improving the computational complexity of minimum cost flow algorithms without using any complex data structures. The first success in this direction was due to Gabow and Tarjan [1987], who developed a triple scaling algorithm running in time O(nm log n log U log nC). The second success was due to Ahuja, Goldberg, Orlin and Tarjan [1988], who developed the double scaling algorithm. The double scaling algorithm, as described in Section 5.9, runs in O(nm log U log nC) time. Scaling costs by an appropriately larger factor improves the algorithm to O(nm (log U / log log U) log nC), and a dynamic tree implementation improves the bound further to O(nm log log U log nC). For problems satisfying the similarity assumption, the double scaling algorithm is faster than all other algorithms for all network topologies except for very dense networks; in these instances, the algorithms by Goldberg and Tarjan appear more attractive.

Goldberg and Tarjan [1988b] and Barahona and Tardos [1987] have developed other polynomial-time algorithms. Both algorithms are based on the negative cycle algorithm due to Klein [1967]. Goldberg and Tarjan [1988b] showed that if the negative cycle algorithm always augments flow along a minimum mean cycle (a cycle W for which (Σ_{(i,j) ∈ W} c_ij) / |W| is minimum), then it is strongly polynomial-time. Goldberg and Tarjan described an implementation of this approach running in time O(nm (log n) min {log nC, m log n}).
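The mean cost being minimized here can be computed by a standard dynamic-programming recurrence over walks of fixed length. The sketch below is our illustration of that computation; this particular method is not described in the text above.

    # Our illustration of computing the minimum mean cycle value
    # mu* = min over cycles W of (sum of c_ij over W) / |W|
    # by a standard dynamic program over walks of fixed length.

    def min_mean_cycle_value(n, arcs):
        """n: number of nodes 0..n-1; arcs: list of (i, j, cost).
        Returns mu*, or None if the graph is acyclic.
        D[k][v] = minimum cost of a walk with exactly k arcs ending at v."""
        INF = float('inf')
        D = [[INF] * n for _ in range(n + 1)]
        for v in range(n):
            D[0][v] = 0.0
        for k in range(1, n + 1):
            for (i, j, c) in arcs:
                if D[k - 1][i] + c < D[k][j]:
                    D[k][j] = D[k - 1][i] + c
        best = None
        for v in range(n):
            if D[n][v] == INF:
                continue  # no walk of length n ends here
            worst = max((D[n][v] - D[k][v]) / (n - k)
                        for k in range(n) if D[k][v] < INF)
            if best is None or worst < best:
                best = worst
        return best

    # Example: the 3-cycle below has total cost -3, so mu* = -1.
    print(min_mean_cycle_value(3, [(0, 1, -2), (1, 2, 1), (2, 0, -2)]))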

Barahona and Tardos [1987], analyzing an algorithm suggested by Weintraub [1974], showed that if the negative cycle algorithm augments flow along a cycle with maximum improvement in the objective function, then it performs O(m log mCU) iterations. Since identifying a cycle with maximum improvement is difficult (i.e., NP-hard), they describe a method (based upon solving an auxiliary assignment problem) to determine a disjoint set of augmenting cycles with the property that augmenting flows along these cycles improves the flow cost by at least as much as augmenting flow along any single cycle. Their algorithm runs in O(m^2 log (mCU) S(n, m, C)) time.

Edmonds and Karp [1972] proposed the first polynomial-time algorithm for the minimum cost flow problem, and also highlighted the desire to develop a strongly polynomial-time algorithm. This desire was motivated primarily by theoretical considerations. (Indeed, in practice, the terms log C and log U typically range from 1 to 20, and are sublinear in n.) Strongly polynomial-time algorithms are theoretically attractive for at least two reasons: (i) they might provide, in principle, network flow algorithms that can run on real valued data as well as integer valued data, and (ii) they might, at a more fundamental level, identify the source of the underlying complexity in solving a problem; i.e., are problems more difficult or equally difficult to solve as the values of the underlying data become increasingly larger?

The first strongly polynomial-time minimum cost flow algorithm is due to Tardos [1985]. Several researchers, including Orlin [1984], Fujishige [1986], Galil and Tardos [1986], and Orlin [1988], provided subsequent improvements in the running time. Goldberg and Tarjan [1988a] obtained another strongly polynomial-time algorithm by slightly modifying their pseudoflow push algorithm. Goldberg and Tarjan [1988b] also show that their algorithm that proceeds by cancelling minimum mean cycles is strongly polynomial-time. Currently, the fastest strongly polynomial-time algorithm is due to Orlin [1988]. This algorithm solves the minimum cost flow problem as a sequence of O(min (m log U, m log n)) shortest path problems. For very sparse networks, the worst-case running time of this algorithm is nearly as low as the best weakly polynomial-time algorithm, even for problems that satisfy the similarity assumption.

Interior point linear programming algorithms are another source of polynomial-time algorithms for the minimum cost flow problem. Kapoor and Vaidya [1986] have shown that Karmarkar's [1984] algorithm, when applied to the minimum cost flow problem, performs O(n^2.5 mK) operations, where K = log n + log C + log U.

Vaidya [1986] suggested another algorithm for linear programming that solves the minimum cost flow problem in O(n^2.5 √m K) time. Asymptotically, these time bounds are worse than that of the double scaling algorithm.

At this time, the research community has yet to develop sufficient evidence to fully assess the computational worth of scaling and interior point linear programming algorithms for the minimum cost flow problem. According to the folklore, even though they might provide the best worst-case bounds on running times, they are not as efficient as the non-scaling algorithms in practice. Boyd and Orlin [1986] have obtained contradictory results: testing the right-hand-side scaling algorithm for the minimum cost flow problem, they found the scaling algorithm to be competitive with the relaxation algorithm for some classes of problems. Bland and Jensen [1985] also reported encouraging results with their cost scaling algorithm. We believe that when implemented with appropriate speed-up techniques, scaling algorithms have the potential to be competitive with the best other algorithms.

6.5 Assignment Problem

The assignment problem has been a popular research topic. The primary emphasis in the literature has been on the development of empirically efficient algorithms rather than on the development of algorithms with improved worst-case complexity. Although the research community has developed several different algorithms for the assignment problem, many of these algorithms share common features. The successive shortest path algorithm, described in Section 5.4 for the minimum cost flow problem, appears to lie at the heart of many assignment algorithms. This algorithm is implicit in the first assignment algorithm, due to Kuhn [1955] and known as the Hungarian method, and is explicit in the papers by Tomizava [1971] and Edmonds and Karp [1972].

When applied to an assignment problem on the network G = (N1 ∪ N2, A), the successive shortest path algorithm operates as follows. To use this solution approach, we first transform the assignment problem into a minimum cost flow problem by adding a source node s and a sink node t, and introducing arcs (s,i) for all i ∈ N1 and (j,t) for all j ∈ N2; these arcs have zero cost and unit capacity. The algorithm successively obtains a shortest path from s to t with respect to the linear

programming reduced costs, updates the node potentials, and augments one unit of flow along the shortest path. The algorithm solves the assignment problem by n applications of the shortest path algorithm for nonnegative arc lengths, and so runs in O(n S(n,m,C)) time, where S(n,m,C) is the time needed to solve a shortest path problem with nonnegative arc lengths. For a naive implementation of Dijkstra's algorithm, S(n,m,C) is O(n^2), and for a Fibonacci heap implementation it is O(m + n log n). For problems satisfying the similarity assumption, S(n,m,C) is min (m log log C, m + n √(log C)).

The fact that the assignment problem can be solved as a sequence of n shortest path problems with arbitrary arc lengths follows from the works of Jewell [1958], Iri [1960] and Busaker and Gowen [1961] on the minimum cost flow problem. However, Tomizava [1971] and Edmonds and Karp [1972] independently pointed out that working with reduced costs leads to shortest path problems with nonnegative arc lengths. Weintraub and Barahona [1979] worked out the details of the Edmonds-Karp algorithm for the assignment problem. The more recent threshold assignment algorithm by Glover, Glover and Klingman [1986] is also a successive shortest path algorithm; it integrates their threshold shortest path algorithm (see Glover, Glover and Klingman [1984]) with the flow augmentation process. Carraresi and Sodini [1986] also suggested a similar threshold assignment algorithm. Hoffman and Markowitz [1963] pointed out the transformation of a shortest path problem to an assignment problem.

Kuhn's [1955] Hungarian method is the primal-dual version of the successive shortest path algorithm. After solving a shortest path problem and updating the node potentials, the Hungarian method solves a (particularly simple) maximum flow problem to send the maximum possible flow from the source node s to the sink node t using arcs with zero reduced cost. Whereas the successive shortest path algorithm augments flow along one path in an iteration, the Hungarian method augments flow along all the shortest paths from the source node to the sink node. If we use the labeling algorithm to solve the resulting maximum flow problems, then these applications take a total of O(nm) time overall, since there are n augmentations and each augmentation takes O(m) time. Consequently, the Hungarian method, too, runs in O(nm + n S(n,m,C)) = O(n S(n,m,C)) time. (For some time after the development of the Hungarian method as described by Kuhn, the research community considered it to be an O(n^4) method; Lawler [1976] described an O(n^3) implementation of the method.)
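The following compact sketch (ours, not code from the papers cited; written for a dense n x n cost matrix) implements the shortest augmenting path scheme with node potentials that underlies both the successive shortest path algorithm and the Hungarian method: each of the n stages runs a Dijkstra-like scan over the reduced costs cost[i][j] - u(i) - v(j) and then augments one unit of flow.

    # A compact sketch (ours) of the shortest augmenting path scheme
    # with node potentials for a dense n x n assignment problem.
    # Internally 1-based; column 0 is a virtual root.

    def assignment_shortest_paths(cost):
        """cost: n x n matrix.  Returns (optimal value, row -> column)."""
        n = len(cost)
        INF = float('inf')
        u = [0] * (n + 1)          # row potentials
        v = [0] * (n + 1)          # column potentials
        p = [0] * (n + 1)          # p[j]: row matched to column j (0 = free)
        way = [0] * (n + 1)        # predecessor column on the alternating path
        for i in range(1, n + 1):
            p[0] = i
            j0 = 0
            minv = [INF] * (n + 1)
            used = [False] * (n + 1)
            while True:            # grow a shortest path tree until a
                used[j0] = True    # free column is reached
                i0 = p[j0]
                delta, j1 = INF, 0
                for j in range(1, n + 1):
                    if not used[j]:
                        cur = cost[i0 - 1][j - 1] - u[i0] - v[j]
                        if cur < minv[j]:
                            minv[j], way[j] = cur, j0
                        if minv[j] < delta:
                            delta, j1 = minv[j], j
                for j in range(n + 1):    # dual update keeps all reduced
                    if used[j]:           # costs nonnegative
                        u[p[j]] += delta
                        v[j] -= delta
                    else:
                        minv[j] -= delta
                j0 = j1
                if p[j0] == 0:
                    break
            while j0:              # augment along the alternating path
                j1 = way[j0]
                p[j0] = p[j1]
                j0 = j1
        match = [0] * n
        for j in range(1, n + 1):
            match[p[j] - 1] = j - 1
        value = sum(cost[i][match[i]] for i in range(n))
        return value, match

    # Example: print(assignment_shortest_paths([[4, 1], [2, 3]]))  # (3, [1, 0])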

Jonker and Volgenant [1986] suggested some practical improvements of the Hungarian method. Subsequently, many researchers realized that the Hungarian method in fact runs in O(n S(n,m,C)) time. Derigs [1985] notes that the shortest path computations underlie this method, and that it runs in O(n S(n,m,C)) time.

The relaxation approach is due to Dinic and Kronrod [1969], Hung and Rom [1980] and Engquist [1982]. This approach is closely related to the successive shortest path algorithm. Both approaches start with an infeasible assignment and gradually make it feasible; the major difference is in the nature of the infeasibility. The successive shortest path algorithm maintains a solution with unassigned persons and objects, and with no person or object overassigned. Throughout the relaxation algorithm, every person is assigned, but objects may be overassigned or unassigned. Both approaches maintain optimality of the intermediate solution and work toward feasibility by solving at most n shortest path problems with nonnegative arc lengths. The algorithms of Dinic and Kronrod [1969] and Engquist [1982] are essentially the same as the one we just described, but the shortest path computations are somewhat disguised in the paper of Dinic and Kronrod [1969]. The algorithm of Hung and Rom [1980] maintains a strongly feasible basis rooted at an overassigned node and, after each augmentation, reoptimizes over the previous basis to obtain another strongly feasible basis. All of these algorithms run in O(n S(n,m,C)) time.

Another algorithm worth mentioning is due to Balinski and Gomory [1964]. This is a primal algorithm that maintains a feasible assignment and gradually converts it into an optimum assignment by augmenting flows along negative cycles or by modifying node potentials.

Researchers have also studied primal simplex algorithms for the assignment problem. The basis of the assignment problem is highly degenerate; of its 2n-1 variables, only n are nonzero. Probably because of this excessive degeneracy, the mathematical programming community did not conduct much research on the network simplex method for the assignment problem until Barr, Glover and Klingman [1977a] devised the strongly feasible basis technique. These authors developed the details of the network simplex algorithm when implemented to maintain a strongly feasible basis for the assignment problem; they also reported encouraging computational results. Subsequent research focused on developing

polynomial-time simplex algorithms. Roohy-Laleh [1980] developed a simplex pivot rule requiring O(n^3) pivots. Hung [1983] describes a pivot rule that performs at most O(n^2) consecutive degenerate pivots and at most O(n log nC) nondegenerate pivots; hence, his algorithm performs O(n^3 log nC) pivots. Akgul [1985b] suggested another primal simplex algorithm performing O(n^2) pivots. This algorithm essentially amounts to solving n shortest path problems and runs in O(n S(n,m,C)) time.

Orlin [1985] studied the theoretical properties of Dantzig's pivot rule for the network simplex algorithm and showed that for the assignment problem this rule requires O(n^2 log nC) pivots; a naive implementation of the algorithm runs in O(n^2 m log nC) time. Ahuja and Orlin [1988] described a scaling version of Dantzig's pivot rule that performs O(n^2 log C) pivots and can be implemented to run in O(nm log C) time using simple data structures. The algorithm essentially consists of pivoting in any arc with sufficiently large reduced cost. The algorithm defines the term "sufficiently large" iteratively; initially, this threshold value equals C, and within O(n^2) pivots its value is halved.

Balinski [1985] developed the signature method, which is a dual simplex algorithm for the assignment problem. (Although his basic algorithm maintains a dual feasible basis, it is not a dual simplex algorithm in the traditional sense because it does not necessarily increase the dual objective at every iteration; some variants of this algorithm do have this property.) Balinski's algorithm performs O(n^2) pivots and runs in O(n^3) time. Goldfarb [1985] described some implementations of Balinski's algorithm that run in O(n^3) time using simple data structures and in O(nm + n^2 log n) time using Fibonacci heaps.

The auction algorithm is due to Bertsekas and uses basic ideas originally suggested in Bertsekas [1979]. Bertsekas and Eckstein [1988] described a more recent version of the auction algorithm. Our presentation of the auction algorithm and its analysis is somewhat different than the one given by Bertsekas and Eckstein [1988]. For example, the algorithm we have presented increases the prices of the objects by one unit at a time, whereas the algorithm by Bertsekas and Eckstein increases prices by the maximum amount that preserves ε-optimality of the solution. Bertsekas [1981] has presented another algorithm for the assignment problem which is in fact a specialization of his relaxation algorithm for the minimum cost flow problem (see Bertsekas [1985]).
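For comparison with the unit-increment version, the sketch below (our illustration; the code and its names are ours) implements the common textbook form of the auction algorithm, in which each bid raises an object's price by the maximum amount that preserves ε-optimality, as in the Bertsekas-Eckstein variant.

    # Our illustrative sketch of the basic auction algorithm for the
    # n x n assignment problem, stated as maximizing total benefit.

    def auction(benefit, eps=None):
        """benefit: n x n integer matrix.  With eps < 1/n the final
        assignment is optimal.  Returns person -> object."""
        n = len(benefit)
        if eps is None:
            eps = 1.0 / (n + 1)
        price = [0.0] * n
        owner = [-1] * n               # object -> person
        assigned = [-1] * n            # person -> object
        unassigned = list(range(n))
        while unassigned:
            i = unassigned.pop()
            values = [benefit[i][j] - price[j] for j in range(n)]
            best = max(range(n), key=lambda j: values[j])
            second = max((values[j] for j in range(n) if j != best),
                         default=values[best])
            price[best] += values[best] - second + eps   # the bid
            if owner[best] != -1:      # the previous owner is outbid
                assigned[owner[best]] = -1
                unassigned.append(owner[best])
            owner[best] = i
            assigned[i] = best
        return assigned

    # Example: print(auction([[4, 1], [2, 3]]))  # [0, 1]

Each bid raises a contested object's price by at least ε, which is what guarantees termination.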

Currently, the best strongly polynomial-time bound to solve the assignment problem is O(nm + n^2 log n), which is achieved by many assignment algorithms. Scaling algorithms can do better for problems that satisfy the similarity assumption. Gabow [1985], using bit-scaling of costs, developed the first scaling algorithm for the assignment problem. His algorithm performs O(log C) scaling phases and solves each phase in O(n^{3/4} m) time, thereby achieving an O(n^{3/4} m log C) time bound. Using the concept of ε-optimality, Gabow and Tarjan [1987] developed another scaling algorithm running in time O(n^{1/2} m log nC). Observe that the generic pseudoflow push algorithm for the minimum cost flow problem described in Section 5.8 solves the assignment problem in O(nm log nC) time, since every push is a saturating push. Bertsekas and Eckstein [1988] showed that the scaling version of the auction algorithm runs in O(nm log nC) time. Section 5.11 has presented a modified version of this algorithm given in Orlin and Ahuja [1988], who also improved the time bound of the auction algorithm to O(n^{1/2} m log nC). This time bound is comparable to that of Gabow and Tarjan's algorithm, but the two algorithms would probably have different computational attributes. For problems satisfying the similarity assumption, these two algorithms achieve the best time bound to solve the assignment problem without using any sophisticated data structure.

As mentioned previously, most of the research effort devoted to assignment algorithms has stressed the development of empirically faster algorithms. Over the years, many computational studies have compared one algorithm with a few other algorithms. Some representative computational studies are those conducted by Barr, Glover and Klingman [1977a] on the network simplex method, by McGinnis [1983] and Carpento, Martello and Toth [1988] on the primal-dual method, by Engquist [1982] on the relaxation methods, and by Glover et al. [1986] and Jonker and Volgenant [1987] on the successive shortest path methods. Since no paper has compared all of these algorithms, it is difficult to assess their computational merits. Nevertheless, results to date seem to justify the following observations about the algorithms' relative performance. The primal simplex algorithm is slower than the primal-dual, relaxation and successive shortest path algorithms. Among the latter three approaches, the successive shortest path algorithms due to Glover et al. [1986] and Jonker and Volgenant [1987] appear to be the fastest. Bertsekas and Eckstein [1988] found that the scaling version of the auction algorithm is competitive with Jonker and Volgenant's algorithm. Carpento, Martello and Toth [1988] present

several FORTRAN implementations of assignment algorithms for dense and sparse cases.

6.6 Other Topics

Our domain of discussion in this paper has featured single commodity network flow problems with linear costs. Several other generic topics in the broader problem domain of network optimization are of considerable theoretical and practical interest. In particular, four other topics deserve mention: (i) generalized network flows; (ii) convex cost flows; (iii) multicommodity flows; and (iv) network design. We shall now discuss these topics briefly.

Generalized Network Flows

The flow problems we have considered in this chapter assume that arcs conserve flow, i.e., the flow entering an arc equals the flow leaving the arc. In models of generalized network flows, arcs do not necessarily conserve flow. If x_ij units of flow enter an arc (i,j), then r_ij x_ij units "arrive" at node j; r_ij is a nonnegative flow multiplier associated with the arc. If 0 < r_ij < 1, then the arc is lossy, and if 1 < r_ij < ∞, then the arc is gainy. In the conventional flow networks, r_ij = 1 for all arcs. Generalized network flows arise in many application contexts. For example, the multiplier might model pressure losses in a water resource network or losses incurred in the transportation of perishable goods.

Researchers have studied several generalized network flow problems. An extension of the conventional maximum flow problem is the generalized maximum flow problem, which either maximizes the flow out of a source node or maximizes the flow into a sink node (these objectives are different!). The source version of the problem can be stated as the following linear program:

    Maximize  v_s                                                          (6.1a)

subject to

    Σ_{ {j: (i,j) ∈ A} } x_ij  -  Σ_{ {j: (j,i) ∈ A} } r_ji x_ji
        =  v_s, if i = s;  0, if i ≠ s, t;  -v_t, if i = t;
        for all i ∈ N,                                                     (6.1b)

because of flow losses and gains within arcs. for all (i. find their implementation to be very efficient in practice. The approach. The generalized maximum flow problem has many similarities with the minimum minimum cost flow problem.j) Cjj (x^j). the objective function can be written in the form V (i. and the primal-dual algorithm for the cost flow problem apply to the generalized maximum flow problem. The recent paper by Goldberg. and Klingman among they Elam it is et al. which is an extension of the ordinary minimum cost flow problem. is essentially a primal-dual algorithm. convex cost flow problems with separable cost functions. note that Vg not necessarily equal to v^. Further. are not pseudopolynomial-time. Even problems with nonseparable. Problems containing nonconvex nonseparable cost terms such as xj2 e A are substantially X-J3 more difficult to solve and continue to pose a significant challenge for the mathematical programming community. . the negative cycle algorithm. These are three main approaches to solve this problem. The third approach. find that about 2 to 3 times slower than their implementations for the ordinary minimum [1988b]. j) e A.184 < x^j < uj: . Glover others.. is due to Jewell [1982]. mainly because the optimal arc flows and node potentials might be fractional. Convex Cost Flows We shall restrict this brief discussion to i. These algorithms. but convex objective functions are more difficult to solve.e. we wish to determine the minimum first cost flow in a generalized network satisfying the specified supply/demand requirements of nodes. The paper by Truemper [1977] surveys these approaches. due to Bertsekeis and Tseng generalizes their minimum cost flow relaxation algorithm for the generalized minimum cost flow problem. Extended versions of the successive shortest path algorithm. cost flow algorithm. Plotkin and Tardos [1986] describes the first polynomial-time combinatorial algorithms for the generalized maximum flow problem. Note that the capacity restrictions apply to the flows entering is the arcs. however. The second approach [1979] the primal simplex algorithm studied by Elam. typically. In the generalized minimum cost flow problem.

analysts rely on general nonlinear programming techniques to solve these problems.

The separable convex cost flow problem has the following formulation:

    Minimize  Σ_{(i,j) ∈ A} C_ij(x_ij)                                     (6.2a)

subject to

    Σ_{ {j: (i,j) ∈ A} } x_ij  -  Σ_{ {j: (j,i) ∈ A} } x_ji  =  b(i),
        for all i ∈ N,                                                     (6.2b)

    0 ≤ x_ij ≤ u_ij,  for all (i,j) ∈ A.                                   (6.2c)

In this formulation, each C_ij(x_ij) is a convex function. The research community has focused on two classes of separable convex cost flow problems: (i) those in which each C_ij(x_ij) is a piecewise linear function; and (ii) those in which each C_ij(x_ij) is a continuously differentiable function. The solution techniques used to solve these two classes of problems are quite different.

There is a well-known technique for transforming a separable convex program with piecewise linear functions to a standard linear program (see, e.g., Bradley, Hax and Magnanti [1977]). This transformation reduces the convex cost flow problem to a minimum cost flow problem: it introduces one arc for each linear segment in the cost functions, thus increasing the problem size. However, it is possible to carry out this transformation implicitly and therefore to modify many minimum cost flow algorithms, such as the successive shortest path algorithm, the negative cycle algorithm, and the primal-dual and out-of-kilter algorithms, to solve convex cost flow problems without increasing the problem size. The paper by Ahuja, Batra and Gupta [1984] illustrates this technique and suggests a pseudopolynomial time algorithm.
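The segment-per-arc transformation mentioned above is easy to state in code. The sketch below (our illustration; the segment data are hypothetical) expands one arc with a piecewise linear convex cost into parallel arcs, one per linear segment.

    # A sketch of the segment-per-arc transformation (our illustration;
    # the segment data below are hypothetical).

    def expand_convex_arc(i, j, segments):
        """segments: list of (segment_capacity, per_unit_cost) with
        nondecreasing per-unit costs, i.e., a piecewise linear convex
        cost.  Returns parallel arcs (i, j, capacity, cost) whose
        aggregate cost function equals the original one; a minimum cost
        flow solver fills the cheaper segments first, so the expansion
        is faithful."""
        slopes = [c for _, c in segments]
        assert slopes == sorted(slopes), "cost function must be convex"
        return [(i, j, cap, c) for (cap, c) in segments]

    # Example: slope 1 on [0,3], slope 2 on [3,5], slope 5 on [5,6].
    print(expand_convex_arc('a', 'b', [(3, 1), (2, 2), (1, 5)]))

Convexity is essential here: if a later segment were cheaper than an earlier one, the solver could fill it first and the expanded network would no longer represent the original cost function.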

Observe that it is possible to use a piecewise linear function, with linear segments chosen (if necessary) with sufficiently small size, to approximate a convex function of one variable to any desired degree of accuracy. More elaborate alternatives are possible. For example, if we knew the optimal solution to a separable convex problem a priori (which, of course, we don't), then we could solve the problem exactly using a linear approximation for any arc (i,j) with only three breakpoints: at 0, u_ij, and the optimal flow on the arc. Any other breakpoint in the linear approximation would be irrelevant, and adding other points would be computationally wasteful. This observation has prompted researchers to devise adaptive approximations that iteratively revise the linear approximation based upon the solution to a previous, coarser approximation. (See Meyer [1979] for an example of this approach.) If we were interested in only integer solutions, then we could choose the breakpoints of the linear approximation at the set of integer values, and therefore solve the problem in pseudopolynomial time.

Researchers have suggested other solution strategies, using ideas from nonlinear programming, for solving the general separable convex cost flow problem. Some important references on this topic are Ali, Helgason and Kennington [1978], Kennington and Helgason [1980], Meyer and Kao [1981], Dembo and Klincewicz [1981], Klincewicz [1983], Rockafellar [1984], Florian [1986], and Bertsekas, Hosein and Tseng [1987]. Some versions of the convex cost flow problem can be solved in polynomial time. Minoux [1984] has devised a polynomial-time algorithm for the minimum quadratic cost flow problem, one of its special cases. Minoux [1986] has also developed a polynomial-time algorithm to obtain an integer optimum solution of the convex cost flow problem.

Multicommodity Flows

Multicommodity flow problems arise when several commodities use the same underlying network but share common arc capacities. In this section, we state a linear programming formulation of the multicommodity minimum cost flow problem and point the reader to contributions to this problem and its specializations. Suppose that the problem contains r distinct commodities numbered 1 through r, and let b^k denote the supply/demand vector of commodity k. Then the multicommodity minimum cost flow problem can be formulated as follows:

    Minimize  Σ_{k=1}^{r} Σ_{(i,j) ∈ A} c_ij^k x_ij^k                      (6.3a)

subject to

    Σ_{ {j: (i,j) ∈ A} } x_ij^k  -  Σ_{ {j: (j,i) ∈ A} } x_ji^k  =  b_i^k,
        for all i and all k,                                               (6.3b)

    Σ_{k=1}^{r} x_ij^k ≤ u_ij,  for all (i,j) ∈ A,                         (6.3c)

    0 ≤ x_ij^k ≤ u_ij^k,  for all (i,j) and all k.                         (6.3d)

In this formulation, x_ij^k and c_ij^k represent the amount of flow and the unit cost of flow for commodity k on arc (i,j). As indicated by the "bundle constraints" (6.3c), the total flow on any arc cannot exceed its capacity. Further, as captured by (6.3d), the model contains additional capacity restrictions on the flow of each commodity on each arc.

Observe that if the multicommodity flow problem does not contain the bundle constraints, then it decomposes into r single commodity minimum cost flow problems, one for each commodity. With the presence of the bundle constraints, the essential problem is to distribute the capacity of each arc to the individual commodities in a way that minimizes overall flow costs.

We first consider some special cases. The multicommodity maximum flow problem is a special instance of (6.3). In this problem, every commodity k has a source node and a sink node, represented respectively by s^k and t^k. The objective is to maximize the sum of the flows that can be sent from s^k to t^k for all k. Hu [1963] showed how to solve the two-commodity maximum flow problem on an undirected network in pseudopolynomial time by a labeling algorithm. Rothfarb, Shein and Frisch [1968] showed how to solve the multicommodity maximum flow problem with a common source or a common sink by a single application of any maximum flow algorithm. Ford and Fulkerson [1958] solved the general multicommodity maximum flow problem using a column generation algorithm; Dantzig and Wolfe [1960] subsequently generalized this decomposition approach to linear programming.

Researchers have proposed three basic approaches for solving the general multicommodity minimum cost flow problem: price-directive decomposition, resource-directive decomposition, and partitioning methods.
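To illustrate formulation (6.3) and the role of the bundle constraints, the following toy example (hypothetical data; it uses scipy.optimize.linprog purely as a generic linear programming solver) routes two commodities over two parallel arcs whose capacities are shared.

    # A toy instance of formulation (6.3) (hypothetical data).
    # Two commodities, nodes 0 and 1, and two parallel arcs from 0 to 1:
    # arc a: cost 1, shared capacity 1; arc b: cost 3, shared capacity 2.
    from scipy.optimize import linprog

    # variables x[k][arc], flattened as [x0a, x0b, x1a, x1b]
    c = [1, 3, 1, 3]                    # objective (6.3a)
    A_eq = [[1, 1, 0, 0],               # commodity 0 ships one unit (6.3b)
            [0, 0, 1, 1]]               # commodity 1 ships one unit (6.3b)
    b_eq = [1, 1]
    A_ub = [[1, 0, 1, 0],               # bundle constraint, arc a (6.3c)
            [0, 1, 0, 1]]               # bundle constraint, arc b (6.3c)
    b_ub = [1, 2]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
    print(res.fun)                      # 4.0

Without the bundle constraints the two commodities would solve independently and both would choose the cheap arc, for a total cost of 2; the coupling forces one of them onto the expensive arc, raising the optimal cost to 4.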

We refer the reader to the excellent surveys by Assad [1978] and Kennington [1978] for descriptions of these methods. The book by Kennington and Helgason [1980] describes the details of a primal simplex decomposition algorithm for the multicommodity minimum cost flow problem. Unfortunately, algorithmic developments on the multicommodity minimum cost flow problem have not progressed at nearly the pace of the progress made on the single commodity minimum cost flow problem. Although specialized primal simplex software can solve the single commodity problem 10 to 100 times faster than general purpose linear programming systems, the algorithms developed for the multicommodity minimum cost flow problem generally solve these problems only about 3 times faster than the general purpose software (see Ali et al. [1984]).

Network Design

We have focused on solution methods for finding optimal routings in a network; that is, on analysis rather than synthesis. The design problem is of considerable importance in practice and has generated an extensive literature of its own. Typically, network design models contain 0-1 variables y_ij that indicate whether or not an arc is included in the network. Many design problems can be stated as fixed cost network flow problems: (some) arcs have an associated fixed cost which is incurred whenever the arc carries any flow. These models typically involve multicommodity flows x_ij^k. The design decisions y_ij and the routing decisions are related by "forcing" constraints of the form

    Σ_{k=1}^{r} x_ij^k ≤ u_ij y_ij,  for all (i,j) ∈ A,

which replace the bundle constraints (6.3c) of the multicommodity flow problem (6.3). These constraints force the flow x_ij^k of each commodity k on arc (i,j) to be zero if the arc is not included in the network design; if the arc is included, the constraint on arc (i,j) restricts the total flow to be at most the arc's design capacity u_ij. Many modelling enhancements are possible; for example, some applications may restrict the underlying network topology (for instance, in some applications the network must be a tree; in other applications the network might

network design problems require solution techniques from any integer programming and other type of solution methods from combinatorial optimization. Acknowledgments We Wong and are grateful to Michel Goemans. and Prime Computer. . is many different objective functions arise in practise. These solution methods include dynamic programming. One of the most popular "" Minimize £ ^ k=l (i^j)e k c• k x^^ + Y. Usually. Inc. and by Grants from Analog Devices. The research Presidential of the first and third authors was supported in part by the Young Investigator Grant 8451517-ECS of the National Science Foundation. optimization-based heuristics. by Grant AFOSR-88-0088 from the Air Force Office of Scientific Research. Lav^ence Wolsey . 1987] have described the broad range of applicability of network design models and summarize solution methods network design literature. ^ (i.j) A V ij € A (as well zs fixed costs k which models commodity dependent per unit routing costs c Fjj for • the design arcs).Richard Robert Tarjan for a careful reading of the manuscript and many for useful suggestions. Apple Computer. Magnanti and Wong [1984] and Minoux [1985. We are particularly grateful to William Cunningham many valuable and detailed comments.189 need alternate paths to ensure reliable operations). dual ascent procedures.. Hershel Safer. and integer programming decomposition (Lagrangian relaxation. Also. Benders decomposition) as well as emerging ideas from the field of polyhedral combinatorics. for these problems as well as many references from the [1988] discuss Nemhauser and Wolsey many underlying methods from integer programming and combinatorial optimization.

. 055-76.K. Sloan School Management. of Shortest Path and Simplex Method...E... Department State University. Orlin. Akgul.B. J. Technical Report No. Orlin. for the Shortest Path. A Fast and Simple Algorithm for the Maximum M. K. Faster Algorithms for the Shortest Path Problem.B. Hop>croft. Working Paper No. 1988. 1976. Finding Minimum-Cost Rows by Double of Scaling. 1987. Cambridge. Working Paper 1966-87. and R. R.. J.. M. Operations Research Center. Flow Algorithms. L. and R. and J.B.K.T.K.B. and Orlin. 1988..D. N. Problem. Batra.B. R. and J.K..E. Tarjan. Research Report. M. Reading.K.190 References Aashtiani. Stein. R. Magnanti. Cambridge.. 2047-88. 1984.K. J. Personal Communication. 1985a. OR Aho.E.B. MA. Ullman. K.I. Gupta.C. 193.. Cambridge. To appear.T. MA. and S. 1988. R. M. A. MA.I. Tarjan. Res. Addison-Wesley. R.. 1988. Ahuja. Mehlhom. Sloan School of Management. J.. To appear.T.T. C. R. A Parametric Algorithm for the Convex Cost Network Flow and Related Problems. Improved Algorithms for Network Flow Problen«.V. Implementing Prin\al-E>ual Network Operations Research Center. Tarjan.I. R. Cambridge.I. 1988. Technical Report Cambridge. M. R. Working Paper 1905-87. Kodialam.K. The Design and Analysis of Computer Algorithms.I. ]. 1987. MA. To appear Ahuja. Orlin. K.E.E. . and R. North Carolina Raleigh. Orlin. and T. R. Flow Problem. Orlin. Ahuja. 1974. Improved Primal Simplex Algorithms Cost Flow Problems. Ahuja. Bipartite J.V. MA. Res. Sloan School of Management. 222-25 Goldberg. M.A. J. .B. Tarjan. MA. in Oper. J. A. Ahuja. and Ahuja. Assignment and Minimum and Ahuja. Computer Science and Operations Research. Orlin.T. Improved Time Bounds for the Maximum Flow M. Euro. Ahuja. 1988.of Oper. L. 16.. H. J.

Glover. Tardos.. Raleigh. Basis Algorithm Ban. Symposium on . and D. L. of Mathematics. R. A. Baratz. The Convex Cost Netwrork Flow Problem: A State-of-the-Art Survey. A Network Augmenting of the International Path Basis Algorithm for Transshipment Problems. Klingman. V.191 Akgul.127-134. MA. B.. Patty. 1984. Signature Methods for the Assignment Problem. Proceedings External Methods and System Analysis. D. Klingman. Helgason. Oper.C. 1978. Res. 1980. Glover. Technical Report OREM 78001. Dept. J. A. McCarl and P. Research Report. J. Cambridge. Barahona. 33. Laboratory for Computer Science. 16. 1987. M. I. N. MIT. Implementation and Analysis of a Variant of the Dual Method for the Capacitated Transshipment Problem. 1977a.T. Euro. Man. A Genuinely Polynomial Primal Simplex Algorithm for the Research Report. A Survey. K. Sci. Note on Weintraub's Minimum Cost Flow Algorithm. A Primal Method for the Assignment and Transportation Problems. Kennington.. B. A. Math. and E. Comory. Texeis. 527-536.L. Forces Karzanov Algorithm to O(n^) Running Time. and D. Prog. Armstrong. F. Farhangian. 1977b.E.. Trans. 12. The Alternating Path for the Assignment Problem. Department of Computer Science and Assignment Problem. 10. M.D. 4. Whitman. R.. North Carolina State University. Shetty. Ali. 1978. and J. Res. and R. R.I. Bamett... Oper. R. 1-13. M. Klingman.I. Balinski. F. Barr. Multicommodity Network Problems: Applications and Computations. Construction and Analysis of a Network Flow Problem Which Technical Report TM-83. 1985. 1964. Wong.37-91.. M.L. Cambridge. Assad. Operations Research.. 578-593.E. F. Ali. MA. Networks 8. 403-420. Kennington. LIE. B. D. Multicommodity Network Flows Balinski. 1985b. and D. 1977. Southern Methodist University.

Relaxation Methods for Network J. Data Networks. Distributed Relaxation Methods for Linear Network Flow Problems. of Operations Research 14.. M. 1958. D. Math. Generalized Alternating Path Algorithm for Transportation Problems. D. Oper. P. 1978. Prog. IXial Coordinate Step Methods for Linear Network Flow Problems. and D. A Nev^ Algorithm for the Assignment Problem.. D. 1985. Greece. Res. Series B. P. D.. Bazaraa.P. Ghouila-Houri. and R. Glover. Prog. C. Laboratory Cambridge. Tseng. Math. 1981. 152-171. of Spanning Tree Labeling Procedures for Network Optimization. in Math. MA. A Unified Framev^ork for Primal-Dual Methods in Minimum Cost Network Flow Problems. 87-90.I. Flow Problems with Convex Arc Costs. R.I. John Wiley 1979. Gallager. and 1978. Linear Programming and Network Flows. QuaH. Prentice-Hall. Glover. of 25th IEEE Conference on Decision and Control. 16. Bertsekas. Bertsekas.. M. 1987. and P. Bertsekas. 1987. Cambridge. A Distributed Algorithm for the Assignment Problem. ]. Klingman. On a Routing Problem. P.. 105-123. R. 25.1219-1243. M.P. Prog. 1987. D. R.. A... for Information Decision Systems. SIAM of Control and Optimization . Bertsekas. Bertsekas. 32.J.. Laboratory for Information Decision systems. 1979. and A. P. Barr.T.T. Berge. Also in Annals 1988. D. Appl. 125-145. To appear Bertsekas. Euro. Working Paper. Report LIDS-P-1653. Proc. 16-34. Athens. 137-144. F. D.P. 1962. Bellman.P. Bertsekas. INFOR J. and J. John Wiley & Sons. Programming. D. & Sons. The Auction Algorithm: A Distributed Relaxation Method for the Assignment Problem. and D. Enhancement 17. MA..192 Barr. 21. Klingman. Eckstein. Hosein. P. 2. Jarvis. Games and Transportation Networks. Math. Bertsekas. . 1986.

B. A Procedure for Determining a Family of 15. Van Emde. L. O. 1988a. Theory 10. P. (eds. Toth. 1986. 23.. Bland. R. S.. et (ed. Cornell University. Applied Mathematical Programming. of Vehicles L. and P. 86-93. Bertsekas. Technical Report 661. R. Simeone et al. J. 1977.B. G. Tseng. Cheriyan. for Linear Minimum Cost Network Flow Problems. L. Addison-Wesley. C. D. Magnanti. Operational MD. 1961. Graves. 10.). of Operations Research 13. Bombay. Sys. Busaker. Eur. and E. Baltimore. Zijlstra. and Orlin. Tata Institute of Fundamental Research. On the Computational Behavior of a Polynomial-Time Network Flow Algorithm. Comp. 125-190. Design and Implementation of an Efficient Priority Queue. India. and D.P. Bradley. A. and P. Minimal-Cost Network Flow Patterns. Routing and Scheduling and Crews. Research Office..R. 36.. 93-114. and M. Technical Report No. D. Ithaca. and G. 1977. and J. G. An Efficient Algorithm for the Bipartite Matching Problem. 1985.O. 193-224.G. School of Operations Research and Industrial Engineering. Bodin. Boyd.. and T. Tseng. Parametrized Worst Case Networks for Preflow Push Algorithms.. A.193 Bertsekas. Man. Golden. Simeone. C. 1-38. FORTRAN Codes for Network As Annals and P... Hax. John Hopkins University. 1988b. Carraresi. Oper. 1977.Y. and P. Res. 21. Brown.J. P. 1983. Carpento. S. Math.L. Optimization. P. Technical Report. 1988. 1986. O. In B. Bradley. G. 1988. The Relax Codes al. Boas. Res. Personal Communication. A. In B. Res.). Algorithms and Codes for the Assignment Problem. Design and Implementation of Large Sri. FORTRAN Codes for Network As Annals and J. Kaas. Computer Science Group. Relaxation Methods for Minimum Cost Ordinary and Generalized Network Flow Problems. Martello. Sodini. . Assad. Scale Primal Transshipment Algorithms. 99-127.. Optimization. of Operations Research 33. 65-211.P. Gowen.G. Oper. Ball. A.. D. N. R. Jensen. Oper.

B. All Shortest Routes in a Graph. Indian Institute of Technology.W. G. Analysis of Production and Allocation.B.W.H. Dept. (ed. Res. Princeton University Press. 1979. 101-111. Cheung. 6. Decomposition Principle for Linear Programs. G. Algorithm for Cor\struction of Maximum Flow in Networks with Complexity of OCV^ Economical Problems 7. NY. Graph Theory : An Algorithmic Approach. Economeirica 23. Wolfe. ACM Trans. Activity Koopmans 359-373. India. 105-116. N. R. 187-190. 4. Man.R. On the Max-Flow Min-Cut Theorem of Networks. G. NJ. 174-183. .C. Tucker (ed.. Academic Press.N. Dantzig. Linear Programming and Extensions. In P. W. 1-16. 1956. Computational Comparison of Eight Methods for the Mzocimum Network Flow Problem. Application of the Simplex Method to a Transportation Problem. 215-221. Princeton. 196-208.194 Cheriyan. Dantzig. Cunningham. 11. 1967. B.B. 1960. Flow. 1975. Christophides. Cunningham.). Upper Bounds. Vl ) Operation. New Delhi. of Computer Science and Engineering.). Analysis of Preflow Push Algorithms for Maximum Network Technical Report. 1960. G. 1987. On the Shortest Route through a Network.B. In H. Linear Inequalities and Related Systems. W.. and S. Math.H. Princeton University Press. Sd.B. G.V. T. Rfs. Dantzig. and Block Triangularity Programming. on Math. Mafft.. Dantzig. Dantzig. G. Rosenthiel Graphs. Dantzig. 91-92. Kuhn and A. Pro^. G. John Wiley & Sons. Fulkerson. Dantzig. and P. 1951. Oper. 1977. Theoretical Properties of the Network Simplex Method. A Network Simplex Method. Software 6. 8. 1976. Maheshwari. Cherkasky. Inc. of Oper. J. 1962. Annals of Mathematics Study 38. and D. Secondary Constraints.B. (ed. in Linear 1955. In T. Mathematical Methods of Solution of 112-125 (in Russian)..). Theory of Gordon and Breach. 1980.

Technical Report. 1324-1326. ACM 12. and D. Ontario. 632-633. Study 15. D. Networks 9. 161-186. Algorithm 360: Shortest Path Forest with Topological Ordering. and M.. U. R. Denardo. Kamey. 1979. Reaching. Dial.57-102. R. Dinic. and J. A Scaled Reduced Gradient Algorithm for Costs. 1984. Fox..V. 1988. 275-323. Dijkstra. Dokl. Prog. University of Waterloo. 300. Springer-Verlag. Meier. Shortest Path Algorithms: Taxonomy and Annotation. 125-147. 1981. Kronrod. Network Flow Problen\s with Convex Separable Deo. E. Unpublished paper.L. Klingman. Dial. 27. Canada. An Algorithm for Solution of the Assignment Problem. Networks 14. Comm. G.195 Dembo. 1979. University of Bayreuth. Soviet Maths. Pruning and Buckets. Edmonds. 1970. Glover. and Vol. The Shortest Augmenting Path Method for Solving Assignment Problems: 4.. U. 1277-1280. R. Shortest-Route Methods: 1. U. 1969. Exponential Grov^h of the Simplex Method for the Shortest Path Problem.. 1988. 1959.A.. A Computational Arvalysis of Alternative Algorithms and Labeling Techniques for Finding Shortest Path Trees. Implementing Goldberg's Max-Flow Algorithm: A Computational Investigation. Math. Klincewicz. Res. S. W.269-271. 1985. Programming in Networks and Graphs. . and C Pang. E. A Note on Two Problems in Connexion with Graphs. Algorithm for Solution of a Problem of Soviet Maximum Flow in Networks with Power Estimation.A. and B. E. Motivation and Computational Experience. 11. E. 2-[5-248. West Germany.. Annals of Operations Research Derigs. 1970. 1969. Numeriche Mathematics 1. Derigs. Derigs. Lecture Notes in Economics and Mathematical Systems. J.A. Math. Doklady 10. F. Oper. Dinic. N.

AM Comput. Santa Monica. Feiitstein. Jr. of Oper. S. R. Prog. Study 26. 1956. Even. Theory TT-2. Technical Report TM-80.E. and D. Network Flow and Testing Graph Connectivity. M. A Successive Shortest Path Algorithm for the Assignment Problem. J. J. and D. Man. 5.. Elias.W. 248-264. Elam. J.. Comm. The Max-Flow Algorithm of Dinic and Karzanov: An Exposition. 24-32. Math. 1956. 507-518.. 1979. 1956. SI S. Fulkerson. Even. 4. A. Tarjan. 1987. 1979.M.I. ACM 19. 1975. INFOR 20. 8. 1972. Report Rand Corp.. L. Ford. 167-196. M. Solving the Trar\sportation Problem. CA. and C. Research Report.R. 370-384. Nonlinear Cost Network Models in Transportation Analysis. Graph Algorithms. Note on Maximum Flow Through a Network. A Strongly Convergent Primal Simplex Algorithm for Generalized Networks. Ford. Canad. Cambridge.E. S. Iowa Algorithmica. 1956. Computer Science Press.R. 1982. 1976. State University.R.. Sd. and R. L. Department of Computer Science.. F. Ford. 1986. Maryland. and D. Fulkerson. Jr. On the Efficiency of Maximum Flow To appear in Algorithms on Networks with Small Integer Capacities. Shannon. 399-404. 3. Femandez-Baca. and R. Martel. Infor.. Even.R. }. P. and C. Florian. Ames. 39-59. IRE Trans. Network Flow Theory. Math. 4. Math.R. 117-119. on Engquist. lA. L. Res. 1962.. Jr. Theoretical Improvements in Algorithmic Efficiency for Network Flow Problems.U. 345. . Maximal Flow through a Network. D.. /.T. Floyd. Laboratory for Computer Science.. MA.. Algorithm 97: Shortest Path. Klingman. >4CM P-923.196 Edmonds. Karp. M. Glover..

. R. . 148-168.E. Math. Fredman. R. Constructing Maximal Dynamic Flows from Static Flows.R. Naval Res.. R. SIAM ]. 9. Dantzig.R. 6. 197 Ford. L. H. and D.R.. 5.. Discrete Location Theory.. also in /. 1985.ofComput. Jr. (submitted). Comp. Fulkerson. L. H. M. 47-54. An 0(m^ log n) Capacity -Rounding Algorithm for the Minimum Problem: A Dual Framework of Tardos' Algorithm. 25th Annual IEEE Symp. Prog. 1986. I. Flows in Networks. Ford. Man. Fulkerson. Computation of Maximum Flow in Networks. and Transportation Networks. D. Math. 31. Communication. Fulkerson. and Frisch. Res. S. D... 1958. and C. Scaling Algorithms for Network Problems. 1988. and P. Fulkerson. of Computing 83 - 89. Quart. R.R. Fulkerson. L. 338-346. Addison-Wesley. A Suggested Computation for Maximal Multicommodity Network Flow. Fredman. 596-615.Sci.L. and D. 1986. An Out-of-Kilter Method for Minimal Cost Flow Problems. Gabow. Cost Circulation 298-309.N. Francis. M.). 1955. L.R. 1957. Tarjan. 35. Tarjan. Faster Scaling Algorithms for Network SIAM ]. Comput. 1961. 1962. Gabow. Appl. Sci of ACM 34(1987). Frank. and DR. and D. on the Complexity of the Shortest Path Problem.. 18-27. Fujishige. Transmission. 1987. Jr. Naval Res. 2. and Problems.. J. 1984.R. Fulkerson.. Princeton. Ford.. 4.N.. 1971.B.. To appear. H.E. SIAM J.. Logist. Fibonacci Heaps and Their Uses in of Improved Network Optimization Algorithms. Jr. on Found. Oper.. John Wiley & Sons.T. 97-101. 277-283.Sys. New Bounds 5. NJ. Quart. Princeton University Press. 1958. and R. Ford.R. Log. 419-433. Mirchandani (eds. Sci. A Primal-Dual Algorithm for the Capacitated Hitchcock Problem. L.

226-240. 14. Z. Glover. Gilsinn. Maffioli. Glover. 1984. A Comparison of Pivot Selection Rules for Primal Simplex Based Network Codes. Z. and M. Res. F. 1973. 3-79. and C. Gallo. B. and S. Klingman.. Minimum Cost Network Eow Problem. An 0(VE log^ V) Algorithm for the Maximum Flow Problem. Min-Cost Flow Algorithm.. F. OCV^/S E^/^) Algorithm for the Maximum Flow Problem. /.. The Threshold Shortest Path Algorithm. and A. Italy. on the Found. Glover. ofComput. Bureau of Standards. Gavish. (eds. Sys.. 1981. Pallottino. Proc. Witzgall. 1977. Klingman.. and Primal-Dual Computer Codes 4. Networks 191-212. S. C. J. Math. Glover. No. Math. Schweitzer. Shortest Paths: A Bibliography. P.. Network Flow Algorithms. Naamad. 12. Kamey. G. Glover. R. Klingman. 1986. Galil. Z. Gibby.. F. In Fortran Codes for Network Optimization. Simeone. and Its E. Ruggen. D. D. Klingman. Pallottino. 1988. Shortest Path Algorithms. 1.198 GaUl. B. and D. G. and S. 221-242. Shlifer. G. 1980. D. P. Letters 2. Netxvorks 14. Gallo. Washington. EXial 1974. 1980. R. F. and D. Study 26. Toth. Theoretical Comp.. Gallo. Sofmat Document 81 -PI -4-SOFMAT-27. A Performance Comparison of Labeling Technical Note 772. National Algorithms for Calculating Shortest Path Trees. Galil. and E. Pallottino As Annals of Operations Research 13. F. Sci.. Sci. and G. D. Z. .). 1983. On the Theoretical Efficiency of Various 103-111. 199-202. Mead. Implementation and Computational for Comparisons of Primal. 136-146. Prog. 27th Annual Symp.. The Zero Pivot Phenomenon in Transportation Problems and Computational Implications. Threshold Assignment Algorithm. of Comp. 1986. 21. Glover. 1982. An 0(n^(m + n log n) log n) Sci. 203-217. Galil. 12-37.C. Tardos. Oper. . Starchi. Acta Informatica 14. Prog. Rome. and D.

Glover. and RE. Stutz. and D. Change Criteria. D.I. 293-298. Schneider. Applications of Management Glover.. 1106-1128. Naval Res. and N.V..I..V. 65-73. Klingman. 109-175. for the F. F. 1985. Combiiuitorial Algorithms for the Generalized Circulation Problem. 1988. New Polynomial Sci. Laboratory for Computer MA..E. Phillips. D. Science 3. D. Shortest Path Algorithms and Their Computational Attributes. 1985. AIIE Transactions Glover.. F. Cambridge. Proc. Solving Minimum Cost Flow Problem by of Proc. 1984. Basis and Solution Algorithms Problem. 12. on the Theory Comp. Klingman. Tarjan. 33. Napier. Sd. 1979. A. D. Klingman. . 1985.. J. A Computational Study on for Tranportation Start Procedures. INFOR Goldberg. Kamey. S. 20.. M. A. Goldberg. N. Whitman. A New Max-Flow for Algorithm. Oper. and R. 19th ACM Symp. Technical Report MIT/LCS/TM-291. 1986. A Primal Simplex Variant Maximum Flow F. Whitman. and R.. 31. M. and D.. Glover. Klingman. 41-61. J. A New Polynomially Bounded Shortest Path Algorithm. D. R. 136-146. F. A. and A. Netvk'ork Applications in Industry and Government. 1974. Phillips. 9. A New Approach to the Maximum Flow /.V. 1976. D.T. 1987. Comprehensive Computer Evaluation and Enhancement of Maximum Flow Algorithms. 18th ACM Symp. Tarjan. Laboratory Computer Science. Quart. Goldberg.. Goldberg. Research Report. 136-146. Problem. 363-376. Man. To appear in ACM. 1974. Augmented Threaded Index Method for Network Optimization. Res.A. Successive Approximation. Mote. Klingman. and J. Klingman. Logis. MA. and Tardos. 793-813. Klingman. Glover. A. Problem..199 Glover. Mote. D. Plotkin.F. and D. Man. 31. on the Theory of Comput. Science. Glover. F. E.V. Cambridge..T.

D. Networks 149-183. NY. Goldfarb. and T.E.D. R... At Most nm Pivots and O(n^m) Time. 7. Technical Report. Goldberg. B. MA. 1977.. Golden. A Practicable Steepest Edge Simplex Algorithm. Goldfarb. in New York. E. Res. D.V. Hao.. NY. and T. f. Optimization. Successive Approximation. Goldfarb. In B. T. 1985. New York. and R.. M. C. Reid. 1988a. Goldfarb. Efficient Dual Simplex Algorithms for the Assignment Problem. Gomory. )To (A revision of Goldberg and Tarjan appear in Math. 1987. Kai. B. 1986. Department of Operations Research and Industrial Engineering. Multi-Terminal Network Flows. 12.. A Primal Simplex Algorithm that Solves the Maximum Flow Problem University. Department of Operations Research and Industrial Engineering. Prog. Hao.) FORTRAN Codes for Network Goldfarb. 1961. and M. 83-124. Tarjan. J. Research Report. 1988b. Deterministic Network Optimization: A Bibliography. . Columbia University..200 Goldberg. Goldfarb. 2(Hh ACM Golden. and J. NY. and R. on the Theory of Comp.V. Finding Minimum-Cost Circulations by Symp. 388-397. D. 551-570. 1S7-203. Department of Operations Research and Columbia University. Seminar given OperatJons Research Center. Cambridge. Columbia New York.. J. Math. and S. D. L. Solving Minimum Cost Flow Problem by [1987]. 1986. 33. and Network Simplex Methods for Maximum Simeone et al. Industrial Engineering.ofSlAM 9. (eds. A. and J. Prog.K. Canceling Negative Cycles. I.361-371. D. Controlled Rounding of Tabular Data for the Cerisus Bureau at the : An Application of LP and Networks. Hao. Magnanti. Math. Proc.E.. D. A. Grigoriadis.. Taijan. A Computational Comparison of the Dinic Flow. 1988. Anti-Stalling Pivot Rules for the Network Simplex Algorithm. Efficient Shortest Path Simplex Algorithms. Research Report. . Hu. As Annals of Operations Research 13. Oper. 1977. Kai. 1988. and S.

1977. University of California. A. Technical Report No. Phys . Martel.. Comput. 1986. Computer Science and Engineering. YALEN/DCS/TR-356. R. New Hamachar. Network Row. 224-230. 1963. Res. 20. University. D. 26. D. M. Numerical Investigations on the Maximal Flow Algorithm of 22. 375-379. 1941. Yale Haven. and J. Oper. of a Product from Several Sources to Numerous Facilities. Computing Hassin.M... C. CT. and D. M. C. F. /. Johnson. 10. 1963. Fast Algorithms for Bipartite Gusfield. Prog. Very Simple Algorithms and Programs Dept. and H. Quart. of for All Pairs Network Flow Analysis. R. Helgason. Grigoriadis. D. CA. 63-68. Wiley-Interscience. D. 1984. . SIAM of Comp. Graphs and Algorithms. 2. Subroutines. Maximum Flow in Undirected Planar Networks. 1978. H. Hsu. . 344-260. M. Bulletin of the ACM Gusfield. Hu. V. Grigoriadis. L. M. Karp. Markowitz. Naval Hopcroft.. Research Report No. Vol. 17-18. B. Personal Communication. 1985. 1979. J. Programming and Related Areas: A Classified Bibliography. M.. 17-29. Lecture Notes in Economics and Mathematical Systems. CSE-87-1. An O(nlog^n) Algorithm for 14. E. Math. Hausman. 1973. Log. 83-111. 1985. Davis. Femandez-Baca. 160. An Efficient Procedure for 9. Implementing Hitchcock. 612-^24. The Distribution Math. A Note on Shortest Path.. Hoffman. Assignment. 1979. and R. D. Study Grigoriadis. The Rutgers Minimum Cost Network Flow 26. Integer SIAM J.. Res. An n ' Algorithm for Maximun Matching in Bipartite Graphs. and M.-< Karzanov. L. Minoux. 11. J. . An Efficient Implementation of the Network Simplex Method. J. and Transportation Problems. Springer-Verlag. a Dual-Simplex Network Flow Algorithm. 1988.201 Gondran. and D. Kennington. T. Multicommodity Network Flows. AIIE Trans. D. and T. SIGMAP 1987. 225-231.

202

Hu, T.C.

1969. Integer Programming and Network Flours.

Addison-Wesley.

Hung, M.
Oper.Res.

S.

1983.

A

Polynomial Simplex Method for the Assignment Problem.

31,595-600.

Hung, M.
Oper. Res
.

S.,

and W. O. Rom.

1980.

Solving the Assignment Problem by Relaxation.

28, 969-892.

Imai, H.

1983.

On

the Practical Efficiency of

Various

Maximum Flow

Algorithms,

/.

Oper. Res. Soc. Japan

26,61-82.

Imai, H.,

and M.

Iri.

1984.

Practical Efficiencies of Existing Shortest-Path Algorithms
/.

and
Iri,

a

New

Bucket Algorithm.

of the Oper. Res. Soc. Japan 27, 43-58.

M.

1960.

A New Method

of Solving Transportation-Network Problems.

J.

Oper.

Res. Soc. Japan 3, 27-87.

Iri,

M.

1969. Network Flaws, Transportation and Scheduling.

Academic

Press.

Itai,

A.,

and

Y. Shiloach.

1979.

Maximum Flow

in Planar

Networks.

SIAM

J.

Comput.

8,135-150.

Jensen, P.A., and

W.

Barnes.

1980.

Network Flow Programming. John Wiley

&

Sons.

Jewell,

W.

S.

1958.

Optimal Flow Through Networks.

Interim Technical Report

No.

8,

Operation Research Center, M.I.T., Cambridge,

MA.
Gair>s.

Jewell,
499.

W.

S.

1962.

Optimal Flow Through Networks with

Oper. Res.

10, 476-

Johnson, D. B. 1977a. Efficient Algorithms for Shortest Paths in Sparse Networks.

/.

ACM

24,1-13.

JohT\son, D. B.

1977b.

Efficient Special

Purpose Priority Queues.
1-7.

Proc. 15th

Annual

Allerton Conference on

Comm., Control and Computing,

Johnson, D.

B.

1982.

A

Priority

Queue

in

Which

Initialization

and Queue

Operations Take

OGog

log D) Time. Math. Sys. Theory 15, 295-309.

Johnson, D.B., and S. Venkatesan. 1982. Using Divide and Conquer to Find Flows in Directed Planar Networks in O(n^{3/2} log n) Time. In Proceedings of the 20th Annual Allerton Conference on Comm., Control, and Computing. Univ. of Illinois, Urbana-Champaign, IL.
Johnson, E.L. 1966. Networks and Basic Solutions. Oper. Res. 14, 619-624.
Jonker, R., and T. Volgenant. 1986. Improving the Hungarian Assignment Algorithm. Oper. Res. Letters 5, 171-175.
Jonker, R., and A. Volgenant. 1987. A Shortest Augmenting Path Algorithm for Dense and Sparse Linear Assignment Problems. Computing 38, 325-340.
Kantorovich, L.V. 1939. Mathematical Methods in the Organization and Planning of Production. Publication House of the Leningrad University, 68 pp. Translated in Man. Sci. 6 (1960), 366-422.

Kapoor, S., and P. Vaidya. 1986. Fast Algorithms for Convex Quadratic Programming and Multicommodity Flows. Proc. of the 18th ACM Symp. on the Theory of Comp., 147-159.
Karmarkar, N. 1984. A New Polynomial-Time Algorithm for Linear Programming. Combinatorica 4, 373-395.
Karzanov, A.V. 1974. Determining the Maximal Flow in a Network by the Method of Preflows. Soviet Math. Doklady 15, 434-437.
Kastning, C. 1976. Integer Programming and Related Areas: A Classified Bibliography. Lecture Notes in Economics and Mathematical Systems, Vol. 128. Springer-Verlag.
Kelton, W.D., and A.M. Law. 1978. A Mean-time Comparison of Algorithms for the All-Pairs Shortest-Path Problem with Arbitrary Arc Lengths. Networks 8, 97-106.
Kennington, J.L. 1978. A Survey of Linear Cost Multicommodity Network Flows. Oper. Res. 26, 209-236.
Kennington, J.L., and R.V. Helgason. 1980. Algorithms for Network Programming. Wiley-Interscience, New York, NY.

Kershenbaum, A. 1981. A Note on Finding Shortest Path Trees. Networks 11, 399-400.
Klein, M. 1967. A Primal Method for Minimal Cost Flows. Man. Sci. 14, 205-220.
Klincewicz, J.G. 1983. A Newton Method for Convex Separable Network Flow Problems. Networks 13, 427-442.
Klingman, D., A. Napier, and J. Stutz. 1974. NETGEN: A Program for Generating Large Scale Capacitated Assignment, Transportation, and Minimum Cost Flow Network Problems. Man. Sci. 20, 814-821.
Koopmans, T.C. 1947. Optimum Utilization of the Transportation System. Proceedings of the International Statistical Conference, Washington, DC. Also reprinted as supplement to Econometrica 17 (1949).
Kuhn, H.W. 1955. The Hungarian Method for the Assignment Problem. Naval Res. Log. Quart. 2, 83-97.
Lawler, E.L. 1976. Combinatorial Optimization: Networks and Matroids. Holt, Rinehart and Winston.
Magnanti, T.L. 1981. Combinatorial Optimization and Vehicle Fleet Planning: Perspectives and Prospects. Networks 11, 179-214.
Magnanti, T.L., and R.T. Wong. 1984. Network Design and Transportation Planning: Models and Algorithms. Trans. Sci. 18, 1-55.
Malhotra, V.M., M.P. Kumar, and S.N. Maheshwari. 1978. An O(|V|^3) Algorithm for Finding Maximum Flows in Networks. Inform. Process. Lett. 7, 277-278.
Martel, C.U. 1987. A Comparison of Phase and Non-Phase Network Flow Algorithms. Research Report, Dept. of Electrical and Computer Engineering, University of California, Davis, CA.
McGinnis, L.F. 1983. Implementation and Testing of a Primal-Dual Algorithm for the Assignment Problem. Oper. Res. 31, 277-291.
Mehlhorn, K. 1984. Data Structures and Algorithms. Springer-Verlag.

Meyer, R.R. 1979. Two Segment Separable Programming. Man. Sci. 25, 285-295.
Meyer, R.R., and C.Y. Kao. 1981. Secant Approximation Methods for Convex Optimization. Math. Prog. Study 14, 143-162.
Minieka, E. 1978. Optimization Algorithms for Networks and Graphs. Marcel Dekker, New York.
Minoux, M. 1984. A Polynomial Algorithm for Minimum Quadratic Cost Flow Problems. Eur. J. Oper. Res. 18, 377-387.
Minoux, M. 1985. Network Synthesis and Optimum Network Design Problems: Models, Solution Methods and Applications. Technical Report, Laboratoire MASI, Universite Pierre et Marie Curie, Paris, France.
Minoux, M. 1986. Solving Integer Minimum Cost Flows with Separable Convex Cost Objective Polynomially. Math. Prog. Study 26, 237-239.
Minoux, M. 1987. Network Synthesis and Dynamic Network Optimization. Annals of Discrete Mathematics 31, 283-324.
Minty, G.J. 1960. Monotone Networks. Proc. Roy. Soc. London 257, Series A, 194-212.
Moore, E.F. 1957. The Shortest Path Through a Maze. In Proceedings of the International Symposium on the Theory of Switching Part II; The Annals of the Computation Laboratory of Harvard University 30, Harvard University Press, 285-292.
Mulvey, J. 1978a. Pivot Strategies for Primal-Simplex Network Codes. J. ACM 25, 266-270.
Mulvey, J. 1978b. Testing a Large-Scale Network Optimization Program. Math. Prog. 15, 291-314.
Murty, K.G. 1976. Linear and Combinatorial Programming. John Wiley & Sons.
Nemhauser, G.L., and L.A. Wolsey. 1988. Integer and Combinatorial Optimization. John Wiley & Sons.
Orden, A. 1956. The Transshipment Problem. Man. Sci. 2, 276-285.

Orlin, J.B. 1983. Maximum-Throughput Dynamic Network Flows. Math. Prog. 27, 214-231.
Orlin, J.B. 1984. Genuinely Polynomial Simplex and Non-Simplex Algorithms for the Minimum Cost Flow Problem. Technical Report No. 1615-84, Sloan School of Management, M.I.T., Cambridge, MA.
Orlin, J.B. 1985. On the Simplex Algorithm for Networks and Generalized Networks. Math. Prog. Study 24, 166-178.
Orlin, J.B. 1988. A Faster Strongly Polynomial Minimum Cost Flow Algorithm. Proc. 20th ACM Symp. on the Theory of Comp., 377-387.
Orlin, J.B., and R.K. Ahuja. 1987. New Distance-Directed Algorithms for Maximum Flow and Parametric Maximum Flow Problems. Working Paper 1908-87, Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA.
Orlin, J.B., and R.K. Ahuja. 1988. New Scaling Algorithms for the Assignment and Minimum Cycle Mean Problems. Working Paper No. OR 178-88, Operations Research Center, M.I.T., Cambridge, MA.
Papadimitriou, C.H., and K. Steiglitz. 1982. Combinatorial Optimization: Algorithms and Complexity. Prentice-Hall.
Pape, U. 1974. Implementation and Efficiency of Moore-Algorithms for the Shortest Route Problem. Math. Prog. 7, 212-222.
Pape, U. 1980. Algorithm 562: Shortest Path Lengths. ACM Trans. Math. Software 6, 450-455.
Phillips, D.T., and A. Garcia-Diaz. 1981. Fundamentals of Network Analysis. Prentice-Hall.
Pollack, M., and W. Wiebenson. 1960. Solutions of the Shortest-Route Problem - A Review. Oper. Res. 8, 224-230.
Potts, R.B., and R.M. Oliver. 1972. Flows in Transportation Networks. Academic Press.
Rock, H. 1980. Scaling Techniques for Minimal Cost Network Flows. In U. Pape (ed.), Discrete Structures and Algorithms, Carl Hanser, Munich, 101-191.

Rockafellar, R.T. 1984. Network Flows and Monotropic Optimization. Wiley-Interscience.
Roohy-Laleh, E. 1980. Improvements to the Theoretical Efficiency of the Network Simplex Method. Unpublished Ph.D. Dissertation, Carleton University, Ottawa, Canada.
Rothfarb, B., N.P. Shein, and I.T. Frisch. 1968. Common Terminal Multicommodity Flow. Oper. Res. 16, 202-205.
Sheffi, Y. 1985. Urban Transportation Networks: Equilibrium Analysis with Mathematical Programming Methods. Prentice-Hall.
Shiloach, Y. 1978. An O(nI log^2(I)) Maximum Flow Algorithm. Technical Report STAN-CS-78-702, Computer Science Dept., Stanford University, CA.
Shiloach, Y., and U. Vishkin. 1982. An O(n^2 log n) Parallel Max-Flow Algorithm. J. Algorithms 3, 128-146.
Sleator, D.D., and R.E. Tarjan. 1983. A Data Structure for Dynamic Trees. J. Comput. Sys. Sci. 26, 362-391.
Smith, D.K. 1982. Network Optimisation Practice: A Computational Guide. John Wiley & Sons.
Srinivasan, V., and G.L. Thompson. 1973. Benefit-Cost Analysis of Coding Techniques for the Primal Transportation Algorithm. J. ACM 20, 194-213.
Swamy, M.N.S., and K. Thulasiraman. 1981. Graphs, Networks, and Algorithms. John Wiley & Sons.
Syslo, M.M., N. Deo, and J.S. Kowalik. 1983. Discrete Optimization Algorithms. Prentice-Hall, New Jersey.
Tabourier, Y. 1973. All Shortest Distances in a Graph: An Improvement to Dantzig's Inductive Algorithm. Disc. Math. 4, 83-87.
Tardos, E. 1985. A Strongly Polynomial Minimum Cost Circulation Algorithm. Combinatorica 5, 247-255.
Tarjan, R.E. 1983. Data Structures and Network Algorithms. SIAM, Philadelphia, PA.

Tarjan, R.E. 1984. A Simple Version of Karzanov's Blocking Flow Algorithm. Oper. Res. Letters 2, 265-268.
Tarjan, R.E. 1986. Algorithms for Maximum Network Flow. Math. Prog. Study 26, 1-11.
Tarjan, R.E. 1988. Personal Communication.
Tomizawa, N. 1971. On Some Techniques Useful for Solution of Transportation Network Problems. Networks 1, 173-194.
Truemper, K. 1977. On Max Flow with Gains and Pure Min-Cost Flows. SIAM J. Appl. Math. 32, 450-456.
Vaidya, P. 1987. An Algorithm for Linear Programming which Requires O(((m+n)n^2 + (m+n)^{1.5} n)L) Arithmetic Operations. Proc. of the 19th ACM Symp. on the Theory of Comp., 29-38.
Van Vliet, D. 1978. Improved Shortest Path Algorithms for Transport Networks. Transp. Res. 12, 7-20.
Von Randow, R. 1982. Integer Programming and Related Areas: A Classified Bibliography 1978-1981. Lecture Notes in Economics and Mathematical Systems, Vol. 243. Springer-Verlag.
Von Randow, R. 1985. Integer Programming and Related Areas: A Classified Bibliography 1981-1984. Lecture Notes in Economics and Mathematical Systems, Vol. 287. Springer-Verlag.
Wagner, R.A. 1976. A Shortest Path Algorithm for Edge-Sparse Graphs. J. ACM 23, 50-57.
Warshall, S. 1962. A Theorem on Boolean Matrices. J. ACM 9, 11-12.
Weintraub, A. 1974. A Primal Algorithm to Solve Network Flow Problems with Convex Costs. Man. Sci. 21, 87-97.

Weintraub, A., and F. Barahona. 1979. A Dual Algorithm for the Assignment Problem. Departamento de Industrias Report No. 2, Universidad de Chile-Sede Occidente, Chile.
Whiting, P.D., and J.A. Hillier. 1960. A Method for Finding the Shortest Route Through a Road Network. Oper. Res. Quart. 11, 37-40.
Williams, J.W.J. 1964. Algorithm 232: Heapsort. Comm. ACM 7, 347-348.
Zadeh, N. 1972. Theoretical Efficiency of the Edmonds-Karp Algorithm for Computing Maximal Flows. J. ACM 19, 184-192.
Zadeh, N. 1973a. A Bad Network Problem for the Simplex Method and other Minimum Cost Flow Algorithms. Math. Prog. 5, 255-266.
Zadeh, N. 1973b. More Pathological Examples for Network Flow Problems. Math. Prog. 5, 217-224.
Zadeh, N. 1979. Near Equivalence of Network Flow Algorithms. Technical Report No. 26, Dept. of Operations Research, Stanford University, CA.
