^"V.

^^

Dewey

ALFRED

P.

WORKING PAPER SLOAN SCHOOL OF MANAGEMENT

NETWORK FLOWS
Ravindra K. Ahuja Thomas L. Magnanti James B. Orlin

Sloan W.P. No. 2059-88

August 1988 Revised: December, 1988

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 50 MEMORIAL DRIVE CAMBRIDGE, MASSACHUSETTS 02139


NETWORK FLOWS

Ravindra K. Ahuja*, Thomas L. Magnanti, and James B. Orlin

Sloan School of Management
Massachusetts Institute of Technology
Cambridge, MA. 02139

* On leave from Indian Institute of Technology, Kanpur - 208016, INDIA


NETWORK FLOWS

OVERVIEW

1. Introduction
   1.1 Applications
   1.2 Complexity Analysis
   1.3 Notation and Definitions
   1.4 Network Representations
   1.5 Search Algorithms
   1.6 Developing Polynomial-Time Algorithms

2. Basic Properties of Network Flows
   2.1 Flow Decomposition Properties and Optimality Conditions
   2.2 Cycle Free and Spanning Tree Solutions
   2.3 Networks, Linear and Integer Programming
   2.4 Network Transformations

3. Shortest Paths
   3.1 Dijkstra's Algorithm
   3.2 Dial's Implementation
   3.3 R-Heap Implementation
   3.4 Label Correcting Algorithms
   3.5 All Pairs Shortest Path Algorithm

4. Maximum Flows
   4.1 Labeling Algorithm and the Max-Flow Min-Cut Theorem
   4.2 Decreasing the Number of Augmentations
   4.3 Shortest Augmenting Path Algorithm
   4.4 Preflow-Push Algorithms
   4.5 Excess-Scaling Algorithm

5. Minimum Cost Flows
   5.1 Duality and Optimality Conditions
   5.2 Relationship to Shortest Path and Maximum Flow Problems
   5.3 Negative Cycle Algorithm
   5.4 Successive Shortest Path Algorithm
   5.5 Primal-Dual and Out-of-Kilter Algorithms
   5.6 Network Simplex Algorithm
   5.7 Right-Hand-Side Scaling Algorithm
   5.8 Cost Scaling Algorithm
   5.9 Double Scaling Algorithm
   5.10 Sensitivity Analysis
   5.11 Assignment Problem

Reference Notes

References


NETWORK FLOWS

Perhaps no subfield of mathematical programming is more alluring than network optimization. Highway, rail, electrical, communication and many other physical networks pervade our everyday lives. As a consequence, even non-specialists recognize the practical importance and the wide ranging applicability of networks. Moreover, because the physical operating characteristics of networks (e.g., flows on arcs and mass balance at nodes) have natural mathematical representations, practitioners and non-specialists can readily understand the mathematical descriptions of network optimization problems and the basic nature of techniques used to solve these problems. This combination of widespread applicability and ease of assimilation has undoubtedly been instrumental in the evolution of network planning models as one of the most widely used modeling techniques in all of operations research and applied mathematics.

Network optimization is also alluring to methodologists. Networks provide a concrete setting for testing and devising new theories. Indeed, network optimization has inspired many of the most fundamental results in all of optimization. For example, price directive decomposition algorithms for both linear programming and combinatorial optimization had their origins in network optimization. So did cutting plane methods and branch and bound procedures of integer programming, primal-dual methods of linear and nonlinear programming, and polyhedral methods of combinatorial optimization. In addition, networks have served as the major prototype for several theoretical domains (for example, the field of matroids) and as the core model for a wide variety of min/max duality results in discrete mathematics.

Moreover, network optimization has served as a fertile meeting ground for ideas from optimization and computer science. Many results in network optimization are routinely used to design and evaluate computer systems, and ideas from computer science concerning data structures and efficient data manipulation have had a major impact on the design and implementation of many network optimization algorithms.

The aim of this paper is to summarize many of the fundamental ideas of network optimization. In particular, we concentrate on network flow problems and highlight a number of recent theoretical and algorithmic advances. We have divided the discussion into the following broad major topics:

Applications
Basic Properties of Network Flows
Shortest Path Problems
Maximum Flow Problems
Minimum Cost Flow Problems
Assignment Problems

Much of our discussion focuses on the design of provably good (e.g., polynomial-time) algorithms. Among good algorithms, we have presented those that are simple and are likely to be efficient in practice. We have attempted to structure our discussion so that it not only provides a survey of the field for specialists, but also serves as an introduction and summary to non-specialists who have a basic working knowledge of the rudiments of optimization, particularly linear programming.

In this chapter, we limit our discussion to the problems listed above. Some important generalizations of these problems, such as (i) generalized network flows, (ii) multicommodity flows, and (iii) network design, will not be covered in our survey. We will, however, briefly describe these problems in Section 6.6 and provide some important references.

As a prelude to the remainder of our discussion, in this section we present several important preliminaries. We discuss (i) different ways to measure the performance of algorithms; (ii) graph notation and various ways to represent networks quantitatively; (iii) a few basic ideas from computer science that underlie the design of many algorithms; and (iv) two generic proof techniques that have proven to be useful in designing polynomial-time algorithms.

1.1 Applications

Networks arise in numerous application settings and in a variety of guises. In this section, we briefly describe a few prototypical applications. Our discussion is intended to illustrate a range of applications and to be suggestive of how network flow problems arise in practice; a more extensive survey would take us far beyond the scope of our discussion. To illustrate the breadth of network applications, we consider some models requiring solution techniques that we will not describe in this chapter. For the purposes of this discussion, we will consider four different types of networks arising in practice:

•  Physical networks (streets, railbeds, pipelines, wires)
•  Route networks
•  Space-time networks (scheduling networks)
•  Derived networks (through problem transformations)

These four categories are not exhaustive and overlap in coverage. Nevertheless, they provide a useful taxonomy for summarizing a variety of applications. Network flow models are also used for several purposes:

•  Descriptive modeling (answering "what is?" questions)
•  Predictive modeling (answering "what will be?" questions)
•  Normative modeling (answering "what should be?" questions, that is, performing optimization)

We will illustrate models in each of these categories. We first introduce the basic underlying network flow model and some useful notation.

The Network Flow Model

Let G = (N, A) be a directed network with a cost c_ij, a lower bound l_ij, and a capacity u_ij associated with every arc (i, j) ∈ A. We associate with each node i ∈ N an integer number b(i) representing its supply or demand. If b(i) > 0, then node i is a supply node; if b(i) < 0, then node i is a demand node; and if b(i) = 0, then node i is a transshipment node. Let n = |N| and m = |A|. The minimum cost network flow problem can be formulated as follows:

    Minimize   Σ_{(i,j) ∈ A} c_ij x_ij                                          (1.1a)

subject to

    Σ_{j: (i,j) ∈ A} x_ij  -  Σ_{j: (j,i) ∈ A} x_ji  =  b(i),  for all i ∈ N,   (1.1b)

    l_ij ≤ x_ij ≤ u_ij,  for all (i, j) ∈ A.                                    (1.1c)

We refer to the vector x = (x_ij) as the flow in the network. The constraint (1.1b) implies that the total flow out of a node minus the total flow into that node must equal

the net supply/demand of the node. We henceforth refer to this constraint as the mass balance constraint. The flow must also satisfy the lower bound and capacity constraints (1.1c), which we refer to as the flow bound constraints. The flow bounds might model physical capacities, contractual obligations or simply operating ranges of interest. Frequently, the given lower bounds l_ij are all zero; we show later that they can be made zero without any loss of generality.

In matrix notation, we represent the minimum cost flow problem as

    minimize { cx : Nx = b and l ≤ x ≤ u },                                     (1.2)

in terms of a node-arc incidence matrix N. The matrix N has one row for each node of the network and one column for each arc. We let N_ij represent the column of N corresponding to arc (i, j), and let e_j denote the j-th unit vector, which is a column vector of size n whose entries are all zeros except for the j-th entry, which is a 1. Note that each flow variable x_ij appears in two mass balance equations: as an outflow from node i with a +1 coefficient and as an inflow to node j with a -1 coefficient. Therefore the column corresponding to arc (i, j) is N_ij = e_i - e_j.

The matrix N has very special structure: only 2m out of its nm total entries are nonzero, all of its nonzero entries are +1 or -1, and each column has exactly one +1 and one -1. Figure 1.1 gives an example of the node-arc incidence matrix. Later, in Sections 2.2 and 2.3, we consider some of the consequences of this special structure. For now, we make two observations.

(i) Summing all the mass balance constraints eliminates all the flow variables and gives

    Σ_{i ∈ N} b(i) = 0,

or equivalently,

    Σ_{i ∈ {N : b(i) > 0}} b(i)  =  - Σ_{i ∈ {N : b(i) < 0}} b(i).

Consequently, total supply must equal total demand if the mass balance constraints are to have any feasible solution.

(ii) If the total supply does equal the total demand, then summing all the mass balance equations gives the zero equation 0x = 0; equivalently, any equation is equal to minus the sum of all the other equations, and hence is redundant.

Figure 1.1. (a) An example network. (b) The node-arc incidence matrix of the example network.
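To make the structure of N concrete, the following short sketch (ours, not part of the original paper; the five-node arc list is invented for illustration) builds a node-arc incidence matrix from an arc list and verifies the two properties just noted: every column has exactly one +1 and one -1, and the rows sum to the zero vector.

    # Build the node-arc incidence matrix N for a small directed network.
    # Nodes are numbered 1..n; each arc (i, j) contributes +1 in row i
    # (outflow from the tail) and -1 in row j (inflow to the head).

    def incidence_matrix(n, arcs):
        N = [[0] * len(arcs) for _ in range(n)]
        for col, (i, j) in enumerate(arcs):
            N[i - 1][col] = 1    # arc leaves its tail node i
            N[j - 1][col] = -1   # arc enters its head node j
        return N

    # A hypothetical 5-node example; any arc list would do.
    arcs = [(1, 2), (1, 3), (2, 4), (3, 4), (4, 5), (2, 5)]
    N = incidence_matrix(5, arcs)

    # Each column has exactly one +1 and one -1 ...
    assert all(sorted(col) == [-1, 0, 0, 0, 1] for col in zip(*N))
    # ... so summing all the mass balance rows yields the zero equation 0x = 0.
    assert all(sum(row[c] for row in N) == 0 for c in range(len(arcs)))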

The following special cases of the minimum cost flow problem play a central role in the theory and applications of network flows. The assignment problem is a minimum cost flow problem on a network G = (N1 ∪ N2, A), where A ⊆ N1 × N2 represents possible person-to-object assignments and a cost c_ij is associated with each element (i, j) in A. The objective is to assign each person to exactly one object in a way that minimizes the cost of the assignment; we set b(i) = 1 for all i ∈ N1, b(i) = -1 for all i ∈ N2, and l_ij = 0 and u_ij = 1 for all (i, j) ∈ A.

Physical Networks

The familiar city street map is perhaps the prototypical physical network, and the one that most readily comes to mind when we envision a network. Many network planning problems arise in this problem context. As one illustration, consider the problem of managing, or designing, a street network to decide upon such issues as speed limits, one way street assignments, or whether or not to construct a new road or bridge. In order to make these decisions intelligently, we need a descriptive model that tells us how to model traffic flows and measure the performance of any design, as well as a predictive model for measuring the effect of any change in the system. We can then use these models to answer a variety of "what if" planning questions.

The following type of equilibrium network flow model permits us to answer these types of questions. Each link of the network has an associated delay function that specifies how long it takes to traverse the link. The time to do so depends upon traffic conditions: the more traffic that flows on the link, the longer is the travel time to traverse it. Now suppose that each user of the system has a point of origin (e.g., his or her home) and a point of destination (e.g., his or her workplace in the central business district). Each of these users must choose a route through the network. Note, however, that these route choices affect each other: if two users traverse the same link, they add to each other's travel time because of the added congestion on the link. Now let us make the behavioral assumption that each user wishes to travel between his or her origin and destination as quickly as possible, that is, along a shortest travel time path. This situation leads to the following equilibrium problem with an embedded set of network optimization problems (shortest path problems): is there a flow pattern in the network with the property that no user can unilaterally change his (or her) choice of origin to destination path (that is, while all other users continue to use their specified paths in the equilibrium solution) to reduce his travel time? Operations researchers have developed a set of sophisticated models for this problem setting, as well as related theory (concerning, for example, the existence and uniqueness of equilibrium solutions) and algorithms for computing equilibrium solutions.

Used in the mode of "what if" scenario analysis, these models permit analysts to answer the planning questions we posed previously. These models are actively used in practice. Indeed, the Urban Mass Transit Authority in the United States requires that communities perform a network equilibrium impact analysis as part of the process for obtaining federal funds for highway construction or improvement.

Similar types of models arise in many other problem contexts. For example, a network equilibrium model forms the heart of the Project Independence Energy Systems (PIES) model developed by the U.S. Department of Energy as an analysis tool for guiding public policy on energy. The basic equilibrium model of electrical networks is another example. In this setting, Ohm's Law serves as the analog of the congestion function for the traffic equilibrium problem, and Kirchhoff's Law represents the network mass balance equations.

Another type of physical network is a very large-scale integrated circuit (VLSI circuit). In this setting the nodes of the network correspond to electrical components and the links correspond to wires that connect these components. Numerous network planning problems arise in this problem context, for example: how can we lay out, or design, the smallest possible integrated circuit that makes the necessary connections between its components and maintains necessary separations between the wires (to avoid electrical interference)?

Route Networks

Route networks, which are one level of abstraction removed from physical networks, are familiar to most students of operations research and management science. The traditional operations research transportation problem is illustrative. A shipper with supplies at its plants must ship to geographically dispersed retail centers, each with a given customer demand. Each arc connecting a supply point to a retail center incurs costs based upon some physical network, in this case the transportation network. Rather than solving the problem directly on the physical network, we preprocess the data and compute costs for the routes themselves. For example, an arc connecting a supply point and a retail center might correspond to a complex four leg distribution channel with legs (i) from a plant (by truck) to a rail station, (ii) from the rail station to a rail head elsewhere in the system, (iii) from the rail head (by truck) to a distribution center, and (iv) from the distribution center (on a local delivery truck) to the final customer (or in some cases just to the distribution center). If we assign to the arc the composite

distribution cost of all the intermediary legs, as well as the distribution capacity for this route, this problem becomes a classic network transportation model: find the flows from plants to customers that minimize overall costs. This type of model is used in numerous applications. As but one illustration, a prize winning practice paper written several years ago described an application of such a network planning system by the Cahill May Roberts Pharmaceutical Company (of Ireland) to reduce overall distribution costs by 20%, while improving customer service as well.

Many related problems arise in this type of problem setting, for instance, the design issue of deciding upon the location of the distribution centers. It is possible to address this type of decision problem using integer programming methodology for choosing the distribution sites and network flows to cost out (or optimize flows for) any given choice of sites. Using this approach, a noted study conducted several years ago permitted Hunt Wesson Foods Corporation to save over $1 million annually.

One special case of the transportation problem merits note — the assignment problem that we introduced previously in this section. This problem has numerous applications, particularly in problem contexts such as machine scheduling. In this application context, we would identify the supply points with jobs to be performed, the demand points with available machines, and the cost associated with arc (i, j) as the cost of completing job i on machine j. The solution to the problem specifies the minimum cost assignment of the jobs to the machines, assuming that each machine has the capacity to perform only one job.

Space-Time Networks

Frequently in practice, we wish to schedule some production or service activity over time. In these instances it is often convenient to formulate a network flow problem on a "space-time network" with several nodes representing a particular facility (a machine, a warehouse, an airport) but at different points in time.

Figure 1.2, which represents a core planning model in production planning, the economic lot size problem, is an important example. In this problem context, we wish to meet prescribed demands d_t for a product in each of the T time periods. In each period, we can produce at level x_t and/or we can meet the demand by drawing upon inventory I_t from the previous period. The network representing this problem has T + 1 nodes: one node t = 1, 2, ..., T represents each of the planning periods, and one

node represents the "source" of all production. The flow on arc (0, t) prescribes the production level x_t in period t, and the flow on arc (t, t + 1) represents the inventory level I_t to be carried from period t to period t + 1. The mass balance equation for each period t models the basic accounting equation: incoming inventory plus production in that period must equal demand plus final inventory. The mass balance equation for node 0 indicates that all demand (assuming zero beginning and zero final inventory over the entire planning period) must be produced in some period t = 1, 2, ..., T. Whenever the production and holding costs are linear, this problem is easily solved as a shortest path problem (for each demand period, we must find the minimum cost path of production and inventory arcs from node 0 to that demand point). If we impose capacities on production or inventory, the problem becomes a minimum cost network flow problem.

Figure 1.2. Network flow model of the economic lot size problem.

One extension of this economic lot sizing problem arises frequently in practice. Assume that production x_t in any period incurs a fixed cost: that is, whenever we produce in period t (i.e., x_t > 0), we incur a fixed cost F_t. In addition, we may incur a per unit production cost c_t in period t and a per unit inventory cost h_t for carrying any unit of inventory from period t to period t + 1. Hence, the cost on each arc for this problem is either linear (for inventory carrying arcs) or linear plus a fixed cost (for production arcs). Consequently, the objective function for
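The shortest path observation for the linear-cost case is easy to implement. The sketch below is our own illustration (the demands and unit costs are invented): because costs are linear, the cheapest way to serve period t is to produce in whichever period s ≤ t minimizes the production cost plus the carrying cost from s to t, and a single forward pass computes this for every period.

    # Linear-cost economic lot sizing: demand d[t] in period t is served by
    # producing in some period s <= t and carrying inventory through s..t-1.
    # With linear costs the periods decouple, so we track the cheapest
    # "production + carrying" cost seen so far (a shortest path from node 0).

    def lot_size_cost(d, c, h):
        """d[t]: demand, c[t]: unit production cost, h[t]: unit holding
        cost from period t to t+1 (all lists of length T)."""
        total = 0.0
        best = float("inf")  # cheapest cost to deliver one unit into period t
        for t in range(len(d)):
            best = min(best, c[t])   # produce in period t itself ...
            total += best * d[t]     # ... or carry from the best earlier period
            best += h[t]             # carrying one more period costs h[t]
        return total

    # Hypothetical data for T = 4 periods.
    print(lot_size_cost(d=[20, 30, 25, 40],
                        c=[3.0, 5.0, 4.0, 6.0],
                        h=[0.5, 0.5, 0.5, 0.5]))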

the problem is concave. As we indicate in Section 2.2, any such concave cost network flow problem always has a special type of optimum solution known as a spanning tree solution. This problem's spanning tree solution decomposes into disjoint directed paths: the first arc on each path is a production arc (of the form (0, t)) and each other arc is an inventory carrying arc. This observation implies the following production property: in the solution, each time we produce, we produce enough to meet the demand for an integral number of contiguous periods. Moreover, in no period do we both carry inventory from the previous period and produce.

The production property permits us to solve the problem very efficiently as a shortest path problem on an auxiliary network G' defined as follows. The network G' consists of nodes 1 to T + 1, and for every pair of nodes i and j with i < j, it contains an arc (i, j). The length of arc (i, j) is equal to the production and inventory cost of satisfying the demand of the periods from i to j - 1. Observe that for every production schedule satisfying the production property, G' contains a directed path from node 1 to node T + 1 with the same objective function value, and vice-versa. Hence we can obtain the optimum production schedule by solving a shortest path problem.

Many enhancements of the model are possible, for example: (i) the production facility might have limited production capacity or limited storage for inventory, or (ii) the facility might be producing several products that are linked by common production costs or by changeover costs (for example, we may need to change dies in an automobile stamping plant when making different types of fenders). In most cases, these enhanced models are quite difficult to solve (they are NP-complete), though the embedded network structure often proves useful in designing either heuristic or optimization methods.

Another classical network flow scheduling problem is the airline scheduling problem used to identify a flight schedule for an airline. In this application setting, each node represents both a geographical location (e.g., an airport) and a point in time (e.g., New York at 10 A.M.). The arcs are of two types: (i) service arcs connecting two airports, for example New York at 10 A.M. to Boston at 11 A.M.; (ii) layover arcs that permit a plane to stay at New York from 10 A.M. until 11 A.M. to wait for a later flight, or to wait overnight at New York from 11 P.M. until 6 A.M. the next morning. If we identify revenues with each service leg, a flow in this network (with no external supply or demand) will specify a set of flight plans (a circulation of airplanes through the airline's network). A flow that maximizes revenue will prescribe a schedule for an

airline's fleet of planes. The same type of network representation arises in many other dynamic scheduling applications.

Derived Networks

This category is a "grab bag" of specialized applications and illustrates that sometimes network flow problems arise in surprising ways from problems that on the surface might not appear to involve networks. The following examples illustrate this point.

Single Duty Crew Scheduling. Figure 1.3 illustrates a number of possible duties for the drivers of a bus company.

Figure 1.3. Possible duties for bus company drivers, indexed by time period and duty number.

Moreover.12 In this formulation the binary variable x: indicates whether 0) (x. the problem cost in the to ship in one unit of flow from node 1. = a of we select the j-th duty.2b) subtract each equation from the equation below to the system. the matrix A represents the matrix of duties Vs. the transformed problem p)ath would be a general minimum cost network flow problem. ^5 unit 1 Figure 1. at Therefore.4. To make this identification. If instead of requiring a single driver to be on duty in each period. rather than a shortest problem. for example. each column in the first revised system will have a single +1 (corresponding to the hour of the duty in the column just of A) and last a single -1 (corresponding to the row in A. to node 9 minimum network given Figure which is an instance of the shortest path problem. = 1) or not (x.4. We show that this problem a shortest path problem. to we specify a number network be on duty in each period. workers need to in complete a variety of tasks that are related by precedence conditions. This transformation does not change the solution to Now add a redundant equation equal minus the sums of all the equations in the revised system. Shortest path formulation of the single duty scheduling problem. constructing a house. and b is column vector whose components are all Observe 's that the ones in each column A occur in consecutive rows because each driver duty contains a single work is shift (no split shifts or work breaks). Because of the structure of A. the following operations: In (1. the revised right hand side vector of the problem will have a +1 is in row 1 and a -1 in the last (the 1 appended) row. or the added row. that Hes below the +1 in the column of A). the same this case the right transformation would produce a flow problem. but in arbitrary. Critical Path Scheduling and Networks Derived from Precedence Conditions In construction and many other project planning applications. we perform it. . a builder must pour the foundation before framing the house and complete the framing before beginning to install either electrical or plumbing fixtures. hand side coefficients (supply and demands) could be Therefore.

. "start" job we add we to two dummy both with zero processing time: a a "completion" job J to be completed before any other job can begin and have completed this all + 1 that cannot be initiated until other jobs. . is coefficient. the cannot start until job jobs. Sj Note. + ^ 2- f Xjj si I {j:(i. otherwise. j (j = 1.j)€X subject to ^ 2^ X:.j)eA) {j:(j. thereby giving us a network. one with one coefficient and one with a minus one structure. The precedence constraints imply that for each arc job j (i. We are to choose the constraints jobs start time of each job j so that we honor a set of specified precedence If and complete the overall project as quickly as possible.ifi = 0. . that we move variable to the left hand side of the a plus constraint. seems the bear no resemblance to network optimization. to Suppose we need complete J jobs and that job S. we represent the by nodes. j) e A. 2. Then we vdsh .Sq T subject to Sj S Sj + tj . which is a linear program in the variables if s: . The linear programming dual xj: of this (i. (i. For convenience of notation. J) requires t: days to complete. then each constraint contains exactly two variables. problem has a familiar If we associate a dual variable with each arc then the dual of this problem maximize V t.13 This type of application can be formulated mathematically as follows. however. j) in the network. problem: minimize sj^^ .i)€!^) -l. for each arc (i . j) . . Let G = (N. X.ifi = J + l all i € N . then the precedence constraints can be represented by arcs. i has been completed. A) represent the network corresponding to solve the following optimization augmented project. ^ . for l. On to the surface. this problem. .

14 .

be a zero-one variable indicating whether (i) = 1) or not (y. we could consider the most efficient use of these resources to complete the overall project as quickly as possible. This problem requires us to determine the longest path in the network G from node to node J + 1 with tj as the arc length of arc (i.. Researchers and practitioners have enhanced this basic model in several ways. and an arc connecting to node j . and the revenue n as the demand demand node j.g.5. linear y. yj S 0) whenever we that need wish to mine block to maximize before block total i. Certain versions of this problem can be formulated as minimum cost flow problems. It is the longest sequence of jobs needed precedence conditions. < 1. if resources are available for expediting individual jobs. itself is particularly for managing it large-scale corwtruction The critical path important because identifies those jobs that require managerial attention in order to complete the project as quickly as possible. rather than network flow problem with a node for each block. a variable for at each precedence constraint. y. The provisions any given mining technology. for all (i. we can never remove a block until it. = 0) we extract block the problem will contain j a constraint y. this path has become known as the critical path heis and a the problem has become known as the critical path problem. The dual linear program (obtained from the constraints programming version = will be a of the problem (with the ^ y. this figure. S 0. the value of the ore in the block minus the j cost for extracting the block) If and we wish to extract blocks to (y^ maximize - overall revenue. and perhaps the geography of the mine. j). we let j. This network will also have a dummy "collection it node" with (that is. Consider the open mine shown in Figure 1. As shown of in we have divided the region to be mined into blocks. equal to minus the sum of the rj's. impose restrictions on how we can remove the blocks: that lies for example. we have removed any block immediately above restrictions on the "angle" of mining the blocks might impose similar precedence conditions. Suppose now that each block has an associated revenue n (e. (ii) an objective function specifying over all we revenue ny.15 xj. This model become principal tool in project projects. and . Since delaying any job in this sequence must necessarily delay the completion of the overall project. summed or 1) blocks j. management. ^ y^ (or. y. The open pit mining problem is another network flow problem that arises from pit precedence conditions. For example. This longest path has the to fulfill the sp>ecified following interpretation. j) 6 A .

The critical path scheduling problem and the open pit mining problem illustrate one way that network flow problems arise indirectly: whenever two variables in a linear program are related by a precedence condition, the variable corresponding to this precedence constraint in the dual linear program will have network flow structure. If the only constraints in the problem are precedence constraints, then the dual linear program will be a network flow problem.

Matrix Rounding of Census Information

The U.S. Census Bureau uses census information to construct millions of tables for a wide variety of purposes. By law, the Bureau has an obligation to protect the source of its information and not disclose statistics that can be attributed to any particular individual. It can attempt to do so by rounding the census information contained in any table. Consider, for example, the data shown in Figure 1.6(a). Since the upper leftmost entry in this table is a 1, the tabulated information might disclose information about a particular individual. We might disguise the information in this table as follows: round each entry in the table, including the row and column sums, either up or down to a multiple of three, so that the entries in the table continue to add to the (rounded) row and column sums, and the overall sum of the entries in the new table adds to a rounded version of the overall sum in the original table. Figure 1.6(b) shows a rounded version of the data that meets this criterion.

Figure 1.6. (a) Census data tabulating time in service (hours) against income brackets (less than $10,000; $10,000 - $30,000; $30,000 - $50,000; more than $50,000), with row and column totals. (b) A rounded version of the data.

The problem of finding such a rounded table can be cast as finding a feasible flow in a network and can be solved by an application of the maximum flow algorithm. The network contains a node for each row in the table and a node for each column. It contains an arc connecting node i (corresponding to row i) and node j (corresponding to column j); the flow on this arc should be the ij-th entry in the prescribed table, rounded either up or down. In addition, we add a supersource s to the network, connected to each row node i; the flow on this arc must be the i-th row sum, rounded up or down. Similarly, we add a supersink t with an arc connecting each column node j to this node; the flow on this arc must be the j-th column sum, rounded up or down. We also add an arc connecting node t and node s; the flow on this arc must be the overall sum, rounded up or down. If we rescale all the flows, measuring them in integral units of the rounding base (multiples of 3 in our example), then the flow on each arc must take one of two consecutive integral values. Figure 1.7 illustrates the network flow problem corresponding to the census data specified in Figure 1.6.

The leeist value of the constants not determined solely by the algorithm. if Therefore. it is also highly sensitive to the choice of the computer language. 4. For large practical problems. researchers have widely adopted the 0( 1. For all all of the algorithms that we present.17 that the number is less of steps for the label correcting algorithm to solve the shortest path problem than pnm steps for some sufficiently large constant p. most of which are quite appropriate most of today's computers. To avoid the need to compute or mention the constant p. Estimating the constants correctly is is fundamentally difficult. The counting for of steps relies on a number of assumptions. replacing the expressions: requires "the label correcting algorithm pmn steps for some constant p" with the equivalent expression "the running is time of the label correcting algorithm 0(nm)." The 0( ) notation avoids the need to state a specific constant. m. instead. 2. Counting Steps The running time of steps it of a network algorithm is determined by counting the number performs. assuming that is m ^ n. the constant factors do not contribute nearly as much to the running time as do the factors involving n. the actual running time is lOnm^ + 2'^'^n^m. which. sufficiently large values of we mean the term that would dominate bounds are other terms for n and m. the constant terms 2''^'^n'^m this dominant even though most practical term would dominate. . the time is called asymptotic running times. ) Consequently. this notation indicates only the dominant terms of the all running time. has led to a flourishing of research on the worst<ase performance of algorithms. By dominant. the use of the 0( notation typically has permited analysts to avoid the prohibitively difficult analysis required to compute the leading constants. and even to the choice of the computer. ) notation for several reasons: Ignoring the constants greatly simplifies the analysis. For example. Although ignoring the may have undesirable feature. C or U. 3. in turn. the constant terms are relatively small integers for the terms in the complexity bound. then we would state that the running time O(nm^). Observe that the for running time indicates that the lOnm^ term values of n and m. researchers typically use a "big O" notation.

if known as the similarity assumption. is justified by the fact that 0( is ) notation ignores differences in running times of at most a constant factor..000 for networks with 1000 nodes.l The computer being executed carries out instructions sequentially. a computer must access a number of words of data and this thus takes more than a constant number of steps. be in part an addition or division. in comparing two running times. takes equal time. most one instruction A1. log C and to a log U (e. is some constant This assumption. we will not discuss parallel implementations of network flow «dgorithms. For a network problem. to perform each operation on very large numbers.18 Al. i. it is 0((n + m)flog n + log C + log U)). running time is bounded by a polynomial function in m. log C and log U. even by counting all other computer operations. we will typically assume for that both C and U k. On may the other hand. a computer must store large numbers in several words of its memory. with at at a time. m. C = Oirr-) and U = 0(n'^). the running time of one of the polynomial-time maximum flow algorithms we consider is 0(nm + n^ log U). quite /) reasonable in practice.2 implicitly assumes that the only operations to and tirithmetic operations. we were to restrict costs to be less than lOOn-^. By envoking Al. Consequently. we are adhering to a sequential model of computations. on results for the today's computers we would present.000. be counted are comparisons Al .g.e.l. which the time difference between an addition and a multiplication on essentially all modem computers. the assumption that each arithmetic operation takes one step lead us to underestimate the aisymptotic running time of arithmetic operations involving very large numbers on real computers since. researchers refer if its network algorithm as a polynomial-time algorithm n. in practice. Other instances of . the input length a low order polynomial function of n.. Polynomial-Time Algorithms An the algorithm is said to be a polynomial-time algorithm if its running time is is boimded by a polynomial function of the input length. obtain the same asymptotic worst-case it algorithms that we Our cissumption that each operation. The input length of a problem number is of bits needed to represent that problem. we would allow costs to be as large as 100. Therefore.000. For example. In fact. are polynomially bounded in n. For example.2 Each comparison and basic arithmetic operation counts as one step. To avoid systematic underestimation of the running time.

any polynomial-time algorithm is asymptotically superior to any exponential-time algorithm. Moreover. Even n is in extreme cases this is true. Qn n must be larger than 2"^^^'^^^. experience has Figure 1. pseudopolynomial-time its running time is polynomially bounded in is m. 0(2^). For problems that satisfy the similarity assumption. polynomial-time algorithms are strongly polynomial-time because log C = Odog n) and log U= CXlog n). The class of pseudopolynomial-time algorithms algorithms. flow algorithm alluded therefore. Much practical shown that. small degree. For example. a polynomial function only n and m. C The maximum algorithm. First. the polynomials in practice are typically of a .19 polynomial-tiine bounds are said to be a strongly O(n^m) and 0(n log n). 0(n!) and 0(n^°g polynomial function of n and log if "). A polynomial-time algorithm is is polynomial-time algorithm in if its running time bounded by or log U.) polynomial-time algorithms. pseudopolynomial-time algorithms become polynomial-time algorithms. but the algorithms will not be attractive if C and U are high degree polynomiab in n. n^'^OO is smaller than tP'^^^E^ ^ if sufficiently large. (Observe that nC cannot be bounded by is C) We say that an algorithm n.8 illustrates the asymptotic superiority of The second reason is more pragmatic. An algorithm is said to be an exponential-time algorithm if its running time grows of exp)onential time a as a function that can not be polynomially bovmded. as a rule. polynomial-time algorithms perform better than exponential time algorithms. In particular. Some examples bounds are 0(nC). There are two major reasons for preferring polynomial-time algorithms to exponential-time algorithms. we envoke the similarity assumption. C and U. is not a strongly polynomial-time is The if interest in strongly polynomial-time algorithms all primarily theoretical. and does not involve log to. an important subclass of exponential-time Some instances of pseudopolynomial-time bounds are 0(m + nC) and 0(mC). this case.

20 APPROXIMATE VALUES .

1. as a cutset of G. In this chapter. (ij. A graph G' = (N'. We assume throughout nodes in a graph. We shall use similar conventions for A graph G = (N. j) e A. . ij-. the path contains i2 . Frequently. An arc (i. i| For simplicity of notation. and no superset of Q has this property. for each € A. . A(i). We we shall often use the terminology path to designate either a directed or an undirected path. i and j j. we shall sometimes refer to a path as a set of (sequence oO arcs without mention of the nodes. . . A) is called a bipartite graph (i.( ij. nodes and arcs ip (ip 12^. If any ambiguity might arise. and a capacity Uj:. A graph G' = is (N'.. j) if its i node set j N can be partitioned into and A' two subsets N| and N2 so that for each arc in A. We j.e. e N| and if e N2. is defined as the set of arcs emanating from node of a i. j). G if = (N. A) N' = N and A' c A. whichever is appropriate from context. or arc (ij^+i . to is A graph is said to be connected pairs of nodes are that the it disconnected.). . representing cycles. we distinguish two special the source s and sink t. j) emanates from node Tlie arc adjacency The of j arc is an outgoing of node i and an incoming arc of node i. and ij^^-j on the path. • • . othervs^se. refer to node i tail jmd node (i. .i.^ as the internal nodes of the path. A-Q) disconnected. A directed is cycle is a directed path together with the arc i|) and an undirected cycle an imdirected path together with the arc (ij. . > with each arc (i. . ij. A directed (\2 r-1. list node i. 13. An undirected path is defined similarly except that for any two consecutive nodes either arc (ij^. A cutset connected. we shall often refer to a path as a sequence of nodes - i2 - -ij^ when its arcs are apparent from the problem context. A(i) = {(i. We associate that Uj. Alternatively. shall explicitly state directed or undirected path. 13). A') a spanning subgraph of G = (N. i\^+-[) i^. A) is a sequence of distinct (ij^. as the i. A) if N' CN c A. Two nodes i and i j are said to be connected j. The degree node is the number of incoming and outgoing arcs incident to that node. j) has two end points. . . i) or (i^ . if) satisfying the property that ij^+p € A for each k= . A') is a subgraph of G= (N. path in . a cost Cj. we always assume graph G is is We refer to any set Q c A with the property that the graph G' = (N.21 I N I and m= A I I . i\^ We refer to the nodes i3 . j) e A : € N}. j) as the head of arc aire (i. The arc (i. 12. if the graph contains at least one if all undirected path from connected. and say that the arc (i.j) is incident to nodes i and j.- • • . j) (i.

structures. any nontree arc to a spanning tree creates exactly one Removing any two arc in this cycle again creates a spanning tree.4 Network Representations The complexity of a network algorithm depends not only on the algorithm. of is which only space 2m words have nonzero values. j) with the property that 1 if arc € A. Another popular way = network the node-node adjacency I matrix representation. X and N-X. subtree of a tree T is a connected subgraph of T. T are called tree arcs. a tree with degree equal to one called a leaf node. The arc costs and capacities are . to represent a network representation is not efficient. T are A spanning tree of G = (N. but to represent the also upon the manner used network within a computer and the storage results. A acyclic if it contains no cycle. scheme used for maintaining and updating the intermediate The running time of an algorithm (either worst<ase or empirical) can often be improved by representing In this section. Arcs belonging to a spaiming tree called nontree arcs. A tree is a connected acyclic graph. A node in nc des. the element I^: This representation stores an n x n matrix (i. we have already described the node-arc incidence matrix representation of a network. Removing any tree-arc creates subtrees. A tree T is said to be a spanning A tree of G if and T is a spanning subgraph arcs not belonging to 1 of G. any arc belonging tree. to this cutset is added to the subtrees. we state it othervdse. The addition of cycle. the resulting graph is again a spanning In this chapter. Arcs a whose end points belong to two If different subtrees of a spanning tree created by deleting tree-arc constitute a cutset. we some popular ways In Section 1. Clearly. Each least two leaf A spanning tree contains a unique path between any two nodes. We shall alternatively represent the cutset Q as the graph is node partition (X. This scheme requires nm this words to store a network. and Ijj = otherwise. N-X). the network discuss more cleverly and by using improved data of representing a network. We represent the logarithm of any number b by 1. A) is has exactly ntree has at tree arcs.22 partitions the graph into two sets of nodes.1. we assume that logarithms are of base 2 unless log b.

(c) The reverse star representation. head) cost cost 1- 2 3 1 4 2 3 2 3 1 4 5 4 2 1 6 7 8 4 1 3 4 2 3 (b) The forward star representation.23 (a) A network example arc number 1 point (tail. head) cost 2 3 4 5 6 7 8 . arc number 1 (tail.

we store the incoming arcs node i at positions rpoint(i) to (rpoint(i+l) . we can simply store the arc numbers and once we know the from the forward 1. representation of the network given in Figure The forward outgoing arcs at star representation allows us to determine efficiently the set of set of any node. then node i has no outgoing arc. which denotes the first arrays that contains information about an incoming arc at node consistency. both sparse and dei^se. set point(l) = 1 and point(n+l) = m+1. that indicates the smallest i. head) and We also maintain a pointer with each node i.9(c). This representation is adequate for very dense networks. (tail. For the sake of we at set rpoint(l) = 1 and rpoint(n+l) = m+1. To determine. maintain a reverse position in these pointer with each node denoted by rpoint(i). The arc (1. This data structure gives us the representation shov^Ti in Figure Observe that by storing both the forward and reverse star representation S. . 2) hcis 1. The forward star and reverse star representations are probably the most popular ways to represent networks. store the (tail. Arcs emanating from the same node can be numbered the cost of arcs in this order. we will maintain a significant duplicate information. we n can create a reverse star representation as follows. 1. We numbers in an m-array trace. 2) So instead of storing head) and cost of arcs. arc has arc number arc number 4 in the forward star representation. incidence list (These representations are also literature.1).9(b) specifies the forward star 1. number i in the arc list of an arc emanating from - node 1) in Hence the outgoing list. We also i. then the arcs emanating from node arbitrarily. but is not attractive for storing a sparse network. the incoming arcs at any node efficiently. We then sequentially store the (taU. Figure complete trace array. we number the arcs emanating from node 1. and so on. in order and sequentially head) and the cost of incoming arcs of node i. Starting from a forward star representation. For consistency. arcs of node - are stored at positions point(i) to (point(i+l) the arc If point(i) > point(i+l) 1. We can avoid this duplication by eircs. we need an additional data structure known as the reverse star representation. simultaneously. As earlier.9(a). We examine the nodes j = 1 to j.24 also stored in n x n matrices. denoted by point(i). storing arc (3.9(d) gives the arc numbers. head) and the cost of the For example. numbers ir\stead of the (tail. we can always retrieve the associated information store circ star representation. Figure 1.) first known as representation in the computer science The forward star representation numbers the arcs in a certain order: 2.

let us suppose that we wish to find all the nodes graph s. i. predi]) = i. the search algorithm will mark more nodes. j) admissible if node i is marked and node is j is unmarked. and Initially. all nodes in the to network are one of two marked or unmarked. Tl e follovkdng algorithm summarizes the basic iterative steps. G = (N. Search algorithms attempt to find property. only the source node marked. The algorithm we say that node is a predecessor terminates when the graph contains no (i. (i. j) admissible arcs. Subsequently. we discuss two of the most commonly used search techniques: breadth-first search and depth-first search. . Whenever i the procedure marks of a new node by examining an j admissible arc node j. in At every point states: in the search procedure. A) that are reachable through directed paths from a distinguished node called the source.5 Search Algorithms Search algorithnvs are fundamental graph techniques. in a all nodes in a network that satisfy a particular For purposes of illustration. different variants of search lie at the heart of many network algorithms.e. The marked nodes are is known be reachable from the source. and the status of unmarked nodes yet to be determined. In this section.25 1. inadmissible We call an arc otherwise. by examining admissible arcs..

The same data also used in the maximum flow and minimum i cost flow algorithms A(i) of arcs discussed in later sections. (i. end. this algoirthm terminates. is The search algorithm examines inadmissible. The predecessor indices define a tree consisting of marked We structure use the following data structure to identify admissible is arcs. it the algorithm marks a new node and adds it to LIST. Each node has a current arc Initially. node i from LIST. it this list sequentially list and whenever the current arc arc. add node end else delete j to LIST. while LIST * do begin select a if node i i in LIST. j. Since the algorithm marks any node at most once. it executes the while loop at most 2n times. When from nodes. mark node LIST := {s). begin unmark all in N. which i is the current candidate for being examined next. end. Arcs in each list can be arranged arbitrarily. it arc in the arc the ciirrent When the algorithm reaches the end of the arc arc. first the current arc of node is the arc in A(i). Now consider the effort spent in identifying the . We maintain with each node the list emanating (i. Each iteration of the while loop either finds an admissible arc or does not. makes the next list.26 algorithm SEARCH. it has marked all nodes in G that are reachable s via a directed path. nodes s. and in the latter Ccise deletes a marked node from LIST. In the former case. declares that the node has no admissible It is easy to show that the search algorithm runs in 0(m + n) = 0(m) time. j) from it. j) node is incident to an admissible arc then begin mark node pred(j) := i.

first-out order. feasible solutions.e. the search algorithm selects the marked nodes in the last-in. Therefore. For cost flow instance. and minimum .e. does not specify the order for examining and adding If Different rules give rise to different search techniques. at most once. This s. and U.. H is a function of n. as usual. the search in algorithm examines a total of ie X A(i) = m N and thus terminates 0(m) time. that data are integral and that algorithms maintain integer solutions at intermediate stages of computations. Hence. i. kind of search amounts to visiting the nodes in order of increasing distance from therefore. This algorithm to performs a deep probe. It marks nodes s to i in the nondecreasing order of their distance from the with the distance from i. in the m. the set LIST is maintained as a queue. this version of search is called a breadth-first search. In this section. C. i. then the search algorithm selects the marked nodes in the order.27 admissible arcs. nodes are always selected from the front and added to the front. We assume. Geometric Improvement Approach The geometric improvement approach shows polynomial time if that an algorithm runs in at every iteration it makes an improvement proportioT\al to the solutioiis. nodes to LIST. in the problem H = maximum mCU. difference between the objective function values of the current and optimum Let H be an upper bound on the difference in objective function values between any two For most network problems. as described. s. we scan arcs in A(i) arcs.. meeisured as minimum number of arcs in a directed path from s to Another popular method is to maintain the set LIST as a stack. first-out to the rear. flow problem H = mU. creating a path as long as possible. and backs up one node initiate a new probe when it can mark no new nodes from the tip of the path. this version of search is called a depth-first search. will we briefly outline the basic ideas all underlying these two approaches. nodes are always selected from the front and added first-in. L6 Developing Polynomial-Time Algorithms Researchers frequently employ two important approaches to obtain polynomial algorithms for network flow problems: the geometric improvement (or linear convergence) approach. and the scaling approach. The algorithm. in this instance. For each node i.

we describe the simplest form of scaling which we call bit-scaling. We A have stated this result for minimization versions of optimization problems." a the statement geometric convergence rate are polynomial time In order to develop polynomial time algorithms using this approach.e. Further. the algorithm must terminate wathin 0((log H)/a) iterations. therefore. Consider a consecutive sequence of starting 2/a iterations from iteration k.z*). and.3) implies that a(z^ .1.z*)/2 ^ z^ - z^-^^ ^ aCz^ . we can look for local improvement techniques that lead to large fixed percentage) improvements for the in the objective function. the algorithm improves the objective function value by at least aCz*^ .z*)/2 units. then (1. similar result applies to maximization versions of optimization problems. The geometric improvement approach might be summarized by "network algorithms that have algorithms. the improvement at iteration k+1 is at least a times the total possible improvement) some constant a xvith < a< 1.e. Suppose r^ is the objective function value of a minimization problem of some solution at the k-th iteration of an algorithm and 2* is the minimum objective function value.) and Scaling Approach Researchers have extensively used an approach called scaling to derive polynomial-time algorithms for a wide variety of network and combinatorial optimization problems.z*)/2 units. Then the algorithm terminates in O((log H)/a) iterations.28 Lemma 1..11 presents an example of a bit-scaling algorithm for .2 maximum flow problem and the maximum improvement algorithm minimum cost flow problem are two examples of this approach. q the algorithm improves the objective function value by no more than aCz*^ . (See Sections 5. The quantity (z*^ - z*) represents the total possible improvement in the objective function value after the k-th iteration. suppose that the algorithm guarantees that (2k_2k+l) ^ a(z^-z*) (13) for (i.3. then the algorithm would determine an optimum solution within these 2/a iterations. If in each iteration. (i. In this discussion. The maximum augmenting path algorithm for the 4.z*) by a factor of 2 within these 2/a iterations. Since H is the maximum possible improvement and every objective function value is an integer. Section 5. the algorithm must have reduced the total possible improvement (z*^.. On the other hand. Proof. if at some iteration.

Scaling Approach

Researchers have extensively used an approach called scaling to derive polynomial-time algorithms for a wide variety of network and combinatorial optimization problems. Sections 4 and 5, using more refined versions of scaling, describe polynomial-time algorithms for the maximum flow and minimum cost flow problems. In this discussion, we describe the simplest form of scaling, which we call bit-scaling. Section 5.11 presents an example of a bit-scaling algorithm for the assignment problem.

Using the bit-scaling technique, we solve a problem P parametrically as a sequence of problems P1, P2, P3, ..., PK: the problem P1 approximates the data to the first bit, the problem P2 approximates the data to the second bit, and each successive problem is a better approximation, until PK = P. Further, for each k = 2, ..., K, the optimum solution of problem P(k−1) serves as the starting solution for problem Pk. The scaling technique is useful whenever reoptimization from a good starting solution is more efficient than solving the problem from scratch.

For example, consider a network flow problem whose largest arc capacity has value U. Let K = ⌈log U⌉ and suppose that we represent each arc capacity as a K bit binary number, adding leading zeros if necessary to make each capacity K bits long. Then the problem Pk would consider the capacity of each arc as the k leading bits in its binary representation. Figure 1.10 illustrates an example of this type of scaling. The manner of defining arc capacities easily implies the following observation.

Observation. The capacity of an arc in Pk is twice that in P(k−1) plus 0 or 1.

Figure 1.10. Example of a bit-scaling technique. (a) Network with arc capacities. (b) Network with binary expansion of arc capacities. (c) The problems P1, P2, and P3.

The following algorithm encodes a generic version of the bit-scaling technique.

algorithm BIT-SCALING;
begin
    obtain an optimum solution of P1;
    for k : = 2 to K do
    begin
        reoptimize using the optimum solution of P(k−1) to obtain an optimum solution of Pk;
    end;
end;

This approach is very robust; variants of it have led to improved algorithms for both the maximum flow and minimum cost flow problems. This approach works well for these applications, in part, because of the following reasons.

(i) The problem P1 is generally easy to solve.

(ii) The optimal solution of problem P(k−1) is an excellent starting solution for problem Pk, since P(k−1) and Pk are quite similar; hence, the optimum solution of P(k−1) can be easily reoptimized to obtain an optimum solution of Pk.

(iii) For problems that satisfy the similarity assumption, the number of problems solved is O(log n). Thus, for this approach to work, reoptimization needs to be only a little more efficient (i.e., by a factor of log n) than optimization.

Consider, for example, the maximum flow problem. Let vk denote the maximum flow value for problem Pk and let xk denote an arc flow corresponding to vk. In the problem Pk, the capacity of an arc is twice its capacity in P(k−1) plus 0 or 1. If we multiply the optimum flow x(k−1) for P(k−1) by 2, we obtain a feasible flow for Pk. Moreover, vk − 2v(k−1) ≤ m, because multiplying the flow x(k−1) by 2 takes care of the doubling of the capacities, and the additional 1's can increase the maximum flow value by at most m units (if we add 1 to the capacity of any arc, then we increase the maximum flow from source to sink by at most 1). It is, in general, easier to reoptimize such a maximum flow problem. For example, the classical labeling algorithm, as discussed in Section 4.1, would perform the reoptimization in at most m augmentations, taking O(m²) time. Therefore, the scaling version of the labeling algorithm runs in O(m² log U) time, whereas the non-scaling version runs in O(nmU) time. The former time bound is polynomial and the latter bound is only pseudopolynomial. Thus this simple scaling algorithm improves the running time dramatically.
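A minimal sketch of the scheme in Python, under our own encoding of the instance (arcs as a dictionary of capacities) and with the simple labeling idea as the reoptimization subroutine:

    from collections import deque

    def max_flow_from(cap, flow, s, t, nodes):
        """Augment along residual paths (found by breadth-first labeling)
        until none exists, starting from the given feasible flow."""
        def residual(i, j):
            return cap.get((i, j), 0) - flow.get((i, j), 0) + flow.get((j, i), 0)
        while True:
            pred, LIST = {s: s}, deque([s])
            while LIST and t not in pred:            # labeling step
                i = LIST.popleft()
                for j in nodes:
                    if j not in pred and residual(i, j) > 0:
                        pred[j] = i
                        LIST.append(j)
            if t not in pred:
                return
            path, j = [], t                          # trace the path back
            while j != s:
                path.append((pred[j], j)); j = pred[j]
            delta = min(residual(i, j) for i, j in path)
            for i, j in path:                        # augment delta units
                back = min(flow.get((j, i), 0), delta)
                flow[(j, i)] = flow.get((j, i), 0) - back
                flow[(i, j)] = flow.get((i, j), 0) + (delta - back)

    def bit_scaling_max_flow(capacity, s, t, nodes):
        K = max(capacity.values()).bit_length()
        flow = {}
        for k in range(1, K + 1):
            # capacities of P_k: the k leading bits (twice P_{k-1} plus a bit)
            cap_k = {a: u >> (K - k) for a, u in capacity.items()}
            flow = {a: 2 * f for a, f in flow.items()}  # feasible start for P_k
            max_flow_from(cap_k, flow, s, t, nodes)     # few augmentations left
        return sum(f for (i, j), f in flow.items() if i == s) - \
               sum(f for (i, j), f in flow.items() if j == s)

    capacity = {(1, 2): 5, (1, 3): 3, (2, 4): 4, (3, 4): 4}
    print(bit_scaling_max_flow(capacity, 1, 4, [1, 2, 3, 4]))   # 7

On each problem Pk the doubled flow is already feasible, so only the small residual value needs to be recovered, mirroring the O(m² log U) analysis above.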

2. BASIC PROPERTIES OF NETWORK FLOWS

As a prelude to the rest of this chapter, in this section we describe several basic properties of network flows. We begin by showing how network flow problems can be modeled in either of two equivalent ways: as flows on arcs, as in our formulation in Section 1.1, or as flows on paths and cycles. Then we partially characterize optimal solutions to network flow problems and demonstrate that these problems always have certain special types of optimal solutions (so-called cycle free and spanning tree solutions). Consequently, in designing algorithms, we need only consider these special types of solutions. We next establish several important connections between network flows and linear and integer programming. Finally, we discuss a few useful transformations of network flow problems.

2.1 Flow Decomposition Properties and Optimality Conditions

It is natural to view network flow problems in either of two ways: as flows on arcs or as flows on paths and cycles. In the context of developing underlying theory, models, or algorithms, each view has its own advantages. Therefore, as the first step in our discussion, it is worthwhile to develop several connections between these alternate formulations.

In the arc formulation (1.1), the basic decision variables are flows x_ij on arcs (i, j). The path and cycle formulation starts with an enumeration of the paths P and cycles Q of the network. Its decision variables are h(p), the flow on path p, and f(q), the flow on cycle q, which are defined for every directed path p in P and every directed cycle q in Q.

Notice that every set of path and cycle flows uniquely determines arc flows in a natural way: the flow x_ij on arc (i, j) equals the sum of the flows h(p) and f(q) for all paths p and cycles q that contain this arc. We formalize this observation by defining some new notation: δ_ij(p) equals 1 if arc (i, j) is contained in path p, and 0 otherwise; similarly, δ_ij(q) equals 1 if arc (i, j) is contained in cycle q, and 0 otherwise. Then

x_ij = Σ_{p ∈ P} δ_ij(p) h(p) + Σ_{q ∈ Q} δ_ij(q) f(q).

If the flow vector x is expressed in this way, we say that the flow is represented as path flows and cycle flows and that the path flow vector h and cycle flow vector f is a path and cycle flow representation of the flow. Can we reverse this process? That is, can we decompose any arc flow into (i.e., represent it as) path and cycle flows? The following result provides an affirmative answer to this question.

Theorem 2.1: Flow Decomposition Property (Directed Case). Every directed path and cycle flow has a unique representation as nonnegative arc flows. Conversely, every nonnegative arc flow x can be represented as a directed path and cycle flow (though not necessarily uniquely) with the following two properties:

C2.1. Every path with positive flow connects a supply node of x to a demand node of x.

C2.2. At most n + m paths and cycles have nonzero flow; out of these, at most m cycles have nonzero flow.

Proof. In the light of our previous observations, we need to establish only the converse assertions. We give an algorithmic proof to show that any feasible arc flow x can be decomposed into path and cycle flows. Suppose i0 is a supply node. Then some arc (i0, i1) carries a positive flow. If i1 is a demand node, then we stop; otherwise the mass balance constraint (1.1b) of node i1 implies that some other arc (i1, i2) carries positive flow. We repeat this argument until either we encounter a demand node or we revisit a previously examined node. Note that one of these cases will occur within n steps. In the former case we obtain a directed path p from the supply node i0 to some demand node ik consisting solely of arcs with positive flow, and in the latter case we obtain a directed cycle q. If we obtain a directed path, we let h(p) = min [b(i0), −b(ik), min {x_ij : (i, j) ∈ p}], and redefine b(i0) = b(i0) − h(p), b(ik) = b(ik) + h(p), and x_ij = x_ij − h(p) for each arc (i, j) in p. If we obtain a cycle q, we let f(q) = min {x_ij : (i, j) ∈ q} and redefine x_ij = x_ij − f(q) for each arc (i, j) in q.

We repeat this process with the redefined problem until the network contains no supply node (and hence no demand node). Then we select a transhipment node with at least one outgoing arc with positive flow as the starting node, and repeat the procedure, which in this case must find a cycle. We terminate when x = 0 for the redefined problem. Clearly, the original flow is the sum of the flows on the paths and cycles identified by the procedure. Now observe that each time we identify a path, we reduce the supply/demand of some node or the flow on some arc to zero, and each time we identify a cycle, we reduce the flow on some arc to zero. Consequently, the path and cycle representation of the given flow x contains at most (n + m) total paths and cycles, of which there are at most m cycles.
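The proof is constructive, and a direct transcription is short. A sketch in Python (our own dictionary-based encoding, illustrative rather than optimized):

    def decompose(x, b):
        """Decompose nonnegative arc flows x = {(i, j): flow} with node
        imbalances b = {i: supply(+)/demand(-)} into paths and cycles."""
        x, b = dict(x), dict(b)
        paths, cycles = [], []

        def walk(i0):
            # follow positive-flow arcs until a demand node or a repeat
            seq, seen = [i0], {i0}
            while True:
                i = seq[-1]
                if b.get(i, 0) < 0:
                    return seq, None                       # demand node reached
                j = next(j for (a, j) in x if a == i and x[(a, j)] > 0)
                if j in seen:
                    return None, seq[seq.index(j):] + [j]  # closed a cycle
                seq.append(j); seen.add(j)

        while True:
            start = next((i for i in b if b[i] > 0), None)
            if start is None:     # no supply left: peel cycles off leftovers
                arc = next((a for a in x if x[a] > 0), None)
                if arc is None:
                    return paths, cycles
                start = arc[0]
            path, cycle = walk(start)
            if path:
                arcs = list(zip(path, path[1:]))
                h = min([b[path[0]], -b[path[-1]]] + [x[a] for a in arcs])
                b[path[0]] -= h; b[path[-1]] += h
                for a in arcs: x[a] -= h
                paths.append((path, h))
            else:
                arcs = list(zip(cycle, cycle[1:]))
                f = min(x[a] for a in arcs)
                for a in arcs: x[a] -= f
                cycles.append((cycle, f))

    x = {(1, 2): 4, (2, 3): 2, (3, 2): 1, (2, 4): 3}
    print(decompose(x, {1: 4, 2: 0, 3: -1, 4: -3}))
    # paths [1,2,3] with 1 unit and [1,2,4] with 3 units; cycle [2,3,2] with 1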

It is possible to state the decomposition property in a somewhat more general form that permits arc flows x_ij to be negative. In this case, even though the underlying network is directed, the paths and cycles can be undirected, and can contain arcs with negative flows. Each undirected path p, which has an orientation from its initial to its final node, has forward arcs and backward arcs, which are defined as arcs along and opposite to the path's orientation. A path flow will be defined on p as a flow with value h(p) on each forward arc and −h(p) on each backward arc. We define a cycle flow in the same way. In this more general setting, our representation using the notation δ_ij(p) and δ_ij(q) is still valid with the following provision: we now define δ_ij(p) and δ_ij(q) to be −1 if arc (i, j) is a backward arc of the path or cycle.

Theorem 2.2: Flow Decomposition Property (Undirected Case). Every path and cycle flow has a unique representation as arc flows. Conversely, every arc flow x can be represented as an (undirected) path and cycle flow (though not necessarily uniquely) with the following three properties:

C2.3. Every path with positive flow connects a source node of x to a sink node of x.

C2.4. For every path and cycle, any arc with positive flow occurs as a forward arc and any arc with negative flow occurs as a backward arc.

C2.5. At most n + m paths and cycles have nonzero flow; out of these, at most m cycles have nonzero flow.

Proof. This proof is similar to that of Theorem 2.1. The major modification is that we extend the path at some node i(k−1) by adding an arc (i(k−1), ik) with positive flow or an arc (ik, i(k−1)) with negative flow. The other steps can be modified accordingly.

The flow decomposition property has a number of important consequences. As one example, it enables us to compare any two solutions of a network flow problem in a particularly convenient way and to show how we can build one solution from another by a sequence of simple operations. We need the concept of augmenting cycles with respect to a flow x. A cycle q with flow f(q) > 0 is called an augmenting cycle with respect to a flow x if 0 ≤ x_ij + δ_ij(q) f(q) ≤ u_ij for each arc (i, j) ∈ q.

In other words, the flow remains feasible if some positive amount of flow (namely f(q)) is augmented around the cycle q. We define the cost of an augmenting cycle q as c(q) = Σ_{(i, j) ∈ A} c_ij δ_ij(q). The cost of an augmenting cycle represents the change in the cost of a feasible solution if we augment along the cycle with one unit of flow; the change in flow cost for augmenting around cycle q with flow f(q) is c(q) f(q).

Suppose that x and y are any two solutions to a network flow problem, i.e., Nx = b, 0 ≤ x ≤ u, and Ny = b, 0 ≤ y ≤ u. Then the difference vector z = y − x satisfies the homogeneous equations Nz = Ny − Nx = 0. Consequently, flow decomposition implies that z can be represented as cycle flows, i.e., we can find at most r ≤ m cycle flows f(q1), f(q2), ..., f(qr) satisfying the property that for each arc (i, j) of A,

z_ij = δ_ij(q1) f(q1) + δ_ij(q2) f(q2) + ... + δ_ij(qr) f(qr).

Since y = x + z, for any arc (i, j) we have

0 ≤ y_ij = x_ij + δ_ij(q1) f(q1) + δ_ij(q2) f(q2) + ... + δ_ij(qr) f(qr) ≤ u_ij.

Now note that, by condition C2.4 of the flow decomposition property, arc (i, j) is either a forward arc on each cycle q1, q2, ..., qr that contains it or a backward arc on each cycle q1, q2, ..., qr that contains it. Therefore, each term δ_ij(qk) f(qk) has the same sign, so every partial sum x_ij + δ_ij(q1) f(q1) + ... + δ_ij(qk) f(qk) lies between x_ij and y_ij; in particular, 0 ≤ x_ij + δ_ij(qk) f(qk) ≤ u_ij for each cycle qk. Consequently, each cycle qk is an augmenting cycle with respect to the flow x, and if we add any of these cycle flows qk to x, the resulting solution remains feasible on each arc (i, j). We have thus established the following important result.

Theorem 2.3: Augmenting Cycle Property. Let x and y be any two feasible solutions of a network flow problem. Then y equals x plus the flow on at most m augmenting cycles with respect to x. Further, the cost of y equals the cost of x plus the cost of flow on the augmenting cycles.

The augmenting cycle property permits us to formulate optimality conditions characterizing the optimum solution of the minimum cost flow problem. Suppose that x is any feasible solution, that x* is an optimum solution of the minimum cost flow problem, and that x ≠ x*. The augmenting cycle property implies that the difference vector x* − x can be decomposed into at most m augmenting cycles and the sum of the costs of these cycles equals cx* − cx. If cx* < cx, then one of these cycles must have a negative cost. Further, if every augmenting cycle in the decomposition of x* − x has a nonnegative cost, then cx* − cx ≥ 0. Since x* is an optimum flow, cx* = cx and x is also an optimum flow. We have thus obtained the following result:

Theorem 2.4: Optimality Conditions. A feasible flow x is an optimum flow if and only if it admits no negative cost augmenting cycle.

2.2 Cycle Free and Spanning Tree Solutions

We start by assuming that x is a feasible solution to the network flow problem

minimize { cx : Nx = b and l ≤ x ≤ u }

and that l = 0. Much of the underlying theory of network flows stems from a simple observation concerning the example in Figure 2.1. In the example, the arc flows and costs

are given beside each arc.

Figure 2.1. Improving flow around a cycle.

The network in this figure contains flow around an undirected cycle. Note that adding a given amount of flow θ to all the arcs pointing in a clockwise direction and subtracting this flow from all arcs pointing in the counterclockwise direction preserves the mass balance at each of the nodes. Also, note that the per unit incremental cost for this flow change is the sum of the costs of the clockwise arcs minus the sum of the costs of the counterclockwise arcs, that is,

Per unit change in cost = Δ = $2 + $1 + $3 − $4 − $3 = −$1.

Let us refer to this incremental cost Δ as the cycle cost and say that the cycle is a negative, positive or zero cost cycle depending upon the sign of Δ. Let us assume for the time being that all arcs are uncapacitated. Consequently, to minimize cost in our example, we set θ as large as possible while preserving nonnegativity of all arc flows, i.e., 3 − θ ≥ 0, 2 + θ ≥ 0, 4 + θ ≥ 0, and 5 + θ ≥ 0, or θ ≤ 3; that is, we set θ = 3. Note that in the new solution (at θ = 3), we no longer have positive flow on all arcs in the cycle.

Similarly, if the cycle cost were positive (i.e., if we were to change c12 from 2 to 4), then we would decrease θ as much as possible (i.e., 2 + θ ≥ 0, 4 + θ ≥ 0, and 5 + θ ≥ 0, or θ ≥ −2) and again find a lower cost solution with the flow on at least one arc in the cycle at value zero. We can restate this observation in another way: to preserve nonnegativity of all flows, we must select θ in the interval −2 ≤ θ ≤ 3. Since the objective function depends linearly on θ, we optimize it by selecting θ = 3 or θ = −2, at which point one arc in the cycle has a flow value of zero.
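This computation is mechanical. A small sketch (the arc data below reproduce the flows and costs quoted in the text; the cycle structure itself is our reading, since the figure is not reproduced here):

    def cycle_improvement(arcs):
        """Each arc is (flow, cost, orient): orient = +1 if its flow is
        flow + theta, -1 if it is flow - theta. Returns (cycle cost, theta)."""
        delta = sum(orient * cost for _, cost, orient in arcs)
        theta_max = min(f for f, _, o in arcs if o == -1)   # f - theta >= 0
        theta_min = -min(f for f, _, o in arcs if o == +1)  # f + theta >= 0
        # linear objective: push theta to an endpoint of the interval
        theta = theta_max if delta <= 0 else theta_min
        return delta, theta

    arcs = [(3, 4, -1), (2, 2, +1), (4, 1, +1), (5, 3, +1), (3, 3, -1)]
    print(cycle_improvement(arcs))   # (-1, 3): cycle cost -$1, set theta = 3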

We can extend this observation in several ways:

(i) If the per unit cycle cost Δ = 0, we are indifferent to all solutions in the interval −2 ≤ θ ≤ 3 and therefore can again choose a solution as good as the original one, but with the flow of at least one arc in the cycle at value zero.

(ii) If we impose upper bounds on the flow, such as 6 units on all arcs, then the range of flows that preserves feasibility (mass balances, lower and upper bounds on flows) is again an interval, in this case −2 ≤ θ ≤ 1, and we can find a solution as good as the original one by choosing θ = −2 or θ = 1. At these values of θ, the solution is cycle free; for some arc on the cycle, either the flow is zero (the lower bound) or the flow is at its upper bound (x12 = 6 at θ = 1).

Some additional notation will be helpful in encapsulating and summarizing our observations up to this point. Let us say that an arc (i, j) is a free arc with respect to a given feasible flow x if x_ij lies strictly between the lower and upper bounds imposed upon it. We will also say that arc (i, j) is restricted if its flow x_ij equals either its lower or upper bound. In this terminology, a solution x has the "cycle free property" if the network contains no cycle made up entirely of free arcs.

In general, our prior observations apply to any cycle in a network. Consequently, given any initial flow we can apply our previous argument repeatedly, one cycle at a time, and establish the following fundamental result:

Theorem 2.5: Cycle Free Property. If the objective function value of the network optimization problem minimize { cx : Nx = b, l ≤ x ≤ u } is bounded from below on the feasible region and the problem has a feasible solution, then at least one cycle free solution solves the problem.

Note that the lower bound assumption imposed upon the objective value is necessary to rule out situations in which the flow change variable θ in our prior argument can be made arbitrarily large in a negative cost cycle, or arbitrarily small (negative) in a positive cost cycle; for example, this condition rules out any negative cost directed cycle with no upper bounds on its arc flows.

It is useful to interpret the cycle free property in another way. Suppose that the network is connected (i.e., there is an undirected path connecting every two pairs of nodes). Then, either a given cycle free solution x contains a free arc that is incident to each node in the network, or we can add to the free arcs some restricted arcs so that the resulting set S of arcs has the following three properties:

(i) S contains all the free arcs in the current solution,

(ii) S contains no undirected cycles, and

(iii) No superset of S satisfies properties (i) and (ii).

We will refer to any set S of arcs satisfying (i) through (iii) as a spanning tree of the network, and any feasible solution x for the network together with a spanning tree S that contains all free arcs as a spanning tree solution. (At times we will also refer to a given cycle free solution x as a spanning tree solution, with the understanding that restricted arcs may be needed to form the spanning tree S.)

Figure 2.2 illustrates a spanning tree corresponding to a cycle free solution. Note that it may be possible (and often is) to complete the set of free arcs into a spanning tree in several ways (e.g., replace arc (2, 4) with arc (3, 5) in Figure 2.2(c)); therefore, a given cycle free solution can correspond to several spanning trees S.

We will say that a spanning tree solution x is nondegenerate if the set of free arcs forms a spanning tree. In this case, the spanning tree S corresponding to the flow x is unique. If the free arcs do not span (i.e., are not incident to) all the nodes, then any spanning tree corresponding to this solution will contain at least one arc whose flow equals the arc's lower or upper bound. In this case, we will say that the spanning tree is degenerate.

Figure 2.2. Converting a cycle free solution to a spanning tree solution. (a) An example network with arc flows and capacities represented as (x_ij, u_ij). (b) A cycle free solution. (c) A spanning tree solution.
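Completing the free arcs into a spanning tree is a standard exercise; one common way to do it (our choice, not prescribed by the text) is a union-find scan that accepts every free arc and then adds restricted arcs that do not close an undirected cycle. The small instance below is illustrative, not the network of Figure 2.2:

    def spanning_tree_from_free_arcs(nodes, free_arcs, restricted_arcs):
        """Return a maximal cycle-free arc set S containing all free arcs.
        Assumes the free arcs themselves contain no undirected cycle."""
        parent = {v: v for v in nodes}

        def find(v):                     # union-find with path halving
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v

        S = []
        for i, j in list(free_arcs) + list(restricted_arcs):
            ri, rj = find(i), find(j)
            if ri != rj:                 # arc joins two components: keep it
                parent[ri] = rj
                S.append((i, j))
        return S

    print(spanning_tree_from_free_arcs(
        [1, 2, 3, 4, 5], free_arcs=[(1, 2), (1, 3)],
        restricted_arcs=[(2, 3), (2, 4), (3, 5)]))
    # [(1, 2), (1, 3), (2, 4), (3, 5)]: two restricted arcs complete the tree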

When restated in the terminology of spanning trees, the cycle free property becomes another fundamental result in network flow theory.

Theorem 2.6: Spanning Tree Property. If the objective function value of the network optimization problem minimize { cx : Nx = b, l ≤ x ≤ u } is bounded from below on the feasible region and the problem has a feasible solution, then at least one spanning tree solution solves the problem.

We might note that the spanning tree property is valid for concave cost versions of the flow problem as well, i.e., those versions where the objective function is a concave function of the flow vector x. This extended version of the spanning tree property is valid because if the incremental cost of a cycle is negative at some point, then the incremental cost remains negative (by concavity) as we augment a positive amount of flow around the cycle. Hence, we can increase flow in a negative cost cycle until at least one arc reaches its lower or upper bound.

2.3 Networks, Linear and Integer Programming

The cycle free property and spanning tree property have many other important consequences. In particular, these two properties imply that network flow theory lies at the cusp between two large and important subfields of optimization: linear and integer programming. This positioning may, to a large extent, account for the emergence of network flow theory as a cornerstone of mathematical programming.

Triangularity Property

Before establishing our first results relating network flows to linear and integer programming, we first make a few observations. Note that any spanning tree S has at least one (actually at least two) leaf nodes, that is, a node that is incident to only one arc in the spanning tree. Consequently, if we rearrange the rows and columns of the node-arc incidence matrix of S so that the leaf node is row 1 and its incident arc is column 1, then row 1 has only a single nonzero entry, a +1 or a −1, which lies on the diagonal of the node-arc incidence matrix. If we now remove this leaf node and its incident arc from S, the resulting network is a spanning tree on the remaining nodes. Consequently, by rearranging all but row and column 1 of the node-arc incidence matrix for this smaller spanning tree, we can now assume that row 2 has a +1 or −1 element on the diagonal and zeros to the right of the diagonal. Continuing in this way permits us to rearrange the node-arc incidence matrix of the spanning tree so that its first n−1 rows form a lower triangular matrix. Figure 2.3 shows the resulting lower triangular form (actually, one of several possibilities) for the spanning tree in Figure 2.2(c).

Figure 2.3. The node-arc incidence matrix L of the spanning tree of Figure 2.2(c), rearranged into lower triangular form.
Now consider a spanning tree solution x and partition it as x = (x¹, x²), where x¹ is the vector of flows on the spanning tree arcs (ordered as in the triangular rearrangement) and x² is the vector of flows on the nontree arcs, each component of which equals an arc lower or upper bound. The mass balance constraints of the first n−1 nodes in this ordering can then be written as

L x¹ = b − M x²,     (2.1)

in which L is the lower triangular matrix just described and M contains the corresponding rows of the columns of the nontree arcs.

Now further suppose that the supply/demand vector b and the lower and upper bound vectors l and u have all integer components. Then the right hand side b − M x² of (2.1) is an integer vector, and the components of x¹ are integral as well: since the first diagonal element of L equals +1 or −1, the first equation in (2.1) implies that the first component of x¹ is an integer; now if we move this component to the right of the equality in (2.1), the right hand side remains integral and we can solve for the second component from the second equation. Continuing this forward substitution by successively solving for one variable at a time shows that x¹ is integral. This argument shows that for problems with integral data, every spanning tree solution is integral. Since the spanning tree property ensures that network flow problems always have optimal spanning tree solutions, we have established the following fundamental result:

Theorem 2.8: Integrality Property. If the objective value of the network optimization problem minimize { cx : Nx = b, l ≤ x ≤ u } is bounded from below on the feasible region, the problem has a feasible solution, and the vectors b, l, and u are integer, then the problem has at least one integer optimum solution.

Our observation at the end of Section 2.2 shows that this integrality property is also valid in the more general situation in which the objective function is concave.

Relationship to Linear Programming

The network flow problem with the objective function cx is a linear program which, as the last result shows, always has an integer optimum solution. Network flow problems are distinguished as the most important large class of problems with this property. Linear programs, or generalizations with concave cost objective functions, also satisfy another well-known property: they always have, in the parlance of convex analysis, extreme point solutions, that is, solutions x with the property that x cannot be expressed as a weighted combination of two other feasible solutions y and z, as x = αy + (1−α)z for some weight 0 < α < 1. Since, as we have seen, network flow problems always have cycle free solutions, we might expect to discover that extreme point

solutions and cycle free solutions are closely related, and indeed they are, as shown by the next result.

Theorem 2.9: Extreme Point Property. If the objective value of the network optimization problem minimize { cx : Nx = b, l ≤ x ≤ u } is bounded from below on the feasible region and the problem has a feasible solution, then the problem has an extreme point solution. Further, every cycle free solution is an extreme point and, conversely, every extreme point is a cycle free solution.

Proof. With the background developed already, this result is easy to establish. First, if x is not a cycle free solution, then it cannot be an extreme point, since by perturbing the flow by a small amount θ and by a small amount −θ around a cycle with free arcs, as in our discussion of Figure 2.1, we define two feasible solutions y and z with the property that x = (1/2)y + (1/2)z. Conversely, suppose that x is not an extreme point and is represented as x = αy + (1−α)z with 0 < α < 1. Let x', y' and z' be the components x_ij, y_ij and z_ij of these vectors for which y and z differ, i.e., l_ij ≤ y_ij < x_ij < z_ij ≤ u_ij or l_ij ≤ z_ij < x_ij < y_ij ≤ u_ij, and let N' denote the submatrix of N corresponding to these arcs. Then N'(z' − y') = 0, which implies, by flow decomposition, that the network contains an undirected cycle with y_ij not equal to z_ij for any arc on the cycle. But by the definition of the components x', y' and z', this cycle contains only free arcs in the solution x. Therefore, x is not a cycle free solution.

In linear programming, extreme points are usually represented algebraically as basic solutions; for these special solutions, the columns B of the constraint matrix of a linear program corresponding to variables strictly between their lower and upper bounds are linearly independent. Just as cycle free solutions for network flow problems correspond to extreme points, spanning tree solutions correspond to basic solutions:

Theorem 2.10: Basis Property. Every spanning tree solution to a network flow problem is a basic solution and, conversely, every basic solution is a spanning tree solution.

Let us now make one final connection between networks and linear and integer programming, namely, between bases and the integrality property. Consider a linear program with constraints Ax = b, and suppose that N = [B, M] for some basis B and that x = (x¹, x²) is a compatible partitioning of x. Also suppose that we eliminate the redundant row so that B is a nonsingular matrix. (We can extend B to a basis of the constraint matrix by adding a maximal number of columns.)

Then Bx¹ = b − Mx², or x¹ = B⁻¹(b − Mx²). The triangularity property shows that the determinant of any basis (excluding the redundant row) equals the product of the diagonal elements in the triangular representation of the basis, and therefore equals +1 or −1. Consequently, by Cramer's rule from linear algebra, it is possible to find each component of x¹ as sums and multiples of components of b' = b − Mx², divided by the determinant of B; if all of b, l, u and M are composed of integers, then x¹, and consequently x, is an integer vector. Even more, whenever x corresponds to a basic feasible solution and the problem data A, b, l and u are integer, x is an integer vector.

How are these notions related to network flows and the integrality property? Let us call a matrix A unimodular if all of its bases have determinants either +1 or −1, and call it totally unimodular if all of its square submatrices have determinant equal to either 0, +1, or −1. Since the bases of a node-arc incidence matrix N correspond to spanning trees, the triangularity property shows that a node-arc incidence matrix is unimodular. It is totally unimodular as well. For, let S be any square submatrix of N. If S is singular, it has determinant 0. Otherwise, it must correspond to a cycle free solution, which is a spanning tree on each of its connected components. Then, it is easy to see that the determinant of S is the product of the determinants of the spanning trees, and, therefore, it must be equal to +1 or −1. (An induction argument, using an expansion of determinants by minors, provides an alternate proof of this totally unimodular property.)

Theorem 2.11: Total Unimodularity Property. The constraint matrix of a minimum cost network flow problem is totally unimodular.

2.4 Network Transformations

Frequently, analysts use network transformations to simplify a network problem, to show equivalences of different network problems, or to put a network problem into a standard form required by a computer code. In this subsection, we describe some of these important transformations.

T1. (Removing Nonzero Lower Bounds). If an arc (i, j) has a positive lower bound l_ij, then we can replace x_ij by x'_ij + l_ij in the problem formulation. As measured by the new variable x'_ij, the flow on arc (i, j) will have a lower bound of 0.

This transformation has a simple network interpretation: we begin by sending l_ij units of flow on the arc and then measure incremental flow above l_ij.

Figure 2.4. Removing a nonzero lower bound: b(i) becomes b(i) − l_ij and b(j) becomes b(j) + l_ij.

T2. (Removing Capacities). If an arc (i, j) has a positive capacity u_ij, then we can remove the capacity, making the arc uncapacitated, using the following ideas. The capacity constraint of arc (i, j) can be written as x_ij + s_ij = u_ij, if we introduce a slack variable s_ij ≥ 0. Multiplying both sides by −1, we obtain

−x_ij − s_ij = −u_ij.     (2.2)

Observe that the variable x_ij now appears in three constraints: the mass balance constraints of nodes i and j, and (2.2). By subtracting (2.2) from the mass balance constraint of node j, we assure that each of x_ij and s_ij appears in exactly two constraints, in one with a positive sign and in the other with a negative sign. This transformation is tantamount to turning the slack variable into an additional node k, with equation (2.2) as the mass balance constraint for that node: the transformed network contains the uncapacitated arcs (i, k) and (j, k), carrying the flows x_ij and s_ij at costs c_ij and 0, with supplies b(i), b(j) + u_ij, and −u_ij at nodes i, j, and k.

Figure 2.5. Removing the capacity of arc (i, j) by introducing an additional node k.

The new flow x'_jk measures the amount of flow we "remove" from the "full capacity" flow of u_ij.

Figure 2.6. Removing arc capacities.

Further, if x_ij is a flow on arc (i, j) in the original network, the corresponding flow in the transformed network is x'_ik = x_ij and x'_jk = u_ij − x_ij; both the flows x and x' have the same cost. Likewise, a flow x'_ik, x'_jk in the transformed network yields a flow x_ij = x'_ik of the same cost in the original network. Since x'_ik and x'_jk are both nonnegative and x'_ik + x'_jk = u_ij, we have x_ij = x'_ik ≤ u_ij; consequently, this transformation is valid.

T3. (Arc Reversal). Let u_ij represent the capacity of the arc (i, j), or an upper bound on the arc flow if the arc is uncapacitated. This transformation is a change in variable: replace x_ij by u_ij − x_ji in the problem formulation. Doing so replaces the arc (i, j), with its associated cost c_ij, by the arc (j, i) with cost −c_ij and the same capacity. This transformation has the following network interpretation: send u_ij units of flow on the arc and then replace arc (i, j) by arc (j, i) of the same cost and capacity. Therefore, this transformation permits us to remove arcs with negative costs.

T4. (Node Splitting). This transformation splits each node i into two nodes i and i', replaces each original arc (i, j) by an arc (i', j) of the same cost and capacity, and adds an arc (i, i') of cost zero for each i. Figure 2.7 illustrates the resulting network when we carry out the node splitting transformation for all the nodes of a network.

Figure 2.7. The node splitting transformation. (a) The original network. (b) The transformed network.

This transformation is also used in practice for representing node activities and node data in the standard "arc flow" form of the network flow problem: we simply associate the cost or capacity for the throughput of node i with the new arc (i, i'). We shall see the usefulness of this transformation in Section 5.11 when we use it to reduce a shortest path problem with arbitrary arc lengths to an assignment problem.
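These transformations are purely mechanical rewrites of the instance data. A sketch of T1 and T3 in Python (the encoding of an instance as parallel dictionaries is our own convention):

    def remove_lower_bound(arc, inst):
        """T1: substitute x = x' + l on the arc; x' has lower bound 0."""
        i, j = arc
        l = inst["lower"][arc]
        inst["b"][i] -= l          # l units are pre-shipped out of i ...
        inst["b"][j] += l          # ... and into j
        inst["upper"][arc] -= l
        inst["lower"][arc] = 0
        return inst

    def reverse_arc(arc, inst):
        """T3: substitute x_ij = u_ij - x_ji. Assumes l_ij = 0 (apply T1
        first). Arc (i, j) with cost c becomes arc (j, i) with cost -c."""
        i, j = arc
        u, c = inst["upper"].pop(arc), inst["cost"].pop(arc)
        inst["lower"].pop(arc)
        inst["b"][i] -= u          # interpretation: send u_ij units first,
        inst["b"][j] += u          # then measure the amount sent back
        inst["upper"][(j, i)], inst["cost"][(j, i)] = u, -c
        inst["lower"][(j, i)] = 0
        return inst

Both substitutions shift the objective by a constant (c_ij l_ij for T1 and c_ij u_ij for T3), which any bookkeeping around these routines would track separately.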

3. SHORTEST PATHS

Shortest path problems are the most fundamental and also the most commonly encountered problems in the study of transportation and communication networks. The shortest path problem arises when trying to determine the shortest, cheapest, or most reliable path between one or many pairs of nodes in a network. More importantly, algorithms for a wide variety of combinatorial optimization problems such as vehicle routing and network design often call for the solution of a large number of shortest path problems as subroutines. Consequently, designing and testing efficient algorithms for the shortest path problem has been a major area of research in network optimization.

Researchers have studied several different (directed) shortest path models. The major types of shortest path problems, in increasing order of solution difficulty, are (i) finding shortest paths from one node to all other nodes when arc lengths are nonnegative; (ii) finding shortest paths from one node to all other nodes for networks with arbitrary arc lengths; (iii) finding shortest paths from every node to every other node; and (iv) finding various types of constrained shortest paths between nodes (e.g., shortest paths with turn penalties, shortest paths visiting specified nodes, the k-th shortest path). In this section, we discuss problem types (i), (ii) and (iii).

The algorithmic approaches for solving problem types (i) and (ii) can be classified into two groups: label setting and label correcting. Each approach assigns tentative distance labels (shortest path distances) to nodes at each step. Label setting methods designate one or more labels as permanent (optimum) at each iteration. Label correcting methods consider all labels as temporary until the final step, when they all become permanent. The label setting methods are applicable to networks with nonnegative arc lengths, whereas label correcting methods apply to networks with negative arc lengths as well. We will show that label setting methods have the most attractive worst-case performance; nevertheless, practical experience has shown the label correcting methods to be modestly more efficient.

Dijkstra's algorithm is the most popular label setting method. In this section, we first discuss a simple implementation of this algorithm that achieves a time bound of O(n²). We then describe two more sophisticated implementations that achieve improved running times in practice and in theory. Next, we consider a generic version of the label correcting method, outlining one special implementation of this general approach that runs in polynomial time and another implementation that performs very

well in practice. Finally, we discuss a method to solve the all pairs shortest path problem.

3.1 Dijkstra's Algorithm

We consider a network G = (N, A) with an arc length c_ij associated with each arc (i, j) ∈ A. Let A(i) represent the set of arcs emanating from node i ∈ N, and let C = max {c_ij : (i, j) ∈ A}. In this section, we assume that arc lengths are integer numbers, and in this section as well as in Sections 3.2 and 3.3, we further assume that arc lengths are nonnegative. We suppose that node s is a specially designated node, and assume without any loss of generality that the network G contains a directed path from s to every other node. We can ensure this condition by adding an artificial arc (s, j), with a suitably large arc length, for each node j. We invoke this connectivity assumption throughout this section.

Dijkstra's algorithm finds shortest paths from the source node s to all other nodes. The basic idea of the algorithm is to fan out from node s and label nodes in order of their distances from s. Each node i has a label, denoted by d(i): the label is permanent once we know that it represents the shortest distance from s to i, and temporary otherwise. Initially, we give node s a permanent label of zero, and each other node j a temporary label equal to c_sj if (s, j) ∈ A, and ∞ otherwise. At each iteration, the label of a node i is its shortest distance from the source node along a path whose internal nodes are all permanently labeled. The algorithm selects a node i with the minimum temporary label, makes it permanent, and scans arcs in A(i) to update the distance labels of adjacent nodes. The algorithm terminates when it has designated all nodes as permanently labeled. The correctness of the algorithm relies on the key observation (which we prove later) that it is always possible to designate the node with the minimum temporary label as permanent. The following algorithmic representation is a basic implementation of Dijkstra's algorithm.

algorithm DIJKSTRA;
begin
    P : = {s}; T : = N − {s};
    d(s) : = 0 and pred(s) : = 0;
    d(j) : = c_sj and pred(j) : = s if (s, j) ∈ A, and d(j) : = ∞ otherwise;
    while P ≠ N do
    begin
        (node selection) let i ∈ T be a node for which d(i) = min {d(j) : j ∈ T};
        P : = P ∪ {i}; T : = T − {i};
        (distance update) for each (i, j) ∈ A(i) do
            if d(j) > d(i) + c_ij, then d(j) : = d(i) + c_ij and pred(j) : = i;
    end;
end;

The algorithm associates a predecessor index, denoted by pred(i), with each node i ∈ N. The algorithm updates these indices to ensure that pred(i) is the last node prior to i on the (tentative) shortest path from node s to node i. At termination, these indices allow us to trace back along a shortest path from each node to the source.

To establish the validity of Dijkstra's algorithm, we use an inductive argument. At each point in the algorithm, the nodes are partitioned into two sets, P and T. Assume that the label of each node in P is the length of a shortest path from the source, whereas the label of each node j in T is the length of a shortest path subject to the restriction that each node in the path (except j) belongs to P. Then it is possible to transfer the node i in T with the smallest label d(i) to P for the following reason: any path from the source to node i must contain a first node k that is in T. However, node k must be at least as far away from the source as node i, since its label is at least that of node i; furthermore, the segment of the path between node k and node i has a nonnegative length because arc lengths are nonnegative. This observation shows that the length of the path is at least d(i), and hence it is valid to permanently label node i. After the algorithm has permanently labeled node i, the temporary labels of some nodes in T might decrease, because node i could become an internal node in the tentative shortest paths to these nodes. We must thus scan all of the arcs (i, j) in A(i); if d(j) > d(i) + c_ij, then setting d(j) = d(i) + c_ij updates the labels of those nodes.

The computational time for this algorithm can be split into the time required by its two basic operations: selecting nodes and updating distances.
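A compact rendering of this O(n²) implementation in Python (adjacency and length data are illustrative):

    INF = float("inf")

    def dijkstra(N, A, c, s):
        """Basic Dijkstra: A[i] lists the heads of arcs out of i, and
        c[(i, j)] >= 0 are arc lengths. Returns labels d and indices pred."""
        d = {j: INF for j in N}
        pred = {j: None for j in N}
        d[s] = 0
        T = set(N)
        while T:
            i = min(T, key=lambda j: d[j])   # node selection: O(n) scan
            T.remove(i)                      # the label of i becomes permanent
            for j in A.get(i, []):           # distance update step
                if d[j] > d[i] + c[(i, j)]:
                    d[j] = d[i] + c[(i, j)]
                    pred[j] = i
        return d, pred

    A = {1: [2, 3], 2: [3, 4], 3: [4], 4: []}
    c = {(1, 2): 13, (1, 3): 0, (2, 3): 5, (2, 4): 2, (3, 4): 9}
    print(dijkstra([1, 2, 3, 4], A, c, 1)[0])   # {1: 0, 2: 13, 3: 0, 4: 9}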

In an iteration, the algorithm requires O(n) time to identify the node with minimum temporary label and takes O(|A(i)|) time to update the distance labels of adjacent nodes. Thus, overall, the algorithm requires O(n²) time for selecting nodes and O(Σ_{i ∈ N} |A(i)|) = O(m) time for updating distances. This implementation of Dijkstra's algorithm thus runs in O(n²) time.

Dijkstra's algorithm has been a subject of much research. Researchers have attempted to reduce the node selection time without substantially increasing the time for updating distances. Consequently, using clever data structures, they have suggested several implementations of the algorithm. These implementations have either dramatically reduced the running time of the algorithm in practice or improved its worst case complexity. In the following discussion, we describe Dial's algorithm, which is currently comparable to the best label setting algorithm in practice. Subsequently we describe an implementation using R-heaps, which is nearly the best known implementation of Dijkstra's algorithm from the perspective of worst-case analysis. (A more complex version of R-heaps gives the best worst-case performance for all choices of the parameters n, m, and C.)

3.2 Dial's Implementation

The bottleneck operation in Dijkstra's algorithm is node selection. To improve the algorithm's performance, we must ask the following question: instead of scanning all temporarily labeled nodes at each iteration to find the one with the minimum distance label, can we reduce the computation time by maintaining distances in a sorted fashion? Dial's algorithm tries to accomplish this objective, and reduces the algorithm's computation time in practice, using the following fact:

FACT 3.1. The distance labels that Dijkstra's algorithm designates as permanent are nondecreasing.

This fact follows from the observation that the algorithm permanently labels a node i with smallest temporary label d(i), and, while scanning arcs in A(i) during the distance update step, never decreases the distance label of any permanently labeled node since arc lengths are nonnegative.

FACT 3.1 suggests the following scheme for node selection. We maintain nC+1 buckets numbered 0, 1, 2, ..., nC. Bucket k stores each node whose temporary distance label is k. Recall that C represents the largest arc length in the network and, hence, nC is an upper bound on the distance labels of all the nodes. In the node selection step, we scan the buckets in increasing order until we identify the first nonempty bucket. The distance label of each node in this bucket is minimum.

One by one, we delete these nodes from the bucket, making them permanent and scanning their arc lists to update the distance labels of adjacent nodes. We then resume the scanning of higher numbered buckets in increasing order to select the next nonempty bucket.

By storing the content of these buckets carefully, it is possible to add, delete, and select the next element of any bucket very efficiently, in fact, in a time bounded by some constant. One implementation uses a data structure known as a doubly linked list. In this data structure, we order the content of each bucket arbitrarily, storing two pointers for each entry: one pointer to its immediate predecessor and one to its immediate successor. Doing so permits us, by rearranging the pointers, to select easily the topmost node from the list, add a bottommost node, or delete a node. Consequently, we can add, delete, or select the next node of any bucket in O(1) time. If we relabel a node and decrease its temporary distance label, we move it from a higher index bucket to a lower index bucket; this transfer also requires O(1) time. Consequently, this algorithm runs in O(m + nC) time and uses nC+1 buckets. The following fact allows us to reduce the number of buckets to C+1.

FACT 3.2. If d(i) is the distance label that the algorithm designates as permanent at the beginning of an iteration, then at the end of that iteration d(j) ≤ d(i) + C for each finitely labeled node j in T.

This fact follows by noting that (i) d(k) ≤ d(i) for each k ∈ P (by FACT 3.1), and (ii) for each finitely labeled node j in T, d(j) = d(k) + c_kj for some k ∈ P (by the property of distance updates). Hence, d(j) ≤ d(i) + c_kj ≤ d(i) + C. In other words, all finite temporary labels are bracketed from below by d(i) and from above by d(i) + C. Consequently, C+1 buckets suffice to store nodes with finite temporary distance labels. We need not store the nodes with infinite temporary distance labels in any of the buckets; we can add them to a bucket when they first receive a finite distance label.

Dial's algorithm uses C+1 buckets numbered 0, 1, 2, ..., C, which can be viewed as arranged in a circle as in Figure 3.1. This implementation stores a temporarily labeled node j with distance label d(j) in the bucket d(j) mod (C+1). Consequently, during the entire execution of the algorithm, bucket k stores temporarily labeled nodes with distance labels k, k+(C+1), k+2(C+1), and so forth; however, because of FACT 3.2, at any point in time this bucket will hold only nodes with the same distance labels. This storage scheme also implies that if bucket k contains a node with minimum distance label, then buckets k+1, k+2, ..., C, 0, 1, 2, ..., k−1 store nodes in increasing values of the distance labels.
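A compact sketch of the circular-bucket scheme just described (illustrative; Python sets stand in for the doubly linked lists, and also give O(1) insert and delete). It assumes the text's connectivity assumption, i.e., every node is reachable from s:

    def dial_shortest_paths(N, A, c, s):
        """Dijkstra with Dial's buckets: node j sits in bucket d(j) mod (C+1)."""
        INF = float("inf")
        C = max(c.values())
        buckets = [set() for _ in range(C + 1)]
        d = {j: INF for j in N}
        d[s] = 0
        buckets[0].add(s)
        k, remaining = 0, len(N)
        while remaining:
            while not buckets[k % (C + 1)]:      # scan around the circle
                k += 1
            bucket = buckets[k % (C + 1)]
            while bucket:
                i = bucket.pop()                 # d(i) = k becomes permanent
                remaining -= 1
                for j in A.get(i, []):
                    new = d[i] + c[(i, j)]
                    if new < d[j]:
                        if d[j] < INF:           # move j to a lower bucket
                            buckets[d[j] % (C + 1)].discard(j)
                        d[j] = new
                        buckets[new % (C + 1)].add(j)
            k += 1
        return d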

Figure 3.1. Bucket arrangement in Dial's algorithm.

Dial's algorithm examines the buckets sequentially, in a wrap around fashion, to identify the first nonempty bucket. In the next iteration, it reexamines the buckets starting at the place where it left off earlier. A potential disadvantage of this scheme, as compared to the original algorithm, is that C may be very large, necessitating large storage and increased computational time. In addition, the algorithm may wrap around as many as n−1 times, resulting in a large computation time. The algorithm runs in O(m + nC) time, which is not even polynomial time; rather, it is pseudopolynomial time. For example, if C = n⁴, then the algorithm runs in O(n⁵) time, and if C = 2ⁿ, the algorithm takes exponential time in the worst case.

The algorithm, however, typically does not encounter these difficulties in practice. For most applications, C is not very large, and the number of passes through all of the buckets is much less than n−1.

The search for the theoretically fastest implementations of Dijkstra's algorithm has led researchers to develop several new data structures for sparse networks. In the next section, we consider an implementation using a data structure called a redistributive heap (R-heap) that runs in O(m + n log nC) time. The discussion of this implementation is of a more advanced nature than the previous sections, and the reader can skip it without any loss of continuity.

3.3 R-Heap Implementation

Our first O(n²) implementation of Dijkstra's algorithm and Dial's implementation represent two extremes. The first implementation considers all the temporarily labeled nodes together (in one large bucket, so to speak) and searches for a node with the smallest label. Dial's algorithm separates nodes by storing any two nodes with different labels in different buckets. Could we improve upon these methods by adopting an intermediate approach, perhaps by storing many, but not all, labels in a bucket? For example, instead of storing only nodes with a temporary label of k in the k-th bucket, we could store temporary labels from 100k to 100k+99 in bucket k. The temporary labels that can be stored in a bucket make up the range of the bucket; the cardinality of the range is called its width. For the preceding example, the range of bucket k is [100k .. 100k+99] and its width is 100.

Using widths of size k permits us to reduce the number of buckets needed by a factor of k. But in order to find the smallest distance label, we need to search all of the elements in the smallest indexed nonempty bucket. Using a width of 100, say, for each bucket reduces the number of buckets, but still requires us to search through the lowest numbered nonempty bucket to find the node with minimum temporary label.

If we could devise a variable width scheme, with a width of one for the lowest numbered bucket, we could conceivably retain the advantages of both the wide bucket and narrow bucket approaches. The R-heap algorithm we consider next uses variable length widths and changes the ranges dynamically. In the version of redistributive heaps that we present, the widths of the buckets are 1, 1, 2, 4, 8, 16, ..., so that the number of buckets needed is only O(log nC). Moreover, we dynamically modify the ranges of numbers stored in each bucket, and we reallocate nodes with temporary distance labels in a way that stores the minimum distance label in a bucket whose width is 1. In this way, as in the previous algorithm, we avoid the need to search an entire bucket to find the minimum.

We now describe the R-heap in more detail. For a given shortest path problem, the R-heap consists of 1 + ⌈log nC⌉ buckets, numbered 0, 1, 2, ..., K = ⌈log nC⌉. We represent the range of bucket k by range(k), which is a (possibly empty) closed interval of integers. We store a temporary node i in bucket k if d(i) ∈ range(k). The nodes in bucket k are denoted by the set CONTENT(k). The algorithm will change the ranges of the buckets dynamically, and each time it changes the ranges, it redistributes the nodes in the buckets.

Initially, the buckets have the following ranges:

range(0) = [0];
range(1) = [1];
range(2) = [2 .. 3];
range(3) = [4 .. 7];
range(4) = [8 .. 15];
...
range(K) = [2^(K−1) .. 2^K − 1].

These ranges will change dynamically; however, the widths of the buckets will not increase beyond their initial widths. Essentially, we have replaced the node selection step (i.e., finding a node with the smallest temporary distance label) by a sequence of redistribution steps in which we shift nodes constantly to lower indexed buckets. Eventually, the minimum temporary label is in a bucket of width one, and the algorithm selects it in an additional O(1) time.

Suppose, for example, that the initial minimum distance label is quickly determined to be in the range [8 .. 15]. We could verify this fact by verifying that buckets 0 through 3 are empty and bucket 4 is nonempty. At this point, however, we could not identify the minimum distance label without searching all nodes in bucket 4. The following observation is helpful. Since the minimum index nonempty bucket is the bucket whose range is [8 .. 15], we know that no temporary label will ever again be less than 8, and hence buckets 0 to 3 will never be needed again. Rather than leaving these buckets idle, we can redistribute the range of bucket 4 (whose width is 8) to the previous buckets (whose combined width is 8), resulting in the ranges [8], [9], [10 .. 11], [12 .. 15], and we set the range of bucket 4 to the empty set. We then shift (or redistribute) the temporarily labeled nodes of bucket 4 into the appropriate buckets 0 to 3. Thus, each of the elements of bucket 4 moves to a lower indexed bucket. Roughly speaking, the redistribution time is O(n log nC) in total, since each node can be shifted at most K = 1 + ⌈log nC⌉ times.

Actually, we carry out these operations a bit differently. Since we will be scanning all of the elements of bucket 4 in the redistribute step anyway, it makes sense to first find the minimum temporary label in the bucket. Suppose, for example, that the minimum label is 11. Then rather than redistributing the range [8 .. 15], we need only redistribute the subrange [11 .. 15].

. . bucket has width one. (13 .7] 6 [32 .. So. C=20 and K = flog 1201 = 7. whose width To reiterate.3 specifies the starting solution of Dijkstra's algorithm and the initial R-heap.3] (3) 3 [4 .63] [64 .31] {5} Buckets: 12 [2 . greater than 1.2 The shortest path example. To select the node with the smallest distance label. In number beside each length. the minimum nonempty to buckets bucket is whose width we redistribute the range of bucket k into buckets to k-1. we scan the buckets is 0. We now the figure. and then we reassign the content of bucket k time is The is redistribution 0(n log nC) and the running time of the algorithm 0(m + n log nC). Figure 3. source Figure 3. 1. K to find the first nonempty bucket.3 The initial R-heap. to k-1. at the end of this redistribution. the has width 1.15] nC=120 5 [16. we is 1. [15]. 7 127] Ranges: CONTENT: (2. are guaranteed that the minimum temporary label is stored in bucket 0. we do is not carry out the actual node selection step until the If minimum nonempty bucket k.2. e.. every node in this bucket has the same (minimum) distance . In our example. For this problem. bucket nonempty.. the illustrate R-heaps on the shortest path example given in Figure arc indicates its 3. 14]... [12].57 would be [n]..2.. Nodei: Label d(i): 12 13 [0] [1] 3 4 15 5 6 20 4 [8 .4) (6) Figure 3. Since bucket label. Moreover. .

So the algorithm designates node 3 as permanent, deletes node 3 from the R-heap, and scans the arc (3, 5) to change the distance label of node 5 from 20 to 9. We check whether the new distance label of node 5 is contained in the range of its present bucket, which is bucket 5. It isn't. Since its distance label has decreased, node 5 should move to a lower indexed bucket. So we sequentially scan the buckets from right to left, starting at bucket 4, to identify the first bucket whose range contains the number 9, which is bucket 4. Node 5 moves from bucket 5 to bucket 4. Figure 3.4 shows the new R-heap.

Figure 3.4. The R-heap after node 3 has been made permanent and node 5 has moved to bucket 4.

At this point buckets 0 through 3 are empty, so the first nonempty bucket is bucket 4, whose width exceeds 1. The smallest label in it is 9 (that of node 5), so we redistribute the useful subrange [9 .. 15] over buckets 0 to 3 as [9], [10], [11 .. 12], [13 .. 15]: node 5 moves to bucket 0, and nodes 2 and 4 (with labels 13 and 15) move to bucket 3, leaving CONTENT(0) = {5}, CONTENT(1) = CONTENT(2) = ∅, CONTENT(3) = {2, 4}, and CONTENT(4) = ∅. The algorithm then selects node 5 from bucket 0.

We now summarize our discussion. Suppose that j ∈ CONTENT(k) and that d(j) decreases. If the modified d(j) is no longer in range(k), then we sequentially scan lower numbered buckets from right to left and add the node to the appropriate bucket. Overall, this operation takes O(m + nK) time: the term m reflects the number of distance updates, and the term nK arises because every time a node moves, it moves to a lower indexed bucket, and since there are K+1 buckets, each node can move at most K times; so the nodes move a total of at most nK times.

Next we consider the node selection step. Node selection begins by scanning the buckets from left to right to identify the first nonempty bucket, say bucket k. This operation takes O(K) time per iteration and O(nK) time in total. If k = 0 or k = 1, then any node in the selected bucket has the minimum distance label. If k ≥ 2, we redistribute the "useful" range of bucket k into the buckets 0, 1, ..., k−1 and reinsert its content into those buckets. If the range of bucket k is [l .. u] and the smallest distance label of a node in the bucket is d_min, then the useful range of the bucket is [d_min .. u]. The algorithm redistributes the useful range in the following manner: we assign the first integer to bucket 0, the next integer to bucket 1, the next two integers to bucket 2, the next four integers to bucket 3, and so on. Since bucket k has width at most 2^(k−1), and since the widths of the first k buckets can be as large as 1, 1, 2, ..., 2^(k−2), for a total potential width of 2^(k−1), we can redistribute the useful range of bucket k over the buckets 0, 1, ..., k−1 in the manner described. This redistribution of ranges and the subsequent reinsertions of nodes empties bucket k and moves the node with the smallest distance label to bucket 0. Whenever we examine a node in this way, it moves to a lower indexed bucket; as before, each node can move at most K times, so all the nodes move at most nK times in total. Thus, the node selection steps take O(nK) total time. Since K = ⌈log nC⌉, the algorithm runs in O(m + n log nC) time.
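The redistribution can be realized with a standard radix-heap convention: keep the last extracted minimum and store each node in the bucket indexed by the highest bit in which its label differs from that minimum. The sketch below follows that convention (one common realization of the R-heap idea, not the paper's exact bookkeeping):

    class RadixHeap:
        """Monotone priority queue: inserted keys are never smaller than
        the last extracted minimum (true in Dijkstra with lengths >= 0)."""
        def __init__(self, max_key):
            self.K = max_key.bit_length() + 1
            self.buckets = [[] for _ in range(self.K + 1)]
            self.last = 0                      # last extracted minimum

        def _index(self, key):                 # highest differing bit
            return min((key ^ self.last).bit_length(), self.K)

        def push(self, key, item):
            self.buckets[self._index(key)].append((key, item))

        def pop_min(self):
            k = next(k for k, b in enumerate(self.buckets) if b)
            if k:                              # redistribute bucket k
                self.last = min(key for key, _ in self.buckets[k])
                items, self.buckets[k] = self.buckets[k], []
                for key, item in items:        # each moves to a lower bucket
                    self.buckets[self._index(key)].append((key, item))
            return self.buckets[0].pop()

    h = RadixHeap(max_key=100)
    for key in (15, 8, 12):
        h.push(key, "node%d" % key)
    print(h.pop_min())   # (8, 'node8')

In Dijkstra, decreases of d(j) can be handled lazily: push a fresh (label, node) pair and discard stale entries when they are popped.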

The following theorem summarizes this discussion.

Theorem 3.1. The R-heap implementation of Dijkstra's algorithm solves the shortest path problem in O(m + n log nC) time.

This algorithm requires 1 + ⌈log nC⌉ buckets. FACT 3.2 permits us to reduce the number of buckets to 1 + ⌈log C⌉. This refined implementation of the algorithm runs in O(m + n log C) time. For problems that satisfy the similarity assumption (see Section 1.2), this bound becomes O(m + n log n). Using substantially more sophisticated data structures, it is possible to reduce this bound further to O(m + n √(log C)), which is a linear time algorithm for all but the sparsest classes of shortest path problems.

3.4 Label Correcting Algorithms

Label correcting algorithms, as the name implies, maintain tentative distance labels for nodes and correct the labels at every iteration. Unlike label setting algorithms, these algorithms maintain all distance labels as temporary until the end, when they all become permanent simultaneously. The label correcting algorithms are conceptually more general than the label setting algorithms and are applicable to more general situations, for example, to networks containing negative length arcs. To produce shortest paths, these algorithms typically require that the network does not contain any negative directed cycle, i.e., a directed cycle whose arc lengths sum to a negative value. Most label correcting algorithms have the capability to detect the presence of negative cycles.

Label correcting algorithms can be viewed as a procedure for solving the following recursive equations:

d(s) = 0,
d(j) = min {d(i) + c_ij : (i, j) ∈ A}, for each j ∈ N − {s}.

As usual, d(j) denotes the length of a shortest path from the source node to node j. These equations are known as Bellman's equations and represent necessary conditions for optimality of the shortest path problem. These conditions are also sufficient if every cycle in the network has a positive length. We will prove an alternate version of these conditions which is more suitable from the viewpoint of label correcting algorithms.

Theorem 3.2. Let d(i) for i ∈ N be a set of labels. If d(s) = 0 and if, in addition, the labels satisfy the following conditions, then they represent the shortest path lengths from the source node:

C3.1. d(i) is the length of some path from the source node to node i.

C3.2. d(j) ≤ d(i) + c_ij for all (i,j) ∈ A.

Proof. Since d(i) is the length of some path from the source to node i, it is an upper bound on the shortest path length. We show that if the labels d(i) satisfy C3.2, then they are also lower bounds on the shortest path lengths, which implies the conclusion of the theorem. Consider any directed path P from the source to node j, consisting of nodes s = i1 - i2 - i3 - ... - ik = j. Condition C3.2 implies that d(i2) ≤ d(i1) + c_i1i2 = c_i1i2, d(i3) ≤ d(i2) + c_i2i3, ..., d(ik) ≤ d(ik-1) + c_ik-1ik. Adding these inequalities yields d(j) = d(ik) ≤ Σ(i,j) ∈ P c_ij. Therefore d(j) is a lower bound on the length of any directed path from the source to node j, including a shortest path from s to j.

We note that if the network contains a negative cycle, then no set of labels d(i) satisfies C3.2. For suppose the network did contain a negative cycle W and some labels d(i) satisfied C3.2. These inequalities imply that d(i) - d(j) + c_ij ≥ 0 for each (i,j) ∈ W, and consequently Σ(i,j) ∈ W (d(i) - d(j) + c_ij) ≥ 0. Since the labels d(i) cancel out in the summation, Σ(i,j) ∈ W c_ij ≥ 0, which contradicts the assumption that W is a negative cycle.

Conditions C3.1 in Theorem 3.2 correspond to primal feasibility for the linear programming formulation of the shortest path problem, and conditions C3.2 correspond to dual feasibility. From this perspective, we might view label correcting algorithms as methods that always maintain primal feasibility and try to achieve dual feasibility. The generic label correcting algorithm that we consider first is a general procedure for successively updating the distance labels d(i) until they satisfy the conditions C3.2. At any point in the algorithm, the label d(i) is either ∞, indicating that we have yet to discover any path from the source to node i, or it is the length of some path from the source to node i. The algorithm is based upon the simple observation that whenever d(j) > d(i) + c_ij, the current path from the source to node i of length d(i), together with the arc (i,j), is a shorter path to node j than the current path of length d(j).

algorithm LABEL CORRECTING;
begin
  d(s) : = 0 and pred(s) : = 0;
  d(j) : = ∞ for each j ∈ N - {s};
  while some arc (i,j) satisfies d(j) > d(i) + c_ij do
  begin
    d(j) : = d(i) + c_ij;
    pred(j) : = i;
  end;
end;

The correctness of the label correcting algorithm follows from Theorem 3.2. At termination, the labels d(i) satisfy d(j) ≤ d(i) + c_ij for all (i,j) ∈ A, and hence represent the shortest path lengths. We now note that this algorithm is finite if there are no negative cost cycles and if the data are integral. Since d(j) is bounded from above by nC and from below by -nC, the algorithm updates d(j) at most 2nC times. Thus, when the data are integral, the total number of distance updates is O(n^2 C), and hence the algorithm runs in pseudopolynomial time.

A nice feature of this label correcting algorithm is its flexibility: we can select arcs that do not satisfy conditions C3.2 in any order and still assure convergence. One drawback, however, is that without a further restriction on the choice of arcs, the label correcting algorithm does not necessarily run in polynomial time. Indeed, if we start with pathological instances of the problem and make a poor choice of arcs at every iteration, then the number of steps can grow exponentially with n. (Since the algorithm is pseudopolynomial time, these instances do have exponentially large values of C.) To obtain a polynomial time bound for the algorithm, we can organize the computations carefully in the following manner. Arrange the arcs in A in some (possibly arbitrary) order. Now make passes through A. In each pass, scan the arcs in order and check the condition d(j) > d(i) + c_ij; if the arc satisfies this condition, then update d(j) : = d(i) + c_ij and pred(j) : = i. Terminate the algorithm if no distance label changes during an entire pass. We call this the modified label correcting algorithm.

Theorem 3.3. When applied to a network containing no negative cycles, the modified label correcting algorithm requires O(nm) time to determine shortest paths from the source to every other node.
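Before turning to the proof, here is a minimal runnable sketch of the pass-based method, assuming nodes are numbered 0, ..., n-1 and the arcs are given as a flat list of (i, j, cost) triples; the function name and data layout are ours, not the paper's.

def modified_label_correcting(n, arcs, s):
    INF = float('inf')
    d = [INF] * n
    pred = [None] * n
    d[s] = 0
    for _ in range(n - 1):              # n-1 passes suffice (Theorem 3.3)
        changed = False
        for i, j, cost in arcs:         # scan the arc list in a fixed order
            if d[i] + cost < d[j]:      # condition C3.2 is violated
                d[j] = d[i] + cost
                pred[j] = i
                changed = True
        if not changed:                 # no update during an entire pass
            break
    # one extra pass: any remaining improvement signals a negative cycle
    if any(d[i] + cost < d[j] for i, j, cost in arcs):
        raise ValueError("network contains a negative cycle")
    return d, pred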

Proof. We show that the algorithm performs at most n-1 passes through the arc list. Since each pass requires O(1) computations for each arc, this conclusion implies the O(nm) bound. Let d^r(j) denote the length of the shortest path from the source to node j consisting of r or fewer arcs, and let D^r(j) represent the distance label of node j after r passes through the arc list. We claim, inductively, that D^r(j) ≤ d^r(j) for each j ∈ N and each r = 1, ..., n-1. We perform induction on the value of r. Suppose D^(r-1)(j) ≤ d^(r-1)(j) for each j ∈ N. The provisions of the modified labeling algorithm imply that

D^r(j) ≤ min {D^(r-1)(j), min_i≠j {D^(r-1)(i) + c_ij}}.

Next note that the shortest path to node j containing no more than r arcs either (i) has no more than r-1 arcs, or (ii) contains exactly r arcs. In case (i), d^r(j) = d^(r-1)(j), and in case (ii), d^r(j) = min_i≠j {d^(r-1)(i) + c_ij}. Consequently,

d^r(j) = min {d^(r-1)(j), min_i≠j {d^(r-1)(i) + c_ij}} ≥ min {D^(r-1)(j), min_i≠j {D^(r-1)(i) + c_ij}} ≥ D^r(j),

where the first inequality follows from the induction hypothesis. Hence, D^r(j) ≤ d^r(j) for all j ∈ N. Finally, we note that the shortest path from the source to any node consists of at most n-1 arcs. Therefore, after at most n-1 passes, the algorithm terminates with the shortest path distances.

The modified label correcting algorithm is also capable of detecting the presence of negative cycles in the network. If the algorithm does not update any distance label during an entire pass, up to the (n-1)-th pass, then it has a set of labels d(j) satisfying C3.2; the algorithm terminates with the shortest path lengths, and the network does not contain any negative cycle. On the other hand, if the distance label of some node i changes in the n-th pass, then the network contains a directed walk from the source to node i of more than n-1 arcs that has smaller length than all paths from the source node to i; such a walk is a path together with a cycle that have one or more nodes in common. This situation cannot occur unless the network contains a negative cost cycle.

Practical Improvements

As stated so far, the modified label correcting algorithm considers every arc of the network during every pass through the arc list. It need not do so. Suppose we order the arcs in the arc list by their tail nodes, so that arcs with the same tail node appear consecutively on the list. Then, while scanning the arcs, we consider one node i at a time, scanning the arcs in A(i) and testing the optimality conditions. Now suppose that during one pass through the arc list, the algorithm does not change the distance label of a node i. Then, during the next pass, d(j) ≤ d(i) + c_ij for every (i,j) ∈ A(i), and the algorithm need not test these conditions.

To achieve this savings, the algorithm can maintain a list of nodes whose distance labels have changed since it last examined them. It scans this list in first-in, first-out order to assure that it performs passes through the arc list A and, consequently, terminates in O(nm) time. The following procedure is a formal description of this modification of the modified label correcting method.

algorithm MODIFIED LABEL CORRECTING;
begin
  d(s) : = 0 and pred(s) : = 0;
  d(j) : = ∞ for each j ∈ N - {s};
  LIST : = {s};
  while LIST ≠ ∅ do
  begin
    select the first element i of LIST;
    delete i from LIST;
    for each (i,j) ∈ A(i) do
      if d(j) > d(i) + c_ij then
      begin
        d(j) : = d(i) + c_ij;
        pred(j) : = i;
        if j ∉ LIST then add j to the end of LIST;
      end;
  end;
end;

Another modification of this algorithm sacrifices its polynomial time behavior in the worst case, but greatly improves its running time in practice. The modification alters the manner in which the algorithm adds nodes to LIST. While adding a node i to LIST, we check to see whether i has already appeared in the LIST. If yes, then we add it to the beginning of LIST; otherwise, we add it to the end of LIST. This heuristic rule has the following plausible justification. If the node i has previously appeared on the LIST, then some nodes may have i as a predecessor. It is advantageous to update the distances of these nodes immediately, rather than update them from other nodes and then update them again when we consider node i alone. Empirical studies indicate that with this change alone, the algorithm is several times faster for many reasonable problem classes.
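A minimal sketch of this modification follows, assuming adjacency lists adj[i] = [(j, cost), ...]; Python's deque stands in for the doubly linked LIST, and the add-to-front heuristic is included. As noted above, the heuristic forfeits the O(nm) worst-case guarantee, and the sketch assumes a network with no negative cycle.

from collections import deque

def modified_label_correcting_list(n, adj, s):
    INF = float('inf')
    d = [INF] * n
    pred = [None] * n
    d[s] = 0
    LIST = deque([s])
    on_list = [False] * n
    seen = [False] * n                  # has the node ever been on LIST?
    on_list[s] = seen[s] = True
    while LIST:
        i = LIST.popleft()              # first-in, first-out selection
        on_list[i] = False
        for j, cost in adj[i]:
            if d[i] + cost < d[j]:
                d[j] = d[i] + cost
                pred[j] = i
                if not on_list[j]:
                    if seen[j]:         # previously on LIST: add to front
                        LIST.appendleft(j)
                    else:               # new node: add to the end
                        LIST.append(j)
                    on_list[j] = seen[j] = True
    return d, pred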

Though this change makes the algorithm very attractive in practice, its worst-case running time is exponential. Indeed, this version of the label correcting algorithm is the fastest algorithm in practice for finding the shortest path from a single source to all nodes in non-dense networks. (For the problem of finding a shortest path from a single source node to a single sink, certain variants of the label setting algorithm are more efficient in practice.)

3.5 All Pairs Shortest Path Algorithm

In certain applications of the shortest path problem, we need to determine shortest path distances between all pairs of nodes. In this section we describe two algorithms to solve this problem. The first algorithm is well suited for sparse graphs; it combines the modified label correcting algorithm and Dijkstra's algorithm. The second is better suited for dense graphs; it is based on dynamic programming.

If the network has nonnegative arc lengths, then we can solve the all pairs shortest path problem by applying Dijkstra's algorithm n times, considering each node as the source once. If the network contains arcs with negative arc lengths, then we can first transform the network to one with nonnegative arc lengths as follows. Let s be a node from which all nodes in the network are reachable, i.e., connected by directed paths. We use the modified label correcting algorithm to compute the shortest path distances from s to all other nodes. The algorithm either terminates with the shortest path distances d(j) or indicates the presence of a negative cycle. In the former case, we define the new length of the arc (i,j) as c'_ij = c_ij + d(i) - d(j) for each (i,j) ∈ A. Condition C3.2 implies that c'_ij ≥ 0 for all (i,j) ∈ A. Further, note that for any path P from node k to node l, Σ(i,j) ∈ P c'_ij = Σ(i,j) ∈ P c_ij + d(k) - d(l), since the intermediate labels d(j) cancel out in the summation. This transformation therefore changes the length of all paths between a pair of nodes by a constant amount (depending on the pair) and consequently preserves shortest paths. Since arc lengths become nonnegative after the transformation, we can apply Dijkstra's algorithm n-1 additional times to determine shortest path distances between all pairs of nodes in the transformed network. We then obtain the shortest path distance between nodes k and l in the original network by adding d(l) - d(k) to the corresponding shortest path distance in the transformed network.
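The transformation is easy to state in code. The following sketch computes the reduced lengths c'_ij = c_ij + d(i) - d(j) from valid distances d produced by the modified label correcting algorithm; the assertion records the nonnegativity guaranteed by condition C3.2.

def reduced_lengths(arcs, d):
    new_arcs = []
    for i, j, cost in arcs:
        rc = cost + d[i] - d[j]          # reduced length of arc (i,j)
        assert rc >= 0, "valid distances make every reduced length nonnegative"
        new_arcs.append((i, j, rc))
    return new_arcs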

This approach requires O(nm) time to solve the first shortest path problem; if the network contains no negative cost cycle, the method takes an extra O(n S(n,m,C)) time to compute the remaining shortest path distances. In this expression, S(n,m,C) denotes the time needed to solve a shortest path problem with nonnegative arc lengths. For the R-heap implementation of Dijkstra's algorithm we considered previously, S(n,m,C) = m + n log nC.

Another way to solve the all pairs shortest path problem is by dynamic programming. The approach we present is known as Floyd's algorithm. We define the variables d^r(i,j) as follows:

d^r(i,j) = the length of a shortest path from node i to node j subject to the condition that the path uses only the nodes 1, 2, ..., r-1 (and i and j) as internal nodes.

Let d(i,j) denote the actual shortest path distance. To compute d^(r+1)(i,j), we first observe that a shortest path from node i to node j that passes through the nodes 1, 2, ..., r either (i) does not pass through the node r, in which case d^(r+1)(i,j) = d^r(i,j), or (ii) does pass through the node r, in which case d^(r+1)(i,j) = d^r(i,r) + d^r(r,j). Thus we have

d^1(i,j) = c_ij,

and

d^(r+1)(i,j) = min {d^r(i,j), d^r(i,r) + d^r(r,j)}.

It is possible to solve these equations recursively for increasing values of r, varying the node pairs over N x N for each fixed value of r. We assume that c_ij = ∞ for all node pairs (i,j) ∉ A. The following procedure is a formal description of this algorithm.

algorithm ALL PAIRS SHORTEST PATHS;
begin
  for all node pairs (i,j) ∈ N x N do d(i,j) : = ∞ and pred(i,j) : = 0;
  for each arc (i,j) ∈ A do d(i,j) : = c_ij and pred(i,j) : = i;
  for each node i ∈ N do d(i,i) : = 0;
  for r : = 1 to n do
    for each (i,j) ∈ N x N do
      if d(i,r) + d(r,j) < d(i,j) then
      begin
        d(i,j) : = d(i,r) + d(r,j);
        pred(i,j) : = pred(r,j);
        if i = j and d(i,i) < 0 then the network contains a negative cycle, STOP;
      end;
end;

Floyd's algorithm is in many respects similar to the modified label correcting algorithm. This relationship becomes more transparent from the following theorem.

Theorem 3.4. If the labels d(i,j) for (i,j) ∈ N x N satisfy the following conditions, then they represent the shortest path distances:

(i) d(i,i) = 0 for all i;

(ii) d(i,j) is the length of some path from node i to node j;

(iii) d(i,j) ≤ d(i,r) + c_rj for all i, r, and j.

Proof. For fixed i, this theorem is a consequence of Theorem 3.2.

Floyd's algorithm uses predecessor indices, pred(i,j), for each node pair (i,j). The index pred(i,j) denotes the last node prior to node j in the tentative shortest path from node i to node j. The algorithm maintains the property that for each finite d(i,j), the network contains a path from node i to node j of length d(i,j); this path can be obtained by tracing the predecessor indices.

This algorithm performs n iterations, and in each iteration it performs O(1) computations for each node pair. Consequently, it runs in O(n^3) time. The algorithm either terminates with the shortest path distances or stops when d(i,i) < 0 for some node i. In the latter case, for some node r ≠ i, the union of the tentative shortest paths from node i to node r and from node r to node i contains a negative cycle. This cycle can be obtained by using the predecessor indices.
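A direct transcription of the recursion into Python follows, assuming a dense matrix dist[i][j] initialized to c_ij (0 on the diagonal, infinity for missing arcs) and pred[i][j] initialized to i for each arc; both structures are overwritten in place.

def floyd(dist, pred):
    n = len(dist)
    for r in range(n):                       # allow node r as an internal node
        for i in range(n):
            for j in range(n):
                if dist[i][r] + dist[r][j] < dist[i][j]:
                    dist[i][j] = dist[i][r] + dist[r][j]
                    pred[i][j] = pred[r][j]
        for i in range(n):
            if dist[i][i] < 0:               # negative cycle through node i
                return False
    return True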

4. MAXIMUM FLOWS

An important characteristic of a network is its capacity to carry flow. What, given capacities on the arcs, is the maximum flow that can be sent between any two nodes? The resolution of this question determines the "best" use of arc capacities and establishes a reference point against which to compare other ways of using the network. Moreover, the solution of the maximum flow problem with capacity data chosen judiciously establishes other performance measures for a network. For example, what is the minimum number of nodes whose removal from the network destroys all paths joining a particular pair of nodes? Or, what is the maximum number of node disjoint paths that join this pair of nodes? These and similar reliability measures indicate the robustness of the network to the failure of its components.

In this section, we discuss several algorithms for computing the maximum flow between two nodes in a network. We begin by introducing a basic labeling algorithm for solving the maximum flow problem. The validity of these algorithms rests upon the celebrated max-flow min-cut theorem of network flows. This remarkable theorem has a number of surprising implications in machine and vehicle scheduling, communication systems planning and several other application domains. We then consider improved versions of the basic labeling algorithm with better theoretical performance guarantees. In particular, we describe preflow push algorithms that have recently emerged as the most powerful techniques for solving the maximum flow problem, both theoretically and computationally.

Formally, we consider a capacitated network G = (N, A) with a nonnegative integer capacity u_ij for any arc (i,j) ∈ A. The source s and sink t are two distinguished nodes of the network. We assume that for every arc (i,j) in A, (j,i) is also in A; there is no loss of generality in making this assumption since we allow zero capacity arcs. We also assume, without any loss of generality, that all arc capacities are finite (since we can set the capacity of any uncapacitated arc equal to the sum of the capacities of all capacitated arcs). Let U = max {u_ij : (i,j) ∈ A}. As earlier, the arc adjacency list A(i) = {(i,k) : (i,k) ∈ A} designates the arcs emanating from node i. In the maximum flow problem, we wish to find the maximum flow from the source node s to the sink node t that satisfies the arc capacities. Formally, the problem is to

Maximize v (4.1a)

subject to

Σ(j : (i,j) ∈ A) x_ij - Σ(j : (j,i) ∈ A) x_ji = v if i = s, 0 if i ≠ s, t, and -v if i = t, for all i ∈ N, (4.1b)

0 ≤ x_ij ≤ u_ij for each (i,j) ∈ A. (4.1c)

It is possible to relax the integrality assumption on arc capacities for some algorithms, though this assumption is necessary for others. Algorithms whose complexity bounds involve U assume integrality of the data. Note, however, that rational arc capacities can always be transformed to integer arc capacities by appropriately scaling the data. Thus, the integrality assumption is not a restrictive assumption in practice.

The concept of residual network is crucial to the algorithms we consider. Given a flow x, the residual capacity r_ij of any arc (i,j) ∈ A represents the maximum additional flow that can be sent from node i to node j using the arcs (i,j) and (j,i). The residual capacity has two components: (i) u_ij - x_ij, the unused capacity of arc (i,j), and (ii) the current flow x_ji on arc (j,i), which can be cancelled to increase flow to node j. Consequently, r_ij = u_ij - x_ij + x_ji. We call the network consisting of the arcs with positive residual capacities the residual network (with respect to the flow x), and represent it as G(x). Figure 4.1 illustrates an example of a residual network.

4.1 Labeling Algorithm and the Max-Flow Min-Cut Theorem

One of the simplest and most intuitive algorithms for solving the maximum flow problem is the augmenting path algorithm due to Ford and Fulkerson. The algorithm proceeds by identifying directed paths from the source to the sink in the residual network and augmenting flows on these paths, until the residual network contains no such path.
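Before turning to the algorithm, a small helper makes the residual capacity definition concrete. It assumes the capacities u and the flow x are stored in dictionaries keyed by arc pairs, with absent keys treated as zero.

def residual_capacity(u, x, i, j):
    # r(i,j) = u(i,j) - x(i,j) + x(j,i): the unused capacity on (i,j)
    # plus the flow on (j,i) that can be cancelled.
    return u.get((i, j), 0) - x.get((i, j), 0) + x.get((j, i), 0)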

The following high-level (and flexible) description of the algorithm summarizes the basic iterative steps, without specifying any particular algorithmic strategy for how to determine augmenting paths.

algorithm AUGMENTING PATH;
begin
  x : = 0;
  while there is a path P from s to t in G(x) do
  begin
    Δ : = min {r_ij : (i,j) ∈ P};
    augment Δ units of flow along P and update G(x);
  end;
end;

We now discuss this algorithm in more detail. First, we need a method to identify a directed path from the source to the sink in the residual network, or to show that the network contains no such path. Second, we need to show that the algorithm terminates finitely. Finally, we must establish that the algorithm terminates with a maximum flow. The last result follows from the proof of the max-flow min-cut theorem.

A directed path from the source to the sink in the residual network is also called an augmenting path. The residual capacity of an augmenting path is the minimum residual capacity of any arc on the path. The definition of the residual capacity implies that an additional flow of Δ units along an augmenting path corresponds to either (i) an increase in x_ij by Δ in the original network, or (ii) a decrease in x_ji by Δ in the original network, or (iii) a convex combination of (i) and (ii). For our purposes, it is easier to work directly with residual capacities and to compute the flows only when the algorithm terminates. For each arc (i,j) ∈ P, augmenting Δ units of flow along P decreases r_ij by Δ and increases r_ji by Δ.

The labeling algorithm performs a search of the residual network to find a directed path from s to t. It does so by fanning out from the source node s to find a directed tree containing nodes that are reachable from the source along a directed path in the residual network. At any step, we refer to the nodes in the tree as labeled and those not in the tree as unlabeled. The algorithm selects a labeled node and scans its arc adjacency list (in the residual network) to label more unlabeled nodes. Eventually, either the sink becomes labeled, in which case the algorithm sends the maximum possible flow on the path from s to t, erases the labels, and repeats this process; or the algorithm has scanned all labeled nodes and the sink remains unlabeled, in which case it terminates. The following algorithmic description specifies the steps of the labeling algorithm in detail.

Figure 4.1 Example of a residual network. (a) Network with arc capacities; node 1 is the source and node 4 is the sink. (Arcs not shown have zero capacities.) (b) Network with a flow x. (c) The residual network with residual arc capacities.

The algorithm maintains a predecessor index pred(i) for each labeled node i, indicating the node that caused node i to be labeled. The predecessor indices allow us to trace back along the path from a node to the source.

algorithm LABELING;
begin
  loop
    pred(j) : = 0 for each j ∈ N;
    L : = {s} and mark s as labeled;
    while L ≠ ∅ and t is unlabeled do
    begin
      select a node i ∈ L and delete it from L;
      for each (i,j) ∈ A(i) do
        if j is unlabeled and r_ij > 0 then
        begin
          pred(j) : = i;
          mark j as labeled and add this node to L;
        end;
    end;
    if t is labeled then
    begin
      use the predecessor labels to trace back to obtain the augmenting path P from s to t;
      Δ : = min {r_ij : (i,j) ∈ P};
      augment Δ units of flow along P;
      erase all labels and go to loop;
    end
    else quit the loop;
  end; (loop)
end;

The final residual capacities r can be used to obtain the arc flows as follows. Since r_ij = u_ij - x_ij + x_ji, the arc flows satisfy x_ij - x_ji = u_ij - r_ij. Hence, if u_ij > r_ij, we can set x_ij = u_ij - r_ij and x_ji = 0; otherwise, we set x_ij = 0 and x_ji = r_ij - u_ij.
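A compact Python sketch of the labeling algorithm follows. As suggested above, it works directly with residual capacities r (a dictionary over ordered pairs, initialized to the capacities, with r[j,i] = 0 for each reverse arc), and adj[i] lists every neighbor of i in either direction; the scanning order (here depth first) is a free choice.

def labeling_max_flow(adj, r, s, t):
    value = 0
    while True:
        pred = {s: None}                     # labeled nodes and their parents
        stack = [s]
        while stack and t not in pred:       # scan labeled nodes
            i = stack.pop()
            for j in adj[i]:
                if j not in pred and r[i, j] > 0:
                    pred[j] = i
                    stack.append(j)
        if t not in pred:                    # sink unlabeled: flow is maximum
            return value
        path = []                            # trace back the augmenting path
        j = t
        while pred[j] is not None:
            path.append((pred[j], j))
            j = pred[j]
        delta = min(r[i, j] for i, j in path)
        for i, j in path:                    # augment and update residuals
            r[i, j] -= delta
            r[j, i] += delta
        value += delta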

In order to show that the algorithm obtains a maximum flow, we introduce some new definitions and notation. Recall from Section 1.3 that a set Q ⊆ A is a cutset if the subnetwork G' = (N, A - Q) is disconnected and no subset of Q has this property. A cutset partitions the node set N into two subsets. A cutset is called an s-t cutset if the source and the sink nodes are contained in different subsets of nodes S and S̄ = N - S: S is the set of nodes connected to s. Conversely, any partition of the node set as S and S̄ with s ∈ S and t ∈ S̄ defines an s-t cutset. We shall alternatively designate an s-t cutset as (S, S̄). An arc (i,j) with i ∈ S and j ∈ S̄ is called a forward arc of the cutset, and an arc (i,j) with i ∈ S̄ and j ∈ S is called a backward arc of the cutset (S, S̄).

Let x be a flow vector satisfying the flow conservation and capacity constraints of (4.1). For this flow vector x, let v be the amount of flow leaving the source. We refer to v as the value of the flow. Adding the flow conservation constraints (4.1b) for the nodes in S, and noting that when nodes i and j both belong to S, the term x_ij in the equation for node i cancels the term -x_ij in the equation for node j, we obtain

v = Σ(i ∈ S) Σ(j ∈ S̄) x_ij - Σ(i ∈ S̄) Σ(j ∈ S) x_ij = F_x(S, S̄). (4.2)

We refer to F_x(S, S̄) as the net flow across the s-t cutset (S, S̄); by (4.2), the net flow across any s-t cutset equals the value of the flow. Define the capacity C(S, S̄) of an s-t cutset (S, S̄) as

C(S, S̄) = Σ(i ∈ S) Σ(j ∈ S̄) u_ij. (4.3)

We claim that the net flow across any s-t cutset does not exceed the cutset capacity. Substituting x_ij ≤ u_ij in the first summation of (4.2) and x_ij ≥ 0 in the second summation shows that

F_x(S, S̄) ≤ Σ(i ∈ S) Σ(j ∈ S̄) u_ij = C(S, S̄). (4.4)

This result is the weak duality property of the maximum flow problem when it is viewed as a linear program. Like most weak duality results, it is the "easy" half of the duality theory. The more substantive strong duality property asserts that (4.4) holds as an equality for some choice of x and some choice of an s-t cutset (S, S̄). This strong duality property is the max-flow min-cut theorem.

Theorem 4.1. (Max-Flow Min-Cut Theorem) The maximum value of flow from s to t equals the minimum capacity of all s-t cuts.

Proof. Let x denote a maximum flow vector and v denote the maximum flow value. (Linear programming theory, or our subsequent algorithmic developments, guarantee that the problem always has a maximum flow as long as some cutset has finite capacity.) Define S to be the set of labeled nodes in the residual network G(x) when we apply the labeling algorithm with the initial flow x, and let S̄ = N - S. Clearly, s ∈ S, and since x is a maximum flow, t ∈ S̄. The nodes in S̄ cannot be labeled from the nodes in S; hence r_ij = 0 for each forward arc (i,j) in the cutset (S, S̄). Since r_ij = u_ij - x_ij + x_ji, the conditions x_ij ≤ u_ij and x_ji ≥ 0 imply that x_ij = u_ij for each forward arc in the cutset (S, S̄), and x_ij = 0 for each backward arc in the cutset. Making these substitutions in (4.2) yields

v = F_x(S, S̄) = Σ(i ∈ S) Σ(j ∈ S̄) u_ij = C(S, S̄). (4.5)

But we have observed earlier that v is a lower bound on the capacity of any s-t cutset. Consequently, the cutset (S, S̄) is a minimum capacity cutset and its capacity equals the maximum flow value v. We have thus established the theorem.

The proof of this theorem not only establishes the max-flow min-cut property; the same argument shows that when the labeling algorithm terminates, it has at hand both the maximum flow value (and a maximum flow vector) and a minimum capacity s-t cutset. But does it terminate finitely? Each labeling iteration of the algorithm scans any node at most once, inspecting each arc in A(i), and thus requires O(m) computations. If all arc capacities are integral and bounded by a finite number U, then the capacity of the cutset (s, N - {s}) is at most nU. Since the labeling algorithm increases the flow value by at least one unit in any iteration, it terminates within nU iterations. This bound on the number of iterations is not entirely satisfactory for large values of U; if U = 2^n, the bound is

exponential in the number of nodes. Moreover, the algorithm can indeed perform that many iterations, as the example given in Figure 4.2 illustrates. In addition, if the capacities are irrational, the algorithm may not terminate: although the successive flow values converge, they may not converge to the maximum flow value. Thus, if the method is to be effective, we must select the augmenting paths carefully. Nevertheless, the max-flow min-cut theorem (and our proof of Theorem 4.1) is true even if the data are irrational.

A second drawback of the labeling algorithm is its "forgetfulness". At each iteration, the algorithm generates node labels that contain information about augmenting paths from the source to other nodes. The implementation we have described erases the labels when it proceeds from one iteration to the next, even though much of this information may be valid in the next residual network. Erasing the labels therefore destroys potentially useful information. Ideally, we should retain a label when it can be used profitably in later computations.

4.2 Decreasing the Number of Augmentations

The bound of nU on the number of augmentations in the labeling algorithm is not satisfactory from a theoretical perspective. Furthermore, without further modifications, the augmenting path algorithm may take Ω(nU) augmentations, as the example given in Figure 4.2 illustrates.

Flow decomposition shows that, in principle, augmenting path algorithms should be able to find a maximum flow in no more than m augmentations. For suppose x is an optimum flow and y is any initial flow (possibly zero). By the flow decomposition property, it is possible to obtain x from y by a sequence of at most m augmentations on augmenting paths from s to t plus flows around augmenting cycles. If we define x' as the flow vector obtained from y by applying only the augmenting paths, then x' is also a maximum flow (flows around cycles do not change the flow value). This result shows that it is, in theory, possible to find a maximum flow using at most m augmentations. Unfortunately, to apply this flow decomposition argument, we would need to know a maximum flow, and no algorithm developed in the literature comes close to achieving this theoretical bound. Nevertheless, it is possible to improve considerably on the bound of O(nU) augmentations of the basic labeling algorithm. Several refinements of the algorithm, including those we consider in the next two sections, overcome this difficulty and obtain an optimum flow even if the capacities are irrational.

Figure 4.2 A pathological example for the labeling algorithm. (a) The input network with arc capacities. (b) After augmenting along the path s-a-b-t. (c) After augmenting along the path s-b-a-t. Arc flow is indicated beside the arc capacity. After 2 x 10^6 augmentations, alternately along s-a-b-t and s-b-a-t, the flow is maximum.

One natural specialization of the augmenting path algorithm is to augment flow along a "shortest path" from the source to the sink, defined as a path consisting of the least number of arcs. If we augment flow along a shortest path, then the length of any shortest path either stays the same or increases. Moreover, within m augmentations, the length of the shortest path is guaranteed to increase. (We will prove these results in the next section.) Since no path contains more than n-1 arcs, this rule guarantees that the number of augmentations is at most (n-1)m.

An alternative is to augment flow along a path of maximum residual capacity. This specialization also leads to improved complexity. Let v be any flow value and v* be the maximum flow value. By flow decomposition, the network contains at most m augmenting paths whose residual capacities sum to (v* - v). Thus the maximum capacity augmenting path has residual capacity at least (v* - v)/m. Now consider a sequence of 2m consecutive maximum capacity augmentations, starting with flow value v. At least one of these augmentations must augment the flow by an amount (v* - v)/2m or less, for otherwise we will have a maximum flow. Thus, after 2m or fewer maximum capacity augmentations, the algorithm would reduce the residual capacity of a maximum capacity augmenting path by a factor of at least two. Since this capacity is initially at most U and must be at least 1 until the flow is maximum, after O(m log U) maximum capacity augmentations the flow must be maximum. (Note that we are essentially repeating the argument used to establish the geometric improvement approach discussed in Section 1.6.)

In the following section, we consider another algorithm for reducing the number of augmentations.

4.3 Shortest Augmenting Path Algorithm

A natural approach to augmenting along shortest paths would be to successively look for shortest paths by performing a breadth first search in the residual network. If the labeling algorithm maintains the set L of labeled nodes as a queue, then it would obtain a shortest path in the residual network by examining the labeled nodes in a first-in, first-out order. Each of these iterations would take O(m) steps, both in the worst case and in practice, and (by our subsequent observations) the resulting computation time would be O(nm^2). Unfortunately, this computation time is excessive. We can improve this running time by exploiting the fact that the minimum distance from any node i to the sink node t is monotonically nondecreasing over all augmentations.

it for other nodes network it is not necessary to maintain exact distances.. Since d(s) is a lower bound on the length of any path from the source to the sink. By fully exploiting this property. 1. The Algorithm The concept of distance labels w^ill prove to be an important construct in the 4. in Figure 4.1 C4-2.. though d = (3. satisfies the We say that a distance function valid follovdng two conditions: C4. For any admissible path of length k. refer to d(i) as the distance label of It and condition C4. These inequalities . However.. -\ - t be any path of length k in the residual network from node i to t. > 0. node is i We condition. we maintain without incuring any significant . 0) represents the exact distance label. 0. any shortest path from node i to t contains at leaist d(i) arcs. Let = i^ - - i3 - . An arc (i. hence. suffices to have valid distances. j) € A with r^. d(i2) 2 d(i3) + 1. Other arcs are inadmissible. Then. We now admissible if it define satisfies some d(i) additional notation. maximum flow algorithms that we discuss in this section and in Sections 4..2 we have d(i) = d(i|) < d(i2) + 1. to t. . each of the distance labels for nodes in the in the exact. j) in the residual network is t = d(j) + 1. 0) is distance label. A path from s to consisting entirely of admissible arcs is an admissible path.78 node i to the sink node t is monotonically nondecreasing over all augmentations. The algorithm we describe next repeatedly augments flow along admissible paths. 2.2 as the validit}/ is easy to demonstrate that i d(i) a lower boimd on i the length of the i2 shortest directed path from to t in the residual network. Whenever we augment along path is a path. we refer to the algorithm as the shortest augmenting path algorithm. the distance label d(i) equals the length of the shortest path from to in the residual network. For example. which are lower bounds on the exact to distances. we can reduce the average time per augmentation to 0(n). d(j) + 1 for every arc (i. 0. node i to be less than the distance from cost. d(ij^) < d(t) + 1 = 1. imply that d(i) < k for any path of length k in the residual network and.1(c). Thus. d(s) = k. from C4.4 Tj: is and A if it distance function d : N -* Z"*" with respect to the residual capacities a fimction from is the set of nodes to the nonnegative integers. There is no particular urgency compute these distances i exactly.5. the algorithm augments flows along shortest paths in the residual network. then a valid we call the distance labels exact. i If t for each node i. d = (0. d(t) d(i) = < 0. By allowing flexibility the distance label of in the algorithm.

We can compute the initial distance labels by performing a backward breadth first search of the residual network, starting at the sink node.

The Algorithm

The algorithm maintains a path from the source node to some node i, called the current node, consisting entirely of admissible arcs. We call this path the partial admissible path and store it using the predecessor indices pred(j) for each arc (i,j) on the path. The algorithm performs one of two steps at the current node: advance or retreat. The advance step identifies some admissible arc (i*, j*) emanating from the current node i*, adds it to the partial admissible path, and designates j* as the new current node. If no admissible arc emanates from node i*, then the algorithm performs the retreat step. This step increases the distance label of node i* so that at least one admissible arc emanates from it (we refer to this step as a relabel operation). Increasing d(i*) makes the arc (pred(i*), i*) inadmissible (assuming i* ≠ s); consequently, we delete (pred(i*), i*) from the partial admissible path, and node pred(i*) becomes the new current node. Whenever the partial admissible path becomes an admissible path (i.e., contains node t), the algorithm makes a maximum possible augmentation on this path and begins again with the source as the current node. The algorithm terminates when d(s) ≥ n, indicating that the network contains no augmenting path from the source to the sink. We next describe the algorithm formally.

algorithm SHORTEST AUGMENTING PATH;
begin
  x : = 0;
  perform a backward breadth first search of the residual network from node t to obtain the distance labels d(i);
  i* : = s;
  while d(s) < n do
  begin
    if i* has an admissible arc then
    begin
      ADVANCE(i*);
      if i* = t then AUGMENT and set i* : = s;
    end
    else RETREAT(i*);
  end;
end;

procedure ADVANCE(i*);
begin
  let (i*, j*) be an admissible arc in A(i*);
  pred(j*) : = i* and i* : = j*;
end;

procedure RETREAT(i*);
begin
  d(i*) : = min {d(j) + 1 : (i*,j) ∈ A(i*) and r_i*j > 0};
  if i* ≠ s then i* : = pred(i*);
end;

procedure AUGMENT;
begin
  using the predecessor indices, identify an augmenting path P from the source to the sink;
  Δ : = min {r_ij : (i,j) ∈ P};
  augment Δ units of flow along path P;
end;

We use the following data structure to select an admissible arc emanating from a node. We maintain the list A(i) of arcs emanating from each node i. Arcs in each list can be arranged arbitrarily, but the order, once decided, remains unchanged throughout the algorithm. Each node i has a current-arc (i,j), which is the current candidate for the next advance step. Initially, the current-arc of node i is the first arc in its arc list. The algorithm examines this list sequentially: whenever the current arc is inadmissible, it makes the next arc in the arc list the current arc. When the algorithm has examined all arcs in A(i), it updates the distance label of node i, and the current arc once again becomes the first arc in its arc list. In our subsequent discussion we shall always assume that the algorithms select admissible arcs using this technique.

Correctness of the Algorithm

We first show that the shortest augmenting path algorithm correctly solves the maximum flow problem.

Lemma 4.1. The shortest augmenting path algorithm maintains valid distance labels at each step. Moreover, each relabel step strictly increases the distance label of a node.

Proof. We show that the algorithm maintains valid distance labels at every step by performing induction on the number of augment and relabel steps. Initially, the algorithm constructs valid (in fact, exact) distance labels. Assume, inductively, that the distance labels are valid prior to a step, i.e., that they satisfy the validity conditions C4.1 and C4.2. We need to check that these conditions remain valid (i) after an augment step (when the residual graph changes), and (ii) after a relabel step.

(i) A flow augmentation on arc (i,j) might delete this arc from the residual network, but this modification to the residual network does not affect the validity of the distance function for this arc. Augmentation on arc (i,j) might, however, create an additional arc (j,i) with r_ji > 0 and, therefore, also create an additional condition d(j) ≤ d(i) + 1 that needs to be satisfied. The distance labels satisfy this validity condition, though, since d(i) = d(j) + 1 by the admissibility property of the augmenting path.

(ii) The algorithm performs a relabel step at node i when the current arc reaches the end of the arc list A(i). Observe that if an arc (i,j) ∈ A(i) is inadmissible at some stage, then it remains inadmissible until d(i) increases, because the distance labels are nondecreasing. Thus, when the current arc reaches the end of the arc list A(i), no arc (i,j) ∈ A(i) satisfies d(i) = d(j) + 1 and r_ij > 0. Hence d(i) < min {d(j) + 1 : (i,j) ∈ A(i) and r_ij > 0} = d'(i), thereby establishing the second part of the lemma. Finally, the choice for changing d(i) ensures that the condition d(i) ≤ d(j) + 1 remains valid for all (i,j) in the residual network; in addition, since d(i) increases, the conditions d(k) ≤ d(i) + 1 remain valid for all arcs (k,i) in the residual network.

Theorem 4.2. The shortest augmenting path algorithm correctly computes a maximum flow.

Proof. The algorithm terminates when d(s) ≥ n. Since d(s) is a lower bound on the length of the shortest augmenting path from s to t, this condition implies that the network contains no augmenting path from the source to the sink, which is the termination criterion for the generic augmenting path algorithm. Hence the flow at termination is maximum.

At termination of the algorithm, we can also obtain a minimum s-t cutset as follows. For 0 ≤ k < n, let α_k denote the number of nodes with distance label equal to k. Note that α_k* must be zero for some k* < n - 1, since Σ α_k ≤ n - 1 (recall that d(s) ≥ n). Let S = {i ∈ N : d(i) > k*} and S̄ = N - S. By construction, s ∈ S and t ∈ S̄, and both the sets S and S̄ are nonempty. Consider an arc (i,j) of the s-t cutset (S, S̄). Since α_k* = 0, we have d(i) > k* and d(j) < k*, so d(i) > d(j) + 1; the validity condition C4.2 then implies that r_ij = 0 for each (i,j) ∈ (S, S̄). Hence, (S, S̄) is a minimum cutset and the current flow is maximum.
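A condensed Python sketch of the whole method follows, with the advance, retreat and augment steps inlined and a current-arc pointer cur[i] into adj[i] for each node. For brevity it starts from the valid labels d = 0 rather than the exact labels of a backward breadth first search; this affects only the constant factors, not correctness.

def shortest_augmenting_path(n, adj, r, s, t):
    d = [0] * n                       # valid (though not exact) labels
    cur = [0] * n                     # current-arc index into adj[i]
    pred = [None] * n
    value = 0
    i = s
    while d[s] < n:
        advanced = False
        while cur[i] < len(adj[i]):   # advance: find an admissible arc
            j = adj[i][cur[i]]
            if r[i, j] > 0 and d[i] == d[j] + 1:
                pred[j] = i
                i = j
                advanced = True
                break
            cur[i] += 1
        if advanced:
            if i == t:                # augment along the admissible path
                path = []
                j = t
                while j != s:
                    path.append((pred[j], j))
                    j = pred[j]
                delta = min(r[a, b] for a, b in path)
                for a, b in path:
                    r[a, b] -= delta
                    r[b, a] += delta
                value += delta
                i = s
            continue
        labels = [d[j] + 1 for j in adj[i] if r[i, j] > 0]
        d[i] = min(labels) if labels else n   # retreat: relabel node i
        cur[i] = 0
        if i != s:
            i = pred[i]
    return value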

Complexity of the Algorithm

We next show that the algorithm computes a maximum flow in O(n^2 m) time.

Lemma 4.2. (a) Each distance label increases at most n times. Consequently, the total number of relabel steps is at most n^2. (b) The number of augment steps is at most nm/2.

Proof. Each relabel step at node i increases d(i) by at least one. After the algorithm has relabeled node i at most n times, d(i) ≥ n, and from this point on the algorithm never selects node i again during an advance step, since for every node k in the current path, d(k) ≤ d(s) < n. Thus the algorithm relabels a node at most n times, and the total number of relabel steps is bounded by n^2.

Each augment step saturates at least one arc, i.e., decreases its residual capacity to zero. Suppose that the arc (i,j) becomes saturated at some iteration (at which d(i) = d(j) + 1). Then no more flow can be sent on (i,j) until flow is sent back from node j to node i, at which point d'(j) = d'(i) + 1 ≥ d(i) + 1 = d(j) + 2. Hence, between two consecutive saturations of arc (i,j), d(j) increases by at least 2 units. Consequently, any arc (i,j) can become saturated at most n/2 times, and the total number of arc saturations is no more than nm/2.

Theorem 4.3. The shortest augmenting path algorithm runs in O(n^2 m) time.

Proof. The algorithm performs O(nm) flow augmentations, and each augmentation takes O(n) time, resulting in O(n^2 m) total effort in the augmentation steps. Each advance step increases the length of the partial admissible path by one, and each retreat step decreases its length by one. Since each partial admissible path has length at most n, the algorithm requires at most O(n^2 + n^2 m) advance steps: the first term comes from the number of retreat (relabel) steps, and the second term from the number of augmentations, which are bounded by nm/2 by the previous lemma.

For each node i, the algorithm performs the relabel operation O(n) times, each execution requiring O(|A(i)|) time. The total time spent in all relabel operations is Σ(i ∈ N) n |A(i)| = O(nm). Finally, we consider the time spent in identifying admissible arcs. The time taken to identify the admissible arc of node i is O(1) plus the time spent in scanning arcs in A(i). After having performed |A(i)| such scannings, the algorithm reaches the end of the arc list and relabels node i. Thus the total time spent in all

scannings is O(Σ(i ∈ N) n |A(i)|) = O(nm). The combination of these time bounds establishes the theorem.

The proof of Theorem 4.3 also suggests an alternative termination condition for the shortest augmenting path algorithm. The termination criterion of d(s) ≥ n is satisfactory for a worst-case analysis, but may not be efficient in practice. Researchers have observed empirically that the algorithm spends too much time in relabeling, a major portion of which is done after it has already found a maximum flow. The algorithm can be improved by detecting the presence of a minimum cutset prior to performing these relabeling operations. We can do so by maintaining the number of nodes α_k with distance label equal to k, for 0 ≤ k < n. The algorithm updates this array after every relabel operation and terminates whenever it first finds a gap in the α array, i.e., α_k* = 0 for some k* < n, because then S = {i : d(i) > k*} denotes a minimum cutset.

The idea of augmenting flows along shortest paths is intuitively appealing and easy to implement in practice. The resulting algorithms identify at most O(nm) augmenting paths, and this bound is tight: on particular examples these algorithms perform Ω(nm) augmentations. The only way to improve the running time of the shortest augmenting path algorithm is to perform fewer computations per augmentation. The use of a sophisticated data structure, called dynamic trees, reduces the average time for each augmentation from O(n) to O(log n). This implementation of the maximum flow algorithm runs in O(nm log n) time, and obtaining further improvements appears quite difficult, except in very dense networks. These implementations with sophisticated data structures appear to be primarily of theoretical interest, however, because maintaining the data structures requires substantial overhead that tends to increase rather than reduce the computational times in practice.

Potential Functions and an Alternate Proof of Lemma 4.2(b)

A powerful method for proving computational time bounds is to use potential functions. Potential function techniques are general purpose techniques for proving the complexity of an algorithm by analyzing the effects of different steps on an appropriately defined function. The use of potential functions enables us to define an "accounting" relationship between the occurrences of various steps of an algorithm that can be used to

obtain a bound on the steps that might be difficult to obtain using other arguments. Rather than formally introducing potential functions, we illustrate the technique by showing that the number of augmentations in the shortest augmenting path algorithm is O(nm).

Suppose that in the shortest augmenting path algorithm we kept track of the number of admissible arcs in the residual network. Let F(k) denote the number of admissible arcs at the end of the k-th step; for the purpose of this argument, we count a step either as an augmentation or as a relabel operation. Let the algorithm perform K steps before it terminates. Clearly, F(0) ≤ m and F(K) ≥ 0. Each augmentation decreases the residual capacity of at least one arc to zero and hence reduces F by at least one unit. Each relabeling of node i creates as many as |A(i)| new admissible arcs, and increases F by the same amount. This increase in F is at most nm over all relabelings, since the algorithm relabels any node at most n times (as a consequence of Lemma 4.2) and Σ(i ∈ N) n |A(i)| = nm. Since the initial value of F is at most m more than its terminal value, the total decrease in F due to all augmentations is at most m + nm. Thus the number of augmentations is at most m + nm = O(nm).

This argument is fairly representative of the potential function technique. Our objective was to bound the number of augmentations. We did so by defining a potential function that decreases whenever the algorithm performs an augmentation and increases only when the algorithm relabels distances, and thus we could bound the number of augmentations using bounds on the number of relabels. In general, we bound the number of steps of one type in terms of known bounds on the number of steps of other types.

4.4 Preflow-Push Algorithms

Augmenting path algorithms send flow by augmenting along a path. This basic step further decomposes into the more elementary operation of sending flow along an arc. Thus sending a flow of Δ units along a path of k arcs decomposes into k basic operations of sending Δ units of flow along an arc of the path. We shall refer to each of these basic operations as a push.

A path augmentation has one advantage over a single push: it maintains conservation of flow at all nodes. In fact, the push-based algorithms such as those we develop in this and the following sections necessarily violate conservation of flow.

Rather, these algorithms permit the flow into a node to exceed the flow out of this node. We will refer to any such flows as preflows. A preflow x is a function x: A → R that satisfies (4.1c) and the following relaxation of (4.1b):

Σ(j : (j,i) ∈ A) x_ji - Σ(j : (i,j) ∈ A) x_ij ≥ 0, for all i ∈ N - {s, t}.

The preflow-push algorithms maintain a preflow at each intermediate stage. For a given preflow x, we define the excess of each node i ∈ N - {s, t} as

e(i) = Σ(j : (j,i) ∈ A) x_ji - Σ(j : (i,j) ∈ A) x_ij.

We refer to a node with positive excess as an active node, and we adopt the convention that the source and sink nodes are never active. At each iteration of the algorithm (except at its initialization and its termination), the network contains at least one active node. The goal of each iterative step is to choose some active node and to send its excess closer to the sink, closeness being measured with respect to the current distance labels. As in the shortest augmenting path algorithm, we send flow only on admissible arcs. If the method cannot send the excess from a node to nodes with smaller distance labels, it increases the distance label of the node so that it creates at least one new admissible arc. The preflow-push algorithms perform all operations using only local information, and the algorithm terminates when the network contains no active nodes.

Preflow-push algorithms have several advantages over augmentation based algorithms. First, they are more general and more flexible. Second, they can push flow closer to the sink before identifying augmenting paths. Third, they are better suited for distributed or parallel computation. Fourth, the best preflow-push algorithms currently outperform the best augmenting path algorithms in theory as well as in practice.

The Generic Algorithm

We define the distance labels and admissible arcs as in the previous section. The two basic operations of the generic preflow-push method are (i) pushing the flow on an admissible arc, and (ii) updating a distance label. The preflow-push algorithm uses the following subroutines:

end. j) e A(i) and > 0}. to determine initial distance labels d(i). in this network. end. It might be instructive to visualize the generic preflow-push algorithm in terms of a physical network. j) then push 5 = min{e(i). stairting at node t. PREPROCESS. perform a backward breadth first-search of the residual network. end. : r^:) units of flow from 1 : node Tj: i to node j else replace d(i) by min {d(j) + (i. end. and to the sink. and nonsaturating otherwise.. nodes represent joints. begin if the network contains an admissible arc (i. we visualize flow in an . while the network contains an begin select active node do an active node i. arcs represent flexible water pipes. j) e A(s) and d(s) : = n.86 procedure PREPROCESS. and the distance function measures how far nodes are above the ground. by 5 units. We refer to the process of increasing the distance label of a node as a relabel operation. j) increases both saturating if and r. We say that a push of 6 units of flow on arc is 5 = rj. Xgj : = Ugj for each arc (s. algorithm begin PREFLOW-PUSH. procedure PUSH/RELABEL(i). we v^h to send water from the source In addition. to create at least The piirpose of the relabel operation is one admissible arc on which the algorithm can perform further pushes. A push of 5 units e(j) from node i to node j decreases both e(i) and r^: by 6 units and (i. begin x: = 0. The following generic version of the preflow-push algorithm combines the subroutines just described. PUSH/RELABEL(i).

Initially, we move the source node upward, and water flows to its neighbors. In general, water flows downhill towards the sink; occasionally, however, flow becomes trapped locally at a node that has no downhill neighbors. At this point, we move the node upward, and again water flows downhill towards the sink. Eventually, no more flow can reach the sink. As we continue to move nodes upwards, the remaining excess flow eventually flows back towards the source. The algorithm terminates when all the water flows either into the sink or into the source.

The preprocessing step accomplishes several important tasks. First, it gives each node incident to node s a positive excess, so that the algorithm can begin by selecting some node with positive excess. Second, since the preprocessing step saturates all arcs incident to node s, none of these arcs is admissible, and setting d(s) = n will satisfy the validity condition C4.2. Third, since d(s) = n is a lower bound on the length of any shortest path from s to t, the residual network contains no path from s to t. Since distance labels are nondecreasing, we are also guaranteed that in subsequent iterations the residual network will never contain a directed path from s to t, and so there never will be any need to push flow from s again.

Figure 4.3 illustrates the push/relabel steps applied to the example given in Figure 4.1(a). Figure 4.3(a) specifies the preflow determined by the preprocess step. Suppose the select step examines node 2. Since arc (2,4) has residual capacity r_24 = 1 and d(2) = d(4) + 1, the algorithm performs a (saturating) push of value δ = min {2, 1} = 1 unit. The push reduces the excess of node 2 to 1. Arc (2,4) is deleted from the residual network and arc (4,2) is added to the residual network. Since node 2 is still an active node, it can be selected again for further pushes. The arcs (2,3) and (2,1) have positive residual capacities, but they do not satisfy the distance condition. Hence, the algorithm performs a relabel operation and gives node 2 a new distance label d'(2) = min {d(3) + 1, d(1) + 1} = min {2, 5} = 2.

In the push/relabel(i) step, we identify an admissible arc in A(i) using the same data structure we used in the shortest augmenting path algorithm: we maintain with each node i a current arc (i,j), which is the current candidate for the push operation, and we choose the current arc by sequentially scanning the arc list. We have seen earlier that scanning the arc lists takes O(nm) total time if the algorithm relabels each node O(n) times.
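The pieces fit together in the following Python sketch of the generic method. It shares the conventions of the earlier sketches (residual capacities r, adjacency lists adj over both arc directions) and, for brevity, replaces the exact initial labels of the preprocess step by the valid choice d(s) = n and d(i) = 0 elsewhere, which costs only additional relabels.

def preflow_push(n, adj, r, s, t):
    d = [0] * n
    e = [0] * n                            # node excesses
    d[s] = n
    for j in adj[s]:                       # preprocess: saturate arcs out of s
        delta = r[s, j]
        if delta > 0:
            r[s, j] -= delta
            r[j, s] += delta
            e[j] += delta
    active = {i for i in range(n) if e[i] > 0 and i not in (s, t)}
    while active:
        i = next(iter(active))             # select any active node
        pushed = False
        for j in adj[i]:
            if r[i, j] > 0 and d[i] == d[j] + 1:     # admissible arc
                delta = min(e[i], r[i, j])           # push delta units
                r[i, j] -= delta
                r[j, i] += delta
                e[i] -= delta
                e[j] += delta
                if j not in (s, t) and e[j] > 0:
                    active.add(j)
                pushed = True
                if e[i] == 0:
                    active.discard(i)
                    break
        if not pushed:                     # relabel: no admissible arc left;
            # e(i) > 0 guarantees a residual arc out of i (Lemma 4.3)
            d[i] = min(d[j] + 1 for j in adj[i] if r[i, j] > 0)
    return e[t]                            # flow value = excess at the sink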

(a) The residual network after the preprocessing step: d(1) = 4; d(2) = 1, e(2) = 2; d(3) = 1, e(3) = 4; d(4) = 0. (b) After the execution of step PUSH(2).

(c) After the execution of step RELABEL(2): d(2) = 2.

Figure 4.3 An illustration of push and relabel steps.

Assuming that the generic preflow-push algorithm terminates, we can easily show that it finds a maximum flow. The algorithm terminates when the excess resides either at the source or at the sink, implying that the current preflow is a flow. Since d(s) = n, the residual network contains no path from the source to the sink. This condition is the termination criterion of the augmenting path algorithm, and thus the total flow on the arcs directed into the sink is the maximum flow value.

Complexity of the Algorithm

We now analyze the complexity of the algorithm. We begin by establishing one important result: that distance labels are always valid and do not increase too many times. The first of these conclusions follows from Lemma 4.1, because, as in the shortest augmenting path algorithm, the preflow-push algorithm pushes flow only on admissible arcs and relabels a node only when no admissible arc emanates from it. The second conclusion follows from the following lemma.

Lemma 4.3. At any stage of the preflow-push algorithm, each node i with positive excess is connected to node s by a directed path from i to s in the residual network.

Proof. By the flow decomposition theory, any preflow x can be decomposed with respect to the original network G into nonnegative flows along (i) paths from the source s to t, (ii) paths from the source s to active nodes, and (iii) flows around directed cycles. Let i be an

it had a positive excess. and d(i) < 2n for all i e is I. Each distance is label increases at . new excess at node d(j). the total is of relabel steps at most 2n^ (b) The number of saturating pushes at most nm. the algorithm does not minimize over an empty Lemma Proof. Since the total increase in d(i) throughout the running time of the i algorithm for each node distance labels is is bounded by 2n''.2. is at most 2n^. i and hence s. and hence a directed path from i to s.6. create a A saturating push on arc might 1. The last time the algorithm relabeled node i. j) it performs a saturating or a nonsaturating push. Since < n. thereby increasing the number of active nodes by and increasing F by which may be as much as 2n per saturating push. The proof is ver>' much similar to that of Lemma 4.2 imply that (a) d(i) < d(s) + n - 1 < 2n. V i€ I d(i). x. Case The <ilgorithm is unable to find an admissible arc along which it can push flow. In this case the distance label of node i increases by e ^ 1 units. I denote the set of active nodes. Lemma Proof. and so (i. Lemma number 4. and flows around cycles do not P contribute to the excess at node Then the residual network contains the reversal of O' with the orientation of each arc reversed). This lemma imples set. 4. Consequently. Proof.4. Then there t must be a path P from s to i in the flow decomposition of since paths from s to i.5. 4. and hence 2n'^m Next note that a nonsaturating push on arc (i. Cor^ider the potential function F = . This operation increases F by at most e units. The number of nonsaturating pushes is O(n^m). For each node i e N. j) over all saturating pushes. does not . At termination.90 active node relative to the preflou' x in G. During the push/ relabel (i) one of the following two must apply: 1. Let III We prove the lemma using an argument based on potential functions. most 2n times. the total increase in F due to increases in bounded by is Case 2. dii) < 2n. The algorithm able to identify an arc on which it can push flow. j. F cases zero. that during a relabel step. 2n. the initial value of F (after the preprocessing step) step. the residual network contained a path of length at most n-1 from node fact that d(s) to node The = n and condition C4.

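To make the push and relabel operations being counted here concrete, the following is a minimal Python sketch of the generic push/relabel step. The data layout (dictionaries r, e, d and adjacency lists adj) and all identifiers are our own illustrative choices, not part of the original algorithm statement.

    # A minimal sketch of the generic push/relabel step: r[i][j] is the
    # residual capacity of arc (i, j), e[i] the excess of node i, d[i] its
    # distance label, and adj[i] the nodes joined to i by a residual arc.
    def push_relabel(i, r, e, d, adj):
        for j in adj[i]:
            # arc (i, j) is admissible if it has positive residual
            # capacity and d(i) = d(j) + 1
            if r[i][j] > 0 and d[i] == d[j] + 1:
                delta = min(e[i], r[i][j])   # saturating if delta = r[i][j]
                r[i][j] -= delta
                r[j][i] += delta
                e[i] -= delta
                e[j] += delta
                return
        # no admissible arc: relabel node i; by Lemma 4.3 the minimum
        # below is taken over a nonempty set
        d[i] = 1 + min(d[j] for j in adj[i] if r[i][j] > 0)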
Finally, we indicate how the algorithm keeps track of active nodes for the push/relabel steps. The algorithm maintains a set S of active nodes. It adds to S nodes that become active following a push and are not already in S, and deletes from S nodes that become inactive following a nonsaturating push. Several data structures (for example, doubly linked lists) are available for storing S so that the algorithm can add, delete, or select elements from it in O(1) time. Consequently, it is easy to implement the preflow-push algorithm in O(n²m) time. We have thus established the following theorem:

Theorem 4.4. The generic preflow-push algorithm runs in O(n²m) time.

A Specialization of the Generic Algorithm

The running time of the generic preflow-push algorithm is comparable to the bound of the shortest augmenting path algorithm. However, the preflow-push algorithm has several nice features, in particular, its flexibility and its potential for further improvements. By specifying different rules for selecting nodes for the push/relabel operations, we can derive many different algorithms from the generic version. For example, suppose that we always select an active node with the highest distance label for the push/relabel step. Let h* = max {d(i) : e(i) > 0, i ∈ N} at some point of the algorithm. Then nodes with distance label h* push flow to nodes with distance label h* - 1, and these nodes, in turn, push flow to nodes with distance label h* - 2, and so on. Note that if a node is relabeled, then excess moves up and then gradually comes down. If the algorithm relabels no node during n consecutive node examinations, then all excess reaches the sink node and the algorithm terminates. Since the algorithm requires O(n²) relabel operations, we immediately obtain a bound of O(n³) on the number of node examinations. Each node examination entails at most one nonsaturating push. Consequently, this algorithm performs O(n³) nonsaturating pushes.

To implement this approach, we maintain the lists LIST(r) = {i ∈ N : i is active and d(i) = r}, and a variable level, which is an upper bound on the highest index r for which LIST(r) is nonempty.

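In outline, this bucket structure might look as follows; the sketch uses Python lists in place of doubly linked lists, and the class and method names are our own.

    # Sketch of the highest-label selection rule: LIST[r] holds the active
    # nodes with distance label r; level is an upper bound on the highest
    # nonempty index.
    class HighestLabelBuckets:
        def __init__(self, n):
            self.LIST = [[] for _ in range(2 * n)]   # d(i) < 2n by Lemma 4.4
            self.level = 0

        def add_active(self, i, label):
            # assumes the caller adds a node only when it becomes active;
            # a relabeled node must be re-added with its new label
            self.LIST[label].append(i)
            self.level = max(self.level, label)

        def select_highest(self):
            # scan downward from level to the highest nonempty bucket
            while self.level > 0 and not self.LIST[self.level]:
                self.level -= 1
            return self.LIST[self.level].pop() if self.LIST[self.level] else None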
We can store these lists as doubly linked lists so that adding, deleting, or selecting an element takes O(1) time. We identify the highest indexed nonempty list by starting at LIST(level) and sequentially scanning the lower indexed lists. We leave it as an exercise to show that the overall effort needed to scan the lists is bounded by n plus the total increase in the distance labels, which is O(n²). Hence, the implementation of the highest label rule is straightforward. We summarize these facts.

Theorem 4.5. The preflow-push algorithm that always pushes flow from an active node with the highest distance label runs in O(n³) time.

The O(n³) bound for the highest label preflow push algorithm is straightforward, and it can be improved. Researchers have shown, using a more clever analysis, that the highest label preflow push algorithm in fact runs in O(n²√m) time.

4.5 Excess-Scaling Algorithm

The generic preflow-push algorithm allows the flows at each intermediate step to violate the mass balance equations. By pushing flows from active nodes, the algorithm attempts to satisfy the mass balance equations. The function e_max = max {e(i) : i is an active node} is one measure of the infeasibility of a preflow. Note, though, that during the execution of the generic algorithm, we would observe no particular pattern in e_max, except that e_max eventually decreases to the value 0. In this section, we develop an excess-scaling technique that systematically reduces e_max to 0.

We will next describe this implementation of the generic preflow-push algorithm, which dramatically reduces the number of nonsaturating pushes from O(n²m) to O(n² log U). Recall that U represents the largest arc capacity in the network. We refer to this algorithm as the excess-scaling algorithm, since it is based on scaling the node excesses.

The excess-scaling algorithm is based on the following ideas. Let Δ denote an upper bound on e_max; we refer to this bound as the excess-dominator.

The algorithm pushes flow from nodes whose excess is more than Δ/2. This choice assures that during nonsaturating pushes the algorithm sends relatively large excess closer to the sink. Pushes carrying small amounts of flow are of little benefit and can cause bottlenecks that retard the algorithm's progress.

The algorithm also does not allow the maximum excess to increase beyond Δ. This algorithmic strategy may prove to be useful for the following reason. Suppose several nodes send flow to a single node j, creating a very large excess. It is likely that node j could not send the accumulated flow closer to the sink, and thus the algorithm would need to increase its distance label and return much of its excess back toward the source. Thus, pushing too much flow to any node is likely to be a wasted effort.

The excess-scaling algorithm has the following algorithmic description.

algorithm EXCESS-SCALING;
begin
    PREPROCESS;
    K := ⌈log U⌉;
    for k := K down to 0 do
    begin (Δ-scaling phase)
        Δ := 2^k;
        while the network contains a node i with e(i) > Δ/2 do
            perform push/relabel(i) while ensuring that no node excess exceeds Δ;
    end;
end;

The algorithm performs a number of scaling phases, with the value of the excess-dominator Δ decreasing from phase to phase. We refer to a specific scaling phase with a certain value of Δ as the Δ-scaling phase. Initially, U ≤ Δ < 2U, since Δ = 2^⌈log U⌉ when the logarithm has base 2. During the Δ-scaling phase, Δ/2 < e_max ≤ Δ, and e_max may vary up and down during the phase. When e_max ≤ Δ/2, a new scaling phase begins. After the algorithm has performed ⌈log U⌉ + 1 scaling phases, e_max decreases to the value 0 and we obtain the maximum flow.

The excess-scaling algorithm uses the same step push/relabel(i) as in the generic preflow-push algorithm, but with one slight difference: instead of pushing δ = min {e(i), r_ij} units of flow, it pushes δ = min {e(i), r_ij, Δ - e(j)} units. This change will ensure that the algorithm permits no excess to exceed Δ. The algorithm uses the following node selection rule to guarantee that no node excess exceeds Δ.

Selection Rule. Among all nodes with excess of more than Δ/2, select a node with minimum distance label (breaking ties arbitrarily).

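Assuming push and relabel primitives like those sketched earlier, the scaling loop and the modified push can be rendered as follows. This is an illustrative sketch only: nodes is assumed to exclude the source and the sink, and integral capacities guarantee termination.

    # Sketch of the excess-scaling loop with the modified push amount.
    from math import ceil, log2

    def excess_scaling_pushes(nodes, r, e, d, adj, U):
        K = ceil(log2(U)) if U > 1 else 0
        for k in range(K, -1, -1):
            Delta = 2 ** k                      # the excess-dominator
            while True:
                big = [i for i in nodes if e[i] > Delta / 2]
                if not big:
                    break                       # the Delta-scaling phase ends
                # selection rule: minimum distance label among large excesses
                i = min(big, key=lambda v: d[v])
                for j in adj[i]:
                    if r[i][j] > 0 and d[i] == d[j] + 1:
                        # push min {e(i), r_ij, Delta - e(j)}; since j has
                        # excess at most Delta/2, this amount is positive
                        delta = min(e[i], r[i][j], Delta - e[j])
                        r[i][j] -= delta; r[j][i] += delta
                        e[i] -= delta; e[j] += delta
                        break
                else:
                    # no admissible arc: relabel node i
                    d[i] = 1 + min(d[j] for j in adj[i] if r[i][j] > 0)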
Lemma 4.7. The algorithm satisfies the following two conditions:

C4.5. Each nonsaturating push sends at least Δ/2 units of flow.

C4.6. No excess ever exceeds Δ.

Proof. For every push on arc (i, j), we have e(i) > Δ/2 and e(j) ≤ Δ/2, since node i is a node with smallest distance label among the nodes whose excess is more than Δ/2, and since d(j) = d(i) - 1 < d(i) because arc (i, j) is admissible. Hence, by sending min {e(i), r_ij, Δ - e(j)} ≥ min {Δ/2, Δ - e(j)} ≥ Δ/2 units of flow, we ensure that a nonsaturating push sends at least Δ/2 units. Further, the push operation increases only e(j). Let e'(j) be the excess at node j after the push. Then e'(j) = e(j) + min {e(i), r_ij, Δ - e(j)} ≤ e(j) + Δ - e(j) ≤ Δ. All node excesses thus remain less than or equal to Δ.

Lemma 4.8. The excess-scaling algorithm performs O(n²) nonsaturating pushes per scaling phase, and O(n² log U) pushes in total.

Proof. Consider the potential function F = Σ_{i ∈ N} e(i) d(i)/Δ. Using this potential function, we will establish the first assertion of the lemma; since the algorithm has O(log U) scaling phases, the second assertion is a consequence of the first. The initial value of F at the beginning of the Δ-scaling phase is bounded by 2n², because e(i) is bounded by Δ and d(i) is bounded by 2n. During the push/relabel(i) step, one of the following two cases must apply:

Case 1. The algorithm is unable to find an admissible arc along which it can push flow. In this case the distance label of node i increases by ε ≥ 1 units. This relabeling operation increases F by at most ε units, because e(i) ≤ Δ. Since for each node i the total increase in d(i) throughout the running of the algorithm is bounded by 2n (by Lemma 4.4), the increase in F due to the relabeling of nodes is bounded by 2n² in the Δ-scaling phase (actually, the increase in F due to node relabelings is at most 2n² over all scaling phases).

Case 2. The algorithm is able to identify an arc on which it can push flow, and so it performs either a saturating or a nonsaturating push. In either case, F decreases. A nonsaturating push on arc (i, j) sends at least Δ/2 units of flow from node i to node j, and since d(j) = d(i) - 1, after this operation F decreases by at least 1/2 unit. Since the initial value of F at the beginning of a Δ-scaling phase is at most 2n² and the increases in F during this scaling phase sum to at most 2n² (from Case 1), the number of nonsaturating pushes is bounded by 8n².

This lemma implies a bound of O(nm + n² log U) for the excess-scaling algorithm, since we have already seen that all other operations (such as saturating pushes, relabel operations and finding admissible arcs) require O(nm) time. Up to this point, we have ignored the method needed to identify a node with the minimum distance label among the nodes with excess more than Δ/2. Making this identification is easy if we use a scheme similar to the one used in the preflow-push method in Section 4.4 to find a node with the highest distance label. We maintain the lists LIST(r) = {i ∈ N : e(i) > Δ/2 and d(i) = r}, and a variable level which is a lower bound on the smallest index r for which LIST(r) is nonempty. We identify the lowest indexed nonempty list by starting at LIST(level) and sequentially scanning the higher indexed lists. We leave it as an exercise to show that the overall effort needed to scan the lists is bounded by the number of pushes performed by the algorithm plus O(n log U), and hence is not a bottleneck operation. With this observation, we can summarize our discussion by the following result.

Theorem 4.6. The preflow-push algorithm with excess-scaling runs in O(nm + n² log U) time.

Networks with Lower Bounds on Flows

To conclude this section, we show how to solve maximum flow problems with nonnegative lower bounds on flows. Let l_ij ≥ 0 denote the lower bound for the flow on any arc (i, j) ∈ A. Although the maximum flow problem with zero lower bounds always has a feasible solution, the problem with nonnegative lower bounds could be infeasible. We can, however, determine the feasibility of this problem by solving a maximum flow problem with zero lower bounds as follows.

We set x_ij := l_ij for each arc (i, j) ∈ A. This choice gives us a pseudoflow, with e(i) representing the excess or deficit of any node i ∈ N. (We refer the reader to Section 5.4 for the definition of a pseudoflow with both excesses and deficits.) We introduce a super source, node s*, and a super sink, node t*. For each node i with e(i) > 0, we add an arc (s*, i) with capacity e(i), and for each node i with e(i) < 0, we add an arc (i, t*) with capacity -e(i). We then solve a maximum flow problem from s* to t*. Let x* denote the maximum flow and v* denote the maximum flow value in the transformed network. If v* = Σ_{i : e(i) > 0} e(i), then the original problem is feasible, and choosing the flow on each arc (i, j) as x*_ij + l_ij gives a feasible flow; otherwise, the problem is infeasible.

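The feasibility test just described can be sketched as follows. The helper max_flow, its return signature, and the node labels 's*' and 't*' are assumptions of this sketch, not a fixed interface.

    # Sketch: set x = l, compute the resulting imbalances, and connect a
    # super source s* and super sink t*; feasible iff the max flow
    # saturates all the arcs leaving s*.
    def feasible_flow_with_lower_bounds(N, A, l, u, max_flow):
        e = {i: 0 for i in N}
        for (i, j) in A:                  # the pseudoflow x = l
            e[i] -= l[(i, j)]
            e[j] += l[(i, j)]
        cap = {(i, j): u[(i, j)] - l[(i, j)] for (i, j) in A}
        for i in N:
            if e[i] > 0:
                cap[('s*', i)] = e[i]     # excess node
            elif e[i] < 0:
                cap[(i, 't*')] = -e[i]    # deficit node
        value, x = max_flow(cap, 's*', 't*')   # assumed helper
        if value == sum(e[i] for i in N if e[i] > 0):
            return {(i, j): x.get((i, j), 0) + l[(i, j)] for (i, j) in A}
        return None                       # infeasible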
Once we have found a feasible flow x, we apply any of the maximum flow algorithms with only one change: initially define the residual capacity of an arc (i, j) as r_ij = (u_ij - x_ij) + (x_ji - l_ji). The first and second terms in this expression denote, respectively, the residual capacity for increasing flow on arc (i, j) and for decreasing flow on arc (j, i). It is possible to establish the optimality of the solution generated by the algorithm by generalizing the max-flow min-cut theorem to accommodate situations with lower bounds. These observations show that it is possible to solve the maximum flow problem with nonnegative lower bounds by two applications of the maximum flow algorithms we have already discussed.

5. MINIMUM COST FLOWS

In this section, we consider algorithmic approaches for the minimum cost flow problem. We consider the following node-arc formulation of the problem.

Minimize Σ_{(i,j) ∈ A} c_ij x_ij    (5.1a)

subject to

Σ_{j : (i,j) ∈ A} x_ij - Σ_{j : (j,i) ∈ A} x_ji = b(i), for all i ∈ N,    (5.1b)

0 ≤ x_ij ≤ u_ij, for each (i, j) ∈ A.    (5.1c)

Let C = max {c_ij : (i, j) ∈ A} and U = max [max {u_ij : (i, j) ∈ A}, max {|b(i)| : i ∈ N}]. We assume that the lower bounds l_ij on the arc flows are all zero and that the arc costs are nonnegative. The transformations T1 and T3 in Section 2.4 imply that these assumptions do not impose any loss of generality. We remind the reader of our blanket assumption that all data (cost, supply/demand and capacity) are integral. We also assume that the minimum cost flow problem satisfies the following two conditions.

A5.1 (Feasibility Assumption). We assume that Σ_{i ∈ N} b(i) = 0 and that the minimum cost flow problem has a feasible solution. We can ascertain the feasibility of the minimum cost flow problem by solving a maximum flow problem as follows. Introduce a super source node s* and a super sink node t*. For each node i with b(i) > 0, add an arc (s*, i) with capacity b(i), and for each node i with b(i) < 0, add an arc (i, t*) with capacity -b(i). Now solve a maximum flow problem from s* to t*. If the maximum flow value equals Σ_{i : b(i) > 0} b(i), then the minimum cost flow problem is feasible; otherwise, it is infeasible.

A5.2 (Connectedness Assumption). We assume that the network G contains an uncapacitated directed path (i.e., each arc in the path has infinite capacity) between every pair of nodes. We impose this condition, if necessary, by adding artificial arcs (1, j) and (j, 1) for each j ∈ N and assigning a large cost and a very large capacity to each of these arcs.

No such arc would appear in a minimum cost solution unless the problem contains no feasible solution without artificial arcs.

Our algorithms rely on the concept of residual networks. The residual network G(x) corresponding to a flow x is defined as follows: we replace each arc (i, j) ∈ A by two arcs, (i, j) and (j, i). The arc (i, j) has cost c_ij and residual capacity r_ij = u_ij - x_ij, and the arc (j, i) has cost -c_ij and residual capacity r_ji = x_ij. The residual network consists only of arcs with positive residual capacity.

The concept of residual networks poses some notational difficulties. For example, if the original network contains both the arcs (i, j) and (j, i), then the residual network may contain two arcs from node i to node j and/or two arcs from node j to node i, with possibly different costs. Our notation for arcs assumes that at most one arc joins one node to any other node. By using more complex notation, we can easily treat this more general case. However, rather than changing our notation, we will assume that parallel arcs never arise (or, by inserting extra nodes on parallel arcs, we can produce a network without any parallel arcs).

Observe that any directed cycle in the residual network G(x) is an augmenting cycle with respect to the flow x, and vice-versa (see Section 2.1 for the definition of an augmenting cycle). This equivalence implies the following alternate statement of Theorem 2.4.

Theorem 5.1. A feasible flow x is an optimum flow if and only if the residual network G(x) contains no negative cost directed cycle.

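Theorem 5.1 suggests a direct optimality test: build the residual network and look for a negative cost directed cycle, for instance by the label correcting (Bellman-Ford) method. The following Python sketch assumes our own arc layout (a dictionary mapping (i, j) to a (cost, capacity) pair) and, as in the text, at most one arc joining any pair of nodes.

    # Sketch: residual network with costs, and a negative cycle test.
    def residual_arcs(arcs, x):
        res = {}
        for (i, j), (c, u) in arcs.items():
            if x[(i, j)] < u:
                res[(i, j)] = (c, u - x[(i, j)])   # cost c_ij, capacity u - x
            if x[(i, j)] > 0:
                res[(j, i)] = (-c, x[(i, j)])      # cost -c_ij, capacity x
        return res

    def has_negative_cycle(nodes, res):
        dist = {i: 0 for i in nodes}      # zero labels detect any cycle
        for _ in range(len(nodes)):
            changed = False
            for (i, j), (c, _) in res.items():
                if dist[i] + c < dist[j]:
                    dist[j] = dist[i] + c
                    changed = True
            if not changed:
                return False
        return True    # a label still changed in the n-th pass

Each full pass costs O(m), so the test runs in O(nm) time, matching the label correcting bound quoted later for identifying a negative cycle.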
5.1 Duality and Optimality Conditions

As we have seen in Section 1.2, due to its special structure the minimum cost flow problem has a number of important theoretical properties. The linear programming dual of this problem inherits many of these properties. Moreover, the minimum cost flow problem and its dual have, from a linear programming point of view, rather simple complementary slackness conditions. In this section, we formally state the linear programming dual problem and derive the complementary slackness conditions.

We consider the minimum cost flow problem (5.1), assuming that u_ij > 0 for each arc (i, j) ∈ A. It is possible to show that this assumption imposes no loss of generality. We associate a dual variable π(i) with the mass balance constraint of node i. Since one of the constraints in (5.1b) is redundant, we can set one of these dual variables to an arbitrary value. We, therefore, assume that π(1) = 0. Further, we associate a dual variable δ_ij with the upper bound constraint of arc (i, j). The dual problem to (5.1) is:

Maximize Σ_{i ∈ N} b(i) π(i) - Σ_{(i,j) ∈ A} u_ij δ_ij    (5.2a)

subject to

π(i) - π(j) - δ_ij ≤ c_ij, for all (i, j) ∈ A,    (5.2b)

δ_ij ≥ 0 for all (i, j) ∈ A, and the π(i) unrestricted.    (5.2c)

The complementary slackness conditions for this primal-dual pair are:

x_ij > 0 ⟹ π(i) - π(j) - δ_ij = c_ij,    (5.3)

δ_ij > 0 ⟹ x_ij = u_ij.    (5.4)

These conditions are equivalent to the following optimality conditions:

x_ij = 0 ⟹ π(i) - π(j) ≤ c_ij,    (5.5)

0 < x_ij < u_ij ⟹ π(i) - π(j) = c_ij,    (5.6)

x_ij = u_ij ⟹ π(i) - π(j) ≥ c_ij.    (5.7)

To see this equivalence, suppose that 0 < x_ij < u_ij for some arc (i, j). The condition (5.3) implies that

π(i) - π(j) - δ_ij = c_ij.    (5.8)

Since x_ij < u_ij, (5.4) implies that δ_ij = 0; substituting this result in (5.8) yields (5.6). Whenever x_ij = u_ij > 0 for some arc (i, j), (5.3) again implies (5.8).

Substituting δ_ij ≥ 0 in this equation gives (5.7). Finally, if x_ij = 0 < u_ij for some arc (i, j), then (5.4) implies that δ_ij = 0; substituting this result in (5.2b) gives (5.5).

We define the reduced cost of an arc (i, j) as c̄_ij = c_ij - π(i) + π(j). The conditions (5.5)-(5.7) imply that a pair x, π of flows and node potentials is optimal if it satisfies the following conditions:

C5.1 x is feasible.

C5.2 If c̄_ij > 0, then x_ij = 0.

C5.3 If 0 < x_ij < u_ij, then c̄_ij = 0.

C5.4 If c̄_ij < 0, then x_ij = u_ij.

Observe that the condition C5.3 follows from the conditions C5.2 and C5.4; we retain it for the sake of completeness. These conditions, when stated in terms of the residual network, simplify to:

C5.5 (Primal feasibility) x is feasible.

C5.6 (Dual feasibility) c̄_ij ≥ 0 for each arc (i, j) in the residual network G(x).

Note that the condition C5.6 subsumes C5.2, C5.3 and C5.4. For example, if c̄_ij > 0 and x_ij > 0 for some arc (i, j), then the residual network would contain the arc (j, i) with c̄_ji = -c̄_ij < 0, contradicting C5.6; hence x_ij = 0. A similar contradiction arises if c̄_ij < 0 and x_ij < u_ij.

It is easy to establish the equivalence between these optimality conditions and the condition stated in Theorem 5.1. Consider any pair x, π of flows and node potentials satisfying C5.5 and C5.6. Let W be any directed cycle in the residual network. Condition C5.6 implies that Σ_{(i,j) ∈ W} c̄_ij ≥ 0. Further,

Σ_{(i,j) ∈ W} c̄_ij = Σ_{(i,j) ∈ W} c_ij + Σ_{(i,j) ∈ W} (-π(i) + π(j)) = Σ_{(i,j) ∈ W} c_ij,

and hence the residual network contains no negative cost cycle. To see the converse, suppose that x is feasible and G(x) does not contain a negative cycle. Then in the residual network the shortest distances from node 1, with respect to the arc lengths c_ij, are well defined. Let d(i) denote the shortest distance from node 1 to node i. The shortest path optimality condition C3.2 implies that d(j) ≤ d(i) + c_ij for each arc (i, j) in G(x). Let π = -d. Then 0 ≤ c_ij + d(i) - d(j) = c_ij - π(i) + π(j) = c̄_ij for all (i, j) in G(x). Hence, the pair x, π satisfies C5.5 and C5.6.

5.2 Relationship to Shortest Path and Maximum Flow Problems

The minimum cost flow problem generalizes both the shortest path and maximum flow problems. The shortest path problem from node s to all other nodes can be formulated as a minimum cost flow problem by setting b(s) = (n - 1), b(i) = -1 for all i ≠ s, and u_ij = ∞ for each (i, j) ∈ A (in fact, setting u_ij equal to any integer greater than (n - 1) will suffice if we wish to maintain finite capacities). Similarly, the maximum flow problem from node s to node t can be transformed to the minimum cost flow problem by introducing an additional arc (t, s) with c_ts = -1 and u_ts = ∞ (in fact, u_ts = m · max {u_ij : (i, j) ∈ A} would suffice), and by setting c_ij = 0 for each arc (i, j) ∈ A. Thus, algorithms for the minimum cost flow problem solve both the shortest path and maximum flow problems as special cases.

Conversely, algorithms for the shortest path and maximum flow problems are of great use in solving the minimum cost flow problem. Indeed, many of the algorithms for the minimum cost flow problem use shortest path and/or maximum flow algorithms as subroutines, either explicitly or implicitly. Consequently, improved algorithms for these two problems have led to improved algorithms for the minimum cost flow problem. This relationship will be more transparent when we discuss algorithms for the minimum cost flow problem.

We have already shown in Section 5.1 how to obtain an optimum dual solution from an optimum primal solution by solving a single shortest path problem. We now show how to obtain an optimal primal solution from an optimal dual solution by solving a single maximum flow problem. Suppose that π is an optimal dual solution and c̄ is the vector of reduced costs. We define the cost-residual network G* = (N, A*) as follows. The nodes in G* have the same supply/demand as the nodes in G. Any arc (i, j) ∈ A* has an upper bound u*_ij as well as a lower bound l*_ij, defined as follows:

(i) For each (i, j) ∈ A with c̄_ij > 0, A* contains an arc (i, j) with u*_ij = l*_ij = 0.

(ii) For each (i, j) ∈ A with c̄_ij < 0, A* contains an arc (i, j) with u*_ij = l*_ij = u_ij.

(iii) For each (i, j) ∈ A with c̄_ij = 0, A* contains an arc (i, j) with u*_ij = u_ij and l*_ij = 0.

The lower and upper bounds on the arcs in the cost-residual network G* are defined so that any flow in G* satisfies the optimality conditions C5.2-C5.4. If c̄_ij > 0, then condition C5.2 dictates that x_ij = 0 in the optimum flow. Similarly, if c̄_ij < 0, then condition C5.4 implies that the flow on arc (i, j) must be at the arc's upper bound in the optimum flow. If c̄_ij = 0, then any flow value between 0 and u_ij will satisfy the condition C5.3.

Now the problem is reduced to finding a feasible flow in the cost-residual network that satisfies the lower and upper bound restrictions of the arcs and, at the same time, meets the supply/demand constraints of the nodes. We first eliminate the lower bounds of the arcs, as described in Section 2.4, and then transform this problem to a maximum flow problem, as described in assumption A5.1. Let x* denote the maximum flow in the transformed network. Then x* + l* is an optimum solution of the minimum cost flow problem in G.

5.3 Negative Cycle Algorithm

Operations researchers, computer scientists, electrical engineers and many others have extensively studied the minimum cost flow problem and have proposed a number of different algorithms to solve this problem. Notable examples are the negative cycle, successive shortest path, primal-dual, out-of-kilter, primal simplex and scaling-based algorithms. In this and the following sections, we discuss most of these important algorithms for the minimum cost flow problem and point out relationships between them.

We first consider the negative cycle algorithm. This algorithm maintains a primal feasible solution x and strives to attain dual feasibility. It does so by identifying negative cost directed cycles in the residual network G(x) and augmenting flows in these cycles. The algorithm terminates when the residual network contains no negative cost cycles. Theorem 5.1 implies that when it terminates, it has found a minimum cost flow.

algorithm NEGATIVE CYCLE;
begin
    establish a feasible flow x in the network;
    while G(x) contains a negative cycle do
    begin
        use some algorithm to identify a negative cycle W;
        δ := min {r_ij : (i, j) ∈ W};
        augment δ units of flow along the cycle W and update G(x);
    end;
end;

A feasible flow in the network can be found by solving a maximum flow problem, as explained just after assumption A5.1. One algorithm for identifying a negative cost cycle is the label correcting algorithm for the shortest path problem, described in Section 3.4, which requires O(nm) time to identify a negative cycle. Every iteration reduces the flow cost by at least one unit. Since mCU is an upper bound on the initial flow cost and zero is a lower bound on the optimum flow cost, the algorithm terminates after at most O(mCU) iterations and requires O(nm²CU) time in total.

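One iteration of the algorithm, given a negative cycle W in the residual representation of the earlier sketch, might be coded as follows. This is a sketch, not a tuned implementation.

    # Sketch: augment delta units around a negative cycle W, given as a
    # list of residual arcs (i, j); res maps arcs to (cost, capacity).
    def augment_along_cycle(W, res):
        delta = min(res[(i, j)][1] for (i, j) in W)   # min r_ij on W
        for (i, j) in W:
            c, cap = res[(i, j)]
            if cap == delta:
                del res[(i, j)]                # arc leaves the network
            else:
                res[(i, j)] = (c, cap - delta)
            rc, rcap = res.get((j, i), (-c, 0))
            res[(j, i)] = (rc, rcap + delta)   # reversal gains capacity
        return delta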
This algorithm can be improved in the following three ways (which we briefly summarize):

(i) Identifying a negative cost cycle in effort much less than O(nm) time. The simplex algorithm (to be discussed later) nearly achieves this objective. It maintains a tree solution and node potentials that enable it to identify a negative cost cycle in O(m) effort. However, due to degeneracy, the simplex algorithm cannot necessarily send a positive amount of flow along this cycle.

(ii) Identifying a negative cost cycle with maximum improvement in the objective function value. The improvement in the objective function due to the augmentation along a cycle W is -(Σ_{(i,j) ∈ W} c_ij)(min {r_ij : (i, j) ∈ W}). Let x be some flow and x* be an optimum flow. The augmenting cycle theorem (Theorem 2.3) implies that x* equals x plus the flow on at most m augmenting cycles with respect to x. Further, the improvements in cost due to flow augmentations on these augmenting cycles sum to cx - cx*. Consequently, at least one augmenting cycle with respect to x must decrease the objective function by at least (cx - cx*)/m. Hence, if the algorithm always augments flow along a cycle with maximum improvement, then Lemma 1.1 implies that the method would obtain an optimum flow within O(m log mCU) iterations. Finding a maximum improvement cycle is a difficult problem, but a modest variation of this approach yields a polynomial time algorithm for the minimum cost flow problem.

(iii) Identifying a negative cost cycle with minimum mean cost. We define the mean cost of a cycle as its cost divided by the number of arcs it contains. A minimum mean cycle is a cycle whose mean cost is as small as possible. It is possible to identify a minimum mean cycle in O(nm) or O(√n m log nC) time. Recently, researchers have shown that if the negative cycle algorithm always augments the flow along a minimum mean cycle, then from one iteration to the next, the minimum mean cycle value is nondecreasing; moreover, its absolute value decreases by a factor of 1 - (1/n) within m iterations. Since the mean cost of the minimum mean (negative) cycle is bounded from below by -C and bounded from above by -1/n, Lemma 1.1 implies that this algorithm will terminate in O(nm log nC) iterations.

5.4 Successive Shortest Path Algorithm

The negative cycle algorithm maintains primal feasibility of the solution at every step and attempts to achieve dual feasibility. In contrast, the successive shortest path algorithm maintains dual feasibility of the solution at every step and strives to attain primal feasibility. It maintains a solution x that satisfies the nonnegativity and capacity constraints, but violates the supply/demand constraints of the nodes. At each step, the algorithm selects a node i with extra supply and a node j with unfulfilled demand, and sends flow from i to j along a shortest path in the residual network. The algorithm terminates when the current solution satisfies all the supply/demand constraints.

A pseudoflow is a function x : A → R satisfying only the capacity and nonnegativity constraints; it may violate the supply/demand constraints of the nodes. For any pseudoflow x, we define the imbalance of node i as

e(i) = b(i) + Σ_{j : (j,i) ∈ A} x_ji - Σ_{j : (i,j) ∈ A} x_ij, for all i ∈ N.

If e(i) > 0 for some node i, then e(i) is called the excess of node i; if e(i) < 0, then -e(i) is called the deficit. A node i with e(i) = 0 is called balanced. Let S and T denote the sets of excess and deficit nodes respectively.

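The imbalance computation is a one-pass scan over the arcs; in Python, with b and x as dictionaries (our own layout):

    # Sketch: the imbalance e(i) = b(i) + inflow - outflow of a pseudoflow.
    def imbalances(N, A, b, x):
        e = dict(b)                    # e(i) starts at b(i)
        for (i, j) in A:
            e[i] -= x[(i, j)]
            e[j] += x[(i, j)]
        return e   # e[i] > 0: excess node; e[i] < 0: deficit node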
The residual network corresponding to a pseudoflow is defined in the same way that we define the residual network for a flow. The successive shortest path algorithm successively augments flow along shortest paths computed with respect to the reduced costs c̄_ij. Observe that for any directed path P from a node k to a node l,

Σ_{(i,j) ∈ P} c̄_ij = Σ_{(i,j) ∈ P} c_ij - π(k) + π(l).

Hence, the node potentials change all path lengths between a specific pair of nodes by a constant amount, and the shortest path with respect to c̄_ij is the same as the shortest path with respect to c_ij. The correctness of the successive shortest path algorithm rests on the following result.

Lemma 5.1. Suppose a pseudoflow x satisfies the dual feasibility condition C5.6 with respect to the node potentials π. Furthermore, suppose that x' is obtained from x by sending flow along a shortest path from a node k to a node l in G(x). Then x' also satisfies the dual feasibility conditions with respect to some node potentials.

Proof. Since x satisfies the dual feasibility conditions with respect to the node potentials π, we have c̄_ij ≥ 0 for every arc (i, j) in G(x). Let d(v) denote the shortest path distances from node k to any node v in G(x) with respect to the arc lengths c̄_ij. We claim that x also satisfies the dual feasibility conditions with respect to the potentials π' = π - d. The shortest path optimality conditions (i.e., C3.2) imply that

d(j) ≤ d(i) + c̄_ij, for every arc (i, j) in G(x).

Substituting c̄_ij = c_ij - π(i) + π(j) in these conditions and using π'(i) = π(i) - d(i) yields

c̄'_ij = c_ij - π'(i) + π'(j) ≥ 0, for every arc (i, j) in G(x).

Hence, x satisfies C5.6 with respect to the node potentials π'. Next note that c̄'_ij = 0 for every arc (i, j) on the shortest path P from node k to node l, since d(j) = d(i) + c̄_ij for every arc (i, j) ∈ P and c̄_ij = c_ij - π(i) + π(j).

We are now in a position to prove the lemma. Augmenting flow along any arc in P maintains the dual feasibility condition C5.6 for that arc. Augmenting flow on an arc (i, j) may add its reversal (j, i) to the residual network. But since c̄'_ij = 0 for each arc (i, j) ∈ P, we have c̄'_ji = 0, and so the arc (j, i) also satisfies C5.6.

The node potentials play a very important role in this algorithm. Besides using them to prove the correctness of the algorithm, we use them to ensure that the arc lengths are nonnegative, thus enabling us to solve the shortest path subproblems more efficiently.

The following formal statement of the successive shortest path algorithm summarizes the steps of this method.

algorithm SUCCESSIVE SHORTEST PATH;
begin
    x := 0 and π := 0;
    compute the imbalances e(i) and initialize the sets S and T;
    while S ≠ ∅ do
    begin
        select a node k ∈ S and a node l ∈ T;
        determine the shortest path distances d(j) from node k to all other nodes in G(x) with respect to the reduced costs c̄_ij;
        let P denote a shortest path from k to l;
        update π := π - d;
        δ := min [e(k), -e(l), min {r_ij : (i, j) ∈ P}];
        augment δ units of flow along the path P;
        update x, S and T;
    end;
end;

To initialize the algorithm, we set x = 0, which is a feasible pseudoflow and satisfies C5.6 with respect to the node potentials π = 0 since, by assumption, all arc lengths are nonnegative. Note that if S ≠ ∅, then T ≠ ∅, because the sum of excesses always equals the sum of deficits. Further, the connectedness assumption implies that the residual network G(x) contains a directed path from node k to node l. Each iteration of the algorithm solves a shortest path problem with nonnegative arc lengths and reduces the supply of some node by at least one unit. Consequently, if U is an upper bound on the largest supply of any node, the algorithm terminates in at most nU iterations. Since the arc lengths c̄_ij are nonnegative, the shortest path problem at each iteration can be solved using Dijkstra's algorithm. So the overall complexity of this algorithm is O(nU S(n, m, C)), where S(n, m, C) is the time taken by Dijkstra's algorithm. Currently, the best strongly polynomial-time bound to implement Dijkstra's algorithm is O(m + n log n), and the best (weakly) polynomial time bound is O(min {m log log C, m + n√(log C)}). The successive shortest path algorithm is pseudopolynomial time, since it is polynomial in n, m and the largest supply U. The algorithm is, however, polynomial time for the assignment problem, a special case of the minimum cost flow problem for which U = 1.

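A single iteration of this algorithm can be sketched as follows. The helper shortest_paths (returning the distance dictionary and a shortest path as a list of residual arcs with their residual capacities) is hypothetical, and the sketch shows only forward residual arcs; a backward residual arc would instead decrease x[(j, i)].

    # Sketch of one iteration of the successive shortest path algorithm.
    def ssp_iteration(S, T, e, pi, x, shortest_paths):
        k = next(iter(S))                 # any excess node
        l = next(iter(T))                 # any deficit node
        d, P = shortest_paths(k, l, pi)   # lengths c_ij - pi(i) + pi(j)
        for i in d:
            pi[i] -= d[i]                 # pi := pi - d keeps lengths >= 0
        delta = min(e[k], -e[l], min(rcap for (_, _, rcap) in P))
        for (i, j, _) in P:
            x[(i, j)] = x.get((i, j), 0) + delta   # forward residual arc
        e[k] -= delta
        e[l] += delta
        if e[k] == 0: S.discard(k)
        if e[l] == 0: T.discard(l)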
. U)). each 7:(j) becomes 7t(j) - d(j)) and then solves a maximum flow problem to send the reduced maximum possible flow from the source to the sink using only arcs with zero that the excess of cost.7. Primal-Dual and Out-of-Kilter Algorithms The primal-dual algorithm is very similar to the successive shortest path problem. where S(n. The basic if Cj. out-of-kilter algorithm satisfies only the mass balance cortstraints and may idea is violate the dual feasibility conditions to drive the flow on an arc (i. C) and M(n. the algorithm has an overall complexity of 0(min (nU S(n. U) respectively denote the solution times of shortest p>ath and maximum flow algorithms. > 0. comes closer to satisfying the mass balance However. and the flow bound restrictior«. To explain the primal-dual algorithm. In Section 5. This bound is better than that of the successive shortest path algorithm. the algorithm incurs the additional expense of solving a maximum flow problem at each iteration. m. The flow observation follows from the fact that after we have solved the maximum problem. might send flow along many paths. represented by k^:.5. and to permit any flow between and Uj: if Cj: The kilter number. m. a special case of the minimum cost flow problem for which U = 1.107 time for the assignment problem. Thus. as before. and also assures that the node potential of the sink latter strictly decreases. These observations give a bound of min {nU. j) to Uj. m. These algorithnns modify the flow and potentials so that the flow at each step constraints. we will develop a polynomial time algorithm for the minimum cost flow problem using the successive shortest path algorithm in conjunction with scaling. of course. nC M(n. we could The just as well have violated other constraints at intermediate steps. the adding nodes and arcs as in the assumption A5. it except that instead of sending flow on only one path during an iteration. coi^equently. . drive the flow to zero if = 0. in the next ^ 1. but. the network contains no path from the source to the sink in the residual network consisting iteration d(t) entirely of arcs with zero reduced costs. Cj: < 0.e. we transform the minimum cost flow problem into a single-source and single-sink problem (possibly by At every iteration. C). 5. m. The algorithm guarantees some node strictly decreases at each iteration. nC} on the number of iterations since the magnitude of each node potential is bounded by nC.1). but that mass balance constraints. The successive shortest path and primal-dual algorithnw maintain a solution that satisfies the dual feasibility conditions violates the and the flow bound iteratively constraints. primal-dual algorithm solves a shortest path problem from the source to update the node potentials (i.

For example, for an arc (i, j) with c̄_ij > 0, k_ij = |x_ij|, and for an arc (i, j) with c̄_ij < 0, k_ij = |u_ij - x_ij|. An arc with k_ij = 0 is said to be in-kilter. At each iteration, the out-of-kilter algorithm reduces the kilter number of at least one arc; it terminates when all arcs are in-kilter. Suppose the kilter number of an arc (i, j) would decrease by increasing flow on the arc. Then the algorithm would obtain a shortest path P from node j to node i in the residual network and augment at least one unit of flow in the cycle P ∪ {(i, j)}. The proof of the correctness of this algorithm is similar to, but more detailed than, that of the successive shortest path algorithm.

5.6 Network Simplex Algorithm

The network simplex algorithm for the minimum cost flow problem is a specialization of the bounded variable primal simplex algorithm for linear programming. The special structure of the minimum cost flow problem offers several benefits, particularly, streamlining of the simplex computations and eliminating the need to explicitly maintain the simplex tableau. The tree structure of the basis (see Section 2.3) permits the algorithm to achieve these efficiencies. The advances made in the last two decades for maintaining and updating the tree structure efficiently have substantially improved the speed of the algorithm. Through extensive empirical testing, researchers have also improved the performance of the simplex algorithm by developing various heuristic rules for identifying entering variables. Though no version of the primal network simplex algorithm is known to run in polynomial time, its best implementations are empirically comparable to or better than other minimum cost flow algorithms.

In this section, we describe the network simplex algorithm in detail. We first define the concept of a basis structure and describe a data structure to store and to manipulate the basis, which is a spanning tree. We then show how to compute arc flows and node potentials for any basis structure. We next discuss how to perform various simplex operations such as the selection of entering arcs, leaving arcs and pivots using the tree data structure. Finally, we show how to guarantee the finiteness of the network simplex algorithm.

The network simplex algorithm maintains a basic feasible solution at each stage. A basic solution of the minimum cost flow problem is defined by a triple (B, L, U); B, L and U partition the arc set A. The set B denotes the set of basic arcs, i.e., arcs of a spanning tree, and L and U respectively denote the sets of nonbasic arcs at their lower and upper bounds. We refer to the triple (B, L, U) as a basis structure. A basis structure (B, L, U) is called feasible if, by setting x_ij = 0 for each (i, j) ∈ L and setting x_ij = u_ij for each (i, j) ∈ U, the problem has a feasible solution satisfying (5.1b) and (5.1c). A feasible basis structure (B, L, U) is called an optimum basis structure if it is possible to obtain a set of node potentials π so that the reduced costs defined by c̄_ij = c_ij - π(i) + π(j) satisfy the following optimality conditions:

c̄_ij = 0, for each (i, j) ∈ B,    (5.9)

c̄_ij ≥ 0, for each (i, j) ∈ L,    (5.10)

c̄_ij ≤ 0, for each (i, j) ∈ U.    (5.11)

These optimality conditions have a nice economic interpretation. We shall see a little later that if π(1) = 0, then equations (5.9) imply that -π(j) denotes the length of the tree path in B from node 1 to node j. The condition (5.10) implies that sending flow on a nonbasic arc in L is not profitable: c̄_ij for a nonbasic arc (i, j) in L denotes the change in the cost of flow achieved by sending one unit of flow through the tree path from node 1 to node i, through the arc (i, j), and then returning the flow along the tree path from node j to node 1. The condition (5.11) has a similar interpretation.

The network simplex algorithm maintains a feasible basis structure at each iteration and successively improves the basis structure until it becomes an optimum basis structure. The following algorithmic description specifies the essential steps of the procedure.

algorithm NETWORK SIMPLEX;
begin
    determine an initial basic feasible flow x and the corresponding basis structure (B, L, U);
    compute node potentials for this basis structure;
    while some arc violates the optimality conditions do
    begin
        select an entering arc (k, l) violating the optimality conditions;
        add arc (k, l) to the spanning tree corresponding to the basis, forming a cycle, and augment the maximum possible flow in this cycle;
        determine the leaving arc (p, q);
        perform a basis exchange and update the node potentials;
    end;
end;

In the following discussion, we describe the various steps performed by the network simplex algorithm in greater detail.

Obtaining an Initial Basis Structure

Our connectedness assumption A5.2 provides one way of obtaining an initial basic feasible solution. We have assumed that for every node j ∈ N - {1}, the network contains arcs (1, j) and (j, 1) with sufficiently large costs and capacities. The initial basis B includes the arc (1, j) with flow -b(j) if b(j) ≤ 0, and the arc (j, 1) with flow b(j) if b(j) > 0. The set L consists of the remaining arcs, and the set U is empty.

Maintaining the Tree Structure

The specialized network simplex algorithm is possible because of the spanning tree property of the basis. The algorithm requires the tree to be represented so that the simplex algorithm can perform operations efficiently and update the representation quickly when the basis changes. We next describe one such tree representation. We consider the tree as "hanging" from a specially designated node, called the root. We assume that node 1 is the root node. See Figure 5.1 for an example of the tree. We associate three indices with each node i in the tree: a predecessor index, pred(i); a depth index, depth(i); and a thread index, thread(i). Each node has a unique path connecting it to the root.

The predecessor index stores the first node in that path (other than node i itself), and the depth index stores the number of arcs in the path. For the root node these indices are zero. Figure 5.1 shows an example of these indices. Note that by iteratively using the predecessor indices, we can enumerate the path from any node to the root node. We say that pred(i) is the predecessor of node i, and that i is a successor of node pred(i). The descendants of a node i consist of the node i itself, its successors, successors of its successors, and so on. For example, in Figure 5.1 the node set {5, 6, 7, 8, 9} contains the descendants of node 5. A node with no successors is called a leaf node. In Figure 5.1, nodes 4, 7, 8, and 9 are leaf nodes.

The thread indices define a traversal of the tree, a sequence of nodes that walks or threads its way through the nodes of the tree, starting at the root and visiting nodes in a "top to bottom" and "left to right" order, and then finally returning to the root. The thread indices can be formed by performing a depth first search of the tree, as described in Section 1.5, and setting the thread of a node to be the node encountered after the node itself in this depth first search. For our example, this sequence would read 1-2-5-6-8-9-7-3-4-1 (see the dotted lines in Figure 5.1). For each node i, thread(i) specifies the next node in the traversal visited after node i. This traversal satisfies the following two properties: (i) the predecessor of each node appears in the sequence before the node itself, and (ii) the descendants of any node are consecutive elements in the traversal.

The thread indices provide a particularly convenient means for visiting (or finding) all descendants of a node i: we simply follow the thread from node i, recording the nodes visited, until we reach a node whose depth is no larger than the depth of node i. For example, starting at node 5, we visit nodes 6, 8, 9, and 7 in order, which are the descendants of node 5, and then visit node 3. Since node 3's depth equals that of node 5, we know that we have left the "descendant tree" lying below node 5. As we will see, finding the descendant tree of a node efficiently adds significantly to the efficiency of the simplex method.

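This visiting rule translates directly into code; a sketch, with thread and depth as dictionaries indexed by node:

    # Sketch: enumerate the descendants of node i using the thread and
    # depth indices, per the two traversal properties above.
    def descendants(i, thread, depth):
        desc = [i]
        j = thread[i]
        while depth[j] > depth[i]:   # still inside the subtree below i
            desc.append(j)
            j = thread[j]
        return desc

The running time is proportional to the number of descendants visited, which is what makes this operation cheap in practice.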
The simplex method has two basic steps: (i) determining the node potentials of a given basis structure, and (ii) computing the arc flows for a given basis structure. We now describe how to perform these steps efficiently using the tree indices.

Computing Node Potentials and Flows for a Given Basis Structure

We first consider the problem of computing the node potentials π for a given basis structure (B, L, U). We assume that π(1) = 0. Note that the value of one node potential can be set arbitrarily, since one constraint in (5.1b) is redundant. We compute the remaining node potentials using the conditions that c̄_ij = 0 for each arc (i, j) in B. These conditions can alternatively be stated as

π(j) = π(i) - c_ij, for every arc (i, j) ∈ B.    (5.12)

The basic idea is to start at node 1 and fan out along the tree arcs, using the thread indices to compute the other node potentials. The traversal assures that whenever this fanning out procedure visits node j, it has already evaluated the potential of its predecessor, say node i; hence, the procedure can compute π(j) using (5.12). The thread indices allow us to compute all node potentials in O(n) time using the following method.

procedure COMPUTE POTENTIALS;
begin
    π(1) := 0;
    j := thread(1);
    while j ≠ 1 do
    begin
        i := pred(j);
        if (i, j) ∈ A then π(j) := π(i) - c_ij;
        if (j, i) ∈ A then π(j) := π(i) + c_ji;
        j := thread(j);
    end;
end;

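In Python, the same computation might read as follows; the containers pred, thread and cost and the basic arc set B are our own naming, and we assume exactly one of (i, j) and (j, i) belongs to B.

    # Sketch of procedure COMPUTE POTENTIALS: fan out along the thread.
    def compute_potentials(thread, pred, cost, B):
        pi = {1: 0}
        j = thread[1]
        while j != 1:
            i = pred[j]
            if (i, j) in B:
                pi[j] = pi[i] - cost[(i, j)]   # from c_ij - pi(i) + pi(j) = 0
            else:
                pi[j] = pi[i] + cost[(j, i)]
            j = thread[j]
        return pi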
A similar procedure will permit us to compute the flows on the basic arcs for a given basis structure (B, L, U). We proceed, however, in the reverse order: start at the leaf nodes and move in toward the root using the predecessor indices, computing flows on the arcs encountered along the way. The following procedure accomplishes this task.

procedure COMPUTE FLOWS;
begin
    e(i) := b(i) for all i ∈ N;
    for each (i, j) ∈ U do
        set x_ij := u_ij, subtract u_ij from e(i) and add u_ij to e(j);
    let T := B;
    while T ≠ {1} do
    begin
        select a leaf node j in the subtree T;
        i := pred(j);
        if (i, j) ∈ T then x_ij := -e(j)
        else x_ji := e(j);
        add e(j) to e(i);
        delete node j and the arc incident to it from T;
    end;
end;

One way of identifying leaf nodes in T is to select nodes in the reverse order of the thread indices. A simple procedure completes this task in O(n) time: push all the nodes into a stack in order of their appearance on the thread, and then take them out from the top one at a time. Note that in the thread traversal, each node appears prior to its descendants. Hence, the reverse thread traversal examines each node after examining its descendants.

The arcs in the set U must carry flow equal to their capacity. Thus, we set x_ij = u_ij for these arcs. This assignment creates an additional demand of u_ij units at node i and makes the same amount available at node j; this effect of setting x_ij = u_ij explains the initial adjustments in the supply/demand of the nodes. The manner for updating e(j) implies that each e(j) represents the sum of the adjusted supply/demand of the nodes in the subtree hanging from node j. Since this subtree is connected to the rest of the tree only by the arc (i, j) (or (j, i)), this arc must carry -e(j) (or e(j)) units of flow to satisfy the adjusted supply/demand of the nodes in the subtree.

The procedure Compute Flows essentially solves the system of equations Bx = b, in which B represents the columns in the node-arc incidence matrix N corresponding to the spanning tree T. Since B is a lower triangular matrix (see Theorem 2.6 in Section 2.3), it is possible to solve these equations by forward substitution, which is precisely

what the algorithm does. Similarly, the procedure Compute Potentials solves the system of equations πB = c by back substitution.

Entering Arc

Two types of arcs are eligible to enter the basis: any nonbasic arc at its lower bound with a negative reduced cost, or any nonbasic arc at its upper bound with a positive reduced cost. These arcs violate condition (5.10) or (5.11). The method used for selecting an entering arc among these eligible arcs has a major effect on the performance of the simplex algorithm. An implementation that selects an arc that violates the optimality condition the most, i.e., has the largest value of |c̄_ij| among such arcs, might require the fewest number of iterations in practice, but it must examine each arc at each iteration, which is very time-consuming. On the other hand, examining the arc list cyclically and selecting the first arc that violates the optimality condition would quickly find the entering arc, but might require a relatively large number of iterations due to the poor arc choice. One of the most successful implementations uses a candidate list approach that strikes an effective compromise between these two strategies. This approach also offers sufficient flexibility for fine tuning to special problem classes.

The algorithm maintains a candidate list of arcs violating the optimality conditions, selecting arcs in a two-phase procedure consisting of major iterations and minor iterations. In a major iteration, we construct the candidate list. We examine arcs emanating from nodes, one node at a time, adding to the candidate list the arcs emanating from node i (if any) that violate the optimality condition. We repeat this selection process for nodes i+1, i+2, ..., until either we have examined all nodes or the list has reached its maximum allowable size. The next major iteration begins with the node where the previous major iteration ended. In other words, the algorithm examines the nodes cyclically as it adds arcs emanating from them to the candidate list.

Once the algorithm has formed the candidate list in a major iteration, it performs minor iterations, scanning all candidate arcs and choosing a nonbasic arc from this list that violates the optimality condition the most to enter the basis. As we scan the arcs, we update the candidate list by removing those arcs that no longer violate the optimality conditions. Once the list becomes empty, or once we have reached a specified limit on the number of minor iterations to be performed at each major iteration, we rebuild the list with another major iteration.

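The candidate list strategy can be sketched as follows. The sketch simplifies the bookkeeping (it scans a flat arc list rather than arcs grouped by node), and reduced_cost and max_list_size are assumed parameters of our own naming.

    # Sketch of candidate-list selection of the entering arc.
    def violates(a, L, U, reduced_cost):
        return (a in L and reduced_cost(a) < 0) or \
               (a in U and reduced_cost(a) > 0)

    def select_entering_arc(L, U, reduced_cost, candidates, max_list_size):
        # minor iteration: drop arcs that no longer violate the conditions
        candidates[:] = [a for a in candidates
                         if violates(a, L, U, reduced_cost)]
        if not candidates:
            # major iteration: rebuild the list up to its size limit
            for a in list(L) + list(U):
                if violates(a, L, U, reduced_cost):
                    candidates.append(a)
                    if len(candidates) >= max_list_size:
                        break
        if not candidates:
            return None                    # the basis is optimal
        # choose the largest violation among the candidates
        return max(candidates, key=lambda a: abs(reduced_cost(a)))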
Leaving Arc

Suppose we select the arc (k, l) as the entering arc. The addition of this arc to the basis B forms exactly one (undirected) cycle W, which is sometimes referred to as the pivot cycle. We define the orientation of W as the same as that of (k, l) if (k, l) ∈ L, and opposite to the orientation of (k, l) if (k, l) ∈ U. Let W̄ and W̲ respectively denote the sets of arcs in W along and opposite to the cycle's orientation. Sending additional flow around the pivot cycle W in the direction of its orientation strictly decreases the cost of the current solution. The maximum flow change on an arc (i, j) ∈ W that satisfies the flow bound constraints is

δ_ij = u_ij - x_ij if (i, j) ∈ W̄, and δ_ij = x_ij if (i, j) ∈ W̲.

We send δ = min {δ_ij : (i, j) ∈ W} units of flow around W, and we select an arc (p, q) with δ_pq = δ as the leaving arc.

The crucial operation in this step is to identify the cycle W. If P(i) denotes the unique path in the basis from any node i to the root node, then this cycle consists of the arcs {(k, l)} ∪ P(k) ∪ P(l) - (P(k) ∩ P(l)). In other words, W consists of the arc (k, l) and the disjoint portions of P(k) and P(l). Using the predecessor indices alone permits us to identify the cycle W as follows. Start at node k and, using the predecessor indices, trace the path from this node to the root and label all the nodes in this path. Repeat the same operation for node l until we encounter a node already labeled, say node w. Node w, which we might refer to as the apex, is the first common ancestor of nodes k and l. The cycle W contains the portions of the paths P(k) and P(l) up to node w, along with the arc (k, l). This method is efficient, but it has the drawback of backtracking along some arcs that are not in W, namely, those in the portion of the path P(k) lying between the apex w and the root. The simultaneous use of the depth and predecessor indices, as indicated in the following procedure, eliminates this extra work.

procedure IDENTIFY CYCLE;
begin
    i := k and j := l;
    while i ≠ j do
    begin
        if depth(i) > depth(j) then i := pred(i)
        else if depth(j) > depth(i) then j := pred(j)
        else i := pred(i) and j := pred(j);
    end;
    w := i;
end;

A simple modification of this procedure permits it to determine the flow δ that can be augmented along W as it determines the first common ancestor w of nodes k and l. Using the predecessor indices to again traverse the cycle W, the algorithm can then update the flows on the arcs. The entire flow change operation takes O(n) time in the worst case, but it typically examines only a small subset of the nodes.

Basis Exchange

In the terminology of the simplex method, a basis exchange is a pivot operation. If δ = 0, then the pivot is said to be degenerate; otherwise it is nondegenerate. A basis is called degenerate if the flow on some basic arc equals its lower or upper bound, and nondegenerate otherwise. Observe that a degenerate pivot occurs only in a degenerate basis.

Each time the method exchanges an entering arc (k, l) for a leaving arc (p, q), it must update the basis structure. If the leaving arc is the same as the entering arc, which would happen when δ = u_kl, the basis does not change. In this instance, the arc (k, l) merely moves from the set L to the set U, or vice versa. If the leaving arc differs from the entering arc, then more extensive changes are needed. In this instance, the arc (p, q) becomes a nonbasic arc at its lower or upper bound, depending upon whether x_pq = 0 or x_pq = u_pq. Adding the arc (k, l) to, and deleting the arc (p, q) from, the previous basis yields a new basis that is again a spanning tree. The node potentials also change and can be updated as follows. The deletion of the arc (p, q) from the previous basis partitions the set of nodes into two subtrees: one, T1, containing the root node, and the other, T2, not containing the root node. Note that the subtree T2 hangs from node p or node q. The arc (k, l) has one endpoint in T1 and the other in T2. As is easy to verify, the conditions π(1) = 0 and c_ij - π(i) + π(j) = 0 for all arcs in the new basis imply that the potentials of the nodes in the subtree T1 remain unchanged, and the potentials of the nodes in the subtree T2 change by a constant amount. If k ∈ T1 and l ∈ T2, then all the node potentials in T2 change by -c̄_kl; if l ∈ T1 and k ∈ T2, they change by the amount c̄_kl. The following method, using the thread and depth indices, updates the node potentials quickly.

procedure UPDATE POTENTIALS;
begin
    if q ∈ T2 then y := q else y := p;
    if k ∈ T1 then change := -c̄_kl else change := c̄_kl;
    π(y) := π(y) + change;
    z := thread(y);
    while depth(z) > depth(y) do
    begin
        π(z) := π(z) + change;
        z := thread(z);
    end;
end;

The final step in the basis exchange is to update various indices. This step is rather involved, and we refer the reader to the reference material cited in Section 6.4 for the details. We do note, however, that it is possible to update the tree indices in O(n) time.

Termination

The network simplex algorithm, as just described, moves from one basis structure to another until it obtains a basis structure that satisfies the optimality conditions (5.9)-(5.11). It is easy to show that the algorithm terminates in a finite number of steps if each pivot operation is nondegenerate. Recall that |c̄_kl| represents the net decrease in the cost per unit flow sent around the cycle W. During a nondegenerate pivot (in which δ > 0), the new basis structure has a cost that is δ|c̄_kl| units lower than the previous basis structure. Since there are a finite number of basis structures, and every basis structure has a unique associated cost, the network simplex algorithm will terminate finitely assuming nondegeneracy. Degenerate pivots, however, pose theoretical difficulties that we address next.

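Before turning to degeneracy, we note that the potential update translates directly into code. A sketch in our own notation, where y is the root of the subtree T2 and k_in_T1 records whether the entering arc's tail k lies in T1:

    # Sketch of procedure UPDATE POTENTIALS: only the potentials in T2
    # change, all by the same constant; cbar_kl is the reduced cost of
    # the entering arc (k, l).
    def update_potentials(y, k_in_T1, cbar_kl, pi, thread, depth):
        change = -cbar_kl if k_in_T1 else cbar_kl
        pi[y] += change
        z = thread[y]
        while depth[z] > depth[y]:   # exactly the descendants of y form T2
            pi[z] += change
            z = thread[z]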
Strongly Feasible Bases

The network simplex algorithm does not necessarily terminate in a finite number of iterations unless we impose an additional restriction on the choice of entering and leaving arcs. Researchers have constructed very small network examples for which poor choices lead to cycling, i.e., an infinite repetitive sequence of degenerate pivots. Degeneracy in network problems is not only a theoretical issue, but a practical one as well: computational studies have shown that as many as 90% of the pivot operations in common networks can be degenerate. As we show next, by maintaining a special type of basis, called a strongly feasible basis, the simplex algorithm terminates finitely; moreover, it runs faster in practice as well.

Let (B, L, U) be a basis structure of the minimum cost flow problem with integral data. As earlier, we conceive of a basis tree as a tree hanging from the root node. The tree arcs either are upward pointing (towards the root) or are downward pointing (away from the root). We say that a basis structure (B, L, U) is strongly feasible if we can send a positive amount of flow from any node in the tree to the root along arcs in the tree without violating any of the flow bounds. See Figure 5.2 for an example of a strongly feasible basis. Observe that this definition implies that no upward pointing arc can be at its upper bound and no downward pointing arc can be at its lower bound.

The perturbation technique is a well-known method for avoiding cycling in the simplex algorithm for linear programming. This technique slightly perturbs the right-hand-side vector so that every feasible basis is nondegenerate and so that it is easy to convert an optimum solution of the perturbed problem to an optimum solution of the original problem. We show that a particular perturbation technique for the network simplex method is equivalent to the combinatorial rule known as the strongly feasible basis technique.

The minimum cost flow problem can be perturbed by changing the supply/demand vector b to b+ε. We say that ε = (ε_1, ε_2, ..., ε_n) is a feasible perturbation if it satisfies the following conditions:

(i) ε_i > 0 for all i = 2, 3, ..., n;

(ii) Σ_{i=2}^n ε_i < 1; and

One possible choice for a feasible perturbation is εᵢ = 1/n for i = 2, ..., n (and thus ε₁ = -(n-1)/n). Another choice is εᵢ = αⁱ for i = 2, ..., n, with α chosen as a very small positive number.

Theorem 5.2. For any basis structure (B, L, U) of the minimum cost flow problem, the following statements are equivalent:

(i) (B, L, U) is strongly feasible.

(ii) No upward pointing arc of the basis is at its upper bound and no downward pointing arc of the basis is at its lower bound.

(iii) (B, L, U) is feasible if we replace b by b + ε, for any feasible perturbation ε.

(iv) (B, L, U) is feasible if we replace b by b + ε, for the perturbation ε = (-(n-1)/n, 1/n, ..., 1/n).

Proof. (i) ⇒ (ii). Suppose an upward pointing arc (i, j) is at its upper bound. Then node i cannot send any flow to the root, violating the definition of a strongly feasible basis. For the same reason, no downward pointing arc can be at its lower bound.

(ii) ⇒ (iii). Suppose that (ii) is true. The procedure Compute-Flows, given earlier in this section, implies that a perturbation of b by ε changes the flow on the basic arcs in the following manner:

1. If (i, j) is an upward pointing arc of tree B and D(i) is the set of descendants of node i, then the perturbation increases the flow in arc (i, j) by Σ_{k ∈ D(i)} εₖ. Since 0 < Σ_{k ∈ D(i)} εₖ < 1, the resulting flow is nonintegral and thus nonzero.

2. If (i, j) is a downward pointing arc of tree B and D(j) is the set of descendants of node j, then the perturbation decreases the flow in arc (i, j) by Σ_{k ∈ D(j)} εₖ. Since 0 < Σ_{k ∈ D(j)} εₖ < 1, the resulting flow is nonintegral and thus nonzero.

Since the flow on an upward pointing arc is integral and strictly less than its (integral) upper bound, and the perturbation increases this flow by an amount strictly between 0 and 1, the perturbed solution remains feasible. Similar reasoning shows that after we have perturbed the problem, the downward pointing arcs also remain feasible.

(iii) ⇒ (iv). Follows directly, because ε = (-(n-1)/n, 1/n, ..., 1/n) is a feasible perturbation.

(iv) ⇒ (i). Consider the feasible basis structure (B, L, U) of the perturbed problem. Each arc in the basis B has a positive nonintegral flow. Now consider the same basis tree for the original problem (i.e., replace b + ε by b). Flows on the upward pointing arcs decrease, flows on the downward pointing arcs increase, and the resulting flows are integral. Consequently, x_ij < u_ij for the upward pointing arcs and x_ij > 0 for the downward pointing arcs; hence every node can send a positive amount of flow to the root, and (B, L, U) is strongly feasible for the original problem.
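Condition (ii) of Theorem 5.2 also yields a simple O(n) test for strong feasibility. Below is a minimal Python sketch of such a test; the encoding of tree arcs with a tag indicating whether they point towards the root is an illustrative data layout of ours, not part of the algorithm itself.

def is_strongly_feasible(tree_arcs, x, u):
    # tree_arcs: list of (i, j, direction), with direction 'up' if the arc
    # points towards the root and 'down' otherwise
    for (i, j, direction) in tree_arcs:
        if direction == 'up' and x[i, j] == u[i, j]:
            return False       # an upward pointing arc at its upper bound
        if direction == 'down' and x[i, j] == 0:
            return False       # a downward pointing arc at its lower bound
    return True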

This theorem shows that maintaining a strongly feasible basis is equivalent to applying the ordinary simplex algorithm to the perturbed problem. We could therefore maintain strong feasibility by perturbing b by a suitable perturbation ε; there is, however, no need to actually perform the perturbation. Instead, we can maintain strong feasibility using a "combinatorial rule" that is equivalent to applying the original simplex method after we have imposed the perturbation. Even though this rule permits degenerate pivots, it is guaranteed to converge. This equivalence also implies that both approaches obtain exactly the same sequence of basis structures if they use the same rule to select the entering arcs.

To establish the convergence result, consider the perturbed problem with the perturbation ε = (-(n-1)/n, 1/n, ..., 1/n). With this perturbation, the flow on every arc is a multiple of 1/n. Consequently, every pivot operation augments at least 1/n units of flow and therefore decreases the objective function value by at least 1/n units. Since mCU is an upper bound on the objective function value of the starting solution and zero is a lower bound on the minimum objective function value, the algorithm terminates in at most nmCU iterations. As a corollary, any implementation of the simplex algorithm that maintains a strongly feasible basis performs at most nmCU pivots and, hence, runs in pseudopolynomial time.

Combinatorial Version of Perturbation

The network simplex algorithm starts with a strongly feasible basis; the method described earlier to construct the initial basis always gives such a basis. The algorithm then selects the leaving arc in a degenerate pivot carefully, so that the next basis is also strongly feasible. Figure 5.2 will illustrate our discussion of this method.

Suppose that the entering arc (k, l) is at its lower bound and that the apex w is the common ancestor of nodes k and l. Let W be the cycle formed by adding arc (k, l) to the basis tree, and define the orientation of the cycle as the same as that of arc (k, l). After updating the flow, the algorithm identifies the blocking arcs, i.e., those arcs (i, j) in W that satisfy δ_ij = δ. If the blocking arc is unique, then it leaves the basis. If the cycle contains more than one blocking arc, then the next basis will be degenerate, i.e., some basic arcs will be at their lower or upper bounds. In this case, the algorithm selects the leaving arc in accordance with the following rule:

Combinatorial Pivot Rule. When introducing an arc into the basis for the network simplex method, select as the leaving arc the last blocking arc, say arc (p, q), encountered in traversing the pivot cycle W along its orientation starting at the apex w.

We now show that this rule guarantees that the next basis is strongly feasible. Let W₁ be the segment of the cycle W between the apex w and arc (p, q) when we traverse the cycle along its orientation, and let W₂ = W - W₁ - {(p, q)}. Define the orientations of the segments W₁ and W₂ to be compatible with the orientation of W. See Figure 5.2 for an illustration of the segments W₁ and W₂ in our example. To show that the next basis is strongly feasible, we show that in this basis every node in the cycle W can send positive flow to the root node.

First consider the nodes contained in the segment W₂. Since arc (p, q) is the last blocking arc encountered in traversing W along its orientation, no arc in W₂ is blocking; hence, every node contained in the segment W₂ can send positive flow to the root opposite to the orientation of W₂ and via node w.

Now consider the nodes contained in the segment W₁. We distinguish two cases. If the current pivot was a nondegenerate pivot, then it augmented a positive amount of flow along the arcs in W₁; hence, every node in W₁ can augment flow back to the root opposite to the orientation of W₁ and via node w. If the current pivot was a degenerate pivot, then W₁ must be contained in the segment of W between node w and node k, because by the property of strong feasibility every node on the path from node l to node w could send a positive amount of flow to the root before the pivot and, thus, no arc on this path can be a blocking arc in a degenerate pivot. Now observe that before the pivot every node in W₁ could send positive flow to the root and, since a degenerate pivot does not change any flow values, every node in W₁ must be able to send positive flow to the root after the pivot as well. This conclusion completes the proof that the next basis is strongly feasible.
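The leaving arc selection is easy to state in code. In the following minimal Python sketch, the pivot cycle W is given as a list of its arcs in traversal order starting at the apex w, and delta[arc] denotes the maximum flow change that the arc permits; the data layout is hypothetical, chosen only for illustration.

def select_leaving_arc(cycle_arcs, delta):
    # The pivot sends theta = min delta around the cycle; the leaving arc is
    # the LAST blocking arc met when traversing W from the apex.
    theta = min(delta[arc] for arc in cycle_arcs)
    leaving = None
    for arc in cycle_arcs:
        if delta[arc] == theta:
            leaving = arc
    return theta, leaving

When theta = 0 the pivot is degenerate, but the selection rule applies unchanged.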

We next study the effect of the basis change on the node potentials during a degenerate pivot. Since the entering arc (k, l) is at its lower bound, c̄_kl < 0. The leaving arc belongs to the path from node k to node w; hence, node k lies in the subtree T₂ and the potentials of all nodes in T₂ change by the amount c̄_kl < 0. Consequently, this degenerate pivot strictly decreases the sum of all node potentials (which, by our prior assumptions, is integral). Since the sum of all node potentials is bounded from below, the number of successive degenerate pivots is finite.

So far we have assumed that the entering arc is at its lower bound. If the entering arc (k, l) is at its upper bound, then we define the orientation of the cycle W as opposite to the orientation of arc (k, l). The criterion used to select the leaving arc remains unchanged: the leaving arc is the last blocking arc encountered in traversing W along its orientation starting at node w. In this case, node l is contained in the subtree T₂ and, thus, after the pivot the potentials of all nodes in T₂ change by the amount -c̄_kl < 0; consequently, the pivot again decreases the sum of the node potentials.

Complexity Results

The strongly feasible basis technique implies some nice theoretical results about the network simplex algorithm implemented using Dantzig's pivot rule, i.e., pivoting in the arc that most violates the optimality conditions (that is, the arc (k, l) with the largest value of |c̄_kl| among all arcs that violate the optimality conditions). This technique also yields polynomial time simplex algorithms for the shortest path and assignment problems.

We have already shown that any version of the network simplex algorithm that maintains a strongly feasible basis performs O(nmCU) pivots. Using Dantzig's pivot rule and geometric improvement arguments, we can reduce the number of pivots to O(nmU log H), with H defined as H = mCU. As earlier, we consider the perturbed problem with the perturbation ε = (-(n-1)/n, 1/n, ..., 1/n). Let z^k denote the objective function value of the perturbed minimum cost flow problem at the k-th iteration of the simplex algorithm, let x denote the current flow, and let (B, L, U) denote the current basis structure. Let Δ > 0 denote the maximum violation of the optimality condition of any nonbasic arc. If the algorithm next pivots in a nonbasic arc corresponding to the maximum violation, then the objective function value decreases by at least Δ/n units. Hence,

z^k - z^(k+1) ≥ Δ/n.    (5.13)

We now need an upper bound on the total possible improvement in the objective function after the k-th iteration.

Figure 5.2. A strongly feasible basis. The figure shows the apex w and the flows and capacities, represented as (x_ij, u_ij). The entering arc is (9, 10); the blocking arcs are (2, 3) and (7, 5); and the leaving arc is (7, 5). This pivot is a degenerate pivot. The segments W₁ and W₂ are as shown.

It is easy to show that

Σ_{(i,j) ∈ A} c_ij x_ij = Σ_{(i,j) ∈ A} c̄_ij x_ij + Σ_{i ∈ N} π(i) b(i).

Since the rightmost term in this expression is a constant for fixed values of the node potentials, the total improvement with respect to the objective function Σ_{(i,j) ∈ A} c_ij x_ij is equal to the total improvement with respect to the objective function Σ_{(i,j) ∈ A} c̄_ij x_ij. Further, the total improvement with respect to the objective function Σ_{(i,j) ∈ A} c̄_ij x_ij is bounded by the total improvement in the following relaxed problem:

minimize Σ_{(i,j) ∈ A} c̄_ij x_ij    (5.14a)

subject to

0 ≤ x_ij ≤ u_ij, for all (i, j) ∈ A.    (5.14b)

For a given basis structure (B, L, U), we construct an optimum solution of (5.14) by setting x_ij = u_ij for all arcs (i, j) ∈ L with c̄_ij < 0, by setting x_ij = 0 for all arcs (i, j) ∈ U with c̄_ij > 0, and by leaving the flow on the basic arcs unchanged. This readjustment of flow decreases the objective function by at most mΔU. We have thus shown that

z^k - z* ≤ mΔU.    (5.15)

Combining (5.13) and (5.15), we obtain

z^k - z^(k+1) ≥ (z^k - z*)/(nmU).

By Lemma 1.1, if H = mCU, the network simplex algorithm terminates in O(nmU log H) iterations. We summarize our discussion as follows.

Theorem 5.3. The network simplex algorithm that maintains a strongly feasible basis and uses Dantzig's pivot rule performs O(nmU log H) pivots, with H = mCU.

This result gives polynomial time bounds for the shortest path and assignment problems, since both can be formulated as minimum cost flow problems with U = n and U = 1 respectively. In fact, it is possible to modify the algorithm and use the previous arguments to show that the simplex algorithm solves these problems in O(n² log C) pivots and runs in O(nm log C) total time. These results can be found in the references cited in Section 6.4.

5.7 Right-Hand-Side Scaling Algorithm

Scaling techniques are among the most effective algorithmic strategies for designing polynomial time algorithms for the minimum cost flow problem. In this section, we describe an algorithm based on a right-hand-side scaling (RHS-scaling) technique. The next two sections present polynomial time algorithms based upon cost scaling, and upon simultaneous right-hand-side and cost scaling.

The RHS-scaling algorithm is an improved version of the successive shortest path algorithm. The inherent drawback of the successive shortest path algorithm is that augmentations may carry relatively small amounts of flow, resulting in a fairly large number of augmentations in the worst case. The RHS-scaling algorithm guarantees that each augmentation carries sufficiently large flow and thereby reduces the number of augmentations substantially. We shall illustrate RHS-scaling on the uncapacitated minimum cost flow problem, i.e., a problem with u_ij = ∞ for each (i, j) ∈ A. The algorithm can be applied to the capacitated minimum cost flow problem after it has been converted into an uncapacitated problem (as described in Section 2.4).

The algorithm uses the pseudoflow x and the imbalances e(i) as defined in Section 5.4, and it performs a number of scaling phases. Much as we did in the excess scaling algorithm for the maximum flow problem, we let Δ be either 2^⌈log U⌉ or the least power of 2 satisfying either (i) e(i) < 2Δ for all i, or (ii) e(i) > -2Δ for all i, but not necessarily both. Initially, Δ = 2^⌈log U⌉. This definition implies that the sum of the excesses (whose magnitude is equal to the sum of the deficits) is bounded by 2nΔ. Let S(Δ) = {i : e(i) ≥ Δ} and let T(Δ) = {j : e(j) ≤ -Δ}. Then at the beginning of the Δ-scaling phase, either S(2Δ) = ∅ or T(2Δ) = ∅. In the Δ-scaling phase, we perform a number of augmentations, each from a node k ∈ S(Δ) to a node l ∈ T(Δ), and each of these augmentations carries Δ units of flow. The definition of Δ implies that within n augmentations the algorithm will decrease Δ by a factor of at least 2; at this point, we begin a new scaling phase. Hence, within O(log U) scaling phases, Δ < 1.

By the integrality of the data, all imbalances are then zero and the algorithm has found an optimum flow.

The driving force behind this scaling technique is an invariant property (which we will prove later) that each arc flow in the Δ-scaling phase is a multiple of Δ. This flow invariant property and the connectedness assumption (A5.2) ensure that we can always send Δ units of flow from a node in S(Δ) to a node in T(Δ). The following algorithmic description is a formal statement of the RHS-scaling algorithm.

algorithm RHS-SCALING;
begin
  x := 0; e := b;
  let π be the shortest path distances in G(0);
  Δ := 2^⌈log U⌉;
  while the network contains a node with nonzero imbalance do
  begin
    S(Δ) := {i ∈ N : e(i) ≥ Δ};
    T(Δ) := {i ∈ N : e(i) ≤ -Δ};
    while S(Δ) ≠ ∅ and T(Δ) ≠ ∅ do
    begin
      select a node k ∈ S(Δ) and a node l ∈ T(Δ);
      determine the shortest path distances d from node k to all other nodes in the residual network G(x) with respect to the reduced costs c̄_ij;
      let P denote the shortest path from node k to node l;
      update π := π - d;
      augment Δ units of flow along the path P;
      update x, S(Δ) and T(Δ);
    end;
    Δ := Δ/2;
  end;
end;
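For concreteness, the following self-contained Python sketch mirrors this statement for the uncapacitated problem. To stay brief it assumes nonnegative arc costs, so that the initial potentials π = 0 already leave all reduced costs nonnegative (in place of the shortest path computation in G(0) above), assumes no pair of antiparallel arcs, and relies on assumption A5.2; it illustrates the control flow and is not a tuned implementation.

import heapq, math

def rhs_scaling(n, cost, b):
    # nodes 1..n; cost[(i, j)]: nonnegative arc cost, u_ij = infinity;
    # b: supplies summing to zero
    x = {a: 0 for a in cost}
    e = {v: b[v] for v in range(1, n + 1)}
    pi = {v: 0 for v in range(1, n + 1)}
    nbrs = {v: set() for v in range(1, n + 1)}
    for (i, j) in cost:
        nbrs[i].add(j); nbrs[j].add(i)
    res = lambda i, j: math.inf if (i, j) in cost else x[(j, i)]
    c = lambda i, j: cost[(i, j)] if (i, j) in cost else -cost[(j, i)]
    delta = 2 ** math.ceil(math.log2(max(max(abs(s) for s in e.values()), 1)))
    while any(e[v] != 0 for v in e):
        S = [v for v in e if e[v] >= delta]
        T = [v for v in e if e[v] <= -delta]
        while S and T:
            k, l = S[0], T[0]
            # Dijkstra from k with the (nonnegative) reduced costs
            d = {v: math.inf for v in e}
            d[k], pred, heap = 0, {}, [(0, k)]
            while heap:
                dv, v = heapq.heappop(heap)
                if dv > d[v]:
                    continue
                for w in nbrs[v]:
                    if res(v, w) > 0 and dv + c(v, w) - pi[v] + pi[w] < d[w]:
                        d[w] = dv + c(v, w) - pi[v] + pi[w]
                        pred[w] = v
                        heapq.heappush(heap, (d[w], w))
            pi = {v: pi[v] - d[v] for v in e}
            v = l                    # augment delta units along the path
            while v != k:
                p = pred[v]
                if (p, v) in cost:
                    x[(p, v)] += delta
                else:
                    x[(v, p)] -= delta
                v = p
            e[k] -= delta; e[l] += delta
            S = [v for v in e if e[v] >= delta]
            T = [v for v in e if e[v] <= -delta]
        delta //= 2
    return x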

The RHS-scaling algorithm correctly solves the problem because during the Δ-scaling phase it is always able to send Δ units of flow on the shortest path from a node k ∈ S(Δ) to a node l ∈ T(Δ). This fact follows from the following result.

Lemma 5.2. The residual capacities of arcs in the residual network are always integer multiples of Δ.

Proof. We use induction on the number of augmentations and scaling phases. The inductive hypothesis is true initially, since the residual capacities are either 0 or ∞. Each augmentation changes the residual capacities by 0 or Δ units and so preserves the inductive hypothesis; a decrease in the scale factor by a factor of 2 also preserves it. This result implies the conclusion of the lemma.

Let S(n, m, C) denote the time needed to solve a shortest path problem on a network with nonnegative arc lengths.

Theorem 5.4. The RHS-scaling algorithm correctly computes a minimum cost flow and performs O(n log U) augmentations, and consequently solves the minimum cost flow problem in O(n log U · S(n, m, C)) time.

Proof. The RHS-scaling algorithm is a special case of the successive shortest path algorithm and thus terminates with a minimum cost flow. We show that the algorithm performs at most n augmentations per scaling phase; since the algorithm requires 1 + ⌈log U⌉ scaling phases, this fact implies the conclusion of the theorem. At the beginning of the Δ-scaling phase, either S(2Δ) = ∅ or T(2Δ) = ∅. We consider the case when S(2Δ) = ∅; a similar proof applies when T(2Δ) = ∅. In this case, Δ ≤ e(i) < 2Δ for each node i ∈ S(Δ). Each augmentation starts at a node in S(Δ), ends at a node with a deficit, and carries Δ units of flow; therefore, it decreases |S(Δ)| by one. Consequently, each scaling phase can perform at most n augmentations.

Applying the scaling algorithm directly to the capacitated minimum cost flow problem introduces some subtlety, because Lemma 5.2 does not apply for this situation. As we noted previously, one method of solving the capacitated minimum cost flow problem is to first transform the capacitated problem to an uncapacitated one using the technique described in Section 2.4, and then apply the RHS-scaling algorithm to the transformed network. The transformed network contains n+m nodes, and each scaling phase performs at most n+m augmentations. The shortest path problems on the transformed network can be solved (using some clever techniques) in S(n, m, C) time. Consequently, the RHS-scaling algorithm solves the capacitated minimum cost flow problem in O(m log U · S(n, m, C)) time. A recently developed modest variation of the RHS-scaling algorithm solves the capacitated minimum cost flow problem in O(m log n (m + n log n)) time.

Proof.6 when e is 0. e > if x together with some node potentials n satisfy the following C5.: = Y C. Since arc costs are integral. j) at its upper bound.8 for e feasibility conditior« ^ C. 5.8. The follovsdng facts are useful for analysing the cost scaling algorithm. j) X W' ^ 6 ^\\ 0.3. (Primal feasibility) x (e -EHial feasibility) is Cj. The algorithm perfom\s cost scaling phases by repeatedly applying an Improve-Approximation procedure that transforms an e-optimal flow into an e/2-optimal flow. Hence. Clearly.1 the flow is optimum. an arc (i. Any e -optimal feasible flow for E<l/n is an optimum flow. The cost scaling algorithm treats e as a parameter e. These conditions are a relaxation of the original optimality conditions e -optimality conditions permit -e < Cj. the residual network contaii« no negative cost cycle and from Theorem 5. This method is currently the best strongly polynomial-time algorithm for solving the minimum cost flow problem. is Lemma 5. j) at its lower bound and e S is > for an arc (i.. This algorithm can be viewed as a generalization of the preflow-push algorithm for the flow problem. this result implies that (i.^-n£>-l. any feasible flow with zero 1 node potentials satisfies C5. feasible. We The Cjj refer to these conditions as the e -optimality conditions. The e-dual imply that for all any directed cycle W in the residual network. and iteratively obtains e-optimal flows for successively smaller values of Initially e = C.7 C5. A flow x is said to be e -optimal for some conditions. Now consider an e-optimal flow with e < /n.5 and C5. ^ -e for each arc (i. After l+Tlog nCl . Cost Scaling Algorithm We now maximum describe a cost scaling algorithm for the miiumum cost flow problem. Any feasible flow e -optimal for ekC. j) in the residual network G(x). and finally e < 1/n. < for and reduce to C5. i^ C.129 (m + n log n)) time. This algorithm relies on the concept of approximate optimality.8. which a relaxation of the usual optimality conditions.

After 1 + ⌈log nC⌉ cost scaling phases, ε < 1/n and the algorithm terminates with an optimum flow. More formally, we can state the algorithm as follows.

algorithm COST SCALING;
begin
  π := 0; ε := C;
  let x be any feasible flow;
  while ε ≥ 1/n do
  begin
    IMPROVE-APPROXIMATION-I(ε, x, π);
    ε := ε/2;
  end;
  x is an optimum flow for the minimum cost flow problem;
end;

The Improve-Approximation procedure transforms an ε-optimal flow into an ε/2-optimal flow. It does so by (i) first converting the ε-optimal flow into a 0-optimal pseudoflow (a pseudoflow x is called ε-optimal if it satisfies the ε-dual feasibility conditions C5.8), and then (ii) gradually converting the pseudoflow into a flow while always maintaining the ε/2-dual feasibility conditions. We call a node i active if e(i) > 0, and we call an arc (i, j) in the residual network admissible if -ε/2 ≤ c̄_ij < 0. The basic operations are selecting active nodes and pushing flows on admissible arcs; we shall see later that pushing flows on admissible arcs preserves the ε/2-dual feasibility conditions. We also refer to the updating of the potential of a node as a relabel operation; the purpose of a relabel operation is to create new admissible arcs. Recall that r_ij denotes the residual capacity of an arc (i, j) in G(x).

procedure PUSH/RELABEL(i);
begin
  if G(x) contains an admissible arc (i, j) then
    push δ := min{e(i), r_ij} units of flow from node i to node j
  else π(i) := π(i) + ε/2 + min{c̄_ij : (i, j) ∈ A(i) and r_ij > 0};
end;

As in our earlier discussion of preflow-push algorithms for the maximum flow problem, if δ = r_ij then we refer to the push as saturating; otherwise it is nonsaturating.

. at termination. 131 used in the maximuin flow algorithms (i. But since -e/2 S is Cj. j) any value of > 0. ^ -e/2. j) to identify admissible arcs. maintains the condition cj^ t -e/2 for all arc (k. yields an e/2-optimal flow. > then Cjj Xj.APPROXIMATION-I(e. j) in the residual network. For each node i. end. The Improve-Approximation procedure always maintains e /2-optimality of the pseudoflow. while the network contains an active node do begin select an active node i.8 i satisfied for (i. begin if Cjj if x. j) e A(i) > 0) units. PUSH/RELABEL(i). node The current arc is found by sequentially scanning the arc of the The following generic version summarizes its Improve-Approximation procedure essential operations. after we Jt(i) by e/2 + min rj: Cj: : (i. This proof is similar to that of Lemma 4. The correctness of this procedure rests on the iollowing result. At the beginning of the procedure. end. In addition. the procedure preserves e/2-optimality of the pseudoflow throughout and. The algorithm relabels node when Cj. procedure IMPROVE.4.i) in the Therefore. We (j. and at termination yields an e /2-optimal flow. it algorithm adjusts the flows on arcs to obtain an E/2-pseudoflow is a 0-optiCTiaI that the pseudoflow). { By our and fjj rule for increasing potentials. ^ for every arc increaise (i. Lemma 5. := else < then Xj: := uj. Jt). increasing residual network. Cjj > and the condition C5.1. we i. j) might add its reversal i) to the residual network.. the reduced cost of every arc Ji(i) with > still satisfies Cj. the (in fact. maintain a currenl arc which is the current candidate for pushing flow out of list A(i). use induction on the number of push/relabel steps to show algorithm preserves £/2-optimality of the pseudoflow. Proof. Pushing flow on arc (i. compute node imbalances. < (by the criteria of c admissibility).

We next analyze the complexity of the Improve-Approximation procedure. We show that the complexity of the generic version is O(n²m), and then describe a specialized version running in time O(n³). These time bounds are comparable to those of the preflow-push algorithms for the maximum flow problem.

Lemma 5.5. No node potential increases more than 3n times during an execution of the Improve-Approximation procedure.

Proof. Let x be the current ε/2-optimal pseudoflow and let x' be the ε-optimal flow at the end of the previous cost scaling phase. Let π and π' be the node potentials corresponding to the pseudoflow x and the flow x' respectively. It is possible to show, using a variation of the flow decomposition properties discussed in Section 2.1, that for every node v with positive imbalance in x there exists a node w with negative imbalance in x and a path P satisfying the properties that (i) P is a path in G(x), and (ii) its reversal P̄ is an augmenting path with respect to x'. In terms of the residual networks, this fact implies that there exists a sequence of nodes v = v₀, v₁, ..., vₗ = w with the property that P = v₀ - v₁ - ... - vₗ is a path in G(x) and its reversal P̄ = vₗ - vₗ₋₁ - ... - v₀ is a path in G(x').

Applying the ε/2-optimality conditions to the arcs on the path P in G(x), we obtain Σ_{(i,j) ∈ P} c̄_ij ≥ -l(ε/2), i.e.,

π(v) ≤ π(w) + l(ε/2) + Σ_{(i,j) ∈ P} c_ij.    (5.16)

Applying the ε-optimality conditions to the arcs on the path P̄ in G(x'), we obtain

π'(w) ≤ π'(v) + lε + Σ_{(j,i) ∈ P̄} c_ji = π'(v) + lε - Σ_{(i,j) ∈ P} c_ij.    (5.17)

Combining (5.16) and (5.17) gives

π(v) ≤ π'(v) + (π(w) - π'(w)) + (3/2)lε.    (5.18)

Now we use the facts that (i) π(w) = π'(w) (the potential of a node with a negative imbalance does not change, because the algorithm never selects it for a push/relabel step), (ii) l ≤ n, and (iii) each increase in potential increases π(v) by at least ε/2 units. The lemma is now immediate.

Lemma 5.6. The Improve-Approximation procedure performs O(nm) saturating pushes.

Proof. This proof is similar to that of Lemma 4.5 and essentially amounts to showing that between two consecutive saturations of an arc (i, j), the potentials of both the nodes i and j must increase at least once. Since any node potential increases O(n) times, the algorithm saturates any arc O(n) times, resulting in O(nm) total saturating pushes.

To bound the number of nonsaturating pushes, we need one more result. We define the admissible network as the network consisting solely of admissible arcs. The following result is crucial to analysing the complexity of the cost scaling algorithms.

Lemma 5.7. The admissible network is acyclic throughout the cost scaling algorithms.

Proof. We establish this result by an induction argument applied to the number of pushes and relabels. The result is true at the beginning of each cost scaling phase, because the pseudoflow is then 0-optimal and the network contains no admissible arc. We always push flow on an arc (i, j) with c̄_ij < 0; hence, if the algorithm adds its reversal (j, i) to the residual network, then c̄_ji > 0. Thus pushes do not create new admissible arcs and preserve the inductive hypothesis. A relabel operation at node i may create new admissible arcs (i, j), but it also deletes all admissible arcs (k, i): for any arc (k, i), c̄_ki ≥ -ε/2 before the relabel operation and c̄_ki ≥ 0 after it, since the relabel operation increases π(i) by at least ε/2 units. Therefore the algorithm can create no directed cycles.

Lemma 5.8. The Improve-Approximation procedure performs O(n²m) nonsaturating pushes.

Proof (Sketch). Let g(i) be the number of nodes that are reachable from node i in the admissible network, and consider the potential function F = Σ_{i active} g(i). The proof amounts to showing that a relabel operation or a saturating push can increase F by at most n units, and that each nonsaturating push decreases F by at least 1 unit. Since, by Lemmas 5.5 and 5.6, the algorithm performs at most 3n² relabel operations and O(nm) saturating pushes, these observations yield a bound of O(n²m) on the number of nonsaturating pushes.

As in the maximum flow algorithm, the bottleneck operation in the Improve-Approximation procedure is the nonsaturating pushes, which take O(n²m) time. The algorithm takes O(nm) time to perform the saturating pushes, and the same time to scan arcs while identifying admissible arcs. Since the cost scaling algorithm calls Improve-Approximation 1 + ⌈log nC⌉ times, we obtain the following result.

Theorem 5.5. The generic cost scaling algorithm runs in O(n²m log nC) time.

The cost scaling algorithm illustrates an important connection between the maximum flow and the minimum cost flow problems: solving an Improve-Approximation problem is very similar to solving a maximum flow problem. Just as in the generic preflow-push algorithm for the maximum flow problem, the bottleneck operation is the number of nonsaturating pushes. Researchers have suggested improvements based on examining nodes in some specific order, or on using some clever data structures. We describe one such improvement, called the wave algorithm.

The wave algorithm is the same as the Improve-Approximation procedure, but it selects active nodes for the push/relabel step in a specific order. The algorithm exploits the acyclicity of the admissible network. As is well known, the nodes of an acyclic network can be ordered so that for each arc (i, j) in the network, node i precedes node j; it is possible to determine this ordering, called a topological ordering of the nodes, in O(m) time. Observe that pushes do not change the admissible network, since they create no new admissible arcs. The relabel operations, however, may create new admissible arcs and consequently may affect the topological ordering of the nodes.

The wave algorithm examines each node in the topological order and, if the node is active, performs a push/relabel step. When examined in this order, active nodes push flow to higher numbered nodes, which in turn push flow to even higher numbered nodes, and so on. A relabel operation changes the ordering of the nodes, and the algorithm then starts to examine the nodes according to the new topological order. However, if within n consecutive node examinations the algorithm performs no relabel operation, then all active nodes have discharged their excesses and the algorithm has obtained a flow. Since the algorithm requires O(n²) relabel operations, we immediately obtain a bound of O(n³) on the number of node examinations. Each node examination entails at most one nonsaturating push. Consequently, the wave algorithm performs O(n³) nonsaturating pushes per Improve-Approximation.

We now describe a procedure for obtaining a topological order of the nodes after each relabel operation. An initial topological ordering is determined using an O(m) algorithm. Suppose that while examining node i, the algorithm relabels it. Note that after the relabel operation at node i, the network contains no incoming admissible arc at node i (see the proof of Lemma 5.7). We then move node i from its present position in the topological order to the first position. This altered ordering is a topological ordering of the new admissible network, a result that follows from the facts that (i) node i has no incoming admissible arc; (ii) node i now precedes node j for each outgoing admissible arc (i, j); and (iii) the rest of the admissible network does not change, so the previous order is still valid for it. Thus the algorithm maintains an ordered set of nodes (possibly as a doubly linked list), examines nodes in this order and, whenever it relabels a node i, moves it to the first place in this order and again examines nodes in order starting at node i.
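The move-to-front rule can be verified on a tiny example. In the self-contained Python sketch below, the admissible network is represented as a dictionary from each node to its admissible out-neighbours; the data is hypothetical and serves only to illustrate why the altered ordering remains topological.

def is_topological(order, adm):
    pos = {v: k for k, v in enumerate(order)}
    return all(pos[i] < pos[j] for i in adm for j in adm[i])

adm = {1: {2}, 2: {3}, 3: set()}
order = [1, 2, 3]
assert is_topological(order, adm)

# Relabeling node 3 deletes its incoming admissible arcs and may create
# outgoing ones, say (3, 1); moving node 3 to the front restores the order.
adm = {1: {2}, 2: set(), 3: {1}}
order.remove(3)
order.insert(0, 3)                     # order is now [3, 1, 2]
assert is_topological(order, adm)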

We have established the following result.

Theorem 5.6. The cost scaling approach using the wave algorithm as a subroutine solves the minimum cost flow problem in O(n³ log nC) time.

5.9 Double Scaling Algorithm

The double scaling approach combines ideas from both the RHS-scaling and cost scaling algorithms and obtains an improvement not obtained by either algorithm alone. For the sake of simplicity, we shall describe the double scaling algorithm on the uncapacitated transportation network G = (N₁ ∪ N₂, A), with N₁ and N₂ as the sets of supply and demand nodes respectively. A capacitated minimum cost flow problem can be solved by first transforming it into an uncapacitated transportation problem (as described in Section 2.4) and then applying the double scaling algorithm.

The double scaling algorithm is the same as the cost scaling algorithm discussed in the previous section, except that it uses a more efficient version of the Improve-Approximation procedure. The Improve-Approximation procedure in the previous section relied on a "pseudoflow-push" method; a natural alternative would be an augmenting path based method. This approach would send flow from a node with excess to a node with deficit over an admissible path, i.e., a path in which each arc is admissible. A natural implementation of this approach would result in O(nm) augmentations, since each augmentation would saturate at least one arc and, by Lemma 5.6, the algorithm requires O(nm) arc saturations. Thus, this approach does not seem to improve the O(n²m) bound of the generic Improve-Approximation procedure.

We can, however, use ideas from the RHS-scaling algorithm to reduce the number of augmentations to O(n log U) for an uncapacitated problem, by ensuring that each augmentation carries sufficiently large flow.

This suggests an algorithm that does cost scaling in the outer loop and, within each cost scaling phase, performs a number of RHS-scaling phases; we call this algorithm the double scaling algorithm. The double scaling algorithm uses the following Improve-Approximation procedure.

procedure IMPROVE-APPROXIMATION-II(ε, x, π);
begin
  set x := 0 and compute the node imbalances;
  π(j) := π(j) + ε, for all j ∈ N₂;
  Δ := 2^⌈log U⌉;
  while the network contains an active node do
  begin
    S(Δ) := {i ∈ N₁ ∪ N₂ : e(i) ≥ Δ};
    while S(Δ) ≠ ∅ do
    begin (RHS-scaling phase)
      select a node k in S(Δ) and delete it from S(Δ);
      determine an admissible path P from node k to some node l with e(l) < 0;
      augment Δ units of flow on P and update x;
    end;
    Δ := Δ/2;
  end;
end;

We shall describe a method for determining admissible paths after first commenting on the correctness of this procedure. First, observe that c̄_ij ≥ -ε for all (i, j) ∈ A at the beginning of the procedure and, hence, after we set x := 0 and add ε to π(j) for each j ∈ N₂, we obtain an ε/2-optimal (in fact, a 0-optimal) pseudoflow. The procedure always augments flow on admissible arcs and, from Lemma 5.4, this choice preserves the ε/2-optimality of the pseudoflow. Thus, at the termination of the procedure, we obtain an ε/2-optimal flow.

The advantage of the double scaling algorithm, contrasted with solving a shortest path problem in the RHS-scaling algorithm, is that it identifies an augmenting path in O(n) time on average over a sequence of n augmentations. In this respect, the double scaling algorithm appears to be similar to the shortest augmenting path algorithm for the maximum flow problem, which also requires O(n) time on average to find each augmenting path.

Further, as in the RHS-scaling algorithm, the procedure maintains the invariant property that all residual capacities are integer multiples of Δ, and thus each augmentation can carry Δ units of flow.

The algorithm identifies an admissible path by gradually building it up. It maintains a partial admissible path P using predecessor indices, i.e., if (u, v) ∈ P then pred(v) = u. At any point in the algorithm, we perform one of the following two steps, whichever is applicable, at the last node of P, say node i, terminating when the last node has a deficit:

advance(i). If the residual network contains an admissible arc (i, j), then add (i, j) to P. If e(j) < 0, then stop.

retreat(i). If the residual network does not contain an admissible arc (i, j), then update π(i) to π(i) + ε/2 + min{c̄_ij : (i, j) ∈ A(i) and r_ij > 0}. If P has at least one arc, then delete (pred(i), i) from P.

The retreat step relabels (increases the potential of) node i for the purpose of creating new admissible arcs emanating from this node; in the process, the arc (pred(i), i) becomes inadmissible, so we delete it from P. The proof of Lemma 5.4 implies that increasing the node potential in this way maintains the ε/2-optimality of the pseudoflow.
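The advance/retreat mechanism translates directly into code. In the minimal Python sketch below, out, res, cost, pi and e follow an illustrative data layout of ours (reduced costs are cost[(i, j)] - pi[i] + pi[j] over the residual arcs), and termination relies on the acyclicity and feasibility properties discussed in this section.

def find_admissible_path(k, out, res, cost, pi, e, eps):
    rc = lambda i, j: cost[(i, j)] - pi[i] + pi[j]
    path = [k]                              # partial admissible path
    while e[path[-1]] >= 0:
        i = path[-1]
        j = next((w for w in out[i]
                  if res[(i, w)] > 0 and -eps / 2 <= rc(i, w) < 0), None)
        if j is not None:
            path.append(j)                  # advance(i)
        else:                               # retreat(i): relabel node i
            pi[i] += eps / 2 + min(rc(i, w) for w in out[i] if res[(i, w)] > 0)
            if len(path) > 1:
                path.pop()                  # (pred(i), i) is now inadmissible
    return path                             # ends at a node with a deficit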

We next consider the complexity of this implementation of the Improve-Approximation procedure. Each execution of the procedure performs 1 + ⌈log U⌉ RHS-scaling phases. At the beginning of the Δ-scaling phase, S(2Δ) = ∅, i.e., Δ ≤ e(i) < 2Δ for each node i ∈ S(Δ). During the Δ-scaling phase, each augmentation carries Δ units of flow from a node k in S(Δ) to a node l with e(l) < 0; this operation reduces the excess at node k to a value less than Δ and ensures that the excess at node l, if there is any, is less than Δ. Consequently, each augmentation deletes a node from S(Δ), and after at most n augmentations the method begins a new scaling phase. The algorithm thus performs a total of O(n log U) augmentations.

We next count the number of advance steps. Each advance step adds an arc to the partial admissible path, and each retreat step deletes an arc from it. Thus, there are two types of advance steps: (i) those that add arcs to an admissible path on which the algorithm later performs an augmentation, and (ii) those that add arcs that are later cancelled by a retreat step. Since the set of admissible arcs is acyclic (by Lemma 5.7), after at most n advance steps of the first type the algorithm will discover an admissible path and will perform an augmentation. Since the algorithm performs a total of O(n log U) augmentations, the number of advance steps of the first type is at most O(n² log U). The number of advance steps of the second type is at most O(n²), because each retreat step increases a node potential and, by Lemma 5.5, the node potentials increase O(n²) times. The total number of advance steps, therefore, is O(n² log U).

The amount of time needed to identify admissible arcs is O(Σ_{i=1}^{n} |A(i)| · n) = O(nm), since between two consecutive potential increases of a node i, the algorithm examines at most |A(i)| arcs for testing admissibility. We have therefore established the following result.

Theorem 5.7. The double scaling algorithm solves the uncapacitated transportation problem in O((nm + n² log U) log nC) time.

To solve the capacitated minimum cost flow problem, we first transform it into an uncapacitated transportation problem and then apply the double scaling algorithm. We leave it as an exercise for the reader to show that the transformation permits us to use the double scaling algorithm to solve the capacitated minimum cost flow problem in O(nm log U log nC) time. The references describe further modest improvements of the algorithm. For problems that satisfy the similarity assumption, a variant of this algorithm using more sophisticated data structures is currently the fastest polynomial-time algorithm for most classes of the minimum cost flow problem.

5.10 Sensitivity Analysis

The purpose of sensitivity analysis is to determine the changes in the optimum solution of a minimum cost flow problem resulting from changes in the data (the supply/demand vector, or the capacity or cost of any arc). Traditionally, researchers and practitioners have conducted this sensitivity analysis using the primal simplex or dual simplex algorithms. There is, however, a conceptual drawback to this approach. The simplex based approach maintains a basis tree at every iteration and conducts sensitivity analysis by determining the changes in the basis tree precipitated by changes in the data. The basis in the simplex algorithm is often degenerate, however, and consequently changes in the basis tree do not necessarily translate into changes in the solution. Hence, the simplex based approach does not give information about the changes in the solution as the data changes; instead, it tells us about the changes in the basis tree.

We present another approach for performing sensitivity analysis, one that does not share the drawback we have just mentioned. For simplicity, we limit our discussion to a unit change of only a particular type. In a sense, however, this discussion is quite general: it is possible to reduce more complex changes to a sequence of the simple changes we consider. We show that the sensitivity analysis for the minimum cost flow problem essentially reduces to solving shortest path or maximum flow problems.

Let x* denote an optimum solution of a minimum cost flow problem. Let π* be the corresponding node potentials and let c̄_ij = c_ij - π*(i) + π*(j) denote the reduced costs. Further, let d(k, l) denote the shortest distance from node k to node l in the residual network with respect to the arc lengths c̄_ij. At optimality, the reduced costs of all arcs in the residual network are nonnegative. Since, for any directed path P from node k to node l, Σ_{(i,j) ∈ P} c̄_ij = Σ_{(i,j) ∈ P} c_ij - π*(k) + π*(l), the distance d(k, l) equals the shortest distance from node k to node l with respect to the arc lengths c_ij, plus the constant (π*(l) - π*(k)). Hence, we can compute d(k, l) for all pairs of nodes k and l by solving n single-source shortest path problems with nonnegative arc lengths.

Supply/Demand Sensitivity Analysis

We first study changes in the supply/demand vector. Suppose that the supply/demand of a node k becomes b(k) + 1 and the supply/demand of another node l becomes b(l) - 1. (Recall from Section 1.1 that feasibility of the minimum cost flow problem dictates that Σ_{i ∈ N} b(i) = 0; hence, we must change the supply/demand values of two nodes by equal magnitudes, and must increase one value and decrease the other.) Then x* is a pseudoflow for the modified problem; moreover, this vector satisfies the dual feasibility conditions C5.6. Augmenting one unit of flow from node k to node l along the shortest path in the residual network G(x*) converts this pseudoflow into a flow. This augmentation changes the objective function value by d(k, l) units. Lemma 5.1 implies that this flow is optimum for the modified minimum cost flow problem.
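Computing the distances d(k, ·) amounts to a single run of Dijkstra's algorithm in the residual network with the reduced costs as arc lengths, all of which are nonnegative at optimality. A minimal Python sketch, with an illustrative data layout of ours (x, u, pi and cost indexed by the original arcs):

import heapq, math

def residual_distances(k, nodes, arcs, u, x, pi, cost):
    adj = {v: [] for v in nodes}
    for (i, j) in arcs:
        if x[(i, j)] < u[(i, j)]:               # forward residual arc
            adj[i].append((j, cost[(i, j)] - pi[i] + pi[j]))
        if x[(i, j)] > 0:                       # backward residual arc
            adj[j].append((i, -cost[(i, j)] - pi[j] + pi[i]))
    d = {v: math.inf for v in nodes}
    d[k], heap = 0, [(0, k)]
    while heap:
        dv, v = heapq.heappop(heap)
        if dv > d[v]:
            continue
        for w, length in adj[v]:
            if dv + length < d[w]:
                d[w] = dv + length
                heapq.heappush(heap, (d[w], w))
    return d

# After changing b(k) to b(k)+1 and b(l) to b(l)-1, the optimum objective
# value changes by residual_distances(k, ...)[l].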

Arc Capacity Sensitivity Analysis

We next consider a change in an arc capacity. Suppose that the capacity of an arc (p, q) increases by one unit. The flow x* remains feasible for the modified problem and, if c̄_pq ≥ 0, it also satisfies the optimality conditions C5.2-C5.4; hence, it is an optimum flow for the modified problem. If c̄_pq < 0, then condition C5.4 dictates that the flow on the arc must equal its capacity. We satisfy this requirement by increasing the flow on the arc (p, q) by one unit, which produces a pseudoflow with an excess of one unit at node q and a deficit of one unit at node p. We convert the pseudoflow into a flow by augmenting one unit of flow from node q to node p along the shortest path in the residual network, which changes the objective function value by an amount c_pq + d(q, p). This flow is optimum by our observations concerning supply/demand sensitivity analysis.

When the capacity of the arc (p, q) decreases by one unit and the flow on the arc is strictly less than its capacity, then x* remains feasible, and hence optimum, for the modified problem. If, however, the flow on the arc is at its capacity, we decrease the flow by one unit and augment one unit of flow from node p to node q along the shortest path in the residual network. This augmentation changes the objective function value by an amount -c_pq + d(p, q).

The preceding discussion shows how to determine the changes in the optimum solution value due to unit changes of any two supply/demand values, or a unit change in any arc capacity, by solving n single-source shortest path problems. We can, however, obtain useful upper bounds on these changes by solving only two shortest path problems. This observation uses the fact that d(k, l) ≤ d(k, 1) + d(1, l) for all pairs of nodes k and l. Consequently, by determining the shortest path distances from node 1 to all other nodes, and from all other nodes to node 1, we can compute upper bounds on all d(k, l). Recent empirical studies have suggested that these upper bounds are very close to the actual values; often these upper bounds and the actual values are equal, and usually they are within 5% of each other.

Cost Sensitivity Analysis

Finally, we discuss changes in arc costs, which we assume are integral. Suppose that the cost of an arc (p, q) increases by one unit. This change increases the reduced cost of arc (p, q) by one unit as well. If c̄_pq < 0 before the change, then after the change c̄_pq ≤ 0; similarly, if c̄_pq > 0 before the change, then c̄_pq > 0 after the change. In both of these cases, we preserve the optimality conditions. If, however, c̄_pq = 0 before the change and x_pq > 0, then after the change c̄_pq = 1 > 0 and the solution violates the condition C5.2. To satisfy the optimality condition of the arc, we must either reduce the flow on arc (p, q) to zero, or change the potentials so that the reduced cost of arc (p, q) becomes zero.

We first try to reroute the flow x*_pq from node p to node q without violating any of the optimality conditions. We do so by solving a maximum flow problem defined as follows: (i) set the flow on the arc (p, q) to zero, thus creating an excess of x*_pq at node p and a deficit of x*_pq at node q; (ii) define node p as the source node and node q as the sink node; and (iii) send a maximum of x*_pq units from the source to the sink. We permit the maximum flow algorithm, however, to change flows only on arcs with zero reduced costs, since otherwise it would generate a solution that violates the conditions C5.2 and C5.4. Let v° denote the flow sent from node p to node q, and let x° denote the resulting arc flow. If v° = x*_pq, then x° denotes a minimum cost flow of the modified problem; in this case, the optimal objective function values of the original and modified problems are the same.

On the other hand, if v° < x*_pq, then the maximum flow algorithm yields an s-t cutset (X, N-X) with the properties that p ∈ X, q ∈ N-X, and every forward arc in the cutset with zero reduced cost has flow at the arc's capacity (the other forward arcs have positive reduced costs). We then decrease the node potential of every node in N-X by one unit. It is easy to verify by case analysis that this change in node potentials maintains the optimality conditions and, furthermore, decreases the reduced cost of arc (p, q) to zero. Consequently, we can set the flow on arc (p, q) equal to x*_pq - v° and obtain a feasible minimum cost flow. In this case, the objective function value of the modified problem is x*_pq - v° units more than that of the original problem.
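The rerouting step can be sketched with any maximum flow routine restricted to the zero reduced cost arcs. The following Python fragment uses simple breadth-first augmentations for this purpose; it is our illustration of the computation just described, with a hypothetical data layout, not the authors' implementation.

from collections import deque

def reroute_flow(p, q, amount, arcs, u, x, rc):
    # Send up to `amount` units from p to q on residual arcs with rc == 0,
    # by repeated BFS augmentations; updates x in place and returns v0.
    sent = 0
    while sent < amount:
        pred, queue = {p: None}, deque([p])
        while queue and q not in pred:
            node = queue.popleft()
            for (i, j) in arcs:
                if rc[(i, j)] != 0:
                    continue
                if i == node and x[(i, j)] < u[(i, j)] and j not in pred:
                    pred[j] = (i, j, +1); queue.append(j)
                elif j == node and x[(i, j)] > 0 and i not in pred:
                    pred[i] = (i, j, -1); queue.append(i)
        if q not in pred:
            break                # the cutset (X, N - X) case described above
        path, v = [], q          # recover the augmenting path from q back to p
        while pred[v] is not None:
            i, j, s = pred[v]
            path.append((i, j, s))
            v = i if s == +1 else j
        room = min((u[(i, j)] - x[(i, j)]) if s == +1 else x[(i, j)]
                   for (i, j, s) in path)
        step = min(room, amount - sent)
        for (i, j, s) in path:
            x[(i, j)] += s * step
        sent += step
    return sent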

5.11 Assignment Problem

The assignment problem is one of the best-known and most intensively studied special cases of the minimum cost network flow problem. As already indicated in Section 1.1, this problem is defined by a set N₁, say of persons, a set N₂, say of objects (with |N₁| = |N₂| = n), a collection of node pairs A ⊆ N₁ × N₂ representing possible person-to-object assignments, and a cost c_ij (possibly negative) associated with each element (i, j) in A. The objective is to assign each person to exactly one object, choosing the assignment with minimum possible cost. The problem can be formulated as the following linear program:

Minimize Σ_{(i,j) ∈ A} c_ij x_ij    (5.18a)

subject to

Σ_{j : (i,j) ∈ A} x_ij = 1, for all i ∈ N₁,    (5.18b)

Σ_{i : (i,j) ∈ A} x_ij = 1, for all j ∈ N₂,    (5.18c)

x_ij ≥ 0, for all (i, j) ∈ A.    (5.18d)

The assignment problem is a minimum cost flow problem defined on a network G with node set N = N₁ ∪ N₂, arc set A, arc costs c_ij, and supply/demand specified as b(i) = 1 if i ∈ N₁ and b(i) = -1 if i ∈ N₂. The network G has 2n nodes and m = |A| arcs. The assignment problem is also known as the bipartite matching problem.

We use the following notation. A 0-1 solution x of (5.18) is an assignment; if x_ij = 1, then i is assigned to j and j is assigned to i. A 0-1 solution x satisfying Σ_{j : (i,j) ∈ A} x_ij ≤ 1 for all i ∈ N₁ and Σ_{i : (i,j) ∈ A} x_ij ≤ 1 for all j ∈ N₂ is called a partial assignment. Associated with any partial assignment x is an index set X = {(i, j) ∈ A : x_ij = 1}. A node not assigned to any other node is unassigned.

Researchers have suggested numerous algorithms for solving the assignment problem. Several of these algorithms apply, either explicitly or implicitly, the successive shortest path algorithm for the minimum cost flow problem. These algorithms typically select the initial node potentials with the following values: π(i) = 0 for all i ∈ N₁ and π(j) = min{c_ij : (i, j) ∈ A} for all j ∈ N₂. All reduced costs defined by these node potentials are nonnegative, and the successive shortest path algorithm then solves the assignment problem as a sequence of n shortest path problems with nonnegative arc lengths; consequently it runs in O(n · S(n, m, C)) time. (Recall that S(n, m, C) is the time required to solve a shortest path problem with nonnegative arc lengths.)

The relaxation approach is another popular approach, and it too is closely related to the successive shortest path algorithm. The relaxation algorithm removes, or relaxes, the constraints (5.18c), thus allowing any object to be assigned to more than one person. This relaxed problem is easy to solve: assign each person i to an object j with the smallest c_ij value. As a result, some objects may be unassigned and other objects may be overassigned. The algorithm gradually builds a feasible assignment by identifying shortest paths from overassigned objects to unassigned objects and augmenting flows on these paths. The algorithm solves at most n shortest path problems and, because this approach always maintains the optimality conditions, it can solve these shortest path problems by implementations of Dijkstra's algorithm. Consequently, this algorithm also runs in O(n · S(n, m, C)) time.

One well known solution procedure for the assignment problem, the Hungarian method, is essentially the primal-dual variant of the successive shortest path algorithm. The network simplex algorithm, with provisions for maintaining a strongly feasible basis, is another solution procedure for the assignment problem; this approach is fairly efficient in practice and, moreover, some implementations of it provide polynomial time bounds. For problems that satisfy the similarity assumption, a cost scaling algorithm provides the best-known time bound for the assignment problem.

Since these algorithms are special cases of other algorithms we have described earlier, we will not specify their details. Rather, in this section, we will discuss a different type of algorithm, based upon the notion of an auction. Before doing so, we show another intimate connection between the assignment problem and the shortest path problem.

Assignments and Shortest Paths

We have seen that by solving a sequence of shortest path problems we can solve any assignment problem. Interestingly, we can also use any algorithm for the assignment problem to solve the shortest path problem with arbitrary arc lengths. To do so, we apply the assignment algorithm twice: the first application determines if the network contains a negative cycle and, if it doesn't, the second application identifies a shortest path. Both applications use the node splitting transformation described in Section 2.4.

The node splitting transformation replaces each node i by two nodes i and i', replaces each arc (i, j) by an arc (i, j'), and adds a zero cost (artificial) arc (i, i') for each node i.

. the assignment must contain a Qk' ii arcs of the form is . • negative. t ) .Iv Since the optimal assignment cost negative. i'). First. some partial assignment PA j| must be J2 But then by construction of the transformed network. ^^^ 2 Ok+1 Jk+1^' '^h\' jp^) Therefore. This solution must contain at least one arc of the form set of (i. (J2 . (j^. the assignment containing all artificial arcs is (i. PA = (j| . Consequently. suppose the cost of an optimeil assignment is i negative.'). Jl^-jj. j') with * { j .144 namely. jo ) / • • • / ^'- ^^^ ^°^^ °^ *^'^ "partial" assignment nonpositive. suppose the original network contains { a negative cost cycle. We if next show that the optimal value of the assignment problem negative if and only the original network has a negative cost cycle. (Jk' J]) Conversely.. the cycle ~ • ~ Jk ~ )l ^ ^ negative cost cycle in the original network. because j. (J2 / J3)/ • • • . the cost of the optimal assignment must be negative. Then the assigment negative cost. (J2 / it can be no ^ ^ • more expensive than the partial assignment is { (jj jA ) / • • • » (Jk.. j 2). iy\2 -J3 ' ' • * " - .

Figure 5.3. (a) The original network; (b) the transformed network.

If the original network contains no negative cost cycle, then we can obtain a shortest path between a specific pair of nodes, say from node 1 to node n, as follows. We consider the transformed network as described earlier and delete the nodes 1' and n and the arcs incident to these nodes. See Figure 5.3 for an example of this transformation. Now observe that each path from node 1 to node n in the original network has a corresponding assignment of the same cost in the transformed network, and the converse is also true. For example, the path 1-2-5 in Figure 5.3(a) has the corresponding assignment {(1, 2'), (2, 5'), (3, 3'), (4, 4')} in Figure 5.3(b), and conversely, the assignment {(1, 2'), (2, 4'), (4, 5'), (3, 3')} in Figure 5.3(b) has the corresponding path 1-2-4-5 in Figure 5.3(a). Consequently, an optimum assignment in the transformed network gives a shortest path in the original network.

The Auction Algorithm

We now describe an algorithm for the assignment problem known as the auction algorithm. This scaling algorithm is an instance of the bit-scaling algorithm described in Section 1.6. To describe the auction algorithm, we consider the maximization version of the assignment problem, since this version appears more natural for interpreting the algorithm. We first describe a pseudopolynomial time version of the algorithm and then incorporate scaling to make the algorithm polynomial time.

Suppose n persons want to buy n cars that are to be sold by auction. Each person i is interested in a subset A(i) of cars and has a nonnegative utility u_ij for car j, for each (i, j) ∈ A(i). The objective is to find an assignment with maximum total utility. Let C = max{|u_ij| : (i, j) ∈ A}. We can reduce the assignment problem (5.18) to this maximization problem by setting u_ij = -c_ij.

At each stage of the algorithm, there is an asking price for car j, represented by price(j). For a given set of asking prices, the marginal utility of person i for buying car j is u_ij - price(j). At each iteration, an unassigned person bids on a car that has the highest marginal utility. We assume that all utilities and prices are measured in dollars. We associate with each person i a number value(i), which is an upper bound on that person's highest marginal utility, i.e., value(i) ≥ max{u_ij - price(j) : (i, j) ∈ A(i)}. We call a bid (i, j) admissible if value(i) = u_ij - price(j) and inadmissible otherwise. The algorithm requires every bid in the auction to be admissible; if person i is next in turn to bid and has no admissible bid, then value(i) is too high and we decrease this value to max{u_ij - price(j) : (i, j) ∈ A(i)}.

So the algorithm proceeds by persons bidding on cars. If a person i makes a bid on car j, then the price of car j goes up by $1; therefore, subsequent bids on car j are of higher value. When person i bids on car j, person i is assigned to car j, and the person k who was the previous bidder for car j, if there was one, becomes unassigned; subsequently, person k must bid on another car. As the auction proceeds, the prices of cars increase and hence the marginal values to the persons decrease. The auction stops when each person is assigned a car.

We now describe this bidding procedure algorithmically. The procedure starts with some valid choices for value(i) and price(j): for example, we can set price(j) = 0 for each car j and value(i) = max{u_ij : (i, j) ∈ A(i)} for each person i. Although this initialization is sufficient for the pseudopolynomial time version, the polynomial time version requires a more clever initialization. At termination, the procedure yields an almost optimum assignment.

procedure BIDDING(u, x°, value, price);
begin
  let the initial assignment be a null assignment;
  while some person is unassigned do
  begin
    select an unassigned person i;
    if some bid (i, j) is admissible then
    begin
      assign person i to car j;
      price(j) := price(j) + 1;
      if person k was already assigned to car j, then person k becomes unassigned;
    end
    else update value(i) := max{u_ij - price(j) : (i, j) ∈ A(i)};
  end;
end;

We now show that this procedure gives an assignment whose utility is within $n of the optimum utility.
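Before analysing the procedure, here is a minimal Python rendering of it, using the simple initialization price(j) = 0 and value(i) = max u_ij mentioned above. It assumes that a complete assignment exists on the given person-car pairs, and the integer encoding of persons and cars is ours, for illustration only.

def bidding(utility, n):
    # utility: dict (person, car) -> u_ij over the admissible pairs A(i)
    A = {i: [j for (p, j) in utility if p == i] for i in range(1, n + 1)}
    price = {j: 0 for j in range(1, n + 1)}
    value = {i: max(utility[(i, j)] for j in A[i]) for i in range(1, n + 1)}
    owner, assigned = {}, {}            # car -> person, person -> car
    unassigned = list(range(1, n + 1))
    while unassigned:
        i = unassigned.pop()
        j = next((j for j in A[i]       # an admissible bid attains value(i)
                  if utility[(i, j)] - price[j] == value[i]), None)
        if j is None:                   # value(i) is too high: decrease it
            value[i] = max(utility[(i, j)] - price[j] for j in A[i])
            unassigned.append(i)
        else:
            if j in owner:              # the previous bidder is outbid
                k = owner[j]
                del assigned[k]
                unassigned.append(k)
            owner[j], assigned[i] = i, j
            price[j] += 1
    return assigned                     # within $n of the optimum utility

As the analysis below shows, multiplying every u_ij by (n+1) before calling the procedure makes the returned assignment optimal.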

We now show that this procedure gives an assignment whose utility is within $n of the optimum utility. Let x° denote a partial assignment at some point during the execution of the auction algorithm and let x* denote an optimum assignment. Recall that value(i) is always an upper bound on the highest marginal utility of person i, i.e., value(i) ≥ u_ij - price(j) for all (i, j) ∈ A(i). Consequently,

    Σ_{(i,j) ∈ x*} u_ij ≤ Σ_{i ∈ N1} value(i) + Σ_{j ∈ N2} price(j).        (5.19)

The partial assignment x° also satisfies the condition

    value(i) = u_ij - price(j) + 1, for all (i, j) ∈ x°,        (5.20)

because price(j) goes up by $1 immediately after the bid. Let UB(x°) be defined as follows:

    UB(x°) = Σ_{(i,j) ∈ x°} u_ij + Σ_{i ∈ N1°} value(i),        (5.21)

with N1° denoting the unassigned persons in N1. Using (5.20) in (5.21) and observing that unassigned cars in N2 have zero prices, we obtain

    UB(x°) ≥ Σ_{i ∈ N1} value(i) + Σ_{j ∈ N2} price(j) - n.        (5.22)

Combining (5.19) and (5.22) yields

    UB(x°) ≥ Σ_{(i,j) ∈ x*} u_ij - n.        (5.23)

As we show in our discussion to follow, the algorithm can change the node values and prices at most a finite number of times. Since the algorithm will either modify a node value or a node price whenever x° is not an assignment, within a finite number of steps the method must terminate with a complete assignment x°. Then UB(x°) represents the utility of the assignment x° (since N1° is empty), and by (5.23) the utility of this assignment is at most $n less than the maximum utility.

We next discuss the complexity of the Bidding procedure as applied to the assignment problem with all utilities first multiplied by (n+1). Since all utilities are now multiples of (n+1), two assignments with distinct total utility will differ by at least (n+1) units. The procedure yields an assignment that is within n units of the optimum value and, hence, must be optimal. In this modified problem, the largest utility is C' = (n+1)C. We first show that the value of any person decreases at most O(nC') times.

using arc" data structure permits us admissible bids in O(nmC') time. the values change O(n^C') times in value(i) > Uj..price(j) after Further. Thus. 5.6. this inequality shows any person decreases I I most O(nC') times. Substituting this inequality in (5.. The scaling version of the auction algorithm first multiplies all utilities by (n+1) and then solves a sequence of K = Flog (n+l)Cl assignment problems Pj.23) implies UBCx") S -n. we decompose the original problem into a sequence of algorithm. Using a scaling technique in the auction algorithm ensures that the prices and values do not change too many times. Odog nC) assignment problems and and show solve each problem by the auction We use the optimum prices and values of a problem as a starting solution that the prices of the subsequent problem and values change only CXn) times per sctiling phaise. Each j. some car By our previous arguments. The auction algorithm solves the assignment problem in O(n^mC) it time. to locate As can be shown. iteration either decreases the value of a person or assigns the person to total. ie No 1 Since valued) decreases by at that the value of le«ist one unit each time at it changes. we solve each problem in 0(nm) time and solve the original problem in 0(nm log nC) time.. Since all utilities are nonnegative. 149 times. .8. As in the bit -scaling technique described in Section 1. . a person in valued). K we have Theorem established the following result. . ?£.gned. N^ We next examine the number of iterations performed i by the procedure. since the price of car j person i i hais been aissigned to car I j and I increases by one unit. The auction algorithm is potentially very slow because can increase prices (and thus decreases values) in small increments of $1 and the final prices can be as large as n^C (the values as small as -n^C). Since decreasing the value of a person persor\s is i once takes 0( Ad) \ ) time.21) yields valued) ^ -n(C' + 1). the total time needed to ujxiate Veilues of all ( O ie I n I Ad) I C = O(nmC'). can be assigned at most A(i) times betvk^een two of consecutive decreases total This observation gives us a bound O(nmC') on the the "current number of times all bidders become ass'. Since C = nC. (5.

The problem Pk is an assignment problem in which the utility of arc (i, j) is given by the k leading bits in the binary representation of u_ij, assuming (by adding leading zeros if necessary) that each u_ij is K bits long. In other words, the problem Pk has the arc utilities u_ij^k = ⌊u_ij / 2^(K-k)⌋. Note that in the problem P1 all utilities are 0 or 1, and subsequently u_ij^(k+1) = 2 u_ij^k + (0 or 1), depending upon whether the newly added bit is 0 or 1. The scaling algorithm works as follows:

algorithm ASSIGNMENT;
begin
    multiply all u_ij by (n+1);
    K := ⌈log (n+1)C⌉;
    price(j) := 0 for each car j;
    value(i) := 0 for each person i;
    for k := 1 to K do
    begin
        let u_ij^k := ⌊u_ij / 2^(K-k)⌋ for each (i, j) ∈ A;
        price(j) := 2 price(j) for each car j;
        value(i) := 2 value(i) + 1 for each person i;
        BIDDING(u^k, x°, value, price);
    end;
end;

The assignment algorithm performs a number of cost scaling phases. In the k-th scaling phase, it obtains a near-optimum solution of the problem with the utilities u_ij^k. It is easy to verify that before the algorithm invokes the Bidding procedure, the prices and values satisfy value(i) ≥ max {u_ij^k - price(j) : (i, j) ∈ A(i)} for each person i; the Bidding procedure maintains these conditions throughout its execution. In each scaling phase, the algorithm starts with a null assignment; the purpose of each scaling phase is to obtain good prices and values for the subsequent scaling phase. In the last scaling phase, the algorithm solves the assignment problem with the original utilities and obtains an optimum solution of the original problem.
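For concreteness, here is a small Python sketch of this scaling driver, reusing the bidding routine sketched earlier. The function name assignment_by_scaling and the bit manipulations are our own rendering, under the assumption of positive integer utilities.

    # A sketch of the scaling driver (algorithm ASSIGNMENT); the helper
    # name and details are ours, assuming positive integer utilities.

    def assignment_by_scaling(n, u):
        u = {arc: (n + 1) * w for arc, w in u.items()}    # multiply by (n+1)
        K = max(1, max(u.values()).bit_length())          # number of phases
        price = [0] * n
        value = [0] * n
        car_of = None
        for k in range(1, K + 1):
            # problem P_k keeps the k leading bits of each utility
            uk = {arc: w >> (K - k) for arc, w in u.items()}
            price = [2 * p for p in price]
            value = [2 * v + 1 for v in value]
            car_of, price, value = bidding(n, uk, price, value)
        return car_of              # optimum assignment after the last phase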

We next discuss the complexity of this assignment algorithm. The crucial result is that the prices and values change only O(n) times during each execution of the Bidding procedure. We define the reduced utility of an arc (i, j) in the k-th scaling phase as

    ū_ij = u_ij^k - price(j) - value(i).

In this expression, price(j) and value(i) have the values computed just before calling the Bidding procedure. For any assignment x, we have

    Σ_{(i,j) ∈ x} ū_ij = Σ_{(i,j) ∈ x} u_ij^k - Σ_{j ∈ N2} price(j) - Σ_{i ∈ N1} value(i).

Hence, for a given set of prices and values, the reduced utility of an assignment differs from the utility of that assignment by a constant amount; therefore, an assignment that maximizes the reduced utility also maximizes the utility. Since value(i) ≥ u_ij^k - price(j) for each (i, j) ∈ A, we have

    ū_ij ≤ 0, for all (i, j) ∈ A.        (5.24)

Now consider the reduced utilities of arcs in the assignment x^(k-1) (the final assignment at the end of the (k-1)-st scaling phase). The equality (5.20) implies that

    u_ij^(k-1) - price'(j) - value'(i) = -1, for all (i, j) ∈ x^(k-1),        (5.25)

where price'(j) and value'(i) are the corresponding values at the end of the (k-1)-st scaling phase. Just before calling the Bidding procedure in the k-th scaling phase, we set price(j) = 2 price'(j), value(i) = 2 value'(i) + 1, and u_ij^k = 2 u_ij^(k-1) + (0 or 1), for all (i, j) ∈ A. Substituting these relationships in (5.25), we find that the reduced utilities ū_ij of arcs in x^(k-1) are either -2 or -3. Hence, the optimum reduced utility is at least -3n. If x° is some partial assignment in the k-th scaling phase, then (5.23) implies that UB(x°) ≥ -4n. Using this result and (5.24) in (5.21) yields

    Σ_{i ∈ N1°} value(i) ≥ -4n.        (5.26)

Hence, the value of any person decreases O(n) times per scaling phase. Using this result in the proof of Theorem 5.8, we observe that the Bidding procedure terminates in O(nm) time. The assignment algorithm applies the Bidding procedure O(log nC) times and, consequently, runs in O(nm log nC) time. We summarize our discussion.

Theorem 5.9. The scaling version of the auction algorithm solves the assignment problem in O(√n m log nC) time. Hmm

6. Reference Notes

In this section, we present reference notes on topics covered in the text. This discussion has three objectives: (i) to review important theoretical contributions on each topic, (ii) to point out inter-relationships among different algorithms, and (iii) to comment on the empirical aspects of the algorithms.

6.1 Introduction

The study of network flow models predates the development of linear programming techniques. The first studies in this problem domain, conducted by Kantorovich [1939], Hitchcock [1941], and Koopmans [1947], considered the transportation problem, a special case of the minimum cost flow problem. These studies provided some insight into the problem structure and yielded incomplete algorithms. Interest in network problems grew with the advent of the simplex algorithm by Dantzig in 1947. Dantzig [1951] specialized the simplex algorithm for the transportation problem. He noted the triangularity of the basis and the integrality of the optimum solution. Orden [1956] generalized this work by specializing the simplex algorithm for the uncapacitated minimum cost flow problem. The network simplex algorithm for the capacitated minimum cost flow problem followed from the development of the bounded variable simplex method for linear programming by Dantzig [1955]. The book by Dantzig [1962] contains a thorough description of these contributions along with historical perspectives.

During the 1950's, researchers began to exhibit increasing interest in the minimum cost flow problem as well as its special cases (the shortest path problem, the maximum flow problem and the assignment problem), mainly because of their important applications. Soon researchers developed special purpose algorithms to solve these problems. Dantzig, Ford and Fulkerson pioneered those efforts. Whereas Dantzig focused on the primal simplex based algorithms, Ford and Fulkerson developed primal-dual type combinatorial algorithms to solve these problems. Their book, Ford and Fulkerson [1962], presents a thorough discussion of the early research conducted by them and by others. It also covers the development of flow decomposition theory, which is credited to Ford and Fulkerson.

Since these pioneering works, network flow problems and their generalizations emerged as major research topics in operations research; this research

is documented in thousands of papers and many text and reference books. We shall be surveying many important research papers in the following sections.

Several important books summarize developments in the field and serve as a guide to the literature: Ford and Fulkerson [1962] (Flows in Networks), Berge and Ghouila-Houri [1962] (Programming, Games and Transportation Networks), Iri [1969] (Network Flows, Transportation and Scheduling), Hu [1969] (Integer Programming and Network Flows), Frank and Frisch [1971] (Communication, Transmission and Transportation Networks), Potts and Oliver [1972] (Flows in Transportation Networks), Christophides [1975] (Graph Theory: An Algorithmic Approach), Murty [1976] (Linear and Combinatorial Programming), Lawler [1976] (Combinatorial Optimization: Networks and Matroids), Bazaraa and Jarvis [1978] (Linear Programming and Network Flows), Minieka [1978] (Optimization Algorithms for Networks and Graphs), Kennington and Helgason [1980] (Algorithms for Network Programming), Jensen and Barnes [1980] (Network Flow Programming), Phillips and Garcia-Diaz [1981] (Fundamentals of Network Analysis), Swamy and Thulsiraman [1981] (Graphs, Networks and Algorithms), Papadimitriou and Steiglitz [1982] (Combinatorial Optimization: Algorithms and Complexity), Smith [1982] (Network Optimization Practice), Syslo, Deo, and Kowalik [1983] (Discrete Optimization Algorithms), Tarjan [1983] (Data Structures and Network Algorithms), Gondran and Minoux [1984] (Graphs and Algorithms), Rockafellar [1984] (Network Flows and Monotropic Optimization), and Derigs [1988] (Programming in Networks and Graphs). As an additional source of references, the reader might consult the bibliography on network optimization prepared by Golden and Magnanti [1977] and the extensive set of references on integer programming compiled by researchers at the University of Bonn (Kastning [1976], Hausman [1978], and Von Randow [1982, 1985]).

Since the applications of network flow models are so pervasive, no single source provides a comprehensive account of network flow models and their impact on practice. Several researchers have prepared general surveys of selected application areas. Notable among these is the paper by Glover and Klingman [1976] on the applications of minimum cost flow and generalized minimum cost flow problems. A number of books written in special problem domains also contain valuable insight about the range of applications of network flow models. Examples in this category are the paper by Bodin, Golden, Assad and Ball [1983] on vehicle routing and scheduling problems, books on communication networks by Bertsekas

and Gallager [1987] and on transportation planning by Sheffi [1985], as well as a collection of survey articles on facility location edited by Francis and Mirchandani [1988]. Golden [1988] has described the census rounding application given in Section 1.1.

General references on data structures serve as a useful backdrop for the algorithms presented in this chapter. The book by Aho, Hopcroft and Ullman [1974] is an excellent reference for simple data structures such as arrays, linked lists, doubly linked lists, queues, stacks, binary heaps or d-heaps. The book by Tarjan [1983] is another useful source of references for these topics, as well as for more complex data structures such as dynamic trees.

We have mentioned the "similarity assumption" throughout the chapter. Gabow [1985] coined this term in his paper on scaling algorithms for combinatorial optimization problems. This important paper, which contains scaling algorithms for several network problems, greatly helped in popularizing scaling techniques.

6.2 Shortest Path Problem

The shortest path problem and its generalizations have a voluminous research literature. As a guide to these results, we refer the reader to the extensive bibliographies compiled by Gallo, Pallattino, Ruggen and Starchi [1982] and Deo and Pang [1984]. This section, which summarizes some of this literature, focuses especially on issues of computational complexity.

Label Setting Algorithms

The first label setting algorithm was suggested by Dijkstra [1959], and independently by Dantzig [1960] and Whiting and Hillier [1960]. The original implementation of Dijkstra's algorithm runs in O(n²) time, which is the optimal running time for fully dense networks (those with m = Ω(n²)), since any algorithm must examine every arc. However, improved running times are possible for sparse networks. The following table summarizes various implementations of Dijkstra's algorithm that have been designed to improve the running time in the worst case or in practice. In the table, d = ⌈2 + m/n⌉ represents the average degree of a node in the network plus 2.

[Table: implementations of Dijkstra's algorithm and their running times; the entries are illegible in the source.]

Dial [1969] suggested his implementation of Dijkstra's algorithm because of its encouraging empirical performance. This algorithm was independently discovered by Wagner [1976]. Dial, Glover, Karney and Klingman [1979] have proposed an improved version of Dial's algorithm, which runs better in practice. Though Dial's algorithm is only pseudopolynomial-time, its successors have had improved worst-case behavior.

Denardo and Fox [1979] suggest several such improvements. Observe that if w = max [1, min {c_ij : (i, j) ∈ A}], then we can use buckets of width w in Dial's algorithm, hence reducing the number of buckets from 1+C to 1+(C/w). The correctness of this observation follows from the fact that if d* is the current minimum temporary distance label, then the algorithm will modify no other temporary distance label in the range [d*, d* + w - 1], since each arc has length at least w - 1. Then, using a multiple level bucket scheme, Denardo and Fox implemented the shortest path algorithm in O(max {k C^(1/k), m log (k+1), nk(1 + C^(1/k)/w)}) time for any choice of k; choosing k = log C yields a time bound of O(m log log C + n log C). Depending on n, m and C, other choices might lead to a modestly better time bound.

Boas, Kaas and Zijlstra [1977] suggested a data structure whose analysis depends upon the largest key D stored in a heap. The initialization of this data structure takes O(D) time and each heap operation takes O(log log D) time. When Dijkstra's algorithm is implemented using this data structure, it runs in O(nC + m log log nC) time. Johnson [1982] suggested an improvement of this data structure and used it to implement Dijkstra's algorithm in O(m log log C) time.

The best strongly polynomial-time algorithm to date is due to Fredman and Tarjan [1984], who use a Fibonacci heap data structure. The Fibonacci heap is a somewhat complex, but ingenious, data structure that takes an average of O(log n) time for each node selection (and the subsequent deletion) step and an average of O(1) time for each distance update; consequently, this data structure implements Dijkstra's algorithm in O(m + n log n) time.

Johnson [1977b] proposed a related bucket scheme with exponentially growing widths and obtained a running time of O((m + n log C) log log C). This data structure is the same as the R-heap data structure described in Section 3.3, except that it performs binary search over O(log C) buckets to insert nodes into buckets during the redistribution of ranges and the distance updates. The R-heap implementation replaces the binary search by a sequential search and improves the running time by a factor of O(log log C).
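To make the width-w bucket argument concrete, here is a small Python sketch of Dial's implementation with buckets of width w; it is our own illustration, not code from any of the papers cited, and it assumes nonnegative integer arc lengths bounded by C.

    # A sketch of Dial's algorithm with buckets of width w; our own
    # illustration.  adj[i] is a list of (j, c) pairs with integer
    # lengths c in [0, C], and every node appears as a key of adj.

    def dial_shortest_paths(adj, source, C):
        n = len(adj)
        w = max(1, min(c for arcs in adj.values() for (_, c) in arcs))
        d = {i: float('inf') for i in adj}
        d[source] = 0
        nbuckets = (n - 1) * C // w + 2        # labels never exceed (n-1)C
        buckets = [[] for _ in range(nbuckets)]
        buckets[0].append(source)
        for b in range(nbuckets):
            # all labels in the lowest nonempty bucket lie in a range of
            # width w, so none of them can decrease further: scan them all
            for i in buckets[b]:               # picks up nodes added below
                if d[i] // w != b:             # stale entry; label moved on
                    continue
                for (j, c) in adj[i]:
                    if d[i] + c < d[j]:
                        d[j] = d[i] + c
                        buckets[d[j] // w].append(j)
        return d

A node may be inserted into several buckets as its label falls; the stale-entry test discards the out-of-date copies, which is the usual way of avoiding explicit deletions.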

Ahuja, Mehlhorn, Orlin and Tarjan [1988] suggested the R-heap implementation and its further improvements, as described next. The R-heap implementation described in Section 3.3 uses a single level bucket system. A two-level bucket system improves further on the R-heap implementation of Dijkstra's algorithm. The two-level data structure consists of K (big) buckets, each bucket being further subdivided into L (small) subbuckets. During redistribution, the two-level bucket system redistributes the range of a subbucket over all of its previous buckets. This approach permits the selection of a much larger width of buckets, thus reducing their number. By using K = L = 2 log C/log log C, this two-level bucket system version of Dijkstra's algorithm runs in O(m + n log C/log log C) time. Incorporating a generalization of the Fibonacci heap data structure in the two-level bucket system, with appropriate choices of K and L, further reduces the time bound to O(m + n √(log C)). If we invoke the similarity assumption, this approach currently gives the fastest worst-case implementation of Dijkstra's algorithm for all classes of graphs except very sparse ones, for which the algorithm of Johnson [1982] appears more attractive. The Fibonacci heap version of the two-level R-heap is very complex, however, and so it is unlikely that this algorithm would perform well in practice.

Label Correcting Algorithms

Ford [1956] suggested, in skeleton form, the first label correcting algorithm for the shortest path problem. Subsequently, several other researchers, including Ford and Fulkerson [1962] and Moore [1957], studied the theoretical properties of the algorithm. Bellman's [1958] algorithm can also be regarded as a label correcting algorithm. Though specific implementations of label correcting algorithms run in O(nm) time, the most general form is nonpolynomial-time, as shown by Edmonds [1970].

Researchers have exploited the flexibility inherent in the generic label correcting algorithm to obtain algorithms that are very efficient in practice. Probably the most popular is the modification that adds a node to the LIST (see the description of the Modified Label Correcting Algorithm given in Section 3.4) at the front if the algorithm has examined the node earlier, and at the end otherwise. This modification was conveyed to Pollack and Wiebenson [1960] by D'Esopo, and later refined and tested by Pape [1974]. We shall subsequently refer to this algorithm as D'Esopo and Pape's algorithm. A FORTRAN listing of this

algorithm can be found in Pape [1980]. Though this modified label correcting algorithm has excellent computational behavior, in the worst case it runs in exponential time, as shown by Kershenbaum [1981].

Glover, Klingman and Phillips [1985] proposed a generalization of the FIFO label correcting algorithm, called the partitioning shortest path (PSP) algorithm. For general networks, the PSP algorithm runs in O(nm) time, while for networks with nonnegative arc lengths it runs in O(n²) time and has excellent computational behavior. Other variants of the label correcting algorithms and their computational attributes can be found in Glover, Klingman, Phillips and Schneider [1985].

Researchers have been interested in developing polynomial-time primal simplex algorithms for the shortest path problem. Dial, Glover, Karney and Klingman [1979] and Zadeh [1979] showed that Dantzig's pivot rule (i.e., pivoting in the arc with the largest violation of the optimality condition) for the shortest path problem, starting from an artificial basis, leads to Dijkstra's algorithm; thus, the number of pivots is O(n) if all arc costs are nonnegative. Primal simplex algorithms for the shortest path problem with arbitrary arc lengths are not that efficient. Akgul [1985a] developed a simplex algorithm for the shortest path problem that performs O(n²) pivots. Using simple data structures, Akgul's algorithm runs in O(n³) time, which can be reduced to O(nm + n² log n) using the Fibonacci heap data structure. Goldfarb, Hao and Kai [1986] described another simplex algorithm for the shortest path problem; the number of pivots and the running times for this algorithm are comparable to those of Akgul's algorithm. Orlin [1985] showed that the simplex algorithm with Dantzig's pivot rule solves the shortest path problem in O(n² log nC) pivots. Ahuja and Orlin [1988] recently discovered a scaling variation of this approach that performs O(n² log C) pivots and runs in O(nm log C) time. This algorithm uses simple data structures, uses very natural pricing strategies, and also permits partial pricing.

All Pair Shortest Path Algorithms

Most algorithms that solve the all pair shortest path problem involve matrix manipulation. The first such algorithm appears to be a part of the folklore; Lawler [1976] describes this algorithm in his textbook. The complexity of this algorithm is O(n³ log n), which can be improved slightly by using more sophisticated matrix multiplication procedures. The algorithm we have presented is due to Floyd [1962] and is based on a theorem by Warshall [1962]. This algorithm runs in O(n³) time and

is also capable of detecting the presence of negative cycles. Dantzig [1967] devised another procedure requiring exactly the same order of calculations.

From a worst-case complexity point of view, however, it might be desirable to solve the all pair shortest path problem as a sequence of single source shortest path problems. As pointed out in the text, this approach takes O(nm) time to construct an equivalent problem with nonnegative arc lengths and O(n S(n,m,C)) time to solve the n shortest path problems (recall that S(n,m,C) is the time needed to solve a shortest path problem with nonnegative arc lengths). For very dense networks, the algorithm by Fredman [1976] is faster than this approach in the worst-case complexity. The bibliography by Deo and Pang [1984] contains references for several other all pair shortest path algorithms.

Computational Results

Researchers have extensively tested shortest path algorithms on a variety of network classes. The studies due to Gilsinn and Witzgall [1973], Pape [1974], Kelton and Law [1978], Van Vliet [1978], Dial, Glover, Karney and Klingman [1979], Denardo and Fox [1979], Imai and Iri [1984], Glover, Klingman, Phillips and Schneider [1985] and Gallo and Pallottino [1988] are representative of these contributions.

Unlike the worst-case results, the computational performance of an algorithm depends upon many factors: for example, the manner in which the program is written; the language, compiler and computer used; and the distribution of networks on which the algorithm is tested. Hence, the results of computational studies are only suggestive, rather than conclusive. The results of these studies also depend greatly upon the density of the network. These studies generally suggest that Dial's algorithm is the best label setting algorithm for the shortest path problem: it is faster than the original O(n²) implementation and the binary heap, d-heap or Fibonacci heap implementations of Dijkstra's algorithm for all network classes tested by these researchers. Denardo and Fox [1979] also find that Dial's algorithm is faster than their two-level bucket implementation for all of their test problems; however, extrapolating the results, they observe that their implementation would be faster for very large shortest path problems. Researchers have not yet tested the R-heap implementation, and so at this moment no comparison with Dial's algorithm is available.

Kelton and Law [1978] have conducted a computational study of several all pair shortest path algorithms. This study indicates that Dantzig's [1967] algorithm, with a modification due to Tabourier [1973], is faster (up to two times) than the Floyd-Warshall algorithm described in Section 3.5. This study also finds that matrix manipulation algorithms are faster than a successive application of a single-source shortest path algorithm for very dense networks, but slower for sparse networks.

Among the label correcting algorithms, the algorithms by D'Esopo and Pape and by Glover, Klingman, Phillips and Schneider [1985] are the two fastest; the study by Glover et al. finds that their algorithm is superior to D'Esopo and Pape's algorithm. Other researchers have also compared label setting algorithms with label correcting algorithms. Studies generally suggest that, for very dense networks, label setting algorithms are superior and, for sparse networks, label correcting algorithms perform better.

6.3 Maximum Flow Problem

The maximum flow problem is distinguished by the long succession of research contributions that have improved upon the worst-case complexity of algorithms; some, but not all, of these improvements have produced improvements in practice.

Several researchers (Dantzig and Fulkerson [1956], Ford and Fulkerson [1956], and Elias, Feinstein and Shannon [1956]) independently established the max-flow min-cut theorem. Fulkerson and Dantzig [1955] solved the maximum flow problem by specializing the primal simplex algorithm, whereas Ford and Fulkerson [1956] and Elias et al. [1956] solved it by augmenting path algorithms. Since then, researchers have developed a number of algorithms for this problem; Figure 6.2 summarizes the running times of some of these algorithms. In the figure, n is the number of nodes, m is the number of arcs, and U is an upper bound on the integral arc capacities. The algorithms whose time bounds involve U assume integral capacities; the bounds specified for the other algorithms apply to problems with arbitrary rational or real capacities.

#    Discoverers                                Running Time
1    Edmonds and Karp [1972]                    O(nm²)
2    Dinic [1970]                               O(n²m)
3    Karzanov [1974]                            O(n³)
4    Cherkasky [1977]                           O(n² √m)
5    Malhotra, Kumar and Maheshwari [1978]      O(n³)
6    Galil [1980]                               O(n^(5/3) m^(2/3))
7    Galil and Naamad [1980]; Shiloach [1978]   O(nm log² n)
8    Shiloach and Vishkin [1982]                O(n³)
9    Sleator and Tarjan [1983]                  O(nm log n)
10   Tarjan [1984]                              O(n³)
11   Gabow [1985]                               O(nm log U)
12   Goldberg [1985]                            O(n³)
13   Goldberg and Tarjan [1986]                 O(nm log (n²/m))
14   Bertsekas [1986]                           O(n³)
15   Cheriyan and Maheshwari [1987]             O(n² √m)
16   Ahuja and Orlin [1987]                     O(nm + n² log U)
17   Ahuja, Orlin and Tarjan [1988]             (a) O(nm + n² (log U)/(log log U))
                                                (b) O(nm + n² √(log U))
                                                (c) O(nm log ((n √(log U))/m + 2))

Figure 6.2. Running times of maximum flow algorithms.

Ford and Fulkerson [1956] observed that the labeling algorithm can perform as many as O(nU) augmentations for networks with integer arc capacities. They also showed that for arbitrary irrational arc capacities, the labeling algorithm can perform an infinite sequence of augmentations and might converge to a value different from the maximum flow value. Edmonds and Karp [1972] suggested two specializations of the labeling algorithm, both with improved computational complexity. They showed that if the algorithm augments flow along a shortest path (i.e., one containing the smallest possible number of arcs) in the residual network, then the algorithm performs O(nm) augmentations. A breadth first search of the network will determine a shortest augmenting path; consequently, this version of the labeling

algorithm runs in O(nm²) time. Edmonds and Karp's second idea was to augment flow along a path with maximum residual capacity. They proved that this algorithm performs O(m log U) augmentations. Tarjan [1986] has shown how to determine a path with maximum residual capacity in O(m) time on average; hence, this version of the labeling algorithm runs in O(m² log U) time.

Dinic [1970] independently introduced the concept of shortest path networks, called layered networks, for solving the maximum flow problem. A layered network is a subgraph of the residual network that contains only those nodes and arcs that lie on at least one shortest path from the source to the sink. The nodes in a layered network can be partitioned into layers of nodes N1, N2, ..., so that for every arc (i, j) in the layered network, i ∈ Nk and j ∈ Nk+1 for some k. A blocking flow in a layered network G' = (N', A') is a flow that blocks flow augmentations in the sense that G' contains no directed path with positive residual capacity from the source node to the sink node. Dinic showed how to construct, in a total of O(nm) time, a blocking flow in a layered network by performing at most m augmentations. His algorithm proceeds by constructing layered networks and establishing blocking flows in these networks. Dinic showed that after each blocking flow iteration, the length of the layered network increases, and after at most n iterations the source is disconnected from the sink in the residual network. Consequently, his algorithm runs in O(n²m) time.

The shortest augmenting path algorithm presented in Section 4.3 achieves the same time bound as Dinic's algorithm, but instead of constructing layered networks it maintains distance labels. Goldberg [1985] introduced distance labels in the context of his preflow push algorithm. Distance labels offer several advantages: they are simpler to understand than layered networks, they are easier to manipulate, and they have led to more efficient algorithms. Orlin and Ahuja [1987] developed the distance label based augmenting path algorithm given in Section 4.3. They also showed that this algorithm is equivalent both to Edmonds and Karp's algorithm and to Dinic's algorithm, in the sense that all three algorithms enumerate the same augmenting paths in the same sequence; the algorithms differ only in the manner in which they obtain these augmenting paths.
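To make the notions of layered networks and blocking flows concrete, the sketch below is a compact Python rendering of Dinic's method: a breadth first search assigns the layers (as node levels) and repeated depth first searches find a blocking flow. It is written for illustration under simplifying assumptions (integer capacities, no current-arc data structure), not taken from any of the papers cited.

    from collections import deque

    # A compact sketch of Dinic's algorithm.  cap is a dict of residual
    # capacities on arcs (i, j); it is modified in place.  Illustrative only.

    def dinic_max_flow(nodes, cap, s, t):
        adj = {i: [] for i in nodes}
        for (i, j) in list(cap):
            adj[i].append(j)
            adj[j].append(i)
            cap.setdefault((j, i), 0)          # reverse residual arcs

        def bfs_levels():                      # layers of the layered network
            level = {s: 0}
            q = deque([s])
            while q:
                i = q.popleft()
                for j in adj[i]:
                    if j not in level and cap[i, j] > 0:
                        level[j] = level[i] + 1
                        q.append(j)
            return level if t in level else None

        def dfs_augment(i, level, limit):      # one augmenting path by DFS
            if i == t:
                return limit
            for j in adj[i]:
                if cap[i, j] > 0 and level.get(j) == level[i] + 1:
                    pushed = dfs_augment(j, level, min(limit, cap[i, j]))
                    if pushed:
                        cap[i, j] -= pushed
                        cap[j, i] += pushed
                        return pushed
            level[i] = None                    # dead end: prune from layer
            return 0

        flow = 0
        while True:
            level = bfs_levels()
            if level is None:                  # source disconnected: done
                return flow
            while True:                        # augment until flow blocks
                pushed = dfs_augment(s, level, float('inf'))
                if not pushed:
                    break
                flow += pushed

With the current-arc refinement mentioned in the text, each blocking flow computation takes O(nm) time, which recovers the O(n²m) bound cited above; the dead-end pruning here is a cruder substitute.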

Several researchers have contributed improvements to the computational complexity of maximum flow algorithms by developing more efficient algorithms to establish blocking flows in layered networks. Karzanov [1974] introduced the concept of preflows in a layered network. (See the technical report of Even [1976] for a comprehensive description of this algorithm and the paper by Tarjan [1984] for a simplified version.) Karzanov showed that an implementation that maintains preflows and pushes flows from nodes with excesses constructs a blocking flow in O(n²) time. Malhotra, Kumar and Maheshwari [1978] present a conceptually simple maximum flow algorithm that runs in O(n³) time. Cherkasky [1977] and Galil [1980] presented further improvements of Karzanov's algorithm.

The search for more efficient maximum flow algorithms has stimulated researchers to develop new data structures for implementing Dinic's algorithm. The first such data structures were suggested independently by Shiloach [1978] and Galil and Naamad [1980]. Dinic's algorithm (or the shortest augmenting path algorithm described in Section 4.3) takes O(n) time on average to identify an augmenting path and, during the augmentation, it saturates some arcs in this path. If we delete the saturated arcs from this path, we obtain a set of path fragments. The basic idea is to store these path fragments using some data structure, for example, 2-3 trees (see Aho, Hopcroft and Ullman [1974] for a discussion of 2-3 trees), and to use them later to identify augmenting paths quickly. Shiloach [1978] and Galil and Naamad [1980] showed how to augment flows through path fragments in a way that finds a blocking flow in O(m (log n)²) time; hence, their implementation of Dinic's algorithm runs in O(nm (log n)²) time. Sleator and Tarjan [1983] improved this approach by using a data structure called dynamic trees to store and update path fragments. Sleator and Tarjan's algorithm establishes a blocking flow in O(m log n) time and thereby yields an O(nm log n) time bound for Dinic's algorithm.

Gabow [1985] obtained a similar time bound by applying a bit scaling approach to the maximum flow problem. As outlined in Section 1.7, this approach solves a maximum flow problem at each scaling phase with one more bit of every arc's capacity. During a scaling phase, the initial flow value differs from the maximum flow value by at most m units, and so the shortest augmenting path algorithm (and also Dinic's algorithm) performs at most m augmentations; hence, each scaling phase takes O(nm) time and the algorithm runs in O(nm log U) time. If we invoke the similarity assumption, this time bound is comparable to that of Sleator and Tarjan's algorithm, but the scaling algorithm is much simpler to implement. Orlin and Ahuja [1987] have presented a variation of Gabow's algorithm achieving the same time bound.
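Under the stated assumptions, the bit-scaling idea can be sketched by reusing the dinic_max_flow routine above: each phase doubles the residual capacities (and hence the current flow) and appends the next capacity bit, after which only a few augmentations are needed. This rendering is our own, for illustration.

    # A sketch of bit scaling for maximum flow, reusing dinic_max_flow;
    # our own rendering.  cap maps arcs (i, j) to positive integer capacities.

    def max_flow_by_bit_scaling(nodes, cap, s, t):
        K = max(c.bit_length() for c in cap.values())
        res = {arc: 0 for arc in cap}      # residual network of phase 0
        flow = 0
        for k in range(1, K + 1):
            # doubling res doubles both the capacities and the current flow
            res = {arc: 2 * r for arc, r in res.items()}
            for arc, c in cap.items():     # append the k-th leading bit
                res[arc] += (c >> (K - k)) & 1
            flow = 2 * flow + dinic_max_flow(nodes, res, s, t)
        return flow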

Goldberg and Tarjan [1986] developed the generic preflow push algorithm and the highest-label preflow push algorithm. Previously, Goldberg [1985] had shown that the FIFO version of the algorithm, which pushes flow from active nodes in first-in, first-out order, runs in O(n³) time. (This algorithm maintains a queue of active nodes; at each iteration, it selects a node from the front of the queue, performs a push/relabel step at this node, and adds the newly active nodes to the rear of the queue.) Using a dynamic tree data structure, Goldberg and Tarjan [1986] improved the running time of the FIFO preflow push algorithm to O(nm log (n²/m)). This algorithm currently gives the best strongly polynomial-time bound for solving the maximum flow problem. Bertsekas [1986] obtained another maximum flow algorithm by specializing his minimum cost flow algorithm; this algorithm closely resembles Goldberg's FIFO preflow push algorithm.

Recently, Cheriyan and Maheshwari [1987] showed that Goldberg and Tarjan's highest-label preflow push algorithm actually performs O(n² √m) nonsaturating pushes and hence runs in O(n² √m) time.

Ahuja and Orlin [1987] improved Goldberg and Tarjan's algorithm using the excess-scaling technique to obtain an O(nm + n² log U) time bound. If we invoke the similarity assumption, this algorithm improves upon Goldberg and Tarjan's O(nm log (n²/m)) algorithm by a factor of log n for networks that are both non-sparse and non-dense; further, this algorithm does not use any complex data structures. By scaling excesses by a factor of log U/log log U and pushing flow from a large excess node with the highest distance label, Ahuja, Orlin and Tarjan [1988] reduced the number of nonsaturating pushes to O(n² (log U)/(log log U)). Ahuja, Orlin and Tarjan [1988] also obtained another variation of the original excess scaling algorithm, which further reduces the number of nonsaturating pushes to O(n² √(log U)).

The use of the dynamic tree data structure improves the running times of the excess-scaling algorithm and its variations, though the improvements are not as dramatic as they have been for Dinic's and the FIFO preflow push algorithms. For example, the O(nm + n² √(log U)) algorithm improves to O(nm log ((n √(log U))/m + 2)) by using dynamic trees, as shown in Ahuja, Orlin and Tarjan [1988]. Tarjan [1987] conjectures that any preflow push algorithm that performs p nonsaturating pushes can be implemented in O(nm log (2 + p/nm)) time using dynamic trees. Although this

conjecture is true for all known preflow push algorithms, it is still open for the general case.

Developing a polynomial-time primal simplex algorithm for the maximum flow problem has been an outstanding open problem for quite some time. Recently, Goldfarb and Hao [1988] developed such an algorithm. This algorithm is based on selecting pivot arcs so that flow is augmented along a shortest path from the source to the sink. As one would expect, this algorithm performs O(nm) pivots and can be implemented in O(n²m) time. Tarjan [1988] recently showed how to implement this algorithm in O(nm log n) time using dynamic trees.

Researchers have also investigated the following special cases of the maximum flow problem: (i) unit capacity networks (i.e., U = 1); (ii) unit capacity simple networks (i.e., U = 1, and every node, except the source and sink, has one incoming arc or one outgoing arc); (iii) bipartite networks; and (iv) planar networks. Observe that the maximum flow value for unit capacity networks is less than n, and so the shortest augmenting path algorithm will solve these problems in O(nm) time; thus, these problems are easier than problems with large capacities. Even and Tarjan [1975] showed that Dinic's algorithm solves the maximum flow problem on unit capacity networks in O(n^(2/3) m) time and on unit capacity simple networks in O(n^(1/2) m) time. Orlin and Ahuja [1987] have achieved the same time bounds using a modification of the shortest augmenting path algorithm. Both of these algorithms rely on ideas contained in Hopcroft and Karp's [1973] algorithm for maximum bipartite matching. Fernandez-Baca and Martel [1987] have generalized these ideas for networks with small integer capacities.

Versions of the maximum flow algorithms run considerably faster on a bipartite network G = (N1 ∪ N2, A) if |N1| << |N2| (or |N2| << |N1|). Let n1 = |N1|, n2 = |N2| and n = n1 + n2, and suppose that n1 ≤ n2. Gusfield, Martel and Fernandez-Baca [1985] obtained the first such results by showing how the running times of Karzanov's and Malhotra et al.'s algorithms reduce from O(n³) to O(n1² n2) and O(n1³ + nm), respectively. Ahuja, Orlin, Stein and Tarjan [1988] improved upon these ideas by showing that it is possible to substitute n1 for n in the time bounds for all preflow push algorithms to obtain new time bounds for bipartite networks. This result implies that the FIFO preflow push algorithm and the

original excess scaling algorithm, respectively, solve the bipartite maximum flow problem in O(n1 m + n1³) and O(n1 m + n1² log U) time.

It is possible to solve the maximum flow problem on planar networks much more efficiently than on general networks. (A network is called planar if it can be drawn in a two-dimensional plane so that arcs intersect one another only at the nodes. A planar network has at most 6n arcs; hence, the running times of the maximum flow algorithms on planar networks appear more attractive.) Specialized solution techniques, which have even better running times, are quite different than those for general networks. Some important references for planar maximum flow algorithms are Itai and Shiloach [1979], Johnson and Venkatesan [1982] and Hassin and Johnson [1985].

Researchers have also investigated whether the worst-case bounds of the maximum flow algorithms are tight, i.e., whether the algorithms achieve their worst-case bounds for some families of networks. Zadeh [1972] showed that the bound of Edmonds and Karp's algorithm is tight when m = n². Even and Tarjan [1975] noted that the same examples imply that the bound of Dinic's algorithm is tight when m = n². Baratz [1977] showed that the bound on Karzanov's algorithm is tight. Galil [1981] constructed an interesting class of examples and showed that the algorithms of Edmonds and Karp, Dinic, Karzanov, Cherkasky, Galil and Malhotra et al. achieve their worst-case bounds on those examples.

Other researchers have made some progress in constructing worst-case examples for preflow push algorithms. Martel [1987] showed that the FIFO preflow push algorithm can take Ω(nm) time to solve a class of unit capacity networks. Cheriyan and Maheshwari [1987] have shown that the bound of O(n² √m) for the highest-label preflow push algorithm is tight, and that the bound of O(n²m) for the generic preflow push algorithm is tight. Cheriyan [1988] has also constructed a family of examples to show that the bound O(n³) for the FIFO preflow push algorithm is tight. The research community has not established similar results for other preflow push algorithms, in particular for the excess-scaling algorithms. It is worth mentioning, however, that these known worst-case examples are quite artificial and are not likely to arise in practice.

Several computational studies have assessed the empirical behavior of maximum flow algorithms. The studies performed by Hamacher [1979], Cheung [1980], Glover, Klingman, Mote and Whitman [1979, 1984], Imai [1983] and Goldfarb and Grigoriadis [1986] are noteworthy. These studies were conducted prior to the development of algorithms that use distance labels. They rank Edmonds and Karp's, Dinic's and Karzanov's algorithms in increasing order of performance for most classes of networks; Dinic's algorithm is competitive with Karzanov's algorithm for sparse networks, but slower for dense networks. Imai [1983] noted that Galil and Naamad's [1980] implementation of Dinic's algorithm, using sophisticated data structures, is slower than the original Dinic's algorithm. Sleator and Tarjan [1983] reported a similar finding; they observed that their implementation of Dinic's algorithm using the dynamic tree data structure is slower than the original Dinic's algorithm by a constant factor. Hence, the sophisticated data structures improve only the worst-case performance of algorithms, but are not useful empirically. Researchers have also tested the Malhotra et al. algorithm and the primal simplex algorithm due to Fulkerson and Dantzig [1955], and found these algorithms to be slower than Dinic's algorithm for most classes of networks.

A number of researchers are currently evaluating the computational performance of preflow push algorithms. Derigs and Meier [1988], Grigoriadis [1988], and Ahuja, Kodialam and Orlin [1988] have found that the preflow push algorithms are substantially (often 2 to 10 times) faster than Dinic's and Karzanov's algorithms for most classes of networks. Among all nonscaling preflow push algorithms, the highest-label preflow push algorithm runs the fastest. The excess-scaling algorithm and its variations have not been tested thoroughly. We do not anticipate that dynamic tree implementations of preflow push algorithms would be useful in practice; in this case, as in others, their contribution has been to improve the worst-case performances of algorithms.

Finally, we discuss two important generalizations of the maximum flow problem: (i) the multi-terminal flow problem, and (ii) the maximum dynamic flow problem. In the multi-terminal flow problem, we wish to determine the maximum flow value between every pair of nodes. Gomory and Hu [1961] showed how to solve the multi-terminal flow problem on undirected networks by solving (n-1) maximum flow problems. Recently, Gusfield [1987] has suggested a simpler multi-terminal flow algorithm. These results, however, do not apply to the multi-terminal maximum flow problem on directed networks.

In the simplest version of the maximum dynamic flow problem, we associate with each arc (i, j) in the network a number t_ij denoting the time needed to traverse that arc. The objective is to send the maximum possible flow from the source node to the sink node within a given time period T. Ford and Fulkerson [1958] showed that the maximum dynamic flow problem can be solved by solving a minimum cost flow problem. (Ford and Fulkerson [1962] give a nice treatment of this problem.) Orlin [1983] has considered infinite horizon dynamic flow problems in which the objective is to minimize the average cost per period.

6.4 Minimum Cost Flow Problem

The minimum cost flow problem has a rich history. The classical transportation problem, a special case of the minimum cost flow problem, was posed and solved (though incompletely) by Kantorovich [1939], Hitchcock [1941], and Koopmans [1947]. Dantzig [1951] developed the first complete solution procedure for the transportation problem by specializing his simplex algorithm for linear programming. He observed the spanning tree property of the basis and the integrality property of the optimum solution. Later, his development of the upper bounding technique for linear programming led to an efficient specialization of the simplex algorithm for the minimum cost flow problem. Dantzig's book [1962] discusses these topics.

Ford and Fulkerson [1956, 1957] suggested the first combinatorial algorithms for the uncapacitated and capacitated transportation problem; these algorithms are known as primal-dual algorithms. Ford and Fulkerson [1962] describe the primal-dual algorithm for the minimum cost flow problem. Jewell [1958], Iri [1960] and Busaker and Gowen [1961] independently discovered the successive shortest path algorithm. These researchers showed how to solve the minimum cost flow problem as a sequence of shortest path problems with arbitrary arc lengths. Tomizava [1971] and Edmonds and Karp [1972] independently pointed out that if the computations use node potentials, then these algorithms can be implemented so that the shortest path problems have nonnegative arc lengths. Minty [1960] and Fulkerson [1961] independently discovered the out-of-kilter algorithm. The negative cycle algorithm is credited to Klein [1967]. Helgason and Kennington [1977] and Armstrong, Klingman and Whitman [1980] describe the

specialization of the linear programming dual simplex algorithm for the minimum cost flow problem (which is not discussed in this chapter).

Each of these algorithms performs iterations that can (apparently) not be polynomially bounded. Zadeh [1973a] describes one such example on which each of several algorithms (the primal simplex algorithm with Dantzig's pivot rule, the dual simplex algorithm, the negative cycle algorithm which augments flow along a most negative cycle, the successive shortest path algorithm, the primal-dual algorithm, and the out-of-kilter algorithm) performs an exponential number of iterations. Zadeh [1973b] has also described more pathological examples for network algorithms. The fact that one example is bad for many network algorithms suggests an insightful inter-relationship among the algorithms. The paper by Zadeh [1979] showed this relationship by pointing out that each of the algorithms just mentioned is indeed equivalent, in the sense that they perform the same sequence of augmentations provided ties are broken using the same rule. All of these algorithms essentially consist of identifying shortest paths between appropriately defined nodes and augmenting flow along these paths; further, they obtain these shortest paths using a method that can be regarded as an application of Dijkstra's algorithm.

The network simplex algorithm and its practical implementations have been most popular with operations researchers. Johnson [1966] suggested the first tree manipulating data structure for implementing the simplex algorithm. The first implementations using these ideas, due to Srinivasan and Thompson [1973] and Glover, Karney, Klingman and Napier [1974], significantly reduced the running time of the simplex algorithm. Glover, Klingman and Stutz [1974], Bradley, Brown and Graves [1977], and Barr, Glover and Klingman [1979] subsequently discovered improved data structures. The book of Kennington and Helgason [1980] is an excellent source for references and background material concerning these developments.

Researchers have conducted extensive studies to determine the most effective pricing strategy, i.e., selection of the entering variable. These studies show that the choice of the pricing strategy has a significant effect on both the solution time and the number of pivots required to solve minimum cost flow problems. The candidate list strategy we described is due to Mulvey [1978a]. Goldfarb and Reid [1977], Bradley, Brown and Graves [1978], Grigoriadis and Hsu [1979], Gibby, Glover, Klingman and Mead [1983] and Grigoriadis [1986] have described other strategies that have been

effective in practice. It appears that the best pricing strategy depends both upon the network structure and the network size.

Experience with solving large scale minimum cost flow problems has established that more than 90% of the pivoting steps in the simplex method can be degenerate (see Bradley, Brown and Graves [1978], Gavish, Schweitzer and Shlifer [1977] and Grigoriadis [1986]). Thus, degeneracy is both a computational and a theoretical issue. The strongly feasible basis technique, proposed by Cunningham [1976] and independently by Barr, Glover and Klingman [1977a, 1977b, 1978], has contributed on both fronts. Computational experience has shown that maintaining a strongly feasible basis substantially reduces the number of degenerate pivots. On the theoretical front, the use of this technique led to a finitely converging primal simplex algorithm. Orlin [1985] showed, using a perturbation technique, that for integer data an implementation of the primal simplex algorithm that maintains a strongly feasible basis performs O(nmCU) pivots when used with any arbitrary pricing strategy, and O(nmC log (mCU)) pivots when used with Dantzig's pricing strategy.

The strongly feasible basis technique prevents cycling during a sequence of consecutive degenerate pivots, but the number of consecutive degenerate pivots may be exponential. This phenomenon is known as stalling. Cunningham [1979] described an example of stalling and suggested several rules for selecting the entering variable to avoid stalling. One such rule is the LRC (Least Recently Considered) rule, which orders the arcs in an arbitrary, but fixed, manner. The algorithm then examines the arcs in a wrap-around fashion, each iteration starting at the place where it left off earlier, and introduces the first eligible arc into the basis. Cunningham showed that this rule admits at most nm consecutive degenerate pivots. Goldfarb, Hao and Kai [1987] have described more anti-stalling pivot rules for the minimum cost flow problem.

Researchers have also been interested in developing polynomial-time simplex algorithms for the minimum cost flow problem or its special cases. The only polynomial-time simplex algorithm for the minimum cost flow problem is a dual simplex algorithm due to Orlin [1984]; this algorithm performs O(n³ log n) pivots for the uncapacitated minimum cost flow problem. Developing a polynomial-time primal simplex algorithm for the minimum cost flow problem is still open. However, researchers have developed such algorithms for the shortest path problem, the maximum flow problem, and the assignment problem: Dial et al. [1979], Zadeh

[1979], Roohy-Laleh [1980], Orlin [1985], Akgul [1985a], Goldfarb, Hao and Kai [1986] and Ahuja and Orlin [1988] for the shortest path problem; Goldfarb and Hao [1988] for the maximum flow problem; and Roohy-Laleh [1980], Hung [1983], Orlin [1985], Akgul [1985b] and Ahuja and Orlin [1988] for the assignment problem.

The relaxation algorithms proposed by Bertsekas and his associates are other attractive algorithms for solving the minimum cost flow problem and its generalizations. For the minimum cost flow problem, this algorithm maintains a pseudoflow satisfying the optimality conditions. The algorithm proceeds by either (i) augmenting flow from an excess node to a deficit node along a path consisting of arcs with zero reduced cost, or (ii) changing the potentials of a subset of nodes. In the latter case, it resets the flows on some arcs to their lower or upper bounds so as to satisfy the optimality conditions; this flow assignment, however, might change the excesses and deficits at nodes. The algorithm operates so that each change in the node potentials increases the dual objective function value, and when it finally determines the optimum dual objective function value, it has also obtained an optimum primal solution. Bertsekas [1985] suggested the relaxation algorithm for the minimum cost flow problem with integer data. Bertsekas and Tseng [1985] extended this approach to the minimum cost flow problem with real data, and to the generalized minimum cost flow problem (see Section 6.6 for a definition of this problem). Bertsekas and Tseng have presented computational results for the relaxation algorithm, which has exhibited nice empirical behavior.

A number of empirical studies have extensively tested minimum cost flow algorithms for a wide variety of network structures, data distributions, and problem sizes. The most common problem generator is NETGEN, due to Klingman, Napier and Stutz [1974], which is capable of generating assignment problems and capacitated or uncapacitated transportation and minimum cost flow problems. Glover, Karney and Klingman [1974] and Aashtiani and Magnanti [1976] have tested the primal-dual and out-of-kilter algorithms. Helgason and Kennington [1977] and Armstrong, Klingman and Whitman [1980] have reported on extensive studies of the dual simplex algorithm. The primal simplex algorithm has been the subject of more rigorous investigation; studies conducted by Glover, Karney, Klingman and Napier [1974], Glover, Karney and Klingman [1974], Bradley, Brown and Graves [1977], Mulvey [1978b], Grigoriadis and Hsu [1979] and Grigoriadis [1986] are noteworthy.

In view of Zadeh's [1979] result, we would expect that the successive shortest path algorithm, the dual simplex algorithm, the primal-dual algorithm, the out-of-kilter algorithm, and the primal simplex algorithm with Dantzig's pivot rule should have comparable running times. By using more effective pricing strategies that determine a good entering arc without examining all arcs, we would expect that the primal simplex algorithm should outperform all the other algorithms. All the computational studies have verified this expectation, and until very recently the primal simplex algorithm had been a clear winner for almost all classes of network problems. Bertsekas and Tseng [1988] have reported that their relaxation algorithm is substantially faster than the primal simplex algorithm. However, Grigoriadis [1986] finds his new version of the primal simplex algorithm faster than the relaxation algorithm. At this time, it appears that the relaxation algorithm of Bertsekas and Tseng and the primal simplex algorithm due to Grigoriadis are the two fastest algorithms for solving the minimum cost flow problem in practice. Computer codes for some minimum cost flow algorithms are available in the public domain; these include the primal simplex codes RNET and NETFLOW, developed by Grigoriadis and Hsu [1979] and Kennington and Helgason [1980] respectively, and the relaxation code RELAX developed by Bertsekas and Tseng [1988].

Polynomial-Time Algorithms

In the recent past, researchers have actively pursued the design of fast (weakly) polynomial and strongly polynomial-time algorithms for the minimum cost flow problem. Recall that an algorithm is strongly polynomial-time if its running time is polynomial in the number of nodes and arcs, and does not involve terms containing logarithms of C or U. The table given in Figure 6.3 summarizes these theoretical developments in solving the minimum cost flow problem. The table reports running times for networks with n nodes and m arcs, m' of which are capacitated. It assumes that the integral cost coefficients are bounded in absolute value by C, and that the integral capacities, supplies and demands are bounded in absolute value by U. The term S() is the running time for the shortest path problem and the term M() represents the corresponding running time to solve a maximum flow problem.

Polynomial-Time Combinatorial Algorithms

#    Discoverers                                 Running Time
1    Edmonds and Karp [1972]                     O((n + m') log U S(n, m, C))
2    Rock [1980]                                 O((n + m') log U S(n, m, C))
3    Rock [1980]                                 O(n log C M(n, m, U))
4    Bland and Jensen [1985]                     O(n log C M(n, m, U))
5    Goldberg and Tarjan [1988a]                 O(nm log (n²/m) log nC)
6    Bertsekas and Eckstein [1988]               O(n³ log nC)
7    Goldberg and Tarjan [1987]                  O(n³ log nC)
8    Goldberg and Tarjan [1987, 1988b]           O(nm log n log nC)
9    Gabow and Tarjan [1987]                     O(nm log n log U log nC)
10   Ahuja, Goldberg, Orlin and Tarjan [1988]    O(nm log log U log nC) and
                                                 O(nm (log U/log log U) log nC)

Strongly Polynomial-Time Combinatorial Algorithms

[The entries of this part of the table are illegible in the source.]

Figure 6.3. Polynomial-time algorithms for the minimum cost flow problem.

we invoke the similarity assumption. since they regarded as having practical utility. Bland and Jensen [1985] independently discovered a similar cost scaling algorithm. For problems that satisfy the similarity assumption. The scaling technique it did not capture the interest of many researchers. Mehlhom. Orlin and Tarjan [1988] M(n. minimum L> cost flow problem. the best bounds for the shortest path and maximum flow problems are: Polynomial-Time Bounds S(n. m) = (n^/m) Goldberg and Tarjan [1986] Using capacity and right-hand-side scaling. was suggested by Orlin initially little [1988]. and Ahuja. The pseudoflow push algorithms for the minimum cost flow problem discussed in Section 5. C) = nm ^%rT^gTJ log [ ^— + 2 J Ahuja. Bertsekas [1986] developed the first pseudoflow push algorithm. m. The wave algorithm . C) = Discoverers min (m log log C. m + rh/logC ) Johnson [1982]. This algorithm was pseudopolynomial-time. introduced independently by Bertsekas [1979] and Tardos [1985]. researchers gradually recognized that the scaling technique has great theoretical value as well as potential practical significance.175 For the sake of comparing the polynomial and strongly polynomial-time algorithms.7. Discoverers m) = m+ nm n log n log Fredman and Tarjan [1984] M(n. Orlin and Tarjan [1987] Strongly Polynomial -Time Bounds S(n. The RHS-scaling algorithm presented the which a Vciriant of Edmonds-Karp algorithm. one using capacity scaling and the other using cost scaling.8. this Goldberg and Tarjan [1987] used a scaling technique on a variant of obtain the generic pseudoflow push algorithm described in Section algorithm to Tarjan [1984] 5. proposed a wave algorithm for the maximum flow problem. However.8 use the concept of approximate optimality.m. Edmonds and Karp [1972] developed the first (weakly) polynomial-time eilgorithm for the in Section 5. This cost scaling algorithm reduces the minimum cost flow problem to a sequence of 0(n log C) maximum flow problems. Rock [1980] developed two different bit-scaling algorithms for the minimum cost flow problem.

9. cycle algorithm Both the algorithms are based on the negative due to Klein [1967]. upon similar ideas. [1988].3 contains the definition of a blocking flow. 6 W this Goldberg and Tarjan described an implementation of approach running in time 0(nm(log n) minflog nC. Although the wave This algorithm is very practical. For problems satisfying the similarity is assumption.. Goldberg and Tarjan [1987] obtained a computational time that the bound of 0(nm log n log nC). (The description of Dinic's algorithm in Section 6. m log n)). Using a dynamic tree data structure in the generic pseudoflow push algorithm. Goldberg and Tarjan [1988b] showed that flow a it if the negative cycle algorithm cycle always augments along / minimum mean cycle (a W for which V (i. Gabow and 0(nm log n U log nC). The success in this direction was due to who developed a triple scaling algorithm running in time to Ahuja. |W | is minimum). Goldberg and Tarjan [1988b] and Barahona and Tardos [1987] have developed other polynomial-time algorithms. log structures.j) Cj. who developed the double scaling algorithm. then is strongly polynomial-time.8 . the double scaling algorithm faster than all other algorithms for all network topologies except for very dense networks. The double as described in Section runs in 0(nm log U log nC) time. in these instances. algorithms by Goldberg and Tarjan appear more attractive. 176 for the minimum cost flow problem described in Section 5. They also showed minimum cost flow problem cam be solved using 0(n log nC) blocking flow computations. except the wave algorithm.) finger tree (see Using both Mehlhom [1984]) and dynamic tree data structures. These algorithms. Goldberg. showed that the negative cycle algorithm . Scaling costs by an appropriately larger factor improves the algorithm to 0(nm(log U/log log U) log nC) and a dynamic tree implementation improves the bound further to 0(nm log log U log nC). Goldberg and Tarjan [1988a] obtained an 0(nm log (n^/m) log nC) bound for ^he wave algorithm. its worst-case running time is not very attractive. required sophisticated data structures that impose a very high computational overhead. 5. The second success was due Orlin and Tarjan scaling algorithm. which was developed relies independently by Goldberg and Tarjan [1987] and Bertsekas and Eckstein [1988]. Barahona and Tardos if [1987]. situation has prompted researchers to investigate the possibility of improving the computational complexity of minimum first cost flow algorithms without using any complex data Tarjan [1987]. analyzing an algorithm suggested by Weintraub [1974].

e. is Currently. This desire was motivated primarily by (Indeed. For very sparse networks. Fujishige [1986]. that can valued data as well as integer valued level.. theoretical considerations. the fastest strongly polynomial-time algorithm due to Orlin [1988]. performs is 0(m log mCU) iterations. source of the difficult or underlying complexity in solving a problem. and also highlighted the desire to develop a strongly polynomial-time algorithm. in practice. Kapoor and to the Vaidya [1986] have shown that Karmarkar's [1984] algorithm. and Orlin [1988] provided subsequent improvements in the running Goldberg and Tarjan [1988a] obtained another strongly polynomial time Goldberg and algorithm by slightly modifying their pseudoflow push algorithm. the terms log in n. Galil and Tardos time.Tr\^ log (mCU) S(n. Since identifying a cycle with maximum improvement difficult (i. the worst-case running time of this algorithm nearly as low cis the best weakly polynomieil-time algorithm. Edmonds and Karp the [1972] proposed the first polynomial-time algorithm for minimum cost flow problem. they describe a method (based upon solving to an auxiliary assignment problem) determine a disjoint set of augmenting cycles with the property that augmenting flows along these cycles improves the flow cost by at least as much as augmenting flow along any single cycle. m. where .e. This algorithm solves the minimum cost flow problem as a sequence of 0(min(m log U. and are sublinear Strongly polynomial-time algorithms are (i) theoretically attractive for at least two reasons: run on real they might provide. Interior point linear programming algorithms are another source of polynomial-time algorithms for the minimum cost flow problem. at a more fundamental i. Their algorithm runs in 0(. Several researchers including Orlin [1984]. Tarjan [1988b] also show that their algorithm that proceeds by cancelling minimvun mean cycles is also strongly polynomial time. identify the and (ii) they might. m log n)) shortest path is problems. in principle. when applied minimum cost flow problem performs 0(n^-^ mK) operations.177 augments flow along then it a cycle with maximum improvement in the objective function.) C and log U typically range from 1 to 20. network flow algorithms data. NP-hard).. [1986]. even for problems that satisfy the similarity assumption. are problems more equally difficult to solve as the values of the tmderlying data becomes increasingly larger? The Tardos first strongly polynomial-time minimum cost flow algorithm is due to [1985]. O) time.

Although the research community has developed several different algorithms for the assignment problem. these time bounds are worse than that of the double scaling algorithm. we (j. The algorithm successively obtains a shortest path from with respect to the lir«. Vaidya [1986] suggested another algorithm for linear programming that solves the minimum cost flow problem in 0(n^-^ y[m K) time. To use this solution approach. [1955]. and Orlin have obtained contradictory Testing the right-hand-side scaling algorithm for the minimum cost flow problem. many of these algorithms share common The successive shortest path algorithm. the scaling algorithms [1986] not as efficient as the non-scaling algorithms. and for all J€N2 these arcs have zero cost s to t capacity. appears to at the heart of many assignment due to This algorithm is implicit in the first assignment algorithm Kuhn known as the Hungarian method. At fully this time. Asymptotically. We believe that when implemented with appropriate speed-up techniques. The primary efficient been on the development of empirically algorithms rather than the development of algorithms with improved worst-case complexity. 178 K= log n + log C + log U. 6. and introducing and unit for all i€N|.t) first transform the assignment problem into a a source minimum arcs cost flow (s. Bland and Jensen [1985] also reported encouraging results with their cost scaling algorithm. and is explicit in the papers by Tomizava [1971] and Edmonds and Karp When applied to an assignment problem on the network G = (N^ u N2 . A) the successive shortest path algorithm operates as follows. the research community has yet to develop sufficient evidence to assess the computational worth of scaling and interior point linear for the programming algorithms folklore. described in Section 5. Boyd results. According to the even though they might provide the best-worst case bounds on running eu-e times. s and a sink node t.ar .4 for the lie minimum algorithms..5 Assignment Problem The assignment problem has been emphasis in the literature has a popular research topic. features.i) problem by adding node . they found the scaling algorithm to be competitive with the relaxation algorithm for some classes of problems. scaling algorithms have the potential to be competitive with the best other algorithms. cost flow problem. minimum cost flow problem. [1972].

[1972] independently pointed out that Tomizava and Edmonds and Karp working with reduced lengths. S(n.C) O(n^) and for a Fibonacci heap implementation is it is 0(m+nlogn). For problems satisfying the similarity assumption.C)) time. Lawler [1976] described an Oiri^) .C) problem. then these applications take a total of 0(nm) time time. Kuhn's [1955] Hungarian method shortest path algorithm. the research community considered it to be O(n^) method.C)) time. Sodini [1986] also suggested a similar threshold assignment algorithm.m. Glover The more recent [1986] is threshold and Klingman also a successive shortest path algorithm which integrates their threshold shortest path algorithm (see Glover. in Whereas the successive shortest path an iteration. the problem augments flow along one path augments flow along all Hungarian method to the sink node. (For 0(nm + nS(n. is the time needed to solve a shortest path is For a naive implementation of Dijkstra's algorithm. since there are n augmentatior\s and each augmentation takes 0(m) runs in Consequently. Carraresi and Hoffman and Markowitz path problem to [1963] pointed out the transformation of a shortest an assignment problem. the Hungarian method.m.C) min(m m+nVlogC}. the to Hungarian method solves a (particularly simple) maximum flow problem send the maximum possible flow from the source node s to the sink node t using arcs vdth zero reduced cost. too. The algorithm solves the assignment problem by n applications of the shortest path algorithm for nonnegative arc lengths and runs in 0(nS(n. overall.m.m. If the shortest paths from the source node we use the labeling algorithm to solve the resulting maximum flow problems. some time after the development of the Hungarian method as described by Kuhn. is the primal-dual version of the successive After solving a shortest path problem and updating the node potentials. updates the node potentials.m. where S(n. The fact that the assignment problem can be solved as a sequence of n shortest Iri path problems with arbitrary arc lengths follows from the works of Jewell [1958]. Glover and Klingman [1984]) with the flow augmentation process. [1960] and Busaker and Gowen [1971] [1961] on the minimum cost flow problem. algorithm by Glover.mC)) = 0(nS(n. and augments one unit of flow along the shortest path. However. log log C. costs leads to shortest path problems with nonnegative arc details of Weintraub and Barahona [1979] worked out the Edmonds-Karp assignment algorithm for the assignment problem. S(n.179 programming reduced costs.

180 implementation of the method. The algorithm of Hung and Rom after [1980] maintains a strongly feaisible basis rooted at an overassigned node and. The basis of the assignment problem is highly degenerate. Researchers have also studied primal simplex algorithms for the assignment problem. and with no person or is object overassigned.C)) time. [1969] The algorithms of Dinic and Kronrod but and Engquist [1982] are essentially the same as the one we in the just described. minimum cost flow problem is due to E>inic is and Kronrod Hung eind Rom [1980] and Engquist [1982]. every person assigned. The successive shortest path algorithm maintains a solution w^ith unassigned persons and objects. The relaxation approach for the (1969]. of its 2n-l variables. but may be overassigned or unassigned. only n are nonzero.m. reoptimizes over All of these algorithms the previous basis to obtain another strongly feaisible basis. Glover and Klingman [1977a] devised the strongly feasible basis technique. and that it rurrs in 0(nS(n.C)) time. the mathematical programming community did not conduct much research on the network simplex method for the assignment problem until Barr.m.) Jonker and Volgenant [1986] suggested some practical improvements of the Hungarian method. the shortest path computations are somewhat disguised paper of Dinic and Kronrod [1969]. objects Throughout the relaxation algorithm. each augmentation. a primal algorithm that maintains a feasible it assignment and gradually converts into an optimum assignment by augmenting flows along negative cycles or by modifying node potentials. run in 0(nS(n.m. Subsequent research focused on developing . Both the algorithms maintain optimality of the intermediate solution and work toward feasibility by solving at most n shortest path problems with nonnegative arc lengths. many researchers realized that the Hungarian method in fact runs in 0(nS(n. Another algorithm worth mentioning This algorithm is is due to Balinski and Gomory [1964]. This approach closely related to the successive shortest path algorithm. These authors to developed the details of the network simplex algorithm when implemented maintain a strongly feasible basis for the assignment problem. The major difference the nature of the infeasibility. Subsequently. Both approaches start writh is in an infeasible assignment and gradually make it feasible. they also reported encouraging computational results. Derigs [1985] notes that the shortest path computations vmderlie this method. Probably because of this excessive degeneracy.C)) time.

The auction algorithm suggested in Bertsekas [1979].C)) time. This algorithm essentially in amounts to solving n shortest path problems and runs 0(nS(n. whereas the algorithm by Bertsekas and Eckstein increases prices that preserves e-optimality of the solution. the algorithm we have presented increases the prices of the objects by one unit at a time. Orlin [1985] studied the theoretical properties of Dantzig's pivot rule for the netvk'ork simplex algorithm and showed that for the eissignment problem this rule requires O(n^lognC) pivots. some variants of this Balinski's algorithm performs O(n^) pivots and runs O(n^) time. Goldfarb [1985] described some implementations of O(n^) time using simple data structures and in Balinski's algorithm that run in 0(nm + n^log n) time using Fibonacci heaps. initially. it it (Although his basic algorithm maintains a is not a dual simplex algorithm in the traditional sense because at does not necessarily increase the dual objective algorithm do have this property. A naive implementation of the algorithm runs in [1988] described a scaling version of Dantzig's pivot 0(n^m log nC). is due to Bertsekas and uses basic ideas originally [1988] described a Bertsekas and Eckstein more recent its version of the auction algorithm. this threshold value equals C and within O(n^) pivots its value is halved. by the maximum amount Bertsekas is [1981] has presented another algorithm for the assignment problem which cost flow in fact a specialization of his relaxation algorithm for the minimum problem (see Bertsekas [1985]). Akgul [1985b] suggested another primal simplex algorithm performing O(n^) pivots. analysis is Out presentation of the auction algorithm tmd somewhat different that the one given by Bertsekas and Eckstein [1988]. his algorithm performs 0(n^log nC) pivots. Hung [1983] describes a pivot rule that performs at at most O(n^) consecutive degenerate pivots and most 0(n log nC) nondegenerate pivots. Balinski [1985] developed the signature method. Roohy-Laleh [1980] developed a simplex pivot rule requiring O(n^) pivots.) in every iteration.ISl polynomial-time simplex algorithms. which is a dual simplex algorithm for the eissignment problem. dual feasible basis. The algorithm cost.m. Ahuja and Orlin rule that performs 0(n^log C) pivots and can be implemented to run in 0(nm log C) time using simple data structures. For example. . Hence. essentially consists of pivoting in any arc with sufficiently large reduced The algorithm defines the term "sufficiently large" iteratively.

[1986] and Jonker and Volgenant [1988] [1987] appear to be the fastest. thereby achieving jm OCn'^' ^m log C) time bound. results to date seem to justify the following observations about the algorithms' relative performance.11 has presented a modified version of algorithm in Orlin and Ahuja [1988]. Over the many computational studies have compared one algorithm with a few other algorithms. The primal simplex algorithm is slower than the the latter primal-dual. most of the research effort devoted to assignment algorithms has stressed the development of empirically faster algorithms. Since no paper has compared all of these zilgorithms. problem is 0(nm + n^ log n) which is achieved by many assignment Scaling algorithms can do better for problems that satisfy the similarity first scciling assumption. Glover and Klingman [1977a] on the network simplex method. it is difficult to assess their computational merits. using bit-scaling of costs. the best strongly polynomial-time bound to solve the assignment algorithms. As mentioned previously.8 solves problem in 0(nm log nC) since every push is a saturating push. the successive shortest path algorithms Among due to Glover et al. Gabow [1985] . these two algorithms achieve the boimd to solve the assignment problem without using any sophisticated data structure. Martello and Toth [1982] [1988] on the primal-dual method. years. Carpento. Using the concept of e-optimality. by McGinnis [1983] and Carpento. This time bound For problems satisfying best time is comparable to that of Gabow and Tarjan 's algorithm. They also improved the time bound of the auction algorithm to 0(n^'^m lognC). relaxation and successive shortest path algorithms.Currently. but the two algorithms would probably have different computational attributes. Nevertheless. algorithm running in time 0(n^' Gabow and Tarjan [1987] developed another scaling push algorithm the assignment ^m log nC). Section 5. and by Glover [1986] and Jonker and Volgenant [1987] on the successive shortest path methods. Some representative computational studies are those conducted by Barr. His algorithm performs O(log C) scaling phases and solves each phase in OCn'^'^m) time. showed that the scaling version of the auction Bertsekas and Eckstein [1988] algorithm runs in this 0(nm log nC). Observe that the generic pseudoflow for the minimum cost flow problem described in Section 5. developed the algorithm for the assignment problem. three approaches. the similarity assumption. on the relaxation methods. by Engquist et al. Bertsekas and Eckstein is found that the scaling version of the auction algorithm competitive with Jonker and Volgenant's algorithm. Martello and Trlh [1988] present .

1b) [vj.j) € A) € A) s. Generalized network flows arise in may application contexts. four other topics deserve mention: (ii) generalized network flows. i.e. j). Researchers have studied several generalized network flow problems.183 several cases. (iii) multicommodity flows. For example. the multiplier might model pressure losses in a water resource network or losses incurred in the transportation of perishable goods. and network design. arcs do not necessarily conserve flow. if i = . units of flow enter an arc (i. then Tj: Xj: units "arrive" at arc. extension of the conventional An maximum two flow problem is the generalized maximum flow problem which either maximizes the flow out of a source the flow into a sink node or maximizes of node (these objectives are different!) The source version the problem can be states as the following linear program. (iv) convex cost flows. < «>.. = for all arcs. commodity network flow problems with linear Several other generic topics in the broader problem theoretical (i) network optimization are of considerable and practical interest. Maximize v^ (6ia) subject to X {j: "ij {j: S (j. if i ?t (i. In particular. If node 1. In the conventional flow networks. FORTRAN implementations of assignment algorithms for dense and sparse 6. is a is nonnegative flow multiplier dissociated with the lossy and. in this chapter assume that arcs the flow entering an arc equals the flow leaving the arc. t for aU i E N (6.i) "'ji'^ji = K'if» = s S 0. We shall now discuss these topics briefly. Tj. Generalized Network Flows The flow problems we have considered conserve flows.6 Other Topics Our domain of discussion in this paper has featured single costs.t. then the arc is gainy. 1 < rj: < then the arc Tjj if 1 < Tj. If In xj: models of generalized network flows. j.

for all (i. is essentially a primal-dual algorithm. The recent paper by Goldberg. Even problems with nonseparable. Note that the capacity restrictions apply to the flows entering is the arcs. The paper by Truemper [1977] surveys these approaches.e. . however. The approach. Convex Cost Flows We shall restrict this brief discussion to i. the negative cycle algorithm. the objective function can be written in the form V (i. note that Vg not necessarily equal to v^.. and the primal-dual algorithm for the cost flow problem apply to the generalized maximum flow problem. j) e A. In the generalized minimum cost flow problem. typically. These are three main approaches to solve this problem. The generalized maximum flow problem has many similarities with the minimum minimum cost flow problem. we wish to determine the minimum first cost flow in a generalized network satisfying the specified supply/demand requirements of nodes. Problems containing nonconvex nonseparable cost terms such as xj2 e A are substantially X-J3 more difficult to solve and continue to pose a significant challenge for the mathematical programming community. is due to Jewell [1982]. find that about 2 to 3 times slower than their implementations for the ordinary minimum [1988b]. find their implementation to be very efficient in practice. Further. which is an extension of the ordinary minimum cost flow problem. Plotkin and Tardos [1986] describes the first polynomial-time combinatorial algorithms for the generalized maximum flow problem. are not pseudopolynomial-time. These algorithms. The second approach [1979] the primal simplex algorithm studied by Elam. Extended versions of the successive shortest path algorithm.j) Cjj (x^j). cost flow algorithm. and Klingman among they Elam it is et al. because of flow losses and gains within arcs. but convex objective functions are more difficult to solve. mainly because the optimal arc flows and node potentials might be fractional. The third approach.184 < x^j < uj: . convex cost flow problems with separable cost functions. due to Bertsekeis and Tseng generalizes their minimum cost flow relaxation algorithm for the generalized minimum cost flow problem. Glover others.

program (see. of (ii) a continuously differentiate function. classes of Solution techniques used to solve the two problems are quite is different. Bradley.185 analysts rely on the general nonlinear programming techniques to solve these problems. More elaborate For example.2a) e A subject to Y {j: (i. The research community has focused on two (i) classes of separable convex costs flow each Cj. The separable convex cost flow problem has the follow^ing formulation: Minimize V (i.j) Cj. The paper by Ahuja. with linear necessary) with sufficiently small size. alternatives are possible. < x^j for all (i. (62c) In this formulation. j) e A. negative cycle algorithm. There a well-known technique for transforming linear functions to a linear a separable convex program with piecewise and Magnanti standard [1972]). and Gupta and suggests a pseudopolynomial time algorithm. Observe that segments chosen (if it is possible to use a piecewise linear function. Batra. (xj. (6. thus increasing the problem size. (xj. is a convex function. we don't).) (6.) is problems: each Cj.j) e A. then we could solve the if problem exactly using a linear approximation for any arc (i. (xjj) for each (i. However.j) ^i] {j: € A S (j. it is possible to cost carry out this transformation implicitly and therefore modify many minimum flow algorithms such as the successive shortest path algorithm. convex problem a priori (which of we knew the optimal solution to a separable course.i) ''ji = ^^'^' ^°^ all i € N. j) with only three . Hax This transformation reduces the convex cost flow problem to a it minimum cost flow problem: introduces one arc for each linear segment in the cost functions..2b) e A < Ujj . Cj. primal-dual and out-of-kilter algorithms. e. (xjj) is a piecewise linear function. to solve convex cost flow problems without increasing the problem [1984] illustrates this technique size. to approximate a convex function of one variable to any desired degree of accuracy.g.

same underlying network. Researchers have suggested other solution strategies. and therefore solve the problem in pseudopolynomial time. but share common a linear arc capacities. Florian [1986]. Klincewicz [1983]. Uj. Kennington and Helgason Meyer and Kao [1981]. and Bertsekas. cases. Suppose through r. we state programming formulation of the multicommodity minimum problem and its cost flow problem and point the reader to contributions to this specializations. Some time. Helgason and Kennington [1978]. to obtain Minoux has also developed a polynomial-time algorithm the convex const flow problem. using ideas from nonlinear progamming for solving this general separable convex cost flow problems.3a) A subject to . This observation has prompted researchers to devise adaptive approximations that iteratively revise the linear approximation beised upon the solution to a previous. coarser. Any other breakpoint in the linear approximation would be irrelevant and adding other points would be computationally wasteful. 1 Let denote the supply/demand vector of commodity cost flow Then the multicommodity minimum ^ problem can be formulated as follows: Minimize V 1^=1 V (i. approximation. Some important references on this [1980]. the versions of the convex cost flow problems can be solved in polynomial [1984] has devised a polynomial-time algorithm for Minoux one of [1986] its special mininimum quadratic cost flow problem.186 breakpoints: at 0. and the optimal flow on the arc. If (See Meyer [1979] for an example could we were interested in only integer solutions. Rockafellar [1984]. that the b*^ problem contains r distinct commodities numbered k.j)e k c^: k x^(6. of this approach). an integer optimum solution of Muticommodity Flows Multicommodity flow problems arise when several commodities use the In this section. then we choose the breakpoints of the linear approximation at the set of integer values. topic are Ali. Dembo and Klincewicz [1981]. Hosein and Tseng [1987].

the model contains additional capacity each arc. (6. (6. subsequently generalized this decomposition approach to linear programming. (63c) < k Xj. As indicated by its the "bundle constraints" (6. 1] {j: {j: V (i. . Researchers have proposed three basic approaches for solving the general multicommodity minimum resource-directive cost flow problems: price-directive decomposition. '^ < u:j. for ^ all (i. the total flow on any arc cannot exceed capacity.j). every s*^ commodity k has objective a is source node and a sink node. commodities way that minimizes overall flow We problem is first consider some special cases.187 k X.j).j) and all k . then decomposes single commodity minimum cost flow corxstraints problems.3c). With the presence of the bundle the essential problem in a is to distribute the capacity of each arc to individual costs. (6. Frisch [1968] showed how source or a to solve the multicommodity maximum flow problem with a common common sink by a single application of any maximum flow algorithm. < k u.j) e A) e A y ktl ' k X. We refer the reader to . restrictions on the flow of each commodity on Observe that it if the multicommodity flow problem does not contain bundle into r constraints. one for each commodity.. represented respectively by to and tK The t*^ maximize the sum of flows that can be sent from s*^ to for all k. Hu [1963] showed how network in to solve the two-commodity maximum flow problem on an undirected Rothfarb. Ford and Fulkerson [1958] solved the general multicommodity Dantzig and Wolfe maximum [1960] flow problem using a column generation algorithm. The multicommodity maximum flow a special instance of In this problem. decomposition and partitioning methods.. as captured by (6.j) k k ~ ^i ' ^OT a\] i and k.3c). Further. Shein and pseudopolynomial time by a labeling algorithm.3). x-- and k c-- represent the amont of flow and the unit cost of flow for commodity k on arc (i. (6.3b) ''ii (i.3d).3d) k In this formulation. for all (i.

the constraint on arc Ujj (i.are multicommodity flows. in some applications. Network Design We network. algorithmic developments on the multicommodity minimum made on cost flow problem have not progressed at nearly the pace as the progress the single commodity minimum cost flow problem. These network design models contain is that indicate whether or not an arc included in the network. restricts the total included. the network might . of the form (6. al. these models involve k x^. related The design decisions yjj and routing decisions by "forcing" constraints of the form 2 k=l ''ii - "ij yij ^^^ ' ^" ^^'^^ which replace the bundle constraints multicommodity flow problem (6.j) flow to be the arc's design capacity constraints Many modelling enhancements are possible. Many design problems can be stated as fixed cost network flow problems: is (some) arcs have an associated fixed cost which incurred whenever the arc carries 0-1 variables yjj any flow. some may restrict the underlying network topology (for instance. Typically. Although specialized primal simplex software can solve the single commodity problem 10 to 100 times faster than the general purpose linear programming systems.3c) in the convex cost k These constraints force the flow the arc is x^- of each if commodity k on the arc is arc (i.j) to be zero if not included in the network design. Unfortunately. in other applications.3). The book by Kennington and Helgason [1980] describes the details of a primal simplex decomposition algorithm for the multicommodity minimum cost flow problem. The design problem is of its considerable importance in practice and has generated an extensive literature of own.188 the excellent surveys by Assad [1978] and Kennington [1978] for descriptions of these methods. the network must be a tree. the algorithms developed for the multicommodity minimum cost flow problems generally solve thse problems about 3 times faster than the general purpose software (see Ali et [1984]). have focused on solution methods that is. for finding optimal routings in a on analysis rather than synthesis. for example.

. optimization-based heuristics. These solution methods include dynamic programming. Hershel Safer. Acknowledgments We Wong and are grateful to Michel Goemans. is many different objective functions arise in practise. dual ascent procedures. Apple Computer. Lav^ence Wolsey . 1987] have described the broad range of applicability of network design models and summarize solution methods network design literature. by Grant AFOSR-88-0088 from the Air Force Office of Scientific Research. and integer programming decomposition (Lagrangian relaxation.189 need alternate paths to ensure reliable operations). Magnanti and Wong [1984] and Minoux [1985. One of the most popular "" Minimize £ ^ k=l (i^j)e k c• k x^^ + Y. The research Presidential of the first and third authors was supported in part by the Young Investigator Grant 8451517-ECS of the National Science Foundation. for these problems as well as many references from the [1988] discuss Nemhauser and Wolsey many underlying methods from integer programming and combinatorial optimization.j) A V ij € A (as well zs fixed costs k which models commodity dependent per unit routing costs c Fjj for • the design arcs). Usually. Inc. network design problems require solution techniques from any integer programming and other type of solution methods from combinatorial optimization. ^ (i. and Prime Computer.. Also. We are particularly grateful to William Cunningham many valuable and detailed comments. and by Grants from Analog Devices. Benders decomposition) as well as emerging ideas from the field of polyhedral combinatorics.Richard Robert Tarjan for a careful reading of the manuscript and many for useful suggestions.

Operations Research Center. To appear Ahuja. Tarjan. 1985a. Sloan School Management. and S. Cambridge. C. MA. Tarjan.V. M.. R.. Reading. Addison-Wesley. and J. in Oper.of Oper.T. Orlin.K. R.. Assignment and Minimum and Ahuja. and R. To appear.I. Technical Report Cambridge. 1988. J. Finding Minimum-Cost Rows by Double of Scaling.I. Improved Time Bounds for the Maximum Flow M. Improved Primal Simplex Algorithms Cost Flow Problems. K. M. 1988. R.I. Personal Communication. 222-25 Goldberg. Cambridge. M.. To appear. Ahuja. ]. and J. MA.E.K. 1988. 16. L. R. Hop>croft. and R. Ahuja. OR Aho.. 1988.. 2047-88. 1987.E.E. Orlin. and R.. H.E. Res.B. MA. J. Kodialam.. Magnanti. . MA.B. Gupta. 193. and T.I. .B. Department State University. 1988. Computer Science and Operations Research. J. K. 1974. Ahuja. Sloan School of Management.K. Ullman.K. Orlin. Cambridge. Mehlhom. Research Report. Orlin. J.K. Euro. Tarjan. Akgul. Ahuja. Cambridge. R. J. 055-76. J.190 References Aashtiani. Stein.. 1987.E.. Working Paper 1966-87. Flow Problem. M. A.T. Improved Algorithms for Network Flow Problen«. N.B. Ahuja. Flow Algorithms. Orlin. Working Paper No. R.B. R. M.K. Problem. Orlin.D. R.T... 1988.T.. Bipartite J. Res. Technical Report No. A. L. for the Shortest Path. R. Batra. The Design and Analysis of Computer Algorithms.C.V. A Fast and Simple Algorithm for the Maximum M. 1984.B. Sloan School of Management. Working Paper 1905-87.K. MA. of Shortest Path and Simplex Method.I. K. Implementing Prin\al-E>ual Network Operations Research Center. J.. and Ahuja. 1976.T.B. North Carolina Raleigh. Faster Algorithms for the Shortest Path Problem. A Parametric Algorithm for the Convex Cost Network Flow and Related Problems.A. and Orlin. Tarjan. MA.

M. Networks 8. Baratz. 1-13. Kennington. 403-420. Man. J. MA. The Convex Cost Netwrork Flow Problem: A State-of-the-Art Survey. and E. 1977a. McCarl and P. Texeis. 1978. Operations Research. 1977. Proceedings External Methods and System Analysis. Signature Methods for the Assignment Problem. Note on Weintraub's Minimum Cost Flow Algorithm.I.C.. N.127-134. F.. I. Oper. J. Basis Algorithm Ban. L. Euro. Implementation and Analysis of a Variant of the Dual Method for the Capacitated Transshipment Problem. Southern Methodist University. A.E. D. R. 1964. Ali. 1985b. Glover. Glover. 1978. R. 16. North Carolina State University. Shetty. Klingman. 527-536. Kennington. F. 33.191 Akgul. Oper. and D.. Cambridge. A Genuinely Polynomial Primal Simplex Algorithm for the Research Report.. Ali. Armstrong. Prog. Multicommodity Network Problems: Applications and Computations. B. 10. Construction and Analysis of a Network Flow Problem Which Technical Report TM-83.E.. 1985. A Survey. Department of Computer Science and Assignment Problem.37-91. Technical Report OREM 78001.I. R. Sci.. of Mathematics. Res. and J. and D. Helgason. and D. Forces Karzanov Algorithm to O(n^) Running Time. Math. B. B. K. MIT.L. Multicommodity Network Flows Balinski.L. 1984.D. M. Laboratory for Computer Science. M. 578-593. Bamett. Trans. Klingman. LIE. Research Report. A Primal Method for the Assignment and Transportation Problems. Barr. D.. 12. Cambridge. MA. Barahona.T.. 1980. A Network Augmenting of the International Path Basis Algorithm for Transshipment Problems. Assad. R. Farhangian. Whitman. and R. F. Res. Symposium on . Tardos.. Raleigh. 4. 1977b. Dept. Klingman. Wong. Patty. Balinski. V. 1987. Comory. A. The Alternating Path for the Assignment Problem. A. M.

Games and Transportation Networks. To appear Bertsekas. D.. D. M. Math. Enhancement 17. D. John Wiley & Sons.. Tseng. and J. 152-171. Gallager. Data Networks. and R. M. Bertsekas. .P. Prog. Greece. of Operations Research 14. C. Klingman.. Barr. 1987. IXial Coordinate Step Methods for Linear Network Flow Problems. Bertsekas.. and D. Distributed Relaxation Methods for Linear Network Flow Problems. A Unified Framev^ork for Primal-Dual Methods in Minimum Cost Network Flow Problems. P. Prog. 2.. Relaxation Methods for Network J. and D. Klingman. Math. Bazaraa.P. Berge. R. 105-123.P. D. Flow Problems with Convex Arc Costs. Linear Programming and Network Flows. Oper. Bertsekas. D. Cambridge. Generalized Alternating Path Algorithm for Transportation Problems. of 25th IEEE Conference on Decision and Control.I. R.T. Math. QuaH. & Sons. 1958. Series B. 16-34. 125-145. A Distributed Algorithm for the Assignment Problem. Glover..J.T. M. On a Routing Problem. Eckstein.. 1986. 137-144.. and A. Bellman.. A Nev^ Algorithm for the Assignment Problem. Res.P. P. Euro. Bertsekas. Also in Annals 1988. of Spanning Tree Labeling Procedures for Network Optimization. P. John Wiley 1979. 1985. 1962. P. for Information Decision Systems. Appl. INFOR J. Prog. D. Bertsekas. 21. and P. P. Ghouila-Houri. Jarvis. Bertsekas. Report LIDS-P-1653.. R. 1981. 25. Bertsekas. 32. MA. D. Laboratory Cambridge. 1978. SIAM of Control and Optimization . 1987. The Auction Algorithm: A Distributed Relaxation Method for the Assignment Problem. and 1978.1219-1243. Working Paper.I. 1979. ]. Programming. Laboratory for Information Decision systems. Proc. 16. 1987. MA. Glover. 87-90. Prentice-Hall. F. in Math. A. D. Hosein.192 Barr. Athens.

O. Brown. et (ed. Design and Implementation of an Efficient Priority Queue. FORTRAN Codes for Network As Annals and J. Optimization. Personal Communication. 1988a. A.G. 99-127. C. Sodini. and Orlin. In B.. R. A Procedure for Determining a Family of 15. Assad. FORTRAN Codes for Network As Annals and P.Y. Gowen. Res. Ball. Bombay. Theory 10. Relaxation Methods for Minimum Cost Ordinary and Generalized Network Flow Problems. The Relax Codes al. Oper. Res. O. Jensen. Busaker.P. On the Computational Behavior of a Polynomial-Time Network Flow Algorithm. Math. B.L. Carraresi. Comp. 1977. 1985. 1988b. Tseng. Martello. D. 1986.. Cheriyan. Bodin. G. Graves. Addison-Wesley. Operational MD. Magnanti.. and T. Tseng. Carpento. Boyd. A. Bland. 1988. L.J. Oper. (eds. Algorithms and Codes for the Assignment Problem.G. Technical Report.. Simeone et al. D. S.). Bradley. Toth.. 36. Design and Implementation of Large Sri. Scale Primal Transshipment Algorithms. Golden. Parametrized Worst Case Networks for Preflow Push Algorithms. Man. Bertsekas. . Technical Report No. John Hopkins University. of Operations Research 13. G. Cornell University. and E. D. Ithaca. Optimization. An Efficient Algorithm for the Bipartite Matching Problem. of Operations Research 33.R. and J. Simeone. 86-93. R. Applied Mathematical Programming. Tata Institute of Fundamental Research. P. 1-38. In B.). Van Emde. Res. and D. 1977. A.. Computer Science Group.B.. Zijlstra. 125-190. School of Operations Research and Industrial Engineering. Sys. 10. and M. Oper. 65-211. and P. Kaas. 1961. Eur. 21. L. C. 1977. J. of Vehicles L. Research Office. A. Baltimore. 1983. 93-114. 1986. 23. R. N. S..P.193 Bertsekas. G. P. and G..O. Minimal-Cost Network Flow Patterns. Technical Report 661. Hax. and P. P.. for Linear Minimum Cost Network Flow Problems. 193-224. Routing and Scheduling and Crews. 1988. Bradley. India. and P. Boas.

Rfs. 174-183. 215-221. ACM Trans. 187-190. New Delhi. G. 1967. 1960.B. Kuhn and A.. 1-16. A Network Simplex Method.N. Rosenthiel Graphs. 1975.H. Fulkerson. Res. Activity Koopmans 359-373. Mathematical Methods of Solution of 112-125 (in Russian). . 196-208. Christophides.).. In H. Dantzig. Dantzig. Tucker (ed. W. G. G. 101-111.W. T. Cunningham. Annals of Mathematics Study 38. Wolfe. Dantzig. Cheung. G. (ed. Maheshwari. and S. 1979. Princeton University Press. John Wiley & Sons. in Linear 1955.B. 1951. In P. Application of the Simplex Method to a Transportation Problem.C. Sd. Math. of Oper. Flow. Decomposition Principle for Linear Programs. G. Academic Press. Vl ) Operation. Computational Comparison of Eight Methods for the Mzocimum Network Flow Problem. Upper Bounds. on Math. 1962. Analysis of Preflow Push Algorithms for Maximum Network Technical Report. Graph Theory : An Algorithmic Approach. On the Max-Flow Min-Cut Theorem of Networks. Algorithm for Cor\struction of Maximum Flow in Networks with Complexity of OCV^ Economical Problems 7. Secondary Constraints. All Shortest Routes in a Graph. Cherkasky. N. Dantzig. 1960. J. Inc. 6.H. NY. R.R. 11. Dept. 1956. India.V. Oper.B.194 Cheriyan. and Block Triangularity Programming.B. Theory of Gordon and Breach. 1980. W. and D.. Princeton University Press. Analysis of Production and Allocation. G. Dantzig.). (ed. of Computer Science and Engineering. Dantzig. Mafft. 1977. and P. Princeton. 105-116. Theoretical Properties of the Network Simplex Method. 8. 1987. In T.W. Cunningham. Linear Programming and Extensions. Indian Institute of Technology. NJ. Linear Inequalities and Related Systems. B. Man.). On the Shortest Route through a Network. Software 6. Dantzig. 4..B.B. Economeirica 23. Pro^. 91-92. G. 1976.

Network Flow Problen\s with Convex Separable Deo. An Algorithm for Solution of the Assignment Problem. Ontario. and B. E. 161-186. Algorithm for Solution of a Problem of Soviet Maximum Flow in Networks with Power Estimation. 1984.V.. 1979.A. Dokl. 300. Algorithm 360: Shortest Path Forest with Topological Ordering. 1969.A.269-271. Doklady 10. Lecture Notes in Economics and Mathematical Systems. Dinic. 1988. R. Dial. 1277-1280. and Vol. 1970. Res. Oper. 1988. 11.. Soviet Maths. G. Numeriche Mathematics 1. Meier.57-102. Reaching. Shortest-Route Methods: 1.. and M. Pruning and Buckets. 1981. 2-[5-248. A Note on Two Problems in Connexion with Graphs. U. Programming in Networks and Graphs. Networks 14. Glover. The Shortest Augmenting Path Method for Solving Assignment Problems: 4. Comm. Implementing Goldberg's Max-Flow Algorithm: A Computational Investigation. Denardo. Kamey. Canada. S.. West Germany. Motivation and Computational Experience.L. . Derigs. Networks 9. Study 15. 1979. and C Pang. E. A Scaled Reduced Gradient Algorithm for Costs. 1959. U.A.. ACM 12. F. 632-633. 27. Dial. Shortest Path Algorithms: Taxonomy and Annotation. Derigs. Exponential Grov^h of the Simplex Method for the Shortest Path Problem. 275-323.195 Dembo.. 1324-1326. and D. E. and J. Technical Report. University of Bayreuth. R. Klincewicz. Math. W. Klingman. 1969. A Computational Arvalysis of Alternative Algorithms and Labeling Techniques for Finding Shortest Path Trees. R. N. University of Waterloo. J. 1970. Springer-Verlag. Unpublished paper. E. 1985. Dijkstra. U. D. Prog. Dinic. Kronrod. 125-147. Fox. Math. Edmonds. Annals of Operations Research Derigs.

Network Flow and Testing Graph Connectivity. 167-196. M. A Strongly Convergent Primal Simplex Algorithm for Generalized Networks.R. on Engquist. 1979.M.U. }. Computer Science Press. 1979. Theory TT-2... Sd. Report Rand Corp. Prog. 507-518. Infor. J. Math. INFOR 20. Theoretical Improvements in Algorithmic Efficiency for Network Flow Problems. Fulkerson. Nonlinear Cost Network Models in Transportation Analysis. 117-119. Canad. Karp. Man. of Oper. IRE Trans. Shannon. Glover. L. Network Flow Theory. 1986. Jr. M. S. 39-59. 8. Cambridge.. P. Solving the Trar\sportation Problem. Elias. Maximal Flow through a Network. . and D. Femandez-Baca. Ames.W. Ford.T. 1976. A Successive Shortest Path Algorithm for the Assignment Problem. 1972. 4. and R. On the Efficiency of Maximum Flow To appear in Algorithms on Networks with Small Integer Capacities. Elam. F.196 Edmonds.R. State University. Laboratory for Computer Science.E. L..I.R. and R.. Fulkerson.. 1956. 4.R. 399-404. Ford. 1956.E. Res. and C. Ford. Math. The Max-Flow Algorithm of Dinic and Karzanov: An Exposition. 3. Comm.. Even. >4CM P-923.. J.. Tarjan. 1956. Iowa Algorithmica. Note on Maximum Flow Through a Network. /. 370-384. Klingman. Feiitstein. Florian. 1962.. 24-32. Technical Report TM-80. and C.R. Department of Computer Science. Jr. A. SI S. Math. S. MA. 1975. Santa Monica. Even. J. 1956. 1987. 5. Maryland. R. Martel. Even. M.. 248-264. ACM 19. L. lA. Study 26. 1982. and D. Research Report. Jr. Floyd. 345. Graph Algorithms. Algorithm 97: Shortest Path. and D. CA. AM Comput.. D.

H.E. 5.N. 1986. Ford. 1987.. Constructing Maximal Dynamic Flows from Static Flows. R. D. L. Communication.R.B. and DR. and Transportation Networks. 6. Ford. Ford. R. .Sci. 1958. 9. 2. Sci of ACM 34(1987). L. L. 31.. 197 Ford. Princeton University Press. SIAM ]. Fredman. Mirchandani (eds. 4. Fredman. and D. Quart. of Computing 83 - 89. Sci. Scaling Algorithms for Network Problems..R. New Bounds 5.R. 1955. Man... Fulkerson. Oper.L. H. NJ. Faster Scaling Algorithms for Network SIAM ]. L. SIAM J.. An 0(m^ log n) Capacity -Rounding Algorithm for the Minimum Problem: A Dual Framework of Tardos' Algorithm.R. 1985. Computation of Maximum Flow in Networks. A Suggested Computation for Maximal Multicommodity Network Flow.. M. 35.. Gabow. 277-283.. and Frisch. H. 596-615. 148-168. Dantzig. Logist. L. Transmission. on the Complexity of the Shortest Path Problem. Cost Circulation 298-309.. Fibonacci Heaps and Their Uses in of Improved Network Optimization Algorithms.T. S. Naval Res. Quart. 1988.. D. 338-346. Math. Tarjan. R. Jr. Gabow. Naval Res. Princeton. J. Fulkerson. Appl. Jr. on Found.R. (submitted). Francis. also in /. Fulkerson. Jr. 1957. Res. and R. Math. Log. To appear. Fujishige. Addison-Wesley. Fulkerson. John Wiley & Sons.R. Fulkerson. Tarjan. 97-101. Discrete Location Theory. 25th Annual IEEE Symp. and C. Comput. M. and D. 47-54. 18-27. Fulkerson. 1961. Flows in Networks.. R.. A Primal-Dual Algorithm for the Capacitated Hitchcock Problem. and P. 1984. 1958. and Problems.).Sys. 1986... and D.E. An Out-of-Kilter Method for Minimal Cost Flow Problems. Comp. Prog.ofComput. 1962. 1971. 419-433.R. Frank.N. I.

Gallo. Gallo. Glover.. Galil. Gallo. Galil. No. Shortest Paths: A Bibliography. Gavish. EXial 1974. F.. Simeone. Maffioli. Klingman. Italy. An 0(VE log^ V) Algorithm for the Maximum Flow Problem. Tardos. /. 12. Ruggen. P. . Acta Informatica 14. B. Kamey. F. Toth. 21. R.. Pallottino. Oper. 1981. Glover. Math. Witzgall. In Fortran Codes for Network Optimization. R. and D. Sci. and C. B. 221-242. Shlifer. J. Prog.. Mead. D. Prog. S. The Threshold Shortest Path Algorithm.. G. Schweitzer. Rome. Z. 1980. 27th Annual Symp. A Comparison of Pivot Selection Rules for Primal Simplex Based Network Codes. 199-202. Math. 1984. 12-37. 1988. 136-146. and Primal-Dual Computer Codes 4. of Comp. on the Found. Proc. Z.198 GaUl. Gilsinn. and A. 203-217. and G. Naamad. and D. Klingman. and E. 14. Theoretical Comp. Network Flow Algorithms. OCV^/S E^/^) Algorithm for the Maximum Flow Problem. G.. 1986. Sci. A Performance Comparison of Labeling Technical Note 772. C.C. Sys. G. Glover. D. Threshold Assignment Algorithm. 1986. and S. D. 226-240. Z. On the Theoretical Efficiency of Various 103-111. and M. and S. Z. 1983.). Study 26. F. Glover. Res. Bureau of Standards. Min-Cost Flow Algorithm... (eds. F. Klingman. and D. Shortest Path Algorithms. Sofmat Document 81 -PI -4-SOFMAT-27. An 0(n^(m + n log n) log n) Sci. 1980. 1. Netxvorks 14. . D. Pallottino As Annals of Operations Research 13. 3-79. The Zero Pivot Phenomenon in Transportation Problems and Computational Implications. Minimum Cost Network Eow Problem.. 1982. Glover. Gibby. Letters 2. National Algorithms for Calculating Shortest Path Trees. Klingman. Implementation and Computational for Comparisons of Primal. Galil. F. 1977. Washington. 1973. P. and Its E. Glover. Starchi. ofComput.. Networks 191-212. Pallottino.

J.. 1974. M. Klingman.. and RE. Goldberg. 1979. 1984. 363-376. A New Max-Flow for Algorithm. Augmented Threaded Index Method for Network Optimization. Tarjan. Klingman. 1106-1128. Schneider. Naval Res. and R. F. 1976.199 Glover. S. F.. Man. A New Polynomially Bounded Shortest Path Algorithm. Man. 18th ACM Symp. F. Klingman. Laboratory for Computer MA. 1986. Applications of Management Glover. Comprehensive Computer Evaluation and Enhancement of Maximum Flow Algorithms. Res. . D. MA.V. New Polynomial Sci. A. Laboratory Computer Science. Science. A. D. Klingman. and D. Goldberg.. 33. To appear in ACM. Phillips. 65-73. 1987. F. Science 3. E. Solving Minimum Cost Flow Problem by of Proc..T. D. AIIE Transactions Glover.I. Technical Report MIT/LCS/TM-291.F. Glover. 1985. Kamey. D. 1988. A Primal Simplex Variant Maximum Flow F. Netvk'ork Applications in Industry and Government. Klingman. D. Glover.. Goldberg. 9. Problem.E. 1985.. Napier. 1974. Glover. 12. Change Criteria. Glover.I. INFOR Goldberg..T. Basis and Solution Algorithms Problem. on the Theory of Comput. Quart. Logis. A New Approach to the Maximum Flow /. 109-175. Whitman. 793-813. Whitman. 136-146. Successive Approximation. Tarjan.. Stutz. 31. Phillips. Combiiuitorial Algorithms for the Generalized Circulation Problem. and Tardos. and R. D. and D. Klingman. A. R.V. Shortest Path Algorithms and Their Computational Attributes. and J. 20. J. Problem. 1985. 31. 41-61. 19th ACM Symp.A. Proc.V. and N. for the F. on the Theory Comp.V. Sd. Plotkin. and D. N.. Mote. M. Mote. 136-146. D.. Oper. Klingman. A. Research Report. Cambridge.. A Computational Study on for Tranportation Start Procedures. and A. 293-298. Cambridge.

NY. I. A Practicable Steepest Edge Simplex Algorithm. Efficient Dual Simplex Algorithms for the Assignment Problem. Controlled Rounding of Tabular Data for the Cerisus Bureau at the : An Application of LP and Networks. Prog. f. Gomory. 1987. 1988a. D. D. and M. E. M. (eds. Hao. Networks 149-183. 1S7-203. Kai. 1988b. in New York. Deterministic Network Optimization: A Bibliography. Math. Technical Report. C.V. 1986. Prog.D. 83-124. . Grigoriadis. and T. 1986. At Most nm Pivots and O(n^m) Time. and J. Department of Operations Research and Industrial Engineering. NY. 7. and Network Simplex Methods for Maximum Simeone et al..V. 1985. Department of Operations Research and Columbia University. Goldfarb.E. Hao. Optimization. 1977. Goldfarb. Department of Operations Research and Industrial Engineering. Finding Minimum-Cost Circulations by Symp. A Computational Comparison of the Dinic Flow. 33. D. Res.. on the Theory of Comp. MA. New York.K. 551-570. 1961. )To (A revision of Goldberg and Tarjan appear in Math. L. Successive Approximation. Canceling Negative Cycles. and R.. Hu.361-371. Efficient Shortest Path Simplex Algorithms.. 388-397. D. A. 2(Hh ACM Golden. J. Goldfarb.. J. A Primal Simplex Algorithm that Solves the Maximum Flow Problem University. NY. D. . Golden. As Annals of Operations Research 13. 1988. Research Report. Math. and T. Goldberg.E.ofSlAM 9. and R. Kai. 12. Oper. Goldfarb. and S. Industrial Engineering. D. A. In B. Solving Minimum Cost Flow Problem by [1987]. Anti-Stalling Pivot Rules for the Network Simplex Algorithm. Hao. Multi-Terminal Network Flows. Goldfarb. Columbia University.. Cambridge.. 1977. B. R.200 Goldberg. T. Proc. Reid. Tarjan. Columbia New York..) FORTRAN Codes for Network Goldfarb. Taijan. and J. 1988. Seminar given OperatJons Research Center. Magnanti. and S. B. Research Report...

AIIE Trans. and M. Fast Algorithms for Bipartite Gusfield. An Efficient Procedure for 9.201 Gondran. Femandez-Baca. Oper. Helgason. L. R. and Transportation Problems. 20. Very Simple Algorithms and Programs Dept. H. An n ' Algorithm for Maximun Matching in Bipartite Graphs. R. 1984. Maximum Flow in Undirected Planar Networks. . 225-231. D.. Phys . and J. Log. Math. M. Study Grigoriadis. 10. CT. Springer-Verlag. 63-68. 344-260. 375-379. Grigoriadis. 1978. 1973.M. 1986. University. SIGMAP 1987. Res. and D. . 17-29. 26. 1985. . E.. M. T. 160. Hausman. D. Minoux. Network Row. D. J. A Note on Shortest Path. Programming and Related Areas: A Classified Bibliography. L. J. Wiley-Interscience. Lecture Notes in Economics and Mathematical Systems. 1988. /. D. Implementing Hitchcock. a Dual-Simplex Network Flow Algorithm. Personal Communication. Martel. Research Report No. 1941. Johnson. Vol. New Hamachar. M. M.. 1979. Res. 1985. C. Technical Report No. Comput. Hu.. and T. Numerical Investigations on the Maximal Flow Algorithm of 22.. Prog. of a Product from Several Sources to Numerous Facilities. D. Naval Hopcroft. and H. B. 1963. Kennington. J. 11. 2. of for All Pairs Network Flow Analysis. CSE-87-1. F. University of California. 83-111. Quart. and D.-< Karzanov. Subroutines. 1963. Hoffman. Integer SIAM J.. C. A.. An Efficient Implementation of the Network Simplex Method. 17-18. Hsu. V. and R. Karp. Bulletin of the ACM Gusfield. M. The Rutgers Minimum Cost Network Flow 26. 224-230. Yale Haven. Multicommodity Network Flows. 1979. An O(nlog^n) Algorithm for 14. Computing Hassin. Graphs and Algorithms. D. Davis. Markowitz. Assignment. The Distribution Math. 1977. Computer Science and Engineering. CA. SIAM of Comp. Grigoriadis. YALEN/DCS/TR-356. 612-^24.

202

Hu, T.C.

1969. Integer Programming and Network Flours.

Addison-Wesley.

Hung, M.
Oper.Res.

S.

1983.

A

Polynomial Simplex Method for the Assignment Problem.

31,595-600.

Hung, M.
Oper. Res
.

S.,

and W. O. Rom.

1980.

Solving the Assignment Problem by Relaxation.

28, 969-892.

Imai, H.

1983.

On

the Practical Efficiency of

Various

Maximum Flow

Algorithms,

/.

Oper. Res. Soc. Japan

26,61-82.

Imai, H.,

and M.

Iri.

1984.

Practical Efficiencies of Existing Shortest-Path Algorithms
/.

and
Iri,

a

New

Bucket Algorithm.

of the Oper. Res. Soc. Japan 27, 43-58.

M.

1960.

A New Method

of Solving Transportation-Network Problems.

J.

Oper.

Res. Soc. Japan 3, 27-87.

Iri,

M.

1969. Network Flaws, Transportation and Scheduling.

Academic

Press.

Itai,

A.,

and

Y. Shiloach.

1979.

Maximum Flow

in Planar

Networks.

SIAM

J.

Comput.

8,135-150.

Jensen, P.A., and

W.

Barnes.

1980.

Network Flow Programming. John Wiley

&

Sons.

Jewell,

W.

S.

1958.

Optimal Flow Through Networks.

Interim Technical Report

No.

8,

Operation Research Center, M.I.T., Cambridge,

MA.
Gair>s.

Jewell,
499.

W.

S.

1962.

Optimal Flow Through Networks with

Oper. Res.

10, 476-

Johnson, D. B. 1977a. Efficient Algorithms for Shortest Paths in Sparse Networks.

/.

ACM

24,1-13.

JohT\son, D. B.

1977b.

Efficient Special

Purpose Priority Queues.
1-7.

Proc. 15th

Annual

Allerton Conference on

Comm., Control and Computing,

Johnson, D.

B.

1982.

A

Priority

Queue

in

Which

Initialization

and Queue

Operations Take

OGog

log D) Time. Math. Sys. Theory 15, 295-309.

Johnson, D. B., and S. Venkatesan. 1982. Using Divide and Conquer to Find Flows in Directed Planar Networks in O(n^(3/2) log n) Time. In Proceedings of the 20th Annual Allerton Conference on Comm., Control, and Computing, Univ. of Illinois, Urbana-Champaign, IL.

Johnson, E. L. 1966. Networks and Basic Solutions. Oper. Res. 14, 619-624.

Jonker, R., and T. Volgenant. 1986. Improving the Hungarian Assignment Algorithm. Oper. Res. Letters 5, 171-175.

Jonker, R., and A. Volgenant. 1987. A Shortest Augmenting Path Algorithm for Dense and Sparse Linear Assignment Problems. Computing 38, 325-340.
Kantorovich, L. V. 1939. Mathematical Methods in the Organization and Planning of Production. Publication House of the Leningrad University, 68 pp. Translated in Man. Sci. 6 (1960), 366-422.

Kapoor, S., and P. Vaidya. 1986. Fast Algorithms for Convex Quadratic Programming and Multicommodity Flows. Proc. of the 18th ACM Symp. on the Theory of Comp., 147-159.

Karmarkar, N. 1984. A New Polynomial-Time Algorithm for Linear Programming. Combinatorica 4, 373-395.

Karzanov, A. V. 1974. Determining the Maximal Flow in a Network by the Method of Preflows. Soviet Math. Doklady 15, 434-437.

Kastning, C. 1976. Integer Programming and Related Areas: A Classified Bibliography. Lecture Notes in Economics and Mathematical Systems, Vol. 128. Springer-Verlag.

Kelton, W. D., and A. M. Law. 1978. A Mean-time Comparison of Algorithms for the All-Pairs Shortest-Path Problem with Arbitrary Arc Lengths. Networks 8, 97-106.

Kennington, J. L. 1978. Survey of Linear Cost Multicommodity Network Flows. Oper. Res. 26, 209-236.

Kennington, J. L., and R. V. Helgason. 1980. Algorithms for Network Programming. Wiley-Interscience, NY.


Kershenbaum, A. 1981. A Note on Finding Shortest Path Trees. Networks 11, 399-400.

Klein, M. 1967. A Primal Method for Minimal Cost Flows. Man. Sci. 14, 205-220.

Klincewicz, J. G. 1983. A Newton Method for Convex Separable Network Flow Problems. Networks 13, 427-442.

Klingman, D., A. Napier, and J. Stutz. 1974. NETGEN: A Program for Generating Large Scale Capacitated Assignment, Transportation, and Minimum Cost Flow Network Problems. Man. Sci. 20, 814-821.

Koopmans, T. C. 1947. Optimum Utilization of the Transportation System. Proceedings of the International Statistical Conference, Washington, DC. Also reprinted as supplement to Econometrica 17 (1949).

Kuhn, H. W. 1955. The Hungarian Method for the Assignment Problem. Naval Res. Log. Quart. 2, 83-97.

Lawler, E. L. 1976. Combinatorial Optimization: Networks and Matroids. Holt, Rinehart and Winston.

Magnanti, T. L. 1981. Combinatorial Optimization and Vehicle Fleet Planning: Perspectives and Prospects. Networks 11, 179-214.

Magnanti, T. L., and R. T. Wong. 1984. Network Design and Transportation Planning: Models and Algorithms. Trans. Sci. 18, 1-56.

Malhotra, V. M., M. P. Kumar, and S. N. Maheshwari. 1978. An O(|V|^3) Algorithm for Finding Maximum Flows in Networks. Inform. Process. Lett. 7, 277-278.

Martel, C. U. 1987. A Comparison of Phase and Non-Phase Network Flow Algorithms. Research Report, Dept. of Electrical and Computer Engineering, University of California, Davis, CA.

McGinnis, L. F. 1983. Implementation and Testing of a Primal-Dual Algorithm for the Assignment Problem. Oper. Res. 31, 277-291.

Mehlhorn, K. 1984. Data Structures and Algorithms. Springer-Verlag.

Meyer, R. R. 1979. Two Segment Separable Programming. Man. Sci. 25, 285-295.

Meyer, R. R., and C. Y. Kao. 1981. Secant Approximation Methods for Convex Optimization. Math. Prog. Study 14, 143-162.

Minieka, E. 1978. Optimization Algorithms for Networks and Graphs. Marcel Dekker, New York.

Minoux, M. 1984. A Polynomial Algorithm for Minimum Quadratic Cost Flow Problems. Eur. J. Oper. Res. 18, 377-387.

Minoux, M. 1985. Network Synthesis and Optimum Network Design Problems: Models, Solution Methods and Applications. Technical Report, Laboratoire MASI, Université Pierre et Marie Curie, Paris, France.

Minoux, M. 1986. Solving Integer Minimum Cost Flows with Separable Convex Cost Objective Polynomially. Math. Prog. Study 26, 237-239.

Minoux, M. 1987. Network Synthesis and Dynamic Network Optimization. Annals of Discrete Mathematics 31, 283-324.

Minty, G. J. 1960. Monotone Networks. Proc. Roy. Soc. London 257, Series A, 194-212.

Moore, E. F. 1957. The Shortest Path through a Maze. In Proceedings of the International Symposium on the Theory of Switching Part II; The Annals of the Computation Laboratory of Harvard University 30, Harvard University Press, 285-292.

Mulvey, J. 1978a. Pivot Strategies for Primal-Simplex Network Codes. J. ACM 25, 266-270.

Mulvey, J. 1978b. Testing a Large-Scale Network Optimization Program. Math. Prog. 15, 291-314.

Murty, K. G. 1976. Linear and Combinatorial Programming. John Wiley & Sons.

Nemhauser, G. L., and L. A. Wolsey. 1988. Integer and Combinatorial Optimization. John Wiley & Sons.

Orden, A. 1956. The Transshipment Problem. Man. Sci. 2, 276-285.

Orlin, J. B. 1983. Maximum-Throughput Dynamic Network Flows. Math. Prog. 27, 214-231.

Orlin, J. B. 1984. Genuinely Polynomial Simplex and Non-Simplex Algorithms for the Minimum Cost Flow Problem. Technical Report No. 1615-84, Sloan School of Management, M.I.T., Cambridge, MA.

Orlin, J. B. 1985. On the Simplex Algorithm for Networks and Generalized Networks. Math. Prog. Study 24, 166-178.

Orlin, J. B. 1988. A Faster Strongly Polynomial Minimum Cost Flow Algorithm. Proc. 20th ACM Symp. on the Theory of Comp., 377-387.

Orlin, J. B., and R. K. Ahuja. 1987. New Distance-Directed Algorithms for Maximum Flow and Parametric Maximum Flow Problems. Working Paper 1908-87, Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA.

Orlin, J. B., and R. K. Ahuja. 1988. New Scaling Algorithms for the Assignment and Minimum Cycle Mean Problems. Working Paper No. OR 178-88, Operations Research Center, M.I.T., Cambridge, MA.

Papadimitriou, C. H., and K. Steiglitz. 1982. Combinatorial Optimization: Algorithms and Complexity. Prentice-Hall.

Pape, U. 1974. Implementation and Efficiency of Moore-Algorithms for the Shortest Route Problem. Math. Prog. 7, 212-222.

Pape, U. 1980. Algorithm 562: Shortest Path Lengths. ACM Trans. Math. Software 6, 450-455.

Phillips, D. T., and A. Garcia-Diaz. 1981. Fundamentals of Network Analysis. Prentice-Hall.

Pollack, M., and W. Wiebenson. 1960. Solutions of the Shortest-Route Problem: A Review. Oper. Res. 8, 224-230.

Potts, R. B., and R. M. Oliver. 1972. Flows in Transportation Networks. Academic Press.

Rock, H. 1980. Scaling Techniques for Minimal Cost Network Flows. In V. Page (ed.), Discrete Structures and Algorithms. Carl Hanser, Munich, 101-191.

Rockafellar, R. T. 1984. Network Flows and Monotropic Optimization. Wiley-Interscience.

Roohy-Laleh, E. 1980. Improvements to the Theoretical Efficiency of the Network Simplex Method. Unpublished Ph.D. Dissertation, Carleton University, Ottawa, Canada.

Rothfarb, B., N. P. Shein, and I. T. Frisch. 1968. Common Terminal Multicommodity Flow. Oper. Res. 16, 202-205.

Sheffi, Y. 1985. Urban Transportation Networks: Equilibrium Analysis with Mathematical Programming Methods. Prentice-Hall.

Shiloach, Y. 1978. An O(nI log^2 I) Maximum Flow Algorithm. Technical Report STAN-CS-78-702, Computer Science Dept., Stanford University, CA.

Shiloach, Y., and U. Vishkin. 1982. An O(n^2 log n) Parallel Max-Flow Algorithm. J. Algorithms 3, 128-146.

Sleator, D. D., and R. E. Tarjan. 1983. A Data Structure for Dynamic Trees. J. Comput. Sys. Sci. 26, 362-391.

Smith, D. K. 1982. Network Optimisation Practice: A Computational Guide. John Wiley & Sons.

Srinivasan, V., and G. L. Thompson. 1973. Benefit-Cost Analysis of Coding Techniques for the Primal Transportation Algorithm. J. ACM 20, 194-213.

Swamy, M. N. S., and K. Thulsiraman. 1981. Graphs, Networks, and Algorithms. John Wiley & Sons.

Syslo, M. M., N. Deo, and J. S. Kowalik. 1983. Discrete Optimization Algorithms. Prentice-Hall, New Jersey.

Tabourier, Y. 1973. All Shortest Distances in a Graph: An Improvement to Dantzig's Inductive Algorithm. Disc. Math. 4, 83-87.

Tardos, E. 1985. A Strongly Polynomial Minimum Cost Circulation Algorithm. Combinatorica 5, 247-255.

Tarjan, R. E. 1983. Data Structures and Network Algorithms. SIAM, Philadelphia, PA.

Tarjan, R. E. 1984. A Simple Version of Karzanov's Blocking Flow Algorithm. Oper. Res. Letters 2, 265-268.

Tarjan, R. E. 1986. Algorithms for Maximum Network Flow. Math. Prog. Study 26, 1-11.

Tarjan, R. E. 1987. Personal Communication.

Tarjan, R. E. 1988. Personal Communication.

Tomizawa, N. 1972. On Some Techniques Useful for Solution of Transportation Network Problems. Networks 1, 173-194.

Truemper, K. 1977. On Max Flow with Gains and Pure Min-Cost Flows. SIAM J. Appl. Math. 32, 450-456.

Vaidya, P. 1987. An Algorithm for Linear Programming which Requires O(((m+n)n^2 + (m+n)^1.5 n)L) Arithmetic Operations. Proc. of the 19th ACM Symp. on the Theory of Comp., 29-38.

Van Vliet, D. 1978. Improved Shortest Path Algorithms for Transport Networks. Transp. Res. 12, 7-20.

Von Randow, R. 1982. Integer Programming and Related Areas: A Classified Bibliography 1978-1981. Lecture Notes in Economics and Mathematical Systems, Vol. 197. Springer-Verlag.

Von Randow, R. 1985. Integer Programming and Related Areas: A Classified Bibliography 1981-1984. Lecture Notes in Economics and Mathematical Systems, Vol. 243. Springer-Verlag.

Wagner, R. A. 1976. A Shortest Path Algorithm for Edge-Sparse Graphs. J. ACM 23, 50-57.

Warshall, S. 1962. A Theorem on Boolean Matrices. J. ACM 9, 11-12.

Weintraub, A. 1974. A Primal Algorithm to Solve Network Flow Problems with Convex Costs. Man. Sci. 21, 87-97.

Weintraub, A., and F. Barahona. 1979. A Dual Algorithm for the Assignment Problem. Departmente de Industrias Report No. 2, Universidad de Chile-Sede Occidente, Chile.

Whiting, P. D., and J. A. Hillier. 1960. A Method for Finding the Shortest Route Through a Road Network. Oper. Res. Quart. 11, 37-40.

Williams, J. W. J. 1964. Algorithm 232: Heapsort. Comm. ACM 7, 347-348.

Zadeh, N. 1972. Theoretical Efficiency of the Edmonds-Karp Algorithm for Computing Maximal Flows. J. ACM 19, 184-192.

Zadeh, N. 1973a. A Bad Network Problem for the Simplex Method and other Minimum Cost Flow Algorithms. Math. Prog. 5, 255-266.

Zadeh, N. 1973b. More Pathological Examples for Network Flow Problems. Math. Prog. 5, 217-224.

Zadeh, N. 1979. Near Equivalence of Network Flow Algorithms. Technical Report No. 26, Dept. of Operations Research, Stanford University, CA.
