WORKING PAPER

ALFRED P. SLOAN SCHOOL OF MANAGEMENT

NETWORK FLOWS
Ravindra K. Ahuja Thomas L. Magnanti James B. Orlin

Sloan W.P. No. 2059-88

August 1988 Revised: December, 1988

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 50 MEMORIAL DRIVE CAMBRIDGE, MASSACHUSETTS 02139


Ravindra K. Ahuja*, Thomas L. Magnanti, and James B. Orlin

Sloan School of Management
Massachusetts Institute of Technology
Cambridge, MA 02139

*On leave from Indian Institute of Technology, Kanpur - 208016, INDIA

NETWORK FLOWS: OVERVIEW

1. Introduction
   1.1 Applications
   1.2 Complexity Analysis
   1.3 Notation and Definitions
   1.4 Network Representations
   1.5 Search Algorithms
   1.6 Developing Polynomial-Time Algorithms
2. Basic Properties of Network Flows
   2.1 Flow Decomposition Properties and Optimality Conditions
   2.2 Cycle Free and Spanning Tree Solutions
   2.3 Networks, Linear and Integer Programming
   2.4 Network Transformations
3. Shortest Paths
   3.1 Dijkstra's Algorithm
   3.2 Dial's Implementation
   3.3 R-Heap Implementation
   3.4 Label Correcting Algorithms
   3.5 All Pairs Shortest Path Algorithm
4. Maximum Flows
   4.1 Labeling Algorithm and the Max-Flow Min-Cut Theorem
   4.2 Decreasing the Number of Augmentations
   4.3 Shortest Augmenting Path Algorithm
   4.4 Preflow-Push Algorithms
   4.5 Excess-Scaling Algorithm
5. Minimum Cost Flows
   5.1 Duality and Optimality Conditions
   5.2 Relationship to Shortest Path and Maximum Flow Problems
   5.3 Negative Cycle Algorithm
   5.4 Successive Shortest Path Algorithm
   5.5 Primal-Dual and Out-of-Kilter Algorithms
   5.6 Network Simplex Algorithm
   5.7 Right-Hand-Side Scaling Algorithm
   5.8 Cost Scaling Algorithm
   5.9 Double Scaling Algorithm
   5.10 Sensitivity Analysis
   5.11 Assignment Problem
6. Reference Notes
References


1. Introduction

Perhaps no subfield of mathematical programming is more alluring than network optimization. Highway, rail, electrical, communication and many other physical networks pervade our everyday lives. As a consequence, even non-specialists recognize the practical importance and the wide ranging applicability of networks. Moreover, because the physical operating characteristics of networks (e.g., flows on arcs and mass balance at nodes) have natural mathematical representations, practitioners and non-specialists can readily understand the mathematical descriptions of network optimization problems and the basic nature of the techniques used to solve these problems. This combination of widespread applicability and ease of assimilation has undoubtedly been instrumental in the evolution of network planning models as one of the most widely used modeling techniques in all of operations research and applied mathematics.

Network optimization is also alluring to methodologists. Networks provide a concrete setting for testing and devising new theories. Indeed, network optimization has inspired many of the most fundamental results in all of optimization. For example, price directive decomposition algorithms for both linear programming and combinatorial optimization had their origins in network optimization. So did cutting plane methods and branch and bound procedures of integer programming, primal-dual methods of linear and nonlinear programming, and polyhedral methods of combinatorial optimization. In addition, networks have served as the major prototype for several theoretical domains (for example, the field of matroids) and as the core model for a wide variety of min/max duality results in discrete mathematics.

Moreover, network optimization has served as a fertile meeting ground for ideas from optimization and computer science. Many results in network optimization are routinely used to design and evaluate computer systems, and ideas from computer science concerning data structures and efficient data manipulation have had a major impact on the design and implementation of many network optimization algorithms.

The aim of this paper is to summarize many of the fundamental ideas of network optimization. In particular, we concentrate on network flow problems and highlight a number of recent theoretical and algorithmic advances. We have divided the discussion into the following broad major topics:

• Applications
• Basic Properties of Network Flows
• Shortest Path Problems
• Maximum Flow Problems
• Minimum Cost Flow Problems
• Assignment Problems

Much of our discussion focuses on the design of provably good (e.g., polynomial-time) algorithms. Among good algorithms, we have presented those that are simple and are likely to be efficient in practice. We have attempted to structure our discussion so that it not only provides a survey of the field for specialists, but also serves as an introduction and summary for non-specialists who have a basic working knowledge of the rudiments of optimization, particularly linear programming.

In this chapter, we limit our discussion to the problems listed above. Some important generalizations of these problems, such as (i) generalized network flows, (ii) multicommodity flows, and (iii) network design, will not be covered in our survey. We will, however, briefly describe these problems in Section 6.6 and provide some important references.

As a prelude to the remainder of our discussion, in this section we present several important preliminaries. We discuss (i) different ways to measure the performance of algorithms; (ii) graph notation and various ways to represent networks quantitatively; (iii) a few basic ideas from computer science that underlie the design of many algorithms; and (iv) two generic proof techniques that have proven to be useful in designing polynomial-time algorithms.

1.1 Applications

Networks arise in numerous application settings and in a variety of guises. In this section, we briefly describe a few prototypical applications. Our discussion is intended to illustrate a range of applications and to be suggestive of how network flow problems arise in practice; a more extensive survey would take us far beyond the scope of our discussion. To illustrate the breadth of network applications, we also consider some models requiring solution techniques that we will not describe in this chapter. For the purposes of this discussion, we will consider four different types of networks arising in practice:

• Physical networks (streets, railbeds, pipelines, wires)
• Route networks
• Space-time networks (scheduling networks)
• Derived networks (through problem transformations)

These four categories are not exhaustive and overlap in coverage. Nevertheless, they provide a useful taxonomy for summarizing a variety of applications. Network flow models are also used for several purposes:

• Descriptive modeling (answering "what is?" questions)
• Predictive modeling (answering "what will be?" questions)
• Normative modeling (answering "what should be?" questions, that is, performing optimization)

We will illustrate models in each of these categories. We first introduce the basic underlying network flow model and some useful notation.

The Network Flow Model

Let G = (N, A) be a directed network with a cost c_ij, a lower bound l_ij, and a capacity u_ij associated with every arc (i, j) in A. We associate with each node i in N an integer number b(i) representing its supply or demand. If b(i) > 0, then node i is a supply node; if b(i) < 0, then node i is a demand node; and if b(i) = 0, then node i is a transshipment node. Let n = |N| and m = |A|. The minimum cost network flow problem can be formulated as follows:

    Minimize  $\sum_{(i,j) \in A} c_{ij} x_{ij}$                                        (1.1a)

subject to

    $\sum_{\{j : (i,j) \in A\}} x_{ij} - \sum_{\{j : (j,i) \in A\}} x_{ji} = b(i)$,  for all $i \in N$,   (1.1b)

    $l_{ij} \le x_{ij} \le u_{ij}$,  for all $(i,j) \in A$.                              (1.1c)

We refer to the vector x = (x_ij) as the flow in the network.

The constraint (1.1b) implies that the total flow out of a node minus the total flow into that node must equal the net supply/demand of the node. We henceforth refer to this constraint as the mass balance constraint. The flow must also satisfy the lower bound and capacity constraints (1.1c), which we refer to as the flow bound constraints. The flow bounds might model physical capacities, contractual obligations or simply operating ranges of interest. Frequently, the given lower bounds l_ij are all zero; we show later that they can be made zero without any loss of generality.

In matrix notation, we represent the minimum cost flow problem as

    minimize  $\{\, cx : Nx = b \text{ and } l \le x \le u \,\}$,                        (1.2)

in terms of a node-arc incidence matrix N. The matrix N has one row for each node of the network and one column for each arc. We let N_ij represent the column of N corresponding to arc (i, j), and let e_j denote the j-th unit vector, a column vector of size n whose entries are all zeros except for the j-th entry, which is a 1. Note that each flow variable x_ij appears in two mass balance equations: as an outflow from node i with a +1 coefficient and as an inflow to node j with a -1 coefficient. Figure 1.1 gives an example of the node-arc incidence matrix. The matrix N has a very special structure: only 2m of its nm total entries are nonzero, all of its nonzero entries are +1 or -1, and each column has exactly one +1 and one -1. Later, in Section 2, we consider some of the consequences of this special structure. For now, we make two observations.

(i) Summing all the mass balance constraints eliminates all the flow variables and gives

    $\sum_{i \in N} b(i) = 0$,  or equivalently,  $\sum_{\{i \in N : b(i) > 0\}} b(i) = - \sum_{\{i \in N : b(i) < 0\}} b(i)$.

Consequently, total supply must equal total demand if the mass balance constraints are to have any feasible solution.

(ii) If the total supply does equal the total demand, then summing all the mass balance equations gives the zero equation 0x = 0; equivalently, any one equation equals minus the sum of all the other equations, and hence is redundant.
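Both observations are easy to verify computationally. The following Python sketch (a minimal illustration; the five-arc network is our own, not the example of Figure 1.1) builds a node-arc incidence matrix and checks the column structure and the redundancy of the mass balance system:

    # Build the node-arc incidence matrix of a small directed network.
    # Nodes are numbered 1..n; each arc is a (tail, head) pair.
    arcs = [(1, 2), (1, 3), (2, 3), (3, 4), (2, 4)]
    n = 4
    N = [[0] * len(arcs) for _ in range(n)]
    for j, (tail, head) in enumerate(arcs):
        N[tail - 1][j] = +1    # flow on arc j leaves its tail node
        N[head - 1][j] = -1    # flow on arc j enters its head node

    for col in zip(*N):
        # only 2 of the n entries in each column are nonzero: one +1, one -1
        assert sum(1 for v in col if v == +1) == 1
        assert sum(1 for v in col if v == -1) == 1

    # summing all mass balance rows yields the zero equation 0x = 0,
    # so any one of the n equations is redundant
    assert all(sum(row[j] for row in N) == 0 for j in range(len(arcs)))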

Figure 1.1. An example network and its node-arc incidence matrix.

The following special cases of the minimum cost flow problem play a central role in the theory and applications of network flows.

One of these special cases is the assignment problem. The data consist of two equally sized node sets N1 and N2, a collection of arcs A contained in N1 x N2 representing possible person-to-object assignments, and a cost c_ij associated with each element (i, j) in A. The objective is to assign each person to exactly one object in a way that minimizes the cost of the assignment. The assignment problem is a minimum cost flow problem on a network G = (N1 u N2, A) with b(i) = 1 for all i in N1 and b(i) = -1 for all i in N2 (we set l_ij = 0 and u_ij = 1 for all (i, j) in A).

Physical Networks

The familiar city street map is perhaps the prototypical physical network, and the one that most readily comes to mind when we envision a network. Many network planning problems arise in this problem context. As one illustration, consider the problem of managing, or designing, a street network to decide upon such issues as speed limits, one way street assignments, or whether or not to construct a new road or bridge. In order to make these decisions intelligently, we need a descriptive model that tells us how to model traffic flows and measure the performance of any design, as well as a predictive model for measuring the effect of any change in the system. We can then use these models to answer a variety of "what if" planning questions.

The following type of equilibrium network flow model permits us to answer these types of questions. Each link of the network has an associated delay function that specifies how long it takes to traverse the link. The time to do so depends upon traffic conditions; the more traffic that flows on the link, the longer the travel time to traverse it. Now also suppose that each user of the system has a point of origin (e.g., his or her home) and a point of destination (e.g., his or her workplace in the central business district). Each of these users must choose a route through the network. Note, however, that these route choices affect each other; if two users traverse the same link, they add to each other's travel time because of the added congestion on the link. Now let us make the behavioral assumption that each user wishes to travel between his or her origin and destination as quickly as possible, that is, along a shortest travel time path. This situation leads to the following equilibrium problem with an embedded set of network optimization problems (shortest path problems): is there a flow pattern in the network with the property that no user can unilaterally change his (or her) choice of origin to destination path (that is, all other users continue to use their specified paths in the equilibrium solution) to reduce his travel time? Operations researchers have developed a set of sophisticated models for this problem setting, as well as related theory (concerning, for example, the existence and uniqueness of equilibrium solutions), and algorithms for computing equilibrium solutions.

Used in the mode of "what if" scenario analysis, these models permit analysts to answer the type of questions we posed previously. These models are actively used in practice. Indeed, the Urban Mass Transit Authority in the United States requires that communities perform a network equilibrium impact analysis as part of the process for obtaining federal funds for highway construction or improvement.

Similar types of models arise in many other problem contexts. For example, a network equilibrium model forms the heart of the Project Independence Energy Systems (PIES) model developed by the U.S. Department of Energy as an analysis tool for guiding public policy on energy. The basic equilibrium model of electrical networks is another example; in this setting, Ohm's Law serves as the analog of the congestion function for the traffic equilibrium problem, and Kirchhoff's Law represents the network mass balance equations.

Another type of physical network is a very large-scale integrated circuit (VLSI circuit). In this setting the nodes of the network correspond to electrical components and the links correspond to wires that connect these components. Numerous network planning problems arise in this problem context: for example, how can we lay out the smallest possible integrated circuit that makes the necessary connections between its components and maintains the necessary separations between the wires (to avoid electrical interference)?

Route Networks

Route networks, which are one level of abstraction removed from physical networks, are familiar to most students of operations research and management science. The traditional operations research transportation problem is illustrative. A shipper with supplies at its plants must ship to geographically dispersed retail centers, each with a given customer demand. Each arc connecting a supply point to a retail center incurs costs based upon some physical network, in this case the transportation network. Rather than solving the problem directly on the physical network, we preprocess the data and construct transportation routes. Consequently, an arc connecting a supply point and a retail center might correspond to a complex four leg distribution channel with legs (i) from a plant (by truck) to a rail station, (ii) from the rail station to a rail head elsewhere in the system, (iii) from the rail head (by truck) to a distribution center, and (iv) from the distribution center (on a local delivery truck) to the final customer (or in some cases just to the distribution center).

If we assign to the arc the composite distribution cost of all the intermediary legs, as well as the distribution capacity for this route, this problem becomes a classic network transportation model: find the flows from plants to customers that minimize overall costs. This type of model is used in numerous applications. As but one illustration, a prize winning practice paper written several years ago described an application of such a network planning system by the Cahill May Roberts Pharmaceutical Company (of Ireland) to reduce overall distribution costs by 20%, while improving customer service as well.

Many related problems arise in this type of problem setting, for instance, the design issue of deciding upon the location of the distribution centers. It is possible to address this type of decision problem using integer programming methodology for choosing the distribution sites and network flows to cost out (or optimize flows) for any given choice of sites. Using this approach, a noted study conducted several years ago permitted Hunt Wesson Foods Corporation to save over $1 million annually.

One special case of the transportation problem merits note: the assignment problem that we introduced previously in this section. This problem has numerous applications, particularly in problem contexts such as machine scheduling. In this application context, we would identify the supply points with jobs to be performed, the demand points with available machines, and the cost associated with arc (i, j) as the cost of completing job i on machine j. The solution to the problem specifies the minimum cost assignment of the jobs to the machines, assuming that each machine has the capacity to perform only one job.

Space Time Networks

Frequently in practice, we wish to schedule some production or service activity over time. In these instances it is often convenient to formulate a network flow problem on a "space-time network" with several nodes representing a particular facility (a machine, a warehouse, an airport) but at different points in time.

Figure 1.2, which represents a core planning model in production planning, the economic lot size problem, is an important example. In this problem context, we wish to meet prescribed demands d_t for a product in each of the T time periods. In each period, we can produce at level x_t and/or we can meet the demand by drawing upon inventory I_t from the previous period. The network representing this problem has T + 1 nodes: one node t = 1, 2, ..., T represents each of the planning periods, and one node, node 0, represents the "source" of all production.

The flow on arc (0, t) prescribes the production level x_t in period t, and the flow on arc (t, t + 1) represents the inventory level I_t to be carried from period t to period t + 1. The mass balance equation for each period t models the basic accounting equation: incoming inventory plus production in that period must equal demand plus final inventory. The mass balance equation for node 0 indicates that all demand (assuming zero beginning and zero final inventory over the entire planning period) must be produced in some period t = 1, 2, ..., T. Whenever the production and holding costs are linear, this problem is easily solved as a shortest path problem (for each demand period, we must find the minimum cost path of production and inventory arcs from node 0 to that demand point). If we impose capacities on production or inventory, the problem becomes a minimum cost network flow problem.

Figure 1.2. Network flow model of the economic lot size problem.

One extension of this economic lot sizing problem arises frequently in practice. Assume that production x_t in any period incurs a fixed cost: that is, whenever we produce in period t (i.e., x_t > 0), no matter how much or how little, we incur a fixed cost F_t. In addition we may incur a per unit production cost c_t in period t and a per unit inventory cost h_t for carrying any unit of inventory from period t to period t + 1. Hence, the cost on each arc for this problem is either linear (for inventory carrying arcs) or linear plus a fixed cost (for production arcs). Consequently, the objective function for the problem is concave.

As we indicate in Section 2, any such concave cost network flow problem always has a special type of optimum solution known as a spanning tree solution. This problem's spanning tree solution decomposes into disjoint directed paths; the first arc on each path is a production arc (of the form (0, t)) and each other arc is an inventory carrying arc. This observation implies the following production property: in the solution, each time we produce, we produce enough to meet the demand for an integral number of contiguous periods. Moreover, in no period do we both carry inventory from the previous period and produce.

The production property permits us to solve the problem very efficiently as a shortest path problem on an auxiliary network G' defined as follows. The network G' contains nodes 1 to T + 1, and for every pair of nodes i and j with i < j, it contains an arc (i, j). The length of arc (i, j) is equal to the production and inventory cost of satisfying the demand of the periods from i to j - 1. Observe that for every production schedule satisfying the production property, G' contains a directed path from node 1 to node T + 1 with the same objective function value, and vice-versa. Hence we can obtain the optimum production schedule by solving a shortest path problem on G'.
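To make the auxiliary network concrete, here is a minimal Python sketch (the demand and cost data are our own illustration, and F, c and h name the fixed, unit production and unit holding costs assumed above). Because every arc of G' goes from a lower-numbered node to a higher-numbered one, a single forward pass computes the shortest path:

    def lot_size(d, F, c, h):
        # d[t], F[t], c[t], h[t]: demand, fixed cost, unit production cost,
        # and unit holding cost of period t+1 (Python lists are 0-indexed)
        T = len(d)
        dist = [float("inf")] * (T + 1)      # dist[j] = shortest path to node j+1 of G'
        dist[0] = 0.0
        for i in range(T):                   # produce in period i+1 ...
            for j in range(i + 1, T + 1):    # ... to cover periods i+1 .. j
                cost = F[i] + c[i] * sum(d[i:j])
                for k in range(i, j - 1):    # holding cost for later periods' demand
                    cost += h[k] * sum(d[k + 1:j])
                dist[j] = min(dist[j], dist[i] + cost)
        return dist[T]                       # cost of an optimal schedule

    print(lot_size(d=[20, 30, 40], F=[50, 50, 50], c=[1, 1, 1], h=[0.5, 0.5, 0.5]))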

Many enhancements of the model are possible: for example, (i) the production facility might have limited production capacity or limited storage for inventory, or (ii) the facility might be producing several products that are linked by common production costs or by changeover costs (for example, we may need to change dies in an automobile stamping plant when making different types of fenders). In most cases, such enhanced models are quite difficult to solve (they are NP-complete), though the embedded network structure often proves to be useful in designing either heuristic or optimization methods.

Another classical network flow scheduling problem is the airline scheduling problem used to identify a flight schedule for an airline. In this application setting, each node represents both a geographical location (e.g., an airport) and a point in time (e.g., New York at 10 A.M.). The arcs are of two types: (i) service arcs connecting two airports, for example New York at 10 A.M. to Boston at 11 A.M.; (ii) layover arcs that permit a plane to stay at New York from 10 A.M. until 11 A.M. to wait for a later flight, or to wait overnight at New York from 11 P.M. until 6 A.M. the next morning. If we identify revenues with each service leg, a flow in this network (with no external supply or demand) will specify a set of flight plans (a circulation of airplanes through the airline's network), and a flow that maximizes revenue will prescribe a schedule for the airline's fleet of planes. The same type of network representation arises in many other dynamic scheduling applications.

Derived Networks

This category is a "grab bag" of specialized applications; it illustrates that network flow problems sometimes arise in surprising ways from problems that on the surface might not appear to involve networks. The following examples illustrate this point.

Single Duty Crew Scheduling. Figure 1.3 illustrates a number of possible duties for the drivers of a bus company.

Figure 1.3. Possible duties for the drivers of a bus company (time periods versus duty numbers).

The problem is to select duties that cover every time period at minimum cost, which we can write as

    minimize  cx
    subject to  Ax = b,  x_j = 0 or 1.

In this formulation the binary variable x_j indicates whether (x_j = 1) or not (x_j = 0) we select the j-th duty; the matrix A represents the matrix of duties, and b is a column vector whose components are all 1's. Observe that the ones in each column of A occur in consecutive rows, because each driver's duty contains a single work shift (no split shifts or work breaks). We show that this problem is a shortest path problem. To make this identification, we perform the following operations on the constraints Ax = b: subtract each equation from the equation below it, and then add a redundant equation equal to minus the sum of all the equations in the revised system. This transformation does not change the solution to the system. Because of the structure of A, each column in the revised system will have a single +1 (corresponding to the first hour of the duty in the column of A) and a single -1 (corresponding to the row in A, or the added row, that lies just below the last +1 in the column of A). Moreover, the revised right hand side vector will have a +1 in row 1 and a -1 in the last (the appended) row. Therefore, the problem is to ship one unit of flow from node 1 to node 9 at minimum cost in the network given in Figure 1.4, which is an instance of the shortest path problem.

If instead of requiring a single driver to be on duty in each period, we specify a number of drivers to be on duty in each period, the same transformation would produce a network flow problem, but in this case the right hand side coefficients (supplies and demands) could be arbitrary. Therefore, the transformed problem would be a general minimum cost network flow problem, rather than a shortest path problem.

Figure 1.4. Shortest path formulation of the single duty scheduling problem.
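In code, the transformation amounts to reading each consecutive-ones column as a single arc. The following Python sketch (the duty data are our own hypothetical example, not the duties of Figure 1.3) builds the transformed network and solves it with Dijkstra's algorithm, which applies because duty costs are nonnegative:

    import heapq

    def cheapest_cover(T, duties):
        # duties: (first_period, last_period, cost) triples over periods 1..T;
        # a duty covering periods a..b becomes an arc from node a to node b+1
        graph = {v: [] for v in range(1, T + 2)}
        for a, b, cost in duties:
            graph[a].append((b + 1, cost))
        dist = {v: float("inf") for v in graph}
        dist[1] = 0
        heap = [(0, 1)]
        while heap:
            d, v = heapq.heappop(heap)
            if d > dist[v]:
                continue
            for w, cost in graph[v]:
                if d + cost < dist[w]:
                    dist[w] = d + cost
                    heapq.heappush(heap, (dist[w], w))
        return dist[T + 1]    # minimum cost of covering every period 1..T

    # duties (1,2) and (3,4) cover periods 1..4 at total cost 55
    print(cheapest_cover(4, [(1, 2, 30), (2, 4, 50), (1, 1, 10), (3, 4, 25)]))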

Critical Path Scheduling and Networks Derived from Precedence Conditions

In construction and many other project planning applications, workers need to complete a variety of tasks that are related by precedence conditions; for example, in constructing a house, a builder must pour the foundation before framing the house, and complete the framing before beginning to install either electrical or plumbing fixtures.

This type of application can be formulated mathematically as follows. Suppose we need to complete J jobs and that job j (j = 1, 2, ..., J) requires t_j days to complete. We are to choose the start time s_j of each job j so that we honor a set of specified precedence constraints and complete the overall project as quickly as possible. If we represent the jobs by nodes, then the precedence constraints can be represented by arcs, thereby giving us a network: for each arc (i, j) in the network, job j cannot start until job i has been completed. For convenience of notation, we add two dummy jobs, both with zero processing time: a "start" job 0 to be completed before any other job can begin, and a "completion" job J + 1 that cannot be initiated until we have completed all other jobs. Let G = (N, A) represent the network corresponding to this augmented project. Then we wish to solve the following optimization problem:

    minimize  $s_{J+1} - s_0$

subject to

    $s_j \ge s_i + t_i$,  for each arc $(i, j) \in A$,

which is a linear program in the variables s_j. Note that if we move the variable s_i to the left hand side of the constraint, then each constraint contains exactly two variables, one with a plus one coefficient and one with a minus one coefficient. The linear programming dual of this problem has a familiar structure. If we associate a dual variable x_ij with each arc (i, j), then the dual of this problem is

    maximize  $\sum_{(i,j) \in A} t_i x_{ij}$

subject to

    $\sum_{\{j : (i,j) \in A\}} x_{ij} - \sum_{\{j : (j,i) \in A\}} x_{ji} = \begin{cases} 1, & \text{if } i = 0, \\ 0, & \text{if } i \ne 0, J+1, \\ -1, & \text{if } i = J+1, \end{cases}$  for all $i \in N$,

    $x_{ij} \ge 0$,  for each arc $(i, j) \in A$.
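As the next paragraph explains, this dual is a longest path problem, and because the precedence network is acyclic it can be solved by a single pass in topological order. The following Python sketch (the four-job instance is our own, echoing the house building example) computes the earliest start times and the project duration:

    def critical_path(J, t, precedences):
        # t[j-1] = duration of job j; precedences: pairs (i, j) meaning job j
        # cannot start until job i completes; node 0 = start, node J+1 = end
        dur = {0: 0, J + 1: 0}
        dur.update({j: t[j - 1] for j in range(1, J + 1)})
        arcs = list(precedences) + [(0, j) for j in range(1, J + 1)] \
                                 + [(j, J + 1) for j in range(1, J + 1)]
        succ = {i: [] for i in range(J + 2)}
        indeg = {i: 0 for i in range(J + 2)}
        for i, j in arcs:
            succ[i].append(j)
            indeg[j] += 1
        start = {i: 0 for i in range(J + 2)}       # earliest start times s_i
        queue = [i for i in range(J + 2) if indeg[i] == 0]
        while queue:
            i = queue.pop()
            for j in succ[i]:
                start[j] = max(start[j], start[i] + dur[i])   # s_j >= s_i + t_i
                indeg[j] -= 1
                if indeg[j] == 0:
                    queue.append(j)
        return start[J + 1]     # length of the longest (critical) path

    # foundation (1) precedes framing (2); framing precedes wiring (3) and plumbing (4)
    print(critical_path(4, [3, 10, 5, 4], [(1, 2), (2, 3), (2, 4)]))   # prints 18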


This problem requires us to determine the longest path in the network G from node 0 to node J + 1, with t_i as the arc length of arc (i, j). This longest path has the following interpretation: it is the longest sequence of jobs needed to fulfill the specified precedence conditions. Since delaying any job in this sequence must necessarily delay the completion of the overall project, this path has become known as the critical path, and the problem has become known as the critical path problem. This model has become a principal tool in project management, particularly for managing large-scale construction projects. The critical path itself is important because it identifies those jobs that require managerial attention in order to complete the project as quickly as possible.

Researchers and practitioners have enhanced this basic model in several ways. For example, if resources are available for expediting individual jobs, we could consider the most efficient use of these resources to complete the overall project as quickly as possible. Certain versions of this problem can be formulated as minimum cost flow problems.

The open pit mining problem is another network flow problem that arises from precedence conditions. Consider the open pit mine shown in Figure 1.5. As shown in this figure, we have divided the region to be mined into blocks. The provisions of any given mining technology, and perhaps the geography of the mine, impose restrictions on how we can remove the blocks: for example, we can never remove a block until we have removed any block that lies immediately above it, and restrictions on the "angle" of mining the blocks might impose similar precedence conditions. Suppose now that each block j has an associated revenue r_j (e.g., the value of the ore in the block minus the cost for extracting the block) and we wish to extract blocks to maximize overall revenue. If we let y_j be a zero-one variable indicating whether (y_j = 1) or not (y_j = 0) we extract block j, the problem will contain (i) a constraint y_j <= y_i (or, y_j - y_i <= 0) whenever we need to mine block i before block j, and (ii) an objective function specifying that we wish to maximize the total revenue, the sum of r_j y_j over all blocks j. The dual of the linear programming version of this problem (with the constraints 0 <= y_j <= 1 rather than y_j = 0 or 1) will be a network flow problem with a node for each block and a variable for each precedence constraint. This network will also have a dummy "collection node" 0 with demand equal to minus the sum of the r_j's, and an arc connecting it to each node j (that is, block j);

this arc corresponds to the upper bound constraint y_j <= 1 in the original linear program. The dual problem is one of finding a network flow that minimizes the sum of the flows on the arcs incident to node 0.

The critical path scheduling problem and the open pit mining problem illustrate one way that network flow problems arise indirectly. Whenever two variables in a linear program are related by a precedence condition, the variable corresponding to this constraint in the dual linear program will have a network flow structure. If the only constraints in the problem are precedence constraints, then the dual linear program will be a network flow problem.

Matrix Rounding of Census Information

The U.S. Census Bureau uses census information to construct millions of tables for a wide variety of purposes. By law, the Bureau has an obligation to protect the source of its information and not disclose statistics that can be attributed to any particular individual. It can attempt to do so by rounding the census information contained in any table. Consider, for example, the data shown in Figure 1.6(a). Since the upper leftmost entry in this table is a 1, the tabulated information might disclose information about a particular individual. We might disguise the information in this table as follows: round each entry in the table, including the row and column sums, either up or down to the nearest multiple of three, say, so that the entries in the table continue to add to the (rounded) row and column sums, and the overall sum of the entries in the new table adds to a rounded version of the overall sum in the original table. Figure 1.6(b) shows a rounded version of the data that meets this criterion. The problem can be cast as finding a feasible flow in a network and can be solved by an application of the maximum flow algorithm. The network contains a node for each row in the table and a node for each column. It contains an arc connecting node i (corresponding to row i) and node j (corresponding to column j); the flow on this arc should be the ij-th entry in the prescribed table, rounded either up or down. In addition, we add a supersource s to the network, connected to each row node i; the flow on this arc must be the i-th row sum, rounded up or down. Similarly, we add a supersink t with an arc connecting each column node j to this node; the flow on this arc must be the j-th column sum, rounded up or down. We also add an arc connecting node t and node s; the flow on this arc must be the overall sum, rounded up or down. Figure 1.7 illustrates the network flow problem corresponding to the census data specified in Figure 1.6.
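The construction is mechanical, as the following Python sketch shows (the 2 x 3 table and the node naming scheme are our own illustration, not the census data of Figure 1.6; each tuple is an arc with the lower and upper flow bounds that force one of the two bracketing multiples of the base):

    import math

    def rounding_network(table, base):
        rows, cols = len(table), len(table[0])
        def bounds(v):    # the two multiples of base that bracket v
            return (base * math.floor(v / base), base * math.ceil(v / base))
        arcs = []         # (tail, head, lower bound, upper bound)
        for i in range(rows):
            for j in range(cols):
                arcs.append((f"row{i}", f"col{j}") + bounds(table[i][j]))
        for i in range(rows):                    # supersource s -> row nodes
            arcs.append(("s", f"row{i}") + bounds(sum(table[i])))
        for j in range(cols):                    # column nodes -> supersink t
            arcs.append((f"col{j}", "t") + bounds(sum(r[j] for r in table)))
        arcs.append(("t", "s") + bounds(sum(map(sum, table))))   # overall sum
        return arcs

    for arc in rounding_network([[1, 4, 2], [3, 3, 3]], base=3):
        print(arc)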

Figure 1.6. (a) Census data to be rounded: rows give income categories (less than $10,000; $10,000-$30,000; $30,000-$50,000; more than $50,000) and columns give time in service in hours (<1, 1-5, >5), together with row and column totals. (b) A rounded version of the data.

If we rescale all the flows, measuring them in integral units of the rounding base (multiples of 3 in our example), then the flow on each arc must be integral at one of two consecutive integral values. The formulation of a more general version of this problem, corresponding to tables with more than two dimensions, will not be a network flow problem. Nevertheless, these problems have an imbedded network structure (corresponding to 2-dimensional "cuts" in the table) that we can exploit in devising algorithms to find rounded versions of the tables.

1.2 Complexity Analysis

There are three basic approaches for measuring the performance of an algorithm: empirical analysis, worst-case analysis, and average-case analysis. Empirical analysis typically measures the computational time of an algorithm using statistical sampling on a distribution (or several distributions) of problem instances. The major objective of empirical analysis is to estimate how algorithms behave in practice. Worst-case analysis aims to provide upper bounds on the number of steps that a given algorithm can take on any problem instance; therefore, this type of analysis provides performance guarantees. The objective of average-case analysis is to estimate the expected number of steps taken by an algorithm. Average-case analysis differs from empirical analysis because it provides rigorous mathematical proofs of average-case performance, rather than statistical estimates.

Each of these three performance measures has its relative merits, and each is appropriate for certain purposes. Nevertheless, this chapter will focus primarily on worst-case analysis, and only secondarily on empirical behavior. Researchers have designed many of the algorithms described in this chapter specifically to improve worst-case complexity while simultaneously maintaining good empirical behavior. Thus, for the algorithms we present, worst-case analysis is the primary measure of performance.

Worst-Case Analysis

For worst-case analysis, we bound the running time of network algorithms in terms of several basic problem parameters: the number of nodes (n), the number of arcs (m), and upper bounds C and U on the cost coefficients and the arc capacities. Whenever C (or U) appears in a complexity analysis, we assume that each cost (or capacity) is integer valued.

As an example of a worst-case result within this chapter, we will prove that the number of steps for the label correcting algorithm to solve the shortest path problem is less than pnm for some sufficiently large constant p. To avoid the need to compute or mention the constant p, researchers typically use a "big O" notation, replacing the expression "the label correcting algorithm requires pnm steps for some constant p" with the equivalent expression "the running time of the label correcting algorithm is O(nm)." The O( ) notation avoids the need to state a specific constant; instead, it indicates only the dominant terms of the running time. By dominant, we mean the term that would dominate all other terms for sufficiently large values of n and m. Therefore, these time bounds are called asymptotic running times. For example, if the actual running time is 10nm^2 + 2^100 n^2 m, then we would state that the running time is O(nm^2), assuming that m >= n. Observe that this running time indicates that the 10nm^2 term is dominant, even though for most practical values of n and m the 2^100 n^2 m term would dominate. Although ignoring the constant terms may have this undesirable feature, researchers have widely adopted the O( ) notation for several reasons:

1. Ignoring the constants greatly simplifies the analysis. Consequently, the use of the O( ) notation typically has permitted analysts to avoid the prohibitively difficult analysis required to compute the leading constants.

2. Estimating the constants correctly is fundamentally difficult. The least value of the constants is not determined solely by the algorithm; it is also highly sensitive to the choice of the computer language, and even to the choice of the computer.

3. For all of the algorithms that we present, the constant terms are relatively small integers for all the terms in the complexity bound.

4. For large practical problems, the constant factors do not contribute nearly as much to the running time as do the factors involving n, m, C or U.

Counting Steps

The running time of a network algorithm is determined by counting the number of steps it performs. The counting of steps relies on a number of assumptions, most of which are quite appropriate for most of today's computers.

A1.1 The computer carries out instructions sequentially, with at most one instruction being executed at a time.

A1.2 Each comparison and basic arithmetic operation counts as one step.

By invoking A1.1, we are adhering to a sequential model of computation; we will not discuss parallel implementations of network flow algorithms. A1.2 implicitly assumes that the only operations to be counted are comparisons and arithmetic operations. In fact, on today's computers we would obtain the same asymptotic worst-case results for the algorithms that we present even by counting all other computer operations. Our assumption that each operation, be it an addition or a division, takes equal time is justified by the fact that O( ) notation ignores differences in running times of at most a constant factor, which is the time difference between an addition and a multiplication on essentially all modern computers.

On the other hand, the assumption that each arithmetic operation takes one step may lead us to underestimate the asymptotic running time of arithmetic operations involving very large numbers on real computers, since in practice a computer must store large numbers in several words of its memory. Therefore, to perform each operation on very large numbers, a computer must access a number of words of data and thus takes more than a constant number of steps. To avoid this systematic underestimation of the running time, in comparing two running times we will typically assume that both C and U are polynomially bounded in n, i.e., C = O(n^k) and U = O(n^k) for some constant k. This assumption, known as the similarity assumption, is quite reasonable in practice. For example, if we were to restrict costs to be less than 100n^3, we would allow costs to be as large as 100,000,000,000 for networks with 1000 nodes.

Polynomial-Time Algorithms

An algorithm is said to be a polynomial-time algorithm if its running time is bounded by a polynomial function of the input length. The input length of a problem is the number of bits needed to represent that problem. For a network problem, the input length is a low order polynomial function of n, m, log C and log U (e.g., it is O((n + m)(log n + log C + log U))). Consequently, researchers refer to a network algorithm as a polynomial-time algorithm if its running time is bounded by a polynomial function in n, m, log C and log U. For example, the running time of one of the polynomial-time maximum flow algorithms we consider is O(nm + n^2 log U).

Other instances of polynomial time bounds are O(n^2 m) and O(n log n). A polynomial-time algorithm is said to be a strongly polynomial-time algorithm if its running time is bounded by a polynomial function in only n and m, and does not involve log C or log U. The maximum flow algorithm alluded to above, therefore, is not a strongly polynomial-time algorithm. The interest in strongly polynomial-time algorithms is primarily theoretical: if we invoke the similarity assumption, all polynomial-time algorithms are strongly polynomial-time, because log C = O(log n) and log U = O(log n).

An algorithm is said to be an exponential-time algorithm if its running time grows as a function that cannot be polynomially bounded. Some examples of exponential time bounds are O(nC), O(2^n), O(n!) and O(n^log n). (Observe that nC cannot be bounded by a polynomial function of n and log C.) We say that an algorithm is a pseudopolynomial-time algorithm if its running time is polynomially bounded in n, m, C and U. The class of pseudopolynomial-time algorithms is an important subclass of the exponential-time algorithms. Some instances of pseudopolynomial-time bounds are O(m + nC) and O(mC). For problems that satisfy the similarity assumption, pseudopolynomial-time algorithms become polynomial-time algorithms, but the algorithms will not be attractive if C and U are high degree polynomials in n.

There are two major reasons for preferring polynomial-time algorithms to exponential-time algorithms. First, any polynomial-time algorithm is asymptotically superior to any exponential-time algorithm; even in extreme cases this is true. For example, n^100 is smaller than 2^(0.0001n) if n is sufficiently large. Figure 1.8 illustrates the asymptotic superiority of polynomial-time algorithms. The second reason is more pragmatic: much practical experience has shown that, as a rule, polynomial-time algorithms perform better than exponential-time algorithms. Moreover, the polynomials encountered in practice typically have small degree.

Figure 1.8. Approximate values of polynomial and exponential time bounds.

1.3 Notation and Definitions

We consider a directed network G = (N, A) consisting of a set N of nodes and a set A of arcs whose elements are ordered pairs of distinct nodes. We let n = |N| and m = |A|. We associate with each arc (i, j) in A a cost c_ij and a capacity u_ij. Frequently, we distinguish two special nodes in a graph: the source s and the sink t.

An arc (i, j) has two end points, i and j. The arc (i, j) is incident to nodes i and j. We refer to node i as the tail and node j as the head of arc (i, j), and say that the arc (i, j) emanates from node i. The arc (i, j) is an outgoing arc of node i and an incoming arc of node j. The arc adjacency list A(i) of a node i is defined as the set of arcs emanating from node i, i.e., A(i) = {(i, j) in A : j in N}. The degree of a node is the number of incoming and outgoing arcs incident to that node.

A graph G = (N, A) is called a bipartite graph if its node set N can be partitioned into two subsets N1 and N2 so that for each arc (i, j) in A, i is in N1 and j is in N2. A graph G' = (N', A') is a subgraph of G = (N, A) if N' is a subset of N and A' is a subset of A. A graph G' = (N', A') is a spanning subgraph of G = (N, A) if N' = N and A' is a subset of A.

A directed path in G = (N, A) is a sequence of distinct nodes and arcs i1, (i1, i2), i2, (i2, i3), i3, ..., (i_{r-1}, i_r), i_r satisfying the property that (i_k, i_{k+1}) is in A for each k = 1, ..., r-1. An undirected path is defined similarly, except that for any two consecutive nodes i_k and i_{k+1} on the path, the path contains either arc (i_k, i_{k+1}) or arc (i_{k+1}, i_k). We refer to the nodes i2, i3, ..., i_{r-1} as the internal nodes of the path. A directed cycle is a directed path together with the arc (i_r, i1), and an undirected cycle is an undirected path together with the arc (i_r, i1) or (i1, i_r). We shall often use the terminology path to designate either a directed or an undirected path, whichever is appropriate from context; if any ambiguity might arise, we shall explicitly state directed or undirected path. For simplicity of notation, we shall often refer to a path as a sequence of nodes i1 - i2 - ... - i_k when its arcs are apparent from the problem context. Alternatively, we shall sometimes refer to a path as a set (or sequence) of arcs without mention of the nodes. We shall use similar conventions for representing cycles.

Two nodes i and j are said to be connected if the graph contains at least one undirected path from i to j. A graph is said to be connected if all pairs of nodes are connected; otherwise, it is disconnected. We assume throughout that the graph G is connected. We refer to any set Q contained in A with the property that the graph G' = (N, A - Q) is disconnected, and no superset of Q has this property, as a cutset of G. A cutset partitions the graph into two sets of nodes, X and N - X. We shall alternatively represent the cutset Q as the node partition (X, N - X).

A graph is acyclic if it contains no cycle. A tree is a connected acyclic graph. A subtree of a tree T is a connected subgraph of T. A tree T is said to be a spanning tree of G if T is a spanning subgraph of G. Each tree with more than one node has at least two leaf nodes (a node with degree equal to one is called a leaf node). A spanning tree of G = (N, A) has exactly n - 1 arcs; arcs belonging to a spanning tree T are called tree arcs, and arcs not belonging to T are called nontree arcs. A spanning tree contains a unique path between any two nodes. The addition of any nontree arc to a spanning tree creates exactly one cycle, and removing any arc in this cycle again creates a spanning tree. Removing any tree-arc creates two subtrees. Arcs whose end points belong to the two different subtrees of a spanning tree created by deleting a tree-arc constitute a cutset; if any arc belonging to this cutset is added to the subtrees, the resulting graph is again a spanning tree.

In this chapter, we assume that logarithms are of base 2 unless we state otherwise. We represent the logarithm of any number b by log b.

1.4 Network Representations

The complexity of a network algorithm depends not only on the algorithm, but also upon the manner used to represent the network within a computer and the storage scheme used for maintaining and updating the intermediate results. The running time of an algorithm (either worst-case or empirical) can often be improved by representing the network more cleverly and by using improved data structures. In this section, we discuss some popular ways of representing a network.

In Section 1.1, we have already described the node-arc incidence matrix representation of a network. This scheme requires nm words to store a network, of which only 2m words have nonzero values. Clearly, this representation is not space efficient. Another popular way to represent a network is the node-node adjacency matrix representation. This representation stores an n x n matrix I with the property that the element I_ij = 1 if arc (i, j) is in A, and I_ij = 0 otherwise.

Figure 1.9. (a) A network example. (b) The forward star representation (arc number, (tail, head), cost). (c) The reverse star representation.

The arc costs and capacities are also stored in n x n matrices. This representation is adequate for very dense networks, but it is not attractive for storing a sparse network.

The forward star and reverse star representations are probably the most popular ways to represent networks, both sparse and dense. (These representations are also known as incidence list representations in the computer science literature.) The forward star representation numbers the arcs in a certain order: we first number the arcs emanating from node 1, then the arcs emanating from node 2, and so on. Arcs emanating from the same node can be numbered arbitrarily. We then sequentially store the (tail, head) and the cost of the arcs in this order. We also maintain a pointer with each node i, denoted by point(i), that indicates the smallest number in the arc list of an arc emanating from node i. Hence the outgoing arcs of node i are stored at positions point(i) to (point(i+1) - 1) in the arc list. If point(i) > point(i+1) - 1, then node i has no outgoing arc. For consistency, we set point(1) = 1 and point(n+1) = m+1. Figure 1.9(b) specifies the forward star representation of the network given in Figure 1.9(a).

The forward star representation allows us to determine efficiently the set of outgoing arcs at any node. To determine, simultaneously, the set of incoming arcs at any node efficiently, we need an additional data structure known as the reverse star representation. Starting from a forward star representation, we can create a reverse star representation as follows. We examine the nodes j = 1 to n in order and sequentially store the (tail, head) and the cost of the incoming arcs of node j. We also maintain a reverse pointer with each node i, denoted by rpoint(i), which denotes the first position in these arrays that contains information about an incoming arc at node i. For the sake of consistency, we set rpoint(1) = 1 and rpoint(n+1) = m+1. As earlier, we store the incoming arcs of node i at positions rpoint(i) to (rpoint(i+1) - 1). This data structure gives us the representation shown in Figure 1.9(c).

Observe that by storing both the forward and reverse star representations, we will maintain a significant amount of duplicate information. We can avoid this duplication by storing arc numbers instead of the (tail, head) and cost of the arcs. For example, the arc (1, 2) has arc number 1 and the arc (3, 2) has arc number 4 in the forward star representation; so instead of storing (tail, head) and cost again, we can simply store the arc numbers, and once we know an arc's number, we can always retrieve the associated information from the forward star representation. We store the arc numbers in an m-array trace. Figure 1.9(d) gives the complete trace array.
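A minimal Python sketch of the forward star representation follows (the four-arc network is our own, not that of Figure 1.9). Sorting the arc list groups the arcs by tail node, and point[i] records where the arcs leaving node i begin:

    def forward_star(n, arc_list):
        # arc_list: (tail, head, cost) triples with nodes numbered 1..n
        arcs = sorted(arc_list)               # arcs of node 1 first, then node 2, ...
        point = [0] * (n + 2)                 # 1-based pointers; slot 0 unused
        point[1] = 1
        for i in range(1, n + 1):
            deg = sum(1 for (tail, _, _) in arcs if tail == i)
            point[i + 1] = point[i] + deg     # so point[n+1] = m + 1
        return arcs, point

    arcs, point = forward_star(4, [(1, 2, 5), (1, 3, 2), (3, 2, 1), (4, 1, 7)])
    i = 3                                     # outgoing arcs of node 3 occupy
    for k in range(point[i], point[i + 1]):   # positions point[3] .. point[4]-1
        print(arcs[k - 1])                    # k-1 because Python lists are 0-based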

1.5 Search Algorithms

Search algorithms are fundamental graph techniques; different variants of search lie at the heart of many network algorithms. In this section, we discuss two of the most commonly used search techniques: breadth-first search and depth-first search.

Search algorithms attempt to find all nodes in a network that satisfy a particular property. For purposes of illustration, let us suppose that we wish to find all the nodes in a graph G = (N, A) that are reachable through directed paths from a distinguished node s, called the source. At every point in the search procedure, all nodes in the network are in one of two states: marked or unmarked. The marked nodes are known to be reachable from the source, and the status of unmarked nodes is yet to be determined. We call an arc (i, j) admissible if node i is marked and node j is unmarked, and inadmissible otherwise. Initially, only the source node is marked. Subsequently, by examining admissible arcs, the search algorithm marks more nodes. Whenever the procedure marks a new node j by examining an admissible arc (i, j), we say that node i is a predecessor of node j, i.e., pred(j) = i. The algorithm terminates when the graph contains no admissible arcs. The following algorithm summarizes the basic iterative steps.

algorithm SEARCH;
begin
    unmark all nodes in N;
    mark node s;
    LIST := {s};
    while LIST is not empty do
    begin
        select a node i in LIST;
        if node i is incident to an admissible arc (i, j) then
        begin
            mark node j;
            pred(j) := i;
            add node j to LIST;
        end
        else delete node i from LIST;
    end;
end;

When this algorithm terminates, it has marked all nodes in G that are reachable from s via a directed path. The predecessor indices define a tree consisting of the marked nodes.

We use the following data structure to identify admissible arcs. (The same data structure is also used in the maximum flow and minimum cost flow algorithms discussed in later sections.) We maintain with each node i the list A(i) of arcs emanating from it. Arcs in each list can be arranged arbitrarily. Each node has a current arc (i, j), which is the current candidate for being examined next. Initially, the current arc of node i is the first arc in A(i). The search algorithm examines this list sequentially: whenever the current arc is inadmissible, it makes the next arc in the arc list the current arc, and when it reaches the end of the arc list, it declares that the node has no admissible arc.

It is easy to show that the search algorithm runs in O(m + n) = O(m) time. Each iteration of the while loop either finds an admissible arc or does not. In the former case, the algorithm marks a new node and adds it to LIST; in the latter case, it deletes a marked node from LIST. Since the algorithm marks any node at most once, it executes the while loop at most 2n times. Now consider the effort spent in identifying the admissible arcs. For each node i, we scan the arcs in A(i) at most once. Hence, the search algorithm examines a total of $\sum_{i \in N} |A(i)| = m$ arcs, and thus terminates in O(m) time.
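A minimal Python rendering of this procedure follows (an illustrative sketch; graph[i] plays the role of the list A(i), and this variant removes a node only after scanning its whole arc list, which gives the same O(m) bound). As discussed next, selecting nodes from LIST in queue order yields breadth-first search, and in stack order yields depth-first search:

    from collections import deque

    def search(graph, s, breadth_first=True):
        marked = {s}
        pred = {}
        LIST = deque([s])
        while LIST:
            i = LIST.popleft() if breadth_first else LIST.pop()
            for j in graph.get(i, ()):     # scan the arcs (i, j) in A(i)
                if j not in marked:        # arc (i, j) is admissible
                    marked.add(j)
                    pred[j] = i
                    LIST.append(j)
        return marked, pred                # pred defines the search tree

    graph = {1: [2, 3], 2: [4], 3: [4], 4: []}
    print(search(graph, 1))                # all four nodes are reachable from 1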

in the problem H = maximum mCU. in the m. flow problem H = mU.e. that data are integral and that algorithms maintain integer solutions at intermediate stages of computations. For each node i. as usual. meeisured as minimum number of arcs in a directed path from s to Another popular method is to maintain the set LIST as a stack. It marks nodes s to i in the nondecreasing order of their distance from the with the distance from i. We assume. the search algorithm selects the marked nodes in the last-in. Geometric Improvement Approach The geometric improvement approach shows polynomial time if that an algorithm runs in at every iteration it makes an improvement proportioT\al to the solutioiis. the search in algorithm examines a total of ie X A(i) = m N and thus terminates 0(m) time. This s. and U. Hence. For cost flow instance. i. this version of search is called a breadth-first search. i. we scan arcs in A(i) arcs. and the scaling approach. H is a function of n. In this section. then the search algorithm selects the marked nodes in the order. and backs up one node initiate a new probe when it can mark no new nodes from the tip of the path. first-out to the rear. s. C. L6 Developing Polynomial-Time Algorithms Researchers frequently employ two important approaches to obtain polynomial algorithms for network flow problems: the geometric improvement (or linear convergence) approach. nodes to LIST.. The algorithm. will we briefly outline the basic ideas all underlying these two approaches. the set LIST is maintained as a queue. nodes are always selected from the front and added to the front. in this instance.27 admissible arcs. creating a path as long as possible. nodes are always selected from the front and added first-in. difference between the objective function values of the current and optimum Let H be an upper bound on the difference in objective function values between any two For most network problems. This algorithm to performs a deep probe. feasible solutions. Therefore. as described. first-out order.. kind of search amounts to visiting the nodes in order of increasing distance from therefore. this version of search is called a depth-first search. at most once.e. does not specify the order for examining and adding If Different rules give rise to different search techniques. and minimum .

11 presents an example of a bit-scaling algorithm for . we describe the simplest form of scaling which we call bit-scaling. Since H is the maximum possible improvement and every objective function value is an integer.3) implies that a(z^ .e.3. the improvement at iteration k+1 is at least a times the total possible improvement) some constant a xvith < a< 1.z*)/2 units. similar result applies to maximization versions of optimization problems. If in each iteration.e. Then the algorithm terminates in O((log H)/a) iterations..z*).z*)/2 units. the algorithm must have reduced the total possible improvement (z*^..28 Lemma 1. Further. therefore. Section 5. On the other hand. The maximum augmenting path algorithm for the 4. Proof. (i. then (1. the algorithm must terminate wathin 0((log H)/a) iterations." a the statement geometric convergence rate are polynomial time In order to develop polynomial time algorithms using this approach. and. Consider a consecutive sequence of starting 2/a iterations from iteration k. suppose that the algorithm guarantees that (2k_2k+l) ^ a(z^-z*) (13) for (i. q the algorithm improves the objective function value by no more than aCz*^ . In this discussion. if at some iteration.) and Scaling Approach Researchers have extensively used an approach called scaling to derive polynomial-time algorithms for a wide variety of network and combinatorial optimization problems.2 maximum flow problem and the maximum improvement algorithm minimum cost flow problem are two examples of this approach. the algorithm improves the objective function value by at least aCz*^ .1. The quantity (z*^ - z*) represents the total possible improvement in the objective function value after the k-th iteration. (See Sections 5. We A have stated this result for minimization versions of optimization problems. we can look for local improvement techniques that lead to large fixed percentage) improvements for the in the objective function. The geometric improvement approach might be summarized by "network algorithms that have algorithms.z*)/2 ^ z^ - z^-^^ ^ aCz^ . Suppose r^ is the objective function value of a minimization problem of some solution at the k-th iteration of an algorithm and 2* is the minimum objective function value. then the algorithm would determine an optimum solution within these 2/a iterations.z*) by a factor of 2 within these 2/a iterations.

Using the bit-scaling technique, we solve a problem P parametrically as a sequence of problems P_1, P_2, ..., P_K: the problem P_1 approximates the data to the first bit, the problem P_2 approximates the data to the second bit, and each successive problem is a better approximation until P_K = P. Further, for each k = 2, ..., K, the optimum solution of problem P_(k-1) serves as the starting solution for problem P_k. The scaling technique is useful whenever reoptimization from a good starting solution is more efficient than solving the problem from scratch.

For example, consider a network flow problem whose largest arc capacity has value U. Let K = ⌈log U⌉ and suppose that we represent each arc capacity as a K-bit binary number, adding leading zeros if necessary to make each capacity K bits long. Then the problem P_k would consider the capacity of each arc as the k leading bits in its binary representation. Figure 1.10 illustrates an example of this type of scaling. The manner of defining arc capacities easily implies the following observation.

Observation. The capacity of an arc in P_k is twice that in P_(k-1) plus 0 or 1.

Sections 4 and 5, using more refined versions of scaling, describe polynomial-time algorithms for the maximum flow and minimum cost flow problems.
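Extracting the k leading bits of a K-bit capacity is a right shift. A minimal sketch in Python (the data and names are ours) of how the capacities of the problems P_k arise:

    import math

    def scaled_capacities(u, k, K):
        # Capacity of each arc in problem P_k: the k leading bits of its
        # K-bit binary representation, i.e., floor(u_ij / 2^(K-k)).
        return {arc: cap >> (K - k) for arc, cap in u.items()}

    u = {(1, 2): 4, (1, 3): 5, (2, 3): 2}                     # original capacities
    K = max(1, math.ceil(math.log2(max(u.values()) + 1)))     # bits needed, here 3
    for k in range(1, K + 1):
        print(k, scaled_capacities(u, k, K))
    # k=1: {(1, 2): 1, (1, 3): 1, (2, 3): 0}
    # k=2: {(1, 2): 2, (1, 3): 2, (2, 3): 1}
    # k=3: {(1, 2): 4, (1, 3): 5, (2, 3): 2}   (the original problem P)

Each capacity in P_k is indeed twice its value in P_(k-1) plus the k-th bit, in line with the observation above.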

. P2.30 100 <=^ (a) (b) PI : P2 100 P3: 010 (c) Figure 1. and P3. (a) Network with arc capacities. (b) (c) Network with binary expansion of The problems Pj.10. arc capacities. Example of a bit-scaling technique.

The following algorithm encodes a generic version of the bit-scaling technique.

algorithm BIT-SCALING;
begin
    obtain an optimum solution of P_1;
    for k := 2 to K do
    begin
        reoptimize using the optimum solution of P_(k-1);
        obtain an optimum solution of P_k;
    end;
end;

This approach is very robust, and variants of it have led to improved algorithms for both the maximum flow and minimum cost flow problems. This approach works well for these applications, in part, because of the following reasons. (i) The problem P_1 is generally easy to solve. (ii) The optimal solution of problem P_(k-1) is an excellent starting solution for problem P_k, since P_(k-1) and P_k are quite similar; hence, the optimum solution of P_(k-1) can be easily reoptimized to obtain an optimum solution of P_k. (iii) For problems that satisfy the similarity assumption, the number of problems solved is O(log n). Thus, for this approach to work, reoptimization needs to be only a little more efficient (i.e., by a factor of log n) than optimization.

Consider, for example, the maximum flow problem. Let v_k denote the maximum flow value for problem P_k and let x_k denote an arc flow corresponding to v_k. In the problem P_k, the capacity of an arc is twice its capacity in P_(k-1) plus 0 or 1. If we multiply the optimum flow x_(k-1) for P_(k-1) by 2, we obtain a feasible flow for P_k. Moreover, v_k - 2v_(k-1) ≤ m, because multiplying the flow x_(k-1) by 2 takes care of the doubling of the capacities, and the additional 1's can increase the maximum flow value by at most m units (if we add 1 to the capacity of any arc, then we increase the maximum flow from source to sink by at most 1). It is easier to reoptimize such a maximum flow problem: for example, the classical labeling algorithm, as discussed in Section 4.1, would perform the reoptimization in at most m augmentations, taking O(m²) time. Hence, the scaling version of the labeling algorithm runs in O(m² log U) time, whereas the non-scaling version runs in O(nmU) time. The former bound is polynomial and the latter bound is only pseudopolynomial. Thus this simple scaling algorithm improves the running time dramatically.
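Specialized to the maximum flow problem, the loop above can be written as the following sketch (Python; the routine augment and the data layout are our assumptions, standing in for any augmenting path routine such as the labeling algorithm of Section 4.1):

    def bit_scaling_max_flow(u, K, source, sink, augment):
        # u: dict {(i, j): K-bit integer capacity}.
        # augment(cap, flow, source, sink): assumed routine that raises
        # `flow` to a maximum flow for capacities `cap`; starting from a
        # feasible flow it needs at most m augmentations per phase.
        flow = {arc: 0 for arc in u}
        for k in range(1, K + 1):
            # Problem P_k: the k leading bits of each capacity.
            cap_k = {arc: c >> (K - k) for arc, c in u.items()}
            # Doubling the optimum flow of P_(k-1) is feasible for P_k,
            # since each capacity doubles and possibly gains a 1.
            flow = {arc: 2 * f for arc, f in flow.items()}
            flow = augment(cap_k, flow, source, sink)
        return flow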

2. BASIC PROPERTIES OF NETWORK FLOWS

As a prelude to the rest of this chapter, in this section we describe several basic properties of network flows. We begin by showing how network flow problems can be modeled in either of two equivalent ways: as flows on arcs, as in our formulation in Section 1.1, or as flows on paths and cycles. Then we partially characterize optimal solutions to network flow problems and demonstrate that these problems always have certain special types of optimal solutions (so-called cycle free and spanning tree solutions); consequently, in designing algorithms, we need only consider these special types of solutions. We next establish several important connections between network flows and linear and integer programming. Finally, we discuss a few useful transformations of network flow problems.

2.1 Flow Decomposition Properties and Optimality Conditions

It is natural to view network flow problems in either of two ways: as flows on arcs or as flows on paths and cycles. In the context of developing underlying theory, models, or algorithms, each view has its own advantages. Therefore, as the first step in our discussion, we will find it worthwhile to develop several connections between these alternate formulations.

In the arc formulation (1.1), the basic decision variables are flows x_ij on arcs (i, j). The path and cycle formulation starts with an enumeration of the directed paths P and directed cycles Q of the network. Its decision variables are h(p), the flow on path p, and f(q), the flow on cycle q, which are defined for every directed path p in P and every directed cycle q in Q.

Notice that every set of path and cycle flows uniquely determines arc flows in a natural way: the flow x_ij on arc (i, j) equals the sum of the flows h(p) and f(q) for all paths p and cycles q that contain this arc. We formalize this observation by defining some new notation: δ_ij(p) equals 1 if arc (i, j) is contained in path p, and 0 otherwise; similarly, δ_ij(q) equals 1 if arc (i, j) is contained in cycle q, and 0 otherwise. Then

x_ij  =  Σ_{p ∈ P} δ_ij(p) h(p)  +  Σ_{q ∈ Q} δ_ij(q) f(q).
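The formula above is easy to mechanize. A small sketch (ours) that composes arc flows from given path and cycle flows, representing a path as its node sequence and a cycle as a node sequence that repeats its first node at the end:

    def arc_flows(path_flows, cycle_flows):
        # path_flows, cycle_flows: lists of (node_sequence, flow_value).
        # Returns x with x[i, j] = sum of h(p) and f(q) over the paths
        # and cycles containing arc (i, j).
        x = {}
        for seq, val in list(path_flows) + list(cycle_flows):
            for arc in zip(seq, seq[1:]):    # consecutive nodes form arcs
                x[arc] = x.get(arc, 0) + val
        return x

For example, arc_flows([([1, 2, 3], 5)], [([2, 3, 2], 1)]) yields {(1, 2): 5, (2, 3): 6, (3, 2): 1}.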

If the flow vector x is expressed in this way, we say that the flow is represented as path flows and cycle flows, and that the path flow vector h and cycle flow vector f constitute a path and cycle flow representation of the flow. Can we reverse this process? That is, can we decompose any arc flow into (i.e., represent it as) path and cycle flows? The following result provides an affirmative answer to this question.

Theorem 2.1: Flow Decomposition Property (Directed Case). Every directed path and cycle flow has a unique representation as nonnegative arc flows. Conversely, every nonnegative arc flow x can be represented as a directed path and cycle flow (though not necessarily uniquely) with the following two properties:

C2.1. Every directed path with positive flow connects a supply node of x to a demand node of x.

C2.2. At most n+m paths and cycles have nonzero flow; out of these, at most m cycles have nonzero flow.

Proof. In the light of our previous observations, we need to establish only the converse assertions. We give an algorithmic proof to show that any feasible arc flow x can be decomposed into path and cycle flows. Suppose i_0 is a supply node. Then some arc (i_0, i_1) carries a positive flow. If i_1 is a demand node, then we stop; otherwise the mass balance constraint (1.1b) of node i_1 implies that some other arc (i_1, i_2) carries positive flow. We repeat this argument until either we encounter a demand node or we revisit a previously examined node. Note that one of these two cases will occur within n steps. In the former case we obtain a directed path p from the supply node i_0 to some demand node i_k, consisting solely of arcs with positive flow, and in the latter case we obtain a directed cycle q. If we obtain a directed path, we let h(p) = min [b(i_0), -b(i_k), min {x_ij : (i, j) ∈ p}], and redefine b(i_0) := b(i_0) - h(p), b(i_k) := b(i_k) + h(p), and x_ij := x_ij - h(p) for each arc (i, j) in p. If we obtain a cycle q, we let f(q) = min {x_ij : (i, j) ∈ q} and redefine x_ij := x_ij - f(q) for each arc (i, j) in q.

We repeat this process with the redefined problem until the network contains no supply node (and hence no demand node). Then we select a transhipment node with at least one outgoing arc with positive flow as the starting node, and repeat the procedure, which in this case must find a cycle. We terminate when x = 0 for the redefined problem. Clearly, the original flow is the sum of the flows on the paths and cycles identified by the procedure. Now observe that each time we identify a path, we reduce the supply/demand of some node or the flow on some arc to zero, and each time we identify a cycle, we reduce the flow on some arc to zero. Consequently, the path and cycle representation of the given flow x contains at most (n + m) total paths and cycles, of which there are at most m cycles.
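The proof is constructive and translates directly into code. A compact sketch (ours; it assumes x is feasible, i.e., satisfies the mass balance constraints) of the decomposition procedure:

    def decompose(x, b):
        # x: dict {(i, j): nonnegative flow}; b: dict {node: supply (+) or
        # demand (-)}. Returns (paths, cycles) as lists of
        # (node_sequence, flow_value), in the format accepted by arc_flows.
        x, b = dict(x), dict(b)
        out = {}
        for (i, j), v in x.items():
            out.setdefault(i, []).append(j)

        def successor(i):                    # some arc (i, j) with positive flow
            for j in out.get(i, []):
                if x[(i, j)] > 0:
                    return j

        paths, cycles = [], []
        while True:
            start = next((i for i, v in b.items() if v > 0), None)   # a supply node
            if start is None:                # else: the tail of a positive arc
                start = next((i for (i, j), v in x.items() if v > 0), None)
            if start is None:
                return paths, cycles         # x = 0; we are done
            seq, seen = [start], {start: 0}
            while True:
                i = seq[-1]
                if b.get(i, 0) < 0 and len(seq) > 1:     # reached a demand node
                    delta = min(min(x[a] for a in zip(seq, seq[1:])),
                                b[seq[0]], -b[i])
                    b[seq[0]] -= delta; b[i] += delta
                    for a in zip(seq, seq[1:]):
                        x[a] -= delta
                    paths.append((seq, delta))
                    break
                j = successor(i)                         # exists, by mass balance
                if j in seen:                            # revisited a node: a cycle
                    cyc = seq[seen[j]:] + [j]
                    delta = min(x[a] for a in zip(cyc, cyc[1:]))
                    for a in zip(cyc, cyc[1:]):
                        x[a] -= delta
                    cycles.append((cyc, delta))
                    break
                seen[j] = len(seq); seq.append(j)

Each pass zeroes out a supply, a demand, or an arc flow, so at most n + m paths and cycles are produced, mirroring property C2.2.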

It is possible to state the decomposition property in a somewhat more general form that permits arc flows x_ij to be negative. In this case, even though the underlying network is directed, the paths and cycles can be undirected and can contain arcs with negative flows. Each undirected path, which has an orientation from its initial to its final node, has forward arcs and backward arcs, which are defined as arcs along and opposite to the path's orientation. A path flow will be defined on p as a flow with value h(p) on each forward arc and -h(p) on each backward arc. We define a cycle flow in the same way. In this more general setting, our representation is still valid with the following provision: we now define δ_ij(p) and δ_ij(q) to be -1 if arc (i, j) is a backward arc of the path or cycle.

Theorem 2.2: Flow Decomposition Property (Undirected Case). Every path and cycle flow has a unique representation as arc flows. Conversely, every arc flow x can be represented as an (undirected) path and cycle flow (though not necessarily uniquely) with the following three properties:

C2.3. Every path with positive flow connects a source node of x to a sink node of x.

C2.4. For every path and cycle, any arc with positive flow occurs as a forward arc and any arc with negative flow occurs as a backward arc.

C2.5. At most n+m paths and cycles have nonzero flow; out of these, at most m cycles have nonzero flow.

Proof. This proof is similar to that of Theorem 2.1. The major modification is that we extend the path at some node i_(k-1) by adding an arc (i_(k-1), i_k) with positive flow or an arc (i_k, i_(k-1)) with negative flow. The other steps can be modified accordingly.

The flow decomposition property has a number of important consequences. As one example, it enables us to compare any two solutions of a network flow problem in a particularly convenient way and to show how we can build one solution from another by a sequence of simple operations. We need the concept of augmenting cycles with respect to a flow x. A cycle q with flow f(q) > 0 is called an augmenting cycle with respect to a flow x if

0 ≤ x_ij + δ_ij(q) f(q) ≤ u_ij,  for each arc (i, j) ∈ q.

In other words, the flow remains feasible if some positive amount of flow (namely f(q)) is augmented around the cycle q. We define the cost of an augmenting cycle q as c(q) = Σ_{(i,j) ∈ q} c_ij δ_ij(q). The cost of an augmenting cycle represents the change in the cost of a feasible solution if we augment along the cycle with one unit of flow; the change in flow cost for augmenting around cycle q with flow f(q) is c(q) f(q).

Suppose that x and y are any two solutions to a network flow problem, i.e., Nx = b, 0 ≤ x ≤ u, and Ny = b, 0 ≤ y ≤ u. Then the difference vector z = y - x satisfies the homogeneous equations Nz = Ny - Nx = 0. Consequently, flow decomposition implies that z can be represented as cycle flows; that is, we can find at most r ≤ m cycle flows f(q_1), f(q_2), ..., f(q_r) satisfying the property that for each arc (i, j) of A, arc (i, j) is either a forward arc on each cycle q_1, q_2, ..., q_r that contains it or a backward arc on each cycle q_1, q_2, ..., q_r that contains it, and

z_ij = δ_ij(q_1) f(q_1) + δ_ij(q_2) f(q_2) + ... + δ_ij(q_r) f(q_r).

Since y = x + z, for any arc (i, j) we have

0 ≤ x_ij + δ_ij(q_1) f(q_1) + δ_ij(q_2) f(q_2) + ... + δ_ij(q_r) f(q_r) = y_ij ≤ u_ij.

Moreover, by condition C2.4 of the flow decomposition property, each term between x_ij and the rightmost inequality in this expression has the same sign; consequently, for each cycle q_k,

0 ≤ x_ij + δ_ij(q_k) f(q_k) ≤ u_ij.

That is, if we add any one of these cycle flows q_k to x, the resulting solution remains feasible on each arc (i, j). Hence, each cycle q_1, q_2, ..., q_r is an augmenting cycle with respect to the flow x.

We have thus established the following important result.

Theorem 2.3: Augmenting Cycle Property. Let x and y be any two feasible solutions of a network flow problem. Then y equals x plus the flow on at most m augmenting cycles with respect to x. Further, the cost of y equals the cost of x plus the cost of flow on the augmenting cycles.

The augmenting cycle property permits us to formulate optimality conditions characterizing the optimum solution of the minimum cost flow problem. Suppose that x is any feasible solution, that x* is an optimum solution of the minimum cost flow problem, and that x ≠ x*. The augmenting cycle property implies that the difference vector x* - x can be decomposed into at most m augmenting cycles with respect to x, and the sum of the costs of these cycles equals cx* - cx. If cx* < cx, then one of these cycles must have a negative cost. Further, if every augmenting cycle in the decomposition of x* - x has a nonnegative cost, then cx* - cx ≥ 0; since x* is an optimum flow, cx* = cx and x is also an optimum flow. We have thus obtained the following result.

Theorem 2.4: Optimality Conditions. A feasible flow x is an optimum flow if and only if it admits no negative cost augmenting cycle.

2.2 Cycle Free and Spanning Tree Solutions

We start by assuming that x is a feasible solution to the network flow problem

minimize {cx : Nx = b and l ≤ x ≤ u}

and that l = 0. Much of the underlying theory of network flows stems from a simple observation concerning the example in Figure 2.1. In the example, the arc flows and costs are given beside each arc.

Figure 2.1. Improving flow around a cycle.

The network in this figure contains flow around an undirected cycle. Note that adding a given amount of flow θ to all the arcs pointing in a clockwise direction and subtracting this flow from all arcs pointing in the counterclockwise direction preserves the mass balance at each node. Also, note that the per unit incremental cost for this flow change is the sum of the costs of the clockwise arcs minus the sum of the costs of the counterclockwise arcs, that is,

Per unit change in cost = Δ = $2 + $1 + $3 - $4 - $3 = -$1.

Let us refer to this incremental cost Δ as the cycle cost, and say that the cycle is a negative, positive or zero cost cycle depending upon the sign of Δ. Let us assume for the time being that all arcs are uncapacitated. Since the cycle cost here is negative, to minimize cost in our example we set θ as large as possible while preserving nonnegativity of all arc flows, i.e., we require 2 + θ ≥ 0, 4 + θ ≥ 0, 5 + θ ≥ 0, and 3 - θ ≥ 0, 4 - θ ≥ 0; that is, we select θ in the interval -2 ≤ θ ≤ 3. Since the objective function depends linearly on θ, we optimize it by selecting θ = 3 or θ = -2, at which point one arc in the cycle has a flow value of zero; since Δ < 0 here, we set θ = 3. Note that in the new solution (at θ = 3), we no longer have positive flow on all arcs in the cycle.

Similarly, if the cycle cost were positive (e.g., if we were to change c_12 from $2 to $4), then we would decrease θ as much as possible (i.e., select θ = -2), and again find a lower cost solution with the flow on at least one arc in the cycle at value zero.
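The arithmetic of this example is easy to mechanize. A small sketch (ours; the flow values below are chosen for illustration and are not taken from the figure) that computes the cycle cost Δ and the cost-minimizing flow change θ for an uncapacitated cycle:

    def cycle_improvement(arcs):
        # arcs: list of (cost, flow, sign), sign +1 for clockwise arcs
        # (flow becomes flow + theta) and -1 for counterclockwise arcs
        # (flow becomes flow - theta); only nonnegativity constrains theta.
        delta = sum(sign * cost for cost, _, sign in arcs)
        if delta < 0:       # push flow clockwise until a ccw arc hits zero
            theta = min(flow for _, flow, sign in arcs if sign == -1)
        elif delta > 0:     # push flow counterclockwise until a cw arc hits zero
            theta = -min(flow for _, flow, sign in arcs if sign == +1)
        else:
            theta = 0       # indifferent: any feasible theta costs the same
        return delta, theta

    # A five-arc cycle with clockwise costs $2, $1, $3 and counterclockwise
    # costs $4, $3, so Delta = -$1 as in the example above:
    print(cycle_improvement([(2, 5, +1), (1, 2, +1), (3, 4, +1),
                             (4, 3, -1), (3, 4, -1)]))    # prints (-1, 3)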

We can extend this observation in several ways:

(i) If the per unit cycle cost Δ = 0, we are indifferent to all solutions in the interval -2 ≤ θ ≤ 3 and therefore can again choose a solution as good as the original one, but with the flow of at least one arc in the cycle at value zero.

(ii) If we impose upper bounds on the flow, such as 6 units on all arcs, then the range of flows that preserves feasibility (i.e., mass balances, lower and upper bounds on flows) is again an interval, in this case -2 ≤ θ ≤ 1, and we can find a solution as good as the original one by choosing θ = -2 or θ = 1. At these values of θ, the solution is cycle free: that is, for some arc on the cycle, either the flow is zero (the lower bound) or the flow is at its upper bound (x_12 = 6 at θ = 1).

Note that the lower bound assumption imposed upon the objective value is necessary to rule out situations in which the flow change variable θ in our prior argument can be made arbitrarily large in a negative cost cycle, or arbitrarily small (negative) in a positive cost cycle; for example, this condition rules out any negative cost directed cycle with no upper bounds on its arc flows.

In general, our prior observations apply to any cycle in a network. Therefore, given any initial flow, we can apply our previous argument repeatedly, one cycle at a time, and establish the following fundamental result:

Theorem 2.5: Cycle Free Property. If the objective function value of the network optimization problem minimize {cx : Nx = b, l ≤ x ≤ u} is bounded from below on the feasible region and the problem has a feasible solution, then at least one cycle free solution solves the problem.

Some additional notation will be helpful in encapsulating and summarizing our observations up to this point. Let us say that an arc (i, j) is a free arc with respect to a given feasible flow x if x_ij lies strictly between the lower and upper bounds imposed upon it. We will also say that arc (i, j) is restricted if its flow x_ij equals either its lower or upper bound. In this terminology, a solution x has the "cycle free property" if the network contains no cycle made up entirely of free arcs.

It is useful to interpret the cycle free property in another way. Suppose that the network is connected (i.e., there is an undirected path connecting every pair of nodes). Then, either a given cycle free solution x contains a free arc that is incident to each node in the network, or we can add to the free arcs some restricted arcs so that the resulting set S of arcs has the following three properties:

(i) S contains all the free arcs in the current solution,

(ii) S contains no undirected cycles, and

(iii) No superset of S satisfies properties (i) and (ii).

We will refer to any set S of arcs satisfying (i) through (iii) as a spanning tree of the network, and to any feasible solution x for the network together with a spanning tree S that contains all free arcs as a spanning tree solution. (At times we will also refer to a given cycle free solution x as a spanning tree solution, with the understanding that restricted arcs may be needed to form the spanning tree S.)

Figure 2.2 illustrates a spanning tree corresponding to a cycle free solution. Note that it may be possible (and often is) to complete the set of free arcs into a spanning tree in several ways (e.g., replace arc (2, 4) with arc (3, 5) in Figure 2.2(c)); therefore, a given cycle free solution can correspond to several spanning trees S.

We will say that a spanning tree solution x is nondegenerate if the set of free arcs forms a spanning tree. In this case, the spanning tree S corresponding to the flow x is unique. If the free arcs do not span (i.e., are not incident to) all the nodes, then any spanning tree corresponding to this solution will contain at least one arc whose flow equals the arc's lower or upper bound. In this case, we will say that the spanning tree is degenerate.

Figure 2.2. Converting a cycle free solution to a spanning tree solution: (a) an example network with arc flows and capacities represented as (x_ij, u_ij); (b) a cycle free solution; (c) a spanning tree solution.

When restated in the terminology of spanning trees, the cycle free property becomes another fundamental result in network flow theory.

Theorem 2.6: Spanning Tree Property. If the objective function value of the network optimization problem minimize {cx : Nx = b, l ≤ x ≤ u} is bounded from below on the feasible region and the problem has a feasible solution, then at least one spanning tree solution solves the problem.

We might note that the spanning tree property is valid for concave cost versions of the flow problem as well, i.e., those versions where the objective function is a concave function of the flow vector x. This extended version of the spanning tree property is valid because if the incremental cost of a cycle is negative at some point, then the incremental cost remains negative (by concavity) as we augment a positive amount of flow around the cycle. Hence, we can increase flow in a negative cost cycle until at least one arc reaches its lower or upper bound.

2.3 Networks, Linear and Integer Programming

The cycle free property and spanning tree property have many other important consequences. In particular, these two properties imply that network flow theory lies at the cusp between two large and important subfields of optimization: linear and integer programming. This positioning may, to a large extent, account for the emergence of network flow theory as a cornerstone of mathematical programming.

Triangularity Property

Before establishing our first results relating network flows to linear and integer programming, we first make a few observations. Note that any spanning tree S has at least one (actually at least two) leaf nodes, that is, a node that is incident to only one arc in the spanning tree. Consequently, if we rearrange the rows and columns of the node-arc incidence matrix of S so that the leaf node is row 1 and its incident arc is column 1, then row 1 has only a single nonzero entry, a +1 or a -1, which lies on the diagonal of the node-arc incidence matrix. If we now remove this leaf node and its incident arc from S, the resulting network is a spanning tree on the remaining nodes. Consequently, by rearranging all but row and column 1 of the node-arc incidence matrix for the spanning tree, we can now assume that row 2 has a +1 or -1 element on the diagonal and zeros to the right of the diagonal. Continuing in this way permits us to rearrange the node-arc incidence matrix of the spanning tree so that its first n-1 rows form a lower triangular matrix. We have established the following result.

Theorem 2.7: Triangularity Property. The rows and columns of the node-arc incidence matrix of any spanning tree can be rearranged so that its first n-1 rows form a lower triangular matrix L.

Figure 2.3 shows the resulting lower triangular form (actually, one of several possibilities) for the spanning tree in Figure 2.2(c).

Figure 2.3. The lower triangular matrix L corresponding to the spanning tree in Figure 2.2(c), with rows indexed by nodes and columns by tree arcs.

Now suppose that we partition the arc flows compatibly: let x¹ denote the vector of flows on the spanning tree arcs (ordered as in the triangular arrangement) and x² the vector of flows on the remaining, restricted arcs, each of which equals the arc's lower or upper bound, and let L and M denote the corresponding column submatrices of the node-arc incidence matrix, with one (redundant) row eliminated. Then the mass balance conditions Nx = b take the form

L x¹ = b - M x².     (2.1)

Now further suppose that the supply/demand vector b and the lower and upper bound vectors l and u have all integer components. Then the right hand side b' = b - Mx² of (2.1) has integer components, since each component of M equals 0, +1, or -1, and each component of x² equals an arc's lower or upper bound. But then the components of x¹ are integral as well: since the first diagonal element of L equals +1 or -1, the first equation in (2.1) implies that x¹_1 is integral; now if we move x¹_1 to the right of the equality in (2.1), the right hand side remains integral and we can solve for x¹_2 from the second equation; continuing this forward substitution by successively solving for one variable at a time shows that x¹ is integral.

This argument shows that for problems with integral data, every spanning tree solution is integral. Since the spanning tree property ensures that network flow problems always have spanning tree solutions, we have established the following fundamental result.

Theorem 2.8: Integrality Property. If the objective value of the network optimization problem minimize {cx : Nx = b, l ≤ x ≤ u} is bounded from below on the feasible region, the problem has a feasible solution, and the vectors b, l, and u are integer, then the problem has at least one integer optimum solution.

Our observation at the end of Section 2.2 shows that this integrality property is also valid in the more general situation in which the objective function is concave.

Relationship to Linear Programming

The network flow problem with the objective function cx is a linear program which, as the last result shows, always has an integer optimum solution. Network flow problems are distinguished as the most important large class of problems with this property. Linear programs, or generalizations with concave cost objective functions, also satisfy another well-known property: they always have, in the parlance of convex analysis, extreme point solutions, i.e., solutions x with the property that x cannot be expressed as a weighted combination of two other feasible solutions y and z, as x = αy + (1-α)z for some weight 0 < α < 1. Since, as we have seen, network flow problems always have cycle free solutions, we might expect to discover that extreme point solutions and cycle free solutions are closely related, and indeed they are, as shown by the next result.

Theorem 2.9: Extreme Point Property. If the objective value of the network optimization problem minimize {cx : Nx = b, l ≤ x ≤ u} is bounded from below on the feasible region and the problem has a feasible solution, then the problem has an extreme point solution. Moreover, every cycle free solution is an extreme point and, conversely, every extreme point is a cycle free solution.

Proof. With the background developed already, this result is easy to establish. First, if x is not a cycle free solution, then it is not an extreme point, since by perturbing the flow by a small amount θ and by a small amount -θ around a cycle with free arcs, as in our discussion of Figure 2.1, we can define two feasible solutions y and z with the property that x = (1/2)y + (1/2)z. Conversely, suppose that x is not an extreme point and is represented as x = αy + (1-α)z with 0 < α < 1. Let y' and z' be the components of the vectors y and z for which y and z differ, i.e., y_ij ≠ z_ij, and let N' denote the submatrix of N corresponding to these arcs. Then N'(z' - y') = 0, which implies, by flow decomposition, that the network contains an undirected cycle with y_ij not equal to z_ij for any arc on the cycle. But by the definition of these components, l_ij ≤ y_ij < x_ij < z_ij ≤ u_ij or l_ij ≤ z_ij < x_ij < y_ij ≤ u_ij for each arc (i, j) on the cycle. Therefore, this cycle contains only free arcs in the solution x. Consequently, if x is not an extreme point solution, then x is not a cycle free solution.

In linear programming, extreme points are usually represented algebraically as basic solutions; for these special solutions, the columns B of the constraint matrix of a linear program corresponding to variables strictly between their lower and upper bounds are linearly independent. We can extend B to a basis of the constraint matrix by adding a maximal number of columns. Just as cycle free solutions for network flow problems correspond to extreme points, spanning tree solutions correspond to basic solutions.

Theorem 2.10: Basis Property. Every spanning tree solution to a network flow problem is a basic solution and, conversely, every basic solution is a spanning tree solution.

Let us now make one final connection between networks and linear and integer programming, namely between bases and the integrality property. Consider a linear program of the form Nx = b, suppose that N = [B, M] for some basis B, and suppose that x = (x¹, x²) is a compatible partitioning of x. Also suppose that we eliminate a redundant row so that B is a nonsingular matrix. Then the system can be written as

Bx¹ = b - Mx², or x¹ = B⁻¹(b - Mx²). Note that b' = b - Mx² is an integer vector whenever x² is an integer vector and the problem data are integral. How are these notions related to network flows and the integrality property? Since bases of N correspond to spanning trees, the triangularity property shows that the determinant of any basis (excluding the redundant row now) equals the product of the diagonal elements in the triangular representation of the basis, and therefore equals +1 or -1. Consequently, by Cramer's rule from linear algebra, it is possible to find each component of x¹ as sums and multiples of components of b' = b - Mx², divided by det(B), the determinant of B; therefore, if b, l, and u are all integers, then x¹ is an integer vector whenever x² is.

Let us call a matrix A unimodular if all of its bases have determinants equal to either +1 or -1, and call it totally unimodular if all of its square submatrices have determinant equal to either 0, +1, or -1. The preceding argument shows that if A is unimodular, A corresponds to a basic feasible solution x, and the problem data b, l, and u are all integers, then x is an integer vector. Even more is true of node-arc incidence matrices: they are totally unimodular. For, let S be any square submatrix of N. If S is singular, it has determinant 0. Otherwise, it must correspond to a cycle free solution, which is a spanning tree on each of its connected components. But then, using an expansion of determinants by minors, it is easy to see that the determinant of S is the product of the determinants of these spanning trees, and therefore it must be equal to +1 or -1. (An induction argument, using an expansion of determinants by minors, provides an alternate proof of this totally unimodular property.)

Theorem 2.11: Total Unimodularity Property. The constraint matrix of a minimum cost network flow problem is totally unimodular.

2.4 Network Transformations

Frequently, analysts use network transformations to simplify a network problem, to show equivalences of different network problems, or to put a network problem into a standard form required by a computer code. In this subsection, we describe some of these important transformations.

T1. (Removing Nonzero Lower Bounds). If an arc (i, j) has a positive lower bound l_ij, then we can replace x_ij by x'_ij + l_ij in the problem formulation. As measured by the new variable x'_ij, the flow on arc (i, j) will have a lower bound of 0. This transformation has a simple network interpretation:

we begin by sending l_ij units of flow on the arc and then measure incremental flow above l_ij. Accordingly, b(i) becomes b(i) - l_ij, b(j) becomes b(j) + l_ij, and the arc (i, j), with cost c_ij, carries flow between 0 and u_ij - l_ij. Figure 2.4 illustrates this transformation.

T2. (Removing Capacities). If an arc (i, j) has a positive capacity u_ij, then we can remove the capacity, making the arc uncapacitated, using the following ideas. The capacity constraint of arc (i, j) can be written as x_ij + s_ij = u_ij, if we introduce a slack variable s_ij ≥ 0. Multiplying both sides by -1, we obtain

-x_ij - s_ij = -u_ij.     (2.2)

This transformation is tantamount to turning the slack variable into an additional node k, with equation (2.2) as the mass balance constraint for that node. Observe that the variable x_ij now appears in three mass balance constraints and s_ij in only one. By subtracting (2.2) from the mass balance constraint of node j, we assure that each of x_ij and s_ij appears in exactly two constraints: in one with a positive sign and in the other with a negative sign. These algebraic manipulations correspond to the following network transformation: we replace the arc (i, j) by an uncapacitated arc (i, k) with cost c_ij and an uncapacitated arc (j, k) with cost 0, where the new node k has supply/demand -u_ij and b(j) becomes b(j) + u_ij. Figure 2.5 illustrates this transformation.

In the network context, the transformation works as follows. If x_ij is a flow on arc (i, j) in the original network, the corresponding flow in the transformed network is x'_ik = x_ij and x'_jk = u_ij - x_ij; both the flows x and x' have the same cost. Likewise, a flow x'_ik, x'_jk in the transformed network yields a flow x_ij = x'_ik of the same cost in the original network. Further, since x'_ik + x'_jk = u_ij and x'_ik and x'_jk are both nonnegative, we have 0 ≤ x_ij = x'_ik ≤ u_ij. Consequently, this transformation is valid.
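A minimal sketch (ours; the data layout is an assumption) of transformations T1 and T2 applied to arc data:

    def remove_lower_bound(arc, b):
        # T1: replace x_ij (l_ij <= x_ij <= u_ij) by x'_ij = x_ij - l_ij.
        # arc = (i, j, cost, lower, upper); b = supply/demand dict.
        i, j, c, l, u = arc
        b[i] -= l              # l_ij units are pre-shipped out of i ...
        b[j] += l              # ... and into j
        return (i, j, c, 0, u - l), b

    def remove_capacity(arc, b, k):
        # T2: replace the capacitated arc (i, j) by uncapacitated arcs
        # (i, k) and (j, k) through a new node k with demand u_ij.
        i, j, c, l, u = arc
        assert l == 0          # apply T1 first
        b[k] = -u              # node k absorbs u_ij units
        b[j] = b.get(j, 0) + u
        inf = float("inf")
        return [(i, k, c, 0, inf), (j, k, 0, 0, inf)], b

A feasible flow maps across T2 as x'_ik = x_ij and x'_jk = u_ij - x_ij, exactly as described above.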

T3. (Arc Reversal). Let u_ij represent the capacity of the arc (i, j), or an upper bound on the arc's flow if the arc is uncapacitated. This transformation is a change in variable: replace x_ij by u_ij - x_ji in the problem formulation. Doing so replaces the arc (i, j), with its associated cost c_ij, by the arc (j, i) with cost -c_ij. The transformation has the following network interpretation: we first send u_ij units of flow on the arc (so that b(i) becomes b(i) - u_ij and b(j) becomes b(j) + u_ij), and then replace arc (i, j) by arc (j, i) of the same capacity; the new flow x_ji measures the amount of flow we "remove" from the "full capacity" flow of u_ij. Figure 2.6 illustrates an example of arc reversal. Among other uses, this transformation permits us to remove arcs with negative costs.

T4. (Node Splitting). This transformation splits each node i into two nodes i and i', and replaces each original arc (i, j) by an arc (i', j) of the same cost and capacity. It also adds an arc (i, i') of cost zero for each node i. Figure 2.7 illustrates the resulting network when we carry out the node splitting transformation for all the nodes of a network.

Figure 2.7. The node splitting transformation: (a) the original network; (b) the transformed network.

We shall see the usefulness of this transformation in Section 5.11 when we use it to reduce a shortest path problem with arbitrary arc lengths to an assignment problem. This transformation is also used in practice for representing node activities and node data in the standard "arc flow" form of the network flow problem: we simply associate the cost or capacity for the throughput of node i with the new throughput arc (i, i').

3. SHORTEST PATHS

Shortest path problems are the most fundamental and also the most commonly encountered problems in the study of transportation and communication networks. The shortest path problem arises when trying to determine the shortest, cheapest, or most reliable path between one or many pairs of nodes in a network. More importantly, algorithms for a wide variety of combinatorial optimization problems such as vehicle routing and network design often call for the solution of a large number of shortest path problems as subroutines. Consequently, designing and testing efficient algorithms for the shortest path problem has been a major area of research in network optimization.

Researchers have studied several different (directed) shortest path models. The major types of shortest path problems, in increasing order of solution difficulty, are (i) finding shortest paths from one node to all other nodes when arc lengths are nonnegative; (ii) finding shortest paths from one node to all other nodes for networks with arbitrary arc lengths; (iii) finding shortest paths from every node to every other node; and (iv) finding various types of constrained shortest paths between nodes (e.g., shortest paths with turn penalties, shortest paths visiting specified nodes, the k-th shortest path).

In this section, we discuss problem types (i), (ii) and (iii). The algorithmic approaches for solving problem types (i) and (ii) can be classified into two groups: label setting and label correcting. The label setting methods are applicable to networks with nonnegative arc lengths, whereas label correcting methods apply to networks with negative arc lengths as well. Each approach assigns tentative distance labels (shortest path distances) to nodes at each step. Label setting methods designate one or more labels as permanent (optimum) at each iteration. Label correcting methods consider all labels as temporary until the final step, when they all become permanent. We will show that label setting methods have the most attractive worst-case performance; nevertheless, practical experience has shown the label correcting methods to be modestly more efficient.

Dijkstra's algorithm is the most popular label setting method. In this section, we first discuss a simple implementation of this algorithm that achieves a time bound of O(n²). We then describe two more sophisticated implementations that achieve improved running times in practice and in theory. Next, we consider a generic version of the label correcting method, outlining one special implementation of this general approach that runs in polynomial time and another implementation that performs very well in practice. Finally, we discuss a method to solve the all pairs shortest path problem.

3.1 Dijkstra's Algorithm

We consider a network G = (N, A) with an arc length c_ij associated with each arc (i, j) ∈ A. Let A(i) represent the set of arcs emanating from node i ∈ N, and let C = max {c_ij : (i, j) ∈ A}. In this section, we assume that arc lengths are integer numbers, and in this section as well as in Sections 3.2 and 3.3, we further assume that arc lengths are nonnegative. We suppose that node s is a specially designated source node, and assume without any loss of generality that the network contains a directed path from s to every other node. We can ensure this condition by adding an artificial arc (s, j), with a suitably large arc length, for each node j. We invoke this connectivity assumption throughout this section.

Dijkstra's algorithm finds shortest paths from the source node s to all other nodes. The basic idea of the algorithm is to fan out from node s and label nodes in order of their distances from s. Each node i has a label, denoted by d(i): the label is permanent once we know that it represents the shortest distance from s to i, and temporary otherwise. Initially, we give node s a permanent label of zero, and each other node j a temporary label equal to c_sj if (s, j) ∈ A, and ∞ otherwise. At each iteration, the label of a node i is its shortest distance from the source node along a path whose internal nodes are all permanently labeled. The algorithm selects a node i with the minimum temporary label, makes it permanent, and scans the arcs in A(i) to update the distance labels of adjacent nodes. The algorithm terminates when it has designated all nodes as permanently labeled. The correctness of the algorithm relies on the key observation (which we prove later) that it is always possible to designate the node with the minimum temporary label as permanent. The following algorithmic representation is a basic implementation of Dijkstra's algorithm.

algorithm DIJKSTRA;
begin
    P := {s}; T := N - {s};
    d(s) := 0 and pred(s) := 0;
    d(j) := c_sj and pred(j) := s if (s, j) ∈ A, and d(j) := ∞ otherwise;
    while P ≠ N do
    begin
        (node selection) let i ∈ T be a node for which d(i) = min {d(j) : j ∈ T};
        P := P ∪ {i}; T := T - {i};
        (distance update) for each (i, j) ∈ A(i) do
            if d(j) > d(i) + c_ij then d(j) := d(i) + c_ij and pred(j) := i;
    end;
end;

The algorithm associates a predecessor index, denoted by pred(i), with each node i ∈ N. The algorithm updates these indices to ensure that pred(i) is the last node prior to i on the (tentative) shortest path from node s to node i. At termination, these indices allow us to trace back along a shortest path from each node to the source.

To establish the validity of Dijkstra's algorithm, we use an inductive argument. At each point in the algorithm, the nodes are partitioned into two sets, P and T. Assume that the label of each node in P is the length of a shortest path from the source, whereas the label of each node j in T is the length of a shortest path subject to the restriction that each node in the path (except j) belongs to P. Then it is possible to transfer the node i in T with the smallest label d(i) to P for the following reason: any path from the source to node i must contain a first node k that is in T. However, node k must be at least as far away from the source as node i, since its label is at least that of node i; furthermore, the segment of the path between node k and node i has a nonnegative length because arc lengths are nonnegative. This observation shows that the length of the path is at least d(i), and hence it is valid to permanently label node i. After the algorithm has permanently labeled node i, the temporary labels of some nodes in T - {i} might decrease, because node i could become an internal node in the tentative shortest paths to these nodes. We must thus scan all of the arcs (i, j) in A(i); if d(j) > d(i) + c_ij, then setting d(j) := d(i) + c_ij updates the labels of nodes in T - {i}.

The computational time for this algorithm can be split into the time required by its two basic operations: selecting nodes and updating distances.
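For concreteness, here is a direct transcription of this O(n²) implementation as a Python sketch (container names are ours):

    def dijkstra(N, A, c, s):
        # N: set of nodes; A: dict mapping i to the heads j of arcs (i, j);
        # c: dict {(i, j): nonnegative length}; s: source node.
        INF = float("inf")
        d = {j: INF for j in N}
        pred = {j: None for j in N}
        d[s] = 0
        T = set(N)                            # temporarily labeled nodes
        while T:
            i = min(T, key=lambda j: d[j])    # node selection: O(n)
            T.remove(i)                       # d(i) becomes permanent
            for j in A.get(i, []):            # distance update over A(i)
                if d[j] > d[i] + c[(i, j)]:
                    d[j] = d[i] + c[(i, j)]
                    pred[j] = i
        return d, pred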

In an iteration, the algorithm requires O(n) time to identify the node with the minimum temporary label and takes O(|A(i)|) time to update the distance labels of adjacent nodes. Thus, overall, the algorithm requires O(n²) time for selecting nodes and O(Σ_{i ∈ N} |A(i)|) = O(m) time for updating distances. This implementation of Dijkstra's algorithm thus runs in O(n²) time.

Dijkstra's algorithm has been a subject of much research. Researchers have attempted to reduce the node selection time without substantially increasing the time for updating distances. Consequently, using clever data structures, they have suggested several implementations of the algorithm. These implementations have either dramatically reduced the running time of the algorithm in practice or improved its worst case complexity. In the following discussion, we describe Dial's algorithm, which is currently comparable to the best label setting algorithm in practice. Subsequently we describe an implementation using R-heaps, which is nearly the best known implementation of Dijkstra's algorithm from the perspective of worst-case analysis. (A more complex version of R-heaps gives the best worst-case performance for all choices of the parameters n, m, and C.)

3.2 Dial's Implementation

The bottleneck operation in Dijkstra's algorithm is node selection. To improve the algorithm's performance, we must ask the following question: instead of scanning all temporarily labeled nodes at each iteration to find the one with the minimum distance label, can we reduce the computation time by maintaining distances in a sorted fashion? Dial's algorithm tries to accomplish this objective, and reduces the algorithm's computation time in practice, using the following fact:

FACT 3.1. The distance labels that Dijkstra's algorithm designates as permanent are nondecreasing.

This fact follows from the observation that the algorithm permanently labels a node i with the smallest temporary label d(i), and, while scanning arcs in A(i) during the distance update step, never decreases the distance label of any permanently labeled node, since arc lengths are nonnegative.

FACT 3.1 suggests the following scheme for node selection. We maintain nC+1 buckets numbered 0, 1, 2, ..., nC. Bucket k stores each node whose temporary distance label is k. Recall that C represents the largest arc length in the network, and hence nC is an upper bound on the distance labels of all the nodes. In the node selection step, we scan the buckets in increasing order until we identify the first nonempty bucket. The distance label of each node in this bucket is minimum.

One by one, we delete these nodes from the bucket, making them permanent and scanning their arc lists to update the distance labels of adjacent nodes. We need not store the nodes with infinite temporary distance labels in any of the buckets; we can add them to a bucket when they first receive a finite distance label. Observe that as we relabel nodes and decrease any node's temporary distance label, we move it from a higher index bucket to a lower index bucket; by rearranging the pointers, this transfer requires O(1) time.

One implementation uses a data structure known as a doubly linked list. In this data structure, we order the content of each bucket arbitrarily, storing two pointers for each entry: one pointer to its immediate predecessor and one to its immediate successor. Doing so permits us, by rearranging the pointers, to select easily the topmost node from the list, add a bottommost node, or delete a node. Consequently, it is possible to add, delete, or select the next element of any bucket very efficiently, in fact, in a time bounded by some constant.

Consequently, this algorithm runs in O(m + nC) time and uses nC+1 buckets. The following fact allows us to reduce the number of buckets to C+1.

FACT 3.2. If d(i) is the distance label that the algorithm designates as permanent at the beginning of an iteration, then at the end of that iteration d(j) ≤ d(i) + C for each finitely labeled node j in T.

This fact follows by noting that (i) d(k) ≤ d(i) for each k ∈ P (by FACT 3.1), and (ii) for each finitely labeled node j in T, d(j) = d(k) + c_kj for some k ∈ P (by the property of distance updates). Hence, d(j) ≤ d(i) + c_kj ≤ d(i) + C. In other words, all finite temporary labels are bracketed from below by d(i) and from above by d(i) + C. Consequently, C+1 buckets suffice to store nodes with finite temporary distance labels.

Dial's algorithm uses C+1 buckets numbered 0, 1, 2, ..., C, which can be viewed as arranged in a circle as in Figure 3.1. It stores a temporarily labeled node j with distance label d(j) in the bucket d(j) mod (C+1). Consequently, during the entire execution of the algorithm, bucket k stores temporarily labeled nodes with distance labels k, k+(C+1), k+2(C+1), and so forth; however, at any point in time, this bucket will hold only nodes with the same distance label. This storage scheme also implies that if bucket k contains a node with the minimum distance label, then buckets k+1, k+2, ..., C, 0, 1, 2, ..., k-1 store nodes in increasing values of the distance labels.
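A sketch of Dial's implementation (ours; Python sets stand in for the doubly linked lists, and we assume, as above, that every node is reachable from s):

    def dial(N, A, c, s, C):
        INF = float("inf")
        d = {j: INF for j in N}; pred = {j: None for j in N}
        d[s] = 0
        buckets = [set() for _ in range(C + 1)]   # node j lives in d(j) mod (C+1)
        buckets[0].add(s)
        labeled = 0
        k = 0                                     # current scan position
        while labeled < len(N):
            while not buckets[k % (C + 1)]:       # wrap-around scan
                k += 1
            i = buckets[k % (C + 1)].pop()        # every label here equals k
            labeled += 1
            for j in A.get(i, []):
                dj = d[i] + c[(i, j)]
                if dj < d[j]:
                    if d[j] < INF:                # move j to its new bucket
                        buckets[d[j] % (C + 1)].discard(j)
                    d[j] = dj; pred[j] = i
                    buckets[dj % (C + 1)].add(j)
        return d, pred

By FACT 3.2, all finite temporary labels lie in [d(i), d(i) + C], so the modular indexing never places two different labels in the same bucket at the same time.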

Figure 3.1. Bucket arrangement in Dial's algorithm.

Dial's algorithm examines the buckets sequentially, in a wrap-around fashion, to identify the first nonempty bucket. In the next iteration, it reexamines the buckets starting at the place where it left off earlier. A potential disadvantage of this scheme, as compared to the original algorithm, is that C may be very large, necessitating large storage and increased computational time. In addition, the algorithm may wrap around as many as n-1 times, resulting in a large computation time. The algorithm runs in O(m + nC) time, which is not even polynomial; rather, it is pseudopolynomial. For example, if C = n^k for some constant k, the algorithm runs in O(n^(k+1)) time, and if C = 2ⁿ, the algorithm takes exponential time in the worst case. The algorithm, however, typically does not encounter these difficulties in practice: for most applications, C is not very large, and the number of passes through all of the buckets is much less than n. Dial's algorithm, though not attractive theoretically, performs well in practice.

The search for the theoretically fastest implementations of Dijkstra's algorithm has led researchers to develop several new data structures for sparse networks. In the next section, we consider an implementation using a data structure called a redistributive heap (R-heap) that runs in O(m + n log nC) time. The discussion of this implementation is of a more advanced nature than the previous sections, and the reader can skip it without any loss of continuity.

3.3 R-Heap Implementation

Our first O(n²) implementation of Dijkstra's algorithm and then Dial's implementation represent two extremes.

The first implementation considers all the temporarily labeled nodes together (in one large bucket, so to speak) and searches for a node with the smallest label. Dial's algorithm separates nodes by storing any two nodes with different labels in different buckets. Could we improve upon these methods by adopting an intermediate approach, perhaps by storing many, but not all, labels in a bucket? For example, instead of storing only nodes with a temporary label of k in the k-th bucket, we could store temporary labels from 100k to 100k+99 in bucket k. The different temporary labels that can be stored in a bucket make up the range of the bucket; the cardinality of the range is called its width. For the preceding example, the range of bucket k is [100k .. 100k+99] and its width is 100.

Using widths of size k permits us to reduce the number of buckets needed by a factor of k. But in order to find the smallest distance label, we need to search all of the elements in the smallest indexed nonempty bucket. Using a width of 100, say, for each bucket reduces the number of buckets, but still requires us to search through the lowest numbered nonempty bucket to find the node with the minimum temporary label.

If we could devise a variable width scheme, with a width of one for the lowest numbered bucket, we could conceivably retain the advantages of both the wide bucket and narrow bucket approaches. The R-heap algorithm we consider next uses variable length widths and changes the ranges dynamically. In the version of redistributive heaps that we present, the widths of the buckets are 1, 1, 2, 4, 8, 16, ..., so that the number of buckets needed is only O(log nC). Moreover, we dynamically modify the ranges of numbers stored in each bucket, and we reallocate nodes with temporary distance labels in a way that stores the minimum distance label in a bucket whose width is 1. In this way, as in the previous algorithm, we avoid the need to search an entire bucket to find the minimum. Indeed, the running time of this version of the R-heap algorithm is O(m + n log nC).

We now describe the R-heap in more detail. The R-heap consists of 1 + ⌈log nC⌉ buckets, numbered 0, 1, 2, ..., K = ⌈log nC⌉. We represent the range of bucket k by range(k), which is a (possibly empty) closed interval of integers. We store a temporarily labeled node i in bucket k if d(i) ∈ range(k); we do not store permanent nodes. The nodes in bucket k are denoted by the set CONTENT(k). The algorithm will change the ranges of the buckets dynamically, and each time it changes the ranges, it redistributes the nodes among the buckets.

e. example that the initial minimum quickly determined to be We could verify this is fact by verifying that buckets through 3 are empty and bucket 4 nonempty. Since the that minimum index nonempty bucket label the bucket less whose range is [8 15]. 1. range(K) = [2^-1 . [1]. carry out these operations a bit differently.. [9].. rangeO) = [4 . we would Since we will be scanning find the all of the elements of bucket 4 in the redistribute step. and the algorithm selects in an additional 0(1) time. in the Suppose range [8 . and hence buckets to 3 v^ll never be needed again. ranged) = range(2) = [2 3). resulting in the ranges 0. Rather than leaving is 8) to . redistributing the range [8 we need only to 4 redistribute the subrange [11 15]. and We then set the range of bucket 4 to and we (0. 7]. Roughly speaking.. At all this point... we have replaced the node selection step (i. [8].. makes sense example 15]. In this case the resulting ranges of buckets . Essentially. finding a node with smallest temporary distance label) by a sequence of redistribution steps in which we shift is nodes constantly to lower indexed buckets. Suppose for . to first minimum temporary label is 11. we can redistribute the range of bucket 4 (whose width is 8) the previous buckets (whose combined width [12. each of the elements of bucket 4 moves to a lower indexed bucket. the widths of the buckets initial will not increase beyond their distance label is widths. the redistribution time 0(n log nC) time in total. 15]. we 4. the buckets have the following ranges: rarge(0) = [0].. These ranges will change dynamically. we know no temporary v^l ever again be than 8.. 2.. these buckets idle. distance label without searching nodes in bucket is The following observation helpful. .. for 15]. it Actually. 15]. range(4) = [8 .. the minimum temporary it label is in a bucket with width one. [10 11]. label in the bucket. that the minimum Then rather than . however.56 Initially. could not identify the minimum is . 2^-1]. Eventually. shift (or redistribute) its temporarily labeled nodes into the appropriate buckets and 3). since each node can be shifted at most K = 1 + flog nCl times. Thus.

greater than 1. bucket has width one. we scan the buckets is 0. .2 The shortest path example. K to find the first nonempty bucket. source Figure 3.63] [64 . and then we reassign the content of bucket k time is The is redistribution 0(n log nC) and the running time of the algorithm 0(m + n log nC).3 The initial R-heap. e. (13 .57 would be [n].. the has width 1. we is 1.4) (6) Figure 3.3 specifies the starting solution of Dijkstra's algorithm and the initial R-heap.. We now the figure. In our example. whose width To reiterate. For this problem. [12]. Moreover. at the end of this redistribution.. In number beside each length.7] 6 [32 .. To select the node with the smallest distance label.2. So. 1. the illustrate R-heaps on the shortest path example given in Figure arc indicates its 3.. are guaranteed that the minimum temporary label is stored in bucket 0. C=20 and K = flog 1201 = 7.15] nC=120 5 [16.3] (3) 3 [4 . 14]. Figure 3. the minimum nonempty to buckets bucket is whose width we redistribute the range of bucket k into buckets to k-1.31] {5} Buckets: 12 [2 .. bucket nonempty. 7 127] Ranges: CONTENT: (2. we do is not carry out the actual node selection step until the If minimum nonempty bucket k. to k-1. [15]. Nodei: Label d(i): 12 13 [0] [1] 3 4 15 5 6 20 4 [8 .. ..2. every node in this bucket has the same (minimum) distance . Since bucket label.

So the algorithm designates node 3 as permanent, deletes node 3 from the R-heap, and scans the arc (3, 5) to change the distance label of node 5 from 20 to 9. We check whether the new distance label of node 5 is contained in the range of its present bucket, bucket 5. It is not. Since its distance label has decreased, node 5 should move to a lower indexed bucket. So we sequentially scan the buckets from right to left, starting at bucket 4, and identify the first bucket whose range contains the number 9, which is bucket 4. Node 5 moves from bucket 5 to bucket 4. Figure 3.4 shows the new R-heap.

. we can redistribute the useful range of bucket k over the buckets . CONTENTO) = CONTENT(4) = 4). this operation takes The term m reflects the number it of distance ujxlates. k-1 in the manner described. 2. to a and the term 0(m + nK) time. and moves the node with the We are now then in a position to outline the general j algorithm and analyze If its complexity. each node can move most K times. we assign 2. e CONTENT(k) and that d(j) decreases. This redistribution necessarily empties smallest distance label to bucket 0. O(nK) is node can move at K times. {2. bucket 4 . since there are K+1 Therefore. If If This operation takes 0(K) time per iteration and O(nK) time in k=0 or k=l. the modified we sequentially scan lower numbered buckets from right to left and add the node to the appropriate bucket. Next we consider the node buckets from left selection step. . k ^ 2.. first buckets can be as large as 2*^'^ for a total potential 0. buckets.. 1. width < 2"^ and since the width of widths of the 2*^. u] and the smallest distance is Idjj^jp . to a lower indexed bucket. the node selection steps take O(nK) Since K = [log nC"L the algorithm runs in 0(m + n log nC) time. . . Node selection begins by scanning the k. . so the nodes total move a total of at most nK times. 1. then any then node in the selected bucket has the minimum distance label. . 0. say bucket total.. a bound on node movements.. Whenever we examine it a node in the nonempty bucket k with the at smallest index. Since bucket k 1.. 0. to right to identify the first nonempty bucket. We now summarize our discussion. a moves most lower indexed bucket.59 CONTENT(O) = (5). we its redistribute the "useful" range of bucket k into the buckets those buckets. the next two integers to bucket htis the next four integers to bucket and so on. the next integer to bucket 3. The algorithm the first redistributes the useful range in the following manner: 1. .. CONTENT(2) = e.. nK arises because the total every time a node moves. then the useful range of the bucket u]. integer to bucket 0. all we move can time. . Suppose that d(j) « range(k). 1. Overall. label of a node in djj^j^. Thus. This redistribution of ranges and the subsequent reinsertions of labels to bucket nodes empties bucket k and moves the nodes with the smallest distance 0. CONTENTO) = 0. the bucket is k-1 and reinsert content to If the range of bucket k is [/ .

Theorem 3.1. The R-heap implementation of Dijkstra's algorithm solves the shortest path problem in O(m + n log nC) time.

This algorithm requires 1 + ⌈log nC⌉ buckets. FACT 3.2 permits us to reduce the number of buckets to 1 + ⌈log C⌉; this refined implementation of the algorithm runs in O(m + n log C) time. For problems that satisfy the similarity assumption, this bound becomes O(m + n log n). Using substantially more sophisticated data structures, it is possible to reduce this bound further to O(m + n √(log n)), which is a linear time algorithm for all but the sparsest classes of shortest path problems.

3.4 Label Correcting Algorithms

Label correcting algorithms, as the name implies, maintain tentative distance labels for nodes and correct the labels at every iteration. Unlike label setting algorithms, these algorithms maintain all distance labels as temporary until the end, when they all become permanent simultaneously. The label correcting algorithms are conceptually more general than the label setting algorithms and are applicable to more general situations, for example, to networks containing negative length arcs. To produce shortest paths, these algorithms typically require that the network does not contain any negative directed cycle, i.e., a directed cycle whose arc lengths sum to a negative value. Most label correcting algorithms have the capability to detect the presence of negative cycles.

Label correcting algorithms can be viewed as a procedure for solving the following recursive equations:

d(s) = 0,     (3.1)

d(j) = min {d(i) + c_ij : i ∈ N},  for each j ∈ N - {s}.     (3.2)

As usual, d(j) denotes the length of a shortest path from the source node to node j. These equations are known as Bellman's equations and represent necessary conditions for optimality of the shortest path problem. These conditions are also sufficient if every cycle in the network has a positive length. We will prove an alternate version of these conditions, which is more suitable from the viewpoint of label correcting algorithms.

Theorem 3.2. Let d(i) for i ∈ N be a set of labels. If d(s) = 0 and if, in addition, the labels satisfy the following conditions, then they represent the shortest path lengths from the source node:

C3.1. d(i) is the length of some path from the source node to node i;

C3.2. d(j) ≤ d(i) + c_ij for all (i, j) ∈ A.

Proof. Since d(i) is the length of some path from the source to node i, it is an upper bound on the shortest path length. We show that if the labels d(i) satisfy C3.2, then they are also lower bounds on the shortest path lengths, which implies the conclusion of the theorem. Consider any directed path P from the source to node j, and let P consist of the nodes s = i_1 - i_2 - i_3 - ... - i_k = j. Condition C3.2 implies that d(i_2) ≤ d(i_1) + c_{i_1 i_2} = c_{i_1 i_2}, d(i_3) ≤ d(i_2) + c_{i_2 i_3}, ..., d(i_k) ≤ d(i_{k-1}) + c_{i_{k-1} i_k}. Adding these inequalities yields d(j) = d(i_k) ≤ Σ_{(i,j) ∈ P} c_ij. Therefore d(j) is a lower bound on the length of any directed path from the source to node j, including a shortest path from s to j.

We note that if the network contains a negative cycle, then no set of labels d(i) satisfies C3.2. For suppose the network did contain a negative cycle W and some labels d(i) satisfy C3.2. Then d(i) - d(j) + c_ij ≥ 0 for each (i, j) ∈ W. These inequalities imply that Σ_{(i,j) ∈ W} (d(i) - d(j) + c_ij) = Σ_{(i,j) ∈ W} c_ij ≥ 0, since the labels d(i) cancel out in the summation. This conclusion contradicts our assumption that W is a negative cycle.

Conditions C3.1 correspond to primal feasibility for the linear programming formulation of the shortest path problem, and conditions C3.2 correspond to dual feasibility. From this perspective, we might view label correcting algorithms as methods that always maintain primal feasibility and try to achieve dual feasibility.

The generic label correcting algorithm that we consider first is a general procedure for successively updating the distance labels d(i) until they satisfy the conditions C3.2. At any point in the algorithm, the label d(i) is either ∞, indicating that we have yet to discover any path from the source to node i, or it is the length of some path from the source to node i. The algorithm is based upon the simple observation that whenever d(j) > d(i) + c_ij, the current path from the source to node i, of length d(i), together with the arc (i, j), is a shorter path to node j than the current path of length d(j).

algorithm LABEL CORRECTING;
begin
    d(s) : = 0 and pred(s) : = 0;
    d(j) : = ∞ for each j ∈ N - {s};
    while some arc (i, j) satisfies d(j) > d(i) + c_ij do
    begin
        d(j) : = d(i) + c_ij;
        pred(j) : = i;
    end;
end;

The correctness of the label correcting algorithm follows from Theorem 3.2: at termination, the labels d(i) satisfy d(j) ≤ d(i) + c_ij for all (i, j) ∈ A, and hence represent the shortest path lengths. We now note that this algorithm is finite if there are no negative cost cycles and if the data are integral. Since each d(j) is bounded from above by nC and from below by -nC, the algorithm updates d(j) at most 2nC times. Thus, when the data are integral, the total number of distance updates is O(n²C), and hence the algorithm runs in pseudopolynomial time.

A nice feature of this label correcting algorithm is its flexibility: we can select the arcs that do not satisfy conditions C3.2 in any order and still assure convergence. One drawback of the method, however, is that without a further restriction on the choice of arcs, the label correcting algorithm does not necessarily run in polynomial time. Indeed, if we start with pathological instances of the problem and make a poor choice of arcs at every iteration, then the number of steps can grow exponentially with n. (Since the algorithm is pseudopolynomial time, these instances do have exponentially large values of C.)

To obtain a polynomial time bound for the algorithm, we can organize the computations carefully in the following manner. Arrange the arcs in A in some (possibly arbitrary) order. Now make passes through A. In each pass, scan the arcs in A in order and check the condition d(j) > d(i) + c_ij; if the arc satisfies this condition, then update d(j) = d(i) + c_ij. Terminate the algorithm if no distance label changes during an entire pass. We call this the modified label correcting algorithm; a sketch in code follows.
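The sketch below is a minimal rendering of the modified label correcting algorithm, assuming the arcs are given as (i, j, cost) triples over nodes 0, ..., n-1 with source s; the function and variable names are ours, not the paper's.

    def modified_label_correcting(n, arcs, s):
        INF = float('inf')
        d = [INF] * n
        pred = [None] * n
        d[s] = 0
        for _ in range(n - 1):              # at most n-1 passes are needed
            updated = False
            for i, j, c in arcs:            # scan every arc once per pass
                if d[i] + c < d[j]:
                    d[j] = d[i] + c
                    pred[j] = i
                    updated = True
            if not updated:                 # no change in an entire pass:
                return d, pred              # the labels satisfy C3.2
        for i, j, c in arcs:                # an extra pass detects negative cycles
            if d[i] + c < d[j]:
                raise ValueError("negative cycle detected")
        return d, pred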

Theorem 3.3. When applied to a network containing no negative cycles, the modified label correcting algorithm requires O(nm) time to determine shortest paths from the source to every other node.

Proof. We show that the algorithm performs at most n-1 passes through the arc list. Since each pass requires O(1) computations for each arc, this conclusion implies the O(nm) bound. Let d^r(j) denote the length of the shortest path from the source to node j consisting of r or fewer arcs, and let D^r(j) represent the distance label of node j after r passes through the arc list. We claim, inductively, that D^r(j) ≤ d^r(j) for each j ∈ N and each r = 1, ..., n-1.

We perform induction on the value of r. Suppose D^{r-1}(j) ≤ d^{r-1}(j) for each j ∈ N. The provisions of the modified labeling algorithm imply that

D^r(j) ≤ min {D^{r-1}(j), min_{i ≠ j} {D^{r-1}(i) + c_ij}}.

Next note that the shortest path to node j containing no more than r arcs either (i) has no more than r-1 arcs, or (ii) contains exactly r arcs. In case (i), d^r(j) = d^{r-1}(j), and in case (ii), d^r(j) = min_{i ≠ j} {d^{r-1}(i) + c_ij}. Consequently,

d^r(j) = min {d^{r-1}(j), min_{i ≠ j} {d^{r-1}(i) + c_ij}} ≥ min {D^{r-1}(j), min_{i ≠ j} {D^{r-1}(i) + c_ij}} ≥ D^r(j),

where the first inequality follows from the induction hypothesis. Hence D^r(j) ≤ d^r(j) for all j ∈ N. Finally, we note that the shortest path from the source to any node consists of at most n-1 arcs. Therefore, after at most n-1 passes, the algorithm terminates with the shortest path lengths.

The modified label correcting algorithm is also capable of detecting the presence of negative cycles in the network. If the algorithm does not update any distance label during an entire pass, up to the (n-1)-th pass, then it has a set of labels d(j) satisfying C3.2; in this case, the algorithm terminates with the shortest path distances and the network does not contain any negative cycle. On the other hand, if the algorithm modifies the distance label of some node i when we make one more pass, the n-th pass, then the network contains a directed walk (a path together with a cycle that have one or more nodes in common) from the source node to node i, of more than n-1 arcs, that has smaller length than all paths from the source node to i. This situation cannot occur unless the network contains a negative cost cycle.

Practical Improvements

As stated so far, the modified label correcting algorithm considers every arc of the network during every pass through the arc list. It need not do so. Suppose we order the arcs in the arc list by their tail nodes, so that arcs with the same tail node appear consecutively on the list. Then, while scanning the arcs, we consider one node i at a time, scanning the arcs in A(i) and testing the optimality conditions. Now suppose that during one pass through the arc list, the algorithm does not change the distance label of a node i. Then, during the next pass, d(j) ≤ d(i) + c_ij for every (i, j) ∈ A(i), and the

algorithm need not test these conditions. To achieve this savings, the algorithm can maintain a list of nodes whose distance labels have changed since it last examined them. It scans this list in first-in, first-out order to assure that it performs passes through the arc list A and, consequently, terminates in O(nm) time. The following procedure is a formal description of this further modification of the modified label correcting method.

algorithm MODIFIED LABEL CORRECTING;
begin
    d(s) : = 0 and pred(s) : = 0;
    d(j) : = ∞ for each j ∈ N - {s};
    LIST : = {s};
    while LIST ≠ ∅ do
    begin
        select the first element i of LIST;
        delete i from LIST;
        for each (i, j) ∈ A(i) do
            if d(j) > d(i) + c_ij then
            begin
                d(j) : = d(i) + c_ij;
                pred(j) : = i;
                if j ∉ LIST then add j to the end of LIST;
            end;
    end;
end;

Another modification of this algorithm sacrifices its polynomial time behavior in the worst case, but greatly improves its running time in practice. The modification alters the manner in which the algorithm adds nodes to LIST. While adding a node i to LIST, we check to see whether it has previously appeared on the LIST. If yes, then we add it to the beginning of LIST; otherwise, we add it to the end of LIST. This heuristic rule has the following plausible justification. If the node i has previously appeared on the LIST, then some nodes may have i as a predecessor. It is advantageous to update the distances of these nodes immediately, rather than update them from other nodes and then update them again when we consider node i alone. Empirical studies indicate that with this change alone, the algorithm is several times faster for many reasonable problem classes. Though this change makes the algorithm very attractive in practice, its worst-case running time is exponential; a sketch of the list-based method, with this heuristic, follows.
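The sketch below assumes adj[i] lists the (j, cost) pairs of arcs leaving node i and, as noted above, that the network contains no negative cycle; the names are illustrative.

    from collections import deque

    def label_correcting_with_list(n, adj, s):
        INF = float('inf')
        d = [INF] * n
        pred = [None] * n
        on_list = [False] * n
        ever_listed = [False] * n
        d[s] = 0
        LIST = deque([s])
        on_list[s] = ever_listed[s] = True
        while LIST:
            i = LIST.popleft()                  # first-in, first-out scanning
            on_list[i] = False
            for j, c in adj[i]:
                if d[i] + c < d[j]:
                    d[j] = d[i] + c
                    pred[j] = i
                    if not on_list[j]:
                        if ever_listed[j]:      # seen before: examine it soon
                            LIST.appendleft(j)
                        else:
                            LIST.append(j)
                        on_list[j] = ever_listed[j] = True
        return d, pred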

In practice, this version of the label correcting algorithm is the fastest known method for finding the shortest path from a single source to all nodes in non-dense networks. (For the problem of finding a shortest path from a single source node to a single sink, certain variants of the label setting algorithm are more efficient in practice.)

3.5 All Pairs Shortest Path Algorithm

In certain applications of the shortest path problem, we need to determine shortest path distances between all pairs of nodes. In this section we describe two algorithms to solve this problem. The first algorithm is well suited for sparse graphs; it combines the modified label correcting algorithm and Dijkstra's algorithm. The second algorithm is better suited for dense graphs; it is based on dynamic programming.

If the network has nonnegative arc lengths, then we can solve the all pairs shortest path problem by applying Dijkstra's algorithm n times, considering each node as the source node once. If the network contains arcs with negative arc lengths, then we can first transform the network to one with nonnegative arc lengths as follows. Let s be a node from which all nodes in the network are reachable, i.e., connected by directed paths. We use the modified label correcting algorithm to compute the shortest path distances from s to all other nodes. The algorithm either terminates with the shortest path distances d(j) or indicates the presence of a negative cycle. In the former case, we define the new length of the arc (i, j) as c'_ij = c_ij + d(i) - d(j) for each (i, j) ∈ A. Condition C3.2 implies that c'_ij ≥ 0 for all (i, j) ∈ A. Further, note that for any path P from node k to node l,

Σ_{(i,j) ∈ P} c'_ij = Σ_{(i,j) ∈ P} c_ij + d(k) - d(l),

since the intermediate labels d(j) cancel out in the summation. This transformation thus changes the length of all paths between a pair of nodes by a constant amount (depending on the pair) and consequently preserves shortest paths. Since arc lengths become nonnegative after the transformation, we can apply Dijkstra's algorithm n-1 additional times to determine the shortest path distances between all pairs of nodes in the transformed network. We then obtain the shortest path distance between nodes k and l in the original network by adding d(l) - d(k) to the corresponding shortest path distance in the transformed network. This approach requires O(nm) time to solve the first shortest path problem and, if the network contains no negative cost cycle, an extra O(n S(n,m,C)) time to compute the remaining shortest path distances. In this expression, S(n,m,C) is the time needed to solve a shortest path problem with nonnegative arc lengths; for the R-heap implementation of Dijkstra's algorithm we considered previously, S(n,m,C) = m + n log nC.
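In code, the transformation and the recovery of the original distances are one-liners. The sketch below assumes d[] holds the shortest path distances from s computed by the modified label correcting algorithm; the names are ours.

    def transform_lengths(arcs, d):
        # New lengths c'(i,j) = c(i,j) + d(i) - d(j) are nonnegative by C3.2.
        return [(i, j, c + d[i] - d[j]) for i, j, c in arcs]

    def original_distance(k, l, transformed_distance, d):
        # Undo the constant shift: add d(l) - d(k) to the transformed distance.
        return transformed_distance + d[l] - d[k]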

Another way to solve the all pairs shortest path problem is by dynamic programming. The approach we present is known as Floyd's algorithm. We define the variables d^r(i, j) as follows:

d^r(i, j) = the length of a shortest path from node i to node j subject to the condition that the path uses only the nodes 1, 2, ..., r-1 (and i and j) as internal nodes.

Let d(i, j) denote the actual shortest path distance. To compute d^{r+1}(i, j), we first observe that a shortest path from node i to node j that passes through the nodes 1, 2, ..., r either (i) does not pass through the node r, in which case d^{r+1}(i, j) = d^r(i, j), or (ii) does pass through the node r, in which case d^{r+1}(i, j) = d^r(i, r) + d^r(r, j). Thus we have

d^1(i, j) = c_ij,

and

d^{r+1}(i, j) = min {d^r(i, j), d^r(i, r) + d^r(r, j)}.

We assume that c_ij = ∞ for all node pairs (i, j) ∉ A. It is possible to solve the previous equations recursively for increasing values of r, varying the node pairs over N × N for each fixed value of r. The following procedure is a formal description of this algorithm.

algorithm ALL PAIRS SHORTEST PATHS;
begin
    for all node pairs (i, j) ∈ N × N do d(i, j) : = ∞ and pred(i, j) : = 0;
    for each (i, j) ∈ A do d(i, j) : = c_ij and pred(i, j) : = i;
    for each i ∈ N do d(i, i) : = 0;
    for r : = 1 to n do
        for each (i, j) ∈ N × N do
            if d(i, r) + d(r, j) < d(i, j) then
            begin
                d(i, j) : = d(i, r) + d(r, j);
                pred(i, j) : = pred(r, j);
                if i = j and d(i, i) < 0 then the network contains a negative cycle, STOP;
            end;
end;

Floyd's algorithm uses predecessor indices pred(i, j) for each node pair (i, j). The index pred(i, j) denotes the last node prior to node j in the tentative shortest path from node i to node j. The algorithm maintains the property that, for each finite d(i, j), the network contains a path from node i to node j of length d(i, j). This path can be obtained by tracing the predecessor indices.

This algorithm performs n iterations, and in each iteration it performs O(1) computations for each node pair. Consequently, it runs in O(n³) time. The algorithm either terminates with the shortest path distances or stops when d(i, i) < 0 for some node i. In the latter case, for some node r, the union of the tentative shortest paths from node i to node r and from node r to node i contains a negative cycle. This cycle can be obtained by using the predecessor indices.

Floyd's algorithm is in many respects similar to the modified label correcting algorithm. This relationship becomes more transparent from the following theorem.

Theorem 3.4. If d(i, j) for (i, j) ∈ N × N satisfy the following conditions, then they represent the shortest path distances:

(i) d(i, i) = 0 for all i;

(ii) d(i, j) is the length of some path from node i to node j;

(iii) d(i, j) ≤ d(i, r) + c_rj for all i, r, and j.

Proof. For a fixed i, this theorem is a consequence of Theorem 3.2.
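A compact rendering of the procedure above in code, assuming a dense n × n matrix c with c[i][j] set to infinity for missing arcs; it raises an error as soon as some d(i, i) becomes negative.

    def floyd(n, c):
        INF = float('inf')
        d = [[INF] * n for _ in range(n)]
        pred = [[None] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                if c[i][j] < INF:
                    d[i][j] = c[i][j]
                    pred[i][j] = i
            d[i][i] = 0
        for r in range(n):
            for i in range(n):
                for j in range(n):
                    if d[i][r] + d[r][j] < d[i][j]:
                        d[i][j] = d[i][r] + d[r][j]
                        pred[i][j] = pred[r][j]
                if d[i][i] < 0:          # negative cycle through node i
                    raise ValueError("network contains a negative cycle")
        return d, pred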

4. MAXIMUM FLOWS

An important characteristic of a network is its capacity to carry flow. What, given capacities on the arcs, is the maximum flow that can be sent between any two nodes? The resolution of this question determines the "best" use of arc capacities and establishes a reference point against which to compare other ways of using the network. Moreover, the solution of the maximum flow problem with capacity data chosen judiciously establishes other performance measures for a network. For example, what is the minimum number of nodes whose removal from the network destroys all paths joining a particular pair of nodes? Or, what is the maximum number of node disjoint paths that join this pair of nodes? These and similar reliability measures indicate the robustness of the network to failure of its components.

In this section, we discuss several algorithms for computing the maximum flow between two nodes in a network. We begin by introducing a basic labeling algorithm for solving the maximum flow problem. The validity of these algorithms rests upon the celebrated max-flow min-cut theorem of network flows. This remarkable theorem has a number of surprising implications in machine and vehicle scheduling, communication systems planning and several other application domains. We then consider improved versions of the basic labeling algorithm with better theoretical performance guarantees. In particular, we describe preflow-push algorithms that have recently emerged as the most powerful techniques for solving the maximum flow problem, both theoretically and computationally.

Formally, we consider a capacitated network G = (N, A) with a nonnegative integer capacity u_ij for any arc (i, j) ∈ A. The source s and sink t are two distinguished nodes of the network. We assume that for every arc (i, j) in A, (j, i) is also in A; there is no loss of generality in making this assumption, since we allow zero capacity arcs. We also assume, without any loss of generality, that all arc capacities are finite (since we can set the capacity of any uncapacitated arc equal to the sum of the capacities of all capacitated arcs). Let U = max {u_ij : (i, j) ∈ A}. As earlier, the arc adjacency list, defined as A(i) = {(i, k) : (i, k) ∈ A}, designates the arcs emanating from node i. In the maximum flow problem, we wish to find the maximum flow from the source node s to the sink node t that satisfies the arc capacities. Formally, the problem is to

Maximize v    (4.1a)

subject to

Σ_{j : (i,j) ∈ A} x_ij - Σ_{j : (j,i) ∈ A} x_ji = v if i = s; 0 if i ≠ s, t; -v if i = t; for all i ∈ N,    (4.1b)

0 ≤ x_ij ≤ u_ij for each (i, j) ∈ A.    (4.1c)

It is possible to relax the integrality assumption on arc capacities for some algorithms, though this assumption is necessary for others. Algorithms whose complexity bounds involve U assume integrality of the data. Note, however, that rational arc capacities can always be transformed to integer arc capacities by appropriately scaling the data. Thus, the integrality assumption is not a restrictive assumption in practice.

The concept of residual network is crucial to the algorithms we consider. Given a flow x, the residual capacity r_ij of any arc (i, j) ∈ A represents the maximum additional flow that can be sent from node i to node j using the arcs (i, j) and (j, i). The residual capacity has two components: (i) u_ij - x_ij, the unused capacity of arc (i, j), and (ii) the current flow x_ji on arc (j, i), which can be cancelled to increase the flow to node j. Consequently, r_ij = u_ij - x_ij + x_ji. We call the network consisting of the arcs with positive residual capacities the residual network (with respect to the flow x), and represent it as G(x). Figure 4.1 illustrates an example of a residual network.
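The residual capacities are easy to maintain in code. The following sketch assumes the capacities u and the flows x are dictionaries keyed by arc (i, j), with (j, i) present whenever (i, j) is (recall that zero capacity arcs are allowed); the names are illustrative.

    def residual_capacities(u, x):
        # r(i,j) = u(i,j) - x(i,j) + x(j,i); arcs with r > 0 form G(x)
        return {(i, j): u[(i, j)] - x[(i, j)] + x[(j, i)] for (i, j) in u}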

4.1 Labeling Algorithm and the Max-Flow Min-Cut Theorem

One of the simplest and most intuitive algorithms for solving the maximum flow problem is the augmenting path algorithm due to Ford and Fulkerson. The algorithm proceeds by identifying directed paths from the source to the sink in the residual network and augmenting flows on these paths, until the residual network contains no such path. The following high-level (and flexible) description of the algorithm summarizes the basic iterative steps, without specifying any particular algorithmic strategy for how to determine augmenting paths.

algorithm AUGMENTING PATH;
begin
    x : = 0;
    while there is a path P from s to t in G(x) do
    begin
        Δ : = min {r_ij : (i, j) ∈ P};
        augment Δ units of flow along P and update G(x);
    end;
end;

We now discuss this algorithm in more detail. First, we need a method to identify a directed path from the source to the sink in the residual network, or to show that the network contains no such path. Second, we need to show that the algorithm terminates finitely. Finally, we must establish that the algorithm terminates with a maximum flow. The last result follows from the proof of the max-flow min-cut theorem.

A directed path from the source to the sink in the residual network is also called an augmenting path. The residual capacity of an augmenting path is the minimum residual capacity of any arc on the path. The definition of the residual capacity implies that an additional flow of Δ on arc (i, j) of the residual network corresponds to (i) an increase in x_ij by Δ in the original network, or (ii) a decrease in x_ji by Δ in the original network, or (iii) a convex combination of (i) and (ii). For our purposes, it is easier to work directly with residual capacities and to compute the flows only when the algorithm terminates. For each arc (i, j) ∈ P, augmenting Δ units of flow along P decreases r_ij by Δ and increases r_ji by Δ.

The labeling algorithm performs a search of the residual network to find a directed path from s to t. It does so by fanning out from the source node s to find a directed tree containing nodes that are reachable from the source along a directed path in the residual network. At any step, we refer to the nodes in the tree as labeled and those not in the tree as unlabeled. The algorithm selects a labeled node and scans its arc adjacency list (in the residual network) to label more unlabeled nodes. Eventually, the sink becomes labeled, and the algorithm sends the maximum possible flow on the path from s to t. It then erases the labels and repeats this process. The algorithm terminates when it has scanned all labeled nodes and the sink remains unlabeled. The following algorithmic description specifies the steps of the labeling algorithm in detail.

Figure 4.1. Example of a residual network: (a) network with arc capacities, where node 1 is the source and node 4 is the sink (arcs not shown have zero capacities); (b) network with a flow x, where the arc flow is indicated beside the arc capacity; (c) the residual network with residual arc capacities.

The algorithm maintains a predecessor index pred(i) for each labeled node i, indicating the node that caused node i to be labeled. The predecessor indices allow us to trace back along the path from a node to the source.

algorithm LABELING;
begin
loop
    pred(j) : = 0 for each j ∈ N;
    L : = {s};
    while L ≠ ∅ and t is unlabeled do
    begin
        select a node i ∈ L;
        for each (i, j) ∈ A(i) do
            if j is unlabeled and r_ij > 0 then
            begin
                pred(j) : = i;
                mark j as labeled and add this node to L;
            end;
    end;
    if t is labeled then
    begin
        use the predecessor labels to trace back to obtain the augmenting path P from s to t;
        Δ : = min {r_ij : (i, j) ∈ P};
        augment Δ units of flow along P;
        erase all labels and go to loop;
    end
    else quit the loop;
end; (loop)
end;

The final residual capacities r can be used to obtain the arc flows as follows. Since r_ij = u_ij - x_ij + x_ji, the arc flows satisfy x_ij - x_ji = u_ij - r_ij. Hence, if u_ij > r_ij, we can set x_ij = u_ij - r_ij and x_ji = 0; otherwise, we set x_ij = 0 and x_ji = r_ij - u_ij.
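A sketch of the labeling algorithm in code follows. It assumes r is a dictionary of residual capacities (as built earlier) and adj[i] lists the neighbors of node i; it works directly with residual capacities, recovering flows only at the end as just described. The names are ours.

    def labeling_max_flow(adj, r, s, t):
        value = 0
        while True:
            pred = {s: s}                       # labeled nodes and their parents
            L = [s]
            while L and t not in pred:
                i = L.pop()                     # select any labeled node
                for j in adj[i]:
                    if j not in pred and r[(i, j)] > 0:
                        pred[j] = i             # label j and add it to L
                        L.append(j)
            if t not in pred:                   # sink unlabeled: flow is maximum
                return value
            path = [t]                          # trace the augmenting path back
            while path[-1] != s:
                path.append(pred[path[-1]])
            path.reverse()
            delta = min(r[(path[k], path[k + 1])] for k in range(len(path) - 1))
            for k in range(len(path) - 1):      # augment delta units along P
                i, j = path[k], path[k + 1]
                r[(i, j)] -= delta
                r[(j, i)] += delta
            value += delta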

In order to show that the algorithm obtains a maximum flow, we introduce some new definitions and notation. Recall from Section 1.3 that a set Q ⊆ A is a cutset if the subnetwork G' = (N, A - Q) is disconnected and no proper subset of Q has this property. A cutset partitions the node set N into two subsets. A cutset is called an s-t cutset if the source and the sink nodes are contained in different subsets of nodes S and S' = N - S, where S is the set of nodes connected to s. Conversely, any partition of the node set as S and S' with s ∈ S and t ∈ S' defines an s-t cutset. We alternatively designate an s-t cutset as (S, S'). An arc (i, j) with i ∈ S and j ∈ S' is called a forward arc, and an arc (i, j) with i ∈ S' and j ∈ S is called a backward arc of the cutset (S, S').

Let x be a flow vector satisfying the flow conservation and capacity constraints of (4.1), and for this flow vector x, let v be the amount of flow leaving the source. We refer to v as the value of the flow. Adding the flow conservation constraints (4.1b) for the nodes in S, and noting that when nodes i and j both belong to S, the term x_ij in the equation for node i cancels -x_ij in the equation for node j, we obtain

v = Σ_{i ∈ S} Σ_{j ∈ S'} x_ij - Σ_{i ∈ S'} Σ_{j ∈ S} x_ij.    (4.2)

Define the net flow across the s-t cutset (S, S') as

F_x(S, S') = Σ_{i ∈ S} Σ_{j ∈ S'} x_ij - Σ_{i ∈ S'} Σ_{j ∈ S} x_ij,    (4.3)

and define the capacity C(S, S') of the s-t cutset (S, S') as C(S, S') = Σ_{i ∈ S} Σ_{j ∈ S'} u_ij. We claim that the flow across any s-t cutset equals the value of the flow and does not exceed the cutset capacity. Indeed, (4.2) and (4.3) imply that

v = F_x(S, S').    (4.4)

Substituting x_ij ≤ u_ij in the first summation of (4.3) and x_ij ≥ 0 in the second summation shows that

F_x(S, S') ≤ Σ_{i ∈ S} Σ_{j ∈ S'} u_ij = C(S, S').    (4.5)
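As a small illustration, the capacity of an s-t cutset is computed directly from its forward arcs; the sketch below assumes the capacities are stored in a dictionary keyed by arc, and S is given as a set of nodes.

    def cut_capacity(S, u):
        # C(S, S-bar) sums u(i,j) over the forward arcs of the cutset only
        return sum(cap for (i, j), cap in u.items() if i in S and j not in S)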

The inequality (4.5) is the weak duality property of the maximum flow problem when viewed as a linear program. Like most weak duality results, it is the "easy" half of duality theory. The more substantive strong duality property asserts that (4.5) holds as an equality for some choice of x and some choice of an s-t cutset (S, S'). This strong duality property is the max-flow min-cut theorem.

Theorem 4.1. (Max-Flow Min-Cut Theorem) The maximum value of flow from s to t equals the minimum capacity of all s-t cutsets.

Proof. Let x denote a maximum flow vector and v denote the maximum flow value. (Linear programming theory, or our subsequent algorithmic developments, guarantee that the problem always has a maximum flow as long as some cutset has finite capacity.) Define S to be the set of labeled nodes in the residual network G(x) when we apply the labeling algorithm with the initial flow x, and let S' = N - S. Clearly, s ∈ S and, since x is a maximum flow, t ∈ S'. Note that r_ij = 0 for each arc (i, j) ∈ (S, S'), for otherwise nodes in S' could be labeled from the nodes in S. Since r_ij = u_ij - x_ij + x_ji, the conditions x_ij ≤ u_ij and x_ji ≥ 0 imply that x_ij = u_ij for each forward arc in the cutset (S, S') and x_ij = 0 for each backward arc in the cutset. Making these substitutions in (4.4) yields

v = F_x(S, S') = Σ_{i ∈ S} Σ_{j ∈ S'} u_ij = C(S, S').    (4.6)

But we have observed earlier that v is a lower bound on the capacity of any s-t cutset. Consequently, the cutset (S, S') is a minimum capacity cutset and its capacity equals the maximum flow value v. We have thus established the theorem.

The proof of this theorem not only establishes the max-flow min-cut property, but the same argument shows that when the labeling algorithm terminates, it has at hand both the maximum flow value (and a maximum flow vector) and a minimum capacity s-t cutset. But does the algorithm terminate finitely? Each labeling iteration of the algorithm scans any node at most once, inspecting each arc in A(i) at most once, and hence requires O(m) computations. If all capacities are integral and bounded by a finite number U, then the capacity of the cutset ({s}, N - {s}) is at most nU. Since the labeling algorithm increases the flow value by at least one unit in any iteration, it terminates within nU iterations.

This bound on the number of iterations is not entirely satisfactory for large values of U: if U = 2^n, the bound is exponential in the number of nodes. Moreover, the algorithm can indeed perform this many iterations, as the example given in Figure 4.2 illustrates. In addition, if the capacities are irrational, the algorithm may not terminate: although the successive flow values converge, they may not converge to the maximum flow value. Thus, if the method is to be effective, we must select the augmenting paths carefully. Several refinements of the algorithm, including those we consider in Section 4.4, overcome this difficulty and obtain an optimum flow even if the capacities are irrational; moreover, the max-flow min-cut theorem (and our proof of Theorem 4.1) is true even if the data are irrational.

A second drawback of the labeling algorithm is its "forgetfulness". At each iteration, the algorithm generates node labels that contain information about augmenting paths from the source to other nodes. The implementation we have described erases the labels as it proceeds from one iteration to the next, even though much of this information may be valid in the next residual network. Erasing the labels therefore destroys potentially useful information. Ideally, we should retain a label when it can be used profitably in later computations.

4.2 Decreasing the Number of Augmentations

The bound of nU on the number of augmentations in the labeling algorithm is not satisfactory from a theoretical perspective. Furthermore, without further modifications, the augmenting path algorithm may take Ω(nU) augmentations, as the example given in Figure 4.2 illustrates.

Flow decomposition shows that, in principle, augmenting path algorithms should be able to find a maximum flow in no more than m augmentations. For suppose x is an optimum flow and y is any initial flow (possibly zero). By the flow decomposition property, it is possible to obtain x from y by a sequence of at most m augmentations on augmenting paths from s to t plus flows around augmenting cycles. If we define x' as the flow vector obtained from y by applying only the augmenting paths, then x' is also a maximum flow (flows around cycles do not change the flow value). This result shows that it is, in theory, possible to find a maximum flow using at most m augmentations. Unfortunately, to apply this flow decomposition argument, we would need to know a maximum flow in advance. No algorithm developed in the literature comes close to achieving this theoretical bound. Nevertheless, it is possible to improve considerably on the bound of O(nU) augmentations of the basic labeling algorithm.

Figure 4.2. A pathological example for the labeling algorithm: (a) the input network with arc capacities; (b) after augmenting along the path s-a-b-t (arc flow is indicated beside the arc capacity); (c) after augmenting along the path s-b-a-t. After 2 x 10^6 augmentations, alternately along s-a-b-t and s-b-a-t, the flow is maximum.

One natural specialization of the augmenting path algorithm is to augment flow along a "shortest path" from the source to the sink, defined as a path consisting of the least number of arcs. If we augment flow along a shortest path, then the length of any shortest path either stays the same or increases. Moreover, within m augmentations, the length of the shortest path is guaranteed to increase. (We will prove these results in the next section.) Since no path contains more than n-1 arcs, this rule guarantees that the number of augmentations is at most (n-1)m.

An alternative is to augment flow along a path of maximum residual capacity. This specialization also leads to improved complexity. Let v be any flow value and v* be the maximum flow value. By flow decomposition, the network contains at most m augmenting paths whose residual capacities sum to (v* - v). Thus the maximum capacity augmenting path has residual capacity at least (v* - v)/m. Now consider a sequence of 2m consecutive maximum capacity augmentations, starting with flow value v. At least one of these augmentations must augment the flow by an amount (v* - v)/2m or less, for otherwise we would already have a maximum flow. Thus, after 2m or fewer maximum capacity augmentations, the algorithm would reduce the residual capacity of a maximum capacity augmenting path by a factor of at least two. Since this capacity is initially at most U and must be at least 1 until the flow is maximum, after O(m log U) maximum capacity augmentations, the flow must be maximum. (Note that we are essentially repeating the argument used to establish the geometric improvement approach discussed in Section 1.6.)

In the following section, we consider another algorithm for reducing the number of augmentations.

4.3 Shortest Augmenting Path Algorithm

A natural approach to augmenting along shortest paths would be to look successively for shortest paths by performing a breadth first search in the residual network. If the labeling algorithm maintains the set L of labeled nodes as a queue, then by examining the labeled nodes in a first-in, first-out order, it would obtain a shortest path in the residual network. Each of these iterations would take O(m) steps, both in the worst case and in practice, and (by our subsequent observations) the resulting computation time would be O(nm²). Unfortunately, this computation time is excessive. We can improve the running time by exploiting the fact that the minimum distance from any node i to the sink node t is monotonically nondecreasing over all augmentations. By fully exploiting this property, we can reduce the average time per augmentation to O(n).

The Algorithm

The concept of distance labels will prove to be an important construct in the maximum flow algorithms that we discuss in this section and in Sections 4.4 and 4.5. A distance function d : N → Z⁺ with respect to the residual capacities r_ij is a function from the set of nodes to the nonnegative integers. We say that a distance function is valid if it satisfies the following two conditions:

C4.1. d(t) = 0;

C4.2. d(i) ≤ d(j) + 1 for every arc (i, j) ∈ A with r_ij > 0.

We refer to d(i) as the distance label of node i and to conditions C4.1 and C4.2 as the validity conditions. It is easy to demonstrate that d(i) is a lower bound on the length of the shortest directed path from i to t in the residual network. Let i = i₁ - i₂ - ... - iₖ - t be any path of length k in the residual network from node i to t. Then, from C4.2, we have d(i) = d(i₁) ≤ d(i₂) + 1, d(i₂) ≤ d(i₃) + 1, ..., d(iₖ) ≤ d(t) + 1 = 1. These inequalities imply that d(i) ≤ k for any path of length k from i to t in the residual network and, hence, any shortest path from node i to t contains at least d(i) arcs. If for each node i the distance label d(i) equals the length of the shortest path from i to t in the residual network, then we call the distance labels exact. For example, in Figure 4.1(c), d = (0, 0, 0, 0) is a valid distance label, though d = (3, 1, 2, 0) represents the exact distance labels. There is no particular urgency to compute these distances exactly: it suffices to have valid distances, which are lower bounds on the exact distances. By allowing this flexibility, we can maintain the distance labels without incurring any significant cost.
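Exact distance labels are obtained by a backward breadth first search from the sink. A minimal sketch, assuming radj[j] lists the nodes i with a residual arc (i, j); the names are ours.

    from collections import deque

    def exact_distance_labels(n, radj, t):
        d = [n] * n                 # nodes that cannot reach t keep the label n
        d[t] = 0
        queue = deque([t])
        while queue:
            j = queue.popleft()
            for i in radj[j]:       # node i reaches t through arc (i, j)
                if d[i] == n:
                    d[i] = d[j] + 1
                    queue.append(i)
        return d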

We now define some additional notation. An arc (i, j) in the residual network is admissible if it satisfies d(i) = d(j) + 1 and r_ij > 0; other arcs are inadmissible. A path from s to t consisting entirely of admissible arcs is an admissible path. For any admissible path of length k, d(s) = k. Since d(s) is a lower bound on the length of any path from the source to the sink in the residual network, an admissible path is a shortest augmenting path. The algorithm we describe next repeatedly augments flow along admissible paths; hence, as the name implies, it augments flows along shortest paths in the residual network, and we refer to it as the shortest augmenting path algorithm.

The algorithm maintains a partial admissible path, i.e., a path from the source node s to some node i*, called the current node, consisting entirely of admissible arcs. It stores this path using predecessor indices, pred(j) for each arc (i, j) on the path. The algorithm generates an admissible path by adding admissible arcs, one at a time, as follows. It performs one of two steps at the current node i*: advance or retreat. The advance step identifies some admissible arc (i*, j*) emanating from node i*, adds it to the partial admissible path, and designates j* as the new current node. If no admissible arc emanates from node i*, then the algorithm performs the retreat step. This step increases the distance label of node i* so that at least one admissible arc emanates from it (we refer to this step as a relabel operation). Increasing d(i*) makes the arc (pred(i*), i*) inadmissible (assuming i* ≠ s); consequently, we delete (pred(i*), i*) from the partial admissible path, and node pred(i*) becomes the new current node. Whenever the partial admissible path becomes an admissible path (i.e., contains node t), the algorithm makes a maximum possible augmentation on this path and begins again with the source as the current node. The algorithm terminates when d(s) ≥ n, indicating that the network contains no augmenting path from the source to the sink. We next describe the algorithm formally.

algorithm SHORTEST AUGMENTING PATH;
begin
    x : = 0;
    perform a backward breadth first search of the residual network, starting at the sink node t, to obtain the distance labels d(i);
    i* : = s;
    while d(s) < n do
    begin
        if i* has an admissible arc then
        begin
            ADVANCE(i*);
            if i* = t then begin AUGMENT and set i* : = s; end;
        end
        else RETREAT(i*);
    end;
end;

procedure ADVANCE(i*);
begin
    let (i*, j*) be an admissible arc in A(i*);
    pred(j*) : = i* and i* : = j*;
end;

procedure RETREAT(i*);
begin
    d(i*) : = min {d(j) + 1 : (i*, j) ∈ A(i*) and r_{i*j} > 0};
    if i* ≠ s then i* : = pred(i*);
end;

procedure AUGMENT;
begin
    using the predecessor indices, identify an augmenting path P from the source to the sink;
    Δ : = min {r_ij : (i, j) ∈ P};
    augment Δ units of flow along path P;
end;

We use the following data structure to select an admissible arc emanating from a node. We maintain the list A(i) of arcs emanating from each node i. Arcs in each list can be arranged arbitrarily, but the order, once decided, remains unchanged throughout the algorithm. Each node i has a current-arc (i, j), which is the current candidate for the next advance step. Initially, the current-arc of node i is the first arc in its arc list. The algorithm examines this list sequentially: whenever the current arc is inadmissible, it makes the next arc in the arc list the current arc. When the algorithm has examined all arcs in A(i), it updates the distance label of node i, and the current arc once again becomes, implicitly, the first arc in its arc list. In our subsequent discussion, we shall always assume that the algorithms select admissible arcs using this technique.
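The sketch below assembles these pieces into code. It assumes r is a dictionary of residual capacities, adj[i] is the fixed arc list of node i, and the initial labels come from a backward breadth first search; cur[i] plays the role of the current-arc pointer. This is an illustration under our own naming, not the authors' implementation.

    def shortest_augmenting_path(n, adj, r, s, t, d):
        cur = [0] * n                       # current-arc index for each node
        pred = [None] * n
        value, i = 0, s
        while d[s] < n:
            # scan from the current arc for an admissible arc out of i
            while cur[i] < len(adj[i]):
                j = adj[i][cur[i]]
                if r[(i, j)] > 0 and d[i] == d[j] + 1:
                    break
                cur[i] += 1
            if cur[i] < len(adj[i]):        # advance along the admissible arc
                pred[j] = i
                i = j
                if i == t:                  # augment, then restart at the source
                    path = [t]
                    while path[-1] != s:
                        path.append(pred[path[-1]])
                    path.reverse()
                    delta = min(r[(path[k], path[k + 1])]
                                for k in range(len(path) - 1))
                    for k in range(len(path) - 1):
                        a, b = path[k], path[k + 1]
                        r[(a, b)] -= delta
                        r[(b, a)] += delta
                    value += delta
                    i = s
            else:                           # retreat: relabel node i
                labels = [d[j] + 1 for j in adj[i] if r[(i, j)] > 0]
                d[i] = min(labels) if labels else n
                cur[i] = 0                  # current arc resets to the first arc
                if i != s:
                    i = pred[i]
        return value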

Correctness of the Algorithm

We first show that the shortest augmenting path algorithm correctly solves the maximum flow problem.

Lemma 4.1. The shortest augmenting path algorithm maintains valid distance labels at each step. Moreover, each relabel step strictly increases the distance label of a node.

Proof. We show that the algorithm maintains valid distance labels at every step by performing induction on the number of augment and relabel steps. Initially, the algorithm constructs valid (indeed, exact) distance labels by the backward breadth first search. Assume, inductively, that the distance function is valid prior to a step, i.e., that it satisfies the validity condition C4.2. We need to check that these conditions remain valid (i) after an augment step (when the residual graph changes), and (ii) after a relabel step.

(i) A flow augmentation on arc (i, j) might delete this arc from the residual network, but this modification to the residual network does not affect the validity of the distance function for this arc. Augmentation on arc (i, j) might, however, create an additional arc (j, i) with r_ji > 0 and, therefore, also create an additional condition d(j) ≤ d(i) + 1 that needs to be satisfied. The distance labels satisfy this condition, though, since d(i) = d(j) + 1 by the admissibility property of the augmenting path.

(ii) The algorithm performs a relabel step at node i when the current arc reaches the end of the arc list A(i). Observe that if an arc (i, j) is inadmissible at some stage, then it remains inadmissible until d(i) increases, because of our inductive hypothesis that the distance labels are nondecreasing. Thus, when the current arc reaches the end of the arc list A(i), no arc (i, j) ∈ A(i) satisfies d(i) = d(j) + 1 and r_ij > 0. Hence d(i) < min {d(j) + 1 : (i, j) ∈ A(i) and r_ij > 0} = d'(i), thereby establishing the second part of the lemma. Finally, this choice for changing d(i) ensures that the condition d(i) ≤ d(j) + 1 remains valid for all arcs (i, j) in the residual network; in addition, since d(i) increases, the conditions d(k) ≤ d(i) + 1 remain valid for all arcs (k, i) in the residual network.

Theorem 4.2. The shortest augmenting path algorithm correctly computes a maximum flow.

Proof. The algorithm terminates when d(s) ≥ n. Since d(s) is a lower bound on the length of the shortest augmenting path from s to t, this condition implies that the network contains no augmenting path from the source to the sink, which is the termination criterion for the generic augmenting path algorithm. At termination, we can also obtain a minimum s-t cutset as follows. For each 0 ≤ k < n, let α_k denote the number of nodes with distance label equal to k. Note that α_{k*} must be zero for some k* < n, since Σ_k α_k ≤ n - 1 (recall that d(s) ≥ n). Let S = {i ∈ N : d(i) > k*} and S' = N - S. By construction, s ∈ S and t ∈ S', and both the sets S and S' are nonempty. Consider any arc (i, j) ∈ (S, S'); then d(i) ≥ k* + 1 and d(j) ≤ k* - 1, so d(i) > d(j) + 1. The validity condition C4.2 therefore implies that r_ij = 0 for each (i, j) ∈ (S, S'). Hence (S, S') is a minimum cutset and the current flow is maximum.

Complexity of the Algorithm

We next show that the algorithm computes a maximum flow in O(n²m) time.

Lemma 4.2. (a) Each distance label increases at most n times. Consequently, the total number of relabel steps is at most n². (b) The number of augment steps is at most nm/2.

Proof. Each relabel step at node i increases d(i) by at least one. After the algorithm has relabeled node i at most n times, d(i) ≥ n. From this point on, the algorithm never selects node i again during an advance step, since for every node k in the current path, d(k) ≤ d(s) < n. Thus the algorithm relabels a node at most n times, and the total number of relabel steps is bounded by n².

Each augment step saturates at least one arc, i.e., decreases its residual capacity to zero. Suppose that the arc (i, j) becomes saturated at some iteration (at which d(i) = d(j) + 1). Then no more flow can be sent on (i, j) until flow is sent back from j to i, at which point d'(j) = d'(i) + 1 ≥ d(i) + 1 = d(j) + 2. Hence, between two consecutive saturations of arc (i, j), the label d(j) increases by at least 2 units. Consequently, any arc (i, j) can become saturated at most n/2 times, the total number of arc saturations is no more than nm/2, and the number of augment steps is at most nm/2.

Theorem 4.3. The shortest augmenting path algorithm runs in O(n²m) time.

Proof. The algorithm performs O(nm) flow augmentations, and each augmentation takes O(n) time, resulting in O(n²m) total effort in the augmentation steps. Each advance step increases the length of the partial admissible path by one, and each retreat step decreases its length by one. Since each partial admissible path has length at most n, the algorithm requires at most O(n² + n²m) advance steps: the first term comes from the number of retreat (relabel) steps, and the second term from the number of augmentations, which are bounded by nm/2 by the previous lemma.

Finally, we consider the time spent in identifying admissible arcs. The time taken to identify the admissible arc of node i is O(1) plus the time spent in scanning arcs in A(i). After having performed |A(i)| such scannings, the algorithm reaches the end of the arc list and relabels node i. Since the algorithm relabels each node O(n) times, the total time spent in all such scannings is O(Σ_{i ∈ N} n |A(i)|) = O(nm); likewise, the total time spent in all relabel operations, each execution requiring O(|A(i)|) time, is Σ_{i ∈ N} n |A(i)| = O(nm). The combination of these time bounds establishes the theorem.

The idea of augmenting flows along shortest paths is intuitively appealing and easy to implement in practice. The resulting algorithms identify at most O(nm) augmenting paths, and this bound is tight, i.e., on particular examples these algorithms perform Ω(nm) augmentations. The only way to improve the running time of the shortest augmenting path algorithm is to perform fewer computations per augmentation. The use of a sophisticated data structure, called dynamic trees, reduces the average time for the algorithm to perform each augmentation from O(n) to O(log n). This implementation of the maximum flow algorithm runs in O(nm log n) time, and obtaining further improvements appears quite difficult, except in very dense networks. These implementations with sophisticated data structures appear to be primarily of theoretical interest, however, because maintaining the data structures requires substantial overhead that tends to increase rather than reduce the computational times in practice. A detailed discussion of dynamic trees is beyond the scope of this chapter.

The proof of Theorem 4.2 also suggests an alternative termination condition for the shortest augmenting path algorithm. The termination criterion d(s) ≥ n is satisfactory for a worst-case analysis, but may not be efficient in practice. Researchers have observed empirically that the algorithm spends too much time in relabeling, a major portion of which is done after it has already found a maximum flow. The algorithm can be improved by detecting the presence of a minimum cutset prior to performing these relabeling operations. We can do so by maintaining the number of nodes α_k with distance label equal to k, for 0 ≤ k < n. The algorithm updates this array after every relabel operation, and terminates whenever it first finds a gap in the α array, i.e., α_{k*} = 0 for some k* < n; if S = {i : d(i) > k*}, then (S, S') denotes a minimum cutset. The following sketch illustrates this bookkeeping.
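A minimal sketch, assuming num[k] counts the nodes with distance label k and is sized to hold the largest label the algorithm can assign; the names are illustrative.

    def relabel_with_gap_check(num, old_label, new_label):
        # Move one node from level old_label to level new_label; a zero
        # count at old_label is a gap: S = {i : d(i) > old_label} then
        # yields a minimum cutset and the algorithm may stop early.
        num[old_label] -= 1
        num[new_label] += 1
        return num[old_label] == 0      # True signals early termination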

Potential Functions and an Alternate Proof of Lemma 4.2(b)

A powerful method for proving computational time bounds is to use potential functions. Potential function techniques are general purpose techniques for proving the complexity of an algorithm by analyzing the effects of different steps on an appropriately defined function. The use of potential functions enables us to define an "accounting" relationship between the occurrences of various steps of an algorithm that can be used to obtain a bound on steps that might be difficult to obtain using other arguments. In general, we bound the number of steps of one type in terms of known bounds on the number of steps of other types.

Rather than formally introducing potential functions, we illustrate the technique by showing that the number of augmentations in the shortest augmenting path algorithm is O(nm); that is, we bound the number of augmentations using bounds on the number of relabels. Suppose that in the shortest augmenting path algorithm we kept track of the number of admissible arcs in the residual network. Let F(k) denote the number of admissible arcs at the end of the k-th step; for the purpose of this argument, we count a step as either an augmentation or a relabel operation. Let the algorithm perform K steps before it terminates. Clearly, F(0) ≤ m and F(K) ≥ 0. Each augmentation decreases the residual capacity of at least one arc to zero and hence reduces F by at least one unit. Each relabeling of a node i creates as many as |A(i)| new admissible arcs, and increases F by the same amount. The total increase in F over all relabelings is thus at most nm, since the algorithm relabels any node at most n times (as a consequence of Lemma 4.2) and Σ_{i ∈ N} n |A(i)| = nm. Since the initial value of F is at most m more than its terminal value, the total decrease in F due to all augmentations is at most m + nm. Thus the number of augmentations is at most m + nm = O(nm).

This argument is fairly representative of the potential function technique. Our objective was to bound the number of augmentations. We did so by defining a potential function that decreases whenever the algorithm performs an augmentation and increases only when the algorithm relabels distances, so that we could bound the number of augmentations using known bounds on the number of relabels.

4.4 Preflow-Push Algorithms

Augmenting path algorithms send flow by augmenting along a path. This basic step further decomposes into the more elementary operation of sending flow along an arc. Thus sending a flow of Δ units along a path of k arcs decomposes into k basic operations of sending a flow of Δ units along an arc of the path. We shall refer to each of these basic operations as a push.

A path augmentation has one advantage over a single push: it maintains conservation of flow at all nodes. In fact, the push-based algorithms such as those we develop in this and the following sections necessarily violate conservation of flow. Rather, these algorithms permit the flow into a node to exceed the flow out of this node; we will refer to any such flows as preflows. A preflow is a function x : A → R that satisfies (4.1c) and the following relaxation of (4.1b):

Σ_{j : (j,i) ∈ A} x_ji - Σ_{j : (i,j) ∈ A} x_ij ≥ 0, for all i ∈ N - {s, t}.

The preflow-push algorithms maintain a preflow at each intermediate stage. For a given preflow x, we define the excess of each node i ∈ N - {s, t} as

e(i) = Σ_{j : (j,i) ∈ A} x_ji - Σ_{j : (i,j) ∈ A} x_ij.

We refer to a node with positive excess as an active node, and we adopt the convention that the source and sink nodes are never active. At each iteration of the algorithm (except at its initialization and its termination), the network contains at least one active node, i.e., a node i ∈ N - {s, t} with e(i) > 0. The goal of each iterative step is to choose some active node and to send its excess closer to the sink, closeness being measured with respect to the current distance labels. As in the shortest augmenting path algorithm, we send flow only on admissible arcs. (We define the distance labels and admissible arcs exactly as in the previous section.) If the method cannot send excess from an active node to nodes with smaller distance labels, it increases the distance label of the node so that it creates at least one new admissible arc. The algorithm terminates when the network contains no active nodes.

Preflow-push algorithms have several advantages over augmentation based algorithms. First, they are more general and more flexible. Second, they can push flow closer to the sink before identifying augmenting paths. Third, because they perform all operations using only local information, they are better suited for distributed or parallel computation. Fourth, the best preflow-push algorithms currently outperform the best augmenting path algorithms in theory as well as in practice.

The Generic Algorithm

The two basic operations of the generic preflow-push method are (i) pushing the flow on an admissible arc, and (ii) updating a distance label. The preflow-push algorithm uses the following subroutines:

procedure PREPROCESS;
begin
    x : = 0;
    perform a backward breadth first search of the residual network, starting at node t, to determine initial distance labels d(i);
    x_sj : = u_sj for each arc (s, j) ∈ A(s) and d(s) : = n;
end;

procedure PUSH/RELABEL(i);
begin
    if the network contains an admissible arc (i, j) then
        push δ : = min {e(i), r_ij} units of flow from node i to node j
    else replace d(i) by min {d(j) + 1 : (i, j) ∈ A(i) and r_ij > 0};
end;

A push of δ units from node i to node j decreases both e(i) and r_ij by δ units and increases both e(j) and r_ji by δ units. We say that a push of δ units of flow on arc (i, j) is saturating if δ = r_ij, and nonsaturating otherwise. We refer to the process of increasing the distance label of a node as a relabel operation; the purpose of the relabel operation is to create at least one admissible arc on which the algorithm can perform further pushes.

The following generic version of the preflow-push algorithm combines the subroutines just described.

algorithm PREFLOW-PUSH;
begin
    PREPROCESS;
    while the network contains an active node do
    begin
        select an active node i;
        PUSH/RELABEL(i);
    end;
end;
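The following sketch renders the generic method in code. It assumes r is a dictionary of residual capacities, adj[i] lists the neighbors of node i, and d comes from a backward breadth first search of the residual network; the names are ours. For simplicity, the sketch keeps pushing from the selected node until its excess is exhausted, which is one admissible specialization of the generic selection rule.

    def preflow_push(n, adj, r, s, t, d):
        e = [0] * n                             # node excesses
        d[s] = n
        for j in adj[s]:                        # preprocess: saturate arcs out of s
            delta = r[(s, j)]
            if delta > 0:
                r[(s, j)] -= delta
                r[(j, s)] += delta
                e[j] += delta
        active = [i for i in range(n) if e[i] > 0 and i != s and i != t]
        while active:
            i = active.pop()                    # select any active node
            while e[i] > 0:
                pushed = False
                for j in adj[i]:                # push on admissible arcs
                    if r[(i, j)] > 0 and d[i] == d[j] + 1:
                        delta = min(e[i], r[(i, j)])
                        r[(i, j)] -= delta
                        r[(j, i)] += delta
                        e[i] -= delta
                        if e[j] == 0 and j != s and j != t:
                            active.append(j)    # node j becomes active
                        e[j] += delta
                        pushed = True
                        if e[i] == 0:
                            break
                if not pushed:                  # no admissible arc: relabel i
                    d[i] = min(d[j] + 1 for j in adj[i] if r[(i, j)] > 0)
        return e[t]                             # value of the maximum flow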

It might be instructive to visualize the generic preflow-push algorithm in terms of a physical network: arcs represent flexible water pipes, nodes represent joints, and the distance function measures how far nodes are above the ground. In this network, we wish to send water from the source to the sink. Initially, we move the source node upward, and water flows to its neighbors. In general, water flows downhill towards the sink; occasionally, however, flow becomes trapped locally at a node that has no downhill neighbors. At this point, we move the node upward, and again water flows downhill towards the sink. Eventually, no flow that can reach the sink remains; as we continue to move nodes upwards, the remaining excess flow eventually flows back towards the source. The algorithm terminates when all the water flows either into the sink or back into the source.

The preprocessing step accomplishes several important tasks. First, it gives each node adjacent to node s a positive excess, so that the algorithm can begin by selecting some node with positive excess. Second, since the preprocessing step saturates all arcs incident to node s, none of these arcs is admissible, and setting d(s) = n will satisfy the validity condition C4.2. Third, since d(s) = n is a lower bound on the length of any path from s to t, the residual network contains no path from s to t. Since the distances in d are nondecreasing, we are also guaranteed that in subsequent iterations the residual network will never contain a directed path from s to t, and so there will never be any need to push flow from s again.

Figure 4.3 illustrates the push/relabel steps applied to the example given in Figure 4.1(a). Figure 4.3(a) specifies the preflow determined by the preprocess step. Suppose the select step examines node 2. Since arc (2, 4) has residual capacity r₂₄ = 1 and d(2) = d(4) + 1, the algorithm performs a (saturating) push of value δ = min {2, 1} = 1 unit. The push reduces the excess of node 2 to 1; arc (2, 4) is deleted from the residual network and arc (4, 2) is added to it. Since node 2 is still an active node, it can be selected again for further pushes. The arcs (2, 3) and (2, 1) have positive residual capacities, but they do not satisfy the distance condition. Hence, the algorithm performs a relabel operation and gives node 2 the new distance label d'(2) = min {d(3) + 1, d(1) + 1} = min {2, 5} = 2.

In the push/relabel(i) step, we identify an admissible arc in A(i) using the same data structure we used in the shortest augmenting path algorithm: we maintain with each node i a current arc (i, j), which is the current candidate for the push operation, and choose the current arc by sequentially scanning the arc list. We have seen earlier that scanning the arc lists in this way takes O(nm) total time if the algorithm relabels each node O(n) times.

[Figure 4.3, parts (a) and (b): (a) the residual network after the preprocessing step, with d(1) = 4, d(2) = 1, d(3) = 1, d(4) = 0 and excesses e₂ = 2, e₃ = 4; (b) after the execution of step PUSH(2).]

[(c) after the execution of step RELABEL(2).]

Figure 4.3. An illustration of push and relabel steps.

Assuming that the generic preflow-push algorithm terminates, we can easily show that it finds a maximum flow. The algorithm terminates when the excess resides either at the source or at the sink, implying that the current preflow is a flow. Since d(s) = n, the residual network contains no path from the source to the sink. This condition is the termination criterion of the augmenting path algorithm, and thus the total flow on the arcs directed into the sink is the maximum flow value.

Each node with positive excess remains connected to the source by a path in the residual network, a fact that we use repeatedly in the analysis and establish in the following lemma.

Lemma 4.3. At any stage of the preflow-push algorithm, each node i with positive excess is connected to node s by a directed path from node i to node s in the residual network.

Proof. By the flow decomposition theory, any preflow x can be decomposed with respect to the original network G into (i) nonnegative flows along paths from the source s to the sink t, (ii) nonnegative flows along paths from the source s to active nodes, and (iii) flows around directed cycles. Let i be an active node relative to the preflow x in G. Then there must be a path P from s to i in the flow decomposition of x, since paths from s to t and flows around cycles do not contribute to the excess at node i. The residual network then contains the reversal of P (P with the orientation of each arc reversed), and hence a directed path from i to s.

This lemma implies that during a relabel step, the algorithm does not minimize over an empty set.

most 2n times. and flows around cycles do not P contribute to the excess at node Then the residual network contains the reversal of O' with the orientation of each arc reversed).6.90 active node relative to the preflou' x in G. j) it performs a saturating or a nonsaturating push. 4. the total increase in F due to increases in bounded by is Case 2. 4. Let III We prove the lemma using an argument based on potential functions. create a A saturating push on arc might 1. x. In this case the distance label of node i increases by e ^ 1 units. Each distance is label increases at . This operation increases F by at most e units. During the push/ relabel (i) one of the following two must apply: 1. I denote the set of active nodes. Proof. The algorithm able to identify an arc on which it can push flow. For each node i e N. Then there t must be a path P from s to i in the flow decomposition of since paths from s to i. Since < n. does not . thereby increasing the number of active nodes by and increasing F by which may be as much as 2n per saturating push. This lemma imples set.4.2. F cases zero. dii) < 2n.5. Cor^ider the potential function F = .2 imply that (a) d(i) < d(s) + n - 1 < 2n. and so (i. the total is of relabel steps at most 2n^ (b) The number of saturating pushes at most nm. the residual network contained a path of length at most n-1 from node fact that d(s) to node The = n and condition C4. j. Since the total increase in d(i) throughout the running time of the i algorithm for each node distance labels is is bounded by 2n''. The proof is ver>' much similar to that of Lemma 4. 2n. i and hence s. is at most 2n^. Lemma Proof. The last time the algorithm relabeled node i. and hence 2n'^m Next note that a nonsaturating push on arc (i. Lemma number 4. it had a positive excess. Case The <ilgorithm is unable to find an admissible arc along which it can push flow. At termination. j) over all saturating pushes. The number of nonsaturating pushes is O(n^m). new excess at node d(j). Consequently. and hence a directed path from i to s. the algorithm does not minimize over an empty Lemma Proof. and d(i) < 2n for all i e is I. V i€ I d(i). that during a relabel step. the initial value of F (after the preprocessing step) step.

Next note that a nonsaturating push on arc (i, j) does not increase |I|. The nonsaturating push decreases F by d(i), since node i becomes inactive, but it simultaneously increases F by d(j) = d(i) - 1 if the push causes node j to become active. If node j was active before the push, then F decreases by an amount d(i). The net decrease in F is thus at least 1 unit per nonsaturating push. We summarize these facts: the initial value of F is at most 2n², and the maximum possible increase in F is 2n² + 2n²m. Each nonsaturating push decreases F by at least one unit, and F always remains nonnegative. Consequently, nonsaturating pushes can occur at most 2n² + 2n² + 2n²m = O(n²m) times, proving the lemma.

We have thus established the following theorem:

Theorem 4.4. The generic preflow-push algorithm runs in O(n²m) time.

Finally, we indicate how the algorithm keeps track of active nodes for the push/relabel steps. The algorithm maintains a set S of active nodes. It adds to S those nodes that become active following a push and are not already in S, and it deletes from S nodes that become inactive following a nonsaturating push. Several data structures (for example, doubly linked lists) are available for storing S so that the algorithm can add, delete, or select elements from it in O(1) time. Consequently, it is easy to implement the preflow-push algorithm in O(n²m) time.

A Specialization of the Generic Algorithm

The running time of the generic preflow-push algorithm is comparable to the bound of the shortest augmenting path algorithm. However, the preflow-push algorithm has several nice features, in particular, its flexibility and its potential for further improvements. By specifying different rules for selecting nodes for the push/relabel operations, we can derive many different algorithms from the generic version. For example, suppose that we always select an active node with the highest distance label for the push/relabel step. Let h* = max {d(i) : e(i) > 0, i ∈ N}. Then nodes with distance label h* push flow to nodes with distance label h* - 1, and these nodes, in turn, push flow to nodes with distance label h* - 2, and so on. If a node is relabeled, then excess moves up and then gradually comes down. Note that if the algorithm relabels no node during n consecutive node examinations, then all excess reaches the sink node and the algorithm terminates. Since each node examination entails at most one nonsaturating push, the algorithm performs at most n nonsaturating pushes between two consecutive relabel operations; since it requires O(n²) relabel operations in total, this algorithm performs O(n³) nonsaturating pushes.
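To make these bookkeeping details concrete, the following is a minimal Python sketch of the generic preflow-push method. The container layout and all names (preflow_push, adj, cap, active, and so on) are our own illustrative choices rather than anything prescribed by the text, and the sketch aims only at correctness, not at the data structures needed to attain the stated worst-case bounds.

from collections import defaultdict, deque

def preflow_push(n, arcs, s, t):
    # Generic preflow-push on nodes 0..n-1; arcs is a list of (i, j, capacity).
    cap = defaultdict(int)                 # residual capacities
    adj = defaultdict(set)                 # residual adjacency (both directions)
    for i, j, u in arcs:
        cap[(i, j)] += u
        adj[i].add(j)
        adj[j].add(i)
    e = [0] * n                            # node excesses
    d = [0] * n                            # distance labels; d(t) = 0
    d[s] = n                               # preprocessing sets d(s) = n ...
    for j in list(adj[s]):                 # ... and saturates the source arcs
        delta = cap[(s, j)]
        if delta > 0:
            cap[(s, j)] -= delta
            cap[(j, s)] += delta
            e[j] += delta
            e[s] -= delta
    active = deque(v for v in range(n) if v not in (s, t) and e[v] > 0)
    while active:
        i = active[0]
        pushed = False
        for j in adj[i]:
            if cap[(i, j)] > 0 and d[i] == d[j] + 1:   # admissible arc
                delta = min(e[i], cap[(i, j)])         # push
                cap[(i, j)] -= delta
                cap[(j, i)] += delta
                e[i] -= delta
                e[j] += delta
                if j not in (s, t) and j not in active:
                    active.append(j)                   # j just became active
                pushed = True
                break
        if not pushed:
            # relabel: Lemma 4.3 guarantees a residual arc out of an active node
            d[i] = 1 + min(d[j] for j in adj[i] if cap[(i, j)] > 0)
        if e[i] == 0:
            active.popleft()                           # i is no longer active
    return e[t]                                        # maximum flow value

The deque here plays the role of the set S; a doubly linked list with a membership flag per node would make the "j not in active" test and the deletions O(1), as the discussion above requires.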

To implement this highest-label rule, we maintain the lists LIST(r) = {i ∈ N : e(i) > 0 and d(i) = r} and a variable level, which is an upper bound on the highest index r for which LIST(r) is nonempty. We identify the highest indexed nonempty list by starting at LIST(level) and sequentially scanning the lower indexed lists. We leave it as an exercise to show that the overall effort needed to scan the lists is bounded by n plus the total increase in the distance labels, which is O(n²). The following theorem is now evident.

Theorem 4.5. The preflow-push algorithm that always pushes flow from an active node with the highest distance label runs in O(n³) time.

The O(n³) bound for the highest label preflow push algorithm is straightforward, and it can be improved: researchers have shown, using a more clever analysis, that the highest label preflow push algorithm in fact runs in O(n²√m) time.

4.5 Excess-Scaling Algorithm

The generic preflow-push algorithm allows the flow at each intermediate step to violate the mass balance equations. By pushing flows from active nodes, the algorithm attempts to satisfy these equations. The function e_max = max {e(i) : i is an active node} is one measure of the infeasibility of a preflow. Note, though, that during the execution of the generic algorithm we would observe no particular pattern in e_max, except that e_max eventually decreases to the value 0. In this section, we develop an excess-scaling technique that systematically reduces e_max to 0.

We next describe an implementation of the generic preflow-push algorithm that dramatically reduces the number of nonsaturating pushes, from O(n²m) to O(n² log U); recall that U represents the largest arc capacity in the network. We refer to this algorithm as the excess-scaling algorithm, since it is based on scaling the node excesses.

Pushes carrying small amounts of flow are of little benefit and can cause bottlenecks that retard the algorithm's progress. The excess-scaling algorithm is based on the following ideas. Let Δ denote an upper bound on e_max; we refer to this bound as the excess-dominator. The excess-scaling algorithm pushes flow from nodes whose excess is more than Δ/2 ≥ e_max/2. This choice assures that during nonsaturating pushes the algorithm sends relatively large excesses closer to the sink. The algorithm also does not allow the maximum excess to increase beyond Δ.

This algorithmic strategy may prove to be useful for the following reason. Suppose several nodes send flow to a single node j, creating a very large excess. It is likely that node j could not send the accumulated flow closer to the sink, and the algorithm would then need to increase the node's distance label and return much of its excess back toward the source. Thus, pushing too much flow to any node is likely to be a wasted effort.

The excess-scaling algorithm has the following algorithmic description.

algorithm EXCESS-SCALING;
begin
  PREPROCESS;
  K := ⌈log U⌉;
  for k := K down to 0 do
  begin (Δ-scaling phase)
    Δ := 2^k;
    while the network contains a node i with e(i) > Δ/2 do
      perform push/relabel(i) while ensuring that no node excess exceeds Δ;
  end;
end;

The algorithm performs a number of scaling phases, with the value of the excess-dominator Δ decreasing from phase to phase. We refer to a specific scaling phase with a certain value of Δ as the Δ-scaling phase. Initially, Δ = 2^⌈log U⌉, where the logarithm has base 2; thus U ≤ Δ < 2U. During the Δ-scaling phase, Δ/2 < e_max ≤ Δ, and e_max may vary up and down during the phase. When e_max ≤ Δ/2, a new scaling phase begins. After the algorithm has performed ⌈log U⌉ + 1 scaling phases, e_max decreases to the value 0 and we obtain the maximum flow.

The excess-scaling algorithm uses the same step push/relabel(i) as the generic preflow-push algorithm, but with one slight difference: instead of pushing r_ij units of flow, it pushes δ = min {e(i), r_ij, Δ - e(j)} units. This change ensures that the algorithm permits no excess to exceed Δ. The algorithm uses the following node selection rule to guarantee that no node excess exceeds Δ.

Selection Rule. Among all nodes with excess of more than Δ/2, select a node with minimum distance label (breaking ties arbitrarily).
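The modified push step and the node selection rule can be sketched as follows. This is a schematic fragment under our own naming conventions (e, cap, adj and d are the excesses, residual capacities, adjacency sets and distance labels), and it omits the special handling of the source and sink.

def push_relabel_scaled(e, cap, adj, d, i, Delta):
    # One push/relabel step of the excess-scaling method (a sketch).
    for j in adj[i]:
        if cap[(i, j)] > 0 and d[i] == d[j] + 1:      # admissible arc
            # delta = min {e(i), r_ij, Delta - e(j)}: large enough to make
            # progress, small enough that e(j) never exceeds Delta
            delta = min(e[i], cap[(i, j)], Delta - e[j])
            cap[(i, j)] -= delta
            cap[(j, i)] += delta
            e[i] -= delta
            e[j] += delta
            return
    # no admissible arc emanates from i: relabel it
    d[i] = 1 + min(d[j] for j in adj[i] if cap[(i, j)] > 0)

def select_node(e, d, nodes, Delta):
    # Selection rule: among nodes with excess above Delta/2, take one with
    # the smallest distance label (ties broken arbitrarily).
    big = [i for i in nodes if e[i] > Delta / 2]
    return min(big, key=lambda i: d[i]) if big else None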

Lemma 4.7. The algorithm satisfies the following two conditions:

C4.3. Each nonsaturating push sends at least Δ/2 units of flow.

C4.4. No excess ever exceeds Δ.

Proof. For every push on arc (i, j), we have e(i) > Δ/2 and e(j) ≤ Δ/2, because node i is a node with smallest distance label among nodes whose excess is more than Δ/2, and d(j) = d(i) - 1 < d(i) since arc (i, j) is admissible. Hence, for a nonsaturating push on arc (i, j), by sending min {e(i), r_ij, Δ - e(j)} ≥ min {Δ/2, Δ - e(j)} ≥ Δ/2 units of flow, we ensure that the algorithm sends at least Δ/2 units of flow. Further, the push operation increases only e(j). Let e'(j) denote the excess at node j after the push. Then e'(j) = e(j) + min {e(i), r_ij, Δ - e(j)} ≤ e(j) + Δ - e(j) ≤ Δ. All node excesses thus remain less than or equal to Δ.

Lemma 4.8. The excess-scaling algorithm performs O(n²) nonsaturating pushes per scaling phase, and O(n² log U) pushes in total.

Proof. Consider the potential function F = Σ_{i ∈ N} e(i) d(i)/Δ. Using this potential function, we will establish the first assertion of the lemma; since the algorithm has O(log U) scaling phases, the second assertion is a consequence of the first. The initial value of F at the beginning of the Δ-scaling phase is bounded by 2n², because e(i) is bounded by Δ and d(i) is bounded by 2n. During the push/relabel(i) step, one of the following two cases must apply:

Case 1. The algorithm is unable to find an admissible arc along which it can push flow. In this case the distance label of node i increases by ε ≥ 1 units. This relabeling operation increases F by at most ε units, because e(i) ≤ Δ. Since for each node i the total increase in d(i) throughout the running of the algorithm is bounded by 2n (by Lemma 4.4), the total increase in F due to the relabeling of nodes is bounded by 2n² in the Δ-scaling phase (actually, the increase in F due to node relabelings is at most 2n² over all scaling phases).

Case 2. The algorithm is able to identify an arc on which it can push flow, and so it performs either a saturating or a nonsaturating push. In either case, F decreases. A nonsaturating push on arc (i, j) sends at least Δ/2 units of flow from node i to node j, and since d(j) = d(i) - 1, after this operation F decreases by at least 1/2 unit. Since the initial value of F at the beginning of a Δ-scaling phase is at most 2n² and the increases in F during the phase sum to at most 2n² (from Case 1), the number of nonsaturating pushes per phase is bounded by 8n².

This lemma implies a bound of O(nm + n² log U) for the excess-scaling algorithm, since we have already seen that all other operations (such as saturating pushes, relabel operations and finding admissible arcs) require O(nm) time. Up to this point, we have ignored the method needed to identify a node with the minimum distance label among nodes with excess more than Δ/2. Making this identification is easy if we use a scheme similar to the one used in the preflow-push method in Section 4.4 to find a node with the highest distance label. We maintain the lists LIST(r) = {i ∈ N : e(i) > Δ/2 and d(i) = r}, and a variable level which is a lower bound on the smallest index r for which LIST(r) is nonempty. We identify the lowest indexed nonempty list by starting at LIST(level) and sequentially scanning the higher indexed lists. We leave it as an exercise to show that the overall effort needed to scan the lists is bounded by the number of pushes performed by the algorithm plus O(n log U), and hence is not a bottleneck operation. With this observation, we can summarize our discussion by the following result.

Theorem 4.6. The preflow-push algorithm with excess-scaling runs in O(nm + n² log U) time.

Networks with Lower Bounds on Flows

To conclude this section, we show how to solve maximum flow problems with nonnegative lower bounds on flows. Let l_ij ≥ 0 denote the lower bound for the flow on an arc (i, j) ∈ A. Although the maximum flow problem with zero lower bounds always has a feasible solution, the problem with nonnegative lower bounds could be infeasible. We can, however, determine the feasibility of this problem by solving a maximum flow problem with zero lower bounds as follows. We set x_ij = l_ij for each arc (i, j) ∈ A. This choice gives us a pseudoflow, with e(i) representing the excess or deficit of any node i ∈ N. (We refer the reader to Section 5.4 for the definition of a pseudoflow with both excesses and deficits.) We introduce a super source, node s*, and a super sink, node t*. For each node i with e(i) > 0, we add an arc (s*, i) with capacity e(i), and for each node i with e(i) < 0, we add an arc (i, t*) with capacity -e(i). We then solve a maximum flow problem from s* to t*. Let x* denote the maximum flow and v* denote the maximum flow value in the transformed network. If v* = Σ_{i : e(i) > 0} e(i), then the original problem is feasible, and choosing the flow on each arc (i, j) as x*_ij + l_ij yields a feasible flow; otherwise, the problem is infeasible.

Once we have found a feasible flow, we apply any of the maximum flow algorithms with only one change: initially define the residual capacity of an arc (i, j) as r_ij = (u_ij - x_ij) + (x_ji - l_ji). The first and second terms in this expression denote, respectively, the residual capacity for increasing the flow on arc (i, j) and for decreasing the flow on arc (j, i). It is possible to establish the optimality of the solution generated by the algorithm by generalizing the max-flow min-cut theorem to accommodate situations with lower bounds. These observations show that it is possible to solve the maximum flow problem with nonnegative lower bounds by two applications of the maximum flow algorithms we have already discussed.
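The feasibility transformation just described is mechanical enough to state as code. The following sketch, with hypothetical container conventions of our own (arcs given as (i, j, lower, upper) tuples), builds the zero-lower-bound network on which a maximum flow routine then decides feasibility.

def feasibility_network(nodes, arcs):
    # Build the zero-lower-bound network used to test feasibility when arcs
    # carry lower bounds (a sketch).  Returns the transformed arc list plus
    # the flow value a max flow from s* to t* must attain for feasibility.
    e = {i: 0 for i in nodes}              # excess/deficit after x_ij = l_ij
    new_arcs = []
    for i, j, low, up in arcs:
        e[i] -= low                        # sending l_ij units out of i ...
        e[j] += low                        # ... and into j
        new_arcs.append((i, j, up - low))  # remaining free capacity
    s_star, t_star = "s*", "t*"
    required = 0
    for i in nodes:
        if e[i] > 0:
            new_arcs.append((s_star, i, e[i]))
            required += e[i]
        elif e[i] < 0:
            new_arcs.append((i, t_star, -e[i]))
    # the original problem is feasible iff the max flow from s* to t*
    # saturates every arc out of s*, i.e. attains the value 'required'
    return new_arcs, s_star, t_star, required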

5. MINIMUM COST FLOWS

In this section, we consider algorithmic approaches for the minimum cost flow problem. We consider the following node-arc formulation of the problem.

Minimize Σ_{(i,j) ∈ A} c_ij x_ij  (5.1a)

subject to

Σ_{j : (i,j) ∈ A} x_ij - Σ_{j : (j,i) ∈ A} x_ji = b(i), for all i ∈ N,  (5.1b)

0 ≤ x_ij ≤ u_ij, for each (i, j) ∈ A.  (5.1c)

We assume that the lower bounds l_ij on the arc flows are all zero and that the arc costs are nonnegative. Let C = max {c_ij : (i, j) ∈ A} and U = max [max {u_ij : (i, j) ∈ A}, max {|b(i)| : i ∈ N}]. The transformations T1 and T3 in Section 2.4 imply that these assumptions impose no loss of generality. We remind the reader of our blanket assumption that all data (cost, supply/demand and capacity) are integral. We also assume that the minimum cost flow problem satisfies the following two conditions.

A5.1. Feasibility Assumption. We assume that Σ_{i ∈ N} b(i) = 0 and that the minimum cost flow problem has a feasible solution. We can ascertain the feasibility of the minimum cost flow problem by solving a maximum flow problem as follows. Introduce a super source node s* and a super sink node t*. For each node i with b(i) > 0, add an arc (s*, i) with capacity b(i), and for each node i with b(i) < 0, add an arc (i, t*) with capacity -b(i). Now solve a maximum flow problem from s* to t*. If the maximum flow value equals Σ_{i : b(i) > 0} b(i), then the minimum cost flow problem is feasible; otherwise, it is infeasible.

A5.2. Connectedness Assumption. We assume that the network G contains an uncapacitated directed path (i.e., each arc in the path has infinite capacity) between every pair of nodes. We impose this condition, if necessary, by adding artificial arcs (1, j) and (j, 1) for each j ∈ N and assigning a large cost and a very large capacity to each of these arcs.

No such artificial arc would appear in a minimum cost solution unless the problem contains no feasible solution without artificial arcs.

Our algorithms rely on the concept of residual networks. The residual network G(x) corresponding to a flow x is defined as follows: we replace each arc (i, j) ∈ A by two arcs, (i, j) and (j, i). The arc (i, j) has cost c_ij and residual capacity r_ij = u_ij - x_ij, and the arc (j, i) has cost -c_ij and residual capacity r_ji = x_ij. The residual network consists only of arcs with positive residual capacity.

The concept of residual networks poses some notational difficulties. For example, if the original network contains both the arcs (i, j) and (j, i), then the residual network may contain two arcs from node i to node j and/or two arcs from node j to node i with possibly different costs. Our notation for arcs assumes that at most one arc joins one node to any other node. By using more complex notation, we could easily treat this more general case. However, rather than changing our notation, we will assume that parallel arcs never arise (or, by inserting extra nodes on parallel arcs, we can produce a network without any parallel arcs).

Observe that any directed cycle in the residual network G(x) is an augmenting cycle with respect to the flow x, and vice-versa (see Section 2.1 for the definition of augmenting cycle). This equivalence implies the following alternate statement of Theorem 2.4.

Theorem 5.1. A feasible flow x is an optimum flow if and only if the residual network G(x) contains no negative cost directed cycle.

5.1 Duality and Optimality Conditions

As we have seen in Section 1.2, due to its special structure the minimum cost flow problem has a number of important theoretical properties. The linear programming dual of this problem inherits many of these properties. Moreover, the minimum cost flow problem and its dual have, from a linear programming point of view, rather simple complementary slackness conditions. In this section, we formally state the linear programming dual problem and derive the complementary slackness conditions.
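Before turning to the dual, a small illustration of the residual network construction defined above may help. The following sketch builds G(x) from an arc list; the tuple layout is our own convention, and we assume, as in the text, that at most one arc joins any ordered pair of nodes.

def residual_network(arcs, x):
    # Residual network G(x) for a min cost flow: arcs is a list of
    # (i, j, cost, capacity) tuples and x maps (i, j) to the current flow.
    residual = []
    for i, j, c, u in arcs:
        flow = x.get((i, j), 0)
        if u - flow > 0:
            residual.append((i, j, c, u - flow))    # room to increase x_ij
        if flow > 0:
            residual.append((j, i, -c, flow))       # room to cancel x_ij
    return residual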

We consider the minimum cost flow problem (5.1), assuming that u_ij > 0 for each arc (i, j) ∈ A; it is possible to show that this assumption imposes no loss of generality. We associate a dual variable π(i) with the mass balance constraint of node i. Since one of the constraints in (5.1b) is redundant, we can set one of these dual variables to an arbitrary value; we therefore assume that π(1) = 0. We also associate a dual variable δ_ij with the upper bound constraint of arc (i, j). The dual problem to (5.1) is:

Maximize Σ_{i ∈ N} b(i) π(i) - Σ_{(i,j) ∈ A} u_ij δ_ij  (5.2a)

subject to

π(i) - π(j) - δ_ij ≤ c_ij, for all (i, j) ∈ A,  (5.2b)

δ_ij ≥ 0 for all (i, j) ∈ A, and the π(i) unrestricted.  (5.2c)

The complementary slackness conditions for this primal-dual pair are:

x_ij > 0 ⟹ π(i) - π(j) - δ_ij = c_ij,  (5.3)

δ_ij > 0 ⟹ x_ij = u_ij.  (5.4)

These conditions are equivalent to the following optimality conditions:

x_ij = 0 ⟹ π(i) - π(j) ≤ c_ij,  (5.5)

0 < x_ij < u_ij ⟹ π(i) - π(j) = c_ij,  (5.6)

x_ij = u_ij ⟹ π(i) - π(j) ≥ c_ij.  (5.7)

To see this equivalence, suppose that 0 < x_ij < u_ij for some arc (i, j). The condition (5.3) implies that

π(i) - π(j) - δ_ij = c_ij.  (5.8)

Since x_ij < u_ij, (5.4) implies that δ_ij = 0; substituting this result in (5.8) yields (5.6). Similarly, whenever x_ij = 0 we again have x_ij < u_ij, so δ_ij = 0, and (5.2b) then gives π(i) - π(j) ≤ c_ij, which is (5.5). Finally, suppose that x_ij = u_ij > 0 for some arc (i, j). Then (5.3) implies that π(i) - π(j) - δ_ij = c_ij.

Substituting δ_ij ≥ 0 in this equation gives (5.7).

The conditions (5.5) - (5.7) motivate us to define the reduced cost of an arc (i, j) as c̄_ij = c_ij - π(i) + π(j). In these terms, a pair x, π of flows and node potentials is optimal if it satisfies the following conditions:

C5.1 x is feasible.

C5.2 If c̄_ij > 0, then x_ij = 0.

C5.3 If 0 < x_ij < u_ij, then c̄_ij = 0.

C5.4 If c̄_ij < 0, then x_ij = u_ij.

These conditions, when stated in terms of the residual network, simplify to the following, which we retain for the sake of completeness:

C5.5 (Primal feasibility) x is feasible.

C5.6 (Dual feasibility) c̄_ij ≥ 0 for each arc (i, j) in the residual network G(x).

Observe, however, that the condition C5.6 subsumes C5.2, C5.3 and C5.4: if c̄_ij > 0 and x_ij > 0 for some arc (i, j), then the residual network would contain the arc (j, i) with c̄_ji = -c̄_ij < 0, contradicting C5.6; a similar contradiction arises if c̄_ij < 0 and x_ij < u_ij.

It is easy to establish the equivalence between these optimality conditions and the condition stated in Theorem 5.1. Consider any pair x, π of flows and node potentials satisfying C5.5 and C5.6, and let W be any directed cycle in the residual network. The condition C5.6 implies that Σ_{(i,j) ∈ W} c̄_ij ≥ 0. Further,

Σ_{(i,j) ∈ W} c̄_ij = Σ_{(i,j) ∈ W} c_ij + Σ_{(i,j) ∈ W} (-π(i) + π(j)) = Σ_{(i,j) ∈ W} c_ij,

since the potential terms cancel around a cycle. Hence, the residual network contains no negative cost cycle. To see the converse, suppose that x is feasible and G(x) contains no negative cycle. Then in the residual network the shortest distances from node 1, with respect to the arc lengths c_ij, are well defined. Let d(i) denote the shortest distance from node 1 to node i. The shortest path optimality condition C3.2 implies that d(j) ≤ d(i) + c_ij for each arc (i, j) in G(x). Let π = -d. Then c̄_ij = c_ij + d(i) - d(j) ≥ 0 for all (i, j) in G(x). Hence, the pair x, π satisfies C5.5 and C5.6.
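The dual feasibility condition C5.6 is easy to verify mechanically. The following sketch, under our own container conventions (arcs as (i, j, cost, capacity) tuples, pi a dict of node potentials), checks every residual arc of G(x) for a negative reduced cost.

def satisfies_C56(arcs, x, pi):
    # Check dual feasibility C5.6: every arc of the residual network G(x)
    # must have a nonnegative reduced cost (a sketch).
    for i, j, c, u in arcs:
        flow = x.get((i, j), 0)
        if u - flow > 0 and c - pi[i] + pi[j] < 0:
            return False        # forward residual arc (i, j) violates C5.6
        if flow > 0 and (-c) - pi[j] + pi[i] < 0:
            return False        # backward residual arc (j, i) violates C5.6
    return True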

5.2 Relationship to Shortest Path and Maximum Flow Problems

The minimum cost flow problem generalizes both the shortest path and maximum flow problems. The shortest path problem from node s to all other nodes can be formulated as a minimum cost flow problem by setting b(s) = n - 1, b(i) = -1 for all i ≠ s, and u_ij = n - 1 for each (i, j) ∈ A (in fact, setting u_ij equal to any integer greater than n - 1 will suffice if we wish to maintain finite capacities). Similarly, the maximum flow problem from node s to node t can be transformed to the minimum cost flow problem by introducing an additional arc (t, s) with c_ts = -1 and u_ts = m · max {u_ij : (i, j) ∈ A}, and setting c_ij = 0 for each arc (i, j) ∈ A. Thus, algorithms for the minimum cost flow problem solve both the shortest path and maximum flow problems as special cases.

Conversely, algorithms for the shortest path and maximum flow problems are of great use in solving the minimum cost flow problem. Indeed, many of the algorithms for the minimum cost flow problem use shortest path and/or maximum flow algorithms as subroutines, either explicitly or implicitly. Consequently, improved algorithms for these two problems have led to improved algorithms for the minimum cost flow problem. This relationship will be more transparent when we discuss algorithms for the minimum cost flow problem. We have already shown in Section 5.1 how to obtain an optimum dual solution from an optimum primal solution by solving a single shortest path problem. We now show how to obtain an optimal primal solution from an optimal dual solution by solving a single maximum flow problem.

Suppose that π is an optimal dual solution and c̄ is the vector of reduced costs. We define the cost-residual network G* = (N, A*) as follows. The nodes in G* have the same supply/demand as the nodes in G. Any arc (i, j) ∈ A* has an upper bound u*_ij as well as a lower bound l*_ij, defined as follows:

(i) For each (i, j) ∈ A with c̄_ij > 0, A* contains an arc (i, j) with u*_ij = l*_ij = 0.

(ii) For each (i, j) ∈ A with c̄_ij < 0, A* contains an arc (i, j) with u*_ij = l*_ij = u_ij.

(iii) For each (i, j) ∈ A with c̄_ij = 0, A* contains an arc (i, j) with u*_ij = u_ij and l*_ij = 0.

The lower and upper bounds on the arcs in the cost-residual network G* are defined so that any flow in G* satisfies the optimality conditions C5.2-C5.4. If c̄_ij > 0 for some (i, j) ∈ A, then condition C5.2 dictates that x_ij = 0 in the optimum flow. Similarly, if c̄_ij < 0, then C5.4 implies that the flow on arc (i, j) must be at the arc's upper bound in the optimum flow. If c̄_ij = 0, then any flow value will satisfy the condition C5.3.

Now the problem is reduced to finding a feasible flow in the cost-residual network that satisfies the lower and upper bound restrictions of the arcs and, at the same time, meets the supply/demand constraints of the nodes. We first eliminate the lower bounds of the arcs as described in Section 2.4 and then transform this problem to a maximum flow problem as described in assumption A5.1. Let x* denote the maximum flow in the transformed network. Then x* + l* is an optimum solution of the minimum cost flow problem in G.

5.3 Negative Cycle Algorithm

Operations researchers, computer scientists, electrical engineers and many others have extensively studied the minimum cost flow problem and have proposed a number of different algorithms to solve it. Notable examples are the negative cycle, successive shortest path, primal-dual, out-of-kilter, primal simplex and scaling-based algorithms. In this and the following sections, we discuss most of these important algorithms for the minimum cost flow problem and point out the relationships between them.

We first consider the negative cycle algorithm. The negative cycle algorithm maintains a primal feasible solution x and strives to attain dual feasibility. It does so by identifying negative cost directed cycles in the residual network G(x) and augmenting flows in these cycles. The algorithm terminates when the residual network contains no negative cost cycles; when it does, Theorem 5.1 implies that it has found a minimum cost flow.

algorithm NEGATIVE CYCLE;
begin
  establish a feasible flow x in the network;
  while G(x) contains a negative cycle do
  begin
    use some algorithm to identify a negative cycle W;
    δ := min {r_ij : (i, j) ∈ W};
    augment δ units of flow along the cycle W and update G(x);
  end;
end;

A feasible flow in the network can be found by solving a maximum flow problem, as explained just after assumption A5.1. One algorithm for identifying a negative cost cycle is the label correcting algorithm for the shortest path problem, described in Section 3.4, which requires O(nm) time to identify a negative cycle. Every iteration reduces the flow cost by at least one unit. Since mCU is an upper bound on the initial flow cost and zero is a lower bound on the optimum flow cost, the algorithm terminates after at most O(mCU) iterations and requires O(nm²CU) time in total.
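The negative cycle subroutine itself can be sketched with a label correcting (Bellman-Ford) search, as mentioned above. The code below is a minimal illustrative version under our own conventions: initializing all distance labels to zero makes every negative cycle detectable, and walking the predecessor pointers back n steps is a standard way to land on such a cycle.

def find_negative_cycle(nodes, arcs):
    # Label-correcting detection of a negative cost directed cycle; arcs is
    # a list of (i, j, cost) triples for the residual network (a sketch).
    dist = {v: 0 for v in nodes}     # zero labels act as a virtual source
    pred = {v: None for v in nodes}
    last = None
    for _ in range(len(nodes)):      # n full passes over the arc list
        last = None
        for i, j, c in arcs:
            if dist[i] + c < dist[j]:
                dist[j] = dist[i] + c
                pred[j] = i
                last = j
        if last is None:
            return None              # a pass without relaxation: no negative cycle
    v = last
    for _ in range(len(nodes)):      # walk back n steps to land on the cycle
        v = pred[v]
    cycle, w = [v], pred[v]
    while w != v:                    # collect the cycle's nodes
        cycle.append(w)
        w = pred[w]
    return cycle[::-1]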

This algorithm can be improved in the following three ways, which we briefly summarize:

(i) Identifying a negative cost cycle in effort much less than O(nm) time. The simplex algorithm (to be discussed later) nearly achieves this objective: it maintains a tree solution and node potentials that enable it to identify a negative cost cycle in O(m) effort. However, due to degeneracy, the simplex algorithm cannot necessarily send a positive amount of flow along this cycle.

(ii) Identifying a negative cost cycle with maximum improvement in the objective function value. The improvement in the objective function due to the augmentation along a cycle W is (min {r_ij : (i, j) ∈ W}) · |Σ_{(i,j) ∈ W} c_ij|. Let x be some flow and x* an optimum flow. The augmenting cycle theorem (Theorem 2.3) implies that x* equals x plus the flow on at most m augmenting cycles with respect to x, and that the improvements in cost due to flow augmentations on these cycles sum to cx - cx*. Consequently, at least one augmenting cycle with respect to x must decrease the objective function by at least (cx - cx*)/m. Hence, if the algorithm always augments flow along a cycle with maximum improvement, then Lemma 1.1 implies that the method would obtain an optimum flow within O(m log mCU) iterations. Finding a maximum improvement cycle is a difficult problem, but a modest variation of this approach yields a polynomial time algorithm for the minimum cost flow problem.

(iii) Identifying a negative cost cycle with minimum mean cost. We define the mean cost of a cycle as its cost divided by the number of arcs it contains. A minimum mean cycle is a cycle whose mean cost is as small as possible. It is possible to identify a minimum mean cycle in O(nm) or O(√n m log nC) time. Recently, researchers have shown that if the negative cycle algorithm always augments the flow along a minimum mean cycle, then from one iteration to the next the minimum mean cycle value is nondecreasing; moreover, its absolute value decreases by a factor of 1 - (1/n) within m iterations. Since the mean cost of the minimum mean (negative) cycle is bounded from below by -C and bounded from above by -1/n, Lemma 1.1 implies that this algorithm will terminate in O(nm log nC) iterations.

5.4 Successive Shortest Path Algorithm

The negative cycle algorithm maintains primal feasibility of the solution at every step and attempts to achieve dual feasibility. In contrast, the successive shortest path algorithm maintains dual feasibility of the solution at every step and strives to attain primal feasibility. It maintains a solution x that satisfies the nonnegativity and capacity constraints, but violates the supply/demand constraints of the nodes. At each step, the algorithm selects a node i with extra supply and a node j with unfulfilled demand and sends flow from i to j along a shortest path in the residual network. The algorithm terminates when the current solution satisfies all of the supply/demand constraints.

A pseudoflow is a function x : A → R satisfying only the capacity and nonnegativity constraints; it may violate the supply/demand constraints of the nodes. For any pseudoflow x, we define the imbalance of node i as

e(i) = b(i) + Σ_{j : (j,i) ∈ A} x_ji - Σ_{j : (i,j) ∈ A} x_ij, for all i ∈ N.

If e(i) > 0 for some node i, then e(i) is called the excess of node i; if e(i) < 0, then -e(i) is called the deficit of node i. A node i with e(i) = 0 is called balanced.
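Computing the imbalances is a one-pass calculation; the following fragment (with container conventions of our own) makes the definition concrete.

def imbalances(b, arcs, x):
    # Imbalance e(i) = b(i) + inflow - outflow of a pseudoflow x; arcs is a
    # list of (i, j) pairs and x maps arcs to flows (a sketch).
    e = dict(b)                      # b maps node -> supply/demand
    for (i, j) in arcs:
        flow = x.get((i, j), 0)
        e[i] -= flow                 # outflow of node i
        e[j] += flow                 # inflow of node j
    return e                         # e(i) > 0: excess; e(i) < 0: deficit

The excess set S and the deficit set T referred to below are then simply {i : e(i) > 0} and {i : e(i) < 0}.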

Let S and T denote the sets of excess and deficit nodes, respectively. The residual network corresponding to a pseudoflow is defined in the same way that we defined the residual network for a flow.

The successive shortest path algorithm successively augments flow along shortest paths computed with respect to the reduced costs c̄_ij. Observe that for any directed path P from a node k to a node l,

Σ_{(i,j) ∈ P} c̄_ij = Σ_{(i,j) ∈ P} c_ij - π(k) + π(l).

Hence, the node potentials change all path lengths between a specific pair of nodes by a constant amount, and a shortest path with respect to c̄_ij is the same as a shortest path with respect to c_ij. The correctness of the successive shortest path algorithm rests on the following result.

Lemma 5.1. Suppose a pseudoflow x satisfies the dual feasibility condition C5.6 with respect to the node potentials π. Furthermore, suppose that x' is obtained from x by sending flow along a shortest path from a node k to a node l in G(x). Then x' also satisfies the dual feasibility conditions with respect to some node potentials.

Proof. Since x satisfies the dual feasibility conditions with respect to the node potentials π, we have c̄_ij ≥ 0 for all (i, j) in G(x). Let d(v) denote the shortest path distance from node k to any node v in G(x) with respect to the arc lengths c̄_ij. We claim that x also satisfies the dual feasibility conditions with respect to the potentials π' = π - d. The shortest path optimality conditions (i.e., C3.2) imply that

d(j) ≤ d(i) + c̄_ij, for all (i, j) in G(x).

Substituting c̄_ij = c_ij - π(i) + π(j) in these conditions and using π'(i) = π(i) - d(i) yields c̄'_ij = c_ij - π'(i) + π'(j) ≥ 0 for all (i, j) in G(x). Hence, x satisfies C5.6 with respect to the node potentials π'. Next note that c̄'_ij = 0 for every arc (i, j) on the shortest path P from node k to node l, since d(j) = d(i) + c̄_ij for every arc (i, j) ∈ P and c̄_ij = c_ij - π(i) + π(j).

We are now in a position to prove the lemma. Augmenting flow along any arc of P maintains the dual feasibility condition C5.6 for that arc. Augmenting flow on an arc (i, j) may also add its reversal (j, i) to the residual network; but since c̄'_ij = 0 for each arc (i, j) ∈ P, we have c̄'_ji = 0, and so the arc (j, i) also satisfies C5.6.

The node potentials play a very important role in this algorithm. Besides using them to prove the correctness of the algorithm, we use them to ensure that the arc lengths are nonnegative, thus enabling us to solve the shortest path subproblems efficiently.

The following formal statement summarizes the steps of the successive shortest path algorithm.

algorithm SUCCESSIVE SHORTEST PATH;
begin
  x := 0; π := 0;
  compute the imbalances e(i) and initialize the sets S and T;
  while S ≠ ∅ do
  begin
    select a node k ∈ S and a node l ∈ T;
    determine the shortest path distances d(j) from node k to all other nodes in G(x) with respect to the reduced costs c̄_ij;
    let P denote a shortest path from k to l;
    update π := π - d;
    δ := min [e(k), -e(l), min {r_ij : (i, j) ∈ P}];
    augment δ units of flow along the path P;
    update x, S and T;
  end;
end;

To initialize the algorithm, we set x = 0, which is a feasible pseudoflow and satisfies C5.6 with respect to the node potentials π = 0 since, by assumption, all arc lengths are nonnegative. Also, if S ≠ ∅, then T ≠ ∅, because the sum of the excesses always equals the sum of the deficits. Further, the connectedness assumption implies that the residual network G(x) contains a directed path from node k to node l. Each iteration of the algorithm solves a shortest path problem with nonnegative arc lengths and reduces the supply of some node by at least one unit. Consequently, if U is an upper bound on the largest supply of any node, the algorithm terminates in at most nU iterations. Since the arc lengths c̄_ij are nonnegative, the shortest path problem at each iteration can be solved using Dijkstra's algorithm. So the overall complexity of this algorithm is O(nU · S(n, m, C)), where S(n, m, C) is the time taken by Dijkstra's algorithm. Currently, the best strongly polynomial-time bound for implementing Dijkstra's algorithm is O(m + n log n), and the best (weakly) polynomial time bound is O(min {m log log C, m + n √(log C)}). The successive shortest path algorithm is pseudopolynomial, since it is polynomial in n, m and the largest supply U. The algorithm is, however, polynomial time for the assignment problem, a special case of the minimum cost flow problem for which U = 1.
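For concreteness, here is a compact Python sketch of the successive shortest path algorithm using Dijkstra's method with reduced costs. All names are our own; the code assumes integral data, nonnegative costs, no parallel arcs, and assumptions A5.1-A5.2, and it favors readability over the sophisticated heap implementations cited above.

import heapq

def successive_shortest_paths(n, arcs, b):
    # Successive shortest path method on nodes 0..n-1 (a sketch): arcs is a
    # list of (i, j, cost, capacity); b[i] is the supply/demand of node i.
    # Returns the cost of the minimum cost flow.
    cap, cost = {}, {}
    adj = [[] for _ in range(n)]
    for i, j, c, u in arcs:
        cap[(i, j)], cap[(j, i)] = u, 0
        cost[(i, j)], cost[(j, i)] = c, -c
        adj[i].append(j)
        adj[j].append(i)
    e, pi, total = list(b), [0] * n, 0
    while True:
        k = next((v for v in range(n) if e[v] > 0), None)
        if k is None:
            return total                 # no excess left: x is a flow
        INF = float("inf")               # Dijkstra from k with reduced costs
        d, pred, seen = [INF] * n, [None] * n, [False] * n
        d[k] = 0
        heap = [(0, k)]
        while heap:
            dv, v = heapq.heappop(heap)
            if seen[v]:
                continue
            seen[v] = True
            for w in adj[v]:
                if cap[(v, w)] > 0 and not seen[w]:
                    nd = dv + cost[(v, w)] - pi[v] + pi[w]   # c-bar >= 0
                    if nd < d[w]:
                        d[w], pred[w] = nd, v
                        heapq.heappush(heap, (nd, w))
        l = min((v for v in range(n) if e[v] < 0 and seen[v]),
                key=lambda v: d[v])      # a deficit node; reachable by A5.2
        pi = [pi[v] - d[v] if seen[v] else pi[v] for v in range(n)]
        path, v = [], l
        while v != k:                    # trace the shortest path k -> l
            path.append((pred[v], v))
            v = pred[v]
        delta = min(e[k], -e[l], min(cap[a] for a in path))
        for a in path:                   # augment delta units along the path
            cap[a] -= delta
            cap[(a[1], a[0])] += delta
            total += delta * cost[a]
        e[k] -= delta
        e[l] += delta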

In Section 5.7, we will develop a polynomial time algorithm for the minimum cost flow problem by using the successive shortest path algorithm in conjunction with scaling.

5.5 Primal-Dual and Out-of-Kilter Algorithms

The primal-dual algorithm is very similar to the successive shortest path algorithm, except that instead of sending flow on only one path during an iteration, it might send flow along many paths. To explain the primal-dual algorithm, we transform the minimum cost flow problem into a single-source and single-sink problem (possibly by adding nodes and arcs as in assumption A5.1). At every iteration, the primal-dual algorithm solves a shortest path problem from the source to update the node potentials (i.e., each π(j) becomes π(j) - d(j)) and then solves a maximum flow problem to send the maximum possible flow from the source to the sink using only arcs with zero reduced cost. The algorithm guarantees that the excess of some node strictly decreases at each iteration, and it also assures that the node potential of the sink strictly decreases. The latter observation follows from the fact that after we have solved the maximum flow problem, the residual network contains no path from the source to the sink consisting entirely of arcs with zero reduced costs; consequently, in the next iteration d(t) ≥ 1. These observations give a bound of min {nU, nC} on the number of iterations, since the magnitude of each node potential is bounded by nC. This bound is better than that of the successive shortest path algorithm, but, of course, the algorithm incurs the additional expense of solving a maximum flow problem at each iteration. Thus, the algorithm has an overall complexity of O(min {nU · S(n, m, C), nC · M(n, m, U)}), where S(n, m, C) and M(n, m, U) respectively denote the solution times of shortest path and maximum flow algorithms.

The successive shortest path and primal-dual algorithms maintain a solution that satisfies the dual feasibility conditions and the flow bound constraints, but that violates the mass balance constraints. These algorithms iteratively modify the flow and potentials so that the flow at each step comes closer to satisfying the mass balance constraints. However, we could just as well have violated other constraints at intermediate steps. The out-of-kilter algorithm satisfies only the mass balance constraints, and it may violate the dual feasibility conditions and the flow bound restrictions. The basic idea is to drive the flow on an arc (i, j) to u_ij if c̄_ij < 0, to drive the flow to zero if c̄_ij > 0, and to permit any flow between 0 and u_ij if c̄_ij = 0. The kilter number of an arc (i, j), represented by k_ij, is defined as the minimum increase or decrease in the flow necessary to satisfy the arc's flow bound constraint and dual feasibility condition.

For example, for an arc (i, j) with c̄_ij > 0, k_ij = |x_ij|, and for an arc (i, j) with c̄_ij < 0, k_ij = |u_ij - x_ij|. An arc with k_ij = 0 is said to be in-kilter. At each iteration, the out-of-kilter algorithm reduces the kilter number of at least one arc; it terminates when all arcs are in-kilter. Suppose the kilter number of an arc (i, j) would decrease by increasing the flow on the arc. Then the algorithm would obtain a shortest path P from node j to node i in the residual network and augment at least one unit of flow in the cycle P ∪ {(i, j)}. The proof of the correctness of this algorithm is similar to, but more detailed than, that of the successive shortest path algorithm.

5.6 Network Simplex Algorithm

The network simplex algorithm for the minimum cost flow problem is a specialization of the bounded variable primal simplex algorithm for linear programming. The special structure of the minimum cost flow problem offers several benefits, particularly the streamlining of the simplex computations and the elimination of the need to explicitly maintain the simplex tableau. The tree structure of the basis (see Section 2.3) permits the algorithm to achieve these efficiencies. The advances made in the last two decades for maintaining and updating the tree structure efficiently have substantially improved the speed of the algorithm. Through extensive empirical testing, researchers have also improved the performance of the simplex algorithm by developing various heuristic rules for identifying entering variables. Though no version of the primal network simplex algorithm is known to run in polynomial time, its best implementations are empirically comparable to or better than other minimum cost flow algorithms.

In this section, we describe the network simplex algorithm in detail. We first define the concept of a basis structure and describe a data structure to store and to manipulate the basis, which is a spanning tree. We then show how to compute arc flows and node potentials for any basis structure. We next discuss how to perform various simplex operations, such as the selection of entering arcs, leaving arcs and pivots, using the tree data structure. Finally, we show how to guarantee the finiteness of the network simplex algorithm.

j) (5. if U) as a basis structure.1c). Then..9) Cij . The condition not profitable for any nonbasic arc in L. through the arc (i.11) These optimality conditions have a nice economic interpretation. A + feasible basis structure U) is called an optimum basis structure if it is Cj. j). € L. / (5. j) A basis xj: structure (B. bounds.e.jc(i) + 7t(j) for a nonbeisic arc (i. (i. B denotes the set of basic arcs. L.1b) and (B. B. = each (i. imply that -7t(j) denotes the length of the cj. and setting (5. and then returning the flow (5. u^: for called feasible setting Xj. The following algorithmic description specifies the essential steps of the procedure. = for each e L. j) e B. j) € U. U p>artition and L and the arc set A. The condition (5. . arcs of a spanrung U by respectively denote the sets of nonbasic arcs at their lower and upper U) is j) g U. The network simplex algorithm maintains iteration a feasible basis structure at each until it and successively improves the basis structure becomes an optimum basic structure. p in L denotes the change in the cost of flow achieved by sending one unit of flow through the tree path from node 1 to node j i. possible to obtain a set of node potentials n so that the reduced costs defined by = Cj. for each (i. L. We refer to the triple (B. L. (B.11) has a similar interpretation.109 The network simplex algorithm maintains a basic feasible solution at is each stage U). (i. the problem has a feasible solution satisfying (5. for each for each (i. Cjj .9) 1 tree path in B from node to node j. A basic solution of the minimum The cost flow set problem defined by a triple i. .10) implies that this along the tree path from node circulation of flow is to node 1. - nii) n(j) satisfy the following optimality conditions: Cjj = S < . = Cj. L. L and tree.10) . then equations (5. (5. little later We shall see a that if nil) = 0.

algorithm NETWORK SIMPLEX;
begin
  determine an initial basic feasible flow x and the corresponding basis structure (B, L, U);
  compute the node potentials for this basis structure;
  while some arc (k, l) violates the optimality conditions do
  begin
    select an entering arc (k, l) violating the optimality conditions;
    add the arc (k, l) to the spanning tree corresponding to the basis, forming a cycle, and augment the maximum possible flow in this cycle;
    determine the leaving arc (p, q);
    perform a basis exchange and update the node potentials;
  end;
end;

In the following discussion, we describe the various steps performed by the network simplex algorithm in greater detail.

Obtaining an Initial Basis Structure

Our connectedness assumption A5.2 provides one way of obtaining an initial basic feasible solution. We have assumed that for every node j ∈ N - {1}, the network contains arcs (1, j) and (j, 1) with sufficiently large costs and capacities. The initial basis B includes the arc (1, j) with flow -b(j) if b(j) < 0 and the arc (j, 1) with flow b(j) if b(j) ≥ 0. The set L consists of the remaining arcs, and the set U is empty. The node potentials for this basis are easily computed using (5.9), as we will see later.

Maintaining the Tree Structure

The specialized network simplex algorithm is possible because of the spanning tree property of the basis. The algorithm requires the tree to be represented so that the simplex algorithm can perform its operations efficiently and update the representation quickly when the basis changes. We next describe one such tree representation.

We consider the tree as "hanging" from a specially designated node, called the root; we assume that node 1 is the root node. See Figure 5.1 for an example of the tree. We associate three indices with each node i in the tree: a predecessor index, pred(i); a depth index, depth(i); and a thread index, thread(i). Each node has a unique path connecting it to the root.

The predecessor index stores the first node in that path (other than node i itself), and the depth index stores the number of arcs in the path. For the root node these indices are zero. Figure 5.1 shows an example of these indices. Note that by iteratively using the predecessor indices, we can enumerate the path from any node to the root node. We say that pred(i) is the predecessor of node i and that i is a successor of node pred(i). The descendants of a node i consist of the node i itself, its successors, the successors of its successors, and so on. For example, in Figure 5.1 the node set {5, 6, 7, 8, 9} contains the descendants of node 5. A node with no successors is called a leaf node. In Figure 5.1, nodes 4, 7, 8, and 9 are leaf nodes.

The thread indices define a traversal of the tree, a sequence of nodes that walks or threads its way through the nodes of the tree, starting at the root, visiting nodes in a "top to bottom" and "left to right" order, and finally returning to the root. The thread indices can be formed by performing a depth first search of the tree, as described in Section 1.5, and setting the thread of a node to be the node encountered after the node itself in this depth first search. For our example, this sequence would read 1-2-5-6-8-9-7-3-4-1 (see the dotted lines in Figure 5.1). For each node i, thread(i) specifies the next node visited in the traversal after node i. This traversal satisfies the following two properties: (i) the predecessor of each node appears in the sequence before the node itself; and (ii) the descendants of any node are consecutive elements in the traversal. The thread indices provide a particularly convenient means for visiting (or finding) all descendants of a node i: we simply follow the thread from node i, recording the nodes visited, until we encounter a node whose depth is no greater than the depth of node i. For example, starting at node 5, we visit nodes 6, 8, 9, and 7 in order, which are the descendants of node 5, and then visit node 3. Since node 3's depth equals that of node 5, we know that we have left the "descendant tree" lying below node 5. As we will see, finding the descendant tree of a node efficiently adds significantly to the efficiency of the simplex method.
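The descendant-finding rule is short enough to state directly. In this sketch (with thread and depth as plain dictionaries, a convention of ours), the loop stops as soon as the depth climbs back to the starting level.

def descendants(i, thread, depth):
    # Enumerate the descendants of node i by following the thread until the
    # depth returns to depth(i) or less (a sketch).
    nodes = [i]
    j = thread[i]
    while depth[j] > depth[i]:
        nodes.append(j)
        j = thread[j]
    return nodes

With the indices of Figure 5.1, descendants(5, thread, depth) would return [5, 6, 8, 9, 7].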

The simplex method has two basic steps: (i) determining the node potentials of a given basis structure; and (ii) computing the arc flows for a given basis structure. We now describe how to perform these steps efficiently using the tree indices.

Computing Node Potentials and Flows for a Given Basis Structure

We first consider the problem of computing the node potentials π for a given basis structure (B, L, U). We assume that π(1) = 0. Note that the value of one node potential can be set arbitrarily, since one constraint in (5.1b) is redundant. We compute the remaining node potentials using the condition that c̄_ij = 0 for each arc (i, j) in B. These conditions can alternatively be stated as

π(j) = π(i) - c_ij, for every arc (i, j) ∈ B.  (5.12)

The basic idea is to start at node 1 and fan out along the tree arcs, using the thread indices to compute the other node potentials. The traversal assures that whenever the fanning-out procedure visits a node j, it has already evaluated the potential of j's predecessor, say node i; hence, the procedure can compute π(j) using (5.12). The thread indices allow us to compute all of the node potentials in O(n) time using the following method.

procedure COMPUTE POTENTIALS;
begin
  π(1) := 0;
  j := thread(1);
  while j ≠ 1 do
  begin
    i := pred(j);
    if (i, j) ∈ A then π(j) := π(i) - c_ij;
    if (j, i) ∈ A then π(j) := π(i) + c_ji;
    j := thread(j);
  end;
end;
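A direct Python rendition of this procedure may be helpful; tree_arcs, pred and thread are our own names for the basic arc set and the tree indices.

def compute_potentials(cost, pred, thread, tree_arcs):
    # Node potentials for a basis tree rooted at node 1, fanning out along
    # the thread (a sketch): tree_arcs is the set of basic arcs as ordered
    # pairs, and cost maps arcs to c_ij.
    pi = {1: 0}
    j = thread[1]
    while j != 1:
        i = pred[j]
        if (i, j) in tree_arcs:
            pi[j] = pi[i] - cost[(i, j)]   # enforces reduced cost 0 on (i, j)
        else:
            pi[j] = pi[i] + cost[(j, i)]   # basic arc oriented as (j, i)
        j = thread[j]
    return pi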

if (i.114 procedure begin e(i) : COMPUTE FLOWS. units at Xj: node i and makes the same amount available initial This effect of setting nodes.6 in Section is the spanning tree T. descendants. = u^: explains the adjustments in the supply/demand of The manner for up>dating e(j) implies that each e(j) represents the j. This assignment creates an at j. which B represents the columns Since B is in the node-arc incidence matrix N corresponding to 2. each node appears after prior to its its Hence. (i. Now additional consider the steps of the method. Note that in the thread traversal. while T*{1) do begin select a leaf i : node j in the subtree T. and add u^: to e(j). Xj. j delete node and the arc incident to it from T.3). it a lower triangular matrix (see Theorem is possible to solve these equations by forward substitution. The arcs in the set U must carry flow node equal to their capacity. else Xjj add e(j) to e(i). = b(i) for aU i € N. which precisely . The procedure Compute Flows in essentially solves the system of equations Bx = b. j) € : T then = e(j). end. we set x^. One way thread indices. j) : for each e U do subtract Uj. sum of the adjusted supply /demand of nodes in the subtree hanging from node is Since this subtree connected to the rest of the tree only by the arc (i. this arc must carry -e(j) (or e(j)) units of flow to satisfy the adjusted supply /demand of nodes in the subtree. the reverse thread traversal examines each node examining descendants. Thus. i)). of identifying leaf nodes in T is to select nodes in the reverse order of the all A simple procedure completes this task in 0(n) time: push the nodes into a stack in order of their appearance on the thread. : = -e(j). = U|j for these arcs. and then take them out from the top one at a time. = pred(j). let T be the basis tree. end. 2. demand of Uj. j) (or (j. from e(i) set X|j = u^j.

Similarly, the procedure COMPUTE POTENTIALS solves the system of equations πB = c by back substitution.

Entering Arc

Two types of arcs are eligible to enter the basis: any nonbasic arc at its lower bound with a negative reduced cost, or any nonbasic arc at its upper bound with a positive reduced cost; these arcs violate condition (5.10) or (5.11). The method used for selecting an entering arc among these eligible arcs has a major effect on the performance of the simplex algorithm. An implementation that selects an arc that violates the optimality condition the most, i.e., has the largest value of |c̄_ij| among such arcs, might require the fewest number of iterations in practice, but it must examine each arc at each iteration, which is very time-consuming. On the other hand, examining the arc list cyclically and selecting the first arc that violates the optimality condition would quickly find the entering arc, but might require a relatively large number of iterations due to the poor arc choice. One of the most successful implementations uses a candidate list approach that strikes an effective compromise between these two strategies. This approach also offers sufficient flexibility for fine tuning to special problem classes.

The algorithm maintains a candidate list of arcs violating the optimality conditions, selecting arcs in a two-phase procedure consisting of major iterations and minor iterations. In a major iteration, we construct the candidate list. We examine the arcs emanating from the nodes, one node at a time, adding to the candidate list the arcs emanating from node i (if any) that violate the optimality condition. We repeat this selection process for nodes i+1, i+2, ..., until either we have examined all of the nodes or the list has reached its maximum allowable size. The next major iteration begins with the node where the previous major iteration ended; in other words, the algorithm examines the nodes cyclically as it adds arcs emanating from them to the candidate list.

Once the algorithm has formed the candidate list in a major iteration, it performs minor iterations, scanning all of the candidate arcs and choosing a nonbasic arc from this list that violates the optimality condition the most to enter the basis. As we scan the arcs, we update the candidate list by removing those arcs that no longer violate the optimality conditions. Once the list becomes empty, or we have reached a specified limit on the number of minor iterations to be performed per major iteration, we rebuild the list with another major iteration.
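A sketch of one major iteration of the candidate list rule follows; out_arcs and violates are hypothetical callbacks standing in for the adjacency scan and the optimality test (5.10)-(5.11).

def build_candidate_list(nodes, out_arcs, violates, start, max_size):
    # One major iteration: scan nodes cyclically from 'start', collecting
    # violating arcs until the list fills (a sketch).
    candidates = []
    n = len(nodes)
    for step in range(n):
        i = nodes[(start + step) % n]          # examine nodes cyclically
        candidates.extend(a for a in out_arcs(i) if violates(a))
        if len(candidates) >= max_size:        # list is full: stop scanning
            return candidates, (start + step + 1) % n
    return candidates, start                   # examined every node

The minor iterations then repeatedly pick the arc in candidates with the largest violation, pruning arcs that have ceased to violate the optimality conditions.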

Leaving Arc

Suppose that we select the arc (k, l) as the entering arc. The addition of this arc to the basis B forms exactly one (undirected) cycle W, which is sometimes referred to as the pivot cycle. We define the orientation of W as the same as that of (k, l) if (k, l) ∈ L, and opposite to the orientation of (k, l) if (k, l) ∈ U. Let W⁺ and W⁻ respectively denote the sets of arcs in W along and opposite to the cycle's orientation. Sending additional flow around the pivot cycle W in the direction of its orientation strictly decreases the cost of the current solution. The maximum flow change δ_ij on an arc (i, j) ∈ W that satisfies the flow bound constraints is

δ_ij = u_ij - x_ij if (i, j) ∈ W⁺, and δ_ij = x_ij if (i, j) ∈ W⁻.

We change the flow as much as possible, until one of the arcs in the cycle W reaches its lower or upper bound: we send δ = min {δ_ij : (i, j) ∈ W} units of flow around W and select an arc (p, q) with δ_pq = δ as the leaving arc.

The crucial operation in this step is to identify the cycle W. If P(i) denotes the unique path in the basis from any node i to the root node, then this cycle consists of the arcs {(k, l)} ∪ P(k) ∪ P(l) - (P(k) ∩ P(l)). In other words, W consists of the arc (k, l) and the disjoint portions of P(k) and P(l). Using the predecessor indices alone permits us to identify the cycle W as follows. Start at node k and, using the predecessor indices, trace the path from this node to the root, labeling all of the nodes in this path. Then repeat the same operation for node l until we encounter a node that is already labeled, say node w. Node w, which we might refer to as the apex, is the first common ancestor of nodes k and l. The cycle W contains the portions of the paths P(k) and P(l) up to node w, along with the arc (k, l). This method is efficient, but it can be improved: it has the drawback of backtracking along some arcs that are not in W, namely, those in the portion of the path P(k) lying between the apex w and the root. The simultaneous use of the depth and predecessor indices, as indicated in the following procedure, eliminates this extra work.

procedure IDENTIFY CYCLE;
begin
  i := k and j := l;
  while i ≠ j do
  begin
    if depth(i) > depth(j) then i := pred(i)
    else if depth(j) > depth(i) then j := pred(j)
    else i := pred(i) and j := pred(j);
  end;
  w := i;
end;

A simple modification of this procedure permits it to determine the flow δ that can be augmented along W as it determines the first common ancestor w of nodes k and l. Using the predecessor indices to again traverse the cycle W, the algorithm can then update the flows on the arcs. The entire flow change operation takes O(n) time in the worst case, but it typically examines only a small subset of the nodes.
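A Python rendition of this procedure, extended to record the two tree paths whose union with (k, l) forms the pivot cycle W, might read as follows (all names are ours):

def identify_cycle(k, l, pred, depth):
    # Find the apex w = first common ancestor of k and l by walking the
    # deeper endpoint up first, so that no arc outside the pivot cycle is
    # traversed (a sketch of IDENTIFY CYCLE).
    i, j = k, l
    path_k, path_l = [k], [l]
    while i != j:
        if depth[i] > depth[j]:
            i = pred[i]
            path_k.append(i)
        elif depth[j] > depth[i]:
            j = pred[j]
            path_l.append(j)
        else:
            i = pred[i]
            path_k.append(i)
            j = pred[j]
            path_l.append(j)
    return i, path_k, path_l    # i == j is the apex w

Computing δ on the fly amounts to tracking the minimum residual flow change along path_k and path_l as the two walks proceed.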

Basis Exchange

In the terminology of the simplex method, a basis exchange is a pivot operation. If δ = 0, then the pivot is said to be degenerate; otherwise it is nondegenerate. A basis is called degenerate if the flow on some basic arc equals its lower or upper bound, and nondegenerate otherwise. Observe that a degenerate pivot occurs only in a degenerate basis.

Each time the method exchanges an entering arc (k, l) for a leaving arc (p, q), it must update the basis structure. If the leaving arc is the same as the entering arc, which would happen when δ = δ_kl = u_kl, the basis does not change; in this instance, the arc (k, l) merely moves from the set L to the set U, or vice versa. If the leaving arc differs from the entering arc, then more extensive changes are needed. In this instance, the arc (p, q) becomes a nonbasic arc at its lower or upper bound, depending upon whether x_pq = 0 or x_pq = u_pq. Adding the arc (k, l) to, and deleting the arc (p, q) from, the previous basis yields a new basis that is again a spanning tree. The node potentials also change, and they can be updated as follows. The deletion of the arc (p, q) from the previous basis partitions the set of nodes into two subtrees: one, T_1, containing the root node, and the other, T_2, not containing the root node. Note that the subtree T_2 hangs from node p or node q. The arc (k, l) has one endpoint in T_1 and the other in T_2. As is easy to verify, the conditions π(1) = 0 and c_ij - π(i) + π(j) = 0 for all arcs in the new basis imply that the potentials of the nodes in the subtree T_1 remain unchanged, and the potentials of the nodes in the subtree T_2 change by a constant amount: if k ∈ T_1 and l ∈ T_2, then all of the node potentials in T_2 change by -c̄_kl; if l ∈ T_1 and k ∈ T_2, they change by the amount +c̄_kl. The following method, using the thread and depth indices, updates the node potentials quickly.

procedure UPDATE POTENTIALS;
begin
  if q ∈ T_2 then y := q else y := p;
  if k ∈ T_1 then change := -c̄_kl else change := c̄_kl;
  π(y) := π(y) + change;
  z := thread(y);
  while depth(z) > depth(y) do
  begin
    π(z) := π(z) + change;
    z := thread(z);
  end;
end;

The final step in the basis exchange is to update the various indices. This step is rather involved, and we refer the reader to the reference material cited in Section 6.4 for the details. We do note, however, that it is possible to update the tree indices in O(n) time.

Termination

The network simplex algorithm, as just described, moves from one basis structure to another until it obtains a basis structure that satisfies the optimality conditions (5.9)-(5.11). It is easy to show that the algorithm terminates in a finite number of steps if each pivot operation is nondegenerate. Recall that |c̄_kl| represents the net decrease in the cost per unit of flow sent around the cycle W. During a nondegenerate pivot (in which δ > 0), the new basis structure has a cost that is δ|c̄_kl| units lower than the previous basis structure. Since there are a finite number of basis structures, and every basis structure has a unique associated cost, the network simplex algorithm terminates finitely under the assumption of nondegeneracy. Degenerate pivots, however, pose theoretical difficulties that we address next.

Strongly Feasible Bases

The network simplex algorithm does not necessarily terminate in a finite number of iterations unless we impose an additional restriction on the choice of the entering and leaving arcs. Researchers have constructed very small network examples for which poor choices lead to cycling, i.e., to an infinite repetitive sequence of degenerate pivots. Degeneracy in network problems is not only a theoretical issue, but also a practical one: computational studies have shown that as many as 90% of the pivot operations on common networks can be degenerate. As we show next, by maintaining a special type of basis, called a strongly feasible basis, the simplex algorithm terminates finitely; moreover, it runs faster in practice as well.

The perturbation technique is a well-known method for avoiding cycling in the simplex algorithm for linear programming. This technique slightly perturbs the right-hand-side vector so that every feasible basis is nondegenerate, and so that it is easy to convert an optimum solution of the perturbed problem into an optimum solution of the original problem. We show that a particular perturbation technique for the network simplex method is equivalent to the combinatorial rule known as the strongly feasible basis technique.

The minimum cost flow problem can be perturbed by changing the supply/demand vector b to b + ε. We say that ε = (ε_1, ε_2, ..., ε_n) is a feasible perturbation if it satisfies the following conditions:

(i) ε_i > 0 for all i = 2, 3, ..., n;

(ii) Σ_{i=2}^{n} ε_i < 1; and

(iii) ε_1 = - Σ_{i=2}^{n} ε_i.

One possible choice for a feasible perturbation is ε_i = 1/n for i = 2, ..., n (and thus ε_1 = -(n-1)/n). Another choice is ε_i = α^i for i = 2, ..., n, with α chosen as a very small positive number.

As earlier, we conceive of a basis tree as a tree hanging from the root node. The tree arcs either are upward pointing (towards the root) or are downward pointing (away from the root). We say that a basis structure (B, L, U) is strongly feasible if we can send a positive amount of flow from any node in the tree to the root along the arcs in the tree without violating any of the flow bounds. See Figure 5.2 for an example of a strongly feasible basis. Observe that this definition implies that no upward pointing arc can be at its upper bound and no downward pointing arc can be at its lower bound.

The procedure COMPUTE FLOWS, given earlier in this section, implies that a perturbation of b by ε changes the flows on the basic arcs in the following manner:

1. If (i, j) is an upward pointing arc of the tree B and D(i) is the set of descendants of node i, then the perturbation increases the flow on arc (i, j) by Σ_{k ∈ D(i)} ε_k. Since 0 < Σ_{k ∈ D(i)} ε_k < 1, this flow change is nonintegral and thus nonzero.

2. If (i, j) is a downward pointing arc of the tree B and D(j) is the set of descendants of node j, then the perturbation decreases the flow on arc (i, j) by Σ_{k ∈ D(j)} ε_k. Since 0 < Σ_{k ∈ D(j)} ε_k < 1, this flow change is nonintegral and thus nonzero.

Theorem 5.2. For any basis structure (B, L, U) of the minimum cost flow problem with integral data, the following statements are equivalent:

(i) (B, L, U) is strongly feasible.

(ii) No upward pointing arc of the basis is at its upper bound, and no downward pointing arc of the basis is at its lower bound.

(iii) (B, L, U) is feasible if we replace b by b + ε, for any feasible perturbation ε.

(iv) (B, L, U) is feasible if we replace b by b + ε, for the perturbation ε = (-(n-1)/n, 1/n, 1/n, ..., 1/n).

Proof.

(i) ⟹ (ii). Suppose an upward pointing arc (i, j) is at its upper bound. Then node i cannot send any flow to the root, violating the definition of a strongly feasible basis. For the same reason, no downward pointing arc can be at its lower bound.

(ii) ⟹ (iii). As noted above, the perturbation increases the flow on an upward pointing arc by an amount strictly between 0 and 1. Since the flow on an upward pointing arc is integral and strictly less than its (integral) upper bound, the perturbed solution remains feasible. Similar reasoning shows that after we have perturbed the problem, the downward pointing arcs also remain feasible.

(iii) ⟹ (iv). Follows directly, because ε = (-(n-1)/n, 1/n, 1/n, ..., 1/n) is a feasible perturbation.

(iv) ⟹ (i). Consider the feasible basis structure (B, L, U) of the perturbed problem. Each arc in the basis B carries a positive nonintegral flow. If we remove the perturbation (i.e., replace b + ε by b), the flows on the upward pointing arcs decrease, the flows on the downward pointing arcs increase, and the resulting flows are integral. Consequently, x_ij < u_ij for the upward pointing arcs, x_ij > 0 for the downward pointing arcs, and (B, L, U) is strongly feasible for the original problem.

This theorem shows that maintaining a strongly feasible basis is equivalent to applying the ordinary simplex algorithm to the perturbed problem. This result implies that both approaches obtain exactly the same sequence of basis structures if they use the same rule to select the entering arcs. As a corollary, this equivalence shows that any implementation of the simplex algorithm that maintains a strongly feasible basis performs at most nmCU pivots. To establish this result, consider the perturbed problem with the perturbation ε = (-(n-1)/n, 1/n, 1/n, ..., 1/n). With this perturbation, the flow on every arc is a multiple of 1/n. Consequently, every pivot operation augments at least 1/n units of flow and therefore decreases the objective function value by at least 1/n units. Since mCU is an upper bound on the objective function value of the starting solution and zero is a lower bound on the minimum objective function value, the algorithm will terminate in at most nmCU iterations. Therefore, any implementation of the simplex algorithm that maintains a strongly feasible basis runs in pseudopolynomial time.

Combinatorial Version of Perturbation

The network simplex algorithm starts with a strongly feasible basis; the method described earlier in this section to construct the initial basis always gives such a basis. We can maintain strong feasibility by perturbing b by a suitable perturbation ε. However, there is no need to actually perform the perturbation. Instead, we can maintain strong feasibility using a "combinatorial rule" that is equivalent to applying the original simplex method after we have imposed the perturbation. Even though this rule permits degenerate pivots, it is guaranteed to converge. Figure 5.2 will illustrate our discussion of this method. The algorithm selects the leaving arc in a degenerate pivot carefully, so that the next basis is also

(iii) ε1 = -Σ_{i=2}^{n} εi.

One possible choice for a feasible perturbation is εi = 1/n for i = 2, ..., n (and thus ε1 = -(n-1)/n). Another choice is εi = α^i for i = 2, ..., n, with α chosen as a very small positive number.

The procedure Compute-Flows, given earlier in this section, implies that perturbing b by ε changes the flows on the basic arcs in the following manner:

1. If (i, j) is a downward pointing arc of tree B and D(j) is the set of descendants of node j, then the perturbation decreases the flow in arc (i, j) by Σ_{k ∈ D(j)} εk. Since 0 < Σ_{k ∈ D(j)} εk < 1, the resulting flow is nonintegral and thus nonzero.

2. If (i, j) is an upward pointing arc of tree B and D(i) is the set of descendants of node i, then the perturbation increases the flow in arc (i, j) by Σ_{k ∈ D(i)} εk. Since 0 < Σ_{k ∈ D(i)} εk < 1, the resulting flow is nonintegral and thus nonzero.

Theorem 5.5. For any basis structure (B, L, U) of the minimum cost flow problem, the following statements are equivalent:

(i) (B, L, U) is strongly feasible.

(ii) No upward pointing arc of the basis (B, L, U) is at its upper bound, and no downward pointing arc of the basis (B, L, U) is at its lower bound.

(iii) (B, L, U) is feasible if we replace b by b + ε, for any feasible perturbation ε.

(iv) (B, L, U) is feasible if we replace b by b + ε, for the perturbation ε = (-(n-1)/n, 1/n, ..., 1/n).

Proof. (i) ⇒ (ii). Suppose an upward pointing arc (i, j) is at its upper bound. Then node i cannot send any flow to the root, violating the definition of a strongly feasible basis. For the same reason, no downward pointing arc can be at its lower bound.

(ii) ⇒ (iii). The perturbation increases the flow on an upward pointing arc by an amount strictly between 0 and 1. Since the flow on an upward pointing arc is integral and strictly less than its (integral) upper bound, the perturbed solution remains feasible. Similar reasoning shows that, after we have perturbed the problem, the downward pointing arcs also remain feasible.

(iii) ⇒ (iv). Follows directly, because ε = (-(n-1)/n, 1/n, ..., 1/n) is a feasible perturbation.

(iv) ⇒ (i). Consider the feasible basis structure (B, L, U) of the perturbed problem. In the perturbed solution, the flow on every arc is a nonintegral multiple of 1/n. If we remove the perturbation (i.e., replace b + ε by b), flows on the upward pointing arcs decrease, flows on the downward pointing arcs increase, and the resulting flows are integral. Consequently, x_ij < u_ij for the upward pointing arcs, x_ij > 0 for the downward pointing arcs, and (B, L, U) is strongly feasible for the original problem.

This theorem shows that maintaining a strongly feasible basis is equivalent to applying the ordinary simplex algorithm to the perturbed problem. To establish the consequences of this result, consider the perturbed problem with the perturbation ε = (-(n-1)/n, 1/n, ..., 1/n). With this perturbation, every pivot operation augments at least 1/n units of flow and therefore decreases the objective function value by at least 1/n units. Since mCU is an upper bound on the objective function value of the starting solution and zero is a lower bound on the minimum objective function value, this equivalence shows that any implementation of the simplex algorithm that maintains a strongly feasible basis performs at most nmCU pivots, and consequently runs in pseudopolynomial time.

Combinatorial Version of Perturbation

We could maintain strong feasibility by actually perturbing b by a suitable perturbation ε. There is, however, no need to perform the perturbation; instead, we can maintain strong feasibility using a "combinatorial rule" that is equivalent to applying the original simplex method after we have imposed the perturbation. Both approaches obtain exactly the same sequence of basis structures if they use the same rule to select the entering arcs, and even though the combinatorial rule permits degenerate pivots, it is guaranteed to converge. Figure 5.2 will illustrate our discussion of this method.

The network simplex algorithm starts with a strongly feasible basis; the method described earlier to construct the initial basis always gives such a basis. The algorithm then selects the leaving arc in a degenerate pivot carefully, so that the next basis is also strongly feasible.

Suppose that the entering arc (k, l) is at its lower bound and that the apex w is the common ancestor of nodes k and l. Let W be the cycle formed by adding arc (k, l) to the basis tree; we define the orientation of the cycle as the same as that of arc (k, l). After updating the flow, the algorithm identifies the blocking arcs, i.e., those arcs (i, j) in W that satisfy δ_ij = δ. If the blocking arc is unique, then it leaves the basis. If the cycle contains more than one blocking arc, then the next basis will be degenerate, i.e., some basic arcs will be at their lower or upper bounds. In this case, the algorithm selects the leaving arc in accordance with the following rule:

Combinatorial Pivot Rule. When introducing an arc into the basis for the network simplex method, select the leaving arc as the last blocking arc, say arc (p, q), encountered in traversing the pivot cycle W along its orientation starting at the apex w.

We now show that this rule guarantees that the next basis is strongly feasible. To do so, we show that in the next basis, every node in the cycle W can send positive flow to the root node. Let W1 be the segment of the cycle W between the apex w and arc (p, q), encountered in traversing W along its orientation, and let W2 = W - W1 - {(p, q)}. Define the orientation of the segments W1 and W2 to be compatible with the orientation of W. See Figure 5.2 for an illustration of the segments W1 and W2 of our example. Since arc (p, q) is the last blocking arc encountered in traversing W along its orientation starting at the apex w, no arc in W2 is blocking, and hence every node contained in the segment W2 can send positive flow to the root along the orientation of W2, via node w.

Now consider the nodes contained in the segment W1. We distinguish two cases. First, suppose that the current pivot was a nondegenerate pivot. Then the pivot augmented a positive amount of flow along the arcs in W1; hence, every node in W1 can augment flow back to the root opposite to the orientation of W1, via node w. Next, suppose that the current pivot was a degenerate pivot. In this case, W1 must be contained in the segment of W between node w and node k, because, by the property of strong feasibility, every node on the path from node l to node w could send a positive amount of flow to the root before the pivot, and thus no arc on this path can be a blocking arc in a degenerate pivot. Now observe that, before the pivot, every node in W1 could send positive flow to the root and, since a degenerate pivot does not change any flow values, every node in W1 must be able to send positive flow to the root after the pivot as well. This conclusion completes the proof that the next basis is strongly feasible.
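In code, the leaving arc selection is a single pass around the pivot cycle. The sketch below is a minimal illustration with invented names (the paper gives no implementation); it assumes the cycle W is supplied as a list of (arc, delta) pairs in cycle orientation starting at the apex w, where delta is the maximum flow change that the arc permits.

    def select_leaving_arc(cycle):
        # cycle: list of (arc, delta) pairs, in orientation order from the apex w
        theta = min(d for _, d in cycle)      # flow change for this pivot
        leaving = None
        for arc, d in cycle:                  # traverse W from the apex w
            if d == theta:                    # this arc is blocking
                leaving = arc                 # keep the *last* blocking arc seen
        return theta, leaving

A pivot with theta = 0 is degenerate; the rule applies unchanged and, as shown above, keeps the basis strongly feasible.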

We now study the effect of the basis change on node potentials during a degenerate pivot. Suppose the entering arc (k, l) is at its lower bound; since it enters the basis, c̄_kl < 0. Because the leaving arc belongs to the path from node w to node k, node k lies in the subtree T2, and the potentials of all nodes in T2 change by the amount c̄_kl < 0. Consequently, this degenerate pivot strictly decreases the sum of all node potentials (which, by our prior assumptions, is integral). Since the sum of all node potentials is bounded from below, the number of successive degenerate pivots is finite.

So far we have assumed that the entering arc is at its lower bound. If the entering arc (k, l) is at its upper bound, then we define the orientation of the cycle W as opposite to the orientation of arc (k, l). The criterion used to select the leaving arc remains unchanged: the leaving arc is the last blocking arc encountered in traversing W along its orientation starting at node w. In this case, node l is contained in the subtree T2 and c̄_kl > 0; the potentials of the nodes in T2 change by -c̄_kl < 0, so a degenerate pivot again decreases the sum of the node potentials.

Complexity Results

The strongly feasible basis technique implies some nice theoretical results about the network simplex algorithm implemented using Dantzig's pivot rule, that is, the rule of pivoting in the arc (k, l) with the largest value of |c̄_kl| among all arcs that violate the optimality conditions. This technique also yields polynomial time simplex algorithms for the shortest path and assignment problems.

We have already shown that any version of the network simplex algorithm that maintains a strongly feasible basis performs O(nmCU) pivots. Using Dantzig's pivot rule and geometric improvement arguments, we can reduce the number of pivots to O(nmU log H), with H defined as H = mCU.

As earlier, we consider the perturbed problem with the perturbation ε = (-(n-1)/n, 1/n, ..., 1/n). Let z^k denote the objective function value of the perturbed minimum cost flow problem at the k-th iteration of the simplex algorithm, let x denote the current flow, and let (B, L, U) denote the current basis structure. Let Δ > 0 denote the maximum violation of the optimality condition of any nonbasic arc. If the algorithm next pivots in a nonbasic arc corresponding to the maximum violation, then the objective function value decreases by at least Δ/n units. Hence,

    z^k - z^{k+1} ≥ Δ/n.   (5.13)

We now need an upper bound on the total possible improvement in the objective function after the k-th iteration.

Figure 5.2. A strongly feasible basis. The figure shows the flows and capacities represented as (x_ij, u_ij), with apex w. The entering arc is (9, 10); this pivot is a degenerate pivot. The blocking arcs are (2, 3) and (7, 5); the leaving arc is (7, 5), the last blocking arc encountered in traversing the cycle from the apex. The segments W1 and W2 are as shown.

It is easy to show that the total improvement with respect to the objective function Σ_{(i,j) ∈ A} c_ij x_ij is equal to the total improvement with respect to the objective function Σ_{(i,j) ∈ A} c̄_ij x_ij, because

    Σ_{(i,j) ∈ A} c_ij x_ij = Σ_{(i,j) ∈ A} c̄_ij x_ij + Σ_{i ∈ N} π(i) b(i),

and the rightmost term in this expression is a constant for fixed values of the node potentials. Further, the total improvement in the objective function Σ_{(i,j) ∈ A} c̄_ij x_ij is bounded by the total improvement in the following relaxed problem:

    minimize Σ_{(i,j) ∈ A} c̄_ij x_ij,   (5.14a)

    subject to

    0 ≤ x_ij ≤ u_ij, for all (i, j) ∈ A.   (5.14b)

For a given basis structure (B, L, U), we construct an optimum solution of (5.14) by setting x_ij = u_ij for all arcs (i, j) ∈ L with c̄_ij < 0, by setting x_ij = 0 for all arcs (i, j) ∈ U with c̄_ij > 0, and by leaving the flow on the basic arcs unchanged. This readjustment of flow decreases the objective function by at most mΔU. We have thus shown that

    z^k - z* ≤ mΔU.   (5.15)

Combining (5.13) and (5.15), we obtain

    (z^k - z^{k+1}) ≥ (z^k - z*)/nmU.

By Lemma 1.1, if H = mCU, the network simplex algorithm terminates in O(nmU log H) iterations. We summarize our discussion as follows.

Theorem 5.6. The network simplex algorithm that maintains a strongly feasible basis and uses Dantzig's pivot rule performs O(nmU log H) pivots, with H = mCU.

This result gives polynomial time bounds for the shortest path and assignment problems, since both can be formulated as minimum cost flow problems with U = n and U = 1, respectively. In fact, it is possible to modify the algorithm and use the previous arguments to show that the simplex algorithm solves these problems in O(n^2 log C) pivots and runs in O(nm log C) total time. These results can be found in the references cited in Section 6.4.

5.7 Right-Hand-Side Scaling Algorithm

Scaling techniques are among the most effective algorithmic strategies for designing polynomial time algorithms for the minimum cost flow problem. In this section, we describe an algorithm based on a right-hand-side scaling (RHS-scaling) technique. The next two sections present polynomial time algorithms based upon cost scaling, and upon simultaneous right-hand-side and cost scaling.

The RHS-scaling algorithm is an improved version of the successive shortest path algorithm. The inherent drawback of the successive shortest path algorithm is that augmentations may carry relatively small amounts of flow, resulting in a fairly large number of augmentations in the worst case. The RHS-scaling algorithm guarantees that each augmentation carries sufficiently large flow, and thereby reduces the number of augmentations substantially. We shall illustrate RHS-scaling on the uncapacitated minimum cost flow problem, i.e., a problem with u_ij = ∞ for each (i, j) ∈ A. This algorithm can be applied to the capacitated minimum cost flow problem after it has been converted into an uncapacitated problem (as described in Section 2.4).

The algorithm uses the pseudoflow x and the imbalances e(i), as defined in Section 5.4. It performs a number of scaling phases. Much as we did in the excess scaling algorithm for the maximum flow problem, we let Δ = 2^⌈log U⌉ initially. At the beginning of the Δ-scaling phase, either (i) e(i) < 2Δ for all i, or (ii) e(i) > -2Δ for all i, but not necessarily both; that is, letting S(Δ) = {i : e(i) ≥ Δ} and T(Δ) = {j : e(j) ≤ -Δ}, either S(2Δ) = ∅ or T(2Δ) = ∅. This definition implies that the sum of the excesses (whose magnitude equals the sum of the deficits) is bounded by 2nΔ. In the Δ-scaling phase, we perform a number of augmentations, each from a node k ∈ S(Δ) to a node l ∈ T(Δ), and each of these augmentations carries Δ units of flow. The definition of Δ implies that within n augmentations the algorithm will decrease Δ by a factor of at least 2; at this point, we begin a new scaling phase. Hence, within O(log U)

scaling phases, Δ < 1. By the integrality of the data, all imbalances are then zero, and the algorithm has found an optimum flow.

The driving force behind this scaling technique is an invariant property (which we will prove later) that each arc flow in the Δ-scaling phase is a multiple of Δ. This flow invariant property and the connectedness assumption (A5.2) ensure that we can always send Δ units of flow from a node in S(Δ) to a node in T(Δ). The following algorithmic description is a formal statement of the RHS-scaling algorithm.

algorithm RHS-SCALING;
begin
    x := 0;  e := b;
    let π be the shortest path distances in G(0);
    Δ := 2^⌈log U⌉;
    while the network contains a node with nonzero imbalance do
    begin
        S(Δ) := {i ∈ N : e(i) ≥ Δ};
        T(Δ) := {i ∈ N : e(i) ≤ -Δ};
        while S(Δ) ≠ ∅ and T(Δ) ≠ ∅ do
        begin
            select a node k ∈ S(Δ) and a node l ∈ T(Δ);
            determine the shortest path distances d from node k to all other nodes in the residual network G(x) with respect to the reduced costs c̄_ij;
            let P denote the shortest path from node k to node l;
            update π := π - d;
            augment Δ units of flow along the path P;
            update x, S(Δ) and T(Δ);
        end;
        Δ := Δ/2;
    end;
end;

The RHS-scaling algorithm correctly solves the problem because, during the Δ-scaling phase, it is always able to send Δ units of flow on the shortest path from a node k ∈ S(Δ) to a node l ∈ T(Δ). This fact follows from the following result.

Lemma 5.2. The residual capacities of arcs in the residual network are always integer multiples of Δ.

Proof. We use induction on the number of augmentations and scaling phases. The inductive hypothesis is true initially, since the initial residual capacities are 0 or ∞. Each augmentation changes the residual capacities by 0 or Δ units, and hence preserves the inductive hypothesis. A decrease in the scale factor by a factor of 2 also preserves the inductive hypothesis. This result implies the conclusion of the lemma.

The RHS-scaling algorithm is a special case of the successive shortest path algorithm, and thus it terminates with a minimum cost flow. We now show that the algorithm performs at most n augmentations per scaling phase. At the beginning of the Δ-scaling phase, either S(2Δ) = ∅ or T(2Δ) = ∅. We consider the case when S(2Δ) = ∅; a similar proof applies when T(2Δ) = ∅. In this case, Δ ≤ e(i) < 2Δ for each node i ∈ S(Δ). Each augmentation starts at a node in S(Δ), ends at a node with a deficit, and carries Δ units of flow; therefore, it decreases |S(Δ)| by one. Consequently, each scaling phase performs at most n augmentations. Since the algorithm requires 1 + ⌈log U⌉ scaling phases, we obtain the following result. Let S(n, m, C) denote the time needed to solve a shortest path problem on a network with nonnegative arc lengths.

Theorem 5.7. The RHS-scaling algorithm correctly computes a minimum cost flow, performs O(n log U) augmentations, and consequently solves the minimum cost flow problem in O(n log U · S(n, m, C)) time.

Applying the scaling algorithm directly to the capacitated minimum cost flow problem introduces some subtlety, because Lemma 5.2 does not apply to this situation. As we noted previously, one method of solving the capacitated minimum cost flow problem is to first transform the capacitated problem into an uncapacitated one, using the technique described in Section 2.4; we then apply the RHS-scaling algorithm to the transformed network. The transformed network contains n + m nodes, and each scaling phase performs at most n + m augmentations. The shortest path problem on the transformed network can be solved (using some clever techniques) in S(n, m, C) time. Consequently, the RHS-scaling algorithm solves the capacitated minimum cost flow problem in O(m log U · S(n, m, C)) time. A recently developed modest variation of the RHS-scaling algorithm solves the capacitated minimum cost flow problem in O(m log n (m + n log n)) time. This method is currently the best strongly polynomial-time algorithm for solving the minimum cost flow problem.
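The following Python sketch assembles the pieces of the uncapacitated RHS-scaling algorithm. It is a minimal illustration, not the authors' implementation: it assumes nonnegative arc costs (so that zero initial potentials are valid in G(0), whereas the formal statement initializes π with shortest path distances), integral data, and a network that is connected in the sense of assumption A5.2. All identifiers are invented for this sketch.

    import heapq, math

    def rhs_scaling(n, arcs, b):
        # arcs: list of (i, j, c) with u_ij = infinity; b: supplies, sum(b) == 0
        flow = [0] * len(arcs)
        e = list(b)                                 # node imbalances
        pi = [0] * n                                # valid potentials if all c >= 0
        out = [[] for _ in range(n)]                # arcs leaving each node
        inn = [[] for _ in range(n)]                # arcs entering each node
        for a, (i, j, c) in enumerate(arcs):
            out[i].append(a)
            inn[j].append(a)
        U = max(abs(v) for v in b) or 1
        delta = 1 << math.ceil(math.log2(U))        # delta = 2**ceil(log U)
        while any(e):
            while True:
                S = [i for i in range(n) if e[i] >= delta]
                T = [i for i in range(n) if e[i] <= -delta]
                if not S or not T:
                    break
                k, l = S[0], T[0]
                dist, pred = dijkstra(n, arcs, flow, out, inn, pi, k)
                pi = [pi[i] - dist[i] for i in range(n)]   # keep reduced costs >= 0
                v = l                               # augment delta units along P
                while v != k:
                    a, forward = pred[v]
                    flow[a] += delta if forward else -delta
                    v = arcs[a][0] if forward else arcs[a][1]
                e[k] -= delta
                e[l] += delta
            delta //= 2
        return flow, pi

    def dijkstra(n, arcs, flow, out, inn, pi, s):
        # Shortest distances from s in G(x) w.r.t. reduced costs, which the
        # algorithm keeps nonnegative; pred stores (arc, is_forward) labels.
        dist = [math.inf] * n
        pred = [None] * n
        dist[s] = 0
        heap = [(0, s)]
        while heap:
            d, i = heapq.heappop(heap)
            if d > dist[i]:
                continue
            for a in out[i]:                        # forward arcs: residual = infinity
                _, j, c = arcs[a]
                nd = d + c - pi[i] + pi[j]
                if nd < dist[j]:
                    dist[j], pred[j] = nd, (a, True)
                    heapq.heappush(heap, (nd, j))
            for a in inn[i]:                        # backward arcs: residual = flow
                t, _, c = arcs[a]
                if flow[a] > 0:
                    nd = d - c - pi[i] + pi[t]
                    if nd < dist[t]:
                        dist[t], pred[t] = nd, (a, False)
                        heapq.heappush(heap, (nd, t))
        return dist, pred

Every augmentation moves exactly delta units, so each flow and residual capacity stays a multiple of delta, which is precisely the invariant of Lemma 5.2.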

5.8 Cost Scaling Algorithm

We now describe a cost scaling algorithm for the minimum cost flow problem. This algorithm can be viewed as a generalization of the preflow-push algorithm for the maximum flow problem, and it relies on the concept of approximate optimality. A flow x is said to be ε-optimal for some ε > 0 if x, together with some node potentials π, satisfies the following conditions:

C5.7 (Primal feasibility) x is feasible.

C5.8 (ε-Dual feasibility) c̄_ij ≥ -ε for each arc (i, j) in the residual network G(x).

We refer to these conditions as the ε-optimality conditions. They are a relaxation of the original optimality conditions, and they reduce to C5.5 and C5.6 when ε is 0. The ε-optimality conditions permit -ε ≤ c̄_ij < 0 for an arc (i, j) at its lower bound, and ε ≥ c̄_ij > 0 for an arc (i, j) at its upper bound. The following facts are useful for analysing the cost scaling algorithm.

Lemma 5.3. Any feasible flow is ε-optimal for ε ≥ C. Any ε-optimal feasible flow for ε < 1/n is an optimum flow.

Proof. Clearly, any feasible flow with zero node potentials satisfies C5.8 for ε ≥ C. Now consider an ε-optimal flow with ε < 1/n. The ε-dual feasibility conditions imply that, for any directed cycle W in the residual network,

    Σ_{(i,j) ∈ W} c_ij = Σ_{(i,j) ∈ W} c̄_ij ≥ -nε > -1.

Since the arc costs are integral, this result implies that Σ_{(i,j) ∈ W} c_ij ≥ 0. Hence, the residual network contains no negative cost cycle, and from Theorem 5.1 the flow is optimum.
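The ε-optimality conditions are easy to verify directly. The checker below is a small illustrative sketch with invented names: it scans each arc of G and tests C5.7 together with C5.8 on both residual arcs that the flow on (i, j) induces.

    def is_eps_optimal(arcs, u, x, pi, eps):
        # arcs: list of (i, j, c); u[a], x[a]: capacity and flow on arc a
        for a, (i, j, c) in enumerate(arcs):
            if not (0 <= x[a] <= u[a]):           # C5.7: primal feasibility
                return False
            rc = c - pi[i] + pi[j]                # reduced cost of (i, j)
            if x[a] < u[a] and rc < -eps:         # residual arc (i, j) in G(x)
                return False                      # C5.8 violated
            if x[a] > 0 and -rc < -eps:           # residual arc (j, i), cost -c
                return False
        return True

With eps = 0 the test reduces to the usual optimality conditions; by Lemma 5.3, any feasible flow passes it with zero potentials for eps ≥ C, and a passing feasible flow with eps < 1/n is optimal.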

The cost scaling algorithm treats ε as a parameter and iteratively obtains ε-optimal flows for successively smaller values of ε. Initially ε = C, and finally ε < 1/n. The algorithm performs cost scaling phases by repeatedly applying an Improve-Approximation procedure that transforms an ε-optimal flow into an ε/2-optimal flow. After 1 + ⌈log nC⌉ cost scaling phases, ε < 1/n, and by Lemma 5.3 the algorithm terminates with an optimum flow. More formally, we can state the algorithm as follows.

algorithm COST SCALING;
begin
    π := 0;
    let x be any feasible flow;
    ε := C;
    while ε ≥ 1/n do
    begin
        IMPROVE-APPROXIMATION-I(ε, x, π);
        ε := ε/2;
    end;
    x is an optimum flow for the minimum cost flow problem;
end;

The Improve-Approximation procedure transforms an ε-optimal flow into an ε/2-optimal flow. It does so by (i) first converting the ε-optimal flow into an ε/2-optimal pseudoflow (a pseudoflow x is called ε-optimal if it satisfies the ε-dual feasibility conditions C5.8), and then (ii) gradually converting the pseudoflow into a flow while always maintaining the ε/2-dual feasibility conditions. We call a node i with e(i) > 0 active, and we call an arc (i, j) in the residual network admissible if -ε/2 ≤ c̄_ij < 0. The basic operations are selecting active nodes and pushing flows on admissible arcs; we shall see later that pushing flows on admissible arcs preserves the ε/2-dual feasibility conditions.

procedure PUSH/RELABEL(i);
begin
    if G(x) contains an admissible arc (i, j) then
        push δ := min {e(i), r_ij} units of flow from node i to node j
    else π(i) := π(i) + ε/2 + min {c̄_ij : (i, j) ∈ A(i) and r_ij > 0};
end;

Recall that r_ij denotes the residual capacity of an arc (i, j) in G(x). As in our earlier discussion of preflow-push algorithms for the maximum flow problem, if δ = r_ij, then we refer to the push as saturating; otherwise it is nonsaturating. We also refer to the updating of the potential of a node as a relabel operation; the purpose of a relabel operation is to create new admissible arcs. Moreover, we use the same data structure

used in the maximum flow algorithms to identify admissible arcs. For each node i, we maintain a current arc, which is the current candidate for pushing flow out of node i. The current arc is found by sequentially scanning the arc list A(i). The following generic version of the Improve-Approximation procedure summarizes its essential operations.

procedure IMPROVE-APPROXIMATION-I(ε, x, π);
begin
    for every arc (i, j) ∈ A do
        if c̄_ij > 0 then x_ij := 0
        else if c̄_ij < 0 then x_ij := u_ij;
    compute node imbalances;
    while the network contains an active node do
    begin
        select an active node i;
        PUSH/RELABEL(i);
    end;
end;

At the beginning of the procedure, the algorithm adjusts the flows on arcs to obtain an ε/2-optimal pseudoflow (in fact, it is a 0-optimal pseudoflow). The correctness of this procedure rests on the following result.

Lemma 5.4. The Improve-Approximation procedure always maintains ε/2-optimality of the pseudoflow and, at termination, yields an ε/2-optimal flow.

Proof. This proof is similar to that of Lemma 4.1. We use induction on the number of push/relabel steps to show that the algorithm preserves ε/2-optimality of the pseudoflow. Pushing flow on arc (i, j) might add its reversal (j, i) to the residual network. But since -ε/2 ≤ c̄_ij < 0 (by the criteria of admissibility), c̄_ji > 0, and so condition C5.8 is satisfied for any value of ε > 0. The algorithm relabels node i when c̄_ij ≥ 0 for every arc (i, j) in the residual network. By our rule for increasing potentials, after we increase π(i) by ε/2 + min {c̄_ij : (i, j) ∈ A(i) and r_ij > 0} units, the reduced cost of every arc (i, j) with r_ij > 0 still satisfies c̄_ij ≥ -ε/2. In addition, increasing π(i) maintains the condition c̄_ki ≥ -ε/2 for all arcs (k, i) in the residual network. Therefore, the procedure preserves ε/2-optimality of the pseudoflow throughout and, at termination, yields an ε/2-optimal flow.
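A compact way to realize PUSH/RELABEL is shown below. This is a sketch under the paired-arc convention (arc a and its reversal a ^ 1 are stored adjacently), which is an assumption of this illustration rather than part of the paper's presentation; all names are invented.

    def push_relabel(i, eps, e, pi, adj, to, cost, rcap):
        # adj[i]: residual arc ids out of i; to[a], cost[a], rcap[a] describe
        # arc a, and arc a ^ 1 is its reversal. Assumes node i has at least
        # one outgoing residual arc.
        for a in adj[i]:
            j = to[a]
            rc = cost[a] - pi[i] + pi[j]
            if rcap[a] > 0 and -eps / 2 <= rc < 0:     # admissible arc found
                delta = min(e[i], rcap[a])             # saturating iff delta == rcap[a]
                rcap[a] -= delta
                rcap[a ^ 1] += delta
                e[i] -= delta
                e[j] += delta
                return
        # no admissible arc: relabel i, creating at least one admissible arc
        pi[i] += eps / 2 + min(cost[a] - pi[i] + pi[to[a]]
                               for a in adj[i] if rcap[a] > 0)

After the relabel, the arc attaining the minimum has reduced cost exactly -eps/2, so it becomes admissible, as the proof of Lemma 5.4 requires.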

We next analyze the complexity of the Improve-Approximation procedure. We show that the complexity of the generic version is O(n^2 m), and we then describe a specialized version running in time O(n^3). These time bounds are comparable to those of the preflow-push algorithms for the maximum flow problem.

Lemma 5.5. No node potential increases more than 3n times during an execution of the Improve-Approximation procedure.

Proof. Let x be the current ε/2-optimal pseudoflow and let x' be the ε-optimal flow at the end of the previous cost scaling phase. Let π and π' be the node potentials corresponding to the pseudoflow x and the flow x', respectively. It is possible to show, using a variation of the flow decomposition properties discussed in Section 2.1, that for every node v with positive imbalance in x, there exists a node w with negative imbalance in x and a path P satisfying the properties that (i) P is an augmenting path with respect to x, and (ii) its reversal P̄ is an augmenting path with respect to x'. This fact, stated in terms of the residual networks, implies that there exists a sequence of nodes v = v0, v1, ..., vl = w with the property that P = v0 - v1 - ... - vl is a path in G(x) and its reversal P̄ = vl - vl-1 - ... - v0 is a path in G(x').

Applying the ε/2-optimality conditions to the arcs on the path P in G(x), we obtain Σ_{(i,j) ∈ P} c̄_ij ≥ -l(ε/2). Alternatively,

    π(v) ≤ π(w) + l(ε/2) + Σ_{(i,j) ∈ P} c_ij.   (5.16)

Applying the ε-optimality conditions to the arcs on the path P̄ in G(x'), we obtain

    π'(w) ≤ π'(v) + lε + Σ_{(j,i) ∈ P̄} c_ji = π'(v) + lε - Σ_{(i,j) ∈ P} c_ij.   (5.17)

Combining (5.16) and (5.17) gives

    π(v) ≤ π'(v) + (π(w) - π'(w)) + (3/2)lε.

Now we use the facts that (i) π(w) = π'(w) (the potential of a node with a negative imbalance does not change, because the algorithm never selects it for a push/relabel step), (ii) l ≤ n, and (iii) each relabel operation increases π(v) by at least ε/2 units. The lemma is now immediate.

Lemma 5.6. The Improve-Approximation procedure performs O(nm) saturating pushes.

Proof. This proof is similar to that of Lemma 4.5, and it essentially amounts to showing that, between two consecutive saturations of an arc (i, j), the potentials of both the nodes i and j increase at least once. Since any node potential increases O(n) times, the algorithm saturates any arc O(n) times, resulting in O(nm) total saturating pushes.

To bound the number of nonsaturating pushes, we need one more result. We define the admissible network as the network consisting solely of the admissible arcs. The following result is crucial to analysing the complexity of the cost scaling algorithms.

Lemma 5.7. The admissible network is acyclic throughout the cost scaling algorithms.

Proof. We establish this result by an induction argument applied to the number of pushes and relabels. The result is true at the beginning of each cost scaling phase, because the pseudoflow is 0-optimal and the network then contains no admissible arc. We always push flow on an arc (i, j) with c̄_ij < 0; hence, if the algorithm adds its reversal (j, i) to the residual network, then c̄_ji > 0. Thus pushes do not create new admissible arcs, and they preserve the inductive hypothesis. A relabel operation at node i may create new admissible arcs (i, j), but it also deletes all admissible arcs (k, i): for any arc (k, i), c̄_ki ≥ -ε/2 before the relabel operation, and c̄_ki ≥ 0 after the operation, since the relabel operation increases π(i) by at least ε/2 units. Therefore the algorithm can create no directed cycles.

Lemma 5.8. The Improve-Approximation procedure performs O(n^2 m) nonsaturating pushes.

Proof (Sketch). Let g(i) be the number of nodes that are reachable from node i in the admissible network, and consider the potential function F = Σ_{i active} g(i). The proof amounts to showing that a relabel operation or a saturating push can increase F by at most n units, and that each nonsaturating push decreases F by at least 1 unit. Since, by Lemmas 5.5 and 5.6, the algorithm performs at most 3n^2 relabel operations and O(nm) saturating pushes, these observations yield a bound of O(n^2 m) on the number of nonsaturating pushes.

As in the maximum flow algorithm, the bottleneck operation in the Improve-Approximation procedure is the nonsaturating pushes, which take O(n^2 m) time. The algorithm takes O(nm) time to perform the saturating pushes, and the same time to scan arcs while identifying admissible arcs. Since the cost scaling algorithm calls Improve-Approximation 1 + ⌈log nC⌉ times, we obtain the following result.

Theorem 5.8. The generic cost scaling algorithm runs in O(n^2 m log nC) time.

The cost scaling algorithm illustrates an important connection between the maximum flow and the minimum cost flow problems: solving an Improve-Approximation problem is very similar to solving a maximum flow problem. Just as in the generic preflow-push algorithm for the maximum flow problem, the bottleneck operation is the number of nonsaturating pushes. Researchers have suggested improvements based on examining the nodes in some specific order, or on using some clever data structures. We describe one such improvement, called the wave algorithm.

The wave algorithm is the same as the Improve-Approximation procedure, but it selects active nodes for the push/relabel step in a specific order. The algorithm exploits the acyclicity of the admissible network. As is well known, the nodes of an acyclic network can be ordered so that, for each arc (i, j) in the network, node i precedes node j; such an ordering is called a topological ordering, and it is possible to determine one in O(m) time. Observe that pushes do not change the admissible network, since they do not create new admissible arcs. A relabel operation, however, may create new admissible arcs and consequently may affect the topological ordering of nodes.

The wave algorithm examines each node in the topological order and, if the node is active, performs a push/relabel step. When examined in this order, active nodes push flow to higher numbered nodes, which in turn push flow to even higher numbered nodes, and so on. A relabel operation changes the numbering of nodes, and the algorithm then again starts to examine the nodes according to the topological order. However, if within n consecutive node examinations the algorithm performs no relabel operation, then all active nodes have discharged their excesses and the algorithm has obtained a flow. Since the algorithm requires O(n^2) relabel operations, we immediately obtain a bound of O(n^3) on the number of node examinations. Each node examination entails at most one nonsaturating push. Consequently, the wave algorithm performs O(n^3) nonsaturating pushes per Improve-Approximation.

We now describe a procedure for obtaining a topological order of the nodes after each relabel operation. An initial topological ordering is determined using an O(m) algorithm. Suppose that, while examining node i, the algorithm relabels it. Note that after the relabel operation at node i, the network contains no incoming admissible arc at node i (see the proof of Lemma 5.7). We then move node i from its present position in the topological order to the first position. Notice that this altered ordering is a topological ordering of the new admissible network, because (i) node i has no incoming admissible arc, (ii) for each outgoing admissible arc (i, j), node i precedes node j in the order, and (iii) the rest of the admissible network does not change, so the previous order remains valid for it. Thus the algorithm maintains an ordered set of nodes (possibly as a doubly linked list) and examines nodes in this order. Whenever it relabels a node i, the algorithm moves the node to the first place in this order and again examines nodes in order starting at node i. We have established the following result.

Theorem 5.9. The cost scaling approach using the wave algorithm as a subroutine solves the minimum cost flow problem in O(n^3 log nC) time.
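The bookkeeping for the wave algorithm's node order is simple: keep the nodes in a list that is topological for the current admissible network, and move a node to the front whenever it is relabeled. The sketch below assumes a helper push_relabel(v) that performs one step on v and returns True exactly when it relabeled v; both the helper contract and the names are inventions of this illustration.

    def wave(topo_order, active, push_relabel):
        order = list(topo_order)          # topological order of the admissible network
        pos = 0
        while any(active):
            v = order[pos]
            if active[v] and push_relabel(v):
                # v was relabeled: it now has no incoming admissible arc
                # (Lemma 5.7), so moving it to the front restores a
                # topological order, and the scan restarts at v.
                order.pop(pos)
                order.insert(0, v)
                pos = 0
            else:
                pos = (pos + 1) % len(order)   # continue the wave, wrapping around
        return order

A production version would store the order in a doubly linked list so that the move-to-front costs O(1), as the text suggests.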

5.9 Double Scaling Algorithm

The double scaling approach combines ideas from both the RHS-scaling and the cost scaling algorithms, and it obtains an improvement not obtained by either algorithm alone. For the sake of simplicity, we shall describe the double scaling algorithm on the uncapacitated transportation network G = (N1 ∪ N2, A), with N1 and N2 as the sets of supply and demand nodes, respectively. A capacitated minimum cost flow problem can be solved by first transforming the problem into an uncapacitated transportation problem (as described in Section 2.4) and then applying the double scaling algorithm.

The double scaling algorithm is the same as the cost scaling algorithm discussed in the previous section, except that it uses a more efficient version of the Improve-Approximation procedure. The Improve-Approximation procedure in the previous section relied on a "pseudoflow-push" method. A natural alternative would be an augmenting path based method: this approach would send flow from a node with excess to a node with deficit over an admissible path, i.e., a path in which each arc is admissible. A natural implementation of this approach would result in O(nm) augmentations, since each augmentation would saturate at least one arc and, by Lemma 5.6, the algorithm requires O(nm) arc saturations. Thus, this approach does not seem to improve the O(n^2 m) bound of the generic Improve-Approximation procedure. We can, however, use ideas from the RHS-scaling algorithm to reduce the number of augmentations to O(n log U) for an uncapacitated problem, by ensuring that each augmentation carries sufficiently large flow.

This approach gives us an algorithm that does cost scaling in the outer loop and, within each cost scaling phase, performs a number of RHS-scaling phases; we call this algorithm the double scaling algorithm. The double scaling algorithm uses the following Improve-Approximation procedure.

procedure IMPROVE-APPROXIMATION-II(ε, x, π);
begin
    set x := 0 and compute node imbalances;
    π(j) := π(j) + ε, for all j ∈ N2;
    Δ := 2^⌈log U⌉;
    while the network contains an active node do
    begin
        S(Δ) := {i ∈ N1 ∪ N2 : e(i) ≥ Δ};
        while S(Δ) ≠ ∅ do
        begin  (RHS-scaling phase)
            select a node k in S(Δ) and delete it from S(Δ);
            determine an admissible path P from node k to some node l with e(l) < 0;
            augment Δ units of flow on P and update x;
        end;
        Δ := Δ/2;
    end;
end;

We shall describe a method for determining admissible paths after first commenting on the correctness of this procedure. First, observe that c̄_ij ≥ -ε for all (i, j) ∈ A at the beginning of the procedure and, hence, by adding ε to π(j) for each j ∈ N2, we obtain an ε/2-optimal (in fact, a 0-optimal) pseudoflow. The procedure always augments flow on admissible arcs and, from Lemma 5.4, this choice preserves the ε/2-optimality of the pseudoflow. Thus, at the termination of the procedure, we obtain an ε/2-optimal flow. Further, as in the RHS-scaling algorithm, the procedure maintains the invariant property that all residual capacities are integer multiples of Δ, and thus each augmentation can carry Δ units of flow.

The advantage of the double scaling algorithm, contrasted with solving a shortest path problem in the RHS-scaling algorithm, is that the double scaling algorithm identifies an augmenting path in O(n) time, on average, over a sequence of n augmentations. In fact, the algorithm identifies an admissible path by gradually building it up. We maintain a partial admissible path P using predecessor indices, i.e., if (u, v) ∈ P then pred(v) = u. At any point in the algorithm, we perform one of the following two steps, whichever is applicable, at the last node of P, say node i, terminating when the last node has a deficit:

advance(i). If the residual network contains an admissible arc (i, j), then add (i, j) to P. If e(j) < 0, then stop.

retreat(i). If the residual network does not contain an admissible arc (i, j), then update π(i) to π(i) + ε/2 + min {c̄_ij : (i, j) ∈ A(i) and r_ij > 0}. If P has at least one arc, then delete (pred(i), i) from P.

The retreat step relabels (increases the potential of) node i for the purpose of creating new admissible arcs emanating from this node; in the process, the arc (pred(i), i) becomes inadmissible, so we delete it from P. The proof of Lemma 5.4 implies that increasing the node potential maintains ε/2-optimality of the pseudoflow.

We next consider the complexity of this implementation of the Improve-Approximation procedure. Each execution of the procedure performs 1 + ⌈log U⌉ RHS-scaling phases. At the beginning of the Δ-scaling phase, S(2Δ) = ∅, so Δ ≤ e(i) < 2Δ for each node i ∈ S(Δ). During the Δ-scaling phase, the algorithm augments Δ units of flow from a node k in S(Δ) to a node l with e(l) < 0. This operation reduces the excess at node k to a value less than Δ, and it ensures that the excess at node l is less than Δ. Consequently, each augmentation deletes a node from S(Δ) and, after at most n augmentations, the method begins a new scaling phase. The algorithm thus performs a total of O(n log U) augmentations.

We next count the number of advance steps. Each advance step adds an arc to the partial admissible path, and each retreat step deletes an arc from it. Thus, there are two types of advance steps: (i) those that add arcs to an admissible path on which the algorithm later performs an augmentation, and (ii) those that are later cancelled by a retreat step. Since the set of admissible arcs is acyclic (by Lemma 5.7), after at most n advance steps of the first type the algorithm will discover an admissible path and will perform an augmentation. Since the algorithm performs a total of O(n log U) augmentations, the number of advance steps of the first type is at most O(n^2 log U). The algorithm performs at most O(n^2) advance steps of the second type, because each retreat step increases a node potential and, by Lemma 5.5, node potentials increase O(n^2) times.
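The advance/retreat scheme above is essentially a depth-first search that is allowed to raise potentials. The sketch below (helper functions assumed, names invented) grows the partial admissible path on a stack and backs up one arc whenever a retreat relabels the tip of the path.

    def find_admissible_path(k, e, admissible_arc_from, relabel):
        # admissible_arc_from(i) returns (j, arc) for some admissible residual
        # arc (i, j), or None; relabel(i) applies the retreat step's update to
        # pi. Both helpers are assumptions of this sketch.
        path = []                          # arcs of the partial path P
        i = k
        while e[i] >= 0:                   # stop as soon as the tip has a deficit
            nxt = admissible_arc_from(i)
            if nxt is not None:            # advance(i): extend P
                j, arc = nxt
                path.append((i, arc))
                i = j
            else:                          # retreat(i): relabel, drop (pred(i), i)
                relabel(i)
                if path:
                    i, _ = path.pop()      # back up to the predecessor of i
        return path                        # an admissible path from k to a deficit node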

The total number of advance steps, therefore, is O(n^2 log U). The amount of time needed to identify admissible arcs is O(Σ_i n |A(i)|) = O(nm), since between two consecutive potential increases of a node i, the algorithm examines |A(i)| arcs to test admissibility. We have therefore established the following result.

Theorem 5.10. The double scaling algorithm solves the uncapacitated transportation problem in O((nm + n^2 log U) log nC) time.

To solve the capacitated minimum cost flow problem, we first transform it into an uncapacitated transportation problem and then apply the double scaling algorithm. We leave it as an exercise for the reader to show how this transformation permits us to use the double scaling algorithm to solve the capacitated minimum cost flow problem in O(nm log U log nC) time. The references describe further modest improvements of the algorithm. For problems that satisfy the similarity assumption, a variant of this algorithm that uses more sophisticated data structures is currently the fastest polynomial-time algorithm for most classes of the minimum cost flow problem.

5.10 Sensitivity Analysis

The purpose of sensitivity analysis is to determine the changes in the optimum solution of a minimum cost flow problem resulting from changes in the data (the supply/demand vector, or the capacity or cost of any arc). Traditionally, researchers and practitioners have conducted this sensitivity analysis using the primal simplex or dual simplex algorithms. There is, however, a conceptual drawback to this approach. The simplex based approach maintains a basis tree at every iteration and conducts sensitivity analysis by determining the changes in the basis tree precipitated by changes in the data. The basis in the simplex algorithm is often degenerate, though, and consequently changes in the basis tree do not necessarily translate into changes in the solution. Therefore, the simplex based approach tells us about changes in the basis tree, but it does not necessarily give information about the changes in the solution as the data changes.

We present another approach for performing sensitivity analysis, one that does not share this drawback. For simplicity, we limit our discussion to a unit change of only a particular type. In a sense, however, the discussion is quite general: it is possible to reduce more complex changes to a sequence of the simple changes we consider. We show that the sensitivity analysis for the minimum cost flow problem essentially reduces to solving shortest path or maximum flow problems.

Let x* denote an optimum solution of a minimum cost flow problem. Let π* denote the corresponding node potentials, and let c̄_ij = c_ij - π*(i) + π*(j) denote the reduced costs. Further, let d(k, l) denote the shortest distance from node k to node l in the residual network with respect to the original arc lengths c_ij. Since, for any directed path P from node k to node l, Σ_{(i,j) ∈ P} c̄_ij = Σ_{(i,j) ∈ P} c_ij - π*(k) + π*(l), d(k, l) equals the shortest distance from node k to node l with respect to the arc lengths c̄_ij, plus (π*(k) - π*(l)). At optimality, the reduced costs c̄_ij of all arcs in the residual network are nonnegative. Hence, we can compute d(k, l) for all pairs of nodes k and l by solving n single-source shortest path problems with nonnegative arc lengths.

Supply/Demand Sensitivity Analysis

We first study changes in the supply/demand vector. Suppose that the supply/demand of a node k becomes b(k) + 1 and the supply/demand of another node l becomes b(l) - 1. (Recall from Section 1.1 that feasibility of the minimum cost flow problem dictates that Σ_{i ∈ N} b(i) = 0; hence, we must change the supply/demand values of two nodes by equal magnitudes, and we must increase one value and decrease the other.) Then x* is a pseudoflow for the modified problem; moreover, this vector satisfies the dual feasibility conditions. Augmenting one unit of flow from node k to node l along the shortest path in the residual network G(x*) converts this pseudoflow into a flow. This augmentation changes the objective function value by d(k, l) units and, by the same argument used for the successive shortest path algorithm, the resulting flow is optimum for the modified minimum cost flow problem.

Arc Capacity Sensitivity Analysis

We next consider a change in an arc capacity. Suppose that the capacity of an arc (p, q) increases by one unit. The flow x* is feasible for the modified problem. In addition, if c̄_pq ≥ 0, then it satisfies the optimality conditions C5.2-C5.4; hence, it is an optimum flow for the modified problem. If c̄_pq < 0, then condition C5.4 dictates that the flow on the arc must equal its capacity. We satisfy this requirement by increasing the flow on the arc (p, q) by one unit, which produces a pseudoflow with an excess of one unit at node q and a deficit of one unit at node p. We convert the pseudoflow into a flow by augmenting one unit of flow from node q to node p along the shortest path in the residual network, which changes the objective function value by an amount c_pq + d(q, p). This flow is optimum by our observations concerning supply/demand sensitivity analysis.

When the capacity of the arc (p, q) decreases by one unit and the flow on the arc is strictly less than its capacity, then x* remains feasible, and hence optimum, for the modified problem. If, however, the flow on the arc is at its capacity, we decrease the flow by one unit and augment one unit of flow from node p to node q along the shortest path in the residual network. This augmentation changes the objective function value by an amount -c_pq + d(p, q).

The preceding discussion shows how to determine the changes in the optimum solution value due to unit changes in any two supply/demand values, or due to a unit change in any arc capacity, by solving n single-source shortest path problems. We can, however, obtain useful upper bounds on these changes by solving only two shortest path problems. This observation uses the fact that d(k, l) ≤ d(k, 1) + d(1, l) for all pairs of nodes k and l. Consequently, we need only determine the shortest path distances from node 1 to all other nodes, and from all other nodes to node 1, to compute upper bounds on all d(k, l). Recent empirical studies have suggested that these upper bounds are very close to the actual values; often the upper bounds and the actual values are equal, and usually they are within 5% of each other.
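Computing the quantities d(k, l) used above is an ordinary Dijkstra computation once the arc lengths are replaced by the reduced costs, which are nonnegative at optimality. The sketch below uses invented names and an assumed residual_arcs helper; it returns d(k, l) with respect to the original costs by undoing the potential shift at the end.

    import heapq, math

    def residual_distance(n, residual_arcs, pi, k, l):
        # residual_arcs(i) yields (j, c) for residual arcs (i, j) with
        # original cost c; pi holds the optimal node potentials.
        dist = [math.inf] * n
        dist[k] = 0
        heap = [(0, k)]
        while heap:
            d, i = heapq.heappop(heap)
            if d > dist[i]:
                continue
            for j, c in residual_arcs(i):
                nd = d + c - pi[i] + pi[j]     # nonnegative reduced-cost length
                if nd < dist[j]:
                    dist[j] = nd
                    heapq.heappush(heap, (nd, j))
        # d(k, l) w.r.t. the original costs equals the reduced-cost distance
        # plus pi(k) - pi(l)
        return dist[l] + pi[k] - pi[l]

For a supply/demand change at nodes k and l, the objective changes by exactly this value; the two-shortest-path upper bound d(k, l) ≤ d(k, 1) + d(1, l) follows by calling the routine from node 1, and on the reversed network into node 1.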

Cost Sensitivity Analysis

Finally, we discuss changes in arc costs, which we assume are integral. Suppose that the cost of an arc (p, q) increases by one unit. This change increases the reduced cost of arc (p, q) by one unit as well. If c̄_pq < 0 before the change, then after the change c̄_pq ≤ 0. Similarly, if c̄_pq > 0 before the change, then c̄_pq ≥ 0 after the change. In both cases, we preserve the optimality conditions. If, however, c̄_pq = 0 before the change and x*_pq > 0, then after the change c̄_pq = 1 > 0 and the solution violates condition C5.2. To satisfy the optimality condition of the arc, we must either reduce the flow on arc (p, q) to zero, or change the potentials so that the reduced cost of arc (p, q) becomes zero.

We first try to reroute the flow x*_pq from node p to node q without violating any of the optimality conditions. We do so by solving a maximum flow problem defined as follows: (i) set the flow on the arc (p, q) to zero, thus creating an excess of x*_pq at node p and a deficit of x*_pq at node q; (ii) define node p as the source node and node q as the sink node; and (iii) send a maximum of x*_pq units from the source to the sink. We permit the maximum flow algorithm to change flows only on arcs with zero reduced costs, since otherwise it would generate a solution that violates C5.2 and C5.4. Let v° denote the flow sent from node p to node q, and let x° denote the resulting arc flows. If v° = x*_pq, then x° denotes a minimum cost flow of the modified problem. In this case, the optimal objective function values of the original and the modified problems are the same.

On the other hand, if v° < x*_pq, then the maximum flow algorithm yields an s-t cutset (X, N - X) with the properties that p ∈ X, q ∈ N - X, and every forward arc in the cutset with zero reduced cost is capacitated, i.e., at the arc's capacity. We then decrease the node potential of every node in N - X by one unit. It is easy to verify by case analysis that this change in node potentials maintains the optimality conditions and, furthermore, decreases the reduced cost of arc (p, q) to zero. Consequently, we can set the flow on arc (p, q) equal to x*_pq - v° and thereby obtain a feasible minimum cost flow. In this case, the objective function value of the modified problem is x*_pq - v° units more than that of the original problem.

5.11 Assignment Problem

The assignment problem is one of the best-known and most intensively studied special cases of the minimum cost flow problem. As already indicated in Section 1.1, this problem is defined by a set N1, say of persons, a set N2, say of objects (with |N1| = |N2| = n), a collection of node pairs A ⊆ N1 × N2 representing possible person-

to-object assignments, and a cost c_ij (possibly negative) associated with each element (i, j) in A. The objective is to assign each person to exactly one object, choosing the assignment with minimum possible cost. The problem can be formulated as the following linear program:

    Minimize Σ_{(i,j) ∈ A} c_ij x_ij   (5.18a)

    subject to

    Σ_{j : (i,j) ∈ A} x_ij = 1, for all i ∈ N1,   (5.18b)

    Σ_{i : (i,j) ∈ A} x_ij = 1, for all j ∈ N2,   (5.18c)

    x_ij ≥ 0, for all (i, j) ∈ A.   (5.18d)

The assignment problem is a minimum cost flow problem defined on a network G with node set N = N1 ∪ N2, arc set A, arc costs c_ij, and supply/demand specified as b(i) = 1 if i ∈ N1 and b(i) = -1 if i ∈ N2. The network G has 2n nodes and m = |A| arcs. The assignment problem is also known as the bipartite matching problem.

We use the following notation. A 0-1 solution x of (5.18) is an assignment. If x_ij = 1, then i is assigned to j and j is assigned to i. A 0-1 solution x satisfying Σ_{j : (i,j) ∈ A} x_ij ≤ 1 for all i ∈ N1 and Σ_{i : (i,j) ∈ A} x_ij ≤ 1 for all j ∈ N2 is called a partial assignment. Associated with any partial assignment x is an index set X defined as X = {(i, j) ∈ A : x_ij = 1}. A node not assigned to any other node is unassigned.

Researchers have suggested numerous algorithms for solving the assignment problem. Several of these algorithms apply, either explicitly or implicitly, the successive shortest path algorithm for the minimum cost flow problem. These algorithms typically select the initial node potentials with the following values: π(i) = 0 for all i ∈ N1 and π(j) = min {c_ij : (i, j) ∈ A} for all j ∈ N2. All reduced costs defined by these node potentials are nonnegative. The successive shortest path algorithm then solves the assignment problem as a sequence of n shortest path problems with nonnegative arc lengths, and consequently runs in O(n S(n, m, C)) time. (Recall that S(n, m, C) is the time required to solve a shortest path problem with nonnegative arc lengths.)

The relaxation approach is another popular approach, and it is also closely related to the successive shortest path algorithm. The relaxation algorithm removes, or relaxes, the constraint (5.18c), thus allowing any object to be assigned to more than one person. This relaxed problem is easy to solve: assign each person i to an object j with the smallest c_ij value. As a result, some objects may be unassigned and other objects may be overassigned. The algorithm then gradually builds a feasible assignment by identifying shortest paths from overassigned objects to unassigned objects and augmenting flows on these paths. The algorithm solves at most n shortest path problems. Because this approach always maintains the optimality conditions, it can solve these shortest path problems by implementations of Dijkstra's algorithm. Consequently, this algorithm also runs in O(n S(n, m, C)) time.
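The relaxation algorithm's first stage is straightforward to code. The sketch below (invented names) drops constraint (5.18c), gives every person a cheapest object, and reports which objects are overassigned or unassigned; the subsequent shortest path stage then repairs exactly these violations.

    def relax_and_assign(n2, arcs):
        # arcs: list of (i, j, c) over persons i and objects j in range(n2)
        best = {}                                  # person -> (object, cost)
        for i, j, c in arcs:
            if i not in best or c < best[i][1]:
                best[i] = (j, c)
        load = [0] * n2                            # how many persons chose object j
        for j, _ in best.values():
            load[j] += 1
        overassigned = [j for j in range(n2) if load[j] > 1]
        unassigned = [j for j in range(n2) if load[j] == 0]
        return best, overassigned, unassigned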

j) each node (artificial) i by two nodes (i. Dijkstra's algorithm. we will discuss a different type of algorithm based upon the notion of an auction. j). Since these algorithms are special cases of other algorithms specify their details. To do we apply the tissignment algorithm twice. thus allowing any object to be assigned to more than one an object j person. can solve the shortest path Consequently.C)) time. Before doing so. is for maintaining a strongly feasible is fairly another solution procedure for the assignment problem.4. we can also use any algorithm for the to solve the shortest path problem with arbitrary arc lengths. is well knovkn solution procedure for the assignment problem. with provisions basis. by an arc (i. and. The node replaces each arc splitting tremsformation replaces (i. the constraint (5. in this section. The algorithm solves at most n shortest path problems.143 The relaxation approach is another popular approach. doesn't. i and i'.18c). i'). we show another intimate connection between the assignment problem and the shortest path problem. This relaxed problem smallest Cjj is easy to solve: assign each person i to with the value. we will not Rather. which is also closely related to the successive shortest path algorithm. This approach efficient in practice. if it The first application determines if the network contains shortest path. it Because this approach always maintains the optimality conditions. a negative cycle. assignment problem so. Interestingly. however. Assignments and Shortest Paths We have seen that by solving a sequence of shortest path problems. some implementations of it provide polynomial time bounds.the tissignment problem. we can solve any assignment problem. shortest paths The algorithm gradually builds from overassigned objects to assignment by identifying vmassigned objects and augmenting flows on these paths. For problems that satisfy the similarity assumption. this algorithm also One method. the second application identifies a Both the appbcations use the node splitting transformation described in Section 2. and adds an zero cost arc We first : note that the transformed network always has a feasible solution with cost zero . problems by implementations of runs in 0(n S(n. or relaxes. the Hungarian essentially the primal-dual variant of the successive shortest path algorithm.m. moreover. some objects may be unassigned and other a feasible objects may be overassigned. we have described earlier. The network simplex algorithm. a cost scaling algorithm provides the best-knowT> time bound fo. As a result. The relaxation algorithm removes.

j 2). (Jk' J]) Conversely. We if next show that the optimal value of the assignment problem negative if and only the original network has a negative cost cycle. (j^. First.144 namely. the assignment containing all artificial arcs is (i. (J2 / it can be no ^ ^ • more expensive than the partial assignment is { (jj jA ) / • • • » (Jk.Iv Since the optimal assignment cost negative. the assignment must contain a Qk' ii arcs of the form is . iy\2 -J3 ' ' • * " - . because j. some partial assignment PA j| must be J2 But then by construction of the transformed network. This solution must contain at least one arc of the form set of (i. the cycle ~ • ~ Jk ~ )l ^ ^ negative cost cycle in the original network.'). (J2 . the cost of the optimal assignment must be negative. ^^^ 2 Ok+1 Jk+1^' '^h\' jp^) Therefore. Then the assigment negative cost.. j') with * { j . Jl^-jj. t ) . • negative. (J2 / J3)/ • • • . suppose the cost of an optimeil assignment is i negative. i'). .. jo ) / • • • / ^'- ^^^ ^°^^ °^ *^'^ "partial" assignment nonpositive. suppose the original network contains { a negative cost cycle. PA = (j| . Consequently.

Figure 5.3. (a) The original network. (b) The transformed network.

If the original network contains no negative cost cycle, then we can obtain a shortest path between a specific pair of nodes, say from node 1 to node n, as follows. We consider the transformed network as described earlier, and we delete the nodes 1' and n and the arcs incident to these nodes. See Figure 5.3 for an example of this transformation. Now observe that each path from node 1 to node n in the original network has a corresponding assignment of the same cost in the transformed network, and the converse is also true. For example, the path 1-2-5 in Figure 5.3(a) has the corresponding assignment {(1, 2'), (2, 5'), (3, 3'), (4, 4')} in Figure 5.3(b), and the assignment {(1, 2'), (2, 4'), (4, 5'), (3, 3')} in Figure 5.3(b) has the corresponding path 1-2-4-5 in Figure 5.3(a). Consequently, an optimum assignment in the transformed network gives a shortest path in the original network.

The Auction Algorithm

We now describe an algorithm for the assignment problem known as the auction algorithm. We first describe a pseudopolynomial time version of the algorithm and then incorporate scaling to make the algorithm polynomial time. This scaling algorithm is an instance of the bit-scaling algorithm described in Section 1.6. To describe the auction algorithm, we consider the maximization version of the assignment problem, since this version appears more natural for interpreting the algorithm.

Suppose n persons want to buy n cars that are to be sold by auction. Each person i is interested in a subset of the cars, and has a nonnegative utility u_ij for car j, for each (i, j) ∈ A(i). The objective is to find an assignment with maximum total utility. (We can set c_ij = -u_ij to reduce this problem to (5.18).) Let C = max {|u_ij| : (i, j) ∈ A}. We assume that all utilities and prices are measured in dollars. At each stage of the algorithm, there is an asking price for car j, represented by price(j). For a given set of asking prices, the marginal utility of person i for buying car j is u_ij - price(j). At each iteration, an unassigned person bids on a car that has the highest marginal utility. We associate with each person i a number value(i), which is an upper bound on that person's highest marginal utility, i.e., value(i) ≥ max {u_ij - price(j) : (i, j) ∈ A(i)}. We call a bid (i, j) admissible if value(i) = u_ij - price(j), and inadmissible otherwise. The algorithm requires every bid in the auction to be admissible. If person i is next in turn to bid and has no admissible bid, then value(i) is too high, and we decrease this value to max {u_ij - price(j) : (i, j) ∈ A(i)}.

The algorithm proceeds by persons bidding on cars. If a person i makes a bid on car j, then the price of car j goes up by $1; therefore, subsequent bids are of higher value. Person i is then assigned to car j. The person k who was the previous bidder for car j, if there was one, becomes unassigned; subsequently, person k must bid on another car. The auction stops when each person is assigned a car. We now describe this bidding procedure algorithmically. The procedure can start with any valid choices for value(i) and price(j); for example, we can set price(j) = 0 for each car j and value(i) = max {u_ij : (i, j) ∈ A(i)} for each person i. Although this initialization is sufficient for the pseudopolynomial time version, the polynomial time version requires a more clever initialization. At termination, the procedure yields an almost optimum assignment.

procedure BIDDING(u, x°, value, price);
begin
    let the initial assignment be a null assignment;
    while some person is unassigned do
    begin
        select an unassigned person i;
        if some bid (i, j) is admissible then
        begin
            assign person i to car j;
            price(j) := price(j) + 1;
            if person k was the previous bidder for car j, then person k becomes unassigned;
        end
        else update value(i) := max {u_ij - price(j) : (i, j) ∈ A(i)};
    end;
    let x° be the current assignment;
end;
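A direct transcription of the Bidding procedure into Python follows. It is a sketch that assumes a feasible instance (a complete assignment exists) and uses invented identifiers; utilities are supplied as a dictionary u[(i, j)], and cars[i] lists the cars person i is interested in.

    def bidding(n, cars, u):
        price = {j: 0 for i in range(n) for j in cars[i]}
        value = [max(u[i, j] for j in cars[i]) for i in range(n)]
        owner = {}                                   # car -> current bidder
        assigned = [None] * n                        # person -> car
        unassigned = list(range(n))
        while unassigned:
            i = unassigned.pop()
            # an admissible bid satisfies value(i) = u_ij - price(j)
            j = next((j for j in cars[i] if value[i] == u[i, j] - price[j]), None)
            if j is None:                            # value(i) too high: decrease it
                value[i] = max(u[i, j] - price[j] for j in cars[i])
                unassigned.append(i)
            else:                                    # admissible bid: i takes car j
                k = owner.get(j)
                if k is not None:                    # previous bidder is displaced
                    assigned[k] = None
                    unassigned.append(k)
                owner[j] = i
                assigned[i] = j
                price[j] += 1                        # later bids must be higher
        return assigned, price, value

Multiplying every u_ij by n + 1 before calling the procedure, as described below, turns the within-$n guarantee into exact optimality.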

We now show that this procedure gives an assignment whose utility is within $n of the optimum utility. Let x° denote a partial assignment at some point during the execution of the auction algorithm and let x* denote an optimum assignment. Recall that value(i) is always an upper bound on the highest marginal utility of person i, i.e.,

value(i) ≥ u_ij - price(j), for all (i, j) ∈ A(i).    (5.19)

Also observe that as the auction proceeds, the prices of cars increase and hence the marginal values to the persons decrease. The partial assignment x° also satisfies the condition

value(i) = u_ij - price(j) + 1, for all (i, j) ∈ x°,    (5.20)

because price(j) goes up by $1 at the time of bidding and value(i) = u_ij - price(j) immediately after the bid. Since, by (5.19), every utility u_ij is at most value(i) + price(j), the utility of the optimum assignment satisfies

Σ_{(i,j) ∈ x*} u_ij ≤ Σ_{i ∈ N1} value(i) + Σ_{j ∈ N2} price(j).    (5.21)

Let UB(x°) be defined as follows:

UB(x°) = Σ_{(i,j) ∈ x°} u_ij + Σ_{i ∈ N°} value(i),    (5.22)

with N° denoting the unassigned persons in N1. Using (5.20) in (5.22), and observing that unassigned cars in N2 have zero prices, we obtain

UB(x°) ≥ Σ_{(i,j) ∈ x*} u_ij - n.    (5.23)

As we show in our discussion to follow, the algorithm can change the node values and prices at most a finite number of times. Since the algorithm will either modify a node value or a node price whenever x° is not an assignment, within a finite number of steps the method must terminate with a complete assignment x°. Then UB(x°) represents the utility of this assignment (since N° is empty), and by (5.23) this utility is at most $n less than the maximum utility.

It is easy to modify the method, however, to obtain an optimum assignment. Suppose we multiply all utilities by (n+1) before applying the Bidding procedure. Since all utilities are now multiples of (n+1), two assignments with distinct total utility will differ by at least (n+1) units. The procedure yields an assignment that is within n units of the optimum value and, hence, must be optimal.

We next discuss the complexity of the Bidding procedure as applied to the assignment problem with all utilities first multiplied by (n+1). In this modified problem, the largest utility is C' = (n+1)C. We show that the value of any person decreases O(nC') times.

Since all utilities are nonnegative, (5.23) implies UB(x°) ≥ -n. Substituting this inequality in (5.22) yields Σ_{i ∈ N°} value(i) ≥ -n(C' + 1). Since value(i) decreases by at least one unit each time it changes, this inequality shows that the value of any person decreases at most O(nC') times; thus, the values change O(n²C') times in total.

We next examine the number of iterations performed by the procedure. Each iteration either decreases the value of a person or assigns the person to some car. Further, since the price of car j increases by one unit each time person i is assigned to car j, value(i) > u_ij - price(j) after the assignment; hence, person i can be assigned at most |A(i)| times between two consecutive decreases of value(i). This observation gives a bound of O(nmC') on the total number of times all bidders become assigned. As can be shown, using the "current arc" data structure permits us to locate admissible bids in O(nmC') total time. Since decreasing the value of a person once takes O(|A(i)|) time, the total time needed to update the values of all persons is O(Σ_i n|A(i)|C') = O(nmC'). Since C' = (n+1)C, we have established the following result.

Theorem 5.7. The auction algorithm solves the assignment problem in O(n²mC) time.

The auction algorithm is potentially very slow because it can increase prices (and thus decrease values) in small increments of $1, and the final prices can be as large as n²C (the values as small as -n²C). Using a scaling technique in the auction algorithm ensures that the prices and values do not change too many times. As in the bit-scaling technique described in Section 1.6, we decompose the original problem into a sequence of O(log nC) assignment problems and solve each problem by the auction algorithm. We use the optimum prices and values of one problem as a starting solution for the subsequent problem, and show that the prices and values change only O(n) times per scaling phase. Thus, we solve each problem in O(nm) time and solve the original problem in O(nm log nC) time. The scaling version of the auction algorithm first multiplies all utilities by (n+1) and then solves a sequence of K = ⌈log (n+1)C⌉ assignment problems P1, P2, ..., PK.

The problem Pk is an assignment problem in which the utility of arc (i, j) is the k leading bits in the binary representation of u_ij, assuming (by adding leading zeros if necessary) that each u_ij is K bits long. In other words, the problem Pk has the arc utilities u_ij^k = ⌊u_ij / 2^(K-k)⌋. Note that in the problem P1 all utilities are 0 or 1, and subsequently u_ij^(k+1) = 2 u_ij^k + {0 or 1}, depending upon whether the newly added bit is 0 or 1. The scaling algorithm works as follows:

algorithm ASSIGNMENT;
begin
    multiply all u_ij by (n+1);
    K : = ⌈log (n+1)C⌉;
    price(j) : = 0 for each car j;
    value(i) : = 0 for each person i;
    for k : = 1 to K do
    begin
        let u_ij^k : = ⌊u_ij / 2^(K-k)⌋ for each (i, j) ∈ A;
        price(j) : = 2 price(j) for each car j;
        value(i) : = 2 value(i) + 1 for each person i;
        BIDDING(u^k, x°, value, price);
    end;
end;
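The outer scaling loop can be sketched as follows (again in Python, as our own illustration). Here bidding_phase denotes a hypothetical variant of the bidding() routine above that starts from the given prices and values instead of computing its own.

    import math

    def assignment_by_scaling(A, u, n, C):
        # multiplying by (n+1) makes distinct assignment utilities differ by
        # at least n+1, so a solution within n units of the optimum is optimal
        u = {i: {j: (n + 1) * u[i][j] for j in A[i]} for i in range(n)}
        K = math.ceil(math.log2((n + 1) * C))
        price = {j: 0 for i in range(n) for j in A[i]}
        value = [0] * n
        x = None
        for k in range(1, K + 1):
            # problem P_k keeps the k leading bits of each utility
            uk = {i: {j: u[i][j] >> (K - k) for j in A[i]} for i in range(n)}
            for j in price:
                price[j] *= 2                 # price(j) := 2 price(j)
            for i in range(n):
                value[i] = 2 * value[i] + 1   # value(i) := 2 value(i) + 1
            x = bidding_phase(A, uk, n, price, value)
        return x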

The assignment algorithm performs a number of cost scaling phases. In the k-th scaling phase, it obtains a near-optimum solution of the problem with the utilities u_ij^k. It is easy to verify that before the algorithm invokes the Bidding procedure, the prices and values satisfy value(i) ≥ max {u_ij^k - price(j) : (i, j) ∈ A(i)} for each person i, and the Bidding procedure maintains this condition throughout its execution. In the last scaling phase, the algorithm solves the assignment problem with the original utilities and obtains an optimum solution of the original problem. Observe that in each scaling phase the algorithm starts with a null assignment; the purpose of each scaling phase is to obtain good prices and values for the subsequent scaling phase.

We next discuss the complexity of this assignment algorithm. The crucial result is that the prices and values change only O(n) times during each execution of the Bidding procedure. We define the reduced utility of an arc (i, j) in the k-th scaling phase as

ū_ij = u_ij^k - price(j) - value(i).

In this expression, price(j) and value(i) have the values computed just before calling the Bidding procedure. For any assignment x we have

Σ_{(i,j) ∈ x} ū_ij = Σ_{(i,j) ∈ x} u_ij^k - Σ_{j ∈ N2} price(j) - Σ_{i ∈ N1} value(i).

Hence, the reduced utility of an assignment differs from the utility of that assignment by a constant amount, and consequently an assignment that maximizes the reduced utility also maximizes the utility. Since value(i) ≥ u_ij^k - price(j) for each (i, j) ∈ A, we have

ū_ij ≤ 0, for all (i, j) ∈ A.    (5.24)

Now consider the reduced utilities of arcs in the assignment x^(k-1) (the final assignment at the end of the (k-1)-st scaling phase). The equality (5.20) implies that

u_ij^(k-1) - price'(j) - value'(i) = -1, for all (i, j) ∈ x^(k-1),    (5.25)

where price'(j) and value'(i) are the corresponding values at the end of the (k-1)-st scaling phase. Before calling the Bidding procedure, we set price(j) = 2 price'(j), value(i) = 2 value'(i) + 1, and u_ij^k = 2 u_ij^(k-1) + {0 or 1}. Substituting these relationships in (5.25), we find that the reduced utilities ū_ij of arcs in x^(k-1) are either -2 or -3. Hence, the optimum reduced utility is at least -3n. If x° is some partial assignment in the k-th scaling phase, then (5.23) implies that UB(x°) ≥ -4n. Using this result and (5.20) in (5.22) yields

Σ_{i ∈ N°} value(i) ≥ -4n.    (5.26)

Hence, for any person i, value(i) decreases O(n) times. Using this result in the proof of Theorem 5.7, we observe that the Bidding procedure would terminate in O(nm) time. The assignment algorithm applies the Bidding procedure O(log nC) times and, consequently, runs in O(nm log nC) time. We summarize our discussion.

Theorem 5.8. The scaling version of the auction algorithm solves the assignment problem in O(nm log nC) time.

The scaling version of the auction algorithm can be further improved to run in O(√n m log nC) time. This improvement is based on the following implication of (5.26): if we prohibit a person i from bidding once value(i) ≤ -4√n, then by (5.26) the number of unassigned persons is at most √n. Hence, the algorithm takes O(√n m) time to assign the first n - ⌈√n⌉ persons and O((⌈√n⌉)m) time to assign the remaining ⌈√n⌉ persons. For example, if n = 10,000, then the auction algorithm would assign 99% of the persons in 1% of the overall running time and the remaining 1% of the persons in the remaining 99% of the time. We therefore terminate the execution of the auction algorithm when it has assigned all but ⌈√n⌉ persons and use successive shortest path algorithms to assign these remaining persons. It so happens that these shortest paths have length O(n), and thus Dial's algorithm, as described in Section 3.2, will find them in O(m) time. This version of the auction algorithm solves a scaling phase in O(√n m) time, and its overall running time is O(√n m log nC). If we invoke the similarity assumption, this version of the algorithm currently has the best known time bound for solving the assignment problem.

6. Reference Notes

In this section, we present reference notes on topics covered in the text. This discussion has three objectives: (i) to review important theoretical contributions on each topic, (ii) to point out inter-relationships among different algorithms, and (iii) to comment on the empirical aspects of the algorithms.

6.1 Introduction

The study of network flow models predates the development of linear programming techniques. The first studies in this problem domain, conducted by Kantorovich [1939], Hitchcock [1941], and Koopmans [1947], considered the transportation problem, a special case of the minimum cost flow problem. These studies provided some insight into the problem structure and yielded incomplete algorithms. Interest in network problems grew with the advent of the simplex algorithm by Dantzig in 1947. Dantzig [1951] specialized the simplex algorithm for the transportation problem. He noted the triangularity of the basis and the integrality of the optimum solution. Orden [1956] generalized this work by specializing the simplex algorithm for the uncapacitated minimum cost flow problem. The network simplex algorithm for the capacitated minimum cost flow problem followed from the development of the bounded variable simplex method for linear programming by Dantzig [1955]. The book by Dantzig [1962] contains a thorough description of these contributions along with historical perspectives.

During the 1950's, researchers began to exhibit increasing interest in the minimum cost flow problem as well as its special cases (the shortest path problem, the maximum flow problem, and the assignment problem), mainly because of their important applications. Soon researchers developed special purpose algorithms to solve these problems. Dantzig, Ford and Fulkerson pioneered those efforts. Whereas Dantzig focused on the primal simplex based algorithms, Ford and Fulkerson developed primal-dual type combinatorial algorithms to solve these problems. Their book, Ford and Fulkerson [1962], presents a thorough discussion of the early research conducted by them and by others. It also covers the development of flow decomposition theory, which is credited to Ford and Fulkerson. Since these pioneering works, network problems and their generalizations emerged as major research topics in operations research.

This research is documented in thousands of papers and many text and reference books. We shall be surveying many important research papers in the following sections. Several important books summarize developments in the field and serve as a guide to the literature: Ford and Fulkerson [1962] (Flows in Networks), Berge and Ghouila-Houri [1962] (Programming, Games and Transportation Networks), Iri [1969] (Network Flows, Transportation and Scheduling), Hu [1969] (Integer Programming and Network Flows), Frank and Frisch [1971] (Communication, Transmission and Transportation Networks), Potts and Oliver [1972] (Flows in Transportation Networks), Christophides [1975] (Graph Theory: An Algorithmic Approach), Murty [1976] (Linear and Combinatorial Programming), Lawler [1976] (Combinatorial Optimization: Networks and Matroids), Bazaraa and Jarvis [1978] (Linear Programming and Network Flows), Minieka [1978] (Optimization Algorithms for Networks and Graphs), Jensen and Barnes [1980] (Network Flow Programming), Kennington and Helgason [1980] (Algorithms for Network Programming), Phillips and Garcia-Diaz [1981] (Fundamentals of Network Analysis), Swamy and Thulsiraman [1981] (Graphs, Networks and Algorithms), Papadimitriou and Steiglitz [1982] (Combinatorial Optimization: Algorithms and Complexity), Smith [1982] (Network Optimization Practice), Syslo, Deo and Kowalik [1983] (Discrete Optimization Algorithms), Tarjan [1983] (Data Structures and Network Algorithms), Gondran and Minoux [1984] (Graphs and Algorithms), Rockafellar [1984] (Network Flows and Monotropic Optimization), and Derigs [1988] (Programming in Networks and Graphs). As an additional source of references, the reader might consult the bibliography on network optimization prepared by Golden and Magnanti [1977] and the extensive set of references on integer programming compiled by researchers at the University of Bonn (Kastning [1976], Hausman [1978], and Von Randow [1982, 1985]).

Since the applications of network flow models are so pervasive, no single source provides a comprehensive account of network flow models and their impact on practice. Several researchers have prepared general surveys of selected application areas. Notable among these is the paper by Glover and Klingman [1976] on the applications of minimum cost flow and generalized minimum cost flow problems. A number of books written in special problem domains also contain valuable insight about the range of applications of network flow models. Examples in this category are the paper by Bodin, Golden, Assad and Ball [1983] on vehicle routing and scheduling problems, the books on communication networks by Bertsekas and Gallager [1987] and on transportation planning by Sheffi [1985], and a collection of survey articles on facility location edited by Francis and Mirchandani [1988].

Golden [1988] has described the census rounding application given in Section 1.1.

General references on data structures serve as a useful backdrop for the algorithms presented in this chapter. The book by Aho, Hopcroft and Ullman [1974] is an excellent reference for simple data structures such as arrays, linked lists, doubly linked lists, queues, stacks, binary heaps and d-heaps. The book by Tarjan [1983] is another useful source of references for these topics as well as for more complex data structures such as dynamic trees.

We have mentioned the "similarity assumption" throughout the chapter. Gabow [1985] coined this term in his paper on scaling algorithms for combinatorial optimization problems. This important paper, which contains scaling algorithms for several network problems, greatly helped in popularizing scaling techniques.

6.2 Shortest Path Problem

The shortest path problem and its generalizations have a voluminous research literature. As a guide to these results, we refer the reader to the extensive bibliographies compiled by Gallo, Pallattino, Ruggen and Starchi [1982] and Deo and Pang [1984]. This section, which summarizes some of this literature, focuses especially on issues of computational complexity.

Label Setting Algorithms

The first label setting algorithm was suggested by Dijkstra [1959], and independently by Dantzig [1960] and Whiting and Hillier [1960]. The original implementation of Dijkstra's algorithm runs in O(n²) time, which is the optimal running time for fully dense networks (those with m = Ω(n²)), since any algorithm must examine every arc. Improved running times are possible, however, for sparse networks. The following table summarizes various implementations of Dijkstra's algorithm that have been designed to improve the running time in the worst case or in practice. In the table, d = ⌈2 + m/n⌉ represents the average degree of a node in the network plus 2.

[Table: implementations of Dijkstra's algorithm and their running times; the entries were not recovered from the original.]

Van Emde Boas, Kaas and Zijlstra [1977] suggested a data structure whose analysis depends upon the largest key D stored in a heap. The initialization of this data structure takes O(D) time and each heap operation takes O(log log D) time. When Dijkstra's algorithm is implemented using this data structure, it runs in O(nC + m log log nC) time. Johnson [1982] suggested an improvement of this data structure and used it to implement Dijkstra's algorithm in O(m log log C) time.

The best strongly polynomial-time algorithm to date is due to Fredman and Tarjan [1984], who use a Fibonacci heap data structure. The Fibonacci heap is a somewhat complex, but ingenious, data structure that takes an average of O(log n) time for each node selection (and the subsequent deletion) step and an average of O(1) time for each distance update. Consequently, this data structure implements Dijkstra's algorithm in O(m + n log n) time.

Dial [1969] suggested his implementation of Dijkstra's algorithm because of its encouraging empirical performance; this algorithm was independently discovered by Wagner [1976]. Dial, Glover, Karney and Klingman [1979] have proposed an improved version of Dial's algorithm which runs better in practice. Though Dial's algorithm is only pseudopolynomial-time, its successors have had improved worst-case behavior. Denardo and Fox [1979] suggest several such improvements. Observe that if w = max [1, min {c_ij : (i, j) ∈ A}], then we can use buckets of width w in Dial's algorithm, hence reducing the number of buckets from 1 + C to 1 + (C/w). The correctness of this observation follows from the fact that if d* is the current minimum temporary distance label, then the algorithm will modify no other temporary distance label in the range [d*, d* + w - 1], since each arc has length at least w - 1. Using a multiple level bucket scheme, Denardo and Fox implemented the shortest path algorithm in O(max {k C^(1/k), m log (k+1), nk(1 + C^(1/k)/w)}) time for any choice of k; choosing k = log C yields a time bound of O(m log log C + n log C). Depending on n, m and C, other choices might lead to a modestly better time bound. Johnson [1977b] proposed a related bucket scheme with exponentially growing widths and obtained a running time of O((m + n log C) log log C). This data structure is the same as the R-heap data structure described in Section 3.3, except that it performs binary search over O(log C) buckets to insert nodes into buckets during the redistribution of ranges and the distance updates. The R-heap implementation replaces the binary search by a sequential search and improves the running time by a factor of O(log log C). Ahuja, Mehlhorn, Orlin and Tarjan [1988] suggested the R-heap implementation and its further improvements, as described next.
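The width-w refinement of Dial's algorithm described above is easy to state in code. The sketch below is our own illustration (with a plain, non-cyclic bucket array; Dial's implementation reuses 1 + C/w buckets cyclically).

    def dial(n, adj, source, C, w):
        # adj[i]: list of (j, c_ij) pairs, each arc length at least w - 1;
        # a node with temporary label d(i) resides in bucket d(i) // w
        INF = float('inf')
        d = [INF] * n
        d[source] = 0
        buckets = [set() for _ in range(((n - 1) * C) // w + 2)]
        buckets[0].add(source)
        for b in range(len(buckets)):
            while buckets[b]:
                # labels in the lowest nonempty bucket are permanent,
                # so its nodes may be removed in any order
                i = buckets[b].pop()
                for j, c in adj[i]:
                    if d[i] + c < d[j]:
                        if d[j] < INF:
                            buckets[d[j] // w].discard(j)
                        d[j] = d[i] + c
                        buckets[d[j] // w].add(j)
        return d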

The R-heap implementation described in Section 3.3 uses a single level bucket system. A two-level bucket system improves further on the R-heap implementation of Dijkstra's algorithm. The two-level data structure consists of K (big) buckets, each bucket being further subdivided into L (small) subbuckets. During redistribution, the two-level bucket system redistributes the range of a subbucket over all of its previous buckets. This approach permits the selection of a much larger width of buckets, thus reducing the number of buckets. By using K = L = 2 log C/log log C, this two-level bucket system version of Dijkstra's algorithm runs in O(m + n log C/log log C) time. Incorporating a generalization of the Fibonacci heap data structure in the two-level bucket system with appropriate choices of K and L further reduces the time bound to O(m + n √(log C)). If we invoke the similarity assumption, this approach currently gives the fastest worst-case implementation of Dijkstra's algorithm for all classes of graphs except very sparse ones, for which the algorithm of Johnson [1982] appears more attractive. The Fibonacci heap version of the two-level R-heap is very complex, however, and so it is unlikely that this algorithm would perform well in practice.

Label Correcting Algorithms

Ford [1956] suggested, in skeleton form, the first label correcting algorithm for the shortest path problem. Subsequently, several other researchers, including Ford and Fulkerson [1962] and Moore [1957], studied the theoretical properties of the algorithm. Bellman's [1958] algorithm can also be regarded as a label correcting algorithm. Though specific implementations of label correcting algorithms run in O(nm) time, the most general form is nonpolynomial-time, as shown by Edmonds [1970].

Researchers have exploited the flexibility inherent in the generic label correcting algorithm to obtain algorithms that are very efficient in practice. The modification that adds a node to the LIST (see the description of the Modified Label Correcting Algorithm given in Section 3.4) at the front if the algorithm has previously examined the node earlier and at the end otherwise, is probably the most popular. This modification was conveyed to Pollack and Wiebenson [1960] by D'Esopo, and later refined and tested by Pape [1974]. We shall subsequently refer to this algorithm as D'Esopo and Pape's algorithm. Though this modified label correcting algorithm has excellent computational behavior, in the worst case it runs in exponential time, as shown by Kershenbaum [1981]. A FORTRAN listing of this algorithm can be found in Pape [1980].

Glover, Klingman and Phillips [1985] proposed a generalization of the FIFO label correcting algorithm, called the partitioning shortest path (PSP) algorithm. For general networks, the PSP algorithm runs in O(nm) time, while for networks with nonnegative arc lengths it runs in O(n²) time and has excellent computational behavior. Other variants of the label correcting algorithms and their computational attributes can be found in Glover, Klingman, Phillips and Schneider [1985].

Researchers have been interested in developing polynomial-time primal simplex algorithms for the shortest path problem. Dial, Glover, Karney and Klingman [1979] and Zadeh [1979] showed that Dantzig's pivot rule (i.e., pivoting in the arc with the largest violation of the optimality condition) for the shortest path problem starting from an artificial basis leads to Dijkstra's algorithm; thus, the number of pivots is O(n) if all arc costs are nonnegative. Primal simplex algorithms for the shortest path problem with arbitrary arc lengths are not that efficient. Akgul [1985a] developed a simplex algorithm for the shortest path problem that performs O(n²) pivots. Using simple data structures, Akgul's algorithm runs in O(n³) time, which can be reduced to O(nm + n² log n) using the Fibonacci heap data structure. Goldfarb, Hao and Kai [1986] described another simplex algorithm for the shortest path problem; the number of pivots and the running times for this algorithm are comparable to those of Akgul's algorithm. Orlin [1985] showed that the simplex algorithm with Dantzig's pivot rule solves the shortest path problem in O(n² log nC) pivots. Ahuja and Orlin [1988] recently discovered a scaling variation of this approach that performs O(n² log C) pivots and runs in O(nm log C) time; this algorithm uses simple data structures.

All Pair Shortest Path Algorithms

Most algorithms that solve the all pair shortest path problem involve matrix manipulation. The first such algorithm appears to be a part of the folklore; Lawler [1976] describes this algorithm in his textbook. The complexity of this algorithm is O(n³ log n), which can be improved slightly by using more sophisticated matrix multiplication procedures. The algorithm we have presented is due to Floyd [1962] and is based on a theorem by Warshall [1962]. This algorithm runs in O(n³) time and is also capable of detecting the presence of negative cycles.

Dantzig [1967] devised another procedure requiring exactly the same order of calculations. From a worst-case complexity point of view, however, it might be desirable to solve the all pair shortest path problem as a sequence of single source shortest path problems. As pointed out in the text, this approach takes O(nm) time to construct an equivalent problem with nonnegative arc lengths and O(n S(n,m,C)) time to solve the n shortest path problems (recall that S(n,m,C) is the time needed to solve a shortest path problem with nonnegative arc lengths). For very dense networks, the algorithm by Fredman [1976] is faster than this approach in the worst-case complexity. The bibliography by Deo and Pang [1984] contains references for several other all pair shortest path algorithms.

Computational Results

Researchers have extensively tested shortest path algorithms on a variety of network classes. The studies due to Gilsinn and Witzgall [1973], Pape [1974], Kelton and Law [1978], Van Vliet [1978], Dial, Glover, Karney and Klingman [1979], Denardo and Fox [1979], Imai and Iri [1984], Glover, Klingman, Phillips and Schneider [1985], and Gallo and Pallottino [1988] are representative of these contributions.

Unlike the worst-case results, the computational performance of an algorithm depends upon many factors: for example, the manner in which the program is written; the language, compiler and computer used; and the distribution of networks on which the algorithm is tested. Hence, the results of computational studies are only suggestive, rather than conclusive. The results of these studies also depend greatly upon the density of the network. These studies generally suggest that Dial's algorithm is the best label setting algorithm for the shortest path problem: it is faster than the original O(n²) implementation and the binary heap, d-heap and Fibonacci heap implementations of Dijkstra's algorithm for all of the network classes tested by these researchers. Denardo and Fox [1979] also find that Dial's algorithm is faster than their two-level bucket implementation for all of their test problems; extrapolating the results, however, they observe that their implementation would be faster for very large shortest path problems. Researchers have not yet tested the R-heap implementation, and so at this moment no comparison with Dial's algorithm is available.

Among the label correcting algorithms, the algorithms by D'Esopo and Pape and by Glover, Klingman, Phillips and Schneider [1985] are the two fastest. The study by Glover et al. finds that their algorithm is superior to D'Esopo and Pape's algorithm. Other researchers have also compared label setting algorithms with label correcting algorithms. Studies generally suggest that, for very dense networks, label setting algorithms are superior and, for sparse networks, label correcting algorithms perform better.

Kelton and Law [1978] have conducted a computational study of several all pair shortest path algorithms. This study indicates that Dantzig's [1967] algorithm with a modification due to Tabourier [1973] is faster (up to two times) than the Floyd-Warshall algorithm described in Section 3.5. This study also finds that matrix manipulation algorithms are faster than a successive application of a single-source shortest path algorithm for very dense networks, but slower for sparse networks.

6.3 Maximum Flow Problem

The maximum flow problem is distinguished by the long succession of research contributions that have improved upon the worst-case complexity of algorithms; some, but not all, of these improvements have also produced improvements in practice.

Several researchers (Dantzig and Fulkerson [1956], Ford and Fulkerson [1956], and Elias, Feinstein and Shannon [1956]) independently established the max-flow min-cut theorem. Fulkerson and Dantzig [1955] solved the maximum flow problem by specializing the primal simplex algorithm, whereas Ford and Fulkerson [1956] and Elias et al. [1956] solved it by augmenting path algorithms. Since then, researchers have developed a number of algorithms for this problem; Figure 6.2 summarizes the running times of some of these algorithms. In the figure, n is the number of nodes, m is the number of arcs, and U is an upper bound on the integral arc capacities. The algorithms whose time bounds involve U assume integral capacities; the bounds specified for the other algorithms apply to problems with arbitrary rational or real capacities.

consequently.. They also showed that for arbitrary irrational arc capacities. Ford and Fulkerson [1956] observed that the labeling algorithm can perform as many an the as 0(nU) augmentations for networks with integer arc capacities. both with improved computational complexity.e.. this version of the labeling . J O nm 1^ U) r?- log log — log " U . U 17 Ahuja. [1974] [1977] 0(n2 VIS") [1978] Kumar and Maheshwari 0(n3) Galil [1980] 0(n5/3m2/3) [1980]. Running times of maximum flow algorithms. They one showed if the algorithm augments flow along a shortest path (i.162 # 1 Discoverers Running Time [1972] Edmonds and Karp Dinic [1970] 0(nm2) CKn2m) 0(n3) 2 3 4 5 6 Karzanov Cherkasky Malhotra. the labeling algorithm can perform infinite sequence of augmentations and might converge to a value different from flow value. Orhn and Tarjan [1988] (b) uvnm ol + n ^VlogU) (c) O nm V ( Table 6. containing the smallest possible number of arcs) in the residual network.. ) Ahuja and Orlin [1987] 0(nm + n^ . maximum that Edmonds and Karp [1972] suggested two specializations of the labeling algorithm.2. will A breadth first search of the network determine a shortest augmenting path. Shiloach [1978] 7 8 GalU and Naamad 0(nm CXn3) log2 n) Shiloach and Vishkin [1982] Sleator 9 10 11 and Tarjan [1983] 0(nm 0(n3) log n) Tarjan [1984] Gabow[1985] Goldberg [1985] 0(nm 0(n3) log U) 12 13 14 Goldberg and Tarjan [1986] Bertsekas [1986] CXnm 0(n3) log (n^/m)) 15 16 Cheriyan and Maheshwari [1987] 0(n2 Vm + •. Ca) log . then the algorithm performs 0(nm) augmentations.

Edmonds and Karp's second idea was to augment flow along a path with maximum residual capacity. They proved that this algorithm performs O(m log U) augmentations. Tarjan [1986] has shown how to determine a path with maximum residual capacity in O(m) time on average; hence, this version of the labeling algorithm runs in O(m² log U) time.

Dinic [1970] independently introduced the concept of shortest path networks, called layered networks, for solving the maximum flow problem. A layered network is a subgraph of the residual network that contains only those nodes and arcs that lie on at least one shortest path from the source to the sink. The nodes in a layered network can be partitioned into layers of nodes N1, N2, ..., so that every arc (i, j) in the layered network connects nodes in adjacent layers (i.e., i ∈ Nk and j ∈ Nk+1 for some k). A blocking flow in a layered network G' = (N', A') is a flow that blocks flow augmentations in the sense that G' contains no directed path with positive residual capacity from the source node to the sink node. Dinic showed how to construct, in a total of O(nm) time, a blocking flow in a layered network by performing at most m augmentations. His algorithm repeatedly constructs layered networks and establishes blocking flows in them. Dinic showed that after each blocking flow iteration the length of the layered network increases, and after at most n iterations the source is disconnected from the sink in the residual network. Consequently, his algorithm runs in O(n²m) time.

The shortest augmenting path algorithm presented in Section 4.3 achieves the same time bound as Dinic's algorithm, but instead of constructing layered networks it maintains distance labels. Goldberg [1985] introduced distance labels in the context of his preflow push algorithm. Distance labels offer several advantages: they are simpler to understand than layered networks, they are easier to manipulate, and they have led to more efficient algorithms. Orlin and Ahuja [1987] developed the distance label based augmenting path algorithm given in Section 4.3. They also showed that this algorithm is equivalent both to Edmonds and Karp's algorithm and to Dinic's algorithm in the sense that all three algorithms enumerate the same augmenting paths in the same sequence; the algorithms differ only in the manner in which they obtain these augmenting paths.

Several researchers have contributed improvements to the computational complexity of maximum flow algorithms by developing more efficient algorithms to establish blocking flows in layered networks.
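The layering step described above is a single breadth-first search. The sketch below (our own illustration) keeps exactly the residual arcs that step from one level to the next; a complete layered network would additionally prune, by a backward pass, the nodes and arcs that cannot reach the sink.

    from collections import deque

    def layered_network(n, residual, s):
        # residual[i]: nodes j reachable from i along an arc
        # with positive residual capacity
        level = [-1] * n
        level[s] = 0
        queue = deque([s])
        while queue:
            i = queue.popleft()
            for j in residual[i]:
                if level[j] == -1:
                    level[j] = level[i] + 1
                    queue.append(j)
        # keep only arcs joining consecutive layers
        arcs = [(i, j) for i in range(n) for j in residual[i]
                if level[i] >= 0 and level[j] == level[i] + 1]
        return level, arcs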

Karzanov [1974] introduced the concept of preflows in a layered network. (See the technical report of Even [1976] for a comprehensive description of this algorithm and the paper by Tarjan [1984] for a simplified version.) Karzanov showed that an implementation that maintains preflows and pushes flows from nodes with excesses constructs a blocking flow in O(n²) time; hence, his algorithm runs in O(n³) time. Malhotra, Kumar and Maheshwari [1978] present a conceptually simple maximum flow algorithm that runs in O(n³) time. Cherkasky [1977] and Galil [1980] presented further improvements of Karzanov's algorithm.

The search for more efficient maximum flow algorithms has stimulated researchers to develop new data structures for implementing Dinic's algorithm. The first such data structures were suggested independently by Shiloach [1978] and Galil and Naamad [1980]. Dinic's algorithm (or the shortest augmenting path algorithm described in Section 4.3) takes O(n) time on average to identify an augmenting path and, during the augmentation, it saturates some arcs in this path. If we delete the saturated arcs from this path, we obtain a set of path fragments. The basic idea is to store these path fragments using some data structure, for example 2-3 trees (see Aho, Hopcroft and Ullman [1974] for a discussion of 2-3 trees), and to use them to identify augmenting paths quickly. Shiloach [1978] and Galil and Naamad [1980] showed how to augment flows through path fragments in a way that finds a blocking flow in O(m (log n)²) time; hence, their implementation of Dinic's algorithm runs in O(nm (log n)²) time. Sleator and Tarjan [1983] improved this approach by using a data structure called dynamic trees to store and update path fragments. Sleator and Tarjan's algorithm establishes a blocking flow in O(m log n) time and thereby yields an O(nm log n) time bound for Dinic's algorithm.

Gabow [1985] obtained a similar time bound by applying a bit scaling approach to the maximum flow problem. As outlined in Section 1.7, this approach solves a maximum flow problem at each scaling phase with one more bit of every arc's capacity. During a scaling phase, the initial flow value differs from the maximum flow value by at most m units, and so the shortest augmenting path algorithm (and also Dinic's algorithm) performs at most m augmentations per phase. Each scaling phase takes O(nm) time, and the algorithm runs in O(nm log C) time. If we invoke the similarity assumption, this time bound is comparable to that of Sleator and Tarjan's algorithm, but the scaling algorithm is much simpler to implement. Orlin and Ahuja [1987] have presented a variation of Gabow's algorithm achieving the same time bound.

Goldberg and Tarjan [1986] developed the generic preflow push algorithm and the highest-label preflow push algorithm. Previously, Goldberg [1985] had shown that the FIFO version of the algorithm, which pushes flow from active nodes in first-in-first-out order, runs in O(n³) time. (This algorithm maintains a queue of active nodes; at each iteration, it selects a node from the front of the queue, performs a push/relabel step at this node, and adds the newly active nodes to the rear of the queue.) Using a dynamic tree data structure, Goldberg and Tarjan [1986] improved the running time of the FIFO preflow push algorithm to O(nm log (n²/m)). This algorithm currently gives the best strongly polynomial-time bound for solving the maximum flow problem. Bertsekas [1986] obtained another maximum flow algorithm by specializing his minimum cost flow algorithm; this algorithm closely resembles Goldberg's FIFO preflow push algorithm.

Recently, Cheriyan and Maheshwari [1987] showed that Goldberg and Tarjan's highest-label preflow push algorithm actually performs O(n² √m) nonsaturating pushes and hence runs in O(n² √m) time.

Ahuja and Orlin [1987] improved Goldberg and Tarjan's algorithm using the excess-scaling technique to obtain an O(nm + n² log U) time bound; further, this algorithm does not use any complex data structures. By scaling excesses by a factor of log U/log log U and pushing flow from a large excess node with the highest distance label, Ahuja, Orlin and Tarjan [1988] reduced the number of nonsaturating pushes to O(n² log U/log log U). Ahuja, Orlin and Tarjan [1988] obtained another variation of the original excess scaling algorithm which further reduces the number of nonsaturating pushes to O(n² √(log U)).

The use of the dynamic tree data structure improves the running times of the excess-scaling algorithm and its variations, though the improvements are not as dramatic as they have been for Dinic's and the FIFO preflow push algorithms. For example, the O(nm + n² √(log U)) algorithm improves to O(nm log ((n √(log U))/m + 2)) by using dynamic trees, as shown in Ahuja, Orlin and Tarjan [1988]. Tarjan [1987] conjectures that any preflow push algorithm that performs p nonsaturating pushes can be implemented in O(nm log (2 + p/nm)) time using dynamic trees.
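A compact sketch of the FIFO queue discipline described above follows; it is our own simplification, in which each dequeued node is discharged completely, relabeling as needed, which preserves correctness although the bookkeeping differs slightly from Goldberg's statement of the algorithm.

    from collections import deque

    def fifo_max_flow(cap, s, t):
        # cap[i][j]: capacity matrix with cap[i][j] = 0 when arc (i, j) is absent
        n = len(cap)
        flow = [[0] * n for _ in range(n)]
        excess, dist = [0] * n, [0] * n
        dist[s] = n
        queue = deque()
        for j in range(n):                  # saturate every arc leaving the source
            if cap[s][j] > 0:
                flow[s][j], flow[j][s] = cap[s][j], -cap[s][j]
                excess[j] += cap[s][j]
                if j != t:
                    queue.append(j)
        while queue:
            i = queue.popleft()
            while excess[i] > 0:
                pushed = False
                for j in range(n):
                    r = cap[i][j] - flow[i][j]            # residual capacity
                    if r > 0 and dist[i] == dist[j] + 1:  # admissible arc: push
                        delta = min(excess[i], r)
                        flow[i][j] += delta
                        flow[j][i] -= delta
                        excess[i] -= delta
                        if j not in (s, t) and excess[j] == 0:
                            queue.append(j)   # newly active node joins the rear
                        excess[j] += delta
                        pushed = True
                        if excess[i] == 0:
                            break
                if not pushed:                # relabel node i
                    dist[i] = 1 + min(dist[j] for j in range(n)
                                      if cap[i][j] - flow[i][j] > 0)
        return excess[t]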

Although this conjecture is true for all known preflow push algorithms, it is still open for the general case.

Developing a polynomial-time primal simplex algorithm for the maximum flow problem has been an outstanding open problem for quite some time. Recently, Goldfarb and Hao [1988] developed such an algorithm. This algorithm is based on selecting pivot arcs so that flow is augmented along a shortest path from the source to the sink. As one would expect, this algorithm performs O(nm) pivots and can be implemented in O(n²m) time. Tarjan [1988] recently showed how to implement this algorithm in O(nm log n) time using dynamic trees.

Researchers have also investigated the following special cases of the maximum flow problem: the maximum flow problem on (i) unit capacity networks (i.e., U = 1), (ii) unit capacity simple networks (i.e., U = 1 and, except for the source and sink, every node in the network has one incoming arc or one outgoing arc), (iii) bipartite networks, and (iv) planar networks. Observe that the maximum flow value for unit capacity networks is less than n, and so the shortest augmenting path algorithm will solve these problems in O(nm) time. Thus, these problems are easier to solve than problems with large capacities. Even and Tarjan [1975] showed that Dinic's algorithm solves the maximum flow problem on unit capacity networks in O(n^(2/3) m) time and on unit capacity simple networks in O(n^(1/2) m) time. Orlin and Ahuja [1987] have achieved the same time bounds using a modification of the shortest augmenting path algorithm. Both of these algorithms rely on ideas contained in Hopcroft and Karp's [1973] algorithm for maximum bipartite matching. Fernandez-Baca and Martel [1987] have generalized these ideas to networks with small integer capacities.

Versions of the maximum flow algorithms run considerably faster on a bipartite network G = (N1 ∪ N2, A) if |N1| << |N2| (or |N2| << |N1|). Let n1 = |N1|, n2 = |N2| and n = n1 + n2, and suppose that n1 ≤ n2. Gusfield, Martel and Fernandez-Baca [1985] obtained the first such results by showing how the running times of Karzanov's and Malhotra et al.'s algorithms reduce from O(n³) to O(n1² n2) and O(n1² n2 + nm), respectively. Ahuja, Orlin, Stein and Tarjan [1988] improved upon these ideas by showing that it is possible to substitute n1 for n in the time bounds for all preflow push algorithms to obtain new time bounds for bipartite networks. This result implies that the FIFO preflow push algorithm and the original excess scaling algorithm, respectively, solve the bipartite maximum flow problem in O(n1 m + n1³) and O(n1 m + n1² log U) time.

It is possible to solve the maximum flow problem on planar networks much more efficiently than on general networks. (A network is called planar if it can be drawn in a two-dimensional plane so that arcs intersect one another only at the nodes.) A planar network has at most 6n arcs; hence, the running times of the maximum flow algorithms on planar networks appear more attractive. Specialized solution techniques, which have even better running times, are quite different than those for general networks. Some important references for planar maximum flow algorithms are Itai and Shiloach [1979], Johnson and Venkatesan [1982] and Hassin and Johnson [1985].

Researchers have also investigated whether the worst-case bounds of the maximum flow algorithms are tight, i.e., whether the algorithms achieve their worst-case bounds for some families of networks. Zadeh [1972] showed that the bound of the Edmonds and Karp algorithm is tight when m = n². Even and Tarjan [1975] noted that the same examples imply that the bound of Dinic's algorithm is tight when m = n². Baratz [1977] showed that the bound on Karzanov's algorithm is tight. Galil [1981] constructed an interesting class of examples and showed that the algorithms of Edmonds and Karp, Dinic, Karzanov, Cherkasky, Galil and Malhotra et al. achieve their worst-case bounds on those examples.

Other researchers have made some progress in constructing worst-case examples for preflow push algorithms. Martel [1987] showed that the FIFO preflow push algorithm can take Ω(nm) time to solve a class of unit capacity networks. Cheriyan and Maheshwari [1987] have shown that the bound of O(n² √m) for the highest-label preflow push algorithm is tight, and that the bound O(n²m) for the generic preflow push algorithm is tight. Cheriyan [1988] has also constructed a family of examples to show that the bound O(n³) for the FIFO preflow push algorithm is tight. The research community has not established similar results for other preflow push algorithms, especially for the excess-scaling algorithms. It is worth mentioning, however, that these known worst-case examples are quite artificial and are not likely to arise in practice.

Several computational studies have assessed the empirical behavior of maximum flow algorithms. The studies performed by Hamacher [1979], Cheung [1980], Glover, Klingman, Mote and Whitman [1979, 1984], Imai [1983], and Goldfarb and Grigoriadis [1986] are noteworthy.

These studies were conducted prior to the development of algorithms that use distance labels. They rank the Edmonds and Karp, Dinic's and Karzanov's algorithms in increasing order of performance for most classes of networks; Dinic's algorithm is competitive with Karzanov's algorithm for sparse networks, but slower for dense networks. Imai [1983] noted that Galil and Naamad's [1980] implementation of Dinic's algorithm, using sophisticated data structures, is slower than the original Dinic's algorithm. Sleator and Tarjan [1983] reported a similar finding; they observed that their implementation of Dinic's algorithm using the dynamic tree data structure is slower than the original Dinic's algorithm by a constant factor. Hence, the sophisticated data structures improve only the worst-case performance of algorithms and are not useful empirically. Researchers have also tested the Malhotra et al. algorithm and the primal simplex algorithm due to Fulkerson and Dantzig [1955], and found these algorithms to be slower than Dinic's algorithm for most classes of networks.

A number of researchers are currently evaluating the computational performance of preflow push algorithms. Derigs and Meier [1988], Grigoriadis [1988], and Ahuja, Kodialam and Orlin [1988] have found that the preflow push algorithms are substantially (often 2 to 10 times) faster than Dinic's and Karzanov's algorithms for most classes of networks. Among all nonscaling preflow push algorithms, the highest-label preflow push algorithm runs the fastest. The excess-scaling algorithm and its variations have not been tested thoroughly. We do not anticipate that dynamic tree implementations of preflow push algorithms would be useful in practice; in this case, as in others, their contribution has been to improve the worst-case performances of algorithms.

Finally, we discuss two important generalizations of the maximum flow problem: (i) the multi-terminal flow problem, and (ii) the maximum dynamic flow problem.

In the multi-terminal flow problem, we wish to determine the maximum flow value between every pair of nodes. Gomory and Hu [1961] showed how to solve the multi-terminal flow problem on undirected networks by solving (n-1) maximum flow problems. Recently, Gusfield [1987] has suggested a simpler multi-terminal flow algorithm. These results, however, do not apply to the multi-terminal maximum flow problem on directed networks.

In the simplest version of the maximum dynamic flow problem, we associate with each arc (i, j) in the network a number t_ij denoting the time needed to traverse that arc. The objective is to send the maximum possible flow from the source node to the sink node within a given time period T. Ford and Fulkerson [1958] showed that the maximum dynamic flow problem can be solved by solving a minimum cost flow problem. (Ford and Fulkerson [1962] give a nice treatment of this problem.) Orlin [1983] has considered infinite horizon dynamic flow problems in which the objective is to minimize the average cost per period.

6.4 Minimum Cost Flow Problem

The minimum cost flow problem has a rich history. The classical transportation problem, a special case of the minimum cost flow problem, was posed and solved (though incompletely) by Kantorovich [1939], Hitchcock [1941], and Koopmans [1947]. Dantzig [1951] developed the first complete solution procedure for the transportation problem by specializing his simplex algorithm for linear programming. He observed the spanning tree property of the basis and the integrality property of the optimum solution. Later, his development of the upper bounding technique for linear programming led to an efficient specialization of the simplex algorithm for the minimum cost flow problem; Dantzig's book [1962] discusses these topics.

Ford and Fulkerson [1956, 1957] suggested the first combinatorial algorithms for the uncapacitated and capacitated transportation problem; these algorithms are known as the primal-dual algorithms. Ford and Fulkerson [1962] describe the primal-dual algorithm for the minimum cost flow problem. Jewell [1958], Iri [1960] and Busaker and Gowen [1961] independently discovered the successive shortest path algorithm. These researchers showed how to solve the minimum cost flow problem as a sequence of shortest path problems with arbitrary arc lengths. Tomizava [1971] and Edmonds and Karp [1972] independently pointed out that if the computations use node potentials, then these algorithms can be implemented so that the shortest path problems have nonnegative arc lengths.

Minty [1960] and Fulkerson [1961] independently discovered the out-of-kilter algorithm. The negative cycle algorithm is credited to Klein [1967]. Helgason and Kennington [1977] and Armstrong, Klingman and Whitman [1980] describe the specialization of the linear programming dual simplex algorithm for the minimum cost flow problem (which is not discussed in this chapter).

Each of these algorithms performs iterations that can (apparently) not be polynomially bounded. Zadeh [1973a] describes one example on which each of several algorithms (the primal simplex algorithm with Dantzig's pivot rule, the dual simplex algorithm, the negative cycle algorithm which augments flow along a most negative cycle, the successive shortest path algorithm, the primal-dual algorithm, and the out-of-kilter algorithm) performs an exponential number of iterations. Zadeh [1973b] has also described more pathological examples for network algorithms. The fact that one example is bad for many network algorithms suggests an insightful inter-relationship among the algorithms. The paper by Zadeh [1979] showed this relationship by pointing out that each of the algorithms just mentioned is indeed equivalent, in the sense that they perform the same sequence of augmentations provided ties are broken using the same rule. All these algorithms essentially consist of identifying shortest paths between appropriately defined nodes and augmenting flow along these paths; further, they obtain these shortest paths using a method that can be regarded as an application of Dijkstra's algorithm.

The network simplex algorithm and its practical implementations have been most popular with operations researchers. Johnson [1966] suggested the first tree manipulating data structure for implementing the simplex algorithm. The first implementations using these ideas, due to Srinivasan and Thompson [1973] and Glover, Karney, Klingman and Napier [1974], significantly reduced the running time of the simplex algorithm. Glover, Klingman and Stutz [1974], Bradley, Brown and Graves [1977], and Barr, Glover and Klingman [1979] subsequently discovered improved data structures. The book of Kennington and Helgason [1980] is an excellent source for references and background material concerning these developments.

Researchers have conducted extensive studies to determine the most effective pricing strategy, i.e., the selection of the entering variable. These studies show that the choice of the pricing strategy has a significant effect on both the solution time and the number of pivots required to solve minimum cost flow problems. The candidate list strategy we described is due to Mulvey [1978a]. Goldfarb and Reid [1977], Bradley, Brown and Graves [1978], Grigoriadis and Hsu [1979], Gibby, Glover, Klingman and Mead [1983], and Grigoriadis [1986] have described other strategies that have been found to be effective in practice.

It appears that the best pricing strategy depends both upon the network structure and the network size.

Experience with solving large scale minimum cost flow problems has established that more than 90% of the pivoting steps in the simplex method can be degenerate (see Bradley, Brown and Graves [1978], Gavish, Schweitzer and Shlifer [1977], and Grigoriadis [1986]). Thus, degeneracy is both a computational and a theoretical issue. The strongly feasible basis technique, proposed by Cunningham [1976] and independently by Barr, Glover and Klingman [1977a, 1977b, 1978], has contributed on both fronts. Computational experience has shown that maintaining a strongly feasible basis substantially reduces the number of degenerate pivots. On the theoretical front, the use of this technique led to a finitely converging primal simplex algorithm. Orlin [1985] showed, using a perturbation technique, that for integer data an implementation of the primal simplex algorithm that maintains a strongly feasible basis performs O(nmCU) pivots when used with any arbitrary pricing strategy, and O(nmC log (mCU)) pivots when used with Dantzig's pricing strategy.

The strongly feasible basis technique prevents cycling during a sequence of consecutive degenerate pivots, but the number of consecutive degenerate pivots may be exponential. This phenomenon is known as stalling. Cunningham [1979] described an example of stalling and suggested several rules for selecting the entering variable to avoid stalling. One such rule is the LRC (Least Recently Considered) rule, which orders the arcs in an arbitrary but fixed manner. The algorithm then examines the arcs in a wrap-around fashion, each iteration starting at the place where it left off earlier, and introduces the first eligible arc into the basis. Cunningham showed that this rule admits at most nm consecutive degenerate pivots. Goldfarb, Hao and Kai [1987] have described more anti-stalling pivot rules for the minimum cost flow problem.

Researchers have also been interested in developing polynomial-time simplex algorithms for the minimum cost flow problem or its special cases. The only polynomial-time simplex algorithm for the minimum cost flow problem is a dual simplex algorithm due to Orlin [1984]; this algorithm performs O(n³ log n) pivots for the uncapacitated minimum cost flow problem. Developing a polynomial-time primal simplex algorithm for the minimum cost flow problem is still open. However, researchers have developed such algorithms for the shortest path problem (Dial et al. [1979], Zadeh [1979], Orlin [1985], Akgul [1985a], Goldfarb, Hao and Kai [1986], and Ahuja and Orlin [1988]), the maximum flow problem (Goldfarb and Hao [1988]), and the assignment problem (Roohy-Laleh [1980], Hung [1983], Orlin [1985], Akgul [1985b], and Ahuja and Orlin [1988]).

The relaxation algorithms proposed by Bertsekas and his associates are other attractive algorithms for solving the minimum cost flow problem and its generalizations. For the minimum cost flow problem (with integer data), this algorithm maintains a pseudoflow satisfying the optimality conditions. The algorithm proceeds either by (i) augmenting flow from an excess node to a deficit node along a path consisting of arcs with zero reduced cost, or by (ii) changing the potentials of a subset of nodes. In the latter case, it resets the flows on some arcs to their lower or upper bounds so as to satisfy the optimality conditions; this flow assignment, however, might change the excesses and deficits at nodes. The algorithm operates so that each change in the node potentials increases the dual objective function value, and when it finally determines the optimum dual objective function value, it has also obtained an optimum primal solution. This relaxation algorithm has exhibited nice empirical behavior. Bertsekas [1985] suggested the relaxation algorithm for the minimum cost flow problem. Bertsekas and Tseng [1985] extended this approach to the minimum cost flow problem with real data and to the generalized minimum cost flow problem (see Section 6.6 for a definition of this problem).

A number of empirical studies have extensively tested minimum cost flow algorithms for a wide variety of network structures, data distributions, and problem sizes. The most common problem generator is NETGEN, due to Klingman, Napier and Stutz [1974], which is capable of generating assignment problems and capacitated or uncapacitated transportation and minimum cost flow problems. Glover, Karney and Klingman [1974] and Aashtiani and Magnanti [1976] have tested the primal-dual and out-of-kilter algorithms. Helgason and Kennington [1977] and Armstrong, Klingman and Whitman [1980] have reported on extensive studies of the dual simplex algorithm. The primal simplex algorithm has been the subject of more rigorous investigation; studies conducted by Glover, Karney, Klingman and Napier [1974], Glover, Karney and Klingman [1974], Bradley, Brown and Graves [1977], Mulvey [1978b], Grigoriadis and Hsu [1979], and Grigoriadis [1986] are noteworthy. Bertsekas and Tseng have presented computational results for the relaxation algorithm.

In view of Zadeh's [1979] result, we would expect that the successive shortest path algorithm, the dual simplex algorithm, the primal-dual algorithm, the out-of-kilter algorithm, and the primal simplex algorithm with Dantzig's pivot rule should have comparable running times. By using more effective pricing strategies that determine a good entering arc without examining all arcs, we would expect that the primal simplex algorithm should outperform the other algorithms. All the computational studies have verified this expectation, and until very recently the primal simplex algorithm had been a clear winner for almost all classes of network problems. Bertsekas and Tseng [1988] have reported that their relaxation algorithm is substantially faster than the primal simplex algorithm. However, Grigoriadis [1986] finds his new version of the primal simplex algorithm faster than the relaxation algorithm. At this time, it appears that the relaxation algorithm of Bertsekas and Tseng and the primal simplex algorithm due to Grigoriadis are the two fastest algorithms for solving the minimum cost flow problem in practice.

Computer codes for some minimum cost flow algorithms are available in the public domain. These include the primal simplex codes RNET and NETFLOW developed by Grigoriadis and Hsu [1979] and Kennington and Helgason [1980], respectively, and the relaxation code RELAX developed by Bertsekas and Tseng [1988].

Polynomial-Time Algorithms

In the recent past, researchers have actively pursued the design of fast (weakly) polynomial and strongly polynomial-time algorithms for the minimum cost flow problem. Recall that an algorithm is strongly polynomial-time if its running time is polynomial in the number of nodes and arcs and does not involve terms containing logarithms of C or U. The table given in Figure 6.3 summarizes these theoretical developments in solving the minimum cost flow problem. The table reports running times for networks with n nodes and m arcs, m' of which are capacitated. It assumes that the integral cost coefficients are bounded in absolute value by C, and that the integral capacities, supplies and demands are bounded in absolute value by U. The term S(·) is the running time for the shortest path problem and the term M(·) represents the corresponding running time to solve a maximum flow problem.

Orlin U/log log U) log nC) and Tarjan [1988] and log log U log nQ Strongly Polynomial -Time Combinatorial Algorithms # . m. O) C M(n.174 Polynomial-Time Combinatorial Algorithms # 1 Discoverers Running Time [1972] Edmonds and Karp Rock Rock [1980] [1980] 0((n + m") log 2 3 4 5 6 0((n + U S(n. m. m. 1988b] 0(nm log n log log n log (log U log nQ nC) Goldberg and Tarjan 0(nm 0(nm 0(nm 9 Ahuja. U)) C M(n. Goldberg. m. U)) nC) ) 0(n 0(n log log Bland and Jensen [1985] Goldberg and Tarjan [1988a] Bertsekas and Eckstein [1988] 0(nm log irr/nx) log nC) o(n3 log 7 7 8 Goldberg and Tarjan [1987] 0( n^ log nC Gabow and Tarjan [1987] [1987. C)) m') log U S(n.

m + rh/logC ) Johnson [1982].m. Orlin and Tarjan [1987] Strongly Polynomial -Time Bounds S(n. For problems that satisfy the similarity assumption. Rock [1980] developed two different bit-scaling algorithms for the minimum cost flow problem.8 use the concept of approximate optimality. introduced independently by Bertsekas [1979] and Tardos [1985]. researchers gradually recognized that the scaling technique has great theoretical value as well as potential practical significance. the best bounds for the shortest path and maximum flow problems are: Polynomial-Time Bounds S(n.7. This cost scaling algorithm reduces the minimum cost flow problem to a sequence of 0(n log C) maximum flow problems.8. proposed a wave algorithm for the maximum flow problem. Bertsekas [1986] developed the first pseudoflow push algorithm. was suggested by Orlin initially little [1988]. However.175 For the sake of comparing the polynomial and strongly polynomial-time algorithms. The pseudoflow push algorithms for the minimum cost flow problem discussed in Section 5. The scaling technique it did not capture the interest of many researchers. this Goldberg and Tarjan [1987] used a scaling technique on a variant of obtain the generic pseudoflow push algorithm described in Section algorithm to Tarjan [1984] 5. Edmonds and Karp [1972] developed the first (weakly) polynomial-time eilgorithm for the in Section 5. one using capacity scaling and the other using cost scaling. This algorithm was pseudopolynomial-time. since they regarded as having practical utility. Orlin and Tarjan [1988] M(n. m. Mehlhom. Bland and Jensen [1985] independently discovered a similar cost scaling algorithm. we invoke the similarity assumption. The wave algorithm . and Ahuja. C) = nm ^%rT^gTJ log [ ^— + 2 J Ahuja. C) = Discoverers min (m log log C. m) = (n^/m) Goldberg and Tarjan [1986] Using capacity and right-hand-side scaling. The RHS-scaling algorithm presented the which a Vciriant of Edmonds-Karp algorithm. Discoverers m) = m+ nm n log n log Fredman and Tarjan [1984] M(n. minimum L> cost flow problem.

They also showed minimum cost flow problem cam be solved using 0(n log nC) blocking flow computations.3 contains the definition of a blocking flow.9. the double scaling algorithm faster than all other algorithms for all network topologies except for very dense networks. Goldberg and Tarjan [1988b] showed that flow a it if the negative cycle algorithm cycle always augments along / minimum mean cycle (a W for which V (i. its worst-case running time is not very attractive. Goldberg and Tarjan [1987] obtained a computational time that the bound of 0(nm log n log nC).j) Cj. which was developed relies independently by Goldberg and Tarjan [1987] and Bertsekas and Eckstein [1988]. 5. Goldberg. analyzing an algorithm suggested by Weintraub [1974]. m log n)). showed that the negative cycle algorithm . log structures. except the wave algorithm. then is strongly polynomial-time. 6 W this Goldberg and Tarjan described an implementation of approach running in time 0(nm(log n) minflog nC. The success in this direction was due to who developed a triple scaling algorithm running in time to Ahuja. upon similar ideas. [1988].) finger tree (see Using both Mehlhom [1984]) and dynamic tree data structures. 176 for the minimum cost flow problem described in Section 5. The double as described in Section runs in 0(nm log U log nC) time. algorithms by Goldberg and Tarjan appear more attractive.. These algorithms. in these instances. The second success was due Orlin and Tarjan scaling algorithm. situation has prompted researchers to investigate the possibility of improving the computational complexity of minimum first cost flow algorithms without using any complex data Tarjan [1987]. Goldberg and Tarjan [1988a] obtained an 0(nm log (n^/m) log nC) bound for ^he wave algorithm. required sophisticated data structures that impose a very high computational overhead. cycle algorithm Both the algorithms are based on the negative due to Klein [1967]. Using a dynamic tree data structure in the generic pseudoflow push algorithm. who developed the double scaling algorithm. Barahona and Tardos if [1987]. |W | is minimum). Goldberg and Tarjan [1988b] and Barahona and Tardos [1987] have developed other polynomial-time algorithms. Although the wave This algorithm is very practical. For problems satisfying the similarity is assumption. Gabow and 0(nm log n U log nC). Scaling costs by an appropriately larger factor improves the algorithm to 0(nm(log U/log log U) log nC) and a dynamic tree implementation improves the bound further to 0(nm log log U log nC). (The description of Dinic's algorithm in Section 6.8 .

Fujishige [1986].) C and log U typically range from 1 to 20. [1986]. the terms log in n. source of the difficult or underlying complexity in solving a problem. they describe a method (based upon solving to an auxiliary assignment problem) determine a disjoint set of augmenting cycles with the property that augmenting flows along these cycles improves the flow cost by at least as much as augmenting flow along any single cycle. theoretical considerations. This algorithm solves the minimum cost flow problem as a sequence of 0(min(m log U.e. and also highlighted the desire to develop a strongly polynomial-time algorithm. the worst-case running time of this algorithm nearly as low cis the best weakly polynomieil-time algorithm. Tarjan [1988b] also show that their algorithm that proceeds by cancelling minimvun mean cycles is also strongly polynomial time. Several researchers including Orlin [1984]. the fastest strongly polynomial-time algorithm due to Orlin [1988]. Since identifying a cycle with maximum improvement difficult (i.e.177 augments flow along then it a cycle with maximum improvement in the objective function. when applied minimum cost flow problem performs 0(n^-^ mK) operations. and Orlin [1988] provided subsequent improvements in the running Goldberg and Tarjan [1988a] obtained another strongly polynomial time Goldberg and algorithm by slightly modifying their pseudoflow push algorithm. is Currently. Galil and Tardos time. in practice. that can valued data as well as integer valued level. network flow algorithms data. For very sparse networks. are problems more equally difficult to solve as the values of the tmderlying data becomes increasingly larger? The Tardos first strongly polynomial-time minimum cost flow algorithm is due to [1985]. This desire was motivated primarily by (Indeed. identify the and (ii) they might. Kapoor and to the Vaidya [1986] have shown that Karmarkar's [1984] algorithm. at a more fundamental i. Edmonds and Karp the [1972] proposed the first polynomial-time algorithm for minimum cost flow problem.Tr\^ log (mCU) S(n.. even for problems that satisfy the similarity assumption. where . and are sublinear Strongly polynomial-time algorithms are (i) theoretically attractive for at least two reasons: run on real they might provide. m.. performs is 0(m log mCU) iterations. NP-hard). Interior point linear programming algorithms are another source of polynomial-time algorithms for the minimum cost flow problem. Their algorithm runs in 0(. O) time. in principle. m log n)) shortest path is problems.

At fully this time. and introducing and unit for all i€N|. Bland and Jensen [1985] also reported encouraging results with their cost scaling algorithm. and Orlin have obtained contradictory Testing the right-hand-side scaling algorithm for the minimum cost flow problem.t) first transform the assignment problem into a a source minimum arcs cost flow (s. 6. the research community has yet to develop sufficient evidence to assess the computational worth of scaling and interior point linear for the programming algorithms folklore.. minimum cost flow problem. we (j. s and a sink node t. Vaidya [1986] suggested another algorithm for linear programming that solves the minimum cost flow problem in 0(n^-^ y[m K) time. We believe that when implemented with appropriate speed-up techniques. Although the research community has developed several different algorithms for the assignment problem. and is explicit in the papers by Tomizava [1971] and Edmonds and Karp When applied to an assignment problem on the network G = (N^ u N2 .5 Assignment Problem The assignment problem has been emphasis in the literature has a popular research topic. The primary efficient been on the development of empirically algorithms rather than the development of algorithms with improved worst-case complexity. the scaling algorithms [1986] not as efficient as the non-scaling algorithms. described in Section 5.ar . To use this solution approach. According to the even though they might provide the best-worst case bounds on running eu-e times. scaling algorithms have the potential to be competitive with the best other algorithms.i) problem by adding node . Asymptotically. [1972].4 for the lie minimum algorithms. they found the scaling algorithm to be competitive with the relaxation algorithm for some classes of problems. features. 178 K= log n + log C + log U. Boyd results. The algorithm successively obtains a shortest path from with respect to the lir«. cost flow problem. A) the successive shortest path algorithm operates as follows. many of these algorithms share common The successive shortest path algorithm. and for all J€N2 these arcs have zero cost s to t capacity. appears to at the heart of many assignment due to This algorithm is implicit in the first assignment algorithm Kuhn known as the Hungarian method. these time bounds are worse than that of the double scaling algorithm. [1955].

C) problem. the Hungarian method. is the primal-dual version of the successive After solving a shortest path problem and updating the node potentials. the research community considered it to be O(n^) method. The fact that the assignment problem can be solved as a sequence of n shortest Iri path problems with arbitrary arc lengths follows from the works of Jewell [1958].C) O(n^) and for a Fibonacci heap implementation is it is 0(m+nlogn). However.m.mC)) = 0(nS(n. S(n.m. (For 0(nm + nS(n. updates the node potentials. since there are n augmentatior\s and each augmentation takes 0(m) runs in Consequently.C)) time. where S(n. the to Hungarian method solves a (particularly simple) maximum flow problem send the maximum possible flow from the source node s to the sink node t using arcs vdth zero reduced cost. then these applications take a total of 0(nm) time time. For problems satisfying the similarity assumption. the problem augments flow along one path augments flow along all Hungarian method to the sink node. S(n. If the shortest paths from the source node we use the labeling algorithm to solve the resulting maximum flow problems. overall. Carraresi and Hoffman and Markowitz path problem to [1963] pointed out the transformation of a shortest an assignment problem.C) min(m m+nVlogC}. too.m. some time after the development of the Hungarian method as described by Kuhn. Lawler [1976] described an Oiri^) . Glover and Klingman [1984]) with the flow augmentation process. in Whereas the successive shortest path an iteration. [1960] and Busaker and Gowen [1971] [1961] on the minimum cost flow problem. and augments one unit of flow along the shortest path. Kuhn's [1955] Hungarian method shortest path algorithm.179 programming reduced costs.m.C)) time. Glover The more recent [1986] is threshold and Klingman also a successive shortest path algorithm which integrates their threshold shortest path algorithm (see Glover. algorithm by Glover. [1972] independently pointed out that Tomizava and Edmonds and Karp working with reduced lengths. Sodini [1986] also suggested a similar threshold assignment algorithm. log log C. The algorithm solves the assignment problem by n applications of the shortest path algorithm for nonnegative arc lengths and runs in 0(nS(n. costs leads to shortest path problems with nonnegative arc details of Weintraub and Barahona [1979] worked out the Edmonds-Karp assignment algorithm for the assignment problem. is the time needed to solve a shortest path is For a naive implementation of Dijkstra's algorithm.m.

The major difference the nature of the infeasibility.m. many researchers realized that the Hungarian method in fact runs in 0(nS(n.C)) time. Both the algorithms maintain optimality of the intermediate solution and work toward feasibility by solving at most n shortest path problems with nonnegative arc lengths.m. Both approaches start writh is in an infeasible assignment and gradually make it feasible.C)) time. only n are nonzero. Subsequent research focused on developing . minimum cost flow problem is due to E>inic is and Kronrod Hung eind Rom [1980] and Engquist [1982]. Glover and Klingman [1977a] devised the strongly feasible basis technique.C)) time. This approach closely related to the successive shortest path algorithm. reoptimizes over All of these algorithms the previous basis to obtain another strongly feaisible basis. Probably because of this excessive degeneracy. Derigs [1985] notes that the shortest path computations vmderlie this method. every person assigned. The basis of the assignment problem is highly degenerate.) Jonker and Volgenant [1986] suggested some practical improvements of the Hungarian method. the shortest path computations are somewhat disguised paper of Dinic and Kronrod [1969]. run in 0(nS(n. The relaxation approach for the (1969]. Researchers have also studied primal simplex algorithms for the assignment problem. of its 2n-l variables. These authors to developed the details of the network simplex algorithm when implemented maintain a strongly feasible basis for the assignment problem. the mathematical programming community did not conduct much research on the network simplex method for the assignment problem until Barr. and with no person or is object overassigned. The algorithm of Hung and Rom after [1980] maintains a strongly feaisible basis rooted at an overassigned node and. and that it rurrs in 0(nS(n. a primal algorithm that maintains a feasible it assignment and gradually converts into an optimum assignment by augmenting flows along negative cycles or by modifying node potentials. they also reported encouraging computational results. The successive shortest path algorithm maintains a solution w^ith unassigned persons and objects.180 implementation of the method. Another algorithm worth mentioning This algorithm is is due to Balinski and Gomory [1964]. but may be overassigned or unassigned. [1969] The algorithms of Dinic and Kronrod but and Engquist [1982] are essentially the same as the one we in the just described.m. objects Throughout the relaxation algorithm. each augmentation. Subsequently.

The auction algorithm suggested in Bertsekas [1979]. . is due to Bertsekas and uses basic ideas originally [1988] described a Bertsekas and Eckstein more recent its version of the auction algorithm. Balinski [1985] developed the signature method. Hence.m. essentially consists of pivoting in any arc with sufficiently large reduced The algorithm defines the term "sufficiently large" iteratively. Akgul [1985b] suggested another primal simplex algorithm performing O(n^) pivots. dual feasible basis. Ahuja and Orlin rule that performs 0(n^log C) pivots and can be implemented to run in 0(nm log C) time using simple data structures. some variants of this Balinski's algorithm performs O(n^) pivots and runs O(n^) time. This algorithm essentially in amounts to solving n shortest path problems and runs 0(nS(n. which is a dual simplex algorithm for the eissignment problem. Hung [1983] describes a pivot rule that performs at at most O(n^) consecutive degenerate pivots and most 0(n log nC) nondegenerate pivots. For example. The algorithm cost. Orlin [1985] studied the theoretical properties of Dantzig's pivot rule for the netvk'ork simplex algorithm and showed that for the eissignment problem this rule requires O(n^lognC) pivots. A naive implementation of the algorithm runs in [1988] described a scaling version of Dantzig's pivot 0(n^m log nC). analysis is Out presentation of the auction algorithm tmd somewhat different that the one given by Bertsekas and Eckstein [1988]. the algorithm we have presented increases the prices of the objects by one unit at a time. by the maximum amount Bertsekas is [1981] has presented another algorithm for the assignment problem which cost flow in fact a specialization of his relaxation algorithm for the minimum problem (see Bertsekas [1985]). Goldfarb [1985] described some implementations of O(n^) time using simple data structures and in Balinski's algorithm that run in 0(nm + n^log n) time using Fibonacci heaps. initially.) in every iteration. Roohy-Laleh [1980] developed a simplex pivot rule requiring O(n^) pivots. whereas the algorithm by Bertsekas and Eckstein increases prices that preserves e-optimality of the solution. it it (Although his basic algorithm maintains a is not a dual simplex algorithm in the traditional sense because at does not necessarily increase the dual objective algorithm do have this property.ISl polynomial-time simplex algorithms. his algorithm performs 0(n^log nC) pivots.C)) time. this threshold value equals C and within O(n^) pivots its value is halved.

the successive shortest path algorithms Among due to Glover et al. by McGinnis [1983] and Carpento. Over the many computational studies have compared one algorithm with a few other algorithms. showed that the scaling version of the auction Bertsekas and Eckstein [1988] algorithm runs in this 0(nm log nC). developed the algorithm for the assignment problem. three approaches. it is difficult to assess their computational merits. results to date seem to justify the following observations about the algorithms' relative performance. His algorithm performs O(log C) scaling phases and solves each phase in OCn'^'^m) time. using bit-scaling of costs. but the two algorithms would probably have different computational attributes. the best strongly polynomial-time bound to solve the assignment algorithms. Using the concept of e-optimality. They also improved the time bound of the auction algorithm to 0(n^'^m lognC). by Engquist et al. Gabow [1985] . Bertsekas and Eckstein is found that the scaling version of the auction algorithm competitive with Jonker and Volgenant's algorithm. Martello and Trlh [1988] present .Currently. problem is 0(nm + n^ log n) which is achieved by many assignment Scaling algorithms can do better for problems that satisfy the similarity first scciling assumption. Section 5. on the relaxation methods. This time bound For problems satisfying best time is comparable to that of Gabow and Tarjan 's algorithm. Nevertheless. algorithm running in time 0(n^' Gabow and Tarjan [1987] developed another scaling push algorithm the assignment ^m log nC). [1986] and Jonker and Volgenant [1988] [1987] appear to be the fastest. Observe that the generic pseudoflow for the minimum cost flow problem described in Section 5. The primal simplex algorithm is slower than the the latter primal-dual. Since no paper has compared all of these zilgorithms.11 has presented a modified version of algorithm in Orlin and Ahuja [1988]. years. As mentioned previously. Glover and Klingman [1977a] on the network simplex method. relaxation and successive shortest path algorithms. most of the research effort devoted to assignment algorithms has stressed the development of empirically faster algorithms. thereby achieving jm OCn'^' ^m log C) time bound. the similarity assumption. and by Glover [1986] and Jonker and Volgenant [1987] on the successive shortest path methods.8 solves problem in 0(nm log nC) since every push is a saturating push. Some representative computational studies are those conducted by Barr. these two algorithms achieve the boimd to solve the assignment problem without using any sophisticated data structure. Martello and Toth [1982] [1988] on the primal-dual method. Carpento.

< «>. Tj. in this chapter assume that arcs the flow entering an arc equals the flow leaving the arc.t. Researchers have studied several generalized network flow problems. units of flow enter an arc (i. We shall now discuss these topics briefly. then the arc is gainy.i) "'ji'^ji = K'if» = s S 0. then Tj: Xj: units "arrive" at arc. if i = . If node 1. t for aU i E N (6.1b) [vj. = for all arcs. is a is nonnegative flow multiplier dissociated with the lossy and. (iii) multicommodity flows.. (iv) convex cost flows. j). four other topics deserve mention: (ii) generalized network flows. 1 < rj: < then the arc Tjj if 1 < Tj. if i ?t (i. commodity network flow problems with linear Several other generic topics in the broader problem theoretical (i) network optimization are of considerable and practical interest. and network design. In the conventional flow networks. arcs do not necessarily conserve flow. i.j) € A) € A) s. extension of the conventional An maximum two flow problem is the generalized maximum flow problem which either maximizes the flow out of a source the flow into a sink node or maximizes of node (these objectives are different!) The source version the problem can be states as the following linear program. the multiplier might model pressure losses in a water resource network or losses incurred in the transportation of perishable goods. j. Generalized Network Flows The flow problems we have considered conserve flows. In particular. Maximize v^ (6ia) subject to X {j: "ij {j: S (j.e. For example.183 several cases. If In xj: models of generalized network flows. Generalized network flows arise in may application contexts. FORTRAN implementations of assignment algorithms for dense and sparse 6.6 Other Topics Our domain of discussion in this paper has featured single costs.

and Klingman among they Elam it is et al. cost flow algorithm. The approach. but convex objective functions are more difficult to solve. is due to Jewell [1982]. are not pseudopolynomial-time. the objective function can be written in the form V (i. for all (i. due to Bertsekeis and Tseng generalizes their minimum cost flow relaxation algorithm for the generalized minimum cost flow problem. Extended versions of the successive shortest path algorithm. j) e A.184 < x^j < uj: . however.j) Cjj (x^j). and the primal-dual algorithm for the cost flow problem apply to the generalized maximum flow problem. These algorithms. find that about 2 to 3 times slower than their implementations for the ordinary minimum [1988b]. note that Vg not necessarily equal to v^. is essentially a primal-dual algorithm.e. which is an extension of the ordinary minimum cost flow problem. The third approach. Convex Cost Flows We shall restrict this brief discussion to i. Problems containing nonconvex nonseparable cost terms such as xj2 e A are substantially X-J3 more difficult to solve and continue to pose a significant challenge for the mathematical programming community. Glover others. The recent paper by Goldberg. Even problems with nonseparable. Plotkin and Tardos [1986] describes the first polynomial-time combinatorial algorithms for the generalized maximum flow problem. The generalized maximum flow problem has many similarities with the minimum minimum cost flow problem. These are three main approaches to solve this problem. Further. we wish to determine the minimum first cost flow in a generalized network satisfying the specified supply/demand requirements of nodes. Note that the capacity restrictions apply to the flows entering is the arcs. In the generalized minimum cost flow problem. the negative cycle algorithm. The second approach [1979] the primal simplex algorithm studied by Elam. convex cost flow problems with separable cost functions. The paper by Truemper [1977] surveys these approaches.. mainly because the optimal arc flows and node potentials might be fractional. typically. . find their implementation to be very efficient in practice. because of flow losses and gains within arcs.

convex problem a priori (which of we knew the optimal solution to a separable course. Hax This transformation reduces the convex cost flow problem to a it minimum cost flow problem: introduces one arc for each linear segment in the cost functions. negative cycle algorithm.j) ^i] {j: € A S (j. to solve convex cost flow problems without increasing the problem [1984] illustrates this technique size.2a) e A subject to Y {j: (i. (62c) In this formulation. e. Bradley. < x^j for all (i. program (see. and Gupta and suggests a pseudopolynomial time algorithm. Batra. (6. is a convex function. j) e A.j) e A. The research community has focused on two (i) classes of separable convex costs flow each Cj.i) ''ji = ^^'^' ^°^ all i € N. it is possible to cost carry out this transformation implicitly and therefore modify many minimum flow algorithms such as the successive shortest path algorithm. to approximate a convex function of one variable to any desired degree of accuracy. Observe that segments chosen (if it is possible to use a piecewise linear function. (xjj) for each (i. j) with only three . classes of Solution techniques used to solve the two problems are quite is different. we don't). The paper by Ahuja. (xj.185 analysts rely on the general nonlinear programming techniques to solve these problems. primal-dual and out-of-kilter algorithms. then we could solve the if problem exactly using a linear approximation for any arc (i.) (6. with linear necessary) with sufficiently small size. thus increasing the problem size. However.) is problems: each Cj. Cj. The separable convex cost flow problem has the follow^ing formulation: Minimize V (i. (xjj) is a piecewise linear function.g. (xj..2b) e A < Ujj . More elaborate For example.j) Cj. There a well-known technique for transforming linear functions to a linear a separable convex program with piecewise and Magnanti standard [1972]). of (ii) a continuously differentiate function. alternatives are possible.

topic are Ali.3a) A subject to . and therefore solve the problem in pseudopolynomial time. Kennington and Helgason Meyer and Kao [1981]. same underlying network. an integer optimum solution of Muticommodity Flows Multicommodity flow problems arise when several commodities use the In this section. to obtain Minoux has also developed a polynomial-time algorithm the convex const flow problem. Hosein and Tseng [1987]. Some important references on this [1980]. Rockafellar [1984]. Helgason and Kennington [1978]. Klincewicz [1983]. Dembo and Klincewicz [1981]. 1 Let denote the supply/demand vector of commodity cost flow Then the multicommodity minimum ^ problem can be formulated as follows: Minimize V 1^=1 V (i. coarser. of this approach).j)e k c^: k x^(6. Some time. and the optimal flow on the arc. If (See Meyer [1979] for an example could we were interested in only integer solutions. but share common a linear arc capacities. Uj. the versions of the convex cost flow problems can be solved in polynomial [1984] has devised a polynomial-time algorithm for Minoux one of [1986] its special mininimum quadratic cost flow problem. Any other breakpoint in the linear approximation would be irrelevant and adding other points would be computationally wasteful. This observation has prompted researchers to devise adaptive approximations that iteratively revise the linear approximation beised upon the solution to a previous. we state programming formulation of the multicommodity minimum problem and its cost flow problem and point the reader to contributions to this specializations. using ideas from nonlinear progamming for solving this general separable convex cost flow problems. that the b*^ problem contains r distinct commodities numbered k. cases. Researchers have suggested other solution strategies. Suppose through r. then we choose the breakpoints of the linear approximation at the set of integer values. Florian [1986].186 breakpoints: at 0. and Bertsekas. approximation.

3c). decomposition and partitioning methods. (6.. for ^ all (i. < k u.j).3b) ''ii (i. commodities way that minimizes overall flow We problem is first consider some special cases. for all (i.3d) k In this formulation. (6. the model contains additional capacity each arc.j) k k ~ ^i ' ^OT a\] i and k.j) e A) e A y ktl ' k X. every s*^ commodity k has objective a is source node and a sink node. .j). Frisch [1968] showed how source or a to solve the multicommodity maximum flow problem with a common common sink by a single application of any maximum flow algorithm. restrictions on the flow of each commodity on Observe that it if the multicommodity flow problem does not contain bundle into r constraints.3c). subsequently generalized this decomposition approach to linear programming.j) and all k . The multicommodity maximum flow a special instance of In this problem. x-- and k c-- represent the amont of flow and the unit cost of flow for commodity k on arc (i. (6.187 k X. as captured by (6. represented respectively by to and tK The t*^ maximize the sum of flows that can be sent from s*^ to for all k. Further. Ford and Fulkerson [1958] solved the general multicommodity Dantzig and Wolfe maximum [1960] flow problem using a column generation algorithm. '^ < u:j. We refer the reader to . Researchers have proposed three basic approaches for solving the general multicommodity minimum resource-directive cost flow problems: price-directive decomposition. then decomposes single commodity minimum cost flow corxstraints problems. Shein and pseudopolynomial time by a labeling algorithm.3). the total flow on any arc cannot exceed capacity. With the presence of the bundle the essential problem in a is to distribute the capacity of each arc to individual costs. 1] {j: {j: V (i. As indicated by its the "bundle constraints" (6.3d). one for each commodity. Hu [1963] showed how network in to solve the two-commodity maximum flow problem on an undirected Rothfarb. (6. (63c) < k Xj..

of the form (6.3c) in the convex cost k These constraints force the flow the arc is x^- of each if commodity k on the arc is arc (i. algorithmic developments on the multicommodity minimum made on cost flow problem have not progressed at nearly the pace as the progress the single commodity minimum cost flow problem. the constraint on arc Ujj (i. for example.j) to be zero if not included in the network design.j) flow to be the arc's design capacity constraints Many modelling enhancements are possible. the algorithms developed for the multicommodity minimum cost flow problems generally solve thse problems about 3 times faster than the general purpose software (see Ali et [1984]).3).188 the excellent surveys by Assad [1978] and Kennington [1978] for descriptions of these methods. Although specialized primal simplex software can solve the single commodity problem 10 to 100 times faster than the general purpose linear programming systems. the network might . restricts the total included. some may restrict the underlying network topology (for instance. Network Design We network. in other applications. The book by Kennington and Helgason [1980] describes the details of a primal simplex decomposition algorithm for the multicommodity minimum cost flow problem. the network must be a tree. These network design models contain is that indicate whether or not an arc included in the network.are multicommodity flows. these models involve k x^. The design problem is of its considerable importance in practice and has generated an extensive literature of own. for finding optimal routings in a on analysis rather than synthesis. related The design decisions yjj and routing decisions by "forcing" constraints of the form 2 k=l ''ii - "ij yij ^^^ ' ^" ^^'^^ which replace the bundle constraints multicommodity flow problem (6. in some applications. have focused on solution methods that is. Many design problems can be stated as fixed cost network flow problems: is (some) arcs have an associated fixed cost which incurred whenever the arc carries 0-1 variables yjj any flow. Typically. Unfortunately. al.

. Apple Computer. ^ (i. Lav^ence Wolsey .j) A V ij € A (as well zs fixed costs k which models commodity dependent per unit routing costs c Fjj for • the design arcs). and by Grants from Analog Devices.189 need alternate paths to ensure reliable operations). is many different objective functions arise in practise. by Grant AFOSR-88-0088 from the Air Force Office of Scientific Research. Benders decomposition) as well as emerging ideas from the field of polyhedral combinatorics.Richard Robert Tarjan for a careful reading of the manuscript and many for useful suggestions. . Also. Hershel Safer. optimization-based heuristics. and integer programming decomposition (Lagrangian relaxation. dual ascent procedures. These solution methods include dynamic programming. The research Presidential of the first and third authors was supported in part by the Young Investigator Grant 8451517-ECS of the National Science Foundation. One of the most popular "" Minimize £ ^ k=l (i^j)e k c• k x^^ + Y. Usually. network design problems require solution techniques from any integer programming and other type of solution methods from combinatorial optimization. Magnanti and Wong [1984] and Minoux [1985. 1987] have described the broad range of applicability of network design models and summarize solution methods network design literature. and Prime Computer. for these problems as well as many references from the [1988] discuss Nemhauser and Wolsey many underlying methods from integer programming and combinatorial optimization. Acknowledgments We Wong and are grateful to Michel Goemans. Inc. We are particularly grateful to William Cunningham many valuable and detailed comments.

1988. of Shortest Path and Simplex Method. J. Orlin.E. Implementing Prin\al-E>ual Network Operations Research Center... and J.. in Oper.K.. Working Paper 1905-87. for the Shortest Path.I. 1974. Orlin. Bipartite J. Sloan School Management..V. and R. .. Kodialam. R. A Fast and Simple Algorithm for the Maximum M. OR Aho. R. Orlin. L. . Ahuja.K.C. Magnanti. Cambridge. Cambridge.. Flow Algorithms. A Parametric Algorithm for the Convex Cost Network Flow and Related Problems. 1985a. 222-25 Goldberg. Problem. J. M. Cambridge. and Orlin. and T.I. MA. R.K.E.I. Res..K. R. 1988. A. 055-76. 16.I.V.T. Operations Research Center. 1988. M. and S. A. K. Sloan School of Management. Ullman.B. Stein. 1987. J. and J. and Ahuja..190 References Aashtiani.T. Research Report. Gupta. To appear. and R. MA. Technical Report No.B. R. J.of Oper. Flow Problem. K.E.. Reading.K. Hop>croft. Ahuja. Ahuja. MA. Sloan School of Management. R. C. 193.E. To appear Ahuja. R. J. MA. H. MA. Technical Report Cambridge. North Carolina Raleigh. Tarjan. M. M. Res. L.. Improved Primal Simplex Algorithms Cost Flow Problems. Cambridge. J. Tarjan. 1988. and R. 1987.D.K.E.. Mehlhom. Tarjan. The Design and Analysis of Computer Algorithms. 2047-88. Orlin. Euro.B. ]. 1988. Orlin. Tarjan. To appear.B.. M.T. Assignment and Minimum and Ahuja. Orlin. Working Paper 1966-87. Department State University. Batra. R. Addison-Wesley. Faster Algorithms for the Shortest Path Problem. J.T. Personal Communication..T. Ahuja. Improved Algorithms for Network Flow Problen«. Akgul. Improved Time Bounds for the Maximum Flow M. Working Paper No. Computer Science and Operations Research. R. 1984.I. Ahuja.K. K. Finding Minimum-Cost Rows by Double of Scaling. 1976.B. 1988. N.B.A.B. MA.

Implementation and Analysis of a Variant of the Dual Method for the Capacitated Transshipment Problem. 1977. L. North Carolina State University. Symposium on . Klingman. of Mathematics. R.. 1977a. Trans. Klingman. The Alternating Path for the Assignment Problem. and R. Balinski. 12. Basis Algorithm Ban. Wong. Barr. MA. 403-420. Tardos. Operations Research. MA. and J. 4. Signature Methods for the Assignment Problem. Baratz. A. Forces Karzanov Algorithm to O(n^) Running Time. Res. K.T. Cambridge. Bamett. Math. Oper.. 1977b. A Primal Method for the Assignment and Transportation Problems. Man. F. M. Networks 8. M. Kennington. A Network Augmenting of the International Path Basis Algorithm for Transshipment Problems. V. Farhangian. A Survey. F. A. B. McCarl and P. Cambridge.E.I. LIE. Kennington. R. 578-593.C. N. Whitman.L. Texeis. Ali. Department of Computer Science and Assignment Problem. M. Klingman. 1985b.. Note on Weintraub's Minimum Cost Flow Algorithm. F. M.191 Akgul. Multicommodity Network Flows Balinski. D. Multicommodity Network Problems: Applications and Computations. J. 1980..127-134. Ali.D. Technical Report OREM 78001. 1985. Construction and Analysis of a Network Flow Problem Which Technical Report TM-83. Comory. The Convex Cost Netwrork Flow Problem: A State-of-the-Art Survey.L. Sci. Raleigh. Res. I. 1964. Shetty. 1987. Euro.. Glover. R. D. J.. Laboratory for Computer Science. A. Dept. A Genuinely Polynomial Primal Simplex Algorithm for the Research Report. 1984. 1978. and D. Southern Methodist University. and D... 33. B. Research Report. Patty.I. and E. 527-536. and D. Helgason. 10. 16. Armstrong. B. Prog. 1978.E. Glover.. 1-13. Barahona. Proceedings External Methods and System Analysis. R. Assad. Oper.37-91. MIT.

R. M. Athens. Games and Transportation Networks. Cambridge. Prentice-Hall. Gallager. Data Networks. Bertsekas. D. D. Enhancement 17. 2.. R. 1987. Prog.T.. 1979. Eckstein. Greece. M. Klingman. 152-171. Laboratory Cambridge. 1981. Oper. Bertsekas. The Auction Algorithm: A Distributed Relaxation Method for the Assignment Problem. P. MA. 1987. A Nev^ Algorithm for the Assignment Problem. P. 32. D. Prog.192 Barr. A Unified Framev^ork for Primal-Dual Methods in Minimum Cost Network Flow Problems. Distributed Relaxation Methods for Linear Network Flow Problems. D. Jarvis. and R. in Math. A Distributed Algorithm for the Assignment Problem.1219-1243.I. Series B. P. Laboratory for Information Decision systems. of 25th IEEE Conference on Decision and Control. and 1978... ]. Working Paper. Res. 1978. Berge.. Bertsekas. Euro. IXial Coordinate Step Methods for Linear Network Flow Problems. and A. Flow Problems with Convex Arc Costs. INFOR J. and J. and D. F. Tseng. On a Routing Problem. Bazaraa. M. Linear Programming and Network Flows. Math. Programming. 16. Bertsekas.I. 125-145. Report LIDS-P-1653. for Information Decision Systems. 105-123.P. Ghouila-Houri. Math. 1986.. Bellman. Bertsekas.. Hosein. 1962. A. Bertsekas. To appear Bertsekas. C.. 21. Barr. Appl.T. John Wiley 1979.. Proc. P. 1958. Generalized Alternating Path Algorithm for Transportation Problems. 87-90. Glover. and P. R. 25. Also in Annals 1988. P. Relaxation Methods for Network J. D. of Spanning Tree Labeling Procedures for Network Optimization. SIAM of Control and Optimization . D. and D. Bertsekas. John Wiley & Sons. . Glover. 1985. 16-34. of Operations Research 14. D.P. Math.J.P. D. QuaH.P. 1987. & Sons. 137-144.. MA. Klingman. Prog.

125-190. Simeone et al. Routing and Scheduling and Crews. India. (eds. of Operations Research 33. Addison-Wesley. G. John Hopkins University. 1961. In B.B. Boas. L.. 93-114. Bodin. Res. Magnanti. 65-211. 1986. P. On the Computational Behavior of a Polynomial-Time Network Flow Algorithm.O. Hax.Y. Brown... Technical Report 661.G. Minimal-Cost Network Flow Patterns. D. Personal Communication. R. and M. of Vehicles L. C. and D. 1-38. Ball. School of Operations Research and Industrial Engineering.G. R. and Orlin. O.. Relaxation Methods for Minimum Cost Ordinary and Generalized Network Flow Problems. Toth. D.193 Bertsekas. and P. Algorithms and Codes for the Assignment Problem. 1977. Gowen. N. J. D. Graves. and T. A. Assad.. 1988a. and P. Technical Report. Zijlstra. 1977. R. Scale Primal Transshipment Algorithms. Oper. 1985. 193-224.. Bradley.. Tseng. P. Operational MD. . 1988b. Applied Mathematical Programming. 1983. An Efficient Algorithm for the Bipartite Matching Problem. Theory 10. Computer Science Group. 1988. Optimization. Cheriyan. G. 1977. Math. FORTRAN Codes for Network As Annals and P. and P. A. Man. Tata Institute of Fundamental Research. C. Jensen.P. Van Emde. Design and Implementation of an Efficient Priority Queue. Baltimore. Research Office. and J. Sodini. Parametrized Worst Case Networks for Preflow Push Algorithms. Carraresi. 86-93. S. Oper. Carpento. P. 1988.L. S. 99-127.. Ithaca.. Martello. 36.). Design and Implementation of Large Sri. Comp. Bradley. Bertsekas. Bombay. Kaas. Sys. In B. Boyd. Oper.R. Eur.). for Linear Minimum Cost Network Flow Problems.J. Cornell University. of Operations Research 13. Bland. Res. A. Golden. L. et (ed. Res. Busaker. 1986. FORTRAN Codes for Network As Annals and J. 21.. and E. Simeone. 23. B. O. Tseng. Optimization. A. 10. Technical Report No. and G.P. A Procedure for Determining a Family of 15. The Relax Codes al. G.

Princeton University Press. and S. India. G. 1960. In P. NY.B. Dantzig. 1960. Tucker (ed. Princeton. N. NJ.B. Inc. Maheshwari. 1980. Princeton University Press. Analysis of Preflow Push Algorithms for Maximum Network Technical Report. of Computer Science and Engineering. Christophides. Dantzig. 174-183. Analysis of Production and Allocation. 1951.N. 4. In T. G. Math. and Block Triangularity Programming. Secondary Constraints. 1956. Dantzig. R. Cunningham. Mafft. 1979. ACM Trans. Dantzig. (ed.). Software 6.. Dantzig.. of Oper. Graph Theory : An Algorithmic Approach. Fulkerson. Rosenthiel Graphs. G. Theoretical Properties of the Network Simplex Method. Kuhn and A.. Cheung. Res. (ed. John Wiley & Sons. 1962. Dept. 1975. T. A Network Simplex Method. 105-116. 1-16. 1976. J.B. All Shortest Routes in a Graph.W. W.H. On the Max-Flow Min-Cut Theorem of Networks. G. Pro^. Academic Press. 101-111. on Math. 1967. Man. Mathematical Methods of Solution of 112-125 (in Russian). On the Shortest Route through a Network. In H. Linear Programming and Extensions.C. Decomposition Principle for Linear Programs.).194 Cheriyan.H.W. Algorithm for Cor\struction of Maximum Flow in Networks with Complexity of OCV^ Economical Problems 7. Wolfe. Oper. 8. G. Vl ) Operation. in Linear 1955. New Delhi. 11.. Application of the Simplex Method to a Transportation Problem. Rfs. W. 6. 196-208. . Cunningham.V. G. 1987.R. Activity Koopmans 359-373. 215-221. Flow. Cherkasky. Dantzig.B. Annals of Mathematics Study 38. Linear Inequalities and Related Systems.B. B. 1977. Upper Bounds.). Computational Comparison of Eight Methods for the Mzocimum Network Flow Problem. and P. and D. Theory of Gordon and Breach. 91-92.B. Indian Institute of Technology. 187-190. Economeirica 23. G. Sd. Dantzig.

632-633.A. Klingman. Prog. University of Bayreuth. S. Algorithm for Solution of a Problem of Soviet Maximum Flow in Networks with Power Estimation. An Algorithm for Solution of the Assignment Problem. and D. D. Meier. A Scaled Reduced Gradient Algorithm for Costs. Dial.195 Dembo. and J.. Dinic. Dial. Denardo. Networks 14. 1979. Numeriche Mathematics 1. 1969. 161-186. and M.269-271. 1985. Math.L. Derigs. Lecture Notes in Economics and Mathematical Systems. Exponential Grov^h of the Simplex Method for the Shortest Path Problem.. E.. Unpublished paper. R. University of Waterloo. Kronrod. 1277-1280. Glover. Pruning and Buckets.A. Implementing Goldberg's Max-Flow Algorithm: A Computational Investigation. R. 1959. U. 27. Oper. 2-[5-248. E. Dijkstra. A Computational Arvalysis of Alternative Algorithms and Labeling Techniques for Finding Shortest Path Trees. Dinic. 1988. 1324-1326. U. J. Canada. U. 1981. Ontario. Network Flow Problen\s with Convex Separable Deo. Shortest-Route Methods: 1. Technical Report. Reaching. E. Doklady 10. 1979. Res. Comm. Fox. E. and C Pang. Klincewicz.57-102. Algorithm 360: Shortest Path Forest with Topological Ordering. Soviet Maths.. Derigs. 1970. G.A. Study 15. Annals of Operations Research Derigs. 275-323. 1988. F. ACM 12. and Vol. A Note on Two Problems in Connexion with Graphs. West Germany. W. 1969. N. Shortest Path Algorithms: Taxonomy and Annotation. R. . 1984.. Programming in Networks and Graphs. 125-147.V. 11.. 300. Motivation and Computational Experience. Edmonds. Math. 1970. Kamey. and B. The Shortest Augmenting Path Method for Solving Assignment Problems: 4. Dokl. Networks 9. Springer-Verlag.

on Engquist. A Successive Shortest Path Algorithm for the Assignment Problem. and R. M. Jr. Prog..U. Algorithm 97: Shortest Path. 39-59. Technical Report TM-80. Maryland.. Solving the Trar\sportation Problem. Math. Cambridge. }. Network Flow Theory. Report Rand Corp. Klingman.. Maximal Flow through a Network.W. Even. 248-264. Ford. A. IRE Trans. Jr. Sd. Network Flow and Testing Graph Connectivity. Fulkerson. 1979. 1982.. ACM 19. P. 8. SI S. L. Shannon. 1979.E. J. and D. and C. Laboratory for Computer Science. 4.. Even.. Computer Science Press. of Oper. Jr.R. Theoretical Improvements in Algorithmic Efficiency for Network Flow Problems. 1956. On the Efficiency of Maximum Flow To appear in Algorithms on Networks with Small Integer Capacities..R. CA. M. Elias. Study 26. INFOR 20.196 Edmonds. Nonlinear Cost Network Models in Transportation Analysis. Floyd. F. and R. Glover. Karp. J. 507-518. Canad. S. A Strongly Convergent Primal Simplex Algorithm for Generalized Networks. 1962. Ford. Research Report. Graph Algorithms. Note on Maximum Flow Through a Network.. Ford. 1956. and C. Math. . Math..M. and D. MA. /. R. 345.R. 167-196. Elam. The Max-Flow Algorithm of Dinic and Karzanov: An Exposition. Even. 4. 1972. lA. 1975.. Comm. Iowa Algorithmica. 24-32. Femandez-Baca. Ames. State University. Feiitstein. S. 1987. Fulkerson. Department of Computer Science.I. Florian. 5. 3. >4CM P-923. M. J. L. Res.T.R. 370-384.. AM Comput. Infor. 1976. Tarjan. 117-119.E. 1956. and D. Man. 399-404. D. Martel. Santa Monica. Theory TT-2.R.. 1956. 1986. L.

1962. also in /.. and Frisch. 596-615. 197 Ford. Faster Scaling Algorithms for Network SIAM ]. Fibonacci Heaps and Their Uses in of Improved Network Optimization Algorithms. Frank. Ford.. Fulkerson. An 0(m^ log n) Capacity -Rounding Algorithm for the Minimum Problem: A Dual Framework of Tardos' Algorithm.. and R.R. and Problems.. 1986. 338-346. Transmission. H. 6. 47-54. Res. Sci.. Fredman. Discrete Location Theory..R. L. (submitted). NJ. 1986. D.T. 419-433. Fulkerson. Flows in Networks. A Primal-Dual Algorithm for the Capacitated Hitchcock Problem. 1955. New Bounds 5.N. R. Gabow.. Jr.ofComput. SIAM ]. and DR. 4. Quart. R. Fulkerson. Math. Scaling Algorithms for Network Problems. and D. 2. 1961. Addison-Wesley.. An Out-of-Kilter Method for Minimal Cost Flow Problems. and P. 1958. M...). 31. Log. S.. 97-101. 35. and Transportation Networks. H.R.L. Cost Circulation 298-309.R. Naval Res.R. 1971. Sci of ACM 34(1987).. 277-283. on Found. L. Gabow. 5. Jr. 18-27. SIAM J. Francis. Man. M.B. Ford. Dantzig. 9. and C. 1988. L. of Computing 83 - 89. Communication. 1958. Tarjan. A Suggested Computation for Maximal Multicommodity Network Flow. Fulkerson. Comp. Fujishige.E. I. 1985. on the Complexity of the Shortest Path Problem. John Wiley & Sons.E.Sys. Constructing Maximal Dynamic Flows from Static Flows. Comput. J. Princeton University Press. Computation of Maximum Flow in Networks. Princeton. Oper.. 25th Annual IEEE Symp. and D.N. Mirchandani (eds. L. Fredman. Ford. Fulkerson. R.R. H.Sci. 1987. 1957. 1984. D.R.. Appl. Tarjan. Math. To appear. L. Quart. and D. Fulkerson. Jr. 148-168. Logist. R.. Prog. Naval Res. .

Threshold Assignment Algorithm. Networks 191-212. Implementation and Computational for Comparisons of Primal. Sys. Minimum Cost Network Eow Problem. Klingman. 199-202. C. Galil. and A. Sci. Res. Glover. On the Theoretical Efficiency of Various 103-111. 1. Oper. Tardos. An 0(VE log^ V) Algorithm for the Maximum Flow Problem. Sci. . Starchi. OCV^/S E^/^) Algorithm for the Maximum Flow Problem.C. Glover. (eds. 136-146..198 GaUl. Shortest Paths: A Bibliography. 1980. Klingman. 221-242. 27th Annual Symp. F. Rome. Z. B. Pallottino. F. Toth. Gavish. Pallottino. and Its E. /. Theoretical Comp. Gallo. 1977. National Algorithms for Calculating Shortest Path Trees. Galil. G. and M. ofComput. Network Flow Algorithms. P. Bureau of Standards. and D. R.. D. EXial 1974. 3-79. Study 26. Klingman. and S. G. F. A Performance Comparison of Labeling Technical Note 772.). and C. Acta Informatica 14. Math. Sofmat Document 81 -PI -4-SOFMAT-27. Italy. Math. Klingman. 1981. 14. Min-Cost Flow Algorithm. 1986. 1983. Kamey. and G. Gallo. An 0(n^(m + n log n) log n) Sci. Glover. of Comp... J.. on the Found. 1982. Galil. Gibby. 1973. D. Glover. . F. and D. 203-217. 1984. and D. Glover. F.. 1988. and S. Glover. Gallo. D. 12. D. 12-37. Maffioli. and Primal-Dual Computer Codes 4. S. Netxvorks 14. A Comparison of Pivot Selection Rules for Primal Simplex Based Network Codes. Witzgall. Gilsinn. Z. Prog. In Fortran Codes for Network Optimization. Schweitzer. No. Z.. Pallottino As Annals of Operations Research 13. 21. The Zero Pivot Phenomenon in Transportation Problems and Computational Implications.. Shlifer.. Washington. B. 226-240. G. Ruggen. Proc. Shortest Path Algorithms. Letters 2. Prog. Simeone. and E. P. 1980. Z.. 1986. Mead. The Threshold Shortest Path Algorithm. Naamad. R.

Problem. 1984. and J. Klingman. D.. Successive Approximation. Laboratory for Computer MA. Proc.I. S. J. 1985. E. Whitman. F. Klingman. MA. on the Theory of Comput. Glover. and R. 1985..V. 1974. Whitman. Goldberg. Solving Minimum Cost Flow Problem by of Proc. A. Science. A. D. 136-146. J. AIIE Transactions Glover. Napier. Comprehensive Computer Evaluation and Enhancement of Maximum Flow Algorithms. Change Criteria. for the F. Phillips. Applications of Management Glover.T. R..199 Glover. A New Max-Flow for Algorithm.V. 18th ACM Symp. and D. 9. and N. A New Approach to the Maximum Flow /. and RE. Combiiuitorial Algorithms for the Generalized Circulation Problem. Goldberg.T. Klingman. Klingman. 12.E. 65-73. Klingman. Netvk'ork Applications in Industry and Government. D. Logis. Klingman. Tarjan. and D. and Tardos. 33. D. M... Glover. 1976. Man. 1987. D. D.A... Man. A. M. and A. Res. Klingman. 20. Phillips. INFOR Goldberg. A Computational Study on for Tranportation Start Procedures. Mote. Quart.V. Glover. D. Cambridge. Kamey. Mote. and D. Problem. Augmented Threaded Index Method for Network Optimization. 1988. F. 793-813. Stutz. Oper. on the Theory Comp. Research Report.I. 109-175.. New Polynomial Sci. 1974. 1979. 1106-1128. Schneider. Technical Report MIT/LCS/TM-291. A. Basis and Solution Algorithms Problem. N. A Primal Simplex Variant Maximum Flow F.. A New Polynomially Bounded Shortest Path Algorithm. 19th ACM Symp. . 31. Goldberg. 1986. 1985. 136-146. 31. F. Science 3. 293-298. Glover. F. Cambridge. and R. Shortest Path Algorithms and Their Computational Attributes.. Naval Res. Sd.. 363-376. Laboratory Computer Science.F. Tarjan. 41-61. To appear in ACM. Plotkin..V.

Reid. Department of Operations Research and Industrial Engineering. Proc. Goldfarb. 1S7-203. 7. and R. 2(Hh ACM Golden. MA. In B. 1986. Controlled Rounding of Tabular Data for the Cerisus Bureau at the : An Application of LP and Networks. L. 1977. 551-570. 1988. Res. and T. and Network Simplex Methods for Maximum Simeone et al. and R.. f. Department of Operations Research and Columbia University.E. J. (eds. Prog. . Multi-Terminal Network Flows.V. Efficient Dual Simplex Algorithms for the Assignment Problem. Networks 149-183. D. and S. D.200 Goldberg. Kai. Oper. A Practicable Steepest Edge Simplex Algorithm. 1986.361-371. 388-397. 1988. R. C. 33. Solving Minimum Cost Flow Problem by [1987]. Gomory. 83-124.. on the Theory of Comp.. Goldberg. Columbia New York. 1987. Seminar given OperatJons Research Center. At Most nm Pivots and O(n^m) Time. J. Hao. Taijan. Hao.. 1985. 1961. As Annals of Operations Research 13. M. B. Cambridge. Math.K.. E. 1988a.. D. Efficient Shortest Path Simplex Algorithms. NY. Goldfarb. D. Tarjan. Goldfarb. )To (A revision of Goldberg and Tarjan appear in Math. NY. I. Magnanti. NY. D. Hu. Columbia University. Goldfarb.E. D. . Golden. in New York. T. New York. Department of Operations Research and Industrial Engineering. Anti-Stalling Pivot Rules for the Network Simplex Algorithm. Canceling Negative Cycles. Optimization.. and S. Hao.. 1977. Research Report. 12.ofSlAM 9. Successive Approximation. A Computational Comparison of the Dinic Flow.) FORTRAN Codes for Network Goldfarb. A Primal Simplex Algorithm that Solves the Maximum Flow Problem University. 1988b. Math.D. Deterministic Network Optimization: A Bibliography. and M. and T. and J. Finding Minimum-Cost Circulations by Symp. A. Prog. Kai.V. Grigoriadis. Research Report.. and J. Goldfarb.. Industrial Engineering. A. B. Technical Report.

83-111. M. Technical Report No. CT. Research Report No. 1977. D. A Note on Shortest Path. 344-260. Grigoriadis. Karp.M. . 17-18. Hsu. /. Assignment. J. New Hamachar. 20. Numerical Investigations on the Maximal Flow Algorithm of 22. M. 1986. Hoffman. Kennington. C. 1963. M. and J. D. Hu. University. 2. Lecture Notes in Economics and Mathematical Systems. M. Bulletin of the ACM Gusfield. M. Math. .. Springer-Verlag. Computer Science and Engineering. and H. Personal Communication. and D. R. Subroutines. An n ' Algorithm for Maximun Matching in Bipartite Graphs. Davis. Femandez-Baca. 1985. and Transportation Problems. Res. L. 1984. University of California. Markowitz. 17-29. Oper. Fast Algorithms for Bipartite Gusfield. Naval Hopcroft. T.-< Karzanov. Integer SIAM J. L. Phys .. 1985.. CA. 224-230. Programming and Related Areas: A Classified Bibliography. C. and T. Martel. 26. Computing Hassin. YALEN/DCS/TR-356. D. F. D. D.. 612-^24. SIAM of Comp. An O(nlog^n) Algorithm for 14. R. Minoux. 1963. Log. Wiley-Interscience. Hausman. The Distribution Math. 1988. 1973. and M. J. A. . Quart. Helgason. J. SIGMAP 1987. Multicommodity Network Flows. Grigoriadis. Study Grigoriadis. 1978.201 Gondran. a Dual-Simplex Network Flow Algorithm. Vol. 10. E.. 160. of a Product from Several Sources to Numerous Facilities. An Efficient Implementation of the Network Simplex Method. 1979. Maximum Flow in Undirected Planar Networks. Johnson. Prog. and D. 63-68. Comput. 225-231. H. V. 11. The Rutgers Minimum Cost Network Flow 26. Network Row. Very Simple Algorithms and Programs Dept. Res. CSE-87-1.. 1979. Yale Haven.. 375-379. and R. Implementing Hitchcock. 1941. AIIE Trans. of for All Pairs Network Flow Analysis. B. D. An Efficient Procedure for 9. Graphs and Algorithms.

202

Hu, T.C.

1969. Integer Programming and Network Flours.

Addison-Wesley.

Hung, M.
Oper.Res.

S.

1983.

A

Polynomial Simplex Method for the Assignment Problem.

31,595-600.

Hung, M.
Oper. Res
.

S.,

and W. O. Rom.

1980.

Solving the Assignment Problem by Relaxation.

28, 969-892.

Imai, H.

1983.

On

the Practical Efficiency of

Various

Maximum Flow

Algorithms,

/.

Oper. Res. Soc. Japan

26,61-82.

Imai, H.,

and M.

Iri.

1984.

Practical Efficiencies of Existing Shortest-Path Algorithms
/.

and
Iri,

a

New

Bucket Algorithm.

of the Oper. Res. Soc. Japan 27, 43-58.

M.

1960.

A New Method

of Solving Transportation-Network Problems.

J.

Oper.

Res. Soc. Japan 3, 27-87.

Iri,

M.

1969. Network Flaws, Transportation and Scheduling.

Academic

Press.

Itai,

A.,

and

Y. Shiloach.

1979.

Maximum Flow

in Planar

Networks.

SIAM

J.

Comput.

8,135-150.

Jensen, P.A., and

W.

Barnes.

1980.

Network Flow Programming. John Wiley

&

Sons.

Jewell,

W.

S.

1958.

Optimal Flow Through Networks.

Interim Technical Report

No.

8,

Operation Research Center, M.I.T., Cambridge,

MA.
Gair>s.

Jewell,
499.

W.

S.

1962.

Optimal Flow Through Networks with

Oper. Res.

10, 476-

Johnson, D. B. 1977a. Efficient Algorithms for Shortest Paths in Sparse Networks.

/.

ACM

24,1-13.

JohT\son, D. B.

1977b.

Efficient Special

Purpose Priority Queues.
1-7.

Proc. 15th

Annual

Allerton Conference on

Comm., Control and Computing,

Johnson, D.

B.

1982.

A

Priority

Queue

in

Which

Initialization

and Queue

Operations Take

OGog

log D) Time. Math. Sys. Theory 15, 295-309.

203
Johnson, D.B., and S. Venkatesan. 1982. Using Divide and Conquer to Find Flows in Directed Planar Networks in O(n^3/2 log n) Time. In Proceedings of the 20th Annual Allerton Conference on Comm., Control, and Computing. Univ. of Illinois, Urbana-Champaign, IL.

Johnson, E.L. 1966. Networks and Basic Solutions. Oper. Res. 14, 619-624.

Jonker, R., and T. Volgenant. 1986. Improving the Hungarian Assignment Algorithm. Oper. Res. Letters 5, 171-175.

Jonker, R., and A. Volgenant. 1987. A Shortest Augmenting Path Algorithm for Dense and Sparse Linear Assignment Problems. Computing 38, 325-340.

Kantorovich, L.V. 1939. Mathematical Methods in the Organization and Planning of Production. Publication House of the Leningrad University, 68 pp. Translated in Man. Sci. 6 (1960), 366-422.

Kapoor, S., and P. Vaidya. 1986. Fast Algorithms for Convex Quadratic Programming and Multicommodity Flows. Proc. of the 18th ACM Symp. on the Theory of Comp., 147-159.

Karmarkar, N. 1984. A New Polynomial-Time Algorithm for Linear Programming. Combinatorica 4, 373-395.

Karzanov, A.V. 1974. Determining the Maximal Flow in a Network by the Method of Preflows. Soviet Math. Doklady 15, 434-437.

Kastning, C. 1976. Integer Programming and Related Areas: A Classified Bibliography. Lecture Notes in Economics and Mathematical Systems, Vol. 128. Springer-Verlag.

Kelton, W.D., and A.M. Law. 1978. A Mean-time Comparison of Algorithms for the All-Pairs Shortest-Path Problem with Arbitrary Arc Lengths. Networks 8, 97-106.

Kennington, J.L. 1978. Survey of Linear Cost Multicommodity Network Flows. Oper. Res. 26, 209-236.

Kennington, J.L., and R.V. Helgason. 1980. Algorithms for Network Programming. Wiley-Interscience, NY.

Kershenbaum, A. 1981. A Note on Finding Shortest Path Trees. Networks 11, 399-400.

Klein, M. 1967. A Primal Method for Minimal Cost Flows. Man. Sci. 14, 205-220.

Klincewicz, J.G. 1983. A Newton Method for Convex Separable Network Flow Problems. Networks 13, 427-442.

Klingman, D., A. Napier, and J. Stutz. 1974. NETGEN: A Program for Generating Large Scale Capacitated Assignment, Transportation, and Minimum Cost Flow Network Problems. Man. Sci. 20, 814-821.

Koopmans, T.C. 1947. Optimum Utilization of the Transportation System. Proceedings of the International Statistical Conference, Washington, DC. Also reprinted as supplement to Econometrica 17 (1949).

Kuhn, H.W. 1955. The Hungarian Method for the Assignment Problem. Naval Res. Log. Quart. 2, 83-97.

Lawler, E.L. 1976. Combinatorial Optimization: Networks and Matroids. Holt, Rinehart and Winston.

Magnanti, T.L. 1981. Combinatorial Optimization and Vehicle Fleet Planning: Perspectives and Prospects. Networks 11, 179-214.

Magnanti, T.L., and R.T. Wong. 1984. Network Design and Transportation Planning: Models and Algorithms. Trans. Sci. 18, 1-56.

Malhotra, V.M., M.P. Kumar, and S.N. Maheshwari. 1978. An O(|V|^3) Algorithm for Finding Maximum Flows in Networks. Inform. Process. Lett. 7, 277-278.

Martel, C.V. 1987. A Comparison of Phase and Non-Phase Network Flow Algorithms. Research Report, Dept. of Electrical and Computer Engineering, University of California, Davis, CA.

McGinnis, L.F. 1983. Implementation and Testing of a Primal-Dual Algorithm for the Assignment Problem. Oper. Res. 31, 277-291.

Mehlhorn, K. 1984. Data Structures and Algorithms. Springer-Verlag.

Meyer, R.R. 1979. Two Segment Separable Programming. Man. Sci. 25, 285-295.

Meyer, R.R., and C.Y. Kao. 1981. Secant Approximation Methods for Convex Optimization. Math. Prog. Study 14, 143-162.

Minieka, E. 1978. Optimization Algorithms for Networks and Graphs. Marcel Dekker, New York.

Minoux, M. 1984. A Polynomial Algorithm for Minimum Quadratic Cost Flow Problems. Eur. J. Oper. Res. 18, 377-387.

Minoux, M. 1985. Network Synthesis and Optimum Network Design Problems: Models, Solution Methods and Applications. Technical Report, Laboratoire MASI, Universite Pierre et Marie Curie, Paris, France.

Minoux, M. 1986. Solving Integer Minimum Cost Flows with Separable Convex Cost Objective Polynomially. Math. Prog. Study 26, 237-239.

Minoux, M. 1987. Network Synthesis and Dynamic Network Optimization. Annals of Discrete Mathematics 31, 283-324.

Minty, G.J. 1960. Monotone Networks. Proc. Roy. Soc. London 257, Series A, 194-212.

Moore, E.F. 1957. The Shortest Path through a Maze. In Proceedings of the International Symposium on the Theory of Switching Part II; The Annals of the Computation Laboratory of Harvard University 30, Harvard University Press, 285-292.

Mulvey, J. 1978a. Pivot Strategies for Primal-Simplex Network Codes. J. ACM 25, 266-270.

Mulvey, J. 1978b. Testing a Large-Scale Network Optimization Program. Math. Prog. 15, 291-314.

Murty, K.G. 1976. Linear and Combinatorial Programming. John Wiley & Sons.

Nemhauser, G.L., and L.A. Wolsey. 1988. Integer and Combinatorial Optimization. John Wiley & Sons.

Orden, A. 1956. The Transshipment Problem. Man. Sci. 2, 276-285.

Orlin, J.B. 1983. Maximum-Throughput Dynamic Network Flows. Math. Prog. 27, 214-231.

Orlin, J.B. 1984. Genuinely Polynomial Simplex and Non-Simplex Algorithms for the Minimum Cost Flow Problem. Working Paper No. 1615-84, Sloan School of Management, M.I.T., Cambridge, MA.

Orlin, J.B. 1985. On the Simplex Algorithm for Networks and Generalized Networks. Math. Prog. Study 24, 166-178.

Orlin, J.B. 1988. A Faster Strongly Polynomial Minimum Cost Flow Algorithm. Proc. 20th ACM Symp. on the Theory of Comp., 377-387.

Orlin, J.B., and R.K. Ahuja. 1987. New Scaling Algorithms for the Assignment and Minimum Cycle Mean Problems. Working Paper 1908-87, Sloan School of Management, M.I.T., Cambridge, MA.

Orlin, J.B., and R.K. Ahuja. 1988. New Distance-Directed Algorithms for Maximum Flow and Parametric Maximum Flow Problems. Working Paper OR 178-88, Operations Research Center, M.I.T., Cambridge, MA.

Papadimitriou, C.H., and K. Steiglitz. 1982. Combinatorial Optimization: Algorithms and Complexity. Prentice-Hall.

Pape, U. 1974. Implementation and Efficiency of Moore-Algorithms for the Shortest Route Problem. Math. Prog. 7, 212-222.

Pape, U. 1980. Algorithm 562: Shortest Path Lengths. ACM Trans. Math. Software 6, 450-455.

Phillips, D.T., and A. Garcia-Diaz. 1981. Fundamentals of Network Analysis. Prentice-Hall.

Pollack, M., and W. Wiebenson. 1960. Solutions of the Shortest-Route Problem - A Review. Oper. Res. 8, 224-230.

Potts, R.B., and R.M. Oliver. 1972. Flows in Transportation Networks. Academic Press.

Rock, H. 1980. Scaling Techniques for Minimal Cost Network Flows. In V. Page (ed.), Discrete Structures and Algorithms. Carl Hanser, Munich, 101-191.

Rockafellar, R.T. 1984. Network Flows and Monotropic Optimization. Wiley-Interscience.

Roohy-Laleh, E. 1980. Improvements to the Theoretical Efficiency of the Network Simplex Method. Unpublished Ph.D. Dissertation, Carleton University, Ottawa, Canada.

Rothfarb, B., N.P. Shein, and I.T. Frisch. 1968. Common Terminal Multicommodity Flow. Oper. Res. 16, 202-205.

Sheffi, Y. 1985. Urban Transportation Networks: Equilibrium Analysis with Mathematical Programming Methods. Prentice-Hall.

Shiloach, Y. 1978. An O(nI log^2 I) Maximum Flow Algorithm. Technical Report STAN-CS-78-702, Computer Science Dept., Stanford University, CA.

Shiloach, Y., and U. Vishkin. 1982. An O(n^2 log n) Parallel Max-Flow Algorithm. J. Algorithms 3, 128-146.

Sleator, D.D., and R.E. Tarjan. 1983. A Data Structure for Dynamic Trees. J. Comput. Sys. Sci. 26, 362-391.

Smith, D.K. 1982. Network Optimisation Practice: A Computational Guide. John Wiley & Sons.

Srinivasan, V., and G.L. Thompson. 1973. Benefit-Cost Analysis of Coding Techniques for the Primal Transportation Algorithm. J. ACM 20, 194-213.

Swamy, M.N.S., and K. Thulsiraman. 1981. Graphs, Networks, and Algorithms. John Wiley & Sons.

Syslo, M.M., N. Deo, and J.S. Kowalik. 1983. Discrete Optimization Algorithms. Prentice-Hall, New Jersey.

Tabourier, Y. 1973. All Shortest Distances in a Graph: An Improvement to Dantzig's Inductive Algorithm. Disc. Math. 4, 83-87.

Tardos, E. 1985. A Strongly Polynomial Minimum Cost Circulation Algorithm. Combinatorica 5, 247-255.

Tarjan, R.E. 1983. Data Structures and Network Algorithms. SIAM, Philadelphia, PA.

Tarjan, R.E. 1984. A Simple Version of Karzanov's Blocking Flow Algorithm. Oper. Res. Letters 2, 265-268.

Tarjan, R.E. 1986. Algorithms for Maximum Network Flow. Math. Prog. Study 26, 1-11.

Tarjan, R.E. 1988. Personal Communication.

Tomizava, N. 1972. On Some Techniques Useful for Solution of Transportation Network Problems. Networks 1, 173-194.

Truemper, K. 1977. On Max Flow with Gains and Pure Min-Cost Flows. SIAM J. Appl. Math. 32, 450-456.

Vaidya, P. 1987. An Algorithm for Linear Programming which Requires O(((m+n)n^2 + (m+n)^1.5 n)L) Arithmetic Operations. Proc. of the 19th ACM Symp. on the Theory of Comp., 29-38.

Van Vliet, D. 1978. Improved Shortest Path Algorithms for Transport Networks. Transp. Res. 12, 7-20.

Von Randow, R. 1982. Integer Programming and Related Areas: A Classified Bibliography 1978-1981. Lecture Notes in Economics and Mathematical Systems, Vol. 197. Springer-Verlag.

Von Randow, R. 1985. Integer Programming and Related Areas: A Classified Bibliography 1981-1984. Lecture Notes in Economics and Mathematical Systems, Vol. 243. Springer-Verlag.

Wagner, R.A. 1976. A Shortest Path Algorithm for Edge-Sparse Graphs. J. ACM 23, 50-57.

Warshall, S. 1962. A Theorem on Boolean Matrices. J. ACM 9, 11-12.

Weintraub, A. 1974. A Primal Algorithm to Solve Network Flow Problems with Convex Costs. Man. Sci. 21, 87-97.

Weintraub, A., and F. Barahona. 1979. A Dual Algorithm for the Assignment Problem. Departamento de Industrias Report No. 2, Universidad de Chile-Sede Occidente, Chile.

Whiting, P.D., and J.A. Hillier. 1960. A Method for Finding the Shortest Route Through a Road Network. Oper. Res. Quart. 11, 37-40.

Williams, J.W.J. 1964. Algorithm 232: Heapsort. Comm. ACM 7, 347-348.

Zadeh, N. 1972. Theoretical Efficiency of the Edmonds-Karp Algorithm for Computing Maximal Flows. J. ACM 19, 184-192.

Zadeh, N. 1973a. A Bad Network Problem for the Simplex Method and Other Minimum Cost Flow Algorithms. Math. Prog. 5, 255-266.

Zadeh, N. 1973b. More Pathological Examples for Network Flow Problems. Math. Prog. 5, 217-224.

Zadeh, N. 1979. Near Equivalence of Network Flow Algorithms. Technical Report No. 26, Dept. of Operations Research, Stanford University, CA.
