^"V.

^^

Dewey

ALFRED P. SLOAN SCHOOL OF MANAGEMENT

WORKING PAPER

NETWORK FLOWS
Ravindra K. Ahuja, Thomas L. Magnanti, and James B. Orlin

Sloan W.P. No. 2059-88

August 1988 Revised: December, 1988

MASSACHUSETTS INSTITUTE OF TECHNOLOGY, 50 MEMORIAL DRIVE, CAMBRIDGE, MASSACHUSETTS 02139


NETWORK FLOWS

Ravindra K. Ahuja*, Thomas L. Magnanti, and James B. Orlin
Sloan School of Management
Massachusetts Institute of Technology
Cambridge, MA 02139

*On leave from Indian Institute of Technology, Kanpur 208016, INDIA


NETWORK FLOWS: OVERVIEW

Introduction
  1.1 Applications
  1.2 Complexity Analysis
  1.3 Notation and Definitions
  1.4 Network Representations
  1.5 Search Algorithms
  1.6 Developing Polynomial-Time Algorithms
Basic Properties of Network Flows
  2.1 Flow Decomposition Properties and Optimality Conditions
  2.2 Cycle Free and Spanning Tree Solutions
  2.3 Networks, Linear and Integer Programming
  2.4 Network Transformations
Shortest Paths
  3.1 Dijkstra's Algorithm
  3.2 Dial's Implementation
  3.3 R-Heap Implementation
  3.4 Label Correcting Algorithms
  3.5 All Pairs Shortest Path Algorithm
Maximum Flows
  4.1 Labeling Algorithm and the Max-Flow Min-Cut Theorem
  4.2 Decreasing the Number of Augmentations
  4.3 Shortest Augmenting Path Algorithm
  4.4 Preflow-Push Algorithms
  4.5 Excess-Scaling Algorithm
Minimum Cost Flows
  5.1 Duality and Optimality Conditions
  5.2 Relationship to Shortest Path and Maximum Flow Problems
  5.3 Negative Cycle Algorithm
  5.4 Successive Shortest Path Algorithm
  5.5 Primal-Dual and Out-of-Kilter Algorithms
  5.6 Network Simplex Algorithm
  5.7 Right-Hand-Side Scaling Algorithm
  5.8 Cost Scaling Algorithm
  5.9 Double Scaling Algorithm
  5.10 Sensitivity Analysis
  5.11 Assignment Problem
Reference Notes
References


Perhaps no subfield of mathematical programming is more alluring than network optimization. Highway, rail, electrical, communication and many other physical networks pervade our everyday lives. As a consequence, even non-specialists recognize the practical importance and the wide ranging applicability of networks. Moreover, because the physical operating characteristics of networks (e.g., flows on arcs and mass balance at nodes) have natural mathematical representations, practitioners and non-specialists can readily understand the mathematical descriptions of network optimization problems and the basic nature of techniques used to solve these problems. This combination of widespread applicability and ease of assimilation has undoubtedly been instrumental in the evolution of network planning models as one of the most widely used modeling techniques in all of operations research and applied mathematics.

Network optimization is also alluring to methodologists. Networks provide a concrete setting for testing and devising new theories. Indeed, network optimization has inspired many of the most fundamental results in all of optimization. For example, price directive decomposition algorithms for both linear programming and combinatorial optimization had their origins in network optimization; so did cutting plane methods and branch and bound procedures of integer programming, primal-dual methods of linear and nonlinear programming, and polyhedral methods of combinatorial optimization. In addition, networks have served as the major prototype for several theoretical domains (for example, the field of matroids) and as the core model for a wide variety of min/max duality results in discrete mathematics.

Moreover, network optimization has served as a fertile meeting ground for ideas from optimization and computer science. Many results in network optimization are routinely used to design and evaluate computer systems, and ideas from computer science concerning data structures and efficient data manipulation have had a major impact on the design and implementation of many network optimization algorithms.

The aim of this paper is to summarize many of the fundamental ideas of network optimization. In particular, we concentrate on network flow problems and highlight a number of recent theoretical and algorithmic advances. We have divided the discussion into the following broad major topics:

Applications
Basic Properties of Network Flows
Shortest Path Problems
Maximum Flow Problems
Minimum Cost Flow Problems
Assignment Problems

Much of our discussion focuses on the design of provably good (e.g., polynomial-time) algorithms. Among good algorithms, we have presented those that are simple and likely to be efficient in practice. We have attempted to structure our discussion so that it not only provides a survey of the field for specialists, but also serves as an introduction and summary for non-specialists who have a basic working knowledge of the rudiments of optimization, particularly linear programming.

In this chapter, we limit our discussion to the problems listed above. Some important generalizations of these problems, such as (i) the generalized network flows, (ii) the multicommodity flows, and (iii) the network design problem, will not be covered in our survey. We do, however, briefly describe these problems in Section 6.6 and provide some important references. Note also that some of the models we consider require solution techniques that we will not describe in this chapter.

As a prelude to the remainder of our discussion, in this section we present several important preliminaries. We discuss (i) different ways to measure the performance of algorithms; (ii) graph notation and various ways to represent networks quantitatively; (iii) a few basic ideas from computer science that underlie the design of many algorithms; and (iv) two generic proof techniques that have proven to be useful in designing polynomial-time algorithms.

1.1 Applications

Networks arise in numerous application settings and in a variety of guises. In this section, we briefly describe a few prototypical applications. Our discussion is intended to illustrate a range of applications and to be suggestive of how network flow problems arise in practice; a more extensive survey would take us far beyond the scope of our discussion. To illustrate the breadth of network applications, we will consider four different types of networks arising in practice:

• Physical networks (streets, railbeds, pipelines, wires)
• Route networks
• Space-time networks (scheduling networks)
• Derived networks (through problem transformations)

These four categories are not exhaustive and overlap in coverage. Nevertheless, they provide a useful taxonomy for summarizing a variety of applications. Network flow models are also used for several purposes:

• Descriptive modeling (answering "what is?" questions)
• Predictive modeling (answering "what will be?" questions)
• Normative modeling (answering "what should be?" questions, that is, performing optimization)

We will illustrate models in each of these categories. We first introduce the basic underlying network flow model and some useful notation.

The Network Flow Model

Let G = (N, A) be a directed network with a cost c_ij, a lower bound l_ij, and a capacity u_ij associated with every arc (i, j) ∈ A. We associate with each node i ∈ N an integer number b(i) representing its supply or demand. If b(i) > 0, then node i is a supply node; if b(i) < 0, then node i is a demand node; and if b(i) = 0, then node i is a transshipment node. Let n = |N| and m = |A|. The minimum cost network flow problem can be formulated as follows:

    minimize  ∑_{(i,j)∈A} c_ij x_ij                                      (1.1a)

    subject to

    ∑_{j:(i,j)∈A} x_ij − ∑_{j:(j,i)∈A} x_ji = b(i),  for all i ∈ N,      (1.1b)

    l_ij ≤ x_ij ≤ u_ij,  for all (i, j) ∈ A.                             (1.1c)

We refer to the vector x = (x_ij) as the flow in the network. The constraint (1.1b) implies that the total flow out of a node minus the total flow into that node must equal the net supply/demand of the node. We henceforth refer to this constraint as the mass balance constraint. The flow must also satisfy the lower bound and capacity constraints (1.1c), which we refer to as the flow bound constraints. The flow bounds might model physical capacities, contractual obligations, or simply operating ranges of interest. Frequently, the given lower bounds l_ij are all zero; we show later that they can be made zero without any loss of generality.

In matrix notation, we represent the minimum cost flow problem as

    minimize { cx : Nx = b and l ≤ x ≤ u },                              (1.2)

in terms of a node-arc incidence matrix N. The matrix N has one row for each node of the network and one column for each arc. We let N_ij represent the column of N corresponding to arc (i, j), and let e_j denote the j-th unit vector, a column vector of size n whose entries are all zeros except for the j-th entry, which is 1. Note that each flow variable x_ij appears in two mass balance equations: as an outflow from node i with a +1 coefficient and as an inflow to node j with a -1 coefficient. Therefore the column N_ij = e_i − e_j.

The matrix N has very special structure: only 2m out of its nm total entries are nonzero, all of its nonzero entries are +1 or -1, and each column has exactly one +1 and one -1. Figure 1.1 gives an example of the node-arc incidence matrix. Later, in Sections 2.2 and 2.3, we consider some of the consequences of this special structure. For now, we make two observations.

(i) Summing all the mass balance constraints eliminates all the flow variables and gives ∑_{i∈N} b(i) = 0, or equivalently, ∑_{i∈N : b(i)>0} b(i) = − ∑_{i∈N : b(i)<0} b(i). Consequently, total supply must equal total demand if the mass balance constraints are to have any feasible solution.

(ii) If the total supply does equal the total demand, then summing all the mass balance equations gives the zero equation 0x = 0; equivalently, any equation is equal to minus the sum of all the other equations, and hence is redundant.

The following special cases of the minimum cost flow problem play a central role in the theory and applications of network flows.
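To make the structure of N concrete, the following small sketch (in Python, using a hypothetical four-node network rather than the one in Figure 1.1) builds the node-arc incidence matrix and checks the column structure noted above:

    arcs = [(1, 2), (1, 3), (2, 3), (3, 4), (2, 4)]   # (tail, head) pairs
    n = 4
    N = [[0] * len(arcs) for _ in range(n)]
    for col, (i, j) in enumerate(arcs):
        N[i - 1][col] = 1     # arc (i, j) leaves node i ...
        N[j - 1][col] = -1    # ... and enters node j, so the column is e_i - e_j

    # each column has exactly one +1 and one -1, so every column sums to zero
    assert all(sum(N[row][col] for row in range(n)) == 0
               for col in range(len(arcs)))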

Figure 1.1. (a) An example network. (b) Its node-arc incidence matrix.

The assignment problem is a minimum cost flow problem on a network G = (N1 ∪ N2, A), with A ⊆ N1 × N2 representing possible person-to-object assignments, b(i) = 1 for all i ∈ N1, b(i) = -1 for all i ∈ N2 (we set l_ij = 0 and u_ij = 1 for all (i, j) ∈ A), and a cost c_ij associated with each element (i, j) in A. In this problem we wish to assign each person to exactly one object in a way that minimizes the cost of the assignment.

Physical Networks

The familiar city street map is perhaps the prototypical physical network, and the one that most readily comes to mind when we envision a network. Many network planning problems arise in this problem context. As one illustration, consider the problem of managing, or designing, a street network to decide upon such issues as speed limits, one way street assignments, or whether or not to construct a new road or bridge. In order to make these decisions intelligently, we need a descriptive model that tells us how to model traffic flows and measure the performance of any design, as well as a predictive model for measuring the effect of any change in the system. We can then use these models to answer a variety of "what if" planning questions.

The following type of equilibrium network flow model permits us to answer these types of questions. Each link of the network has an associated delay function that specifies how long it takes to traverse the link. The time to do so depends upon traffic conditions: the more traffic that flows on the link, the longer the travel time to traverse it. Now suppose that each user of the system has a point of origin (e.g., his or her home) and a point of destination (e.g., his or her workplace in the central business district). Each of these users must choose a route through the network. Note, however, that these route choices affect each other: if two users traverse the same link, they add to each other's travel time because of the added congestion on the link. Now let us make the behavioral assumption that each user wishes to travel between his or her origin and destination as quickly as possible, that is, along a shortest travel time path. This situation leads to the following equilibrium problem with an embedded set of network optimization problems (shortest path problems): is there a flow pattern in the network with the property that no user can unilaterally change his (or her) choice of origin to destination path (that is, while all other users continue to use their specified paths in the equilibrium solution) to reduce his travel time? Operations researchers have developed a set of sophisticated models for this problem setting, as well as related theory (concerning, for example, the existence and uniqueness of equilibrium solutions), and algorithms for computing equilibrium solutions.

Used in the mode of "what if" scenario analysis, these models permit analysts to answer the type of questions we posed previously. These models are actively used in practice; indeed, the Urban Mass Transit Authority in the United States requires that communities perform a network equilibrium impact analysis as part of the process for obtaining federal funds for highway construction or improvement.

Similar types of models arise in many other problem contexts. For example, a network equilibrium model forms the heart of the Project Independence Energy Systems (PIES) model developed by the U.S. Department of Energy as an analysis tool for guiding public policy on energy. The basic equilibrium model of electrical networks is another example: Ohm's Law serves as the analog of the congestion function for the traffic equilibrium problem, and Kirchhoff's Law represents the network mass balance equations.

Another type of physical network is a very large-scale integrated circuit (VLSI circuit). In this setting the nodes of the network correspond to electrical components and the links correspond to wires that connect these components. Numerous network planning problems arise in this problem context, for example: how can we lay out, or design, the smallest possible integrated circuit that makes the necessary connections between its components and maintains the necessary separations between the wires (to avoid electrical interference)?

Route Networks

Route networks, which are one level of abstraction removed from physical networks, are familiar to most students of operations research and management science. The traditional operations research transportation problem is illustrative. A shipper with supplies at its plants must ship to geographically dispersed retail centers, each with a given customer demand. Each arc connecting a supply point to a retail center incurs costs based upon some physical network, in this case the transportation network. For example, an arc connecting a supply point and a retail center might correspond to a complex four leg distribution channel with legs (i) from a plant (by truck) to a rail station, (ii) from the rail station to a rail head elsewhere in the system, (iii) from the rail head (by truck) to a distribution center, and (iv) from the distribution center (on a local delivery truck) to the final customer (or in some cases just to the distribution center). Rather than solving the problem directly on the physical network, we preprocess the data and construct transportation routes, assigning to each arc the composite distribution cost of its route. The classic problem then becomes a network transportation model: find the flows from plants to customers that minimize overall costs. This type of model is used in numerous applications. As but one illustration, a prize winning practice paper written several years ago described an application of such a network planning system by the Cahill May Roberts Pharmaceutical Company (of Ireland) to reduce overall distribution costs by 20%, while improving customer service as well.

Many related problems arise in this type of problem setting, for instance, the design issue of deciding upon the location of the distribution centers. It is possible to address this type of decision problem using integer programming methodology for choosing the distribution sites and network flows to cost out (or optimize flows for) any given choice of sites. Using this approach, a noted study conducted several years ago permitted Hunt Wesson Foods Corporation to save over $1 million annually.

One special case of the transportation problem merits note: the assignment problem that we introduced previously in this section. This problem has numerous applications, particularly in problem contexts such as machine scheduling. In this problem context, we would identify the supply points with jobs to be performed, the demand points with available machines, and the cost associated with arc (i, j) as the cost of completing job i on machine j, assuming that each machine has the capacity to perform only one job. The solution to the problem specifies the minimum cost assignment of the jobs to the machines.

Space Time Networks

Frequently in practice, we wish to schedule some production or service activity over time. In these instances it is often convenient to formulate a network flow problem on a "space-time network" with several nodes representing a particular facility (a machine, a warehouse, an airport) but at different points in time.

Figure 1.2, which represents a core planning model in production planning, the economic lot size problem, is an important example. In this problem context, we wish to meet prescribed demands d_t for a product in each of the T time periods. In each period, we can produce at level x_t and/or we can meet the demand by drawing upon inventory I_t from the previous period. The network representing this problem has T + 1 nodes: one node t = 1, 2, ..., T represents each of the planning periods, and one node represents the "source" of all production. The flow on arc (0, t) prescribes the production level x_t in period t, and the flow on arc (t, t+1) represents the inventory level I_t to be carried from period t to period t + 1. The mass balance equation for each period t models the basic accounting equation: incoming inventory plus production in that period must equal demand plus final inventory. The mass balance equation for node 0 indicates that all demand (assuming zero beginning and zero final inventory over the entire planning period) must be produced in some period t = 1, 2, ..., T. Whenever the production and holding costs are linear, this problem is easily solved as a shortest path problem: for each demand period, we must find the minimum cost path of production and inventory arcs from node 0 to that demand point. If we impose capacities on production or inventory, the problem becomes a minimum cost network flow problem.

Figure 1.2. Network flow model of the economic lot size problem.

One extension of this economic lot sizing problem arises frequently in practice. Assume that production x_t in any period incurs a fixed cost: that is, whenever we produce in period t (i.e., x_t > 0), no matter how much or how little, we incur a fixed cost h_t. In addition, we may incur a per unit production cost c_t in period t and a per unit inventory cost for carrying any unit of inventory from period t to period t + 1. Hence, the cost on each arc for this problem is either linear (for inventory carrying arcs) or linear plus a fixed cost (for production arcs). Consequently, the objective function for this problem is concave.

As we indicate in Section 2, any such concave cost network flow problem always has a special type of optimum solution known as a spanning tree solution. This problem's spanning tree solution decomposes into disjoint directed paths; the first arc on each path is a production arc (of the form (0, t)) and each other arc is an inventory carrying arc. This observation implies the following production property: in the solution, each time we produce, we produce enough to meet the demand for an integral number of contiguous periods. Moreover, in no period do we both carry inventory from the previous period and produce.

The production property permits us to solve the problem very efficiently as a shortest path problem on an auxiliary network G' defined as follows. The network G' consists of nodes 1 to T + 1, and for every pair of nodes i and j with i < j, it contains an arc (i, j). The length of arc (i, j) is equal to the production and inventory cost of satisfying the demand of the periods from i to j − 1. Observe that for every production schedule satisfying the production property, G' contains a directed path from node 1 to node T + 1 of the same objective function value, and vice-versa. Hence we can obtain the optimum production schedule by solving a shortest path problem.

Many enhancements of the model are possible, for example: (i) the production facility might have limited production capacity or limited storage for inventory, or (ii) the facility might be producing several products that are linked by common production costs or by changeover costs (for example, we may need to change dies in an automobile stamping plant when making different types of fenders). In most cases, these enhanced models are quite difficult to solve (they are NP-complete), though the embedded network structure often proves to be useful in designing either heuristic or optimization methods.

Another classical network flow scheduling problem is the airline scheduling problem used to identify a flight schedule for an airline. In this application setting, each node represents both a geographical location (e.g., an airport) and a point in time (e.g., New York at 10 A.M.). The arcs are of two types: (i) service arcs connecting two airports, for example New York at 10 A.M. to Boston at 11 A.M.; (ii) layover arcs that permit a plane to stay at New York from 10 A.M. until 11 A.M. to wait for a later flight, or to wait overnight at New York from 11 P.M. until 6 A.M. the next morning. If we identify revenues with each service leg, a flow in this network (with no external supply or demand) will specify a set of flight plans (a circulation of airplanes through the network). A flow that maximizes revenue will prescribe a schedule for an airline's fleet of planes.
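Returning to the economic lot size problem, the auxiliary-network construction described above reduces to a shortest path calculation on an acyclic graph. A minimal sketch, assuming a hypothetical fixed-plus-holding cost model (the data and cost function below are illustrative, not taken from the text):

    T = 4
    d = [0, 3, 2, 4, 1]          # d[t], t = 1..T (index 0 unused)
    fixed, hold = 10.0, 1.0      # hypothetical fixed production and unit holding costs

    def cost(i, j):
        # produce d[i] + ... + d[j-1] in period i; hold each unit until its period
        return fixed + sum(hold * (t - i) * d[t] for t in range(i, j))

    # shortest path from node 1 to node T+1 in the acyclic network G':
    # dist[j] = cheapest way to satisfy the demands of periods 1..j-1
    INF = float("inf")
    dist = [INF] * (T + 2)
    dist[1] = 0.0
    for j in range(2, T + 2):
        dist[j] = min(dist[i] + cost(i, j) for i in range(1, j))
    print(dist[T + 1])   # cost of an optimal production schedule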

The foUovdng examples illustrate this Single Duty Crew Scheduling. Time Period/Duty Number .3 illustrates a number of possible duties for the drivers of a bus company. Figure 1. point.11 of planes. The same type of network representation arises in many other dynamic scheduling applications. Derived Networks This category a "grab is bag" of specialized applications and illustrates that arise in surprising sometimes network flow problems ways from problems that on the surface might not appear to involve networks.

Suppose that a single driver must be on duty in each hour of service and that we wish to cover the hours at minimum cost. If c_j denotes the cost of the j-th duty, this problem can be formulated as follows:

    minimize  cx                                                         (1.2a)

    subject to

    Ax = b,  x_j = 0 or 1 for each duty j.                               (1.2b)

In this formulation the binary variable x_j indicates whether (x_j = 1) or not (x_j = 0) we select the j-th duty; the matrix A represents the matrix of duties, and b is a column vector whose components are all 1's. Observe that the 1's in each column of A occur in consecutive rows, because each driver's duty contains a single work shift (no split shifts or work breaks).

We show that this problem is a shortest path problem. To make this identification, we perform the following operations: in (1.2b), subtract each equation from the equation below it, and then add a redundant equation equal to minus the sum of all the equations in the revised system. This transformation does not change the solution to the system. Because of the structure of A, each column in the revised system will have a single +1 (corresponding to the first hour of the duty in the column of A) and a single -1 (corresponding to the row just below the last row of the duty in the column of A, or to the added row). Moreover, the revised right hand side vector of the problem will have a +1 in row 1 and a -1 in the last (the appended) row. Therefore, the problem is to ship one unit of flow from node 1 to node 9 at minimum cost in the network given in Figure 1.4, which is an instance of the shortest path problem.

Figure 1.4. Shortest path formulation of the single duty scheduling problem.

If instead of requiring a single driver to be on duty in each period, we specify a number of drivers to be on duty in each period, the same transformation would produce a network flow problem, but in this case the right hand side coefficients (supplies and demands) could be arbitrary. Therefore, the transformed problem would be a general minimum cost network flow problem, rather than a shortest path problem.

Critical Path Scheduling and Networks Derived from Precedence Conditions

In construction and many other project planning applications, workers need to complete a variety of tasks that are related by precedence conditions; for example, in constructing a house, a builder must pour the foundation before framing the house, and complete the framing before beginning to install either electrical or plumbing fixtures.

This type of application can be formulated mathematically as follows. Suppose we need to complete J jobs and that job j (j = 1, 2, ..., J) requires t_j days to complete. We are to choose the start time s_j of each job j so that we honor a set of specified precedence constraints and complete the overall project as quickly as possible. For convenience of notation, we add two dummy jobs, both with zero processing time: a "start" job 0 that must be completed before any other job can begin, and a "completion" job J + 1 that cannot be initiated until we have completed all other jobs. If we represent the jobs by nodes, then the precedence constraints can be represented by arcs, thereby giving us a network; the precedence constraints imply that for each arc (i, j) in the network, job j cannot start until job i has been completed. Let G = (N, A) represent the network corresponding to the augmented project. Then we wish to solve the following optimization problem:

    minimize  s_{J+1} − s_0

    subject to

    s_j ≥ s_i + t_i,  for each arc (i, j) ∈ A,

which is a linear program in the variables s_j. On the surface, this problem seems to bear no resemblance to network optimization. Note, however, that if we move the variable s_i to the left hand side of the constraint, then each constraint contains exactly two variables, one with a plus one coefficient and one with a minus one coefficient. The linear programming dual of this problem has a familiar structure: if we associate a dual variable x_ij with each arc (i, j), then the dual of this problem is

    maximize  ∑_{(i,j)∈A} t_i x_ij

    subject to

    ∑_{j:(i,j)∈A} x_ij − ∑_{j:(j,i)∈A} x_ji = b(i),  for all i ∈ N,

    x_ij ≥ 0,  for all (i, j) ∈ A,

where b(0) = 1, b(J+1) = -1, and b(i) = 0 otherwise.
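As the discussion below makes precise, this dual is a longest path computation on the precedence network. A minimal sketch, assuming the jobs are indexed in an order compatible with the precedences (the jobs, durations, and arcs below are hypothetical):

    t = {0: 0, 1: 3, 2: 2, 3: 4, 4: 0}                # durations; jobs 0 and 4 are dummies
    arcs = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]   # (i, j): job i must finish before j starts

    # s[j] = earliest start of job j = length of the longest path from node 0 to j
    s = {j: 0 for j in t}
    for i, j in arcs:            # valid because arcs run forward in the job ordering
        s[j] = max(s[j], s[i] + t[i])
    print(s[4])                  # minimum overall project duration, s_{J+1} - s_0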


This problem requires us to determine the longest path in the network G from node 0 to node J + 1, with t_i as the arc length of arc (i, j). This longest path has the following interpretation: it is the longest sequence of jobs needed to fulfill the specified precedence conditions. Since delaying any job in this sequence must necessarily delay the completion of the overall project, this path has become known as the critical path, and the associated problem has become known as the critical path problem. This model has become a principal tool in project management, particularly for managing large-scale construction projects. The critical path itself is important because it identifies those jobs that require managerial attention in order to complete the project as quickly as possible.

Researchers and practitioners have enhanced this basic model in several ways. For example, if resources are available for expediting individual jobs, we could consider the most efficient use of these resources to complete the overall project as quickly as possible. Certain versions of this problem can be formulated as minimum cost flow problems.

The open pit mining problem is another network flow problem that arises from precedence conditions. Consider the open pit mine shown in Figure 1.5. As shown in this figure, we have divided the region to be mined into blocks. The provisions of any given mining technology, and perhaps the geography of the mine, impose restrictions on how we can remove the blocks: for example, we can never remove a block until we have removed any block that lies immediately above it, and restrictions on the "angle" of mining the blocks might impose similar precedence conditions. Suppose now that each block j has an associated revenue r_j (e.g., the value of the ore in the block minus the cost of extracting the block) and we wish to extract blocks so as to maximize total revenue. If we let y_j be a zero-one variable indicating whether (y_j = 1) or not (y_j = 0) we extract block j, the problem will contain (i) a constraint y_j ≤ y_i (or, y_j − y_i ≤ 0) whenever we need to mine block i before block j, and (ii) an objective function specifying that we wish to maximize the total revenue ∑ r_j y_j, summed over all blocks j. The dual of the linear programming version of this problem (with the constraints 0 ≤ y_j ≤ 1 rather than y_j = 0 or 1) will be a network flow problem with a node for each block, a variable for each precedence constraint, and the revenue r_j as the demand at node j. This network will also have a dummy "collection node" 0 with demand equal to minus the sum of the r_j's, and an arc connecting it to each node j.

Each such arc corresponds to an upper bound constraint y_j ≤ 1 in the original linear program, and the dual problem is one of finding a network flow that minimizes the sum of the flows on the arcs incident to node 0.

The critical path scheduling problem and the open pit mining problem illustrate one way that network flow problems arise indirectly. Whenever two variables in a linear program are related by a precedence condition, the variable corresponding to this precedence constraint in the dual linear program will have a network flow structure; if the only constraints in the problem are precedence constraints, the dual linear program will be a network flow problem.

Matrix Rounding of Census Information

The U.S. Census Bureau uses census information to construct millions of tables for a wide variety of purposes. By law, the Bureau has an obligation to protect the source of its information and not disclose statistics that can be attributed to any particular individual. It can attempt to do so by rounding the census information contained in any table. Consider, for example, the data shown in Figure 1.6(a). Since the upper leftmost entry in this table is a 1, the tabulated information might disclose information about a particular individual. We might disguise the information in this table as follows: we round each entry in the table, including the row and column sums, either up or down to a multiple of three, say, so that the entries in the table continue to add to the (rounded) row and column sums, and the overall sum of the entries in the new table adds to a rounded version of the overall sum in the original table. Figure 1.6(b) shows a rounded version of the data that meets this criterion.

The problem of finding such a rounding can be cast as finding a feasible flow in a network and can be solved by an application of the maximum flow algorithm. The network contains a node for each row in the table and a node for each column. It contains an arc connecting node i (corresponding to row i) and node j (corresponding to column j); the flow on this arc should be the ij-th entry in the prescribed table, rounded either up or down. In addition, we add a supersource s to the network connected to each row node i; the flow on this arc must be the i-th row sum, rounded up or down. Similarly, we add a supersink t with an arc connecting each column node j to this node; the flow on this arc must be the j-th column sum, rounded up or down. We also add an arc connecting node t and node s; the flow on this arc must be the sum of all the entries, rounded up or down. Figure 1.7 illustrates the network flow problem corresponding to the census data specified in Figure 1.6.
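A minimal sketch of this network construction, assuming a hypothetical two-row table and rounding base (any feasible circulation respecting the computed bounds yields a consistent rounding; solving for such a circulation is omitted):

    base = 3
    table = [[1, 4, 7], [6, 2, 5]]
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]

    def bounds(v):
        lo = (v // base) * base          # round down to a multiple of base
        hi = lo if v % base == 0 else lo + base
        return lo, hi                    # flow on the arc must lie in [lo, hi]

    # one arc per table entry (row node -> column node), plus arcs s -> row_i,
    # col_j -> t, and the return arc t -> s carrying the rounded overall sum
    arcs = {("r%d" % i, "c%d" % j): bounds(v)
            for i, row in enumerate(table) for j, v in enumerate(row)}
    arcs.update({("s", "r%d" % i): bounds(v) for i, v in enumerate(rows)})
    arcs.update({("c%d" % j, "t"): bounds(v) for j, v in enumerate(cols)})
    arcs[("t", "s")] = bounds(sum(rows))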

Figure 1.6. (a) Census data classified by time in service (hours) and income (less than $10,000; $10,000-$30,000; $30,000-$50,000; more than $50,000), with row and column totals. (b) A rounded version of the data.

If we rescale all the flows, measuring them in integral units of the rounding base (multiples of 3 in our example), then the flow on each arc must take one of two consecutive integral values. The formulation of a more general version of this problem, corresponding to tables with more than two dimensions, will not be a network flow problem. Nevertheless, these problems have an imbedded network structure (corresponding to 2-dimensional "cuts" in the table) that we can exploit in devising algorithms to find rounded versions of the tables.

1.2 Complexity Analysis

There are three basic approaches for measuring the performance of an algorithm: empirical analysis, worst-case analysis, and average-case analysis. The major objective of empirical analysis is to estimate how algorithms behave in practice; it typically measures the computational time of an algorithm using statistical sampling on a distribution (or several distributions) of problem instances. Worst-case analysis aims to provide upper bounds on the number of steps that a given algorithm can take on any problem instance; therefore, this type of analysis provides performance guarantees. The objective of average-case analysis is to estimate the expected number of steps taken by an algorithm. Average-case analysis differs from empirical analysis because it provides rigorous mathematical proofs of average-case performance, rather than statistical estimates.

Each of these three performance measures has its relative merits and is appropriate for certain purposes. Nevertheless, this chapter will focus primarily on worst-case analysis, and only secondarily on empirical behavior. Researchers have designed many of the algorithms described in this chapter specifically to improve worst-case complexity while simultaneously maintaining good empirical behavior. Thus, for the algorithms we present, worst-case analysis is our primary measure of performance.

Worst-Case Analysis

For worst-case analysis, we bound the running time of network algorithms in terms of several basic problem parameters: the number of nodes (n), the number of arcs (m), and upper bounds C and U on the cost coefficients and the arc capacities. Whenever C (or U) appears in the complexity analysis, we assume that each cost (or capacity) is integer valued. As an example of a worst-case result within this chapter, we will prove that the number of steps for the label correcting algorithm to solve the shortest path problem is less than pnm for some sufficiently large constant p.

To avoid the need to compute or mention the constant p, researchers typically use a "big O" notation, replacing expressions like "the label correcting algorithm requires pnm steps for some constant p" with the equivalent expression "the running time of the label correcting algorithm is O(nm)." The O( ) notation avoids the need to state a specific constant; instead, it indicates only the dominant terms of the running time. By dominant, we mean the term that would dominate all other terms for sufficiently large values of n and m. For example, if the actual running time is 10nm^2 + 2^100 n^2 m, then we would state that the running time is O(nm^2), assuming that m ≥ n. Observe that this statement declares the 10nm^2 term dominant even though, for most practical values of n and m, the 2^100 n^2 m term would dominate. Although ignoring the constant terms may have this undesirable feature, researchers have widely adopted the O( ) notation for several reasons:

1. Ignoring the constants greatly simplifies the analysis. Consequently, the use of the O( ) notation typically has permitted analysts to avoid the prohibitively difficult analysis required to compute the leading constants, a practice which, in turn, has led to a flourishing of research on the worst-case performance of algorithms.

2. Estimating the constants correctly is fundamentally difficult. The least value of the constants is not determined solely by the algorithm; it is also highly sensitive to the choice of the computer language, and even to the choice of the computer.

3. For all of the algorithms that we present, the constant terms are relatively small integers for all the terms in the complexity bound.

4. For large practical problems, the constant factors do not contribute nearly as much to the running time as do the factors involving n, m, C or U.

Counting Steps

The running time of a network algorithm is determined by counting the number of steps it performs. The counting of steps relies on a number of assumptions, most of which are quite appropriate for most of today's computers.

A1.1 The computer carries out instructions sequentially, with at most one instruction being executed at a time.

A1.2 Each comparison and basic arithmetic operation counts as one step.

By invoking A1.1, we are adhering to a sequential model of computation; we will not discuss parallel implementations of network flow algorithms. Assumption A1.2 implicitly assumes that the only operations to be counted are comparisons and arithmetic operations. Our assumption that each operation, be it an addition or a division, takes equal time is justified by the fact that O( ) notation ignores differences in running times of at most a constant factor, which is the time difference between an addition and a multiplication on essentially all modern computers.

On the other hand, the assumption that each arithmetic operation takes one step may lead us to underestimate the asymptotic running time of arithmetic operations involving very large numbers on real computers since, in practice, a computer must store large numbers in several words of its memory. Therefore, to perform each operation on very large numbers, a computer must access a number of words of data and thus takes more than a constant number of steps. To avoid a systematic underestimation of the running time, we will typically assume that both C and U are polynomially bounded in n, i.e., C = O(n^k) and U = O(n^k), for some constant k. This assumption, known as the similarity assumption, is quite reasonable in practice. For example, if we were to restrict costs to be less than 100n^3, we would allow costs to be as large as 100,000,000,000 for networks with 1000 nodes.

Polynomial-Time Algorithms

An algorithm is said to be a polynomial-time algorithm if its running time is bounded by a polynomial function of the input length. The input length of a problem is the number of bits needed to represent that problem. For a network problem, the input length is a low order polynomial function of n, m, log C and log U (e.g., it is O((n + m)(log n + log C + log U))). Consequently, researchers refer to a network algorithm as a polynomial-time algorithm if its running time is bounded by a polynomial function in n, m, log C and log U. For example, the running time of one of the polynomial-time maximum flow algorithms we consider is O(nm + n^2 log U). Other instances of polynomial-time bounds are O(n^2 m) and O(n log n).
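As a rough illustration of the input length measure, the following sketch counts the bits of a binary encoding of a network instance. The encoding convention (two node identifiers, a cost, and a capacity per arc) is one reasonable choice, not the only one:

    import math

    def bits(v):
        return max(1, math.ceil(math.log2(v + 1)))

    def input_length(n, m, C, U):
        # each arc needs two node identifiers, a cost, and a capacity
        return m * (2 * bits(n) + bits(C) + bits(U))

    # under the similarity assumption C, U = O(n^k), both log C and log U are
    # O(log n), so the input length is a low order polynomial in n, m, log C, log U
    print(input_length(n=1000, m=5000, C=100 * 1000**3, U=1000))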

A polynomial-time algorithm is said to be a strongly polynomial-time algorithm if its running time is bounded by a polynomial function in only n and m, and does not involve log C or log U. The maximum flow algorithm alluded to above, therefore, is not a strongly polynomial-time algorithm. The interest in strongly polynomial-time algorithms is primarily theoretical: if we invoke the similarity assumption, all polynomial-time algorithms are strongly polynomial-time, because log C = O(log n) and log U = O(log n).

An algorithm is said to be an exponential-time algorithm if its running time grows as a function that cannot be polynomially bounded. Some examples of exponential time bounds are O(nC), O(2^n), O(n!) and O(n^log n). We say that an algorithm is a pseudopolynomial-time algorithm if its running time is polynomially bounded in n, m, C and U. (Observe that nC cannot be bounded by a polynomial function of n and log C.) The class of pseudopolynomial-time algorithms is an important subclass of exponential-time algorithms; some instances of pseudopolynomial-time bounds are O(m + nC) and O(mC). For problems that satisfy the similarity assumption, pseudopolynomial-time algorithms become polynomial-time algorithms, but the algorithms will not be attractive if C and U are high degree polynomials in n.

There are two major reasons for preferring polynomial-time algorithms to exponential-time algorithms. First, any polynomial-time algorithm is asymptotically superior to any exponential-time algorithm, even in extreme cases (although the value of n at which the polynomial bound overtakes the exponential bound may be enormous). Figure 1.8 illustrates the asymptotic superiority of polynomial-time algorithms. The second reason is more pragmatic: much practical experience has shown that, as a rule, polynomial-time algorithms perform better than exponential-time algorithms. Moreover, the polynomials encountered in practice are typically of a small degree.

Figure 1.8. Approximate values of polynomial and exponential running time functions.

1.3 Notation and Definitions

We consider a directed network G = (N, A) consisting of a node set N and an arc set A, with a cost c_ij and a capacity u_ij associated with each arc (i, j) ∈ A; we let n = |N| and m = |A|. In this chapter, we distinguish two special nodes: the source s and the sink t.

An arc (i, j) has two end points, i and j. We refer to node i as the tail and node j as the head of arc (i, j), and say that the arc (i, j) is incident to nodes i and j. The arc (i, j) emanates from node i; it is an outgoing arc of node i and an incoming arc of node j. The arc adjacency list of node i, A(i) = {(i, j) ∈ A : j ∈ N}, is defined as the set of arcs emanating from node i. The degree of a node is the number of incoming and outgoing arcs incident to that node.

A directed path in G = (N, A) is a sequence of distinct nodes and arcs i1, (i1, i2), i2, (i2, i3), i3, ..., (i_{r-1}, i_r), i_r satisfying the property that (i_k, i_{k+1}) ∈ A for each k = 1, ..., r-1. An undirected path is defined similarly, except that for any two consecutive nodes i_k and i_{k+1} on the path, the path contains either arc (i_k, i_{k+1}) or arc (i_{k+1}, i_k). We refer to the nodes i2, i3, ..., i_{r-1} as the internal nodes of the path. A directed cycle is a directed path together with the arc (i_r, i1), and an undirected cycle is an undirected path together with the arc (i_r, i1) or (i1, i_r).

We shall often use the terminology path to designate either a directed or an undirected path, whichever is appropriate from context; if any ambiguity might arise, we shall explicitly state directed or undirected path. For simplicity of notation, we shall often refer to a path as a sequence of nodes i1 - i2 - ... - i_k when its arcs are apparent from the problem context. Alternatively, we shall sometimes refer to a path as a set of (or a sequence of) arcs without mention of the nodes. We shall use similar conventions for representing cycles.

Two nodes i and j are said to be connected if the graph contains at least one undirected path from i to j. A graph is said to be connected if all pairs of nodes are connected; otherwise, it is disconnected. We assume throughout that the graph G is connected. We refer to any set Q ⊆ A with the property that the graph G' = (N, A-Q) is disconnected, and no proper subset of Q has this property, as a cutset of G. A cutset partitions the graph into two sets of nodes, X and N-X; we shall alternatively represent the cutset Q as the node partition (X, N-X).

A graph G' = (N', A') is a subgraph of G = (N, A) if N' ⊆ N and A' ⊆ A. A graph G' = (N', A') is a spanning subgraph of G = (N, A) if N' = N and A' ⊆ A.

A graph is acyclic if it contains no cycle. A tree is a connected acyclic graph, and a subtree of a tree T is a connected subgraph of T. A tree T is said to be a spanning tree of G if T is a spanning subgraph of G. A spanning tree of G = (N, A) has exactly n-1 arcs. Arcs belonging to a spanning tree T are called tree arcs, and arcs not belonging to T are called nontree arcs. A node in a tree with degree equal to one is called a leaf node; each tree has at least two leaf nodes. A spanning tree contains a unique path between any two nodes. The addition of any nontree arc to a spanning tree creates exactly one cycle, and removing any arc in this cycle again creates a spanning tree. Removing any tree arc creates two subtrees; arcs whose end points belong to the two different subtrees constitute a cutset, and if any arc belonging to this cutset is added to the two subtrees, the resulting graph is again a spanning tree.

A graph G = (N, A) is called a bipartite graph if its node set N can be partitioned into two subsets N1 and N2 so that for each arc (i, j) in A, i ∈ N1 and j ∈ N2.

In this chapter, we assume that all logarithms are of base 2 unless we state otherwise; we represent the logarithm of a number b by log b.

1.4 Network Representations

The complexity of a network algorithm depends not only on the algorithm itself, but also upon the manner used to represent the network within the computer and the storage scheme used for maintaining and updating the intermediate results. The running time of an algorithm (either worst-case or empirical) can often be improved by representing the network more cleverly and by using improved data structures. In this section, we discuss some popular ways of representing a network.

In Section 1.1, we have already described the node-arc incidence matrix representation of a network. This scheme requires nm words to store a network, of which only 2m words have nonzero values; clearly, this representation is not space efficient. Another popular way to represent a network is the node-node adjacency matrix representation. This representation stores an n x n matrix I with the property that the element I_ij = 1 if arc (i, j) ∈ A, and I_ij = 0 otherwise. The arc costs and capacities are also stored in n x n matrices. This representation is adequate for very dense networks, but it is not attractive for storing a sparse network.

Figure 1.9. (a) A network example. (b) The forward star representation. (c) The reverse star representation. (d) The trace array.

The forward star representation numbers the arcs in a certain order: we first number the arcs emanating from node 1, then the arcs emanating from node 2, and so on. (This representation is also known as the incidence list representation in the computer science literature.) Arcs emanating from the same node can be numbered arbitrarily. We then sequentially store the (tail, head) and the cost of the arcs in this order. We also maintain a pointer with each node i, denoted by point(i), that indicates the smallest number in the arc list of an arc emanating from node i. Hence the outgoing arcs of node i are stored at positions point(i) to (point(i+1) - 1) in the arc list. If point(i) > point(i+1) - 1, then node i has no outgoing arc. For consistency, we set point(1) = 1 and point(n+1) = m+1. Figure 1.9(b) specifies the forward star representation of the network given in Figure 1.9(a).

The forward star representation allows us to determine efficiently the set of outgoing arcs of any node. To determine, simultaneously, the set of incoming arcs of any node efficiently, we need an additional data structure known as the reverse star representation. Starting from a forward star representation, we can create a reverse star representation as follows. We examine the nodes j = 1 to n in order and sequentially store the (tail, head) and the cost of the incoming arcs of node j. We also maintain a reverse pointer with each node i, denoted by rpoint(i), which denotes the first position in these arrays that contains information about an incoming arc at node i. For the sake of consistency, we set rpoint(1) = 1 and rpoint(n+1) = m+1. As earlier, we store the incoming arcs of node i at positions rpoint(i) to (rpoint(i+1) - 1). This data structure gives us the representation shown in Figure 1.9(c).

Observe that by storing both the forward and reverse star representations, we maintain a significant amount of duplicate information. We can avoid this duplication by storing arc numbers in the reverse star instead of the (tail, head) and the cost of the arcs. For example, arc (3, 2) has arc number 4 in the forward star representation, and arc (1, 2) has arc number 1; so instead of storing (tail, head) and cost, we can simply store the arc numbers, and once we know the arc numbers, we can always retrieve the associated information from the forward star representation. We store these arc numbers in an m-array trace; Figure 1.9(d) gives the complete trace array. The forward star and reverse star representations are probably the most popular ways to represent networks, both sparse and dense.
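A minimal sketch of these arrays, assuming 0-based arc numbering and a hypothetical five-arc network (the text's own example in Figure 1.9 uses 1-based numbering):

    n = 4
    arc_list = [(1, 2, 10), (1, 3, 4), (2, 4, 2), (3, 2, 1), (3, 4, 7)]  # (tail, head, cost)
    arc_list.sort()                          # arcs from node 1 first, then node 2, ...

    tail = [a[0] for a in arc_list]
    head = [a[1] for a in arc_list]
    cost = [a[2] for a in arc_list]

    # point[i] = first arc number (0-based) emanating from node i
    point = [0] * (n + 2)
    for t, _, _ in arc_list:
        point[t + 1] += 1
    for i in range(1, n + 2):
        point[i] += point[i - 1]
    # outgoing arcs of node i occupy positions point[i] .. point[i+1]-1

    # trace: incoming arcs of each node, stored as arc numbers (the reverse star)
    trace = sorted(range(len(arc_list)), key=lambda a: head[a])
    rpoint = [0] * (n + 2)
    for _, h, _ in arc_list:
        rpoint[h + 1] += 1
    for i in range(1, n + 2):
        rpoint[i] += rpoint[i - 1]
    # incoming arcs of node i are trace[rpoint[i] .. rpoint[i+1]-1]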

1.5 Search Algorithms

Search algorithms are fundamental graph techniques, and different variants of search lie at the heart of many network algorithms. In this section, we discuss two of the most commonly used search techniques: breadth-first search and depth-first search.

Search algorithms attempt to find all nodes in a network that satisfy a particular property. For purposes of illustration, let us suppose that we wish to find all the nodes in a graph G = (N, A) that are reachable through directed paths from a distinguished node s, called the source. At every point in the search procedure, each node in the network is in one of two states: marked or unmarked. The marked nodes are known to be reachable from the source, and the status of the unmarked nodes is yet to be determined. Initially, only the source node is marked. Subsequently, by examining admissible arcs, the search algorithm will mark more nodes: we call an arc (i, j) admissible if node i is marked and node j is unmarked, and inadmissible otherwise. Whenever the procedure marks a new node j by examining an admissible arc (i, j), we say that node i is a predecessor of node j, i.e., pred(j) = i. The algorithm terminates when the graph contains no admissible arcs. The following algorithm summarizes the basic iterative steps.

    algorithm SEARCH;
    begin
        unmark all nodes in N;
        mark node s;
        LIST := {s};
        while LIST ≠ ∅ do
        begin
            select a node i in LIST;
            if node i is incident to an admissible arc (i, j) then
            begin
                mark node j;
                pred(j) := i;
                add node j to LIST;
            end
            else delete node i from LIST;
        end;
    end;

When this algorithm terminates, it has marked all nodes in G that are reachable from s via a directed path, and the predecessor indices define a tree consisting of the marked nodes.

We use the following data structure to identify admissible arcs. (The same data structure is also used in the maximum flow and minimum cost flow algorithms discussed in later sections.) We maintain with each node i the list A(i) of arcs emanating from it; arcs in each list can be arranged arbitrarily. Each node has a current arc (i, j), which is the current candidate for being examined next. Initially, the current arc of node i is the first arc in A(i). The search algorithm examines the list sequentially: whenever the current arc is inadmissible, it makes the next arc in the list the current arc, and when it reaches the end of the list, it declares that the node has no admissible arc.

It is easy to show that the search algorithm runs in O(m + n) = O(m) time. Each iteration of the while loop either finds an admissible arc or does not; in the former case, the algorithm marks a new node and adds it to LIST, and in the latter case it deletes a marked node from LIST. Since the algorithm marks any node at most once, it executes the while loop at most 2n times.
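The following sketch transcribes this procedure into Python, with the graph given by adjacency lists; the current-arc bookkeeping mirrors the data structure just described. Selecting nodes from the front while adding to the rear makes LIST a queue (breadth-first search, as discussed below); selecting and adding at the same end would make it a stack (depth-first search):

    from collections import deque

    def search(A, s):
        marked = {s}
        pred = {s: None}
        LIST = deque([s])
        current = {i: 0 for i in A}            # current-arc position in each list A(i)
        while LIST:
            i = LIST[0]                        # select a node from the front of LIST
            # advance node i's current arc past inadmissible arcs
            while current[i] < len(A[i]) and A[i][current[i]] in marked:
                current[i] += 1
            if current[i] < len(A[i]):         # admissible arc (i, j) found
                j = A[i][current[i]]
                marked.add(j)
                pred[j] = i
                LIST.append(j)                 # queue discipline: add to the rear
            else:
                LIST.popleft()                 # no admissible arc: delete node i
        return pred                            # predecessor tree of all reachable nodes

    A = {1: [2, 3], 2: [4], 3: [4], 4: []}     # hypothetical adjacency lists
    print(search(A, 1))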

Now consider the effort spent in identifying the admissible arcs. For each node i, we scan the arcs in A(i) at most once. Hence, the search algorithm examines a total of ∑_{i∈N} |A(i)| = m arcs, and thus terminates in O(m) time.

The algorithm, as described, does not specify the order for examining and adding nodes to LIST. Different rules give rise to different search techniques. If the set LIST is maintained as a queue, i.e., nodes are always selected from the front and added to the rear, then the search algorithm selects the marked nodes in first-in, first-out order. This kind of search amounts to visiting the nodes in order of increasing distance from s, where the distance from s to i is measured as the minimum number of arcs in a directed path from s to i; it marks nodes in nondecreasing order of their distance from s. Therefore, this version of search is called a breadth-first search.

Another popular method is to maintain the set LIST as a stack, i.e., nodes are always selected from the front and added to the front; in this instance, the search algorithm selects the marked nodes in last-in, first-out order. This algorithm performs a deep probe, creating a path as long as possible, and backs up one node to initiate a new probe when it can mark no new nodes from the tip of the path. Hence, this version of search is called a depth-first search.

1.6 Developing Polynomial-Time Algorithms

Researchers frequently employ two important approaches to obtain polynomial-time algorithms for network flow problems: the geometric improvement (or linear convergence) approach, and the scaling approach. In this section, we briefly outline the basic ideas underlying these two approaches.

Geometric Improvement Approach

The geometric improvement approach shows that an algorithm runs in polynomial time if at every iteration it makes an improvement proportional to the difference between the objective function values of the current and optimum solutions. Let H be an upper bound on the difference in objective function values between any two feasible solutions. For most network problems, H is a function of n, m, C, and U; for instance, in the maximum flow problem H = mU, and in the minimum cost flow problem H = mCU. We assume, as usual, that all data are integral and that algorithms maintain integer solutions at intermediate stages of computations.
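Lemma 1.1 below formalizes this argument. As a quick, purely illustrative check of the O((log H)/α) behavior, the following toy simulation assumes an integer objective that always captures at least the guaranteed fraction of the remaining gap:

    import math

    def iterations(H, alpha):
        gap, k = H, 0
        while gap > 0:
            gap -= max(1, math.floor(alpha * gap))  # integrality forces a gain of at least 1
            k += 1
        return k

    for H in (10**3, 10**6, 10**9):
        print(H, iterations(H, alpha=0.01), math.log(H) / 0.01)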

Lemma 1.1. Suppose z^k is the objective function value of a minimization problem at the k-th iteration of an algorithm and z* is the minimum objective function value. Further, suppose that the algorithm guarantees that for every iteration k,

    (z^k − z^{k+1}) ≥ α(z^k − z*)                                        (1.3)

(i.e., the improvement at iteration k+1 is at least α times the total possible improvement) for some constant α with 0 < α < 1. Then the algorithm terminates in O((log H)/α) iterations.

Proof. The quantity (z^k − z*) represents the total possible improvement in the objective function value after the k-th iteration. Consider a consecutive sequence of 2/α iterations starting from iteration k. If in each of these iterations the algorithm improves the objective function value by at least α(z^k − z*)/2 units, then it would determine an optimum solution within these 2/α iterations. On the other hand, if in some iteration q the algorithm improves the objective function value by no more than α(z^k − z*)/2 units, then (1.3) implies that

    α(z^k − z*)/2 ≥ z^q − z^{q+1} ≥ α(z^q − z*),

and, therefore, the algorithm must have reduced the total possible improvement (z^k − z*) by a factor of 2 within these 2/α iterations. Since H is the maximum possible improvement and every objective function value is an integer, the algorithm must terminate within O((log H)/α) iterations. (We have stated this result for minimization versions of optimization problems; a similar result applies to maximization versions.)

The geometric improvement approach might be summarized by the statement "network algorithms that have a geometric convergence rate are polynomial-time algorithms." In order to develop polynomial-time algorithms using this approach, we can look for local improvement techniques that lead to large (i.e., fixed percentage) improvements in the objective function. The maximum augmenting path algorithm for the maximum flow problem and the maximum improvement algorithm for the minimum cost flow problem are two examples of this approach (see Sections 4.2 and 5.3).

Scaling Approach

Researchers have extensively used an approach called scaling to derive polynomial-time algorithms for a wide variety of network and combinatorial optimization problems. In this discussion, we describe the simplest form of scaling, which we call bit-scaling. Section 5.11 presents an example of a bit-scaling algorithm for the assignment problem.

adding leading zeros necessary to make each capacity K bits long. of Observation. the problem P2 approximates data to the second bit. Sections 4 and 5.. for each k = 2. more efficient than For example. describe polynomial-time algorithms for the maximum flow and minimum cost flow problems. we solve a problem P parametrically as a sequence of problems P^. Further. Let K = Flog Ul and would consider suppose if that we represent each arc capacity as a K bit binary number. The is scaling technique useful whenever reoptimization from a good starting solution solving the problem from scratch. The manner of defining arc capacities easily implies the following observation.-j serves as the starting solution for problem Pj^. . the optimum solution is of problem Pj^^. using more refined versions of scaling. The capacity an arc in P^ is tivice that in Pf^^j plus or 1. Pj^ the problem P^ approximates data to the first . Figure 1. . P3. K. : bit.29 the assignment problem. . is a better approximation until Pj^ = P. and each successive problem . . P2. consider a network flow problem whose largest arc capacity has value U.. .10 illustrates an example of this type of scaling. Using the bit-scaling technique. Then the its problem Pj^ the capacity of each arc as the k leading bits in binary representation.

30 100 <=^ (a) (b) PI : P2 100 P3: 010 (c) Figure 1. . (b) (c) Network with binary expansion of The problems Pj. (a) Network with arc capacities. P2. and P3.10. arc capacities. Example of a bit-scaling technique.

e. because of the following reasons. Consider. variants of it have led to improved algorithms for both the maximum flow and minimum cost flow problems.. for example._i is an excellent starting solution for problem Pj^ since Pj^. (ii) The optimal solution problem Pj.i by 2. whereas time bound is the scaling version of the labeling algorithm runs in the non-scaling version runs in latter O(nmU) time. vj^ < m because multiplying the flow X]^_^ by 2 takes care of the I's doubling of the capacities and the additional can increase the maximum increase the flow value by at most m units (if we add 1 to the capacity of any arc. maximum flow value for problem Pj.31 The following algorithm encodes a generic version algorithm BIT-SCALING. solution of Pi^_i can be easily reoptimized to obtain an Hence. end. the maximum and is xj^ flow problem. (iii) optimum For problems that satisfy the similarity assumption. simple scaling algorithm improves the running time dramatically. The former Thus this polynomial and the bound is only pseudopolynomial. reoptimization needs to be only a little more efficient by a factor of log n) than optimization. This approach works well (i) for these applications. For example. Pj^ denote an arc flow corresponding to its In the problem the capacity of an arc xj^.i plus or 1.. begin reoptimize using the obtain an optimum solution of end. begin obtain an for k : optimum to solution of P^. claissical easier to reoptimize such a maximum Section 4. Let vj^ denote the vj^. of the bit-scaling technique. the number of problems solved is OOog n).1 flow problem. of The problem P^ is generally easy to solve. 0(m^ log U) time.^ twice capacity in Pj^. In general.^ and Pj^ are quite similar. .i to Pj^. for this approach to work. Moreover. in part. the labeling algorithm as discussed in would perform the reoptimization in at most m augmentations. the optimum solution of Pj^. we obtain a feasible flow for Pj^. it then is we maximum flow from source to sink by at most 1). This approach is very robust. taking O(m^) time. = 2 K do optimum solution of Pj^. If we multiply the optimum flow 2vj^_'j for Pj^. Thus (i. Therefore.

for every directed path and f(q). j) 1 if arc (i. similarly. its models. Therefore. only consider these special types of solutions. Then we partially characterize optimal solutions to network flow problems and demonstrate that these problems always have certain special types of optimal solutions (so<alled cycle free solutions). We next establish several important connections between network flows and linear and integer programming.1 or as flows on paths and cycles. the basic decision variables are flows Xj: on arcs cycles (i. BASIC PROPERTIES OF As a NETWORK FLOWS we describe several basic prelude to the rest of this chapter. is contained in path p and is otherwise.32 2. in designing algorithms. j) equals the sum of the flows h(p) and f(q) for all paths p and cycles q that contain this arc. In the context of developing underlying theory. Notice that every set of path and cycle flows uniquely determines arc flows in a natural way: the flow xj. j). we will find alternate formulations. we need Finally. transformations of network flow problems.1). we discuss a few useful 2. We begin by showing how network flow problems can be modeled Section in either of two equivalent ways: as flows on arcs as in our formulation in 1. in this section properties of network flows. it worthwhile develop several connections between these In the arc formulation (1. on arc (i. the flow in on cycle which are defined p in P and every directed cycle q Q. The path and the network. and spanning tree Consequently. each view has own to advantages.1 Flow Decomposition Properties and Optimality Conditions It is natural to view network flow problems in either of two ways: as flows on arcs or as flows on paths and cycles. q. Then ^i3= I p€ P 5ij(p)h(p)+ X qe hf<i^^^^^- Q . We j) formalize this observation by defining some new notation: 5jj(p) 1 if equals (i. the flow on path p. or algorithms. 6jj(q) equals arc is contained in cycle q and otherwise. as the first step in our discussion. cycle formulation starts with an enumeration of the paths Its P and Q of decision variables are h(p).

into path and cycle If flows. otherwise the (i^. (i. In the former case ij^ we obtain a directed path p from the supply node some demand node consisting solely of arcs with positive flow. We lecist repeat this process with the redefined problem until the network contains no supply node (and hence no demand node). Note that one of these cases will occur within n steps. 12) mass balance constraint (1. Every directed path and cycle flow Conversely. i^ implies that some other arc carries positive We repeat this argument until either we encounter a demand node ig to or we revisit a previously examined node. we reduce the identify supply /demand of some node or the flow on some arc a cycle. and repeat the procedure. We terminate when for the redefined problem x = by the Clearly. 2. At most n+m paths and cycles have nonzero flow. If and in the latter case [b(iQ). nonnegative arc flow x can he represented as a directed path and cycle flow (though not necessarily uniquely) with the following two properties: C2. Then some arc i|) carries a positive flow. j) in we obtain a cycle q. as) path and cycle flows? The following result provides an affirmative answer to this question. these. must find a is cycle.1. Proof. we say that the flow is represented f is eis path flows and cycle flows and that the path flow vector h and cycle flow vector cycle flow representation of the flow. cycles C2.33 If the flow vector x is expressed in this way.h(p). - h(p) for each arc x^. If b(ijj) + h(p) and : = Xj. b(ij^) we = let h(p) = inin min (i. -b(ij^).. Then we select a transhipment node with at one outgoing arc with positive flow as the starting node.1: Theorem Flow Decomposition Property (Directed Case). Consequently. a path and Can we represent it reverse this process? That is. j) e p)]. a path.e. We give an algorithmic proof to show any feasible arc flow x can be decomposed Oq. the path and cycle . can we decompose any arc flow into (i. is a demand node then we stop. j) we let f(q) = min {x^: (i.2.1b) of node flow. In the light of our previous observations. Every path with positive flow connects a supply node of x to a demand node most of x. i^j Suppose supply node. j) € q) and redefine = Xj: - f(q) for each arc in q. and each time we we reduce the flow on some arc to zero. out of have nonzero flow. xj: we obtain a directed (xj: : cycle q. in this Ceise which 0. at m we need that ig is a to establish only the converse assertions. Now observe that each time we identify to zero. (i. p. we obtain a directed path. every has a unique representation as nonnegative arc flows. the original flow the sum of flows on the paths and cycles identified procedure. and redefine b(iQ) = b(iQ) .

j) .3. is that we extend the path (ij^ . A cycle q with > is called an augmenting 5jj(q) f(q) cycle with respect to a flow x e q. Flow Decomposition Property (Undirected Case). and can contain arcs with negative flows. The major modification . if < Xjj + < Ujj. Theorem 2. some node by adding an arc (ij^. The other steps can be modified accordingly. every arc flow x can be flow has a unique representation as arc flows. to a sink node of x.34 representation of the given flow x contains at most (n + m) total paths and cycles. Every path with positive flow connects a source node of x For every path and cycle. represented as an (undirected) path and cycle flow (though not necessarily uniquely) with the following three properties: C2. h(p) on each forward arc A path flow will be defined arc. has forward arcs and backward arcs which are defined as arcs along and opposite to the path's orientation. The flow decomposition property has one example.2. on p as a flow with value and -h(p) on each backward We define a cycle flow in the 5j. the paths and cycles can be undirected. Proof. to is be negative. our representation using the notation and -1 if valid v^th the following provision: we now define 6j. We need flow f(q) the concept of augmenting cycles with respect to a flow x. have nonzero flow. As enables us to compare any two solutions of a network flow problem in a particularly convenient way and to show how we can build one solution from another by a sequence of simple operations. is possible to state the decomposition property in a somewhat more general form that permits arc flows xj. even though the underlying network directed. C2. for each arc (i. At most n+m paths and cycles have nonzero flow. at most m cycles This proof at is similar to that of ij^_-j Theorem 2.5. of which there are It at most m cycles. these. final Each undirected path which has an orientation from its initial to its node. j) is a backward arc of the path or cycle. any arc with positive flow occurs as a forward arc and any arc with negative flow occurs as a backward arc. 6j:(q) is still In this more general setting.'j ij^) with positive flow or an arc ij^_| ) with negative flow.4.1.(p) and S^jCq) to be arc (i.(p) same way. In this Ccise. Every path and cycle Conversely. p. it a number of important consequences. out of C2.

j) 6 A (i.. j) e A k=l r (i. . Nx = b. f(q-)). moreover. arc . . j).e. . j) < qj^. .. is an augmenting cycle with respect to the flow x. zjj = 6ij(qi) f(qi) + 5jj(q2) f(q2) + . we can find (i.. each term between and the rightmost Ujj. j) is either a forward arc on each cycle q^. Ny = b. for each arc e That we add any (i. q^ that contains it or a backward arc on each cycle x^..35 In other words. for any arc (i. j) at most r < m cycle flows f(q])/ f(qj. - Then the difference vector z = y x satisfies the homogeneous equations Nz = Ny Nx = 0. j) we have + 6ij(q2) < yjj = Xjj + 5jj(q^) fCq^) f(q2) + .j)€A k=l . of these cycle flows qj^ to x. is. (i...) f(qj^^) Uj. by condition C2. q. j) e A (i. Consequently. Further.. if inequality in this expression has the for each cycle qj^ . + 6j:(qj(. < Xj. The cost of an augmenting cycle represents the change € A if in cost of a feasible solution we augment along the cycle with one unit of flow. We define the cost of an augmenting q as c(q) = V (i. i. yjj < Consequently. 0<y<u. same < sign. note (i. - i.. . change in flow cost for augmenting around cycle q with flow Suppose < X < u and that x and y are any two solutions to a network flow problem..) satisfying the property that for each arc of A. q2. flow decomposition implies that z can be represented as cycle flows. + SjjCqr) fCq^. . + 5jj(qr) f(qr) < Ujj. . q2 . 5jj(q). j) e A (i. each cycle q^ that ...e. the resulting solution remains feasible on each arc Hence. .. (i. qm that contains it. Therefore. The f(q) is c(q) f(q). Since y = x + z. j) Cj. the flow remains feasible if some positive amount of flow (namely cycle f(q)) is augmented around the cycle q. j) e A (i. Now q-j. q2.4 of the flow decomposition property. ..

cx* = cx and x result. The augmenting characterizing the cycle property permits us to formulate optimality conditions for optimum solution of the x* is minimum cost flow problem. Let X network flow problem. Suppose that X is any feasible solution. that an optimum solution of the minimum cost flow problem. and costs . if every augmenting cycle in the decomposition of x* . A feasible flow x is an optimum flow if and only if admits no negative cost augmenting cycle. Further. Further.4.ex.3: result.1.x has a 0. and that x ^ x*. 2J. Then y equals x plus the with respect to x. the cost of y equals the cost of x any two feasible solutions of a flow on at most m augmenting nicies and y he plus the cost of flow on the augmenting cycles. Much of the underlying theory of 2.x can be decomposed most m augmenting cycles and the sum of the costs of these cycles equals cx* . network flows stems from In the example.36 We have thus established the following important 2. Optimality Conditions. arc flows a simple observation concerning the example in Figure are given besides each arc.cx > Since x* is an optimum flow. Cycle Free and Spanning Tree Solutions We start by assuming that x is a feasible solution to the network flow problem minimize { cx : Nx = b and / ^x<u ) and that / = 0. nonnegative cost. Theorem Augmenting Cycle Property. is also an optimum flow. ex* < cx then one of these cycles must have a negative cost. then cx* . The augmenting into at If cycle property implies that the difference vector X* . We have thus obtained the following Theorem it 2.

. 4+e <!) cycle. 5 + 6^0. arc flows. that the cycle is a depending upon the sign of Consequently. Figure Improving flow around a being that all Let us assume for the time arcs are uncapacitated.e. select 6 in the interval -2 <6 < 3. The network in this figure contains flow around an undirected cycle. (at i. or 6 < 3.. A as the q/cle cost and say A. . and on at least 4 + 6 S 0.. Per unit change in cost = A = $2 + $1 + $3 Let us refer to this incremental cost negative. if the cycle cost were positive (i. in all our example. or 6 > at and again find a lower cost solution with the flow one arc in the cycle value zero.e. we set 6 as large as possible while preserving 4 - 3-6^0 and we no 8 S 0. note that the per unit incremental cost for this flow change cost of the clockwise arcs the sum minus the sum of the cost of counterclockvkdse arcs.. positive or zero cost cycle - $4 - $3 = $ -1.37 3. of all We can restate this observation in another way: to preserve nonnegativity flows. to minimize cost nonnegativity of that in the cycle. that is. i.$3 i 2.$4 3-e <D 2+e 4. Also. Note that adding a given amount this of flow 6 to all the arcs pointing in a clockwise direction all and subtracting flow from at arcs pointing in the counterclockwise direction preserves the mass balance is each of the node. we must on 6. Since the objective function -2 at depends linearly we optimize it by selecting 6 = 3 or 6 = which point one arc in the cycle has a flow value of zero.e. 2 + 6^0. longer have positive flow on arcs in the Similarly. we were to change C|2 from 2 to 4). we set 6 all = 3. then -2) we would decrease 6 as much as possible (i.1.e. Note new solution 6 = 3).

38 We (i) If can extend this observation in several ways: the per unit cycle cost A = 0. (i. upper bound (x^2 = ^ ^t 6 = 1). we are indifferent to all solutions in the interval -2 < 9 < 3 and therefore can again choose a solution as good as the original one but with the flow of at least arc in the cycle at value zero. j) between the lower and upper bounds imposed is restricted if its upon it. lies strictly (i.. at a given any time. one by choosing 6 = for -2 or 6 = 1.5: fundamental result: Theorem optimization Cycle Free Property. good as the original that is. . then the range of flows that preserves flows) feasibility Ceise -2 mass balances. this condition rules out any negative cost directed cycle with no upper bounds on its arc flows. Therefore. In this terminology. is at its some arc on the cycle. our prior observations apply to any cycle in a network. Note that the lower bound assumption imposed upon the objective value is necessary to rule out situations in which the flow change variable 6 in our prior argument can be made arbitrarily large in a negative cost cycle. Let us say that an arc (i. a solution x has the "cycle free property" entirely of free arcs.g. initial flow we can apply our previous argument repeatedly. j) is a p'ee arc with respect to a given feasible flow x if Xj. either the flow is zero (the lower bound) or Some observations additional notation will be helpful in encapsulating and summarizing our up to this point.. We will also say that arc flow xj. equals either its lower or if upper bound. problem minimize ex If the objective function value of the network { : Nx = b.e. the network contains no cycle made up In general. in this <6< and we can find a solution as 6. then at least one cycle free solution solves the problem. (ii) If we impose upper bounds on is the flow. At these values of the solution is cycle free. lower and upper bounds on 1. such as 6 units on all arcs. again an interval. for example. one cycle and establish the following 2. or arbitrarily small (negative) in a positive cost cycle. 1 <x <u } is bounded from below on the feasible region and the problem has a feasible solution. e.

39
useful to interpret the cycle free property in another way.

It is

Suppose

that the

network
nodes).

is

connected

(i.e.,

there

is

an undirected path connecting every two pairs of
is

Then, either a given cycle free solution x contains a free arc that

incident to

each node in the network, or

we

can add to the free arcs some restricted arcs so that the

resulting set S of arcs has the following three properties:

(i)
(ii)

S contains

all

the free arcs in the current solution,

S contaiT\s no undirected cycles, and

(iii)

No

superset of S satisfies properties

(i)

and
(i)

(ii).

We

will refer to

any

set

S of arcs satisfying

through

(iii) eis

a spanning tree of
a

the network

and any

feasible solution x for the

network together with
(At times

spanning

tree S

that contains all free arcs as a spanning tree solution.

we

will also refer to a

given cycle free solution x as a spanning tree solution, with the understanding that
restricted arcs

may

be needed to form the spanning tree

S.)

Figure
that
it

2.2. illustrates a

spanning
is)

tree

corresponding to a cycle free solution. Note
set of free arcs into a

may

be possible (and often
(e.g.,

to

complete the
wdth arc
(3,

spanning

tree

in several

ways

replace arc

(2, 4)

5) in

Figure

2.2(c)); therefore, a

given

cycle free solution can correspond to several spanning trees S.

We
If

will say that a

spanning tree solution x
this case, the

is

nondegenerate

if

the set of free arcs forms a spanning tree.
to the

In

spanning tree S corresponding
are not incident to)
all

flow x

is

unique.

the free arcs do

rot span

(i.e.,

the nodes, then any spanning tree corresponding to
arc's

this solution will contain at least

one arc whose flow equals the
vdll say that the

lower or upper

bound

of the arc.

In this case,

we

spanning

tree

is

degenerate.

40

(4,4)

(1,6)

(0,5)

(a)

An example network with

arc

flows and capacities represented as

(xj:, uj:

).

©
(b)

A cycle free solution.

<D

©
(c)

A

spanning

tree solution.

Figure

2.2.

Converting a cycle free solution to

a

spanning

tree solution.

41

When

restated in the terminology of spanning trees, the cycle free property
result in

becomes another fundamental

network flow theory.
If the objective

Theorem

2.6:

Spanning Tree Property.
problem
minimize
{ex:

function value of the network

optimization

Nx

=

b,

I

<x <

u]

is

bounded from below on the

feasible

region and the problem has a feasible solution

then at least one spanning tree solution solves the problem.

We
of the flow

might note

that the

spanning

tree property is valid for

concave cost versions
is

problem as

well,

i.e.,

those versions where the objective function

a concave
is

function of the flow vector
valid because
if

x.

This extended version of the spanning tree property
is

the incremental cost of a cycle

negative at

some

point, then the

incremental cost remains negative (by concavity) as

we augment

positive

amount

of

flow around the

cycle.

Hence,

we

can increase flow in a negative cost cycle until

at least

one arc reaches
2.3

its

lower or upper bound.

Networks, Linear and Integer Programming

The

cycle free property

and spanning

tree property

have many other important

consequences.

In particular, these

two properties imply

that

network flow theory bes

at

the cusp between

two

large

and important subfields of optimization—linear and integer

programming.

This positioning may, to a large extent, account for the emergence of
a cornerstone of mathematical

network flow theory as
Triangularity Property

programming.

Before establishing our

first

results relating

network flows
that

to linear

and integer
S has
at

programming, we
least

first

make

a

few observations. Note
is,

any spanning

tree

one

(actually at

lecist

two) leaf nodes, that
if

a

node

that is incident to only

one arc

in the

spanning

tree.

Consequently,

we

rearrange the rows and columns of the
is

node-arc incidence matrix of S so that the leaf node

row

1

and
-1,

its

incident arc
lies

is

column

1,

then

row

1

has only a single nonzero entry, a +1 or a
If
is

which

on the
its

diagonal of the node-arc incidence matrix.
incident arc from S, the resulting network

we now remove

this lecif

node and

a

spanning tree on the remaining nodes.
1

Consequently, by rearranging
for the

all

but

row and column
that

of the node-arc incidence matrix

spanning

tree,

we

can

now assume

row

2 has

-t-1

or

-1

element on the

42

diagonal and zeros

to the right of the diagonal.

Continuing

in this

way

permits us to
n-1

rearrange the node-arc incidence matrix of the spanning tree so that

its first

rows

is

lower triangular. Figure

2.3

shows

the resulting lower triangular form (actually, one of

several possibilities) for the spanning tree in Figure 2.2(c).

nodes
5

L =

the problem has a feasible solution. Relationship to Linear Programming The network flow problem with the which. Linear programs.1) is an integer But this observation implies that the diagonal element of components -1. we might expect to discover that extreme point . bounded from below on the feasible region. extreme point solutions. or b - Mx^ (2.2 shows that this integrality property is also valid in the more general situation in which the objective function is concave. of x' are integral as well: since the first U equals +1 or the first equation in (2. ako satisfy another well-known property: they always have.e. 1. continuing forward substitution by successively solving for one variable at a time shows that x^ integral. an arc lower or upper bound and the right hand side M has integer components (each equal to vector. we have established the following Theorem problem 2. Since. emalysis.43 Now further suppose that the / supply/demand vector b and lower and upper bound Then since every vectors and u have all integer components. Network flow problems are distinguished as the most important large class of problems with this prop>erty. always has an integer solution.8. 1 <x <u } the vectors solution. yr- equals -1). that solutions x with the property that x cannot be z.1). then the problem has at least one integer optimum Our observation at the end of Section 2. Integrality Property. network flow problems always have cycle free solutions. expressed tis a weighted combination of two other feasible solutions y and as x = ay + (l-a)z for some weight < a < 1. i. or generalizations with concave cost objective functions. component of 0. problems always have spanning fundamental result. in the parlance of convex is. and u are integer. implies that x| is integreil. as we have seen. now if we move x] to the right of the equality in for X 2 the right hand side remains this is integral and we can solve from the second equation. as the leist objective function ex is a linear program result shows. Since the spanning tree property ensures that network flow tree solutions. This argument shows that for problems with integral data. every spanning tree solution is integral. +1. If the objective value of the network optimization minimize is { ex: Nx = b. and b..

Theorem Extreme Point Property. uij. For network flow problems.e. y' yij and zij z' be the ujj components zjj of /ij < < xij < < or /jj < < (i. then it cannot be an extreme point. Theorem is 2. spanning tree solutions correspond to basic solutions. this result is is easy to establish. Let us now make one final connection between networks and linear and integer programming— namely. With the background developed already. and indeed they are as shown by the next result. Let x'. Proof. We can extend B to a basis of the constraint matrix by adding a Just as cycle free solutions for maximal number of columns. suppose that x not an extreme point and is represented as x = ay + (l-a)z with these vectors for which y and z differ.9. the columns B of the constraint matrix of a between their linear program corresponding to variables strictly lower and upper bounds are linearly independent. y^ and z^.M] for some basis B and that x = (x . < a< i. yjj network contains an imdirected cycle with not equal to Zij for any arc on the But by definition of the Therefore. Every spanning tree solution to a is network flow problem a basic solution and. components if x^. this cycle contains only free arcs in the solution x. as in our discussion of Figure 2. then the problem has an extreme point solution. N = [B. every basic solution a spanning tree solution. conversely. if x not a cycle free solution.1. xij j).x^) is a compatible partitioning of Also suppose that we eliminate the redundant row so that B is a nonsingular matrix. every extreme point is a cycle free solution. Then NjCz^ > ) which implies. First.. we define two feasible solutions y and z with the property is that X = (l/2)y + (l/2)z. every cycle free solution is an extreme point and. 1. < yjj < and " let Nj = 0' denote the submatrix of N corresponding to these arcs that the cycle. between program of the basis and the that integrality property. Consequently. it X is not an extreme point solution. since by perturbing the -6 flow by a small amount 6 and by a small amount around a cycle with free arcs. Conversely. by flow decomposition. I <x <u ) bounded from below on the feasible region and the problem has a feasible solution. conversely. network flow problems correspond to extreme points. then is not a cycle free solution. extreme points are usually represented algebraically as basic solutions. for these special solutions. Consider a linear form Ax = b and suppose x. In linear programming. minimize is { ex: Nx = b. if the objective value of the network optimization problem 2.10: Basis Property. Then .44 solutions and cycle free solutions are closely related.

by Xjj+ l^- in the problem formulation. equals the product of the diagonal elements in the triangular representation of the basis.+l. if all of square submatrices have determincmt equal to either 0. If it is totally 0. the -1. call a matrix it A unimodular unimodular of its its bases have determinants either +1 or <md call totally -1. or -1.11: minimum cost M 2. therefore. j) has a positive lower boimd l^y then we can replace Xjj. the triangularity property shows that the determinant of any basis (excluding the redundant row now). using an expansion of determinants by minors. it is easy to see that the determinant of S it the product of the determinants of the spanning trees and. As measured by the new 0. we describe some of these important transformations. network flow problem is totally unimodular. is it has determinant must correspond to a cycle free solution. of x' as it is possible to find each component sums and multiples of components of if b' =b - Mx^ and B. or How Since bases of are these notions related to network flows and the integrality property? N correspond to sparming trees. (Removing Nonzero Lower Bounds). and u are all integers. or to put a network problem into a standard form required by a computer code. unimodular. / A corresponds to a basic feasible solution x and the problem data A. which a spanning tree on each is of its connected components. a node-arc incident matrix let is unimodular.) The constraint matrix of a Theorem Total Unimodularity Property. - Also. Xy. In this subsection. Consequently. vector whenever x^. the b. For Otherwise. j) will have a lower bound of This transformation has a . then x^ is an integer if and M are composed In particular. analysts use network transformations to simplify a network problem. and therefore equals +1 or -1. 2. divided by det(B). then x^ if all and consequently x^ is an integer. Tl.45 Bx^ = b . determinant of B. provides this totally an alternate proof of unimodular property. partitioning of b. Therefore.Mx^. If an arc (i. to show equivalences of different network problems. Even more.4 Network Transformations Frequently. S is singular. Let us -1. But then. it S be any square submatrix of N. variable the flow on arc (i. must be equal to 4l (An induction argument. or x^ = B-^(b Mx^). by Cramer's rule from linear algebra. the determinant of B equals +1 or of all integers.

^ a positive (i. <D then Transforming If {Removing Capacities). appear in exactly two constraints-in one with the positive sign and in the other with the negative sign. j) (Cij'Uij-V CD lower bound to zero. now appears in three mass balance constraints and j. O Removing ^©< t I © Xjj. b(j) oo) + Uij (0. If x^. X. and Sj. we begin by sending /j. making the j) arc uncapacitated. can be written as -1.j = X^j = Sjj arc capacities.Sjj = -Ujj (2.2) as the mass balance constraint Observe that the variable xj. . Likewise. Sj: additional node k with equation (2.. V. the corresponding flow in the transformed network both the flows x and x' = ik Xjj and = Uj. (i. + Sj.oo) Ujj) <T) Xjj <^ Figure 2. in only one.5. In the network context. using the following ideas. Multiplying both sides by we obtain -Xjj . a flow ^k' " ^" *^^ transformed network yields a flow of = Xjj^ of the same cost in the .2) from the mass balance constraint of node we assure that each of Xj. <D 2.4. j) in the original Xjj^ network.2) This transformation is tantamount to turning the slack variable into an for that node. Uj:. this transformation implies the follov^dng. = Ujj.Ujj) CD Figure T2.Xj:. an arc has a positive capacity we can remove the capacity. units of flow on the arc and then measure incremental flow above b(i) /jj. constraint (i. have the same Xj: cost. These algebraic manipulations correspond to the following network transformation. x^: The capacity Sj. 46 simple network interpretation. b(i) (Cjj . is a flow on arc is X. b(j) b(i) -Uij (Cjj . if we introduce a slack variable > 0. By subtracting (2. b(j) b(i)-/ij b(i) + / 'Cij.

i) i into and i' and replaces each original arc (i. (i. This transformation has the following network interpretation: (i. j) send Ujj units of flow on the arc and then replace arc by arc (j. uncapacitated. x^j Further. and is x^j^. This transformation splits each node (k.47 original network.. j) by an cost of the same cost and and each arc by an arc i. and x:j^ are both nonnegative. this transformation permits us to remove arcs with negative costs. i') i T4. This transformation a change (i. i) vdth cost -Cj. We also add arcs of cost zero for each Figure 2. (i'. since this x^j^ + Xjj^ = u^. (j. j) or an upper in variable: bound on the replace x^. j) by Cj: X • in the problem formulation. Let arc flow Ujj if it is represent the capacity of the arc is (i. » An example arc (k. = x^< Ujj. Doing so replaces arc with its associated cost by the arc i) v^ath a cost -Cj.. T3. (Arc Reversal). i') 0< of arc reversal. . Consequently. Uj. © two nodes capacity. Therefore.7 illustrates the resulting network all when we carry out the node splitting transformation for the nodes of a network. The new flow X •: measures the amount of flow we "remove" from the "full capacity" flow of b(i) b(j) b(i)-Ujj b(i) + Ujj CD <D Figure 2.6. j) of the same and capacity. transformation valid. (Node Splitting).

(a) The original network.11 when we use it reduce a shortest path problem with arbitrary arc lengths to an assignment problem. is This transformation also used in practice for representing node activities and node data in the standard "arc flow" form of the network flow problem: the cost or capacity for the throughput of we simply associate arc (i. (b) The transformed network.48 (a) (b) Figure 2.7. i'). We to shall see the usefulness of this transformation in Section 5. node i with the new throughput .

in increasing all order of solution difficulty. Each approach assigns tentative distance labels (shortest path distances) to nodes at each step. Label correcting methods consider as temporary until the final step label setting all labels when they all become f>ermanent. and (iv) finding shortest paths from every node to every other (e. algorithms for a wide variety of combinatorial optimization problems such as vehicle routing and network design often call for the solution of a large number of shortest path problems as subroutines. we discuss problem types (i) (i). or most pairs of rebable path between one or many nodes in a network. are finding shortest paths from one node to other nodes all when arc lengths are nonnegative. The label setting methods are applicable networks with nonnegative arc lengths. We will show that methods have the most attractive worst-case performance. In this section.49 3. we consider a generic version of the label correcting method. whereas label correcting methods apply to networks with negative arc lengths as well. node. We then describe two more sophisticated implementations that achieve in practice improved running times emd in theory. Next. practical experience has efficient shown is the label correcting methods to be modestly more Dijkstra's algorithm first the most popular label setting method.. designing amd testing shortest path efficient algorithms for the problem has been a major area of research in network optimization. shortest paths visiting specified nodes.g. In this section. (ii) and (iii). The problem arises when trying to determine the shortest. Researchers have studied several different (directed) shortest path models. SHORTEST PATHS Shortest path problems are the most fundamental and also the most commonly encountered problems shortest path in the study of transportation and communication networks. cheapest. Label setting methods designate one or more labels as permanent (optimum) at each iteration. Consequently. finding various types of constrained shortest paths between nodes shortest paths with turn penalties. More importantly. we discuss a simple implementation of this algorithm that achieves a time bound of 0(n2). the k-th shortest path). nevertheless. outlining one special implementation of this general approach that runs in polynomial time and another implementation that perfomns very . The algorithmic approaches for solving problem types setting and (ii) Cem be classified into two groups—label to and label correcting. The (i) major types of shortest path problems. (ii) finding shortest paths from one node to (iii) other nodes for networks with arbitrary arc lengths.

We can ensure this condition by adding an with a suitably large arc length.1 We consider a (i.A) with an arc length Cj. node j. is to fan out and label nodes is in order Each node i has a label. j) e A }. and in this section as well as in Sections 3. Dijkstra's Algorithm 3. we discuss a method to solve the all pairs shortest path problem. Initially. we further assume that arc lengths are nonnegative. and each other node j a temporary label equal to Cgj € A. Let A(i) represent the set of arcs emanating from node { € N. j). we s a permanent «> label of zero. We invoke this connectivity assumption throughout Dijkstra's algorithm finds shortest paths from the source node from node s s to all other nodes. We suppose that node s is a specially designated node. and scans au-cs in A(i) to it update the distamce all of adjacent nodes. At each iteration. and let C = max Cjj : (i. and otherwise. aissodated with each arc i e A. j) otherwise is temporary. the label of a node are i is its shortest distance from the source node along a path whose internal nodes selects a all permanently labeled. we assume amd that aire lengths are integer numbers. The following (which basic implementation of Dijkstra's algorithm. designate the node vdth the algorithmic representation is a . The correctness of the algorithm on the key observation we prove later) that it is always possible to minimum temporary label as permanent.3. node i with the minimum labels temporary makes it permanent. denoted by d(i): the label i. j) network G= (N. The algorithm label. for each this section. G contains a directed path from s to every artificial arc (s. Finally. and assume without any loss of generality that the network other node.50 well in practice.2 3. The algorithm terminates when has designated relies nodes as permanently labeled. permanent it once we know that it represents the shortest distance from s to give node if (s. In this section. The basic idea of the algorithm of their distances from s.

j) = T-{i}. whereas the label of each node in T is j) the length of a shortest path subject to the restriction that each node in the path (except belongs to P. denoted to by pred(i). To establish the validity of Dijkstra's algorithm. Cgj and pred(j) : = s if (s. if updates the labels of nodes in T (i). Assume that the label of each node j in P is the length of a shortest path from the source. After the algorithm has permanently in node i. d(s) d(j) : : = = and pred(s) = : 0. the segment of the path P between node k and node has a nonnegative length because arc lengths are nonnegative. The algorithm i associates a predecessor index. begin P:=(s). the algorithm requires 0(n) time to identify the node with minimum temporary label and . with each node € N.51 algorithm DIJKSTRA. T: = N-{s). then setting d(j) = d(i) The computational time its for this algorithm can be split into the time required by two basic operatior\s--selecting nodes and ujjdating i distances. j) in A(i). the we use an inductive argument. end. € A(i) do then d(j) : d(j) > d(i) + Cjj = d(i) + Cjj and pred(j) : = i. the temporary labels of some nodes > T+ Cj: (i) might decrease. P: = Pu(i). tentative shortest paths to these nodes. {distance update) for each if (i. Then it is possible to transfer the node i in T to with the smallest label d(i) to P for the following reason: that is any path P from the source node i must contain a first node k i in T. This observation shows that the length of path P is at least d(i) and hence labeled i it is valid to permanently label node i. sets. and d(j) : = «» otherwise.j) e A . However. in the algorithm. end. these indices allow us to trace back along a shortest path from each node to the source. because node could become an internal node in the must thus scan all of the arcs (i. node k must be is at i at least as far away from the source as node since its label least that of node i. d(j) We + Cj. while P * begin N do (node selection) let i e T be a node T: for which d(i) = min {d(j) : j € T). d(i) . The algorithm updates these indices (tentative) shortest path ensure that s to pred(i) is the last node prior to i on the from node node i. furthermore. In an iteration. At termination. At each point nodes are partitioned into two P and T.

One by . Instead of scanning temporarily labeled nodes at each iteration to find the one with the minimum in a sorted distance label.1 suggests the following scheme for node 0. Consequently. which currently comparable to the best label setting algorithm in practice. and reduces the algorithm's fact: computation time using the foUouing that FACT 3.) 3^ Dial's Implementation in Dijkstra's The bottleneck operation the algorithm's performance. We maintain nC+1 buckets numbered label is k. nC is Recall that C represents the largest arc length in the all an upper bound on the distance labels of the nodes. never decreases the distance label of any permanently labeled node since arc lengths are nonnegative. is we describe Oial's algorithm. Bucket k stores each node whose temporary distance network and. . more complex version of R-heaps gives the best worst-case performance for choices of the parameters n. and C. using clever data structures. all algorithm is node selection. Thus. overall. Subsequently the best we (A all describe an implementation using R-heaps. suggested several implementations of the algorithm. m. can we reduce in practice. hence. we scan the buckets in increasing order until label of each we is nonempty bucket. the computation time by maintaining distances fashion? Ehal's algorithm tries to accomplish this objective. . The distance nondecreasing. of Dijkstra's algorithm Dijkstra's algorithm has been a subject of much research. selection. This implementation time.52 takes 0( A(i) I I )) time to update the distance labels of adjacent nodes. FACT 3. and while scanning arcs in A(i) during the distance update step. In the identify the first node selection step.. which is nearly known most implementation of Dijkstra's algorithm from the perspective of worst-case analysis. These implementations have either its dramatically reduced the running time of the algorithm in practice or improved worst case complexity. 2. labels Dijkstra's algorithm designates as permanent are This fact follows from the observation that the algorithm permanently labels a node i with smallest temporary label d(i). In the following discussion. Researchers have attempted to reduce the node selection time without substantially increasing the time for updating distances.1. they have. nC.. To improve we must ask the following question. the algorithm requires Oirr-) time for selecting nodes and CX ^ ie A(i) | | ) = 0(m) time for N thus runs in O(n^) updating distances. The distance node in this bucket minimum. 1.

. it as we nodes and decrease any node's temporary distance we move from a higher index bucket to a lower index bucket.. .. then buckets k+1. C which can be viewed as arranged in a circle as in Figure 3. In other words. Doing so permits the topmost relabel us. The of buckets to C+1. efficiently. during the entire execution of the algorithm.. temporary labels are bracketed from below by Consequently. or delete label. 2. k+(C+l). store nodes in increeising values of the distance labels. it is possible to add.1)... We need not store the nodes with to a bucket infinite temporary distance labels first any of the buckets-we can add them when they receive a finite distance label. node from the list. bucket labels k.. to select easily a node. in fact. arc we delete these rodes from the bucket. at any point in time this bucket also implies that vvill if hold only nodes with the same distance labels. . and for each finitely labeled node j in T. k+2(C+l). This storage scheme bucket k contains a node with . 2. then at the end of that iteration labeled node j in T. i. Consequently. . delete. and select the next element of any bucket very constant. storing to its two pointers for each entry: one pointer immediate predecessor and one to its immediate successor. 0. d(j) = d(k) + Cj.2. add a bottommost node. Hence. < d(i) + C for each finitely This fact follows by noting that (ii) (i) d(k) < d(i) for eacl k e P (by FACT 3. We then resume the scanning of higher numbered buckets in increasing order to select the next nonempty bucket. 1. Now. By storing the content of these buckets carefully. This d(j) in implementation stores a temporarily labeled node j with distance label the bucket d(j) mod (C+1). C+1 buckets suffice to store d(i) and from above by finite d(i) + C. k stores temporary labeled nodes with distance however. 1. k-1. because of and so forth. C. d(j) < d(i) + < d(i) + C. . in 0(1) time. by rearranging the pointers. we order the content of each bucket arbitrarily. FACT 3. nodes with temporary distance in labels. making them permanent and scanning their lists to update distance labels of adjacent nodes.2. allows us to reduce the If d(i) is the number FACT 3. minimum distance label. One implemention uses a data structure knov\T» a doubly In this data structure.: cj^. for some k € P (by the property all finite of distance updates). distance label that the algorithm designates as permanent at the d(j) beginning of an iteration. Consequently. k+2. .1. this transfer requires 0(1) time. this algorithm runs in following fact 0(m + nC) time and uses nC+1 buckets.e..53 one. Dial's algorithm uses C+1 buckets numbered 0. bls a time bounded by some linked list.

3. The Rather. necessitating large storage and increased computational time. the previous The discussion sections of this implementation can skip it of a more advanced nature than and the reader without any loss of continuity. is C is not n. the algorithm as may wrap around many as n-1 times. as compared to the original algorithm.1. all of the buckets much less than however.54 k-l Figure 3. and C = 2" the algorithm takes exponential time in the worst case. resulting in a large computation time. typically does not encounter these difficulties in practice. we is consider an implementation using a data structure called a runs in redistributive heap (R-heap) that 0(m + n log nC) time. in it algorithm runs in is 0(m + nC) time which if not even polynomial time. For example. very large. next section. pseudopolynomial if time. it a wrap around fashion. The search heis for the theoretically fastest implementations of Dijkstra's algorithm In the led researchers to develop several new data structures for sparse networks. then the algorithm runs O(n^) time.3. however. is that C may be very large. The first implementation considers all the . is is rot attractive theoretically. to identify the first nonempty where it reexamines the buckets starting at the place A potential disadvantage of this scheme. and the number of passes through Dial's algorithm. In addition. For most applications. Bucket arrangement in Dial's algorithm Dial's algorithm examines the buckets sequentially. C = n'. The algorithm. R-Heap Implementation Our first O(n^) implementation of Dijkstra's algorithm and then Dial's implementation represent two extremes. in bucket. In the next iteration. left off earlier.

The R-heap algorithm we consider next In the version of 16. 4. say. but still requires us to search through the lowest numbered bucket to find the node with minimum temporary one for the lowest label. In fact. we could conceivably retain the advantages of bo. and the resulting algorithm reduces to Dijkstra's implementation. Indeed. if But in order to find the smallest distance we need is to search all of the elements in the smallest index nonempty bucket. 2. we need original only one bucket. For a given shortest path problem. The nodes in bucket k are denoted by the CONTENT(k). Using a width of TOO. so to speak) and searches for a node with the smallest label. reallocate we dynamically modify the ranges of numbers stored each bucket and we nodes with temporary distance labels in a is 1. to find the is we avoid the need to search the entire bucket minimum. for each bucket reduces the number of buckets. redistributive heaps that that the we present. size k permits us to reduce the number of buckets needed by a label.. 1. If we could devise a variable width scheme. Dial's algorithm separates nodes by storing any two nodes with different labels in different buckets. the cardinality of the range called its width. 8. changes the ranges.. We store a will it temporary node i in bucket k d(i) e range(k). . and nodes in the buckets. 1. different we could store temporary labels from 100k to lOOk+99 in bucket that can be stored in a bucket is k. . the widths of the buckets are is 1. the running time of this version of the R-heap algorithm 0(m + n log nC).h the wide bucket and narrow bucket approaches. but not bucket? labels in a For example... The buckets are numbered as is K = nCl We do not represent the range of bucket k by range(k) which a (possibly empty) if closed interval of integers. . Using widths of factor of k. uses variable length widths and changes the ranges dynamically. its For the preceding example. so number of buckets needed in only Odog nC). set We store permanent nodes. as in the previous algorithm. perhaps by storing many. lOOk+99] and width is TOO. redistributes the . Moreover. . The algorithm each time it change the ranges of the buckets dynamically. The temporary labels make up the range of the bucket. adopting an intermediate approach. k arbitrarily large. with a width of numbered bucket. instead of storing only nodes with a temporary label of k in the k-th bucket. the range of bucket k is [100k . Could we improve upon these methods by all. way that stores the minimum distance label in a bucket whose width In this way..55 temporarily labeled nodes together (in one large bucket. We now Flog describe an R-heap in 1 more detail. 0. the R-heap consists of + flog nCl buckets. 2.

and hence buckets to 3 v^ll never be needed again.56 Initially. range(K) = [2^-1 . 7]. Thus. carry out these operations a bit differently. In this case the resulting ranges of buckets . shift (or redistribute) its temporarily labeled nodes into the appropriate buckets and 3). these buckets idle. we know no temporary v^l ever again be than 8. ranged) = range(2) = [2 3). 15]. At all this point. finding a node with smallest temporary distance label) by a sequence of redistribution steps in which we shift is nodes constantly to lower indexed buckets. the minimum temporary it label is in a bucket with width one. in the Suppose range [8 . the redistribution time 0(n log nC) time in total. and the algorithm selects in an additional 0(1) time. Since the that minimum index nonempty bucket label the bucket less whose range is [8 15].. [9]. rangeO) = [4 .. distance label without searching nodes in bucket is The following observation helpful. we can redistribute the range of bucket 4 (whose width is 8) the previous buckets (whose combined width [12.. 1. Suppose for . redistributing the range [8 we need only to 4 redistribute the subrange [11 15]. [8]. Roughly speaking. . we have replaced the node selection step (i. [10 11]. Rather than leaving is 8) to . 2.. it Actually. that the minimum Then rather than . range(4) = [8 . we 4.. since each node can be shifted at most K = 1 + flog nCl times. the buckets have the following ranges: rarge(0) = [0]. 15]. [1].... and We then set the range of bucket 4 to and we (0. Eventually. to first minimum temporary label is 11. could not identify the minimum is . makes sense example 15]. each of the elements of bucket 4 moves to a lower indexed bucket. we would Since we will be scanning find the all of the elements of bucket 4 in the redistribute step.. 2^-1]. resulting in the ranges 0.. the widths of the buckets initial will not increase beyond their distance label is widths. Essentially. for 15]..e. These ranges will change dynamically. however. label in the bucket. example that the initial minimum quickly determined to be We could verify this is fact by verifying that buckets through 3 are empty and bucket 4 nonempty.

we scan the buckets is 0..2. are guaranteed that the minimum temporary label is stored in bucket 0.57 would be [n]. In number beside each length. In our example.. 14].4) (6) Figure 3... 7 127] Ranges: CONTENT: (2. e.3 The initial R-heap. Figure 3. . greater than 1. Nodei: Label d(i): 12 13 [0] [1] 3 4 15 5 6 20 4 [8 .3] (3) 3 [4 . [15]. . and then we reassign the content of bucket k time is The is redistribution 0(n log nC) and the running time of the algorithm 0(m + n log nC). Since bucket label..2 The shortest path example. at the end of this redistribution. we do is not carry out the actual node selection step until the If minimum nonempty bucket k. the minimum nonempty to buckets bucket is whose width we redistribute the range of bucket k into buckets to k-1. [12]. the illustrate R-heaps on the shortest path example given in Figure arc indicates its 3.15] nC=120 5 [16. So. (13 .3 specifies the starting solution of Dijkstra's algorithm and the initial R-heap. 1. We now the figure. to k-1.2. For this problem.. the has width 1. bucket has width one. we is 1.. whose width To reiterate.63] [64 . Moreover. every node in this bucket has the same (minimum) distance .7] 6 [32 . To select the node with the smallest distance label. K to find the first nonempty bucket. source Figure 3. bucket nonempty. C=20 and K = flog 1201 = 7.31] {5} Buckets: 12 [2 ..

starting at bucket is 4. move bucket to a lower 5. which Node 5 moves from bucket Figure 3. node 5 should left. to index bucket. deletes node 3 from the R-heap. its We check whether the is new distance label of node 5 5. Node i: .4 shows the new R-heap.58 algorithm designates node 3 as permanent. So identify the first we sequentially scan the buckets from right to 9.5) to change the distance label of node 5 from 20 to 9. It isn't. and scans the arc (3. bucket whose range contains the number 5 to bucket 4. which bucket Since its distance label has decreased. is contained in the range of present bucket.

this operation takes The term m reflects the number it of distance ujxlates. the modified we sequentially scan lower numbered buckets from right to left and add the node to the appropriate bucket. This redistribution necessarily empties smallest distance label to bucket 0.. . {2. CONTENT(2) = e. . CONTENTO) = CONTENT(4) = 4). a moves most lower indexed bucket.. to right to identify the first nonempty bucket. 2. O(nK) is node can move at K times.. the node selection steps take O(nK) Since K = [log nC"L the algorithm runs in 0(m + n log nC) time. Since bucket k 1. Thus. the next two integers to bucket htis the next four integers to bucket and so on. 0. . nK arises because the total every time a node moves. to a lower indexed bucket.. . 0. to a and the term 0(m + nK) time. . Node selection begins by scanning the k.. width < 2"^ and since the width of widths of the 2*^. so the nodes total move a total of at most nK times. Whenever we examine it a node in the nonempty bucket k with the at smallest index. bucket 4 . u] and the smallest distance is Idjj^jp . . we can redistribute the useful range of bucket k over the buckets . since there are K+1 Therefore. we assign 2..59 CONTENT(O) = (5).. We now summarize our discussion. e CONTENT(k) and that d(j) decreases. all we move can time. then any then node in the selected bucket has the minimum distance label. buckets. label of a node in djj^j^. the bucket is k-1 and reinsert content to If the range of bucket k is [/ . The algorithm the first redistributes the useful range in the following manner: 1. first buckets can be as large as 2*^'^ for a total potential 0. If If This operation takes 0(K) time per iteration and O(nK) time in k=0 or k=l. each node can move most K times. integer to bucket 0. 1. This redistribution of ranges and the subsequent reinsertions of labels to bucket nodes empties bucket k and moves the nodes with the smallest distance 0. 1. we its redistribute the "useful" range of bucket k into the buckets those buckets. Next we consider the node buckets from left selection step. . then the useful range of the bucket u]. and moves the node with the We are now then in a position to outline the general j algorithm and analyze If its complexity. Overall. 1. Suppose that d(j) « range(k). a bound on node movements. k-1 in the manner described.. say bucket total. the next integer to bucket 3. k ^ 2. CONTENTO) = 0.

path problem in 0(m The R-heap implementation + n log nC) time. structures. conditions which is more suitable from the viewpoint of be a set of labels. these algorithms maintain distance labels as temporary until the end. Unlike label setting algorithms. We will prove an alternate version of these label correcting algorithms. (3. a directed cycle whose arc lengths sum to a negative value. these algorithms typically require that the network does not contain any negative directed cycle. then they represent the shortest path lengths from the node: . usual. Label correcting algorithms can be viewed as a procedure for solving the following recursive equations: d(s) d(j) = 0. shortest paths. Label Correcting Algorithms Label correcting algorithms. 3. as the name implies.e. i.1) (d(i) = min + Cjj : i € N).60 Theorem 3. for example. d(j) denotes the length of a shortest path from the source node to node These equations are knov^m as Bellman's equations and represent necessary conditions These conditions are also sufficient if for optimality of the shortest path problem.2) As j. Most label correcting algorithms have the capability to detect the presence of negative cycles..4. (3. 0(m this + n log C) time. FACT 3. is possible to reduce this all bound further to 0(m + n Vlog n which is a linear time algorithm for but the sparsest classes of shortest path problems. to networks containing negative length arcs. for each j e N - {s}.2 Let d(i) for i e N If d(s) = and if in addition the labels satisfy the following conditions. Theorem source 3. every cycle in the network has a positive length.2). of Dijkstra's algorithm solves the shortest This algorithm requires 1 + flog nCl buckets. when they all become permanent simultaneously.2 permits us to reduce the number of buckets to 1 + flog CT This refined implementation of the algorithm runs in 1. Using substantially more sophisticated data ). The label correcting algorithms are conceptually more general than the label setting algorithms and are applicable to more general To produce situations.1. maintain tentative distance labels for nodes and correct the all labels at every iteration. For probelm that satisfy the similarity assumption (see Section bound becomes 0(m+ n it log n).

C3.1. d(i) is the length of some path from the source node to node i;

C3.2. d(j) ≤ d(i) + c_ij for all (i, j) ∈ A.

Proof. Since d(i) is the length of some path from the source to node i, it is an upper bound on the shortest path length. We show that if the labels d(i) satisfy C3.2, then they are also lower bounds on the shortest path lengths. Consider any directed path P from the source to node j, and let P consist of the nodes s = i1 - i2 - i3 - ... - ik = j. Condition C3.2 implies that d(i2) ≤ d(i1) + c_i1i2 = c_i1i2, d(i3) ≤ d(i2) + c_i2i3, ..., d(ik) ≤ d(ik-1) + c_ik-1ik. Adding these inequalities yields d(j) = d(ik) ≤ Σ {c_ij : (i,j) ∈ P}. Therefore d(j) is a lower bound on the length of any directed path from the source to node j, including a shortest path from s to j, which implies the conclusion of the theorem.

We note that if the network contains a negative cycle, then no set of labels d(i) satisfies C3.2. For suppose the network did contain a negative cycle W and some labels d(i) satisfied C3.2. Then d(i) + c_ij - d(j) ≥ 0 for each (i,j) ∈ W. These inequalities imply that Σ {d(i) + c_ij - d(j) : (i,j) ∈ W} = Σ {c_ij : (i,j) ∈ W} ≥ 0, since the labels d(i) cancel out in the summation. This conclusion contradicts our assumption that W is a negative cycle.

Conditions C3.1 correspond to primal feasibility for the linear programming formulation of the shortest path problem, and conditions C3.2 correspond to dual feasibility. From this perspective, we might view label correcting algorithms as methods that always maintain primal feasibility and try to achieve dual feasibility. The generic label correcting algorithm that we consider first is a general procedure for successively updating the distance labels d(i) until they satisfy the conditions C3.2. The algorithm is based upon the simple observation that whenever d(j) > d(i) + c_ij, the current path from the source to node i, of length d(i), together with the arc (i,j), is a shorter path to node j than the current path of length d(j).

algorithm LABEL CORRECTING;
begin
  d(s) := 0 and pred(s) := 0;
  d(j) := ∞ for each j ∈ N - {s};
  while some arc (i,j) satisfies d(j) > d(i) + c_ij do
  begin
    d(j) := d(i) + c_ij;
    pred(j) := i;
  end;
end;

The correctness of the label correcting algorithm follows from Theorem 3.2: at termination, the labels d(i) satisfy d(j) ≤ d(i) + c_ij for all (i,j) ∈ A, and hence represent the shortest path lengths. We now note that this algorithm is finite if there are no negative cost cycles and if the data are integral. Since each d(j) is bounded from above by nC and from below by -nC, the algorithm updates any d(j) at most 2nC times. Thus, when the data are integral, the total number of distance updates is O(n²C), and hence the algorithm runs in pseudopolynomial time.

A nice feature of this label correcting algorithm is its flexibility: we can select the arcs that do not satisfy condition C3.2 in any order and still assure finite convergence. One drawback of the method, however, is that without a further restriction on the choice of arcs, the label correcting algorithm does not necessarily run in polynomial time. Indeed, if we start with pathological instances of the problem and make a poor choice of arcs at every iteration, then the number of steps can grow exponentially with n. (Since the algorithm is pseudopolynomial, these instances do have exponentially large values of C.) To obtain a polynomial time bound for the algorithm, we can organize the computations carefully in the following manner.

Arrange the arcs in A in some (possibly arbitrary) order. Now make passes through A. In each pass, scan the arcs in A in order and check the condition d(j) > d(i) + c_ij; if an arc satisfies this condition, then update d(j) := d(i) + c_ij. Terminate the algorithm if no distance label changes during an entire pass. We call this algorithm the modified label correcting algorithm; a sketch of an implementation appears below.
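The pass structure is easy to express in code. The following sketch is ours and not part of the original text; it assumes the network is given as a list of arcs (i, j, c_ij), with nodes numbered 0, ..., n-1 and source s.

INF = float('inf')

def modified_label_correcting(n, arcs, s):
    # Repeated passes over the arc list; this is the classical
    # Bellman-Ford scheme.  Returns the labels d and predecessors.
    d = [INF] * n
    pred = [None] * n
    d[s] = 0
    for _ in range(n - 1):             # at most n-1 passes suffice
        updated = False
        for i, j, c in arcs:           # scan the arcs in a fixed order
            if d[i] + c < d[j]:
                d[j] = d[i] + c
                pred[j] = i
                updated = True
        if not updated:                # no label changed: labels optimal
            return d, pred
    for i, j, c in arcs:               # an n-th pass detects a
        if d[i] + c < d[j]:            # negative cycle
            raise ValueError("negative cycle detected")
    return d, pred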

Theorem 3.3. When applied to a network containing no negative cycles, the modified label correcting algorithm requires O(nm) time to determine shortest paths from the source to every other node.

Proof. We show that the algorithm performs at most n-1 passes through the arc list. Since each pass requires O(1) computations for each arc, this conclusion implies the O(nm) bound. Let d^r(j) denote the length of the shortest path from the source to node j consisting of r or fewer arcs, and let D^r(j) represent the distance label of node j after r passes through the arc list. We claim, inductively, that D^r(j) ≤ d^r(j) for each j ∈ N and each r = 1, ..., n-1.

We perform induction on the value of r. Suppose D^(r-1)(j) ≤ d^(r-1)(j) for each j ∈ N. The provisions of the modified labeling algorithm imply that

D^r(j) ≤ min {D^(r-1)(j), min {D^(r-1)(i) + c_ij : i ≠ j}}.

Next note that the shortest path to node j containing no more than r arcs either (i) has no more than r-1 arcs, in which case d^r(j) = d^(r-1)(j), or (ii) contains exactly r arcs, in which case d^r(j) = min {d^(r-1)(i) + c_ij : i ≠ j}. Thus

d^r(j) = min {d^(r-1)(j), min {d^(r-1)(i) + c_ij : i ≠ j}} ≥ min {D^(r-1)(j), min {D^(r-1)(i) + c_ij : i ≠ j}} ≥ D^r(j),

where the first inequality follows from the induction hypothesis. Hence, D^r(j) ≤ d^r(j) for all j ∈ N. Finally, we note that the shortest path from the source to any node consists of at most n-1 arcs. Therefore, after at most n-1 passes, the algorithm terminates with the shortest path lengths.

The modified label correcting algorithm is also capable of detecting the presence of negative cycles in the network. If the algorithm does not update any distance label during an entire pass, up to the (n-1)-th pass, then it has a set of labels d(j) satisfying C3.2; the algorithm then terminates with the shortest path distances, and the network does not contain any negative cycle. On the other hand, if we make one more pass, the n-th pass, and the distance label of some node i changes, then the network contains a directed walk (a path together with a cycle that have one or more nodes in common) from the source to node i of more than n-1 arcs whose length is smaller than that of all paths from the source to node i. This situation cannot occur unless the network contains a negative cost cycle.

Practical Improvements

As stated so far, the modified label correcting algorithm considers every arc of the network during every pass through the arc list. It need not do so. Suppose we order the arcs in the arc list by their tail nodes, so that all arcs with the same tail node i appear consecutively on the list. Then, while scanning the arcs, we consider one node i at a time, scanning the arcs in A(i) and testing the optimality conditions. Now suppose that during one pass through the arc list the algorithm does not change the distance label of a node i. Then, during the next pass, d(j) ≤ d(i) + c_ij for every (i,j) ∈ A(i), and the algorithm need not test these conditions.

To achieve this savings, the algorithm can maintain a list of nodes whose distance labels have changed since it last examined them. It scans this list in first-in, first-out order to assure that it performs passes through the arc list A and, consequently, terminates in O(nm) time. The following procedure is a formal description of this modification of the modified label correcting method.

algorithm MODIFIED LABEL CORRECTING;
begin
  d(s) := 0 and pred(s) := 0;
  d(j) := ∞ for each j ∈ N - {s};
  LIST := {s};
  while LIST ≠ ∅ do
  begin
    select the first element i of LIST;
    delete i from LIST;
    for each (i,j) ∈ A(i) do
      if d(j) > d(i) + c_ij then
      begin
        d(j) := d(i) + c_ij;
        pred(j) := i;
        if j ∉ LIST then add j to the end of LIST;
      end;
  end;
end;

Another modification of this algorithm sacrifices its polynomial time behavior in the worst case, but greatly improves its running time in practice. The modification alters the manner in which the algorithm adds nodes to LIST. While adding a node i to LIST, we check to see whether it has already appeared in LIST at some earlier point. If yes, then we add it to the beginning of LIST; otherwise, we add it to the end of LIST. This heuristic rule has the following plausible justification. If node i has previously appeared on LIST, then some nodes may have i as a predecessor. It is advantageous to update the distances of these nodes immediately, rather than update them from other nodes and then update them again when we consider node i alone. Empirical studies indicate that with this change alone, the algorithm is several times faster for many reasonable problem classes. Though this change makes the algorithm very attractive in practice, its worst-case running time is exponential. A sketch of this variant follows.
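This sketch is ours and not part of the original text; A[i] is assumed to be a list of (j, c_ij) pairs, the deque plays the role of LIST, and the rule for choosing between its two ends is the heuristic just described.

from collections import deque

INF = float('inf')

def label_correcting_with_list(n, A, s):
    # LIST-based label correcting with the front/back insertion rule:
    # nodes that have been on LIST before go to the front, first-time
    # nodes go to the back.
    d = [INF] * n
    d[s] = 0
    seen = [False] * n               # has the node ever been on LIST?
    on_list = [False] * n
    LIST = deque([s])
    on_list[s] = seen[s] = True
    while LIST:
        i = LIST.popleft()
        on_list[i] = False
        for j, c in A[i]:
            if d[i] + c < d[j]:
                d[j] = d[i] + c
                if not on_list[j]:
                    if seen[j]:
                        LIST.appendleft(j)   # reinsert at the front
                    else:
                        LIST.append(j)       # first appearance: at the back
                    on_list[j] = seen[j] = True
    return d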

Indeed, this version of the label correcting algorithm is the fastest algorithm in practice for finding the shortest paths from a single source to all nodes in non-dense networks. (For the problem of finding a shortest path from a single source node to a single sink, certain variants of the label setting algorithm are more efficient in practice.)

3.5 All Pairs Shortest Path Algorithm

In certain applications of the shortest path problem, we need to determine shortest path distances between all pairs of nodes. In this section we describe two algorithms to solve this problem. The first algorithm is well suited for sparse graphs; it combines the modified label correcting algorithm and Dijkstra's algorithm. The second algorithm is better suited for dense graphs; it is based on dynamic programming.

If the network has nonnegative arc lengths, then we can solve the all pairs shortest path problem by applying Dijkstra's algorithm n times, considering each node as the source once. If the network contains arcs with negative lengths, then we can first transform the network to one with nonnegative arc lengths as follows. Let s be a node from which all nodes in the network are reachable, i.e., connected by directed paths. We use the modified label correcting algorithm to compute the shortest path distances from s to all other nodes. The algorithm either terminates with the shortest path distances d(j) or indicates the presence of a negative cycle. In the former case, we define the new length of arc (i,j) as

c'_ij = c_ij + d(i) - d(j), for each (i,j) ∈ A.

Condition C3.2 implies that c'_ij ≥ 0 for all (i,j) ∈ A. Further, note that for any path P from node k to node l,

Σ {c'_ij : (i,j) ∈ P} = Σ {c_ij : (i,j) ∈ P} + d(k) - d(l),

since the labels d(j) of the intermediate nodes cancel out in the summation. This transformation thus changes the length of all paths between a pair of nodes by a constant amount (depending on the pair) and consequently preserves shortest paths. Since arc lengths become nonnegative after the transformation, we can apply Dijkstra's algorithm n-1 additional times to determine the shortest path distances between all pairs of nodes in the transformed network. We then obtain the shortest path distance between nodes k and l in the original network by adding d(l) - d(k) to the corresponding shortest path distance in the transformed network.

This approach requires O(nm) time to solve the first shortest path problem, and if the network contains no negative cost cycle, the method takes an extra O(n S(n,m,C)) time to compute the remaining shortest path distances. In this expression, S(n,m,C) is the time needed to solve a shortest path problem with nonnegative arc lengths; for the R-heap implementation of Dijkstra's algorithm we considered previously, S(n,m,C) = m + n log nC.
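The overall method is summarized in the following sketch, which is ours and not part of the original text. The names bellman_ford and dijkstra stand for implementations of the modified label correcting algorithm and of Dijkstra's algorithm; they are assumptions of the sketch rather than routines defined in the text.

def all_pairs(n, arcs, s, bellman_ford, dijkstra):
    # Compute d(.) from s, replace each length c_ij by the nonnegative
    # reduced length c_ij + d(i) - d(j), run Dijkstra from every node,
    # and undo the transformation.
    d = bellman_ford(n, arcs, s)        # may raise on a negative cycle
    arcs2 = [(i, j, c + d[i] - d[j]) for (i, j, c) in arcs]
    dist = {}
    for k in range(n):
        dk = dijkstra(n, arcs2, k)      # nonnegative lengths now
        for l in range(n):
            dist[k, l] = dk[l] + d[l] - d[k]   # undo the transformation
    return dist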

Another way to solve the all pairs shortest path problem is by dynamic programming. The approach we present is known as Floyd's algorithm. We define the variables d^r(i,j) as follows:

d^r(i,j) = the length of a shortest path from node i to node j subject to the condition that the path uses only the nodes 1, 2, ..., r-1 (and i and j) as internal nodes.

Let d(i,j) denote the actual shortest path distance from node i to node j. To compute d^(r+1)(i,j), we first observe that a shortest path from node i to node j that passes through the nodes 1, 2, ..., r either (i) does not pass through node r, in which case d^(r+1)(i,j) = d^r(i,j), or (ii) does pass through node r, in which case d^(r+1)(i,j) = d^r(i,r) + d^r(r,j). Thus we have

d^1(i,j) = c_ij,

and

d^(r+1)(i,j) = min {d^r(i,j), d^r(i,r) + d^r(r,j)}.

We assume that c_ij = ∞ for all node pairs (i,j) ∉ A. It is possible to solve the previous equations recursively for increasing values of r, varying the node pairs over N × N for each fixed value of r. The following procedure is a formal description of this algorithm.

algorithm ALL PAIRS SHORTEST PATHS;
begin
  for all node pairs (i,j) ∈ N × N do d(i,j) := ∞ and pred(i,j) := 0;
  for each (i,j) ∈ A do d(i,j) := c_ij and pred(i,j) := i;
  for each i ∈ N do d(i,i) := 0;
  for each r := 1 to n do
    for each (i,j) ∈ N × N do
      if d(i,j) > d(i,r) + d(r,j) then
      begin
        d(i,j) := d(i,r) + d(r,j);
        pred(i,j) := pred(r,j);
        if i = j and d(i,i) < 0 then the network contains a negative cycle, STOP;
      end;
end;

Floyd's algorithm uses the predecessor indices pred(i,j) for each node pair (i,j). The index pred(i,j) denotes the last node prior to node j in the tentative shortest path from node i to node j. The algorithm maintains the property that, for each finite d(i,j), the network contains a path from node i to node j of length d(i,j). This path can be obtained by tracing the predecessor indices.

This algorithm performs n iterations, and in each iteration it performs O(1) computations for each node pair. Consequently, it runs in O(n³) time. The algorithm either terminates with the shortest path distances or stops when d(i,i) < 0 for some node i. In the latter case, for some node r ≠ i, d(i,r) + d(r,i) < 0, and the union of the tentative shortest paths from node i to node r and from node r to node i contains a negative cycle. This cycle can be obtained by using the predecessor indices.

Floyd's algorithm is in many respects similar to the modified label correcting algorithm. This relationship becomes more transparent from the following theorem.

Theorem 3.4. If d(i,j) for (i,j) ∈ N × N satisfy the following conditions, then they represent the shortest path distances:

(i) d(i,i) = 0 for all i;
(ii) d(i,j) is the length of some path from node i to node j;
(iii) d(i,j) ≤ d(i,r) + c_rj for all i, r, and j.

Proof. For a fixed node i, this theorem is a consequence of Theorem 3.2.
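A direct transcription of the procedure into code (ours, not part of the original text) follows; it raises an error as soon as some d(i,i) becomes negative.

INF = float('inf')

def floyd(n, arcs):
    # Floyd's algorithm with predecessor indices and negative cycle
    # detection; arcs is a list of (i, j, c_ij) triples.
    d = [[INF] * n for _ in range(n)]
    pred = [[None] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = 0
    for i, j, c in arcs:
        if c < d[i][j]:              # keep the cheapest parallel arc
            d[i][j] = c
            pred[i][j] = i
    for r in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][r] + d[r][j] < d[i][j]:
                    d[i][j] = d[i][r] + d[r][j]
                    pred[i][j] = pred[r][j]
                    if i == j and d[i][i] < 0:
                        raise ValueError("negative cycle detected")
    return d, pred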

4. MAXIMUM FLOWS

An important characteristic of a network is its capacity to carry flow. What, given capacities on the arcs, is the maximum flow that can be sent between any two nodes? The resolution of this question determines the "best" use of the arc capacities and establishes a reference point against which to compare other ways of using the network. Moreover, the solution of the maximum flow problem with capacity data chosen judiciously establishes other performance measures for a network. For example, what is the minimum number of nodes whose removal from the network destroys all paths joining a particular pair of nodes? Or, what is the maximum number of node disjoint paths that join this pair of nodes? These and similar reliability measures indicate the robustness of the network to the failure of its components.

In this section, we discuss several algorithms for computing the maximum flow between two nodes in a network. We begin by introducing a basic labeling algorithm for solving the maximum flow problem. The validity of these algorithms rests upon the celebrated max-flow min-cut theorem of network flows. This remarkable theorem has a number of surprising implications in machine and vehicle scheduling, communication systems planning and several other application domains. We then consider improved versions of the basic labeling algorithm with better theoretical performance guarantees. Finally, we describe preflow-push algorithms, which have recently emerged as the most powerful techniques for solving the maximum flow problem, both theoretically and computationally.

Formally, we consider a capacitated network G = (N, A) with a nonnegative integer capacity u_ij for any arc (i,j) ∈ A. The source s and the sink t are two distinguished nodes of the network. We assume that for every arc (i,j) in A, (j,i) is also in A; there is no loss of generality in making this assumption, since we allow zero capacity arcs. We also assume, without any loss of generality, that all arc capacities are finite (since we can set the capacity of any uncapacitated arc equal to the sum of the capacities of all capacitated arcs). Let U = max {u_ij : (i,j) ∈ A}. As earlier, the arc adjacency list, defined as A(i) = {(i,k) : (i,k) ∈ A}, designates the arcs emanating from node i. In the maximum flow problem, we wish to find the maximum flow from the source node s to the sink node t that satisfies the arc capacities. Formally, the problem is to

Maximize v   (4.1a)

subject to

Σ {x_ij : (i,j) ∈ A} - Σ {x_ji : (j,i) ∈ A} = v if i = s, 0 if i ≠ s, t, and -v if i = t, for all i ∈ N,   (4.1b)

0 ≤ x_ij ≤ u_ij, for each (i,j) ∈ A.   (4.1c)

It is possible to relax the integrality assumption on the arc capacities for some algorithms, though this assumption is necessary for others. Algorithms whose complexity bounds involve U assume integrality of the data. Note, however, that rational arc capacities can always be transformed to integer arc capacities by appropriately scaling the data. Thus, the integrality assumption is not a restrictive assumption in practice.

The concept of residual network is crucial to the algorithms we consider. Given a flow x, the residual capacity r_ij of any arc (i,j) ∈ A represents the maximum additional flow that can be sent from node i to node j using the arcs (i,j) and (j,i). The residual capacity has two components: (i) u_ij - x_ij, the unused capacity of arc (i,j), and (ii) the current flow x_ji on arc (j,i), which can be cancelled to increase the flow to node j. Consequently, r_ij = u_ij - x_ij + x_ji. We call the network consisting of the arcs with positive residual capacities the residual network (with respect to the flow x), and represent it as G(x). Figure 4.1 illustrates an example of a residual network.
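In code, the residual network can be represented simply by its residual capacities. The sketch below is ours and not part of the original text; u and x are assumed to be dictionaries indexed by arc pairs, with (j,i) ∈ A whenever (i,j) ∈ A, as assumed above.

def residual_capacities(u, x):
    # r_ij = u_ij - x_ij + x_ji for every arc (i, j) of the network
    return {(i, j): u[i, j] - x[i, j] + x[j, i] for (i, j) in u}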

4.1 Labeling Algorithm and the Max-Flow Min-Cut Theorem

One of the simplest and most intuitive algorithms for solving the maximum flow problem is the augmenting path algorithm due to Ford and Fulkerson. The algorithm proceeds by identifying directed paths from the source to the sink in the residual network and augmenting flows on these paths, until the residual network contains no such path. The following high-level (and flexible) description of the algorithm summarizes the basic iterative steps, without specifying any particular algorithmic strategy for how to determine augmenting paths.

algorithm AUGMENTING PATH;
begin
  x := 0;
  while there is a path P from s to t in G(x) do
  begin
    Δ := min {r_ij : (i,j) ∈ P};
    augment Δ units of flow along P and update G(x);
  end;
end;

We now discuss this algorithm in more detail. First, we need a method to identify a directed path from the source to the sink in the residual network, or to show that the network contains no such path. Second, we need to show that the algorithm terminates finitely. Finally, we must establish that the algorithm terminates with a maximum flow. The last result follows from the proof of the max-flow min-cut theorem.

A directed path from the source to the sink in the residual network is also called an augmenting path, and its residual capacity is the minimum residual capacity of any arc on the path. By the definition of the residual capacity, an additional flow of Δ units on arc (i,j) of the residual network corresponds to (i) an increase in x_ij by Δ in the original network, or (ii) a decrease in x_ji by Δ in the original network, or (iii) a convex combination of (i) and (ii). For our purposes, it is easier to work directly with the residual capacities and to compute the flows only when the algorithm terminates. Augmenting Δ units of flow along P decreases r_ij by Δ and increases r_ji by Δ for each arc (i,j) ∈ P.

The algorithm identifies an augmenting path by performing a search of the residual network: it fans out from the source node s to find a directed tree containing nodes that are reachable from the source along directed paths in the residual network. At any step, we refer to the nodes in the tree as labeled and those not in the tree as unlabeled. The algorithm selects a labeled node and scans its arc adjacency list (in the residual network) to label more unlabeled nodes. Eventually, either the sink becomes labeled, in which case the algorithm sends the maximum possible flow on the path from s to t, erases the labels, and repeats this process; or the algorithm has scanned all labeled nodes and the sink remains unlabeled, in which case it terminates. The following algorithmic description specifies the steps of the labeling algorithm in detail.

[Figure 4.1. Example of a residual network: (a) the network with arc capacities (node 1 is the source and node 4 is the sink; arcs not shown have zero capacities); (b) the network with a flow x; (c) the residual network with the residual arc capacities.]

The algorithm maintains a predecessor index pred(i) for each labeled node i, indicating the node that caused node i to be labeled. The predecessor indices allow us to trace back along the path from a node to the source.

algorithm LABELING;
begin
  loop
    pred(j) := 0 for each j ∈ N;
    L := {s};
    while L ≠ ∅ and t is unlabeled do
    begin
      select a node i ∈ L;
      for each (i,j) ∈ A(i) do
        if j is unlabeled and r_ij > 0 then
        begin
          pred(j) := i;
          mark j as labeled and add this node to L;
        end;
    end;
    if t is labeled then
    begin
      use the predecessor labels to trace back to obtain the augmenting path P from s to t;
      Δ := min {r_ij : (i,j) ∈ P};
      augment Δ units of flow along P;
      erase all labels and go to loop;
    end
    else quit the loop;
  end; (loop)
end;

The final residual capacities r can be used to obtain the arc flows as follows. Since r_ij = u_ij - x_ij + x_ji, the arc flows satisfy x_ij - x_ji = u_ij - r_ij. Hence, if u_ij > r_ij, we can set x_ij = u_ij - r_ij and x_ji = 0; otherwise, we set x_ij = 0 and x_ji = r_ij - u_ij.
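The following sketch (ours, not part of the original text) implements the labeling algorithm directly on residual capacities, as the text suggests. Here r is a dictionary over arc pairs, with both (i,j) and (j,i) present for every arc, and nodes are numbered 0, ..., n-1; scanning the labeled nodes in first-in, first-out order is one permissible way of selecting a node from L.

from collections import deque

def labeling_max_flow(n, r, s, t):
    r = dict(r)                        # work on a copy of the capacities
    v = 0                              # current flow value
    while True:
        pred = {s: None}               # the labeled nodes and their labels
        L = deque([s])
        while L and t not in pred:
            i = L.popleft()
            for j in range(n):
                if j not in pred and r.get((i, j), 0) > 0:
                    pred[j] = i
                    L.append(j)
        if t not in pred:              # sink unlabeled: the flow is maximum
            return v, r
        path = []                      # trace the augmenting path back
        j = t
        while pred[j] is not None:
            path.append((pred[j], j))
            j = pred[j]
        delta = min(r[i, j] for i, j in path)
        for i, j in path:              # augment and update G(x)
            r[i, j] -= delta
            r[j, i] += delta
        v += delta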

In order to show that the algorithm obtains a maximum flow, we introduce some new definitions and notation. Recall from Section 1.3 that a set Q ⊆ A is a cutset if the subnetwork G' = (N, A - Q) is disconnected and no subset of Q has this property. A cutset partitions the node set N into two subsets. A cutset is called an s-t cutset if the source and the sink nodes are contained in different subsets of nodes S and S̄ = N - S, where S is the set of nodes connected to s. Conversely, any partition of the node set as S and S̄ with s ∈ S and t ∈ S̄ defines an s-t cutset. Consequently, we alternatively designate an s-t cutset as (S, S̄). An arc (i,j) with i ∈ S and j ∈ S̄ is called a forward arc of the cutset, and an arc (i,j) with i ∈ S̄ and j ∈ S is called a backward arc of the cutset (S, S̄).

Let x be a flow vector satisfying the flow conservation and capacity constraints of (4.1), and let v be the amount of flow leaving the source. We refer to v as the value of the flow. Define the flow across the s-t cutset (S, S̄) as

F_x(S, S̄) = Σ {x_ij : i ∈ S, j ∈ S̄} - Σ {x_ij : i ∈ S̄, j ∈ S}.   (4.2)

Adding the flow conservation equations (4.1b) for the nodes in S, and noting that when nodes i and j both belong to S, the term x_ij in the equation for node j cancels -x_ij in the equation for node i, we obtain

v = Σ {x_ij : i ∈ S, j ∈ S̄} - Σ {x_ij : i ∈ S̄, j ∈ S} = F_x(S, S̄).   (4.3)

Hence, the flow across an s-t cutset equals the value of the flow. Next, define the capacity C(S, S̄) of the s-t cutset (S, S̄) as

C(S, S̄) = Σ {u_ij : i ∈ S, j ∈ S̄}.   (4.4)

We claim that the flow across any s-t cutset does not exceed the cutset capacity. Substituting x_ij ≤ u_ij in the first summation of (4.2) and x_ij ≥ 0 in the second summation shows that

F_x(S, S̄) ≤ Σ {u_ij : i ∈ S, j ∈ S̄} = C(S, S̄).   (4.5)
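The quantities in (4.2)-(4.5) are straightforward to compute. In the sketch below (ours, not part of the original text), x and u are dictionaries over arc pairs, and S is the set of nodes on the source side of the cutset.

def flow_across_cut(x, S):
    # F_x(S, Sbar) of (4.2); by (4.3) this equals the flow value v
    forward = sum(f for (i, j), f in x.items() if i in S and j not in S)
    backward = sum(f for (i, j), f in x.items() if i not in S and j in S)
    return forward - backward

def cut_capacity(u, S):
    # C(S, Sbar) of (4.4)
    return sum(c for (i, j), c in u.items() if i in S and j not in S)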

The inequality (4.5) is the weak duality property of the maximum flow problem when viewed as a linear program. Like most weak duality results, it is the "easy" half of the duality theory. The more substantive strong duality property asserts that (4.5) holds as an equality for some choice of x and some choice of an s-t cutset (S, S̄). This strong duality property is the max-flow min-cut theorem.

Theorem 4.1. (Max-Flow Min-Cut Theorem) The maximum value of the flow from s to t equals the minimum capacity of all s-t cutsets.

Proof. Let x denote a maximum flow vector and v denote the maximum flow value. (Linear programming theory, or our subsequent algorithmic developments, guarantee that the problem always has a maximum flow as long as some cutset has finite capacity.) Define S to be the set of labeled nodes in the residual network G(x) when we apply the labeling algorithm with the initial flow x, and let S̄ = N - S. Clearly, s ∈ S, and t ∈ S̄ since x is a maximum flow. Note that r_ij = 0 for each (i,j) ∈ (S, S̄); otherwise, nodes in S̄ could be labeled from the nodes in S. Since r_ij = u_ij - x_ij + x_ji, the conditions x_ij ≤ u_ij and x_ji ≥ 0 imply that x_ij = u_ij for each forward arc and x_ij = 0 for each backward arc in the cutset (S, S̄). Making these substitutions in (4.3) yields

v = F_x(S, S̄) = Σ {u_ij : i ∈ S, j ∈ S̄} = C(S, S̄).   (4.6)

But we have observed earlier that v is a lower bound on the capacity of any s-t cutset. Consequently, the cutset (S, S̄) is a minimum capacity cutset and its capacity equals the maximum flow value v. We have thus established the theorem.

The proof of this theorem not only establishes the max-flow min-cut property; the same argument shows that when the labeling algorithm terminates, it has at hand both the maximum flow value (and a maximum flow vector) and a minimum capacity s-t cutset. But does the algorithm terminate finitely? Each labeling iteration of the algorithm scans any node at most once, inspecting each arc in A(i). Consequently, each labeling iteration scans each arc at most once and requires O(m) computations. If all arc capacities are integral and bounded by a finite number U, then the capacity of the cutset (s, N - {s}) is at most nU. Since the labeling algorithm increases the flow value by at least one unit in any iteration, it terminates within nU iterations. For large values of U, this bound on the number of iterations is not entirely satisfactory; if U = 2^n, the bound is exponential in the number of nodes.

Moreover, the algorithm can indeed perform this many iterations, as the example given in Figure 4.2 illustrates. In addition, if the capacities are irrational, the algorithm may not terminate: although the successive flow values converge, they may not converge to the maximum flow value. Thus, if the method is to be effective, we must select the augmenting paths carefully. Several refinements of the algorithm, including those we consider in Section 4.4, overcome this difficulty and obtain an optimum flow even if the capacities are irrational.

A second drawback of the labeling algorithm is its "forgetfulness". At each iteration, the algorithm generates node labels that contain information about augmenting paths from the source to other nodes. The implementation we have described erases the labels when it proceeds from one iteration to the next, even though much of this information may still be valid in the next residual network. Erasing the labels therefore destroys potentially useful information. Ideally, we should retain a label whenever it can be used profitably in later computations.

4.2 Decreasing the Number of Augmentations

The bound of nU on the number of augmentations in the labeling algorithm is not satisfactory from a theoretical perspective. Furthermore, without further modifications, the augmenting path algorithm may take Ω(nU) augmentations, as the example given in Figure 4.2 illustrates.

Flow decomposition shows that, in principle, augmenting path algorithms should be able to find a maximum flow in no more than m augmentations. For suppose x is an optimum flow and y is any flow (possibly the zero flow). By the flow decomposition property, it is possible to obtain x from y by a sequence of at most m augmentations on augmenting paths from s to t plus flows around augmenting cycles. If we define x' as the flow vector obtained from y by applying only the augmenting paths, then x' also is a maximum flow (flows around cycles do not change the flow value). This result shows that it is, in theory, possible to find a maximum flow using at most m augmentations. Unfortunately, to apply this flow decomposition argument we would need to know a maximum flow, and no algorithm developed in the literature comes close to achieving this theoretical bound. Nevertheless, it is possible to improve considerably on the bound of O(nU) augmentations of the basic labeling algorithm.

[Figure 4.2. A pathological example for the labeling algorithm: (a) the input network with arc capacities; (b) after augmenting along the path s-a-b-t (arc flow is indicated beside the arc capacity); (c) after augmenting along the path s-b-a-t. After 2 × 10^6 augmentations, alternately along s-a-b-t and s-b-a-t, the flow is maximum.]

One natural specialization of the augmenting path algorithm is to augment flow along a "shortest path" from the source to the sink, defined as a path consisting of the fewest number of arcs. If we augment flow along a shortest path, then the length of any shortest path either stays the same or increases. Moreover, within m augmentations, the length of the shortest path is guaranteed to increase. (We will prove these results in the next section.) Since no path contains more than n-1 arcs, this rule guarantees that the number of augmentations is at most (n-1)m.

An alternative is to augment flow along a path of maximum residual capacity. This specialization also leads to improved complexity. Let v be any flow value and v* be the maximum flow value. By flow decomposition, the network contains at most m augmenting paths whose residual capacities sum to (v* - v). Thus the maximum capacity augmenting path has residual capacity at least (v* - v)/m. Now consider a sequence of 2m consecutive maximum capacity augmentations, starting with flow value v. At least one of these augmentations must augment the flow by an amount (v* - v)/2m or less, for otherwise we would have a maximum flow. Thus, after 2m or fewer maximum capacity augmentations, the algorithm reduces the residual capacity of a maximum capacity augmenting path by a factor of at least two. Since this capacity is initially at most U and must be at least 1 until the flow is maximum, after O(m log U) maximum capacity augmentations the flow must be maximum. (Note that we are essentially repeating the argument used to establish the geometric improvement approach discussed in Section 1.6.)

In the following section, we consider another algorithm for reducing the number of augmentations.

4.3 Shortest Augmenting Path Algorithm

A natural approach to augmenting along shortest paths would be to look successively for shortest paths by performing a breadth first search in the residual network. If the labeling algorithm maintains the set L of labeled nodes as a queue, then by examining the labeled nodes in a first-in, first-out order it would obtain a shortest path in the residual network. Each of these iterations would take O(m) steps, both in the worst case and in practice, and (by our subsequent observations) the resulting computation time would be O(nm²). Unfortunately, this computation time is excessive. We can improve this running time by exploiting the fact that the minimum distance from any node i to the sink node t is monotonically nondecreasing over all augmentations.

By fully exploiting this property, we can reduce the average time per augmentation to O(n).

The Algorithm

The concept of distance labels will prove to be an important construct in the maximum flow algorithms that we discuss in this section and in Sections 4.4 and 4.5. A distance function d : N → Z+ with respect to the residual capacities r_ij is a function from the set of nodes to the nonnegative integers. We say that a distance function is valid if it satisfies the following two conditions:

C4.1. d(t) = 0;

C4.2. d(i) ≤ d(j) + 1 for every arc (i,j) ∈ A with r_ij > 0.

We refer to d(i) as the distance label of node i and to condition C4.2 as the validity condition. It is easy to demonstrate that d(i) is a lower bound on the length of the shortest directed path from i to t in the residual network. Let i = i1 - i2 - i3 - ... - ik - t be any path of length k in the residual network from node i to t. Then, from C4.2 we have d(i) = d(i1) ≤ d(i2) + 1, d(i2) ≤ d(i3) + 1, ..., d(ik) ≤ d(t) + 1 = 1. These inequalities imply that d(i) ≤ k for any path of length k from node i to t in the residual network and, hence, any shortest path from node i to t contains at least d(i) arcs. If, for each node i, the distance label d(i) equals the length of the shortest path from i to t in the residual network, then we call the distance labels exact. For example, in Figure 4.1(c), d = (0, 0, 0, 0) is a valid distance label, though d = (3, 1, 2, 0) represents the exact distance labels. There is no particular urgency to compute the distance labels exactly; it suffices to have valid distances, which are lower bounds on the exact distances. By allowing this flexibility in the distance labels, the algorithm can maintain them without incurring any significant cost.

An arc (i,j) in the residual network is admissible if it satisfies d(i) = d(j) + 1; other arcs are inadmissible. A path from s to t consisting entirely of admissible arcs is an admissible path. For any admissible path of length k, d(s) = k. Since d(s) is a lower bound on the length of any path from the source to the sink in the residual network, the algorithm we describe next, which repeatedly augments flow along admissible paths, augments flows along shortest paths in the residual network. Thus, we refer to it as the shortest augmenting path algorithm.

We can compute the initial distance labels by performing a backward breadth first search of the residual network, starting at the sink node. The algorithm generates an admissible path by adding admissible arcs, one at a time, as follows. It maintains a path from the source node to some node i*, called the current node, consisting entirely of admissible arcs. We call this path a partial admissible path and store it using predecessor indices, i.e., pred(j) = i for each arc (i,j) on the path. The algorithm performs one of two steps at the current node: advance or retreat. The advance step identifies some admissible arc (i*, j*) emanating from node i*, adds it to the partial admissible path, and designates j* as the new current node. If no admissible arc emanates from node i*, then the algorithm performs the retreat step: it increases the distance label of node i* so that at least one admissible arc emanates from it (we refer to this step as a relabel operation). Increasing d(i*) makes the arc (pred(i*), i*) inadmissible (assuming i* ≠ s); consequently, we delete (pred(i*), i*) from the partial admissible path, and node pred(i*) becomes the new current node. Whenever the partial admissible path becomes an admissible path (i.e., contains node t), the algorithm makes a maximum possible augmentation on this path and begins again with the source as the current node. The algorithm terminates when d(s) ≥ n, indicating that the network contains no augmenting path from the source to the sink. We next describe the algorithm formally.

algorithm SHORTEST AUGMENTING PATH;
begin
  x := 0;
  perform a backward breadth first search of the residual network, starting from node t, to obtain the distance labels d(i);
  i* := s;
  while d(s) < n do
  begin
    if i* has an admissible arc then ADVANCE(i*) else RETREAT(i*);
    if i* = t then AUGMENT and set i* := s;
  end;
end;

procedure ADVANCE(i*);
begin
  let (i*, j*) be an admissible arc in A(i*);
  pred(j*) := i* and i* := j*;
end;

Moreover.e. each step. units of flow along path P. Proof. algorithm constructs valid distance function is Initially.80 procedure RETREAT(i'). Initially. Correctness of the Algorithm We maximum first show that the shortest augmentation algorithm correctly solves the flow problem. that the distance valid prior to a step. i. The shortest augmenting path algorithm maintains valid distance labels at Lemma 4. A = min : {rjj : (i. j) € P).1. end. . procedure begin AUGMENT. We show that the algorithm maintains valid distance labels at every step by performing induction on the number of augment and relabel steps. after an augment step (when the and (ii) after a relabel step.. In our subsequent discussion we shall always assume that the algorithms select admissible arcs using this technique. satisfies the validity (i) condition C4.2. inductively. list can be arranged arbitrarily. makes the next arc in the arc it the current arc. augment A end. list the current-arc of node sequentially list is the arc in its is arc list. node. once decided. using predecessor indices identify an augmenting path P from the source to the sink. but the order. each relabel step strictly increases the distance label of a node. the labels. We need to check whether these conditions remain valid residual graph changes). The algorithm examines this it and whenever the current arc inadmissible. We use the following data structure to select an admissible arc We maintain the list A(i) of arcs emanating from each node Each node i emanating from Arcs in each a i. updates the distance label of node arc in its and the current arc once again becomes the implicitly first arc list. ?t then i* : = pred(i*). When i the algorithm has examined all arcs in A(i). begin d(i*) if !• : = min s { d(j) + 1 : (i. j) which i is the current candidate for the first next advance step. Assume. has a current-arc (i. j) € A(i*) and ^- > ). remains unchanged throughout the algorithm.

Correctness of the Algorithm

We first show that the shortest augmenting path algorithm correctly solves the maximum flow problem.

Lemma 4.1. The shortest augmenting path algorithm maintains valid distance labels at each step. Moreover, each relabel step strictly increases the distance label of a node.

Proof. We show that the algorithm maintains valid distance labels at every step by performing induction on the number of augment and relabel steps. Initially, the algorithm constructs valid distance labels. Assume, inductively, that the distance function is valid prior to a step, i.e., satisfies the validity condition C4.2. We need to check that the labels remain valid (i) after an augment step (when the residual graph changes) and (ii) after a relabel step.

(i) A flow augmentation on arc (i,j) might delete this arc from the residual network, but this modification to the residual network does not affect the validity of the distance function for this arc. Augmentation on arc (i,j) might, however, create an additional arc (j,i) with r_ji > 0 and, therefore, also create an additional condition d(j) ≤ d(i) + 1 that needs to be satisfied. The distance labels satisfy this condition, though, since d(i) = d(j) + 1 by the admissibility property of the augmenting path.

(ii) The algorithm performs a relabel step at node i when the current arc reaches the end of the arc list A(i). Observe that if an arc (i,j) is inadmissible at some stage, then it remains inadmissible until d(i) increases, because of our inductive hypothesis that distance labels are nondecreasing. Thus, when the current arc reaches the end of the arc list A(i), no arc (i,j) ∈ A(i) satisfies d(i) = d(j) + 1 and r_ij > 0. Hence, d(i) < min {d(j) + 1 : (i,j) ∈ A(i) and r_ij > 0} = d'(i), thereby establishing the second part of the lemma. Finally, this choice of d'(i) ensures that the condition d(i) ≤ d(j) + 1 remains valid for all (i,j) in the residual network; in addition, since d(i) increases, the conditions d(k) ≤ d(i) + 1 remain valid for all arcs (k,i) in the residual network.

Theorem 4.2. The shortest augmenting path algorithm correctly computes a maximum flow.

Proof. The algorithm terminates when d(s) ≥ n. Since d(s) is a lower bound on the length of the shortest augmenting path from s to t, this condition implies that the network contains no augmenting path from the source to the sink, which is the termination criterion for the generic augmenting path algorithm. Hence, the current flow is maximum.

At termination of the algorithm, we can also obtain a minimum s-t cutset as follows. For 0 ≤ k < n, let α_k denote the number of nodes with distance label equal to k. Since d(s) ≥ n, some α_k*, with k* < n, must be zero. Let S = {i ∈ N : d(i) > k*} and S̄ = N - S = {i ∈ N : d(i) < k*}. By construction, s ∈ S and t ∈ S̄, and both of the sets S and S̄ are nonempty. Each arc (i,j) ∈ (S, S̄) satisfies d(i) > d(j) + 1; hence, the validity condition C4.2 implies that r_ij = 0 for each (i,j) ∈ (S, S̄). Consequently, (S, S̄) is a minimum cutset, and the current flow is maximum.

Complexity of the Algorithm

We next show that the algorithm computes a maximum flow in O(n²m) time.

Lemma 4.2. (a) Each distance label increases at most n times; consequently, the total number of relabel steps is at most n². (b) The number of augment steps is at most nm/2.

Proof. Each relabel step at node i increases d(i) by at least one. After the algorithm has relabeled node i at most n times, d(i) ≥ n. From this point on, the algorithm never selects node i again during an advance step, since for every node k in the current path, d(k) < d(s) < n. Thus the algorithm relabels a node at most n times, and the total number of relabel steps is bounded by n².

Each augment step saturates at least one arc, i.e., decreases its residual capacity to zero. Suppose that the arc (i,j) becomes saturated at some iteration (at which d(i) = d(j) + 1). Then no more flow can be sent on (i,j) until flow is sent back from node j to node i (at which point d'(j) = d'(i) + 1 ≥ d(i) + 1 = d(j) + 2). Hence, between two consecutive saturations of arc (i,j), d(j) increases by at least 2 units. Consequently, any arc (i,j) can become saturated at most n/2 times, and the total number of arc saturations is no more than nm/2.

Theorem 4.3. The shortest augmenting path algorithm runs in O(n²m) time.

Proof. The algorithm performs O(nm) flow augmentations, and each augmentation takes O(n) time, resulting in O(n²m) total effort in the augmentation steps. Each advance step increases the length of the partial admissible path by one, and each retreat step decreases its length by one. Since each partial admissible path has length at most n, the algorithm requires at most O(n² + n²m) advance steps: the first term comes from the number of retreat (relabel) steps, and the second term from the number of augmentations, which is bounded by nm/2 by the previous lemma. The total time spent in all relabel operations is Σ {n |A(i)| : i ∈ N} = O(nm).

Finally, we consider the time spent in identifying admissible arcs. The time taken to identify the admissible arc of node i is O(1) plus the time spent in scanning arcs in A(i). After having performed |A(i)| such scans, the algorithm reaches the end of the arc list and relabels node i. Since the algorithm relabels node i O(n) times, the total time spent in all scans is O(Σ {n |A(i)| : i ∈ N}) = O(nm).

The combination of these time bounds establishes the theorem.

The proof of Theorem 4.3 also suggests an alternative termination condition for the shortest augmenting path algorithm. The termination criterion d(s) ≥ n is satisfactory for a worst-case analysis, but may not be efficient in practice. Researchers have observed empirically that the algorithm spends too much time in relabeling, and that a major portion of this relabeling is done after the algorithm has already found a maximum flow. The algorithm can be improved by detecting the presence of a minimum cutset before performing these relabeling operations. We can do so by maintaining the number of nodes α_k with distance label equal to k, for 0 ≤ k < n. The algorithm updates this array after every relabel operation and terminates whenever it first finds a gap in the α array, i.e., α_k* = 0 for some k* < n. As we have seen in the proof of Theorem 4.2, if S = {i : d(i) > k*}, then (S, S̄) denotes a minimum cutset.

The idea of augmenting flows along shortest paths is intuitively appealing and easy to implement in practice. The resulting algorithms identify at most O(nm) augmenting paths, and this bound is tight, i.e., on particular examples these algorithms perform Ω(nm) augmentations. The only way to improve the running time of the shortest augmenting path algorithm is to perform fewer computations per augmentation. The use of a sophisticated data structure, called dynamic trees, reduces the average time for each augmentation from O(n) to O(log n). This implementation of the maximum flow algorithm runs in O(nm log n) time, and obtaining further improvements appears quite difficult, except in very dense networks. These implementations with sophisticated data structures appear to be primarily of theoretical interest, however, because maintaining the data structures requires substantial overhead that tends to increase rather than reduce the computational times in practice. A detailed discussion of dynamic trees is beyond the scope of this chapter.

Potential Functions and an Alternate Proof of Lemma 4.2(b)

A powerful method for proving computational time bounds is to use potential functions. Potential function techniques are general purpose techniques for proving the complexity of an algorithm by analyzing the effects of different steps on an appropriately defined function. The use of potential functions enables us to define an "accounting" relationship between the occurrences of the various steps of an algorithm that can be used to obtain a bound on steps that might be difficult to obtain using other arguments.

Rather than formally introducing potential functions, we illustrate the technique by showing that the number of augmentations in the shortest augmenting path algorithm is O(nm). Suppose that in the shortest augmenting path algorithm we keep track of the number of admissible arcs in the residual network. Let F(k) denote the number of admissible arcs at the end of the k-th step; for the purpose of this argument, we count as a step either an augmentation or a relabel operation. Let the algorithm perform K steps before it terminates. Clearly, F(0) ≤ m and F(K) ≥ 0. Each augmentation decreases the residual capacity of at least one arc to zero, and hence reduces F by at least one unit. Each relabeling of node i creates as many as |A(i)| new admissible arcs, and increases F by the same amount. This increase in F is at most nm over all relabelings, since the algorithm relabels any node at most n times (as a consequence of Lemma 4.1) and Σ {n |A(i)| : i ∈ N} = nm. Since the initial value of F is at most m more than its terminal value, the total decrease in F due to all augmentations is at most m + nm. Thus the number of augmentations is at most m + nm = O(nm).

This argument is fairly representative of the potential function technique. Our objective was to bound the number of augmentations. We did so by defining a potential function that decreases whenever the algorithm performs an augmentation. The potential increases only when the algorithm relabels distances, and thus we can bound the number of augmentations using known bounds on the number of relabels. In general, we bound the number of steps of one type in terms of known bounds on the number of steps of other types.

4.4 Preflow-Push Algorithms

Augmenting path algorithms send flow by augmenting along a path. This basic step further decomposes into the more elementary operation of sending flow along an arc: sending a flow of Δ units along a path of k arcs decomposes into k basic operations of sending a flow of Δ units along an arc of the path. We shall refer to each of these basic operations as a push.

A path augmentation has one advantage over a single push: it maintains conservation of flow at all nodes. In fact, the push-based algorithms, such as those we develop in this and the following sections, necessarily violate conservation of flow.

Rather, these algorithms permit the flow into a node to exceed the flow out of this node. We will refer to any such flows as preflows. The goal of each iterative step is to choose some active node and to send its excess closer to the sink, closeness being measured with respect to the current distance labels. As in the shortest augmenting path algorithm, we send flow only on admissible arcs. If the method cannot send the excess of a node to nodes with smaller distance labels, then it increases the distance label of the node so that it creates at least one new admissible arc. The algorithm terminates when the network contains no active nodes.

Preflow-push algorithms have several advantages over augmentation based algorithms. First, they are more general and more flexible. Second, they can push flow closer to the sink before identifying augmenting paths. Third, they are better suited for distributed or parallel computation. Fourth, the best preflow-push algorithms currently outperform the best augmenting path algorithms in theory as well as in practice.

The Generic Algorithm

A preflow x is a function x: A → R that satisfies (4.1c) and the following relaxation of (4.1b):

Σ {x_ji : (j,i) ∈ A} - Σ {x_ij : (i,j) ∈ A} ≥ 0, for all i ∈ N - {s, t}.

The preflow-push algorithms maintain a preflow at each intermediate stage. For a given preflow x, we define the excess of each node i ∈ N - {s, t} as

e(i) = Σ {x_ji : (j,i) ∈ A} - Σ {x_ij : (i,j) ∈ A}.

We refer to a node with positive excess as an active node, and we adopt the convention that the source and sink nodes are never active. We define the distance labels and the admissible arcs as in the previous section. At every iteration of the algorithm (except at its initialization and at its termination), the network contains at least one active node, i.e., a node i ∈ N - {s, t} with e(i) > 0. The two basic operations of the generic preflow-push method are (i) pushing flow on an admissible arc, and (ii) updating a distance label. The preflow-push algorithm uses the following subroutines:

procedure PREPROCESS;
begin
  x := 0;
  perform a backward breadth first search of the residual network, starting at node t, to determine initial distance labels d(i);
  x_sj := u_sj for each arc (s,j) ∈ A(s), and d(s) := n;
end;

procedure PUSH/RELABEL(i);
begin
  if the network contains an admissible arc (i,j) then
    push δ := min {e(i), r_ij} units of flow from node i to node j
  else replace d(i) by min {d(j) + 1 : (i,j) ∈ A(i) and r_ij > 0};
end;

A push of δ units from node i to node j decreases both e(i) and r_ij by δ units and increases both e(j) and r_ji by δ units. We say that a push of δ units of flow on arc (i,j) is saturating if δ = r_ij, and nonsaturating otherwise. We refer to the process of increasing the distance label of a node as a relabel operation; the purpose of the relabel operation is to create at least one admissible arc on which the algorithm can perform further pushes.

The following generic version of the preflow-push algorithm combines the subroutines just described.

algorithm PREFLOW-PUSH;
begin
  PREPROCESS;
  while the network contains an active node do
  begin
    select an active node i;
    PUSH/RELABEL(i);
  end;
end;
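The generic algorithm can be sketched as follows (ours, not part of the original text). A[i] lists the neighbors of node i, r contains both orientations of every arc, and the first-in, first-out selection of active nodes is our choice, since the generic method leaves the selection rule open.

from collections import deque

def preflow_push(n, A, r, s, t):
    r = dict(r)
    e = [0] * n
    d = [n] * n                          # PREPROCESS: exact labels by a
    d[t] = 0                             # backward search from t ...
    Q = deque([t])
    while Q:
        j = Q.popleft()
        for i in A[j]:
            if d[i] == n and r.get((i, j), 0) > 0:
                d[i] = d[j] + 1
                Q.append(i)
    d[s] = n                             # ... then d(s) := n
    active = deque()
    for j in A[s]:                       # saturate the arcs leaving s
        delta = r.get((s, j), 0)
        if delta > 0:
            r[s, j] -= delta
            r[j, s] += delta
            e[j] += delta
            if j != t:
                active.append(j)
    while active:                        # PUSH/RELABEL at active nodes
        i = active[0]
        pushed = False
        for j in A[i]:
            if r.get((i, j), 0) > 0 and d[i] == d[j] + 1:
                delta = min(e[i], r[i, j])   # push on an admissible arc
                r[i, j] -= delta
                r[j, i] += delta
                e[i] -= delta
                e[j] += delta
                if j not in (s, t) and e[j] == delta:
                    active.append(j)     # j has just become active
                if e[i] == 0:
                    active.popleft()     # i is no longer active
                pushed = True
                break
        if not pushed:                   # relabel node i
            d[i] = min(d[j] + 1 for j in A[i] if r.get((i, j), 0) > 0)
    return e[t]                          # the maximum flow value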

It might be instructive to visualize the generic preflow-push algorithm in terms of a physical network: arcs represent flexible water pipes, nodes represent joints, and the distance function measures how far nodes are above the ground. In this network, we wish to send water from the source to the sink, and we visualize flow in an admissible arc as water flowing downhill. Initially, we move the source node upward, and water flows to its neighbors. In general, water flows downhill towards the sink; occasionally, however, flow becomes trapped locally at a node that has no downhill neighbors. At this point, we move the node upward, and again water flows downhill towards the sink. Eventually, no more flow can reach the sink. As we continue to move nodes upward, the remaining excess flow eventually flows back towards the source. The algorithm terminates when all the water flows either into the sink or back into the source.

The preprocessing step accomplishes several important tasks. First, it gives each node adjacent to node s a positive excess, so that the algorithm can begin by selecting some node with positive excess. Second, since the preprocessing step saturates all arcs incident to node s, none of these arcs is admissible, and setting d(s) = n will satisfy the validity condition C4.2. Third, since d(s) = n is a lower bound on the length of any shortest path from s to t, the residual network contains no path from s to t. Since the distance labels are nondecreasing, we are also guaranteed that in subsequent iterations the residual network will never contain a directed path from s to t, and so there will never be any need to push flow from s again.

In the push/relabel(i) step, we identify an admissible arc in A(i) using the same data structure we used in the shortest augmenting path algorithm: we maintain with each node i a current arc (i,j), which is the current candidate for the push operation, and we choose the current arc by sequentially scanning the arc list. We have seen earlier that scanning the arc lists takes O(nm) total time if the algorithm relabels each node O(n) times.

Figure 4.3 illustrates the push/relabel steps applied to the example given in Figure 4.1(a); Figure 4.3(a) specifies the preflow determined by the preprocess step. Suppose the select step examines node 2. Since arc (2,4) has residual capacity r_24 = 1 and d(2) = d(4) + 1, the algorithm performs a (saturating) push of δ = min {2, 1} = 1 units. The push reduces the excess of node 2 to 1. Arc (2,4) is deleted from the residual network and arc (4,2) is added to it. Since node 2 is still an active node, it can be selected again for further pushes. The arcs (2,3) and (2,1) have positive residual capacities, but they do not satisfy the distance condition. Hence, the algorithm performs a relabel operation and gives node 2 the new distance label d'(2) = min {d(3) + 1, d(1) + 1} = min {2, 5} = 2.

[Figure 4.3(a). The residual network after the preprocessing step: d(1) = 4, d(2) = 1, d(3) = 1, d(4) = 0; e(2) = 2, e(3) = 4. Figure 4.3(b). After the execution of step PUSH(2).]

[Figure 4.3(c). After the execution of step RELABEL(2): d(1) = 4, d(2) = 2, d(3) = 1, d(4) = 0. Figure 4.3. An illustration of the push and relabel steps.]

Assuming that the generic preflow-push algorithm terminates, we can easily show that it finds a maximum flow. The algorithm terminates when the excess resides either at the source or at the sink, implying that the current preflow is a flow. Since d(s) = n, the residual network contains no path from the source to the sink. This condition is the termination criterion of the augmenting path algorithm, and thus the total flow on the arcs directed into the sink is the maximum flow value.

Complexity of the Algorithm

We now analyze the complexity of the algorithm. We begin by establishing one important result: that the distance labels are always valid and do not increase too many times. The first of these conclusions follows from Lemma 4.1, because, as in the shortest augmenting path algorithm, the preflow-push algorithm pushes flow only on admissible arcs and relabels a node only when no admissible arc emanates from it. The second conclusion follows from the following lemma.

Lemma 4.3. At any stage of the preflow-push algorithm, each node i with positive excess is connected to node s by a directed path from i to s in the residual network.

Proof. By the flow decomposition theory, any preflow x can be decomposed with respect to the original network G into nonnegative flows along (i) paths from the source s to the sink t, (ii) paths from s to active nodes, and (iii) flows around directed cycles. Let i be an active node relative to the preflow x in G.

Then the flow decomposition of x must contain a path P from s to i, since the paths from s to t and the flows around cycles do not contribute to the excess at node i. The residual network then contains the reversal of P (P with the orientation of each arc reversed), and hence a directed path from i to s.

This lemma implies that during a relabel step, the algorithm does not minimize over an empty set.

Lemma 4.4. For each node i ∈ N, d(i) < 2n.

Proof. The last time the algorithm relabeled node i, the node had a positive excess, and hence the residual network contained a path of length at most n-1 from node i to node s. The fact that d(s) = n and condition C4.2 imply that d(i) ≤ d(s) + n - 1 < 2n.

Lemma 4.5. (a) Each distance label increases at most 2n times; consequently, the total number of relabel steps is at most 2n². (b) The number of saturating pushes is at most nm.

Proof. The proof is very much similar to that of Lemma 4.2.

Lemma 4.6. The number of nonsaturating pushes is O(n²m).

Proof. We prove the lemma using an argument based on potential functions. Let I denote the set of active nodes, and consider the potential function F = Σ {d(i) : i ∈ I}. Since |I| ≤ n and d(i) < 2n for all i ∈ I, the initial value of F (after the preprocessing step) is at most 2n². At termination, F is zero. During the push/relabel(i) step, one of the following two cases must apply:

Case 1. The algorithm is unable to find an admissible arc along which it can push flow. In this case, the distance label of node i increases by ε ≥ 1 units. This operation increases F by at most ε units. Since the total increase in d(i) throughout the running of the algorithm is bounded by 2n for each node i, the total increase in F due to increases in distance labels is bounded by 2n².

Case 2. The algorithm is able to identify an arc on which it can push flow, and so it performs either a saturating or a nonsaturating push. A saturating push on arc (i,j) might create a new excess at node j, thereby increasing the number of active nodes by 1 and increasing F by d(j), which may be as much as 2n per saturating push, and hence at most 2n²m over all saturating pushes. Next note that a nonsaturating push on arc (i,j) does not increase |I|.

increase the number of active nodes. The nonsaturating push decreases F by d(i), since node i becomes inactive, but it simultaneously increases F by d(j) = d(i) - 1 if the push causes node j to become active. The net decrease in F is therefore at least 1 unit per nonsaturating push. The initial value of F is at most 2n^2, the maximum possible increase in F is 2n^2 + 2n^2 m, each nonsaturating push decreases F by at least one unit, and F always remains nonnegative. Consequently, the nonsaturating pushes can occur at most 2n^2 + 2n^2 + 2n^2 m = O(n^2 m) times, proving the lemma.

Finally, we indicate how the algorithm keeps track of active nodes for the push/relabel steps. The algorithm maintains a set S of active nodes. It adds to S nodes that become active following a push and are not already in S, and deletes from S nodes that become inactive following a nonsaturating push. Several data structures (for example, doubly linked lists) are available for storing S so that the algorithm can add, delete, or select elements from it in O(1) time. Consequently, it is easy to implement the preflow-push algorithm in O(n^2 m) time. We have thus established the following theorem:

Theorem 4.4. The generic preflow-push algorithm runs in O(n^2 m) time.

A Specialization of the Generic Algorithm

The running time of the generic preflow-push algorithm is comparable to the bound of the shortest augmenting path algorithm. However, the preflow-push algorithm has several nice features, in particular, its flexibility and its potential for further improvements. By specifying different rules for selecting nodes for push/relabel operations, we can derive many different algorithms from the generic version. For example, suppose that we always select an active node with the highest distance label for the push/relabel step. Let h* = max {d(i) : e(i) > 0, i ∈ N} at some point of the algorithm. Then nodes with distance h* push flow to nodes with distance h* - 1, and these nodes, in turn, push flow to nodes with distance h* - 2, and so on. Note that if a node is relabeled, then excess moves up and then gradually comes down. Suppose that the algorithm relabels no node during n consecutive node examinations; then all excess reaches the sink node and the algorithm terminates. Since the algorithm requires O(n^2) relabel operations, we immediately obtain a bound of O(n^3) on the number of node examinations. Each node examination entails at most one nonsaturating push. Consequently, this algorithm performs O(n^3) nonsaturating pushes.
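In an implementation-oriented sketch, the generic method might be organized as follows. The Python code below is only illustrative: it represents the network as a dictionary cap of arc capacities, keeps the active nodes in a set S as described above, and discharges each selected node fully before moving on. The names (preflow_push, cap, and so forth) are our own conventions, not anything fixed by the algorithm.

from collections import defaultdict

def preflow_push(n, cap, s, t):
    # Generic preflow-push (a sketch).  Nodes are 0..n-1; cap maps arcs
    # (i, j) to their capacities.  Returns the maximum flow value.
    flow = defaultdict(int)
    excess = [0] * n
    d = [0] * n
    d[s] = n                                  # after preprocessing, d(s) = n
    adj = defaultdict(set)
    for (i, j) in cap:                        # residual arcs in both directions
        adj[i].add(j)
        adj[j].add(i)

    def r(i, j):                              # residual capacity of (i, j)
        return cap.get((i, j), 0) - flow[(i, j)] + flow[(j, i)]

    def push(i, j, delta):
        back = min(delta, flow[(j, i)])       # cancel opposite flow first
        flow[(j, i)] -= back
        flow[(i, j)] += delta - back
        excess[i] -= delta
        excess[j] += delta

    for j in list(adj[s]):                    # preprocess: saturate arcs out of s
        if r(s, j) > 0:
            push(s, j, r(s, j))
    S = {i for i in range(n) if i not in (s, t) and excess[i] > 0}
    while S:                                  # S holds the active nodes
        i = S.pop()
        while excess[i] > 0:
            try:
                j = next(j for j in adj[i] if r(i, j) > 0 and d[i] == d[j] + 1)
            except StopIteration:             # relabel: no admissible arc
                d[i] = 1 + min(d[j] for j in adj[i] if r(i, j) > 0)
                continue
            push(i, j, min(excess[i], r(i, j)))
            if j not in (s, t) and excess[j] > 0:
                S.add(j)
    return excess[t]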

To implement the highest-label strategy, we maintain the lists LIST(r) = {i ∈ N : i is active and d(i) = r}, and a variable level, which is an upper bound on the highest index r for which LIST(r) is nonempty. We can store these lists as doubly linked lists so that adding, deleting, or selecting an element takes O(1) time. We identify the highest indexed nonempty list by starting at LIST(level) and sequentially scanning the lower indexed lists. We leave it as an exercise to show that the overall effort needed to scan the lists is bounded by n plus the total increase in the distance labels, which is O(n^2); hence, this scanning is not a bottleneck, and we obtain the following result.

Theorem 4.5. The preflow-push algorithm that always pushes flow from an active node with the highest distance label runs in O(n^3) time.

The O(n^3) bound can be improved. Researchers have shown, using a more clever analysis, that the highest label preflow push algorithm in fact runs in O(n^2 √m) time.
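A minimal sketch of the bucket structure behind this selection rule, with LIST and level as above (the function name and the use of Python sets in place of doubly linked lists are illustrative):

def highest_active(LIST, level):
    # LIST[r] holds the active nodes with distance label r, 0 <= r < 2n;
    # `level` is an upper bound on the largest nonempty index.  Scanning
    # downward costs n plus the total increase in distance labels over
    # the whole run, i.e., O(n^2) overall.
    while level >= 0 and not LIST[level]:
        level -= 1
    if level < 0:
        return None, 0                 # no active node remains
    return next(iter(LIST[level])), level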

4.5 Excess-Scaling Algorithm

The generic preflow-push algorithm allows the flows at each intermediate step to violate the mass balance equations; by pushing flows from active nodes, the algorithm attempts to satisfy these equations. The function e_max = max {e(i) : i is an active node} is one measure of the infeasibility of a preflow. During the execution of the generic algorithm, we would observe no particular pattern in e_max, except that e_max eventually decreases to the value 0. In this section, we develop an excess-scaling technique that systematically reduces e_max to 0. We refer to the resulting algorithm as the excess-scaling algorithm, since it is based on scaling the node excesses. Recall that U represents the largest arc capacity in the network.

The excess-scaling algorithm is based on the following ideas. Let Δ denote an upper bound on e_max; we refer to this bound as the excess-dominator. The algorithm pushes flow from nodes whose excess is more than Δ/2 ≥ e_max/2. This choice assures that during nonsaturating pushes the algorithm sends relatively large excesses closer to the sink. Pushes carrying small amounts of flow are of little benefit and can cause bottlenecks that retard the algorithm's progress; the excess-scaling strategy reduces the number of nonsaturating pushes from O(n^2 m) to O(n^2 log U).

The algorithm also does not allow the maximum excess to increase beyond Δ. This strategy may prove useful for the following reason. Suppose several nodes send flow to a single node j, creating a very large excess. It is likely that node j could not send the accumulated flow closer to the sink, and the algorithm would then need to increase the distance label of node j and return much of its excess back toward the source. Thus, pushing too much flow to any node is likely to be a wasted effort. The excess-scaling algorithm has the following algorithmic description.

algorithm EXCESS-SCALING;
begin
  PREPROCESS;
  K := ⌈log U⌉;
  for k := K down to 0 do
  begin  (Δ-scaling phase)
    Δ := 2^k;
    while the network contains a node i with e(i) > Δ/2 do
      perform push/relabel(i) while ensuring that no node excess exceeds Δ;
  end;
end;

The algorithm performs a number of scaling phases, with the value of the excess-dominator Δ decreasing from phase to phase. We refer to a specific scaling phase with a certain value of Δ as the Δ-scaling phase. Initially, Δ = 2^⌈log U⌉, where the logarithm has base 2, so that U ≤ Δ < 2U. During the Δ-scaling phase, Δ/2 < e_max ≤ Δ, and e_max may vary up and down during the phase. When e_max ≤ Δ/2, a new scaling phase begins. After the algorithm has performed ⌈log U⌉ + 1 scaling phases, e_max decreases to the value 0 and we obtain the maximum flow.

The excess-scaling algorithm uses the same step push/relabel(i) as in the generic preflow-push algorithm, but with one slight difference: instead of pushing r_ij units of flow, it pushes δ = min {e(i), r_ij, Δ - e(j)} units. This change ensures that the algorithm permits no excess to exceed Δ. The algorithm uses the following node selection rule to guarantee that no node excess exceeds Δ.

Selection Rule. Among all nodes with excess of more than Δ/2, select a node with minimum distance label (breaking ties arbitrarily).
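A sketch of one Δ-scaling phase follows, assuming push, relabel and residual primitives like those of the generic method (all names are illustrative); the comment marks where Lemma 4.7, proved below, guarantees a large push:

def delta_scaling_phase(nodes, excess, d, residual, push, relabel, Delta, s, t):
    # One Delta-scaling phase of the excess-scaling algorithm (a sketch;
    # residual(i) is assumed to return a dict j -> r_ij of positive
    # residual capacities).  Processes nodes with e(i) > Delta/2.
    while True:
        big = [i for i in nodes if i not in (s, t) and excess[i] > Delta / 2]
        if not big:
            return                            # e_max <= Delta/2: next phase
        i = min(big, key=lambda v: d[v])      # selection rule: smallest label
        arcs = residual(i)
        j = next((j for j in arcs if d[i] == d[j] + 1), None)
        if j is None:
            relabel(i)                        # no admissible arc
        else:
            # delta = min{e(i), r_ij, Delta - e(j)}; by Lemma 4.7 the chosen
            # i guarantees e(j) <= Delta/2, so delta > 0.
            push(i, j, min(excess[i], arcs[j], Delta - excess[j]))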

Lemma 4.7. The algorithm satisfies the following two conditions:

C4.3. Each nonsaturating push sends at least Δ/2 units of flow.

C4.4. No excess ever exceeds Δ.

Proof. For every push on arc (i, j), we have e(i) > Δ/2 and e(j) ≤ Δ/2, since node i is a node with smallest distance label among nodes whose excess is more than Δ/2, and d(j) = d(i) - 1 < d(i) because arc (i, j) is admissible. Hence, in a nonsaturating push on arc (i, j), the algorithm sends min {e(i), Δ - e(j)} ≥ min {Δ/2, Δ/2} = Δ/2 units of flow, which establishes the first condition. Further, let e'(j) be the excess at node j after the push. Then e'(j) = e(j) + min {e(i), r_ij, Δ - e(j)} ≤ e(j) + Δ - e(j) = Δ. All node excesses thus remain less than or equal to Δ, proving the second condition.

Lemma 4.8. The excess-scaling algorithm performs O(n^2) nonsaturating pushes per scaling phase and O(n^2 log U) pushes in total.

Proof. Consider the potential function F = Σ_{i ∈ N} e(i) d(i)/Δ. Using this potential function, we will establish the first assertion of the lemma; since the algorithm has O(log U) scaling phases, the second assertion is a consequence of the first. The initial value of F at the beginning of the Δ-scaling phase is bounded by 2n^2, because e(i) is bounded by Δ and d(i) is bounded by 2n. During the push/relabel(i) step, one of the following two cases must apply:

Case 1. The algorithm is unable to find an admissible arc along which it can push flow. In this case the distance label of node i increases by ε ≥ 1 units. This relabeling operation increases F by at most ε units, because e(i) ≤ Δ. Since for each node i the total increase in d(i) throughout the running of the algorithm is bounded by 2n (by Lemma 4.4), the total increase in F due to the relabeling of nodes is bounded by 2n^2 in the Δ-scaling phase (actually, the increase in F due to node relabelings is at most 2n^2 over all scaling phases).

Case 2. The algorithm is able to identify an arc on which it can push flow, and so it performs either a saturating or a nonsaturating push. In either case, F decreases. A nonsaturating push on arc (i, j) sends at least Δ/2 units of flow from node i to node j, and since d(j) = d(i) - 1, after this operation F decreases by at least 1/2 unit. Since the initial value of F at the beginning of a Δ-scaling phase is at most 2n^2 and the increases in F during this scaling phase sum to at most 2n^2 (from Case 1), the number of nonsaturating pushes is bounded by 8n^2.

This lemma implies a bound of O(nm + n^2 log U) for the excess-scaling algorithm, since we have already seen that all other operations — such as saturating pushes, relabel operations and finding admissible arcs — require O(nm) time. Up to this point, we have ignored the method needed to identify a node with the minimum distance label among nodes with excess more than Δ/2. Making this identification is easy if we use a scheme similar to the one used in the preflow-push method in Section 4.4 to find a node with the highest distance label. We maintain the lists LIST(r) = {i ∈ N : e(i) > Δ/2 and d(i) = r}, and a variable level, which is a lower bound on the smallest index r for which LIST(r) is nonempty. We identify the lowest indexed nonempty list by starting at LIST(level) and sequentially scanning the higher indexed lists. We leave it as an exercise to show that the overall effort needed to scan the lists is bounded by the number of pushes performed by the algorithm plus O(n log U) and, hence, is not a bottleneck operation. With this observation, we can summarize our discussion by the following result.

Theorem 4.6. The preflow-push algorithm with excess-scaling runs in O(nm + n^2 log U) time.

Networks with Lower Bounds on Flows

To conclude this section, we show how to solve maximum flow problems with nonnegative lower bounds on flows. Let l_ij ≥ 0 denote the lower bound for flow on any arc (i, j) ∈ A. Although the maximum flow problem with zero lower bounds always has a feasible solution, the problem with nonnegative lower bounds could be infeasible. We can, however, determine the feasibility of this problem by solving a maximum flow problem with zero lower bounds as follows. We set x_ij = l_ij for each arc (i, j) ∈ A. This choice gives us a pseudoflow in which e(i) represents the excess or deficit of node i ∈ N. (We refer the reader to Section 5.4 for the definition of a pseudoflow with both excesses and deficits.) We introduce a super source, node s*, and a super sink, node t*. For each node i with e(i) > 0, we add an arc (s*, i) with capacity e(i), and for each node i with e(i) < 0, we add an arc (i, t*) with capacity -e(i). We then solve a maximum flow problem from s* to t* in this transformed network. Let x* denote the maximum flow and v* the maximum flow value. If v* = Σ_{i : e(i) > 0} e(i), then the original problem is feasible, and choosing the flow on each arc (i, j) as x*_ij + l_ij gives a feasible flow; otherwise, the problem is infeasible.

Once we have found a feasible flow, we apply any of the maximum flow algorithms with only one change: define the residual capacity of an arc (i, j) as r_ij = (u_ij - x_ij) + (x_ji - l_ji). The first and second terms in this expression denote, respectively, the residual capacity for increasing flow on arc (i, j) and for decreasing flow on arc (j, i). These observations show that it is possible to solve the maximum flow problem with nonnegative lower bounds by two applications of the maximum flow algorithms we have already discussed. It is possible to establish the optimality of the solution generated by this approach by generalizing the max-flow min-cut theorem to accommodate situations with lower bounds.
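A sketch of the first (feasibility) phase follows, assuming some maximum flow routine max_flow(n, cap, s, t) returning the flow value and the flow itself — for instance, an adaptation of the preflow-push code above. One detail beyond the discussion above: the sketch also gives the transformed network an uncapacitated return arc (t, s), the standard device that lets the feasibility flow carry nonzero net flow from s to t; we assume (t, s) is not an arc of the original network.

def feasible_flow_with_lower_bounds(n, lower, cap, s, t, max_flow):
    # lower and cap map arcs (i, j) to l_ij and u_ij.  Returns a feasible
    # flow respecting the lower bounds, or None if the problem is infeasible.
    e = [0] * n                         # imbalances created by x_ij = l_ij
    cap2 = {}
    for (i, j), u in cap.items():
        l = lower.get((i, j), 0)
        cap2[(i, j)] = u - l            # shifted capacities u_ij - l_ij
        e[i] -= l
        e[j] += l
    ss, tt = n, n + 1                   # super source s* and super sink t*
    need = 0
    for i in range(n):
        if e[i] > 0:
            cap2[(ss, i)] = e[i]
            need += e[i]
        elif e[i] < 0:
            cap2[(i, tt)] = -e[i]
    cap2[(t, s)] = sum(cap.values()) + 1    # effectively uncapacitated
    value, x = max_flow(n + 2, cap2, ss, tt)
    if value < need:
        return None                     # the lower bounds are infeasible
    return {(i, j): x.get((i, j), 0) + lower.get((i, j), 0) for (i, j) in cap}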

5. MINIMUM COST FLOWS

In this section, we consider algorithmic approaches for the minimum cost flow problem. We consider the following node-arc formulation of the problem.

Minimize  Σ_{(i,j) ∈ A} c_ij x_ij    (5.1a)

subject to

Σ_{j : (i,j) ∈ A} x_ij − Σ_{j : (j,i) ∈ A} x_ji = b(i), for all i ∈ N,    (5.1b)

0 ≤ x_ij ≤ u_ij, for each (i, j) ∈ A.    (5.1c)

We assume that the lower bounds l_ij on arc flows are all zero and that arc costs are nonnegative. Let C = max {c_ij : (i, j) ∈ A} and U = max [max {u_ij : (i, j) ∈ A}, max {|b(i)| : i ∈ N}]. The transformations T1 and T3 in Section 2.4 imply that these assumptions do not impose any loss of generality. We remind the reader of our blanket assumption that all data (cost, supply/demand and capacity) are integral. We also assume that the minimum cost flow problem satisfies the following two conditions.

A5.1. Feasibility Assumption. We assume that Σ_{i ∈ N} b(i) = 0 and that the minimum cost flow problem has a feasible solution. We can ascertain the feasibility of the minimum cost flow problem by solving a maximum flow problem as follows. Introduce a super source node s* and a super sink node t*. For each node i with b(i) > 0, add an arc (s*, i) with capacity b(i), and for each node i with b(i) < 0, add an arc (i, t*) with capacity -b(i). Now solve a maximum flow problem from s* to t*. If the maximum flow value equals Σ_{i : b(i) > 0} b(i), then the minimum cost flow problem is feasible; otherwise, it is infeasible.

A5.2. Connectedness Assumption. We assume that the network G contains an uncapacitated directed path (i.e., each arc in the path has infinite capacity) between every pair of nodes. We impose this condition, if necessary, by adding artificial arcs (1, j) and (j, 1) for each j ∈ N and assigning a large cost and a very large capacity to each of these

arcs. No such arc would appear in a minimum cost solution unless the problem contains no feasible solution without artificial arcs.

Our notation for arcs assumes that at most one arc joins one node to any other node. By using more complex notation, we can easily treat the more general case. However, rather than changing our notation, we will assume that parallel arcs never arise (or, by inserting extra nodes on parallel arcs, we can produce a network without any parallel arcs). Observe that the concept of residual networks poses some notational difficulties: if the original network contains both the arcs (i, j) and (j, i), then the residual network may contain two arcs from node i to node j and/or two arcs from node j to node i with possibly different costs.

Our algorithms rely on the concept of residual networks. The residual network G(x) corresponding to a flow x is defined as follows: we replace each arc (i, j) ∈ A by two arcs, (i, j) and (j, i). The arc (i, j) has cost c_ij and residual capacity r_ij = u_ij − x_ij, and the arc (j, i) has cost −c_ij and residual capacity r_ji = x_ij. The residual network consists only of arcs with positive residual capacity. Observe that any directed cycle in the residual network G(x) is an augmenting cycle with respect to the flow x, and vice-versa (see Section 2.1 for the definition of an augmenting cycle). This equivalence implies the following alternate statement of Theorem 2.4.

Theorem 5.1. A feasible flow x is an optimum flow if and only if the residual network G(x) contains no negative cost directed cycle.
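In code, constructing G(x) from the capacity, cost and flow data is immediate (a sketch; the names are illustrative):

def residual_network(cap, cost, x):
    # Arcs of G(x) as defined above: (i, j) with cost c_ij and residual
    # capacity u_ij - x_ij, plus (j, i) with cost -c_ij and capacity x_ij.
    # Only arcs with positive residual capacity are kept.
    arcs = []
    for (i, j), u in cap.items():
        xij = x.get((i, j), 0)
        if xij < u:
            arcs.append((i, j, u - xij, cost[(i, j)]))
        if xij > 0:
            arcs.append((j, i, xij, -cost[(i, j)]))
    return arcs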

5.1 Duality and Optimality Conditions

As we have seen in Section 1.2, due to its special structure the minimum cost flow problem inherits many properties from linear programming; in particular, the problem and its dual have rather simple complementary slackness conditions. In this section, we formally state the linear programming dual problem and derive the complementary slackness conditions.

We associate a dual variable π(i) with the mass balance constraint of node i in (5.1b). Since one of the constraints in (5.1b) is redundant, we can set one of these dual variables to an arbitrary value; we therefore assume that π(1) = 0. It is possible to show that this assumption imposes no loss of generality. We also associate a dual variable δ_ij with the upper bound constraint of arc (i, j) in (5.1c). The dual problem to (5.1) is:

Maximize  Σ_{i ∈ N} b(i) π(i) − Σ_{(i,j) ∈ A} u_ij δ_ij    (5.2a)

subject to

π(i) − π(j) − δ_ij ≤ c_ij, for all (i, j) ∈ A,    (5.2b)

δ_ij ≥ 0 for all (i, j) ∈ A, and π(i) unrestricted.    (5.2c)

The complementary slackness conditions for this primal-dual pair are:

x_ij > 0  ⟹  π(i) − π(j) − δ_ij = c_ij,    (5.3)

δ_ij > 0  ⟹  x_ij = u_ij.    (5.4)

These conditions are equivalent to the following optimality conditions:

x_ij = 0  ⟹  π(i) − π(j) ≤ c_ij,    (5.5)

0 < x_ij < u_ij  ⟹  π(i) − π(j) = c_ij,    (5.6)

x_ij = u_ij  ⟹  π(i) − π(j) ≥ c_ij.    (5.7)

To see this equivalence, first suppose that 0 < x_ij < u_ij for some arc (i, j). Since x_ij > 0, (5.3) implies that

π(i) − π(j) − δ_ij = c_ij.    (5.8)

Since x_ij < u_ij, (5.4) implies that δ_ij = 0; substituting this result in (5.8) yields (5.6). Next, whenever x_ij = u_ij > 0 for some arc (i, j), condition (5.3)

again implies (5.8); substituting δ_ij ≥ 0 in this equation gives (5.7). Finally, if x_ij = 0, then (5.4) implies that δ_ij = 0, and substituting this result in (5.2b) gives (5.5).

We define the reduced cost of an arc (i, j) as c̄_ij = c_ij − π(i) + π(j). The conditions (5.5)-(5.7) imply that a pair x, π of flows and node potentials is optimal if it satisfies the following conditions:

C5.1 (Primal feasibility) x is feasible.
C5.2 If c̄_ij > 0, then x_ij = 0.
C5.3 If c̄_ij = 0, then 0 ≤ x_ij ≤ u_ij.
C5.4 If c̄_ij < 0, then x_ij = u_ij.

Note that the condition C5.3 follows from the conditions C5.2 and C5.4; we retain it for the sake of completeness. These conditions, when stated in terms of the residual network, simplify to:

C5.5 (Primal feasibility) x is feasible.
C5.6 (Dual feasibility) c̄_ij ≥ 0 for each arc (i, j) in the residual network G(x).

Observe that the condition C5.6 subsumes C5.2, C5.3 and C5.4. To see this result, suppose that c̄_ij > 0 and x_ij > 0 for some arc (i, j). Then the residual network would contain the arc (j, i) with c̄_ji = −c̄_ij < 0, contradicting C5.6; hence x_ij = 0, which is C5.2. A similar contradiction arises if c̄_ij < 0 and x_ij < u_ij, which establishes C5.4.

It is easy to establish the equivalence between these optimality conditions and the condition stated in Theorem 5.1. Consider any pair x, π of flows and node potentials satisfying C5.5 and C5.6. Let W be any directed cycle in the residual network. Condition C5.6 implies that Σ_{(i,j) ∈ W} c̄_ij ≥ 0. Further,

Σ_{(i,j) ∈ W} c̄_ij = Σ_{(i,j) ∈ W} c_ij + Σ_{(i,j) ∈ W} (−π(i) + π(j)) = Σ_{(i,j) ∈ W} c_ij,

since the potential terms cancel around a cycle. Hence the residual network contains no negative cost cycle. To see the converse, suppose that x is feasible and G(x) does not contain a negative cycle. Then, in the residual network, the shortest distances from node 1 with respect to the arc lengths c_ij are well defined. Let d(i) denote the shortest distance from node 1 to node i. The shortest path optimality condition C3.2 implies that d(j) ≤ d(i) + c_ij for each arc (i, j) in the residual network. Let π = −d. Then c̄_ij = c_ij − π(i) + π(j) = c_ij + d(i) − d(j) ≥ 0 for each arc (i, j) in G(x), and so the pair x, π satisfies C5.5 and C5.6.
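Conditions C5.5 and C5.6 give a simple optimality check: given a feasible flow x and potentials pi, scan the residual network (using the residual_network sketch above) for an arc of negative reduced cost:

def is_dual_feasible(cap, cost, x, pi):
    # Dual feasibility check C5.6: every arc of the residual network must
    # have nonnegative reduced cost c_ij - pi(i) + pi(j).  Together with
    # primal feasibility of x, this certifies optimality (Theorem 5.1).
    return all(c - pi[i] + pi[j] >= 0
               for (i, j, r, c) in residual_network(cap, cost, x))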

5.2 Relationship to Shortest Path and Maximum Flow Problems

The minimum cost flow problem generalizes both the shortest path and maximum flow problems. The shortest path problem from node s to all other nodes can be formulated as a minimum cost flow problem by setting b(s) = (n − 1), b(i) = −1 for all i ≠ s, and u_ij = ∞ for each (i, j) ∈ A (in fact, setting u_ij equal to any integer greater than (n − 1) will suffice if we wish to maintain finite capacities). Similarly, the maximum flow problem from node s to node t can be transformed to the minimum cost flow problem by introducing an additional arc (t, s) with c_ts = −1 and u_ts = ∞ (in fact, u_ts = m · max {u_ij : (i, j) ∈ A} would suffice), and setting c_ij = 0 for each arc (i, j) ∈ A. Thus, algorithms for the minimum cost flow problem solve both the shortest path and maximum flow problems as special cases.

Conversely, algorithms for the shortest path and maximum flow problems are of great use in solving the minimum cost flow problem. Indeed, many of the algorithms for the minimum cost flow problem use shortest path and/or maximum flow algorithms as subroutines, either explicitly or implicitly. Consequently, improved algorithms for these two problems have led to improved algorithms for the minimum cost flow problem. This relationship will be more transparent when we discuss algorithms for the minimum cost flow problem.

We have already shown in Section 5.1 how to obtain an optimum dual solution from an optimum primal solution by solving a single shortest path problem. We now show how to obtain an optimum primal solution from an optimum dual solution by solving a single maximum flow problem. Suppose that π is an optimum dual solution and c̄ is the vector of reduced costs. We define the cost-residual network G* = (N, A*) as follows. The nodes in G* have the same supply/demand as the nodes in G. Any arc (i, j) ∈ A* has an upper bound u*_ij as well as a lower bound l*_ij, defined as follows:

(i) For each (i, j) in A with c̄_ij > 0, A* contains an arc (i, j) with u*_ij = l*_ij = 0.

(ii) For each (i, j) in A with c̄_ij < 0, A* contains an arc (i, j) with u*_ij = l*_ij = u_ij.

(iii) For each (i, j) in A with c̄_ij = 0, A* contains an arc (i, j) with u*_ij = u_ij and l*_ij = 0.

The lower and upper bounds on arcs in the cost-residual network G* are defined so that any flow in G* satisfies the optimality conditions C5.2-C5.4. If c̄_ij > 0 for some (i, j) ∈ A, then condition C5.2 dictates that x_ij = 0 in the optimum flow. Similarly, if c̄_ij < 0 for some (i, j) ∈ A, then condition C5.4 implies that the flow on arc (i, j) must be at the arc's upper bound in the optimum flow. If c̄_ij = 0, then any flow value between 0 and u_ij satisfies condition C5.3.

Now the problem is reduced to finding a feasible flow in the cost-residual network that satisfies the lower and upper bound restrictions of arcs and, at the same time, meets the supply/demand constraints of the nodes. We first eliminate the lower bounds of arcs, as described in Section 2.4, and then transform this problem to a maximum flow problem, as described in assumption A5.1. Let x* denote the maximum flow in the transformed network. Then x* + l* is an optimum solution of the minimum cost flow problem in G.

5.3 Negative Cycle Algorithm

Operations researchers, computer scientists, electrical engineers and many others have extensively studied the minimum cost flow problem and have proposed a number of different algorithms to solve it. Notable examples are the negative cycle, successive shortest path, primal-dual, out-of-kilter, primal simplex and scaling-based algorithms. In this and the following sections, we discuss most of these important algorithms for the minimum cost flow problem and point out relationships between them.

We first consider the negative cycle algorithm. The negative cycle algorithm maintains a primal feasible solution x and strives to attain dual feasibility. It does so by identifying negative cost directed cycles in the residual network G(x) and augmenting flows in these cycles. The algorithm terminates when the residual network contains no negative cost cycle; Theorem 5.1 then implies that it has found a minimum cost flow.

algorithm NEGATIVE CYCLE;
begin
  establish a feasible flow x in the network;
  while G(x) contains a negative cycle do
  begin
    use some algorithm to identify a negative cycle W;
    δ := min {r_ij : (i, j) ∈ W};
    augment δ units of flow along the cycle W and update G(x);
  end;
end;

A feasible flow in the network can be found by solving a maximum flow problem, as explained just after assumption A5.1. One algorithm for identifying a negative cost cycle is the label correcting algorithm for the shortest path problem, described in Section 3.4, which requires O(nm) time. Every iteration reduces the flow cost by at least one unit. Since mCU is an upper bound on the initial flow cost and zero is a lower bound on the optimum flow cost, the algorithm terminates after at most O(mCU) iterations and requires O(nm^2 CU) time in total.
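The following Python sketch implements this scheme, using the label correcting (Bellman-Ford) method to detect a negative cycle; residual_network is the routine sketched earlier, and for simplicity we assume that at most one of (i, j) and (j, i) appears in the original network, so that each residual arc can be attributed unambiguously.

def find_negative_cycle(n, arcs):
    # Bellman-Ford negative cycle detection; arcs is a list of
    # (i, j, r_ij, c_ij) with r_ij > 0.  Returns a list of arcs forming
    # a negative cost cycle, or None.
    dist = [0] * n                     # all-zero start: a virtual source
    pred = [None] * n
    for _ in range(n):
        marked = None
        for (i, j, r, c) in arcs:
            if dist[i] + c < dist[j]:
                dist[j] = dist[i] + c
                pred[j] = (i, j, r, c)
                marked = j
        if marked is None:
            return None                # converged: no negative cycle
    v = marked                         # an update in pass n: cycle exists;
    for _ in range(n):                 # walk back n steps to land inside it
        v = pred[v][0]
    cycle, u = [], v
    while True:
        arc = pred[u]
        cycle.append(arc)
        u = arc[0]
        if u == v:
            return cycle[::-1]

def cancel_negative_cycles(n, cap, cost, x):
    # Negative cycle algorithm: start from a feasible flow x and cancel
    # negative cycles in G(x) until none remains.
    while True:
        cycle = find_negative_cycle(n, residual_network(cap, cost, x))
        if cycle is None:
            return x
        delta = min(r for (_, _, r, _) in cycle)
        for (i, j, _, _) in cycle:
            if (i, j) in cap:          # forward residual arc: raise x_ij
                x[(i, j)] = x.get((i, j), 0) + delta
            else:                      # backward arc: lower x_ji
                x[(j, i)] -= delta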

This algorithm can be improved in the following three ways (which we briefly summarize).

(i) Identifying a negative cost cycle in much less than O(nm) time. The simplex algorithm (to be discussed later) nearly achieves this objective: it maintains a tree solution and node potentials that enable it to identify a negative cost cycle in O(m) effort. However, due to degeneracy, the simplex algorithm cannot necessarily send a positive amount of flow along this cycle.

(ii) Identifying a negative cost cycle with maximum improvement in the objective function value. Let x be some flow and x* an optimum flow. The improvement in the objective function due to an augmentation along a cycle W is −(Σ_{(i,j) ∈ W} c_ij)(min {r_ij : (i, j) ∈ W}). The augmenting cycle theorem (Theorem 2.3) implies that x* equals x plus the flow on at most m augmenting cycles with respect to x, and the improvements in cost due to flow augmentations on these cycles sum to cx − cx*. Consequently, at least one augmenting cycle with respect to x must decrease the objective function by at least (cx − cx*)/m. Hence, if the algorithm always augments flow along a cycle with maximum improvement, then Lemma 1.1 implies that the method would obtain an optimum flow within O(m log mCU) iterations. Finding a maximum improvement cycle is a difficult problem, but a modest variation of this approach yields a polynomial time algorithm for the minimum cost flow problem.

(iii) Identifying a negative cost cycle with minimum mean cost. We define the mean cost of a cycle as its cost divided by the number of arcs it contains. A minimum mean cycle is a cycle whose mean cost is as small as possible. It is possible to identify a minimum mean cycle in O(nm) or O(√n m log nC) time. Recently, researchers have shown that if the negative cycle algorithm always augments the flow along a minimum mean cycle, then from one iteration to the next the mean cost of the minimum mean cycle is nondecreasing; moreover, its absolute value decreases by a factor of 1 − (1/n) within m iterations. Since the mean cost of the minimum mean (negative) cycle is bounded from below by −C and bounded from above by −1/n, Lemma 1.1 implies that this version of the algorithm terminates in O(nm log nC) iterations.
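For completeness, the O(nm) bound for the minimum mean cycle can be achieved by Karp's dynamic programming characterization, sketched below under the assumption that every node is reachable from the chosen source node (as assumption A5.2 guarantees for residual networks):

def min_mean_cycle_value(n, arcs):
    # Karp's characterization: mu* = min over v of max over k of
    # (d_n(v) - d_k(v)) / (n - k), where d_k(v) is the minimum cost of a
    # walk with exactly k arcs from a fixed source to v.  arcs is a list
    # of (i, j, c_ij) triples.  Returns +inf if the graph is acyclic.
    INF = float('inf')
    d = [[INF] * n for _ in range(n + 1)]
    d[0][0] = 0                        # node 0 as the fixed source
    for k in range(1, n + 1):
        for (i, j, c) in arcs:
            if d[k - 1][i] < INF and d[k - 1][i] + c < d[k][j]:
                d[k][j] = d[k - 1][i] + c
    best = INF
    for v in range(n):
        if d[n][v] == INF:
            continue
        worst = max((d[n][v] - d[k][v]) / (n - k)
                    for k in range(n) if d[k][v] < INF)
        best = min(best, worst)
    return best                        # negative iff a negative cycle exists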

5.4 Successive Shortest Path Algorithm

The negative cycle algorithm maintains primal feasibility of the solution at every step and attempts to achieve dual feasibility. In contrast, the successive shortest path algorithm maintains dual feasibility of the solution at every step and strives to attain primal feasibility. It maintains a solution x that satisfies the nonnegativity and capacity constraints, but violates the supply/demand constraints of the nodes. At each step, the algorithm selects a node i with extra supply and a node j with unfulfilled demand and sends flow from i to j along a shortest path in the residual network. The algorithm terminates when the current solution satisfies all the supply/demand constraints.

A pseudoflow is a function x : A → R satisfying only the capacity and nonnegativity constraints. For any pseudoflow x, we define the imbalance of node i as

e(i) = b(i) + Σ_{j : (j,i) ∈ A} x_ji − Σ_{j : (i,j) ∈ A} x_ij, for all i ∈ N.

If e(i) > 0 for some node i, then e(i) is called the excess of node i; if e(i) < 0, then −e(i) is called the deficit. A node i with e(i) = 0 is called balanced. Let S and T denote the sets of excess and deficit nodes, respectively. The residual network corresponding to a pseudoflow is defined in the same way that we defined the residual network for a flow.

The successive shortest path algorithm successively augments flow along shortest paths computed with respect to the reduced costs c̄_ij. Observe that for any directed path P from a node k to a node l,

Σ_{(i,j) ∈ P} c̄_ij = Σ_{(i,j) ∈ P} c_ij − π(k) + π(l),

since the intermediate potentials cancel. Hence, the node potentials change all path lengths between a specific pair of nodes by a constant amount, and a shortest path with respect to c̄_ij is also a shortest path with respect to c_ij. The correctness of the successive shortest path algorithm rests on the following result.

Lemma 5.1. Suppose a pseudoflow x satisfies the dual feasibility condition C5.6 with respect to the node potentials π. Furthermore, suppose that x' is obtained from x by sending flow along a shortest path from a node k to a node l in G(x). Then x' also satisfies the dual feasibility conditions with respect to some node potentials.

Proof. Since x satisfies the dual feasibility conditions with respect to the node potentials π, we have c̄_ij ≥ 0 for all (i, j) in G(x). Let d(v) denote the shortest path distance from node k to any node v in G(x) with respect to the arc lengths c̄_ij. We claim that x also satisfies the dual feasibility conditions with respect to the potentials π' = π − d. The shortest path optimality conditions (i.e., C3.2) imply that d(j) ≤ d(i) + c̄_ij for all (i, j) in G(x). Substituting c̄_ij = c_ij − π(i) + π(j) in these conditions and using π'(i) = π(i) − d(i) yields c̄'_ij = c_ij − π'(i) + π'(j) ≥ 0 for all (i, j) in G(x). Hence, x satisfies C5.6 with respect to the node potentials π'. Next note that c̄'_ij = 0 for every arc (i, j) on the shortest path P from node k to node l, since d(j) = d(i) + c̄_ij for every arc (i, j) ∈ P.

We are now in a position to prove the lemma. Augmenting flow along any arc in P maintains the dual feasibility condition C5.6 for that arc. Augmenting flow on an arc (i, j) may add its reversal (j, i) to the residual network. But since c̄'_ij = 0 for each arc (i, j) ∈ P, we have c̄'_ji = −c̄'_ij = 0, and so the arc (j, i) also satisfies C5.6.

The node potentials play a very important role in this algorithm. Besides using them to prove the correctness of the algorithm, we use them to ensure that the arc lengths in the shortest path computations are nonnegative, thus enabling us to solve the shortest path subproblems efficiently.

The following is a formal statement of the successive shortest path algorithm.

algorithm SUCCESSIVE SHORTEST PATH;
begin
  x := 0; π := 0;
  compute the imbalances e(i) and initialize the sets S and T;
  while S ≠ ∅ do
  begin
    select a node k ∈ S and a node l ∈ T;
    determine the shortest path distances d(j) from node k to all other nodes in G(x) with respect to the reduced costs c̄_ij;
    let P denote a shortest path from k to l;
    update π := π − d;
    δ := min [e(k), −e(l), min {r_ij : (i, j) ∈ P}];
    augment δ units of flow along the path P;
    update x, S and T;
  end;
end;

To initialize the algorithm, we set x = 0, which is a feasible pseudoflow and satisfies C5.6 with respect to the node potentials π = 0 since, by assumption, all arc lengths are nonnegative. Also, if S ≠ ∅, then T ≠ ∅, because the sum of excesses always equals the sum of deficits. Further, the connectedness assumption implies that the residual network G(x) contains a directed path from node k to node l. Each iteration of the algorithm solves a shortest path problem with nonnegative arc lengths and reduces the supply of some node by at least one unit. Consequently, if U is an upper bound on the largest supply of any node, the algorithm terminates in at most nU iterations. Since the arc lengths c̄_ij are nonnegative, the shortest path problem at each iteration can be solved using Dijkstra's algorithm. So the overall complexity of this algorithm is O(nU · S(n, m, C)), where S(n, m, C) is the time taken by Dijkstra's algorithm. Currently, the best strongly polynomial-time bound to implement Dijkstra's algorithm is O(m + n log n), and the best (weakly) polynomial time bound is O(min {m log log C, m + n √(log C)}).
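A compact Python sketch of the method follows. It assumes nonnegative costs, Σ b(i) = 0, the connectedness assumption A5.2, and at most one of (i, j), (j, i) per node pair; recomputing the residual arcs on the fly keeps the code short at the expense of efficiency.

import heapq

def successive_shortest_paths(n, cap, cost, b):
    x = {a: 0 for a in cap}
    pi = [0] * n
    e = list(b)                                    # imbalances for x = 0
    def residual(i):                               # j -> (r_ij, c_ij) in G(x)
        out = {}
        for (u, v), up in cap.items():
            if u == i and x[(u, v)] < up:
                out[v] = (up - x[(u, v)], cost[(u, v)])
            elif v == i and x[(u, v)] > 0:
                out[u] = (x[(u, v)], -cost[(u, v)])
        return out
    while True:
        try:
            k = next(i for i in range(n) if e[i] > 0)   # an excess node
            l = next(i for i in range(n) if e[i] < 0)   # a deficit node
        except StopIteration:
            return x                               # all imbalances are zero
        d = [float('inf')] * n                     # Dijkstra, reduced costs
        d[k], pred, heap = 0, {}, [(0, k)]
        while heap:
            di, i = heapq.heappop(heap)
            if di > d[i]:
                continue
            for j, (r, c) in residual(i).items():
                red = c - pi[i] + pi[j]            # nonnegative by Lemma 5.1
                if d[i] + red < d[j]:
                    d[j] = d[i] + red
                    pred[j] = i
                    heapq.heappush(heap, (d[j], j))
        pi = [pi[i] - d[i] for i in range(n)]      # update potentials
        path, v = [], l
        while v != k:                              # trace the shortest path
            path.append((pred[v], v))
            v = pred[v]
        delta = min([e[k], -e[l]] +
                    [residual(i)[j][0] for (i, j) in path])
        for (i, j) in path:                        # augment along the path
            if (i, j) in cap:
                x[(i, j)] += delta
            else:
                x[(j, i)] -= delta
        e[k] -= delta
        e[l] += delta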

The successive shortest path algorithm is pseudopolynomial time, since it is polynomial in n, m and the largest supply U. The algorithm is, however, polynomial time for the assignment problem, a special case of the minimum cost flow problem for which U = 1. In Section 5.7, we will develop a polynomial time algorithm for the minimum cost flow problem by using the successive shortest path algorithm in conjunction with scaling.

5.5 Primal-Dual and Out-of-Kilter Algorithms

The primal-dual algorithm is very similar to the successive shortest path algorithm, except that instead of sending flow on only one path during an iteration, it might send flow along many paths. To explain the primal-dual algorithm, we transform the minimum cost flow problem into a single-source and single-sink problem (possibly by adding nodes and arcs as in assumption A5.1). At every iteration, the primal-dual algorithm solves a shortest path problem from the source to update the node potentials (i.e., each π(j) becomes π(j) − d(j)) and then solves a maximum flow problem to send the maximum possible flow from the source to the sink using only arcs with zero reduced cost. The algorithm guarantees that the excess of some node strictly decreases at each iteration, and also assures that the node potential of the sink strictly decreases. The latter observation follows from the fact that after we have solved the maximum flow problem, the residual network contains no path from the source to the sink consisting entirely of arcs with zero reduced costs; consequently, in the next iteration d(t) ≥ 1. These observations give a bound of min {nU, nC} on the number of iterations, since the magnitude of each node potential is bounded by nC. Thus, the algorithm has an overall complexity of O(min {nU · S(n, m, C), nC · M(n, m, U)}), where S(n, m, C) and M(n, m, U) respectively denote the solution times of shortest path and maximum flow algorithms. This bound is better than that of the successive shortest path algorithm, but, of course, the algorithm incurs the additional expense of solving a maximum flow problem at each iteration.

The successive shortest path and primal-dual algorithms maintain a solution that satisfies the dual feasibility conditions and the flow bound constraints, but that violates the mass balance constraints. These algorithms iteratively modify the flow and potentials so that the flow at each step comes closer to satisfying the mass balance constraints. However, we could just as well have violated other constraints at intermediate steps. The out-of-kilter algorithm satisfies only the mass balance constraints and may violate the dual feasibility conditions and the flow bound restrictions. The basic idea is to drive the flow on an arc (i, j) to u_ij if c̄_ij < 0, to drive the flow to zero if c̄_ij > 0, and to permit any flow between 0 and u_ij if c̄_ij = 0. The kilter number, represented by k_ij,

of an arc (i, j) is defined as the minimum increase or decrease in the flow necessary to satisfy the arc's flow bound constraint and dual feasibility condition. For example, for an arc (i, j) with c̄_ij > 0, k_ij = |x_ij|, and for an arc (i, j) with c̄_ij < 0, k_ij = |u_ij − x_ij|. An arc with k_ij = 0 is said to be in-kilter. At each iteration, the out-of-kilter algorithm reduces the kilter number of at least one arc; it terminates when all arcs are in-kilter. Suppose the kilter number of an arc (i, j) would decrease by increasing the flow on the arc. Then the algorithm would obtain a shortest path P from node j to node i in the residual network and augment at least one unit of flow in the cycle P ∪ {(i, j)}. The proof of the correctness of this algorithm is similar to, but more detailed than, that of the successive shortest path algorithm.

5.6 Network Simplex Algorithm

The network simplex algorithm for the minimum cost flow problem is a specialization of the bounded variable primal simplex algorithm for linear programming. The special structure of the minimum cost flow problem offers several benefits, particularly, streamlining of the simplex computations and eliminating the need to explicitly maintain the simplex tableau. The tree structure of the basis (see Section 2.3) permits the algorithm to achieve these efficiencies. The advances made in the last two decades for maintaining and updating the tree structure efficiently have substantially improved the speed of the algorithm. Through extensive empirical testing, researchers have also improved the performance of the simplex algorithm by developing various heuristic rules for identifying entering variables. Though no version of the primal network simplex algorithm is known to run in polynomial time, its best implementations are empirically comparable to or better than other minimum cost flow algorithms.

In this section, we describe the network simplex algorithm in detail. We first define the concept of a basis structure and describe a data structure to store and manipulate the basis, which is a spanning tree. We then show how to compute arc flows and node potentials for any basis structure. We next discuss how to perform various simplex operations — such as the selection of entering arcs, leaving arcs and pivots — using the tree data structure. Finally, we show how to guarantee the finiteness of the network simplex algorithm.

The network simplex algorithm maintains a basic feasible solution at each stage. A basic solution of the minimum cost flow problem is defined by a triple (B, L, U); B, L and U partition the arc set A. The set B denotes the set of basic arcs, i.e., the arcs of a spanning tree, and L and U respectively denote the sets of nonbasic arcs at their lower and upper bounds. We refer to the triple (B, L, U) as a basis structure. A basis structure (B, L, U) is called feasible if, by setting x_ij = 0 for each (i, j) ∈ L and setting x_ij = u_ij for each (i, j) ∈ U, the problem has a feasible solution satisfying (5.1b) and (5.1c). A feasible basis structure (B, L, U) is called an optimum basis structure if it is possible to obtain a set of node potentials π so that the reduced costs defined by c̄_ij = c_ij − π(i) + π(j) satisfy the following optimality conditions:

c̄_ij = 0, for each (i, j) ∈ B,    (5.9)

c̄_ij ≥ 0, for each (i, j) ∈ L,    (5.10)

c̄_ij ≤ 0, for each (i, j) ∈ U.    (5.11)

These optimality conditions have a nice economic interpretation. We shall see a little later that if π(1) = 0, then the equations (5.9) imply that −π(j) denotes the length of the tree path in B from node 1 to node j. Then c̄_ij = c_ij − π(i) + π(j) for a nonbasic arc (i, j) in L denotes the change in the cost of flow achieved by sending one unit of flow through the tree path from node 1 to node i, through the arc (i, j), and then returning the flow along the tree path from node j to node 1. The condition (5.10) implies that this circulation of flow is not profitable for any nonbasic arc in L. The condition (5.11) has a similar interpretation.

The network simplex algorithm maintains a feasible basis structure at each iteration and successively improves the basis structure until it becomes an optimum basis structure. The following algorithmic description specifies the essential steps of the procedure.

algorithm NETWORK SIMPLEX;
begin
  determine an initial basic feasible flow x and the corresponding basis structure (B, L, U);
  compute node potentials for this basis structure;
  while some arc violates the optimality conditions do
  begin
    select an entering arc (k, l) violating the optimality conditions;
    add arc (k, l) to the spanning tree corresponding to the basis, forming a cycle, and augment the maximum possible flow in this cycle;
    determine the leaving arc (p, q);
    perform a basis exchange and update the node potentials;
  end;
end;

In the following discussion, we describe the various steps performed by the network simplex algorithm in greater detail.

Obtaining an Initial Basis Structure

Our connectedness assumption A5.2 provides one way of obtaining an initial basic feasible solution. We have assumed that for every node j ∈ N − {1}, the network contains arcs (1, j) and (j, 1) with sufficiently large costs and capacities. The initial basis B includes the arc (1, j) with flow −b(j) if b(j) < 0, and the arc (j, 1) with flow b(j) if b(j) ≥ 0. The set L consists of the remaining arcs, and the set U is empty. The node potentials for this basis are easily computed using (5.9), as we will see later.

Maintaining the Tree Structure

The specialized network simplex algorithm is possible because of the spanning tree property of the basis. The algorithm requires the tree to be represented so that the simplex algorithm can perform operations efficiently and update the representation quickly when the basis changes. We next describe one such tree representation.

We consider the tree as "hanging" from a specially designated node, called the root. We assume that node 1 is the root node. See Figure 5.1 for an example of the tree. We associate three indices with each node i in the tree: a predecessor index, pred(i); a depth index, depth(i); and a thread index, thread(i). Each node i has a unique path connecting it

to the root. The predecessor index stores the first node in that path (other than node i itself), and the depth index stores the number of arcs in the path. For the root node, these indices are zero. Figure 5.1 shows an example of these indices. Note that by iteratively using the predecessor indices, we can enumerate the path from any node to the root node. We say that pred(i) is the predecessor of node i, and i is a successor of node pred(i). The descendants of a node i consist of the node i itself, its successors, successors of its successors, and so on. For example, in Figure 5.1, the node set {5, 6, 7, 8, 9} contains the descendants of node 5. A node with no successors is called a leaf node; in Figure 5.1, nodes 4, 7, 8, and 9 are leaf nodes.

The thread indices define a traversal of the tree, a sequence of nodes that walks or threads its way through the nodes of the tree, starting at the root, visiting nodes in a "top to bottom" and "left to right" order, and finally returning to the root. The thread indices can be formed by performing a depth first search of the tree, as described in Section 1.5, and setting the thread of a node to be the node encountered immediately after the node itself in the depth first search. For our example, this sequence would read 1-2-5-6-8-9-7-3-4-1 (see the dotted lines in Figure 5.1). For each node i, thread(i) specifies the next node in the traversal visited after node i. This traversal satisfies the following two properties: (i) the predecessor of each node appears in the sequence before the node itself; and (ii) the descendants of any node are consecutive elements in the traversal.

The thread indices provide a particularly convenient means for visiting (or finding) all descendants of a node i: we simply follow the thread from node i, recording the nodes visited, until the depth of the visited node becomes no larger than the depth of node i. For example, starting at node 5, we visit nodes 6, 8, 9, and 7 in order, which are the descendants of node 5, and then visit node 3. Since node 3's depth equals that of node 5, we know that we have left the "descendant tree" lying below node 5. As we will see, finding the descendant tree of a node efficiently adds significantly to the efficiency of the simplex method.
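This descendant-finding rule translates directly into code (a sketch; depth and thread are arrays indexed by node, with the thread wrapping back to the root):

def descendants(i, thread, depth):
    # Collect the descendants of node i by following the thread until a
    # node whose depth is no greater than depth(i) is reached.
    desc = [i]
    j = thread[i]
    while depth[j] > depth[i]:
        desc.append(j)
        j = thread[j]
    return desc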

These conditions can alternatively be stated as 1 .112 can be set arbitrarily since one constraint in (5.1b) is redundant. We compute the remaining node potentials using the conditions that Cj: = for each arc (i. j) in B.

j) e B. U). A similar procedure will permit us to compute flows on basic arcs for a given start at the leaf basis structure (B.. i) A then 7t(j) : = 7t(i) + j : = thread (j). however. on arcs encountered along the way. j while ^ 1 do begin i : = pred(j). The thread compute node potentials 0(n) time using the following method. in the reverse order: indices. while node and move in toward the root using the predecessor computing flows this task. j. The following procedure accomplishes . j: = thread(l). if (j. (5. procedure begin 7t(l): COMPUTE POTENTIALS. L. the procedure can all comput in 7t(j) using (5. if (i.113 n(j) = Ji(i) - Cjj.12). The traversal assures that whenever this its fanning out procedure predecessor. We proceed. for every arc (i. end. j) 6 € A then . end. say node indices allow us to i. Cjj. = 0.:(]) : = 7t(i) - Cj. node it has already evaluated the potential of hence.12) The basic idea indices to is to start at node 1 and fan out along the tree arcs using the thread compute other node visits potentials.

A similar procedure permits us to compute the flows on the basic arcs for a given basis structure (B, L, U). We proceed, however, in the reverse order: start at the leaf nodes and move in toward the root using the predecessor indices, computing the flows on the arcs encountered along the way. The following procedure accomplishes this task.

procedure COMPUTE FLOWS;
begin
  e(i) := b(i) for all i ∈ N;
  let T be the basis tree;
  for each (i, j) ∈ U do
    set x_ij := u_ij, subtract u_ij from e(i) and add u_ij to e(j);
  while T ≠ {1} do
  begin
    select a leaf node j in the subtree T;
    i := pred(j);
    if (i, j) ∈ T then x_ij := −e(j)
    else x_ji := e(j);
    add e(j) to e(i);
    delete node j and the arc incident to it from T;
  end;
end;

One way of identifying leaf nodes in T is to select nodes in the reverse order of the thread indices. A simple procedure completes this task in O(n) time: push all the nodes onto a stack in order of their appearance on the thread, and then take them out from the top one at a time. Note that in the thread traversal, each node appears prior to its descendants; hence, the reverse thread traversal examines each node only after examining all of its descendants.

Now consider the steps of the method. The arcs in the set U must carry flow equal to their capacity; thus, we set x_ij = u_ij for these arcs. This assignment creates an additional demand of u_ij units at node i and makes the same amount available at node j, which explains the adjustments in the supply/demand of the nodes. The manner of updating e(j) implies that each e(j) represents the sum of the adjusted supply/demand of the nodes in the subtree hanging from node j. Since this subtree is connected to the rest of the tree only by the arc (i, j) (or (j, i)), this arc must carry −e(j) (or e(j)) units of flow to satisfy the adjusted supply/demand of the nodes in the subtree.

The procedure Compute Flows essentially solves the system of equations Bx = b, in which B represents the columns in the node-arc incidence matrix N corresponding to the spanning tree T. Since B is a lower triangular matrix (see Theorem 2.6 in Section 2.3), it is possible to solve these equations by forward substitution, which is precisely what the algorithm does. Similarly, the procedure Compute Potentials solves the system of equations πB = c by back substitution.
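A Python rendering of the flow computation, using the reverse thread order discussed above so that every node is processed after its descendants (names are illustrative; b is indexed 1..n with b[0] unused):

def compute_flows(n, pred, thread, tree_arcs, U_arcs, u, b):
    e = list(b)                              # adjusted imbalances e(i)
    x = {}
    for (i, j) in U_arcs:                    # arcs in U carry x_ij = u_ij
        x[(i, j)] = u[(i, j)]
        e[i] -= u[(i, j)]
        e[j] += u[(i, j)]
    order, j = [], thread[1]
    while j != 1:                            # thread order lists each node
        order.append(j)                      # before its descendants...
        j = thread[j]
    for j in reversed(order):                # ...so reversed order visits
        i = pred[j]                          # every node after its subtree
        if (i, j) in tree_arcs:
            x[(i, j)] = -e[j]                # downward pointing tree arc
        else:
            x[(j, i)] = e[j]                 # upward pointing tree arc
        e[i] += e[j]
    return x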

Entering Arc

Two types of arcs are eligible to enter the basis: any nonbasic arc at its lower bound with a negative reduced cost, or any nonbasic arc at its upper bound with a positive reduced cost. These arcs violate condition (5.10) or (5.11), respectively. The method used for selecting an entering arc among these eligible arcs has a major effect on the performance of the simplex algorithm. An implementation that selects an arc that violates the optimality condition the most, i.e., has the largest value of |c̄_ij| among such arcs, might require the fewest number of iterations in practice, but it must examine each arc at each iteration, which is very time-consuming. On the other hand, examining the arc list cyclically and selecting the first arc that violates the optimality condition would quickly find the entering arc, but might require a relatively large number of iterations due to the poor arc choice. One of the most successful implementations uses a candidate list approach that strikes an effective compromise between these two strategies. This approach also offers sufficient flexibility for fine tuning to special problem classes.

The algorithm maintains a candidate list of arcs violating the optimality conditions, selecting arcs in a two-phase procedure consisting of major iterations and minor iterations. In a major iteration, we construct the candidate list. We examine the arcs emanating from the nodes, one node at a time, adding to the candidate list the arcs emanating from node i (if any) that violate the optimality condition. We repeat this selection process for nodes i+1, i+2, ..., until either we have examined all nodes or the list has reached its maximum allowable size. The next major iteration begins with the node where the previous major iteration ended; in other words, the algorithm examines the nodes cyclically as it adds arcs emanating from them to the candidate list.

Once the algorithm has formed the candidate list in a major iteration, it performs minor iterations, scanning all candidate arcs and choosing a nonbasic arc from this list that violates the optimality condition the most to enter the basis. As we scan the arcs, we update the candidate list by removing those arcs that no longer violate the optimality conditions. Once the list becomes empty, or we have reached a specified limit on the number of minor iterations to be performed at each major iteration, we rebuild the list with another major iteration.

Leaving Arc

Suppose we select the arc (k, l) as the entering arc. The addition of this arc to the basis B forms exactly one (undirected) cycle W, which is sometimes referred to as the pivot cycle. We define the orientation of W as the same as that of (k, l) if (k, l) ∈ L, and opposite to the orientation of (k, l) if (k, l) ∈ U. Let W̄ and W̲ respectively denote the sets of arcs in W along and opposite to the cycle's orientation. Sending additional flow around the pivot cycle W in the direction of its orientation strictly decreases the cost of the current solution. We change the flow as much as possible until one of the arcs in the cycle W reaches its lower or upper bound. The maximum flow change δ_ij on an arc (i, j) ∈ W that satisfies the flow bound constraints is

δ_ij = u_ij − x_ij if (i, j) ∈ W̄, and δ_ij = x_ij if (i, j) ∈ W̲.

We send δ = min {δ_ij : (i, j) ∈ W} units of flow around W along its orientation, and select an arc (p, q) with δ_pq = δ as the leaving arc.

The crucial operation in this step is to identify the cycle W. If P(i) denotes the unique path in the basis from any node i to the root node, then this cycle consists of the arcs {(k, l)} ∪ P(k) ∪ P(l) − (P(k) ∩ P(l)). In other words, W consists of the arc (k, l) and the disjoint portions of P(k) and P(l). Using predecessor indices alone permits us to identify the cycle W as follows. Start at node k and, using the predecessor indices, trace the path from this node to the root, labeling all the nodes in this path. Then repeat the same operation for node l until we encounter a node already labeled, say node w. Node w, which we might refer to as the apex, is the first common ancestor of nodes k and l. The cycle W contains the portions of the paths P(k) and P(l) up to node w, along with the arc (k, l). This method is efficient, but it can be improved: it has the drawback of backtracking along some arcs that are not in W, namely, those in the portion of the path P(k) lying between the apex w and the root. The simultaneous use of the depth and predecessor indices, as indicated in the following procedure, eliminates this extra work.

procedure IDENTIFY CYCLE;
begin
  i := k and j := l;
  while i ≠ j do
  begin
    if depth(i) > depth(j) then i := pred(i)
    else if depth(j) > depth(i) then j := pred(j)
    else i := pred(i) and j := pred(j);
  end;
  w := i;
end;

A simple modification of this procedure permits it to determine the flow δ that can be augmented along W as it determines the first common ancestor w of nodes k and l. Using the predecessor indices to again traverse the cycle W, the algorithm can then update the flows on the arcs. The entire flow change operation takes O(n) time in the worst case, but it typically examines only a small subset of the nodes.
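In code, the apex computation can also collect the two path segments that make up W (a sketch):

def identify_cycle(k, l, pred, depth):
    # Find the apex w of the pivot cycle and the two tree paths from
    # k and l up to w, using depth indices to avoid backtracking.
    i, j = k, l
    path_k, path_l = [], []
    while i != j:
        if depth[i] > depth[j]:
            path_k.append((pred[i], i)); i = pred[i]
        elif depth[j] > depth[i]:
            path_l.append((pred[j], j)); j = pred[j]
        else:
            path_k.append((pred[i], i)); i = pred[i]
            path_l.append((pred[j], j)); j = pred[j]
    return i, path_k, path_l           # apex w and the two path segments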

Basis Exchange

In the terminology of the simplex method, a basis exchange is a pivot operation. If δ = 0, then the pivot is said to be degenerate; otherwise it is nondegenerate. A basis is called degenerate if the flow on some basic arc equals its lower or upper bound, and nondegenerate otherwise. Observe that a degenerate pivot occurs only in a degenerate basis.

Each time the method exchanges an entering arc (k, l) for a leaving arc (p, q), it must update the basis structure. If the leaving arc is the same as the entering arc, which would happen when δ = δ_kl = u_kl, the basis does not change. In this instance, the arc (k, l) merely moves from the set L to the set U, or vice versa. If the leaving arc differs from the entering arc, then more extensive changes are needed. In this instance, the arc (p, q) becomes a nonbasic arc at its lower or upper bound, depending upon whether x_pq = 0 or x_pq = u_pq. Adding the arc (k, l) to the basis tree and deleting the arc (p, q) from it yields a new basis that is again a spanning tree. The node potentials also change and can be updated as follows. The deletion of the arc (p, q) from the previous basis partitions the set of nodes into two subtrees — one, T1, containing the root node, and the other, T2, not containing the root node. Note that the subtree T2 hangs from node p or node q. The arc (k, l) has one endpoint in T1 and the other in T2. As is easy to verify, the conditions π(1) = 0 and c_ij − π(i) + π(j) = 0 for all arcs in the new basis imply that the potentials of the nodes in the subtree T1 remain unchanged, and the potentials of the nodes in the subtree T2 change by a constant amount: if k ∈ T1 and l ∈ T2, then the node potentials in T2 change by −c̄_kl; if l ∈ T1 and k ∈ T2, they change by +c̄_kl. The following method, using the thread and depth indices, updates the node potentials quickly.

procedure UPDATE POTENTIALS;
begin
  if q ∈ T2 then y := q else y := p;
  if k ∈ T1 then change := −c̄_kl else change := c̄_kl;
  π(y) := π(y) + change;
  z := thread(y);
  while depth(z) > depth(y) do
  begin
    π(z) := π(z) + change;
    z := thread(z);
  end;
end;

(The loop visits exactly the descendants of node y — that is, the nodes of T2 — using the same depth-based rule described earlier for finding a descendant tree.)

The final step in the basis exchange is to update the various tree indices. This step is rather involved, and we refer the reader to the reference material cited in Section 6.4 for the details. We do note, however, that it is possible to update the tree indices in O(n) time.

Termination

The network simplex algorithm, as just described, moves from one basis structure to another until it obtains a basis structure that satisfies the optimality conditions (5.9)-(5.11). It is easy to show that the algorithm terminates in a finite number of steps if each pivot operation is nondegenerate. Recall that |c̄_kl| represents the net decrease in the cost per unit flow sent around the cycle W. During a nondegenerate pivot (in which δ > 0), the new basis structure has a cost that is δ|c̄_kl| units lower than the previous basis structure. Since there are a finite number of basis structures, and every basis structure has a unique associated cost, the network simplex algorithm terminates finitely, assuming nondegeneracy. Degenerate pivots, however, pose theoretical difficulties that we address next.

Strongly Feasible Bases

The network simplex algorithm does not necessarily terminate in a finite number of iterations unless we impose an additional restriction on the choice of entering and leaving arcs. Researchers have constructed very small network examples for which a poor choice leads to cycling, i.e., an infinite repetitive sequence of degenerate pivots. Degeneracy in network problems is not only a theoretical issue, but also a practical one: computational studies have shown that as many as 90% of the pivot operations in common networks can be degenerate. As we show next, by maintaining a special type of basis, called a strongly feasible basis, the simplex algorithm terminates finitely; moreover, it runs faster in practice as well.

Let (B, L, U) be a basis structure of the minimum cost flow problem with integral data. As earlier, we conceive of the basis tree as a tree hanging from the root node. The tree arcs either are upward pointing (towards the root) or are downward pointing (away from the root). We say that a basis structure (B, L, U) is strongly feasible if we can send a positive amount of flow from any node in the tree to the root along arcs in the tree without violating any of the flow bounds. See Figure 5.2 for an example of a strongly feasible basis. Observe that this definition implies that no upward pointing arc can be at its upper bound and no downward pointing arc can be at its lower bound.

The perturbation technique is a well-known method for avoiding cycling in the simplex algorithm for linear programming. This technique slightly perturbs the right-hand-side vector so that every feasible basis is nondegenerate, and so that it is easy to convert an optimum solution of the perturbed problem into an optimum solution of the original problem. We show that a particular perturbation technique for the network simplex method is equivalent to the combinatorial rule known as the strongly feasible basis technique.

The minimum cost flow problem can be perturbed by changing the supply/demand vector b to b + ε. We say that ε = (ε_1, ε_2, ..., ε_n) is a feasible perturbation if it satisfies the following conditions:

(i) ε_i > 0 for all i = 2, ..., n;

(ii) Σ_{i = 2 to n} ε_i < 1; and

(iii) ε_1 = −Σ_{i = 2 to n} ε_i.

One possible choice for a feasible perturbation is ε_i = 1/n for i = 2, ..., n (and thus ε_1 = −(n − 1)/n). Another choice is ε_i = α^i for i = 2, ..., n, with α chosen as a very small positive number.

The perturbation changes the flow on the basic arcs. The procedure Compute Flows, given earlier in this section, implies that perturbing b by ε changes the flow on the basic arcs in the following manner:

1. If (i, j) is a downward pointing arc of the tree B and D(j) is the set of descendants of node j, then the perturbation decreases the flow in arc (i, j) by Σ_{k ∈ D(j)} ε_k. Since 0 < Σ_{k ∈ D(j)} ε_k < 1, the resulting flow is nonintegral and thus nonzero.

2. If (i, j) is an upward pointing arc of the tree B and D(i) is the set of descendants of node i, then the perturbation increases the flow in arc (i, j) by Σ_{k ∈ D(i)} ε_k. Since 0 < Σ_{k ∈ D(i)} ε_k < 1, the resulting flow is nonintegral and thus nonzero.

Theorem 5.2. For any basis structure (B, L, U) of the minimum cost flow problem, the following statements are equivalent:

(i) (B, L, U) is strongly feasible.

(ii) No upward pointing arc of the basis is at its upper bound, and no downward pointing arc of the basis is at its lower bound.

(iii) (B, L, U) is feasible if we replace b by b + ε, for any feasible perturbation ε.

(iv) (B, L, U) is feasible if we replace b by b + ε for the perturbation ε = (−(n−1)/n, 1/n, 1/n, ..., 1/n).

Proof. (i) ⟹ (ii). Suppose an upward pointing arc (i, j) is at its upper bound. Then node i cannot send any flow to the root, violating the definition of a strongly feasible basis. For the same reason, no downward pointing arc can be at its lower bound.

(ii) ⟹ (iii). As noted earlier, the perturbation increases the flow on an upward pointing arc by an amount strictly between 0 and 1. Since the flow on an upward pointing arc is integral and strictly less than its (integral) upper bound, the perturbed solution remains feasible. Similar reasoning shows that, after we have perturbed the problem, the downward pointing arcs also remain feasible.

(iii) ⟹ (iv). Follows directly, because ε = (−(n−1)/n, 1/n, 1/n, ..., 1/n) is a feasible perturbation.

(iv) ⟹ (i). Consider the feasible basis structure (B, L, U) of the perturbed problem. Each arc in the basis B has a positive nonintegral flow. Now consider the same basis tree for the original problem (i.e., replace b + ε by b). The flows on the downward pointing arcs increase, the flows on the upward pointing arcs decrease, and the resulting flows are integral. Consequently, x_ij > 0 for the downward pointing arcs, x_ij < u_ij for the upward pointing arcs, and (B, L, U) is strongly feasible for the original problem.

This theorem shows that maintaining a strongly feasible basis is equivalent to applying the ordinary simplex algorithm to the perturbed problem. This equivalence implies that both approaches obtain exactly the same sequence of basis structures if they use the same rule to select the entering arcs. To bound the number of pivots, consider the perturbed problem with the perturbation ε = (−(n−1)/n, 1/n, ..., 1/n). With this perturbation, the flow on every arc is a multiple of 1/n. Consequently, every pivot operation augments at least 1/n units of flow and therefore decreases the objective function value by at least 1/n units. Since mCU is an upper bound on the objective function value of the starting solution and zero is a lower bound on the minimum objective function value, the algorithm terminates in at most nmCU iterations. Even though this rule permits degenerate pivots, the algorithm is thus guaranteed to converge: any implementation of the simplex algorithm that maintains a strongly feasible basis performs at most nmCU pivots and hence runs in pseudopolynomial time.

Combinatorial Version of Perturbation

The network simplex algorithm starts with a strongly feasible basis; the method described earlier for constructing the initial basis always gives such a basis. We can maintain strong feasibility by perturbing b by a suitable perturbation ε. However, there is no need to actually perform the perturbation. Instead, we can maintain strong feasibility using a "combinatorial rule" that is equivalent to applying the ordinary simplex method after we have imposed the perturbation. Figure 5.2 will illustrate our discussion of this method. The algorithm selects the leaving arc in a degenerate pivot carefully, so that the next basis is also strongly feasible.

(iii) ε_1 = −(ε_2 + ε_3 + ... + ε_n).

One possible choice for a feasible perturbation is ε_i = 1/n for i = 2, ..., n (and thus ε_1 = −(n−1)/n). Another choice is ε_i = α^i for i = 2, ..., n, with α chosen as a very small positive number. The perturbation changes the flows on the basic arcs. The procedure Compute-Flows, described earlier in this section, implies that perturbing b by ε changes the flow on the basic arcs in the following manner:

1. If (i, j) is an upward pointing arc of tree B and D(i) is the set of descendants of node i, then the perturbation increases the flow in arc (i, j) by Σ_{k ∈ D(i)} ε_k. Since 0 < Σ_{k ∈ D(i)} ε_k < 1, the resulting flow is nonintegral and thus nonzero.

2. If (i, j) is a downward pointing arc of tree B and D(j) is the set of descendants of node j, then the perturbation decreases the flow in arc (i, j) by Σ_{k ∈ D(j)} ε_k. Since 0 < Σ_{k ∈ D(j)} ε_k < 1, the resulting flow is nonintegral and thus nonzero.

Theorem 5.2. For any basis structure (B, L, U) of the minimum cost flow problem, the following statements are equivalent:

(i) (B, L, U) is strongly feasible.

(ii) No upward pointing arc of the basis is at its upper bound and no downward pointing arc of the basis is at its lower bound.

(iii) (B, L, U) is feasible if we replace b by b+ε, for the perturbation ε = (−(n−1)/n, 1/n, 1/n, ..., 1/n).

(iv) (B, L, U) is feasible if we replace b by b+ε, for any feasible perturbation ε.

Proof. (i) ⇒ (ii). Suppose an upward pointing arc (i, j) is at its upper bound. Then node i cannot send any positive amount of flow to the root, violating the definition of a strongly feasible basis. For the same reason, no downward pointing arc can be at its lower bound.

(ii) ⇒ (iii). As noted earlier, the perturbation increases the flow on an upward pointing arc by an amount strictly between 0 and 1. Since the flow on an upward pointing arc is integral and strictly less than its (integral) upper bound, the perturbed solution remains feasible. Similar reasoning shows that after we have perturbed the problem, the downward pointing arcs also remain feasible.

(iii) ⇒ (iv). Consider the basis structure (B, L, U), feasible for the perturbed problem with the perturbation ε = (−(n−1)/n, 1/n, 1/n, ..., 1/n). With this perturbation, the flow on every arc is a multiple of 1/n, and each arc in the basis B has a positive nonintegral flow. Now consider the same basis tree for the original problem (i.e., replace b+ε by b): flows on the upward pointing arcs decrease, flows on the downward pointing arcs increase, and the resulting flows are integral. Consequently, x_ij < u_ij for the upward pointing arcs and x_ij > 0 for the downward pointing arcs, and hence (B, L, U) remains feasible for any feasible perturbation, since every feasible perturbation changes the flow on a basic arc by an amount strictly between 0 and 1.

(iv) ⇒ (i). Follows directly because ε = (−(n−1)/n, 1/n, 1/n, ..., 1/n) is a feasible perturbation; feasibility for this perturbation means that every node can send a positive amount of flow to the root along the tree arcs, i.e., (B, L, U) is strongly feasible. ∎

This theorem shows that maintaining a strongly feasible basis is equivalent to applying the ordinary simplex algorithm to the perturbed problem. With the perturbation ε = (−(n−1)/n, 1/n, ..., 1/n), the flow on every arc is a multiple of 1/n; consequently, every nondegenerate pivot operation augments at least 1/n units of flow and therefore decreases the objective function value by at least 1/n units. Since mCU is an upper bound on the objective function value of the starting solution and zero is a lower bound on the minimum objective function value, the algorithm will terminate in at most nmCU iterations.

Combinatorial Version of Perturbation

The network simplex algorithm starts with a strongly feasible basis; the method described earlier to construct the initial basis always gives such a basis. We could maintain strong feasibility by perturbing b by a suitable perturbation ε. However, there is no need to actually perform the perturbation. Instead, we can maintain strong feasibility using a "combinatorial rule" that is equivalent to applying the original simplex method after we have imposed the perturbation. This equivalence implies that both approaches obtain exactly the same sequence of basis structures if they use the same rule to select the entering arcs. As a corollary, any implementation of the simplex algorithm that maintains a strongly feasible basis performs at most nmCU pivots and, consequently, runs in pseudopolynomial time. Even though the combinatorial rule permits degenerate pivots, it is guaranteed to converge: the algorithm selects the leaving arc in a degenerate pivot carefully so that the next basis is also strongly feasible. Figure 5.2 will illustrate our discussion of this method.

Suppose that the entering arc (k, l) is at its lower bound, and let node w, the apex, be the common ancestor of nodes k and l in the basis tree. Let W be the cycle formed by adding arc (k, l) to the basis; we define the orientation of the cycle as the same as that of arc (k, l). After updating the flow, the algorithm identifies the blocking arcs, i.e., those arcs (i, j) in W that satisfy δ_ij = δ. If the blocking arc is unique, then it leaves the basis. If the cycle contains more than one blocking arc, then the next basis will be degenerate, i.e., some basic arcs will be at their lower or upper bounds. In this case, the algorithm selects the leaving arc in accordance with the following rule:

Combinatorial Pivot Rule. When introducing an arc into the basis for the network simplex method, select as the leaving arc the last blocking arc, say arc (p, q), encountered in traversing the pivot cycle W along its orientation starting at the apex w.

We next show that this rule guarantees that the next basis is strongly feasible. To do so, we show that with this choice every node in the cycle W can send positive flow to the root node in the next basis. Let W_1 be the segment of the cycle W between the apex w and arc (p, q) when we traverse the cycle along its orientation, and let W_2 = W − W_1 − {(p, q)}. Define the orientation of the segments W_1 and W_2 to be compatible with the orientation of W. See Figure 5.2 for an illustration of the segments W_1 and W_2 in our example. Since arc (p, q) is the last blocking arc encountered in traversing W, no arc in W_2 is blocking, and hence every node contained in the segment W_2 can send positive flow to the root along the orientation of W_2 via node w.

Now consider the nodes contained in the segment W_1. If the current pivot was a nondegenerate pivot, then the pivot augmented a positive amount of flow along the arcs in W_1; hence, every node in the segment W_1 can send flow back to the root opposite to the orientation of W_1 via node w. If the current pivot was a degenerate pivot, then W_1 must be contained in the segment of W between node w and node k, because by the property of strong feasibility every node on the path from node l to node w could send a positive amount of flow to the root before the pivot, and hence no arc on this path can be a blocking arc in a degenerate pivot. Notice also that since the previous basis was strongly feasible, every node in W_1 could send positive flow to the root before the pivot; since a degenerate pivot does not change any flow values, every node in W_1 must be able to send positive flow to the root after the pivot as well. This conclusion completes the proof that the next basis is strongly feasible.
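The leaving-arc rule is easy to state in code. The following Python fragment is a minimal sketch, with a cycle representation and names of our own choosing: given the arcs of the pivot cycle listed in the order they are met when traversing W along its orientation from the apex w, together with each arc's residual capacity in that direction, it computes δ and returns the last blocking arc.

    def leaving_arc(cycle_arcs):
        # cycle_arcs: list of (arc, residual) pairs in traversal order
        # along the cycle orientation, starting at the apex w.
        delta = min(res for _, res in cycle_arcs)    # max flow change around W
        last_blocking = None
        for arc, res in cycle_arcs:                  # keep the LAST arc whose
            if res == delta:                         # residual equals delta
                last_blocking = arc
        return delta, last_blocking

    # Example: a degenerate pivot (delta = 0); the rule picks (7, 5),
    # the last of the two blocking arcs met after the apex.
    cycle = [((2, 3), 0), ((3, 9), 4), ((9, 10), 6), ((10, 7), 2), ((7, 5), 0)]
    print(leaving_arc(cycle))    # -> (0, (7, 5))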

We now study the effect of the basis change on node potentials during a degenerate pivot. Since the entering arc (k, l) is at its lower bound, c̄_kl < 0. The leaving arc belongs to the path from node k to node w; hence node k lies in the subtree T_2, and the potentials of all nodes in T_2 change by the amount −c̄_kl > 0. Consequently, this degenerate pivot strictly increases the sum of all node potentials (which, by our prior assumptions, is integral). Since the sum of all node potentials is bounded from below, the number of successive degenerate pivots is finite.

So far we have assumed that the entering arc is at its lower bound. If the entering arc (k, l) is at its upper bound, then we define the orientation of the cycle W as opposite to the orientation of arc (k, l). The criterion for selecting the leaving arc remains unchanged: the leaving arc is the last blocking arc encountered in traversing W along its orientation starting at node w. In this case, node l is contained in the subtree T_2, and thus after the pivot the potentials of all nodes in T_2 again increase by the amount c̄_kl > 0; consequently, the pivot again increases the sum of the node potentials.

Complexity Results

The strongly feasible basis technique implies some nice theoretical results about the network simplex algorithm implemented using Dantzig's pivot rule, i.e., pivoting in the arc that most violates the optimality conditions (that is, the arc (k, l) with the largest value of |c̄_kl| among all arcs that violate the optimality conditions). This technique also yields polynomial time simplex algorithms for the shortest path and assignment problems.

We have already shown that any version of the network simplex algorithm that maintains a strongly feasible basis performs O(nmCU) pivots. Using Dantzig's pivot rule and a geometric improvement argument, we can reduce the number of pivots to O(nmU log H), with H defined as H = mCU.

As earlier, we consider the perturbed problem with the perturbation ε = (−(n−1)/n, 1/n, ..., 1/n). Let z^k denote the objective function value of the perturbed minimum cost flow problem at the k-th iteration of the simplex algorithm, let x denote the current flow, and let (B, L, U) denote the current basis structure. Let Δ > 0 denote the maximum violation of the optimality condition of any nonbasic arc. If the algorithm next pivots in a nonbasic arc corresponding to the maximum violation, then the objective function value decreases by at least Δ/n units. Hence,

    z^k − z^(k+1) ≥ Δ/n.    (5.13)

We now need an upper bound on the total possible improvement in the objective function after the k-th iteration.

[Figure 5.2 appears here. It shows the basis tree hanging from the apex w, with the flow and capacity of each arc represented as (x_ij, u_ij).]

Figure 5.2. A strongly feasible basis. The entering arc is (9, 10); the blocking arcs are (2, 3) and (7, 5); the segments W_1 and W_2 are as shown. This pivot is a degenerate pivot, and the leaving arc is (7, 5).

It is easy to show that the objective function Σ_{(i,j) ∈ A} c_ij x_ij differs from the objective function Σ_{(i,j) ∈ A} c̄_ij x_ij by the term Σ_{i ∈ N} π(i) b(i), which is a constant for fixed values of the node potentials. Hence, the total improvement with respect to the objective function Σ_{(i,j) ∈ A} c_ij x_ij is equal to the total improvement with respect to the objective function Σ_{(i,j) ∈ A} c̄_ij x_ij. Further, the total improvement with respect to the objective function Σ_{(i,j) ∈ A} c̄_ij x_ij is bounded by the total improvement in the following relaxed problem:

    minimize Σ_{(i,j) ∈ A} c̄_ij x_ij,    (5.14a)

    subject to

    0 ≤ x_ij ≤ u_ij, for all (i, j) ∈ A.    (5.14b)

For a given basis structure (B, L, U), we construct an optimum solution of (5.14) by setting x_ij = u_ij for all arcs (i, j) ∈ L with c̄_ij < 0, by setting x_ij = 0 for all arcs (i, j) ∈ U with c̄_ij > 0, and by leaving the flow on the basic arcs unchanged. This readjustment of flow decreases the objective function by at most mΔU. We have thus shown that

    z^k − z* ≤ mΔU.    (5.15)

Combining (5.13) and (5.15), we obtain

    z^k − z^(k+1) ≥ (z^k − z*)/(nmU).

By Lemma 1.1, if H = mCU, the network simplex algorithm terminates in O(nmU log H) iterations. We summarize our discussion as follows.

Theorem 5.3. The network simplex algorithm that maintains a strongly feasible basis and uses Dantzig's pivot rule performs O(nmU log H) pivots.

This result gives polynomial time bounds for the shortest path and assignment problems, since both can be formulated as minimum cost flow problems with U = n and U = 1, respectively. In fact, it is possible to modify the algorithm and use the previous arguments to show that the simplex algorithm solves these problems in O(n^2 log C) pivots and runs in O(nm log C) total time. These results can be found in the references cited in Section 6.4.

5.7 Right-Hand-Side Scaling Algorithm

Scaling techniques are among the most effective algorithmic strategies for designing polynomial time algorithms for the minimum cost flow problem. In this section, we describe an algorithm based on a right-hand-side scaling (RHS-scaling) technique. The next two sections present polynomial time algorithms based upon cost scaling and upon simultaneous right-hand-side and cost scaling.

The RHS-scaling algorithm is an improved version of the successive shortest path algorithm. The inherent drawback in the successive shortest path algorithm is that augmentations may carry relatively small amounts of flow, resulting in a fairly large number of augmentations in the worst case. The RHS-scaling algorithm guarantees that each augmentation carries sufficiently large flow and thereby reduces the number of augmentations substantially. We shall illustrate RHS-scaling on the uncapacitated minimum cost flow problem, i.e., a problem with u_ij = ∞ for each (i, j) ∈ A. This algorithm can be applied to the capacitated minimum cost flow problem after it has been converted into an uncapacitated problem (as described in Section 2.4).

The algorithm uses the pseudoflow x and the imbalances e(i) as defined in Section 5.4. It performs a number of scaling phases. Much as we did in the excess scaling algorithm for the maximum flow problem, we let Δ be either 2^⌈log U⌉ or the least power of 2 satisfying either (i) e(i) < 2Δ for all i, or (ii) e(i) > −2Δ for all i, but not necessarily both. Initially, Δ = 2^⌈log U⌉. This definition implies that the sum of the excesses (whose magnitude equals the sum of the deficits) is bounded by 2nΔ. Let S(Δ) = {i : e(i) ≥ Δ} and let T(Δ) = {j : e(j) ≤ −Δ}. Then, at the beginning of the Δ-scaling phase, either S(2Δ) = ∅ or T(2Δ) = ∅. In the Δ-scaling phase, we perform a number of augmentations, each from a node k ∈ S(Δ) to a node l ∈ T(Δ), and each of these augmentations carries Δ units of flow. When either S(Δ) or T(Δ) becomes empty, we replace Δ by Δ/2 and begin a new scaling phase; the definition of Δ implies that within n augmentations the algorithm will decrease Δ by a factor of at least 2. Hence, within O(log U) scaling phases,

Δ < 1. By the integrality of the data, all imbalances are then zero and the algorithm has found an optimum flow.

The driving force behind this scaling technique is an invariant property (which we will prove later) that each arc flow in the Δ-scaling phase is a multiple of Δ. This flow invariant property and the connectedness assumption (A5.2) ensure that we can always send Δ units of flow from a node in S(Δ) to a node in T(Δ). The following algorithmic description is a formal statement of the RHS-scaling algorithm.

algorithm RHS-SCALING;
begin
    x := 0, e := b;
    let π be the shortest path distances in G(0);
    Δ := 2^⌈log U⌉;
    while the network contains a node with nonzero imbalance do
    begin
        S(Δ) := {i ∈ N : e(i) ≥ Δ};
        T(Δ) := {i ∈ N : e(i) ≤ −Δ};
        while S(Δ) ≠ ∅ and T(Δ) ≠ ∅ do
        begin
            select a node k ∈ S(Δ) and a node l ∈ T(Δ);
            determine the shortest path distances d from node k to all other nodes in the residual network G(x) with respect to the reduced costs c̄_ij;
            let P denote the shortest path from node k to node l;
            update π := π − d;
            augment Δ units of flow along the path P;
            update x, S(Δ) and T(Δ);
        end;
        Δ := Δ/2;
    end;
end;
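As a concrete rendering of this loop, the following is a compact, runnable Python sketch for an uncapacitated network; the data layout and names are our own. For brevity it recomputes shortest paths with Bellman-Ford, which stays correct even if some node is temporarily unreachable; under the paper's connectedness assumption (A5.2) the reduced costs remain nonnegative, and the Dijkstra implementation that the stated complexity bound assumes could be used instead.

    import math

    def rhs_scaling(n, arcs, b):
        # Sketch of RHS-SCALING for an uncapacitated min cost flow problem.
        # n: number of nodes (1..n); arcs: dict (i, j) -> nonnegative cost;
        # b: dict node -> supply (+) or demand (-). Returns the flow x.
        x = {a: 0 for a in arcs}
        e = dict(b)
        pi = {v: 0 for v in range(1, n + 1)}
        U = max(abs(v) for v in b.values())
        if U == 0:
            return x
        delta = 1 << max(0, math.ceil(math.log2(U)))
        while any(e.values()):
            S = {v for v in e if e[v] >= delta}
            T = {v for v in e if e[v] <= -delta}
            while S and T:
                k, l = next(iter(S)), next(iter(T))
                # Residual arcs as (tail, head, reduced cost, key, direction).
                res = []
                for (i, j), c in arcs.items():
                    rc = c - pi[i] + pi[j]
                    res.append((i, j, rc, (i, j), 1))
                    if x[i, j] > 0:                 # backward residual arc
                        res.append((j, i, -rc, (i, j), -1))
                INF = float('inf')
                d = {v: INF for v in pi}
                pred = {v: None for v in pi}
                d[k] = 0
                for _ in range(n - 1):              # Bellman-Ford from k
                    for i, j, rc, key, sgn in res:
                        if d[i] < INF and d[i] + rc < d[j]:
                            d[j] = d[i] + rc
                            pred[j] = (i, key, sgn)
                for v in pi:                        # pi := pi - d
                    if d[v] < INF:
                        pi[v] -= d[v]
                v = l                               # augment delta along P
                while v != k:
                    i, key, sgn = pred[v]
                    x[key] += sgn * delta
                    v = i
                e[k] -= delta
                e[l] += delta
                S = {v for v in e if e[v] >= delta}
                T = {v for v in e if e[v] <= -delta}
            delta //= 2
        return x

    # Tiny illustration: supplies at nodes 1 and 2, demand at node 4.
    arcs = {(1, 3): 2, (2, 3): 1, (3, 4): 1, (1, 4): 5}
    print(rhs_scaling(4, arcs, {1: 2, 2: 1, 3: 0, 4: -3}))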

The RHS-scaling algorithm correctly solves the problem because, during the Δ-scaling phase, it is always able to send Δ units of flow on a shortest path from a node k ∈ S(Δ) to a node l ∈ T(Δ). This fact follows from the following result.

Lemma 5.2. The residual capacities of arcs in the residual network are always integer multiples of Δ.

Proof. We use induction on the number of augmentations and scaling phases. The inductive hypothesis is true initially, since the residual capacities are either 0 or ∞. Each augmentation changes the residual capacities by 0 or Δ units and, therefore, preserves the inductive hypothesis. A decrease in the scale factor by a factor of 2 also preserves the inductive hypothesis. This result implies the conclusion of the lemma. ∎

Let S(n, m, C) denote the time to solve a shortest path problem on a network with nonnegative arc lengths.

Theorem 5.4. The RHS-scaling algorithm correctly computes a minimum cost flow and performs O(n log U) augmentations; consequently, it solves the minimum cost flow problem in O(n log U S(n, m, C)) time.

Proof. The RHS-scaling algorithm is a special case of the successive shortest path algorithm and thus terminates with a minimum cost flow. We show that the algorithm performs at most n augmentations per scaling phase; since the algorithm requires 1+⌈log U⌉ scaling phases, this fact would imply the conclusion of the theorem. At the beginning of the Δ-scaling phase, either S(2Δ) = ∅ or T(2Δ) = ∅. We consider the case when S(2Δ) = ∅; a similar proof applies when T(2Δ) = ∅. Observe that Δ ≤ e(i) < 2Δ for each node i ∈ S(Δ). Each augmentation starts at a node in S(Δ), ends at a node with a deficit, and carries Δ units of flow; it therefore decreases |S(Δ)| by one and creates no new node with excess Δ or more. Consequently, each scaling phase can perform at most n augmentations. ∎

Applying the scaling algorithm directly to the capacitated minimum cost flow problem introduces some subtlety, because Lemma 5.2 does not apply for this situation. As we noted previously, one method of solving the capacitated minimum cost flow problem is to first transform it to an uncapacitated problem using the technique described in Section 2.4. We then apply the RHS-scaling algorithm on the transformed network. The transformed network contains n+m nodes, and each scaling phase performs at most n+m augmentations. The shortest path problem on the transformed problem can be solved (using some clever techniques) in S(n, m, C) time. Consequently, the RHS-scaling algorithm solves the capacitated minimum cost flow problem in O(m log U S(n, m, C)) time. A recently developed variation of the RHS-scaling algorithm solves the capacitated minimum cost flow problem in O(m log n (m + n log n)) time. This method is currently the best strongly polynomial-time algorithm for solving the minimum cost flow problem.

5.8 Cost Scaling Algorithm

We now describe a cost scaling algorithm for the minimum cost flow problem. This algorithm can be viewed as a generalization of the preflow-push algorithm for the maximum flow problem.

This algorithm relies on the concept of approximate optimality. A flow x is said to be ε-optimal for some ε > 0 if x, together with some node potentials π, satisfies the following conditions:

C5.7 (Primal feasibility) x is feasible.

C5.8 (ε-Dual feasibility) c̄_ij ≥ −ε for each arc (i, j) in the residual network G(x).

We refer to these conditions as the ε-optimality conditions. They are a relaxation of the usual optimality conditions and reduce to them when ε is 0: the ε-dual feasibility condition permits −ε ≤ c̄_ij < 0 for an arc (i, j) at its lower bound and ε ≥ c̄_ij > 0 for an arc (i, j) at its upper bound.

The following facts are useful for analysing the cost scaling algorithm.

Lemma 5.3. Any feasible flow is ε-optimal for ε ≥ C. Any ε-optimal feasible flow for ε < 1/n is an optimum flow.

Proof. Clearly, any feasible flow with zero node potentials satisfies C5.8 for ε ≥ C. Now consider an ε-optimal flow with ε < 1/n. The ε-dual feasibility conditions imply that for any directed cycle W in the residual network, Σ_{(i,j) ∈ W} c_ij = Σ_{(i,j) ∈ W} c̄_ij ≥ −nε > −1. Since arc costs are integral, this result implies that Σ_{(i,j) ∈ W} c_ij ≥ 0. Hence, the residual network contains no negative cost cycle, and from Theorem 5.1 the flow is optimum. ∎
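The ε-dual feasibility condition is mechanical to check. Below is a small Python sketch (our own helper with illustrative names, not part of the paper) that tests whether a flow and a set of node potentials satisfy C5.8 on every residual arc.

    def is_eps_optimal(arcs, u, x, pi, eps):
        # arcs: dict (i, j) -> cost c_ij; u: dict (i, j) -> capacity;
        # x: dict (i, j) -> feasible flow. Checks c_bar >= -eps on every
        # arc of the residual network G(x).
        for (i, j), c in arcs.items():
            c_bar = c - pi[i] + pi[j]
            if x[i, j] < u[i, j] and c_bar < -eps:    # forward residual arc
                return False
            if x[i, j] > 0 and -c_bar < -eps:         # backward residual arc
                return False
        return True

    # A 0-optimal (hence optimum) flow on a 3-node example:
    arcs = {(1, 2): 1, (2, 3): 1, (1, 3): 3}
    u = {(1, 2): 2, (2, 3): 2, (1, 3): 2}
    x = {(1, 2): 2, (2, 3): 2, (1, 3): 0}
    pi = {1: 0, 2: -1, 3: -2}
    print(is_eps_optimal(arcs, u, x, pi, eps=0))   # True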

The cost scaling algorithm treats ε as a parameter and iteratively obtains ε-optimal flows for successively smaller values of ε. Initially ε = C, and finally ε < 1/n. The algorithm performs cost scaling phases by repeatedly applying an Improve-Approximation procedure that transforms an ε-optimal flow into an ε/2-optimal flow. After 1+⌈log nC⌉ cost scaling phases, ε < 1/n, and by Lemma 5.3 the algorithm terminates with an optimum flow. More formally, we can state the algorithm as follows.

algorithm COST SCALING;
begin
    π := 0 and ε := C;
    let x be any feasible flow;
    while ε ≥ 1/n do
    begin
        IMPROVE-APPROXIMATION-I(ε, x, π);
        ε := ε/2;
    end;
    x is an optimum flow for the minimum cost flow problem;
end;

The Improve-Approximation procedure transforms an ε-optimal flow into an ε/2-optimal flow. It does so by (i) first converting the ε-optimal flow into a 0-optimal pseudoflow (a pseudoflow x is called ε-optimal if it satisfies the ε-dual feasibility conditions C5.8), and then (ii) gradually converting the pseudoflow into a flow while always maintaining the ε/2-dual feasibility conditions. We call a node i active if e(i) > 0, and call an arc (i, j) in the residual network admissible if −ε/2 ≤ c̄_ij < 0. The basic operations are selecting active nodes and pushing flows on admissible arcs; we shall see later that pushing flows on admissible arcs preserves the ε/2-dual feasibility conditions. As in our earlier discussion of preflow-push algorithms for the maximum flow problem, r_ij denotes the residual capacity of an arc (i, j) in G(x); if a push sends δ = r_ij units of flow, we refer to the push as saturating, and otherwise as nonsaturating. We also refer to the updating of the potential of a node as a relabel operation; the purpose of a relabel operation is to create new admissible arcs. The Improve-Approximation procedure uses the following subroutine.

procedure PUSH/RELABEL(i);
begin
    if G(x) contains an admissible arc (i, j)
    then push δ := min {e(i), r_ij} units of flow from node i to node j
    else π(i) := π(i) + ε/2 + min {c̄_ij : (i, j) ∈ A(i) and r_ij > 0};
end;

Moreover, we use the same data structure used in the maximum flow algorithms to identify admissible arcs. For each node i, we maintain a current arc (i, j), which is the current candidate for pushing flow out of node i; the current arc is found by sequentially scanning the arc list A(i). The following generic version of the Improve-Approximation procedure summarizes its essential operations.

procedure IMPROVE-APPROXIMATION-I(ε, x, π);
begin
    for every arc (i, j) ∈ A do
        if c̄_ij > 0 then x_ij := 0
        else if c̄_ij < 0 then x_ij := u_ij;
    compute node imbalances;
    while the network contains an active node do
    begin
        select an active node i;
        PUSH/RELABEL(i);
    end;
end;

At the beginning of the procedure, the algorithm adjusts the flows on arcs to obtain a pseudoflow that is, in fact, a 0-optimal pseudoflow. The procedure then preserves ε/2-optimality of the pseudoflow throughout and, at termination, yields an ε/2-optimal flow. The correctness of this procedure rests on the following result.

Lemma 5.4. The Improve-Approximation procedure always maintains ε/2-optimality of the pseudoflow and, at termination, yields an ε/2-optimal flow.

Proof. This proof is similar to that of Lemma 4.1. We use induction on the number of push/relabel steps to show that the algorithm preserves ε/2-optimality of the pseudoflow. The initial adjustment yields a 0-optimal, and hence ε/2-optimal, pseudoflow. Pushing flow on an arc (i, j) might add its reversal (j, i) to the residual network. But since −ε/2 ≤ c̄_ij < 0 (by the criteria of admissibility), c̄_ji > 0, and the condition C5.8 is satisfied for any value of ε ≥ 0. The algorithm relabels node i when c̄_ij ≥ 0 for every arc (i, j) emanating from node i in the residual network. By our rule for increasing potentials, after we increase π(i) by ε/2 + min {c̄_ij : (i, j) ∈ A(i) and r_ij > 0} units, the reduced cost of every arc (i, j) with r_ij > 0 still satisfies c̄_ij ≥ −ε/2. In addition, increasing π(i) maintains the condition c̄_ki ≥ −ε/2 for all arcs (k, i) in the residual network. Therefore, the procedure preserves ε/2-optimality of the pseudoflow throughout and, at termination, yields an ε/2-optimal flow. ∎
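As a concrete illustration, here is a minimal Python sketch of a single push/relabel step under the ε/2-rule above; the dictionary-based residual network and the function name are our own simplifications, not the paper's implementation.

    def push_relabel(i, e, pi, cost, res, adj, eps):
        # One PUSH/RELABEL step at active node i (e[i] > 0). We assume the
        # residual network is stored symmetrically: for every arc, cost and
        # res carry entries for both (i, j) and its reversal (j, i), with
        # cost[j, i] == -cost[i, j] and res holding residual capacities.
        for j in adj[i]:
            c_bar = cost[i, j] - pi[i] + pi[j]
            if res[i, j] > 0 and -eps / 2 <= c_bar < 0:   # admissible arc
                delta = min(e[i], res[i, j])               # push delta units
                res[i, j] -= delta
                res[j, i] += delta
                e[i] -= delta
                e[j] += delta
                return ('push', i, j, delta)
        # No admissible arc leaving i: relabel it.
        pi[i] += eps / 2 + min(cost[i, j] - pi[i] + pi[j]
                               for j in adj[i] if res[i, j] > 0)
        return ('relabel', i)

    # Tiny demo: node 1 holds 2 units of excess; arc (1, 2) is admissible.
    e = {1: 2, 2: 0}
    pi = {1: 0, 2: 0.5}
    cost = {(1, 2): -1, (2, 1): 1}
    res = {(1, 2): 3, (2, 1): 0}
    adj = {1: [2], 2: [1]}
    print(push_relabel(1, e, pi, cost, res, adj, eps=1.0))  # ('push', 1, 2, 2)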

We will next analyze the complexity of the Improve-Approximation procedure. We show that the complexity of the generic version is O(n^2 m), and then describe a specialized version running in time O(n^3). These time bounds are comparable to those of the preflow-push algorithms for the maximum flow problem.

Lemma 5.5. No node potential increases more than 3n times during an execution of the Improve-Approximation procedure.

Proof. Let x be the current ε/2-optimal pseudoflow and x' be the ε-optimal flow at the end of the previous cost scaling phase. Let π and π' be the node potentials corresponding to the pseudoflow x and the flow x', respectively. Using a variation of the flow decomposition properties discussed in Section 2.1, it is possible to show that for every node v with positive imbalance in x, there exists a node w with negative imbalance in x and a path P satisfying the properties that (i) P is an augmenting path with respect to x, and (ii) its reversal P̄ is an augmenting path with respect to x'. In terms of the residual networks, this fact implies that there exists a sequence of nodes v = v_0, v_1, ..., v_l = w with the property that P = v_0 - v_1 - ... - v_l is a path in G(x) and its reversal P̄ = v_l - ... - v_1 - v_0 is a path in G(x').

Applying the ε/2-optimality conditions to the arcs on the path P in G(x), we obtain Σ_{(i,j) ∈ P} c̄_ij ≥ −l(ε/2), and hence

    π(v) ≤ π(w) + l(ε/2) + Σ_{(i,j) ∈ P} c_ij.    (5.16)

Applying the ε-optimality conditions to the arcs on the path P̄ in G(x'), we obtain

    π'(w) ≤ π'(v) + lε − Σ_{(i,j) ∈ P} c_ij.    (5.17)

Combining (5.16) and (5.17) gives

    π(v) ≤ π'(v) + (π(w) − π'(w)) + (3/2)lε.    (5.18)

Now we use the facts that (i) π(w) = π'(w) (the potential of a node with a negative imbalance does not change, because the algorithm never selects it for push/relabel), (ii) l ≤ n, and (iii) each increase in potential increases π(v) by at least ε/2 units. The lemma is now immediate. ∎

Lemma 5.6. The Improve-Approximation procedure performs O(nm) saturating pushes.

Proof. This proof is similar to that of Lemma 4.5 and essentially amounts to showing that between two consecutive saturations of an arc (i, j), the potentials of both the nodes i and j increase at least once. Since any node potential increases O(n) times, the algorithm also saturates any arc O(n) times, resulting in O(nm) total saturating pushes. ∎

To bound the number of nonsaturating pushes, we need one more result. We define the admissible network as the network consisting solely of admissible arcs. The following result is crucial to analysing the complexity of the cost scaling algorithms.

Lemma 5.7. The admissible network is acyclic throughout the cost scaling algorithms.

Proof. We establish this result by an induction argument applied to the number of pushes and relabels. The result is true at the beginning of each cost scaling phase, because the pseudoflow is 0-optimal and the network contains no admissible arc. We always push flow on an arc (i, j) with c̄_ij < 0; hence, if the algorithm adds its reversal (j, i) to the residual network, then c̄_ji > 0. Thus pushes do not create new admissible arcs and preserve the inductive hypothesis. A relabel operation at node i may create new admissible arcs (i, j), but it also deletes all admissible arcs (k, i), because c̄_ki ≥ −ε/2 before the relabel operation and c̄_ki ≥ 0 after the relabel operation, since the relabel operation increases π(i) by at least ε/2 units. Therefore, the algorithm can create no directed cycles. ∎

Lemma 5.8. The Improve-Approximation procedure performs O(n^2 m) nonsaturating pushes.

Proof (Sketch). Let g(i) be the number of nodes that are reachable from node i in the admissible network, and consider the potential function F = Σ_{i active} g(i). The proof amounts to showing that a relabel operation or a saturating push can increase F by at most n units, and that each nonsaturating push decreases F by at least 1 unit. Since, by Lemmas 5.5 and 5.6, the algorithm performs at most 3n^2 relabel operations and O(nm) saturating pushes, these observations yield a bound of O(n^2 m) on the number of nonsaturating pushes. ∎

As in the maximum flow algorithm, the bottleneck operation in the Improve-Approximation procedure is the number of nonsaturating pushes. The algorithm takes O(nm) time to perform the saturating pushes, and the same time to scan arcs while identifying admissible arcs. Since the cost scaling algorithm calls Improve-Approximation 1+⌈log nC⌉ times, we obtain the following result.

Theorem 5.5. The generic cost scaling algorithm runs in O(n^2 m log nC) time.

The cost scaling algorithm illustrates an important connection between the maximum flow and the minimum cost flow problems: solving an Improve-Approximation problem is very similar to solving a maximum flow problem. Just as in the generic preflow-push algorithm for the maximum flow problem, the bottleneck operation is the number of nonsaturating pushes. Researchers have suggested improvements based on examining nodes in some specific order, or using clever data structures. We describe one such improvement, called the wave algorithm.

The wave algorithm is the same as the Improve-Approximation procedure, but it selects active nodes for the push/relabel step in a specific order. The algorithm uses the acyclicity of the admissible network. As is well known, the nodes of an acyclic network can be ordered so that for each arc (i, j) in the network, i < j; such an ordering of nodes is called a topological ordering, and it is possible to determine one in O(m) time. Observe that pushes do not change the admissible network, since they create no new admissible arcs; a relabel operation, however, may create new admissible arcs and consequently may affect the topological ordering of nodes.

The wave algorithm examines each node in the topological order and, if the node is active, performs a push/relabel step. When examined in this order, active nodes push flow to higher numbered nodes, which in turn push flow to even higher numbered nodes, and so on. If, within n consecutive node examinations, the algorithm performs no relabel operation, then all active nodes have discharged their excesses and the algorithm obtains a flow. Since the algorithm requires O(n^2) relabel operations, we immediately obtain a bound of O(n^3) on the number of node examinations. Each node examination entails at most one nonsaturating push. Consequently, the wave algorithm performs O(n^3) nonsaturating pushes per Improve-Approximation.
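The following Python fragment sketches the control flow of one such pass, assuming a step routine like the push_relabel sketched earlier that reports whether it relabeled the examined node; the list-reordering it performs on a relabel (moving the node to the front of the order) is exactly the device described next.

    def wave(order, e, push_relabel_step):
        # One wave pass. order: nodes in topological order of the admissible
        # network; e: imbalances; push_relabel_step(i) performs one
        # PUSH/RELABEL at i and returns 'push' or 'relabel'. Each active
        # node is discharged; when a node is relabeled it is moved to the
        # front (it then has no incoming admissible arc, so the order stays
        # topological) and the scan restarts from the front.
        pos = 0
        while pos < len(order):
            i = order[pos]
            while e[i] > 0:
                kind = push_relabel_step(i)
                if kind == 'relabel':
                    order.insert(0, order.pop(pos))
                    pos = -1          # restart the scan at the front
                    break
            pos += 1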

We now describe a procedure for obtaining a topological order of the nodes after each relabel operation. An initial topological ordering of the admissible network is determined using an O(m) algorithm. Suppose that while examining node i, the algorithm relabels it. Note that after the relabel operation at node i, the network contains no incoming admissible arc at node i (see the proof of Lemma 5.7). We then move node i from its present position in the topological order to the first position. Notice that this altered ordering is a topological ordering of the new admissible network. This result follows from the facts that (i) node i has no incoming admissible arc; (ii) for each outgoing admissible arc (i, j), node i precedes node j in the order; and (iii) the rest of the admissible network does not change, so the previous order is still valid. Thus the algorithm maintains an ordered set of nodes (possibly as a doubly linked list), examines nodes in this order, and whenever it relabels a node i, moves it to the first place in this order and again examines nodes in order starting at node i.

We have established the following result.

Theorem 5.6. The cost scaling approach using the wave algorithm as a subroutine solves the minimum cost flow problem in O(n^3 log nC) time.

5.9 Double Scaling Algorithm

The double scaling approach combines ideas from both the RHS-scaling and cost scaling algorithms and obtains an improvement not obtained by either algorithm alone. For the sake of simplicity, we shall describe the double scaling algorithm on the uncapacitated transportation network G = (N_1 ∪ N_2, A), with N_1 and N_2 as the sets of supply and demand nodes, respectively. A capacitated minimum cost flow problem can be solved by first transforming the problem into an uncapacitated transportation problem (as described in Section 2.4) and then applying the double scaling algorithm.

The double scaling algorithm is the same as the cost scaling algorithm discussed in the previous section, except that it uses a more efficient version of the Improve-Approximation procedure. The Improve-Approximation procedure in the previous section relied on a "pseudoflow-push" method. A natural alternative would be an augmenting path based method that sends flow from a node with excess to a node with deficit over an admissible path, i.e., a path in which each arc is admissible. A natural implementation of this approach would result in O(nm) augmentations, since each augmentation would saturate at least one arc and, by Lemma 5.6, the algorithm requires O(nm) arc saturations. Thus, this approach does not seem to improve the O(n^2 m) bound of the generic Improve-Approximation procedure. We can, however, use ideas from the RHS-scaling algorithm to reduce the number of augmentations to O(n log U) for an uncapacitated problem, by ensuring that each augmentation carries sufficiently large flow.

This approach gives us an algorithm that does cost scaling in the outer loop and, within each cost scaling phase, performs a number of RHS-scaling phases; we call this algorithm the double scaling algorithm. The advantage of the double scaling algorithm, contrasted with solving a shortest path problem in the RHS-scaling algorithm, is that it identifies an augmenting path in O(n) time on average over a sequence of n augmentations. In this respect, the double scaling algorithm appears to be similar to the shortest augmenting path algorithm for the maximum flow problem, which also requires O(n) time on average to find each augmenting path. The double scaling algorithm uses the following Improve-Approximation procedure.

procedure IMPROVE-APPROXIMATION-II(ε, x, π);
begin
    set x := 0 and compute node imbalances;
    π(j) := π(j) + ε, for all j ∈ N_2;
    Δ := 2^⌈log U⌉;
    while the network contains an active node do
    begin
        S(Δ) := {i ∈ N_1 ∪ N_2 : e(i) ≥ Δ};
        while S(Δ) ≠ ∅ do
        begin (RHS-scaling phase)
            select a node k in S(Δ) and delete it from S(Δ);
            determine an admissible path P from node k to some node l with e(l) < 0;
            augment Δ units of flow on P and update x;
        end;
        Δ := Δ/2;
    end;
end;

We shall describe a method to determine admissible paths after first commenting on the correctness of this procedure. Observe that c̄_ij ≥ −ε for all (i, j) ∈ A at the beginning of the procedure and, hence, by adding ε to π(j) for each j ∈ N_2, we obtain an ε/2-optimal (in fact, a 0-optimal) pseudoflow. The procedure always augments flow on admissible arcs and, from Lemma 5.4, this choice preserves the ε/2-optimality of the pseudoflow. Thus, at the termination of the procedure, we obtain an ε/2-optimal flow.

Further, as in the RHS-scaling algorithm, the procedure maintains the invariant property that all residual capacities are integer multiples of Δ, and thus each augmentation can carry Δ units of flow.

The algorithm identifies an admissible path by gradually building the path. We maintain a partial admissible path P using predecessor indices, i.e., if (u, v) ∈ P then pred(v) = u. At any point in the algorithm, we perform one of the following two steps, whichever is applicable, at the last node of P, say node i, terminating when the last node has a deficit:

advance(i). If the residual network contains an admissible arc (i, j), then add (i, j) to P. If e(j) < 0, then stop.

retreat(i). If the residual network does not contain an admissible arc (i, j), then update π(i) to π(i) + ε/2 + min {c̄_ij : (i, j) ∈ A(i) and r_ij > 0}; if P has at least one arc, then delete (pred(i), i) from P.

The retreat step relabels (increases the potential of) node i for the purpose of creating new admissible arcs emanating from this node; in the process, the arc (pred(i), i) becomes inadmissible, so we delete it from P. The proof of Lemma 5.4 implies that increasing the node potential in this way maintains ε/2-optimality of the pseudoflow.

We next consider the complexity of this implementation of the Improve-Approximation procedure. Each execution of the procedure performs 1+⌈log U⌉ RHS-scaling phases. At the beginning of the Δ-scaling phase, S(2Δ) = ∅, i.e., Δ ≤ e(i) < 2Δ for each node i ∈ S(Δ). During the scaling phase, the algorithm augments Δ units of flow from a node k in S(Δ) to a node l with e(l) < 0. This operation reduces the excess at node k to a value less than Δ and ensures that the excess at node l remains less than Δ. Consequently, each augmentation deletes a node from S(Δ), and after at most n augmentations the method begins a new scaling phase; the algorithm thus performs a total of O(n log U) augmentations.

We next count the number of advance steps. Each advance step adds an arc to the partial admissible path, and a retreat step deletes an arc from it. Thus, there are two types of advance steps: (i) those that add arcs to an admissible path on which the algorithm later performs an augmentation, and (ii) those that are later cancelled by a retreat step. Since the set of admissible arcs is acyclic (by Lemma 5.7), the partial admissible path never repeats a node; hence, after at most n advance steps of the first type, the algorithm will discover an admissible path and will perform an augmentation.
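The advance/retreat search is short to express in code. The sketch below is our own rendering (the dictionary-based residual network mirrors the earlier sketches, and termination presumes the conditions the procedure guarantees): it grows a partial admissible path from a node k with excess until it reaches a node with a deficit, relabeling and retreating whenever it gets stuck.

    def admissible_path(k, e, pi, cost, res, adj, eps):
        # Return a list of arcs forming an admissible path from node k to
        # some node with a deficit, relabeling nodes as needed (retreat).
        def admissible(i):
            for j in adj[i]:
                c_bar = cost[i, j] - pi[i] + pi[j]
                if res[i, j] > 0 and -eps / 2 <= c_bar < 0:
                    return j
            return None

        path = []                       # arcs of the partial path P
        i = k
        while e[i] >= 0:                # stop at a node with a deficit
            j = admissible(i)
            if j is not None:           # advance
                path.append((i, j))
                i = j
            else:                       # retreat: relabel i, drop last arc
                pi[i] += eps / 2 + min(cost[i, j] - pi[i] + pi[j]
                                       for j in adj[i] if res[i, j] > 0)
                if path:
                    i = path.pop()[0]   # back up to pred(i)
        return path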

The total number of advance steps is bounded as follows. Since the algorithm requires a total of O(n log U) augmentations and each augmentation uses at most n advance steps of the first type, the algorithm performs at most O(n^2 log U) advance steps of the first type. The advance steps of the second type number at most O(n^2), because each is cancelled by a retreat step, each retreat step increases a node potential, and by Lemma 5.5 the node potentials increase O(n^2) times per execution of the procedure. The total number of advance steps, therefore, is O(n^2 log U).

The amount of time needed to identify admissible arcs is O(Σ_{i ∈ N} |A(i)| n) = O(nm), since between two consecutive potential increases of a node i, the algorithm examines |A(i)| arcs for testing admissibility. We have therefore established the following result.

Theorem 5.7. The double scaling algorithm solves the uncapacitated transportation problem in O((nm + n^2 log U) log nC) time.

To solve the capacitated minimum cost flow problem, we first transform it into an uncapacitated transportation problem and then apply the double scaling algorithm. We leave it as an exercise for the reader to show how the transformation permits us to use the double scaling algorithm to solve the capacitated minimum cost flow problem in O(nm log U log nC) time. The references describe further modest improvements of the algorithm. For problems that satisfy the similarity assumption, a variant of this algorithm using more sophisticated data structures is currently the fastest polynomial-time algorithm for most classes of the minimum cost flow problem.

5.10 Sensitivity Analysis

The purpose of sensitivity analysis is to determine changes in the optimum solution of a minimum cost flow problem resulting from changes in the data (the supply/demand vector, or the capacity or cost of any arc). Traditionally, researchers and practitioners have conducted this sensitivity analysis using the primal simplex or dual simplex algorithms. There is, however, a conceptual drawback to this approach. The simplex based approach maintains a basis tree at every iteration and conducts sensitivity analysis by determining changes in the basis tree precipitated by changes in the data. The basis in the simplex algorithm is often degenerate, and consequently changes in the basis tree do not necessarily translate into changes in the solution. Therefore, the simplex based approach does not give information about the changes in the solution as the data changes; instead, it tells us about the changes in the basis tree.

We present another approach for performing sensitivity analysis, one that does not share this drawback. For simplicity, we limit our discussion to a unit change of only a particular type. In a sense, however, this discussion is quite general: it is possible to reduce more complex changes to a sequence of the simple changes we consider. We show that the sensitivity analysis for the minimum cost flow problem essentially reduces to solving shortest path or maximum flow problems.

Let x* denote an optimum solution of a minimum cost flow problem. Let π* be the corresponding node potentials and c̄_ij = c_ij − π*(i) + π*(j) denote the reduced costs. Further, let d(k, l) denote the shortest distance from node k to node l in the residual network with respect to the original arc lengths c_ij. Since, for any directed path P from node k to node l, Σ_{(i,j) ∈ P} c̄_ij = Σ_{(i,j) ∈ P} c_ij − π*(k) + π*(l), d(k, l) equals the shortest distance from node k to node l with respect to the arc lengths c̄_ij, plus (π*(k) − π*(l)). At optimality, the reduced costs c̄_ij of all arcs in the residual network are nonnegative; hence, we can compute d(k, l) for all pairs of nodes k and l by solving n single-source shortest path problems with nonnegative arc lengths.

Supply/Demand Sensitivity Analysis

We first study a change in the supply/demand vector. Suppose that the supply/demand of a node k becomes b(k) + 1 and the supply/demand of another node l becomes b(l) − 1. (Recall from Section 1.1 that feasibility of the minimum cost flow problem dictates that Σ_{i ∈ N} b(i) = 0; hence, we must change the supply/demand values of two nodes by equal magnitudes, and must increase one value and decrease the other.) Then x* is a pseudoflow for the modified problem; moreover, this vector satisfies the dual feasibility conditions C5.6. Augmenting one unit of flow from node k to node l along the shortest path in the residual network G(x*) converts this pseudoflow into a flow. This augmentation changes the objective function value by d(k, l) units. Lemma 5.1 implies that this flow is optimum for the modified minimum cost flow problem.
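The supply/demand analysis above is direct to compute. The sketch below, with illustrative names of our own (and Bellman-Ford used for brevity, although the nonnegative reduced costs would permit Dijkstra), builds the residual network of an optimum flow and returns d(k, l), the change in the optimum objective caused by increasing b(k) and decreasing b(l) by one unit.

    def sensitivity_d(n, arcs, u, x, k, l):
        # d(k, l): shortest distance from k to l in the residual network
        # G(x*) with respect to the original arc costs. arcs: dict
        # (i, j) -> cost; u: capacities; x: an optimum flow.
        res = []
        for (i, j), c in arcs.items():
            if x[i, j] < u[i, j]:
                res.append((i, j, c))      # forward residual arc
            if x[i, j] > 0:
                res.append((j, i, -c))     # backward residual arc
        INF = float('inf')
        d = {v: INF for v in range(1, n + 1)}
        d[k] = 0
        for _ in range(n - 1):             # Bellman-Ford relaxations
            for i, j, c in res:
                if d[i] + c < d[j]:
                    d[j] = d[i] + c
        return d[l]

    # Example: on the optimum flow from the earlier 3-node illustration,
    # shifting one extra unit of supply from node 1 to node 3 costs d(1, 3).
    arcs = {(1, 2): 1, (2, 3): 1, (1, 3): 3}
    u = {(1, 2): 2, (2, 3): 2, (1, 3): 2}
    x = {(1, 2): 2, (2, 3): 2, (1, 3): 0}
    print(sensitivity_d(3, arcs, u, x, 1, 3))   # 3, via the arc (1, 3)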

Arc Capacity Sensitivity Analysis

We next consider a change in an arc capacity. Suppose that the capacity of an arc (p, q) increases by one unit. The flow x* is feasible for the modified problem. In addition, if c̄_pq ≥ 0, it satisfies the optimality conditions C5.2-C5.4; hence, it is an optimum flow for the modified problem. If c̄_pq < 0, then condition C5.4 dictates that the flow on the arc must equal its capacity. We satisfy this requirement by increasing the flow on the arc (p, q) by one unit, which produces a pseudoflow with an excess of one unit at node q and a deficit of one unit at node p. We convert the pseudoflow into a flow by augmenting one unit of flow from node q to node p along the shortest path in the residual network, which changes the objective function value by an amount c_pq + d(q, p). This flow is optimum by our observations concerning supply/demand sensitivity analysis.

When the capacity of the arc (p, q) decreases by one unit and the flow on the arc is strictly less than its capacity, x* remains feasible, and hence optimum, for the modified problem. However, if the flow on the arc is at its capacity, we decrease the flow by one unit and augment one unit of flow from node p to node q along the shortest path in the residual network. This augmentation changes the objective function value by an amount −c_pq + d(p, q).

The preceding discussion shows how to determine changes in the optimum solution value due to unit changes of any two supply/demand values or a unit change of any arc capacity by solving n single-source shortest path problems. We can, however, obtain useful upper bounds on these changes by solving only two shortest path problems. This observation uses the fact that d(k, l) ≤ d(k, 1) + d(1, l) for all pairs of nodes k and l. Consequently, we need only determine the shortest path distances from node 1 to all other nodes, and from all other nodes to node 1, to compute upper bounds on all d(k, l). Recent empirical studies have suggested that these upper bounds are very close to the actual values; often the upper bounds and the actual values are equal, and usually they are within 5% of each other.

Cost Sensitivity Analysis

Finally, we discuss changes in arc costs, which we assume are integral. Suppose that the cost of an arc (p, q) increases by one unit. This change increases the reduced cost of arc (p, q) by one unit as well. If c̄_pq < 0 before the change, then after the change c̄_pq ≤ 0. Similarly, if c̄_pq > 0 before the change, then c̄_pq ≥ 0 after the change. In both these cases, we preserve the optimality conditions. However, if c̄_pq = 0 before the change and x*_pq > 0, then after the change c̄_pq = 1 > 0, and the solution violates the condition C5.2.

To satisfy the optimality condition of the arc, we must either reduce the flow on arc (p, q) to zero, or change the potentials so that the reduced cost of arc (p, q) becomes zero.

We first try to reroute the flow x*_pq from node p to node q without violating any of the optimality conditions. Note that we can change flows only on arcs with zero reduced costs, since otherwise we would generate a solution that violates C5.2 and C5.4. We do so by solving a maximum flow problem defined as follows: (i) the flow on the arc (p, q) is set to zero, thus creating an excess of x*_pq at node p and a deficit of x*_pq at node q; (ii) node p is designated as the source node and node q as the sink node; and (iii) we send a maximum of x*_pq units from the source to the sink. We permit the maximum flow algorithm, however, to change flows only on arcs with zero reduced costs. Let v° denote the flow sent from node p to node q, and let x° denote the resulting arc flow. If v° = x*_pq, then x° denotes a minimum cost flow of the modified problem; in this case, the optimal objective function values of the original and modified problems are the same.

On the other hand, if v° < x*_pq, then the maximum flow algorithm yields an s-t cut (X, N − X) with the properties that p ∈ X, q ∈ N − X, and every forward arc in the cutset with zero reduced cost is capacitated. We then decrease the node potential of every node in N − X by one unit. It is easy to verify by case analysis that this change in node potentials maintains the optimality conditions and, furthermore, decreases the reduced cost of arc (p, q) to zero. Consequently, we can set the flow on arc (p, q) equal to x*_pq − v° and obtain a feasible minimum cost flow. In this case, the objective function value of the modified problem is x*_pq − v° units more than that of the original problem.

5.11 Assignment Problem

The assignment problem is one of the best-known and most intensively studied special cases of the minimum cost network flow problem. As already indicated in Section 1.1, this problem is defined by a set N_1, say of persons, a set N_2, say of objects (with |N_1| = |N_2| = n), a collection of node pairs A ⊆ N_1 × N_2 representing possible person-to-object assignments, and a cost c_ij (possibly negative) associated with each element (i, j) in A. The objective is to assign each person to exactly one object, choosing the assignment with

minimum possible cost. The problem can be formulated as the following linear program:

    minimize Σ_{(i,j) ∈ A} c_ij x_ij    (5.18a)

    subject to

    Σ_{j : (i,j) ∈ A} x_ij = 1, for all i ∈ N_1,    (5.18b)

    Σ_{i : (i,j) ∈ A} x_ij = 1, for all j ∈ N_2,    (5.18c)

    x_ij ≥ 0, for all (i, j) ∈ A.    (5.18d)

The assignment problem is a minimum cost flow problem defined on a network G with node set N = N_1 ∪ N_2, arc set A, arc costs c_ij, and supply/demand specified as b(i) = 1 if i ∈ N_1 and b(i) = −1 if i ∈ N_2. The network G has 2n nodes and m = |A| arcs. The assignment problem is also known as the bipartite matching problem.

We use the following notation. A 0-1 solution x of (5.18) is an assignment. If x_ij = 1, then i is assigned to j and j is assigned to i. A 0-1 solution x satisfying Σ_{j : (i,j) ∈ A} x_ij ≤ 1 for all i ∈ N_1 and Σ_{i : (i,j) ∈ A} x_ij ≤ 1 for all j ∈ N_2 is called a partial assignment. Associated with any partial assignment x is an index set X = {(i, j) ∈ A : x_ij = 1}. A node not assigned to any other node is unassigned.

Researchers have suggested numerous algorithms for solving the assignment problem. Several of these algorithms apply, either explicitly or implicitly, the successive shortest path algorithm for the minimum cost flow problem. These algorithms typically select the initial node potentials with the following values: π(i) = 0 for all i ∈ N_1, and π(j) = min {c_ij : (i, j) ∈ A} for all j ∈ N_2. All reduced costs defined by these node potentials are nonnegative. The successive shortest path algorithm solves the assignment problem as a sequence of n shortest path problems with nonnegative arc lengths, and consequently runs in O(n S(n, m, C)) time. (Note that S(n, m, C) is the time required to solve a shortest path problem with nonnegative arc lengths.)

The relaxation approach is another popular approach, which is also closely related to the successive shortest path algorithm. The relaxation algorithm removes, or relaxes, the constraint (5.18c), thus allowing any object to be assigned to more than one person. This relaxed problem is easy to solve: assign each person i to an object j with the smallest c_ij value. As a result, some objects may be unassigned and other objects may be overassigned. The algorithm gradually builds a feasible assignment by identifying shortest paths from overassigned objects to unassigned objects and augmenting flows on these paths. The algorithm solves at most n shortest path problems. Because this approach always maintains the optimality conditions, it can solve the shortest path problems by implementations of Dijkstra's algorithm. Consequently, this algorithm also runs in O(n S(n, m, C)) time.

One well known solution procedure for the assignment problem, the Hungarian method, is essentially the primal-dual variant of the successive shortest path algorithm. The network simplex algorithm, with provisions for maintaining a strongly feasible basis, is another solution procedure for the assignment problem. This approach is fairly efficient in practice; moreover, some implementations of it provide polynomial time bounds. For problems that satisfy the similarity assumption, however, a cost scaling algorithm provides the best-known time bound for the assignment problem. Since these algorithms are special cases of other algorithms we have described earlier, we will not specify their details. Rather, in this section, we will discuss a different type of algorithm based upon the notion of an auction. Before doing so, we show another intimate connection between the assignment problem and the shortest path problem.

Assignments and Shortest Paths

We have seen that by solving a sequence of shortest path problems, we can solve any assignment problem. Interestingly, we can also use any algorithm for the assignment problem to solve the shortest path problem with arbitrary arc lengths. To do so, we apply the assignment algorithm twice. The first application determines if the network contains a negative cycle; if it doesn't, the second application identifies a shortest path. Both applications use the node splitting transformation described in Section 2.4.

The node splitting transformation replaces each node i by two nodes i and i', replaces each arc (i, j) by an arc (i, j'), and adds an (artificial) zero cost arc (i, i'). We first note that the transformed network always has a feasible solution with cost zero,

namely, the assignment containing all artificial arcs (i, i'). We next show that the optimal value of the assignment problem is negative if and only if the original network has a negative cost cycle.

First, suppose the original network contains a negative cost cycle, j_1 - j_2 - j_3 - ... - j_k - j_1. Then the set of arcs {(j_1, j_2'), (j_2, j_3'), ..., (j_k, j_1')} is a partial assignment with negative cost, and we can extend it to a complete assignment by adding the zero cost artificial arcs (i, i') for every node i not contained in the cycle. Therefore, the cost of the optimal assignment must be negative.

Conversely, suppose the cost of an optimal assignment is negative. Then this solution must contain at least one arc of the form (i, j') with i ≠ j; consequently, the assignment must contain a set of arcs of the form PA = {(j_1, j_2'), (j_2, j_3'), ..., (j_k, j_1')}. The cost of this "partial" assignment is nonpositive, because it can be no more expensive than the partial assignment {(j_1, j_1'), (j_2, j_2'), ..., (j_k, j_k')}, whose cost is zero. Since the optimal assignment cost is negative, some such partial assignment PA must have negative cost. But then, by the construction of the transformed network, the cycle j_1 - j_2 - j_3 - ... - j_k - j_1 is a negative cost cycle in the original network.

[Figure 5.3 appears here, in two panels.]

Figure 5.3. (a) The original network. (b) The transformed network.

If the original network contains no negative cost cycle, then we can obtain a shortest path between a specific pair of nodes, say from node 1 to node n, as follows. We consider the transformed network as described earlier and delete the nodes 1' and n and the arcs incident to these nodes. See Figure 5.3 for an example of this transformation. Now observe that each path from node 1 to node n in the original network has a corresponding assignment of the same cost in the transformed network, and the converse is also true. For example, the path 1-2-5 in Figure 5.3(a) has the corresponding assignment {(1, 2'), (2, 5'), (3, 3'), (4, 4')} in Figure 5.3(b), and conversely, the assignment {(1, 2'), (2, 4'), (4, 5'), (3, 3')} in Figure 5.3(b) has the corresponding path 1-2-4-5 in Figure 5.3(a). Consequently, an optimum assignment in the transformed network gives a shortest path in the original network.

The Auction Algorithm

We now describe an algorithm for the assignment problem known as the auction algorithm. We first describe a pseudopolynomial time version of the algorithm and then incorporate scaling to make the algorithm polynomial time. This scaling algorithm is an instance of the bit-scaling algorithm described in Section 1.6. To describe the auction algorithm, we consider the maximization version of the assignment problem, since this version appears more natural for interpreting the algorithm.

Suppose n persons want to buy n cars that are to be sold by auction. Each person i is interested in a subset A(i) of cars and has a nonnegative utility u_ij for car j, for each (i, j) ∈ A(i). The objective is to find an assignment with maximum total utility. Let C = max {|u_ij| : (i, j) ∈ A}. We can reduce this problem to (5.18) by setting c_ij = −u_ij for all (i, j) ∈ A. At each stage of the algorithm, there is an asking price for car j, represented by price(j). For a given set of asking prices, the marginal utility of person i for buying car j is u_ij − price(j). At each iteration, an unassigned person bids on a car that has the highest marginal utility. We assume that all utilities and prices are measured in dollars.

We associate with each person i a number value(i), which is an upper bound on that person's highest marginal utility, i.e., value(i) ≥ max {u_ij − price(j) : (i, j) ∈ A(i)}. We call a bid (i, j) admissible if value(i) = u_ij − price(j) and inadmissible otherwise. The algorithm requires every bid in the auction to be admissible. If person i is next in turn to bid and has no admissible bid, then value(i) is too high and we decrease this value to max {u_ij − price(j) : (i, j) ∈ A(i)}.

So the algorithm proceeds by persons bidding on cars. If a person i makes a bid on car j, then the price of car j goes up by $1; therefore, subsequent bids are of higher value. Suppose person i bids on car j. Then person i is assigned to car j. The person k who was the previous bidder for car j, if there was one, becomes unassigned, and subsequently person k must bid on another car. As the auction proceeds, the prices of cars increase and hence the marginal values to the persons decrease. The auction stops when each person is assigned a car. We now describe this bidding procedure algorithmically. The procedure can start with any partial assignment x° and any valid choices for value(i) and price(j); for example, we can let x° be the null assignment and set price(j) = 0 for each car j and value(i) = max {u_ij : (i, j) ∈ A(i)} for each person i. Although this initialization is sufficient for the pseudopolynomial time version, the polynomial time version requires a more clever initialization. At termination, the procedure yields an almost optimum assignment.

procedure BIDDING(u, x°, value, price);
begin
    let the initial assignment be x°;
    while some person is unassigned do
    begin
        select an unassigned person i;
        if some bid (i, j) is admissible then
        begin
            assign person i to car j;
            price(j) := price(j) + 1;
            if person k was already assigned to car j, then person k becomes unassigned;
        end
        else update value(i) := max {u_ij − price(j) : (i, j) ∈ A(i)};
    end;
    let x° be the current assignment;
end;

We now show that this procedure gives an assignment whose utility is within $n of the optimum utility.
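For concreteness, here is a small runnable Python sketch of the bidding loop, in our own rendering with the null-assignment initialization and integer utilities assumed. It follows the procedure above: an unassigned person either places an admissible bid, displacing the previous bidder and raising the price by $1, or lowers value(i).

    def bidding(utils, n_cars):
        # utils: dict (person, car) -> integer utility. Returns an
        # assignment person -> car whose total utility is within $n of
        # the optimum (per the analysis that follows).
        persons = {i for i, _ in utils}
        price = {j: 0 for j in range(1, n_cars + 1)}
        value = {i: max(utils[i, j] for (p, j) in utils if p == i)
                 for i in persons}
        assigned_to = {}                 # car -> person
        assignment = {}                  # person -> car
        unassigned = list(persons)
        while unassigned:
            i = unassigned.pop()
            cars = [j for (p, j) in utils if p == i]
            # admissible bid: value(i) == u_ij - price(j)
            bid = next((j for j in cars
                        if value[i] == utils[i, j] - price[j]), None)
            if bid is None:              # value(i) too high: lower it
                value[i] = max(utils[i, j] - price[j] for j in cars)
                unassigned.append(i)
            else:
                if bid in assigned_to:   # displace the previous bidder
                    k = assigned_to[bid]
                    del assignment[k]
                    unassigned.append(k)
                assigned_to[bid] = i
                assignment[i] = bid
                price[bid] += 1
        return assignment

    # Two persons, two cars; person 1 slightly prefers car 1.
    utils = {(1, 1): 3, (1, 2): 2, (2, 1): 3, (2, 2): 1}
    print(bidding(utils, 2))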

We now show that this procedure gives an assignment whose utility is within $n of the optimum utility. Let x° denote a partial assignment at some point during the execution of the auction algorithm and let x* denote an optimum assignment. Recall that value(i) is always an upper bound on the highest marginal utility of person i, i.e., value(i) ≥ u_ij - price(j) for all (i, j) ∈ A(i). Hence,

Σ{(i,j) ∈ x*} u_ij ≤ Σ{i ∈ N1} value(i) + Σ{j ∈ N2} price(j).    (5.19)

The partial assignment x° also satisfies the condition

value(i) = u_ij - price(j) + 1, for all (i, j) ∈ x°,    (5.20)

because at the time of bidding value(i) = u_ij - price(j), and immediately after the bid, price(j) goes up by $1. Let UB(x°) be defined as follows:

UB(x°) = Σ{(i,j) ∈ x°} u_ij + Σ{i ∈ N1°} value(i),    (5.21)

with N1° denoting the unassigned persons in N1. Using (5.20) in (5.21), and observing that unassigned cars in N2 have zero prices, we obtain

UB(x°) ≥ Σ{i ∈ N1} value(i) + Σ{j ∈ N2} price(j) - n.    (5.22)

Combining (5.19) and (5.22) yields

UB(x°) ≥ Σ{(i,j) ∈ x*} u_ij - n.    (5.23)

As we show in the discussion to follow, the algorithm can change the node values and prices at most a finite number of times. Since the algorithm either modifies a node value or a node price whenever x° is not a complete assignment, within a finite number of steps the method must terminate with a complete assignment x°. Then UB(x°) represents the utility of this assignment (since N1° is empty), and by (5.23) the utility of this assignment is at most $n less than the maximum utility.

It is easy to modify the method, however, to obtain an optimum assignment. Suppose we multiply all utilities u_ij by (n+1) before applying the Bidding procedure. Since all utilities are then multiples of (n+1), two assignments with distinct total utility differ by at least (n+1) units. The procedure yields an assignment that is within n units of the optimum value and, hence, must be optimal.

We next discuss the complexity of the Bidding procedure as applied to the assignment problem with all utilities first multiplied by (n+1). In this modified problem, the largest utility is C' = (n+1)C. We first show that the value of any person decreases O(nC') times.

Since all utilities are nonnegative, (5.23) implies that UB(x°) ≥ -n. Substituting this inequality in (5.21) yields

Σ{i ∈ N1°} value(i) ≥ -n(C' + 1).

Since value(i) decreases by at least one unit each time it changes, this inequality shows that the value of any person decreases at most O(nC') times. Since decreasing the value of a person i once takes O(|A(i)|) time, the total time needed to update the values of all persons is O(Σ{i ∈ N1} n|A(i)|C') = O(nmC').

We next examine the number of iterations performed by the procedure. Each iteration either decreases the value of a person or assigns the person to some car j. Since value(i) > u_ij - price(j) after person i has been assigned to car j, and since the price of car j increases by one unit with each bid, a person i can be assigned at most |A(i)| times between two consecutive decreases of value(i). This observation gives a bound of O(nmC') on the total number of times all bidders become assigned. As can be shown, the "current arc" data structure permits us to locate admissible bids in O(nmC') time. Since C' = (n+1)C, we have established the following result.

Theorem. The auction algorithm solves the assignment problem in O(n^2 mC) time.

The auction algorithm is potentially very slow because it can increase prices (and thus decrease values) in small increments of $1, and the final prices can be as large as n^2 C (the values as small as -n^2 C). Using a scaling technique in the auction algorithm ensures that the prices and values do not change too many times. As in the bit-scaling technique described in Section 1.6, we decompose the original problem into a sequence of O(log nC) assignment problems and solve each problem by the auction algorithm. We use the optimum prices and values of one problem as a starting solution for the subsequent problem, and we show that the prices and values change only O(n) times per scaling phase. Thus, we solve each problem in O(nm) time and solve the original problem in O(nm log nC) time.

The scaling version of the auction algorithm first multiplies all utilities by (n+1) and then solves a sequence of K = ⌈log (n+1)C⌉ assignment problems P1, P2, ..., PK.

The problem Pk is an assignment problem in which the utility of arc (i, j) is given by the k leading bits in the binary representation of u_ij, assuming (by adding leading zeros if necessary) that each u_ij is K bits long. In other words, the problem Pk has the arc utilities u^k_ij = ⌊u_ij / 2^(K-k)⌋. Note that in the problem P1 all utilities are 0 or 1, and subsequently u^(k+1)_ij = 2 u^k_ij + {0 or 1}, depending upon whether the newly added bit is 0 or 1. The scaling algorithm works as follows:

algorithm ASSIGNMENT;
begin
    multiply all u_ij by (n+1);
    K := ⌈log (n+1)C⌉;
    price(j) := 0 for each car j;
    value(i) := 0 for each person i;
    for k := 1 to K do
    begin
        let u^k_ij := ⌊u_ij / 2^(K-k)⌋ for each (i, j) ∈ A;
        price(j) := 2 price(j) for each car j;
        value(i) := 2 value(i) + 1 for each person i;
        BIDDING(u^k, x°, value, price);
    end;
end;

The assignment algorithm performs a number of cost scaling phases. In the k-th scaling phase, it obtains a near-optimum solution of the problem with the utilities u^k_ij. It is easy to verify that before the algorithm invokes the Bidding procedure, the prices and values satisfy value(i) ≥ max {u^k_ij - price(j) : (i, j) ∈ A(i)} for each person i; the Bidding procedure maintains these conditions throughout its execution. In the last scaling phase, the algorithm solves the assignment problem with the original utilities and obtains an optimum solution of the original problem. Observe that in each scaling phase the algorithm starts with a null assignment; the purpose of each scaling phase is to obtain good prices and values for the subsequent scaling phase.

We next discuss the complexity of this assignment algorithm. The crucial result is that the prices and values change only O(n) times during each execution of the Bidding procedure.
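The scaling driver is equally short in code. The sketch below is illustrative and in our own notation: it extracts the k leading bits of each utility with a shift and reuses a version of the bidding routine. Here bidding_from is a hypothetical variant of the earlier sketch that accepts starting values and prices rather than computing its own.

    def assignment(n, u):
        # u maps (person, car) pairs to nonnegative integer utilities
        u = {a: (n + 1) * c for a, c in u.items()}        # multiply utilities by (n+1)
        K = max(u.values()).bit_length()                  # K = ceil(log (n+1)C) bits
        price = [0] * n
        value = [0] * n
        x = None
        for k in range(1, K + 1):
            uk = {a: c >> (K - k) for a, c in u.items()}  # k leading bits of each u_ij
            price = [2 * p for p in price]                # price(j) := 2 price(j)
            value = [2 * v + 1 for v in value]            # value(i) := 2 value(i) + 1
            x = bidding_from(n, uk, value, price)         # solve P_k by bidding (hypothetical helper)
        return x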

We define the reduced utility of an arc (i, j) in the k-th scaling phase as

ū_ij = u^k_ij - price(j) - value(i).

In this expression, price(j) and value(i) have the values computed just before calling the Bidding procedure. For any assignment x, we have

Σ{(i,j) ∈ x} ū_ij = Σ{(i,j) ∈ x} u^k_ij - Σ{j ∈ N2} price(j) - Σ{i ∈ N1} value(i).

Hence, for a given set of prices and values, the reduced utility of an assignment differs from the utility of that assignment by a constant amount; consequently, an assignment that maximizes the reduced utility also maximizes the utility. Since value(i) ≥ u^k_ij - price(j) for each (i, j) ∈ A, we have

ū_ij ≤ 0, for all (i, j) ∈ A.    (5.24)

Now consider the reduced utilities of arcs in the assignment x^(k-1) (the final assignment at the end of the (k-1)-st scaling phase). The equality (5.20) implies that

u^(k-1)_ij - price'(j) - value'(i) = -1, for all (i, j) ∈ x^(k-1),    (5.25)

where price'(j) and value'(i) are the corresponding values at the end of the (k-1)-st scaling phase. Before calling the Bidding procedure, we set price(j) = 2 price'(j), value(i) = 2 value'(i) + 1, and u^k_ij = 2 u^(k-1)_ij + {0 or 1}. Substituting these relationships in (5.25), we find that the reduced utilities ū_ij of arcs in x^(k-1) are either -2 or -3. Hence, the optimum reduced utility is at least -3n. If x° is some partial assignment in the k-th scaling phase, then (5.23) implies that UB(x°) ≥ -4n. Using this result and (5.24) in (5.21) yields

Σ{i ∈ N1°} value(i) ≥ -4n.    (5.26)

Hence, for any person i, value(i) decreases O(n) times per scaling phase. Using this result in the proof of the preceding theorem, we observe that the Bidding procedure terminates in O(nm) time. The assignment algorithm applies the Bidding procedure O(log nC) times and, consequently, runs in O(nm log nC) time. We summarize our discussion.

Theorem. The scaling version of the auction algorithm solves the assignment problem in O(nm log nC) time.

The scaling version of the auction algorithm can be further improved to run in O(√n m log nC) time. This improvement is based on the following implication of (5.26): if we prohibit person i from bidding whenever value(i) ≤ -4√n, then (5.26) implies that the number of unassigned persons is at most √n. The auction algorithm thus assigns most persons quickly but spends a disproportionate amount of time on the last few. For example, if n = 10,000, the auction algorithm would assign 99% of the persons in 1% of the overall running time and the remaining 1% of the persons in the remaining 99% of the time. We therefore terminate the execution of the auction algorithm when it has assigned all but ⌈√n⌉ persons and use the successive shortest path algorithm to assign these remaining persons. It so happens that the shortest paths have length O(n), and thus Dial's algorithm, as described in Section 3.2, will find each of these shortest paths in O(m) time. Hence, the algorithm takes O(√n m) time to assign the first n - ⌈√n⌉ persons and O(⌈√n⌉ m) time to assign the remaining ⌈√n⌉ persons. This version of the auction algorithm solves a scaling phase in O(√n m) time, and its overall running time is O(√n m log nC). If we invoke the similarity assumption, this version of the algorithm currently has the best known time bound for solving the assignment problem.

6. Reference Notes

In this section, we present reference notes on topics covered in the text. This discussion has three objectives: (i) to review important theoretical contributions on each topic, (ii) to point out inter-relationships among different algorithms, and (iii) to comment on the empirical aspects of the algorithms.

6.1 Introduction

The study of network flow models predates the development of linear programming techniques. The first studies in this problem domain, conducted by Kantorovich [1939], Hitchcock [1941], and Koopmans [1947], considered the transportation problem, a special case of the minimum cost flow problem. These studies provided some insight into the problem structure and yielded incomplete algorithms. Interest in network problems grew with the advent of the simplex algorithm by Dantzig in 1947. Dantzig [1951] specialized the simplex algorithm for the transportation problem. He noted the triangularity of the basis and the integrality of the optimum solution. Orden [1956] generalized this work by specializing the simplex algorithm for the uncapacitated minimum cost flow problem. The network simplex algorithm for the capacitated minimum cost flow problem followed from the development of the bounded variable simplex method for linear programming by Dantzig [1955]. The book by Dantzig [1962] contains a thorough description of these contributions along with historical perspectives.

During the 1950's, researchers began to exhibit increasing interest in the minimum cost flow problem as well as its special cases (the shortest path problem, the maximum flow problem and the assignment problem), mainly because of their important applications. Soon researchers developed special purpose algorithms to solve these problems. Dantzig, Ford and Fulkerson pioneered those efforts. Whereas Dantzig focused on primal simplex based algorithms, Ford and Fulkerson developed primal-dual type combinatorial algorithms. Their book, Ford and Fulkerson [1962], presents a thorough discussion of the early research conducted by them and by others; it also covers the development of flow decomposition theory, which is credited to Ford and Fulkerson. Since these pioneering works, network flow problems and their generalizations have emerged as major research topics in operations research; this research

is documented in thousands of papers and many text and reference books. Several important books summarize developments in the field and serve as a guide to the literature: Ford and Fulkerson [1962] (Flows in Networks), Berge and Ghouila-Houri [1962] (Programming, Games and Transportation Networks), Iri [1969] (Network Flows, Transportation and Scheduling), Hu [1969] (Integer Programming and Network Flows), Frank and Frisch [1971] (Communication, Transmission and Transportation Networks), Potts and Oliver [1972] (Flows in Transportation Networks), Christophides [1975] (Graph Theory: An Algorithmic Approach), Murty [1976] (Linear and Combinatorial Programming), Lawler [1976] (Combinatorial Optimization: Networks and Matroids), Bazaraa and Jarvis [1978] (Linear Programming and Network Flows), Minieka [1978] (Optimization Algorithms for Networks and Graphs), Jensen and Barnes [1980] (Network Flow Programming), Kennington and Helgason [1980] (Algorithms for Network Programming), Phillips and Garcia-Diaz [1981] (Fundamentals of Network Analysis), Swamy and Thulsiraman [1981] (Graphs, Networks and Algorithms), Papadimitriou and Steiglitz [1982] (Combinatorial Optimization: Algorithms and Complexity), Smith [1982] (Network Optimization Practice), Syslo, Deo and Kowalik [1983] (Discrete Optimization Algorithms), Tarjan [1983] (Data Structures and Network Algorithms), Gondran and Minoux [1984] (Graphs and Algorithms), Rockafellar [1984] (Network Flows and Monotropic Optimization), and Derigs [1988] (Programming in Networks and Graphs).

As an additional source of references, the reader might consult the bibliography on network optimization prepared by Golden and Magnanti [1977] and the extensive set of references on integer programming compiled by researchers at the University of Bonn (Kastning [1976], Hausman [1978], and Von Randow [1982, 1985]).

Since the applications of network flow models are so pervasive, no single source provides a comprehensive account of network flow models and their impact on practice. Several researchers have prepared general surveys of selected application areas. Notable among these is the paper by Glover and Klingman [1976] on the applications of minimum cost flow and generalized minimum cost flow problems. A number of books written in special problem domains also contain valuable insight about the range of applications of network flow models. Examples in this category are the paper by Bodin, Golden, Assad and Ball [1983] on vehicle routing and scheduling problems, books on communication networks by Bertsekas

and Gallager [1987] and on transportation planning by Sheffi [1985], and a collection of survey articles on facility location edited by Francis and Mirchandani [1988]. Golden [1988] has described the census rounding application given in Section 1.1.

General references on data structures serve as a useful backdrop for the algorithms presented in this chapter. The book by Aho, Hopcroft and Ullman [1974] is an excellent reference for simple data structures such as arrays, linked lists, doubly linked lists, queues, stacks, binary heaps and d-heaps. The book by Tarjan [1983] is another useful source of references for these topics, as well as for more complex data structures such as dynamic trees.

We have mentioned the "similarity assumption" throughout the chapter. Gabow [1985] coined this term in his paper on scaling algorithms for combinatorial optimization problems. This important paper, which contains scaling algorithms for several network problems, greatly helped in popularizing scaling techniques.

6.2 Shortest Path Problem

The shortest path problem and its generalizations have a voluminous research literature. As a guide to these results, we refer the reader to the extensive bibliographies compiled by Gallo, Pallattino, Ruggen and Starchi [1982] and Deo and Pang [1984]. This section, which summarizes some of this literature, focuses especially on issues of computational complexity.

Label Setting Algorithms

The first label setting algorithm was suggested by Dijkstra [1959], and independently by Dantzig [1960] and Whiting and Hillier [1960]. The original implementation of Dijkstra's algorithm runs in O(n^2) time, which is the optimal running time for fully dense networks (those with m = Ω(n^2)), since any algorithm must examine every arc. Improved running times are possible, however, for sparse networks. The following table summarizes various implementations of Dijkstra's algorithm that have been designed to improve the running time in the worst case or in practice. In the table, d = ⌈2 + m/n⌉ represents the average degree of a node in the network plus 2.

[Table: implementations of Dijkstra's algorithm and their worst-case running times; the entries are not recoverable from the source.]

Boas, Kaas and Zijlstra [1977] suggested a data structure whose analysis depends upon the largest key D stored in a heap. The initialization of this algorithm takes O(D) time and each heap operation takes O(log log D) time. When Dijkstra's algorithm is implemented using this data structure, it runs in O(nC + m log log nC) time. Johnson [1982] suggested an improvement of this data structure and used it to implement Dijkstra's algorithm in O(m log log C) time.

The best strongly polynomial-time algorithm to date is due to Fredman and Tarjan [1984], who use a Fibonacci heap data structure. The Fibonacci heap is a somewhat complex, but ingenious, data structure that takes an average of O(log n) time for each node selection (and the subsequent deletion) step and an average of O(1) time for each distance update. Consequently, this data structure implements Dijkstra's algorithm in O(m + n log n) time.

Dial [1969] suggested his implementation of Dijkstra's algorithm because of its encouraging empirical performance; this algorithm was independently discovered by Wagner [1976]. Dial, Glover, Karney and Klingman [1979] have proposed an improved version of Dial's algorithm which runs better in practice. Though Dial's algorithm is only pseudopolynomial-time, its successors have improved worst-case behavior. Denardo and Fox [1979] suggest several such improvements. Observe that if w = max [1, min {c_ij : (i, j) ∈ A}], then we can use buckets of width w in Dial's algorithm, hence reducing the number of buckets from 1 + C to 1 + (C/w). The correctness of this observation follows from the fact that if d* is the current minimum temporary distance label, then the algorithm will modify no other temporary distance label in the range [d*, d* + w - 1], since each arc has length at least w - 1. Using a multiple level bucket scheme, Denardo and Fox implemented the shortest path algorithm in O(max [k C^(1/k), m log (k+1), nk(1 + C^(1/k)/w)]) time for any choice of k; choosing k = log C yields a time bound of O(m log log C + n log C). Depending on n, m and C, other choices might lead to a modestly better time bound. Johnson [1977b] proposed a related bucket scheme with exponentially growing widths and obtained a running time of O((m + n log C) log log C). This data structure is the same as the R-heap data structure described in Section 3.3, except that it performs a binary search over O(log C) buckets to insert nodes into buckets during the redistribution of ranges and the distance updates. The R-heap implementation replaces the binary search by a sequential search and improves the running time by a factor of O(log log C).
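The width-w refinement is easy to express in code. The following Python sketch of Dial's implementation is illustrative only (the names are ours; adj maps each node to a list of (head, cost) pairs with integer costs of at least 1, and C bounds the arc costs). For simplicity it allocates buckets for the full distance range; Dial's actual implementation reuses 1 + C/w buckets cyclically, since all finite temporary labels lie within a window of width C.

    def dial(adj, source, C):
        INF = float('inf')
        w = max(1, min((c for arcs in adj.values() for (_, c) in arcs), default=1))
        n = len(adj)
        buckets = [set() for _ in range((n * C) // w + 2)]   # buckets of width w
        d = {v: INF for v in adj}
        d[source] = 0
        buckets[0].add(source)
        for b in range(len(buckets)):        # permanently label bucket by bucket
            while buckets[b]:
                i = buckets[b].pop()
                for (j, c) in adj[i]:        # relax the arcs out of node i
                    if d[i] + c < d[j]:
                        if d[j] < INF:
                            buckets[d[j] // w].discard(j)   # move j to its new bucket
                        d[j] = d[i] + c
                        buckets[d[j] // w].add(j)
        return d

Because every arc has length at least w, relaxing a node in bucket b never places another node back into bucket b, so nodes within a bucket may be scanned in any order.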

Ahuja, Mehlhorn, Orlin and Tarjan [1988] suggested the R-heap implementation and its further improvements, described next. The R-heap implementation described in Section 3.3 uses a single level bucket system. A two-level bucket system improves further on the R-heap implementation of Dijkstra's algorithm. The two-level data structure consists of K (big) buckets, each bucket being further subdivided into L (small) subbuckets. During redistribution, the two-level bucket system redistributes the range of a subbucket over all of its previous buckets. This approach permits the selection of a much larger bucket width, thus reducing the number of buckets. By using K = L = 2 log C/log log C, this two-level bucket system version of Dijkstra's algorithm runs in O(m + n log C/log log C) time. Incorporating a generalization of the Fibonacci heap data structure in the two-level bucket system, with appropriate choices of K and L, further reduces the time bound to O(m + n √log C). If we invoke the similarity assumption, this approach currently gives the fastest worst-case implementation of Dijkstra's algorithm for all classes of graphs except very sparse ones, for which the algorithm of Johnson [1982] appears more attractive. The Fibonacci heap version of the two-level R-heap is very complex, however, and so it is unlikely that this algorithm would perform well in practice.

Label Correcting Algorithms

Ford [1956] suggested, in skeleton form, the first label correcting algorithm for the shortest path problem. Subsequently, several other researchers, including Ford and Fulkerson [1962] and Moore [1957], studied the theoretical properties of the algorithm. Bellman's [1958] algorithm can also be regarded as a label correcting algorithm. Though specific implementations of label correcting algorithms run in O(nm) time, the most general form is nonpolynomial-time, as shown by Edmonds [1970].

Researchers have exploited the flexibility inherent in the generic label correcting algorithm to obtain algorithms that are very efficient in practice. The modification that adds a node to LIST (see the description of the Modified Label Correcting Algorithm given in Section 3.4) at the front if the algorithm has previously examined the node, and at the end otherwise, is probably the most popular. This modification was conveyed to Pollack and Wiebenson [1960] by D'Esopo, and later refined and tested by Pape [1974]; we shall subsequently refer to it as D'Esopo and Pape's algorithm. A FORTRAN listing of this algorithm can be found in Pape [1980].
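The deque discipline just described takes only a few lines of code. The sketch below is an illustrative Python rendering under our own conventions: nodes that have never been scanned enter LIST at the end, while previously examined nodes re-enter at the front.

    from collections import deque

    def desopo_pape(adj, source):
        INF = float('inf')
        d = {v: INF for v in adj}
        d[source] = 0
        status = {v: 0 for v in adj}    # 0: never in LIST, 1: in LIST, 2: was in LIST
        LIST = deque([source])
        status[source] = 1
        while LIST:
            i = LIST.popleft()
            status[i] = 2
            for (j, c) in adj[i]:
                if d[i] + c < d[j]:
                    d[j] = d[i] + c
                    if status[j] == 0:        # first encounter: add at the end
                        LIST.append(j)
                    elif status[j] == 2:      # previously examined: add at the front
                        LIST.appendleft(j)
                    status[j] = 1
        return d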

Though this modified label correcting algorithm has excellent computational behavior, in the worst case it runs in exponential time, as shown by Kershenbaum [1981]. Glover, Klingman and Phillips [1985] proposed a generalization of the FIFO label correcting algorithm, called the partitioning shortest path (PSP) algorithm. For general networks, the PSP algorithm runs in O(nm) time, while for networks with nonnegative arc lengths it runs in O(n^2) time and has excellent computational behavior. Other variants of the label correcting algorithms and their computational attributes can be found in Glover, Klingman, Phillips and Schneider [1985].

Researchers have also been interested in developing polynomial-time primal simplex algorithms for the shortest path problem. Dial, Glover, Karney and Klingman [1979] and Zadeh [1979] showed that Dantzig's pivot rule (i.e., pivoting in the arc with the largest violation of the optimality condition), applied to the shortest path problem starting from an artificial basis, leads to Dijkstra's algorithm; thus, the number of pivots is O(n) if all arc costs are nonnegative. Primal simplex algorithms for the shortest path problem with arbitrary arc lengths are not that efficient. Akgul [1985a] developed a simplex algorithm for the shortest path problem that performs O(n^2) pivots. Using simple data structures, Akgul's algorithm runs in O(n^3) time, which can be reduced to O(nm + n^2 log n) using the Fibonacci heap data structure. Goldfarb, Hao and Kai [1986] described another simplex algorithm for the shortest path problem; the number of pivots and the running times of this algorithm are comparable to those of Akgul's algorithm. Orlin [1985] showed that the simplex algorithm with Dantzig's pivot rule solves the shortest path problem in O(n^2 log nC) pivots. Ahuja and Orlin [1988] recently discovered a scaling variation of this approach that performs O(n^2 log C) pivots and runs in O(nm log C) time. This algorithm uses simple data structures and very natural pricing strategies, and it also permits partial pricing.

All Pair Shortest Path Algorithms

Most algorithms that solve the all pair shortest path problem involve matrix manipulation. The first such algorithm appears to be a part of the folklore; Lawler [1976] describes it in his textbook. The complexity of this algorithm is O(n^3 log n), which can be improved slightly by using more sophisticated matrix multiplication procedures. The algorithm we have presented is due to Floyd [1962] and is based on a theorem by Warshall [1962]. This algorithm runs in O(n^3) time and is also capable of detecting the presence of negative cycles.

Dantzig [1967] devised another procedure requiring exactly the same order of calculations. From a worst-case complexity point of view, however, it might be desirable to solve the all pair shortest path problem as a sequence of single source shortest path problems. As pointed out in the text, this approach takes O(nm) time to construct an equivalent problem with nonnegative arc lengths and O(n S(n,m,C)) time to solve the n shortest path problems (recall that S(n,m,C) is the time needed to solve a shortest path problem with nonnegative arc lengths). For very dense networks, the algorithm by Fredman [1976] is faster than this approach in terms of worst-case complexity. The bibliography by Deo and Pang [1984] contains references for several other all pair shortest path algorithms.

Computational Results

Researchers have extensively tested shortest path algorithms on a variety of network classes. The studies due to Gilsinn and Witzgall [1973], Pape [1974], Kelton and Law [1978], Van Vliet [1978], Dial, Glover, Karney and Klingman [1979], Denardo and Fox [1979], Imai and Iri [1984], Glover, Klingman, Phillips and Schneider [1985], and Gallo and Pallottino [1988] are representative of these contributions.

Unlike the worst-case results, the results of computational studies are only suggestive, rather than conclusive. The computational performance of an algorithm depends upon many factors: for example, the manner in which the program is written; the language, compiler and computer used; and the distribution of networks on which the algorithm is tested. The results also depend greatly upon the density of the network. These studies generally suggest that Dial's algorithm is the best label setting algorithm for the shortest path problem: it is faster than the original O(n^2) implementation and the binary heap, d-heap and Fibonacci heap implementations of Dijkstra's algorithm for all network classes tested by these researchers. Denardo and Fox [1979] also find that Dial's algorithm is faster than their two-level bucket implementation for all of their test problems; extrapolating the results, however, they observe that their implementation would be faster for very large shortest path problems. Researchers have not yet tested the R-heap implementation, and so at this moment no comparison with Dial's algorithm is available.

Among label correcting algorithms, the algorithms by D'Esopo and Pape and by Glover, Klingman, Phillips and Schneider [1985] are the two fastest; the study by Glover et al. finds that their algorithm is superior to D'Esopo and Pape's algorithm. Other researchers have also compared label setting algorithms with label correcting algorithms. Studies generally suggest that, for very dense networks, label setting algorithms are superior and, for sparse networks, label correcting algorithms perform better.

Kelton and Law [1978] have conducted a computational study of several all pair shortest path algorithms. This study indicates that Dantzig's [1967] algorithm, with a modification due to Tabourier [1973], is faster (up to two times) than the Floyd-Warshall algorithm described in Section 3.5. This study also finds that matrix manipulation algorithms are faster than a successive application of a single-source shortest path algorithm for very dense networks, but slower for sparse networks.

6.3 Maximum Flow Problem

The maximum flow problem is distinguished by the long succession of research contributions that have improved upon the worst-case complexity of algorithms; some, but not all, of these improvements have produced improvements in practice.

Several researchers (Dantzig and Fulkerson [1956], Ford and Fulkerson [1956], and Elias, Feinstein and Shannon [1956]) independently established the max-flow min-cut theorem. Fulkerson and Dantzig [1955] solved the maximum flow problem by specializing the primal simplex algorithm, whereas Ford and Fulkerson [1956] and Elias et al. [1956] solved it by augmenting path algorithms. Since then, researchers have developed a number of algorithms for this problem; Table 6.2 summarizes the running times of some of them. In the table, n is the number of nodes, m is the number of arcs, and U is an upper bound on the integral arc capacities. The algorithms whose time bounds involve U assume integral capacities; the bounds specified for the other algorithms apply to problems with arbitrary rational or real capacities.

#    Discoverers                                Running Time
1    Edmonds and Karp [1972]                    O(nm^2)
2    Dinic [1970]                               O(n^2 m)
3    Karzanov [1974]                            O(n^3)
4    Cherkasky [1977]                           O(n^2 √m)
5    Malhotra, Kumar and Maheshwari [1978]      O(n^3)
6    Galil [1980]                               O(n^(5/3) m^(2/3))
7    Galil and Naamad [1980]; Shiloach [1978]   O(nm log^2 n)
8    Shiloach and Vishkin [1982]                O(n^3)
9    Sleator and Tarjan [1983]                  O(nm log n)
10   Tarjan [1984]                              O(n^3)
11   Gabow [1985]                               O(nm log U)
12   Goldberg [1985]                            O(n^3)
13   Goldberg and Tarjan [1986]                 O(nm log (n^2/m))
14   Bertsekas [1986]                           O(n^3)
15   Cheriyan and Maheshwari [1987]             O(n^2 √m)
16   Ahuja and Orlin [1987]                     O(nm + n^2 log U)
17   Ahuja, Orlin and Tarjan [1988]             (a) O(nm + n^2 log U/log log U)
                                                (b) O(nm + n^2 √log U)
                                                (c) O(nm log ((n √log U)/m + 2))

Table 6.2. Running times of maximum flow algorithms.

Ford and Fulkerson [1956] observed that the labeling algorithm can perform as many as O(nU) augmentations for networks with integer arc capacities. They also showed that for arbitrary irrational arc capacities, the labeling algorithm can perform an infinite sequence of augmentations and might converge to a value different from the maximum flow value. Edmonds and Karp [1972] suggested two specializations of the labeling algorithm, both with improved computational complexity. They showed that if the algorithm augments flow along a shortest path (i.e., one containing the smallest possible number of arcs) in the residual network, then it performs O(nm) augmentations. A breadth first search of the network will determine a shortest augmenting path; consequently, this version of the labeling
algorithm runs in O(nm^2) time. Edmonds and Karp's second idea was to augment flow along a path with maximum residual capacity. They proved that this algorithm performs O(m log U) augmentations. Tarjan [1986] has shown how to determine a path with maximum residual capacity in O(m) time on average; hence, this version of the labeling algorithm runs in O(m^2 log U) time.

Dinic [1970] independently introduced the concept of shortest path networks, called layered networks, for solving the maximum flow problem. A layered network is a subgraph of the residual network that contains only those nodes and arcs that lie on at least one shortest path from the source to the sink. The nodes in a layered network can be partitioned into layers of nodes N1, N2, ..., so that every arc (i, j) in the layered network connects nodes in adjacent layers (i.e., i ∈ Nk and j ∈ Nk+1 for some k). A blocking flow in a layered network G' = (N', A') is a flow that blocks flow augmentations in the sense that G' contains no directed path with positive residual capacity from the source node to the sink node. Dinic showed how to construct, in a total of O(nm) time, a blocking flow in a layered network by performing at most m augmentations. His algorithm constructs layered networks and establishes blocking flows in these networks. Dinic showed that after each blocking flow iteration the length of the layered network increases, and after at most n iterations the source is disconnected from the sink in the residual network. Consequently, his algorithm runs in O(n^2 m) time.

The shortest augmenting path algorithm presented in Section 4.3 achieves the same time bound as Dinic's algorithm, but instead of constructing layered networks it maintains distance labels. Goldberg [1985] introduced distance labels in the context of his preflow push algorithm. Distance labels offer several advantages: they are simpler to understand than layered networks, they are easier to manipulate, and they have led to more efficient algorithms. Orlin and Ahuja [1987] developed the distance label based augmenting path algorithm given in Section 4.3. They also showed that this algorithm is equivalent both to Edmonds and Karp's algorithm and to Dinic's algorithm, in the sense that all three algorithms enumerate the same augmenting paths in the same sequence; the algorithms differ only in the manner in which they obtain these augmenting paths.
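The layered-network scheme lends itself to a compact implementation. The following Python sketch is illustrative only; it omits the current-arc pointer that the O(nm) blocking flow bound requires, and all identifiers are ours. It alternates breadth-first construction of the layers with depth-first augmentation until the sink is no longer reachable.

    from collections import deque

    def dinic(cap, s, t):
        # cap[i][j] is the residual capacity of arc (i, j); reverse arcs
        # must be present (with capacity 0 if absent in the original network)
        flow = 0
        while True:
            # build the layered network with a breadth-first search
            layer = {s: 0}
            q = deque([s])
            while q:
                i = q.popleft()
                for j, r in cap[i].items():
                    if r > 0 and j not in layer:
                        layer[j] = layer[i] + 1
                        q.append(j)
            if t not in layer:
                return flow              # source and sink disconnected: optimal

            def dfs(i, limit):
                # advance only along arcs that go from one layer to the next
                if i == t:
                    return limit
                if layer.get(i) is None:
                    return 0
                for j, r in cap[i].items():
                    if r > 0 and layer.get(j) == layer[i] + 1:
                        pushed = dfs(j, min(limit, r))
                        if pushed > 0:
                            cap[i][j] -= pushed
                            cap[j][i] += pushed
                            return pushed
                layer[i] = None          # dead end: prune node i for this phase
                return 0

            # augment until the layered network carries a blocking flow
            pushed = dfs(s, float('inf'))
            while pushed > 0:
                flow += pushed
                pushed = dfs(s, float('inf'))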
Several researchers have contributed improvements to the computational complexity of maximum flow algorithms by developing more efficient algorithms to establish blocking flows in layered networks. Karzanov [1974] introduced the concept of preflows in a layered network. (See the technical report of Even [1976] for a comprehensive description of this algorithm and the paper by Tarjan [1984] for a simplified version.) Karzanov showed that an implementation that maintains preflows and pushes flows from nodes with excesses constructs a blocking flow in O(n^2) time. Malhotra, Kumar and Maheshwari [1978] present a conceptually simple maximum flow algorithm that runs in O(n^3) time. Cherkasky [1977] and Galil [1980] presented further improvements of Karzanov's algorithm.

The search for more efficient maximum flow algorithms has stimulated researchers to develop new data structures for implementing Dinic's algorithm. The first such data structures were suggested independently by Shiloach [1978] and Galil and Naamad [1980]. Dinic's algorithm (or the shortest augmenting path algorithm described in Section 4.3) takes O(n) time on average to identify an augmenting path and, during the augmentation, it saturates some arcs in this path. If we delete the saturated arcs from this path, we obtain a set of path fragments. The basic idea is to store these path fragments using some data structure, for example 2-3 trees (see Aho, Hopcroft and Ullman [1974] for a discussion of 2-3 trees), and to use them later to identify augmenting paths quickly. Shiloach [1978] and Galil and Naamad [1980] showed how to augment flows through path fragments in a way that finds a blocking flow in O(m (log n)^2) time; hence, their implementation of Dinic's algorithm runs in O(nm (log n)^2) time. Sleator and Tarjan [1983] improved this approach by using a data structure called dynamic trees to store and update path fragments. Their algorithm establishes a blocking flow in O(m log n) time and thereby yields an O(nm log n) time bound for Dinic's algorithm.

Gabow [1985] obtained a similar time bound by applying a bit scaling approach to the maximum flow problem. As outlined in Section 1.6, this approach solves a maximum flow problem at each scaling phase with one more bit of every arc's capacity. During a scaling phase, the initial flow value differs from the maximum flow value by at most m units, and so the shortest augmenting path algorithm (and also Dinic's algorithm) performs at most m augmentations. Consequently, each scaling phase takes O(nm) time and the algorithm runs in O(nm log C) time. If we invoke the similarity assumption, this time bound is comparable to that of Sleator and Tarjan's algorithm, but the scaling algorithm is much simpler to implement. Orlin and Ahuja [1987] have presented a variation of Gabow's algorithm achieving the same time bound.
Goldberg and Tarjan [1986] developed the generic preflow push algorithm and the highest-label preflow push algorithm. Previously, Goldberg [1985] had shown that the FIFO version of the algorithm, which pushes flow from active nodes in first-in, first-out order, runs in O(n^3) time. (This algorithm maintains a queue of active nodes; at each iteration, it selects a node from the front of the queue, performs a push/relabel step at this node, and adds the newly active nodes to the rear of the queue.) Using a dynamic tree data structure, Goldberg and Tarjan [1986] improved the running time of the FIFO preflow push algorithm to O(nm log (n^2/m)). This algorithm currently gives the best strongly polynomial-time bound for solving the maximum flow problem. Bertsekas [1986] obtained another maximum flow algorithm by specializing his minimum cost flow algorithm; this algorithm closely resembles Goldberg's FIFO preflow push algorithm. Recently, Cheriyan and Maheshwari [1987] showed that Goldberg and Tarjan's highest-label preflow push algorithm actually performs O(n^2 √m) nonsaturating pushes and hence runs in O(n^2 √m) time.

Ahuja and Orlin [1987] improved Goldberg and Tarjan's algorithm using the excess-scaling technique to obtain an O(nm + n^2 log U) time bound. If we invoke the similarity assumption, this algorithm improves upon Goldberg and Tarjan's O(nm log (n^2/m)) algorithm by a factor of log n for networks that are both non-sparse and non-dense. Further, this algorithm does not use any complex data structures. By scaling excesses by a factor of log U/log log U and pushing flow from a large excess node with the highest distance label, Ahuja, Orlin and Tarjan [1988] reduced the number of nonsaturating pushes to O(n^2 log U/log log U). Ahuja, Orlin and Tarjan [1988] also obtained another variation of the original excess scaling algorithm which further reduces the number of nonsaturating pushes to O(n^2 √log U).

The use of the dynamic tree data structure improves the running times of the excess-scaling algorithm and its variations, though the improvements are not as dramatic as they have been for Dinic's and the FIFO preflow push algorithms. For example, the O(nm + n^2 √log U) algorithm improves to O(nm log ((n √log U)/m + 2)) with the use of dynamic trees, as shown in Ahuja, Orlin and Tarjan [1988]. Tarjan [1987] conjectures that any preflow push algorithm that performs p nonsaturating pushes can be implemented in O(nm log (2 + p/nm)) time using dynamic trees. Although this conjecture is true for all known preflow push algorithms, it is still open for the general case.
Developing a polynomial-time primal simplex algorithm for the maximum flow problem had been an outstanding open problem for quite some time. Recently, Goldfarb and Hao [1988] developed such an algorithm. It is based on selecting pivot arcs so that flow is augmented along a shortest path from the source to the sink. As one would expect, this algorithm performs O(nm) pivots and can be implemented in O(n^2 m) time. Tarjan [1988] recently showed how to implement this algorithm in O(nm log n) time using dynamic trees.

Researchers have also investigated the following special cases of the maximum flow problem: (i) unit capacity networks (i.e., U = 1); (ii) unit capacity simple networks (i.e., U = 1 and, except for the source and sink, every node in the network has one incoming arc or one outgoing arc); (iii) bipartite networks; and (iv) planar networks. Observe that the maximum flow value for unit capacity networks is less than n, and so the shortest augmenting path algorithm will solve these problems in O(nm) time; thus, these problems are easier to solve than problems with large capacities. Even and Tarjan [1975] showed that Dinic's algorithm solves the maximum flow problem on unit capacity networks in O(n^(2/3) m) time and on unit capacity simple networks in O(n^(1/2) m) time. Orlin and Ahuja [1987] have achieved the same time bounds using a modification of the shortest augmenting path algorithm. Both of these algorithms rely on ideas contained in Hopcroft and Karp's [1973] algorithm for maximum bipartite matching.

Versions of the maximum flow algorithms run considerably faster on a bipartite network G = (N1 ∪ N2, A) if |N1| << |N2| (or |N2| << |N1|). Let n1 = |N1|, n2 = |N2| and n = n1 + n2, and suppose that n1 ≤ n2. Gusfield, Martel and Fernandez-Baca [1985] obtained the first such results by showing how the running times of Karzanov's and Malhotra et al.'s algorithms reduce from O(n^3) to O(n1^2 n2) and O(n1^3 + nm), respectively. Fernandez-Baca and Martel [1987] have generalized these ideas to networks with small integer capacities. Recently, Ahuja, Orlin, Stein and Tarjan [1988] improved upon these ideas by showing that it is possible to substitute n1 for n in the time bounds for all preflow push algorithms to obtain new time bounds for bipartite networks. This result implies that the FIFO preflow push algorithm and the original excess scaling algorithm, respectively, solve the bipartite maximum flow problem in O(n1 m + n1^3) and O(n1 m + n1^2 log U) time.
It is possible to solve the maximum flow problem on planar networks much more efficiently than on general networks. (A network is called planar if it can be drawn in a two-dimensional plane so that arcs intersect one another only at the nodes.) A planar network has at most 6n arcs; hence, the running times of the maximum flow algorithms on planar networks appear more attractive. Specialized solution techniques, which are quite different from those for general networks, have even better running times. Some important references for planar maximum flow algorithms are Itai and Shiloach [1979], Johnson and Venkatesan [1982] and Hassin and Johnson [1985].

Researchers have also investigated whether the worst-case bounds of the maximum flow algorithms are tight, i.e., whether the algorithms achieve their worst-case bounds for some families of networks. Zadeh [1972] showed that the bound of the Edmonds and Karp algorithm is tight when m = n^2. Even and Tarjan [1975] noted that the same examples imply that the bound of Dinic's algorithm is tight when m = n^2. Baratz [1977] showed that the bound on Karzanov's algorithm is tight. Galil [1981] constructed an interesting class of examples and showed that the algorithms of Edmonds and Karp, Dinic, Karzanov, Cherkasky, Galil and Malhotra et al. achieve their worst-case bounds on those examples.

Other researchers have made some progress in constructing worst-case examples for preflow push algorithms. Martel [1987] showed that the FIFO preflow push algorithm can take Ω(nm) time to solve a class of unit capacity networks. Cheriyan and Maheshwari [1987] have shown that the bound of O(n^2 √m) for the highest-label preflow push algorithm is tight. Cheriyan [1988] has also constructed a family of examples showing that the bound of O(n^3) for the FIFO preflow push algorithm and the bound of O(n^2 m) for the generic preflow push algorithm are tight. The research community has not established similar results for other preflow push algorithms, especially for the excess-scaling algorithms. It is worth mentioning, however, that these known worst-case examples are quite artificial and are not likely to arise in practice.

Several computational studies have assessed the empirical behavior of maximum flow algorithms. The studies performed by Hamacher [1979], Cheung
[1980], Glover, Klingman, Mote and Whitman [1979, 1984], Imai [1983] and Goldfarb and Grigoriadis [1986] are noteworthy. These studies were conducted prior to the development of algorithms that use distance labels. They rank the Edmonds and Karp, Dinic's and Karzanov's algorithms in increasing order of performance for most classes of networks; Dinic's algorithm is competitive with Karzanov's algorithm for sparse networks, but slower for dense networks. Imai [1983] noted that Galil and Naamad's [1980] implementation of Dinic's algorithm, using sophisticated data structures, is slower than the original Dinic's algorithm. Sleator and Tarjan [1983] reported a similar finding; they observed that their implementation of Dinic's algorithm using the dynamic tree data structure is slower than the original Dinic's algorithm by a constant factor. Hence, the sophisticated data structures improve only the worst-case performance of algorithms and are not useful empirically. Researchers have also tested the Malhotra et al. algorithm and the primal simplex algorithm due to Fulkerson and Dantzig [1955], and found these algorithms to be slower than Dinic's algorithm for most classes of networks.

A number of researchers are currently evaluating the computational performance of preflow push algorithms. Derigs and Meier [1988], Grigoriadis [1988], and Ahuja, Kodialam and Orlin [1988] have found that the preflow push algorithms are substantially (often 2 to 10 times) faster than Dinic's and Karzanov's algorithms for most classes of networks. Among all nonscaling preflow push algorithms, the highest-label preflow push algorithm runs the fastest. The excess-scaling algorithm and its variations have not been tested thoroughly. We do not anticipate that dynamic tree implementations of preflow push algorithms would be useful in practice; in this case, as in others, their contribution has been to improve the worst-case performances of algorithms.

Finally, we discuss two important generalizations of the maximum flow problem: (i) the multi-terminal flow problem, and (ii) the maximum dynamic flow problem.

In the multi-terminal flow problem, we wish to determine the maximum flow value between every pair of nodes. Gomory and Hu [1961] showed how to solve the multi-terminal flow problem on undirected networks by solving (n-1) maximum flow problems. Recently, Gusfield [1987] has suggested a simpler multi-terminal flow algorithm. These results, however, do not apply to the multi-terminal maximum flow problem on directed networks.
In the simplest version of the maximum dynamic flow problem, we associate with each arc (i, j) in the network a number t_ij denoting the time needed to traverse that arc. The objective is to send the maximum possible flow from the source node to the sink node within a given time period T. Ford and Fulkerson [1958] showed that the maximum dynamic flow problem can be solved by solving a minimum cost flow problem. (Ford and Fulkerson [1962] give a nice treatment of this problem.) Orlin [1983] has considered infinite horizon dynamic flow problems in which the objective is to minimize the average cost per period.

6.4 Minimum Cost Flow Problem

The minimum cost flow problem has a rich history. The classical transportation problem, a special case of the minimum cost flow problem, was posed and solved (though incompletely) by Kantorovich [1939], Hitchcock [1941], and Koopmans [1947]. Dantzig [1951] developed the first complete solution procedure for the transportation problem by specializing his simplex algorithm for linear programming. He observed the spanning tree property of the basis and the integrality property of the optimum solution. Later, his development of the upper bounding technique for linear programming led to an efficient specialization of the simplex algorithm for the minimum cost flow problem. Dantzig's book [1962] discusses these topics.

Ford and Fulkerson [1956, 1957] suggested the first combinatorial algorithms for the uncapacitated and capacitated transportation problem; these algorithms are known as the primal-dual algorithms. Ford and Fulkerson [1962] describe the primal-dual algorithm for the minimum cost flow problem. Jewell [1958], Iri [1960] and Busacker and Gowen [1961] independently discovered the successive shortest path algorithm. These researchers showed how to solve the minimum cost flow problem as a sequence of shortest path problems with arbitrary arc lengths. Tomizava [1971] and Edmonds and Karp [1972] independently pointed out that if the computations use node potentials, then these algorithms can be implemented so that the shortest path problems have nonnegative arc lengths.

Minty [1960] and Fulkerson [1961] independently discovered the out-of-kilter algorithm. The negative cycle algorithm is credited to Klein [1967]. Helgason and Kennington [1977] and Armstrong, Klingman and Whitman [1980] describe the
specialization of the linear programming dual simplex algorithm for the minimum cost flow problem (which is not discussed in this chapter).

Each of these algorithms performs iterations that can (apparently) not be polynomially bounded. Zadeh [1973a] describes one example on which each of several algorithms (the primal simplex algorithm with Dantzig's pivot rule, the dual simplex algorithm, the negative cycle algorithm that augments flow along a most negative cycle, the successive shortest path algorithm, the primal-dual algorithm, and the out-of-kilter algorithm) performs an exponential number of iterations. Zadeh [1973b] has also described more pathological examples for network algorithms. The fact that one example is bad for many network algorithms suggests an insightful inter-relationship among the algorithms. The paper by Zadeh [1979] showed this relationship by pointing out that each of the algorithms just mentioned is indeed equivalent, in the sense that they perform the same sequence of augmentations provided ties are broken using the same rule. All of these algorithms essentially consist of identifying shortest paths between appropriately defined nodes and augmenting flow along these paths; further, they obtain these shortest paths using a method that can be regarded as an application of Dijkstra's algorithm.

The network simplex algorithm and its practical implementations have been most popular with operations researchers. Johnson [1966] suggested the first tree manipulating data structure for implementing the simplex algorithm. The first implementations using these ideas, due to Srinivasan and Thompson [1973] and Glover, Karney, Klingman and Napier [1974], significantly reduced the running time of the simplex algorithm. Glover, Klingman and Stutz [1974], Bradley, Brown and Graves [1977], and Barr, Glover and Klingman [1979] subsequently discovered improved data structures. The book of Kennington and Helgason [1980] is an excellent source for references and background material concerning these developments.

Researchers have conducted extensive studies to determine the most effective pricing strategy, i.e., selection of the entering variable. These studies show that the choice of the pricing strategy has a significant effect on both the solution time and the number of pivots required to solve minimum cost flow problems. The candidate list strategy we described is due to Mulvey [1978a]. Goldfarb and Reid [1977], Bradley, Brown and Graves [1978], Grigoriadis and Hsu [1979], Gibby, Glover, Klingman and Mead [1983] and Grigoriadis [1986] have described other strategies that have been
effective in practice. It appears that the best pricing strategy depends both upon the network structure and the network size.

Experience in solving large scale minimum cost flow problems has established that more than 90% of the pivoting steps in the simplex method can be degenerate (see Bradley, Brown and Graves [1978], Gavish, Schweitzer and Shlifer [1977] and Grigoriadis [1986]). Thus, degeneracy is both a computational and a theoretical issue. The strongly feasible basis technique, proposed by Cunningham [1976] and independently by Barr, Glover and Klingman [1977a, 1977b, 1978], has contributed on both fronts. Computational experience has shown that maintaining a strongly feasible basis substantially reduces the number of degenerate pivots. On the theoretical front, the use of this technique led to a finitely converging primal simplex algorithm. Orlin [1985] showed, using a perturbation technique, that for integer data an implementation of the primal simplex algorithm that maintains a strongly feasible basis performs O(nmCU) pivots when used with any arbitrary pricing strategy and O(nmC log (mCU)) pivots when used with Dantzig's pricing strategy.

The strongly feasible basis technique prevents cycling during a sequence of consecutive degenerate pivots, but the number of consecutive degenerate pivots may be exponential. This phenomenon is known as stalling. Cunningham [1979] described an example of stalling and suggested several rules for selecting the entering variable to avoid stalling. One such rule is the LRC (Least Recently Considered) rule, which orders the arcs in an arbitrary but fixed manner. The algorithm then examines the arcs in a wrap-around fashion, each iteration starting at the place where it left off earlier, and introduces the first eligible arc into the basis. Cunningham showed that this rule admits at most nm consecutive degenerate pivots. Goldfarb, Hao and Kai [1987] have described more anti-stalling pivot rules for the minimum cost flow problem.

Researchers have also been interested in developing polynomial-time simplex algorithms for the minimum cost flow problem or its special cases. Developing a polynomial-time primal simplex algorithm for the minimum cost flow problem is still open. However, researchers have developed such algorithms for the shortest path problem, the maximum flow problem, and the assignment problem: Dial et al. [1979], Zadeh
[1979], Orlin [1985], Akgul [1985a], Goldfarb, Hao and Kai [1986] and Ahuja and Orlin [1988] for the shortest path problem; Goldfarb and Hao [1988] for the maximum flow problem; and Roohy-Laleh [1980], Hung [1983], Orlin [1985], Akgul [1985b] and Ahuja and Orlin [1988] for the assignment problem. The only polynomial-time simplex algorithm for the minimum cost flow problem is the dual simplex algorithm due to Orlin [1984]; this algorithm performs O(n^3 log n) pivots for the uncapacitated minimum cost flow problem.

The relaxation algorithms proposed by Bertsekas and his associates are other attractive algorithms for solving the minimum cost flow problem and its generalizations. For the minimum cost flow problem (with integer data), this algorithm maintains a pseudoflow satisfying the optimality conditions. The algorithm proceeds by either (i) augmenting flow from an excess node to a deficit node along a path consisting of arcs with zero reduced cost, or (ii) changing the potentials of a subset of nodes. In the latter case, it resets the flows on some arcs to their lower or upper bounds so as to satisfy the optimality conditions; this flow assignment, however, might change the excesses and deficits at nodes. The algorithm operates so that each change in the node potentials increases the dual objective function value, and when it finally determines the optimum dual objective function value, it has also obtained an optimum primal solution. This relaxation algorithm has exhibited nice empirical behavior. Bertsekas [1985] suggested the relaxation algorithm for the minimum cost flow problem. Bertsekas and Tseng [1985] extended this approach to the minimum cost flow problem with real data and to the generalized minimum cost flow problem (see Section 6.6 for a definition of this problem).

A number of empirical studies have extensively tested minimum cost flow algorithms for a wide variety of network structures, data distributions, and problem sizes. The most common problem generator is NETGEN, due to Klingman, Napier and Stutz [1974], which is capable of generating assignment problems and capacitated or uncapacitated transportation and minimum cost flow problems. Glover, Karney and Klingman [1974] and Aashtiani and Magnanti [1976] have tested the primal-dual and out-of-kilter algorithms. Helgason and Kennington [1977] and Armstrong, Klingman and Whitman [1980] have reported on extensive studies of the dual simplex algorithm. The primal simplex algorithm has been the subject of more rigorous investigation; the studies conducted by Glover, Karney, Klingman and Napier [1974], Bradley, Brown and Graves [1977], Mulvey [1978b], Grigoriadis and Hsu [1979] and Grigoriadis [1986] are noteworthy. Bertsekas and Tseng [1988] have presented computational results for the relaxation algorithm.
In view of Zadeh's [1979] result, we would expect that the successive shortest path algorithm, the primal-dual algorithm, the out-of-kilter algorithm, the dual simplex algorithm, and the primal simplex algorithm with Dantzig's pivot rule should have comparable running times. By using more effective pricing strategies that determine a good entering arc without examining all arcs, we would expect the primal simplex algorithm to outperform the other algorithms. All the computational studies have verified this expectation, and until very recently the primal simplex algorithm was a clear winner for almost all classes of network problems. Bertsekas and Tseng [1988] have reported that their relaxation algorithm is substantially faster than the primal simplex algorithm. However, Grigoriadis [1986] finds his new version of the primal simplex algorithm faster than the relaxation algorithm. At this time, it appears that the relaxation algorithm of Bertsekas and Tseng and the primal simplex algorithm due to Grigoriadis are the two fastest algorithms for solving the minimum cost flow problem in practice.

Computer codes for some minimum cost flow algorithms are available in the public domain. These include the primal simplex codes RNET and NETFLOW developed by Grigoriadis and Hsu [1979] and Kennington and Helgason [1980], respectively, and the relaxation code RELAX developed by Bertsekas and Tseng [1988].

Polynomial-Time Algorithms

In the recent past, researchers have actively pursued the design of fast (weakly) polynomial and strongly polynomial-time algorithms for the minimum cost flow problem. Recall that an algorithm is strongly polynomial-time if its running time is polynomial in the number of nodes and arcs and does not involve terms containing logarithms of C or U. The table given in Figure 6.3 summarizes these theoretical developments. The table reports running times for networks with n nodes and m arcs, m' of which are capacitated. It assumes that the integral cost coefficients are bounded in absolute value by C, and that the integral capacities, supplies and demands are bounded in absolute value by U. The term S(.) is the running time for the shortest path problem and the term M(.) represents the corresponding running time to solve a maximum flow problem.

O) C M(n. C)) m') log U S(n. 1988b] 0(nm log n log log n log (log U log nQ nC) Goldberg and Tarjan 0(nm 0(nm 0(nm 9 Ahuja. U)) nC) ) 0(n 0(n log log Bland and Jensen [1985] Goldberg and Tarjan [1988a] Bertsekas and Eckstein [1988] 0(nm log irr/nx) log nC) o(n3 log 7 7 8 Goldberg and Tarjan [1987] 0( n^ log nC Gabow and Tarjan [1987] [1987. m. Goldberg. m. U)) C M(n. m. Orlin U/log log U) log nC) and Tarjan [1988] and log log U log nQ Strongly Polynomial -Time Combinatorial Algorithms # . m.174 Polynomial-Time Combinatorial Algorithms # 1 Discoverers Running Time [1972] Edmonds and Karp Rock Rock [1980] [1980] 0((n + m") log 2 3 4 5 6 0((n + U S(n.

Bertsekas [1986] developed the first pseudoflow push algorithm. m. m) = (n^/m) Goldberg and Tarjan [1986] Using capacity and right-hand-side scaling. Discoverers m) = m+ nm n log n log Fredman and Tarjan [1984] M(n. C) = Discoverers min (m log log C.8 use the concept of approximate optimality. Mehlhom. However. C) = nm ^%rT^gTJ log [ ^— + 2 J Ahuja. researchers gradually recognized that the scaling technique has great theoretical value as well as potential practical significance. since they regarded as having practical utility. Orlin and Tarjan [1987] Strongly Polynomial -Time Bounds S(n. The wave algorithm . this Goldberg and Tarjan [1987] used a scaling technique on a variant of obtain the generic pseudoflow push algorithm described in Section algorithm to Tarjan [1984] 5. Rock [1980] developed two different bit-scaling algorithms for the minimum cost flow problem. This cost scaling algorithm reduces the minimum cost flow problem to a sequence of 0(n log C) maximum flow problems.175 For the sake of comparing the polynomial and strongly polynomial-time algorithms.m. Orlin and Tarjan [1988] M(n. Edmonds and Karp [1972] developed the first (weakly) polynomial-time eilgorithm for the in Section 5. we invoke the similarity assumption. The RHS-scaling algorithm presented the which a Vciriant of Edmonds-Karp algorithm. Bland and Jensen [1985] independently discovered a similar cost scaling algorithm. the best bounds for the shortest path and maximum flow problems are: Polynomial-Time Bounds S(n. and Ahuja. The pseudoflow push algorithms for the minimum cost flow problem discussed in Section 5. This algorithm was pseudopolynomial-time. minimum L> cost flow problem. For problems that satisfy the similarity assumption. one using capacity scaling and the other using cost scaling.7. proposed a wave algorithm for the maximum flow problem. introduced independently by Bertsekas [1979] and Tardos [1985].8. was suggested by Orlin initially little [1988]. m + rh/logC ) Johnson [1982]. The scaling technique it did not capture the interest of many researchers.

which was developed relies independently by Goldberg and Tarjan [1987] and Bertsekas and Eckstein [1988]. its worst-case running time is not very attractive. The double as described in Section runs in 0(nm log U log nC) time. algorithms by Goldberg and Tarjan appear more attractive. required sophisticated data structures that impose a very high computational overhead. log structures. the double scaling algorithm faster than all other algorithms for all network topologies except for very dense networks. except the wave algorithm. Goldberg.3 contains the definition of a blocking flow. analyzing an algorithm suggested by Weintraub [1974]. upon similar ideas. 5. Gabow and 0(nm log n U log nC). Barahona and Tardos if [1987].8 . 6 W this Goldberg and Tarjan described an implementation of approach running in time 0(nm(log n) minflog nC. |W | is minimum). in these instances. who developed the double scaling algorithm.) finger tree (see Using both Mehlhom [1984]) and dynamic tree data structures. Goldberg and Tarjan [1988b] showed that flow a it if the negative cycle algorithm cycle always augments along / minimum mean cycle (a W for which V (i. These algorithms. 176 for the minimum cost flow problem described in Section 5. Scaling costs by an appropriately larger factor improves the algorithm to 0(nm(log U/log log U) log nC) and a dynamic tree implementation improves the bound further to 0(nm log log U log nC). Although the wave This algorithm is very practical. [1988]. cycle algorithm Both the algorithms are based on the negative due to Klein [1967]. Using a dynamic tree data structure in the generic pseudoflow push algorithm. Goldberg and Tarjan [1988a] obtained an 0(nm log (n^/m) log nC) bound for ^he wave algorithm. For problems satisfying the similarity is assumption. (The description of Dinic's algorithm in Section 6. m log n)). The second success was due Orlin and Tarjan scaling algorithm. They also showed minimum cost flow problem cam be solved using 0(n log nC) blocking flow computations. showed that the negative cycle algorithm . then is strongly polynomial-time.j) Cj.9. Goldberg and Tarjan [1987] obtained a computational time that the bound of 0(nm log n log nC). Goldberg and Tarjan [1988b] and Barahona and Tardos [1987] have developed other polynomial-time algorithms.. The success in this direction was due to who developed a triple scaling algorithm running in time to Ahuja. situation has prompted researchers to investigate the possibility of improving the computational complexity of minimum first cost flow algorithms without using any complex data Tarjan [1987].

are problems more equally difficult to solve as the values of the tmderlying data becomes increasingly larger? The Tardos first strongly polynomial-time minimum cost flow algorithm is due to [1985]. This desire was motivated primarily by (Indeed. Tarjan [1988b] also show that their algorithm that proceeds by cancelling minimvun mean cycles is also strongly polynomial time. NP-hard).. that can valued data as well as integer valued level. and also highlighted the desire to develop a strongly polynomial-time algorithm. Edmonds and Karp the [1972] proposed the first polynomial-time algorithm for minimum cost flow problem.e. For very sparse networks.177 augments flow along then it a cycle with maximum improvement in the objective function. network flow algorithms data. theoretical considerations.e. m. in practice. where . they describe a method (based upon solving to an auxiliary assignment problem) determine a disjoint set of augmenting cycles with the property that augmenting flows along these cycles improves the flow cost by at least as much as augmenting flow along any single cycle. identify the and (ii) they might. m log n)) shortest path is problems. and Orlin [1988] provided subsequent improvements in the running Goldberg and Tarjan [1988a] obtained another strongly polynomial time Goldberg and algorithm by slightly modifying their pseudoflow push algorithm. and are sublinear Strongly polynomial-time algorithms are (i) theoretically attractive for at least two reasons: run on real they might provide. the terms log in n. Interior point linear programming algorithms are another source of polynomial-time algorithms for the minimum cost flow problem. the fastest strongly polynomial-time algorithm due to Orlin [1988]. is Currently. Their algorithm runs in 0(. source of the difficult or underlying complexity in solving a problem.Tr\^ log (mCU) S(n. [1986]. at a more fundamental i. the worst-case running time of this algorithm nearly as low cis the best weakly polynomieil-time algorithm. Several researchers including Orlin [1984]. in principle. Galil and Tardos time. Fujishige [1986].) C and log U typically range from 1 to 20. even for problems that satisfy the similarity assumption. O) time. Kapoor and to the Vaidya [1986] have shown that Karmarkar's [1984] algorithm. This algorithm solves the minimum cost flow problem as a sequence of 0(min(m log U. when applied minimum cost flow problem performs 0(n^-^ mK) operations. performs is 0(m log mCU) iterations.. Since identifying a cycle with maximum improvement difficult (i.

s and a sink node t. we (j. and introducing and unit for all i€N|. The algorithm successively obtains a shortest path from with respect to the lir«. We believe that when implemented with appropriate speed-up techniques. and is explicit in the papers by Tomizava [1971] and Edmonds and Karp When applied to an assignment problem on the network G = (N^ u N2 . the scaling algorithms [1986] not as efficient as the non-scaling algorithms. According to the even though they might provide the best-worst case bounds on running eu-e times.i) problem by adding node . Asymptotically. minimum cost flow problem. and for all J€N2 these arcs have zero cost s to t capacity. At fully this time. [1972]. scaling algorithms have the potential to be competitive with the best other algorithms.t) first transform the assignment problem into a a source minimum arcs cost flow (s.5 Assignment Problem The assignment problem has been emphasis in the literature has a popular research topic. 6. cost flow problem. A) the successive shortest path algorithm operates as follows.ar . described in Section 5.. they found the scaling algorithm to be competitive with the relaxation algorithm for some classes of problems. [1955]. Boyd results. Although the research community has developed several different algorithms for the assignment problem.4 for the lie minimum algorithms. The primary efficient been on the development of empirically algorithms rather than the development of algorithms with improved worst-case complexity. Vaidya [1986] suggested another algorithm for linear programming that solves the minimum cost flow problem in 0(n^-^ y[m K) time. To use this solution approach. and Orlin have obtained contradictory Testing the right-hand-side scaling algorithm for the minimum cost flow problem. these time bounds are worse than that of the double scaling algorithm. appears to at the heart of many assignment due to This algorithm is implicit in the first assignment algorithm Kuhn known as the Hungarian method. 178 K= log n + log C + log U. Bland and Jensen [1985] also reported encouraging results with their cost scaling algorithm. features. the research community has yet to develop sufficient evidence to assess the computational worth of scaling and interior point linear for the programming algorithms folklore. many of these algorithms share common The successive shortest path algorithm.

costs leads to shortest path problems with nonnegative arc details of Weintraub and Barahona [1979] worked out the Edmonds-Karp assignment algorithm for the assignment problem. and augments one unit of flow along the shortest path.mC)) = 0(nS(n. [1960] and Busaker and Gowen [1971] [1961] on the minimum cost flow problem. (For 0(nm + nS(n. then these applications take a total of 0(nm) time time. Glover The more recent [1986] is threshold and Klingman also a successive shortest path algorithm which integrates their threshold shortest path algorithm (see Glover. the Hungarian method. in Whereas the successive shortest path an iteration.m.C)) time. [1972] independently pointed out that Tomizava and Edmonds and Karp working with reduced lengths. overall. However. Carraresi and Hoffman and Markowitz path problem to [1963] pointed out the transformation of a shortest an assignment problem. some time after the development of the Hungarian method as described by Kuhn. Kuhn's [1955] Hungarian method shortest path algorithm. For problems satisfying the similarity assumption. the to Hungarian method solves a (particularly simple) maximum flow problem send the maximum possible flow from the source node s to the sink node t using arcs vdth zero reduced cost.m. too.C) O(n^) and for a Fibonacci heap implementation is it is 0(m+nlogn).C) problem. S(n. If the shortest paths from the source node we use the labeling algorithm to solve the resulting maximum flow problems. Sodini [1986] also suggested a similar threshold assignment algorithm.m. is the time needed to solve a shortest path is For a naive implementation of Dijkstra's algorithm. The algorithm solves the assignment problem by n applications of the shortest path algorithm for nonnegative arc lengths and runs in 0(nS(n.m. The fact that the assignment problem can be solved as a sequence of n shortest Iri path problems with arbitrary arc lengths follows from the works of Jewell [1958]. log log C. the problem augments flow along one path augments flow along all Hungarian method to the sink node.179 programming reduced costs. where S(n. updates the node potentials.m. is the primal-dual version of the successive After solving a shortest path problem and updating the node potentials.C)) time. Lawler [1976] described an Oiri^) . algorithm by Glover. since there are n augmentatior\s and each augmentation takes 0(m) runs in Consequently. S(n. the research community considered it to be O(n^) method.C) min(m m+nVlogC}. Glover and Klingman [1984]) with the flow augmentation process.

Subsequent research focused on developing . the mathematical programming community did not conduct much research on the network simplex method for the assignment problem until Barr. The successive shortest path algorithm maintains a solution w^ith unassigned persons and objects.C)) time. every person assigned. a primal algorithm that maintains a feasible it assignment and gradually converts into an optimum assignment by augmenting flows along negative cycles or by modifying node potentials. This approach closely related to the successive shortest path algorithm. reoptimizes over All of these algorithms the previous basis to obtain another strongly feaisible basis. Derigs [1985] notes that the shortest path computations vmderlie this method. and that it rurrs in 0(nS(n. objects Throughout the relaxation algorithm. minimum cost flow problem is due to E>inic is and Kronrod Hung eind Rom [1980] and Engquist [1982]. the shortest path computations are somewhat disguised paper of Dinic and Kronrod [1969]. Researchers have also studied primal simplex algorithms for the assignment problem. and with no person or is object overassigned.m. but may be overassigned or unassigned. The major difference the nature of the infeasibility. Both approaches start writh is in an infeasible assignment and gradually make it feasible. These authors to developed the details of the network simplex algorithm when implemented maintain a strongly feasible basis for the assignment problem. of its 2n-l variables.C)) time. The basis of the assignment problem is highly degenerate. only n are nonzero. Subsequently. [1969] The algorithms of Dinic and Kronrod but and Engquist [1982] are essentially the same as the one we in the just described.m. Both the algorithms maintain optimality of the intermediate solution and work toward feasibility by solving at most n shortest path problems with nonnegative arc lengths. The algorithm of Hung and Rom after [1980] maintains a strongly feaisible basis rooted at an overassigned node and. Glover and Klingman [1977a] devised the strongly feasible basis technique. The relaxation approach for the (1969]. run in 0(nS(n. many researchers realized that the Hungarian method in fact runs in 0(nS(n.180 implementation of the method. Another algorithm worth mentioning This algorithm is is due to Balinski and Gomory [1964].m.) Jonker and Volgenant [1986] suggested some practical improvements of the Hungarian method. they also reported encouraging computational results. each augmentation.C)) time. Probably because of this excessive degeneracy.

which is a dual simplex algorithm for the eissignment problem. . this threshold value equals C and within O(n^) pivots its value is halved. For example. essentially consists of pivoting in any arc with sufficiently large reduced The algorithm defines the term "sufficiently large" iteratively. is due to Bertsekas and uses basic ideas originally [1988] described a Bertsekas and Eckstein more recent its version of the auction algorithm.m. Goldfarb [1985] described some implementations of O(n^) time using simple data structures and in Balinski's algorithm that run in 0(nm + n^log n) time using Fibonacci heaps. Ahuja and Orlin rule that performs 0(n^log C) pivots and can be implemented to run in 0(nm log C) time using simple data structures. analysis is Out presentation of the auction algorithm tmd somewhat different that the one given by Bertsekas and Eckstein [1988]. Hence. A naive implementation of the algorithm runs in [1988] described a scaling version of Dantzig's pivot 0(n^m log nC). by the maximum amount Bertsekas is [1981] has presented another algorithm for the assignment problem which cost flow in fact a specialization of his relaxation algorithm for the minimum problem (see Bertsekas [1985]). his algorithm performs 0(n^log nC) pivots. Akgul [1985b] suggested another primal simplex algorithm performing O(n^) pivots. Balinski [1985] developed the signature method. dual feasible basis. This algorithm essentially in amounts to solving n shortest path problems and runs 0(nS(n.C)) time. whereas the algorithm by Bertsekas and Eckstein increases prices that preserves e-optimality of the solution.ISl polynomial-time simplex algorithms. Orlin [1985] studied the theoretical properties of Dantzig's pivot rule for the netvk'ork simplex algorithm and showed that for the eissignment problem this rule requires O(n^lognC) pivots. Roohy-Laleh [1980] developed a simplex pivot rule requiring O(n^) pivots. it it (Although his basic algorithm maintains a is not a dual simplex algorithm in the traditional sense because at does not necessarily increase the dual objective algorithm do have this property. Hung [1983] describes a pivot rule that performs at at most O(n^) consecutive degenerate pivots and most 0(n log nC) nondegenerate pivots. The auction algorithm suggested in Bertsekas [1979].) in every iteration. some variants of this Balinski's algorithm performs O(n^) pivots and runs O(n^) time. initially. the algorithm we have presented increases the prices of the objects by one unit at a time. The algorithm cost.

Martello and Toth [1982] [1988] on the primal-dual method. using bit-scaling of costs. these two algorithms achieve the boimd to solve the assignment problem without using any sophisticated data structure. it is difficult to assess their computational merits. Some representative computational studies are those conducted by Barr. Martello and Trlh [1988] present . the successive shortest path algorithms Among due to Glover et al. three approaches. Glover and Klingman [1977a] on the network simplex method. This time bound For problems satisfying best time is comparable to that of Gabow and Tarjan 's algorithm. results to date seem to justify the following observations about the algorithms' relative performance. They also improved the time bound of the auction algorithm to 0(n^'^m lognC). Nevertheless. Since no paper has compared all of these zilgorithms. As mentioned previously. Section 5. His algorithm performs O(log C) scaling phases and solves each phase in OCn'^'^m) time.Currently. but the two algorithms would probably have different computational attributes.11 has presented a modified version of algorithm in Orlin and Ahuja [1988]. The primal simplex algorithm is slower than the the latter primal-dual. the best strongly polynomial-time bound to solve the assignment algorithms. Carpento. on the relaxation methods. Observe that the generic pseudoflow for the minimum cost flow problem described in Section 5. developed the algorithm for the assignment problem. and by Glover [1986] and Jonker and Volgenant [1987] on the successive shortest path methods. years.8 solves problem in 0(nm log nC) since every push is a saturating push. the similarity assumption. algorithm running in time 0(n^' Gabow and Tarjan [1987] developed another scaling push algorithm the assignment ^m log nC). Gabow [1985] . by Engquist et al. problem is 0(nm + n^ log n) which is achieved by many assignment Scaling algorithms can do better for problems that satisfy the similarity first scciling assumption. by McGinnis [1983] and Carpento. Bertsekas and Eckstein is found that the scaling version of the auction algorithm competitive with Jonker and Volgenant's algorithm. thereby achieving jm OCn'^' ^m log C) time bound. most of the research effort devoted to assignment algorithms has stressed the development of empirically faster algorithms. relaxation and successive shortest path algorithms. showed that the scaling version of the auction Bertsekas and Eckstein [1988] algorithm runs in this 0(nm log nC). Over the many computational studies have compared one algorithm with a few other algorithms. Using the concept of e-optimality. [1986] and Jonker and Volgenant [1988] [1987] appear to be the fastest.

in this chapter assume that arcs the flow entering an arc equals the flow leaving the arc. t for aU i E N (6. j). commodity network flow problems with linear Several other generic topics in the broader problem theoretical (i) network optimization are of considerable and practical interest. (iv) convex cost flows. For example. the multiplier might model pressure losses in a water resource network or losses incurred in the transportation of perishable goods.183 several cases. is a is nonnegative flow multiplier dissociated with the lossy and. i. if i ?t (i. extension of the conventional An maximum two flow problem is the generalized maximum flow problem which either maximizes the flow out of a source the flow into a sink node or maximizes of node (these objectives are different!) The source version the problem can be states as the following linear program. If node 1.j) € A) € A) s.6 Other Topics Our domain of discussion in this paper has featured single costs.1b) [vj.i) "'ji'^ji = K'if» = s S 0. If In xj: models of generalized network flows. Generalized network flows arise in may application contexts. 1 < rj: < then the arc Tjj if 1 < Tj. j.t. Tj. then the arc is gainy. then Tj: Xj: units "arrive" at arc..e. < «>. if i = . four other topics deserve mention: (ii) generalized network flows. We shall now discuss these topics briefly. = for all arcs. Researchers have studied several generalized network flow problems. In particular. FORTRAN implementations of assignment algorithms for dense and sparse 6. In the conventional flow networks. units of flow enter an arc (i. (iii) multicommodity flows. Generalized Network Flows The flow problems we have considered conserve flows. arcs do not necessarily conserve flow. and network design. Maximize v^ (6ia) subject to X {j: "ij {j: S (j.

typically.j) Cjj (x^j). The recent paper by Goldberg. The third approach. j) e A. and Klingman among they Elam it is et al. find their implementation to be very efficient in practice. due to Bertsekeis and Tseng generalizes their minimum cost flow relaxation algorithm for the generalized minimum cost flow problem. however. cost flow algorithm. find that about 2 to 3 times slower than their implementations for the ordinary minimum [1988b]. Plotkin and Tardos [1986] describes the first polynomial-time combinatorial algorithms for the generalized maximum flow problem. the objective function can be written in the form V (i. The second approach [1979] the primal simplex algorithm studied by Elam. .. Problems containing nonconvex nonseparable cost terms such as xj2 e A are substantially X-J3 more difficult to solve and continue to pose a significant challenge for the mathematical programming community. Convex Cost Flows We shall restrict this brief discussion to i. The generalized maximum flow problem has many similarities with the minimum minimum cost flow problem. convex cost flow problems with separable cost functions. but convex objective functions are more difficult to solve. is due to Jewell [1982]. In the generalized minimum cost flow problem.e. note that Vg not necessarily equal to v^. The paper by Truemper [1977] surveys these approaches. are not pseudopolynomial-time. the negative cycle algorithm. The approach. These algorithms. Even problems with nonseparable. we wish to determine the minimum first cost flow in a generalized network satisfying the specified supply/demand requirements of nodes.184 < x^j < uj: . Note that the capacity restrictions apply to the flows entering is the arcs. because of flow losses and gains within arcs. and the primal-dual algorithm for the cost flow problem apply to the generalized maximum flow problem. Glover others. Further. is essentially a primal-dual algorithm. for all (i. These are three main approaches to solve this problem. mainly because the optimal arc flows and node potentials might be fractional. which is an extension of the ordinary minimum cost flow problem. Extended versions of the successive shortest path algorithm.

then we could solve the if problem exactly using a linear approximation for any arc (i. (xjj) is a piecewise linear function. program (see. The separable convex cost flow problem has the follow^ing formulation: Minimize V (i. primal-dual and out-of-kilter algorithms.) (6. to approximate a convex function of one variable to any desired degree of accuracy. j) with only three . and Gupta and suggests a pseudopolynomial time algorithm. (xjj) for each (i. convex problem a priori (which of we knew the optimal solution to a separable course.2b) e A < Ujj . Observe that segments chosen (if it is possible to use a piecewise linear function. classes of Solution techniques used to solve the two problems are quite is different. Batra. However. (xj.j) Cj.j) ^i] {j: € A S (j. The paper by Ahuja. j) e A. with linear necessary) with sufficiently small size. There a well-known technique for transforming linear functions to a linear a separable convex program with piecewise and Magnanti standard [1972]). we don't). (xj.2a) e A subject to Y {j: (i. More elaborate For example. The research community has focused on two (i) classes of separable convex costs flow each Cj.. to solve convex cost flow problems without increasing the problem [1984] illustrates this technique size. negative cycle algorithm. Bradley. (62c) In this formulation. Cj.j) e A. is a convex function. Hax This transformation reduces the convex cost flow problem to a it minimum cost flow problem: introduces one arc for each linear segment in the cost functions.i) ''ji = ^^'^' ^°^ all i € N. it is possible to cost carry out this transformation implicitly and therefore modify many minimum flow algorithms such as the successive shortest path algorithm.g. e. (6. of (ii) a continuously differentiate function. thus increasing the problem size.185 analysts rely on the general nonlinear programming techniques to solve these problems.) is problems: each Cj. alternatives are possible. < x^j for all (i.

an integer optimum solution of Muticommodity Flows Multicommodity flow problems arise when several commodities use the In this section. coarser. and Bertsekas. Klincewicz [1983]. Any other breakpoint in the linear approximation would be irrelevant and adding other points would be computationally wasteful. Kennington and Helgason Meyer and Kao [1981].3a) A subject to . and the optimal flow on the arc. we state programming formulation of the multicommodity minimum problem and its cost flow problem and point the reader to contributions to this specializations. the versions of the convex cost flow problems can be solved in polynomial [1984] has devised a polynomial-time algorithm for Minoux one of [1986] its special mininimum quadratic cost flow problem. topic are Ali. of this approach). This observation has prompted researchers to devise adaptive approximations that iteratively revise the linear approximation beised upon the solution to a previous. using ideas from nonlinear progamming for solving this general separable convex cost flow problems. approximation. If (See Meyer [1979] for an example could we were interested in only integer solutions. Florian [1986]. Hosein and Tseng [1987]. Uj. and therefore solve the problem in pseudopolynomial time. 1 Let denote the supply/demand vector of commodity cost flow Then the multicommodity minimum ^ problem can be formulated as follows: Minimize V 1^=1 V (i. Rockafellar [1984]. same underlying network. Dembo and Klincewicz [1981]. Some time. but share common a linear arc capacities. Researchers have suggested other solution strategies. cases. to obtain Minoux has also developed a polynomial-time algorithm the convex const flow problem. Helgason and Kennington [1978]. Some important references on this [1980].j)e k c^: k x^(6. Suppose through r. that the b*^ problem contains r distinct commodities numbered k.186 breakpoints: at 0. then we choose the breakpoints of the linear approximation at the set of integer values.

With the presence of the bundle the essential problem in a is to distribute the capacity of each arc to individual costs. The multicommodity maximum flow a special instance of In this problem. Hu [1963] showed how network in to solve the two-commodity maximum flow problem on an undirected Rothfarb. As indicated by its the "bundle constraints" (6..3d). subsequently generalized this decomposition approach to linear programming. as captured by (6. (6. Further. (6.j).3d) k In this formulation.3). '^ < u:j.187 k X. restrictions on the flow of each commodity on Observe that it if the multicommodity flow problem does not contain bundle into r constraints. . We refer the reader to .j). (6.j) k k ~ ^i ' ^OT a\] i and k.3c). Shein and pseudopolynomial time by a labeling algorithm. Frisch [1968] showed how source or a to solve the multicommodity maximum flow problem with a common common sink by a single application of any maximum flow algorithm.. Ford and Fulkerson [1958] solved the general multicommodity Dantzig and Wolfe maximum [1960] flow problem using a column generation algorithm.3c). for ^ all (i. represented respectively by to and tK The t*^ maximize the sum of flows that can be sent from s*^ to for all k.j) e A) e A y ktl ' k X.3b) ''ii (i. 1] {j: {j: V (i. one for each commodity. then decomposes single commodity minimum cost flow corxstraints problems. for all (i. (6. Researchers have proposed three basic approaches for solving the general multicommodity minimum resource-directive cost flow problems: price-directive decomposition. < k u.j) and all k . every s*^ commodity k has objective a is source node and a sink node. decomposition and partitioning methods. the total flow on any arc cannot exceed capacity. x-- and k c-- represent the amont of flow and the unit cost of flow for commodity k on arc (i. (63c) < k Xj. the model contains additional capacity each arc. commodities way that minimizes overall flow We problem is first consider some special cases.

Although specialized primal simplex software can solve the single commodity problem 10 to 100 times faster than the general purpose linear programming systems.j) to be zero if not included in the network design. the constraint on arc Ujj (i. in other applications.3). of the form (6.are multicommodity flows. restricts the total included. for example. the network might . in some applications. Typically.188 the excellent surveys by Assad [1978] and Kennington [1978] for descriptions of these methods. Unfortunately. have focused on solution methods that is. Many design problems can be stated as fixed cost network flow problems: is (some) arcs have an associated fixed cost which incurred whenever the arc carries 0-1 variables yjj any flow. These network design models contain is that indicate whether or not an arc included in the network. algorithmic developments on the multicommodity minimum made on cost flow problem have not progressed at nearly the pace as the progress the single commodity minimum cost flow problem. related The design decisions yjj and routing decisions by "forcing" constraints of the form 2 k=l ''ii - "ij yij ^^^ ' ^" ^^'^^ which replace the bundle constraints multicommodity flow problem (6. al. The design problem is of its considerable importance in practice and has generated an extensive literature of own. the algorithms developed for the multicommodity minimum cost flow problems generally solve thse problems about 3 times faster than the general purpose software (see Ali et [1984]). the network must be a tree. for finding optimal routings in a on analysis rather than synthesis.j) flow to be the arc's design capacity constraints Many modelling enhancements are possible. these models involve k x^.3c) in the convex cost k These constraints force the flow the arc is x^- of each if commodity k on the arc is arc (i. some may restrict the underlying network topology (for instance. The book by Kennington and Helgason [1980] describes the details of a primal simplex decomposition algorithm for the multicommodity minimum cost flow problem. Network Design We network.

for these problems as well as many references from the [1988] discuss Nemhauser and Wolsey many underlying methods from integer programming and combinatorial optimization. We are particularly grateful to William Cunningham many valuable and detailed comments. Hershel Safer. Apple Computer. The research Presidential of the first and third authors was supported in part by the Young Investigator Grant 8451517-ECS of the National Science Foundation. 1987] have described the broad range of applicability of network design models and summarize solution methods network design literature. is many different objective functions arise in practise. network design problems require solution techniques from any integer programming and other type of solution methods from combinatorial optimization. Lav^ence Wolsey .Richard Robert Tarjan for a careful reading of the manuscript and many for useful suggestions. dual ascent procedures. Usually. One of the most popular "" Minimize £ ^ k=l (i^j)e k c• k x^^ + Y. and by Grants from Analog Devices. optimization-based heuristics. by Grant AFOSR-88-0088 from the Air Force Office of Scientific Research. .. Benders decomposition) as well as emerging ideas from the field of polyhedral combinatorics. Inc. These solution methods include dynamic programming. and Prime Computer.189 need alternate paths to ensure reliable operations). Magnanti and Wong [1984] and Minoux [1985. Acknowledgments We Wong and are grateful to Michel Goemans. and integer programming decomposition (Lagrangian relaxation. Also. ^ (i.j) A V ij € A (as well zs fixed costs k which models commodity dependent per unit routing costs c Fjj for • the design arcs).

and J. 055-76.B. Addison-Wesley. and S.V. and J. Assignment and Minimum and Ahuja. Implementing Prin\al-E>ual Network Operations Research Center. 1988.K. MA.190 References Aashtiani.I.B. Ahuja. Sloan School of Management. R.. R. K. M. Res. Orlin. R. Orlin. and R. Sloan School of Management.. J. 2047-88. Magnanti. Problem. M. Stein.B. 1988.D. To appear. The Design and Analysis of Computer Algorithms.I. Faster Algorithms for the Shortest Path Problem. Tarjan. To appear Ahuja. L. L. Hop>croft. MA. Tarjan. Research Report.K.E. 222-25 Goldberg. R.I. A.B.of Oper. 1976. MA.B. North Carolina Raleigh. M. R. MA.. Improved Algorithms for Network Flow Problen«.. MA. and Orlin.I. 1988. 1974.. Res.I.E.C. Ullman..K. 193.E.. Ahuja. . J.. R. 1988.A. Gupta. Mehlhom. and R.K. J. M. N. Orlin..K. Cambridge. K. A. Working Paper 1905-87. Orlin.T. 16. Personal Communication.T. A Parametric Algorithm for the Convex Cost Network Flow and Related Problems. Tarjan.E.T. 1987. H. J. 1985a. Kodialam. 1987. . J. M.. Sloan School Management. J. Technical Report Cambridge.K. OR Aho. Improved Time Bounds for the Maximum Flow M. and R.K. Department State University. Akgul. A Fast and Simple Algorithm for the Maximum M. Flow Algorithms. MA..T.B. and T. ]. 1984. Working Paper 1966-87. Cambridge. Tarjan. Improved Primal Simplex Algorithms Cost Flow Problems. 1988. C.T. Finding Minimum-Cost Rows by Double of Scaling. Ahuja. Orlin. in Oper. Bipartite J... Flow Problem. To appear. Technical Report No. R. Ahuja. R. of Shortest Path and Simplex Method.V. Batra. Computer Science and Operations Research. 1988. Euro. J. R. Cambridge.. Ahuja.E. Cambridge. and Ahuja. Operations Research Center. K. Orlin. Reading. Working Paper No. for the Shortest Path.B.

1978. Prog. Laboratory for Computer Science. Department of Computer Science and Assignment Problem. F. A Network Augmenting of the International Path Basis Algorithm for Transshipment Problems. B. A Survey. LIE. M.D.C. Kennington. Balinski. I. Southern Methodist University. 527-536. D. Barahona. Assad. 1987. Shetty. Multicommodity Network Problems: Applications and Computations. and R.. R. Klingman. 16. Helgason. Texeis. 1964. R. and D. L. Glover.. Oper. Klingman.L.. Euro. J.E. Raleigh. Sci.T. Comory. F. of Mathematics. 1977a. Oper. V. Multicommodity Network Flows Balinski. 1980. B. Construction and Analysis of a Network Flow Problem Which Technical Report TM-83. MA. 10. Forces Karzanov Algorithm to O(n^) Running Time. 1977. Symposium on . and E. M.191 Akgul. Ali. Ali. K. 578-593.. 1985. Implementation and Analysis of a Variant of the Dual Method for the Capacitated Transshipment Problem. Proceedings External Methods and System Analysis. N. 403-420. 12.I. Basis Algorithm Ban. Bamett. 1978. Kennington. Dept. Note on Weintraub's Minimum Cost Flow Algorithm. Math.. Operations Research. The Alternating Path for the Assignment Problem. M. Klingman. Research Report. A. Trans. Wong.L. M. Baratz. Cambridge. Barr. Glover. Farhangian. Res. and D. R.. Patty. D. Whitman. 1-13. Networks 8. 33. and D. McCarl and P. A Genuinely Polynomial Primal Simplex Algorithm for the Research Report. A. Armstrong. and J. 1977b. R. MIT... Man. B. Res. F. A. 1985b.127-134. A Primal Method for the Assignment and Transportation Problems. Cambridge.I.37-91. The Convex Cost Netwrork Flow Problem: A State-of-the-Art Survey. 4.E. North Carolina State University. J. 1984. MA. Tardos. Signature Methods for the Assignment Problem. Technical Report OREM 78001..

16.. 21. 1979. D. ]. 1978. Flow Problems with Convex Arc Costs. 1981. Also in Annals 1988.P. P. Greece.. Math. A Unified Framev^ork for Primal-Dual Methods in Minimum Cost Network Flow Problems. A Nev^ Algorithm for the Assignment Problem. INFOR J. Berge.J.T.1219-1243. Programming. QuaH. Prentice-Hall. Working Paper. Relaxation Methods for Network J. 25. 1987. F. 1985. R. M. in Math. IXial Coordinate Step Methods for Linear Network Flow Problems.P. and D. 16-34. Generalized Alternating Path Algorithm for Transportation Problems. Bertsekas.... 1962. Bazaraa. P. & Sons. 87-90. Prog. Bertsekas. P. 1987. Bertsekas. A Distributed Algorithm for the Assignment Problem.P. Res. Bertsekas. R. Appl. Prog.. Enhancement 17. and R. 137-144. Series B. Athens. Distributed Relaxation Methods for Linear Network Flow Problems. A. P. P. Glover. Linear Programming and Network Flows. 1987. Eckstein.. Proc.. 152-171. 105-123. 125-145. Math. MA. Gallager. Prog. Laboratory Cambridge. 1958. M. D. Oper. and 1978. 1986. . Bertsekas.T. and D.I. of Operations Research 14.192 Barr. 32. D.. Data Networks. of Spanning Tree Labeling Procedures for Network Optimization. John Wiley & Sons. Barr. Jarvis. Ghouila-Houri.. D. MA. Report LIDS-P-1653. Glover. of 25th IEEE Conference on Decision and Control. SIAM of Control and Optimization . for Information Decision Systems. C. and J. Klingman. and P. D. M. Games and Transportation Networks. Hosein. and A. Math. Bertsekas. 2. Tseng. D. To appear Bertsekas. On a Routing Problem. D. R. Euro. Cambridge. D. Bertsekas. Laboratory for Information Decision systems. The Auction Algorithm: A Distributed Relaxation Method for the Assignment Problem.P. Klingman.I. John Wiley 1979. Bellman.

Simeone et al. (eds. 86-93. Theory 10. Algorithms and Codes for the Assignment Problem. The Relax Codes al. Technical Report No.. et (ed. In B.P. and J. Research Office. 1988b. and P. Van Emde. 1986. P. Personal Communication. Boas. of Operations Research 13. Kaas. and M.R.P. and E. On the Computational Behavior of a Polynomial-Time Network Flow Algorithm. 10. Minimal-Cost Network Flow Patterns. Parametrized Worst Case Networks for Preflow Push Algorithms. S. Cornell University. 1977. Busaker. Carraresi.. and P. Bland. of Operations Research 33. 1985. Res. N. 23. D. Simeone. Tseng. Gowen. FORTRAN Codes for Network As Annals and P. 1961. and Orlin. In B. J. P. Comp.J. D. Optimization.. Graves. A. Eur. Bertsekas. 21. 1988a. Relaxation Methods for Minimum Cost Ordinary and Generalized Network Flow Problems. Martello. and D. Oper. Computer Science Group. Brown. Magnanti. R. Technical Report. and G. . B. 93-114. Res. 65-211. Jensen. for Linear Minimum Cost Network Flow Problems. Bodin. An Efficient Algorithm for the Bipartite Matching Problem. Sodini. 1-38. A. Routing and Scheduling and Crews.G. O. Carpento. R. Tseng.. 1986. 1988. Applied Mathematical Programming. G. Assad. A.. Operational MD. Design and Implementation of Large Sri. Sys. Toth. A.O. Addison-Wesley. A Procedure for Determining a Family of 15. Bradley. 1983. D. Hax. Cheriyan. Oper.. and T. Technical Report 661.). India. FORTRAN Codes for Network As Annals and J. S. 1977.). Baltimore. Design and Implementation of an Efficient Priority Queue. Oper.Y. C. Bombay. Optimization. 193-224.B. C. John Hopkins University. Tata Institute of Fundamental Research.L. L. School of Operations Research and Industrial Engineering.G.. L. Ball. 1977. G. of Vehicles L. 125-190.. P. Scale Primal Transshipment Algorithms. Golden. 1988. Math. Bradley. O. 36.. 99-127. Res.. Boyd. R. Ithaca. and P. Zijlstra. G.193 Bertsekas. Man.

1980. G. Analysis of Production and Allocation. Pro^. New Delhi.194 Cheriyan. Kuhn and A. (ed. 1977. Dantzig. Economeirica 23. and Block Triangularity Programming. India. Cherkasky. Flow. R. Dantzig. G.W. . Activity Koopmans 359-373. Oper.). Theoretical Properties of the Network Simplex Method. All Shortest Routes in a Graph. In H.H. Sd. Indian Institute of Technology.H. 8. N. 1975. 187-190.. Graph Theory : An Algorithmic Approach. On the Shortest Route through a Network. Mafft. 1-16. Vl ) Operation. and P. Princeton. W. 1962. A Network Simplex Method. J. On the Max-Flow Min-Cut Theorem of Networks. 1976. Computational Comparison of Eight Methods for the Mzocimum Network Flow Problem. B. Cheung. Application of the Simplex Method to a Transportation Problem. Rfs. and S. Man. Analysis of Preflow Push Algorithms for Maximum Network Technical Report. Dantzig. and D. of Oper. 101-111. Rosenthiel Graphs. Dantzig. Cunningham. 105-116. (ed. John Wiley & Sons. 1951. Linear Inequalities and Related Systems. Academic Press. 6. Princeton University Press. Christophides. Upper Bounds. ACM Trans. 196-208. NY. Dantzig.B.. NJ. 1987. G. in Linear 1955. Res. Annals of Mathematics Study 38.B.). Cunningham. Dantzig.V. 1960.B. 11. 4.W. of Computer Science and Engineering. Inc. Maheshwari. on Math. Dept. Fulkerson. Wolfe.C. Dantzig. Software 6. In P.B. Princeton University Press. 91-92. G.B. 1979. G. 1967. G.R. Algorithm for Cor\struction of Maximum Flow in Networks with Complexity of OCV^ Economical Problems 7. Tucker (ed. T. 1956. 174-183.B.N. Mathematical Methods of Solution of 112-125 (in Russian). In T.). Decomposition Principle for Linear Programs. Math. Linear Programming and Extensions. W. Secondary Constraints. 1960. Theory of Gordon and Breach.. G.. 215-221.

1984. Networks 9. E. D. 1970. An Algorithm for Solution of the Assignment Problem. 1985. Study 15. E. Dial. U. S. and M. Reaching. F. Shortest-Route Methods: 1. Res. Doklady 10.A. Network Flow Problen\s with Convex Separable Deo. and B. R. Shortest Path Algorithms: Taxonomy and Annotation. Derigs.. U. Fox.L. Dinic. Kamey. Soviet Maths. Canada.. Springer-Verlag. Programming in Networks and Graphs.V. 1981.A. Glover. Motivation and Computational Experience. 1277-1280. Kronrod. 1970. 161-186. 632-633. 1969. ACM 12. Unpublished paper. 2-[5-248. A Computational Arvalysis of Alternative Algorithms and Labeling Techniques for Finding Shortest Path Trees. Lecture Notes in Economics and Mathematical Systems.57-102. West Germany. Exponential Grov^h of the Simplex Method for the Shortest Path Problem.. 1979. Algorithm for Solution of a Problem of Soviet Maximum Flow in Networks with Power Estimation. N.. University of Bayreuth. Pruning and Buckets. 125-147. Math. Dinic. 1959. and Vol. and J. 300.A. Dijkstra. Implementing Goldberg's Max-Flow Algorithm: A Computational Investigation. Prog. University of Waterloo. . Comm..269-271. 1979. Numeriche Mathematics 1. R. E. Meier. Dial. 1969. J.195 Dembo. and C Pang. G. 11. E. R. Annals of Operations Research Derigs. A Scaled Reduced Gradient Algorithm for Costs. 1324-1326. A Note on Two Problems in Connexion with Graphs. Denardo. Algorithm 360: Shortest Path Forest with Topological Ordering. The Shortest Augmenting Path Method for Solving Assignment Problems: 4. U. 1988. 27. Klingman. Edmonds. Dokl. Klincewicz. Derigs. Oper. 275-323. Networks 14. Technical Report.. and D. Math. W. 1988. Ontario.

370-384.. Laboratory for Computer Science.. L. Network Flow and Testing Graph Connectivity. Theoretical Improvements in Algorithmic Efficiency for Network Flow Problems. 1956. 1956. M. CA. and R. Fulkerson. Klingman. and C. Ford. Sd. 1956. On the Efficiency of Maximum Flow To appear in Algorithms on Networks with Small Integer Capacities. Prog. 4.E. and D. AM Comput.R. Even. Feiitstein.E. Floyd. P. 1962. on Engquist.R. M. Shannon. Comm. Santa Monica.. J. J.R. Jr.196 Edmonds. 8. Maximal Flow through a Network. Martel. M. Elam. Graph Algorithms. 39-59. and D. Technical Report TM-80.M. Jr. Iowa Algorithmica. Elias. and D.T. 1979. of Oper. INFOR 20. Even... Karp. and R. L. State University. 5. A Successive Shortest Path Algorithm for the Assignment Problem.. A. Ford. S.. Glover. 1987. 1986. Note on Maximum Flow Through a Network. Cambridge. }.. Femandez-Baca. >4CM P-923. Research Report. 3. 248-264. 1979. . /. Solving the Trar\sportation Problem. Algorithm 97: Shortest Path. D. 399-404. lA.. 167-196. Math. 1976. 507-518. Nonlinear Cost Network Models in Transportation Analysis. MA. Network Flow Theory. Maryland. A Strongly Convergent Primal Simplex Algorithm for Generalized Networks. 117-119. L. 4. Tarjan. ACM 19.W. 1982. R. Even. Department of Computer Science. Res. Ford. Man. Florian. Canad. and C...I. 1972. S.R. Study 26. SI S. 24-32. Fulkerson. The Max-Flow Algorithm of Dinic and Karzanov: An Exposition. Theory TT-2.U.R. IRE Trans. F. Computer Science Press. Ames. Report Rand Corp.. J. Jr. 1956. 1975. Math. 345. Math. Infor.

Quart. 197 Ford. Naval Res.B.R. Dantzig. 35. I. 419-433. 97-101. Ford. Frank.. R. Naval Res.Sys. L. John Wiley & Sons. An Out-of-Kilter Method for Minimal Cost Flow Problems. D.ofComput. Res. H. Math.. A Suggested Computation for Maximal Multicommodity Network Flow. and D. 1988.E. Ford. on the Complexity of the Shortest Path Problem. R. Communication. Fulkerson. Fulkerson. and R. H. 1962. on Found.E. 2. Tarjan. 9. Math. L. Francis. 1958. Princeton University Press. Gabow. 1984. 277-283... Flows in Networks. 1986. 1971. S.R. 148-168. Fulkerson. 1987.. 1955. and D. Mirchandani (eds. Sci of ACM 34(1987).R. NJ.N. and P.. A Primal-Dual Algorithm for the Capacitated Hitchcock Problem. SIAM ]. Fulkerson. 1957. L. New Bounds 5. 5. 1958. L. M. Oper. Tarjan. SIAM J. 18-27. Man. 4. 338-346. Prog.. 47-54. Transmission. of Computing 83 - 89. Quart. Fibonacci Heaps and Their Uses in of Improved Network Optimization Algorithms.R. Computation of Maximum Flow in Networks. Constructing Maximal Dynamic Flows from Static Flows. . 1961. 1986. Logist.. R.. Comput. J. and D. H. Scaling Algorithms for Network Problems. Jr. Discrete Location Theory. Appl. R. M.R. Log.Sci..). Jr.T. (submitted). Jr...R. Faster Scaling Algorithms for Network SIAM ]. Ford. D. Cost Circulation 298-309. 25th Annual IEEE Symp. and Frisch.N.. Fredman. and C. 6. An 0(m^ log n) Capacity -Rounding Algorithm for the Minimum Problem: A Dual Framework of Tardos' Algorithm.. 1985. To appear. 31. and DR. and Transportation Networks. Fredman. and Problems. Fujishige.. Gabow.R. Fulkerson. Sci. Comp. Princeton.L. L. 596-615. Fulkerson. Addison-Wesley. also in /.

F. F. Pallottino. Z. A Comparison of Pivot Selection Rules for Primal Simplex Based Network Codes. Klingman. Netxvorks 14.. and D. Z. on the Found. ofComput. F. D. 199-202. and Primal-Dual Computer Codes 4. and S. Naamad. Kamey. Study 26.. R. Gallo. Gilsinn. National Algorithms for Calculating Shortest Path Trees. Threshold Assignment Algorithm. D. 21. of Comp. . Glover. Klingman. Glover. Minimum Cost Network Eow Problem. S. EXial 1974.. Oper. Bureau of Standards. C. 1982. 1977.198 GaUl. Sci.). Rome. No.. and M. An 0(n^(m + n log n) log n) Sci. Acta Informatica 14. P. Networks 191-212. A Performance Comparison of Labeling Technical Note 772. The Threshold Shortest Path Algorithm.. Z.. 221-242. 136-146. Sys. On the Theoretical Efficiency of Various 103-111. B.. Klingman. Washington. 1984. Simeone.C.. Network Flow Algorithms. and S. J. Galil. and D. Gavish. Maffioli. G. Prog. and Its E. Z. Prog. P. Shortest Path Algorithms. 12. Witzgall. . Res. 1973. and A. Glover. Glover. Math. Theoretical Comp. 14. 226-240. Proc. Min-Cost Flow Algorithm. Sofmat Document 81 -PI -4-SOFMAT-27. Klingman. 203-217. 1980. G. D. D. B. Shortest Paths: A Bibliography. 1981. Shlifer.. 1980. G. Pallottino. An 0(VE log^ V) Algorithm for the Maximum Flow Problem. /. and E. Galil.. and C. 1. R. Pallottino As Annals of Operations Research 13. Gallo. Toth. 1988. OCV^/S E^/^) Algorithm for the Maximum Flow Problem. Glover. Tardos. Letters 2. 12-37. 3-79. Mead. 1986. and G. 27th Annual Symp. Gibby. (eds. 1986. Gallo. Glover. and D. F. Italy. Ruggen. Sci. 1983. In Fortran Codes for Network Optimization. Galil. Math. F. Implementation and Computational for Comparisons of Primal. The Zero Pivot Phenomenon in Transportation Problems and Computational Implications. Schweitzer. Starchi.

1985. Cambridge. J. and A. To appear in ACM.199 Glover. Oper. Schneider. Res. D. Glover.I. Kamey.V. A Primal Simplex Variant Maximum Flow F.T. Augmented Threaded Index Method for Network Optimization. Problem. 1106-1128. and Tardos. 136-146. J. Proc. and D.. Goldberg.. F. Combiiuitorial Algorithms for the Generalized Circulation Problem. for the F.. Klingman. and D.. 363-376..V. Plotkin. 1976. and RE.. Tarjan. Phillips. 1979. Research Report. D. 65-73. 1974. Solving Minimum Cost Flow Problem by of Proc. A. Goldberg. 1985. 1984. S. Laboratory for Computer MA. on the Theory Comp. Goldberg. Successive Approximation. A New Max-Flow for Algorithm. 33. Laboratory Computer Science. and D.I. Problem.V.. Technical Report MIT/LCS/TM-291. and J. M. . Whitman.A. A Computational Study on for Tranportation Start Procedures. A. Whitman. Glover. A New Polynomially Bounded Shortest Path Algorithm. on the Theory of Comput. R. Comprehensive Computer Evaluation and Enhancement of Maximum Flow Algorithms. 20.. F. Cambridge. Klingman. and R. D. E. Mote. 19th ACM Symp.. INFOR Goldberg... AIIE Transactions Glover. Klingman. Glover. 1986. D. Napier. 136-146. A. Netvk'ork Applications in Industry and Government. D. Logis. 9. Klingman. Tarjan. N. 18th ACM Symp. 1988. Stutz. 293-298. Sd. Naval Res. New Polynomial Sci.. 41-61. Klingman. Klingman. Klingman. MA. Quart. 31. 1985. 1987. Man.F. Man. Change Criteria. 109-175. Applications of Management Glover. 793-813. Mote. Basis and Solution Algorithms Problem. M. 1974. D. Science 3. 31. 12. and N. A New Approach to the Maximum Flow /. D. F.E. Science.T. Shortest Path Algorithms and Their Computational Attributes. Glover. and R. A. F. Phillips.V.

1988a. NY. Hao. Goldfarb.361-371. 33. Gomory. Hao. 1988.V.. f. I. Technical Report. Optimization. Reid. and R. Canceling Negative Cycles. 1988. Deterministic Network Optimization: A Bibliography. Kai.ofSlAM 9..V. MA. A. D. J. B. 388-397. A Primal Simplex Algorithm that Solves the Maximum Flow Problem University. 1961. Goldberg.. 1986. 1988b. on the Theory of Comp.. New York. (eds. 12. Seminar given OperatJons Research Center. Proc. Hao. Research Report. Efficient Dual Simplex Algorithms for the Assignment Problem. Oper. A. As Annals of Operations Research 13. and T. and R. A Practicable Steepest Edge Simplex Algorithm. and J. L. Columbia New York. Grigoriadis.. and M. Prog.. 2(Hh ACM Golden. Magnanti. J. 83-124. Columbia University. In B. NY. R. D.. Math. Department of Operations Research and Industrial Engineering. and S. D. Research Report. D. Taijan. . A Computational Comparison of the Dinic Flow. 1986. 1987.200 Goldberg. D. M. )To (A revision of Goldberg and Tarjan appear in Math. Tarjan. Controlled Rounding of Tabular Data for the Cerisus Bureau at the : An Application of LP and Networks. and S. Networks 149-183. T. 1985. Goldfarb. Kai. Industrial Engineering. C. in New York.) FORTRAN Codes for Network Goldfarb. Solving Minimum Cost Flow Problem by [1987].E. Multi-Terminal Network Flows. Goldfarb. and T. D. Department of Operations Research and Columbia University. NY.D.. Math. At Most nm Pivots and O(n^m) Time. .E.. Finding Minimum-Cost Circulations by Symp. E. Goldfarb. Successive Approximation. Hu. Efficient Shortest Path Simplex Algorithms. and Network Simplex Methods for Maximum Simeone et al. Cambridge. 551-570.K.. Res. Department of Operations Research and Industrial Engineering. 1977. 1S7-203. B. and J. Prog. Anti-Stalling Pivot Rules for the Network Simplex Algorithm. Golden. 7. 1977. Goldfarb.

of a Product from Several Sources to Numerous Facilities. M. B. Network Row. 26. 1978. 1985. Quart. of for All Pairs Network Flow Analysis. SIAM of Comp... 1973. 11. Naval Hopcroft. Fast Algorithms for Bipartite Gusfield. 1984. Kennington. Minoux. 20. Yale Haven. CT. Davis. 1963. Phys . 17-29. M. Wiley-Interscience. Personal Communication. 17-18. Research Report No. 1988. J. Res. Multicommodity Network Flows. SIGMAP 1987. . New Hamachar. Prog. and J.. Integer SIAM J. D. 63-68. and M. An n ' Algorithm for Maximun Matching in Bipartite Graphs. and D. C. D. An Efficient Implementation of the Network Simplex Method. 225-231. Hu. 375-379. Subroutines. CSE-87-1. L. Oper. and T. Res. 1979. Karp. . T. Comput. Maximum Flow in Undirected Planar Networks. R. An Efficient Procedure for 9. An O(nlog^n) Algorithm for 14. Graphs and Algorithms. Springer-Verlag. 1985. CA. 2. Grigoriadis. H. Implementing Hitchcock. The Distribution Math. 344-260. and D. University of California. Programming and Related Areas: A Classified Bibliography.-< Karzanov. 1963. a Dual-Simplex Network Flow Algorithm. Study Grigoriadis. 1986. Femandez-Baca. Hausman. and Transportation Problems. Lecture Notes in Economics and Mathematical Systems. 83-111. C. Computing Hassin.201 Gondran... 224-230. and H. D. J. 1941. Bulletin of the ACM Gusfield. Hoffman. R. Math. A. The Rutgers Minimum Cost Network Flow 26.M. E. AIIE Trans. /. . A Note on Shortest Path. V. Martel. J. YALEN/DCS/TR-356. Hsu. 1977. Helgason. Johnson. Grigoriadis. Markowitz. D. Assignment. Numerical Investigations on the Maximal Flow Algorithm of 22. Technical Report No. Very Simple Algorithms and Programs Dept. and R. Log. M. 1979. 160. 612-^24.. University. 10. M. M.. Computer Science and Engineering. L. D. D. F. Vol.

Hu, T. C. 1969. Integer Programming and Network Flows. Addison-Wesley.

Hung, M. S. 1983. A Polynomial Simplex Method for the Assignment Problem. Oper. Res. 31, 595-600.

Hung, M. S., and W. O. Rom. 1980. Solving the Assignment Problem by Relaxation. Oper. Res. 28, 969-982.

Imai, H. 1983. On the Practical Efficiency of Various Maximum Flow Algorithms. J. Oper. Res. Soc. Japan 26, 61-82.

Imai, H., and M. Iri. 1984. Practical Efficiencies of Existing Shortest-Path Algorithms and a New Bucket Algorithm. J. of the Oper. Res. Soc. Japan 27, 43-58.

Iri, M. 1960. A New Method of Solving Transportation-Network Problems. J. Oper. Res. Soc. Japan 3, 27-87.

Iri, M. 1969. Network Flow, Transportation and Scheduling. Academic Press.

Itai, A., and Y. Shiloach. 1979. Maximum Flow in Planar Networks. SIAM J. Comput. 8, 135-150.

Jensen, P. A., and W. Barnes. 1980. Network Flow Programming. John Wiley & Sons.

Jewell, W. S. 1958. Optimal Flow Through Networks. Interim Technical Report No. 8, Operations Research Center, M.I.T., Cambridge, MA.

Jewell, W. S. 1962. Optimal Flow Through Networks with Gains. Oper. Res. 10, 476-499.

Johnson, D. B. 1977a. Efficient Algorithms for Shortest Paths in Sparse Networks. J. ACM 24, 1-13.

Johnson, D. B. 1977b. Efficient Special Purpose Priority Queues. Proc. 15th Annual Allerton Conference on Comm., Control and Computing, 1-7.

Johnson, D. B. 1982. A Priority Queue in Which Initialization and Queue Operations Take O(log log D) Time. Math. Sys. Theory 15, 295-309.

Johnson, D. B., and S. Venkatesan. 1982. Using Divide and Conquer to Find Flows in Directed Planar Networks in O(n^(3/2) log n) Time. In Proceedings of the 20th Annual Allerton Conference on Comm., Control, and Computing. Univ. of Illinois, Urbana-Champaign, IL.

Johnson, E. L. 1966. Networks and Basic Solutions. Oper. Res. 14, 619-624.

Jonker, R., and T. Volgenant. 1986. Improving the Hungarian Assignment Algorithm. Oper. Res. Letters 5, 171-175.

Jonker, R., and A. Volgenant. 1987. A Shortest Augmenting Path Algorithm for Dense and Sparse Linear Assignment Problems. Computing 38, 325-340.

Kantorovich, L. V. 1939. Mathematical Methods in the Organization and Planning of Production. Publication House of the Leningrad University, 68 pp. Translated in Man. Sci. 6 (1960), 366-422.

Kapoor, S., and P. Vaidya. 1986. Fast Algorithms for Convex Quadratic Programming and Multicommodity Flows. Proc. of the 18th ACM Symp. on the Theory of Comp., 147-159.

Karmarkar, N. 1984. A New Polynomial-Time Algorithm for Linear Programming. Combinatorica 4, 373-395.

Karzanov, A. V. 1974. Determining the Maximal Flow in a Network by the Method of Preflows. Soviet Math. Doklady 15, 434-437.

Kastning, C. 1976. Integer Programming and Related Areas: A Classified Bibliography. Lecture Notes in Economics and Mathematical Systems, Vol. 128. Springer-Verlag.

Kelton, W. D., and A. M. Law. 1978. A Mean-time Comparison of Algorithms for the All-Pairs Shortest-Path Problem with Arbitrary Arc Lengths. Networks 8, 97-106.

Kennington, J. L. 1978. Survey of Linear Cost Multicommodity Network Flows. Oper. Res. 26, 209-236.

Kennington, J. L., and R. V. Helgason. 1980. Algorithms for Network Programming. Wiley-Interscience, NY.

Kershenbaum, A. 1981. A Note on Finding Shortest Path Trees. Networks 11, 399-400.

Klein, M. 1967. A Primal Method for Minimal Cost Flows. Man. Sci. 14, 205-220.

Klincewicz, J. G. 1983. A Newton Method for Convex Separable Network Flow Problems. Networks 13, 427-442.

Klingman, D., A. Napier, and J. Stutz. 1974. NETGEN: A Program for Generating Large Scale Capacitated Assignment, Transportation, and Minimum Cost Flow Network Problems. Man. Sci. 20, 814-821.

Koopmans, T. C. 1947. Optimum Utilization of the Transportation System. Proceedings of the International Statistical Conference, Washington, DC. Also reprinted as supplement to Econometrica 17 (1949).

Kuhn, H. W. 1955. The Hungarian Method for the Assignment Problem. Naval Res. Log. Quart. 2, 83-97.

Lawler, E. L. 1976. Combinatorial Optimization: Networks and Matroids. Holt, Rinehart and Winston.

Magnanti, T. L. 1981. Combinatorial Optimization and Vehicle Fleet Planning: Perspectives and Prospects. Networks 11, 179-214.

Magnanti, T. L., and R. T. Wong. 1984. Network Design and Transportation Planning: Models and Algorithms. Trans. Sci. 18, 1-56.

Malhotra, V. M., M. P. Kumar, and S. N. Maheshwari. 1978. An O(|V|^3) Algorithm for Finding Maximum Flows in Networks. Inform. Process. Lett. 7, 277-278.

Martel, C. U. 1987. A Comparison of Phase and Non-Phase Network Flow Algorithms. Research Report, Dept. of Electrical and Computer Engineering, University of California, Davis, CA.

McGinnis, L. F. 1983. Implementation and Testing of a Primal-Dual Algorithm for the Assignment Problem. Oper. Res. 31, 277-291.

Mehlhorn, K. 1984. Data Structures and Algorithms. Springer-Verlag.

Meyer, R. R. 1979. Two Segment Separable Programming. Man. Sci. 25, 285-295.

Meyer, R. R., and C. Y. Kao. 1981. Secant Approximation Methods for Convex Optimization. Math. Prog. Study 14, 143-162.

Minieka, E. 1978. Optimization Algorithms for Networks and Graphs. Marcel Dekker, New York.

Minoux, M. 1984. A Polynomial Algorithm for Minimum Quadratic Cost Flow Problems. Eur. J. Oper. Res. 18, 377-387.

Minoux, M. 1985. Network Synthesis and Optimum Network Design Problems: Models, Solution Methods and Applications. Technical Report, Laboratoire MASI, Universite Pierre et Marie Curie, Paris, France.

Minoux, M. 1986. Solving Integer Minimum Cost Flows with Separable Convex Cost Objective Polynomially. Math. Prog. Study 26, 237-239.

Minoux, M. 1987. Network Synthesis and Dynamic Network Optimization. Annals of Discrete Mathematics 31, 283-324.

Minty, G. J. 1960. Monotone Networks. Proc. Roy. Soc. London 257, Series A, 194-212.

Moore, E. F. 1957. The Shortest Path through a Maze. In Proceedings of the International Symposium on the Theory of Switching Part II; The Annals of the Computation Laboratory of Harvard University 30, Harvard University Press, 285-292.

Mulvey, J. 1978a. Pivot Strategies for Primal-Simplex Network Codes. J. ACM 25, 266-270.

Mulvey, J. 1978b. Testing a Large-Scale Network Optimization Program. Math. Prog. 15, 291-314.

Murty, K. G. 1976. Linear and Combinatorial Programming. John Wiley & Sons.

Nemhauser, G. L., and L. A. Wolsey. 1988. Integer and Combinatorial Optimization. John Wiley & Sons.

Orden, A. 1956. The Transshipment Problem. Man. Sci. 2, 276-285.

Orlin, J. B. 1983. Maximum-Throughput Dynamic Network Flows. Math. Prog. 27, 214-231.

Orlin, J. B. 1984. Genuinely Polynomial Simplex and Non-Simplex Algorithms for the Minimum Cost Flow Problem. Technical Report No. 1615-84, Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA.

Orlin, J. B. 1985. On the Simplex Algorithm for Networks and Generalized Networks. Math. Prog. Study 24, 166-178.

Orlin, J. B. 1988. A Faster Strongly Polynomial Minimum Cost Flow Algorithm. Proc. 20th ACM Symp. on the Theory of Comp., 377-387.

Orlin, J. B., and R. K. Ahuja. 1987. New Scaling Algorithms for the Assignment and Minimum Cycle Mean Problems. Working Paper 1908-87, Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA.

Orlin, J. B., and R. K. Ahuja. 1988. New Distance-Directed Algorithms for Maximum Flow and Parametric Maximum Flow Problems. Working Paper No. OR 178-88, Operations Research Center, M.I.T., Cambridge, MA.

Papadimitriou, C. H., and K. Steiglitz. 1982. Combinatorial Optimization: Algorithms and Complexity. Prentice-Hall.

Pape, U. 1974. Implementation and Efficiency of Moore-Algorithms for the Shortest Route Problem. Math. Prog. 7, 212-222.

Pape, U. 1980. Algorithm 562: Shortest Path Lengths. ACM Trans. Math. Software 6, 450-455.

Phillips, D. T., and A. Garcia-Diaz. 1981. Fundamentals of Network Analysis. Prentice-Hall.

Pollack, M., and W. Wiebenson. 1960. Solutions of the Shortest-Route Problem: A Review. Oper. Res. 8, 224-230.

Potts, R. B., and R. M. Oliver. 1972. Flows in Transportation Networks. Academic Press.

Rock, H. 1980. Scaling Techniques for Minimal Cost Network Flows. In V. Pape (ed.), Discrete Structures and Algorithms. Carl Hanser, Munich, 181-191.

Rockafellar, R. T. 1984. Network Flows and Monotropic Optimization. Wiley-Interscience.

Roohy-Laleh, E. 1980. Improvements to the Theoretical Efficiency of the Network Simplex Method. Unpublished Ph.D. Dissertation, Carleton University, Ottawa, Canada.

Rothfarb, B., N. P. Shein, and I. T. Frisch. 1968. Common Terminal Multicommodity Flow. Oper. Res. 16, 202-205.

Sheffi, Y. 1985. Urban Transportation Networks: Equilibrium Analysis with Mathematical Programming Methods. Prentice-Hall.

Shiloach, Y. 1978. An O(nI log^2 I) Maximum Flow Algorithm. Technical Report STAN-CS-78-702, Computer Science Dept., Stanford University, Stanford, CA.

Shiloach, Y., and U. Vishkin. 1982. An O(n^2 log n) Parallel Max-Flow Algorithm. J. Algorithms 3, 128-146.

Sleator, D. D., and R. E. Tarjan. 1983. A Data Structure for Dynamic Trees. J. Comput. Sys. Sci. 26, 362-391.

Smith, D. K. 1982. Network Optimisation Practice: A Computational Guide. John Wiley & Sons.

Srinivasan, V., and G. L. Thompson. 1973. Benefit-Cost Analysis of Coding Techniques for the Primal Transportation Algorithm. J. ACM 20, 194-213.

Swamy, M. N. S., and K. Thulasiraman. 1981. Graphs, Networks, and Algorithms. John Wiley & Sons.

Syslo, M. M., N. Deo, and J. S. Kowalik. 1983. Discrete Optimization Algorithms. Prentice-Hall, New Jersey.

Tabourier, Y. 1973. All Shortest Distances in a Graph: An Improvement to Dantzig's Inductive Algorithm. Disc. Math. 4, 83-87.

Tardos, E. 1985. A Strongly Polynomial Minimum Cost Circulation Algorithm. Combinatorica 5, 247-255.

Tarjan, R. E. 1983. Data Structures and Network Algorithms. SIAM, Philadelphia, PA.

Tarjan, R. E. 1984. A Simple Version of Karzanov's Blocking Flow Algorithm. Oper. Res. Letters 2, 265-268.

Tarjan, R. E. 1986. Algorithms for Maximum Network Flow. Math. Prog. Study 26, 1-11.

Tarjan, R. E. 1987. Personal Communication.

Tarjan, R. E. 1988. Personal Communication.

Tomizawa, N. 1972. On Some Techniques Useful for Solution of Transportation Network Problems. Networks 1, 173-194.

Truemper, K. 1977. On Max Flows with Gains and Pure Min-Cost Flows. SIAM J. Appl. Math. 32, 450-456.

Vaidya, P. 1987. An Algorithm for Linear Programming which Requires O(((m+n)n^2 + (m+n)^1.5 n)L) Arithmetic Operations. Proc. of the 19th ACM Symp. on the Theory of Comp., 29-38.

Van Vliet, D. 1978. Improved Shortest Path Algorithms for Transport Networks. Transp. Res. 12, 7-20.

Von Randow, R. 1982. Integer Programming and Related Areas: A Classified Bibliography 1978-1981. Lecture Notes in Economics and Mathematical Systems, Vol. 197. Springer-Verlag.

Von Randow, R. 1985. Integer Programming and Related Areas: A Classified Bibliography 1981-1984. Lecture Notes in Economics and Mathematical Systems, Vol. 243. Springer-Verlag.

Wagner, R. A. 1976. A Shortest Path Algorithm for Edge-Sparse Graphs. J. ACM 23, 50-57.

Warshall, S. 1962. A Theorem on Boolean Matrices. J. ACM 9, 11-12.

Weintraub, A. 1974. A Primal Algorithm to Solve Network Flow Problems with Convex Costs. Man. Sci. 21, 87-97.

Weintraub, A., and F. Barahona. 1979. A Dual Algorithm for the Assignment Problem. Departmente de Industrias Report No. 2, Universidad de Chile-Sede Occidente, Chile.

Whiting, P. D., and J. A. Hillier. 1960. A Method for Finding the Shortest Route Through a Road Network. Oper. Res. Quart. 11, 37-40.

Williams, J. W. J. 1964. Algorithm 232: Heapsort. Comm. ACM 7, 347-348.

Zadeh, N. 1972. Theoretical Efficiency of the Edmonds-Karp Algorithm for Computing Maximal Flows. J. ACM 19, 184-192.

Zadeh, N. 1973a. A Bad Network Problem for the Simplex Method and Other Minimum Cost Flow Algorithms. Math. Prog. 5, 255-266.

Zadeh, N. 1973b. More Pathological Examples for Network Flow Problems. Math. Prog. 5, 217-224.

Zadeh, N. 1979. Near Equivalence of Network Flow Algorithms. Technical Report No. 26, Dept. of Operations Research, Stanford University, Stanford, CA.
