^"V.

^^

Dewey

WORKING PAPER
ALFRED P. SLOAN SCHOOL OF MANAGEMENT

NETWORK FLOWS
Ravindra K. Ahuja Thomas L. Magnanti James B. Orlin

Sloan W.P. No. 2059-88

August 1988 Revised: December, 1988

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 50 MEMORIAL DRIVE CAMBRIDGE, MASSACHUSETTS 02139


NETWORK FLOWS

Ravindra K. Ahuja*, Thomas L. Magnanti, and James B. Orlin

Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA 02139

* On leave from Indian Institute of Technology, Kanpur 208016, INDIA


NETWORK FLOWS

OVERVIEW

1. Introduction
   1.1 Applications
   1.2 Complexity Analysis
   1.3 Notation and Definitions
   1.4 Network Representations
   1.5 Search Algorithms
   1.6 Developing Polynomial-Time Algorithms
2. Basic Properties of Network Flows
   2.1 Flow Decomposition Properties and Optimality Conditions
   2.2 Cycle Free and Spanning Tree Solutions
   2.3 Networks, Linear and Integer Programming
   2.4 Network Transformations
3. Shortest Paths
   3.1 Dijkstra's Algorithm
   3.2 Dial's Implementation
   3.3 R-Heap Implementation
   3.4 Label Correcting Algorithms
   3.5 All Pairs Shortest Path Algorithm
4. Maximum Flows
   4.1 Labeling Algorithm and the Max-Flow Min-Cut Theorem
   4.2 Decreasing the Number of Augmentations
   4.3 Shortest Augmenting Path Algorithm
   4.4 Preflow-Push Algorithms
   4.5 Excess-Scaling Algorithm
5. Minimum Cost Flows
   5.1 Duality and Optimality Conditions
   5.2 Relationship to Shortest Path and Maximum Flow Problems
   5.3 Negative Cycle Algorithm
   5.4 Successive Shortest Path Algorithm
   5.5 Primal-Dual and Out-of-Kilter Algorithms
   5.6 Network Simplex Algorithm
   5.7 Right-Hand-Side Scaling Algorithm
   5.8 Cost Scaling Algorithm
   5.9 Double Scaling Algorithm
   5.10 Sensitivity Analysis
   5.11 Assignment Problem
6. Reference Notes
References


Perhaps no subfield of mathematical programming is more alluring than network optimization. Highway, rail, electrical, communication and many other physical networks pervade our everyday lives. As a consequence, even non-specialists recognize the practical importance and the wide ranging applicability of networks. Moreover, because the physical operating characteristics of networks (e.g., flows on arcs and mass balance at nodes) have natural mathematical representations, practitioners and non-specialists can readily understand the mathematical descriptions of network optimization problems and the basic nature of techniques used to solve these problems. This combination of widespread applicability and ease of assimilation has undoubtedly been instrumental in the evolution of network planning models as one of the most widely used modeling techniques in all of operations research and applied mathematics.

Network optimization is also alluring to methodologists. Networks provide a concrete setting for testing and devising new theories. Indeed, network optimization has inspired many of the most fundamental results in all of optimization. For example, price directive decomposition algorithms for both linear programming and combinatorial optimization had their origins in network optimization. So did cutting plane methods and branch and bound procedures of integer programming, primal-dual methods of linear and nonlinear programming, and polyhedral methods of combinatorial optimization. In addition, networks have served as the major prototype for several theoretical domains (for example, the field of matroids) and as the core model for a wide variety of min/max duality results in discrete mathematics.

Moreover, network optimization has served as a fertile meeting ground for ideas from optimization and computer science. Many results in network optimization are routinely used to design and evaluate computer systems, and ideas from computer science concerning data structures and efficient data manipulation have had a major impact on the design and implementation of many network optimization algorithms.

The aim of this paper is to summarize many of the fundamental ideas of network optimization. In particular, we concentrate on network flow problems and highlight a number of recent theoretical and algorithmic advances. We have divided the discussion into the following broad major topics:

Applications
Basic Properties of Network Flows
Shortest Path Problems
Maximum Flow Problems
Minimum Cost Flow Problems
Assignment Problems

Much of our discussion focuses on the design of provably good (e.g., polynomial-time) algorithms. Among good algorithms, we have presented those that are simple and are likely to be efficient in practice. We have attempted to structure our discussion so that it not only provides a survey of the field for specialists, but also serves as an introduction and summary for non-specialists who have a basic working knowledge of the rudiments of optimization, particularly linear programming.

In this chapter, we limit our discussion to the problems listed above. Some important generalizations of these problems, such as (i) generalized network flows, (ii) multicommodity flows, and (iii) network design, will not be covered in our survey. We do, however, briefly describe these problems in Section 6.6 and provide some important references.

As a prelude to the remainder of our discussion, in this section we present several important preliminaries. We discuss (i) different ways to measure the performance of algorithms; (ii) graph notation and various ways to represent networks quantitatively; (iii) a few basic ideas from computer science that underlie the design of many algorithms; and (iv) two generic proof techniques that have proven to be useful in designing polynomial-time algorithms.

1.1 Applications

Networks arise in numerous application settings and in a variety of guises. In this section, we briefly describe a few prototypical applications. Our discussion is intended to illustrate a range of applications and to be suggestive of how network flow problems arise in practice; a more extensive survey would take us far beyond the scope of our discussion. Note, however, that some of the models we consider require solution techniques that we will not describe in this chapter.

To illustrate the breadth of network applications, we will consider four different types of networks arising in practice:

• Physical networks (streets, railbeds, pipelines, wires)
• Route networks
• Space-time networks (scheduling networks)
• Derived networks (through problem transformations)

These four categories are not exhaustive and overlap in coverage. Nevertheless, they provide a useful taxonomy for summarizing a variety of applications.

Network flow models are also used for several purposes:

• Descriptive modeling (answering "what is?" questions)
• Predictive modeling (answering "what will be?" questions)
• Normative modeling (answering "what should be?" questions, that is, performing optimization)

We will illustrate models in each of these categories. We first introduce the basic underlying network flow model and some useful notation.

The Network Flow Model

Let G = (N, A) be a directed network with a cost c_ij, a lower bound l_ij, and a capacity u_ij associated with every arc (i, j) ∈ A. We associate with each node i ∈ N an integer number b(i) representing its supply or demand. If b(i) > 0, then node i is a supply node; if b(i) < 0, then node i is a demand node; and if b(i) = 0, then node i is a transshipment node. Let n = |N| and m = |A|. The minimum cost network flow problem can be formulated as follows:

Minimize  Σ_{(i,j) ∈ A} c_ij x_ij    (1.1a)

subject to

Σ_{j : (i,j) ∈ A} x_ij - Σ_{j : (j,i) ∈ A} x_ji = b(i),  for all i ∈ N,    (1.1b)

l_ij ≤ x_ij ≤ u_ij,  for all (i, j) ∈ A.    (1.1c)

We refer to the vector x = (x_ij) as the flow in the network. The constraint (1.1b) implies that the total flow out of a node minus the total flow into that node must equal the net supply/demand of the node. We henceforth refer to this constraint as the mass balance constraint. The flow must also satisfy the lower bound and capacity constraints (1.1c), which we refer to as the flow bound constraints. The flow bounds might model physical capacities, contractual obligations, or simply operating ranges of interest. Frequently, the given lower bounds l_ij are all zero; we show later that they can be made zero without any loss of generality.

In matrix notation, we represent the minimum cost flow problem as

minimize {cx : Nx = b and l ≤ x ≤ u},    (1.2)

in terms of a node-arc incidence matrix N. The matrix N has one row for each node of the network and one column for each arc. We let N_ij represent the column of N corresponding to arc (i, j), and let e_j denote the j-th unit vector, a column vector of size n whose entries are all zeros except for the j-th entry, which is a 1. Note that each flow variable x_ij appears in two mass balance equations: as an outflow from node i with a +1 coefficient, and as an inflow to node j with a -1 coefficient. Therefore the column N_ij = e_i - e_j.

The matrix N has a very special structure: only 2m out of its nm total entries are nonzero, all of its nonzero entries are +1 or -1, and each column has exactly one +1 and one -1. Figure 1.1 gives an example of the node-arc incidence matrix. Later, in Sections 2.2 and 2.3, we consider some of the consequences of this special structure. For now, we make two observations.

(i) Summing all the mass balance constraints eliminates all the flow variables and gives Σ_{i ∈ N} b(i) = 0, or equivalently, Σ_{i ∈ {N : b(i) > 0}} b(i) = - Σ_{i ∈ {N : b(i) < 0}} b(i). Consequently, total supply must equal total demand if the mass balance constraints are to have any feasible solution.

(ii) If the total supply does equal the total demand, then summing all the mass balance equations gives the zero equation 0x = 0; equivalently, any equation is equal to minus the sum of all other equations, and hence is redundant.

Figure 1.1. (a) An example network. (b) Its node-arc incidence matrix.
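To make this structure concrete, the following Python sketch builds the node-arc incidence matrix of a small hypothetical network (not the network of Figure 1.1) and checks both observations above.

# Build the node-arc incidence matrix N for a small directed network.
# Each column equals e_i - e_j for the corresponding arc (i, j).
nodes = [1, 2, 3, 4]
arcs = [(1, 2), (1, 3), (2, 4), (3, 4), (2, 3)]   # hypothetical example

N = [[0] * len(arcs) for _ in nodes]
for k, (i, j) in enumerate(arcs):
    N[i - 1][k] = +1   # arc k leaves node i (outflow coefficient)
    N[j - 1][k] = -1   # arc k enters node j (inflow coefficient)

# Every column has exactly one +1 and one -1, so summing all rows
# (i.e., all mass balance equations) gives the zero vector.
column_sums = [sum(N[r][k] for r in range(len(nodes))) for k in range(len(arcs))]
assert all(s == 0 for s in column_sums)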

The following special cases of the minimum cost flow problem play a central role in the theory and applications of network flows.

The assignment problem. The data of this problem consist of two node sets N_1 and N_2, a set A ⊆ N_1 × N_2 representing possible person-to-object assignments, and a cost c_ij associated with each element (i, j) ∈ A. The objective is to assign each person to exactly one object in a way that minimizes the cost of the assignment. The assignment problem is a minimum cost flow problem on a network G = (N_1 ∪ N_2, A) with b(i) = 1 for all i ∈ N_1 and b(i) = -1 for all i ∈ N_2 (we set l_ij = 0 and u_ij = 1 for all (i, j) ∈ A).

Physical Networks

The familiar city street map is perhaps the prototypical physical network, and the one that most readily comes to mind when we envision a network. Many network planning problems arise in this problem context. As one illustration, consider the problem of managing, or designing, a street network: deciding upon such issues as speed limits, one way street assignments, or whether or not to construct a new road or bridge. In order to make these decisions intelligently, we need a descriptive model that tells us how to model traffic flows and measure the performance of any design, as well as a predictive model for measuring the effect of any change in the system. We can then use these models to answer a variety of "what if" planning questions.

Now suppose that each user of the system has a point of origin (e.g., his or her home) and a point of destination (e.g., his or her workplace in the central business district). Each of these users must choose a route through the network. Note, however, that these route choices affect each other: if two users traverse the same link, they add to each other's travel time because of the added congestion on the link. The time to travel between an origin and a destination therefore depends upon traffic conditions. Each link of the network has an associated delay function that specifies how long it takes to traverse the link: the more flow on the link, the longer the travel time to traverse it. Now let us make the behavioral assumption that each user wishes to travel between his or her origin and destination as quickly as possible, that is, along a shortest travel time path. This situation leads to the following equilibrium problem with an embedded set of network optimization problems (shortest path problems): is there a flow pattern in the network with the property that no user can unilaterally change his (or her) choice of origin to destination path (that is, while all other users continue to use their specified paths in the equilibrium solution) to reduce his travel time? Operations researchers have developed a set of sophisticated models for this problem setting, as well as related theory (concerning, for example, the existence and uniqueness of equilibrium solutions) and algorithms for computing equilibrium solutions. This type of network flow model permits us to answer these types of questions.

Used in the mode of "what if" scenario analysis, these models permit analysts to answer the type of questions we posed previously. These models are actively used in practice. Indeed, the Urban Mass Transit Authority in the United States requires that communities perform a network equilibrium impact analysis as part of the process for obtaining federal funds for highway construction or improvement.

Similar types of models arise in many other problem contexts. For example, a network equilibrium model forms the heart of the Project Independence Energy Systems (PIES) model developed by the U.S. Department of Energy as an analysis tool for guiding public policy on energy.

The basic equilibrium model of electrical networks is another example. In this setting, Ohm's Law serves as the analog of the congestion function for the traffic equilibrium problem, and Kirchhoff's Law represents the network mass balance equations.

Another type of physical network is a very large-scale integrated circuit (VLSI circuit). In this setting the nodes of the network correspond to electrical components and the links correspond to wires that connect these components. Numerous network planning problems arise in this problem context: for example, how can we lay out the smallest possible integrated circuit that makes the necessary connections between its components and maintains the necessary separations between the wires (to avoid electrical interference)?

Route Networks

Route networks, which are one level of abstraction removed from physical networks, are familiar to most students of operations research and management science. The traditional operations research transportation problem is illustrative. A shipper with supplies at its plants must ship to geographically dispersed retail centers, each with a given customer demand. Each arc connecting a supply point to a retail center incurs costs based upon some physical network, in this case the transportation network. For example, an arc connecting a supply point and a retail center might correspond to a complex four leg distribution channel with legs (i) from a plant (by truck) to a rail station, (ii) from the rail station to a rail head elsewhere in the system, (iii) from the rail head (by truck) to a distribution center, and (iv) from the distribution center (on a local delivery truck) to the final customer (or in some cases just to the distribution center).

If we assign to this arc the composite distribution cost of the route, the problem becomes a classic network transportation model: find the flows from plants to customers that minimize overall costs. Rather than solving the problem directly on the physical network, we preprocess the data and compute the composite cost of all the intermediary legs for each arc.

This type of model is used in numerous applications. As but one illustration, a prize winning practice paper written several years ago described an application of such a network planning system by the Cahill May Roberts Pharmaceutical Company (of Ireland) to reduce overall distribution costs by 20%, while improving customer service as well. Many related problems arise in this type of problem setting, for instance, the design issue of deciding upon the location of the distribution centers. It is possible to address this type of decision problem using integer programming methodology for choosing the distribution sites and network flows to cost out (or optimize flows for) any given choice of sites. A noted study conducted several years ago using this approach permitted Hunt Wesson Foods Corporation to save over $1 million annually.

One special case of the transportation problem merits note: the assignment problem that we introduced previously in this section. This problem has numerous applications, particularly in problem contexts such as machine scheduling. In this application context, we would identify the supply points with jobs to be performed, the demand points with available machines, and the cost associated with arc (i, j) as the cost of completing job i on machine j. The solution to the problem specifies the minimum cost assignment of the jobs to the machines, assuming that each machine has the capacity to perform only one job.

Space-Time Networks

Frequently in practice, we wish to schedule some production or service activity over time. In these instances it is often convenient to formulate a network flow problem on a "space-time network" with several nodes representing a particular facility (a machine, a warehouse, an airport) but at different points in time.

Figure 1.2, which represents a core planning model in production planning, the economic lot size problem, is an important example. In this problem context, we wish to meet prescribed demands d_t for a product in each of the T time periods. In each period, we can produce at level x_t and/or we can meet the demand by drawing upon inventory I_t from the previous period. The network representing this problem has T + 1 nodes: one node t = 1, 2, ..., T represents each of the planning periods, and one node, node 0, represents the "source" of all production.

The flow on arc (0, t) prescribes the production level x_t in period t, and the flow on arc (t, t + 1) represents the inventory level I_t to be carried from period t to period t + 1. The mass balance equation for each period t models the basic accounting equation: incoming inventory plus production in that period must equal that period's demand plus final inventory. The mass balance equation for node 0 indicates that all demand (assuming zero beginning inventory and zero final inventory over the entire planning period) must be produced in some period t = 1, 2, ..., T. Whenever the production and holding costs are linear, this problem is easily solved as a shortest path problem: for each demand period, we must find the minimum cost path of production and inventory arcs from node 0 to that demand point. If we impose capacities on production or inventory, the problem becomes a minimum cost network flow problem.

Figure 1.2. Network flow model of the economic lot size problem.

One extension of this economic lot sizing problem arises frequently in practice. Assume that production x_t in any period incurs a fixed cost: that is, whenever we produce in period t (i.e., x_t > 0), we incur a fixed cost F_t. In addition, we may incur a per unit production cost c_t in period t and a per unit inventory cost h_t for carrying any unit of inventory from period t to period t + 1. Hence, the cost on each arc of this problem is either linear (for inventory carrying arcs) or linear plus a fixed cost (for production arcs). Consequently, the objective function for the problem is concave.

As we indicate in Section 2.2, any such concave cost network flow problem always has a special type of optimum solution known as a spanning tree solution. For this problem, the spanning tree solution decomposes into disjoint directed paths; the first arc on each path is a production arc (of the form (0, t)) and each other arc is an inventory carrying arc. This observation implies the following production property: each time we produce, we produce enough to meet the demand for an integral number of contiguous periods. Moreover, in no period do we both carry inventory from the previous period and produce.

The production property permits us to solve the problem very efficiently as a shortest path problem on an auxiliary network G' defined as follows. The network G' contains nodes 1 to T + 1, and for every pair of nodes i and j with i < j, it contains an arc (i, j). The length of arc (i, j) is equal to the production and inventory cost of satisfying the demand of the periods i through j - 1. Observe that for every production schedule satisfying the production property, G' contains a directed path from node 1 to node T + 1 with the same objective function value, and vice-versa. Hence we can obtain the optimum production schedule by solving a shortest path problem.

Many enhancements of the model are possible: for example, (i) the production facility might have limited production capacity or limited storage for inventory, or (ii) the facility might be producing several products that are linked by common production costs or by changeover costs (for example, we may need to change dies in an automobile stamping plant when making different types of fenders). In most cases, these enhanced models are quite difficult to solve (they are NP-complete), though the embedded network structure often proves to be useful in designing either heuristic or optimization methods.

Another classical network flow scheduling problem is the airline scheduling problem used to identify a flight schedule for an airline. In this application setting, each node represents both a geographical location (e.g., an airport) and a point in time (e.g., New York at 10 A.M.). The arcs are of two types: (i) service arcs connecting two airports, for example New York at 10 A.M. to Boston at 11 A.M.; (ii) layover arcs that permit a plane to stay at New York from 10 A.M. until 11 A.M. to wait for a later flight, or to wait overnight at New York from 11 P.M. until 6 A.M. the next morning. If we identify revenues with each service leg, a flow in this network (with no external supply or demand) will specify a set of flight plans (a circulation of airplanes through the network). A flow that maximizes revenue will prescribe a schedule for an airline's fleet of planes.
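Returning to the lot sizing problem, the following Python sketch makes the auxiliary network construction concrete. The data (demands d, fixed costs F, unit costs c, holding costs h) are hypothetical, and the shortest path in the acyclic network G' is computed by dynamic programming.

# Solve the economic lot size problem as a shortest path on the auxiliary
# network G' (nodes 1..T+1). Arc (i, j) carries the fixed, production, and
# holding cost of producing in period i for the demands of periods i..j-1.
d = [0, 3, 2, 4, 1]        # d[t]: demand in period t (1-indexed)
F = [0, 10, 10, 10, 10]    # F[t]: fixed production cost in period t
c = [0, 1, 1, 1, 1]        # c[t]: per unit production cost in period t
h = [0, 1, 1, 1, 1]        # h[t]: per unit holding cost, period t -> t+1
T = 4

def arc_cost(i, j):
    # Produce in period i to cover the demands of periods i..j-1.
    cost = F[i] + c[i] * sum(d[i:j])
    for t in range(i, j - 1):            # inventory carried over period t
        cost += h[t] * sum(d[t + 1:j])
    return cost

# Shortest path from node 1 to node T+1 in the acyclic network G',
# processing nodes in increasing order.
INF = float("inf")
dist = [INF] * (T + 2)
dist[1] = 0
for j in range(2, T + 2):
    dist[j] = min(dist[i] + arc_cost(i, j) for i in range(1, j))
print(dist[T + 1])   # minimum total production plus holding cost

Each arc chosen on the shortest path corresponds to one production run covering a contiguous block of periods, exactly as the production property requires.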

The same type of network representation arises in many other dynamic scheduling applications.

Derived Networks

This category is a "grab bag" of specialized applications; it illustrates that network flow problems sometimes arise in surprising ways from problems that on the surface might not appear to involve networks. The following examples illustrate this point.

Single Duty Crew Scheduling. Figure 1.3 illustrates a number of possible duties for the drivers of a bus company; the rows of the table correspond to time periods and the columns to duty numbers. If c_j denotes the cost of duty j, the problem of covering each hour of the day with exactly one duty at minimum cost can be written as: minimize cx subject to Ax = b, with each x_j = 0 or 1.

In this formulation, the binary variable x_j indicates whether we select the j-th duty (x_j = 1) or not (x_j = 0); the matrix A represents the matrix of duties, and b is a column vector whose components are all 1's. Observe that the ones in each column of A occur in consecutive rows, because each driver's duty contains a single work shift (no split shifts or work breaks).

We show that this problem is a shortest path problem. To make this identification, we perform the following operations: in (1.2b), subtract each equation from the equation below it, and then add a redundant equation equal to minus the sum of all the equations in the revised system. This transformation does not change the solution to the system. Because of the structure of A, each column in the revised system will have a single +1 (corresponding to the first hour of the duty in the column of A) and a single -1 (corresponding to the row just below the last row containing a +1 in that column of A). Moreover, the revised right hand side vector of the problem will have a +1 in row 1 and a -1 in the last (the appended) row. Therefore, the problem is to ship one unit of flow from node 1 to node 9 at minimum cost in the network given in Figure 1.4, which is an instance of the shortest path problem.

Figure 1.4. Shortest path formulation of the single duty scheduling problem.

If instead of requiring a single driver to be on duty in each period, we specify a number of drivers to be on duty in each period, the same transformation would produce a network flow problem, but in this case the right hand side coefficients (supplies and demands) could be arbitrary. Therefore, the transformed problem would be a general minimum cost network flow problem, rather than a shortest path problem.
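A small Python sketch of this transformation, with a hypothetical duty matrix: differencing consecutive rows of a matrix whose columns have consecutive ones leaves exactly one +1 and one -1 per column, that is, a node-arc incidence matrix.

# Transform a consecutive-ones duty matrix A into a node-arc incidence
# matrix: subtract each row from the row below it, then append a row
# equal to minus the sum of all rows of the revised system.
A = [  # hypothetical duties; each column has consecutive 1's
    [1, 0, 0],
    [1, 1, 0],
    [0, 1, 1],
    [0, 0, 1],
]
rows, cols = len(A), len(A[0])

revised = [A[0][:]]                                   # first row unchanged
for r in range(1, rows):
    revised.append([A[r][k] - A[r - 1][k] for k in range(cols)])
revised.append([-sum(revised[r][k] for r in range(rows)) for k in range(cols)])

# Each column now has exactly one +1 and one -1: an incidence matrix.
for k in range(cols):
    col = [revised[r][k] for r in range(rows + 1)]
    assert col.count(1) == 1 and col.count(-1) == 1 and col.count(0) == rows - 1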

Critical Path Scheduling and Networks Derived from Precedence Conditions

In construction and many other project planning applications, workers need to complete a variety of tasks that are related by precedence conditions; for example, in constructing a house, a builder must pour the foundation before framing the house, and complete the framing before beginning to install either electrical or plumbing fixtures.

This type of application can be formulated mathematically as follows. Suppose we need to complete J jobs and that job j (j = 1, 2, ..., J) requires t_j days to complete. We are to choose the start time s_j of each job j so that we honor a set of specified precedence constraints and complete the overall project as quickly as possible. For convenience of notation, we add two dummy jobs, both with zero processing time: a "start" job 0 to be completed before any other job can begin, and a "completion" job J + 1 that cannot be initiated until we have completed all other jobs. In this formulation, we represent the jobs by nodes; the precedence constraints imply that for each arc (i, j), job j cannot start until job i has been completed, so the precedence constraints can be represented by arcs, thereby giving us a network. Let G = (N, A) represent the network corresponding to the augmented project. Then we wish to solve the following optimization problem:

minimize  s_(J+1) - s_0

subject to

s_j ≥ s_i + t_i,  for each arc (i, j) ∈ A.

On the surface, this problem, which is a linear program in the variables s_j, seems to bear no resemblance to network optimization. Note, however, that if we move the variable s_i to the left hand side of each constraint, then each constraint contains exactly two variables, one with a plus one coefficient and one with a minus one coefficient. The linear programming dual of this problem has a familiar structure. If we associate a dual variable x_ij with each arc (i, j), then the dual of this problem is

maximize  Σ_{(i,j) ∈ A} t_i x_ij

subject to

Σ_{j : (i,j) ∈ A} x_ij - Σ_{j : (j,i) ∈ A} x_ji = 1 if i = 0, -1 if i = J + 1, and 0 otherwise, for all i ∈ N,

x_ij ≥ 0,  for each arc (i, j) ∈ A.


This problem requires us to determine the longest path in the network G from node 0 to node J + 1, with t_i as the arc length of arc (i, j). This longest path has the following interpretation: it is the longest sequence of jobs needed to fulfill the specified precedence conditions. Since delaying any job in this sequence must necessarily delay the completion of the overall project, this path has become known as the critical path, and the problem has become known as the critical path problem. This model has become a principal tool in project management, particularly for managing large-scale construction projects. The critical path itself is important because it identifies those jobs that require managerial attention in order to complete the project as quickly as possible.

Researchers and practitioners have enhanced this basic model in several ways. For example, if resources are available for expediting individual jobs, we could consider the most efficient use of these resources to complete the overall project as quickly as possible. Certain versions of this problem can be formulated as minimum cost flow problems.

The open pit mining problem is another network flow problem that arises from precedence conditions. Consider the open pit mine shown in Figure 1.5. As shown in this figure, we have divided the region to be mined into blocks. The provisions of any given mining technology, and perhaps the geography of the mine, impose restrictions on how we can remove the blocks: for example, we can never remove a block until we have removed any block immediately above it; restrictions on the "angle" of mining the blocks might impose similar precedence conditions. Suppose now that each block j has an associated revenue r_j (e.g., the value of the ore in the block minus the cost of extracting the block), and we wish to extract blocks so as to maximize overall revenue. If we let y_j be a zero-one variable indicating whether (y_j = 1) or not (y_j = 0) we extract block j, the problem will contain (i) a constraint y_j ≤ y_i (or, y_j - y_i ≤ 0) whenever we need to mine block i before block j, and (ii) an objective function specifying that we wish to maximize the total revenue Σ_j r_j y_j, summed over all blocks j. The dual of the linear programming version of this problem (with the constraints 0 ≤ y_j ≤ 1 rather than y_j = 0 or 1) will be a network flow problem with a node for each block, a variable for each precedence constraint, and the revenue r_j as the demand at node j. This network will also have a dummy "collection node" 0 with demand equal to minus the sum of the r_j's, and an arc connecting it to each node j (that is, block j); this arc corresponds to the upper bound constraint y_j ≤ 1 in the original linear program.
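Returning to the critical path problem, the following Python sketch finds the longest path from the start job 0 to the finish job J + 1 in an acyclic precedence network by processing jobs in topological order; the job durations and precedences are hypothetical.

# Critical path: longest path from the start job 0 to the finish job J+1.
# Hypothetical instance: 4 real jobs with durations t[j], plus dummies 0 and 5.
t = {0: 0, 1: 3, 2: 2, 3: 4, 4: 1, 5: 0}
arcs = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4), (4, 5)]   # (i, j): i precedes j

# Node ids 0..5 happen to be topologically ordered here, so processing
# arcs sorted by tail finalizes each node's label before it is used.
longest = {v: float("-inf") for v in t}
longest[0] = 0
pred = {}
for i, j in sorted(arcs):
    if longest[i] + t[i] > longest[j]:
        longest[j] = longest[i] + t[i]
        pred[j] = i

# Recover the critical path by walking predecessors back from job 5.
path, v = [5], 5
while v in pred:
    v = pred[v]
    path.append(v)
print(longest[5], list(reversed(path)))   # project length and critical path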

The dual problem is then one of finding a network flow that minimizes the sum of the flows on the arcs incident to node 0. This problem can be cast as finding a feasible flow in a network and can be solved by an application of the maximum flow algorithm.

The critical path scheduling problem and the open pit mining problem illustrate one way that network flow problems arise indirectly: whenever two variables in a linear program are related by a precedence condition, the variable corresponding to this precedence constraint in the dual linear program will have a network flow structure. If the only constraints in the problem are precedence constraints, then the dual linear program will be a network flow problem.

Matrix Rounding of Census Information

The U.S. Census Bureau uses census information to construct millions of tables for a wide variety of purposes. By law, the Bureau has an obligation to protect the source of its information and not disclose statistics that can be attributed to any particular individual. Unrounded, the tabulated information might disclose information about a particular individual; the Bureau can attempt to prevent this possibility by rounding the census information contained in any table. Consider, for example, the data shown in Figure 1.6(a). We might disguise the information in this table as follows: round each entry in the table, including the row and column sums, either up or down to the nearest multiple of three, so that the entries in the table continue to add to the (rounded) row and column sums, and the overall sum of the entries in the new table adds to a rounded version of the overall sum in the original table. Figure 1.6(b) shows a rounded version of the data that meets this criterion.

Figure 1.6. (a) Example census data tabulating time in service (hours) against income bracket (less than $10,000; $10,000 to $30,000; $30,000 to $50,000; more than $50,000), with row and column totals. (b) A version of the data rounded to multiples of 3.

The problem of finding such a rounded table can be cast as finding a feasible flow in a network and can be solved by an application of the maximum flow algorithm. The network contains a node for each row of the table and a node for each column. It contains an arc connecting node i (corresponding to row i) and node j (corresponding to column j); the flow on this arc should be the ij-th entry in the prescribed table, rounded either up or down. In addition, we add a supersource s to the network, connected to each row node i; the flow on this arc must be the i-th row sum, rounded up or down. Similarly, we add a supersink t, with an arc connecting each column node j to this node; the flow on this arc must be the j-th column sum, rounded up or down. We also add an arc connecting node t and node s; the flow on this arc must be the sum of all the entries, rounded up or down. Figure 1.7 illustrates the network flow problem corresponding to the census data specified in Figure 1.6. If we rescale all the flows, measuring them in integral units of the rounding base (multiples of 3 in our example), then the flow on each arc must take one of two consecutive integral values.

this notation indicates only the dominant terms of the all running time. The leeist value of the constants not determined solely by the algorithm. To avoid the need to compute or mention the constant p. m. For example. The counting for of steps relies on a number of assumptions. has led to a flourishing of research on the worst<ase performance of algorithms. instead. C or U. Counting Steps The running time of steps it of a network algorithm is determined by counting the number performs. 3. the actual running time is lOnm^ + 2'^'^n^m. . assuming that is m ^ n. the constant terms 2''^'^n'^m this dominant even though most practical term would dominate. Although ignoring the may have undesirable feature. researchers have widely adopted the 0( 1. most of which are quite appropriate most of today's computers. in turn. For all all of the algorithms that we present. the use of the 0( notation typically has permited analysts to avoid the prohibitively difficult analysis required to compute the leading constants. then we would state that the running time O(nm^). the constant terms are relatively small integers for the terms in the complexity bound. 2. researchers typically use a "big O" notation. 4." The 0( ) notation avoids the need to state a specific constant. which. For large practical problems. sufficiently large values of we mean the term that would dominate bounds are other terms for n and m.17 that the number is less of steps for the label correcting algorithm to solve the shortest path problem than pnm steps for some sufficiently large constant p. Observe that the for running time indicates that the lOnm^ term values of n and m. if Therefore. ) notation for several reasons: Ignoring the constants greatly simplifies the analysis. ) Consequently. replacing the expressions: requires "the label correcting algorithm pmn steps for some constant p" with the equivalent expression "the running is time of the label correcting algorithm 0(nm). it is also highly sensitive to the choice of the computer language. the constant factors do not contribute nearly as much to the running time as do the factors involving n. the time is called asymptotic running times. By dominant. Estimating the constants correctly is is fundamentally difficult. and even to the choice of the computer.

2 implicitly assumes that the only operations to and tirithmetic operations.2 Each comparison and basic arithmetic operation counts as one step. be in part an addition or division. In fact. log C and log U. on results for the today's computers we would present. in practice. a computer must access a number of words of data and this thus takes more than a constant number of steps. For example.. quite /) reasonable in practice.l The computer being executed carries out instructions sequentially. a computer must store large numbers in several words of its memory. it is 0((n + m)flog n + log C + log U)). in comparing two running times.000. takes equal time. Other instances of . C = Oirr-) and U = 0(n'^). if known as the similarity assumption. to perform each operation on very large numbers. we will typically assume for that both C and U k. The input length of a problem number is of bits needed to represent that problem. Therefore.000 for networks with 1000 nodes. most one instruction A1. we were to restrict costs to be less than lOOn-^. the assumption that each arithmetic operation takes one step lead us to underestimate the aisymptotic running time of arithmetic operations involving very large numbers on real computers since. the input length a low order polynomial function of n. we are adhering to a sequential model of computations. m.. be counted are comparisons Al .g. For example.e. To avoid systematic underestimation of the running time. obtain the same asymptotic worst-case it algorithms that we Our cissumption that each operation.l. with at at a time. i. are polynomially bounded in n. we will not discuss parallel implementations of network flow «dgorithms. is justified by the fact that 0( is ) notation ignores differences in running times of at most a constant factor. Consequently. Polynomial-Time Algorithms An the algorithm is said to be a polynomial-time algorithm if its running time is is boimded by a polynomial function of the input length. researchers refer if its network algorithm as a polynomial-time algorithm n.000. which the time difference between an addition and a multiplication on essentially all modem computers. log C and to a log U (e. For a network problem.18 Al. running time is bounded by a polynomial function in m. even by counting all other computer operations. By envoking Al. On may the other hand. is some constant This assumption. we would allow costs to be as large as 100. the running time of one of the polynomial-time maximum flow algorithms we consider is 0(nm + n^ log U).

Even n is in extreme cases this is true.) polynomial-time algorithms. an important subclass of exponential-time Some instances of pseudopolynomial-time bounds are 0(m + nC) and 0(mC). The class of pseudopolynomial-time algorithms algorithms. An algorithm is said to be an exponential-time algorithm if its running time grows of exp)onential time a as a function that can not be polynomially bovmded. 0(2^). Qn n must be larger than 2"^^^'^^^. polynomial-time algorithms are strongly polynomial-time because log C = Odog n) and log U= CXlog n). 0(n!) and 0(n^°g polynomial function of n and log if "). First. C and U. In particular. flow algorithm alluded therefore. polynomial-time algorithms perform better than exponential time algorithms. (Observe that nC cannot be bounded by is C) We say that an algorithm n. For problems that satisfy the similarity assumption. pseudopolynomial-time its running time is polynomially bounded in is m. Moreover.8 illustrates the asymptotic superiority of The second reason is more pragmatic.19 polynomial-tiine bounds are said to be a strongly O(n^m) and 0(n log n). There are two major reasons for preferring polynomial-time algorithms to exponential-time algorithms. A polynomial-time algorithm is is polynomial-time algorithm in if its running time bounded by or log U. but the algorithms will not be attractive if C and U are high degree polynomiab in n. Some examples bounds are 0(nC). pseudopolynomial-time algorithms become polynomial-time algorithms. and does not involve log to. small degree. any polynomial-time algorithm is asymptotically superior to any exponential-time algorithm. this case. a polynomial function only n and m. experience has Figure 1. n^'^OO is smaller than tP'^^^E^ ^ if sufficiently large. For example. is not a strongly polynomial-time is The if interest in strongly polynomial-time algorithms all primarily theoretical. the polynomials in practice are typically of a . Much practical shown that. we envoke the similarity assumption. as a rule. C The maximum algorithm.

20 APPROXIMATE VALUES .

• • . . . j) as the head of arc aire (i. j) has two end points. > with each arc (i. e N| and if e N2. In this chapter. A) is called a bipartite graph (i.e. 1. if the graph contains at least one if all undirected path from connected. An undirected path is defined similarly except that for any two consecutive nodes either arc (ij^. .).21 I N I and m= A I I . If any ambiguity might arise. and say that the arc (i. A) if N' CN c A. A directed (\2 r-1. we shall sometimes refer to a path as a set of (sequence oO arcs without mention of the nodes. A') a spanning subgraph of G = (N. to is A graph is said to be connected pairs of nodes are that the it disconnected. j) e A. othervs^se. as a cutset of G. 13. A(i). . i\^+-[) i^. a cost Cj. we shall often refer to a path as a sequence of nodes - i2 - -ij^ when its arcs are apparent from the problem context. G if = (N. whichever is appropriate from context. list node i. . Two nodes i and i j are said to be connected j. i and j j. Alternatively. A graph G' = (N'. A graph G' = is (N'. shall explicitly state directed or undirected path. A') is a subgraph of G= (N. . . i\^ We refer to the nodes i3 . or arc (ij^+i . and a capacity Uj:. ij. as the i. The arc (i. A) N' = N and A' c A.( ij.i. j) (i.. . i) or (i^ . we always assume graph G is is We refer to any set Q c A with the property that the graph G' = (N. ij-. for each € A. nodes and arcs ip (ip 12^. i| For simplicity of notation. and ij^^-j on the path. We associate that Uj. we distinguish two special the source s and sink t. We assume throughout nodes in a graph. A directed is cycle is a directed path together with the arc i|) and an undirected cycle an imdirected path together with the arc (ij. 12. representing cycles. An arc (i. . We shall use similar conventions for A graph G = (N. A cutset connected.^ as the internal nodes of the path. We j. A-Q) disconnected. j) emanates from node Tlie arc adjacency The of j arc is an outgoing of node i and an incoming arc of node i. Frequently.- • • . the path contains i2 . and no superset of Q has this property. j) e A : € N}. if) satisfying the property that ij^+p € A for each k= . A(i) = {(i. j) if its i node set j N can be partitioned into and A' two subsets N| and N2 so that for each arc in A. is defined as the set of arcs emanating from node of a i. (ij. path in . . 13). The degree node is the number of incoming and outgoing arcs incident to that node. We we shall often use the terminology path to designate either a directed or an undirected path. j). A) is a sequence of distinct (ij^.j) is incident to nodes i and j. refer to node i tail jmd node (i.

Clearly. the network discuss more cleverly and by using improved data of representing a network. a tree with degree equal to one called a leaf node. to represent a network representation is not efficient. we state it othervdse. we have already described the node-arc incidence matrix representation of a network. any nontree arc to a spanning tree creates exactly one Removing any two arc in this cycle again creates a spanning tree. A acyclic if it contains no cycle. Arcs belonging to a spaiming tree called nontree arcs. X and N-X. Each least two leaf A spanning tree contains a unique path between any two nodes. and Ijj = otherwise. j) with the property that 1 if arc € A. subtree of a tree T is a connected subgraph of T. but to represent the also upon the manner used network within a computer and the storage results. we assume that logarithms are of base 2 unless log b. Arcs a whose end points belong to two If different subtrees of a spanning tree created by deleting tree-arc constitute a cutset. structures. N-X). A tree is a connected acyclic graph. of is which only space 2m words have nonzero values. we some popular ways In Section 1. We shall alternatively represent the cutset Q as the graph is node partition (X.4 Network Representations The complexity of a network algorithm depends not only on the algorithm. scheme used for maintaining and updating the intermediate The running time of an algorithm (either worst<ase or empirical) can often be improved by representing In this section. the resulting graph is again a spanning In this chapter. A) is has exactly ntree has at tree arcs. A node in nc des. to this cutset is added to the subtrees. This scheme requires nm this words to store a network. The addition of cycle. T are called tree arcs. T are A spanning tree of G = (N. the element I^: This representation stores an n x n matrix (i. Removing any tree-arc creates subtrees. The arc costs and capacities are . Another popular way = network the node-node adjacency I matrix representation. We represent the logarithm of any number b by 1. any arc belonging tree.1. A tree T is said to be a spanning A tree of G if and T is a spanning subgraph arcs not belonging to 1 of G.22 partitions the graph into two sets of nodes.

(c) The reverse star representation.23 (a) A network example arc number 1 point (tail. arc number 1 (tail. head) cost 2 3 4 5 6 7 8 . head) cost cost 1- 2 3 1 4 2 3 2 3 1 4 5 4 2 1 6 7 8 4 1 3 4 2 3 (b) The forward star representation.

The arc (1. we number the arcs emanating from node 1. we need an additional data structure known as the reverse star representation. simultaneously. denoted by point(i). that indicates the smallest i. We then sequentially store the (taU. We numbers in an m-array trace. head) and the cost of the For example. For consistency. both sparse and dei^se. we n can create a reverse star representation as follows. and so on. we can simply store the arc numbers and once we know the from the forward 1.9(a).9(b) specifies the forward star 1. Figure complete trace array.24 also stored in n x n matrices. Figure 1. For the sake of we at set rpoint(l) = 1 and rpoint(n+l) = m+1. 1. set point(l) = 1 and point(n+l) = m+1. To determine.9(c). number i in the arc list of an arc emanating from - node 1) in Hence the outgoing list. then the arcs emanating from node arbitrarily. As earlier. but is not attractive for storing a sparse network. the incoming arcs at any node efficiently. we can always retrieve the associated information store circ star representation. head) and We also maintain a pointer with each node i. storing arc (3. (tail. The forward star and reverse star representations are probably the most popular ways to represent networks. Starting from a forward star representation. we will maintain a significant duplicate information. representation of the network given in Figure The forward outgoing arcs at star representation allows us to determine efficiently the set of set of any node. We examine the nodes j = 1 to j. which denotes the first arrays that contains information about an incoming arc at node consistency. arc has arc number arc number 4 in the forward star representation. store the (tail. in order and sequentially head) and the cost of incoming arcs of node i. 2) hcis 1. then node i has no outgoing arc. maintain a reverse position in these pointer with each node denoted by rpoint(i). incidence list (These representations are also literature.1).) first known as representation in the computer science The forward star representation numbers the arcs in a certain order: 2. 2) So instead of storing head) and cost of arcs. We can avoid this duplication by eircs. . we store the incoming arcs node i at positions rpoint(i) to (rpoint(i+l) . numbers ir\stead of the (tail. Arcs emanating from the same node can be numbered the cost of arcs in this order. We also i. arcs of node - are stored at positions point(i) to (point(i+l) the arc If point(i) > point(i+l) 1.9(d) gives the arc numbers. This data structure gives us the representation shov^Ti in Figure Observe that by storing both the forward and reverse star representation S. This representation is adequate for very dense networks.

In this section. in At every point states: in the search procedure. inadmissible We call an arc otherwise. let us suppose that we wish to find all the nodes graph s. by examining admissible arcs. we discuss two of the most commonly used search techniques: breadth-first search and depth-first search.5 Search Algorithms Search algorithnvs are fundamental graph techniques. and the status of unmarked nodes yet to be determined.e. G = (N. Tl e follovkdng algorithm summarizes the basic iterative steps. different variants of search lie at the heart of many network algorithms. . and Initially. j) admissible arcs. Whenever i the procedure marks of a new node by examining an j admissible arc node j. A) that are reachable through directed paths from a distinguished node called the source. Subsequently. all nodes in the to network are one of two marked or unmarked. Search algorithms attempt to find property. predi]) = i. i. The algorithm we say that node is a predecessor terminates when the graph contains no (i. j) admissible if node i is marked and node is j is unmarked.25 1.. in a all nodes in a network that satisfy a particular For purposes of illustration. (i. The marked nodes are is known be reachable from the source. only the source node marked. the search algorithm will mark more nodes.

node i from LIST. is The search algorithm examines inadmissible. first the current arc of node is the arc in A(i). nodes s. j) from it. (i. Arcs in each list can be arranged arbitrarily. it this list sequentially list and whenever the current arc arc.26 algorithm SEARCH. Each iteration of the while loop either finds an admissible arc or does not. Each node has a current arc Initially. this algoirthm terminates. which i is the current candidate for being examined next. declares that the node has no admissible It is easy to show that the search algorithm runs in 0(m + n) = 0(m) time. it has marked all nodes in G that are reachable s via a directed path. it arc in the arc the ciirrent When the algorithm reaches the end of the arc arc. mark node LIST := {s). end. Now consider the effort spent in identifying the . add node end else delete j to LIST. When from nodes. The predecessor indices define a tree consisting of marked We structure use the following data structure to identify admissible is arcs. makes the next list. while LIST * do begin select a if node i i in LIST. In the former case. j. j) node is incident to an admissible arc then begin mark node pred(j) := i. begin unmark all in N. The same data also used in the maximum flow and minimum i cost flow algorithms A(i) of arcs discussed in later sections. and in the latter Ccise deletes a marked node from LIST. Since the algorithm marks any node at most once. end. We maintain with each node the list emanating (i. it executes the while loop at most 2n times. it the algorithm marks a new node and adds it to LIST.

this version of search is called a depth-first search. s. and the scaling approach. will we briefly outline the basic ideas all underlying these two approaches. and minimum . nodes are always selected from the front and added first-in..e. The algorithm. as described. i. the search algorithm selects the marked nodes in the last-in. the search in algorithm examines a total of ie X A(i) = m N and thus terminates 0(m) time. in this instance. and backs up one node initiate a new probe when it can mark no new nodes from the tip of the path. this version of search is called a breadth-first search. For cost flow instance..e. It marks nodes s to i in the nondecreasing order of their distance from the with the distance from i. Geometric Improvement Approach The geometric improvement approach shows polynomial time if that an algorithm runs in at every iteration it makes an improvement proportioT\al to the solutioiis. This algorithm to performs a deep probe. in the m. flow problem H = mU. the set LIST is maintained as a queue. first-out to the rear. we scan arcs in A(i) arcs. i. Hence. Therefore. We assume. then the search algorithm selects the marked nodes in the order. In this section. and U. difference between the objective function values of the current and optimum Let H be an upper bound on the difference in objective function values between any two For most network problems. that data are integral and that algorithms maintain integer solutions at intermediate stages of computations.27 admissible arcs. creating a path as long as possible. at most once. nodes to LIST. does not specify the order for examining and adding If Different rules give rise to different search techniques. H is a function of n. This s. in the problem H = maximum mCU. For each node i. feasible solutions. as usual. first-out order. meeisured as minimum number of arcs in a directed path from s to Another popular method is to maintain the set LIST as a stack. nodes are always selected from the front and added to the front. L6 Developing Polynomial-Time Algorithms Researchers frequently employ two important approaches to obtain polynomial algorithms for network flow problems: the geometric improvement (or linear convergence) approach. C. kind of search amounts to visiting the nodes in order of increasing distance from therefore.

2 maximum flow problem and the maximum improvement algorithm minimum cost flow problem are two examples of this approach. suppose that the algorithm guarantees that (2k_2k+l) ^ a(z^-z*) (13) for (i. and. We A have stated this result for minimization versions of optimization problems. The maximum augmenting path algorithm for the 4. (i.28 Lemma 1.3) implies that a(z^ . Consider a consecutive sequence of starting 2/a iterations from iteration k. if at some iteration.z*)/2 units. Since H is the maximum possible improvement and every objective function value is an integer. On the other hand.11 presents an example of a bit-scaling algorithm for .) and Scaling Approach Researchers have extensively used an approach called scaling to derive polynomial-time algorithms for a wide variety of network and combinatorial optimization problems. If in each iteration. Then the algorithm terminates in O((log H)/a) iterations. Section 5. (See Sections 5. the algorithm must terminate wathin 0((log H)/a) iterations." a the statement geometric convergence rate are polynomial time In order to develop polynomial time algorithms using this approach.. The geometric improvement approach might be summarized by "network algorithms that have algorithms.e.1.z*).z*)/2 ^ z^ - z^-^^ ^ aCz^ . q the algorithm improves the objective function value by no more than aCz*^ . Proof. Further.3. In this discussion. we can look for local improvement techniques that lead to large fixed percentage) improvements for the in the objective function. we describe the simplest form of scaling which we call bit-scaling. therefore.. then (1.z*) by a factor of 2 within these 2/a iterations. The quantity (z*^ - z*) represents the total possible improvement in the objective function value after the k-th iteration. then the algorithm would determine an optimum solution within these 2/a iterations. Suppose r^ is the objective function value of a minimization problem of some solution at the k-th iteration of an algorithm and 2* is the minimum objective function value.e. the improvement at iteration k+1 is at least a times the total possible improvement) some constant a xvith < a< 1.z*)/2 units. similar result applies to maximization versions of optimization problems. the algorithm must have reduced the total possible improvement (z*^. the algorithm improves the objective function value by at least aCz*^ .

Scaling Approach

Researchers have extensively used an approach called scaling to derive polynomial-time algorithms for a wide variety of network and combinatorial optimization problems. In this discussion, we describe the simplest form of scaling, which we call bit-scaling. Section 5.11 presents an example of a bit-scaling algorithm for the assignment problem. Sections 4 and 5, using more refined versions of scaling, describe polynomial-time algorithms for the maximum flow and minimum cost flow problems.

Using the bit-scaling technique, we solve a problem P parametrically as a sequence of problems P_1, P_2, P_3, ..., P_K: the problem P_1 approximates data to the first bit, the problem P_2 approximates data to the second bit, and each successive problem is a better approximation until P_K = P. Further, for each k = 2, ..., K, the optimum solution of problem P_{k-1} serves as the starting solution for problem P_k. The scaling technique is useful whenever reoptimization from a good starting solution is more efficient than solving the problem from scratch.

For example, consider a network flow problem whose largest arc capacity has value U. Let K = ⌈log U⌉ and suppose that we represent each arc capacity as a K bit binary number, adding leading zeros if necessary to make each capacity K bits long. Then the problem P_k would consider the capacity of each arc as the k leading bits in its binary representation. Figure 1.10 illustrates an example of this type of scaling. The manner of defining arc capacities easily implies the following observation.

Observation. The capacity of an arc in P_k is twice that in P_{k-1} plus 0 or 1.
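To make the observation concrete, here is a small Python check (our own illustration, with made-up capacities): it builds the capacities of P_1, ..., P_K from the leading bits and verifies the doubling relation.

def scaled_capacities(u, K):
    # capacity of an arc in P_k = the k leading bits of its K-bit
    # binary representation, i.e., the integer quotient u // 2**(K-k)
    return [[cap >> (K - k) for cap in u] for k in range(1, K + 1)]

u = [13, 8, 5, 11]                  # hypothetical arc capacities, U = 13
K = max(u).bit_length()             # K = number of bits needed
P = scaled_capacities(u, K)
for k in range(1, K):               # capacity doubles, plus 0 or 1, each step
    assert all(P[k][a] // 2 == P[k - 1][a] for a in range(len(u)))
print(P)    # [[1, 1, 0, 1], [3, 2, 1, 2], [6, 4, 2, 5], [13, 8, 5, 11]]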

Figure 1.10. Example of a bit-scaling technique: (a) network with arc capacities; (b) network with binary expansion of arc capacities; (c) the problems P_1, P_2, and P_3.

The following algorithm encodes a generic version of the bit-scaling technique.

algorithm BIT-SCALING;
begin
    obtain an optimum solution of P_1;
    for k := 2 to K do
    begin
        reoptimize using the optimum solution of P_{k-1};
        obtain an optimum solution of P_k;
    end;
end;

This approach is very robust, and variants of it have led to improved algorithms for both the maximum flow and minimum cost flow problems. The approach works well for these applications, in part, for the following reasons. (i) The problem P_1 is generally easy to solve. (ii) The optimal solution of problem P_{k-1} is an excellent starting solution for problem P_k, since P_{k-1} and P_k are quite similar; hence, the optimal solution of P_{k-1} can be easily reoptimized to obtain an optimal solution of P_k. (iii) For problems that satisfy the similarity assumption, the number of problems solved is O(log n). Thus, for this approach to work, reoptimization needs to be only a little more efficient (i.e., by a factor of log n) than optimization.

Consider, for example, the maximum flow problem. Let v_k denote the maximum flow value for problem P_k and let x_k denote an arc flow corresponding to v_k. In the problem P_k, the capacity of an arc is twice its capacity in P_{k-1} plus 0 or 1. If we multiply the optimum flow x_{k-1} for P_{k-1} by 2, we obtain a feasible flow for P_k. Moreover, v_k − 2v_{k-1} ≤ m, because multiplying the flow x_{k-1} by 2 takes care of the doubling of the capacities, and the additional 1's can increase the maximum flow value by at most m units (if we add 1 to the capacity of any arc, then we increase the maximum flow from source to sink by at most 1). Thus it is easy to reoptimize such a maximum flow problem: for example, the classical labeling algorithm discussed in Section 4.1 would perform the reoptimization in at most m augmentations, taking O(m^2) time. Hence, the scaling version of the labeling algorithm runs in O(m^2 log U) time, whereas the non-scaling version runs in O(nmU) time. The former time bound is polynomial and the latter bound is only pseudopolynomial. Thus this simple scaling algorithm improves the running time dramatically.
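The following Python sketch (our own rendering, not the paper's code) makes the scheme concrete for the maximum flow problem: each phase doubles the previous optimum flow and then closes the remaining gap with augmenting paths found by breadth-first search.

from collections import deque

def reoptimize_max_flow(n, cap, flow, s, t):
    # repeatedly find s-t augmenting paths in the residual network
    while True:
        pred = {s: None}
        queue = deque([s])
        while queue:
            i = queue.popleft()
            for j in range(n):
                if j not in pred and cap[i][j] - flow[i][j] + flow[j][i] > 0:
                    pred[j] = i
                    queue.append(j)
        if t not in pred:
            return
        path, j = [], t
        while pred[j] is not None:
            path.append((pred[j], j))
            j = pred[j]
        delta = min(cap[i][j] - flow[i][j] + flow[j][i] for i, j in path)
        for i, j in path:
            back = min(delta, flow[j][i])    # cancel opposite flow first
            flow[j][i] -= back
            flow[i][j] += delta - back

def bit_scaling_max_flow(n, cap, s, t):
    K = max(max(row) for row in cap).bit_length()
    flow = [[0] * n for _ in range(n)]
    for k in range(1, K + 1):
        cap_k = [[c >> (K - k) for c in row] for row in cap]
        for i in range(n):                   # doubled flow is feasible for P_k
            for j in range(n):
                flow[i][j] *= 2
        reoptimize_max_flow(n, cap_k, flow, s, t)
    return sum(flow[s][j] - flow[j][s] for j in range(n))

For instance, on the four-node network with capacities [[0,5,7,0],[0,0,3,4],[0,0,0,6],[0,0,0,0]], bit_scaling_max_flow(4, cap, 0, 3) returns the maximum flow value 10.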

2. BASIC PROPERTIES OF NETWORK FLOWS

As a prelude to the rest of this chapter, in this section we describe several basic properties of network flows. We begin by showing how network flow problems can be modeled in either of two equivalent ways: as flows on arcs, as in our formulation in Section 1.1, or as flows on paths and cycles. Then we partially characterize optimal solutions to network flow problems and demonstrate that these problems always have certain special types of optimal solutions (so-called cycle free and spanning tree solutions). Consequently, in designing algorithms, we need only consider these special types of solutions. We next establish several important connections between network flows and linear and integer programming. Finally, we discuss a few useful transformations of network flow problems.

2.1 Flow Decomposition Properties and Optimality Conditions

It is natural to view network flow problems in either of two ways: as flows on arcs or as flows on paths and cycles. In the context of developing underlying theory, models, or algorithms, each view has its own advantages. Therefore, as the first step in our discussion, it is worthwhile to develop several connections between these alternate formulations.

In the arc formulation (1.1), the basic decision variables are flows x_ij on arcs (i, j). The path and cycle formulation starts with an enumeration of the directed paths P and directed cycles Q of the network. Its decision variables are h(p), the flow on path p, and f(q), the flow on cycle q, which are defined for every directed path p in P and every directed cycle q in Q.

Notice that every set of path and cycle flows uniquely determines arc flows in a natural way: the flow x_ij on arc (i, j) equals the sum of the flows h(p) and f(q) for all paths p and cycles q that contain this arc. We formalize this observation by defining some new notation: δ_ij(p) equals 1 if arc (i, j) is contained in path p, and 0 otherwise; similarly, δ_ij(q) equals 1 if arc (i, j) is contained in cycle q, and 0 otherwise. Then

x_ij = Σ_{p∈P} δ_ij(p) h(p) + Σ_{q∈Q} δ_ij(q) f(q).
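In code, this direction of the correspondence is a direct accumulation (a small sketch of ours; paths and cycles are given as lists of arcs):

from collections import defaultdict

def arc_flows(path_flows, cycle_flows):
    # path_flows and cycle_flows are lists of (arcs, value) pairs,
    # where arcs is the list of arcs (i, j) on the path or cycle
    x = defaultdict(int)
    for arcs, h in path_flows:
        for arc in arcs:
            x[arc] += h        # delta_ij(p) = 1 exactly for arcs on p
    for arcs, f in cycle_flows:
        for arc in arcs:
            x[arc] += f
    return dict(x)

paths = [([(1, 2), (2, 3)], 2)]            # path 1-2-3 carrying 2 units
cycles = [([(2, 3), (3, 4), (4, 2)], 1)]   # cycle 2-3-4-2 carrying 1 unit
print(arc_flows(paths, cycles))
# {(1, 2): 2, (2, 3): 3, (3, 4): 1, (4, 2): 1}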

If the flow vector x is expressed in this way, we say that the flow is represented as path flows and cycle flows and that the path flow vector h and cycle flow vector f is a path and cycle flow representation of the flow. Can we reverse this process? That is, can we decompose any arc flow into (i.e., represent it as) path and cycle flows? The following result provides an affirmative answer to this question.

Theorem 2.1: Flow Decomposition Property (Directed Case). Every directed path and cycle flow has a unique representation as nonnegative arc flows. Conversely, every nonnegative arc flow x can be represented as a directed path and cycle flow (though not necessarily uniquely) with the following two properties:

C2.1. Every directed path with positive flow connects a supply node of x to a demand node of x.

C2.2. At most n+m paths and cycles have nonzero flow; out of these, at most m cycles have nonzero flow.

Proof. In the light of our previous observations, we need to establish only the converse assertions. We give an algorithmic proof to show that any feasible arc flow x can be decomposed into path and cycle flows. Suppose i_0 is a supply node. Then some arc (i_0, i_1) carries a positive flow. If i_1 is a demand node, then we stop; otherwise the mass balance constraint (1.1b) of node i_1 implies that some other arc (i_1, i_2) carries positive flow. We repeat this argument until either we encounter a demand node or we revisit a previously examined node. Note that one of these cases will occur within n steps. In the former case we obtain a directed path p from the supply node i_0 to some demand node i_k consisting solely of arcs with positive flow, and in the latter case we obtain a directed cycle q. If we obtain a directed path, we let h(p) = min [b(i_0), −b(i_k), min {x_ij : (i, j) ∈ p}], and redefine b(i_0) = b(i_0) − h(p), b(i_k) = b(i_k) + h(p), and x_ij = x_ij − h(p) for each arc (i, j) in p. If we obtain a cycle q, we let f(q) = min {x_ij : (i, j) ∈ q} and redefine x_ij = x_ij − f(q) for each arc (i, j) in q.

We repeat this process with the redefined problem until the network contains no supply node (and hence no demand node). Then we select a transhipment node with at least one outgoing arc with positive flow as the starting node, and repeat the procedure, which in this case must find a cycle. We terminate when x = 0 for the redefined problem. Clearly, the original flow is the sum of flows on the paths and cycles identified by the procedure. Now observe that each time we identify a path, we reduce the supply/demand of some node or the flow on some arc to zero, and each time we identify a cycle, we reduce the flow on some arc to zero. Consequently, the path and cycle representation of the given flow x contains at most (n + m) total paths and cycles, of which there are at most m cycles.
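The proof is constructive, and a compact Python rendering of it (our own sketch, for integral nonnegative arc flows satisfying the mass balance constraints) follows; trace walks along positive-flow arcs until it reaches a demand node or closes a cycle.

def decompose(x, b):
    # x: dict mapping each arc (i, j) to its positive flow;
    # b: dict mapping each node to its supply (> 0) or demand (< 0)
    x, b = dict(x), dict(b)
    paths, cycles = [], []

    def positive_arc_from(i):
        for (u, v), val in x.items():
            if u == i and val > 0:
                return (u, v)

    def trace(start):
        # walk along positive-flow arcs until a demand node is reached
        # (a path) or a node repeats (a directed cycle)
        trail, position, i = [], {start: 0}, start
        while True:
            if b.get(i, 0) < 0:
                return trail, False                 # supply-to-demand path
            u, v = positive_arc_from(i)
            trail.append((u, v))
            i = v
            if i in position:
                return trail[position[i]:], True    # directed cycle
            position[i] = len(trail)

    while True:
        start = next((i for i in b if b[i] > 0), None)
        if start is None:
            arc = next((a for a, val in x.items() if val > 0), None)
            if arc is None:
                return paths, cycles                # everything decomposed
            start = arc[0]                          # remaining flow is a circulation
        trail, is_cycle = trace(start)
        if is_cycle:
            f = min(x[a] for a in trail)
            cycles.append((trail, f))
        else:
            i0, ik = trail[0][0], trail[-1][1]
            f = min(b[i0], -b[ik], min(x[a] for a in trail))
            b[i0] -= f
            b[ik] += f
            paths.append((trail, f))
        for a in trail:
            x[a] -= f

Applied to the arc flow produced in the previous example, decompose({(1, 2): 2, (2, 3): 3, (3, 4): 1, (4, 2): 1}, {1: 2, 2: 0, 3: -2, 4: 0}) recovers the path 1-2-3 with 2 units and the cycle 2-3-4-2 with 1 unit.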

It is possible to state the decomposition property in a somewhat more general form that permits arc flows x_ij to be negative. In this case, even though the underlying network is directed, the paths and cycles can be undirected and can contain arcs with negative flows. Each undirected path, which has an orientation from its initial to its final node, has forward arcs and backward arcs, which are defined as arcs along and opposite to the path's orientation. A path flow will be defined on p as a flow with value h(p) on each forward arc and −h(p) on each backward arc. We define a cycle flow in the same way. In this more general setting, our representation using the notation δ_ij(p) and δ_ij(q) is still valid with the following provision: we now define δ_ij(p) and δ_ij(q) to be −1 if arc (i, j) is a backward arc of the path or cycle.

Theorem 2.2: Flow Decomposition Property (Undirected Case). Every path and cycle flow has a unique representation as arc flows. Conversely, every arc flow x can be represented as an (undirected) path and cycle flow (though not necessarily uniquely) with the following three properties:

C2.3. Every path with positive flow connects a source node of x to a sink node of x.

C2.4. For every path and cycle, any arc with positive flow occurs as a forward arc and any arc with negative flow occurs as a backward arc.

C2.5. At most n+m paths and cycles have nonzero flow; out of these, at most m cycles have nonzero flow.

Proof. This proof is similar to that of Theorem 2.1. The major modification is that we extend the path at some node i_k by adding an arc (i_k, i_{k+1}) with positive flow or an arc (i_{k+1}, i_k) with negative flow. The other steps can be modified accordingly.

The flow decomposition property has a number of important consequences. As one example, it enables us to compare any two solutions of a network flow problem in a particularly convenient way and to show how we can build one solution from another by a sequence of simple operations.

We need the concept of augmenting cycles with respect to a flow x. A cycle q with flow f(q) > 0 is called an augmenting cycle with respect to a flow x if 0 ≤ x_ij + δ_ij(q) f(q) ≤ u_ij for each arc (i, j) ∈ q.

In other words, the flow remains feasible if some positive amount of flow (namely f(q)) is augmented around the cycle q. We define the cost of an augmenting cycle q as c(q) = Σ_{(i,j)∈q} c_ij δ_ij(q). The cost of an augmenting cycle represents the change in cost of a feasible solution if we augment along the cycle with one unit of flow; the change in flow cost for augmenting around cycle q with flow f(q) is c(q) f(q).

Suppose that 0 ≤ x ≤ u and 0 ≤ y ≤ u are any two solutions to a network flow problem, i.e., Nx = b and Ny = b. Then the difference vector z = y − x satisfies the homogeneous equations Nz = Ny − Nx = 0. Consequently, flow decomposition implies that z can be represented as cycle flows, i.e., we can find at most r ≤ m cycle flows f(q_1), f(q_2), ..., f(q_r) satisfying the property that for each arc (i, j) of A, the arc (i, j) is either a forward arc on each cycle q_1, q_2, ..., q_r that contains it or a backward arc on each cycle q_1, q_2, ..., q_r that contains it, and

z_ij = δ_ij(q_1) f(q_1) + δ_ij(q_2) f(q_2) + ... + δ_ij(q_r) f(q_r).

Since y = x + z, for any arc (i, j) we have

0 ≤ y_ij = x_ij + δ_ij(q_1) f(q_1) + δ_ij(q_2) f(q_2) + ... + δ_ij(q_r) f(q_r) ≤ u_ij.

Now note that each term between x_ij and the rightmost inequality in this expression has the same sign; moreover, 0 ≤ x_ij ≤ u_ij. Consequently, for each cycle q_k, 0 ≤ x_ij + δ_ij(q_k) f(q_k) ≤ u_ij for each arc (i, j) ∈ q_k. That is, if we add any one of these cycle flows q_k to x, the resulting solution remains feasible on each arc. Hence, each cycle q_k is an augmenting cycle with respect to the flow x. We have thus established the following important result.

Theorem 2.3: Augmenting Cycle Property. Let x and y be any two feasible solutions of a network flow problem. Then y equals x plus the flow on at most m augmenting cycles with respect to x. Further, the cost of y equals the cost of x plus the cost of flow on the augmenting cycles.

The augmenting cycle property permits us to formulate optimality conditions for characterizing the optimum solution of the minimum cost flow problem. Suppose that x is any feasible solution, that x* is an optimum solution of the minimum cost flow problem, and that x ≠ x*. The augmenting cycle property implies that the difference vector x* − x can be decomposed into at most m augmenting cycles, and the sum of the costs of these cycles equals cx* − cx. If cx* < cx, then one of these cycles must have a negative cost. Further, if every augmenting cycle in the decomposition of x* − x has a nonnegative cost, then cx* − cx ≥ 0. Since x* is an optimum flow, cx* = cx and x is also an optimum flow. We have thus obtained the following result.

Theorem 2.4: Optimality Conditions. A feasible flow x is an optimum flow if and only if it admits no negative cost augmenting cycle.

2.2 Cycle Free and Spanning Tree Solutions

We start by assuming that x is a feasible solution to the network flow problem

minimize {cx : Nx = b and l ≤ x ≤ u},

and that l = 0. Much of the underlying theory of network flows stems from a simple observation concerning the example in Figure 2.1. In the example, the arc flows and costs are given beside each arc.

The network in this figure contains flow around an undirected cycle. Note that adding a given amount of flow θ to all the arcs pointing in a clockwise direction and subtracting this flow from all arcs pointing in the counterclockwise direction preserves the mass balance at each of the nodes. Also, note that the per unit incremental cost for this flow change is the sum of the costs of the clockwise arcs minus the sum of the costs of the counterclockwise arcs, i.e.,

Per unit change in cost = Δ = $2 + $1 + $3 − $4 − $3 = −$1.

Let us refer to this incremental cost Δ as the cycle cost and say that the cycle is a negative, positive or zero cost cycle depending upon the sign of Δ. Let us assume for the time being that all arcs are uncapacitated. Then, to preserve nonnegativity of all arc flows, we must select θ in the interval −2 ≤ θ ≤ 3 (that is, 3 − θ ≥ 0, 4 − θ ≥ 0, 2 + θ ≥ 0, 4 + θ ≥ 0, and 5 + θ ≥ 0). Since the objective function depends linearly on θ, we optimize it by selecting θ = 3 or θ = −2, at which point one arc in the cycle has a flow value of zero. Because the cycle cost here is negative, to minimize cost in our example we set θ as large as possible, i.e., θ = 3, and we no longer have positive flow on all arcs in the cycle. Similarly, if the cycle cost were positive (e.g., if we were to change c_12 from 2 to 4), then we would decrease θ as much as possible (i.e., θ = −2) and again find a lower cost solution with the flow on at least one arc in the cycle at value zero.

Figure 2.1. Improving flow around a cycle.

We can extend this observation in several ways:

(i) If the per unit cycle cost Δ = 0, then we are indifferent to all solutions in the interval −2 ≤ θ ≤ 3 and therefore can again choose a solution as good as the original one, but with the flow of at least one arc in the cycle at value zero.

(ii) If we impose upper bounds on the flow, such as 6 units on all arcs, then the range of flows that preserves feasibility (mass balances, lower and upper bounds on flows) is again an interval, in this case −2 ≤ θ ≤ 1, and we can find a solution as good as the original one by choosing θ = −2 or θ = 1. At these values of θ the solution is cycle free; that is, for some arc on the cycle, either the flow is zero (the lower bound) or the flow is at its upper bound (x_12 = 6 at θ = 1).

Some additional notation will be helpful in encapsulating and summarizing our observations up to this point. Let us say that an arc (i, j) is a free arc with respect to a given feasible flow x if x_ij lies strictly between the lower and upper bounds imposed upon it. We will also say that arc (i, j) is restricted if its flow x_ij equals either its lower or upper bound. In this terminology, a solution x has the "cycle free property" if the network contains no cycle made up entirely of free arcs.

In general, our prior observations apply to any cycle in a network. Therefore, given any initial flow, we can apply our previous argument repeatedly, one cycle at a time, and establish the following fundamental result:

Theorem 2.5: Cycle Free Property. If the objective function value of the network optimization problem minimize {cx : Nx = b, l ≤ x ≤ u} is bounded from below on the feasible region and the problem has a feasible solution, then at least one cycle free solution solves the problem.

Note that the lower bound assumption imposed upon the objective value is necessary to rule out situations in which the flow change variable θ in our prior argument can be made arbitrarily large in a negative cost cycle, or arbitrarily small (negative) in a positive cost cycle; for example, this condition rules out any negative cost directed cycle with no upper bounds on its arc flows.
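This one-cycle improvement step is easy to express in code. The following Python fragment (our own sketch, loaded with the data of the example above) computes the feasible interval for θ and moves to the cost-minimizing endpoint.

def best_theta(forward, backward):
    # forward: arcs whose flow becomes x + theta; backward: x - theta.
    # Each arc is a tuple (x, lower, upper, cost).
    delta = sum(c for *_, c in forward) - sum(c for *_, c in backward)
    hi = min([u - x for x, l, u, c in forward] +
             [x - l for x, l, u, c in backward])    # largest feasible theta
    lo = -min([x - l for x, l, u, c in forward] +
              [u - x for x, l, u, c in backward])   # smallest feasible theta
    # cost changes by delta per unit of theta: move to the cheaper endpoint
    return hi if delta < 0 else lo

inf = float('inf')
forward = [(2, 0, inf, 2), (4, 0, inf, 1), (5, 0, inf, 3)]   # clockwise arcs
backward = [(3, 0, inf, 4), (4, 0, inf, 3)]                  # counterclockwise
print(best_theta(forward, backward))   # 3, driving one arc flow to zero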

It is useful to interpret the cycle free property in another way. Suppose that the network is connected (i.e., there is an undirected path connecting every two pairs of nodes). Then, either a given cycle free solution x contains a free arc that is incident to each node in the network, or we can add to the free arcs some restricted arcs so that the resulting set S of arcs has the following three properties:

(i) S contains all the free arcs in the current solution,

(ii) S contains no undirected cycles, and

(iii) no superset of S satisfies properties (i) and (ii).

We will refer to any set S of arcs satisfying (i) through (iii) as a spanning tree of the network, and to any feasible solution x for the network together with a spanning tree S that contains all free arcs as a spanning tree solution. (At times we will also refer to a given cycle free solution x as a spanning tree solution, with the understanding that restricted arcs may be needed to form the spanning tree S.)

Figure 2.2 illustrates a spanning tree corresponding to a cycle free solution. Note that it may be possible (and often is) to complete the set of free arcs into a spanning tree in several ways (e.g., replace arc (2, 4) with arc (3, 5) in Figure 2.2(c)); therefore, a given cycle free solution can correspond to several spanning trees S.

We will say that a spanning tree solution x is nondegenerate if the set of free arcs forms a spanning tree. In this case, the spanning tree S corresponding to the flow x is unique. If the free arcs do not span (i.e., are not incident to) all the nodes, then any spanning tree corresponding to this solution will contain at least one arc whose flow equals the arc's lower or upper bound. In this case, we will say that the spanning tree is degenerate.

Figure 2.2. Converting a cycle free solution to a spanning tree solution: (a) an example network with arc flows and capacities represented as (x_ij, u_ij); (b) a cycle free solution; (c) a spanning tree solution.

When restated in the terminology of spanning trees, the cycle free property becomes another fundamental result in network flow theory.

Theorem 2.6: Spanning Tree Property. If the objective function value of the network optimization problem minimize {cx : Nx = b, l ≤ x ≤ u} is bounded from below on the feasible region and the problem has a feasible solution, then at least one spanning tree solution solves the problem.

We might note that the spanning tree property is valid for concave cost versions of the flow problem as well, i.e., those versions where the objective function is a concave function of the flow vector x. This extended version of the spanning tree property is valid because if the incremental cost of a cycle is negative at some point, then the incremental cost remains negative (by concavity) as we augment a positive amount of flow around the cycle. Hence, we can increase flow in a negative cost cycle until at least one arc reaches its lower or upper bound.

2.3 Networks, Linear and Integer Programming

The cycle free property and spanning tree property have many other important consequences. In particular, these two properties imply that network flow theory lies at the cusp between two large and important subfields of optimization: linear and integer programming. This positioning may, to a large extent, account for the emergence of network flow theory as a cornerstone of mathematical programming.

Triangularity Property

Before establishing our first results relating network flows to linear and integer programming, we first make a few observations. Note that any spanning tree S has at least one (actually at least two) leaf nodes, that is, a node that is incident to only one arc in the spanning tree. Consequently, if we rearrange the rows and columns of the node-arc incidence matrix of S so that the leaf node is row 1 and its incident arc is column 1, then row 1 has only a single nonzero entry, a +1 or a −1, which lies on the diagonal of the node-arc incidence matrix. If we now remove this leaf node and its incident arc from S, the resulting network is a spanning tree on the remaining nodes. Consequently, by rearranging all but row and column 1 of the node-arc incidence matrix for the spanning tree, we can now assume that row 2 has a +1 or −1 element on the diagonal and zeros to the right of the diagonal. Continuing in this way permits us to rearrange the node-arc incidence matrix of the spanning tree so that its first n−1 rows form a lower triangular matrix. Figure 2.3 shows the resulting lower triangular form L (actually, one of several possibilities) for the spanning tree in Figure 2.2(c).

Figure 2.3. The lower triangular form L of the node-arc incidence matrix corresponding to the spanning tree in Figure 2.2(c), with rows indexed by nodes.
Consider now a spanning tree solution x, and suppose that we partition the flow vector as x = (x^1, x^2), with x^2 denoting the flows on the nontree arcs, each component of which equals an arc lower or upper bound. The mass balance constraints then reduce to the system

L x^1 = b − M x^2,    (2.1)

in which L is the lower triangular incidence matrix of the spanning tree (after eliminating the redundant row) and M has integer components (each equal to 0, +1, or −1). Now suppose further that the supply/demand vector b and the lower and upper bound vectors l and u have all integer components. Then the right hand side vector b − Mx^2 is integer as well. Since the first diagonal element of L equals +1 or −1, the first equation in (2.1) implies that the first component of x^1 is integral; now if we move this component to the right of the equality, the right hand side remains integral, and we can solve for the second component from the second equation; continuing this forward substitution, successively solving for one variable at a time, shows that x^1 is integral. This argument shows that for problems with integral data, every spanning tree solution is integral. Since the spanning tree property ensures that network flow problems always have spanning tree solutions, we have established the following fundamental result.

Theorem 2.8: Integrality Property. If the objective value of the network optimization problem minimize {cx : Nx = b, l ≤ x ≤ u} is bounded from below on the feasible region, the problem has a feasible solution, and the vectors b, l, and u are integer, then the problem has at least one integer optimum solution.

Our observation at the end of Section 2.2 shows that this integrality property is also valid in the more general situation in which the objective function is concave.
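A tiny Python illustration of this forward substitution (our own, on a made-up lower triangular system with ±1 diagonal) shows why integrality is automatic:

def forward_substitute(L, b):
    # L: lower triangular with diagonal entries +1 or -1; b: integer vector.
    # Each step solves one equation; dividing by +1 or -1 keeps every
    # component an integer.
    x = []
    for i in range(len(b)):
        residue = b[i] - sum(L[i][j] * x[j] for j in range(i))
        x.append(residue * L[i][i])   # dividing by +1 or -1 = multiplying by it
    return x

L = [[1, 0, 0],
     [-1, 1, 0],
     [0, -1, -1]]
print(forward_substitute(L, [3, 1, 2]))   # [3, 4, -6]: integral throughout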

Relationship to Linear Programming. The network flow problem with the objective function cx is a linear program which, as the last result shows, always has an integer solution. Network flow problems are distinguished as the most important large class of problems with this property. Linear programs, or generalizations with concave cost objective functions, also satisfy another well-known property: they always have, in the parlance of convex analysis, extreme point solutions, that is, solutions x with the property that x cannot be expressed as a weighted combination of two other feasible solutions y and z, as x = αy + (1−α)z for some weight 0 < α < 1. Since, as we have seen, network flow problems always have cycle free solutions, we might expect to discover that extreme point solutions and cycle free solutions are closely related, and indeed they are, as shown by the next result.

Theorem 2.9: Extreme Point Property. If the objective value of the network optimization problem minimize {cx : Nx = b, l ≤ x ≤ u} is bounded from below on the feasible region and the problem has a feasible solution, then the problem has an extreme point solution. Further, every cycle free solution is an extreme point and, conversely, every extreme point is a cycle free solution.

Proof. With the background developed already, this result is easy to establish. First, if x is not a cycle free solution, then it cannot be an extreme point, since by perturbing the flow by a small amount θ and by a small amount −θ around a cycle with free arcs, as in our discussion of Figure 2.1, we define two feasible solutions y and z with the property that x = (1/2)y + (1/2)z. Conversely, suppose that x is not an extreme point and is represented as x = αy + (1−α)z with 0 < α < 1. Let x', y' and z' be the components of these vectors for which y and z differ, i.e., l_ij ≤ y_ij < x_ij < z_ij ≤ u_ij or l_ij ≤ z_ij < x_ij < y_ij ≤ u_ij, and let N' denote the submatrix of N corresponding to these arcs (i, j). Then N'(z' − y') = 0, which implies, by flow decomposition, that the network contains an undirected cycle with y_ij not equal to z_ij for any arc on the cycle. But by the definition of the components x', y' and z', this cycle contains only free arcs in the solution x. Therefore, x is not a cycle free solution.

In linear programming, extreme points are usually represented algebraically as basic solutions; for these special solutions, the columns B of the constraint matrix of a linear program corresponding to variables strictly between their lower and upper bounds are linearly independent. We can extend B to a basis of the constraint matrix by adding a maximal number of columns. Just as cycle free solutions for network flow problems correspond to extreme points, spanning tree solutions correspond to basic solutions.

Theorem 2.10: Basis Property. Every spanning tree solution to a network flow problem is a basic solution and, conversely, every basic solution is a spanning tree solution.

Let us now make one final connection between networks and linear and integer programming, namely, between bases and the integrality property. Consider a linear program of the form Ax = b, and suppose that N = [B, M] for some basis B and that x = (x^1, x^2) is a compatible partitioning of x. Also suppose that we eliminate the redundant row so that B is a nonsingular matrix. Then

Bx^1 = b − Mx^2, or x^1 = B^{−1}(b − Mx^2). By Cramer's rule from linear algebra, it is possible to find each component of x^1 as sums and multiples of components of b' = b − Mx^2 and determinants of submatrices of B, divided by the determinant of B. Therefore, if the determinant of B equals +1 or −1, then x^1 is an integer vector whenever x^2, b, and M are composed of all integers. Let us call a matrix A unimodular if all of its bases have determinants either +1 or −1, and call it totally unimodular if all of its square submatrices have determinant equal to either 0, +1, or −1.

How are these notions related to network flows and the integrality property? Since bases of N correspond to spanning trees, the triangularity property shows that the determinant of any basis (excluding the redundant row now) equals the product of the diagonal elements in the triangular representation of the basis, and therefore equals +1 or −1. Consequently, a node-arc incidence matrix is unimodular. Even more, it is totally unimodular. For let S be any square submatrix of N. If S is singular, it has determinant 0. Otherwise, S must correspond to a cycle free solution, which is a spanning tree on each of its connected components; but then it is easy to see that the determinant of S is the product of the determinants of these spanning trees, and therefore it must be equal to ±1. (An induction argument, using an expansion of determinants by minors, provides an alternate proof of this totally unimodular property.) We have thus established the following result.

Theorem 2.11: Total Unimodularity Property. The constraint matrix of a minimum cost network flow problem is totally unimodular.

2.4 Network Transformations

Frequently, analysts use network transformations to simplify a network problem, to show equivalences of different network problems, or to put a network problem into a standard form required by a computer code. In this subsection, we describe some of these important transformations.

T1. (Removing Nonzero Lower Bounds). If an arc (i, j) has a positive lower bound l_ij, then we can replace x_ij by x'_ij + l_ij in the problem formulation. As measured by the new variable x'_ij, the flow on arc (i, j) will have a lower bound of 0. This transformation has a simple network interpretation: we begin by sending l_ij units of flow on the arc and then measure incremental flow above l_ij. As Figure 2.4 shows, the supplies/demands change from b(i) and b(j) to b(i) − l_ij and b(j) + l_ij, and the arc data change from (c_ij, u_ij) to (c_ij, u_ij − l_ij).

Figure 2.4. Transformation for removing a nonzero lower bound on arc flow.

T2. (Removing Capacities). If an arc (i, j) has a positive capacity u_ij, then we can remove the capacity, making the arc uncapacitated, using the following ideas. The capacity constraint x_ij ≤ u_ij can be written as x_ij + s_ij = u_ij, if we introduce a slack variable s_ij ≥ 0. Multiplying both sides by −1, we obtain

−x_ij − s_ij = −u_ij.    (2.2)

This transformation is tantamount to turning the slack variable into an additional node k with equation (2.2) as the mass balance constraint for that node. Observe that the variable x_ij now appears in three mass balance constraints and s_ij in only one. By subtracting (2.2) from the mass balance constraint of node j, we assure that each of x_ij and s_ij appears in exactly two constraints: in one with a positive sign and in the other with a negative sign. These algebraic manipulations correspond to the network transformation shown in Figure 2.5: the new node k has supply/demand −u_ij, node j's supply/demand becomes b(j) + u_ij, and arc (i, j) is replaced by the uncapacitated arcs (i, k), with cost c_ij, and (j, k), with cost 0.

Figure 2.5. Transformation for removing an arc capacity.

In the network context, this transformation implies the following. If x_ij is a flow on arc (i, j) in the original network, the corresponding flow in the transformed network is x_ik = x_ij and x_jk = u_ij − x_ij; both flows have the same cost. Likewise, a flow (x_ik, x_jk) in the transformed network yields a flow x_ij = x_ik of the same cost in the original network. Further, since x_ik + x_jk = u_ij and both x_ik and x_jk are nonnegative, x_ij = x_ik ≤ u_ij. Therefore, the transformation is valid.
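Both transformations are mechanical enough to state in a few lines of Python (our own sketch; an arc is a tuple (i, j, cost, lower, upper) and b is a dict of supplies/demands):

def remove_lower_bound(arc, b):
    # T1: send the lower bound through the arc and measure flow above it
    i, j, c, low, up = arc
    b[i] -= low
    b[j] += low
    return (i, j, c, 0, up - low)

def remove_capacity(arc, b, k):
    # T2: replace the capacitated arc (i, j) by uncapacitated arcs (i, k)
    # and (j, k), where the new node k absorbs the capacity as a demand
    i, j, c, low, up = arc
    assert low == 0          # apply T1 first if the lower bound is nonzero
    inf = float('inf')
    b[k] = b.get(k, 0) - up
    b[j] += up
    return [(i, k, c, 0, inf), (j, k, 0, 0, inf)]

b = {1: 4, 2: -4}
arc = remove_lower_bound((1, 2, 3, 1, 5), b)   # arc becomes (1, 2, 3, 0, 4)
print(remove_capacity(arc, b, 'k'), b)
# [(1, 'k', 3, 0, inf), (2, 'k', 0, 0, inf)] {1: 3, 2: 1, 'k': -4}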

T3. (Arc Reversal). Let u_ij represent the capacity of the arc (i, j), or an upper bound on the arc flow if the arc is uncapacitated. The transformation is a change in variable: replace x_ij by u_ij − x_ji in the problem formulation. Doing so replaces the arc (i, j), with its associated cost c_ij, by the arc (j, i) with cost −c_ij. This transformation has the following network interpretation: send u_ij units of flow on the arc and then replace arc (i, j) by arc (j, i) with cost −c_ij, as in Figure 2.6. The new flow x_ji measures the amount of flow we "remove" from the "full capacity" flow of u_ij; the supplies/demands change from b(i) and b(j) to b(i) − u_ij and b(j) + u_ij. This transformation permits us to remove arcs with negative costs.

Figure 2.6. An example of arc reversal.

T4. (Node Splitting). This transformation splits each node i into two nodes i and i', and replaces each original arc (i, j) by an arc (i', j) of the same cost and capacity; each arc (k, i) entering node i keeps the same cost and capacity. We also add an arc (i, i') of cost zero for each node i. Figure 2.7 illustrates the resulting network when we carry out the node splitting transformation for all the nodes of a network.

i').7. We to shall see the usefulness of this transformation in Section 5.11 when we use it reduce a shortest path problem with arbitrary arc lengths to an assignment problem. (b) The transformed network. (a) The original network. is This transformation also used in practice for representing node activities and node data in the standard "arc flow" form of the network flow problem: the cost or capacity for the throughput of we simply associate arc (i. node i with the new throughput .48 (a) (b) Figure 2.

3. SHORTEST PATHS

Shortest path problems are the most fundamental and also the most commonly encountered problems in the study of transportation and communication networks. The shortest path problem arises when trying to determine the shortest, cheapest, or most reliable path between one or many pairs of nodes in a network. More importantly, algorithms for a wide variety of combinatorial optimization problems such as vehicle routing and network design often call for the solution of a large number of shortest path problems as subroutines. Consequently, designing and testing efficient algorithms for the shortest path problem has been a major area of research in network optimization.

Researchers have studied several different (directed) shortest path models. The major types of shortest path problems, in increasing order of solution difficulty, are (i) finding shortest paths from one node to all other nodes when arc lengths are nonnegative; (ii) finding shortest paths from one node to all other nodes for networks with arbitrary arc lengths; (iii) finding various types of constrained shortest paths between nodes (e.g., shortest paths with turn penalties, shortest paths visiting specified nodes, the k-th shortest path); and (iv) finding shortest paths from every node to every other node. In this section, we discuss problem types (i), (ii) and (iv).

The algorithmic approaches for solving problem types (i) and (ii) can be classified into two groups: label setting and label correcting. Label setting methods designate one or more labels as permanent (optimum) at each iteration; label correcting methods consider all labels as temporary until the final step, when they all become permanent. The label setting methods are applicable to networks with nonnegative arc lengths, whereas label correcting methods apply to networks with negative arc lengths as well. Each approach assigns tentative distance labels (shortest path distances) to nodes at each step. We will show that label setting methods have the most attractive worst-case performance; nevertheless, practical experience has shown the label correcting methods to be modestly more efficient.

Dijkstra's algorithm is the most popular label setting method. In this section, we first discuss a simple implementation of this algorithm that achieves a time bound of O(n^2). We then describe two more sophisticated implementations that achieve improved running times in practice and in theory. Next, we consider a generic version of the label correcting method, outlining one special implementation of this general approach that runs in polynomial time and another implementation that performs very

well in practice. Finally, we discuss a method to solve the all pairs shortest path problem.

3.1 Dijkstra's Algorithm

We consider a network G = (N, A) with an arc length c_ij associated with each arc (i, j) ∈ A. Let A(i) represent the set of arcs emanating from node i ∈ N, and let C = max {c_ij : (i, j) ∈ A}. In this section, we assume that arc lengths are integer numbers, and in this section as well as in Sections 3.2 and 3.3 we further assume that arc lengths are nonnegative. We suppose that node s is a specially designated node, and assume without any loss of generality that the network G contains a directed path from s to every other node. We can ensure this condition by adding an artificial arc (s, j), with a suitably large arc length, for each node j. We invoke this connectivity assumption throughout this section.

Dijkstra's algorithm finds shortest paths from the source node s to all other nodes. The basic idea of the algorithm is to fan out from node s and label nodes in order of their distances from s. Each node i has a label, denoted by d(i): the label is permanent once we know that it represents the shortest distance from s to i, and temporary otherwise. Initially, we give node s a permanent label of zero, and each other node j a temporary label equal to c_sj if (s, j) ∈ A, and ∞ otherwise. At each iteration, the label of a node i is its shortest distance from the source node along a path whose internal nodes are all permanently labeled. The algorithm selects a node i with the minimum temporary label, makes it permanent, and scans the arcs in A(i) to update the distance labels of adjacent nodes. The algorithm terminates when it has designated all nodes as permanently labeled. The correctness of the algorithm relies on the key observation (which we prove later) that it is always possible to designate the node with the minimum temporary label as permanent. The following algorithmic representation is a

basic implementation of Dijkstra's algorithm.

algorithm DIJKSTRA;
begin
    P := {s}; T := N − {s};
    d(s) := 0 and pred(s) := 0;
    d(j) := c_sj and pred(j) := s if (s, j) ∈ A, and d(j) := ∞ otherwise;
    while P ≠ N do
    begin
        (node selection) let i ∈ T be a node for which d(i) = min {d(j) : j ∈ T};
        P := P ∪ {i}; T := T − {i};
        (distance update) for each (i, j) ∈ A(i) do
            if d(j) > d(i) + c_ij then d(j) := d(i) + c_ij and pred(j) := i;
    end;
end;

The algorithm associates a predecessor index, denoted by pred(i), with each node i ∈ N. The algorithm updates these indices to ensure that pred(i) is the last node prior to i on the (tentative) shortest path from node s to node i. At termination, these indices allow us to trace back along a shortest path from each node to the source.

To establish the validity of Dijkstra's algorithm, we use an inductive argument. At each point in the algorithm, the nodes are partitioned into two sets, P and T. Assume that the label of each node in P is the length of a shortest path from the source, whereas the label of each node j in T is the length of a shortest path subject to the restriction that each node in the path (except j) belongs to P. Then it is possible to transfer the node i in T with the smallest label d(i) to P for the following reason: any path from the source to node i must contain a first node k that is in T. However, node k must be at least as far away from the source as node i, since its label is at least that of node i; furthermore, the segment of the path between node k and node i has a nonnegative length because arc lengths are nonnegative. This observation shows that the length of the path is at least d(i), and hence it is valid to permanently label node i. After the algorithm has permanently labeled node i, the temporary labels of some nodes in T − {i} might decrease, because node i could become an internal node in the tentative shortest paths to these nodes. We must thus scan all of the arcs (i, j) in A(i); if d(j) > d(i) + c_ij, then setting d(j) := d(i) + c_ij updates the labels of nodes in T − {i}.

The computational time for this algorithm can be split into the time required by its two basic operations: selecting nodes and updating distances. In an iteration, the algorithm requires O(n) time to identify the node with minimum temporary label and takes O(|A(i)|) time to update the distance labels of adjacent nodes.
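For concreteness, here is a direct Python transcription of this pseudocode (our own sketch, using a dict-based adjacency list and the O(n) node selection just described):

def dijkstra(N, A, s):
    # N: list of nodes; A: dict mapping node i to a list of (j, c_ij) pairs
    inf = float('inf')
    d = {j: inf for j in N}
    pred = {j: None for j in N}
    d[s] = 0
    T = set(N)                             # temporarily labeled nodes
    while T:
        i = min(T, key=lambda j: d[j])     # node selection: O(n) per iteration
        T.remove(i)                        # the label of node i becomes permanent
        for j, c in A.get(i, []):          # distance updates along A(i)
            if d[j] > d[i] + c:
                d[j] = d[i] + c
                pred[j] = i
    return d, pred

N = [1, 2, 3, 4]
A = {1: [(2, 2), (3, 4)], 2: [(3, 1), (4, 7)], 3: [(4, 3)]}
print(dijkstra(N, A, 1)[0])   # {1: 0, 2: 2, 3: 3, 4: 6}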

Thus, overall, the algorithm requires O(n^2) time for selecting nodes and O(Σ_{i∈N} |A(i)|) = O(m) time for updating distances. This implementation of Dijkstra's algorithm thus runs in O(n^2) time.

Dijkstra's algorithm has been a subject of much research. Researchers have attempted to reduce the node selection time without substantially increasing the time for updating distances. Consequently, using clever data structures, they have suggested several implementations of the algorithm. These implementations have either dramatically reduced the running time of the algorithm in practice or improved its worst case complexity. In the following discussion, we describe Dial's algorithm, which is currently comparable to the best label setting algorithm in practice. Subsequently we describe an implementation using R-heaps, which is nearly the best known implementation of Dijkstra's algorithm from the perspective of worst-case analysis. (A more complex version of R-heaps gives the best worst-case performance for all choices of the parameters n, m, and C.)

3.2 Dial's Implementation

The bottleneck operation in Dijkstra's algorithm is node selection. To improve the algorithm's performance, we must ask the following question: instead of scanning all temporarily labeled nodes at each iteration to find the one with the minimum distance label, can we reduce the computation time by maintaining distances in a sorted fashion? Dial's algorithm tries to accomplish this objective, and reduces the algorithm's computation time in practice, using the following fact:

FACT 3.1. The distance labels that Dijkstra's algorithm designates as permanent are nondecreasing.

This fact follows from the observation that the algorithm permanently labels a node i with the smallest temporary label d(i), and, while scanning arcs in A(i) during the distance update step, never decreases the distance label of any permanently labeled node, since arc lengths are nonnegative. FACT 3.1 suggests the following scheme for node selection. We maintain nC+1 buckets numbered 0, 1, 2, ..., nC. Bucket k stores each node whose temporary distance label is k. Recall that C represents the largest arc length in the network and, hence, nC is an upper bound on the distance labels of all the nodes. In the node selection step, we scan the buckets in increasing order until we identify the first nonempty bucket. The distance label of each node in this bucket is minimum. One by one, we delete these nodes from the bucket, making them permanent and scanning their arc lists to update distance labels of adjacent nodes. We then resume the scanning of higher numbered buckets in increasing order to select the next nonempty bucket.

By storing the content of these buckets carefully, it is possible to add, delete, and select the next element of any bucket very efficiently, in fact, in a time bounded by some constant. One implementation uses a data structure known as a doubly linked list. In this data structure, we order the content of each bucket arbitrarily, storing two pointers for each entry: one pointer to its immediate predecessor and one to its immediate successor. Doing so permits us, by rearranging the pointers, to select easily the topmost node from the list, add a bottommost node, or delete a node. Consequently, we can add or delete nodes, and decrease any node's temporary distance label (moving it from a higher index bucket to a lower index bucket), in O(1) time. Hence, this algorithm runs in O(m + nC) time and uses nC+1 buckets. The following fact allows us to reduce the number of buckets to C+1.

FACT 3.2. If d(i) is the distance label that the algorithm designates as permanent at the beginning of an iteration, then at the end of that iteration d(j) ≤ d(i) + C for each finitely labeled node j in T.

This fact follows by noting that (i) d(k) ≤ d(i) for each k ∈ P (by FACT 3.1), and (ii) for each finitely labeled node j in T, d(j) = d(k) + c_kj for some k ∈ P (by the property of distance updates). Hence, d(j) ≤ d(i) + c_kj ≤ d(i) + C. In other words, all finite temporary labels are bracketed from below by d(i) and from above by d(i) + C. Consequently, C+1 buckets suffice to store nodes with finite temporary distance labels. We need not store the nodes with infinite temporary distance labels in any of the buckets; we can add them to a bucket when they first receive a finite distance label.

Dial's algorithm uses C+1 buckets numbered 0, 1, 2, ..., C, which can be viewed as arranged in a circle as in Figure 3.1. This implementation stores a temporarily labeled node j with distance label d(j) in the bucket d(j) mod (C+1). Consequently, during the entire execution of the algorithm, bucket k stores temporarily labeled nodes with distance labels k, k+(C+1), k+2(C+1), and so forth; however, because of FACT 3.2, at any point in time this bucket will hold only nodes with the same distance label. This storage scheme also implies that if bucket k contains a node with the minimum distance label, then buckets k+1, k+2, ..., C, 0, 1, ..., k−1 store nodes in increasing values of the distance labels.
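A compact Python rendering of Dial's scheme (our own sketch; Python sets stand in for the doubly linked lists) may help make the wrap-around bucket structure concrete:

def dial(N, A, s, C):
    # A: dict i -> list of (j, c_ij) with integer lengths 0 <= c_ij <= C;
    # assumes every node is reachable from s
    inf = float('inf')
    d = {j: inf for j in N}
    d[s] = 0
    buckets = [set() for _ in range(C + 1)]   # node j lives in d(j) mod (C+1)
    buckets[0].add(s)
    k = 0                                     # current scan position
    for _ in range(len(N)):
        while not buckets[k % (C + 1)]:       # wrap-around scan for the
            k += 1                            # next nonempty bucket
        i = buckets[k % (C + 1)].pop()        # permanently label node i
        for j, c in A.get(i, []):
            if d[i] + c < d[j]:
                if d[j] < inf:
                    buckets[d[j] % (C + 1)].discard(j)
                d[j] = d[i] + c               # move j to its new bucket
                buckets[d[j] % (C + 1)].add(j)
    return d

N = [1, 2, 3, 4]
A = {1: [(2, 2), (3, 4)], 2: [(3, 1), (4, 7)], 3: [(4, 3)]}
print(dial(N, A, 1, 7))   # {1: 0, 2: 2, 3: 3, 4: 6}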

Figure 3.1. Bucket arrangement in Dial's algorithm.

Dial's algorithm examines the buckets sequentially, in a wrap around fashion, to identify the first nonempty bucket. In the next iteration, it reexamines the buckets starting at the place where it left off earlier. A potential disadvantage of this scheme, as compared to the original algorithm, is that C may be very large, necessitating large storage and increased computational time. In addition, the algorithm may wrap around as many as n−1 times, resulting in a large computation time. The algorithm runs in O(m + nC) time, which is not even polynomial time; rather, it is pseudopolynomial time. For example, if C = n^4, then the algorithm runs in O(n^5) time, and if C = 2^n the algorithm takes exponential time in the worst case. The algorithm, however, typically does not encounter these difficulties in practice: for most applications, C is not very large, and the number of passes through all of the buckets is much less than n.

The search for the theoretically fastest implementations of Dijkstra's algorithm has led researchers to develop several new data structures for sparse networks. In the next section, we consider an implementation using a data structure called a redistributive heap (R-heap) that runs in O(m + n log nC) time. The discussion of this implementation is of a more advanced nature than the previous sections, and the reader can skip it without any loss of continuity.

3.3 R-Heap Implementation

Our first O(n^2) implementation of Dijkstra's algorithm and then Dial's implementation represent two extremes. The first implementation considers all the

temporarily labeled nodes together (in one large bucket, so to speak) and searches for a node with the smallest label. Dial's algorithm separates nodes by storing any two nodes with different labels in different buckets. Could we improve upon these methods by adopting an intermediate approach, perhaps by storing many, but not all, labels in a bucket? For example, instead of storing only nodes with a temporary label of k in the k-th bucket, we could store temporary labels from 100k to 100k+99 in bucket k. The different temporary labels that can be stored in a bucket make up the range of the bucket; the cardinality of the range is called its width. For the preceding example, the range of bucket k is [100k .. 100k+99] and its width is 100.

Using widths of size k permits us to reduce the number of buckets needed by a factor of k. But in order to find the smallest distance label, we need to search all of the elements in the smallest indexed nonempty bucket. If we could devise a variable width scheme, with a width of one for the lowest numbered bucket, we could conceivably retain the advantages of both the wide bucket and the narrow bucket approaches.

The R-heap algorithm we consider next uses variable length widths and changes the ranges dynamically. In the version of redistributive heaps that we present, the widths of the buckets are 1, 1, 2, 4, 8, 16, ..., so that the number of buckets needed is only O(log nC). Moreover, we dynamically modify the ranges of numbers stored in each bucket, and we reallocate nodes with temporary distance labels in a way that stores the minimum distance label in a bucket whose width is 1. In this way, as in the previous algorithm, we avoid the need to search the entire bucket to find the minimum. In fact, the running time of this version of the R-heap algorithm is O(m + n log nC).

We now describe the R-heap in more detail. For a given shortest path problem, the R-heap consists of 1 + ⌈log nC⌉ buckets, numbered 0, 1, 2, ..., K = ⌈log nC⌉. We represent the range of bucket k by range(k), which is a (possibly empty) closed interval of integers. We store a temporary node i in bucket k if d(i) ∈ range(k); we do not store permanent nodes. The nodes in bucket k are denoted by the set CONTENT(k). The algorithm will change the ranges of the buckets dynamically, and each time it changes the ranges, it redistributes the

nodes in the buckets. Initially, the buckets have the following ranges: range(0) = [0], range(1) = [1], range(2) = [2 .. 3], range(3) = [4 .. 7], range(4) = [8 .. 15], ..., range(K) = [2^{K−1} .. 2^K − 1].

These ranges will change dynamically; however, the widths of the buckets will not increase beyond their initial widths. Essentially, we have replaced the node selection step (i.e., finding a node with smallest temporary distance label) by a sequence of redistribution steps in which we constantly shift nodes to lower indexed buckets. Eventually, the minimum temporary label is in a bucket with width one, and the algorithm selects it in an additional O(1) time. Roughly speaking, the redistribution time is O(n log nC) in total, since each node can be shifted at most K = 1 + ⌈log nC⌉ times.

Suppose, for example, that the initial minimum distance label is quickly determined to be in the range [8 .. 15]. We could verify this fact by verifying that buckets 0 through 3 are empty and bucket 4 is nonempty. Since the minimum index nonempty bucket is the bucket whose range is [8 .. 15], we could not identify the minimum distance label without searching all nodes in bucket 4. At this point, however, we know that no temporary label will ever again be less than 8, and hence buckets 0 to 3 will never be needed again. Rather than leaving these buckets idle, we can redistribute the range of bucket 4 (whose width is 8) to the previous buckets (whose combined width is 8), resulting in the ranges [8], [9], [10 .. 11], [12 .. 15]. We then set the range of bucket 4 to ∅ and shift (or redistribute) its temporarily labeled nodes into the appropriate buckets (0 to 3). Thus, each of the elements of bucket 4 moves to a lower indexed bucket.

Actually, it makes sense to carry out these operations a bit differently. Since we will be scanning all of the elements of bucket 4 in the redistribute step, we might as well first find the minimum temporary label in the bucket. Suppose, for example, that the minimum temporary label is 11. Then, rather than redistributing the range [8 .. 15], we need only redistribute the subrange [11 .. 15]. In this case the resulting ranges of buckets 0 to 3

would be [11], [12], [13 .. 14], [15], and bucket 4's range would be ∅. Moreover, at the end of this redistribution we are guaranteed that the minimum temporary label is stored in bucket 0, whose width is 1.

To reiterate, we do not carry out the actual node selection step until the minimum nonempty bucket is a bucket whose width is 1. If the minimum nonempty bucket is a bucket k whose width is greater than 1, we redistribute the range of bucket k into buckets 0 to k−1 and then reassign the content of bucket k to those buckets. The redistribution time is O(n log nC) in total, and the running time of the algorithm is O(m + n log nC).

We now illustrate R-heaps on the shortest path example given in Figure 3.2; the number beside each arc indicates its length. For this problem, C = 20 and K = ⌈log 120⌉ = 7, so the R-heap consists of buckets with the initial ranges [0], [1], [2 .. 3], [4 .. 7], [8 .. 15], [16 .. 31], [32 .. 63], and [64 .. 127].

Figure 3.2. The shortest path example (arc lengths shown beside each arc).

Figure 3.3. The starting solution of Dijkstra's algorithm and the initial R-heap: node labels d(i), bucket ranges, and bucket contents (nC = 120).

To select the node with the smallest distance label, we scan the buckets 0, 1, ..., K to find the first nonempty bucket. In our example, bucket 0 is nonempty. Since bucket 0 has width 1, every node in this bucket has the same (minimum) distance label.

So the algorithm designates node 3 as permanent, deletes node 3 from the R-heap, and scans the arc (3, 5) to change the distance label of node 5 from 20 to 9. We check whether the new distance label of node 5 is contained in the range of its present bucket. It isn't. Since its distance label has decreased, node 5 should move to a lower index bucket. So, starting at bucket 4, we sequentially scan the buckets from right to left to identify the first bucket whose range contains the number 9, which is bucket 4. Node 5 moves from bucket 5 to bucket 4. Figure 3.4 shows the new R-heap.

Figure 3.4. The R-heap after node 5 moves to bucket 4.


60 Theorem 3. The label correcting algorithms are conceptually more general than the label setting algorithms and are applicable to more general To produce situations. 3. d(j) denotes the length of a shortest path from the source node to node These equations are knov^m as Bellman's equations and represent necessary conditions These conditions are also sufficient if for optimality of the shortest path problem. 0(m this + n log C) time.. conditions which is more suitable from the viewpoint of be a set of labels. these algorithms typically require that the network does not contain any negative directed cycle. as the name implies. is possible to reduce this all bound further to 0(m + n Vlog n which is a linear time algorithm for but the sparsest classes of shortest path problems. usual. Label correcting algorithms can be viewed as a procedure for solving the following recursive equations: d(s) d(j) = 0.2) As j. For probelm that satisfy the similarity assumption (see Section bound becomes 0(m+ n it log n). every cycle in the network has a positive length. maintain tentative distance labels for nodes and correct the all labels at every iteration. Most label correcting algorithms have the capability to detect the presence of negative cycles.2). Unlike label setting algorithms. for each j e N - {s}. to networks containing negative length arcs. (3.e.4.2 permits us to reduce the number of buckets to 1 + flog CT This refined implementation of the algorithm runs in 1. We will prove an alternate version of these label correcting algorithms. then they represent the shortest path lengths from the node: . (3. of Dijkstra's algorithm solves the shortest This algorithm requires 1 + flog nCl buckets. a directed cycle whose arc lengths sum to a negative value. Using substantially more sophisticated data ). Label Correcting Algorithms Label correcting algorithms. for example.2 Let d(i) for i e N If d(s) = and if in addition the labels satisfy the following conditions.1) (d(i) = min + Cjj : i € N). i.1. when they all become permanent simultaneously. Theorem source 3. path problem in 0(m The R-heap implementation + n log nC) time. these algorithms maintain distance labels as temporary until the end. FACT 3. shortest paths. structures.

C3.1. d(i) is the length of some path from the source node to node i;
C3.2. d(j) <= d(i) + c_ij for all (i, j) ∈ A.

Proof. Since d(i) is the length of some path from the source to node i, it is an upper bound on the shortest path length. We show that if the labels d(i) satisfy C3.2, then they are also lower bounds on the shortest path lengths, which implies the conclusion of the theorem. Consider any directed path P from the source to node j, and let P consist of the nodes s = i_1 - i_2 - i_3 - ... - i_k = j. Condition C3.2 implies that d(i_2) <= d(i_1) + c_{i_1 i_2} = c_{i_1 i_2}, d(i_3) <= d(i_2) + c_{i_2 i_3}, ..., d(i_k) <= d(i_{k-1}) + c_{i_{k-1} i_k}. Adding these inequalities yields d(j) = d(i_k) <= Σ_{(i,j) ∈ P} c_ij. Therefore d(j) is a lower bound on the length of any directed path from the source to node j, including a shortest path from s to j.

We note that if the network contains a negative cycle, then no set of labels d(i) satisfies C3.2. For suppose the network did contain a negative cycle W and some labels d(i) satisfied C3.2. Then d(i) - d(j) + c_ij >= 0 for each (i, j) ∈ W. These inequalities imply that Σ_{(i,j) ∈ W} (d(i) - d(j) + c_ij) = Σ_{(i,j) ∈ W} c_ij >= 0, since the labels d(i) cancel out in the summation. This conclusion contradicts our assumption that W is a negative cycle.

Conditions C3.1 in Theorem 3.2 correspond to primal feasibility for the linear programming formulation of the shortest path problem, and conditions C3.2 correspond to dual feasibility. From this perspective, we might view label correcting algorithms as methods that always maintain primal feasibility and try to achieve dual feasibility.

The generic label correcting algorithm that we consider first is a general procedure for successively updating the distance labels d(i) until they satisfy the conditions C3.2. The algorithm is based upon the simple observation that whenever d(j) > d(i) + c_ij, the current path from the source to node i, of length d(i), together with the arc (i, j), is a shorter path to node j than the current path of length d(j). At any point in the algorithm, the label d(i) is either ∞, indicating that we have yet to discover any path from the source to node i, or it is the length of some path from the source to node i.

algorithm LABEL CORRECTING;
begin
    d(s) := 0 and pred(s) := 0;
    d(j) := ∞ for each j ∈ N - {s};
    while some arc (i, j) satisfies d(j) > d(i) + c_ij do
    begin
        d(j) := d(i) + c_ij;
        pred(j) := i;
    end;
end;

The correctness of the label correcting algorithm follows from Theorem 3.2. At termination, the labels d(i) satisfy d(j) <= d(i) + c_ij for all (i, j) ∈ A, and hence represent the shortest path lengths. We now note that this algorithm is finite if there are no negative cost cycles and if the data are integral. Since each d(j) is bounded from above by nC and from below by -nC, the algorithm updates d(j) at most 2nC times. Thus, when the data are integral, the total number of distance updates is O(n^2 C), and hence the algorithm runs in pseudopolynomial time.

A nice feature of this label correcting algorithm is its flexibility: we can select arcs that do not satisfy conditions C3.2 in any order and still assure the convergence. One drawback of the method, however, is that without a further restriction on the choice of arcs, the label correcting algorithm does not necessarily run in polynomial time. Indeed, if we start with pathological instances of the problem and make a poor choice of arcs at every iteration, then the number of steps can grow exponentially with n. (Since the algorithm is pseudopolynomial time, these instances do have exponentially large values of C.) To obtain a polynomial time bound for the algorithm, we can organize the computations carefully in the following manner.

Arrange the arcs in A in some (possibly arbitrary) order. Now make passes through A. In each pass, scan the arcs in A in order and check the condition d(j) > d(i) + c_ij; if an arc satisfies this condition, then update d(j) := d(i) + c_ij. Terminate the algorithm if no distance label changes during an entire pass. We call this algorithm the modified label correcting algorithm; a sketch of an implementation follows.
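The following minimal Python sketch is our illustration of this pass-based scheme (the paper itself gives only pseudocode; all names are ours):

INF = float('inf')

def modified_label_correcting(n, arcs, s):
    # arcs is a list of triples (i, j, c_ij); nodes are numbered 1..n.
    # At most n-1 passes are needed when no negative cycle exists.
    d = [INF] * (n + 1)
    pred = [0] * (n + 1)
    d[s] = 0
    for _ in range(n - 1):
        changed = False
        for (i, j, cij) in arcs:          # one pass through the arc list
            if d[i] + cij < d[j]:
                d[j] = d[i] + cij
                pred[j] = i
                changed = True
        if not changed:                   # no label changed: terminate
            break
    return d, pred

# Example: shortest path distances from node 1 are [0, 2, 3, 5].
arcs = [(1, 2, 2), (1, 3, 5), (2, 3, 1), (3, 4, 2), (2, 4, 7)]
print(modified_label_correcting(4, arcs, 1)[0][1:])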

Theorem 3.3. When applied to a network containing no negative cycles, the modified label correcting algorithm requires O(nm) time to determine shortest paths from the source to every other node.

Proof. We show that the algorithm performs at most n-1 passes through the arc list. Since each pass requires O(1) computations for each arc, this conclusion implies the O(nm) bound. Let d^r(j) denote the length of the shortest path from the source to node j consisting of r or fewer arcs, and let D^r(j) represent the distance label of node j after r passes through the arc list. We claim, inductively, that D^r(j) <= d^r(j) for each j ∈ N and each r = 1, ..., n-1.

We perform induction on the value of r. Suppose D^{r-1}(j) <= d^{r-1}(j) for each j ∈ N. The provisions of the modified labeling algorithm imply that

    D^r(j) <= min {D^{r-1}(j), min_{i≠j} {D^{r-1}(i) + c_ij}} <= min {d^{r-1}(j), min_{i≠j} {d^{r-1}(i) + c_ij}},

where the second inequality follows from the induction hypothesis. Next note that the shortest path to node j containing no more than r arcs either (i) has no more than r-1 arcs, or (ii) contains exactly r arcs. In case (i), d^r(j) = d^{r-1}(j), and in case (ii), d^r(j) = min_{i≠j} {d^{r-1}(i) + c_ij}. Consequently, d^r(j) = min {d^{r-1}(j), min_{i≠j} {d^{r-1}(i) + c_ij}} >= D^r(j). Hence D^r(j) <= d^r(j) for each j ∈ N and each r. Finally, since the shortest path from the source to any node consists of at most n-1 arcs, the algorithm terminates with the shortest path lengths after at most n-1 passes.

The modified label correcting algorithm is also capable of detecting the presence of negative cycles in the network. If the algorithm does not update any distance label during one of the first n-1 passes, then it has a set of labels d(j) satisfying C3.2; the algorithm then terminates with the shortest path distances, and the network does not contain any negative cycle. On the other hand, if the algorithm modifies distance labels in all of the first n-1 passes, we make one more pass. If the distance label of some node i changes in the n-th pass, then the network contains a directed walk (a path together with a cycle having one or more nodes in common with the path) from the source to node i of more than n-1 arcs that has smaller length than all paths from the source node to i. This situation cannot occur unless the network contains a negative cost cycle.

Practical Improvements

As stated so far, the modified label correcting algorithm considers every arc of the network during every pass through the arc list. It need not do so. Suppose we order the arcs in the arc list by their tail nodes, so that all arcs with the same tail node appear consecutively on the list. Thus, while scanning the arcs, we consider one node i at a time, scanning the arcs in A(i) and testing the optimality conditions. Now suppose that during one pass through the arc list, the algorithm does not change the distance label of a node i. Then, during the next pass, d(j) <= d(i) + c_ij for every (i, j) ∈ A(i), and the algorithm need not test these conditions.

To achieve this savings, the algorithm can maintain a list of nodes whose distance labels have changed since it last examined them, and scan this list in first-in, first-out order. This choice assures that the algorithm performs passes through the arc list A and, consequently, terminates in O(nm) time. The following procedure is a formal description of this further modification of the modified label correcting method.

algorithm MODIFIED LABEL CORRECTING;
begin
    d(s) := 0 and pred(s) := 0;
    d(j) := ∞ for each j ∈ N - {s};
    LIST := {s};
    while LIST ≠ ∅ do
    begin
        select the first element i of LIST;
        delete i from LIST;
        for each (i, j) ∈ A(i) do
            if d(j) > d(i) + c_ij then
            begin
                d(j) := d(i) + c_ij;
                pred(j) := i;
                if j ∉ LIST then add j to the end of LIST;
            end;
    end;
end;

Another modification of this algorithm sacrifices its polynomial time behavior in the worst case, but greatly improves its running time in practice. The modification alters the manner in which the algorithm adds nodes to LIST. While adding a node i to LIST, we check to see whether i has already appeared in the LIST. If yes, then we add i to the beginning of LIST; otherwise, we add it to the end of LIST. This heuristic rule has the following plausible justification: if the node i has previously appeared on the LIST, then some nodes may have i as a predecessor, and it is advantageous to update the distances for these nodes immediately, rather than update them from other nodes and then update them again when we consider node i alone. Empirical studies indicate that with this change the algorithm is several times faster for many reasonable problem classes, which makes the algorithm very attractive in practice even though the change sacrifices the polynomial worst-case bound. A sketch of this variant appears below.
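The sketch below is ours (the two boolean arrays are bookkeeping we introduce for the membership check described above); it implements the LIST discipline with this front/back heuristic:

from collections import deque

INF = float('inf')

def label_correcting_deque(n, adj, s):
    # adj[i] is a list of (j, c_ij) pairs for the arcs in A(i).
    d = [INF] * (n + 1)
    d[s] = 0
    LIST = deque([s])
    on_list = [False] * (n + 1)          # is the node currently on LIST?
    seen = [False] * (n + 1)             # has the node ever been on LIST?
    on_list[s] = seen[s] = True
    while LIST:
        i = LIST.popleft()               # first element of LIST
        on_list[i] = False
        for (j, cij) in adj[i]:
            if d[i] + cij < d[j]:
                d[j] = d[i] + cij
                if not on_list[j]:
                    if seen[j]:          # previously on LIST: add to front
                        LIST.appendleft(j)
                    else:                # new node: add to the end
                        LIST.append(j)
                    on_list[j] = seen[j] = True
    return d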

Indeed, the worst-case running time of this version of the algorithm is exponential. Nevertheless, this version of the label correcting algorithm is the fastest algorithm in practice for finding the shortest path from a single source to all nodes in non-dense networks. (For the problem of finding a shortest path from a single source node to a single sink, certain variants of the label setting algorithm are more efficient in practice.)

3.5 All Pairs Shortest Path Algorithm

In certain applications of the shortest path problem, we need to determine shortest path distances between all pairs of nodes. In this section we describe two algorithms to solve this problem. The first algorithm combines the modified label correcting algorithm and Dijkstra's algorithm; it is well suited for sparse graphs. The second is based on dynamic programming; it is better suited for dense graphs.

If the network has nonnegative arc lengths, then we can solve the all pairs shortest path problem by applying Dijkstra's algorithm n times, considering each node as the source once. If the network contains arcs with negative arc lengths, then we can first transform the network to one with nonnegative arc lengths as follows. Let s be a node from which all nodes in the network are reachable, i.e., connected by directed paths. We use the modified label correcting algorithm to compute the shortest path distances from s to all other nodes. The algorithm either terminates with the shortest path distances d(j) or indicates the presence of a negative cycle. In the former case, we define the new length of arc (i, j) as

    c'_ij = c_ij + d(i) - d(j), for each (i, j) ∈ A.

Condition C3.2 implies that c'_ij >= 0 for all (i, j) ∈ A. Further, note that for any path P from node k to node l,

    Σ_{(i,j) ∈ P} c'_ij = Σ_{(i,j) ∈ P} c_ij + d(k) - d(l),

since the intermediate labels d(j) cancel out in the summation. This transformation thus changes the length of every path between a pair of nodes by a constant amount (depending on the pair) and consequently preserves shortest paths. Since arc lengths become nonnegative after the transformation, we can apply Dijkstra's algorithm n-1 additional times to determine the shortest path distances between all pairs of nodes in the transformed network. We then obtain the shortest path distance between nodes k and l in the original network by adding d(l) - d(k) to the corresponding shortest path distance in the transformed network.
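A small sketch of the transformation (ours; it assumes the distances d from s have already been computed, e.g., by the modified label correcting algorithm):

def transform_lengths(arcs, c, d):
    # c'_ij = c_ij + d(i) - d(j); nonnegative whenever C3.2 holds.
    return {(i, j): c[i, j] + d[i] - d[j] for (i, j) in arcs}

# Example: with d = {1: 0, 2: -1, 3: 1}, the negative length arc (1, 2)
# receives the reduced length -1 + 0 - (-1) = 0.
arcs = [(1, 2), (2, 3), (1, 3)]
c = {(1, 2): -1, (2, 3): 2, (1, 3): 3}
d = {1: 0, 2: -1, 3: 1}
print(transform_lengths(arcs, c, d))      # all values are nonnegative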

This approach requires O(nm) time to solve the first shortest path problem, and if the network contains no negative cost cycle, the method takes an extra O(n S(n,m,C)) time to compute the remaining shortest path distances. In this expression, S(n,m,C) is the time needed to solve a shortest path problem with nonnegative arc lengths. For the R-heap implementation of Dijkstra's algorithm that we considered previously, S(n,m,C) = m + n log nC.

Another way to solve the all pairs shortest path problem is by dynamic programming. The approach we present is known as Floyd's algorithm. We define the variables d^r(i, j) as follows:

    d^r(i, j) = the length of a shortest path from node i to node j, subject to the condition that the path uses only the nodes 1, 2, ..., r-1 (other than i and j) as internal nodes.

Let d(i, j) denote the actual shortest path distance from node i to node j. To compute d^{r+1}(i, j), we first observe that a shortest path from node i to node j that passes through the nodes 1, 2, ..., r either (i) does not pass through the node r, in which case d^{r+1}(i, j) = d^r(i, j), or (ii) does pass through the node r, in which case d^{r+1}(i, j) = d^r(i, r) + d^r(r, j). Thus we have

    d^1(i, j) = c_ij,

and

    d^{r+1}(i, j) = min {d^r(i, j), d^r(i, r) + d^r(r, j)}.

It is possible to solve the previous equations recursively for increasing values of r, varying the node pair (i, j) over N x N for each fixed value of r. The following procedure is a formal description of this algorithm. We assume that c_ij = ∞ for all node pairs (i, j) ∉ A.

algorithm ALL PAIRS SHORTEST PATHS;
begin
    for all node pairs (i, j) ∈ N x N do d(i, j) := ∞ and pred(i, j) := 0;
    for each (i, j) ∈ A do d(i, j) := c_ij and pred(i, j) := i;
    for each i ∈ N do d(i, i) := 0;
    for r := 1 to n do
        for each (i, j) ∈ N x N do
            if d(i, j) > d(i, r) + d(r, j) then
            begin
                d(i, j) := d(i, r) + d(r, j);
                pred(i, j) := pred(r, j);
                if i = j and d(i, i) < 0 then the network contains a negative cycle, STOP;
            end;
end;

Floyd's algorithm uses predecessor indices, pred(i, j), for each node pair (i, j). The index pred(i, j) denotes the last node prior to node j in the tentative shortest path from node i to node j. The algorithm maintains the property that, for each finite d(i, j), the network contains a path from node i to node j of length d(i, j). This path can be obtained by tracing the predecessor indices.

This algorithm performs n iterations, and in each iteration it performs O(1) computations for each node pair. Consequently, it runs in O(n^3) time. The algorithm either terminates with the shortest path distances or stops when d(i, i) < 0 for some node i. In the latter case, for some node r, the union of the tentative shortest paths from node i to node r and from node r to node i contains a negative cycle. This cycle can be obtained by using the predecessor indices.

Floyd's algorithm is in many respects similar to the modified label correcting algorithm. This relationship becomes more transparent from the following theorem.

Theorem 3.4. If the labels d(i, j) for (i, j) ∈ N x N satisfy the following conditions, then they represent the shortest path distances:
(i) d(i, i) = 0 for all i;
(ii) d(i, j) is the length of some path from node i to node j;
(iii) d(i, j) <= d(i, r) + c_rj for all i, r, and j.

Proof. For each fixed i, this theorem is a consequence of Theorem 3.2.
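A compact Python rendering of the procedure above (ours; it raises an error, rather than merely stopping, when a negative cycle is detected, and assumes no self-loop arcs):

INF = float('inf')

def floyd(n, c):
    # c is a dict of arc lengths keyed by (i, j); nodes are numbered 1..n.
    d = {(i, j): INF for i in range(1, n + 1) for j in range(1, n + 1)}
    pred = {}
    for (i, j), cij in c.items():
        d[i, j] = cij
        pred[i, j] = i
    for i in range(1, n + 1):
        d[i, i] = 0
    for r in range(1, n + 1):
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                if d[i, r] + d[r, j] < d[i, j]:
                    d[i, j] = d[i, r] + d[r, j]
                    pred[i, j] = pred[r, j]
                    if i == j and d[i, i] < 0:
                        raise ValueError("negative cycle through node %d" % i)
    return d, pred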

4. MAXIMUM FLOWS

An important characteristic of a network is its capacity to carry flow. What, given capacities on the arcs, is the maximum flow that can be sent between any two nodes? The resolution of this question determines the "best" use of arc capacities and establishes a reference point against which to compare other ways of using the network. Moreover, the solution of the maximum flow problem with capacity data chosen judiciously establishes other performance measures for a network. For example, what is the minimum number of nodes whose removal from the network destroys all paths joining a particular pair of nodes? Or, what is the maximum number of node disjoint paths that join this pair of nodes? These and similar reliability measures indicate the robustness of the network to the failure of its components.

In this section, we discuss several algorithms for computing the maximum flow between two nodes in a network. We begin by introducing a basic labeling algorithm for solving the maximum flow problem. The validity of these algorithms rests upon the celebrated max-flow min-cut theorem of network flows. This remarkable theorem has a number of surprising implications in machine and vehicle scheduling, communication systems planning and several other application domains. We then consider improved versions of the basic labeling algorithm with better theoretical performance guarantees. In particular, we describe preflow-push algorithms that have recently emerged as the most powerful techniques for solving the maximum flow problem, both theoretically and computationally.

We consider a capacitated network G = (N, A) with a nonnegative integer capacity u_ij for each arc (i, j) ∈ A. The source s and the sink t are two distinguished nodes of the network. We assume that for every arc (i, j) in A, (j, i) is also in A. There is no loss of generality in making this assumption, since we allow zero capacity arcs. We also assume, without any loss of generality, that all arc capacities are finite (since we can set the capacity of any uncapacitated arc equal to the sum of the capacities of all capacitated arcs). Let U = max {u_ij : (i, j) ∈ A}. As earlier, the arc adjacency list, defined as A(i) = {(i, k) : (i, k) ∈ A}, designates the arcs emanating from node i. In the maximum flow problem, we wish to find the maximum flow from the source node s to the sink node t that satisfies the arc capacities. Formally, the problem is to

Maximize v                                                     (4.1a)

subject to

    Σ_{j : (i,j) ∈ A} x_ij - Σ_{j : (j,i) ∈ A} x_ji = { v, if i = s; 0, if i ≠ s, t; -v, if i = t }, for all i ∈ N,   (4.1b)

    0 <= x_ij <= u_ij, for each (i, j) ∈ A.                     (4.1c)

It is possible to relax the integrality assumption on arc capacities for some algorithms, though this assumption is necessary for others. Algorithms whose complexity bounds involve U assume integrality of the data. Note, however, that rational arc capacities can always be transformed to integer arc capacities by appropriately scaling the data. Thus, the integrality assumption is not a restrictive assumption in practice.

The concept of residual network is crucial to the algorithms we consider. Given a flow x, the residual capacity, r_ij, of any arc (i, j) ∈ A represents the maximum additional flow that can be sent from node i to node j using the arcs (i, j) and (j, i). The residual capacity has two components: (i) u_ij - x_ij, the unused capacity of arc (i, j), and (ii) the current flow x_ji on arc (j, i), which can be cancelled to increase flow to node j. Consequently, r_ij = u_ij - x_ij + x_ji. We call the network consisting of the arcs with positive residual capacities the residual network (with respect to the flow x), and represent it as G(x). Figure 4.1 illustrates an example of a residual network.
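As a small illustration (ours), the two components of the residual capacity can be computed directly from a flow:

def residual_capacity(u, x, i, j):
    # r_ij = unused capacity of arc (i, j), plus the flow on (j, i)
    # that could be cancelled.
    return u[i, j] - x[i, j] + x[j, i]

u = {(1, 2): 4, (2, 1): 0}
x = {(1, 2): 3, (2, 1): 0}
print(residual_capacity(u, x, 1, 2))   # 1: one unit of unused capacity
print(residual_capacity(u, x, 2, 1))   # 3: by cancelling flow on (1, 2)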

4.1 Labeling Algorithm and the Max-Flow Min-Cut Theorem

One of the simplest and most intuitive algorithms for solving the maximum flow problem is the augmenting path algorithm due to Ford and Fulkerson. The algorithm proceeds by identifying directed paths from the source to the sink in the residual network and augmenting flows on these paths, until the residual network contains no such path. The following high-level (and flexible) description of the algorithm summarizes the basic iterative steps, without specifying any particular algorithmic strategy for how to determine augmenting paths.

algorithm AUGMENTING PATH;
begin
    x := 0;
    while there is a path P from s to t in G(x) do
    begin
        Δ := min {r_ij : (i, j) ∈ P};
        augment Δ units of flow along P and update G(x);
    end;
end;

We now discuss this algorithm in more detail. A directed path from the source to the sink in the residual network is also called an augmenting path. The residual capacity of an augmenting path is the minimum residual capacity of any arc on the path. The definition of the residual capacity implies that an additional flow of Δ on arc (i, j) of the residual network corresponds to (i) an increase in x_ij by Δ in the original network, or (ii) a decrease in x_ji by Δ in the original network, or (iii) a convex combination of (i) and (ii). For our purposes, it is easier to work directly with residual capacities and to compute the flows only when the algorithm terminates. Augmenting Δ units of flow along P decreases r_ij by Δ and increases r_ji by Δ for each arc (i, j) ∈ P.

To specify this algorithm fully, we first need a method that identifies a directed path from the source to the sink in the residual network or shows that the network contains no such path. Second, we need to show that the algorithm terminates finitely. Finally, we must establish that the algorithm terminates with a maximum flow. The last result follows from the proof of the max-flow min-cut theorem.

The labeling algorithm performs a search of the residual network to find a directed path from s to t. It does so by fanning out from the source node s to find a directed tree containing nodes that are reachable from the source along a directed path in the residual network. At any step, we refer to the nodes in the tree as labeled and those not in the tree as unlabeled. The algorithm selects a labeled node and scans its arc adjacency list (in the residual network) to label more unlabeled nodes. Eventually, the sink becomes labeled and the algorithm sends the maximum possible flow on the path from s to t. It then erases the labels and repeats this process. The algorithm terminates when it has scanned all labeled nodes and the sink remains unlabeled. The following algorithmic description specifies the steps of the labeling algorithm in detail.

Figure 4.1. Example of a residual network. (a) Network with arc capacities; node 1 is the source and node 4 is the sink. (Arcs not shown have zero capacities.) (b) Network with a flow x. (c) The residual network with residual arc capacities.

The algorithm maintains a predecessor index, pred(i), for each labeled node i, indicating the node that caused node i to be labeled. The predecessor indices allow us to trace back along the path from a node to the source.

algorithm LABELING;
begin
loop
    pred(j) := 0 for each j ∈ N;
    L := {s} and mark s as labeled;
    while L ≠ ∅ and t is unlabeled do
    begin
        select a node i ∈ L and delete it from L;
        for each (i, j) ∈ A(i) do
            if j is unlabeled and r_ij > 0 then
            begin
                pred(j) := i;
                mark j as labeled and add this node to L;
            end;
    end;
    if t is labeled then
    begin
        use the predecessor labels to trace back to obtain the augmenting path P from s to t;
        Δ := min {r_ij : (i, j) ∈ P};
        augment Δ units of flow along P;
        erase all labels and go to loop;
    end
    else quit the loop;
end; (loop)

The final residual capacities r can be used to obtain the arc flows as follows. Since the arc flows satisfy x_ij - x_ji = u_ij - r_ij, if u_ij > r_ij we can set x_ij = u_ij - r_ij and x_ji = 0; otherwise, we set x_ij = 0 and x_ji = r_ij - u_ij.
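The following runnable sketch is ours (it works directly with residual capacities, as the text recommends, and scans labeled nodes in an arbitrary order):

def labeling_max_flow(nodes, r, s, t):
    # r maps ordered node pairs to residual capacities; missing pairs are 0.
    v = 0
    while True:
        pred = {s: s}                    # labeled nodes with their parents
        L = [s]
        while L and t not in pred:
            i = L.pop()                  # select any labeled node
            for j in nodes:              # scan the residual adjacency of i
                if j not in pred and r.get((i, j), 0) > 0:
                    pred[j] = i
                    L.append(j)
        if t not in pred:
            return v, r                  # sink unlabeled: the flow is maximum
        path, j = [], t                  # trace the augmenting path back
        while j != s:
            path.append((pred[j], j))
            j = pred[j]
        delta = min(r[i, j] for (i, j) in path)
        for (i, j) in path:              # update the residual capacities
            r[i, j] -= delta
            r[j, i] = r.get((j, i), 0) + delta
        v += delta

# Example: the maximum flow from node 1 to node 4 below has value 6.
r = {(1, 2): 4, (1, 3): 2, (2, 3): 1, (2, 4): 3, (3, 4): 5}
print(labeling_max_flow([1, 2, 3, 4], r, 1, 4)[0])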

In order to show that the algorithm obtains a maximum flow, we introduce some new definitions and notation. Recall from Section 1.3 that a set Q ⊆ A of arcs is a cutset if the subnetwork G' = (N, A - Q) is disconnected and no proper subset of Q has this property. A cutset partitions the node set N into two subsets. A cutset is called an s-t cutset if the source and the sink nodes are contained in different subsets of nodes S and S̄ = N - S, where S is the set of nodes connected to s. Conversely, any partition of the node set as S and S̄ with s ∈ S and t ∈ S̄ defines an s-t cutset; we therefore alternatively designate an s-t cutset as (S, S̄). An arc (i, j) with i ∈ S and j ∈ S̄ is called a forward arc, and an arc (i, j) with i ∈ S̄ and j ∈ S is called a backward arc of the cutset (S, S̄).

Let x be a flow vector satisfying the flow conservation and capacity constraints of (4.1), and let v be the amount of flow leaving the source. We refer to v as the value of the flow. We claim that the flow across any s-t cutset equals v and does not exceed the cutset capacity. Define the flow across the cutset (S, S̄) as

    Fx(S, S̄) = Σ_{i ∈ S} Σ_{j ∈ S̄} x_ij - Σ_{i ∈ S̄} Σ_{j ∈ S} x_ij,   (4.2)

and define the capacity C(S, S̄) of the s-t cutset (S, S̄) as

    C(S, S̄) = Σ_{i ∈ S} Σ_{j ∈ S̄} u_ij.   (4.3)

Adding the flow conservation constraints (4.1b) for the nodes in S, and noting that when nodes i and j both belong to S, the term x_ij in the equation for node j cancels the term -x_ij in the equation for node i, we obtain

    v = Σ_{i ∈ S} Σ_{j ∈ S̄} x_ij - Σ_{i ∈ S̄} Σ_{j ∈ S} x_ij = Fx(S, S̄).   (4.4)

Substituting x_ij <= u_ij in the first summation and x_ij >= 0 in the second summation shows that

    Fx(S, S̄) <= Σ_{i ∈ S} Σ_{j ∈ S̄} u_ij = C(S, S̄).   (4.5)

This result is the weak duality property of the maximum flow problem when it is viewed as a linear program. Like most weak duality results, it is the "easy" half of the duality theory. The more substantive strong duality property asserts that (4.5) holds as an equality for some choice of x and some choice of an s-t cutset (S, S̄). This strong duality property is the max-flow min-cut theorem.

Theorem 4.1. (Max-Flow Min-Cut Theorem) The maximum value of flow from s to t equals the minimum capacity of all s-t cutsets.

Proof. Let x denote a maximum flow vector and let v denote the maximum flow value. (Linear programming theory, or our subsequent algorithmic developments, guarantee that the problem always has a maximum flow as long as some cutset has finite capacity.) Define S to be the set of labeled nodes in the residual network G(x) when we apply the labeling algorithm with the initial flow x, and let S̄ = N - S. Clearly, s ∈ S, and, since x is a maximum flow, t ∈ S̄. Adding the flow conservation equations for the nodes in S, we obtain (4.4). Note that the nodes in S̄ cannot be labeled from the nodes in S; hence r_ij = 0 for each forward arc (i, j) in the cutset (S, S̄). Since r_ij = u_ij - x_ij + x_ji, the conditions x_ij <= u_ij and x_ji >= 0 imply that x_ij = u_ij for each forward arc in the cutset (S, S̄) and x_ij = 0 for each backward arc in the cutset. Making these substitutions in (4.4) yields

    v = Fx(S, S̄) = Σ_{i ∈ S} Σ_{j ∈ S̄} u_ij = C(S, S̄).   (4.6)

But we have observed in (4.5) that v is a lower bound on the capacity of any s-t cutset. Consequently, the cutset (S, S̄) is a minimum capacity cutset, and its capacity equals the maximum flow value v. We have thus established the theorem.

The proof of this theorem not only establishes the max-flow min-cut property, but the same argument shows that when the labeling algorithm terminates, it has at hand both the maximum flow value (and a maximum flow vector) and a minimum capacity s-t cutset. But does the algorithm terminate finitely? Each labeling iteration of the algorithm scans any node at most once, inspecting each arc in A(i), and hence requires O(m) computations. Since the labeling algorithm increases the flow value by at least one unit in any iteration, and since the capacity of the cutset (s, N - {s}) is at most nU, it terminates within nU iterations.
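As the proof shows, the labeled set S at termination yields a minimum cutset. A small sketch (ours; the residual capacities below are the assumed final values for the example network used earlier, after a maximum flow of value 6):

def min_cut(nodes, r, s):
    # S = the set of nodes reachable from s in the final residual network.
    S, stack = {s}, [s]
    while stack:
        i = stack.pop()
        for j in nodes:
            if j not in S and r.get((i, j), 0) > 0:
                S.add(j)
                stack.append(j)
    return S

r = {(2, 1): 4, (3, 1): 2, (3, 2): 1, (4, 2): 3, (3, 4): 2, (4, 3): 3}
print(min_cut([1, 2, 3, 4], r, 1))   # {1}: arcs (1,2) and (1,3) form the cut
# Their capacities 4 + 2 = 6 equal the maximum flow value.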

This bound on the number of iterations is not entirely satisfactory for large values of U: if U = 2^n, the bound is exponential in the number of nodes. Moreover, the algorithm can indeed perform that many iterations. In addition, if the capacities are irrational, the algorithm may not terminate: although the successive flow values converge, they may not converge to the maximum flow value. Thus, if the method is to be effective, we must select the augmenting paths carefully. Several refinements of the algorithm, including those we consider in Section 4.4, overcome this difficulty and obtain an optimum flow even if the capacities are irrational. Nevertheless, the max-flow min-cut theorem (and our proof of Theorem 4.1) is true even if the data are irrational.

4.2 Decreasing the Number of Augmentations

The bound of nU on the number of augmentations in the labeling algorithm is not satisfactory from a theoretical perspective. Furthermore, without further modifications, the augmenting path algorithm may indeed take Ω(nU) augmentations, as the example given in Figure 4.2 illustrates.

Flow decomposition shows that, in principle, augmenting path algorithms should be able to find a maximum flow in no more than m augmentations. For suppose x is an optimum flow and y is any initial flow (possibly zero). By the flow decomposition property, it is possible to obtain x from y by a sequence of at most m augmentations on augmenting paths from s to t plus flows around augmenting cycles. If we define x' as the flow vector obtained from y by applying only the augmenting paths, then x' also is a maximum flow (flows around cycles do not change the flow value). This result shows that it is, in theory, possible to find a maximum flow using at most m augmentations. Unfortunately, to apply this flow decomposition argument, we need to know a maximum flow in advance. No algorithm developed in the literature comes close to achieving this theoretical bound. Nevertheless, it is possible to improve considerably on the bound of O(nU) augmentations of the basic labeling algorithm.

A second drawback of the labeling algorithm is its "forgetfulness". At each iteration, the algorithm generates node labels that contain information about augmenting paths from the source to other nodes. The implementation we have described erases the labels when it proceeds from one iteration to the next, even though much of this information may be valid in the next residual network. Erasing the labels therefore destroys potentially useful information. Ideally, we should retain a label when it can be used profitably in later computations.

Figure 4.2. A pathological example for the labeling algorithm. (a) The input network with arc capacities. (b) After augmenting along the path s-a-b-t; arc flow is indicated beside the arc capacity. (c) After augmenting along the path s-b-a-t. After 2 x 10^6 augmentations, alternately along s-a-b-t and s-b-a-t, the flow is maximum.

One natural specialization of the augmenting path algorithm is to augment flow along a "shortest path" from the source to the sink, defined as a path consisting of the least number of arcs. If we augment flow along a shortest path, then the length of any shortest path either stays the same or increases. Moreover, within m augmentations, the length of the shortest path is guaranteed to increase. (We will prove these results in the next section.) Since no path contains more than n-1 arcs, this rule guarantees that the number of augmentations is at most (n-1)m.

An alternative is to augment flow along a path of maximum residual capacity. This specialization also leads to improved complexity. Let v be any flow value and let v* be the maximum flow value. By flow decomposition, the network contains at most m augmenting paths whose residual capacities sum to (v* - v). Thus the maximum capacity augmenting path has residual capacity at least (v* - v)/m. Now consider a sequence of 2m consecutive maximum capacity augmentations, starting with flow value v. At least one of these augmentations must augment the flow by an amount (v* - v)/2m or less, for otherwise we will have a maximum flow. Thus, after 2m or fewer augmentations, the algorithm would reduce the residual capacity of a maximum capacity augmenting path by a factor of at least two. Since this capacity is initially at most U and must be at least 1 until the flow is maximum, after O(m log U) maximum capacity augmentations, the flow must be maximum. (Note that we are essentially repeating the argument used to establish the geometric improvement approach discussed in Section 1.6.) In the following section, we consider another algorithm for reducing the number of augmentations.

4.3 Shortest Augmenting Path Algorithm

A natural approach to augmenting along shortest paths would be to look successively for shortest paths by performing a breadth first search in the residual network. If the labeling algorithm maintains the set L of labeled nodes as a queue, then by examining the labeled nodes in a first-in, first-out order, it would obtain a shortest path in the residual network. Each of these iterations would take O(m) steps, both in the worst case and in practice, and (by our subsequent observations) the resulting computation time would be O(nm^2). Unfortunately, this computation time is excessive. We can improve the running time by exploiting the fact that the minimum distance from any node i to the sink node t is monotonically nondecreasing over all augmentations.

By fully exploiting this monotonicity property, we can reduce the average time per augmentation to O(n).

The Algorithm

The concept of distance labels will prove to be an important construct in the maximum flow algorithms that we discuss in this section and in Sections 4.4 and 4.5. A distance function d : N → Z+ with respect to the residual capacities r_ij is a function from the set of nodes to the nonnegative integers. We say that a distance function is valid if it satisfies the following two conditions:

C4.1. d(t) = 0;
C4.2. d(i) <= d(j) + 1 for every arc (i, j) ∈ A with r_ij > 0.

We refer to d(i) as the distance label of node i and to condition C4.2 as the validity condition. It is easy to demonstrate that d(i) is a lower bound on the length of the shortest directed path from node i to t in the residual network. Let i = i_1 - i_2 - ... - i_k - t be any path of length k in the residual network from node i to t. Then, from C4.2, we have d(i) = d(i_1) <= d(i_2) + 1, d(i_2) <= d(i_3) + 1, ..., d(i_k) <= d(t) + 1 = 1. These inequalities imply that d(i) <= k for any path of length k from node i to t in the residual network and, hence, any shortest path from node i to t contains at least d(i) arcs. If for each node i the distance label d(i) equals the length of the shortest path from i to t in the residual network, then we call the distance labels exact. For example, in Figure 4.1(c), d = (0, 0, 0, 0) is a valid distance label vector, though not the exact one.

An arc (i, j) in the residual network is admissible if it satisfies d(i) = d(j) + 1 and r_ij > 0. Other arcs are inadmissible. A path from s to t consisting entirely of admissible arcs is an admissible path. For any admissible path of length k, d(s) = k. Since d(s) is a lower bound on the length of any path from the source to the sink in the residual network, an admissible path is a shortest augmenting path. Whenever we augment along an admissible path, we therefore augment flow along a shortest path in the residual network; for this reason, we refer to the algorithm we describe next, which repeatedly augments flow along admissible paths, as the shortest augmenting path algorithm.

There is no particular urgency to compute exact distances; it suffices to have valid distances, which are lower bounds on the exact distances. By allowing this flexibility in the distance labels, we can maintain them without incurring any significant computational overhead.
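Exact distance labels are easy to compute by a backward breadth first search from the sink, which is exactly what the algorithm below performs at its initialization; a minimal sketch (ours):

from collections import deque

def exact_distance_labels(nodes, r, t):
    # d[i] = number of arcs on a shortest residual path from i to t;
    # nodes that cannot reach t keep the label n = len(nodes).
    n = len(nodes)
    d = {i: n for i in nodes}
    d[t] = 0
    queue = deque([t])
    while queue:
        j = queue.popleft()
        for i in nodes:                  # scan residual arcs entering j
            if r.get((i, j), 0) > 0 and d[i] == n:
                d[i] = d[j] + 1
                queue.append(i)
    return d

r = {(1, 2): 4, (1, 3): 2, (2, 3): 1, (2, 4): 3, (3, 4): 5}
print(exact_distance_labels([1, 2, 3, 4], r, 4))   # {1: 2, 2: 1, 3: 1, 4: 0}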

We can compute the initial distance labels by performing a backward breadth first search of the residual network, starting at the sink node. The algorithm maintains a path from the source node to some node i, called the current node, consisting entirely of admissible arcs. We call this path a partial admissible path and store it using predecessor indices, i.e., pred(j) = i for each arc (i, j) on the path. The algorithm performs one of two steps at the current node i*: advance or retreat. The advance step identifies some admissible arc (i*, j*) emanating from node i*, adds it to the partial admissible path, and designates j* as the new current node. If no admissible arc emanates from node i*, then the algorithm performs the retreat step. This step increases the distance label of node i* so that at least one admissible arc emanates from it (we refer to this step as a relabel operation). Increasing d(i*) makes the arc (pred(i*), i*) inadmissible (assuming i* ≠ s); consequently, we delete (pred(i*), i*) from the partial admissible path, and node pred(i*) becomes the new current node. Whenever the partial admissible path becomes an admissible path (i.e., contains node t), the algorithm makes a maximum possible augmentation on this path and begins again with the source as the current node. The algorithm terminates when d(s) >= n, indicating that the network contains no augmenting path from the source to the sink. We next describe the algorithm formally.

algorithm SHORTEST AUGMENTING PATH;
begin
    x := 0;
    perform a backward breadth first search of the residual network from node t to obtain the distance labels d(i);
    i* := s;
    while d(s) < n do
    begin
        if i* has an admissible arc then ADVANCE(i*)
        else RETREAT(i*);
        if i* = t then AUGMENT and set i* := s;
    end;
end;

procedure ADVANCE(i*);
begin
    let (i*, j*) be an admissible arc in A(i*);
    pred(j*) := i* and i* := j*;
end;

procedure RETREAT(i*);
begin
    d(i*) := min {d(j) + 1 : (i*, j) ∈ A(i*) and r_i*j > 0};
    if i* ≠ s then i* := pred(i*);
end;

procedure AUGMENT;
begin
    using the predecessor indices, identify an augmenting path P from the source to the sink;
    Δ := min {r_ij : (i, j) ∈ P};
    augment Δ units of flow along path P;
end;

We use the following data structure to select an admissible arc emanating from a node. We maintain the list A(i) of arcs emanating from each node i. Arcs in each list can be arranged arbitrarily, but the order, once decided, remains unchanged throughout the algorithm. Each node i has a current-arc (i, j), which is the current candidate for the next advance step. Initially, the current-arc of node i is the first arc in its arc list. The algorithm examines this list sequentially: whenever the current arc is inadmissible, it makes the next arc in the arc list the current arc. When the algorithm has examined all arcs in A(i), it updates the distance label of node i, and the current arc once again becomes the first arc in its arc list. In our subsequent discussion we shall always assume that the algorithms select admissible arcs using this technique.
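Putting the pieces together, the following sketch (ours; it makes no attempt to reproduce the authors' exact data structures, but it does maintain the current-arc pointers just described) is a runnable rendering of the shortest augmenting path algorithm:

from collections import deque

def shortest_augmenting_path(n, r, s, t):
    # Nodes are 1..n; r maps ordered pairs to residual capacities.
    adj = {i: [] for i in range(1, n + 1)}     # static arc lists A(i)
    for (i, j) in list(r):
        adj[i].append(j)
        if (j, i) not in r:
            r[j, i] = 0                        # paired reverse arc
            adj[j].append(i)
    d = {i: n for i in range(1, n + 1)}        # exact labels via backward BFS
    d[t] = 0
    q = deque([t])
    while q:
        j = q.popleft()
        for i in adj[j]:
            if r[i, j] > 0 and d[i] == n:
                d[i] = d[j] + 1
                q.append(i)
    cur = {i: 0 for i in range(1, n + 1)}      # current-arc pointers
    pred, v, i = {}, 0, s
    while d[s] < n:
        advanced = False
        while cur[i] < len(adj[i]):            # look for an admissible arc
            j = adj[i][cur[i]]
            if r[i, j] > 0 and d[i] == d[j] + 1:
                pred[j] = i                    # ADVANCE
                i = j
                advanced = True
                break
            cur[i] += 1
        if not advanced:                       # RETREAT (relabel)
            labels = [d[j] + 1 for j in adj[i] if r[i, j] > 0]
            if not labels:
                break                          # node i cannot reach t at all
            d[i] = min(labels)
            cur[i] = 0                         # current arc resets to first
            if i != s:
                i = pred[i]
            continue
        if i == t:                             # AUGMENT
            path, j = [], t
            while j != s:
                path.append((pred[j], j))
                j = pred[j]
            delta = min(r[a, b] for (a, b) in path)
            for (a, b) in path:
                r[a, b] -= delta
                r[b, a] += delta
            v += delta
            i = s
    return v

r = {(1, 2): 4, (1, 3): 2, (2, 3): 1, (2, 4): 3, (3, 4): 5}
print(shortest_augmenting_path(4, r, 1, 4))    # 6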

Correctness of the Algorithm

We first show that the shortest augmenting path algorithm correctly solves the maximum flow problem.

Lemma 4.1. The shortest augmenting path algorithm maintains valid distance labels at each step. Moreover, each relabel step strictly increases the distance label of a node.

Proof. We show that the algorithm maintains valid distance labels at every step by performing induction on the number of augment and relabel steps. Initially, the algorithm constructs valid (indeed, exact) distance labels. Assume, inductively, that the distance function is valid prior to a step, i.e., satisfies the validity condition C4.2. We need to check that these conditions remain valid (i) after an augment step (when the residual graph changes), and (ii) after a relabel step.

(i) A flow augmentation on arc (i, j) might delete this arc from the residual network, but this modification to the residual network does not affect the validity of the distance function for this arc. Augmentation on arc (i, j) might, however, create an additional arc (j, i) with r_ji > 0 and, therefore, also create an additional condition d(j) <= d(i) + 1 that needs to be satisfied. The distance labels satisfy this condition, though, since d(i) = d(j) + 1 by the admissibility property of the augmenting path.

(ii) The algorithm performs a relabel step at node i when the current arc reaches the end of the arc list A(i). Observe that if an arc (i, j) is inadmissible at some stage, then it remains inadmissible until d(i) increases, because of our inductive hypothesis that distance labels are nondecreasing. Thus, when the current arc reaches the end of the arc list A(i), no arc (i, j) ∈ A(i) satisfies d(i) = d(j) + 1 and r_ij > 0. Hence, d(i) < min {d(j) + 1 : (i, j) ∈ A(i) and r_ij > 0} = d'(i), thereby establishing the second part of the lemma. Finally, the choice of d'(i) as the new label ensures that the condition d(i) <= d(j) + 1 remains valid for all (i, j) in the residual network; in addition, since d(i) increases, the conditions d(k) <= d(i) + 1 remain valid for all arcs (k, i) with r_ki > 0.

Theorem 4.2. The shortest augmenting path algorithm correctly computes a maximum flow.

Proof. The algorithm terminates when d(s) >= n. Since d(s) is a lower bound on the length of the shortest augmenting path from s to t, this condition implies that the network contains no augmenting path from the source to the sink, which is the termination criterion for the generic augmenting path algorithm. Hence, the flow at termination is maximum.

At termination of the algorithm, we can also obtain a minimum s-t cutset as follows. For 0 <= k < n, let α_k denote the number of nodes with distance label equal to k. Note that α_{k*} must be zero for some k* < n - 1, since Σ_k α_k <= n - 1 (recall that d(s) >= n). Let S = {i ∈ N : d(i) > k*} and S̄ = N - S. By construction, s ∈ S and t ∈ S̄, and both the sets S and S̄ are nonempty. Since no node has distance label k*, we have d(i) > d(j) + 1 for every arc (i, j) ∈ (S, S̄); the validity condition C4.2 then implies that r_ij = 0 for each arc (i, j) ∈ (S, S̄). Hence, (S, S̄) is a minimum cutset, and the current flow is maximum.

Complexity of the Algorithm

We next show that the algorithm computes a maximum flow in O(n^2 m) time.

Lemma 4.2. (a) Each distance label increases at most n times. Consequently, the total number of relabel steps is at most n^2. (b) The number of augment steps is at most nm/2.

Proof. Each relabel step at node i increases d(i) by at least one. After the algorithm has relabeled node i at most n times, d(i) >= n. From this point on, the algorithm never selects node i again during an advance step, since for every node k in the current path, d(k) < d(s) < n. Thus the algorithm relabels a node at most n times, and the total number of relabel steps is bounded by n^2.

Each augment step saturates at least one arc, i.e., decreases its residual capacity to zero. Suppose that the arc (i, j) becomes saturated at some iteration (at which d(i) = d(j) + 1). Then no more flow can be sent on (i, j) until flow is sent back from node j to node i (at which point d'(j) = d'(i) + 1 >= d(i) + 1 = d(j) + 2). Hence, between two consecutive saturations of arc (i, j), d(j) increases by at least 2 units. Consequently, any arc (i, j) can become saturated at most n/2 times, and the total number of arc saturations is no more than nm/2.

Theorem 4.3. The shortest augmenting path algorithm runs in O(n^2 m) time.

Proof. The algorithm performs O(nm) flow augmentations, and each augmentation takes O(n) time, resulting in O(n^2 m) total effort in the augmentation steps. Each advance step increases the length of the partial admissible path by one, and each retreat step decreases its length by one. Since each partial admissible path has length at most n, the algorithm requires at most O(n^2 + n^2 m) advance steps: the first term comes from the number of retreat (relabel) steps, and the second term from the number of augmentations, which is bounded by nm/2 by the previous lemma.

For each node i, the algorithm performs the relabel operation O(n) times, each execution requiring O(|A(i)|) time. The total time spent in all relabel operations is Σ_{i ∈ N} n |A(i)| = O(nm). Finally, we consider the time spent in identifying admissible arcs. The time taken to identify the admissible arc of node i is O(1) plus the time spent in scanning arcs in A(i). After having performed |A(i)| such scannings, the algorithm reaches the end of the arc list and relabels node i. Thus the total time spent in all scannings is O(Σ_{i ∈ N} n |A(i)|) = O(nm). The combination of these time bounds establishes the theorem.

The proof of Theorem 4.3 also suggests an alternative termination condition for the shortest augmenting path algorithm. The termination criterion d(s) >= n is satisfactory for a worst-case analysis, but may not be efficient in practice. Researchers have observed empirically that the algorithm spends too much time in relabeling, a major portion of which is done after it has already found a maximum flow. The algorithm can be improved by detecting the presence of a minimum cutset before performing these relabeling operations. We can do so by maintaining the number of nodes α_k with distance label equal to k, for each 0 <= k < n. The algorithm updates this array after every relabel operation and terminates whenever it first finds a gap in the α array, i.e., α_{k*} = 0 for some k* < n. As we have seen earlier, if S = {i : d(i) > k*}, then (S, S̄) denotes a minimum cutset.

The idea of augmenting flows along shortest paths is intuitively appealing and easy to implement in practice. The resulting algorithms identify at most O(nm) augmenting paths, and this bound is tight, i.e., on particular examples these algorithms perform Ω(nm) augmentations. The only way to improve the running time of the shortest augmenting path algorithm is to perform fewer computations per augmentation. The use of a sophisticated data structure, called dynamic trees, reduces the average time for each augmentation from O(n) to O(log n). This implementation of the maximum flow algorithm runs in O(nm log n) time, and obtaining further improvements appears quite difficult, except in very dense networks. These implementations with sophisticated data structures appear to be primarily of theoretical interest, however, because maintaining the data structures requires substantial overhead that tends to increase rather than reduce the computational times in practice. A detailed discussion of dynamic trees is beyond the scope of this chapter.

Potential Functions and an Alternate Proof of Lemma 4.2(b)

A powerful method for proving computational time bounds is to use potential functions. Potential function techniques are general purpose techniques for proving the complexity of an algorithm by analyzing the effects of different steps on an appropriately defined function. The use of potential functions enables us to define an "accounting" relationship between the occurrences of various steps of an algorithm; such a relationship can be used to obtain a bound on steps that might be difficult to obtain using other arguments.

Rather than formally introducing potential functions, we illustrate the technique by showing that the number of augmentations in the shortest augmenting path algorithm is O(nm).

Suppose that in the shortest augmenting path algorithm we keep track of the number of admissible arcs in the residual network. Let F(k) denote the number of admissible arcs at the end of the k-th step; for the purpose of this argument, we count a step as either an augmentation or a relabel operation. Let the algorithm perform K steps before it terminates. Clearly, F(0) <= m and F(K) >= 0. Each augmentation decreases the residual capacity of at least one arc to zero and hence reduces F by at least one unit. Each relabeling of node i creates as many as |A(i)| new admissible arcs, and increases F by the same amount. This increase in F is at most nm over all relabelings, since the algorithm relabels any node at most n times (as a consequence of Lemma 4.2) and Σ_{i ∈ N} n |A(i)| = nm. Since the initial value of F is at most m more than its terminal value, the total decrease in F due to all augmentations is at most m + nm. Thus the number of augmentations is at most m + nm = O(nm).

This argument is fairly representative of the potential function technique. Our objective was to bound the number of augmentations. We did so by defining a potential function that decreases whenever the algorithm performs an augmentation. The potential increases only when the algorithm relabels distances, and thus we can bound the number of augmentations using known bounds on the number of relabels. In general, we bound the number of steps of one type in terms of known bounds on the number of steps of other types.

4.4 Preflow-Push Algorithms

Augmenting path algorithms send flow by augmenting along a path. This basic step further decomposes into the more elementary operation of sending flow along an arc. Thus sending a flow of Δ units along a path of k arcs decomposes into k basic operations of sending a flow of Δ units along an arc of the path. We shall refer to each of these basic operations as a push.

A path augmentation has one advantage over a single push: it maintains conservation of flow at all nodes. In fact, the push-based algorithms such as those we develop in this and the following sections necessarily violate conservation of flow.

Rather, these algorithms permit the flow into a node to exceed the flow out of this node. We will refer to any such flows as preflows. Preflow-push algorithms have several advantages over augmentation based algorithms. First, they are more general and more flexible. Second, they can push flow closer to the sink before identifying augmenting paths. Third, they are better suited for distributed or parallel computation. Fourth, the best preflow-push algorithms currently outperform the best augmenting path algorithms in theory as well as in practice.

The Generic Algorithm

A preflow is a function x: A → R that satisfies (4.1c) and the following relaxation of (4.1b):

    Σ_{j : (j,i) ∈ A} x_ji - Σ_{j : (i,j) ∈ A} x_ij >= 0, for all i ∈ N - {s, t}.

The preflow-push algorithms maintain a preflow at each intermediate stage. For a given preflow x, we define the excess of each node i ∈ N - {s, t} as

    e(i) = Σ_{j : (j,i) ∈ A} x_ji - Σ_{j : (i,j) ∈ A} x_ij.

We refer to a node with positive excess as an active node, and we adopt the convention that the source and sink nodes are never active. We define the distance labels and admissible arcs as in the previous section.

The preflow-push algorithms perform all operations using only local information. At each iteration of the algorithm (except at its initialization and its termination), the network contains at least one active node, i.e., a node i ∈ N - {s, t} with e(i) > 0. The goal of each iterative step is to choose some active node and to send its excess closer to the sink, closeness being measured with respect to the current distance labels. As in the shortest augmenting path algorithm, we send flow only on admissible arcs. If the method cannot send excess from a node to nodes with smaller distance labels, then it increases the distance label of the node so that it creates at least one new admissible arc. The algorithm terminates when the network contains no active nodes. The preflow-push algorithm uses the following subroutines:

procedure PREPROCESS;
begin
    x := 0;
    perform a backward breadth first search of the residual network, starting at node t, to determine initial distance labels d(i);
    x_sj := u_sj for each arc (s, j) ∈ A(s) and d(s) := n;
end;

procedure PUSH/RELABEL(i);
begin
    if the network contains an admissible arc (i, j) then
        push δ := min {e(i), r_ij} units of flow from node i to node j
    else replace d(i) by min {d(j) + 1 : (i, j) ∈ A(i) and r_ij > 0};
end;

A push of δ units from node i to node j decreases both e(i) and r_ij by δ units and increases both e(j) and r_ji by δ units. We say that a push of δ units of flow on arc (i, j) is saturating if δ = r_ij and nonsaturating otherwise. We refer to the process of increasing the distance label of a node as a relabel operation. The purpose of the relabel operation is to create at least one admissible arc on which the algorithm can perform further pushes.

The following generic version of the preflow-push algorithm combines the subroutines just described.

algorithm PREFLOW-PUSH;
begin
    PREPROCESS;
    while the network contains an active node do
    begin
        select an active node i;
        PUSH/RELABEL(i);
    end;
end;
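The sketch below (ours) is a runnable rendering of the generic algorithm; it uses a first-in, first-out queue of active nodes and fully discharges each selected node, which is only one of the selection disciplines the generic method allows:

from collections import deque

def preflow_push(n, u, s, t):
    # Nodes are 1..n; u maps ordered pairs to capacities.
    r = dict(u)                                # residual capacities
    adj = {i: [] for i in range(1, n + 1)}
    for (i, j) in list(u):
        adj[i].append(j)
        if (j, i) not in r:
            r[j, i] = 0
            adj[j].append(i)
    d = {i: n for i in range(1, n + 1)}        # PREPROCESS: backward BFS
    d[t] = 0
    q = deque([t])
    while q:
        j = q.popleft()
        for i in adj[j]:
            if r[i, j] > 0 and d[i] == n:
                d[i] = d[j] + 1
                q.append(i)
    e = {i: 0 for i in range(1, n + 1)}
    d[s] = n
    active = deque()
    for j in adj[s]:                           # saturate the arcs out of s
        delta = r[s, j]
        if delta > 0:
            r[s, j] = 0
            r[j, s] += delta
            e[j] += delta
            if j != t:
                active.append(j)
    while active:
        i = active.popleft()
        while e[i] > 0:
            pushed = False
            for j in adj[i]:                   # PUSH on admissible arcs
                if r[i, j] > 0 and d[i] == d[j] + 1:
                    delta = min(e[i], r[i, j])
                    r[i, j] -= delta
                    r[j, i] += delta
                    e[i] -= delta
                    e[j] += delta
                    if j not in (s, t) and e[j] == delta:
                        active.append(j)       # j has just become active
                    pushed = True
                    if e[i] == 0:
                        break
            if not pushed:                     # RELABEL
                d[i] = min(d[j] + 1 for j in adj[i] if r[i, j] > 0)
    return e[t]                                # all excess ends at s or t

u = {(1, 2): 4, (1, 3): 2, (2, 3): 1, (2, 4): 3, (3, 4): 5}
print(preflow_push(4, u, 1, 4))                # 6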

It might be instructive to visualize the generic preflow-push algorithm in terms of a physical network: arcs represent flexible water pipes, nodes represent joints, and the distance function measures how far nodes are above the ground. In this network, we wish to send water from the source to the sink. In addition, we visualize flow in an admissible arc as water flowing downhill. Initially, we move the source node upward, and water flows to its neighbors. In general, water flows downhill towards the sink; occasionally, however, flow becomes trapped locally at a node that has no downhill neighbors. At this point, we move the node upward, and again water flows downhill towards the sink. Eventually, no more flow can reach the sink. As we continue to move nodes upwards, the remaining excess flow eventually flows back towards the source. The algorithm terminates when all the water flows either into the sink or back into the source.

The preprocessing step accomplishes several important tasks. First, it gives each node adjacent to node s a positive excess, so that the algorithm can begin by selecting some node with positive excess. Second, since the preprocessing step saturates all arcs incident to node s, none of these arcs is admissible, and setting d(s) = n will satisfy the validity condition C4.2. Third, since d(s) = n is a lower bound on the length of any path from s to t, the residual network contains no path from s to t. Since distance labels are nondecreasing, we are also guaranteed that in subsequent iterations the residual network will never contain a directed path from s to t, and so there will never be any need to push flow from s again.

Figure 4.3 illustrates the push/relabel steps applied to the example given in Figure 4.1(a). Figure 4.3(a) specifies the preflow determined by the preprocess step. Suppose the select step examines node 2. Since arc (2, 4) has residual capacity r_24 = 1 and d(2) = d(4) + 1, the algorithm performs a (saturating) push of value δ = min {2, 1} = 1 unit. The push reduces the excess of node 2 to 1. Arc (2, 4) is deleted from the residual network, and arc (4, 2) is added to the residual network. Since node 2 is still an active node, it can be selected again for further pushes. The arcs (2, 3) and (2, 1) have positive residual capacities, but they do not satisfy the distance condition. Hence, the algorithm performs a relabel operation and gives node 2 the new distance label d'(2) = min {d(3) + 1, d(1) + 1} = min {2, 5} = 2.

In the push/relabel(i) step, we identify an admissible arc in A(i) using the same data structure we used in the shortest augmenting path algorithm. We maintain with each node i a current arc (i, j), which is the current candidate for the push operation, and we choose the current arc by sequentially scanning the arc list. We have seen earlier that scanning the arc lists takes O(nm) total time if the algorithm relabels each node O(n) times.

. (a) d(3) = 1 d(l) =4 d(4) = d(2) = l 1 6^ = (b) After the execution of step PUSH(2).= 2 The residual network after the preprocessing step.88 d(3) = 1 e3=4 d(l) = 4 d(4) = d(2) = 1 e.

This condition total is the termination criterion of the augmenting path algorithm. paths from s to active nodes. the preflow-push algorithm pushes flow only on admissible arcs and relabels a node orily when no admissible arc emanates from it. Proof. any preflow x can be decomposed with respect (i) to the original (ii) network G into nonnegative flows along paths from the source s to Let i t. and (iii) the flows around directed cycles. The second conclusion follows from the following lemma. be an . the residual a flow. begin by establishing one result: first always valid and do not increase too many The of these conclusions follows from Lemma because as in the shortest augmenting path algorithm. The algorithm terminates when the excess is either at the source or at the sink implying that the current preflow r. Complexity of the Algorithm We now important times.89 d(3) = 1 d(l) = 4 d(4) = d(2) = 2 (c) After the execution of step RELABEL(2). Figure 4. we can easily resides show that it finds a maximum flow.3 An illustration of push and relabel steps. arcs directed into the sink is and thus the flow on the maximum flow value. that distance labels are We 4. Lemma is 43. Assuming that the generic preflow-push algorithm terminates.1. By the flow decomposition theory. each node i with positive excess node s by a directed path from i to s in the residual network. connected to At any stage of the preflow-push algorithm. Since d(s) = network contains no path from the source to the sink. analyze the complexity of the algorithm.

it had a positive excess. x. V i€ I d(i). dii) < 2n. F cases zero. Then there t must be a path P from s to i in the flow decomposition of since paths from s to i. Since the total increase in d(i) throughout the running time of the i algorithm for each node distance labels is is bounded by 2n''. The algorithm able to identify an arc on which it can push flow. Case The <ilgorithm is unable to find an admissible arc along which it can push flow. For each node i e N. Lemma number 4. This lemma imples set. and hence a directed path from i to s. that during a relabel step.4. create a A saturating push on arc might 1. j) over all saturating pushes. Consequently. the initial value of F (after the preprocessing step) step. Each distance is label increases at .2 imply that (a) d(i) < d(s) + n - 1 < 2n. Proof. j. and hence 2n'^m Next note that a nonsaturating push on arc (i. At termination. thereby increasing the number of active nodes by and increasing F by which may be as much as 2n per saturating push. In this case the distance label of node i increases by e ^ 1 units. Lemma Proof. the algorithm does not minimize over an empty Lemma Proof. the total increase in F due to increases in bounded by is Case 2. The proof is ver>' much similar to that of Lemma 4. most 2n times. and d(i) < 2n for all i e is I. 4. The number of nonsaturating pushes is O(n^m).2. new excess at node d(j). I denote the set of active nodes. the residual network contained a path of length at most n-1 from node fact that d(s) to node The = n and condition C4.5. and flows around cycles do not P contribute to the excess at node Then the residual network contains the reversal of O' with the orientation of each arc reversed). 4. During the push/ relabel (i) one of the following two must apply: 1. does not . 2n. i and hence s. is at most 2n^. the total is of relabel steps at most 2n^ (b) The number of saturating pushes at most nm. This operation increases F by at most e units. Since < n. Cor^ider the potential function F = . Let III We prove the lemma using an argument based on potential functions.6.90 active node relative to the preflou' x in G. and so (i. The last time the algorithm relabeled node i. j) it performs a saturating or a nonsaturating push.

increase |I|. The nonsaturating push decreases F by d(i), since node i becomes inactive, but it simultaneously increases F by d(j) = d(i) - 1 if the push causes node j to become active; if node j was active before the push, then F simply decreases by the amount d(i). The net decrease in F is thus at least 1 unit per nonsaturating push.

We summarize these facts. The initial value of F is at most 2n², and the maximum possible increase in F is 2n² + 2n²m. Each nonsaturating push decreases F by at least one unit, and F always remains nonnegative. Consequently, the nonsaturating pushes can occur at most 2n² + 2n² + 2n²m = O(n²m) times, proving the lemma.

Finally, we indicate how the algorithm keeps track of active nodes for the push/relabel steps. The algorithm maintains a set S of active nodes: it adds to S nodes that become active following a push and are not already in S, and it deletes from S nodes that become inactive following a nonsaturating push. Several data structures (for example, doubly linked lists) are available for storing S so that the algorithm can add, delete, or select elements from it in O(1) time. Consequently, it is easy to implement the preflow-push algorithm in O(n²m) time. We have thus established the following theorem:

Theorem 4.4. The generic preflow-push algorithm runs in O(n²m) time.
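For concreteness, the following Python sketch implements the generic push/relabel loop under the conventions used above (the preprocessing step saturates the arcs out of s and sets d(s) = n). The dictionary-based network representation and all names are our own illustrative choices, not the paper's; a serious implementation would use the data structures just discussed.

    # A minimal sketch of the generic preflow-push loop, assuming a
    # dict-of-dicts capacity map; returns the maximum flow value e(t).
    def preflow_push(cap, s, t):
        nodes = set(cap) | {j for i in cap for j in cap[i]}
        n = len(nodes)
        r = {i: dict(cap.get(i, {})) for i in nodes}   # residual capacities
        for i in nodes:
            for j in cap.get(i, {}):
                r[j].setdefault(i, 0)                  # reversal arcs
        d = {i: 0 for i in nodes}                      # distance labels
        e = {i: 0 for i in nodes}                      # excesses
        d[s] = n                                       # PREPROCESS
        for j in list(r[s]):
            delta = r[s][j]
            if delta > 0:
                r[s][j] = 0
                r[j][s] += delta
                e[j] += delta
        active = [i for i in nodes if e[i] > 0 and i not in (s, t)]
        while active:
            i = active[-1]                             # any active node works
            admissible = [j for j in r[i] if r[i][j] > 0 and d[i] == d[j] + 1]
            if admissible:                             # push
                j = admissible[0]
                delta = min(e[i], r[i][j])
                r[i][j] -= delta
                r[j][i] += delta
                e[i] -= delta
                e[j] += delta
                if j not in (s, t) and j not in active:
                    active.append(j)
            else:                                      # relabel
                d[i] = 1 + min(d[j] for j in r[i] if r[i][j] > 0)
            if e[i] == 0:
                active.remove(i)
        return e[t]

For instance, preflow_push({'s': {'a': 2, 'b': 1}, 'a': {'t': 3}, 'b': {'t': 1}}, 's', 't') returns 3.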

A Specialization of the Generic Algorithm

The running time of the generic preflow-push algorithm is comparable to the bound of the shortest augmenting path algorithm. However, the preflow-push algorithm has several nice features, in particular, its flexibility and its potential for further improvements. By specifying different rules for selecting nodes for push/relabel operations, we can derive many different algorithms from the generic version. For example, suppose that in a push/relabel step we always select an active node with the highest distance label. Let h* = max {d(i) : e(i) > 0, i ∈ N} at some point of the algorithm. Then nodes with distance h* push flow to nodes with distance h*-1, and these nodes, in turn, push flow to nodes with distance h*-2, and so on. Note that if a node is relabeled, then excess moves up and then gradually comes down again. If the algorithm relabels no node during n consecutive node examinations, then all excess reaches the sink node and the algorithm terminates. Since the algorithm requires O(n²) relabel operations, we immediately obtain a bound of O(n³) on the number of node examinations. Each node examination entails at most one nonsaturating push. Consequently, this algorithm performs O(n³) nonsaturating pushes.

To implement this strategy, we maintain the lists LIST(r) = {i ∈ N : e(i) > 0 and d(i) = r}, and a variable level, which is an upper bound on the highest index r for which LIST(r) is nonempty. We can store these lists as doubly linked lists so that adding, deleting, or selecting an element takes O(1) time. We identify the highest indexed nonempty list by starting at LIST(level) and sequentially scanning the lower indexed lists. We leave it as an exercise to show that the overall effort needed to scan the lists is bounded by n plus the total increase in the distance labels, which is O(n²). The following theorem is now evident.

Theorem 4.5. The preflow-push algorithm that always pushes flow from an active node with the highest distance label runs in O(n³) time.

The O(n³) bound for the highest label preflow-push algorithm is straightforward, and it can be improved: researchers have shown, using a more clever analysis, that the highest label preflow-push algorithm in fact runs in O(n²√m) time.
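The bucket bookkeeping just described can be sketched as follows; the class name and method signatures are ours, chosen only to exhibit the LIST(r)/level mechanics.

    # Illustrative sketch of the LIST(r)/level structure for
    # highest-label node selection; 0 <= d(i) < 2n throughout.
    class HighestLabelBuckets:
        def __init__(self, n):
            self.lists = [set() for _ in range(2 * n)]   # LIST(r)
            self.level = 0          # upper bound on highest nonempty r

        def insert(self, i, r):     # node i becomes active with d(i) = r
            self.lists[r].add(i)
            self.level = max(self.level, r)

        def remove(self, i, r):     # node i becomes inactive
            self.lists[r].discard(i)

        def select_highest(self):
            # scan downward from `level`; over the whole run the total
            # scanning effort is n plus the total increase in labels
            while self.level > 0 and not self.lists[self.level]:
                self.level -= 1
            bucket = self.lists[self.level]
            return next(iter(bucket)) if bucket else None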

4.5 Excess-Scaling Algorithm

The generic preflow-push algorithm allows the flows at each intermediate step to violate the mass balance equations. By pushing flows from active nodes, the algorithm attempts to satisfy the mass balance equations. The function e_max = max {e(i) : i is an active node} is one measure of the infeasibility of a preflow. Note, though, that during the execution of the generic algorithm we would observe no particular pattern in e_max, except that e_max eventually decreases to the value 0. In this section, we develop an excess-scaling technique that systematically reduces e_max to 0.

We now describe an implementation of the generic preflow-push algorithm that dramatically reduces the number of nonsaturating pushes, from O(n²m) to O(n² log U). Recall that U represents the largest arc capacity in the network. We refer to this algorithm as the excess-scaling algorithm since it is based on scaling the node excesses.

The excess-scaling algorithm is based on the following ideas. Let Δ denote an upper bound on e_max; we refer to this bound as the excess-dominator. The excess-scaling algorithm pushes flow from nodes whose excess is more than Δ/2 ≥ e_max/2. This choice assures that during nonsaturating pushes the algorithm sends relatively large excess closer to the sink. Pushes carrying small amounts of flow are of little benefit and can cause bottlenecks that retard the algorithm's progress.

The algorithm also does not allow the maximum excess to increase beyond Δ. This algorithmic strategy may prove to be useful for the following reason. Suppose several nodes send flow to a single node j, creating a very large excess. It is likely that node j could not send the accumulated flow closer to the sink, and thus the algorithm would need to increase its distance label and return much of its excess back toward the source. Thus, pushing too much flow to any node is likely to be a wasted effort.

The excess-scaling algorithm has the following algorithmic description.

algorithm EXCESS-SCALING;
begin
  PREPROCESS;
  K := ⌈log U⌉;
  for k := K down to 0 do
  begin (Δ-scaling phase)
    Δ := 2^k;
    while the network contains a node i with e(i) > Δ/2 do
      perform push/relabel(i) while ensuring that no node excess exceeds Δ;
  end;
end;

The algorithm performs a number of scaling phases with the value of the excess-dominator Δ decreasing from phase to phase. We refer to a specific scaling phase with a certain value of Δ as the Δ-scaling phase. Initially, Δ = 2^⌈log U⌉, where the logarithm has base 2; hence U ≤ Δ < 2U. During the Δ-scaling phase, Δ/2 < e_max ≤ Δ, and e_max may vary up and down during the phase. When e_max ≤ Δ/2, a new scaling phase begins. After the algorithm has performed ⌈log U⌉ + 1 scaling phases, e_max decreases to the value 0 and we obtain the maximum flow.

The excess-scaling algorithm uses the same step push/relabel(i) as in the generic preflow-push algorithm, but with one slight difference: instead of pushing r_ij units of flow, it pushes δ = min {e(i), r_ij, Δ - e(j)} units. This change ensures that the algorithm permits no excess to exceed Δ. The algorithm uses the following node selection rule to guarantee that no node excess exceeds Δ.

Selection Rule. Among all nodes with excess of more than Δ/2, select a node with minimum distance label (breaking ties arbitrarily).
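The two scaling-specific ingredients, the Δ schedule and the capped push amount, can be isolated in a few lines of Python. These helper names are illustrative; the push/relabel machinery itself is as in the generic sketch given earlier.

    import math

    # Delta values for the successive scaling phases: 2^K, ..., 2, 1,
    # with K = ceil(log2 U), so that initially U <= Delta < 2U.
    def excess_scaling_schedule(U):
        K = math.ceil(math.log2(U))
        return [2 ** k for k in range(K, -1, -1)]

    # Amount pushed on an admissible arc (i, j): capped by Delta - e(j)
    # so that no node excess ever exceeds the excess-dominator Delta.
    def push_amount(e, r, delta, i, j):
        return min(e[i], r[i][j], delta - e[j])

    # Selection rule: among nodes with excess more than Delta/2, pick
    # one with minimum distance label (ties broken arbitrarily).
    def select_node(e, d, delta, s, t):
        big = [i for i in e if i not in (s, t) and e[i] > delta / 2]
        return min(big, key=lambda i: d[i]) if big else None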

Lemma 4.7. The algorithm satisfies the following two conditions:

C4.3. Each nonsaturating push sends at least Δ/2 units of flow.

C4.4. No excess ever exceeds Δ.

Proof. For every push on arc (i, j), we have e(i) > Δ/2 and e(j) ≤ Δ/2, since node i is a node with smallest distance label among nodes whose excess is more than Δ/2, and d(j) = d(i) - 1 < d(i) since arc (i, j) is admissible. Hence, by sending min {e(i), r_ij, Δ - e(j)} ≥ min {Δ/2, r_ij} units of flow, we ensure that in a nonsaturating push the algorithm sends at least Δ/2 units. Further, the push operation increases only e(j). Let e'(j) = e(j) + min {e(i), r_ij, Δ - e(j)} be the excess at node j after the push. Then e'(j) ≤ e(j) + Δ - e(j) ≤ Δ. All node excesses thus remain less than or equal to Δ.

Lemma 4.8. The excess-scaling algorithm performs O(n²) nonsaturating pushes per scaling phase and O(n² log U) pushes in total.

Proof. We prove the lemma using an argument based on potential functions. Consider the potential function F = Σ_{i ∈ N} e(i) d(i)/Δ. Using this potential function, we will establish the first assertion of the lemma. Since the algorithm has O(log U) scaling phases, the second assertion is a consequence of the first. The initial value of F at the beginning of the Δ-scaling phase is bounded by 2n², because e(i) is bounded by Δ and d(i) is bounded by 2n. During the push/relabel(i) step, one of the following two cases must apply:

Case 1. The algorithm is unable to find an admissible arc along which it can push flow. In this case the distance label of node i increases by ε ≥ 1 units. This relabeling operation increases F by at most ε units, because e(i) ≤ Δ. Since for each node i the total increase in d(i) throughout the running of the algorithm is bounded by 2n (by Lemma 4.4), the total increase in F due to the relabeling of nodes is bounded by 2n² in the Δ-scaling phase (actually, the increase in F due to node relabelings is at most 2n² over all scaling phases).

Case 2. The algorithm is able to identify an arc on which it can push flow, and so it performs either a saturating or a nonsaturating push. In either case, F decreases. A nonsaturating push on arc (i, j) sends at least Δ/2 units of flow from node i to node j, and since d(j) = d(i) - 1, after this operation F decreases by at least 1/2 unit. Since the initial value of F at the beginning of a Δ-scaling phase is at most 2n² and the increases in F during this scaling phase sum to at most 2n² (from Case 1), the number of nonsaturating pushes is bounded by 8n².

This lemma implies a bound of O(nm + n² log U) for the excess-scaling algorithm, since we have already seen that all other operations (such as saturating pushes, relabel operations, and finding admissible arcs) require O(nm) time.

Up to this point, we have ignored the method needed to identify a node with the minimum distance label among nodes with excess more than Δ/2. Making this identification is easy if we use a scheme similar to the one used in the preflow-push method in Section 4.4 to find a node with the highest distance label. We maintain the lists LIST(r) = {i ∈ N : e(i) > Δ/2 and d(i) = r}, and a variable level, which is a lower bound on the smallest index r for which LIST(r) is nonempty. We identify the lowest indexed nonempty list by starting at LIST(level) and sequentially scanning the higher indexed lists. We leave it as an exercise to show that the overall effort needed to scan the lists is bounded by the number of pushes performed by the algorithm plus O(n log U), and hence is not a bottleneck operation. With this observation, we can summarize our discussion by the following result.

Theorem 4.6. The preflow-push algorithm with excess-scaling runs in O(nm + n² log U) time.

Networks with Lower Bounds on Flows

To conclude this section, we show how to solve maximum flow problems with nonnegative lower bounds on flows. Let l_ij ≥ 0 denote the lower bound for the flow on any arc (i, j) ∈ A. Although the maximum flow problem with zero lower bounds always has a feasible solution, the problem with nonnegative lower bounds could be infeasible. We can, however, determine the feasibility of this problem by solving a maximum flow problem with zero lower bounds as follows.

We set x_ij = l_ij for each arc (i, j) ∈ A. This choice gives us a pseudoflow, with e(i) representing the excess or deficit of any node i ∈ N. (We refer the reader to Section 5.4 for the definition of a pseudoflow with both excesses and deficits.) We introduce a super source, node s*, and a super sink, node t*. For each node i with e(i) > 0, we add an arc (s*, i) with capacity e(i), and for each node i with e(i) < 0, we add an arc (i, t*) with capacity -e(i). We then solve a maximum flow problem from s* to t*. Let x* denote the maximum flow and v* denote the maximum flow value in the transformed network. If v* = Σ_{i : e(i) > 0} e(i), then the original problem is feasible, and choosing the flow on each arc (i, j) as x*_ij + l_ij gives a feasible flow; otherwise, the problem is infeasible.
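The feasibility construction translates directly into code. In this sketch (layout ours), we fix x = l, record the resulting node imbalances, and attach the super source and super sink; any maximum flow routine, such as the preflow-push sketch above, can then be applied.

    # Build the zero-lower-bound network used to test feasibility of
    # a max flow problem with lower bounds l and capacities u.
    def feasibility_network(u, l):
        e = {}                               # imbalances from x = l
        cap = {}
        for (i, j), lb in l.items():
            cap[(i, j)] = u[(i, j)] - lb     # remaining capacity above l
            e[i] = e.get(i, 0) - lb
            e[j] = e.get(j, 0) + lb
        for i, ei in e.items():
            if ei > 0:
                cap[('s*', i)] = ei          # super source arc
            elif ei < 0:
                cap[(i, 't*')] = -ei         # super sink arc
        target = sum(ei for ei in e.values() if ei > 0)
        # the original problem is feasible iff a max flow routine
        # sends `target` units from 's*' to 't*' in `cap`
        return cap, target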

Once we have found a feasible flow, we apply any of the maximum flow algorithms with only one change: initially define the residual capacity of an arc (i, j) as r_ij = (u_ij - x_ij) + (x_ji - l_ji). The first and second terms in this expression denote, respectively, the residual capacity for increasing flow on arc (i, j) and for decreasing flow on arc (j, i). It is possible to establish the optimality of the solution generated by the algorithm by generalizing the max-flow min-cut theorem to accommodate situations with lower bounds. These observations show that it is possible to solve the maximum flow problem with nonnegative lower bounds by two applications of the maximum flow algorithms we have already discussed.

5. MINIMUM COST FLOWS

In this section, we consider algorithmic approaches for the minimum cost flow problem. We consider the following node-arc formulation of the problem.

Minimize Σ_{(i,j) ∈ A} c_ij x_ij    (5.1a)

subject to

Σ_{j : (i,j) ∈ A} x_ij − Σ_{j : (j,i) ∈ A} x_ji = b(i), for all i ∈ N,    (5.1b)

0 ≤ x_ij ≤ u_ij, for each (i, j) ∈ A.    (5.1c)

We assume that the lower bounds l_ij on arc flows are all zero and that the arc costs are nonnegative. Let C = max {c_ij : (i, j) ∈ A} and U = max [max {u_ij : (i, j) ∈ A}, max {|b(i)| : i ∈ N}]. The transformations T1 and T3 in Section 2.4 imply that these assumptions impose no loss of generality. We remind the reader of our blanket assumption that all data (cost, supply/demand and capacity) are integral. We also assume that the minimum cost flow problem satisfies the following two conditions.

A5.1. Feasibility Assumption. We assume that Σ_{i ∈ N} b(i) = 0 and that the minimum cost flow problem has a feasible solution. We can ascertain the feasibility of the minimum cost flow problem by solving a maximum flow problem as follows. Introduce a super source node s* and a super sink node t*. For each node i with b(i) > 0, add an arc (s*, i) with capacity b(i), and for each node i with b(i) < 0, add an arc (i, t*) with capacity -b(i). Now solve a maximum flow problem from s* to t*. If the maximum flow value equals Σ_{i : b(i) > 0} b(i), then the minimum cost flow problem is feasible; otherwise, it is infeasible.

A5.2. Connectedness Assumption. We assume that the network G contains an uncapacitated directed path (i.e., each arc in the path has infinite capacity) between every pair of nodes. We impose this condition, if necessary, by adding artificial arcs (1, j) and (j, 1) for each j ∈ N and assigning a large cost and a very large capacity to each of these

arcs. No such arc would appear in a minimum cost solution unless the problem contains no feasible solution without artificial arcs.

Our algorithms rely on the concept of residual networks. The residual network G(x) corresponding to a flow x is defined as follows: we replace each arc (i, j) ∈ A by two arcs, (i, j) and (j, i). The arc (i, j) has cost c_ij and residual capacity r_ij = u_ij − x_ij, and the arc (j, i) has cost −c_ij and residual capacity r_ji = x_ij. The residual network consists only of arcs with positive residual capacity.

The concept of residual networks poses some notational difficulties. For example, if the original network contains both the arcs (i, j) and (j, i), then the residual network may contain two arcs from node i to node j and/or two arcs from node j to node i with possibly different costs. Our notation for arcs assumes that at most one arc joins one node to any other node. By using more complex notation, we could easily treat this more general case. However, rather than changing our notation, we will assume that parallel arcs never arise (or, alternatively, that we can produce a network without any parallel arcs by inserting extra nodes on parallel arcs).

Observe that any directed cycle in the residual network G(x) is an augmenting cycle with respect to the flow x, and vice-versa (see Section 2.1 for the definition of an augmenting cycle). This equivalence implies the following alternate statement of Theorem 2.4.

Theorem 5.1. A feasible flow x is an optimum flow if and only if the residual network G(x) contains no negative cost directed cycle.

5.1 Duality and Optimality Conditions

As we have seen in Section 1.2, due to its special structure the minimum cost flow problem has a number of important theoretical properties. The linear programming dual of this problem inherits many of these properties. Moreover, the minimum cost flow problem and its dual have, from a linear programming point of view, rather simple complementary slackness conditions. In this section, we formally state the linear programming dual problem and derive the complementary slackness conditions.
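For concreteness, the residual network G(x) defined above can be built as follows; this is a sketch under the paper's no-parallel-arc convention, and the data layout is ours.

    # Residual network G(x) for the minimum cost flow setting: a
    # forward copy with cost c and residual u - x, and a backward
    # copy with cost -c and residual x; zero-capacity arcs are dropped.
    def residual_network(arcs, x):
        # arcs: dict (i, j) -> (cost, capacity); x: dict (i, j) -> flow
        res = {}
        for (i, j), (c, u) in arcs.items():
            flow = x.get((i, j), 0)
            if u - flow > 0:
                res[(i, j)] = (c, u - flow)      # r_ij = u_ij - x_ij
            if flow > 0:
                res[(j, i)] = (-c, flow)         # r_ji = x_ij
        return res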

We associate a dual variable π(i) with the mass balance constraint of node i in (5.1b). Since one of the constraints in (5.1b) is redundant, we can set one of these dual variables to an arbitrary value. We therefore assume that π(1) = 0; it is possible to show that this assumption imposes no loss of generality. Further, we associate a dual variable δ_ij with the upper bound constraint of arc (i, j) in (5.1c). The dual problem to (5.1) is:

Maximize Σ_{i ∈ N} b(i) π(i) − Σ_{(i,j) ∈ A} u_ij δ_ij    (5.2a)

subject to

π(i) − π(j) − δ_ij ≤ c_ij, for all (i, j) ∈ A,    (5.2b)

δ_ij ≥ 0 for all (i, j) ∈ A, and the π(i) unrestricted.    (5.2c)

The complementary slackness conditions for this primal-dual pair are:

x_ij > 0 ⇒ π(i) − π(j) − δ_ij = c_ij,    (5.3)

δ_ij > 0 ⇒ x_ij = u_ij.    (5.4)

These conditions are equivalent to the following optimality conditions:

x_ij = 0 ⇒ π(i) − π(j) ≤ c_ij,    (5.5)

0 < x_ij < u_ij ⇒ π(i) − π(j) = c_ij,    (5.6)

x_ij = u_ij ⇒ π(i) − π(j) ≥ c_ij.    (5.7)

To see this equivalence, suppose that 0 < x_ij < u_ij for some arc (i, j). The condition (5.4) implies that δ_ij = 0, and substituting this result in (5.3) yields (5.6). Whenever x_ij = u_ij > 0 for some arc (i, j), condition (5.3) implies that π(i) − π(j) − δ_ij = c_ij; substituting δ_ij ≥ 0 in this equation gives (5.7). Finally, if x_ij = 0 for some arc (i, j), then x_ij < u_ij and (5.4) implies that δ_ij = 0; substituting this result in (5.2b) yields (5.5).

The conditions (5.5)-(5.7) suggest defining the reduced cost of an arc (i, j) as c̄_ij = c_ij − π(i) + π(j). The optimality conditions can then be restated as follows: a pair x, π of flows and node potentials is optimal if it satisfies the following conditions.

C5.1. x is feasible.

C5.2. If c̄_ij > 0, then x_ij = 0.

C5.3. If c̄_ij = 0, then 0 ≤ x_ij ≤ u_ij.

C5.4. If c̄_ij < 0, then x_ij = u_ij.

Observe that the condition C5.3 follows from the conditions C5.2 and C5.4; we retain it for the sake of completeness. These conditions, when stated in terms of the residual network, simplify to:

C5.5 (Primal feasibility). x is feasible.

C5.6 (Dual feasibility). c̄_ij ≥ 0 for each arc (i, j) in the residual network G(x).

Note that the condition C5.6 subsumes C5.2-C5.4. For example, if c̄_ij > 0 for some arc (i, j) and x_ij > 0, then the residual network would contain the arc (j, i) with c̄_ji = −c̄_ij < 0, violating C5.6; hence x_ij = 0, which is condition C5.2.

It is easy to establish the equivalence between these optimality conditions and the condition stated in Theorem 5.1. Consider any pair x, π of flows and node potentials satisfying C5.5 and C5.6. Let W be any directed cycle in the residual network. Condition C5.6 implies that Σ_{(i,j) ∈ W} c̄_ij ≥ 0. Further, Σ_{(i,j) ∈ W} c̄_ij = Σ_{(i,j) ∈ W} c_ij + Σ_{(i,j) ∈ W} (−π(i) + π(j)) = Σ_{(i,j) ∈ W} c_ij, since the potential terms telescope around the cycle. Hence, the residual network contains no negative cost cycle.

To see the converse, suppose that x is feasible and G(x) does not contain a negative cycle. Then in the residual network the shortest distances from node 1, with respect to the arc lengths c_ij, are well defined. Let d(i) denote the shortest distance from node 1 to node i. The shortest path optimality condition C3.2 implies that d(j) ≤ d(i) + c_ij for each arc (i, j) in G(x). Let π = −d. Then c̄_ij = c_ij + d(i) − d(j) ≥ 0 for all (i, j) in G(x). Hence, the pair x, π satisfies C5.5 and C5.6.
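Checking the dual feasibility condition C5.6 for a given pair x, π is a one-line computation over the residual network built earlier; this tiny sketch (names ours) makes the reduced-cost definition explicit.

    # C5.6: every residual arc must have nonnegative reduced cost
    # cbar_ij = c_ij - pi(i) + pi(j).
    def satisfies_C56(res, pi):
        # res: dict (i, j) -> (cost, residual capacity)
        return all(c - pi[i] + pi[j] >= 0 for (i, j), (c, _r) in res.items())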

5.2 Relationship to Shortest Path and Maximum Flow Problems

The minimum cost flow problem generalizes both the shortest path and maximum flow problems. The shortest path problem from node s to all other nodes can be formulated as a minimum cost flow problem by setting b(s) = n - 1, b(i) = -1 for all i ≠ s, and u_ij = ∞ for each (i, j) ∈ A (in fact, setting u_ij equal to any integer greater than n - 1 will suffice if we wish to maintain finite capacities). Similarly, the maximum flow problem from node s to node t can be transformed to the minimum cost flow problem by setting b(i) = 0 and c_ij = 0 for all nodes and arcs, and introducing an additional arc (t, s) with c_ts = -1 and u_ts = ∞ (in fact, u_ts = m · max {u_ij : (i, j) ∈ A} would suffice). Thus, algorithms for the minimum cost flow problem solve both the shortest path and maximum flow problems as special cases.

Conversely, algorithms for the shortest path and maximum flow problems are of great use in solving the minimum cost flow problem. Indeed, many of the algorithms for the minimum cost flow problem use shortest path and/or maximum flow algorithms as subroutines, either explicitly or implicitly. Consequently, improved algorithms for these two problems have led to improved algorithms for the minimum cost flow problem. This relationship will be more transparent when we discuss algorithms for the minimum cost flow problem.
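Both reductions are pure data transformations, as this sketch indicates (containers and names are illustrative).

    # Shortest paths from s as a min cost flow: supply n-1 at s and
    # demand 1 everywhere else; any capacity exceeding n-1 suffices.
    def shortest_path_as_mcf(nodes, s):
        n = len(nodes)
        return {i: (n - 1 if i == s else -1) for i in nodes}

    # Max flow from s to t as a min cost flow: zero costs on the
    # original arcs plus a return arc (t, s) of cost -1 and a
    # suitably large capacity (m times the largest arc capacity).
    def max_flow_as_mcf(arcs, s, t):
        big = len(arcs) * max(arcs.values())
        mcf = {(i, j): (0, u) for (i, j), u in arcs.items()}
        mcf[(t, s)] = (-1, big)
        return mcf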

We have already shown in Section 5.1 how to obtain an optimum dual solution from an optimum primal solution by solving a single shortest path problem. We now show how to obtain an optimal primal solution from an optimal dual solution by solving a single maximum flow problem. Suppose that π is an optimal dual solution and c̄ is the vector of reduced costs. We define the cost-residual network G* = (N, A*) as follows. The nodes in G* have the same supply/demand as the nodes in G. Any arc (i, j) ∈ A* has an upper bound u*_ij as well as a lower bound l*_ij, defined as follows:

(i) For each (i, j) ∈ A with c̄_ij > 0, A* contains an arc (i, j) with u*_ij = l*_ij = 0.

(ii) For each (i, j) ∈ A with c̄_ij < 0, A* contains an arc (i, j) with u*_ij = l*_ij = u_ij.

(iii) For each (i, j) ∈ A with c̄_ij = 0, A* contains an arc (i, j) with u*_ij = u_ij and l*_ij = 0.

The lower and upper bounds on arcs in the cost-residual network G* are defined so that any flow in G* satisfies the optimality conditions C5.2-C5.4. If c̄_ij > 0 for some (i, j) ∈ A, then condition C5.2 dictates that x_ij = 0 in the optimum flow. Similarly, if c̄_ij < 0 for some (i, j) ∈ A, then C5.4 implies that the flow on arc (i, j) must be at the arc's upper bound in the optimum flow. If c̄_ij = 0, then any flow value between 0 and u_ij will satisfy the condition C5.3.

Now the problem is reduced to finding a feasible flow in the cost-residual network that satisfies the lower and upper bound restrictions of the arcs and, at the same time, meets the supply/demand constraints of the nodes. We first eliminate the lower bounds of the arcs as described in Section 2.4 and then transform this problem to a maximum flow problem as described in assumption A5.1. Let x* denote the maximum flow in the transformed network. Then x* + l* is an optimum solution of the minimum cost flow problem in G.
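Rules (i)-(iii) translate into a short table-driven computation; the sketch below (layout ours) returns the (lower, upper) bounds of each arc of A*.

    # Bounds of the cost-residual network G*: the sign of the reduced
    # cost of each arc pins its flow to 0, to u, or to anywhere between.
    def cost_residual_bounds(arcs, cbar):
        # arcs: dict (i, j) -> capacity; cbar: dict (i, j) -> reduced cost
        bounds = {}
        for (i, j), u in arcs.items():
            if cbar[(i, j)] > 0:
                bounds[(i, j)] = (0, 0)    # C5.2: flow forced to zero
            elif cbar[(i, j)] < 0:
                bounds[(i, j)] = (u, u)    # C5.4: flow forced to capacity
            else:
                bounds[(i, j)] = (0, u)    # C5.3: any value is optimal
        return bounds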

5.3 Negative Cycle Algorithm

Operations researchers, computer scientists, electrical engineers and many others have extensively studied the minimum cost flow problem and have proposed a number of different algorithms to solve it. Notable examples are the negative cycle, successive shortest path, primal-dual, out-of-kilter, primal simplex and scaling-based algorithms. In this and the following sections, we discuss most of these important algorithms for the minimum cost flow problem and point out the relationships between them.

We first consider the negative cycle algorithm. The negative cycle algorithm maintains a primal feasible solution x and strives to attain dual feasibility. It does so by identifying negative cost directed cycles in the residual network G(x) and augmenting flows in these cycles. The algorithm terminates when the residual network contains no negative cost cycle. Theorem 5.1 implies that when it terminates, the algorithm has found a minimum cost flow.

algorithm NEGATIVE CYCLE;
begin
  establish a feasible flow x in the network;
  while G(x) contains a negative cycle do
  begin
    use some algorithm to identify a negative cycle W;
    δ := min {r_ij : (i, j) ∈ W};
    augment δ units of flow along the cycle W and update G(x);
  end;
end;

A feasible flow in the network can be found by solving a maximum flow problem, as explained just after assumption A5.1. One algorithm for identifying a negative cost cycle is the label correcting algorithm for the shortest path problem, described in Section 3.4, which requires O(nm) time. Every iteration reduces the flow cost by at least one unit. Since mCU is an upper bound on the initial flow cost and zero is a lower bound on the optimum flow cost, the algorithm terminates after at most O(mCU) iterations and requires O(nm²CU) time in total.

This algorithm can be improved in the following three ways (which we briefly summarize):

(i) Identifying a negative cost cycle in effort much less than O(nm) time. The simplex algorithm (to be discussed later) nearly achieves this objective: it maintains a tree solution and node potentials that enable it to identify a negative cost cycle in O(m) effort. However, due to degeneracy, the simplex algorithm cannot necessarily send a positive amount of flow along this cycle.

(ii) Identifying a negative cost cycle with maximum improvement in the objective function value. The improvement in the objective function due to the augmentation along a cycle W is (min {r_ij : (i, j) ∈ W}) · (−Σ_{(i,j) ∈ W} c_ij). Let x be some flow and x* be an optimum flow. The augmenting cycle theorem (Theorem 2.3) implies that x* equals x plus the flow on at most m augmenting cycles with respect to x. Further, the improvements in cost due to flow augmentations on these augmenting cycles sum to cx − cx*. Consequently, at least one augmenting cycle with respect to x must decrease the objective function by at least (cx − cx*)/m. Hence, if the algorithm always augments flow along a cycle with maximum improvement, then Lemma 1.1 implies that the algorithm would obtain an optimum flow within O(m log mCU) iterations. Finding a maximum improvement cycle is a difficult problem, but a modest variation of this approach yields a polynomial time algorithm for the minimum cost flow problem.

(iii) Identifying a negative cost cycle with as small a mean cost as possible. We define the mean cost of a cycle as its cost divided by the number of arcs it contains. A minimum mean cycle is a cycle whose mean cost is as small as possible. It is possible to identify a minimum mean cycle in O(nm) or O(√n m log nC) time.
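As an illustration of the detection step, the following Python sketch finds a negative cost cycle in a residual network by the standard Bellman-Ford device of relaxing for n rounds and then walking predecessor pointers back onto the cycle. The paper's suggestion is the label correcting algorithm of Section 3.4; this sketch, with our own names and layout, is one concrete realization. One then augments min {r_ij} units around the returned cycle and repeats.

    # Detect a negative cost cycle in a residual network given as
    # res: dict (i, j) -> (cost, residual capacity); returns the
    # cycle as a list of nodes, or None if no such cycle exists.
    def find_negative_cycle(res):
        nodes = {v for arc in res for v in arc}
        dist = {v: 0 for v in nodes}       # 0 acts as a virtual source
        pred = {v: None for v in nodes}
        last = None
        for _ in range(len(nodes)):        # n relaxation rounds
            last = None
            for (i, j), (c, _r) in res.items():
                if dist[i] + c < dist[j]:
                    dist[j] = dist[i] + c
                    pred[j] = i
                    last = j
        if last is None:                   # nth round relaxed nothing
            return None
        for _ in range(len(nodes)):        # step back onto the cycle
            last = pred[last]
        cycle, v = [last], pred[last]
        while v != last:
            cycle.append(v)
            v = pred[v]
        cycle.reverse()
        return cycle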

time. and T denote the . but a modest variation approach yields a polynomial time algorithm for the minimum cost flow problem. We define the mean cost cycle is a of a cycle cycle cost divided by the number of arcs It is contains. A minimum mean whose mean cost is as small as possible. i) X€ A] ''ii - {j: (i. then called the deficit.j) X€ a1 e(i) ''ii' for all i e N. then e(i) is called the excess is of node Let S i. 5. the its to the next. (iii) Identifying a negative cost cycle vdth ais its minimum mean it cost. iterations. A pseudoflow is a function x A -» R satisfying only : <md normegativity constraints. then Lemma 1. the successive shortest path algorithm maintains dual feasibility of the solution at every step and strives to attain primal feasibility. we define the imbalance of node as e(i) = b(i) + {j: (j.104 cycle with obtain maximum improvement.1 implies an optimum flow within 0(m log mCU) iterations. j At each step. if e(i) < 0. researchers have shown the negative cycle algorithm always augments the flow along a minimum mean is cycle. A node i vdth = called balanced. all The algorithm when the current solution satisfies the supply/demand the capacity i constraints. possible to identify a minimum mean that if cycle in 0(nm) or 0(Vri m log nC) Recently. is bounded from below by -C and bounded from above by Lemma implies that this algorithm will terminate in 0(nm log nC) iterations.4. If e(i) -e(i) is > for some node i. cycle is that the method would Finding a of this maximum improvement a difficult problem. Successive Shortest Path Algorithm The negative cycle algorithm maintains primal feasibility of the solution at every feaisibility. It maintains a solution x that satisfies the normegativity and capacity constraints. the algorithm selects a node i with extra supply and a node with unfulfilled demand and sends flow from terminates i to j along a shortest path in the residual network. but violates the supply/demand constraints of the nodes. the minimum mean (negative) cycle 1. step and attempts to achieve dual In contrast. absolute value decreases by a factor of l-(l/n) within m Since mean cost of the minimum mean -1/n. then from one iteration moreover.1 cycle value nondecreasing. For any pseudoflow x.

Let S and T denote the sets of excess and deficit nodes, respectively. The residual network corresponding to a pseudoflow is defined in the same way that we define the residual network for a flow.

The successive shortest path algorithm successively augments flow along shortest paths computed with respect to the reduced costs c̄_ij. Observe that for any directed path P from a node k to a node l,

Σ_{(i,j) ∈ P} c̄_ij = Σ_{(i,j) ∈ P} c_ij − π(k) + π(l).

Hence, the node potentials change all path lengths between a specific pair of nodes by a constant amount, and the shortest path with respect to c̄_ij is the same as the shortest path with respect to c_ij. The correctness of the successive shortest path algorithm rests on the following result.

Lemma 5.1. Suppose a pseudoflow x satisfies the dual feasibility condition C5.6 with respect to the node potentials π. Furthermore, suppose that x' is obtained from x by sending flow along a shortest path from a node k to a node l in G(x). Then x' also satisfies the dual feasibility conditions with respect to some node potentials.

Proof. Since x satisfies the dual feasibility conditions with respect to the node potentials π, we have c̄_ij ≥ 0 for all (i, j) in G(x). Let d(v) denote the shortest path distances from node k to any node v in G(x) with respect to the arc lengths c̄_ij. We claim that x also satisfies the dual feasibility conditions with respect to the potentials π' = π − d. The shortest path optimality conditions (i.e., C3.2) imply that d(j) ≤ d(i) + c̄_ij for all (i, j) in G(x). Substituting c̄_ij = c_ij − π(i) + π(j) in these conditions and using π'(i) = π(i) − d(i) yields c̄'_ij = c_ij − π'(i) + π'(j) ≥ 0 for all (i, j) in G(x). Hence, x satisfies C5.6 with respect to the node potentials π'. Next note that c̄'_ij = 0 for every arc (i, j) on the shortest path P from node k to node l, since d(j) = d(i) + c̄_ij for every arc (i, j) ∈ P and c̄_ij = c_ij − π(i) + π(j).

We are now in a position to prove the lemma. Augmenting flow along any arc in P maintains the dual feasibility condition C5.6 with respect to the node potentials π'. Augmenting flow on an arc (i, j) may add its reversal (j, i) to the residual network. But since c̄'_ij = 0 for each arc (i, j) ∈ P, we also have c̄'_ji = 0, and so arc (j, i) satisfies C5.6.

The node potentials play a very important role in this algorithm. Besides using them to prove the correctness of the algorithm, we use them to ensure that the arc

lengths are nonnegative, thus enabling us to solve the shortest path subproblems efficiently. The following is a more formal statement of the successive shortest path algorithm.

algorithm SUCCESSIVE SHORTEST PATH;
begin
  x := 0 and π := 0;
  compute the imbalances e(i) and initialize the sets S and T;
  while S ≠ ∅ do
  begin
    select a node k ∈ S and a node l ∈ T;
    determine shortest path distances d(j) from node k to all other nodes in G(x) with respect to the reduced costs c̄_ij;
    let P denote a shortest path from k to l;
    update π := π − d;
    δ := min [e(k), −e(l), min {r_ij : (i, j) ∈ P}];
    augment δ units of flow along the path P;
    update x, S and T;
  end;
end;

To initialize the algorithm, we set x = 0, which is a feasible pseudoflow and satisfies C5.6 with respect to the node potentials π = 0 since, by assumption, all arc lengths are nonnegative. Also, if S ≠ ∅, then T ≠ ∅, because the sum of excesses always equals the sum of deficits. Further, the connectedness assumption implies that the residual network G(x) contains a directed path from node k to node l. Each iteration of the algorithm solves a shortest path problem with nonnegative arc lengths and reduces the supply of some node by at least one unit. Consequently, if U is an upper bound on the largest supply of any node, the algorithm terminates in at most nU iterations. Since the arc lengths c̄_ij are nonnegative, the shortest path problem at each iteration can be solved using Dijkstra's algorithm. So the overall complexity of this algorithm is O(nU · S(n, m, C)), where S(n, m, C) is the time taken by Dijkstra's algorithm. Currently, O(m + n log n) is the best strongly polynomial-time bound to implement Dijkstra's algorithm, and the best (weakly) polynomial time bound is O(min {m log log C, m + n√(log C)}).
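A runnable Python sketch of this algorithm, with Dijkstra on the reduced costs, follows. The assumptions mirror the text: arc costs are nonnegative, at most one of (i, j) and (j, i) appears among the original arcs (the no-parallel-arc convention), and every node is reachable in every residual network (connectedness assumption A5.2). The names and data layout are ours, and no attempt is made at the O(m + n log n) heap bound.

    import heapq

    def successive_shortest_path(nodes, arcs, b):
        # arcs: dict (i, j) -> (cost, capacity); b: dict of supplies
        res, cost = {}, {}
        for (i, j), (c, u) in arcs.items():
            res[(i, j)] = u
            cost[(i, j)] = c
            res[(j, i)] = 0
            cost[(j, i)] = -c
        adj = {i: [] for i in nodes}
        for (i, j) in res:
            adj[i].append(j)
        pi = {i: 0 for i in nodes}
        e = dict(b)                        # x = 0, so e(i) = b(i)
        flow = {a: 0 for a in arcs}
        while any(ei > 0 for ei in e.values()):
            k = next(i for i in nodes if e[i] > 0)   # a node of S
            d = {i: float('inf') for i in nodes}     # Dijkstra from k
            d[k] = 0
            pred, heap = {}, [(0, k)]
            while heap:
                dist, i = heapq.heappop(heap)
                if dist > d[i]:
                    continue
                for j in adj[i]:
                    if res[(i, j)] > 0:
                        nd = dist + cost[(i, j)] - pi[i] + pi[j]
                        if nd < d[j]:
                            d[j] = nd
                            pred[j] = i
                            heapq.heappush(heap, (nd, j))
            l = min((i for i in nodes if e[i] < 0), key=lambda i: d[i])
            pi = {i: pi[i] - d[i] for i in nodes}    # pi := pi - d
            path, j = [], l                          # trace P backward
            while j != k:
                path.append((pred[j], j))
                j = pred[j]
            delta = min(min(res[a] for a in path), e[k], -e[l])
            for (i, j) in path:                      # augment delta on P
                res[(i, j)] -= delta
                res[(j, i)] += delta
                if (i, j) in flow:
                    flow[(i, j)] += delta
                else:
                    flow[(j, i)] -= delta
            e[k] -= delta
            e[l] += delta
        return flow, pi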

The successive shortest path algorithm is pseudopolynomial time, since it is polynomial in n, m and the largest supply U. The algorithm is, however, polynomial time for the assignment problem, a special case of the minimum cost flow problem for which U = 1. In Section 5.7, we will develop a polynomial time algorithm for the minimum cost flow problem using the successive shortest path algorithm in conjunction with scaling.

5.5 Primal-Dual and Out-of-Kilter Algorithms

The primal-dual algorithm is very similar to the successive shortest path algorithm, except that instead of sending flow on only one path during an iteration, it might send flow along many paths. To explain the primal-dual algorithm, we transform the minimum cost flow problem into a single-source and single-sink problem (possibly by adding nodes and arcs as in the assumption A5.1). At every iteration, the primal-dual algorithm solves a shortest path problem from the source to update the node potentials (i.e., each π(j) becomes π(j) − d(j)) and then solves a maximum flow problem to send the maximum possible flow from the source to the sink using only arcs with zero reduced cost. The algorithm guarantees that the excess of some node strictly decreases at each iteration, and also assures that the node potential of the sink strictly decreases. The latter observation follows from the fact that after we have solved the maximum flow problem, the network contains no path from the source to the sink in the residual network consisting entirely of arcs with zero reduced costs; consequently, in the next iteration d(t) ≥ 1. These observations give a bound of min {nU, nC} on the number of iterations, since the magnitude of each node potential is bounded by nC. This bound is better than that of the successive shortest path algorithm, but, of course, the algorithm incurs the additional expense of solving a maximum flow problem at each iteration. Thus, the algorithm has an overall complexity of O(min {nU · S(n, m, C), nC · M(n, m, U)}), where S(n, m, C) and M(n, m, U) respectively denote the solution times of shortest path and maximum flow algorithms.

The successive shortest path and primal-dual algorithms maintain a solution that satisfies the dual feasibility conditions and the flow bound constraints, but that violates the mass balance constraints. These algorithms iteratively modify the flow and potentials so that the flow at each step comes closer to satisfying the mass balance constraints. However, we could just as well have violated other constraints at intermediate steps. The out-of-kilter algorithm satisfies only the mass balance constraints and may violate the dual feasibility conditions and the flow bound restrictions. The basic idea is to drive the flow on an arc (i, j) to u_ij if c̄_ij < 0, drive the flow to zero if c̄_ij > 0, and permit any flow between 0 and u_ij if c̄_ij = 0. The kilter number, represented by k_ij,

of an arc (i, j) is defined as the minimum increase or decrease in the flow necessary to satisfy its flow bound constraint and dual feasibility condition. For example, for an arc (i, j) with c̄_ij > 0, k_ij = |x_ij|, and for an arc (i, j) with c̄_ij < 0, k_ij = |u_ij − x_ij|. An arc with k_ij = 0 is said to be in-kilter. At each iteration, the out-of-kilter algorithm reduces the kilter number of at least one arc; it terminates when all arcs are in-kilter. Suppose the kilter number of an arc (i, j) would decrease by increasing flow on the arc. Then the algorithm would obtain a shortest path P from node j to node i in the residual network and augment at least one unit of flow in the cycle P ∪ {(i, j)}. The proof of the correctness of this algorithm is similar to, but more detailed than, that of the successive shortest path algorithm.

5.6 Network Simplex Algorithm

The network simplex algorithm for the minimum cost flow problem is a specialization of the bounded variable primal simplex algorithm for linear programming. The special structure of the minimum cost flow problem offers several benefits, particularly, streamlining of the simplex computations and eliminating the need to explicitly maintain the simplex tableau. The tree structure of the basis (see Section 2.3) permits the algorithm to achieve these efficiencies. The advances made in the last two decades for maintaining and updating the tree structure efficiently have substantially improved the speed of the algorithm. Through extensive empirical testing, researchers have also improved the performance of the simplex algorithm by developing various heuristic rules for identifying entering variables. Though no version of the primal network simplex algorithm is known to run in polynomial time, its best implementations are empirically comparable to or better than other minimum cost flow algorithms.

In this section, we describe the network simplex algorithm in detail. We first define the concept of a basis structure and describe a data structure to store and to manipulate the basis, which is a spanning tree. We then show how to compute arc flows and node potentials for any basis structure. We next discuss how to perform various simplex operations, such as the selection of entering arcs, leaving arcs and pivots, using the tree data structure. Finally, we show how to guarantee the finiteness of the network simplex algorithm.

The network simplex algorithm maintains a basic feasible solution at each stage. A basic solution of the minimum cost flow problem is defined by a triple (B, L, U); B, L and U partition the arc set A. The set B denotes the set of basic arcs, i.e., the arcs of a spanning tree, and L and U respectively denote the sets of nonbasic arcs at their lower and upper bounds. We refer to the triple (B, L, U) as a basis structure. A basis structure (B, L, U) is called feasible if, by setting x_ij = 0 for each (i, j) ∈ L and setting x_ij = u_ij for each (i, j) ∈ U, the problem has a feasible solution satisfying (5.1b) and (5.1c). A feasible basis structure (B, L, U) is called an optimum basis structure if it is possible to obtain a set of node potentials π so that the reduced costs defined by c̄_ij = c_ij − π(i) + π(j) satisfy the following optimality conditions:

c̄_ij = 0, for each (i, j) ∈ B,    (5.9)

c̄_ij ≥ 0, for each (i, j) ∈ L,    (5.10)

c̄_ij ≤ 0, for each (i, j) ∈ U.    (5.11)

These optimality conditions have a nice economic interpretation. We shall see a little later that if π(1) = 0, then the equations (5.9) imply that −π(j) denotes the length of the tree path in B from node 1 to node j. The reduced cost c̄_ij = c_ij − π(i) + π(j) for a nonbasic arc (i, j) in L then denotes the change in the cost of flow achieved by sending one unit of flow through the tree path from node 1 to node i, through the arc (i, j), and then back along the tree path from node j to node 1. The condition (5.10) implies that this circulation of flow is not profitable for any nonbasic arc in L. The condition (5.11) has a similar interpretation.

The network simplex algorithm maintains a feasible basis structure at each iteration and successively improves the basis structure until it becomes an optimum basis structure. The following algorithmic description specifies the essential steps of the procedure.

algorithm NETWORK SIMPLEX;
begin
  determine an initial basic feasible flow x and the corresponding basis structure (B, L, U);
  compute node potentials for this basis structure;
  while some arc violates the optimality conditions do
  begin
    select an entering arc (k, l) violating the optimality conditions;
    add arc (k, l) to the spanning tree corresponding to the basis, forming a cycle, and augment the maximum possible flow in this cycle;
    determine the leaving arc (p, q);
    perform a basis exchange and update the node potentials;
  end;
end;

In the following discussion, we describe the various steps performed by the network simplex algorithm in greater detail.

Obtaining an Initial Basis Structure

Our connectedness assumption A5.2 provides one way of obtaining an initial basic feasible solution. We have assumed that for every node j ∈ N − {1}, the network contains arcs (1, j) and (j, 1) with sufficiently large costs and capacities. The initial basis B includes the arc (1, j) with flow −b(j) if b(j) ≤ 0, and the arc (j, 1) with flow b(j) if b(j) > 0. The set L consists of the remaining arcs, and the set U is empty. The node potentials for this basis are easily computed using (5.9), as we will see later.

Maintaining the Tree Structure

The specialized network simplex algorithm is possible because of the spanning tree property of the basis. The algorithm requires the tree to be represented so that the simplex algorithm can perform operations efficiently and update the representation quickly when the basis changes. We next describe one such tree representation. We consider the tree as "hanging" from a specially designated node, called the root. We assume that node 1 is the root node. See Figure 5.1 for an example of the tree. We associate three indices with each node i in the tree: a predecessor index, pred(i); a depth index, depth(i); and a thread index, thread(i).

Each node has a unique path connecting it to the root; the predecessor index stores the first node in that path (other than node i itself) and the depth index stores the number of arcs in the path. For the root node these indices are zero. Figure 5.1 shows an example of these indices. Note that by iteratively using the predecessor indices, we can enumerate the path from any node to the root node. We say that pred(i) is the predecessor of node i and that i is a successor of node pred(i). The descendants of a node i consist of the node i itself, its successors, the successors of its successors, and so on. For example, in Figure 5.1, the node set {5, 6, 7, 8, 9} contains the descendants of node 5. A node with no successors is called a leaf node. In Figure 5.1, nodes 4, 7, 8, and 9 are leaf nodes.

The thread indices define a traversal of the tree, a sequence of nodes that walks or threads its way through the nodes of the tree, starting at the root, visiting nodes in a "top to bottom" and "left to right" order, and finally returning to the root. For our example, this sequence would read 1-2-5-6-8-9-7-3-4-1 (see the dotted lines in Figure 5.1). For each node i, thread(i) specifies the next node in the traversal visited after node i. The thread indices can be formed by performing a depth first search of the tree as described in Section 1.5, setting the thread of a node to be the node encountered after the node itself in this depth first search. This traversal satisfies two properties: (i) the predecessor of each node appears in the sequence before the node itself; and (ii) the descendants of any node are consecutive elements in the traversal. The thread indices provide a particularly convenient means for visiting (or finding) all descendants of a node i: we simply follow the thread from node i, recording the nodes visited, until the depth of the visited node becomes no greater than the depth of node i. For example, starting at node 5, we visit nodes 6, 8, 9, and 7 in order, which are the descendants of node 5, and then visit node 3. Since node 3's depth equals that of node 5, we know that we have left the "descendant tree" lying below node 5. As we will see, finding the descendant tree of a node efficiently adds significantly to the efficiency of the simplex method.
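The descendant scan just described is a two-line loop; in this sketch (names ours), thread and depth are dicts or arrays indexed by node.

    # Visit the descendants of node i: follow the thread until the
    # depth falls back to depth(i) or less.
    def descendants(i, thread, depth):
        desc, j = [i], thread[i]
        while depth[j] > depth[i]:
            desc.append(j)
            j = thread[j]
        return desc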

Computing Node Potentials and Flows for a Given Basis Structure

The simplex method has two basic computational steps for a given basis structure: (i) determining the node potentials, and (ii) computing the arc flows. We now describe how to perform these steps efficiently using the tree indices.

We first consider the problem of computing the node potentials π for a given basis structure (B, L, U). We assume that π(1) = 0. Note that the value of one node potential can be set arbitrarily, since one constraint in (5.1b) is redundant. We compute the remaining node potentials using the conditions that c̄_ij = 0 for each arc (i, j) ∈ B. These conditions can alternatively be stated as

π(j) = π(i) − c_ij, for every arc (i, j) ∈ B.    (5.12)

j) e B. U). Cjj. if (j. j) 6 € A then . however..:(]) : = 7t(i) - Cj. The traversal assures that whenever this its fanning out procedure predecessor. = 0. i) A then 7t(j) : = 7t(i) + j : = thread (j). L. if (i.12) The basic idea indices to is to start at node 1 and fan out along the tree arcs using the thread compute other node visits potentials. (5. while node and move in toward the root using the predecessor computing flows this task. for every arc (i. The thread compute node potentials 0(n) time using the following method. in the reverse order: indices.12). A similar procedure will permit us to compute flows on basic arcs for a given start at the leaf basis structure (B. procedure begin 7t(l): COMPUTE POTENTIALS. We proceed. j. end. on arcs encountered along the way. j: = thread(l).113 n(j) = Ji(i) - Cjj. j while ^ 1 do begin i : = pred(j). the procedure can all comput in 7t(j) using (5. end. say node indices allow us to i. The following procedure accomplishes . node it has already evaluated the potential of hence.

2. The arcs in the set U must carry flow node equal to their capacity. = pred(j). Note that in the thread traversal. j) (or (j.114 procedure begin e(i) : COMPUTE FLOWS. j) : for each e U do subtract Uj. which B represents the columns Since B is in the node-arc incidence matrix N corresponding to 2. each node appears after prior to its its Hence. sum of the adjusted supply /demand of nodes in the subtree hanging from node is Since this subtree connected to the rest of the tree only by the arc (i. (i. This assignment creates an at j. the reverse thread traversal examines each node examining descendants. from e(i) set X|j = u^j. and add u^: to e(j). : = -e(j).3). = u^: explains the adjustments in the supply/demand of The manner for up>dating e(j) implies that each e(j) represents the j. it a lower triangular matrix (see Theorem is possible to solve these equations by forward substitution. = U|j for these arcs. j delete node and the arc incident to it from T. let T be the basis tree. which precisely . end. else Xjj add e(j) to e(i). descendants. while T*{1) do begin select a leaf i : node j in the subtree T. we set x^.6 in Section is the spanning tree T. if (i. = b(i) for aU i € N. One way thread indices. this arc must carry -e(j) (or e(j)) units of flow to satisfy the adjusted supply /demand of nodes in the subtree. end. of identifying leaf nodes in T is to select nodes in the reverse order of the all A simple procedure completes this task in 0(n) time: push the nodes into a stack in order of their appearance on the thread. Thus. Xj. demand of Uj. and then take them out from the top one at a time. The procedure Compute Flows in essentially solves the system of equations Bx = b. j) € : T then = e(j). i)). units at Xj: node i and makes the same amount available initial This effect of setting nodes. Now additional consider the steps of the method.

Entering Arc

Two types of arcs are eligible to enter the basis: any nonbasic arc at its lower bound with a negative reduced cost, or any nonbasic arc at its upper bound with a positive reduced cost. These arcs violate condition (5.10) or (5.11). The method used for selecting an entering arc among these eligible arcs has a major effect on the performance of the simplex algorithm. An implementation that selects an arc that violates the optimality condition the most, i.e., has the largest value of |c̄_ij| among such arcs, might require the fewest number of iterations in practice, but it must examine each arc at each iteration, which is very time-consuming. On the other hand, examining the arc list cyclically and selecting the first arc that violates the optimality condition would quickly find the entering arc, but might require a relatively large number of iterations due to the poor arc choice. One of the most successful implementations uses a candidate list approach that strikes an effective compromise between these two strategies. This approach also offers sufficient flexibility for fine tuning to special problem classes.

The algorithm maintains a candidate list of arcs violating the optimality conditions, selecting arcs in a two-phase procedure consisting of major iterations and minor iterations. In a major iteration, we construct the candidate list. We examine arcs emanating from nodes, one node at a time, adding to the candidate list the arcs emanating from node i (if any) that violate the optimality condition. We repeat this selection process for nodes i+1, i+2, ..., until either we have examined all nodes or the list has reached its maximum allowable size. The next major iteration begins with the node where the previous major iteration ended. In other words, the algorithm examines the nodes cyclically as it adds arcs emanating from them to the candidate list.

Once the algorithm has formed the candidate list in a major iteration, it performs minor iterations, scanning all candidate arcs and choosing a nonbasic arc from this list that violates the optimality condition the most to enter the basis. As we scan the arcs, we update the candidate list by removing those arcs that no longer violate the optimality conditions. Once the list becomes empty, or we have reached a specified limit on the number of minor iterations to be performed at each major iteration, we rebuild the list with another major iteration.

Leaving Arc

Suppose we select the arc (k, l) as the entering arc. The addition of this arc to the basis B forms exactly one (undirected) cycle W, which is sometimes referred to as the pivot cycle. We define the orientation of W as the same as that of (k, l) if (k, l) ∈ L, and opposite to the orientation of (k, l) if (k, l) ∈ U. Let W⁺ and W⁻ respectively denote the sets of arcs in W along and opposite to the cycle's orientation. Sending additional flow around the pivot cycle W in the direction of its orientation strictly decreases the cost of the current solution. We change the flow as much as possible, until one of the arcs in the cycle W reaches its lower or upper bound. The maximum flow change δ_ij on an arc (i, j) ∈ W that satisfies the flow bound constraints is

δ_ij = u_ij − x_ij if (i, j) ∈ W⁺, and δ_ij = x_ij if (i, j) ∈ W⁻.

We send δ = min {δ_ij : (i, j) ∈ W} units of flow around W and select an arc (p, q) with δ_pq = δ as the leaving arc.

The crucial operation in this step is to identify the cycle W. If P(i) denotes the unique path in the basis from any node i to the root node, then this cycle consists of the arcs {(k, l)} ∪ P(k) ∪ P(l) − (P(k) ∩ P(l)). In other words, W consists of the arc (k, l) together with the disjoint portions of P(k) and P(l): the cycle contains the portions of the paths P(k) and P(l) up to node w, the first common ancestor of nodes k and l, which we might refer to as the apex.

Using the predecessor indices alone permits us to identify the cycle W as follows. Start at node k and, using the predecessor indices, trace the path from this node to the root, labeling the nodes in this path. Repeat the same operation for node l until we encounter a node already labeled, say node w. This method is efficient, but it has the drawback of backtracking along some arcs that are not in W, namely, those in the portion of the path P(k) lying between the apex w and the root. The simultaneous use of the depth and predecessor indices, as indicated in the following procedure, eliminates this extra work.

procedure IDENTIFY CYCLE;
begin
  i := k and j := l;
  while i ≠ j do
  begin
    if depth(i) > depth(j) then i := pred(i)
    else if depth(j) > depth(i) then j := pred(j)
    else i := pred(i) and j := pred(j);
  end;
  w := i;
end;

A simple modification of this procedure permits it to determine the flow δ that can be augmented along W as it determines the first common ancestor w of nodes k and l. Using the predecessor indices to again traverse the cycle W, the algorithm can then update the flows on the arcs. The entire flow change operation takes O(n) time in the worst case, but it typically examines only a small subset of the nodes.
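Rendered in Python (names ours), the apex computation reads:

    # Find the apex w of the pivot cycle: climb from k and l toward
    # the root, always advancing the deeper node first.
    def find_apex(k, l, pred, depth):
        i, j = k, l
        while i != j:
            if depth[i] > depth[j]:
                i = pred[i]
            elif depth[j] > depth[i]:
                j = pred[j]
            else:
                i, j = pred[i], pred[j]
        return i        # w, the first common ancestor of k and l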

Basis Exchange

In the terminology of the simplex method, a basis exchange is a pivot operation. If δ = 0, then the pivot is said to be degenerate; otherwise it is nondegenerate. A basis is called degenerate if the flow on some basic arc equals its lower or upper bound, and nondegenerate otherwise. Observe that a degenerate pivot occurs only in a degenerate basis.

Each time the method exchanges an entering arc (k, l) for a leaving arc (p, q), it must update the basis structure. If the leaving arc is the same as the entering arc, which would happen when δ = δ_kl = u_kl, the basis does not change. In this instance, the arc (k, l) merely moves from the set L to the set U, or vice versa. If the leaving arc differs from the entering arc, then more extensive changes are needed. In this instance, the arc (p, q) becomes a nonbasic arc at its lower or upper bound, depending upon whether x_pq = 0 or x_pq = u_pq. Adding the arc (k, l) to, and deleting the arc (p, q) from, the previous basis yields a new basis that is again a spanning tree. The node potentials also change, and can be updated as follows. The deletion of the arc (p, q) from the previous basis partitions the set of nodes into two subtrees: one, T1, containing the root node, and the other, T2, not containing it. Note that the subtree T2 hangs from node p or node q. The arc (k, l) has one endpoint in T1 and the other in T2. As is easy to verify, the conditions π(1) = 0 and c_ij − π(i) + π(j) = 0 for all arcs in the new basis imply that the potentials of the nodes in the subtree T1 remain unchanged, while the potentials of the nodes in the subtree T2 change by a constant amount: if k ∈ T1 and l ∈ T2, then all the node potentials in T2 change by −c̄_kl; if l ∈ T1 and k ∈ T2, they change by the amount c̄_kl. The following method, using the thread and depth indices, updates the node potentials quickly.

procedure UPDATE POTENTIALS;
begin
  if q ∈ T2 then y := q else y := p;
  if k ∈ T1 then change := −c̄_kl else change := c̄_kl;
  π(y) := π(y) + change;
  z := thread(y);
  while depth(z) > depth(y) do
  begin
    π(z) := π(z) + change;
    z := thread(z);
  end;
end;

The final step in the basis exchange is to update the various indices. This step is rather involved, and we refer the reader to the reference material cited in Section 6.4 for the details. We do note, however, that it is possible to update the tree indices in O(n) time.
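In Python, with T1 given as a set of nodes (an illustrative choice of interface), the update reads:

    # Add a constant to the potentials of the subtree T2 rooted at y,
    # visiting T2 by the thread exactly as in the descendant scan.
    def update_potentials(p, q, k, cbar_kl, pi, thread, depth, T1):
        y = q if q not in T1 else p            # root of subtree T2
        change = -cbar_kl if k in T1 else cbar_kl
        pi[y] += change
        z = thread[y]
        while depth[z] > depth[y]:
            pi[z] += change
            z = thread[z]
        return pi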

Termination

The network simplex algorithm, as just described, moves from one basis structure to another until it obtains a basis structure that satisfies the optimality conditions (5.9)-(5.11). It is easy to show that the algorithm terminates in a finite number of steps if each pivot operation is nondegenerate. Recall that |c̄_kl| represents the net decrease in the cost per unit of flow sent around the cycle W. During a nondegenerate pivot (in which δ > 0), the new basis structure has a cost that is δ|c̄_kl| units lower than the previous basis structure. Since there are a finite number of basis structures and every basis structure has a unique associated cost, the network simplex algorithm will terminate finitely, assuming nondegeneracy. Degenerate pivots, however, pose theoretical difficulties that we address next.

Strongly Feasible Bases

The network simplex algorithm does not necessarily terminate in a finite number of iterations unless we impose an additional restriction on the choice of entering and leaving arcs. Researchers have constructed very small network examples for which poor choices lead to cycling, i.e., an infinite repetitive sequence of degenerate pivots. Degeneracy in network problems is not only a theoretical issue, but a practical one as well. Computational studies have shown that as many as 90% of the pivot operations in common networks can be degenerate. As we show next, by maintaining a special type of basis, called a strongly feasible basis, the simplex algorithm terminates finitely; moreover, it runs faster in practice as well.

As earlier, we conceive of a basis tree as a tree hanging from the root node. The tree arcs either are upward pointing (towards the root) or are downward pointing (away from the root). We say that a basis structure (B, L, U) is strongly feasible if we can send a positive amount of flow from any node in the tree to the root along the tree arcs without violating any of the flow bounds. See Figure 5.2 for an example of a strongly feasible basis. Observe that this definition implies that no upward pointing arc can be at its upper bound and no downward pointing arc can be at its lower bound.

The perturbation technique is a well-known method for avoiding cycling in the simplex algorithm for linear programming. This technique slightly perturbs the right-hand-side vector so that every feasible basis is nondegenerate, and so that it is easy to convert an optimum solution of the perturbed problem into an optimum solution of the original problem. We show that a particular perturbation technique for the network simplex method is equivalent to the combinatorial rule known as the strongly feasible basis technique.

Let (B, L, U) be a basis structure of the minimum cost flow problem with integral data. The problem can be perturbed by changing the supply/demand vector b to b + ε. We say that ε = (ε_1, ε_2, ..., ε_n) is a feasible perturbation if it satisfies the following conditions:

(i) ε_i > 0 for all i = 2, 3, ..., n;

(ii) Σ_{i=2}^{n} ε_i < 1; and

If (i. Theorem For any basis structure U) of the minimum cost flow problem. If (i. 1/n. perturbation increases the flow on an upward pointing arc by an amount between than its and 1. the following statements are equivalent: (i) (B. if we by b+e.. perturbed the problem.- Since < X < rXi) k € CKi) 1. j) is at its upper bound. The perturbation changes the flow on for the basic arcs. 1/n). is nonintegral and thus nonzero. (B.. j) is a downward pointing arc of tree B and D(j) is the set of descendants of node Ei. Then node basis. j. Suppose true. Similar reasoning shows that after we have downward pointing arcs also remain feeisible. The procedure we gave Compute-Flows. is (ii) No upward the basis (B. . . then the perturbation decreases the flow in arc the resulting flow (i. = 1/n with for i = 2.. U) is strongly feasible. is feasible if we replace b by b+e. then the perturbation increases the flow the resulting flow 5. i cannot send any flow to the root. L. U) is feasible . One E| possible choice for a feasible perturbation ). is X Ew- Since < keD(j) Z < nonintegral and thus nonzero. L. . . j) is an upward pointing arc of tree B and in arc D(i) is the set of descendants of node El..2. o chosen as a very small justification positive number..l)/n Another choice is Ej = a* for i = 2. n. As noted strictly earlier. i. implies that perturbation of b by e changes the flow on basic arcs in the following maimer: 1. earlier in this section. 2. (i. Since the flow on an upward pointing arc is integral and strictly less (integral) upp>er bound.. for any feasible perturbation e replace b (iv) (B. L. the perturbed solution remains feasible. n (and thus = -{n . no dov^mward pointing arc can be that (ii) is at its lower bound. pointing arc of the basis bound. (ii) (iii). j) by k€D(j) 1. j) by k€ X El. 120 r (iii) El = i L ^^ = 2 is Cj . Suppose an upward pointing arc (i. violating the definition of a strongly feasible the For same =^ reason. is at its upper bound and no downward pointing arc of at its lower (iii) U) L. . 2/n. (i) ^ (ii). for the perturbation e = (-(n-l)/n. Proof..

(iii) ⇒ (iv). Follows directly, because ε = (−(n−1)/n, 1/n, 1/n, ..., 1/n) is a feasible perturbation.

(iv) ⇒ (i). Consider the feasible basis structure (B, L, U) of the problem perturbed by ε = (−(n−1)/n, 1/n, ..., 1/n). Each arc in the basis B has a positive nonintegral flow. Consider the same basis tree for the original problem (i.e., replace b+ε by b). Removing the perturbation decreases the flows on the upward pointing arcs and increases the flows on the downward pointing arcs, and the resulting flows are integral. Consequently, x_ij < u_ij for the upward pointing arcs and x_ij > 0 for the downward pointing arcs, and (B, L, U) is strongly feasible for the original problem.

This theorem shows that maintaining a strongly feasible basis is equivalent to applying the ordinary simplex algorithm to the perturbed problem. This equivalence implies that both approaches obtain exactly the same sequence of basis structures if they use the same rule to select the entering arcs. As a corollary, it also shows that any implementation of the simplex algorithm that maintains a strongly feasible basis performs at most nmCU pivots. To establish this result, consider the perturbed problem with the perturbation ε = (−(n−1)/n, 1/n, ..., 1/n). With this perturbation, the flow on every arc is a multiple of 1/n. Consequently, every pivot operation augments at least 1/n units of flow and therefore decreases the objective function value by at least 1/n units. Since mCU is an upper bound on the objective function value of the starting solution and zero is a lower bound on the minimum objective function value, the algorithm will terminate in at most nmCU iterations. Therefore, any implementation of the simplex algorithm that maintains a strongly feasible basis runs in pseudopolynomial time.

Combinatorial Version of Perturbation

The network simplex algorithm starts with a strongly feasible basis; the method described earlier to construct the initial basis always gives such a basis. However, there is no need to actually perform the perturbation. Instead, we can maintain strong feasibility using a "combinatorial rule" that is equivalent to applying the original simplex method after we have imposed the perturbation. Even though this rule permits degenerate pivots, it is guaranteed to converge: the algorithm selects the leaving arc in a degenerate pivot carefully so that the next basis is also strongly feasible. Figure 5.2 will illustrate our discussion of this method.
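Statement (ii) of the theorem gives a particularly easy computational test for strong feasibility. The following Python sketch (with an illustrative data layout, not the paper's) applies it to a basis tree stored by parent pointers, where x[i] and u[i] denote the flow and capacity of the tree arc incident to node i:

    # Test statement (ii): no upward arc at its upper bound,
    # no downward arc at its lower bound.
    def is_strongly_feasible(nodes, parent, points_up, x, u):
        for i in nodes:
            if parent[i] is None:                 # the root has no tree arc
                continue
            if points_up[i] and x[i] == u[i]:
                return False                      # upward pointing arc at upper bound
            if not points_up[i] and x[i] == 0:
                return False                      # downward pointing arc at lower bound
        return True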

Suppose that the entering arc (k, l) is at its lower bound and that the apex w is the common ancestor of nodes k and l in the basis tree. Let W be the cycle formed by adding arc (k, l) to the basis tree; we define the orientation of the cycle as the same as that of arc (k, l). After updating the flow, the algorithm identifies the blocking arcs, i.e., those arcs (i, j) in W that satisfy δ_ij = δ. If the blocking arc is unique, then it leaves the basis. If the cycle contains more than one blocking arc, then the next basis will be degenerate, i.e., some basic arcs will be at their lower or upper bounds. In this case, the algorithm selects the leaving arc in accordance with the following rule:

Combinatorial Pivot Rule. When introducing an arc into the basis for the network simplex method, select the leaving arc as the last blocking arc, say arc (p, q), encountered in traversing the pivot cycle W along its orientation, starting at the apex w.

We now show that this rule guarantees that the next basis is strongly feasible. Let W1 be the segment of the cycle W between the apex w and arc (p, q) when we traverse the cycle along its orientation, and let W2 = W − W1 − {(p, q)}. Define the orientation of the segments W1 and W2 to be compatible with the orientation of W. See Figure 5.2 for an illustration of the segments W1 and W2 of our example. To show that the next basis is strongly feasible, we show that in this basis every node in the cycle W can send positive flow to the root node.

Since arc (p, q) is the last blocking arc in W, no arc in W2 is blocking; hence, every node contained in the segment W2 can send positive flow to the root along the orientation of W2 and via node w. Now consider the nodes contained in the segment W1. We distinguish two cases. If the current pivot was a nondegenerate pivot, then the pivot augmented a positive amount of flow along the arcs in W1; hence, after the pivot, every node in W1 can augment flow back to the root opposite to the orientation of W1 and via node w. If the current pivot was a degenerate pivot, then W1 must be contained in the segment of W between node w and node k, because, by the property of strong feasibility, every node on the path from node l to node w can send a positive amount of flow to the root before the pivot, and hence no arc on this path can be a blocking arc in a degenerate pivot. Now observe that before the pivot, every node in W1 could send positive flow to the root; since a degenerate pivot does not change any flow values, every node in W1 must be able to send positive flow to the root after the pivot as well. This conclusion completes the proof that the next basis is strongly feasible.
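In code, the combinatorial pivot rule is a one-pass scan of the cycle. The following Python sketch assumes an illustrative representation in which the pivot cycle is given as a list of (arc, δ_ij) pairs in the order encountered when traversing W along its orientation starting at the apex w:

    # Select the leaving arc: the LAST blocking arc in traversal order.
    def select_leaving_arc(cycle_in_orientation_order):
        delta = min(d for (arc, d) in cycle_in_orientation_order)
        blocking = [arc for (arc, d) in cycle_in_orientation_order if d == delta]
        return blocking[-1]   # keeps the next basis strongly feasible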

. 1/n. far we have assumed that the entering arc is at its lower bound. /). with H defined as e H = mCU. violation. we can reduce the number of pivots 0(nmU log H). this degenerate pivot strictly increases the sum of all node potentials (which by our prior potentials is assumptions the is integral). We have already shown that any version of the network simplex algorithm that maintairis a strongly feasible basis performs O(nmCU) pivots. then we define the orientation of the cycle (k.c^j > 0. in In this case. Since the sum of all node bounded from below. earlier.^k+l^^/n (513) We now need an upper bound on the It is total possible that improvement in the objective function after the k-th iteration. the network simplex algorithm implemented using Dantzig's pivot rule. then the objective function value decreases by at least A/n units. /) with the largest This technique value of I Cj^j | among all arcs that violate the optimality conditions). If the entering arc (k. 1/n. L. node / is contained in the subtree T2 and. . we consider the perturbed z*^ problem with perturbation function value = (-(n-l)/n. pivoting in the arc (k. the pivot again increases the sum node potentials.. U) denote the current basis Let arc. W as opposite to its the orientation of arc The criteria to select the leaving arc remaii\s unchanged-the leaving arc starting at is the Icist blocking arc encountered in traversing W along orientation node w. /) is at its upper bound.123 the potentials of all nodes in T2 change by the amount . easy to show . after the Cj^^j . also yields polynomial lime simplex algorithms for the shortest path and assignment problems. thus. x denote the current flow. As . Using Dantzig's pivot rule to and geometric improvement arguments. Complexity Results The strongly feasible basis technique implies some nice theoretical results about i. the arc that most violates the optimality conditions (that is. number So of successive degenerate pivots is finite.e.. pivot all nodes T2 again increase by the amount of the consequently. ^k. A > If denote the maximum violation of the optimality condition of any nonbasic the algorithm next pivots in a nonbasic arc corresponding to the maximum Hence. at the k-th iteration of the simplex algorithm. 1/n). and structure. Consequently. Let denote the objective of the perturbed minimum cost flow problem (B.

Figure 5.2. A strongly feasible basis. The figure shows the flows and capacities represented as (x_ij, u_ij). The entering arc is (9, 10); the blocking arcs are (2, 3) and (7, 5); and the leaving arc is (7, 5). This pivot is a degenerate pivot. The apex is node w, and the segments W1 and W2 are as shown.

It is easy to show that the total improvement with respect to the objective function Σ_{(i,j)∈A} c_ij x_ij equals the total improvement with respect to the objective function Σ_{(i,j)∈A} c̄_ij x_ij, since these two objectives differ only by the term Σ_{i∈N} π(i) b(i), which is a constant for fixed values of the node potentials. Further, the total improvement with respect to Σ_{(i,j)∈A} c̄_ij x_ij is bounded by the total improvement in the following relaxed problem:

minimize Σ_{(i,j)∈A} c̄_ij x_ij   (5.14a)

subject to

0 ≤ x_ij ≤ u_ij, for all (i, j) ∈ A.   (5.14b)

For a given basis structure (B, L, U), we can construct an optimum solution of (5.14) by setting x_ij = u_ij for all arcs (i, j) ∈ L with c̄_ij < 0, by setting x_ij = 0 for all arcs (i, j) ∈ U with c̄_ij > 0, and by leaving the flow on the basic arcs unchanged. This readjustment of flow decreases the objective function by at most mΔU. We have thus shown that

z^k − z* ≤ mΔU.   (5.15)

Combining (5.13) and (5.15), we obtain

z^k − z^{k+1} ≥ (z^k − z*)/(nmU).

By Lemma 1.1, if H = mCU, the network simplex algorithm terminates in O(nmU log H) iterations. We summarize our discussion as follows.

Theorem 5.3. The network simplex algorithm that maintains a strongly feasible basis and uses Dantzig's pivot rule performs O(nmU log H) pivots.
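The arithmetic behind Theorem 5.3 is the standard geometric improvement argument; the following display (our summary, not a quotation from the paper) strings (5.13) and (5.15) together:

    \[
    z^k - z^{k+1} \;\ge\; \frac{\Delta}{n} \;\ge\; \frac{z^k - z^*}{nmU},
    \]

so each pivot under Dantzig's rule removes at least the fraction 1/(nmU) of the remaining optimality gap. The initial gap is at most H = mCU, and in the perturbed problem every nonzero gap is at least 1/n; hence the gap can shrink by a constant factor only O(log H) times, which gives the O(nmU log H) bound on the number of pivots.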

This result gives polynomial time bounds for the shortest path and assignment problems, since both can be formulated as minimum cost flow problems with U = n and U = 1 respectively. In fact, it is possible to modify the algorithm and use the previous arguments to show that the simplex algorithm solves these problems in O(n² log C) pivots and runs in O(nm log C) total time. These results can be found in the references cited in Section 6.4.

5.7 Right-Hand-Side Scaling Algorithm

Scaling techniques are among the most effective algorithmic strategies for designing polynomial time algorithms for the minimum cost flow problem. In this section, we describe an algorithm based on a right-hand-side scaling (RHS-scaling) technique. The next two sections present polynomial time algorithms based upon cost scaling and upon simultaneous right-hand-side and cost scaling.

The RHS-scaling algorithm is an improved version of the successive shortest path algorithm. The inherent drawback of the successive shortest path algorithm is that augmentations may carry relatively small amounts of flow, resulting in a fairly large number of augmentations in the worst case. The RHS-scaling algorithm guarantees that each augmentation carries sufficiently large flow and thereby reduces the number of augmentations substantially. We shall illustrate RHS-scaling on the uncapacitated minimum cost flow problem, i.e., a problem with u_ij = ∞ for each (i, j) ∈ A. This algorithm can be applied to the capacitated minimum cost flow problem after it has been converted into an uncapacitated problem (as described in Section 2.4).

The algorithm uses the pseudoflow x and the imbalances e(i) as defined in Section 5.4. It performs a number of scaling phases. Much as we did in the excess scaling algorithm for the maximum flow problem, we let Δ be either 2^⌈log U⌉ or the least power of 2 satisfying (i) e(i) < 2Δ for all i, or (ii) e(i) > −2Δ for all i, but not necessarily both. This definition implies that the sum of the excesses (whose magnitude is equal to the sum of the deficits) is bounded by 2nΔ. Initially, Δ = 2^⌈log U⌉. Let S(Δ) = { i : e(i) ≥ Δ } and let T(Δ) = { j : e(j) ≤ −Δ }. Then at the beginning of the Δ-scaling phase, either S(2Δ) = ∅ or T(2Δ) = ∅. In the Δ-scaling phase, we perform a number of augmentations, each from a node k ∈ S(Δ) to a node l ∈ T(Δ), and each of these augmentations carries Δ units of flow. The definition of Δ implies that within n augmentations the algorithm will decrease Δ by a factor of at least 2; at this point, we begin a new scaling phase. Hence, within O(log U) scaling phases, Δ < 1. By the integrality of the data, all imbalances are then zero and the algorithm has found an optimum flow.

The driving force behind this scaling technique is an invariant property (which we will prove later) that each arc flow in the Δ-scaling phase is a multiple of Δ. This flow invariant property and the connectedness assumption (A5.2) ensure that we can always send Δ units of flow from a node in S(Δ) to a node in T(Δ). The following algorithmic description is a formal statement of the RHS-scaling algorithm.

algorithm RHS-SCALING;
begin
  x := 0; e := b;
  let π be the shortest path distances in G(0);
  Δ := 2^⌈log U⌉;
  while the network contains a node with nonzero imbalance do
  begin
    S(Δ) := { i ∈ N : e(i) ≥ Δ };
    T(Δ) := { i ∈ N : e(i) ≤ −Δ };
    while S(Δ) ≠ ∅ and T(Δ) ≠ ∅ do
    begin
      select a node k ∈ S(Δ) and a node l ∈ T(Δ);
      determine the shortest path distances d from node k to all other nodes in the residual network G(x) with respect to the reduced costs c̄_ij;
      let P denote the shortest path from node k to node l;
      update π := π − d;
      augment Δ units of flow along the path P;
      update x, S(Δ) and T(Δ);
    end;
    Δ := Δ/2;
  end;
end;

The RHS-scaling algorithm correctly solves the problem because during the Δ-scaling phase it is always able to send Δ units of flow on the shortest path from a node k ∈ S(Δ) to a node l ∈ T(Δ).
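The outer structure of the algorithm is compact enough to state in a few lines of code. The following Python skeleton is a sketch under stated assumptions: the helper shortest_path(k, pi) is hypothetical and is assumed to return reduced-cost distances d from node k together with a shortest path P to some node of T(Δ), and augment(P, delta) is assumed to update the flow and the imbalance map e.

    import math

    # Skeleton of RHS-SCALING (illustrative, not the paper's full statement).
    def rhs_scaling(nodes, e, U, shortest_path, augment):
        pi = {i: 0 for i in nodes}      # sketch: zero initial potentials
        delta = 2 ** max(0, math.ceil(math.log2(max(U, 1))))
        while delta >= 1:
            S = {i for i in nodes if e[i] >= delta}
            T = {i for i in nodes if e[i] <= -delta}
            while S and T:              # each augmentation carries delta units
                k = next(iter(S))
                d, P = shortest_path(k, pi)
                for i in nodes:
                    pi[i] -= d[i]       # pi := pi - d keeps reduced costs >= 0
                augment(P, delta)
                S = {i for i in nodes if e[i] >= delta}
                T = {i for i in nodes if e[i] <= -delta}
            delta //= 2
        return pi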

This fact follows from the following result.

Lemma 5.2. The residual capacities of arcs in the residual network are always integer multiples of Δ.

Proof. We use induction on the number of augmentations and scaling phases. The inductive hypothesis is true initially, since the residual capacities are either 0 or ∞. Each augmentation changes the residual capacities by 0 or Δ units and hence preserves the inductive hypothesis. A decrease in the scale factor by a factor of 2 also preserves the inductive hypothesis, since every multiple of Δ is a multiple of Δ/2. This result implies the conclusion of the lemma.

Theorem 5.4. The RHS-scaling algorithm correctly computes a minimum cost flow and performs O(n log U) augmentations; consequently, it solves the minimum cost flow problem in O(n log U · S(n, m, C)) time, where S(n, m, C) denotes the time to solve a shortest path problem on a network with nonnegative arc lengths.

Proof. The RHS-scaling algorithm is a special case of the successive shortest path algorithm and thus terminates with a minimum cost flow. We show that the algorithm performs at most n augmentations per scaling phase. At the beginning of the Δ-scaling phase, either S(2Δ) = ∅ or T(2Δ) = ∅. We consider the case when S(2Δ) = ∅; a similar proof applies when T(2Δ) = ∅. In this case, Δ ≤ e(i) < 2Δ for each node i ∈ S(Δ). Each augmentation starts at a node in S(Δ), ends at a node with a deficit, and carries Δ units of flow; therefore, it decreases |S(Δ)| by one. Consequently, each scaling phase can perform at most n augmentations. Since the algorithm requires 1 + ⌈log U⌉ scaling phases, it performs O(n log U) augmentations in all.

As we noted previously, one method of solving the capacitated minimum cost flow problem is to first transform the capacitated problem to an uncapacitated one using the technique described in Section 2.4 and then apply the RHS-scaling algorithm to the transformed network. Applying the scaling algorithm directly to the capacitated minimum cost flow problem introduces some subtlety, because Lemma 5.2 does not apply to this situation. The transformed network contains n+m nodes, and each scaling phase performs at most n+m augmentations. The shortest path problems on the transformed network can be solved (using some clever techniques) in S(n, m, C) time. Consequently, the RHS-scaling algorithm solves the capacitated minimum cost flow problem in O(m log U · S(n, m, C)) time. A recently developed modest variation of the RHS-scaling algorithm solves the capacitated minimum cost flow problem in O(m log n (m + n log n)) time. This method is currently the best strongly polynomial-time algorithm for solving the minimum cost flow problem.

5.8 Cost Scaling Algorithm

We now describe a cost scaling algorithm for the minimum cost flow problem. This algorithm can be viewed as a generalization of the preflow-push algorithm for the maximum flow problem.

This algorithm relies on the concept of approximate optimality, which is a relaxation of the usual optimality conditions. A flow x is said to be ε-optimal for some ε > 0 if x, together with some node potentials π, satisfies the following conditions:

C5.7 (Primal feasibility) x is feasible.

C5.8 (ε-Dual feasibility) c̄_ij ≥ −ε for each arc (i, j) in the residual network G(x).

We refer to these conditions as the ε-optimality conditions. They are a relaxation of the original optimality conditions and reduce to C5.5 and C5.6 when ε is 0. The ε-optimality conditions permit −ε ≤ c̄_ij < 0 for an arc (i, j) at its lower bound and 0 < c̄_ij ≤ ε for an arc (i, j) at its upper bound. Since arc costs are integral, any feasible flow with zero node potentials satisfies C5.8 for ε ≥ C; hence, any feasible flow is ε-optimal for ε ≥ C. The following facts are useful for analyzing the cost scaling algorithm.

Lemma 5.3. Any ε-optimal feasible flow for ε < 1/n is an optimum flow.

Proof. Consider an ε-optimal flow with ε < 1/n. The ε-dual feasibility conditions imply that for any directed cycle W in the residual network, Σ_{(i,j)∈W} c̄_ij ≥ −nε > −1. Since the node potentials cancel around a cycle, Σ_{(i,j)∈W} c̄_ij = Σ_{(i,j)∈W} c_ij, and since arc costs are integral, this result implies that Σ_{(i,j)∈W} c_ij ≥ 0. Hence, the residual network contains no negative cost cycle and, from Theorem 5.1, the flow is optimum.

The cost scaling algorithm treats ε as a parameter and iteratively obtains ε-optimal flows for successively smaller values of ε. Initially ε = C, and finally ε < 1/n. The algorithm performs cost scaling phases by repeatedly applying an Improve-Approximation procedure that transforms an ε-optimal flow into an ε/2-optimal flow.
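Conditions C5.7 and C5.8 are simple to verify computationally. The following Python sketch (with illustrative names, not from the paper) checks ε-dual feasibility on a network whose arcs are given as (i, j, cost, capacity) tuples, with the flow stored in a dictionary x:

    # Check condition C5.8 for flow x with potentials pi.
    def is_eps_optimal(arcs, x, pi, eps):
        for (i, j, cost, cap) in arcs:
            red = cost - pi[i] + pi[j]          # reduced cost of (i, j)
            if x[(i, j)] < cap and red < -eps:  # forward residual arc violates C5.8
                return False
            if x[(i, j)] > 0 and -red < -eps:   # reverse residual arc (j, i)
                return False
        return True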

After 1 + ⌈log nC⌉ cost scaling phases, ε < 1/n and the algorithm terminates with an optimum flow. More formally, we can state the algorithm as follows.

algorithm COST SCALING;
begin
  π := 0 and ε := C;
  let x be any feasible flow;
  while ε ≥ 1/n do
  begin
    IMPROVE-APPROXIMATION-I(ε, x, π);
    ε := ε/2;
  end;
  x is an optimum flow for the minimum cost flow problem;
end;

The Improve-Approximation procedure transforms an ε-optimal flow into an ε/2-optimal flow. It does so by (i) first converting the ε-optimal flow into a 0-optimal pseudoflow (a pseudoflow x is called ε-optimal if it satisfies the ε-dual feasibility conditions C5.8), and then (ii) gradually converting the pseudoflow into a flow while always maintaining the ε/2-dual feasibility conditions. We call a node i with e(i) > 0 active, and we call an arc (i, j) in the residual network admissible if −ε/2 ≤ c̄_ij < 0. The basic operations are selecting active nodes and pushing flows on admissible arcs; we shall see later that pushing flows on admissible arcs preserves the ε/2-dual feasibility conditions. Recall that r_ij denotes the residual capacity of an arc (i, j) in G(x). The procedure uses the following subroutine.

procedure PUSH/RELABEL(i);
begin
  if G(x) contains an admissible arc (i, j)
  then push δ := min { e(i), r_ij } units of flow from node i to node j
  else π(i) := π(i) + ε/2 + min { c̄_ij : (i, j) ∈ A(i) and r_ij > 0 };
end;

As in our earlier discussion of preflow-push algorithms for the maximum flow problem, if δ = r_ij, then we refer to the push as saturating; otherwise it is nonsaturating. We also refer to the updating of the potential of a node as a relabel operation. The purpose of a relabel operation is to create new admissible arcs. Moreover, we use the same data structure

as used in the maximum flow algorithms to identify admissible arcs. For each node i, we maintain a current arc, which is the current candidate for pushing flow out of node i. The current arc is found by sequentially scanning the arc list A(i). The following generic version of the Improve-Approximation procedure summarizes its essential operations.

procedure IMPROVE-APPROXIMATION-I(ε, x, π);
begin
  for every arc (i, j) ∈ A do
    if c̄_ij > 0 then x_ij := 0
    else if c̄_ij < 0 then x_ij := u_ij;
  compute node imbalances;
  while the network contains an active node do
  begin
    select an active node i;
    PUSH/RELABEL(i);
  end;
end;

At the beginning of the procedure, the algorithm adjusts the flows on arcs to obtain an ε/2-optimal pseudoflow (in fact, it is a 0-optimal pseudoflow). The correctness of this procedure rests on the following result.

Lemma 5.4. The Improve-Approximation procedure always maintains ε/2-optimality of the pseudoflow and, at termination, yields an ε/2-optimal flow.

Proof. This proof is similar to that of Lemma 4.1. We use induction on the number of push/relabel steps to show that the algorithm preserves ε/2-optimality of the pseudoflow. At the beginning of the procedure, the pseudoflow is 0-optimal. Pushing flow on an arc (i, j) might add its reversal (j, i) to the residual network; but since −ε/2 ≤ c̄_ij < 0 (by the criteria of admissibility), c̄_ji > 0, and the condition C5.8 is satisfied for any value of ε > 0. The algorithm relabels node i when c̄_ij ≥ 0 for every arc (i, j) in the residual network. By our rule for increasing potentials, after we increase π(i) by ε/2 + min { c̄_ij : (i, j) ∈ A(i) and r_ij > 0 } units, the reduced cost of every arc (i, j) with r_ij > 0 still satisfies c̄_ij ≥ −ε/2. In addition, increasing π(i) maintains the condition c̄_ki ≥ −ε/2 for all arcs (k, i) in the residual network. Therefore, the procedure preserves ε/2-optimality of the pseudoflow throughout and, at termination, yields an ε/2-optimal flow.
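The generic procedure is short enough to express directly in code. The following Python sketch (illustrative names and data layout, not the paper's; it assumes no pair of antiparallel arcs and that every active node retains at least one residual arc) carries out step (i) and then the push/relabel loop:

    # Compact sketch of IMPROVE-APPROXIMATION-I; arcs maps (i, j) -> (cost, cap),
    # x maps (i, j) -> flow, b maps node -> supply/demand.
    def improve_approximation(nodes, arcs, x, pi, eps, b):
        def rcost(i, j):   # reduced cost of residual arc (i, j)
            return (arcs[(i, j)][0] - pi[i] + pi[j]) if (i, j) in arcs \
                   else -(arcs[(j, i)][0] - pi[j] + pi[i])
        def rcap(i, j):    # residual capacity of (i, j)
            return arcs[(i, j)][1] - x[(i, j)] if (i, j) in arcs else x[(j, i)]
        def residual(i):
            return [j for j in nodes
                    if ((i, j) in arcs or (j, i) in arcs) and rcap(i, j) > 0]

        # step (i): make the pseudoflow 0-optimal
        for (i, j), (cost, cap) in arcs.items():
            x[(i, j)] = cap if cost - pi[i] + pi[j] < 0 else 0
        e = dict(b)
        for (i, j), flow in x.items():
            e[i] -= flow; e[j] += flow

        # step (ii): push/relabel until no active node remains
        while True:
            i = next((v for v in nodes if e[v] > 0), None)
            if i is None:
                return x, pi
            j = next((w for w in residual(i) if -eps / 2 <= rcost(i, w) < 0), None)
            if j is None:   # relabel: smallest residual reduced cost becomes -eps/2
                pi[i] += eps / 2 + min(rcost(i, w) for w in residual(i))
            else:           # push (saturating when delta equals rcap(i, j))
                delta = min(e[i], rcap(i, j))
                if (i, j) in arcs: x[(i, j)] += delta
                else: x[(j, i)] -= delta
                e[i] -= delta; e[j] += delta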

We will next analyze the complexity of the Improve-Approximation procedure. We show that the complexity of the generic version is O(n²m) and then describe a specialized version running in time O(n³). These time bounds are comparable to those of the preflow-push algorithms for the maximum flow problem.

Lemma 5.5. No node potential increases more than 3n times during an execution of the Improve-Approximation procedure.

Proof. Let x be the current ε/2-optimal pseudoflow and x' be the ε-optimal flow at the end of the previous cost scaling phase. Let π and π' be the node potentials corresponding to the pseudoflow x and the flow x' respectively. It is possible to show, using a variation of the flow decomposition properties discussed in Section 2.1, that for every node v with positive imbalance in x there exists a node w with negative imbalance in x and a path P satisfying the properties that (i) P is an augmenting path with respect to x, and (ii) its reversal P̄ is an augmenting path with respect to x'. This fact, in terms of the residual networks, implies that there exists a sequence of nodes v = v0, v1, ..., vl = w with the property that P = v0 - v1 - ... - vl is a path in G(x) and its reversal P̄ = vl - ... - v1 - v0 is a path in G(x').

Applying the ε/2-optimality conditions to the arcs on the path P in G(x), we obtain Σ_{(i,j)∈P} c̄_ij ≥ −l(ε/2). Alternatively, expanding the reduced costs,

π(v) ≤ π(w) + l(ε/2) + Σ_{(i,j)∈P} c_ij.   (5.16)

Applying the ε-optimality conditions to the arcs on the path P̄ in G(x'), we obtain Σ_{(j,i)∈P̄} c̄'_ji ≥ −lε. Alternatively,

π'(w) ≤ π'(v) + lε + Σ_{(j,i)∈P̄} c_ji = π'(v) + lε − Σ_{(i,j)∈P} c_ij.   (5.17)

Combining (5.16) and (5.17) gives

π(v) ≤ π'(v) + (π(w) − π'(w)) + (3/2)lε.   (5.18)

Now we use the facts that (i) π(w) = π'(w) (the potential of a node with a negative imbalance does not change because the algorithm never selects it for push/relabel), (ii) l ≤ n, and (iii) each increase in potential increases π(v) by at least ε/2 units. The lemma is now immediate.

Lemma 5.6. The Improve-Approximation procedure performs O(nm) saturating pushes.

Proof. This proof is similar to that of Lemma 4.5 and essentially amounts to showing that between two consecutive saturations of an arc (i, j), the potentials of both the nodes i and j increase at least once. Since any node potential increases O(n) times, the algorithm also saturates any arc O(n) times, resulting in O(nm) total saturating pushes.

To bound the number of nonsaturating pushes, we need one more result. We define the admissible network as the network consisting solely of admissible arcs. The following result is crucial to analyzing the complexity of the cost scaling algorithms.

Lemma 5.7. The admissible network is acyclic throughout the cost scaling algorithms.

Proof. We establish this result by an induction argument applied to the number of pushes and relabels. The result is true at the beginning of each cost scaling phase because the pseudoflow is 0-optimal and the network contains no admissible arc. We always push flow on an arc (i, j) with c̄_ij < 0; hence, if the algorithm adds its reversal (j, i) to the residual network, then c̄_ji > 0. Thus pushes do not create new admissible arcs and preserve the inductive hypothesis. A relabel operation at node i may create new admissible arcs (i, j), but it also deletes all admissible arcs (k, i), because for any arc (k, i), c̄_ki ≥ −ε/2 before the relabel operation and c̄_ki ≥ 0 after the relabel operation, since the relabel operation increases π(i) by at least ε/2 units. Therefore the algorithm can create no directed cycles.

Lemma 5.8. The Improve-Approximation procedure performs O(n²m) nonsaturating pushes.

Proof (Sketch). Let g(i) be the number of nodes that are reachable from node i in the admissible network, and consider the potential function F = Σ_{i active} g(i). The proof amounts to showing that a relabel operation or a saturating push can increase F by at most n units, and that each nonsaturating push decreases F by at least 1 unit. Since the algorithm performs at most 3n² relabel operations and O(nm) saturating pushes, by Lemmas 5.5 and 5.6, these observations yield a bound of O(n²m) on the number of nonsaturating pushes.

As in the maximum flow algorithm, the bottleneck operation in the Improve-Approximation procedure is the O(n²m) nonsaturating pushes, which take O(n²m) time. The algorithm takes O(nm) time to perform saturating pushes, and the same time to scan arcs while identifying admissible arcs. Since the cost scaling algorithm calls Improve-Approximation 1 + ⌈log nC⌉ times, we obtain the following result.

Theorem 5.5. The generic cost scaling algorithm runs in O(n²m log nC) time.

The cost scaling algorithm illustrates an important connection between the maximum flow and the minimum cost flow problems: solving an Improve-Approximation problem is very similar to solving a maximum flow problem. Just as in the generic preflow-push algorithm for the maximum flow problem, the bottleneck operation is the number of nonsaturating pushes. Researchers have suggested improvements based on examining nodes in some specific order, or using some clever data structures. We describe one such improvement, called the wave algorithm.

The wave algorithm is the same as the Improve-Approximation procedure, but it selects active nodes for the push/relabel step in a specific order. The algorithm uses the acyclicity of the admissible network. As is well known, the nodes of an acyclic network can be ordered, in a so-called topological ordering, so that for each arc (i, j) in the network, node i precedes node j in the order. It is possible to determine this ordering in O(m) time. Observe that pushes do not change the admissible network, since they do not create new admissible arcs. The relabel operations, however, may create new admissible arcs and consequently may affect the topological ordering of nodes.

The wave algorithm examines each node in the topological order and, if the node is active, performs a push/relabel step. When examined in this order, active nodes push flow to higher numbered nodes, which in turn push flow to even higher numbered nodes, and so on. A relabel operation changes the numbering of nodes, and the algorithm again starts to examine the nodes according to the (new) topological order. However, if within n consecutive node examinations the algorithm performs no relabel operation, then all active nodes have discharged their excesses and the algorithm obtains a flow. Since the algorithm requires O(n²) relabel operations, we immediately obtain a bound of O(n³) on the number of node examinations. Each node examination entails at most one nonsaturating push. Consequently, the wave algorithm performs O(n³) nonsaturating pushes per Improve-Approximation.

We now describe a procedure for obtaining a topological order of nodes after each relabel operation. An initial topological ordering of the admissible network is determined using an O(m) algorithm.
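In outline, the wave examination is a single scan of a maintained node order, restarted whenever a relabel occurs; the justification for the move-to-front rule used here follows in the next paragraph. The following Python sketch uses illustrative names: discharge(i) is assumed to push from node i until its excess vanishes or a relabel occurs, returning True in the latter case.

    def wave_examination(order, excess, discharge):
        # order is a topological order of the admissible network (as a list)
        pos = 0
        while pos < len(order):
            i = order[pos]
            if excess(i) > 0 and discharge(i):    # a relabel occurred at node i
                order.pop(pos)
                order.insert(0, i)                # i now has no incoming admissible arc
                pos = 0                           # restart the scan at node i
            else:
                pos += 1                          # a clean full pass ends the phase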

Suppose that while examining node i, the algorithm relabels it. Note that after the relabel operation at node i, the network contains no incoming admissible arc at node i (see the proof of Lemma 5.7). We then move node i from its present position in the topological order to the first position. Notice that this altered ordering is a topological ordering of the new admissible network, because (i) node i has no incoming admissible arc; (ii) node i precedes node j in the order for each outgoing admissible arc (i, j); and (iii) the rest of the admissible network does not change, and so the previous order remains valid for it. Thus the algorithm maintains an ordered set of nodes (possibly as a doubly linked list) and examines nodes in this order. Whenever it relabels a node i, the algorithm moves node i to the first place in this order and again examines nodes in order starting at node i. We have established the following result.

Theorem 5.6. The cost scaling approach using the wave algorithm as a subroutine solves the minimum cost flow problem in O(n³ log nC) time.

5.9 Double Scaling Algorithm

The double scaling approach combines ideas from both the RHS-scaling and cost scaling algorithms and obtains an improvement not obtained by either algorithm alone. For the sake of simplicity, we shall describe the double scaling algorithm on the uncapacitated transportation network G = (N1 ∪ N2, A), with N1 and N2 as the sets of supply and demand nodes respectively. A capacitated minimum cost flow problem can be solved by first transforming the problem into an uncapacitated transportation problem (as described in Section 2.4) and then applying the double scaling algorithm.

The double scaling algorithm is the same as the cost scaling algorithm discussed in the previous section, except that it uses a more efficient version of the Improve-Approximation procedure. The Improve-Approximation procedure in the previous section relied on a "pseudoflow-push" method; a natural alternative would be an augmenting path based method. This approach would send flow from a node with excess to a node with deficit over an admissible path, i.e., a path in which each arc is admissible. A natural implementation of this approach would result in O(nm) augmentations, since each augmentation would saturate at least one arc and, by Lemma 5.6, the algorithm requires O(nm) arc saturations. Thus, this approach does not seem to improve the O(n²m) bound of the generic Improve-Approximation procedure.

We can, however, use ideas from the RHS-scaling algorithm to reduce the number of augmentations to O(n log U) for an uncapacitated problem by ensuring that each augmentation carries sufficiently large flow. This approach gives us an algorithm that does cost scaling in the outer loop and, within each cost scaling phase, performs a number of RHS-scaling phases; we call this algorithm the double scaling algorithm. The advantage of the double scaling algorithm, contrasted with solving a shortest path problem in the RHS-scaling algorithm, is that it identifies an augmenting path in O(n) time on average over a sequence of n augmentations. In fact, in this respect the double scaling algorithm appears to be similar to the shortest augmenting path algorithm for the maximum flow problem, which also requires O(n) time on average to find each augmenting path. The double scaling algorithm uses the following Improve-Approximation procedure.

procedure IMPROVE-APPROXIMATION-II(ε, x, π);
begin
  set x := 0 and compute node imbalances;
  π(j) := π(j) + ε, for all j ∈ N2;
  Δ := 2^⌈log U⌉;
  while the network contains an active node do
  begin
    S(Δ) := { i ∈ N1 ∪ N2 : e(i) ≥ Δ };
    while S(Δ) ≠ ∅ do
    begin   (RHS-scaling phase)
      select a node k in S(Δ) and delete it from S(Δ);
      determine an admissible path P from node k to some node l with e(l) < 0;
      augment Δ units of flow on P and update x;
    end;
    Δ := Δ/2;
  end;
end;

We shall describe a method to determine admissible paths after first commenting on the correctness of this procedure. First, observe that c̄_ij ≥ −ε for all (i, j) ∈ A at the beginning of the procedure; hence, after setting x := 0 and adding ε to π(j) for each j ∈ N2, we obtain an ε/2-optimal (in fact, a 0-optimal) pseudoflow. The procedure always augments flow on admissible arcs and, hence, from Lemma 5.4, this choice preserves the ε/2-optimality of the pseudoflow. Thus, at the termination of the procedure, we obtain an ε/2-optimal flow.

Further, the procedure maintains the invariant property that all residual capacities are integer multiples of Δ, and thus each augmentation can carry Δ units of flow.

The algorithm identifies an admissible path by gradually building the path. We maintain a partial admissible path P using a predecessor index, i.e., if (u, v) ∈ P then pred(v) = u. At any point in the algorithm, we perform one of the following two steps, whichever is applicable, at the last node of P, say node i, terminating when the last node has a deficit:

advance(i). If the residual network contains an admissible arc (i, j), then add (i, j) to P. If e(j) < 0, then stop.

retreat(i). If the residual network does not contain an admissible arc (i, j), then update π(i) to π(i) + ε/2 + min { c̄_ij : (i, j) ∈ A(i) and r_ij > 0 }. If P has at least one arc, then delete the arc (pred(i), i) from P.

The retreat step relabels (increases the potential of) node i for the purpose of creating new admissible arcs emanating from this node; in the process, the arc (pred(i), i) becomes inadmissible, and hence we delete this arc from P, if there is any. The proof of Lemma 5.4 implies that increasing the node potential maintains ε/2-optimality of the pseudoflow.

We next consider the complexity of this implementation of the Improve-Approximation procedure. Each execution of the procedure performs 1 + ⌈log U⌉ RHS-scaling phases. At the beginning of the Δ-scaling phase, S(2Δ) = ∅, i.e., Δ ≤ e(i) < 2Δ for each node i ∈ S(Δ). During the scaling phase, the algorithm augments Δ units of flow from a node k in S(Δ) to a node l with e(l) < 0. This operation reduces the excess at node k to a value less than Δ and ensures that the excess at node l is less than Δ. Consequently, each augmentation deletes a node from S(Δ) and, after at most n augmentations, the method begins a new scaling phase. The algorithm thus performs a total of O(n log U) augmentations.

We next count the number of advance steps. Each advance step adds an arc to the partial admissible path, and each retreat step deletes an arc from it. Thus, there are two types of advance steps: (i) those that add arcs to an admissible path on which the algorithm later performs an augmentation, and (ii) those that are later cancelled by a retreat step. Since the set of admissible arcs is acyclic (by Lemma 5.7), the algorithm will discover an admissible path, and will perform an augmentation, after at most n advance steps of the first type.
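The advance/retreat search is a simple loop in code. The following Python sketch uses illustrative helpers: admissible(i) is assumed to return some node j with −ε/2 ≤ c̄_ij < 0 and r_ij > 0 (or None), and relabel(i) is assumed to apply the retreat step's potential increase.

    # Build an admissible path from an excess node k to some deficit node.
    def find_admissible_path(k, e, admissible, relabel):
        path = [k]                    # partial admissible path, as a node list
        while True:
            i = path[-1]
            j = admissible(i)
            if j is not None:         # advance(i): extend the path
                path.append(j)
                if e[j] < 0:          # reached a node with a deficit
                    return path
            else:                     # retreat(i): relabel and back up
                relabel(i)
                if len(path) > 1:
                    path.pop()        # arc (pred(i), i) is now inadmissible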

Since the algorithm performs a total of O(n log U) augmentations, the number of advance steps of the first type is at most O(n² log U). The number of advance steps of the second type is at most O(n²), because each retreat step increases a node potential and, by Lemma 5.5, node potentials increase O(n²) times. The total number of advance steps, therefore, is O(n² log U). The amount of time needed to identify admissible arcs is O(Σ_{i∈N} |A(i)| · n) = O(nm), since between two consecutive potential increases of a node i, the algorithm will examine |A(i)| arcs for testing admissibility. We have therefore established the following result.

Theorem 5.7. The double scaling algorithm solves the uncapacitated transportation problem in O((nm + n² log U) log nC) time.

To solve the capacitated minimum cost flow problem, we first transform it into an uncapacitated transportation problem and then apply the double scaling algorithm. We leave it as an exercise for the reader to show how the transformation permits us to use the double scaling algorithm to solve the capacitated minimum cost flow problem in O(nm log U log nC) time. The references describe further modest improvements of the algorithm. For problems that satisfy the similarity assumption, a variant of this algorithm using more sophisticated data structures is currently the fastest polynomial-time algorithm for most classes of the minimum cost flow problem.

5.10 Sensitivity Analysis

The purpose of sensitivity analysis is to determine changes in the optimum solution of a minimum cost flow problem resulting from changes in the data (the supply/demand vector or the capacity or cost of any arc). Traditionally, researchers and practitioners have conducted this sensitivity analysis using the primal simplex or dual simplex algorithms. There is, however, a conceptual drawback to this approach. The simplex based approach maintains a basis tree at every iteration and conducts sensitivity analysis by determining changes in the basis tree precipitated by changes in the data. The basis in the simplex algorithm is often degenerate, though, and consequently changes in the basis tree do not necessarily translate into changes in the solution. Therefore, the simplex based approach does not give information about the changes in the solution as the data changes; instead, it tells us about the changes in the basis tree.

We present another approach for performing sensitivity analysis, one that does not share this drawback. We show that the sensitivity analysis for the minimum cost flow problem essentially reduces to solving shortest path or maximum flow problems. For simplicity, we limit our discussion to a unit change of only a particular type. In a sense, however, this discussion is quite general: it is possible to reduce more complex changes to a sequence of the simple changes we consider.

Let x* denote an optimum solution of a minimum cost flow problem. Let π* be the corresponding node potentials and let c̄_ij = c_ij − π*(i) + π*(j) denote the reduced costs. Further, let d(k, l) denote the shortest distance from node k to node l in the residual network with respect to the original arc lengths c_ij. Since, for any directed path P from node k to node l, Σ_{(i,j)∈P} c̄_ij = Σ_{(i,j)∈P} c_ij − π*(k) + π*(l), d(k, l) equals the shortest distance from node k to node l with respect to the arc lengths c̄_ij, plus (π*(k) − π*(l)). At optimality, the reduced costs c̄_ij of all arcs in the residual network are nonnegative; hence, we can compute d(k, l) for all pairs of nodes k and l by solving n single-source shortest path problems with nonnegative arc lengths.

Supply/Demand Sensitivity Analysis

We first study a change in the supply/demand vector. Suppose that the supply/demand of a node k becomes b(k) + 1 and the supply/demand of another node l becomes b(l) − 1. (Recall from Section 1.1 that feasibility of the minimum cost flow problem dictates that Σ_{i∈N} b(i) = 0; hence, we must change the supply/demand values of two nodes by equal magnitudes, and must increase one value and decrease the other.)

Then x* is a pseudoflow for the modified problem; moreover, this vector satisfies the dual feasibility conditions C5.6. Augmenting one unit of flow from node k to node l along the shortest path in the residual network G(x*) converts this pseudoflow into a flow. This augmentation changes the objective function value by d(k, l) units, and our observations about the successive shortest path algorithm imply that the resulting flow is optimum for the modified minimum cost flow problem.

Arc Capacity Sensitivity Analysis

We next consider a change in an arc capacity. Suppose that the capacity of an arc (p, q) increases by one unit.

The flow x* remains feasible for the modified problem. Moreover, if c̄_pq ≥ 0, then it satisfies the optimality conditions and hence remains an optimum flow. If c̄_pq < 0, however, then condition C5.4 dictates that the flow on the arc must equal its capacity. We satisfy this requirement by increasing the flow on arc (p, q) by one unit, which produces a pseudoflow with an excess of one unit at node q and a deficit of one unit at node p. We convert the pseudoflow into a flow by augmenting one unit of flow from node q to node p along the shortest path in the residual network, which changes the objective function value by an amount c_pq + d(q, p). This flow is optimum by our observations concerning supply/demand sensitivity analysis.

When the capacity of the arc (p, q) decreases by one unit and the flow on the arc is strictly less than its capacity, then x* remains feasible, and hence optimum. If, however, the flow on the arc is at its capacity, we decrease the flow by one unit and augment one unit of flow from node p to node q along the shortest path in the residual network. This augmentation changes the objective function value by an amount −c_pq + d(p, q).

The preceding discussion shows how to determine changes in the optimum solution value due to unit changes of any two supply/demand values, or due to a unit change in any arc capacity, by solving n single-source shortest path problems. We can, however, obtain useful upper bounds on these changes by solving only two shortest path problems. This observation uses the fact that d(k, l) ≤ d(k, 1) + d(1, l) for all pairs of nodes k and l. Consequently, it suffices to determine the shortest path distances from node 1 to all other nodes, and from all other nodes to node 1, to compute upper bounds on all d(k, l). Recent empirical studies have suggested that these upper bounds are very close to the actual values; often the upper bounds and the actual values are equal, and usually they are within 5% of each other.

Cost Sensitivity Analysis

Finally, we discuss changes in arc costs, which we assume are integral. Suppose that the cost of an arc (p, q) increases by one unit. This change increases the reduced cost of arc (p, q) by one unit as well. If c̄_pq < 0 before the change, then after the change c̄_pq ≤ 0; similarly, if c̄_pq > 0 before the change, then c̄_pq > 0 after the change. In both cases, we preserve the optimality conditions.
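Before turning to the harder remaining case, note that the unit-change rules derived so far amount to simple lookups once the distances d(·,·) have been computed. The following Python sketch tabulates them (illustrative names; dist[i][j] holds d(i, j), c and red hold original and reduced costs):

    # Objective value changes for unit capacity changes on arc (p, q).
    def capacity_increase_change(c, red, dist, p, q):
        # capacity grows by one; binding only when the reduced cost is negative
        return 0 if red[(p, q)] >= 0 else c[(p, q)] + dist[q][p]

    def capacity_decrease_change(c, x, u, dist, p, q):
        # capacity drops by one; costly only when arc (p, q) was saturated
        return 0 if x[(p, q)] < u[(p, q)] else -c[(p, q)] + dist[p][q]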

If c̄_pq = 0 before the change and x_pq > 0, however, then after the change c̄_pq = 1 > 0 and the solution violates condition C5.2. To satisfy the optimality condition of the arc, we must either reduce the flow on arc (p, q) to zero or change the potentials so that the reduced cost of arc (p, q) becomes zero.

We first try to reroute the flow x_pq from node p to node q without violating any of the optimality conditions. We do so by solving a maximum flow problem defined as follows: (i) the flow on the arc (p, q) is set to zero, thus creating an excess of x_pq at node p and a deficit of x_pq at node q; (ii) node p is defined as the source node and node q as the sink node; and (iii) we send a maximum of x_pq units from the source to the sink. We permit the maximum flow algorithm, however, to change flows only on arcs with zero reduced costs, since otherwise it would generate a solution that violates C5.2. Let v° denote the flow sent from node p to node q, and let x° denote the resulting arc flow. If v° = x_pq, then x° denotes a minimum cost flow of the modified problem. In this case, the optimal objective function values of the original and modified problems are the same.

On the other hand, if v° < x_pq, then the maximum flow algorithm yields an s-t cut (X, N−X) with the properties that p ∈ X, q ∈ N−X, and every forward arc in the cutset with zero reduced cost is capacitated. We then decrease the node potential of every node in N−X by one unit. It is easy to verify by case analysis that this change in node potentials maintains the optimality conditions and, furthermore, decreases the reduced cost of arc (p, q) to zero. Consequently, we can set the flow on arc (p, q) equal to x_pq − v° and obtain a feasible minimum cost flow. In this case, the objective function value of the modified problem is x_pq − v° units more than that of the original problem.

5.11 Assignment Problem

The assignment problem is one of the best-known and most intensively studied special cases of the minimum cost network flow problem. As already indicated in Section 1.1, this problem is defined by a set N1, say of persons, a set N2, say of objects (with |N1| = |N2| = n), a collection of node pairs A ⊆ N1 × N2 representing possible person-to-object assignments, and a cost c_ij (possibly negative) associated with each element (i, j) in A. The objective is to assign each person to exactly one object, choosing the assignment with

minimum possible cost. The problem can be formulated as the following linear program:

Minimize Σ_{(i,j)∈A} c_ij x_ij   (5.18a)

subject to

Σ_{j : (i,j)∈A} x_ij = 1, for all i ∈ N1,   (5.18b)

Σ_{i : (i,j)∈A} x_ij = 1, for all j ∈ N2,   (5.18c)

x_ij ≥ 0, for all (i, j) ∈ A.   (5.18d)

The assignment problem is a minimum cost flow problem defined on a network G with node set N = N1 ∪ N2, arc set A, arc costs c_ij, and supply/demand specified as b(i) = 1 if i ∈ N1 and b(i) = −1 if i ∈ N2. The network G has 2n nodes and m = |A| arcs. The assignment problem is also known as the bipartite matching problem.

We use the following notation. A 0-1 solution x of (5.18) is an assignment. If x_ij = 1, then i is assigned to j and j is assigned to i. A 0-1 solution x satisfying Σ_{j : (i,j)∈A} x_ij ≤ 1 for all i ∈ N1 and Σ_{i : (i,j)∈A} x_ij ≤ 1 for all j ∈ N2 is called a partial assignment. Associated with any partial assignment x is an index set X = {(i, j) ∈ A : x_ij = 1}. A node not assigned to any other node is unassigned.

Researchers have suggested numerous algorithms for solving the assignment problem. Several of these algorithms apply, either explicitly or implicitly, the successive shortest path algorithm for the minimum cost flow problem. These algorithms typically select the initial node potentials with the following values: π(i) = 0 for all i ∈ N1, and π(j) = min {c_ij : (i, j) ∈ A} for all j ∈ N2. All reduced costs defined by these node potentials are nonnegative. The successive shortest path algorithm solves the assignment problem as a sequence of n shortest path problems with nonnegative arc lengths, and consequently runs in O(n S(n,m,C)) time. (Note that S(n,m,C) is the time required to solve a shortest path problem with nonnegative arc lengths.)

The relaxation approach is another popular approach, which is also closely related to the successive shortest path algorithm. The relaxation algorithm removes, or relaxes, the constraint (5.18c), thus allowing any object to be assigned to more than one person. This relaxed problem is easy to solve: assign each person i to an object j with the smallest c_ij value. As a result, some objects may be unassigned and other objects may be overassigned. The algorithm gradually builds a feasible assignment by identifying shortest paths from overassigned objects to unassigned objects and augmenting flows on these paths. Because this approach always maintains the optimality conditions, it can solve the shortest path problems by implementations of Dijkstra's algorithm; consequently, this algorithm also runs in O(n S(n,m,C)) time.

The network simplex algorithm, with provisions for maintaining a strongly feasible basis, is another solution procedure for the assignment problem. This approach is fairly efficient in practice; moreover, some implementations of it provide polynomial time bounds. Interestingly, the Hungarian method, a well-known solution procedure for the assignment problem, is essentially the primal-dual variant of the successive shortest path algorithm. For problems that satisfy the similarity assumption, a cost scaling algorithm provides the best-known time bound for the assignment problem.

Since these algorithms are special cases of other algorithms we have described earlier, we will not specify their details. Rather, in this section, we will discuss a different type of algorithm based upon the notion of an auction. Before doing so, we show another intimate connection between the assignment problem and the shortest path problem.

Assignments and Shortest Paths

We have seen that by solving a sequence of shortest path problems, we can solve any assignment problem. Interestingly, we can also use any algorithm for the assignment problem to solve the shortest path problem with arbitrary arc lengths. To do so, we apply the assignment algorithm twice. The first application determines if the network contains a negative cycle; if it doesn't, the second application identifies a shortest path. Both of the applications use the node splitting transformation described in Section 2.4. The node splitting transformation replaces each node i by two nodes i and i', replaces each arc (i, j) by an arc (i, j'), and adds a zero cost (artificial) arc (i, i') for each node i.
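The transformation itself is a few lines of code. The following Python sketch uses an illustrative encoding in which the split copy of node i is represented by the string i followed by a prime mark:

    # Node splitting transformation (illustrative encoding).
    def split_transform(nodes, arcs):
        split = lambda i: str(i) + "'"
        new_arcs = {(i, split(i)): 0 for i in nodes}   # zero cost artificial arcs (i, i')
        for (i, j), cost in arcs.items():
            new_arcs[(i, split(j))] = cost             # arc (i, j) becomes (i, j')
        return new_arcs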

We first note that the transformed network always has a feasible solution with cost zero, namely, the assignment containing all artificial arcs (i, i'). We next show that the optimal value of the assignment problem is negative if and only if the original network has a negative cost cycle.

First, suppose the original network contains a negative cost cycle j1 - j2 - j3 - ... - jk - j1. Then the assignment {(j1, j2'), (j2, j3'), ..., (jk, j1')}, together with the artificial arcs (i, i') for every node i not on the cycle, is feasible for the transformed problem and has negative cost. Therefore, the cost of the optimal assignment must be negative.

Conversely, suppose the cost of an optimal assignment is negative. This solution must contain at least one arc of the form (i, j') with i ≠ j; consequently, the assignment must contain a set of arcs of the form PA = {(j1, j2'), (j2, j3'), ..., (jk, j1')}. Since the artificial arcs in the assignment have zero cost and the total cost is negative, the cost of some such partial assignment PA must be negative. But then, by the construction of the transformed network, the cycle j1 - j2 - j3 - ... - jk - j1 is a negative cost cycle in the original network.

Figure 5.3. The node splitting transformation: (a) the original network; (b) the transformed network.

If the original network contains no negative cost cycle, then we can obtain a shortest path between a specific pair of nodes, say from node 1 to node n, as follows. We consider the transformed network as described earlier and delete the nodes 1' and n and the arcs incident to these nodes. See Figure 5.3 for an example of this transformation. Now observe that each path from node 1 to node n in the original network has a corresponding assignment of the same cost in the transformed network, and the converse is also true. For example, the path 1-2-5 in Figure 5.3(a) has the corresponding assignment {(1, 2'), (2, 5'), (3, 3'), (4, 4')} in Figure 5.3(b), and an assignment {(1, 2'), (2, 4'), (4, 5'), (3, 3')} in Figure 5.3(b) has the corresponding path 1-2-4-5 in Figure 5.3(a). Consequently, an optimum assignment in the transformed network gives a shortest path in the original network.

The Auction Algorithm

We now describe an algorithm for the assignment problem known as the auction algorithm. We first describe a pseudopolynomial time version of the algorithm and then incorporate scaling to make the algorithm polynomial time. This scaling algorithm is an instance of the bit-scaling algorithm described in Section 1.6. To describe the auction algorithm, we consider the maximization version of the assignment problem, since this version appears more natural for interpreting the algorithm.

Suppose n persons want to buy n cars that are to be sold by auction. Each person i is interested in a subset A(i) of cars, and has a nonnegative utility u_ij for car j, for each (i, j) ∈ A(i). The objective is to find an assignment with maximum total utility. We can set c_ij = −u_ij to reduce this problem to (5.18). Let C = max {|u_ij| : (i, j) ∈ A}. We assume that all utilities and prices are measured in dollars. At each stage of the algorithm, there is an asking price for car j, represented by price(j). For a given set of asking prices, the marginal utility of person i for buying car j is u_ij − price(j). At each iteration, an unassigned person bids on a car that has the highest marginal utility. We associate with each person i a number value(i), which is an upper bound on that person's highest marginal utility, i.e., value(i) ≥ max {u_ij − price(j) : (i, j) ∈ A(i)}. We call a bid (i, j) admissible if value(i) = u_ij − price(j) and inadmissible otherwise. The algorithm requires every bid in the auction to be admissible. If person i is next in turn to bid and has no admissible bid, then value(i) is too high and we decrease this value to max {u_ij − price(j) : (i, j) ∈ A(i)}.

So the algorithm proceeds by persons bidding on cars. If a person i makes a bid on car j, then the price of car j goes up by $1; therefore, subsequent bids are of higher value. When person i bids on car j, person i is assigned to car j, and the person k who was the previous bidder for car j, if there was one, becomes unassigned; subsequently, person k must bid on another car. As the auction proceeds, the prices of cars increase and hence the marginal values to the persons decrease. The auction stops when each person is assigned a car. We now describe this bidding procedure algorithmically. The procedure can start with some valid choices for value(i) and price(j); for example, we can set price(j) = 0 for each car j and value(i) = max {u_ij : (i, j) ∈ A(i)} for each person i. Although this initialization is sufficient for the pseudopolynomial time version, the polynomial time version requires a more clever initialization. At termination, the procedure yields an almost optimum assignment x°.

procedure BIDDING(u, x°, value, price);
begin
  let the initial assignment be a null assignment;
  while some person is unassigned do
  begin
    select an unassigned person i;
    if some bid (i, j) is admissible then
    begin
      assign person i to car j;
      price(j) := price(j) + 1;
      if person k was already assigned to car j, then person k becomes unassigned;
    end
    else update value(i) := max { u_ij − price(j) : (i, j) ∈ A(i) };
  end;
  let x° be the current assignment;
end;

We now show that this procedure gives an assignment whose utility is within $n of the optimum utility. Let x° denote a partial assignment at some point during the execution of the auction algorithm and let x* denote an optimum assignment. Recall that value(i) is always an upper bound on the highest marginal utility of person i, i.e., value(i) ≥ u_ij − price(j) for all (i, j) ∈ A(i). Hence,

Σ_{(i,j)∈x*} u_ij ≤ Σ_{i∈N1} value(i) + Σ_{j∈N2} price(j).   (5.19)
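The Bidding procedure translates almost line for line into code. The following Python sketch is illustrative (it assumes a complete assignment exists; A maps each person to the cars they are interested in, and u[i][j] holds the utilities):

    def bidding(n_persons, A, u):
        price = {}                      # price(j), defaulting to 0
        value = {i: max(u[i][j] for j in A[i]) for i in range(n_persons)}
        owner, assigned = {}, {}        # car -> person, person -> car
        unassigned = list(range(n_persons))
        while unassigned:
            i = unassigned.pop()
            # admissible bid: value(i) = u_ij - price(j)
            j = next((j for j in A[i] if value[i] == u[i][j] - price.get(j, 0)), None)
            if j is None:               # no admissible bid: value(i) was too high
                value[i] = max(u[i][j] - price.get(j, 0) for j in A[i])
                unassigned.append(i)
            else:
                if j in owner:          # previous bidder becomes unassigned
                    k = owner[j]
                    del assigned[k]
                    unassigned.append(k)
                owner[j], assigned[i] = i, j
                price[j] = price.get(j, 0) + 1
        return assigned, price, value

Note that incrementing price(j) immediately after an accepted bid is exactly what yields condition (5.20) below.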

The values and prices satisfy

value(i) ≥ u_ij - price(j), for all (i, j) ∈ A.    (5.19)

The partial assignment x° also satisfies the condition

value(i) = u_ij - price(j) + 1, for all (i, j) ∈ x°,    (5.20)

because price(j) goes up by $1 immediately after the bid (i, j), and value(i) = u_ij - price(j) at the time of bidding. Let UB(x°) be defined as follows:

UB(x°) = Σ_{(i,j) ∈ x°} u_ij + Σ_{i ∈ N_1°} value(i),    (5.21)

with N_1° denoting the unassigned persons in N_1. Using (5.20) in (5.21) and observing that unassigned cars in N_2 have zero prices, we obtain

UB(x°) ≥ Σ_{i ∈ N_1} value(i) + Σ_{j ∈ N_2} price(j) - n.    (5.22)

Since each person and each car appears in at most one arc of x*, summing (5.19) over the arcs of x* and combining the result with (5.22) yields

UB(x°) ≥ Σ_{(i,j) ∈ x*} u_ij - n.    (5.23)

Since the algorithm will either modify a node value or a node price whenever x° is not a complete assignment, within a finite number of steps the method must terminate with a complete assignment x°. Then UB(x°) represents the utility of this assignment (since N_1° is empty), and (5.23) shows that the utility of the assignment x° is at most $n less than the maximum utility.

It is easy to modify the method, however, to obtain an optimum assignment. Suppose we multiply all utilities u_ij by (n+1) before applying the Bidding procedure. Since all utilities are now multiples of (n+1), two assignments with distinct total utility will differ by at least (n+1) units. The procedure yields an assignment that is within n units of the optimum value and, hence, must be optimal.
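The chain of inequalities behind this $n-optimality guarantee can be assembled in one line; the display below is only a worked summary of the argument just given, with x° taken to be the final (complete) assignment:

Σ_{(i,j) ∈ x°} u_ij = UB(x°) ≥ Σ_{(i,j) ∈ x*} u_ij - n,

where the equality uses N_1° = ∅ in (5.21) and the inequality is (5.23). With all u_ij premultiplied by (n+1), distinct assignment utilities differ by at least n+1 > n, so the assignment x° must itself be optimum.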

We next discuss the complexity of the Bidding procedure as applied to the assignment problem with all utilities first multiplied by (n+1). In this modified problem, the largest utility is C' = (n+1)C. We first show that the value of any person decreases O(nC') times. Since all utilities are nonnegative, (5.23) implies UB(x°) ≥ -n. Substituting this inequality in (5.21) yields

Σ_{i ∈ N_1°} value(i) ≥ -n(C' + 1).

Since value(i) decreases by at least one unit each time it changes, this inequality shows that the value of any person decreases at most O(nC') times. Since decreasing the value of a person i once takes O(|A(i)|) time, the total time needed to update the values of all persons is O(Σ_i nC'|A(i)|) = O(nmC').

We next examine the number of iterations performed by the procedure. Each iteration either decreases the value of a person or assigns the person to some car j. By our previous arguments, the values change O(n²C') times in total. Further, since the price of car j increases by one unit with each bid, value(i) > u_ij - price(j) after person i has been assigned to car j. Hence, a person i can be assigned at most |A(i)| times between two consecutive decreases of value(i). This observation gives us a bound of O(nmC') on the total number of times all bidders become assigned. As can be shown, using the "current arc" data structure permits us to locate admissible bids in O(nmC') time. Since C' = (n+1)C, we have established the following result.

Theorem 5.8. The auction algorithm solves the assignment problem in O(n²mC) time.

The auction algorithm is potentially very slow because it can increase prices (and thus decrease values) in small increments of $1, and the final prices can be as large as n²C (the values as small as -n²C). Using a scaling technique in the auction algorithm ensures that the prices and values do not change too many times. As in the bit-scaling technique described in Section 1.6, we decompose the original problem into a sequence of O(log nC) assignment problems and solve each problem by the auction algorithm. We use the optimum prices and values of one problem as a starting solution of the subsequent problem and show that the prices and values change only O(n) times per scaling phase. Thus, we solve each problem in O(nm) time and solve the original problem in O(nm log nC) time.

The scaling version of the auction algorithm first multiplies all utilities by (n+1) and then solves a sequence of K = ⌈log ((n+1)C)⌉ assignment problems P_1, P_2, ...,

P_K. The problem P_k is an assignment problem in which the utility of arc (i, j) is the k leading bits in the binary representation of u_ij, assuming (by adding leading zeros if necessary) that each u_ij is K bits long. In other words, in the k-th scaling phase the problem P_k has the utilities u^k_ij = ⌊u_ij / 2^(K-k)⌋. Note that in the problem P_1 all utilities are 0 or 1, and subsequently u^(k+1)_ij = 2u^k_ij + {0 or 1}, depending upon whether the newly added bit is 0 or 1. The scaling algorithm works as follows:

algorithm ASSIGNMENT;
begin
    multiply all u_ij by (n+1);
    K := ⌈log ((n+1)C)⌉;
    price(j) := 0 for each car j;
    value(i) := 0 for each person i;
    for k := 1 to K do
    begin
        let u^k_ij := ⌊u_ij / 2^(K-k)⌋ for each (i, j) ∈ A;
        price(j) := 2 price(j) for each car j;
        value(i) := 2 value(i) + 1 for each person i;
        BIDDING(u^k, x°, value, price);
    end;
end;

The assignment algorithm performs a number of cost scaling phases. In the k-th scaling phase, it obtains a near-optimum solution of the problem with the utilities u^k_ij. It is easy to verify that before the algorithm invokes the Bidding procedure, the prices and values satisfy value(i) ≥ max {u^k_ij - price(j) : (i, j) ∈ A(i)} for each person i. The Bidding procedure maintains these conditions throughout its execution. In the last scaling phase, the algorithm solves the assignment problem with the original utilities and obtains an optimum solution of the original problem. Observe that in each scaling phase the algorithm starts with a null assignment x°; the purpose of each scaling phase is to obtain good prices and values for the subsequent scaling phase.

We next discuss the complexity of this assignment algorithm. The crucial result is that the prices and values change only O(n) times during each execution of the Bidding procedure.
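The scaling driver translates almost line for line into code on top of the bidding sketch given earlier; the following Python sketch is illustrative only (the routine name and data layout are assumptions, and utilities are taken to be nonnegative integers).

    import math

    def assignment_by_scaling(u, A, persons, cars):
        n = len(persons)
        C = max(u.values())
        u = {arc: (n + 1) * v for arc, v in u.items()}       # premultiply by n+1
        K = max(1, math.ceil(math.log2((n + 1) * C + 1)))    # bits per utility
        price = {j: 0 for j in cars}
        value = {i: 0 for i in persons}
        x = {}
        for k in range(1, K + 1):
            uk = {arc: v >> (K - k) for arc, v in u.items()}   # k leading bits
            price = {j: 2 * p for j, p in price.items()}
            value = {i: 2 * v + 1 for i, v in value.items()}
            # Each phase restarts from a null assignment but inherits the
            # doubled prices and values, exactly as in ASSIGNMENT above.
            x, price, value = bidding(uk, A, persons, cars, price, value)
        return x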

To establish this result, we define the reduced utility of an arc (i, j) in the k-th scaling phase as

ū_ij = u^k_ij - price(j) - value(i).

In this expression, price(j) and value(i) have the values computed just before calling the Bidding procedure. For any complete assignment x, we have

Σ_{(i,j) ∈ x} ū_ij = Σ_{(i,j) ∈ x} u^k_ij - Σ_{j ∈ N_2} price(j) - Σ_{i ∈ N_1} value(i).

Consequently, the reduced utility of an assignment differs from the utility of that assignment by a constant amount. Therefore, an assignment that maximizes the reduced utility also maximizes the utility. Since u^k_ij ≤ value(i) + price(j) for each (i, j) ∈ A, we have

ū_ij ≤ 0, for all (i, j) ∈ A.    (5.24)

Now consider the reduced utilities of arcs in the assignment x^(k-1) (the final assignment at the end of the (k-1)-st scaling phase). The equality (5.20) implies that

u^(k-1)_ij - price'(j) - value'(i) = -1, for all (i, j) ∈ x^(k-1),    (5.25)

where price'(j) and value'(i) are the corresponding values at the end of the (k-1)-st scaling phase. Before calling the Bidding procedure, we set price(j) = 2 price'(j), value(i) = 2 value'(i) + 1, and u^k_ij = 2u^(k-1)_ij + {0 or 1}. Substituting these relationships in (5.25), we find that the reduced utilities ū_ij of arcs in x^(k-1) are either -2 or -3.    (5.26)

Hence, the optimum reduced utility is at least -3n. If x° is some partial assignment in the k-th scaling phase, then (5.23) implies that UB(x°) ≥ -4n. Using this result and (5.21) yields

Σ_{i ∈ N_1°} value(i) ≥ -4n.

Hence, for any person i, value(i) decreases O(n) times per phase. Using this result in the proof of Theorem 5.8, we observe that the Bidding procedure would terminate in O(nm) time.
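The substitution that produces (5.26) is worth writing out explicitly; the following display is just the arithmetic described above. For an arc (i, j) ∈ x^(k-1), writing u^k_ij = 2u^(k-1)_ij + ε with ε ∈ {0, 1}:

ū_ij = u^k_ij - price(j) - value(i)
     = 2u^(k-1)_ij + ε - 2 price'(j) - 2 value'(i) - 1
     = 2(u^(k-1)_ij - price'(j) - value'(i)) + ε - 1
     = 2(-1) + ε - 1 ∈ {-3, -2},

where the last step uses (5.25).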

We summarize our discussion.

Theorem 5.9. The scaling version of the auction algorithm solves the assignment problem in O(nm log nC) time.

The scaling version of the auction algorithm can be further improved to run in O(√n m log nC) time. This improvement is based on the following implication of (5.26): if we prohibit a person i from bidding whenever value(i) ≤ -4√n, then the bound UB(x°) ≥ -4n implies that the number of unassigned persons is at most √n. Hence, the algorithm takes O(√n m) time to assign the first n - ⌈√n⌉ persons and O((⌈√n⌉)m) time to assign the remaining ⌈√n⌉ persons. For example, if n = 10,000, then the auction algorithm would assign 99% of the persons in 1% of the overall running time and the remaining 1% of the persons in the remaining 99% of the time. We therefore terminate the execution of the auction algorithm when it has assigned all but ⌈√n⌉ persons and use the successive shortest path algorithm to assign these remaining persons. It so happens that the shortest paths have length O(n), and thus Dial's algorithm, as described in Section 3.2, will find each of these shortest paths in O(m) time. This version of the auction algorithm solves a scaling phase in O(√n m) time, and its overall running time is O(√n m log nC). If we invoke the similarity assumption, then this version of the algorithm currently has the best known time bound for solving the assignment problem.

6. Reference Notes

In this section, we present reference notes on topics covered in the text. This discussion has three objectives: (i) to review important theoretical contributions on each topic, (ii) to point out inter-relationships among different algorithms, and (iii) to comment on the empirical aspects of the algorithms.

6.1 Introduction

The study of network flow models predates the development of linear programming techniques. The first studies in this problem domain, conducted by Kantorovich [1939], Hitchcock [1941], and Koopmans [1947], considered the transportation problem, a special case of the minimum cost flow problem. These studies provided some insight into the problem structure and yielded incomplete algorithms. Interest in network problems grew with the advent of the simplex algorithm by Dantzig in 1947. Dantzig [1951] specialized the simplex algorithm for the transportation problem. He noted the triangularity of the basis and the integrality of the optimum solution. Orden [1956] generalized this work by specializing the simplex algorithm for the uncapacitated minimum cost flow problem. The network simplex algorithm for the capacitated minimum cost flow problem followed from the development of the bounded variable simplex method for linear programming by Dantzig [1955]. The book by Dantzig [1962] contains a thorough description of these contributions along with historical perspectives.

During the 1950's, researchers began to exhibit increasing interest in the minimum cost flow problem as well as its special cases (the shortest path problem, the maximum flow problem and the assignment problem), mainly because of their important applications. Soon researchers developed special purpose algorithms to solve these problems. Dantzig, Ford and Fulkerson pioneered those efforts. Whereas Dantzig focused on primal simplex based algorithms, Ford and Fulkerson developed primal-dual type combinatorial algorithms to solve these problems. Their book, Ford and Fulkerson [1962], presents a thorough discussion of the early research conducted by them and by others. It also covers the development of flow decomposition theory, which is credited to Ford and Fulkerson.

Since these pioneering works, network flow problems and their generalizations emerged as major research topics in operations research; this research

is documented in thousands of papers and many text and reference books. Several important books summarize developments in the field and serve as a guide to the literature: Ford and Fulkerson [1962] (Flows in Networks), Berge and Ghouila-Houri [1962] (Programming, Games and Transportation Networks), Iri [1969] (Network Flows, Transportation and Scheduling), Hu [1969] (Integer Programming and Network Flows), Frank and Frisch [1971] (Communication, Transmission and Transportation Networks), Potts and Oliver [1972] (Flows in Transportation Networks), Christophides [1975] (Graph Theory: An Algorithmic Approach), Murty [1976] (Linear and Combinatorial Programming), Lawler [1976] (Combinatorial Optimization: Networks and Matroids), Bazaraa and Jarvis [1978] (Linear Programming and Network Flows), Minieka [1978] (Optimization Algorithms for Networks and Graphs), Kennington and Helgason [1980] (Algorithms for Network Programming), Jensen and Barnes [1980] (Network Flow Programming), Phillips and Garcia-Diaz [1981] (Fundamentals of Network Analysis), Swamy and Thulasiraman [1981] (Graphs, Networks and Algorithms), Papadimitriou and Steiglitz [1982] (Combinatorial Optimization: Algorithms and Complexity), Smith [1982] (Network Optimization Practice), Syslo, Deo, and Kowalik [1983] (Discrete Optimization Algorithms), Tarjan [1983] (Data Structures and Network Algorithms), Gondran and Minoux [1984] (Graphs and Algorithms), Rockafellar [1984] (Network Flows and Monotropic Optimization), and Derigs [1988] (Programming in Networks and Graphs).

As an additional source of references, the reader might consult the bibliography on network optimization prepared by Golden and Magnanti [1977] and the extensive set of references on integer programming compiled by researchers at the University of Bonn (Kastning [1976], Hausman [1978], and Von Randow [1982, 1985]).

Since the applications of network flow models are so pervasive, no single source provides a comprehensive account of network flow models and their impact on practice. Several researchers have prepared general surveys of selected application areas. Notable among these is the paper by Glover and Klingman [1976] on the applications of minimum cost flow and generalized minimum cost flow problems. A number of books written in special problem domains also contain valuable insight about the range of applications of network flow models. Examples in this category are the paper by Bodin, Golden, Assad and Ball [1983] on vehicle routing and scheduling problems, books on communication networks by Bertsekas

and Gallager [1987] and on transportation planning by Sheffi [1985], as well as a collection of survey articles on facility location edited by Francis and Mirchandani [1988]. Golden [1988] has described the census rounding application given in Section 1.1.

General references on data structures serve as a useful backdrop for the algorithms presented in this chapter. The book by Aho, Hopcroft and Ullman [1974] is an excellent reference for simple data structures such as arrays, linked lists, doubly linked lists, queues, stacks, binary heaps or d-heaps. The book by Tarjan [1983] is another useful source of references for these topics, as well as for more complex data structures such as dynamic trees.

We have mentioned the "similarity assumption" throughout the chapter. Gabow [1985] coined this term in his paper on scaling algorithms for combinatorial optimization problems. This important paper, which contains scaling algorithms for several network problems, greatly helped in popularizing scaling techniques.

6.2 Shortest Path Problem

The shortest path problem and its generalizations have a voluminous research literature. As a guide to these results, we refer the reader to the extensive bibliographies compiled by Gallo, Pallottino, Ruggen and Starchi [1982] and Deo and Pang [1984]. This section, which summarizes some of this literature, focuses especially on issues of computational complexity.

Label Setting Algorithms

The first label setting algorithm was suggested by Dijkstra [1959], and independently by Dantzig [1960] and Whiting and Hillier [1960]. The original implementation of Dijkstra's algorithm runs in O(n²) time, which is the optimal running time for fully dense networks (those with m = Ω(n²)), since any algorithm must examine every arc. However, improved running times are possible for sparse networks. The following table summarizes various implementations of Dijkstra's algorithm that have been designed to improve the running time in the worst case or in practice. In the table, d = ⌈2 + m/n⌉ represents the average degree of a node in the network plus 2.

[Table: implementations of Dijkstra's algorithm and their worst-case running times; the table entries were not recovered.]

Dial [1969] suggested his implementation of Dijkstra's algorithm because of its encouraging empirical performance; this algorithm was independently discovered by Wagner [1976]. Dial, Glover, Karney and Klingman [1979] have proposed an improved version of Dial's algorithm, which runs better in practice. Though Dial's algorithm is only pseudopolynomial-time, its successors have had improved worst-case behavior. Denardo and Fox [1979] suggest several such improvements. Observe that if w = max [1, min {c_ij : (i, j) ∈ A}], then we can use buckets of width w in Dial's algorithm, hence reducing the number of buckets from 1+C to 1+(C/w). The correctness of this observation follows from the fact that if d* is the current minimum temporary distance label, then the algorithm will modify no other temporary distance label in the range [d*, d* + w - 1], since each arc has length at least w - 1. Denardo and Fox [1979] implemented the shortest path algorithm in O(max {k C^(1/k), m log (k+1), nk(1 + C^(1/k)/w)}) time for any choice of k; choosing k = log C yields a time bound of O(m log log C + n log C). Depending on n, m and C, other choices might lead to a modestly better time bound.

Boas, Kaas and Zijlstra [1977] suggested a data structure whose analysis depends upon the largest key D stored in a heap. The initialization of this algorithm takes O(D) time and each heap operation takes O(log log D) time. When Dijkstra's algorithm is implemented using this data structure, it runs in O(nC + m log log nC) time. Johnson [1982] suggested an improvement of this data structure and used it to implement Dijkstra's algorithm in O(m log log C) time.

The best strongly polynomial-time algorithm to date is due to Fredman and Tarjan [1984], who use a Fibonacci heap data structure. The Fibonacci heap is a somewhat complex, but ingenious, data structure that takes an average of O(log n) time for each node selection (and the subsequent deletion) step and an average of O(1) time for each distance update. Consequently, this data structure implements Dijkstra's algorithm in O(m + n log n) time.

Johnson [1977b] proposed a related bucket scheme with exponentially growing widths and obtained a running time of O((m + n log C) log log C). This data structure is the same as the R-heap data structure described in Section 3.3, except that it performs binary search over O(log C) buckets to insert nodes into buckets during the redistribution of ranges and the distance updates. The R-heap implementation replaces the binary search by a sequential search and improves the running time by a factor of O(log log C).
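Before turning to the R-heap refinements, the bucket-width observation above is easy to make concrete. The following Python sketch is illustrative only: it assumes nonnegative integer arc lengths given as adjacency lists of (head, length) pairs, and it locates the next nonempty bucket with min(), where a practical implementation would instead sweep a circular array of 1 + C/w buckets.

    from collections import defaultdict

    def dial(adj, source, n, w):
        # Dijkstra with buckets of width w; bucket b holds labels in
        # [b*w, b*w + w - 1], all of which may be made permanent together.
        INF = float('inf')
        d = [INF] * n
        d[source] = 0
        permanent = [False] * n
        buckets = defaultdict(list)     # key: floor(d/w); entries may be stale
        buckets[0].append(source)
        while buckets:
            b = min(buckets)            # next nonempty bucket
            for i in buckets.pop(b):
                if permanent[i] or d[i] // w != b:
                    continue            # stale entry from an earlier relaxation
                permanent[i] = True
                for j, c in adj[i]:
                    if d[i] + c < d[j]:
                        d[j] = d[i] + c
                        buckets[d[j] // w].append(j)
        return d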

Ahuja, Mehlhorn, Orlin and Tarjan [1988] suggested the R-heap implementation and its further improvements, as described next.

The R-heap implementation described in Section 3.3 uses a single level bucket system. A two-level bucket system improves further on the R-heap implementation of Dijkstra's algorithm. The two-level data structure consists of K (big) buckets, each bucket being further subdivided into L (small) subbuckets. During redistribution, the two-level bucket system redistributes the range of a subbucket over all of its previous buckets. This approach permits the selection of a much larger width of buckets, thus reducing the number of buckets. By using K = L = 2 log C/log log C, this two-level bucket system version of Dijkstra's algorithm runs in O(m + n log C/log log C) time. Incorporating a generalization of the Fibonacci heap data structure in the two-level bucket system with appropriate choices of K and L further reduces the time bound to O(m + n √log C). If we invoke the similarity assumption, this approach currently gives the fastest worst-case implementation of Dijkstra's algorithm for all classes of graphs except very sparse ones, for which the algorithm of Johnson [1982] appears more attractive. The Fibonacci heap version of the two-level R-heap is very complex, however, and so it is unlikely that this algorithm would perform well in practice.

Label Correcting Algorithms

Ford [1956] suggested, in skeleton form, the first label correcting algorithm for the shortest path problem. Subsequently, several other researchers, including Ford and Fulkerson [1962] and Moore [1957], studied the theoretical properties of the algorithm. Bellman's [1958] algorithm can also be regarded as a label correcting algorithm. Though specific implementations of label correcting algorithms run in O(nm) time, the most general form is nonpolynomial-time, as shown by Edmonds [1970].

Researchers have exploited the flexibility inherent in the generic label correcting algorithm to obtain algorithms that are very efficient in practice. The modification that adds a node to the LIST (see the description of the Modified Label Correcting Algorithm given in Section 3.4) at the front if the algorithm has previously examined the node earlier, and at the end otherwise, is probably the most popular; a sketch of this rule appears below. This modification was conveyed to Pollack and Wiebenson [1960] by D'Esopo, and later refined and tested by Pape [1974]. We shall subsequently refer to this algorithm as D'Esopo and Pape's algorithm. A FORTRAN listing of this algorithm can be found in Pape [1980].
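The front-or-rear insertion rule translates directly into a deque-based routine; the following sketch is an illustration (the graph layout and names are assumptions, not from the paper), and, as noted next, its worst case is exponential even though it performs very well empirically.

    from collections import deque

    def desopo_pape(adj, source, n):
        # Label correcting with D'Esopo and Pape's rule: reinsert a node at
        # the front of LIST if it has been examined before, at the rear
        # otherwise. adj[i] is a list of (j, c_ij) pairs; nodes are 0 .. n-1.
        INF = float('inf')
        d = [INF] * n
        d[source] = 0
        examined = [False] * n
        on_list = [False] * n
        LIST = deque([source])
        on_list[source] = True
        while LIST:
            i = LIST.popleft()
            on_list[i] = False
            examined[i] = True
            for j, c in adj[i]:
                if d[i] + c < d[j]:
                    d[j] = d[i] + c
                    if not on_list[j]:
                        if examined[j]:
                            LIST.appendleft(j)   # front: previously examined
                        else:
                            LIST.append(j)       # rear: first encounter
                        on_list[j] = True
        return d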

Though this modified label correcting algorithm has excellent computational behavior, in the worst case it runs in exponential time, as shown by Kershenbaum [1981].

Glover, Klingman and Phillips [1985] proposed a generalization of the FIFO label correcting algorithm, called the partitioning shortest path (PSP) algorithm. For general networks, the PSP algorithm runs in O(nm) time, while for networks with nonnegative arc lengths it runs in O(n²) time and has excellent computational behavior. Other variants of the label correcting algorithms and their computational attributes can be found in Glover, Klingman, Phillips and Schneider [1985].

Researchers have been interested in developing polynomial-time primal simplex algorithms for the shortest path problem. Dial, Glover, Karney and Klingman [1979] and Zadeh [1979] showed that Dantzig's pivot rule (i.e., pivoting in the arc with largest violation of the optimality condition) for the shortest path problem starting from an artificial basis leads to Dijkstra's algorithm; hence, the number of pivots is O(n) if all arc costs are nonnegative. Primal simplex algorithms for the shortest path problem with arbitrary arc lengths are not that efficient. Akgul [1985a] developed a simplex algorithm for the shortest path problem that performs O(n²) pivots. Using simple data structures, Akgul's algorithm runs in O(n³) time, which can be reduced to O(nm + n² log n) using the Fibonacci heap data structure. Goldfarb, Hao and Kai [1986] described another simplex algorithm for the shortest path problem; the number of pivots and running times for this algorithm are comparable to those of Akgul's algorithm. Orlin [1985] showed that the simplex algorithm with Dantzig's pivot rule solves the shortest path problem in O(n² log nC) pivots. Ahuja and Orlin [1988] recently discovered a scaling variation of this approach that performs O(n² log C) pivots and runs in O(nm log C) time. This algorithm uses simple data structures and very natural pricing strategies, and also permits partial pricing.

All Pair Shortest Path Algorithms

Most algorithms that solve the all pair shortest path problem involve matrix manipulation. The first such algorithm appears to be a part of the folklore; Lawler [1976] describes this algorithm in his textbook. The complexity of this algorithm is O(n³ log n), which can be improved slightly by using more sophisticated matrix multiplication procedures. The algorithm we have presented is due to Floyd [1962] and is based on a theorem by Warshall [1962]. This algorithm runs in O(n³) time and is also capable of detecting the presence of negative cycles.
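Since the Floyd-Warshall algorithm is compact enough to state in a few lines, a sketch is included here; it assumes a dense n x n distance matrix, which is an illustrative choice rather than anything prescribed by the text.

    def floyd_warshall(d, n):
        # Floyd's algorithm on a distance matrix d, where d[i][j] is the arc
        # length c_ij (infinity if no arc) and d[i][i] = 0. Runs in O(n^3) time.
        for k in range(n):
            for i in range(n):
                for j in range(n):
                    if d[i][k] + d[k][j] < d[i][j]:
                        d[i][j] = d[i][k] + d[k][j]
        # A negative diagonal entry certifies a negative cycle through that node.
        has_negative_cycle = any(d[i][i] < 0 for i in range(n))
        return d, has_negative_cycle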

Dantzig [1967] devised another procedure requiring exactly the same order of calculations. The bibliography by Deo and Pang [1984] contains references for several other all pair shortest path algorithms.

From a worst-case complexity point of view, however, it might be desirable to solve the all pair shortest path problem as a sequence of single source shortest path problems. As pointed out in the text, this approach takes O(nm) time to construct an equivalent problem with nonnegative arc lengths and O(n S(n,m,C)) time to solve the n shortest path problems (recall that S(n,m,C) is the time needed to solve a shortest path problem with nonnegative arc lengths). For very dense networks, the algorithm by Fredman [1976] is faster than this approach in worst-case complexity.

Computational Results

Researchers have extensively tested shortest path algorithms on a variety of network classes. The studies due to Gilsinn and Witzgall [1973], Pape [1974], Kelton and Law [1978], Van Vliet [1978], Dial, Glover, Karney and Klingman [1979], Denardo and Fox [1979], Imai and Iri [1984], Glover, Klingman, Phillips and Schneider [1985], and Gallo and Pallottino [1988] are representative of these contributions.

Unlike the worst-case results, the computational performance of an algorithm depends upon many factors: for example, the manner in which the program is written; the language, compiler and computer used; and the distribution of networks on which the algorithm is tested. Hence, the results of computational studies are only suggestive, rather than conclusive. The results of these studies also depend greatly upon the density of the network. These studies generally suggest that Dial's algorithm is the best label setting algorithm for the shortest path problem. It is faster than the original O(n²) implementation and the binary heap, d-heap and Fibonacci heap implementations of Dijkstra's algorithm for all network classes tested by these researchers. Denardo and Fox [1979] also find that Dial's algorithm is faster than their two-level bucket implementation for all of their test problems; however, extrapolating the results, they observe that their implementation would be faster for very large shortest path problems. Researchers have not yet tested the R-heap implementation, and so at this moment no comparison with

Dial's algorithm is available.

Among the label correcting algorithms, the algorithms by D'Esopo and Pape and by Glover, Klingman, Phillips and Schneider [1985] are the two fastest. The study by Glover, Klingman, Phillips and Schneider finds that their algorithm is superior to D'Esopo and Pape's algorithm. Other researchers have also compared label setting algorithms with label correcting algorithms. Studies generally suggest that, for very dense networks, label setting algorithms are superior and, for sparse networks, label correcting algorithms perform better.

Kelton and Law [1978] have conducted a computational study of several all pair shortest path algorithms. This study indicates that Dantzig's [1967] algorithm with a modification due to Tabourier [1973] is faster (up to two times) than the Floyd-Warshall algorithm described in Section 3.5. This study also finds that matrix manipulation algorithms are faster than a successive application of a single-source shortest path algorithm for very dense networks, but slower for sparse networks.

6.3 Maximum Flow Problem

The maximum flow problem is distinguished by the long succession of research contributions that have improved upon the worst-case complexity of algorithms; some, but not all, of these improvements have produced improvements in practice.

Fulkerson and Dantzig [1955] solved the maximum flow problem by specializing the primal simplex algorithm, whereas Ford and Fulkerson [1956] and Elias et al. [1956] solved it by augmenting path algorithms. Dantzig and Fulkerson [1956], Ford and Fulkerson [1956], and Elias, Feinstein and Shannon [1956] independently established the max-flow min-cut theorem. Since then, researchers have developed a number of algorithms for this problem; Table 6.2 summarizes the running times of some of these algorithms. In the table, n is the number of nodes, m is the number of arcs, and U is an upper bound on the integral arc capacities. The algorithms whose time bounds involve U assume integral capacities; the bounds specified for the other algorithms apply to problems with arbitrary rational or real capacities.

They also showed that for arbitrary irrational arc capacities.2. containing the smallest possible number of arcs) in the residual network. consequently. Ca) log .e. Shiloach [1978] 7 8 GalU and Naamad 0(nm CXn3) log2 n) Shiloach and Vishkin [1982] Sleator 9 10 11 and Tarjan [1983] 0(nm 0(n3) log n) Tarjan [1984] Gabow[1985] Goldberg [1985] 0(nm 0(n3) log U) 12 13 14 Goldberg and Tarjan [1986] Bertsekas [1986] CXnm 0(n3) log (n^/m)) 15 16 Cheriyan and Maheshwari [1987] 0(n2 Vm + •.. both with improved computational complexity... Ford and Fulkerson [1956] observed that the labeling algorithm can perform as many an the as 0(nU) augmentations for networks with integer arc capacities. J O nm 1^ U) r?- log log — log " U . ) Ahuja and Orlin [1987] 0(nm + n^ . They one showed if the algorithm augments flow along a shortest path (i. then the algorithm performs 0(nm) augmentations. U 17 Ahuja. will A breadth first search of the network determine a shortest augmenting path. maximum that Edmonds and Karp [1972] suggested two specializations of the labeling algorithm.162 # 1 Discoverers Running Time [1972] Edmonds and Karp Dinic [1970] 0(nm2) CKn2m) 0(n3) 2 3 4 5 6 Karzanov Cherkasky Malhotra. Orhn and Tarjan [1988] (b) uvnm ol + n ^VlogU) (c) O nm V ( Table 6. the labeling algorithm can perform infinite sequence of augmentations and might converge to a value different from flow value. Running times of maximum flow algorithms. this version of the labeling . [1974] [1977] 0(n2 VIS") [1978] Kumar and Maheshwari 0(n3) Galil [1980] 0(n5/3m2/3) [1980].

Edmonds and Karp's second idea was to augment flow along a path with maximum residual capacity. They proved that this algorithm performs O(m log U) augmentations. Tarjan [1986] has shown how to determine a path with maximum residual capacity in O(m) time on average; hence, this version of the labeling algorithm runs in O(m² log U) time.

Dinic [1970] independently introduced the concept of shortest path networks, called layered networks, for solving the maximum flow problem. A layered network is a subgraph of the residual network that contains only those nodes and arcs that lie on at least one shortest path from the source to the sink. The nodes in a layered network can be partitioned into layers of nodes N1, N2, ..., so that for every arc (i, j) in the layered network, i ∈ Nk and j ∈ Nk+1 for some k. A blocking flow in a layered network G' = (N', A') is a flow that blocks flow augmentations in the sense that G' contains no directed path with positive residual capacity from the source node to the sink node. Dinic showed how to construct, in a total of O(nm) time, a blocking flow in a layered network by performing at most m augmentations. His algorithm constructs layered networks and establishes blocking flows in these networks. Dinic showed that after each blocking flow iteration, the length of the layered network increases, and after at most n iterations, the source is disconnected from the sink in the residual network. Consequently, his algorithm runs in O(n²m) time.

The shortest augmenting path algorithm presented in Section 4.3 achieves the same time bound as Dinic's algorithm, but instead of constructing layered networks it maintains distance labels. Goldberg [1985] introduced distance labels in the context of his preflow push algorithm. Orlin and Ahuja [1987] developed the distance label based augmenting path algorithm given in Section 4.3. They also showed that this algorithm is equivalent both to Edmonds and Karp's algorithm and to Dinic's algorithm in the sense that all three algorithms enumerate the same augmenting paths in the same sequence; the algorithms differ only in the manner in which they obtain these augmenting paths. Distance labels offer several advantages: they are simpler to understand than layered networks, are easier to manipulate, and have led to more efficient algorithms.

Several researchers have contributed improvements to the computational complexity of maximum flow algorithms by developing more efficient algorithms to establish blocking flows in layered networks.
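A layered network can be extracted from the residual network with a single breadth-first search. The sketch below keeps the arcs that advance one layer per step (the "level graph"), which contains the layered network and is how implementations of Dinic's algorithm typically proceed; the data layout is an illustrative assumption.

    from collections import deque

    def layered_network(adj, s, t, n):
        # adj[i]: list of j such that residual arc (i, j) has positive capacity.
        layer = [-1] * n
        layer[s] = 0
        queue = deque([s])
        while queue:
            i = queue.popleft()
            for j in adj[i]:
                if layer[j] == -1:
                    layer[j] = layer[i] + 1
                    queue.append(j)
        if layer[t] == -1:
            return layer, []   # source disconnected from sink: the algorithm stops
        # Keep arcs (i, j) with layer[j] = layer[i] + 1; a further backward pass
        # from t would discard nodes lying on no shortest s-t path.
        arcs = [(i, j) for i in range(n) if layer[i] >= 0
                for j in adj[i] if layer[j] == layer[i] + 1]
        return layer, arcs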

Karzanov [1974] introduced the concept of preflows in a layered network. (See the technical report of Even [1976] for a comprehensive description of this algorithm and the paper by Tarjan [1984] for a simplified version.) Karzanov showed that an implementation that maintains preflows and pushes flows from nodes with excesses constructs a blocking flow in O(n²) time. Malhotra, Kumar and Maheshwari [1978] present a conceptually simple maximum flow algorithm that runs in O(n³) time. Cherkasky [1977] and Galil [1980] presented further improvements of Karzanov's algorithm.

The search for more efficient maximum flow algorithms has stimulated researchers to develop new data structures for implementing Dinic's algorithm. The first such data structures were suggested independently by Shiloach [1978] and Galil and Naamad [1980]. Dinic's algorithm (or the shortest augmenting path algorithm described in Section 4.3) takes O(n) time on average to identify an augmenting path and, during the augmentation, it saturates some arcs in this path. If we delete the saturated arcs from this path, we obtain a set of path fragments. The basic idea is to store these path fragments using some data structure, for example, 2-3 trees (see Aho, Hopcroft and Ullman [1974] for a discussion of 2-3 trees), and to use them later to identify augmenting paths quickly. Shiloach [1978] and Galil and Naamad [1980] showed how to augment flows through path fragments in a way that finds a blocking flow in O(m (log n)²) time; hence, their implementation of Dinic's algorithm runs in O(nm (log n)²) time. Sleator and Tarjan [1983] improved this approach by using a data structure called dynamic trees to store and update path fragments. Sleator and Tarjan's algorithm establishes a blocking flow in O(m log n) time and thereby yields an O(nm log n) time bound for Dinic's algorithm.

Gabow [1985] obtained a similar time bound by applying a bit scaling approach to the maximum flow problem. As outlined in Section 1.7, this approach solves a maximum flow problem at each scaling phase with one more bit of every arc's capacity. During a scaling phase, the initial flow value differs from the maximum flow value by at most m units, and so the shortest augmenting path algorithm (and also Dinic's algorithm) performs at most m augmentations. Consequently, each scaling phase takes O(nm) time and the algorithm runs in O(nm log U) time. If we invoke the similarity assumption, this time bound is comparable to that of Sleator and Tarjan's algorithm, but the scaling algorithm is much simpler to implement. Orlin and Ahuja [1987] have presented a variation of Gabow's algorithm achieving the same time bound.

Goldberg and Tarjan [1986] developed the generic preflow push algorithm and the highest-label preflow push algorithm. Previously, Goldberg [1985] had shown that the FIFO version of the algorithm, which pushes flow from active nodes in first-in, first-out order, runs in O(n³) time. (This algorithm maintains a queue of active nodes; at each iteration, it selects a node from the front of the queue, performs a push/relabel step at this node, and adds the newly active nodes to the rear of the queue.) Using a dynamic tree data structure, Goldberg and Tarjan [1986] improved the running time of the FIFO preflow push algorithm to O(nm log (n²/m)). This algorithm currently gives the best strongly polynomial-time bound for solving the maximum flow problem. Bertsekas [1986] obtained another maximum flow algorithm by specializing his minimum cost flow algorithm; this algorithm closely resembles Goldberg's FIFO preflow push algorithm.

Recently, Cheriyan and Maheshwari [1987] showed that Goldberg and Tarjan's highest-label preflow push algorithm actually performs O(n² √m) nonsaturating pushes and hence runs in O(n² √m) time.

Ahuja and Orlin [1987] improved Goldberg and Tarjan's algorithm using the excess-scaling technique to obtain an O(nm + n² log U) time bound. If we invoke the similarity assumption, this algorithm improves upon Goldberg and Tarjan's O(nm log (n²/m)) algorithm by a factor of log n for networks that are both non-sparse and non-dense. Further, this algorithm does not use any complex data structures. By scaling excesses by a factor of log U/log log U and pushing flow from a large excess node with the highest distance label, Ahuja, Orlin and Tarjan [1988] reduced the number of nonsaturating pushes to O(n² log U/log log U). Ahuja, Orlin and Tarjan [1988] also obtained another variation of the original excess scaling algorithm which further reduces the number of nonsaturating pushes to O(n² √log U).

The use of the dynamic tree data structure improves the running times of the excess-scaling algorithm and its variations, though the improvements are not as dramatic as they have been for Dinic's and the FIFO preflow push algorithms. For example, the O(nm + n² √log U) algorithm improves to O(nm log ((n √log U)/m + 2)) by using dynamic trees, as shown in Ahuja, Orlin and Tarjan [1988]. Tarjan [1987] conjectures that any preflow push algorithm that performs p nonsaturating pushes can be implemented in O(nm log (2 + p/nm)) time using dynamic trees. Although this

conjecture is true for all known preflow push algorithms, it is still open for the general case.

Developing a polynomial-time primal simplex algorithm for the maximum flow problem has been an outstanding open problem for quite some time. Recently, Goldfarb and Hao [1988] developed such an algorithm. This algorithm is based on selecting pivot arcs so that flow is augmented along a shortest path from the source to the sink. As one would expect, this algorithm performs O(nm) pivots and can be implemented in O(n²m) time. Tarjan [1988] recently showed how to implement this algorithm in O(nm log n) time using dynamic trees.

Researchers have also investigated the following special cases of the maximum flow problem: (i) unit capacity networks (i.e., U = 1); (ii) unit capacity simple networks (i.e., U = 1 and, except for the source and sink, every node has one incoming arc or one outgoing arc); (iii) bipartite networks; and (iv) planar networks. Observe that the maximum flow value for unit capacity networks is less than n, and so the shortest augmenting path algorithm will solve these problems in O(nm) time. Thus, these problems are easier to solve than problems with large capacities. Even and Tarjan [1975] showed that Dinic's algorithm solves the maximum flow problem on unit capacity networks in O(n^(2/3) m) time and on unit capacity simple networks in O(n^(1/2) m) time. Orlin and Ahuja [1987] have achieved the same time bounds using a modification of the shortest augmenting path algorithm. Both of these algorithms rely on ideas contained in Hopcroft and Karp's [1973] algorithm for maximum bipartite matching. Fernandez-Baca and Martel [1987] have generalized these ideas to networks with small integer capacities.

Versions of the maximum flow algorithms run considerably faster on bipartite networks G = (N1 ∪ N2, A) if |N1| << |N2| (or |N2| << |N1|). Let n1 = |N1|, n2 = |N2| and n = n1 + n2, and suppose that n1 ≤ n2. Gusfield, Martel and Fernandez-Baca [1985] obtained the first such results by showing how the running times of Karzanov's and Malhotra et al.'s algorithms reduce from O(n³) to O(n1² n2) and O(n1³ + nm), respectively. Ahuja, Orlin, Stein and Tarjan [1988] improved upon these ideas by showing that it is possible to substitute n1 for n in the time bounds for all preflow push algorithms to obtain new time bounds for bipartite networks. This result implies that the FIFO preflow push algorithm and the

original excess scaling algorithm solve the maximum flow problem on bipartite networks in O(n1 m + n1³) and O(n1 m + n1² log U) time, respectively.

Researchers have also investigated whether the worst-case bounds of the maximum flow algorithms are tight, i.e., whether the algorithms achieve their worst-case bounds for some families of networks. Zadeh [1972] showed that the bound of Edmonds and Karp's algorithm is tight when m = n². Even and Tarjan [1975] noted that the same examples imply that the bound of Dinic's algorithm is tight when m = n². Baratz [1977] showed that the bound on Karzanov's algorithm is tight. Galil [1981] constructed an interesting class of examples and showed that the algorithms of Edmonds and Karp, Dinic, Karzanov, Cherkasky, Galil, and Malhotra et al. achieve their worst-case bounds on those examples.

Other researchers have made some progress in constructing worst-case examples for preflow push algorithms. Martel [1987] showed that the FIFO preflow push algorithm can take Ω(nm) time to solve a class of unit capacity networks. Cheriyan and Maheshwari [1987] have shown that the bound of O(n² √m) for the highest-label preflow push algorithm is tight, and that the bound of O(n²m) for the generic preflow push algorithm is also tight. Cheriyan [1988] has also constructed a family of examples to show that the bound O(n³) for the FIFO preflow push algorithm is tight. The research community has not established similar results for other preflow push algorithms, in particular for the excess-scaling algorithms. It is worth mentioning, however, that these known worst-case examples are quite artificial and are not likely to arise in practice.

It is possible to solve the maximum flow problem on planar networks much more efficiently than on general networks. (A network is called planar if it can be drawn in a two-dimensional plane so that arcs intersect one another only at the nodes. A planar network has at most 6n arcs.) Hence, the running times of the maximum flow algorithms on planar networks appear more attractive. Specialized solution techniques, which have even better running times, are quite different than those for general networks. Some important references for planar maximum flow algorithms are Itai and Shiloach [1979], Johnson and Venkatesan [1982], and Hassin and Johnson [1985].

Several computational studies have assessed the empirical behavior of maximum flow algorithms. The studies performed by Hamacher [1979], Cheung

[1980], Glover, Klingman, Mote and Whitman [1979, 1984], Imai [1983], and Goldfarb and Grigoriadis [1986] are noteworthy. These studies were conducted prior to the development of algorithms that use distance labels. These studies rank Edmonds and Karp's, Dinic's and Karzanov's algorithms in increasing order of performance for most classes of networks. Dinic's algorithm is competitive with Karzanov's algorithm for sparse networks, but slower for dense networks. Imai [1983] noted that Galil and Naamad's [1980] implementation of Dinic's algorithm, using sophisticated data structures, is slower than the original Dinic's algorithm. Sleator and Tarjan [1983] reported a similar finding; they observed that their implementation of Dinic's algorithm using the dynamic tree data structure is slower than the original Dinic's algorithm by a constant factor. Hence, in this case the sophisticated data structures improve only the worst-case performance of algorithms and are not useful empirically. Researchers have also tested the Malhotra et al. algorithm and the primal simplex algorithm due to Fulkerson and Dantzig [1955], and found these algorithms to be slower than Dinic's algorithm for most classes of networks.

A number of researchers are currently evaluating the computational performance of preflow push algorithms. Derigs and Meier [1988], Grigoriadis [1988], and Ahuja, Kodialam and Orlin [1988] have found that the preflow push algorithms are substantially (often 2 to 10 times) faster than Dinic's and Karzanov's algorithms for most classes of networks. Among all nonscaling preflow push algorithms, the highest-label preflow push algorithm runs the fastest. The excess-scaling algorithm and its variations have not been tested thoroughly. We do not anticipate that dynamic tree implementations of preflow push algorithms would be useful in practice; in this case, as in others, their contribution has been to improve the worst-case performances of algorithms.

Finally, we discuss two important generalizations of the maximum flow problem: (i) the multi-terminal flow problem, and (ii) the maximum dynamic flow problem.

In the multi-terminal flow problem, we wish to determine the maximum flow value between every pair of nodes. Gomory and Hu [1961] showed how to solve the multi-terminal flow problem on undirected networks by solving (n-1) maximum flow problems. Recently, Gusfield [1987] has suggested a simpler multi-terminal flow algorithm. These results, however, do not apply to the multi-terminal maximum flow problem on directed networks.

In the simplest version of the maximum dynamic flow problem, we associate with each arc (i, j) in the network a number t_ij denoting the time needed to traverse that arc. The objective is to send the maximum possible flow from the source node to the sink node within a given time period T. Ford and Fulkerson [1958] showed that the maximum dynamic flow problem can be solved by solving a minimum cost flow problem. (Ford and Fulkerson [1962] give a nice treatment of this problem.) Orlin [1983] has considered infinite horizon dynamic flow problems in which the objective is to minimize the average cost per period.

6.4 Minimum Cost Flow Problem

The minimum cost flow problem has a rich history. The classical transportation problem, a special case of the minimum cost flow problem, was posed and solved (though incompletely) by Kantorovich [1939], Hitchcock [1941], and Koopmans [1947]. Dantzig [1951] developed the first complete solution procedure for the transportation problem by specializing his simplex algorithm for linear programming. He observed the spanning tree property of the basis and the integrality property of the optimum solution. Later, his development of the upper bounding technique for linear programming led to an efficient specialization of the simplex algorithm for the minimum cost flow problem. Dantzig's book [1962] discusses these topics.

Ford and Fulkerson [1956, 1957] suggested the first combinatorial algorithms for the uncapacitated and capacitated transportation problem; these algorithms are known as primal-dual algorithms. Ford and Fulkerson [1962] describe the primal-dual algorithm for the minimum cost flow problem. Jewell [1958], Iri [1960] and Busacker and Gowen [1961] independently discovered the successive shortest path algorithm. These researchers showed how to solve the minimum cost flow problem as a sequence of shortest path problems with arbitrary arc lengths. Tomizawa [1971] and Edmonds and Karp [1972] independently pointed out that if the computations use node potentials, then these algorithms can be implemented so that the shortest path problems have nonnegative arc lengths.

Minty [1960] and Fulkerson [1961] independently discovered the out-of-kilter algorithm. The negative cycle algorithm is credited to Klein [1967]. Helgason and Kennington [1977] and Armstrong, Klingman and Whitman [1980] describe the

specialization of the linear programming dual simplex algorithm for the minimum cost flow problem (which is not discussed in this chapter).

Each of these algorithms performs iterations that can (apparently) not be polynomially bounded. Zadeh [1973a] describes one such example on which each of several algorithms (the primal simplex algorithm with Dantzig's pivot rule, the dual simplex algorithm, the negative cycle algorithm (which augments flow along a most negative cycle), the successive shortest path algorithm, the primal-dual algorithm, and the out-of-kilter algorithm) performs an exponential number of iterations. Zadeh [1973b] has also described more pathological examples for network algorithms. The fact that one example is bad for many network algorithms suggests an insightful inter-relationship among the algorithms. The paper by Zadeh [1979] showed this relationship by pointing out that each of the algorithms just mentioned is indeed equivalent, in the sense that they perform the same sequence of augmentations provided ties are broken using the same rule. All these algorithms essentially consist of identifying shortest paths between appropriately defined nodes and augmenting flow along these paths; further, these algorithms obtain shortest paths using a method that can be regarded as an application of Dijkstra's algorithm.

The network simplex algorithm and its practical implementations have been most popular with operations researchers. Johnson [1966] suggested the first tree manipulating data structure for implementing the simplex algorithm. The first implementations using these ideas, due to Srinivasan and Thompson [1973] and Glover, Karney, Klingman and Napier [1974], significantly reduced the running time of the simplex algorithm. Glover, Klingman and Stutz [1974], Bradley, Brown and Graves [1977], and Barr, Glover and Klingman [1979] subsequently discovered improved data structures. The book of Kennington and Helgason [1980] is an excellent source for references and background material concerning these developments.

Researchers have conducted extensive studies to determine the most effective pricing strategy, i.e., selection of the entering variable. These studies show that the choice of the pricing strategy has a significant effect on both solution time and the number of pivots required to solve minimum cost flow problems. The candidate list strategy we described is due to Mulvey [1978a]. Goldfarb and Reid [1977], Bradley, Brown and Graves [1978], Grigoriadis and Hsu [1979], Gibby, Glover, Klingman and Mead [1983], and Grigoriadis [1986] have described other strategies that have been

effective in practice. It appears that the best pricing strategy depends both upon the network structure and the network size.

Experience with solving large scale minimum cost flow problems has established that more than 90% of the pivoting steps in the simplex method can be degenerate (see Bradley, Brown and Graves [1978], Gavish, Schweitzer and Shlifer [1977], and Grigoriadis [1986]). Thus, degeneracy is both a computational and a theoretical issue. The strongly feasible basis technique, proposed by Cunningham [1976] and independently by Barr, Glover and Klingman [1977a, 1977b, 1978], has contributed on both fronts. Computational experience has shown that maintaining a strongly feasible basis substantially reduces the number of degenerate pivots. On the theoretical front, the use of this technique led to a finitely converging primal simplex algorithm. Orlin [1985] showed, using a perturbation technique, that for integer data an implementation of the primal simplex algorithm that maintains a strongly feasible basis performs O(nmCU) pivots when used with any arbitrary pricing strategy, and O(nmC log (mCU)) pivots when used with Dantzig's pricing strategy.

The strongly feasible basis technique prevents cycling during a sequence of consecutive degenerate pivots, but the number of consecutive degenerate pivots may be exponential. This phenomenon is known as stalling. Cunningham [1979] described an example of stalling and suggested several rules for selecting the entering variable to avoid stalling. One such rule is the LRC (Least Recently Considered) rule, which orders the arcs in an arbitrary, but fixed, manner. The algorithm then examines the arcs in a wrap-around fashion, each iteration starting at the place where it left off earlier, and introduces the first eligible arc into the basis. Cunningham showed that this rule admits at most nm consecutive degenerate pivots. Goldfarb, Hao and Kai [1987] have described more anti-stalling pivot rules for the minimum cost flow problem.

Researchers have also been interested in developing polynomial-time simplex algorithms for the minimum cost flow problem or its special cases. Developing a polynomial-time primal simplex algorithm for the minimum cost flow problem is still open. The only polynomial-time simplex algorithm for the minimum cost flow problem is the dual simplex algorithm due to Orlin [1984]; this algorithm performs O(n³ log n) pivots for the uncapacitated minimum cost flow problem. However, researchers have developed polynomial-time simplex algorithms for the shortest path problem, the maximum flow problem, and the assignment problem: Dial, Glover, Karney and Klingman [1979], Zadeh

[1979], Orlin [1985], Akgul [1985a], Goldfarb, Hao and Kai [1986], and Ahuja and Orlin [1988] for the shortest path problem; Goldfarb and Hao [1988] for the maximum flow problem; and Roohy-Laleh [1980], Hung [1983], Orlin [1985], Akgul [1985b], and Ahuja and Orlin [1988] for the assignment problem.

The relaxation algorithms proposed by Bertsekas and his associates are other attractive algorithms for solving the minimum cost flow problem and its generalization. For the minimum cost flow problem, this algorithm maintains a pseudoflow satisfying the optimality conditions. The algorithm proceeds by either (i) augmenting flow from an excess node to a deficit node along a path consisting of arcs with zero reduced cost, or (ii) changing the potentials of a subset of nodes. In the latter case, it resets flows on some arcs to their lower or upper bounds so as to satisfy the optimality conditions; however, this flow assignment might change the excesses and deficits at nodes. The algorithm operates so that each change in the node potentials increases the dual objective function value, and when it finally determines the optimum dual objective function value, it has also obtained an optimum primal solution. Bertsekas [1985] suggested the relaxation algorithm for the minimum cost flow problem (with integer data). Bertsekas and Tseng [1985] extended this approach to the minimum cost flow problem with real data and to the generalized minimum cost flow problem (see Section 6.6 for a definition of this problem). This relaxation algorithm has exhibited nice empirical behavior.

A number of empirical studies have extensively tested minimum cost flow algorithms for a wide variety of network structures, data distributions, and problem sizes. The most common problem generator is NETGEN, due to Klingman, Napier and Stutz [1974], which is capable of generating assignment problems and capacitated or uncapacitated transportation and minimum cost flow problems. Glover, Karney and Klingman [1974] and Aashtiani and Magnanti [1976] have tested the primal-dual and out-of-kilter algorithms. Helgason and Kennington [1977] and Armstrong, Klingman and Whitman [1980] have reported on extensive studies of the dual simplex algorithm. The primal simplex algorithm has been the subject of more rigorous investigation; studies conducted by Glover, Karney, Klingman and Napier [1974], Glover, Karney and Klingman [1974], Bradley, Brown and Graves [1977], Mulvey [1978b], Grigoriadis and Hsu [1979], and Grigoriadis [1986] are noteworthy. Bertsekas and Tseng [1988] have presented computational results for the relaxation algorithm.

In view of Zadeh's [1979] result, we would expect that the successive shortest path algorithm, the primal-dual algorithm, the out-of-kilter algorithm, the dual simplex algorithm, and the primal simplex algorithm with Dantzig's pivot rule should have comparable running times. By using more effective pricing strategies that determine a good entering arc without examining all arcs, we would expect that the primal simplex algorithm should outperform the other algorithms. All the computational studies have verified this expectation, and until very recently the primal simplex algorithm had been a clear winner for almost all classes of network problems. Bertsekas and Tseng [1988] have reported that their relaxation algorithm is substantially faster than the primal simplex algorithm. However, Grigoriadis [1986] finds his new version of the primal simplex algorithm faster than the relaxation algorithm. At this time, it appears that the relaxation algorithm of Bertsekas and Tseng and the primal simplex algorithm due to Grigoriadis are the two fastest algorithms for solving the minimum cost flow problem in practice.

Computer codes for the minimum cost flow problem are available in the public domain. These include the primal simplex codes RNET and NETFLOW developed by Grigoriadis and Hsu [1979] and Kennington and Helgason [1980], respectively, and the relaxation code RELAX developed by Bertsekas and Tseng [1988].

Polynomial-Time Algorithms

In the recent past, researchers have actively pursued the design of fast (weakly) polynomial and strongly polynomial-time algorithms for the minimum cost flow problem. Recall that an algorithm is strongly polynomial-time if its running time is polynomial in the number of nodes and arcs and does not involve terms containing logarithms of C or U. The table given in Figure 6.3 summarizes these theoretical developments in solving the minimum cost flow problem. The table reports running times for networks with n nodes and m arcs, m' of which are capacitated. It assumes that the integral cost coefficients are bounded in absolute value by C, and that the integral capacities, supplies and demands are bounded in absolute value by U. The term S(·) is the running time for the shortest path problem and the term M(·) represents the corresponding running time to solve a maximum flow problem.

Polynomial-Time Combinatorial Algorithms

 #    Discoverers                                   Running Time

 1    Edmonds and Karp [1972]                       O((n + m') log U S(n, m, C))
 2    Rock [1980]                                   O((n + m') log U S(n, m, C))
 3    Rock [1980]                                   O(n log C M(n, m, U))
 4    Bland and Jensen [1985]                       O(n log C M(n, m, U))
 5    Goldberg and Tarjan [1988a]                   O(nm log (n^2/m) log nC)
 6    Bertsekas and Eckstein [1988]                 O(n^3 log nC)
 7    Goldberg and Tarjan [1987]                    O(n^3 log nC)
 8    Gabow and Tarjan [1987]                       O(nm log n log U log nC)
 9    Goldberg and Tarjan [1987, 1988b]             O(nm log n log nC)
 10   Ahuja, Goldberg, Orlin and Tarjan [1988]      O(nm (log U/log log U) log nC) and
                                                    O(nm log log U log nC)

Strongly Polynomial-Time Combinatorial Algorithms

For the sake of comparing the polynomial and strongly polynomial-time algorithms, we invoke the similarity assumption. For problems that satisfy the similarity assumption, the best bounds for the shortest path and maximum flow problems are:

Polynomial-Time Bounds                              Discoverers

S(n, m, C) = min(m log log C, m + n√(log C))        Johnson [1982], and Ahuja, Mehlhorn,
                                                    Orlin and Tarjan [1988]
M(n, m, U) = nm log((n/m)√(log U) + 2)              Ahuja, Orlin and Tarjan [1987]

Strongly Polynomial-Time Bounds                     Discoverers

S(n, m) = m + n log n                               Fredman and Tarjan [1984]
M(n, m) = nm log (n^2/m)                            Goldberg and Tarjan [1986]

Using capacity and right-hand-side scaling, Edmonds and Karp [1972] developed the first (weakly) polynomial-time algorithm for the minimum cost flow problem. The RHS-scaling algorithm presented in Section 5.7, which is a variant of the Edmonds-Karp algorithm, was suggested by Orlin [1988]. The scaling technique initially did not capture the interest of many researchers, since they regarded it as having little practical utility. However, researchers gradually recognized that the scaling technique has great theoretical value as well as potential practical significance. Rock [1980] developed two different bit-scaling algorithms for the minimum cost flow problem, one using capacity scaling and the other using cost scaling. The cost scaling algorithm reduces the minimum cost flow problem to a sequence of O(n log C) maximum flow problems. Bland and Jensen [1985] independently discovered a similar cost scaling algorithm.

The pseudoflow push algorithms for the minimum cost flow problem discussed in Section 5.8 use the concept of approximate optimality, introduced independently by Bertsekas [1979] and Tardos [1985]. Bertsekas [1986] developed the first pseudoflow push algorithm; this algorithm was pseudopolynomial-time. Goldberg and Tarjan [1987] used a scaling technique on a variant of this algorithm to obtain the generic pseudoflow push algorithm described in Section 5.8. Tarjan [1984] proposed a wave algorithm for the maximum flow problem.

The wave algorithm for the minimum cost flow problem described in Section 5.8, which was developed independently by Goldberg and Tarjan [1987] and Bertsekas and Eckstein [1988], relies upon similar ideas. Although the wave algorithm is very practical, its worst-case running time is not very attractive. Using a dynamic tree data structure in the generic pseudoflow push algorithm, Goldberg and Tarjan [1987] obtained a computational time bound of O(nm log n log nC). They also showed that the minimum cost flow problem can be solved using O(n log nC) blocking flow computations. (The description of Dinic's algorithm in Section 6.3 contains the definition of a blocking flow.) Goldberg and Tarjan [1988a] obtained an O(nm log (n^2/m) log nC) bound for the wave algorithm.

These algorithms, except the wave algorithm, require sophisticated data structures that impose a very high computational overhead. This situation has prompted researchers to investigate the possibility of improving the computational complexity of minimum cost flow algorithms without using any complex data structures. The first success in this direction was due to Gabow and Tarjan [1987], who developed a triple scaling algorithm running in O(nm log n log U log nC) time. The second success was due to Ahuja, Goldberg, Orlin and Tarjan [1988], who developed the double scaling algorithm. The double scaling algorithm, as described in Section 5.9, runs in O(nm log U log nC) time. Scaling costs by an appropriately larger factor improves the algorithm to O(nm (log U/log log U) log nC), and a dynamic tree implementation improves the bound further to O(nm log log U log nC). For problems satisfying the similarity assumption, the double scaling algorithm is faster than all other algorithms for all network topologies except for very dense networks; in these instances, the algorithms by Goldberg and Tarjan appear more attractive.

Goldberg and Tarjan [1988b] and Barahona and Tardos [1987] have developed other polynomial-time algorithms. Both algorithms are based on the negative cycle algorithm due to Klein [1967]. Goldberg and Tarjan [1988b] showed that if the negative cycle algorithm always augments flow along a minimum mean cycle (a cycle W for which Σ{c_ij : (i,j) ∈ W} / |W| is minimum), then it is strongly polynomial-time. Using both finger tree (see Mehlhorn [1984]) and dynamic tree data structures, Goldberg and Tarjan described an implementation of this approach running in time O(nm (log n) min{log nC, m log n}).
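Klein's generic negative cycle algorithm admits a very compact statement. The sketch below is our own illustration (the function names and data layout are ours, and it selects an arbitrary negative cycle rather than a minimum mean or maximum improvement cycle, so it carries only the pseudopolynomial guarantee discussed in the text). It detects a negative cost cycle in the residual network by the Bellman-Ford method and cancels it.

    def find_negative_cycle(nodes, res_arcs, cost):
        # Bellman-Ford with a virtual zero-cost source to every node; if a
        # relaxation still occurs in the n-th pass, a negative cycle exists
        # and is recovered by walking the predecessor pointers.
        dist = {i: 0 for i in nodes}
        pred = {i: None for i in nodes}
        last = None
        for _ in range(len(nodes)):
            last = None
            for (i, j) in res_arcs:
                if dist[i] + cost[(i, j)] < dist[j]:
                    dist[j] = dist[i] + cost[(i, j)]
                    pred[j] = i
                    last = j
            if last is None:
                return None
        v = last
        for _ in range(len(nodes)):        # step back onto the cycle itself
            v = pred[v]
        cycle, w = [v], pred[v]
        while w != v:
            cycle.append(w)
            w = pred[w]
        cycle.reverse()
        return cycle

    def cancel_negative_cycles(nodes, arcs, cost, cap, x):
        # Klein's scheme; assumes at most one of (i,j) and (j,i) is in arcs.
        while True:
            res = {}
            for (i, j) in arcs:            # build the residual network
                if x[(i, j)] < cap[(i, j)]:
                    res[(i, j)] = (cost[(i, j)], cap[(i, j)] - x[(i, j)])
                if x[(i, j)] > 0:
                    res[(j, i)] = (-cost[(i, j)], x[(i, j)])
            cyc = find_negative_cycle(nodes, list(res),
                                      {a: rc for a, (rc, _) in res.items()})
            if cyc is None:
                return x
            pairs = [(cyc[k], cyc[(k + 1) % len(cyc)]) for k in range(len(cyc))]
            delta = min(res[a][1] for a in pairs)
            for (i, j) in pairs:
                if (i, j) in arcs:
                    x[(i, j)] += delta     # push on a forward residual arc
                else:
                    x[(j, i)] -= delta     # cancel flow on a reverse arc

The strongly polynomial variants discussed above differ from this sketch only in which cycle they select at each step.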

Barahona and Tardos [1987], analyzing an algorithm suggested by Weintraub [1974], showed that if the negative cycle algorithm augments flow along a cycle with maximum improvement in the objective function, then it performs O(m log mCU) iterations. Since identifying a cycle with maximum improvement is difficult (i.e., NP-hard), they describe a method (based upon solving an auxiliary assignment problem) to determine a disjoint set of augmenting cycles with the property that augmenting flows along these cycles improves the flow cost by at least as much as augmenting flow along any single cycle. Their algorithm runs in O(m^2 log (mCU) S(n, m, C)) time.

Edmonds and Karp [1972] proposed the first polynomial-time algorithm for the minimum cost flow problem, and also highlighted the desire to develop a strongly polynomial-time algorithm. This desire was motivated primarily by theoretical considerations. (Indeed, in practice, the terms log C and log U typically range from 1 to 20, and are sublinear in n.) Strongly polynomial-time algorithms are theoretically attractive for at least two reasons: (i) they might provide, in principle, network flow algorithms that can run on real valued data as well as integer valued data, and (ii) they might, at a more fundamental level, identify the source of the underlying complexity in solving a problem; i.e., are problems more difficult, or equally difficult, to solve as the values of the underlying data become increasingly larger?

The first strongly polynomial-time minimum cost flow algorithm is due to Tardos [1985]. Several researchers, including Orlin [1984], Fujishige [1986], Galil and Tardos [1986], and Orlin [1988], provided subsequent improvements in the running time. Goldberg and Tarjan [1988a] obtained another strongly polynomial-time algorithm by slightly modifying their pseudoflow push algorithm. Goldberg and Tarjan [1988b] also showed that their algorithm that proceeds by cancelling minimum mean cycles is strongly polynomial-time. Currently, the fastest strongly polynomial-time algorithm is the one due to Orlin [1988]. This algorithm solves the minimum cost flow problem as a sequence of O(min(m log U, m log n)) shortest path problems. For very sparse networks, the worst-case running time of this algorithm is nearly as low as that of the best weakly polynomial-time algorithm, even for problems that satisfy the similarity assumption.

Interior point linear programming algorithms are another source of polynomial-time algorithms for the minimum cost flow problem. Kapoor and Vaidya [1986] have shown that Karmarkar's [1984] algorithm, when applied to the minimum cost flow problem, performs O(n^2.5 mK) operations, where

K = log n + log C + log U. Vaidya [1986] suggested another algorithm for linear programming that solves the minimum cost flow problem in O(n^2.5 √m K) time. Asymptotically, these time bounds are worse than that of the double scaling algorithm.

At this time, the research community has yet to develop sufficient evidence to fully assess the computational worth of scaling and interior point linear programming algorithms for the minimum cost flow problem. According to the folklore, the scaling algorithms, even though they might provide the best worst-case bounds on running times, are not as efficient in practice as the non-scaling algorithms. Boyd and Orlin have obtained contradictory results: testing the right-hand-side scaling algorithm for the minimum cost flow problem, they found the scaling algorithm to be competitive with the relaxation algorithm for some classes of problems. Bland and Jensen [1985] also reported encouraging results with their cost scaling algorithm. We believe that, when implemented with appropriate speed-up techniques, scaling algorithms have the potential to be competitive with the best other algorithms.

6.5 Assignment Problem

The assignment problem has been a popular research topic. The primary emphasis in the literature has been on the development of empirically efficient algorithms rather than on the development of algorithms with improved worst-case complexity. Although the research community has developed several different algorithms for the assignment problem, many of these algorithms share common features. The successive shortest path algorithm, described in Section 5.4 for the minimum cost flow problem, appears to lie at the heart of many assignment algorithms. This algorithm is implicit in the first assignment algorithm, due to Kuhn [1955] and known as the Hungarian method, and is explicit in the papers by Tomizava [1971] and Edmonds and Karp [1972].

When applied to an assignment problem on the network G = (N1 ∪ N2, A), the successive shortest path algorithm operates as follows. We first transform the assignment problem into a minimum cost flow problem by adding a source node s and a sink node t, and by introducing arcs (s,i) for all i ∈ N1 and arcs (j,t) for all j ∈ N2; these arcs have zero cost and unit capacity. The algorithm successively obtains a shortest path from s to t with respect to the linear

programming reduced costs, updates the node potentials, and augments one unit of flow along this shortest path. The algorithm solves the assignment problem by n applications of the shortest path algorithm for nonnegative arc lengths, and hence runs in O(n S(n,m,C)) time, where S(n,m,C) is the time needed to solve a shortest path problem with nonnegative arc lengths. For a naive implementation of Dijkstra's algorithm, S(n,m,C) is O(n^2), and for a Fibonacci heap implementation it is O(m + n log n). For problems satisfying the similarity assumption, S(n,m,C) is min(m log log C, m + n√(log C)).

The fact that the assignment problem can be solved as a sequence of n shortest path problems with arbitrary arc lengths follows from the works of Jewell [1958], Iri [1960] and Busaker and Gowen [1961] on the minimum cost flow problem. However, Tomizava [1971] and Edmonds and Karp [1972] independently pointed out that working with reduced costs leads to shortest path problems with nonnegative arc lengths. Weintraub and Barahona [1979] worked out the details of the Edmonds-Karp algorithm for the assignment problem. Hoffman and Markowitz [1963] pointed out the transformation of a shortest path problem to an assignment problem.

Kuhn's [1955] Hungarian method is the primal-dual version of the successive shortest path algorithm. After solving a shortest path problem and updating the node potentials, the Hungarian method solves a (particularly simple) maximum flow problem to send the maximum possible flow from the source node s to the sink node t using arcs with zero reduced cost. Whereas the successive shortest path algorithm augments flow along one path in an iteration, the Hungarian method augments flow along all the shortest paths from the source node to the sink node. If we use the labeling algorithm to solve the resulting maximum flow problems, then these applications take a total of O(nm) time overall, since there are n augmentations and each augmentation takes O(m) time. Consequently, the Hungarian method, too, runs in O(nm + n S(n,m,C)) = O(n S(n,m,C)) time. (For some time after the development of the Hungarian method as described by Kuhn, the research community considered it to be an O(n^4) method.) Lawler [1976] described an O(n^3) implementation of the method. Subsequently, many researchers realized that the Hungarian method in fact runs in O(n S(n,m,C)) time.

The more recent threshold assignment algorithm of Glover, Glover and Klingman [1986] is also a successive shortest path algorithm; it integrates their threshold shortest path algorithm (see Glover, Glover and Klingman [1984]) with the flow augmentation process. Carraresi and Sodini [1986] also suggested a similar threshold assignment algorithm.
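The successive shortest path scheme described above is compact enough to state in code. The following sketch is our adaptation of the standard textbook presentation of this method (it is not the code of any of the cited studies, and the names are ours). It maintains row and column potentials so that all reduced costs stay nonnegative, which permits a Dijkstra-like search, and it augments one unit of flow per iteration.

    def solve_assignment(c):
        # Min-cost assignment for an n x n cost matrix c by n shortest path
        # computations.  u and v are the row and column potentials; column n
        # is a virtual column that holds the currently free row.
        n = len(c)
        INF = float("inf")
        u = [0.0] * (n + 1)
        v = [0.0] * (n + 1)
        p = [n] * (n + 1)            # p[j]: row matched to column j (n = none)
        way = [0] * (n + 1)          # way[j]: previous column on the path
        for i in range(n):           # augment from each row i in turn
            p[n] = i
            j0 = n
            minv = [INF] * (n + 1)
            used = [False] * (n + 1)
            while True:              # Dijkstra step over the columns
                used[j0] = True
                i0, delta, j1 = p[j0], INF, -1
                for j in range(n):
                    if not used[j]:
                        cur = c[i0][j] - u[i0] - v[j]   # reduced cost
                        if cur < minv[j]:
                            minv[j], way[j] = cur, j0
                        if minv[j] < delta:
                            delta, j1 = minv[j], j
                for j in range(n + 1):                  # update potentials
                    if used[j]:
                        u[p[j]] += delta
                        v[j] -= delta
                    else:
                        minv[j] -= delta
                j0 = j1
                if p[j0] == n:       # reached an unmatched column
                    break
            while j0 != n:           # augment along the alternating path
                j1 = way[j0]
                p[j0] = p[j1]
                j0 = j1
        assignment = [-1] * n
        for j in range(n):
            if p[j] != n:
                assignment[p[j]] = j
        return assignment, sum(c[i][assignment[i]] for i in range(n))

Each of the n iterations performs an O(n^2) Dijkstra-like scan, so this dense implementation runs in O(n^3) time, matching the naive bound discussed above.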

Jonker and Volgenant [1986] suggested some practical improvements of the Hungarian method.

The relaxation approach for the assignment problem is due to Dinic and Kronrod [1969], Hung and Rom [1980] and Engquist [1982]. This approach is closely related to the successive shortest path algorithm. Both approaches start with an infeasible assignment and gradually make it feasible; the major difference is in the nature of the infeasibility. The successive shortest path algorithm maintains a solution with unassigned persons and objects, and with no person or object overassigned. Throughout the relaxation algorithm, every person is assigned, but objects may be overassigned or unassigned. Both approaches maintain optimality of the intermediate solution and work toward feasibility by solving at most n shortest path problems with nonnegative arc lengths. The algorithms of Dinic and Kronrod [1969] and Engquist [1982] are essentially the same as the one we just described, but the shortest path computations are somewhat disguised in the paper of Dinic and Kronrod [1969]. The algorithm of Hung and Rom [1980] maintains a strongly feasible basis rooted at an overassigned node and, after each augmentation, reoptimizes over the previous basis to obtain another strongly feasible basis. All of these algorithms run in O(n S(n,m,C)) time.

Another algorithm worth mentioning is due to Balinski and Gomory [1964]. It is a primal algorithm that maintains a feasible assignment and gradually converts it into an optimum assignment by augmenting flows along negative cycles or by modifying node potentials. Derigs [1985] notes that the shortest path computations underlie this method, and that it too runs in O(n S(n,m,C)) time.

Researchers have also studied primal simplex algorithms for the assignment problem. The basis of the assignment problem is highly degenerate; of its 2n-1 variables, only n are nonzero. Probably because of this excessive degeneracy, the mathematical programming community did not conduct much research on the network simplex method for the assignment problem until Barr, Glover and Klingman [1977a] devised the strongly feasible basis technique. These authors developed the details of the network simplex algorithm when implemented so as to maintain a strongly feasible basis for the assignment problem; they also reported encouraging computational results. Subsequent research focused on developing

polynomial-time simplex algorithms. Roohy-Laleh [1980] developed a simplex pivot rule requiring O(n^3) pivots. Hung [1983] describes a pivot rule that performs at most O(n^3) consecutive degenerate pivots and at most O(n log nC) nondegenerate pivots. Orlin [1985] studied the theoretical properties of Dantzig's pivot rule for the network simplex algorithm and showed that for the assignment problem this rule requires O(n^2 log nC) pivots; a naive implementation of the algorithm runs in O(n^2 m log nC) time. Ahuja and Orlin [1988] described a scaling version of Dantzig's pivot rule that performs O(n^2 log C) pivots and can be implemented to run in O(nm log C) time using simple data structures. The algorithm essentially consists of pivoting in any arc with sufficiently large reduced cost. The algorithm defines the term "sufficiently large" iteratively: initially, this threshold value equals C, and within O(n^2) pivots its value is halved. Akgul [1985b] suggested another primal simplex algorithm performing O(n^2) pivots. This algorithm essentially amounts to solving n shortest path problems and runs in O(n S(n,m,C)) time.

Balinski [1985] developed the signature method, which is a dual simplex algorithm for the assignment problem. (Although his basic algorithm maintains a dual feasible basis, it is not a dual simplex algorithm in the traditional sense because it does not necessarily increase the dual objective in every iteration; some variants of this algorithm do have this property.) Balinski's algorithm performs O(n^2) pivots and runs in O(n^3) time. Goldfarb [1985] described some implementations of Balinski's algorithm that run in O(n^3) time using simple data structures and in O(nm + n^2 log n) time using Fibonacci heaps.

Bertsekas [1981] has presented another algorithm for the assignment problem which is in fact a specialization of his relaxation algorithm for the minimum cost flow problem (see Bertsekas [1985]).

The auction algorithm is due to Bertsekas and uses basic ideas originally suggested in Bertsekas [1979]. Bertsekas and Eckstein [1988] described a more recent version of the auction algorithm and its analysis. Our presentation of the auction algorithm and its analysis is somewhat different than the one given by Bertsekas and Eckstein [1988]. For example, the algorithm we have presented increases the prices of the objects by one unit at a time, whereas the algorithm by Bertsekas and Eckstein increases prices by the maximum amount that preserves ε-optimality of the solution.
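A minimal version of the auction scheme with unit price rises, in the spirit of our presentation, looks as follows. This sketch is only our illustration (names are ours; Bertsekas and Eckstein's implementation raises prices by larger amounts, as noted above).

    def unit_auction(a):
        # Auction for the assignment problem with unit price increments.
        # a[i][j] is the (integer) benefit of assigning person i to object j.
        # With unit rises the final assignment satisfies ε-optimality with
        # ε = 1, hence is within n of optimal; multiplying every a[i][j] by
        # n + 1 beforehand yields an exact optimum.
        n = len(a)
        price = [0] * n
        owner = [-1] * n                # owner[j]: person holding object j
        assigned = [-1] * n             # assigned[i]: object held by person i
        unassigned = list(range(n))
        while unassigned:
            i = unassigned.pop()
            # person i bids for the object with the best net value
            j = max(range(n), key=lambda k: a[i][k] - price[k])
            price[j] += 1               # the one-unit price rise
            if owner[j] != -1:          # evict the previous owner
                assigned[owner[j]] = -1
                unassigned.append(owner[j])
            owner[j] = i
            assigned[i] = j
        return assigned, price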

Gabow [1985], using bit-scaling of costs, developed the first scaling algorithm for the assignment problem. His algorithm performs O(log C) scaling phases and solves each phase in O(n^3/4 m) time, thereby achieving an O(n^3/4 m log C) time bound. Using the concept of ε-optimality, Gabow and Tarjan [1987] developed another scaling algorithm for the assignment problem running in O(n^1/2 m log nC) time. Observe that the generic pseudoflow push algorithm for the minimum cost flow problem described in Section 5.8 solves the assignment problem in O(nm log nC) time, since every push is a saturating push. Bertsekas and Eckstein [1988] showed that the scaling version of the auction algorithm runs in O(nm log nC) time. Orlin and Ahuja [1988] improved the time bound of the auction algorithm to O(n^1/2 m log nC); Section 5.11 has presented a modified version of the algorithm in Orlin and Ahuja [1988]. For problems satisfying the similarity assumption, these two algorithms achieve the best time bound to solve the assignment problem without using any sophisticated data structure. This time bound is comparable to that of Gabow and Tarjan's algorithm. Currently, the best strongly polynomial-time bound to solve the assignment problem is O(nm + n^2 log n), which is achieved by many assignment algorithms. Scaling algorithms can do better for problems that satisfy the similarity assumption.

As mentioned previously, most of the research effort devoted to assignment algorithms has stressed the development of empirically faster algorithms. Over the years, many computational studies have compared one algorithm with a few other algorithms. Some representative computational studies are those conducted by Barr, Glover and Klingman [1977a] on the network simplex method, by McGinnis [1983] and Carpento, Martello and Toth [1988] on the primal-dual method, by Engquist [1982] on the relaxation methods, and by Glover et al. [1986] and Jonker and Volgenant [1987] on the successive shortest path methods. Since no paper has compared all of these algorithms, it is difficult to assess their computational merits. Nevertheless, results to date seem to justify the following observations about the algorithms' relative performance. The primal simplex algorithm is slower than the primal-dual, relaxation and successive shortest path algorithms. Among the latter three approaches, the successive shortest path algorithms due to Glover et al. [1986] and Jonker and Volgenant [1987] appear to be the fastest. Bertsekas and Eckstein [1988] found that the scaling version of the auction algorithm is competitive with Jonker and Volgenant's algorithm. Carpento, Martello and Toth [1988] present

several FORTRAN implementations of assignment algorithms for dense and sparse cases.

6.6 Other Topics

Our domain of discussion in this paper has featured single commodity network flow problems with linear costs. Several other generic topics in the broader problem domain of network optimization are of considerable theoretical and practical interest. In particular, four other topics deserve mention: (i) generalized network flows; (ii) convex cost flows; (iii) multicommodity flows; and (iv) network design. We shall now discuss these topics briefly.

Generalized Network Flows

The flow problems we have considered in this chapter assume that arcs conserve flow, i.e., the flow entering an arc equals the flow leaving the arc. In models of generalized network flows, arcs do not necessarily conserve flow. If x_ij units of flow enter an arc (i,j), then r_ij x_ij units "arrive" at node j; r_ij is a nonnegative flow multiplier associated with the arc. If 0 < r_ij < 1, then the arc is lossy, and if 1 < r_ij < ∞, then the arc is gainy. In conventional flow networks, r_ij = 1 for all arcs. Generalized network flows arise in many application contexts. For example, the multiplier might model pressure losses in a water resource network or losses incurred in the transportation of perishable goods.

Researchers have studied several generalized network flow problems. An extension of the conventional maximum flow problem is the generalized maximum flow problem, which either maximizes the flow out of a source node or maximizes the flow into a sink node (these two objectives are different!). The source version of the problem can be stated as the following linear program:

Maximize v_s    (6.1a)

subject to

Σ{x_ij : (i,j) ∈ A} − Σ{r_ji x_ji : (j,i) ∈ A} = v_s if i = s; 0 if i ≠ s, t; −v_t if i = t; for all i ∈ N,    (6.1b)

0 ≤ x_ij ≤ u_ij, for all (i,j) ∈ A.    (6.1c)
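The role of the multipliers is easiest to see on a single path. The following sketch is only our illustration (the function and its names are ours and are not part of any cited algorithm); it computes how much flow can be injected at the start of a path of lossy or gainy arcs, and how much then arrives at the end, with each capacity limiting the flow entering its arc as in (6.1c).

    def push_along_path(path_arcs):
        # path_arcs: list of (capacity, multiplier) pairs in path order.
        # Work backwards: the flow leaving an arc (multiplier * entering
        # flow) may not exceed what the next arc can accept.
        allowed = float("inf")          # limit on flow leaving the last arc
        for capacity, multiplier in reversed(path_arcs):
            enter = min(capacity,
                        allowed / multiplier if multiplier > 0 else 0.0)
            allowed = enter             # becomes the limit on the previous arc
        inject = allowed
        deliver = inject
        for capacity, multiplier in path_arcs:
            deliver *= multiplier       # r_ij * x_ij units arrive at node j
        return inject, deliver

For instance, a two-arc path with capacities 10 and 10 and multipliers 0.5 and 2 admits an injection of 10 units: 10 enter the first arc, 5 arrive and enter the second, and 10 leave at the end, illustrating why the amount sent from the source need not equal the amount received at the sink.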

Note that the capacity restrictions apply to the flows entering the arcs. Further, because of flow losses and gains within arcs, v_s is not necessarily equal to v_t.

The generalized maximum flow problem has many similarities with the minimum cost flow problem. Extended versions of the successive shortest path algorithm, the negative cycle algorithm, and the primal-dual algorithm for the minimum cost flow problem apply to the generalized maximum flow problem. These algorithms, however, are not pseudopolynomial-time, mainly because the optimal arc flows and node potentials might be fractional. The paper by Truemper [1977] surveys these approaches. The recent paper by Goldberg, Plotkin and Tardos [1986] describes the first polynomial-time combinatorial algorithms for the generalized maximum flow problem.

In the generalized minimum cost flow problem, which is an extension of the ordinary minimum cost flow problem, we wish to determine a minimum cost flow in a generalized network satisfying the specified supply/demand requirements of the nodes. There are three main approaches to solve this problem. The first approach, due to Jewell [1962], is essentially a primal-dual algorithm. The second approach is the primal simplex algorithm studied by Elam, Glover and Klingman [1979], among others. Elam et al. find their implementation to be very efficient in practice; they find that it is about 2 to 3 times slower than their implementations for the ordinary minimum cost flow problem. The third approach, due to Bertsekas and Tseng [1988b], generalizes their minimum cost flow relaxation algorithm to the generalized minimum cost flow problem.

Convex Cost Flows

We shall restrict this brief discussion to convex cost flow problems with separable cost functions, i.e., those whose objective function can be written in the form Σ{C_ij(x_ij) : (i,j) ∈ A}. Problems containing nonconvex nonseparable cost terms such as x_12 x_13 are substantially more difficult to solve and continue to pose a significant challenge for the mathematical programming community. Even problems with nonseparable but convex objective functions are more difficult to solve; typically, analysts rely on general nonlinear programming techniques to solve them.

The research community has focused on two classes of separable convex cost flow problems: (i) those in which each C_ij(x_ij) is a piecewise linear function, and (ii) those in which each C_ij(x_ij) is a continuously differentiable function. The solution techniques used to solve these two classes of problems are quite different.

There is a well-known technique for transforming a separable convex program with piecewise linear functions to a standard linear program (see, e.g., Bradley, Hax and Magnanti [1977]). This transformation reduces the convex cost flow problem to a minimum cost flow problem: it introduces one arc for each linear segment in the cost functions, thus increasing the problem size. Observe that it is possible to use a piecewise linear function, with linear segments chosen (if necessary) with sufficiently small size, to approximate a convex function of one variable to any desired degree of accuracy. Moreover, it is possible to carry out this transformation implicitly and thereby modify many minimum cost flow algorithms, such as the successive shortest path algorithm, the negative cycle algorithm, and the primal-dual and out-of-kilter algorithms, to solve convex cost flow problems without increasing the problem size. The paper by Ahuja, Batra and Gupta [1984] illustrates this technique and suggests a pseudopolynomial time algorithm.

The separable convex cost flow problem has the following formulation:

Minimize Σ{C_ij(x_ij) : (i,j) ∈ A}    (6.2a)

subject to

Σ{x_ij : (i,j) ∈ A} − Σ{x_ji : (j,i) ∈ A} = b(i), for all i ∈ N,    (6.2b)

0 ≤ x_ij ≤ u_ij, for all (i,j) ∈ A.    (6.2c)

In this formulation, each C_ij(x_ij) is a convex function.
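The transformation described above is mechanical. The sketch below is our illustration (with names of our choosing) of how one convex arc cost, given by its breakpoints, expands into parallel arcs with linear costs; convexity guarantees that the segment slopes are nondecreasing, so a minimum cost flow fills the cheaper segments first and the expansion is faithful.

    def expand_convex_arc(i, j, breakpoints):
        # breakpoints: list of (flow_level, cost_at_level) pairs in
        # increasing flow order describing a piecewise linear convex cost;
        # the first pair is typically (0, 0).  Returns a list of
        # (tail, head, capacity, unit_cost) parallel arcs.
        arcs = []
        for (x0, c0), (x1, c1) in zip(breakpoints, breakpoints[1:]):
            slope = (c1 - c0) / (x1 - x0)   # unit cost on this segment
            arcs.append((i, j, x1 - x0, slope))
        return arcs

    # Example: cost 0 at flow 0, 2 at flow 2, 8 at flow 4 (slopes 1 then 3):
    # expand_convex_arc('a', 'b', [(0, 0), (2, 2), (4, 8)])
    # -> [('a', 'b', 2, 1.0), ('a', 'b', 2, 3.0)]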

If we knew the optimal solution to a separable convex problem a priori (which, of course, we don't), then we could solve the problem exactly using a linear approximation for any arc (i,j) with only three breakpoints: at 0, at the optimal flow on the arc, and at u_ij. Any other breakpoint in the linear approximation would be irrelevant, and adding other points would be computationally wasteful. This observation has prompted researchers to devise adaptive approximations that iteratively revise the linear approximation based upon the solution to a previous, coarser approximation. (See Meyer [1979] for an example of this approach.) If we were interested in only integer solutions, then we could choose the breakpoints of the linear approximation at the set of integer values, and therefore solve the problem in pseudopolynomial time.

Researchers have suggested other solution strategies, using ideas from nonlinear programming, for solving the general separable convex cost flow problem. Some important references on this topic are Ali, Helgason and Kennington [1978], Kennington and Helgason [1980], Meyer and Kao [1981], Dembo and Klincewicz [1981], Klincewicz [1983], Rockafellar [1984], Florian [1986], and Bertsekas, Hosein and Tseng [1987].

Some versions of the convex cost flow problem can be solved in polynomial time. Minoux [1984] has devised a polynomial-time algorithm for one of its special cases, the minimum quadratic cost flow problem. Minoux [1986] has also developed a polynomial-time algorithm to obtain an integer optimum solution of the convex cost flow problem.

Multicommodity Flows

Multicommodity flow problems arise when several commodities use the same underlying network but share common arc capacities. In this section, we state a linear programming formulation of the multicommodity minimum cost flow problem and point the reader to contributions to this problem and its specializations. Suppose that the problem contains r distinct commodities numbered 1 through r, and let b^k denote the supply/demand vector of commodity k. Then the multicommodity minimum cost flow problem can be formulated as follows:

Minimize Σ{Σ{c^k_ij x^k_ij : (i,j) ∈ A} : k = 1, ..., r}    (6.3a)

subject to

Σ{x^k_ij : (i,j) ∈ A} − Σ{x^k_ji : (j,i) ∈ A} = b^k_i, for all i ∈ N and all k,    (6.3b)

Σ{x^k_ij : k = 1, ..., r} ≤ u_ij, for all (i,j) ∈ A,    (6.3c)

0 ≤ x^k_ij ≤ u^k_ij, for all (i,j) ∈ A and all k.    (6.3d)

In this formulation, x^k_ij and c^k_ij represent the amount of flow and the unit cost of flow for commodity k on arc (i,j). As indicated by the "bundle constraints" (6.3c), the total flow on any arc cannot exceed its capacity. Further, as captured by (6.3d), the model also permits restrictions on the flow of each commodity on each arc. Observe that if the multicommodity flow problem does not contain bundle constraints, then it decomposes into r single commodity minimum cost flow problems. With the presence of the bundle constraints, the essential problem is to distribute the capacity of each arc to the individual commodities in a way that minimizes overall flow costs.

We first consider some special cases. The multicommodity maximum flow problem is a special instance of (6.3). In this problem, every commodity k has a source node s^k and a sink node t^k, and the objective is to maximize the sum of the flows that can be sent from s^k to t^k for all k. Hu [1963] showed how to solve the two-commodity maximum flow problem on an undirected network in pseudopolynomial time by a labeling algorithm. Rothfarb, Shein and Frisch [1968] showed how to solve the multicommodity maximum flow problem with a common source or a common sink by a single application of any maximum flow algorithm.

Researchers have proposed three basic approaches for solving the general multicommodity minimum cost flow problem: price-directive decomposition, resource-directive decomposition, and partitioning methods. Ford and Fulkerson [1958] solved the general multicommodity maximum flow problem using a column generation algorithm; Dantzig and Wolfe [1960] subsequently generalized this decomposition approach to linear programming. We refer the reader to the excellent surveys by Assad [1978] and Kennington [1978] for descriptions of these methods.
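Price-directive decomposition exploits the structure just noted: dualizing the bundle constraints (6.3c) with nonnegative prices makes the problem separate by commodity. The sketch below is only our illustration of one such Lagrangian step (the per-commodity solver is a stand-in for any single commodity minimum cost flow code, and all names are ours).

    def lagrangian_step(commodities, arcs, cost, u, w, solve_min_cost_flow):
        # w[(i, j)] >= 0 are prices on the bundle constraints (6.3c).
        # Charging each commodity cost c^k_ij + w_ij decouples the problem
        # into r single commodity subproblems; the returned value is the
        # Lagrangian bound, a lower bound on the optimal value of (6.3a).
        total = -sum(w[a] * u[a] for a in arcs)   # constant term -sum w_ij u_ij
        flows = {}
        for k in commodities:
            priced = {a: cost[k][a] + w[a] for a in arcs}
            flows[k] = solve_min_cost_flow(priced)  # commodity k's subproblem
            total += sum(priced[a] * flows[k][a] for a in arcs)
        return flows, total

Adjusting the prices w between such steps (for example, raising them on overloaded arcs) is the essence of the price-directive methods surveyed in the references above.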

the network might .3). the network must be a tree. The book by Kennington and Helgason [1980] describes the details of a primal simplex decomposition algorithm for the multicommodity minimum cost flow problem. Unfortunately.188 the excellent surveys by Assad [1978] and Kennington [1978] for descriptions of these methods.3c) in the convex cost k These constraints force the flow the arc is x^- of each if commodity k on the arc is arc (i. for finding optimal routings in a on analysis rather than synthesis. The design problem is of its considerable importance in practice and has generated an extensive literature of own. These network design models contain is that indicate whether or not an arc included in the network. Many design problems can be stated as fixed cost network flow problems: is (some) arcs have an associated fixed cost which incurred whenever the arc carries 0-1 variables yjj any flow. some may restrict the underlying network topology (for instance. these models involve k x^. in other applications. Network Design We network. Although specialized primal simplex software can solve the single commodity problem 10 to 100 times faster than the general purpose linear programming systems. the algorithms developed for the multicommodity minimum cost flow problems generally solve thse problems about 3 times faster than the general purpose software (see Ali et [1984]). Typically. algorithmic developments on the multicommodity minimum made on cost flow problem have not progressed at nearly the pace as the progress the single commodity minimum cost flow problem. of the form (6. in some applications. related The design decisions yjj and routing decisions by "forcing" constraints of the form 2 k=l ''ii - "ij yij ^^^ ' ^" ^^'^^ which replace the bundle constraints multicommodity flow problem (6. have focused on solution methods that is. restricts the total included. al. for example. the constraint on arc Ujj (i.j) flow to be the arc's design capacity constraints Many modelling enhancements are possible.are multicommodity flows.j) to be zero if not included in the network design.

need alternate paths to ensure reliable operations). Many different objective functions arise in practice. One of the most popular is

Minimize Σ{Σ{c^k_ij x^k_ij : (i,j) ∈ A} : k = 1, ..., r} + Σ{F_ij y_ij : (i,j) ∈ A},

which models commodity dependent per unit routing costs c^k_ij as well as fixed costs F_ij for the design arcs.

Usually, network design problems require solution techniques from integer programming and other types of solution methods from combinatorial optimization. These solution methods include dynamic programming, dual ascent procedures, optimization-based heuristics, and integer programming decomposition (Lagrangian relaxation, Benders decomposition), as well as emerging ideas from the field of polyhedral combinatorics. Magnanti and Wong [1984] and Minoux [1985, 1987] have described the broad range of applicability of network design models and summarize solution methods for these problems, as well as many references from the network design literature. Nemhauser and Wolsey [1988] discuss many of the underlying methods from integer programming and combinatorial optimization.

Acknowledgments

We are grateful to Michel Goemans, Hershel Safer, Lawrence Wolsey, Richard Wong and Robert Tarjan for a careful reading of the manuscript and many useful suggestions. We are particularly grateful to William Cunningham for many valuable and detailed comments.

The research of the first and third authors was supported in part by the Presidential Young Investigator Grant 8451517-ECS of the National Science Foundation, by Grant AFOSR-88-0088 from the Air Force Office of Scientific Research, and by Grants from Analog Devices, Apple Computer, Inc., and Prime Computer.


Southern Methodist University. Whitman.. R. D. K. Laboratory for Computer Science.. Dept. Res. Cambridge. A Primal Method for the Assignment and Transportation Problems. Balinski. 1977b. Barahona.. Oper. J.D. Symposium on .L. and D. and E. and D. M. Networks 8. F.37-91. 4. A Survey. J. N. Multicommodity Network Problems: Applications and Computations. Helgason. Note on Weintraub's Minimum Cost Flow Algorithm. Baratz. A. Barr. Operations Research.L. Patty. and J. 1987. 16. The Alternating Path for the Assignment Problem. B. Proceedings External Methods and System Analysis. North Carolina State University. F. Multicommodity Network Flows Balinski. Euro. The Convex Cost Netwrork Flow Problem: A State-of-the-Art Survey. Glover. V. Glover. Bamett.C. Prog. 403-420. and D. Shetty. Comory. Trans. B. M. Technical Report OREM 78001.E. 1977a. 1980. 1985. Ali. Tardos. Oper. Kennington.. Klingman. Res. and R. Construction and Analysis of a Network Flow Problem Which Technical Report TM-83. B. R. Sci. 1977.E. Implementation and Analysis of a Variant of the Dual Method for the Capacitated Transshipment Problem. Armstrong. LIE. R. A Genuinely Polynomial Primal Simplex Algorithm for the Research Report. M. 1984. 1978. Forces Karzanov Algorithm to O(n^) Running Time. Klingman. L. 1985b. 578-593. Assad. Farhangian. McCarl and P. MA. A.I. of Mathematics. I. Basis Algorithm Ban.. Raleigh. Ali. 1964. A. Cambridge. F. 12. 527-536. 10. M. Department of Computer Science and Assignment Problem... 33. Math.. MIT.. MA. 1978.I.191 Akgul. A Network Augmenting of the International Path Basis Algorithm for Transshipment Problems. Signature Methods for the Assignment Problem.T. Texeis. D. Kennington. Klingman. Research Report.127-134. Wong. 1-13. Man. R.

1987. Athens. 1979.. 1978.P.1219-1243. Bazaraa. A Nev^ Algorithm for the Assignment Problem. Math. of Operations Research 14. Bertsekas.. Hosein. and A. Barr.P. D. Math.. Laboratory Cambridge. Games and Transportation Networks. John Wiley & Sons. 1986. Bertsekas. . A. 1981. and D. Bertsekas. To appear Bertsekas.I..T. M. 1962. Generalized Alternating Path Algorithm for Transportation Problems. D.. Oper. Klingman. IXial Coordinate Step Methods for Linear Network Flow Problems. M. Appl. Prog. and P. Report LIDS-P-1653. Bertsekas. 21. Series B. Working Paper.. & Sons. Also in Annals 1988. R. Bertsekas. The Auction Algorithm: A Distributed Relaxation Method for the Assignment Problem. 32. John Wiley 1979.192 Barr. SIAM of Control and Optimization .. Gallager. A Distributed Algorithm for the Assignment Problem. INFOR J. in Math. Tseng. Euro. Glover. D. 1985. Greece. 2.. Enhancement 17. and R. ]. Prentice-Hall.. MA. 1958. Bellman. 16. of 25th IEEE Conference on Decision and Control. 105-123. 87-90. Laboratory for Information Decision systems. Bertsekas. Flow Problems with Convex Arc Costs. Bertsekas. Distributed Relaxation Methods for Linear Network Flow Problems. Glover.I. On a Routing Problem. Data Networks. D. Prog. P. R. Berge. Jarvis. 16-34. A Unified Framev^ork for Primal-Dual Methods in Minimum Cost Network Flow Problems.P. R. 152-171. D.. QuaH. Res. D. Math. P.T. Prog. P. MA. D. C. Proc. 1987. of Spanning Tree Labeling Procedures for Network Optimization.J. P. M. Eckstein. 125-145. Linear Programming and Network Flows. Klingman. 25. Cambridge. D. 137-144. 1987. P. Relaxation Methods for Network J. Programming.P. F. and 1978. and D. and J. Ghouila-Houri. for Information Decision Systems.

O. R. 1977. P. Comp.. Tseng. 10.. In B.G. and E. Computer Science Group.O. Simeone. and J. Magnanti. Ithaca. J. Bradley. P. O. Design and Implementation of Large Sri. D. Cornell University.). Sys. Routing and Scheduling and Crews. and P.. Algorithms and Codes for the Assignment Problem. Busaker. 21. Optimization. Math. R. and Orlin.. G. 125-190. A.G.L. Graves. 23. and M. Scale Primal Transshipment Algorithms. Theory 10. Res. 1961. Technical Report No. . Bland. (eds. Golden. Baltimore.P. 93-114. An Efficient Algorithm for the Bipartite Matching Problem. Eur. 1-38. N. L. 1983. and P. 1988. On the Computational Behavior of a Polynomial-Time Network Flow Algorithm. The Relax Codes al. Carraresi. of Vehicles L. Kaas. Martello. R.. Oper. Toth. Applied Mathematical Programming. A Procedure for Determining a Family of 15. Simeone et al. India. Brown. 1988. Tseng.193 Bertsekas. 1977. 193-224. et (ed. 1988a. C. Technical Report. and T. Oper. of Operations Research 13. Cheriyan. 1985. FORTRAN Codes for Network As Annals and P. Parametrized Worst Case Networks for Preflow Push Algorithms. A. 65-211. S. Operational MD. Zijlstra. Van Emde. Minimal-Cost Network Flow Patterns. G. 99-127.J.B. Optimization. Technical Report 661.. D. Assad. D. Sodini. 36. FORTRAN Codes for Network As Annals and J. In B. Boas. and P. Res. for Linear Minimum Cost Network Flow Problems.P. Bodin. 1986..). 1986. and G. Hax. Res. G.R. A. Bombay. B. John Hopkins University. Oper. S. Relaxation Methods for Minimum Cost Ordinary and Generalized Network Flow Problems. and D. 1977.. of Operations Research 33.. C. Jensen. 1988b. Bertsekas. Gowen. L. 86-93.. Research Office. Carpento. Bradley. Design and Implementation of an Efficient Priority Queue. P. A.Y. Addison-Wesley. Ball. Man. School of Operations Research and Industrial Engineering. Personal Communication. Boyd. Tata Institute of Fundamental Research.

Dantzig. 1962. On the Shortest Route through a Network. 1980. Math. ACM Trans. G. NJ. Princeton University Press.B. Algorithm for Cor\struction of Maximum Flow in Networks with Complexity of OCV^ Economical Problems 7. and S. 1976. . Maheshwari. Cunningham.. New Delhi. Theory of Gordon and Breach. Cunningham. 1951.B. G. 174-183. Mathematical Methods of Solution of 112-125 (in Russian). Academic Press. 11. Theoretical Properties of the Network Simplex Method. Dantzig. G. In H. 1956. Oper. Dantzig. of Oper. of Computer Science and Engineering. Graph Theory : An Algorithmic Approach.). on Math.. Analysis of Preflow Push Algorithms for Maximum Network Technical Report.B. Dantzig. A Network Simplex Method.H. 1977. On the Max-Flow Min-Cut Theorem of Networks. NY. John Wiley & Sons. and Block Triangularity Programming. India. G. 1987.C. Economeirica 23. Dept. Tucker (ed. in Linear 1955. Princeton. All Shortest Routes in a Graph. Pro^. Fulkerson. 91-92.W. 1979.W. Sd. 8..V. Man.B. Dantzig. In T. 187-190. Christophides. G. 1-16. Indian Institute of Technology. Secondary Constraints. Mafft.194 Cheriyan. 1967.H. 6.B. Inc. T. (ed. Cheung. 1960. 4. Linear Programming and Extensions. Princeton University Press. R. Kuhn and A. J.). Wolfe. Software 6. 196-208. G. Cherkasky. Rosenthiel Graphs. 1975. (ed. G.. W. Dantzig. Linear Inequalities and Related Systems. W. B. Vl ) Operation.N. N. Application of the Simplex Method to a Transportation Problem.). Res. Dantzig. 215-221. Upper Bounds. 101-111. Decomposition Principle for Linear Programs.R. Rfs. and P. In P. 1960. and D. Flow.B. Analysis of Production and Allocation. Computational Comparison of Eight Methods for the Mzocimum Network Flow Problem. Activity Koopmans 359-373. 105-116. Annals of Mathematics Study 38.

E. 1988. Prog.L.. J. West Germany. 1985. Networks 9. Canada. 1970. and C Pang. A Computational Arvalysis of Alternative Algorithms and Labeling Techniques for Finding Shortest Path Trees. Exponential Grov^h of the Simplex Method for the Shortest Path Problem.. 632-633. and J. Networks 14. Kronrod. and D. Unpublished paper. Edmonds. Dokl.269-271. Study 15. 1324-1326. 2-[5-248. 1277-1280. Reaching. 1979. U. University of Waterloo. Shortest-Route Methods: 1. . Shortest Path Algorithms: Taxonomy and Annotation. E. Meier. and B. Numeriche Mathematics 1. D. Dinic. Annals of Operations Research Derigs.195 Dembo.A. N. Network Flow Problen\s with Convex Separable Deo. 1970. Motivation and Computational Experience. 1959. 1988. R.A. Soviet Maths. S. Glover. Dinic... Kamey. Doklady 10. A Note on Two Problems in Connexion with Graphs. Programming in Networks and Graphs.A. Math. Dial. An Algorithm for Solution of the Assignment Problem. Comm. Technical Report. Algorithm 360: Shortest Path Forest with Topological Ordering.. 1979.57-102. ACM 12. Fox. 161-186. 300. University of Bayreuth. Implementing Goldberg's Max-Flow Algorithm: A Computational Investigation. 1969. Dijkstra. 275-323. Derigs. A Scaled Reduced Gradient Algorithm for Costs. U. 125-147. E. 11.V. Springer-Verlag. W. Ontario.. and M. E. The Shortest Augmenting Path Method for Solving Assignment Problems: 4. R. U. G. Math. R. Klincewicz. Res. and Vol. 1981. Klingman. Lecture Notes in Economics and Mathematical Systems. Pruning and Buckets. 1969. Denardo. F. Oper. 1984. Algorithm for Solution of a Problem of Soviet Maximum Flow in Networks with Power Estimation. 27. Derigs. Dial.

Math. and R. CA.. Shannon. 8. R. Jr. A Strongly Convergent Primal Simplex Algorithm for Generalized Networks.. F. 1962. Even. 1987. 167-196. J. Computer Science Press. Maximal Flow through a Network.. Jr.. A. 1982. 1979. Laboratory for Computer Science.. The Max-Flow Algorithm of Dinic and Karzanov: An Exposition. Canad.E. Technical Report TM-80. Ford. Research Report. 1979. Man. Santa Monica. Floyd. State University.R. S. On the Efficiency of Maximum Flow To appear in Algorithms on Networks with Small Integer Capacities. Ames. 248-264.T. Graph Algorithms. Tarjan.R. Florian. A Successive Shortest Path Algorithm for the Assignment Problem. Feiitstein.. 117-119. AM Comput. and D. 24-32. Klingman.E. Cambridge. MA. Algorithm 97: Shortest Path. L. Solving the Trar\sportation Problem. Elam. Note on Maximum Flow Through a Network.M..W. and D. Iowa Algorithmica.R. }. Department of Computer Science. M. Theory TT-2. Network Flow Theory. Comm. and R. L. IRE Trans. Fulkerson. M. Fulkerson. Even. 1975. Femandez-Baca. 1956. Res. 3..R. 370-384. Maryland. Glover... lA. Math. 1956. Karp. 1956. Nonlinear Cost Network Models in Transportation Analysis. Elias. INFOR 20. L. 1972. 345. D. Infor. 1976.. J. 399-404. Sd.196 Edmonds. and D. SI S. Math. ACM 19. 1986. Theoretical Improvements in Algorithmic Efficiency for Network Flow Problems. Ford. 4. Prog. P. 507-518. and C.U. Network Flow and Testing Graph Connectivity.R. 1956. Martel. Jr.I. 4. Report Rand Corp. . S.. Ford. Study 26. of Oper. >4CM P-923. 39-59. M. /. Even. J. on Engquist. 5. and C.

on Found. 31. Fulkerson. John Wiley & Sons. Comp. of Computing 83 - 89. 1987.R. Oper. New Bounds 5. and P.. Fulkerson. Ford. Ford. 1958. and D. Jr. and Transportation Networks. also in /. SIAM ]. Princeton. Sci of ACM 34(1987). Tarjan.Sci. M. Ford. L. M. Fredman.. Jr. A Suggested Computation for Maximal Multicommodity Network Flow. Naval Res. and DR.. L. Francis. 197 Ford. 9.. Discrete Location Theory. Man. Prog. and Problems. R. An 0(m^ log n) Capacity -Rounding Algorithm for the Minimum Problem: A Dual Framework of Tardos' Algorithm. 35. S.E. Fulkerson. D. Scaling Algorithms for Network Problems. Fulkerson. Logist. 5.... 47-54. H. Fibonacci Heaps and Their Uses in of Improved Network Optimization Algorithms. 1986.. and D. 6... 338-346. Constructing Maximal Dynamic Flows from Static Flows. Comput. Mirchandani (eds. Appl.B. 1986. 25th Annual IEEE Symp.N. and D. Quart. D. Computation of Maximum Flow in Networks. and R. 1961. 1984.. 1958. 596-615.R. R. Cost Circulation 298-309. Communication. Addison-Wesley. Gabow. 1962. 18-27. L. 148-168. 1957. 1985. To appear. Frank. and Frisch. Princeton University Press. 1971. Transmission.R.Sys.R. 419-433.N. 97-101.. Math. A Primal-Dual Algorithm for the Capacitated Hitchcock Problem. 1955.). Sci. Jr.ofComput. Math. R. 4. Gabow. and C. 2. R... Dantzig. Log.L.R. H.. on the Complexity of the Shortest Path Problem.R. Res. (submitted). Tarjan. Faster Scaling Algorithms for Network SIAM ].R.T. Fulkerson. H. NJ. . Fujishige. Flows in Networks. J. Naval Res. 277-283. An Out-of-Kilter Method for Minimal Cost Flow Problems. Fredman. 1988. Quart. Fulkerson. SIAM J.E. L. L. I.

Starchi. Gallo. Klingman.). Math. on the Found. 1983. 1984. Gallo.C. Bureau of Standards. R. Glover. Kamey. Sci. Z. Glover. The Threshold Shortest Path Algorithm. Shlifer. 1. Naamad. Washington. F. Glover. and S. Klingman. 3-79. On the Theoretical Efficiency of Various 103-111. Implementation and Computational for Comparisons of Primal. Rome. Shortest Paths: A Bibliography. Oper. and A. National Algorithms for Calculating Shortest Path Trees. Glover. Gibby. and Primal-Dual Computer Codes 4. 1980. Mead. J. EXial 1974. Maffioli. R. F.. A Performance Comparison of Labeling Technical Note 772. Klingman. Theoretical Comp. Minimum Cost Network Eow Problem. Gilsinn. G. and D. and Its E. 199-202. Klingman. OCV^/S E^/^) Algorithm for the Maximum Flow Problem. B. Pallottino. D. and S. Acta Informatica 14. 203-217. Witzgall. Simeone. and M. 136-146. Math. Schweitzer. Shortest Path Algorithms.. 21. B. Letters 2. 12-37. 27th Annual Symp. D. 221-242. The Zero Pivot Phenomenon in Transportation Problems and Computational Implications. S.198 GaUl. Sci. 1986. /. and D. Res. F. A Comparison of Pivot Selection Rules for Primal Simplex Based Network Codes. Gavish. Sofmat Document 81 -PI -4-SOFMAT-27. Italy. ofComput. Pallottino. An 0(VE log^ V) Algorithm for the Maximum Flow Problem. An 0(n^(m + n log n) log n) Sci. Proc. Glover.. Threshold Assignment Algorithm. G. Gallo. 12. 14. Study 26. F. No. 226-240... 1988. F. 1986. Networks 191-212. of Comp. Prog. P. Ruggen. Z. P. Network Flow Algorithms. . and C.. Galil. D. Z. .. C. Galil. Galil. G. Pallottino As Annals of Operations Research 13. Prog. 1973. Sys. and G. Netxvorks 14. (eds. 1980. 1981.. and D.. Z. Tardos.. In Fortran Codes for Network Optimization. Min-Cost Flow Algorithm. Toth. 1982. Glover. D. and E. 1977.

Glover. D. 1987. Man. A New Approach to the Maximum Flow /. Naval Res. D. INFOR Goldberg. Logis. Goldberg. Cambridge. Augmented Threaded Index Method for Network Optimization. 109-175. MA. S. D. Tarjan. Goldberg.F. 1106-1128. Problem. 793-813. A Computational Study on for Tranportation Start Procedures. Basis and Solution Algorithms Problem. Proc. D. Technical Report MIT/LCS/TM-291. A New Max-Flow for Algorithm. on the Theory of Comput. N. Science. Res.V. Napier. R. Laboratory for Computer MA. Whitman. 1976. Klingman. and R. Glover. F. Stutz. Combiiuitorial Algorithms for the Generalized Circulation Problem. and Tardos. Klingman. 136-146. and R.I.. Phillips. 1986. 363-376. Klingman. 1985. 41-61. Successive Approximation. A. A. F. A.V. Laboratory Computer Science. J. D. Klingman. Comprehensive Computer Evaluation and Enhancement of Maximum Flow Algorithms. and D. F.. Tarjan. Goldberg.. Glover. Man. A New Polynomially Bounded Shortest Path Algorithm. and D. 1985. Glover. E. New Polynomial Sci. 136-146.199 Glover. 293-298. D. Solving Minimum Cost Flow Problem by of Proc. 33. Quart. on the Theory Comp. AIIE Transactions Glover. Cambridge. Mote. Whitman. and D. .E. 18th ACM Symp. Applications of Management Glover. D. Kamey.T. A Primal Simplex Variant Maximum Flow F. and A. J.A.. To appear in ACM. M. Science 3. Klingman. Klingman. Change Criteria. 1984. 12. M. Netvk'ork Applications in Industry and Government. 20...I.. 19th ACM Symp.. Mote. A. 9. Plotkin. and N. 1974. for the F. 1988. 1979. 31.. Schneider. Research Report. Sd. 65-73... Phillips. and RE.V. and J. F. Shortest Path Algorithms and Their Computational Attributes.. Oper.V. Problem. 31. Klingman.T. 1974. 1985.

Goldfarb. Department of Operations Research and Industrial Engineering. As Annals of Operations Research 13. Taijan.. 388-397. 1961. . 1977. Hao. Canceling Negative Cycles. Department of Operations Research and Columbia University. Efficient Dual Simplex Algorithms for the Assignment Problem. D. 12. Goldfarb. D. Efficient Shortest Path Simplex Algorithms. Magnanti. Goldfarb. Successive Approximation. A Computational Comparison of the Dinic Flow. . 1988b. 1988. 1988a. Columbia University. Goldfarb. Hao.. Optimization. B. Finding Minimum-Cost Circulations by Symp. D. Solving Minimum Cost Flow Problem by [1987]. and R. and T. Department of Operations Research and Industrial Engineering. D. 83-124. and Network Simplex Methods for Maximum Simeone et al.V. 7. A Primal Simplex Algorithm that Solves the Maximum Flow Problem University. Research Report. A. in New York. Golden. 1S7-203. Grigoriadis. I. 1985. MA. Kai. Prog. Cambridge. 1987. L. M. Networks 149-183. NY. Controlled Rounding of Tabular Data for the Cerisus Bureau at the : An Application of LP and Networks.V.361-371.. D. and M. 1986. 1988. C. 2(Hh ACM Golden. J. (eds.. Gomory. J. and J.E. At Most nm Pivots and O(n^m) Time. A Practicable Steepest Edge Simplex Algorithm.. 33. Deterministic Network Optimization: A Bibliography.. and T. Columbia New York. 551-570. Research Report.. Industrial Engineering.. Oper. NY. Tarjan. and J. Goldfarb. Hu. and R. R. In B. Goldberg. Kai. Math. on the Theory of Comp. 1977. and S. Proc. Reid. Technical Report. Res.D.. NY. Math. Hao.ofSlAM 9.. T.E. 1986. D. Anti-Stalling Pivot Rules for the Network Simplex Algorithm. New York. and S. Multi-Terminal Network Flows. )To (A revision of Goldberg and Tarjan appear in Math.200 Goldberg. B. Prog. A. f.K. E.) FORTRAN Codes for Network Goldfarb. Seminar given OperatJons Research Center.

Bulletin of the ACM Gusfield. R. Hu. M. and M. An n ' Algorithm for Maximun Matching in Bipartite Graphs. 26. 83-111. Davis. An Efficient Procedure for 9. D. A Note on Shortest Path. /. L. Hsu. of a Product from Several Sources to Numerous Facilities. Maximum Flow in Undirected Planar Networks. Vol. C. SIAM of Comp.201 Gondran. and J. Computing Hassin. 1978. Very Simple Algorithms and Programs Dept. Oper. A. The Rutgers Minimum Cost Network Flow 26. 375-379. 1985. and R. Lecture Notes in Economics and Mathematical Systems. 17-18. Helgason. 17-29. F. M. 1988. . Multicommodity Network Flows. New Hamachar. Assignment. 1973. Graphs and Algorithms. . University of California. M. Martel. Johnson. 225-231. University. Markowitz. Prog. Log. T.. AIIE Trans. J. and T. Subroutines. 1985. 10. Grigoriadis. 1963.. E.. H. Femandez-Baca. Research Report No. V. Numerical Investigations on the Maximal Flow Algorithm of 22. 1941. J. M. 63-68.. Technical Report No.M. 612-^24. Computer Science and Engineering. Personal Communication. C. Phys . Math. 11. 2. Kennington.. Implementing Hitchcock. An Efficient Implementation of the Network Simplex Method. and D. Grigoriadis. J. 1979. 344-260. Minoux. SIGMAP 1987. 1984.-< Karzanov. D. Karp. Springer-Verlag. 1979. CSE-87-1. and D. 160. Network Row. YALEN/DCS/TR-356. L. R. Study Grigoriadis. The Distribution Math. CT.. Wiley-Interscience. Res. 224-230. D. of for All Pairs Network Flow Analysis. Hoffman. Res.. M. Hausman. CA. B. . Programming and Related Areas: A Classified Bibliography. 20. D. 1963. D. 1977. D. Yale Haven. Naval Hopcroft. An O(nlog^n) Algorithm for 14. Integer SIAM J. Comput. and H. Fast Algorithms for Bipartite Gusfield. 1986. a Dual-Simplex Network Flow Algorithm. Quart. and Transportation Problems.

202

Hu, T.C.

1969. Integer Programming and Network Flours.

Addison-Wesley.

Hung, M.
Oper.Res.

S.

1983.

A

Polynomial Simplex Method for the Assignment Problem.

31,595-600.

Hung, M.S., and W.O. Rom. 1980. Solving the Assignment Problem by Relaxation. Oper. Res. 28, 969-982.

Imai, H. 1983. On the Practical Efficiency of Various Maximum Flow Algorithms. J. Oper. Res. Soc. Japan 26, 61-82.

Imai, H., and M. Iri. 1984. Practical Efficiencies of Existing Shortest-Path Algorithms and a New Bucket Algorithm. J. of the Oper. Res. Soc. Japan 27, 43-58.

Iri, M. 1960. A New Method of Solving Transportation-Network Problems. J. Oper. Res. Soc. Japan 3, 27-87.

Iri, M. 1969. Network Flows, Transportation and Scheduling. Academic Press.

Itai, A., and Y. Shiloach. 1979. Maximum Flow in Planar Networks. SIAM J. Comput. 8, 135-150.

Jensen, P.A., and W. Barnes. 1980. Network Flow Programming. John Wiley & Sons.

Jewell, W.S. 1958. Optimal Flow Through Networks. Interim Technical Report No. 8, Operations Research Center, M.I.T., Cambridge, MA.
Jewell, W.S. 1962. Optimal Flow Through Networks with Gains. Oper. Res. 10, 476-499.

Johnson, D.B. 1977a. Efficient Algorithms for Shortest Paths in Sparse Networks. J. ACM 24, 1-13.

Johnson, D.B. 1977b. Efficient Special Purpose Priority Queues. Proc. 15th Annual Allerton Conference on Comm., Control and Computing, 1-7.
Johnson, D.B. 1982. A Priority Queue in Which Initialization and Queue Operations Take O(log log D) Time. Math. Sys. Theory 15, 295-309.

Johnson, D.B., and S. Venkatesan. 1982. Using Divide and Conquer to Find Flows in Directed Planar Networks in O(n^3/2 log n) Time. In Proceedings of the 20th Annual Allerton Conference on Comm., Control, and Computing, Univ. of Illinois, Urbana-Champaign, IL.

E. L.

1966.

Networks and Basic
1986.

Solutions. Oper. Res. 14, 619-624.

Jonker, R., and T. Volgenant.

Improving the Hungarian Assignment

Algorithm. Oper. Res.

Letters 5, 171-175.

Jonker, R., and A. Volgenant. 1987. A Shortest Augmenting Path Algorithm for Dense and Sparse Linear Assignment Problems. Computing 38, 325-340.
Kantorovich, L. V. 1939. Mathematical Methods in the Organization and Planning of Production. Publication House of the Leningrad University, 68 pp. Translated in Man. Sci. 6 (1960), 366-422.

Kapoor, S., and P. Vaidya. 1986. Fast Algorithms for Convex Quadratic Programming and Multicommodity Flows. Proc. of the 18th ACM Symp. on the Theory of Comp., 147-159.

Karmarkar, N. 1984. A New Polynomial-Time Algorithm for Linear Programming. Combinatorica 4, 373-395.

Karzanov, A. V. 1974. Determining the Maximal Flow in a Network by the Method of Preflows. Soviet Math. Doklady 15, 434-437.

Kastning, C. 1976. Integer Programming and Related Areas: A Classified Bibliography. Lecture Notes in Economics and Mathematical Systems, Vol. 128. Springer-Verlag.

Kelton, W. D., and A. M. Law. 1978. A Mean-time Comparison of Algorithms for the All-Pairs Shortest-Path Problem with Arbitrary Arc Lengths. Networks 8, 97-106.

Kennington, J. L. 1978. Survey of Linear Cost Multicommodity Network Flows. Oper. Res. 26, 209-236.

Kennington, J. L., and R. V. Helgason. 1980. Algorithms for Network Programming. Wiley-Interscience, NY.


Kershenbaum, A. 1981. A Note on Finding Shortest Path Trees. Networks 11, 399-400.

Klein, M. 1967. A Primal Method for Minimal Cost Flows. Man. Sci. 14, 205-220.

Klincewicz, J. G. 1983. A Newton Method for Convex Separable Network Flow Problems. Networks 13, 427-442.

Klingman, D., A. Napier, and J. Stutz. 1974. NETGEN: A Program for Generating Large Scale Capacitated Assignment, Transportation, and Minimum Cost Flow Network Problems. Man. Sci. 20, 814-821.

Koopmans, T. C. 1947. Optimum Utilization of the Transportation System. Proceedings of the International Statistical Conference, Washington, DC. Also reprinted as supplement to Econometrica 17 (1949).

Kuhn, H. W. 1955. The Hungarian Method for the Assignment Problem. Naval Res. Log. Quart. 2, 83-97.

Lawler, E. L. 1976. Combinatorial Optimization: Networks and Matroids. Holt, Rinehart and Winston.
Magnanti, T. L. 1981. Combinatorial Optimization and Vehicle Fleet Planning: Perspectives and Prospects. Networks 11, 179-214.

Magnanti, T. L., and R. T. Wong. 1984. Network Design and Transportation Planning: Models and Algorithms. Trans. Sci. 18, 1-56.

Malhotra, V. M., M. P. Kumar, and S. N. Maheshwari. 1978. An O(|V|^3) Algorithm for Finding Maximum Flows in Networks. Inform. Process. Lett. 7, 277-278.

Martel, C. V. 1987. A Comparison of Phase and Non-Phase Network Flow Algorithms. Research Report, Dept. of Electrical and Computer Engineering, University of California, Davis, CA.

McGinnis, L. F. 1983. Implementation and Testing of a Primal-Dual Algorithm for the Assignment Problem. Oper. Res. 31, 277-291.

Mehlhorn, K. 1984. Data Structures and Algorithms. Springer-Verlag.

Meyer, R. R. 1979. Two Segment Separable Programming. Man. Sci. 25, 285-295.

Meyer, R. R., and C. Y. Kao. 1981. Secant Approximation Methods for Convex Optimization. Math. Prog. Study 14, 143-162.

Minieka, E. 1978. Optimization Algorithms for Networks and Graphs. Marcel Dekker, New York.

Minoux, M. 1984. A Polynomial Algorithm for Minimum Quadratic Cost Flow Problems. Eur. J. Oper. Res. 18, 377-387.

Minoux, M. 1985. Network Synthesis and Optimum Network Design Problems: Models, Solution Methods and Applications. Technical Report, Laboratoire MASI, Universite Pierre et Marie Curie, Paris, France.

Minoux, M. 1986. Solving Integer Minimum Cost Flows with Separable Convex Cost Objective Polynomially. Math. Prog. Study 26, 237-239.

Minoux, M. 1987. Network Synthesis and Dynamic Network Optimization. Annals of Discrete Mathematics 31, 283-324.

Minty, G. J. 1960. Monotone Networks. Proc. Roy. Soc. London 257, Series A, 194-212.

Moore, E. F. 1957. The Shortest Path through a Maze. In Proceedings of the International Symposium on the Theory of Switching Part II; The Annals of the Computation Laboratory of Harvard University 30, Harvard University Press, 285-292.

Mulvey, J. 1978a. Pivot Strategies for Primal-Simplex Network Codes. J. ACM 25, 266-270.

Mulvey, J. 1978b. Testing a Large-Scale Network Optimization Program. Math. Prog. 15, 291-314.

Murty, K. G. 1976. Linear and Combinatorial Programming. John Wiley & Sons.

Nemhauser, G. L., and L. A. Wolsey. 1988. Integer and Combinatorial Optimization. John Wiley & Sons.

Orden, A. 1956. The Transshipment Problem. Man. Sci. 2, 276-285.

Orlin, J. B. 1983. Maximum-Throughput Dynamic Network Flows. Math. Prog. 27, 214-231.

Orlin, J. B. 1984. Genuinely Polynomial Simplex and Non-Simplex Algorithms for the Minimum Cost Flow Problem. Technical Report No. 1615-84, Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA.

Orlin, J. B. 1985. On the Simplex Algorithm for Networks and Generalized Networks. Math. Prog. Study 24, 166-178.

Orlin, J. B. 1988. A Faster Strongly Polynomial Minimum Cost Flow Algorithm. Proc. 20th ACM Symp. on the Theory of Comp., 377-387.

Orlin, J. B., and R. K. Ahuja. 1987. New Distance-Directed Algorithms for Maximum Flow and Parametric Maximum Flow Problems. Working Paper 1908-87, Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA.

Orlin, J. B., and R. K. Ahuja. 1988. New Scaling Algorithms for the Assignment and Minimum Cycle Mean Problems. Working Paper No. OR 178-88, Operations Research Center, Massachusetts Institute of Technology, Cambridge, MA.

Papadimitriou, C. H., and K. Steiglitz. 1982. Combinatorial Optimization: Algorithms and Complexity. Prentice-Hall.

Pape, U. 1974. Implementation and Efficiency of Moore-Algorithms for the Shortest Route Problem. Math. Prog. 7, 212-222.

Pape, U. 1980. Algorithm 562: Shortest Path Lengths. ACM Trans. Math. Software 6, 450-455.

Phillips, D. T., and A. Garcia-Diaz. 1981. Fundamentals of Network Analysis. Prentice-Hall.

Pollack, M., and W. Wiebenson. 1960. Solutions of the Shortest-Route Problem - A Review. Oper. Res. 8, 224-230.

Potts, R. B., and R. M. Oliver. 1972. Flows in Transportation Networks. Academic Press.

Rock, H. 1980. Scaling Techniques for Minimal Cost Network Flows. In U. Pape (ed.), Discrete Structures and Algorithms. Carl Hanser, Munich, 181-191.

Rockafellar, R. T. 1984. Network Flows and Monotropic Optimization. Wiley-Interscience.

Roohy-Laleh, E. 1980. Improvements to the Theoretical Efficiency of the Network Simplex Method. Unpublished Ph.D. Dissertation, Carleton University, Ottawa, Canada.

Rothfarb, B., N. P. Shein, and I. T. Frisch. 1968. Common Terminal Multicommodity Flow. Oper. Res. 16, 202-205.

Sheffi, Y. 1985. Urban Transportation Networks: Equilibrium Analysis with Mathematical Programming Methods. Prentice-Hall.

Shiloach, Y. 1978. An O(nI log^2 I) Maximum Flow Algorithm. Technical Report STAN-CS-78-702, Computer Science Dept., Stanford University, CA.

Shiloach, Y., and U. Vishkin. 1982. An O(n^2 log n) Parallel Max-Flow Algorithm. J. Algorithms 3, 128-146.

Sleator, D. D., and R. E. Tarjan. 1983. A Data Structure for Dynamic Trees. J. Comput. Sys. Sci. 26, 362-391.

Smith, D. K. 1982. Network Optimisation Practice: A Computational Guide. John Wiley & Sons.

Srinivasan, V., and G. L. Thompson. 1973. Benefit-Cost Analysis of Coding Techniques for the Primal Transportation Algorithm. J. ACM 20, 194-213.

Swamy, M. N. S., and K. Thulsiraman. 1981. Graphs, Networks, and Algorithms. John Wiley & Sons.

Syslo, M. M., N. Deo, and J. S. Kowalik. 1983. Discrete Optimization Algorithms. Prentice-Hall, New Jersey.

Tabourier, Y. 1973. All Shortest Distances in a Graph: An Improvement to Dantzig's Inductive Algorithm. Disc. Math. 4, 83-87.

Tardos, E. 1985. A Strongly Polynomial Minimum Cost Circulation Algorithm. Combinatorica 5, 247-255.

Tarjan, R. E. 1983. Data Structures and Network Algorithms. SIAM, Philadelphia, PA.

Tarjan, R. E. 1984. A Simple Version of Karzanov's Blocking Flow Algorithm. Oper. Res. Letters 2, 265-268.

Tarjan, R. E. 1986. Algorithms for Maximum Network Flow. Math. Prog. Study 26, 1-11.

Tarjan, R. E. 1987. Personal Communication.

Tarjan, R. E. 1988. Personal Communication.

Tomizava, N. 1972. On Some Techniques Useful for Solution of Transportation Network Problems. Networks 1, 173-194.

Truemper, K. 1977. On Max Flow with Gains and Pure Min-Cost Flows. SIAM J. Appl. Math. 32, 450-456.

Vaidya, P. 1987. An Algorithm for Linear Programming which Requires O(((m+n)n^2 + (m+n)^{1.5} n)L) Arithmetic Operations. Proc. of the 19th ACM Symp. on the Theory of Comp., 29-38.

Van Vliet, D. 1978. Improved Shortest Path Algorithms for Transport Networks. Transp. Res. 12, 7-20.

Von Randow, R. 1982. Integer Programming and Related Areas: A Classified Bibliography 1978-1981. Lecture Notes in Economics and Mathematical Systems, Vol. 197. Springer-Verlag.

Von Randow, R. 1985. Integer Programming and Related Areas: A Classified Bibliography 1981-1984. Lecture Notes in Economics and Mathematical Systems, Vol. 243. Springer-Verlag.

Wagner, R. A. 1976. A Shortest Path Algorithm for Edge-Sparse Graphs. J. ACM 23, 50-57.

Warshall, S. 1962. A Theorem on Boolean Matrices. J. ACM 9, 11-12.

Weintraub, A. 1974. A Primal Algorithm to Solve Network Flow Problems with Convex Costs. Man. Sci. 21, 87-97.

Weintraub, A., and F. Barahona. 1979. A Dual Algorithm for the Assignment Problem. Departmente de Industrias Report No. 2, Universidad de Chile-Sede Occidente, Chile.

Whiting, P. D., and J. A. Hillier. 1960. A Method for Finding the Shortest Route Through a Road Network. Oper. Res. Quart. 11, 37-40.

Williams, J. W. J. 1964. Algorithm 232: Heapsort. Comm. ACM 7, 347-348.

Zadeh, N. 1972. Theoretical Efficiency of the Edmonds-Karp Algorithm for Computing Maximal Flows. J. ACM 19, 184-192.

Zadeh, N. 1973a. A Bad Network Problem for the Simplex Method and Other Minimum Cost Flow Algorithms. Math. Prog. 5, 255-266.

Zadeh, N. 1973b. More Pathological Examples for Network Flow Problems. Math. Prog. 5, 217-224.

Zadeh, N. 1979. Near Equivalence of Network Flow Algorithms. Technical Report No. 26, Dept. of Operations Research, Stanford University, CA.

UsefulNot useful