
What is Optimization?

Optimization is derived from the Latin word optimus, "the best."

Optimization characterizes the activities involved in finding the best.

What is Optimization?

Optimization is the mathematical discipline which is concerned with finding the maxima and minima of functions, possibly subject to constraints.

Optimization is the act of obtaining the best results under given circumstances.

Objective function: This is the quantity (or quantities) that you are trying to optimize. It is sometimes referred to as a target.

Optimization variables: These are the variables you can change (sometimes called the changing variables) in order to achieve your optimum solution.
Maximize: In some optimization problems, you seek to make the objective function as large as possible. Such problems are maximization problems. Minimize: In some optimization problems, you seek to make the objective function as small as possible. Such problems are minimization problems.

Explicit constraint: Explicit constraints describe those items that are clearly given to you as goals during your optimization process. For example, limitations on resources (materials, labour) and limitations on demand are often stated explicitly.

Implicit constraint: Implicit constraints refer to those quantities that you must recognize are also constraints on your optimization process.

For example, in optimizing company profits by producing different quantities of different goods, the number of units of each good to produce might need to be an integer. The quantity produced must also be nonnegative.

Optimality Criteria
In considering optimization problems, two questions generally must be addressed:
1. Static question: How can one determine whether a given point x* is the optimal solution?
2. Dynamic question: If x* is not the optimal point, then how does one go about finding a solution that is optimal?

General Ideas of Optimization


There are two ways of examining optimization.
Maximization (example: maximize profit): in this case you are looking for the highest point on the function.
Minimization (example: minimize cost): in this case you are looking for the lowest point on the function.

Optimization theory finds ready application in all branches of engineering in four primary areas:

1. Design of components or entire systems
2. Planning and analysis of existing operations
3. Engineering analysis and data reduction
4. Control of dynamic systems

We use optimization to obtain:
Minimal cost
Maximum profit
Best approximation
Optimal design
Optimal management or control, etc.

Where would we use optimization?
Design of civil engineering structures such as frames, foundations, bridges, towers, chimneys and dams for minimum cost.
Optimal plastic design of frame structures (e.g., to determine the ultimate moment capacity for minimum weight of the frame).
Design of water resources systems for obtaining maximum benefit.
Design of optimum pipeline networks for the process industry.

Where would we use optimization? . .
Finding the optimal trajectories of space vehicles.
Optimum design of linkages, cams, gears, machine tools, and other mechanical components.
Selection of machining conditions in metal-cutting processes for minimizing the product cost.
Design of material handling equipment such as conveyors, trucks and cranes for minimizing cost.
Design of pumps, turbines and heat transfer equipment for maximum efficiency.

Where would we use optimization? . .
Optimum design of control systems.
Optimum design of chemical processing equipment and plants.
Selection of a site for an industry.
Planning of maintenance and replacement of equipment to reduce operating costs.
Allocation of resources or services among several activities to maximize the benefit.
Controlling the waiting and idle times in production lines to reduce the cost of production.

Where would we use optimization? . .
Planning the best strategy to obtain maximum profit in the presence of a competitor.
Designing the shortest route to be taken by a salesperson to visit various cities in a single tour.
Optimal production planning, controlling and scheduling.
Analysis of statistical data and building empirical models to obtain the most accurate representation of the statistical phenomenon.

Where would we use optimization? . .
Design of aircraft and aerospace structures for minimum weight.
Optimum design of electrical machinery such as motors, generators and transformers.
Optimal location of telecommunication towers.

What is a Function?

A function is a rule that assigns to every choice of x a unique value y = f(x). The domain of a function is the set of all possible input values (usually x) for which the function formula works. The range is the set of all possible output values (usually y) that result from using the function formula.

What is a Function?
Unconstrained and constrained functions
Unconstrained: the domain is the entire set of real numbers R.
Constrained: the domain is a proper subset of R.
Continuous, discontinuous and discrete functions

What is a Function?
Monotonic and unimodal functions
Monotonic: a function that is either entirely nondecreasing or entirely nonincreasing over the interval of interest.

Unimodal: f(x) is unimodal on the interval if and only if it is monotonic on either side of the single optimal point x* in the interval. Unimodality is an extremely important functional property used in optimization.

A monotonic increasing function

A monotonic decreasing function

A unimodal function
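A small numeric illustration of these definitions (our own example, not from the slides): f(x) = (x - 2)^2 on [0, 5] is not monotonic over the whole interval, but it decreases monotonically up to its single minimum x* = 2 and increases afterwards, so it is unimodal.

```python
# Sample f(x) = (x - 2)**2 on a grid over [0, 5] and test the two properties.

def is_unimodal(values):
    """True if the sequence decreases (non-strictly) to a single
    minimum and then increases afterwards."""
    i = values.index(min(values))
    falling = all(a >= b for a, b in zip(values[:i], values[1:i + 1]))
    rising = all(a <= b for a, b in zip(values[i:], values[i + 1:]))
    return falling and rising

xs = [k * 0.1 for k in range(51)]     # grid on [0, 5]
fs = [(x - 2) ** 2 for x in xs]

unimodal = is_unimodal(fs)            # True: monotonic on each side of x* = 2
monotonic = (fs == sorted(fs))        # False: f is not monotonic on [0, 5]
```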

An objective function is defined which needs to be either maximized or minimized. The objective function may be technical or economic.

Examples of economic objectives are profits, costs of production, etc. A technical objective may be the yield from a reactor that needs to be maximized, the minimum size of an equipment, etc.

The choice of the objective function is governed by the nature of the problem.

The geometric characteristics of the objective function play an important role in the solution of the optimization problem. There are two different types of geometric characteristics: a unimodal function has a single optimum (maximum or minimum) on the interval [a, b], while a multimodal function has several local optima on [a, b].

Classification of the objective functions

Maximization of f(x) is equivalent to minimization of -f(x).

It can be seen from the figure that if a point x* corresponds to the minimum value of the function f(x), the same point also corresponds to the maximum value of the negative of the function, -f(x).

The following operations on the objective function will not change the optimum solution x*:

1. Multiplication (or division) of f(x) by a positive constant c.
2. Addition (or subtraction) of a positive constant c to (or from) f(x).
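These two invariances can be checked numerically. A minimal sketch, using a function of our own choosing, f(x) = (x - 3)^2: its minimizer over a grid is unchanged when f is scaled by a positive constant or shifted by a constant.

```python
# The grid minimizer of f is the same for f, c*f, and f + c (c > 0).

def argmin_on_grid(f, xs):
    return min(xs, key=f)

xs = [k * 0.01 for k in range(1001)]                  # grid on [0, 10]
f = lambda x: (x - 3) ** 2

x_star = argmin_on_grid(f, xs)
x_scaled = argmin_on_grid(lambda x: 5.0 * f(x), xs)   # c * f(x)
x_shifted = argmin_on_grid(lambda x: f(x) + 7.0, xs)  # f(x) + c
```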

STATEMENT OF A CONSTRAINED OPTIMIZATION PROBLEM
Problems where a set of optimal conditions needs to be found subject to a set of additional constraints on the variables.

subject to the constraints

where X is an n-dimensional vector called the design vector, f(X) is termed the objective function, and gj(X) and lj(X) are known as the inequality and equality constraints, respectively. The number of variables n and the number of constraints m and/or p need not be related in any way.

Optimization with constraints

min f(x, y) = x² + 2y²  subject to  x ≥ 0

or

min f(x, y) = x² + 2y²  subject to  2x ≤ 5, y ≥ 1

or

min f(x, y) = x² + 2y²  subject to  x + y ≥ 2
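A constrained example of this kind can be solved numerically. The sketch below is ours, not from the slides: we assume the objective f(x, y) = x² + 2y² with the single constraint x + y ≥ 2, and use SciPy's SLSQP solver (a tool choice of ours). The analytic optimum is x = 4/3, y = 2/3.

```python
from scipy.optimize import minimize

# Assumed example: minimize x**2 + 2*y**2 subject to x + y >= 2.
f = lambda v: v[0] ** 2 + 2 * v[1] ** 2

# SciPy expects inequality constraints in the form g(v) >= 0.
cons = ({"type": "ineq", "fun": lambda v: v[0] + v[1] - 2},)

res = minimize(f, x0=[1.0, 1.0], constraints=cons, method="SLSQP")
x_opt, y_opt = res.x          # approximately (4/3, 2/3)
f_opt = res.fun               # approximately 8/3
```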

STATEMENT OF AN UNCONSTRAINED OPTIMIZATION PROBLEM
Problems where a set of optimal conditions needs to be found without any additional constraints on the variables. In other words, unconstrained optimization is concerned with the practical computational task of finding minima or maxima of functions of one, several or even millions of variables.

f(X) is called the objective function, where X is an n-dimensional vector called the design vector and its components are called the design or decision variables. The design variables are collectively represented as the design vector X.

Unconstrained optimization

min f(x, y) = x² + 2y²

Essential features of optimization problem

An objective function is defined which needs to be either maximized or minimized. The objective function may be technical or economic. Examples of economic objectives are profits, costs of production, etc. A technical objective may be the yield from a reactor that needs to be maximized, the minimum size of an equipment, etc.

Essential features of optimization problem. .

Underdetermined system:
If all the design variables are fixed, there is no optimization. Thus one or more variables is relaxed, and the system becomes an underdetermined system, which has, at least in principle, an infinite number of solutions.

Essential features of optimization problem. .

Restrictions:
Usually the optimization is done subject to certain restrictions or constraints. Thus, the amount of raw material may be fixed, or there may be other design restrictions. Hence in most problems the absolute minimum or maximum is not sought, but rather a restricted optimum, i.e., the best possible under the given conditions.

Constraint surfaces in a hypothetical two-dimensional design space

Depending on whether a particular design point belongs to the acceptable or unacceptable region, it can be identified as one of the following four types:
1. Free and acceptable point
2. Free and unacceptable point
3. Bound and acceptable point
4. Bound and unacceptable point

Design points that do not lie on any constraint surface are known as free points

The set of values of X that satisfy the equation gj (X) = 0 forms a hypersurface in the design space and is called a constraint surface.

A design point that lies on one or more constraint surfaces is called a bound point, and the associated constraint is called an active constraint.

A contour line of a function of two variables is a curve along which the function has a constant value. A contour plot consists of contour lines where each contour line indicates a specific value of the function

The locus of all points satisfying f(X) = C = constant forms a hypersurface in the design space, and each value of C corresponds to a different member of a family of surfaces. These surfaces, called objective function surfaces, can be shown in a hypothetical two-dimensional design space.

Once the objective function surfaces are drawn along with the constraint surfaces, the optimum point can be determined without much difficulty. But the main problem is that as the number of design variables exceeds two or three, the constraint and objective function surfaces become too complex even to visualize, and the problem has to be solved purely as a mathematical problem.

STEPS IN FORMULATION OF AN OPTIMISATION PROBLEM

Phases of Solving Problems

There is no single method available for solving all optimization problems efficiently. Hence a number of optimization methods have been developed for solving different types of optimization problems.

Classification of Optimization problems


Classification based on the existence of constraints:
Unconstrained
Constrained

Classification based on the nature of the design variables:
Parameter or static optimization (find a set of design parameters)
Trajectory or dynamic optimization (each design variable is a function of one or more parameters)

Classification based on the physical structure of the problem:
Optimal control (a mathematical programming problem involving a number of stages)
Non-optimal control

Classification based on the nature of the equations involved:
Linear, nonlinear, geometric, and quadratic programming problems

Classification based on the permissible values of the design variables:
Integer, real-valued, and mixed programming problems (depending on the values the design variables are restricted to)

Classification based on the number of objective functions:
Single-objective and multi-objective programming problems

Classification based on the deterministic nature of the variables:
Deterministic and stochastic programming (in which some or all of the parameters are probabilistic)

Classification based on the capability of the search algorithm:
Search for a local minimum; global optimization; multiple objectives; etc.

Classification based on the type of solution method:
Analytical methods, search methods, graphical methods, experimental methods, numerical methods

For an unconstrained minimization problem:

Function characteristics: a solution exists and the function is smooth; the function may be complicated (multiple minima or maxima); good starting points may be unknown or difficult to compute.

Challenges: finding the solution in a reasonable amount of time; knowing when the solution has been found.

SOLUTION METHODS FOR UNCONSTRAINED OPTIMIZATION

1. Descent methods
2. Newton's method
3. Conjugate direction methods
4. Conjugate gradient algorithm
5. Quasi-Newton methods
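As a concrete sketch of the second item (illustrative; the slides only name the methods), the one-dimensional Newton iteration for minimization is x_{k+1} = x_k - f'(x_k)/f''(x_k). The test function f(x) = x⁴ - 3x² + 2 below is our own choice; started from x0 = 2, the iteration converges to the minimizer sqrt(3/2).

```python
# Newton's method for 1-D minimization: iterate on the derivative's root.

def newton_min(df, d2f, x0, tol=1e-10, max_iter=100):
    x = x0
    for _ in range(max_iter):
        step = df(x) / d2f(x)   # Newton step on f'(x) = 0
        x -= step
        if abs(step) < tol:     # stop when the step is negligible
            break
    return x

df = lambda x: 4 * x ** 3 - 6 * x    # f'(x) for f(x) = x**4 - 3*x**2 + 2
d2f = lambda x: 12 * x ** 2 - 6      # f''(x)

x_star = newton_min(df, d2f, x0=2.0)  # converges to sqrt(3/2)
```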

Unconstrained multi-parameter optimization techniques
Direct search (no information on derivatives used):
Hooke-Jeeves pattern search
Nelder-Mead's sequential simplex method
Powell's conjugate directions method
Various evolutionary techniques
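A direct-search run can be sketched with SciPy's Nelder-Mead implementation (our tool and test-function choice, not from the slides): it minimizes the Rosenbrock function using function values only, with no derivative information.

```python
from scipy.optimize import minimize

# Nelder-Mead simplex search on the Rosenbrock function, minimum at (1, 1).
rosen = lambda v: (1 - v[0]) ** 2 + 100 * (v[1] - v[0] ** 2) ** 2

res = minimize(rosen, x0=[-1.2, 1.0], method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-8})
```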

Unconstrained multi-parameter optimization techniques
Gradient-based methods (information on first derivatives is used):
Steepest descent
Fletcher-Reeves' conjugate gradient method
Second-order methods (information on the second derivatives is used):
Newton's method
Quasi-Newton methods (construct an approximation of the matrix of second derivatives)
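The first of the gradient-based methods can be sketched in a few lines. This is a minimal steepest-descent loop with a fixed step size (an assumption of ours for brevity; practical codes use a line search), applied to the convex quadratic f(x, y) = x² + 2y² with gradient (2x, 4y) and minimum at the origin.

```python
# Steepest descent: repeatedly move against the gradient direction.

def steepest_descent(grad, x0, step=0.1, tol=1e-10, max_iter=10000):
    x = list(x0)
    for _ in range(max_iter):
        g = grad(x)
        x = [xi - step * gi for xi, gi in zip(x, g)]   # x <- x - step * grad
        if sum(gi * gi for gi in g) < tol ** 2:        # stop when gradient ~ 0
            break
    return x

grad = lambda v: (2 * v[0], 4 * v[1])   # gradient of x**2 + 2*y**2
x_star = steepest_descent(grad, x0=(3.0, -2.0))
```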

Constrained Optimization

Constrained Optimization involves finding the optimum to some decision problem in which the decision-maker faces constraints.
Examples: constraints of money, time, capacity, or energy.

Methods for Solving Constrained Optimization Problems

Penalty function method
Lagrange multiplier method
Augmented Lagrangian for inequality constraints
Quadratic programming
Gradient projection method for equality constraints
Gradient projection method for inequality constraints

Nonlinear Programming Optimization Methods:
Sequential quadratic programming (SQP)
Augmented Lagrangian method
Generalized reduced gradient method
Projected augmented Lagrangian
Successive linear programming (SLP)
Interior point methods, etc.

Methodologies in Optimization
Convex programming studies the case when the objective function is convex (minimization) or concave (maximization) and the constraint set is convex. This can be viewed as a particular case of nonlinear programming, or as a generalization of linear or convex quadratic programming.

Linear programming (LP), a type of convex programming, studies the case in which the objective function f is linear and the set of constraints is specified using only linear equalities and inequalities. Such a set is called a polyhedron, or a polytope if it is bounded. Second-order cone programming (SOCP) is a convex program and includes certain types of quadratic programs.

Semidefinite programming (SDP) is a subfield of convex optimization where the underlying variables are semidefinite matrices. It is a generalization of linear and convex quadratic programming.

Conic programming is a general form of convex programming. LP, SOCP and SDP can all be viewed as conic programs with the appropriate type of cone.

Geometric programming is a technique whereby objective and inequality constraints expressed as posynomials and equality constraints expressed as monomials can be transformed into a convex program.

Integer programming studies linear programs in which some or all variables are constrained to take on integer values. This is not convex, and in general much more difficult than regular linear programming.

Quadratic programming allows the objective function to have quadratic terms, while the feasible set must be specified with linear equalities and inequalities. For specific forms of the quadratic term, this is a type of convex programming. Fractional programming studies optimization of ratios of two nonlinear functions. The special class of concave fractional programs can be transformed into a convex optimization problem.

Nonlinear programming studies the general case in which the objective function or the constraints or both contain nonlinear parts. This may or may not be a convex program. In general, whether the program is convex affects the difficulty of solving it.

Stochastic programming studies the case in which some of the constraints or parameters depend on random variables. Robust programming is, like stochastic programming, an attempt to capture uncertainty in the data underlying the optimization problem. This is not done through the use of random variables; instead, the problem is solved taking into account inaccuracies in the input data. Combinatorial optimization is concerned with problems where the set of feasible solutions is discrete or can be reduced to a discrete one.

Infinite-dimensional optimization studies the case when the set of feasible solutions is a subset of an infinite-dimensional space, such as a space of functions.

Heuristics and metaheuristics make few or no assumptions about the problem being optimized. Usually, heuristics do not guarantee that an optimal solution will be found. On the other hand, heuristics are used to find approximate solutions for many complicated optimization problems.

Constraint satisfaction studies the case in which the objective function f is constant (this is used in artificial intelligence, particularly in automated reasoning).

Disjunctive programming is used where at least one constraint must be satisfied but not all. It is of particular use in scheduling.

Calculus of variations seeks to optimize an objective defined over many points in time, by considering how the objective function changes if there is a small change in the choice path. Optimal control theory is a generalization of the calculus of variations.

Dynamic programming studies the case in which the optimization strategy is based on splitting the problem into smaller sub-problems. The equation that describes the relationship between these sub-problems is called the Bellman equation.

Evolutionary Algorithms
Genetic algorithm (GA)
Differential evolution (DE)
Particle swarm optimization (PSO)
Ant colony optimization
Harmony search
Gaussian adaptation, etc.

Classical Optimization
DIRECT, SNOBFIT, hybrid approaches, etc.

What are common features of an optimization problem?

There are multiple solutions to the problem, and the optimal solution is to be identified. There exist one or more objectives to accomplish and a measure of how well these objectives are accomplished (measurable performance). Constraints of different forms are imposed. There are several key influencing variables; the change of their values will influence (either improve or worsen) the measurable performance and the degree of violation of the constraints.

In any practical problem, the design variables cannot be chosen arbitrarily; rather, they have to satisfy certain specified functional and other requirements. The restrictions that must be satisfied to produce an acceptable design are collectively called the design constraints. The constraints that represent limitations on the behavior or performance of the system are termed behavior or functional constraints. The constraints that represent physical limitations on the design variables, such as availability, are called geometric or side constraints.

Properties of Practical Optimization Problems


They are non-smooth problems having their objectives and constraints are most likely to be non-differential and discontinuous Often, the decision variables are discrete making the search space discrete as well The problems may have mixed types (real, discrete, permutation, etc.) of variables Boolean,

They may have highly non-linear objective and constraint functions due to complicated relationships variables must form optimization problems. and equations which the decision

and satisfy. This makes the problems non-linear

Properties of Practical Optimization Problems . .

There are uncertainties associated with the decision variables, due to which the true optimum solution may not be of much importance to a practitioner. The objective and constraint functions may also be non-deterministic. The evaluation of the objective and constraint functions is computationally expensive. The problems give rise to multiple optimal solutions, of which some are globally best and many others are locally optimal. The problems involve multiple conflicting objectives, for which no one solution is best with respect to all chosen objectives.

Classical Optimization Techniques


The classical optimization techniques are useful in finding the optimum solution, i.e., the unconstrained maxima or minima, of continuous and differentiable functions. These are analytical methods and make use of differential calculus in locating the optimum solution. The classical methods have limited scope in practical applications, as practical problems often involve objective functions which are not continuous and/or differentiable.

Classical Optimization Techniques . .

Yet the study of these classical techniques of optimization forms a basis for developing most of the numerical techniques that have evolved into advanced techniques more suitable to today's practical problems. These methods assume that the function is twice differentiable with respect to the design variables and that the derivatives are continuous.

Linear Programming
Optimization methods using calculus have several limitations and are thus not suitable for many practical applications.

Linear programming is the most widely used constrained optimization method; it deals with nonnegative solutions (e.g., x1 = 0, x2 = 1/2, x3 = 5) of a system of linear equations and the corresponding finite value of the objective function.

Linear programming requires that all the mathematical functions in the model be linear functions.

The term linear is used to describe the proportionate relationship of two or more variables in a model: a given change in one variable will always cause a proportional change in another variable. The term also implies that the objective function and constraints are linear functions of nonnegative decision variables (e.g., no squared terms, trigonometric functions, or ratios of variables).

Linear programming (LP) techniques consist of a sequence of steps that lead to an optimal solution, in cases where an optimum exists.

Applications of Linear Programming

The number of applications of linear programming is very large; some of them are:
Scheduling of flight times of aeroplanes
Distribution of resources
Selection of shares and stocks
Assignment of jobs to people, and many other problems
Scheduling of production in manufacturing units or industries
Use of available resources in an organization
Engineering design problems
Shipping and transportation
Product mix
Marketing research
Food processing, etc.

Methods of Solving Linear Programming Problems

Trial and error: possible for very small problems; virtually impossible for large problems.

Graphical or geometrical approach: it is possible to solve a 2-variable problem graphically to find the optimal solution (not shown).

Simplex method: a mathematical approach developed by George Dantzig; can solve small problems by hand.

Computer software: most optimization software actually uses the simplex method to solve the problems.

Limitations of Linear Programming

The following assumptions of a linear programming problem limit its applicability:
Linearity: required of the model in both the objective function and the constraints.
Proportionality: the relationships between outputs and inputs are proportional.
Additivity: every function is the sum of the individual contributions of the respective activities, e.g., a1x1 + a2x2.
Divisibility: all decision variables are continuous (can take on any non-negative value, including fractional ones), e.g., x1 = 12, x2 = 3.8.
Certainty (deterministic): all the coefficients in the linear programming model are assumed to be known exactly, e.g., a1 = 5, a2 = 2.

The conditions of LP problems are:
1. The objective function must be a linear function of the decision variables.
2. The constraints should be linear functions of the decision variables.
3. All the decision variables must be nonnegative.

The example shown above is in general form.

Mathematical formulation of linear programming problem
There are mainly four steps in the mathematical formulation of a linear programming problem as a mathematical model. Identify the decision variables and assign symbols x and y to them; these decision variables are those quantities whose values we wish to determine. Identify the set of constraints and express them as linear equations or inequalities in terms of the decision variables; these constraints are the given conditions.

Mathematical formulation of linear programming problem. .

Identify the objective function and express it as a linear function of the decision variables. It might take the form of maximizing profit or production, or minimizing cost. Add the non-negativity restrictions on the decision variables, as in physical problems negative values of the decision variables have no valid interpretation.

There are many real life situations where an LPP may be formulated. The following examples will help to explain the mathematical formulation of an LPP.

Examples

Example

A company makes cheap tables and chairs using only wood and labor. To make a chair requires 10 hours of labor and 20 board feet of wood. To make a table requires 5 hours of labor and 30 board feet of wood. The profit per chair is $8 and $6 per table. If it has 300 board feet of wood and 110 hours of labor each day, how many tables and chairs should it make to maximize profits? (The daily wood and labor limits are the constraints, i.e., the scarce resources; maximizing profit is the objective.)

Setting Up the Problem

Profits: $6 per table and $8 per chair; total profits = 6T + 8C
Constraints: 300 board feet of wood per day and 110 hours of labor per day
Wood use: 30 feet per table, 20 feet per chair
Labor use: 5 hours per table, 10 hours per chair

Writing the Equations

Resource            Tables  Chairs  Amount available
Unit profit         $6      $8
Wood (board feet)   30      20      300 board feet
Labor (hours)       5       10      110 hours

Objective: Maximize Z = 6T + 8C
Maximum profit = ($6 x number of tables) + ($8 x number of chairs)

Subject to:
30T + 20C ≤ 300 board feet (wood constraint)
5T + 10C ≤ 110 hours (labor constraint)
T, C ≥ 0 (non-negativity)

Writing the Equations

Resource            Tables  Chairs  Amount available
Unit profit         $6      $8
Wood (board feet)   30      20      300 board feet
Labor (hours)       5       10      110 hours

Maximize: 6T + 8C
Subject to:
30T + 20C ≤ 300 (wood constraint)
5T + 10C ≤ 110 (labor constraint)
T, C ≥ 0 (non-negativity)
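The table-and-chair LP can be solved directly with SciPy's linprog (our tool choice; the slides use the simplex method conceptually, and linprog implements modern LP solvers). Since linprog minimizes, the profit coefficients are negated.

```python
from scipy.optimize import linprog

c = [-6, -8]                    # maximize 6T + 8C  ->  minimize -6T - 8C
A_ub = [[30, 20],               # wood:  30T + 20C <= 300
        [5, 10]]                # labor:  5T + 10C <= 110
b_ub = [300, 110]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
T, C = res.x                    # optimal mix: 4 tables and 9 chairs
max_profit = -res.fun           # $96 per day
```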

Inequalities
A resource may constrain a problem by being . . .
Equal to (=)
Equal to or greater than (≥)
Equal to or less than (≤)
Greater than (>)
Less than (<)
. . . the amount of resource available.

Dealing with inequalities
Convert less-than-or-equal-to and less-than constraints to equalities by adding a slack variable.

30T + 20C ≤ 300 (wood constraint)

becomes

30T + 20C + SW = 300

SW represents the difference, if any, between the amount of wood used and the amount available (it is the unused resource). Slack variables also cannot be negative, so SW ≥ 0.

SURPLUS VARIABLES

If the labor constraint were greater than or equal to 110 hours, expressed as

5T + 10C ≥ 110 hours

then a surplus variable would be needed to make it an equality:

5T + 10C - SL = 110 hours

SL represents the excess labor used, if any, above 110 hours. (Surplus variables cannot be negative, so SL ≥ 0.)

Reformulation of the example with slack variables added

Maximize Z = 6T + 8C
Subject to:
30T + 20C ≤ 300 board feet of wood
5T + 10C ≤ 110 hours of labor

The LP model adds any needed slack and surplus variables; if they are needed, they will appear in the program output. Below is how the program adds the slack variables.

Maximize Z = 6T + 8C
Subject to:
30T + 20C + SW = 300 board feet of wood
5T + 10C + SL = 110 hours of labor
T, C, SW, SL ≥ 0
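The slack variables of this model are easy to evaluate at any production plan. The sketch below computes SW = 300 - 30T - 20C and SL = 110 - 5T - 10C for two plans of our own choosing: an interior plan where both resources are left over, and the plan (4, 9), which happens to make both constraints binding (zero slack).

```python
# Slack values for the reformulated table-and-chair model.

def slacks(T, C):
    sw = 300 - 30 * T - 20 * C   # SW: unused wood (board feet)
    sl = 110 - 5 * T - 10 * C    # SL: unused labor (hours)
    return sw, sl

sw1, sl1 = slacks(2, 5)          # interior plan: resources left over
sw2, sl2 = slacks(4, 9)          # both constraints binding: zero slack
```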

A company manufactures two products X and Y whose profit contributions are Rs. 10 and Rs. 20 respectively. Product X requires 5 hours on machine I, 3 hours on machine II and 2 hours on machine III. The requirement of product Y is 3 hours on machine I, 6 hours on machine II and 5 hours on machine III. The available capacities for the planning period for machines I, II and III are 30, 36 and 20 hours respectively. Find the optimal product mix.
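This product-mix exercise can be checked numerically with SciPy's linprog (our tool choice). The model, written out from the stated data, is: maximize 10x + 20y subject to 5x + 3y ≤ 30, 3x + 6y ≤ 36, 2x + 5y ≤ 20, x, y ≥ 0.

```python
from scipy.optimize import linprog

c = [-10, -20]                   # negate profits: linprog minimizes
A_ub = [[5, 3],                  # machine I:   5x + 3y <= 30
        [3, 6],                  # machine II:  3x + 6y <= 36
        [2, 5]]                  # machine III: 2x + 5y <= 20
b_ub = [30, 36, 20]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
max_profit = -res.fun            # Rs. 1700/19, about Rs. 89.47
```

Note that the optimum is fractional (x = 90/19, y = 40/19), which illustrates the divisibility assumption of LP discussed earlier.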

A diet is to contain at least 4000 units of carbohydrates, 500 units of fat and 300 units of protein. Two foods A and B are available. Food A costs 2 dollars per unit and food B costs 4 dollars per unit. A unit of food A contains 10 units of carbohydrates, 20 units of fat and 15 units of protein. A unit of food B contains 25 units of carbohydrates, 10 units of fat and 20 units of protein. Formulate the problem as an LPP so as to find the minimum cost for a diet that consists of a mixture of these two foods and also meets the minimum requirements.

Let the diet contain x units of A and y units of B. Total cost = 2x + 4y. The LPP formulated for the given diet problem is:
Minimize Z = 2x + 4y
subject to the constraints
10x + 25y ≥ 4000 (carbohydrates)
20x + 10y ≥ 500 (fat)
15x + 20y ≥ 300 (protein)
x, y ≥ 0
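The diet LPP can be solved with SciPy's linprog (our tool choice), with the constraints written out from the stated nutrient contents: minimize 2x + 4y subject to 10x + 25y ≥ 4000, 20x + 10y ≥ 500, 15x + 20y ≥ 300, x, y ≥ 0. Since linprog accepts only ≤ rows, the ≥ constraints are negated.

```python
from scipy.optimize import linprog

c = [2, 4]
A_ub = [[-10, -25],              # carbohydrates: 10x + 25y >= 4000
        [-20, -10],              # fat:           20x + 10y >= 500
        [-15, -20]]              # protein:       15x + 20y >= 300
b_ub = [-4000, -500, -300]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
min_cost = res.fun               # 640 dollars, with x = 0 and y = 160
```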

In the production of two types of toys, a factory uses three machines A, B and C. The time required to produce the first type of toy is 6 hours, 8 hours and 12 hours on machines A, B and C respectively. The time required to make the second type of toy is 8 hours, 4 hours and 4 hours on machines A, B and C respectively. The maximum available time (in hours) for machines A, B and C is 380, 300 and 404 respectively. The profit on the first type of toy is 5 dollars while that on the second type is 3 dollars. Find the number of toys of each type that should be produced to get maximum profit. The data given in the problem can be represented in a table.

Let x = number of toys of type I to be produced and y = number of toys of type II to be produced. Total profit = 5x + 3y. The LPP formulated for the given problem is:
Maximize Z = 5x + 3y
subject to the constraints
6x + 8y ≤ 380 (machine A)
8x + 4y ≤ 300 (machine B)
12x + 4y ≤ 404 (machine C)
x, y ≥ 0
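The toy-factory LPP, with the machine-hour limits written out from the stated data, can likewise be solved with SciPy's linprog (our tool choice): maximize 5x + 3y subject to 6x + 8y ≤ 380, 8x + 4y ≤ 300, 12x + 4y ≤ 404.

```python
from scipy.optimize import linprog

c = [-5, -3]                     # negate profits: linprog minimizes
A_ub = [[6, 8],                  # machine A
        [8, 4],                  # machine B
        [12, 4]]                 # machine C
b_ub = [380, 300, 404]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
x, y = res.x                     # 22 type-I toys and 31 type-II toys
max_profit = -res.fun            # 203 dollars
```

Here the LP optimum happens to be integer-valued, so no integer-programming refinement is needed.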

Standard form of LP problems

A standard-form LP problem must have the following three characteristics:
1. The objective function should be of maximization type.
2. All the constraints should be of equality type.
3. All the decision variables should be nonnegative.

Standard form
Standard form is a basic way of describing an LP problem. It consists of three parts:

A linear function to be maximized:
maximize c1x1 + c2x2 + ... + cnxn

Problem constraints:
subject to
a11x1 + a12x2 + ... + a1nxn ≤ b1
a21x1 + a22x2 + ... + a2nxn ≤ b2
...
am1x1 + am2x2 + ... + amnxn ≤ bm

Non-negative variables:
x1, x2, ..., xn ≥ 0

The problem is usually expressed in matrix form, and then it becomes:

maximize c^T x subject to Ax ≤ b, x ≥ 0

where
x - vector of decision variables
c - objective function coefficients
A - constraint coefficients
b - right-hand side of the constraints

Other forms, such as minimization problems, problems with constraints in alternative forms, as well as problems involving negative variables, can always be rewritten into an equivalent problem in standard form.
Other forms, such as minimization problems, problems with constraints on alternative forms, as well as problems involving negative variables can always be rewritten into an equivalent problem in standard form.

Any linear programming problem can be expressed in standard form by using the following transformations.

1. The maximization of a function f(x1, x2, . . . , xn) is equivalent to the minimization of the negative of the same function. For example, the objective function maximize f = c1x1 + c2x2 + ... + cnxn is equivalent to minimize f' = -f = -c1x1 - c2x2 - ... - cnxn. Consequently, the objective function can be stated in the minimization form in any linear programming problem.

2. The decision variables usually represent some physical dimensions, and hence the variables xj will be nonnegative. However, a variable may be unrestricted in sign in some problems. In such cases, an unrestricted variable (which can take a positive, negative, or zero value) can be written as the difference of two nonnegative variables. Thus if xj is unrestricted in sign, it can be written as xj = x'j - x''j, where x'j ≥ 0 and x''j ≥ 0. It can be seen that xj will be negative, zero, or positive, depending on whether x''j is greater than, equal to, or less than x'j.

3. If a constraint appears in the form of a less-than-or-equal-to type of inequality,

ai1x1 + ai2x2 + ... + ainxn ≤ bi,

it can be converted into the equality form by adding a nonnegative slack variable xn+1 as follows:

ai1x1 + ai2x2 + ... + ainxn + xn+1 = bi.

Similarly, if the constraint is in the form of a greater-than-or-equal-to type of inequality,

ai1x1 + ai2x2 + ... + ainxn ≥ bi,

it can be converted into the equality form by subtracting a variable:

ai1x1 + ai2x2 + ... + ainxn - xn+1 = bi,

where xn+1 is a nonnegative variable known as a surplus variable.

Converting a linear program in standard form into a linear program in slack form: each constraint

Σ(j=1..N) aij xj ≤ bi

is represented as

xN+i = bi - Σ(j=1..N) aij xj,  with xN+i ≥ 0.

The xN+i are basic variables, or slack variables. The original set of xi are non-basic variables.

General form vs standard form

Violating points for the standard form of an LPP (in the general form):
1. The objective function is of minimization type.
2. The constraints are of inequality type.
3. A decision variable, x2, is unrestricted and thus may take negative values also.

How do we transform a general form of an LPP to the standard form?

The transformation from general form to standard form proceeds item by item (the corresponding equations were shown side by side on the slide):
1. Objective function
2. First constraint
3. Second constraint
4. Third constraint
5. Constraints for decision variables x1 and x2

Basic Definitions

Feasible solution. In a linear programming problem, any solution that satisfies the constraints is called a feasible solution.

Basis. The collection of variables not set equal to zero to obtain the basic solution is called the basis.

Basic solution. A basic solution is one in which n - m variables are set equal to zero. A basic solution can be obtained by setting n - m variables to zero and solving the m constraint equations simultaneously.

Basic feasible solution. This is a basic solution that also satisfies the nonnegativity conditions.

Non-degenerate basic feasible solution. This is a basic feasible solution that has exactly m positive xi.

Optimal solution. A feasible solution that optimizes the objective function is called an optimal solution.

Optimal basic solution. This is a basic feasible solution for which the objective function is optimal.

Pivotal Operation
The operation performed at each step to eliminate one variable at a time from all equations except one is known as a pivotal operation. The number of pivotal operations is the same as the number of variables in the set of equations.

Note: the pivotal equation is transformed first, and using the transformed pivotal equation the other equations in the system are transformed. The resulting set of equations (A3, B3 and C3) is said to be in canonical form, which is equivalent to the original set of equations (A0, B0 and C0).

Three pivotal operations were carried out to obtain the canonical form of the set of equations in the last example, which has three variables.
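The pivotal operation can be sketched as Gauss-Jordan elimination: each pivot normalizes the pivotal equation and eliminates one variable from every other equation, reducing the augmented system [A | b] to canonical form. The 3-variable system below is our own example (the slide's A0, B0, C0 equations were lost in extraction).

```python
# Gauss-Jordan reduction of an augmented system to canonical form.
# Assumes nonzero pivots (no row exchanges), as in the slide's example.

def gauss_jordan(aug):
    n = len(aug)
    for col in range(n):                      # one pivotal operation per variable
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]  # normalize the pivotal equation
        for r in range(n):                    # eliminate the variable elsewhere
            if r != col:
                factor = aug[r][col]
                aug[r] = [v - factor * w for v, w in zip(aug[r], aug[col])]
    return aug

# Example system: x + y + z = 6;  2x - y + z = 3;  x + 2y - z = 2
canonical = gauss_jordan([[1, 1, 1, 6],
                          [2, -1, 1, 3],
                          [1, 2, -1, 2]])
solution = [row[-1] for row in canonical]     # [x, y, z] = [1, 2, 3]
```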

Basic variable, nonbasic variable, basic solution, basic feasible solution

Find all the basic solutions corresponding to the system of equations.

Case 1

Case 2

Case 3

From case 3: the solution obtained by setting the independent variable equal to zero is called a basic solution; here x4 = 0 (the nonbasic or independent variable). Since this basic solution has all xj ≥ 0 (j = 1, 2, 3, 4), it is a basic feasible solution.
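The enumeration of basic solutions can be sketched in code. The slide's system was lost in extraction, so the 2-equation, 4-variable system below is our own example: a basic solution sets n - m = 2 variables to zero and solves the remaining 2x2 system for the basic variables; it is a basic feasible solution when all components are nonnegative.

```python
from itertools import combinations

# Example system (ours):  2*x1 + x2 + x3 = 6  and  x1 + 2*x2 + x4 = 6.
A = [[2, 1, 1, 0],
     [1, 2, 0, 1]]
b = [6, 6]
n, m = 4, 2

def solve2(cols):
    """Solve the 2x2 system in the chosen basic columns by Cramer's rule."""
    i, j = cols
    det = A[0][i] * A[1][j] - A[0][j] * A[1][i]
    if abs(det) < 1e-12:
        return None                      # singular basis: no basic solution
    xi = (b[0] * A[1][j] - A[0][j] * b[1]) / det
    xj = (A[0][i] * b[1] - b[0] * A[1][i]) / det
    x = [0.0] * n                        # the n - m nonbasic variables stay 0
    x[i], x[j] = xi, xj
    return x

basic = [x for cols in combinations(range(n), m)
         if (x := solve2(cols)) is not None]
feasible = [x for x in basic if all(v >= 0 for v in x)]
```

For this system all C(4, 2) = 6 bases are nonsingular, giving 6 basic solutions, of which 4 are basic feasible solutions.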

Flowchart for finding the optimal solution by the simplex algorithm.
