# 16. Linear programming

### Idea

Assume that we study a process in which there are some variables that depend on each other linearly, and some constraints, also given in a linear form. We would like to maximize or minimize a quantity (e.g. profits or costs, respectively) which is also given in a linear form. Such a problem is called a linear programming problem.

E.g.: maximizing a linear expression in the variables under given linear conditions means that we try to find values of the variables such that the expression is as big as possible while those values satisfy all the conditions.
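The concrete formulas of the example have been lost here; written with generic symbols (not the chapter's numbers), a linear programming problem has the shape:

```latex
\max_{x_1,\dots,x_n} \quad c_1 x_1 + c_2 x_2 + \dots + c_n x_n
\qquad \text{subject to} \qquad
a_{i1} x_1 + a_{i2} x_2 + \dots + a_{in} x_n \le b_i ,
\quad i = 1, \dots, m .
```

Both the objective and every condition are linear in the variables x1, …, xn.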

### Polygons, polyhedrons, polytopes, simplexes

If we consider a problem with two variables, the possible solutions can be visualized as points on the plane. Conditions with ≤ and ≥ restrict the set of possible solutions (the feasible set) to half-planes (and conditions with = restrict it to lines). Satisfying all such conditions gives a polytope (a generalized polygon; generalized because it can be unbounded).

Now if we draw the level sets of the cost function, they are parallel lines. It is easy to see that an optimal point (if it exists) can always be found at a vertex of our polygon! Or a whole edge may consist of optimal points, if that edge is parallel to the cost level lines.

In three dimensions we will see polyhedra, but again we have to look for the optimal solution at one of the vertices.
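The claim that the optimum sits at a vertex can be checked by brute force on a small two-variable example (the numbers below are mine, purely for illustration): intersect every pair of constraint lines, keep the feasible intersection points, and evaluate the objective at each.

```python
from itertools import combinations

# Constraints a*x + b*y <= c, with x >= 0 and y >= 0 written the same way.
# Illustrative example (not from the text): maximize 2x + y subject to
#   x <= 2,  y <= 3,  x + y <= 4,  x >= 0,  y >= 0.
constraints = [
    (1, 0, 2),    #  x <= 2
    (0, 1, 3),    #  y <= 3
    (1, 1, 4),    #  x + y <= 4
    (-1, 0, 0),   # -x <= 0, i.e. x >= 0
    (0, -1, 0),   # -y <= 0, i.e. y >= 0
]

def vertices(cons):
    """Intersect every pair of constraint lines; keep feasible points."""
    pts = []
    for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:          # parallel lines, no vertex
            continue
        x = (c1 * b2 - c2 * b1) / det # Cramer's rule
        y = (a1 * c2 - a2 * c1) / det
        if all(a * x + b * y <= c + 1e-9 for a, b, c in cons):
            pts.append((x, y))
    return pts

best = max(vertices(constraints), key=lambda p: 2 * p[0] + p[1])
print(best)   # -> (2.0, 2.0), the optimal vertex
```

Every candidate for the optimum is an intersection of two constraint boundaries, which is exactly the geometric observation above.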

### Standard form of a problem

To construct a method for solving such problems, we first need to define a standard form of the problem. We will say that a problem is in the standard (also called canonical) form if:

• it is a minimization problem,
• every variable x has the condition x ≥ 0,
• all other conditions are equations.

Every problem of linear programming can be transformed to the standard form in the following way:

• in a maximization problem we can multiply the objective function by −1 to get a cost function and a minimization problem,
• every variable x for which there is no condition x ≥ 0 has to be replaced by a difference of two variables, x = x' − x'', with the added conditions x' ≥ 0, x'' ≥ 0,
• for each condition of the form "… ≤ b" we add a new variable s (called a slack variable), modify the condition to "… + s = b", and add the condition s ≥ 0. Analogously, for every condition of the form "… ≥ b" we add a new variable s, modify the condition to "… − s = b", and add the condition s ≥ 0.

E.g.: with conditions: , has the following standard form: with conditions: and .
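The transformation rules above can be sketched mechanically for the common case of a maximization problem whose variables already satisfy x ≥ 0 (the function name, data layout, and example numbers are my own):

```python
def to_standard_form(c, A, b, senses):
    """Convert  max c^T x  s.t.  A x (<=|>=|=) b,  x >= 0
    into  min c'^T x'  s.t.  A' x' = b,  x' >= 0:
    negate the objective and add one slack/surplus variable
    per inequality (assumes every x_i already has x_i >= 0)."""
    c_min = [-ci for ci in c]            # maximization -> minimization
    A_std = [row[:] for row in A]
    for i, sense in enumerate(senses):
        if sense == "=":
            continue
        for row in A_std:                # new column for the slack variable
            row.append(0)
        A_std[i][-1] = 1 if sense == "<=" else -1
        c_min.append(0)                  # slack variables cost nothing
    return c_min, A_std, list(b)

# illustrative data, not the chapter's lost example
c2, A2, b2 = to_standard_form([3, 2], [[1, 1], [2, 1]], [4, 5], ["<=", "<="])
print(c2)   # -> [-3, -2, 0, 0]
print(A2)   # -> [[1, 1, 1, 0], [2, 1, 0, 1]]
```

Each "≤" row gains a +1 slack column and each "≥" row a −1 surplus column, turning all inequalities into equations.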

### Basic and feasible solutions

A problem of linear programming in the standard form therefore consists of a linear cost function, a few equations, and the assumption that all variables have to be ≥ 0. The feasible set is thus bounded by hyperplanes of the type x = 0, so the vertices are points whose coordinates are mostly zero, except for some variables called basic. In other words, to calculate a vertex (called a basic solution) we choose as many variables as there are independent equations (the so-called basic variables), set the other variables to zero, and calculate the basic variables from the equations.

E.g. in problem , we have the following basic solutions (vertices):

• basic, so , so , and , . .
• basic, so , so , and , . .
• basic, so , so , and , . .

Only the first of the above solutions is feasible, since the rest of them violate the constraint that every variable is ≥ 0.
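The enumeration of basic solutions can be sketched directly: pick a set of basic columns, solve the resulting square system, and test feasibility. The system below is my own illustration, since the chapter's numbers were lost.

```python
from itertools import combinations

def basic_solutions(A, b):
    """Enumerate basic solutions of A x = b (m equations, n variables):
    choose m basic columns, set the other variables to zero, solve the
    m x m system, and report whether all coordinates are >= 0."""
    m, n = len(A), len(A[0])
    for basis in combinations(range(n), m):
        M = [[A[i][j] for j in basis] + [b[i]] for i in range(m)]
        try:
            vals = solve(M)
        except ZeroDivisionError:      # dependent columns, no vertex
            continue
        x = [0.0] * n
        for j, v in zip(basis, vals):
            x[j] = v
        yield x, all(v >= -1e-9 for v in x)

def solve(M):
    """Tiny Gaussian elimination with partial pivoting on [A | b]."""
    m = len(M)
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(M[r][col]))
        if abs(M[piv][col]) < 1e-12:
            raise ZeroDivisionError
        M[col], M[piv] = M[piv], M[col]
        for r in range(m):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * p for a, p in zip(M[r], M[col])]
    return [M[i][-1] / M[i][i] for i in range(m)]

# Illustrative system (not the chapter's lost example):
#   x1 + x2 + x3 = 4,   x1 - x2 = 1
for x, feasible in basic_solutions([[1, 1, 1], [1, -1, 0]], [4, 1]):
    print(x, "feasible" if feasible else "infeasible")
```

With 2 equations and 3 variables there are 3 choices of basic variables, hence at most 3 vertices, and the feasibility flag singles out the ones that violate x ≥ 0.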

### Simplex method

Actually, we are already able to solve a problem of linear programming. We know that we have to look for the optimal value at a vertex, and we know how to calculate vertices. So we can calculate all the vertices, check the value of the cost function at each of them, and the vertex with the lowest value is the solution. But if there are many vertices, checking all of them is tedious. We need to check only a few of them, and that is what the simplex method is all about.

The idea is as follows. First we find a feasible basic solution (a vertex). There are some edges at this vertex; an edge can be understood as exchanging one basic variable for another. We check whether there exists an edge with the property that the cost value decreases along it. If so, we go along it to the next vertex. We need to know how far we can go (we cannot make any variable negative): this determines which variable is dropped from the set of basic variables, namely the first one which reaches zero. Then we iterate this procedure: at the new vertex we again look for an improving edge.

Before we formalize the above idea, one more remark. It may happen that there is no feasible basic solution; then the conditions are contradictory and there is no solution at all. Also, there may exist an infinite improving edge: an edge with no restriction, along which the cost value decreases but no basic variable decreases to zero. In this case there is also no optimal solution, because every solution can be improved.

### Simplex array

We will use a tool called the simplex array. Consider a problem of linear programming in the standard form. The simplex array is the matrix of its system of linear equations, with an additional row at the top representing the linear function which has to be minimized:

First we have to find a feasible basic solution. In our example we can see which variables to take as basic so that setting the others to zero gives a feasible solution. Therefore we need to transform the simplex array to the form describing this vertex, which means that in the column of each basic variable we would like to have 1 in one row and 0 in the other rows. So:

Notice that in the column of free coefficients we get the values of the basic variables, so we can read off the vertex we are at. The value in the top right corner is the value of the cost function we would like to minimize, multiplied by −1.

Now we would like to go along an improving edge. What does that mean? It means transforming the array in such a way that in another column we get a 1 in one row and zeros elsewhere, while the function we are minimizing decreases (the value in the top right corner increases). One can see that this is possible if we choose a column which has a negative number in the top row. In other words, the columns of non-basic variables represent edges, and the numbers in the top row of these columns indicate their type: improving edges have negative numbers there, aggravating edges have positive numbers, and zero means that the function we are minimizing does not change along the edge.

In our case we have two improving edges, and we have to go along one of them; let us choose the third column, so its variable becomes a new basic variable. Now we need to find out which variable will cease to be basic, in other words, in which row we should place the 1 of the chosen column. It is determined by the fact that no variable can have a negative value: in the column of free coefficients we cannot get any negative number. In yet other words, it is determined by the length of the edge. How to check it? Notice that it depends on the quotients of the values in the free coefficients column by the values in the chosen column. We have to choose the row with the lowest positive quotient, here the first row. If we had chosen the second, we would get a negative number on the right in the first row. One additional remark: if no quotient were positive (or defined), our edge would be infinite and no optimal solution would exist. Therefore the 1 will be placed in the first row, the variable that was basic there will be dropped from the set of basic variables, and we end with a new set of basic variables:
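The single pivot step just described can be sketched in Python on an illustrative array (the numbers below are my own, since the chapter's array was lost); the top row is the cost row and the last column holds the free coefficients:

```python
def pivot(T, row, col):
    """One pivot: make T[row][col] equal to 1 and put zeros in the rest
    of that column, exactly as described for the simplex array."""
    T[row] = [v / T[row][col] for v in T[row]]
    for r in range(len(T)):
        if r != row:
            f = T[r][col]
            T[r] = [a - f * p for a, p in zip(T[r], T[row])]

def choose_pivot(T):
    """Entering column: a negative entry in the top (cost) row.
    Leaving row: lowest positive quotient of the free coefficient
    by the entry in the chosen column."""
    cost = T[0][:-1]
    col = min(range(len(cost)), key=lambda j: cost[j])
    if cost[col] >= 0:
        return None                    # only aggravating edges: optimal
    ratios = [(T[r][-1] / T[r][col], r)
              for r in range(1, len(T)) if T[r][col] > 1e-12]
    if not ratios:
        raise ValueError("infinite improving edge, no optimal solution")
    return min(ratios)[1], col

# Illustrative array (my own numbers): min -3*x1 - 2*x2 subject to
#   x1 + x2 + s1 = 4,  2*x1 + x2 + s2 = 5,  all variables >= 0.
T = [[-3, -2, 0, 0, 0],
     [ 1,  1, 1, 0, 4],
     [ 2,  1, 0, 1, 5]]
row, col = choose_pivot(T)
pivot(T, row, col)
print(T[0][-1])   # -> 7.5  (cost so far, multiplied by -1)
```

Note that the two failure modes from the remark above appear naturally: `choose_pivot` returns `None` at an optimal vertex and raises when the improving edge is infinite.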

We are now at the next vertex and can read off the value of the function we are minimizing. We have two edges here: one aggravating (the first column) and one improving (the fourth column). We have to go along the latter, obtaining a new basic variable. The restriction quotients are both positive and the first one is lower, so again we have to choose the first row (the variable basic there will cease to be basic):

So we arrive at the final vertex. There are two edges here, and both of them are aggravating. Therefore we are at the optimal vertex, and the values of the basic variables together with the cost value give the solution of our problem.

### Finding the first basic solution

It may happen that we do not immediately see any feasible basic solution from which the simplex method can be started. E.g. assume that we do not know (even though in this case it is possible to notice it) which basic variables to choose in the following problem. We can then do the following trick. We add a new variable to the second equation (after which we immediately see a basic solution); if needed, we can also add further variables to other equations. Next we modify the cost function to force the simplex method to set this new variable to zero. Let M be a very big number and let the new cost function be the old one plus M times the new variable. Now, during minimization, the algorithm will have to decrease the new variable down to zero.
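With the lost numbers replaced by generic symbols (the artificial variable is here called x_a; the name is mine, not the chapter's), the construction reads:

```latex
\min \quad c_1 x_1 + \dots + c_n x_n + M \, x_a
\qquad \text{subject to} \qquad
a_{21} x_1 + \dots + a_{2n} x_n + x_a = b_2 ,
```

with the remaining equations unchanged and the added condition x_a ≥ 0. Since M is huge, any solution with x_a > 0 is far more expensive than one with x_a = 0, so minimization drives x_a to zero.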

Let us check:

describes a vertex and has one (very) improving edge, related to one of the original variables. Comparing the two restriction quotients, we have to choose the second row, and so the variable basic in that row will cease to be basic. So:

which describes a vertex with only aggravating edges, so it is the optimal vertex. Dropping the artificial variable, it is also a vertex of the original problem, with the same cost value.
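Putting the whole iteration together with the big-M trick gives a minimal sketch of the method (the problem, variable names, and value of M below are my own illustration, not the chapter's lost example):

```python
def simplex(T):
    """Minimal simplex loop on an array whose top row is the cost row
    and whose last column holds the free coefficients."""
    while True:
        cost = T[0][:-1]
        col = min(range(len(cost)), key=lambda j: cost[j])
        if cost[col] >= -1e-9:
            return T                   # only aggravating edges: optimal
        ratios = [(T[r][-1] / T[r][col], r)
                  for r in range(1, len(T)) if T[r][col] > 1e-9]
        if not ratios:
            raise ValueError("infinite improving edge, no optimal solution")
        row = min(ratios)[1]
        T[row] = [v / T[row][col] for v in T[row]]  # pivot step
        for r in range(len(T)):
            if r != row:
                f = T[r][col]
                T[r] = [a - f * p for a, p in zip(T[r], T[row])]

# Illustrative problem (my own numbers): min x1 + x2
# subject to x1 + 2*x2 = 4, x1, x2 >= 0. No obvious starting basis,
# so add an artificial variable a with a huge cost M.
M = 10**6
T = [[1, 1, M, 0],                     # cost row: x1, x2, a | corner
     [1, 2, 1, 4]]                     # x1 + 2*x2 + a = 4
# make the array describe the starting vertex (a basic):
# clear M from a's column by subtracting M times the equation row
T[0] = [v - M * p for v, p in zip(T[0], T[1])]
simplex(T)
print(-T[0][-1])   # -> 2.0  (optimal cost; a was driven to zero)
```

The huge coefficient M makes the edge that removes the artificial variable the most improving one, so the method eliminates it first and then optimizes the original problem, exactly as in the walkthrough above.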