Title: Optimizations and Programming
Author: Abdelkhalak El Hami
Publisher: John Wiley & Sons Limited
Genre: Technical literature
ISBN: 9781119818267
– for the variable leaving the basis, if there is a tie in θ*, choose the variable with the lowest index.
1.6.4. Geometric structure of feasible solutions
Earlier, when we solved linear programs graphically, the optimal solutions were on the boundary of the convex set of feasible solutions. If there was a unique optimal solution, it was an extreme point.
Recall the canonical form of a linear program and the standard form derived from it, where the matrix A is assumed to be an m × n matrix of rank m (< n); both forms are written out below.
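Under the maximization convention assumed here (the same one under which the dual in section 1.8 is a minimization problem), with c a row vector of costs and b the right-hand side, the two forms read

\[
(P)\qquad \max\ z(x) = cx \quad \text{s.t.}\quad Ax \le b,\ \ x \ge 0,
\]
\[
(P_s)\qquad \max\ z(x) = cx \quad \text{s.t.}\quad Ax = b,\ \ x \ge 0,
\]

where, in the standard form, any slack variables are assumed to have been absorbed into x, A and c.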
It is possible to show the following result.
Let Γ (respectively, Σ) be the set of feasible solutions of (P) (respectively of (Ps)):
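that is, under the forms assumed above,

\[
\Gamma = \{\, x : Ax \le b,\ x \ge 0 \,\},
\qquad
\Sigma = \{\, x : Ax = b,\ x \ge 0 \,\}.
\]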
Then Γ and Σ are closed and convex sets.
Recall that the simplex method only uses the basic solutions of (Ps). The next theorem tells us that these are in fact the extreme points of Σ.
THEOREM 1.5.– The basic feasible solutions of (Ps) are exactly the extreme points of Σ.
Even in examples where the optimal solution is not unique, the optimal solutions lie on the boundary:
THEOREM 1.6.– Every optimal solution of (P) (respectively of (Ps)) belongs to the boundary of Γ (respectively of Σ).
A question presents itself: can there be optimal solutions but no optimal basic feasible solutions? If so, the simplex method, which only considers basic solutions, would not be able to find an optimal solution. The following theorem tells us that this does not happen.
THEOREM 1.7.– If (Ps) has an optimal solution, then (Ps) has an optimal basic feasible solution.
1.7. Interior-point algorithm
The simplex algorithm moves along a sequence of vertices of the polyhedron and converges rapidly on average. However, Klee and Minty [KLE 72] found pathological examples where the simplex algorithm visits almost every vertex, of which there can be an enormous number. The existence of theoretical linear programs that derail the simplex algorithm motivated research into algorithms with polynomial complexity in m and n, even in the worst case. Khachian [KHA 79] found the first such algorithm, based on the ellipsoid method from nonlinear optimization. Although it is faster than the simplex method on the Klee–Minty problems, it never established itself because it tended to be much slower than the simplex algorithm on real-world problems. Nevertheless, as a mathematical result, it inspired research into interior-point algorithms.
In 1984, Karmarkar [KAR 84] found a new algorithm. Its worst-case computation time is proportional to n^{3.4} L, where L denotes the number of bits required to encode the data A, b and c that define the linear program. The algorithm, known as the interior-point algorithm, is too complicated for computations by hand; it moves through the interior of the polyhedron itself to converge on the optimal vertex more quickly.
Today, some variants of this algorithm are able to compete with the simplex algorithm, and some software packages already include an interior-point algorithm in addition to the simplex algorithm. On some types of linear programs, interior-point algorithms can be 10 times faster than the simplex algorithm, but there are still many cases where the simplex algorithm is quicker [PRI 11].
EXAMPLE 1.9.– Consider the LP
The optimum is x* = (2, 2)^T with cost z* = 6. Starting from the interior point x_0 = (1, 1)^T and without a safety factor, the algorithm performs two iterations: it considers the point x_1 = (2, 1.591)^T, followed by the point x*.
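To make the flavor of such methods concrete, here is a minimal sketch (in Python, and not the book's or Karmarkar's exact algorithm) of a primal affine-scaling iteration, one of the simplest members of the interior-point family, for a problem in standard form max cx s.t. Ax = b, x ≥ 0. The function name affine_scaling, the safety factor gamma and the stopping rule are illustrative choices, not taken from the text.

import numpy as np

def affine_scaling(A, b, c, x0, gamma=0.9, tol=1e-9, max_iter=200):
    # Sketch of primal affine scaling for  max c x  s.t.  A x = b, x >= 0.
    # x0 must be strictly feasible: A x0 = b and x0 > 0 componentwise.
    x = np.asarray(x0, dtype=float).copy()
    assert np.allclose(A @ x, b) and np.all(x > 0), "x0 must be a strictly interior feasible point"
    for _ in range(max_iter):
        X2 = np.diag(x ** 2)                              # squared scaling matrix X_k^2
        w = np.linalg.solve(A @ X2 @ A.T, A @ X2 @ c)     # dual estimates
        r = c - A.T @ w                                   # reduced costs
        d = X2 @ r                                        # ascent direction; A d = 0, so Ax = b is preserved
        if np.linalg.norm(d) <= tol * (1.0 + np.linalg.norm(x)):
            break                                         # no significant ascent left: (near-)optimal
        if np.all(d >= 0):
            raise ValueError("objective increases along d without hitting a bound: unbounded")
        alpha = gamma * np.min(-x[d < 0] / d[d < 0])      # largest step keeping x > 0, times gamma
        x = x + alpha * d
    return x

Each iterate stays strictly inside the polyhedron, in contrast with the simplex method, which moves from vertex to vertex; without a safety factor (gamma = 1) an iterate can land exactly on the boundary, as in the example above.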
1.8. Duality
Duality is an essential notion in linear programming. The duality operation associates any so-called primal linear programming problem with another such problem, known as the dual, that is defined solely in terms of the data of the primal problem.
Duality is interesting for two reasons:
– the dual problem has an important economic interpretation that offers another perspective on the primal problem;
– there are straightforward and powerful mathematical relations between the primal and dual problems; these relations will allow us to develop new algorithms that are more efficient in many situations.
Suppose that we have the following primal program:
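namely (under the same assumed maximization convention as before) the canonical program

\[
(P)\qquad \max\ z(x) = cx \quad \text{s.t.}\quad Ax \le b,\ \ x \ge 0.
\]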
Its dual program is as follows:
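With y a row vector of m dual variables, one per constraint of (P), and under the same assumptions,

\[
(D)\qquad \min\ w(y) = yb \quad \text{s.t.}\quad yA \ge c,\ \ y \ge 0.
\]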
REMARK.– The primal and dual programs satisfy the following relations:
– m = number of constraints of (P) = number of variables of (D),
– n = number of variables of (P) = number of constraints of (D).
If (P) has two constraints, (D) has two variables and can be solved graphically, regardless of the number of variables of (P).
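As a numerical sanity check of this primal–dual correspondence, the short sketch below builds a small made-up primal in the canonical form above, forms its dual explicitly, and solves both with scipy.optimize.linprog; the data A, b, c are purely illustrative, and linprog minimizes, so the primal is passed with costs −c.

import numpy as np
from scipy.optimize import linprog

# Illustrative data (assumption, not taken from the book): m = 2 constraints, n = 3 variables.
A = np.array([[1.0, 2.0, 1.0],
              [3.0, 1.0, 2.0]])
b = np.array([10.0, 15.0])
c = np.array([2.0, 3.0, 1.0])

# Primal (P): max cx s.t. Ax <= b, x >= 0.  linprog minimizes, so minimize -cx.
primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 3)

# Dual (D): min yb s.t. yA >= c, y >= 0, rewritten as  min b^T y  s.t.  -A^T y <= -c.
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 2)

print(-primal.fun, dual.fun)   # the two optimal values coincide (strong duality)

Note that the dual has m = 2 variables even though the primal has n = 3, in line with the remark above.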
So, what is the form of (D) when (P) is not in canonical form?
In such a case, we can determine (D) by reducing (P) to canonical form, as shown in the following example.
EXAMPLE 1.10.– Consider the LP:
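For this example, take (P) to be a program whose constraints are equalities, so that it is not in canonical form (an assumption here, chosen to be consistent with the dual obtained at the end of the example):

\[
(P)\qquad \max\ z(x) = cx \quad \text{s.t.}\quad Ax = b,\ \ x \ge 0.
\]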
By the above definition, the dual of (P) is given by
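To apply the definition, first rewrite Ax = b as the pair of inequalities Ax ≤ b and −Ax ≤ −b; attaching row vectors u ≥ 0 and v ≥ 0 of dual variables to these two blocks of constraints (sketched here under the same assumptions as above) yields

\[
\min\ w(u, v) = ub - vb \quad \text{s.t.}\quad uA - vA \ge c,\ \ u \ge 0,\ v \ge 0.
\]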
Setting y = u − v gives
(D)   min w(y) = yb   s.t.   yA ≥ c,   with y unrestricted in sign.
In general, we have:
THEOREM