Optimizations and Programming. Abdelkhalak El Hami

Title: Optimizations and Programming

Author: Abdelkhalak El Hami

Publisher: John Wiley & Sons Limited

Genre: Technical literature

ISBN: 9781119818267

… is the primal.

      1.8.1. Duality theorem

      Now that we have defined the dual of a linear program, let us study the links between the solutions of programs that are dual to one another; below, (P) denotes the primal program and (D) its dual.

      THEOREM 1.9.– Let (x, y) be a feasible solution of the primal (P) and (u, v) a feasible solution of the dual (D). Then:

       1) z(x, y) ≤ w(u, v);

       2) if z(x, y) = w(u, v), then (x, y) and (u, v) are optimal solutions of (P) and (D).
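Part 1 of this theorem (weak duality) can be checked directly: any feasible dual point gives a value no larger than that of any feasible primal point. A minimal sketch in Python on a standard-form pair min {cT x : Ax = b, x ≥ 0} versus max {pT b : pT A ≤ cT}; the data A, b, c and the points x, p below are illustrative assumptions, not taken from the text:

```python
# Weak duality on a toy standard-form primal/dual pair:
#   primal: min c^T x  s.t.  A x = b, x >= 0
#   dual:   max p^T b  s.t.  p^T A <= c^T
c = [4.0, 3.0]
A = [[2.0, 1.0]]
b = [4.0]

x = [1.0, 2.0]   # feasible primal point: 2*1 + 1*2 = 4 and x >= 0
p = [1.5]        # feasible dual point: 1.5*2 <= 4 and 1.5*1 <= 3

primal_value = sum(ci * xi for ci, xi in zip(c, x))   # c^T x = 10.0
dual_value = sum(pi * bi for pi, bi in zip(p, b))     # p^T b = 6.0

# Theorem 1.9, part 1: the dual value never exceeds the primal value.
assert dual_value <= primal_value
```

Equality of the two values, as in part 2 of the theorem, certifies optimality of both points at once.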

      THEOREM 1.10.– Exactly one of the following three cases holds:

       1) (P) and (D) both have optimal solutions and max z(x, y) = min w(u, v);

       2) one of (P) and (D) has a feasible solution, but not the other;

       3) neither (P) nor (D) has a feasible solution.

      THEOREM 1.11.– Let (x, y) (respectively, (u, v)) be a feasible solution of (P) (respectively, (D)). Then (x, y) and (u, v) are optimal solutions of (P) and (D) if and only if the following statements hold:

       – if a constraint of (P) (respectively, (D)) is satisfied as a strict inequality, then the corresponding variable of (D) (respectively, (P)) is zero;

       – if the value of a restricted variable (i.e. a variable constrained to be nonnegative or nonpositive) in one of the programs (P) or (D) is nonzero, then the corresponding constraint in the other program is satisfied as an equality.

      This theorem can alternatively be stated as follows:

      THEOREM 1.12.– Let x and p be admissible solutions of the primal and dual programs, respectively. Then the vectors x and p are, respectively, optimal solutions of the two problems if and only if:

pi(aiT x − bi) = 0 for every row i, and (cj − pT Aj)xj = 0 for every column j.

       Application

      Consider the LP

minimize 13x1 + 10x2 + 6x3
subject to 5x1 + x2 + 3x3 = 8
3x1 + x2 = 3
x1, x2, x3 ≥ 0

      The dual of this program is:

maximize 8p1 + 3p2
subject to 5p1 + 3p2 ≤ 13
p1 + p2 ≤ 10
3p1 ≤ 6
(p1 and p2 are free, since the primal constraints are equalities)

      The solution of the primal is x* = (1, 0, 1)T .

      The optimal dual solution can be constructed from the complementary slackness conditions:

       – the condition pi(aiT x* − bi) = 0 is satisfied because x* is admissible (both primal constraints are equalities);

       – the condition (cj − pT Aj)xj = 0 is satisfied for j = 2 because x2* = 0:
         - for j = 1, this condition becomes: 5p1 + 3p2 = 13;
         - for j = 3, it becomes: 3p1 = 6.

      These two conditions give p1 = 2 and p2 = 1. This solution is dual admissible. The total dual cost is 19, and so is the primal cost.
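The arithmetic of this application can be verified numerically. The sketch below assumes the primal data min 13x1 + 10x2 + 6x3 subject to 5x1 + x2 + 3x3 = 8 and 3x1 + x2 = 3 with x ≥ 0, as reconstructed from the slackness equations above:

```python
# Verify primal feasibility, dual feasibility, complementary slackness,
# and equality of the primal and dual costs for the application above.
c = [13.0, 10.0, 6.0]
A = [[5.0, 1.0, 3.0],
     [3.0, 1.0, 0.0]]
b = [8.0, 3.0]

x_star = [1.0, 0.0, 1.0]   # primal solution given in the text
p_star = [2.0, 1.0]        # dual solution built from slackness

# Primal feasibility: A x* = b and x* >= 0.
for i in range(2):
    lhs = sum(A[i][j] * x_star[j] for j in range(3))
    assert abs(lhs - b[i]) < 1e-9
assert all(xj >= 0 for xj in x_star)

# Dual feasibility: reduced costs c_j - p^T A_j are nonnegative.
reduced = [c[j] - sum(p_star[i] * A[i][j] for i in range(2)) for j in range(3)]
assert all(r >= -1e-9 for r in reduced)

# Complementary slackness: (c_j - p^T A_j) * x_j = 0 for every j.
assert all(abs(reduced[j] * x_star[j]) < 1e-9 for j in range(3))

primal_cost = sum(c[j] * x_star[j] for j in range(3))
dual_cost = sum(p_star[i] * b[i] for i in range(2))
print(primal_cost, dual_cost)  # both equal 19.0
```

By Theorem 1.9, the equality of the two costs certifies that both solutions are optimal.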

      In mathematical programming, good performance essentially depends on the program’s ability to construct useful bounds (upper bounds for maximization and lower bounds for minimization). These bounds can be obtained by relaxing the linear program, i.e.

       – either increasing the set of solutions to optimize over a larger space (that is easier to optimize); this includes constraint relaxation (and in particular continuous relaxation);

       – or replacing the objective function with an upper bound (in the case of maximization), for example, Lagrangian relaxation.

      For a mathematical program, the following continuous relaxations are possible:

       – (simple) continuous relaxation, which can be solved very efficiently for LPs (with the simplex algorithm, the interior-point method, or the volume algorithm); however, we will see that continuous relaxation is rarely the best option;

       – in the specific context of LPs, we can improve the value of linear relaxation by adding constraints – reinforcement;

       – we can also add variables – reformulation;

       – Lagrangian relaxation is another approach.

      1.9.1. Lagrangian relaxation

      Lagrangian duality provides a particularly fruitful framework for relaxation. It works as follows:

       – choose an inequality in the LP and remove it from the LP;

       – add this inequality to the objective function with a multiplier that penalizes the objective function if the solution found does not satisfy the inequality.

      This relaxation often produces a very good bound, far better than that of continuous relaxation.

      [1.11] min { cT x : aiT x ≥ bi for i ∈ I, x ∈ S }

      To each constraint i ∈ I, we assign a real number λi ≥ 0 called the Lagrange multiplier. We also define the vector λ = (λi, i ∈ I). The Lagrangian function is then defined as

L(x, λ) = cT x + Σi∈I λi (bi − aiT x)

      This function has the two vector arguments x and λ and takes values in ℝ. Suppose that the LP is such that the problem minx∈S L(x, λ) has an optimal solution for every λ ≥ 0.
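This construction can be illustrated on a toy problem. The sketch below, in Python, relaxes a single ≥ constraint of a small minimization problem over a finite set S; all data (c, a, b0, S) are illustrative assumptions, and g(λ) = minx∈S L(x, λ) is checked to be a lower bound on the true optimum:

```python
import itertools

# Toy Lagrangian relaxation (all data assumed for illustration):
#   minimize c^T x over x in S = {0,1}^3 subject to a^T x >= b0.
# Relaxing the constraint with a multiplier lam >= 0 gives
#   L(x, lam) = c^T x + lam * (b0 - a^T x),
# and g(lam) = min over S of L(x, lam) is a lower bound on the optimum.
c = [5.0, 4.0, 3.0]
a = [2.0, 3.0, 1.0]
b0 = 4.0
S = list(itertools.product([0, 1], repeat=3))

def lagrangian(x, lam):
    cost = sum(ci * xi for ci, xi in zip(c, x))
    violation = b0 - sum(ai * xi for ai, xi in zip(a, x))
    return cost + lam * violation

def g(lam):
    # Lagrangian dual function: minimize L(., lam) over the easy set S.
    return min(lagrangian(x, lam) for x in S)

# Exact optimum by enumerating the feasible points of S.
feasible = [x for x in S if sum(ai * xi for ai, xi in zip(a, x)) >= b0]
z_opt = min(sum(ci * xi for ci, xi in zip(c, x)) for x in feasible)

# Weak duality of the relaxation: g(lam) <= z_opt for every lam >= 0.
for lam in [0.0, 0.5, 1.0, 1.5, 2.0]:
    assert g(lam) <= z_opt + 1e-9
```

Maximizing g over λ ≥ 0 yields the best such bound; this Lagrangian dual problem is the subject of Chapter 7.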

      More details are given in Chapter 7. Now consider the following LP [FLE 10]:

      [1.12]image

      We obtain the optimal solution (0, 1.1, 4.4)T, which gives the optimal cost as z* = 6.6.

      By relaxing the third constraint (x1 + x2 ≥ 4.4) and keeping the integrality constraints on the variables xi, we obtain the following LP by adding the