Effective Methods and Transportation Processes Management Models at the Railway Transport. Textbook. Vadim Shmal

… did, choosing, in a number of examples of operations whose outcome depends on random factors, the average value of the quantity we wanted to maximize (or minimize) as the efficiency indicator. This is the «average income» per unit of time, the «average relative downtime», and so on. In most cases this approach to stochastic problems of operations research is fully justified. If we choose a solution on the basis of the requirement that the average value of the performance indicator be maximized, we will, of course, do better than if we chose a solution at random.

      But what about the element of uncertainty? To some extent, of course, it remains. The success of each individual operation, carried out with random values of the parameters ξ1, ξ2, …, can differ greatly from the expected average, both upwards and, unfortunately, downwards. We can comfort ourselves with the following: by organizing the operation so that the average value of W is maximized, and by repeating the same (or similar) operation many times, we will ultimately gain more than if we had not used the calculation at all.

      Thus, the choice of a solution that maximizes the average value of the efficiency indicator W is fully justified when it comes to operations with repeatability. A loss in one case is compensated by a gain in another, and in the end our solution will be profitable.
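
      To make this rule concrete, here is a minimal Python sketch, not taken from the textbook itself: it assumes a toy efficiency function W(x, ξ) and a toy distribution of the random factor ξ, estimates the average W of each candidate decision by simulation, and selects the decision with the largest average.

```python
import random

# Illustrative assumptions only: the efficiency model W(x, xi) and the
# distribution of the random factor xi are invented for this sketch.
def efficiency(x, xi):
    return 10 * x - x * x - xi * x  # toy "income" per unit of time

def average_w(x, n_trials=10_000):
    """Estimate the average value of W for decision x by simulating xi."""
    total = 0.0
    for _ in range(n_trials):
        xi = random.expovariate(1.0)  # assumed distribution of xi
        total += efficiency(x, xi)
    return total / n_trials

candidates = [1, 2, 3, 4, 5]           # assumed set of admissible decisions
best = max(candidates, key=average_w)  # decision maximizing the average W
print(best, average_w(best))
```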

      But what if we are talking about an operation that is not repeatable but unique, carried out only once? Here a solution that simply maximizes the average value of W would be imprudent. It would be more cautious to guard against unnecessary risk by demanding, for example, that the probability of obtaining an unacceptably small value of W, say W < w0, be sufficiently small:

      P(W < w0) ≤ γ,

      where γ is some small number, so small that an event with probability γ can be considered practically impossible. This constraint can be taken into account, along with the others, when optimizing the solution. We will then look for a solution that maximizes the average value of W, but with an additional, «reinsurance» condition.
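
      The «reinsurance» condition can be illustrated in the same toy setting. The sketch below, again with invented names and numbers (w0, γ, the model of W), keeps only those decisions for which the estimated frequency of the event W < w0 does not exceed γ, and among them maximizes the average W.

```python
import random

# Illustrative assumptions only: the model of W, the threshold w0 and the
# level gamma are invented; the sketch shows the structure of the rule.
def efficiency(x, xi):
    return 10 * x - x * x - xi * x

def simulate(x, n_trials=20_000):
    return [efficiency(x, random.expovariate(1.0)) for _ in range(n_trials)]

def admissible(samples, w0, gamma):
    """Estimated check of the constraint P(W < w0) <= gamma."""
    freq = sum(1 for w in samples if w < w0) / len(samples)
    return freq <= gamma

w0, gamma = 5.0, 0.1
candidates = [(x, simulate(x)) for x in [1, 2, 3, 4, 5]]
feasible = [(x, s) for x, s in candidates if admissible(s, w0, gamma)]
if feasible:
    best, _ = max(feasible, key=lambda pair: sum(pair[1]) / len(pair[1]))
    print("chosen decision:", best)
else:
    print("no decision satisfies the reinsurance condition")
```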

      The case of stochastic uncertainty of the conditions considered so far is relatively benign. The situation is much worse when the unknown factors ξ1, ξ2, … cannot be described by statistical methods. This happens in two cases: either the probability distribution of the parameters ξ1, ξ2, … exists in principle, but the corresponding statistical data cannot be obtained, or the probability distribution of the parameters ξ1, ξ2, … does not exist at all.

      Let us give an example related to the last, most «harmful» category of uncertainty. Suppose a commercial and industrial operation is planned whose success depends on the length of skirts ξ that women will wear in the coming year. The probability distribution of the parameter ξ cannot, in principle, be obtained from any statistical data; one can only try to guess its plausible values in a purely speculative way.

      Let us consider just such a case of «bad uncertainty»: the effectiveness of the operation depends on unknown parameters ξ1, ξ2, …, about which we have no statistical information and can only make assumptions. Let us try to approach the problem.

      The first thing that comes to mind is to assign some (more or less plausible) values to the parameters ξ1, ξ2, … and find a conditionally optimal solution for them. Suppose that, having spent a great deal of effort and time (our own and the computer’s), we have done so. So what? Will the conditionally optimal solution found also be good for other conditions? As a rule, no. Its value is therefore limited. In this case it is more reasonable to seek not a solution that is optimal for some particular conditions, but a compromise solution that, while not optimal for any single set of conditions, remains acceptable over their whole range. At present a full-fledged scientific «theory of compromise» does not yet exist (although there are some attempts in this direction in decision theory). Usually the final choice of a compromise solution is made by a person. On the basis of preliminary calculations, in which a large number of direct problems are solved for different conditions and different solutions, he can assess the strengths and weaknesses of each option and make a choice based on these estimates. For this it is not necessary (although sometimes curious) to know the exact conditional optimum for each set of conditions. Mathematical variational methods recede into the background here.
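
      The preliminary calculations described here can be pictured as a simple table of W values. The following sketch, with an invented efficiency function and invented scenario values of ξ, solves the direct problem for every combination of decision and conditions and prints the results for a person to compare.

```python
# Illustrative assumptions only: the function W(x, xi) and the guessed
# scenario values of xi are invented to show the shape of the calculation.
def W(x, xi):
    return 10 * x - x * x - xi * x

decisions = [1, 2, 3, 4, 5]
scenarios = [0.5, 1.0, 2.0, 4.0]      # assumed plausible values of xi

print("x   " + "  ".join(f"xi={xi:<4}" for xi in scenarios))
for x in decisions:
    row = "  ".join(f"{W(x, xi):6.1f}" for xi in scenarios)
    print(f"{x}   {row}")
# A decision maker can now discard options that look good only in one column
# and keep a compromise that stays acceptable across the whole range.
```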

      When considering operations research problems with «bad uncertainty», it is always useful to set different approaches and different points of view against one another. Among the latter, one deserves mention that is often used because of its mathematical definiteness and may be called the «position of extreme pessimism». It boils down to always counting on the worst conditions and choosing the solution that gives the maximum effect under these worst conditions. If under these conditions the solution yields an efficiency indicator equal to W*, this means that under no circumstances will the efficiency of the operation be less than W* (the «guaranteed winnings»). This approach is tempting because it gives a clear formulation of the optimization problem and the possibility of solving it by rigorous mathematical methods. But in using it we must not forget that this point of view is extreme: on its basis one can obtain only an excessively cautious, «reinsurance» decision, which is unlikely to be reasonable. Calculations based on «extreme pessimism» should always be adjusted with a reasonable dose of optimism. It is hardly advisable to take the opposite point of view, that of extreme or «dashing» optimism, and always count on the most favourable conditions, but a certain amount of risk in making a decision should still be allowed.
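
      The «position of extreme pessimism» corresponds to the maximin rule: for each decision take the worst value of W over the assumed conditions, then choose the decision whose worst case is largest. The sketch below applies this rule to the same invented table of W values; the function and the scenarios remain assumptions of the illustration.

```python
# Illustrative assumptions only: W(x, xi) and the list of scenarios are the
# same invented quantities as in the previous sketch.
def W(x, xi):
    return 10 * x - x * x - xi * x

decisions = [1, 2, 3, 4, 5]
scenarios = [0.5, 1.0, 2.0, 4.0]

guaranteed = {x: min(W(x, xi) for xi in scenarios) for x in decisions}
x_star = max(guaranteed, key=guaranteed.get)  # most cautious decision
w_star = guaranteed[x_star]                   # the «guaranteed winnings» W*
print(x_star, w_star)
```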

      Let us mention one rather original method used when choosing a solution under «bad uncertainty»: the so-called method of expert assessments. It is often used in other fields, such as futurology. Roughly speaking, it consists in gathering a team of competent people («experts»), each of whom is asked to answer a question (for example, to name the date when a particular discovery will be made); the answers obtained are then processed like statistical material, making it possible (to paraphrase T. L. Saaty) «to give a bad answer to a question that cannot be answered in any other way». Such expert assessments of unknown conditions can also be applied to operations research problems with «bad uncertainty». Each expert evaluates the plausibility of the various variants of the conditions, attributing to them certain subjective probabilities. Although each expert’s probability estimates are subjective, by averaging the estimates of the whole team one can obtain something more objective and useful. Incidentally, the subjective assessments of different experts do not differ from one another as much as one might expect. In this way the problem of studying operations with «bad uncertainty» is seemingly reduced to a relatively benign stochastic problem. Of course, the result obtained must not be trusted too much, forgetting its dubious origin, but together with results arising from other points of view it can still help in choosing a solution.
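
      A minimal sketch of this reduction is given below; the expert estimates, the scenarios and the model of W are invented for the illustration. The subjective probabilities are averaged over the team, and the decision with the largest expected W under the averaged distribution is chosen.

```python
# Illustrative assumptions only: the scenarios, the experts' subjective
# probabilities and the model of W are invented for this sketch.
def W(x, xi):
    return 10 * x - x * x - xi * x

scenarios = [0.5, 1.0, 2.0, 4.0]
experts = [                      # one subjective probability vector per expert
    [0.1, 0.4, 0.3, 0.2],
    [0.2, 0.3, 0.3, 0.2],
    [0.1, 0.5, 0.2, 0.2],
]
avg_p = [sum(e[j] for e in experts) / len(experts) for j in range(len(scenarios))]

def expected_w(x):
    """Average W under the averaged subjective distribution of the conditions."""
    return sum(p * W(x, xi) for p, xi in zip(avg_p, scenarios))

decisions = [1, 2, 3, 4, 5]
print(max(decisions, key=expected_w))
```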

      Let us name another approach to choosing a solution under uncertainty: the so-called «adaptive algorithms» of control. Suppose the operation O in question belongs to the category of operations repeated many times, and some of its conditions ξ1, ξ2, … are unknown in advance and random. However, we have no statistics on the probability distribution of these conditions and no time to collect such data (for example, collecting the statistics would take considerable time, and the operation must be performed now). Then it is possible to build and apply an adaptive (self-adjusting) control algorithm that is gradually improved in the course of its application. At first some (probably not the best) algorithm is taken, but as it is applied it is improved from time to time, since the experience of its application shows how it should be changed. The result resembles the activity of a person who, as is known, «learns from mistakes». Such adaptive control algorithms seem to have a great future.
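
      As one simple illustration of such an algorithm (an assumption of this sketch, not a method prescribed in the text), the code below mostly repeats the decision that has shown the best running average of W so far, occasionally tries the others, and updates its estimates after every repetition of the operation.

```python
import random

# Illustrative assumptions only: the reward model stands in for the unknown
# random conditions, and the epsilon-greedy rule is one possible adaptive
# algorithm, chosen here just to show the idea of learning while operating.
def run_operation(x):
    xi = random.expovariate(1.0)     # unknown random conditions
    return 10 * x - x * x - xi * x   # observed value of W

decisions = [1, 2, 3, 4, 5]
avg = {x: 0.0 for x in decisions}    # running average of W per decision
count = {x: 0 for x in decisions}
epsilon = 0.1                        # share of exploratory repetitions

for _ in range(5_000):
    if random.random() < epsilon or min(count.values()) == 0:
        x = random.choice(decisions)        # try something and learn
    else:
        x = max(avg, key=avg.get)           # use the best decision so far
    w = run_operation(x)
    count[x] += 1
    avg[x] += (w - avg[x]) / count[x]       # incremental update of the mean

print(max(avg, key=avg.get), avg)
```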

      Finally, we will consider a special case …