Complex Decision-Making in Economy and Finance. Pierre Massotte
and converge towards an emerging global structure. These considerations therefore lead us to define the following conceptual diagram, in which two totally different approaches to complex system management are brought together.

      Figure 2.1. Two approaches to managing complex systems (from Pierre Massotte – HDR thesis, 1995)

      These are in fact two visions of the world and two ways of understanding it:

       – on the left side of the figure, we find the Vitalist point of view, which represents the conventional approach to handling complex systems. A process is analyzed in a global and exhaustive way. By applying the principle of decomposition, the main, or global, tasks are divided into more elementary tasks, and so on; the process is thus modeled as a sequence of transformation functions. It is a static evolution model: a stimulus is applied, and the results are observed and measured. Once the correct control parameters have been adjusted, after a number of iterations or calculations, the real system can be tuned accordingly. We remain within the old conception of a state of equilibrium, dominated by the notions of action-reaction and predictability. In this static, top-down approach, the so-called “complex” system or its process is generally simplified, which then makes it possible to automate it using computers. Solving a problem then amounts to executing many functions in parallel; the only difficulty lies in the performance of the computing resources, and with appropriate time and investment it will always be possible to find the right solution;

       – the right side of the figure represents the point of view of the Mechanists and Connectionists. This is a dynamic, interaction-based approach, which we will call a bottom-up approach. Based on the principles just described, the aim is to generate a global function, or to create a structure or configuration, from the interactions existing within the interconnected network. This makes it possible to obtain a complex system (in the sense of its behavior) from great underlying simplicity (in terms of elementary functions and interactions), as illustrated by the sketch that follows this list. Implementing such advanced concepts still raises many problems today, related not to the performance of the computing resources but to the overall performance of the emerging order (its coherence with an overall objective). This requires analyzing three points:
         - the exploitation of instabilities and low chaos to achieve optimal flexibility and responsiveness,
         - the definition of new associated methods for managing complex systems in order to better control them,
         - the development of new approaches and simulation tools to validate the action plans to be applied to complex systems.
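      By way of illustration of this bottom-up idea, here is a minimal Python sketch (an elementary cellular automaton; the rule number, grid width and printing symbols are assumptions chosen purely for the example, not elements of the book). Each cell updates itself by looking only at its two immediate neighbors, yet the line of cells as a whole develops a coherent global pattern that no individual rule describes.

        # Minimal illustration of bottom-up emergence: an elementary cellular
        # automaton (Wolfram rule 110). Every cell applies the same tiny local
        # rule, yet the global pattern that emerges is complex and is not
        # encoded in any single cell.

        RULE = 110  # local update rule, encoded as an 8-bit lookup table

        def step(cells):
            """Apply the local rule once to every cell (periodic boundary)."""
            n = len(cells)
            new = []
            for i in range(n):
                left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
                neighborhood = (left << 2) | (center << 1) | right  # value in 0..7
                new.append((RULE >> neighborhood) & 1)
            return new

        def run(width=79, steps=40):
            cells = [0] * width
            cells[width // 2] = 1   # a single active cell as the initial condition
            for _ in range(steps):
                print("".join("#" if c else "." for c in cells))
                cells = step(cells)

        if __name__ == "__main__":
            run()

      Each cell “knows” only its local neighborhood; the triangular, quasi-periodic structures that appear in the printed output are an emergent, global property, which is precisely the kind of order the bottom-up approach seeks to generate and exploit.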

      In practice, it would be a mistake to apply only one of the approaches described above. They complement each other and form a feedback loop that operates accurately and continuously. Taken as a whole (right and left sides), the above diagram forms a dynamic structural whole: on the left, the reductionist side, the diversity of the system is reduced while strategies and tactics (optimal action plans) are defined; on the right, new forms, configurations and orders are generated. The dynamic is therefore intrinsic and comes from the internal evolution of the whole.

      To study the self-organization mechanism, we consider systems whose purpose is not known a priori. More specifically, the notion of chance is integrated into the system, and disruption is part of the system’s constraints. The basic principle is that the agents, or elements, of the system do not self-organize to ensure that a particular result is achieved, but only to adapt to external disturbances and to facilitate the achievement of an overall objective at the system-wide level. The elements that make up the system pursue an individual, not a global, objective. Cooperation between these elements produces an overall result that can be judged by an observer outside the system, who knows the reasons for which the system was designed. Such mechanisms lead to the development of robust, adaptive and tolerant systems.
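      A minimal sketch can make this distinction between individual and global objectives concrete. In the following Python fragment (all names, sizes and parameters are illustrative assumptions, not elements of the book), each node of a small ring only tries to offload part of its surplus work onto the lighter-loaded of the two neighbors it can see; none of the nodes aims at balancing the system, and random disturbances keep injecting new work, yet an outside observer sees an overall result: a load distribution whose spread remains small.

        # Sketch of self-organization: each node pursues only a local objective
        # (shed surplus work to a less loaded neighbor), yet the global load
        # distribution evens out, even under random disturbances.
        import random

        N_NODES = 10   # nodes arranged in a ring, each seeing only two neighbors
        STEPS = 200
        random.seed(1)

        load = [random.randint(0, 50) for _ in range(N_NODES)]

        def local_step(i):
            """Node i's individual rule: offload half of its surplus to its lighter neighbor."""
            neighbors = [(i - 1) % N_NODES, (i + 1) % N_NODES]
            target = min(neighbors, key=lambda j: load[j])
            surplus = (load[i] - load[target]) // 2
            if surplus > 0:                     # purely local comparison
                load[i] -= surplus
                load[target] += surplus

        for t in range(STEPS):
            # external disturbance: new work arrives at a random node
            load[random.randrange(N_NODES)] += random.randint(0, 2)
            for i in range(N_NODES):
                local_step(i)
            if t % 50 == 0:
                print(f"step {t:3d}  loads={load}  spread={max(load) - min(load)}")

      The reported spread shrinks rapidly and then remains within a few units even though the total load keeps growing, which is the kind of robust, adaptive and tolerant behavior referred to above.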

      Before analyzing the properties related to self-organization, it is necessary to recall a few notions concerning its usefulness:

       – self-organization is a necessary capability in applications requiring high responsiveness, high fault tolerance (e.g. to computer or machine failure) or the ability to take a disruption or stimulus into account, or when the system is very complex;

       – the objective of self-organization is to allow the dynamic evolution of an existing system, depending on the context, in order to ensure its viability. It allows the entities composing the system to adapt to their environment, either by specializing their functions (learning) or by modifying the topology of the group and the corresponding interactions; a short sketch of such specialization follows this list. This gives rise to a new organizational model.
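      To make the idea of specialization through learning tangible, the sketch below uses a simple response-threshold mechanism, a standard device in the self-organization literature rather than a construct of this book; the agent count, rates and task types are illustrative assumptions. Agents engage in the tasks whose demand exceeds their personal threshold, and every task performed lowers that threshold further, so a division of labor emerges without any central assignment.

        # Sketch of functional specialization through self-organization, using a
        # simple response-threshold mechanism (a standard device in the
        # self-organization literature, not a construct taken from this book).
        # Agents lower their threshold for the tasks they perform and raise it
        # for the others, so a division of labor emerges without central control.
        import random

        random.seed(0)
        N_AGENTS, N_TASKS, STEPS = 6, 2, 300
        LEARN, FORGET = 0.1, 0.05

        # every agent starts undifferentiated: the same threshold for each task type
        thresholds = [[0.5] * N_TASKS for _ in range(N_AGENTS)]

        def choose_task(agent, demand):
            """Engage in the task whose demand most exceeds the agent's threshold."""
            scores = [demand[t] - thresholds[agent][t] for t in range(N_TASKS)]
            best = max(range(N_TASKS), key=lambda t: scores[t])
            return best if scores[best] > 0 else None

        demand = [0.5] * N_TASKS
        for _ in range(STEPS):
            # demand fluctuates under external disturbances
            demand = [min(1.0, max(0.0, d + random.uniform(-0.1, 0.2))) for d in demand]
            for a in range(N_AGENTS):
                t = choose_task(a, demand)
                if t is None:
                    continue
                demand[t] = max(0.0, demand[t] - 0.1)   # the chosen task gets done
                for u in range(N_TASKS):                # learning step
                    delta = -LEARN if u == t else FORGET
                    thresholds[a][u] = min(1.0, max(0.0, thresholds[a][u] + delta))

        for a, th in enumerate(thresholds):
            print(f"agent {a}: thresholds {[round(x, 2) for x in th]}")

      After a few hundred steps the printed thresholds typically show differentiated profiles, with most agents easy to trigger for one task type and reluctant for the other: the functional specialization referred to above, obtained without any central assignment of roles.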

      2.2.1. Emergence of self-organized patterns

      A concrete structure corresponds to a stable state of a system, i.e. to a particular organization. Self-organization allows the transition, in an autonomous and reactive way, from one stable structure to another. The stability of a structure is measured by how long it persists despite the disruptions that tend to destabilize it. Self-organization sometimes highlights phenomena of convergence towards particular structures; in this sense, it uses the concepts of attractors and basins of attraction, as defined in chaos theory. This can be illustrated as follows:

       – a social organization is highly dependent on the nature of the problem being solved; it is contextual. In other words, an organization may be adequate for solving one problem but inadequate for another. We consider that a system adapts if, when facing a situation not foreseen by the designer of the final application, it does not block itself but reacts by modifying its functions and structure on its own initiative in order to achieve the desired purpose. In this context, we need systems that are adaptable and have a learning capacity: the system can change its behavior in response to changes in its environment without retaining lasting changes. We consider that a multi-agent system learns if it modifies its protocol over time, and if each agent in the system can modify its skills, beliefs and social attitudes according to the current moment and past experience. A system that learns to organize itself from past experience reaches the optimum, i.e. the organization best suited to the problem at hand, more quickly. Such a system belongs to the class of systems that we will call “reactive”;

       – programmable networks provide communication functions between the actual processing nodes of the network. These networks (often of the Hopfield type) evolve in a way that brings them closer to a stable state through successive iterations. This is dynamic relaxation: it depends on an energy function, similar to that of Ising’s spin glasses [WIL 83], decreasing towards a local minimum. The system is then said to evolve within a basin of attraction and to converge towards an attractor whose trajectory depends on the context and its environment. This analogy with statistical physics (which also underlies metaheuristics such as genetic algorithms and, in particular, simulated annealing) makes it possible to reuse certain results and to solve many allocation and optimization problems; a minimal sketch of such a relaxation is given after this list;

       – in a distributed production system, we are not faced with a scheduling problem but with a problem of configuration and reconfiguration of means and resources. The aim is therefore to highlight the self-organizing properties of these networks and to show how they converge towards stable, attracting states or orders in a given phase and state space. Thus, distributed production systems subjected to disruptive conditions, or moved to neighboring states, will converge back to the same stable state. This allows classifications to be made, for example the automatic reconfiguration of a production system (allocation of resources and means) according to a context;

       – the same is true in logistics, with the possibility of organizing a round of distribution in terms of means of transport
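      As announced in the item on programmable networks above, here is a minimal Hopfield-style relaxation sketch in Python; the patterns, sizes and names are assumptions chosen for the example and could just as well be read as resource configurations of a small production system. Starting from a disturbed state, asynchronous updates never increase the energy, so the network falls back into the basin of attraction of the nearest stored configuration.

        # Minimal Hopfield-style relaxation: stored patterns act as attractors, and
        # a disturbed state converges back to the nearest one as the energy
        # decreases. Patterns, sizes and names are illustrative only.
        import random

        random.seed(2)

        def train(patterns):
            """Hebbian weights: each stored pattern becomes a stable state (attractor)."""
            n = len(patterns[0])
            w = [[0.0] * n for _ in range(n)]
            for p in patterns:
                for i in range(n):
                    for j in range(n):
                        if i != j:
                            w[i][j] += p[i] * p[j] / len(patterns)
            return w

        def energy(w, s):
            n = len(s)
            return -0.5 * sum(w[i][j] * s[i] * s[j] for i in range(n) for j in range(n))

        def relax(w, s, sweeps=5):
            """Asynchronous updates: the energy never increases (dynamic relaxation)."""
            s = list(s)
            for _ in range(sweeps):
                for i in random.sample(range(len(s)), len(s)):
                    field = sum(w[i][j] * s[j] for j in range(len(s)))
                    s[i] = 1 if field >= 0 else -1
                print(f"energy = {energy(w, s):.2f}")
            return s

        # two reference configurations (e.g. two viable resource allocations)
        p1 = [1, 1, 1, 1, -1, -1, -1, -1]
        p2 = [1, -1, 1, -1, 1, -1, 1, -1]
        w = train([p1, p2])

        disturbed = list(p1)
        disturbed[0] = -disturbed[0]    # perturbation: flip two components
        disturbed[4] = -disturbed[4]
        final = relax(w, disturbed)
        print("recovered p1:", final == p1)

      Read in production terms, the stored patterns play the role of viable configurations, the perturbation that of a breakdown or demand shift, and the relaxation that of the automatic reconfiguration mentioned above; each basin of attraction then defines one class of equivalent situations.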